The goal: file a GitHub Issue, grab coffee, come back to a PR that’s been implemented, tested, and reviewed by multiple AI models — just waiting for a thumbs up. Not fully there yet, but close enough to be dangerous. Humans decide what to build. AI agents handle the how. Automation runs the boring parts. A human gives the final sign-off. Claude, Gemini, Copilot, and local MLX models each do what they’re best at — the right model for the right job instead of throwing everything at one.

The pipeline
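The flow described above (issue in, reviewed PR out, human sign-off at the end) can be sketched roughly like this. All names here are illustrative, not the actual automation:

```python
# Hypothetical sketch of the pipeline stages. The dataclass fields and
# reviewer names are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    issue: str
    implemented: bool = False
    tested: bool = False
    reviews: list = field(default_factory=list)

def run_pipeline(issue_title: str) -> PullRequest:
    pr = PullRequest(issue=issue_title)
    pr.implemented = True               # an AI agent writes the code
    pr.tested = True                    # automation runs the test suite
    pr.reviews = ["claude", "gemini"]   # multiple models review the diff
    return pr                           # a human gives the final sign-off

pr = run_pipeline("Add retry logic to webhook handler")
```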

The model-routing philosophy

Every model has a sweet spot:
  • Claude — best at multi-file refactors, deep reasoning, agentic loops. The default for non-trivial implementation work.
  • Gemini — great for second opinions, code review, broad context understanding.
  • GitHub Copilot — fastest for line-level completions inside the editor. Cheap for high-volume routine work.
  • Local MLX — Apple Silicon native inference for typo fixes, quick edits, and “I don’t want to burn cloud tokens on this” tasks.
The routing is opinionated, not magic: clear rules in ~/CLAUDE.md and AGENTS.md say which model to use when.
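A minimal sketch of what those rules boil down to. The task categories and the mapping below are assumptions for illustration, not the actual contents of ~/CLAUDE.md or AGENTS.md:

```python
# Hypothetical routing table mirroring the sweet spots listed above.
ROUTES = {
    "multi_file_refactor": "claude",
    "deep_reasoning": "claude",
    "code_review": "gemini",
    "second_opinion": "gemini",
    "inline_completion": "copilot",
    "typo_fix": "local_mlx",
    "quick_edit": "local_mlx",
}

def route(task_type: str) -> str:
    """Pick a model for a task; unrecognized work defaults to Claude."""
    return ROUTES.get(task_type, "claude")

route("code_review")   # → "gemini"
route("new_feature")   # not in the table, falls through to "claude"
```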

Repos that power this pipeline

See AI Development · Overview for what each one does in detail.

Observability layer

Every interaction in this pipeline emits OpenTelemetry traces. The data flows through Cribl Edge, Cribl Stream, and Splunk; the VisiCore apps visualize it. If an AI agent touched code, there's a trace.
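Conceptually, each agent interaction produces a span along these lines. The attribute names below are assumptions loosely modeled on OpenTelemetry conventions, not the pipeline's actual schema:

```python
# Conceptual sketch of the telemetry emitted per agent interaction.
# Attribute names ("ai.model", etc.) are illustrative assumptions.
import json
import time
import uuid

def emit_span(model: str, operation: str, attributes: dict) -> dict:
    span = {
        "trace_id": uuid.uuid4().hex,        # ties one task's work together
        "name": operation,
        "start_time_unix_nano": time.time_ns(),
        "attributes": {"ai.model": model, **attributes},
    }
    # In the real pipeline this would be exported via OTLP to Cribl Edge;
    # here we just round-trip it through JSON to show the shape.
    return json.loads(json.dumps(span))

span = emit_span("claude", "code.refactor", {"repo": "example/repo"})
```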