The goal: file a GitHub Issue, grab coffee, come back to a PR that’s been implemented, tested, and reviewed by multiple AI models — just waiting for a thumbs up. Not fully there yet, but close enough to be dangerous. Humans decide what to build. AI agents handle the how. Automation runs the boring parts. A human gives the final sign-off. Claude, Gemini, Copilot, and local MLX models each do what they’re best at — the right model for the right job instead of throwing everything at one.

Documentation Index
Fetch the complete documentation index at: https://docs.jacobpevans.com/llms.txt
Use this file to discover all available pages before exploring further.
The pipeline
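End to end: a human files an issue, an agent implements it and opens a PR, CI runs the tests, other models review, and a human merges. One way to wire the first trigger is a GitHub Actions workflow. The sketch below is hypothetical (the workflow name, label, and steps are made up for illustration, not the actual config):

```yaml
# Hypothetical sketch: hand an issue to a coding agent when it gets an "ai-implement" label.
name: issue-to-pr
on:
  issues:
    types: [labeled]

jobs:
  implement:
    if: github.event.label.name == 'ai-implement'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder step: the real pipeline routes the issue to Claude, Gemini, etc.
      - name: Run coding agent (placeholder)
        run: echo "agent implements the issue and opens a PR"
```

From there, review and sign-off happen on the PR itself, where the human thumbs-up is the only manual step.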
The model-routing philosophy
Every model has a sweet spot:
- Claude — best at multi-file refactors, deep reasoning, agentic loops. The default for non-trivial implementation work.
- Gemini — great for second opinions, code review, broad context understanding.
- GitHub Copilot — fastest for line-level completions inside the editor. Cheap for high-volume routine work.
- Local MLX — Apple Silicon native inference for typo fixes, quick edits, and “I don’t want to burn cloud tokens on this” tasks.
~/CLAUDE.md and AGENTS.md say which model to use when.
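As an illustration, the routing section of such an instructions file can be a short policy the agent reads at session start. The wording below is a hypothetical sketch, not the actual contents of either file:

```markdown
## Model routing (hypothetical sketch)
- Multi-file refactors, deep reasoning, agentic loops → Claude (default)
- Second opinions and code review → Gemini
- Line-level, in-editor completions → Copilot
- Typo fixes and quick local edits → local MLX (no cloud tokens)
```

Keeping the policy in plain markdown means every tool that reads the file inherits the same routing rules without any extra plumbing.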
Repos that power this pipeline
- ai-assistant-instructions — universal AI configuration layer (rules, permissions, workflows, agents)
- claude-code-plugins — commands, skills, hooks, agents for Claude Code
- claude-code-routines — scheduled remote-agent routines on Claude.ai
- ai-workflows — reusable GitHub Copilot agentic workflows
- nix-ai — Nix package and config layer for every AI coding tool
- raycast-smart-issue — Raycast extension for AI-drafted GitHub issues via local MLX