Weaving cosmic threads...
Claude + OpenCode + Codex + Gemini — orchestrated.
One command to install. The Arcanea Orchestrator detects which AI CLIs you have, which subscriptions you own, and which tasks you're running, then routes each task to the right model via the right sub-CLI. Claude Max does the heavy reasoning. The free tier handles the bulk work. BYOK keys get the second opinion. You never think about it again.
16 task classes (code.debug, world.canon, research.deep…). Each declares primary + fallback models. You pick the class; the orchestrator picks the model.
Max sub on → claude -p uses it. OpenCode Zen free → opencode runs free. BYOK keys for Codex + Gemini get used when relevant. No wasted spend.
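Under the hood, "sub-first" routing is just an ordering over auth tiers. A minimal TypeScript sketch of one way to do it (the `AuthTier` and `Candidate` names are illustrative, not the orchestrator's real API):

```typescript
// Illustrative sketch: order candidate models by auth tier under the
// "sub-first" preference. Subscription-backed models come first, then
// free tiers, then BYOK, then anything whose auth we couldn't detect.
type AuthTier = "sub" | "free" | "byok" | "unknown";

interface Candidate {
  model: string;
  cli: string;
  auth: AuthTier;
}

const SUB_FIRST: AuthTier[] = ["sub", "free", "byok", "unknown"];

function orderByPreference(
  candidates: Candidate[],
  order: AuthTier[] = SUB_FIRST,
): Candidate[] {
  // Copy-sort so the caller's array is untouched; cheaper-to-you tiers
  // bubble to the front.
  return [...candidates].sort(
    (a, b) => order.indexOf(a.auth) - order.indexOf(b.auth),
  );
}
```

With a BYOK-heavy setup you would pass a different order; the point is that the preference is data, not code.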
Every run writes to ~/.arcanea/history.jsonl. After 10 events, adaptive routing re-ranks candidates by local success rate + speed + recent-failure penalty.
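The re-ranking can be sketched as a local scoring function over logged events. Everything below (field names, weights, the size of the penalty) is an assumption about one way to implement it, not the shipped algorithm:

```typescript
// Hypothetical sketch of adaptive re-ranking: score each candidate model
// by local success rate and speed, minus a penalty if its most recent
// run failed. Field names (model, ok, ms) are assumed, not the real schema.
interface HistoryEvent {
  model: string;
  ok: boolean;
  ms: number; // wall-clock duration of the run
}

function score(events: HistoryEvent[]): number {
  if (events.length === 0) return 0; // no local evidence yet
  const successRate = events.filter((e) => e.ok).length / events.length;
  const avgMs = events.reduce((s, e) => s + e.ms, 0) / events.length;
  const speed = 1 / (1 + avgMs / 10_000); // faster runs approach 1
  const recentFailPenalty = events.at(-1)?.ok === false ? 0.3 : 0;
  return successRate * 0.7 + speed * 0.3 - recentFailPenalty;
}

function rank(
  history: Map<string, HistoryEvent[]>,
  candidates: string[],
): string[] {
  // Highest score first; models with no history score 0.
  return [...candidates].sort(
    (a, b) => score(history.get(b) ?? []) - score(history.get(a) ?? []),
  );
}
```

A slow-but-reliable model can outrank a fast one that just failed, which matches the behavior described above.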
`arco plan "build a landing page"` calls claude -p to break the goal into 3-7 concrete sub-tasks, each mapped to a task class with a concrete prompt.
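The planning step can be sketched as one headless call plus a parse. `claude -p` is the invocation named above; the prompt wording, the `SubTask` shape, and `parsePlan` are illustrative assumptions, not the real interface:

```typescript
// Hypothetical sketch of `arco plan`: ask claude -p for a JSON array of
// sub-tasks, then validate the reply. Assumes the model answers with
// bare JSON; the real orchestrator's prompt and parsing may differ.
import { execFileSync } from "node:child_process";

interface SubTask {
  task: string;   // task class, e.g. "code.debug"
  prompt: string; // concrete prompt handed to the routed model
}

// Validate the model's reply, assumed to be a JSON array of sub-tasks.
function parsePlan(raw: string): SubTask[] {
  const parsed = JSON.parse(raw) as SubTask[];
  return parsed.filter(
    (t) => typeof t.task === "string" && typeof t.prompt === "string",
  );
}

function plan(goal: string): SubTask[] {
  const prompt =
    "Break this goal into 3-7 sub-tasks. Reply with only a JSON array of " +
    '{"task": "<task class>", "prompt": "<concrete prompt>"} objects.\n' +
    `Goal: ${goal}`;
  // Headless call; uses whatever auth the claude CLI already has.
  return parsePlan(execFileSync("claude", ["-p", prompt], { encoding: "utf8" }));
}
```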
`arco workflow` ships reusable compositions (build-landing-page, refactor-ts, add-feature). Fill in --var placeholders, get an expanded plan.
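Placeholder filling is the simple part. A sketch, assuming a `{{name}}` placeholder syntax (the real template syntax may differ):

```typescript
// Illustrative sketch of --var expansion: replace {{name}} placeholders
// in a workflow template with user-supplied values. Unknown placeholders
// are left untouched so missing vars are easy to spot.
function expand(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match,
  );
}
```

So `expand("Write copy for {{page}}", { page: "/pricing" })` yields `"Write copy for /pricing"`, matching the `--var page=/pricing` call shown later.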
Prompts never touch Arcanea infrastructure. They go straight from your shell to whichever vendor CLI you invoked. We only log locally.
$ arco doctor
claude    sub      /path/to/claude
opencode  free     /path/to/opencode
codex     byok     /path/to/codex
gemini    unknown  /path/to/gemini
Preference: sub-first
$ arco explain code.debug --surface claude-arcanea
Task: code.debug
  → claude-opus-4-7    claude    sub
    claude-sonnet-4-6  claude    sub
    minimax-m2.5-free  opencode  free
$ arco run --task code.debug "find the null-deref in apps/web/app/api/auth/route.ts"
[arcanea] task=code.debug pref=sub-first → claude-opus-4-7 via claude [auth: sub]
(claude streams the fix, orchestrator logs the event to ~/.arcanea/history.jsonl)
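One logged event might look like this; every field name below is a guess at the shape, since the real ~/.arcanea/history.jsonl schema isn't shown:

```json
{"ts": "2025-01-12T09:30:41Z", "task": "code.debug", "model": "claude-opus-4-7", "cli": "claude", "auth": "sub", "ok": true, "ms": 8214}
```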
$ arco stats
TASK         MODEL              RUNS  SUCCESS  AVG
code.debug   claude-opus-4-7    7     100%     8214ms
code.debug   minimax-m2.5-free  4     75%      3420ms
world.canon  claude-opus-4-7    12    92%      14502ms
$ arco workflow run build-landing-page --var page=/pricing --var pitch="..."
(JSON plan with 5 tasks streams to stdout)
YOU (any terminal)
│
▼
┌────────────────────────────────────────────────────────┐
│ @arcanea/orchestrator (headless brain, ~1000 LOC) │
│ │
│ - routes by task class + surface + your preference │
│ - planner: claude -p breaks goals into sub-tasks │
│ - reasoning bank: learns from ~/.arcanea/history │
│ - workflow templates: reusable compositions │
└────────────────┬───────────────────────────────────────┘
│ execs via -p headless mode
┌────────┼──────────┬──────────┬──────────┐
▼ ▼ ▼ ▼ ▼
claude opencode codex gemini ao
(Max sub) (Zen free) (OpenAI) (Google) (swarm)
↑ each uses its OWN auth — no proxy ↑
Source of truth: @arcanea/router-spec/models.yaml
Single YAML declares 14 models × 16 tasks × 7 surfaces.
Edit it, every surface re-reads on next invocation.
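A hypothetical excerpt, to show the shape (the real models.yaml schema may differ):

```yaml
# Illustrative only; not the actual @arcanea/router-spec/models.yaml.
models:
  claude-opus-4-7:   { cli: claude,   auth: sub }
  claude-sonnet-4-6: { cli: claude,   auth: sub }
  minimax-m2.5-free: { cli: opencode, auth: free }

tasks:
  code.debug:
    primary:  claude-opus-4-7
    fallback: [claude-sonnet-4-6, minimax-m2.5-free]
```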
Nobody ships a routing layer that spans Claude, OpenCode, Codex, and Gemini with a single spec. Composio's AO handles claude-code only. Kilo Code forked OpenCode and stayed inside OpenCode. aider is single-model. This is the gap.
The orchestrator doesn't replace any CLI. It coordinates them. Your Max sub covers `claude -p`. Your free Zen tier covers bulk. Your BYOK keys get used for second-opinion review. Every decision is declared in one YAML file you can read, edit, and fork. If you disagree with our routing, override with `--model`. If our spec is stale, patch the spec.
And it learns. Not via a remote service — right there on your disk. The adaptive ranker reads your own history and re-ranks candidates by what actually worked. No data leaves your machine.