v2.1 · Open Source · MIT License

THE OPS LAYER [FOR] MULTI-AGENT_

Once you run 3+ agents, the hard part is knowing which runtime owns which workspace, what each agent touched, and how to stop the stuck ones — without reading every terminal buffer. ATO is the desktop control panel that gives you live runs, file attribution per dispatch, and cross-runtime regression detection across Claude, Codex, Gemini, OpenClaw, Hermes, and Ollama. Local-first. MIT.

6
Native Runtimes
Live
Runs Registry
15+
API Providers
MIT
Open Source

Claude Code · Codex · Gemini CLI · OpenClaw · Hermes · Ollama ·
+ DeepSeek, Qwen, MiniMax, Kimi, GLM, Yi via API

AI agents don’t talk to each other

🔄

Each tool is its own island

Claude has Claude Code, OpenAI has Codex, Google has Gemini. Each starts every conversation from zero. Switching tools means re-explaining context.

⛓️

No way to chain agents

You want a workflow: writer drafts code, reviewer checks security, summarizer reports back. Today you copy-paste between three CLIs by hand.

👁️

Agents run blind

Production agents need variables, hooks, summarizers, evaluators — but most tools give you a single system-prompt textarea and call it a day.

The agentic workspace, not just a control panel

New in v2.1 — The Multi-Agent Ops Layer

Live runs registry • File attribution per dispatch • Cross-runtime regression detection • File history modal • Honest concurrent attribution • Configuration impact ledger

  • Live runs panel — See every in-flight dispatch with agent slug, runtime, workspace, and elapsed time. Kill stuck dispatches with one click — no more reading every terminal buffer to find the runaway. Shows up the moment you fire something via Quick Test, the chat pane, scheduled cron, or MCP run_agent.
  • File attribution per dispatch — Every run captures the list of files touched in the project root via mtime-snapshot diff. Works across every runtime since it’s filesystem-level, not stream-parsing. Click any file in the dashboard to see every dispatch that ever touched it — agent, runtime, timestamp, prompt summary, sibling files.
  • Cross-runtime regression detection — Switch @reviewer from Sonnet 4.6 to Opus 4.7 and the dashboard flags “success rate dropped 17pp across 412 conversations.” Joins the configuration-change ledger with trace windows automatically. Severity-tagged: regressions first, improvements second, neutral hidden by default.
  • Honest concurrent attribution — When two agents dispatch into the same workspace, the OS gives us mtimes, not PIDs. Instead of pretending we can disambiguate, ATO tags the run as “ambiguous × N” with peer agent slugs. Truth over false confidence.
  • External agents — Build customer-facing chatbots in the same IDE you use for daily ops. Bundle generators for Cloudflare Worker, Vercel Edge, Docker, and standalone Node. 9 chat-LLM providers. Embed widget bundled with every deploy. Customer’s API key, customer’s infra — ATO never runs the inference itself.
  • Dynamic prompts that adapt at fire time — Reference {user_name}, {project_root}, {recent_orders} in your system prompt. Resolvers: static, env, project path, file, database query, MCP call, computed JS.
  • Sequential automation pipelines — One prompt fires the whole workflow. Each child runs on its own runtime, so Claude → Codex → Gemini chains work natively. Routed groups + visual graph editor for specialist routing.
  • 15+ providers, 6 native runtimes — Claude Code, Codex, Gemini CLI, OpenClaw, Hermes, Ollama + Anthropic, OpenAI, Google AI, Mistral, Groq, xAI, Together, Fireworks, DeepSeek, Qwen, MiniMax, Kimi, GLM, Yi via API key.
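The file attribution described above (an mtime-snapshot diff over the project root) can be sketched in a few lines. This is a minimal illustration of the technique, not ATO's internals; `snapshot`, `diffSnapshots`, and `attribution` are assumed names:

```typescript
import { readdirSync, statSync } from "fs";
import { join } from "path";

type Snapshot = Map<string, number>; // path -> mtimeMs

// Walk the project root and record every file's modification time.
function snapshot(root: string): Snapshot {
  const snap: Snapshot = new Map();
  const walk = (dir: string) => {
    for (const entry of readdirSync(dir, { withFileTypes: true })) {
      const p = join(dir, entry.name);
      if (entry.isDirectory()) walk(p);
      else snap.set(p, statSync(p).mtimeMs);
    }
  };
  walk(root);
  return snap;
}

// Files created or modified between the before/after snapshots.
function diffSnapshots(before: Snapshot, after: Snapshot): string[] {
  const touched: string[] = [];
  for (const [path, mtime] of after) {
    const prev = before.get(path);
    if (prev === undefined || mtime > prev) touched.push(path);
  }
  return touched;
}

// With N concurrent runs in one workspace, mtimes alone cannot say
// which run wrote a file, so tag the ambiguity instead of guessing.
function attribution(touched: string[], concurrentRuns: number) {
  return touched.map((path) => ({
    path,
    ambiguousPeers: concurrentRuns > 1 ? concurrentRuns - 1 : 0,
  }));
}
```

Because the diff is purely filesystem-level, the same mechanism works unchanged for every runtime, which is why no stream parsing is needed.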
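Fire-time variable resolution like {user_name} above boils down to template substitution over registered resolvers. The sketch below shows the idea with hypothetical names; ATO's actual resolver API may differ:

```typescript
// A resolver produces a value at fire time: static string, env read,
// file contents, database query result, MCP call, or computed JS.
type Resolver = () => string;

function resolvePrompt(
  template: string,
  resolvers: Record<string, Resolver>
): string {
  // Unknown variables are left untouched rather than silently dropped.
  return template.replace(/\{(\w+)\}/g, (whole, name: string) =>
    name in resolvers ? resolvers[name]() : whole
  );
}
```

For example, `resolvePrompt("Hi {user_name}", { user_name: () => "Ana" })` yields `"Hi Ana"`, with each resolver re-evaluated on every dispatch.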
Insights · Live runs · 3 in flight
@code-writer CLAUDE 14s
📁 ato/repo-a
@security-reviewer CODEX 8s
📁 ato/repo-a · ⚠ ambiguous ×1
@docs-summarizer GEMINI 2m 04s
📁 ato/docs-site
3 dispatches across 3 runtimes · 2 sharing repo-a · click any file in trace history for cross-run lineage

Multi-Runtime Context

Per-runtime context breakdown. Switch between Claude, Codex, OpenClaw, and Hermes to see what each agent has loaded. Skills shown as on-demand — not counted in the total.

  • Runtime tabs: Claude / Codex / OpenClaw / Hermes
  • "Not connected" state for uninstalled runtimes
  • Color warnings at 75% and 90% usage
Context Usage 55,234 / 200,000 tokens · 27.6%
System (30K) Skills (12K · on-demand) MCP (8K) CLAUDE.md (5.2K) Conversation (12K) Free (144.8K)

Skills Manager + Marketplace

Manage skills across all runtimes with per-runtime tabs. Browse the marketplace, install community skills, or ask AI to create one for you.

  • Per-runtime tabs: Claude / Codex / OpenClaw / Hermes
  • AI skill creation: describe what you want, AI writes it
  • In-app approval dialog for file saves
code-review.md
2,340 tokens
testing-patterns.md
1,876 tokens
api-conventions.md
3,102 tokens
⚠ legacy-rules.md
conflict

Automation Builder

Visual workflow editor that auto-detects flows from your installed skills. Any skill with Step or Phase headers becomes a visual automation.

  • Auto-generates flows from skill content
  • Per-node runtime selection (mix agents)
  • Run workflows with one click
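Auto-detecting flows from Step or Phase headers can be approximated with a single regex pass over the skill's markdown. `parseFlow` and the node shape below are illustrative assumptions, not ATO's actual parser:

```typescript
interface FlowNode {
  title: string;
  runtime?: string; // per-node runtime can be mixed, e.g. "claude" | "codex"
}

// Matches markdown headers like "## Step 1: Draft" or "# Phase: Plan".
const HEADER = /^#{1,6}\s*(?:Step|Phase)\s*\d*[:.]?\s*(.+)$/gim;

function parseFlow(skillMarkdown: string): FlowNode[] {
  const nodes: FlowNode[] = [];
  for (const match of skillMarkdown.matchAll(HEADER)) {
    nodes.push({ title: match[1].trim() });
  }
  return nodes;
}
```

A skill written as "## Step 1: Draft / ## Step 2: Review" would become a two-node automation, with each node free to pick its own runtime.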
Token Usage & Cost

Today
45,230
$0.68 estimated
Burn Rate
12.4K/hr
~6.2h to limit
This Week
312K
$4.68 total
This Month
1.2M
$18.40 total

Scheduled jobs

Pick an agent (or a routed/sequential group) and a schedule. The agent’s system prompt, variables, hooks, memory, and skills all fire on every run — not just a raw prompt.

  • Agent / Group / Raw dispatch — agent-based by default
  • Friendly schedule presets (every weekday 9am, hourly, every 15 min…) or full cron expression
  • Wake-from-sleep on every desktop OS — launchd on macOS, systemd --user timers on Linux, Task Scheduler on Windows. Jobs fire even when ATO is closed.
  • Calendar view: click a day to see output or error; smart silent-failure detection
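The friendly presets above map onto ordinary five-field cron expressions. The mapping below is a sketch; the preset strings are illustrative, and ATO's actual preset list may differ:

```typescript
// Five-field cron: minute hour day-of-month month day-of-week.
const PRESETS: Record<string, string> = {
  "every weekday 9am": "0 9 * * 1-5",
  "hourly": "0 * * * *",
  "every 15 min": "*/15 * * * *",
};

function toCron(preset: string): string {
  const expr = PRESETS[preset];
  if (!expr) throw new Error(`unknown preset: ${preset}`);
  return expr;
}
```

Either form ends up as the same schedule handed to the OS scheduler (launchd, systemd timers, or Task Scheduler), which is what lets jobs fire while ATO is closed.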
MCP Servers

filesystem
stdio 12 tools 23ms
github
stdio 8 tools 45ms
postgres
stdio 5 tools 120ms
slack
sse timeout

Production-ready for teams and companies

LLM API Key Management

Centralized dashboard to store, rotate, and scope API keys for every major LLM provider. Keys are encrypted locally — never sent to any server.

  • Anthropic, OpenAI, Google, Mistral, Groq, Cohere, Together, Fireworks
  • Plus the Chinese providers: DeepSeek, Qwen, MiniMax, Kimi, GLM, Yi (OpenAI-compatible base URLs surfaced in-app)
  • One-click key rotation with masked preview, per-runtime scoping
  • Usage tracking: see which keys are active and how often
A Anthropic Production
sk-a...4f2x
O OpenAI GPT-4
sk-p...9k3m
G Google AI Staging
AI...7xq2
G Groq Fast
gsk...r4p1

Real-time Agent Monitor

Live dashboard showing active agent sessions, token consumption rates, runtime health, and smart alerts — across all your AI coding tools at once.

  • Live session tracking with 3-second refresh (Pro)
  • Token usage timeline charts and burn rate
  • Smart alerts: error spikes, high token usage, offline runtimes
  • Basic stats and recent sessions free for everyone
Tokens/hr
24.5K
Sessions
18
Avg Duration
4.2s
Errors
0
claude code-review session 2.1K tok · 3.4s
codex test generation 1.8K tok · 2.1s
hermes documentation update 956 tok · 1.8s

Audit Log

Complete audit trail of every action across your agentic systems. Filter by action type, resource, and time range. Export to JSON for compliance.

  • Track skill changes, key rotations, config updates, cron triggers
  • Filterable by action type and resource
  • Stats dashboard: today, this week, top actions
  • One-click JSON export
skill.create — code-review.md 2m ago
config.update — claude runtime 5m ago
cron.trigger — daily-backup 1h ago
secret.delete — old-api-key 3h ago

SSO & Enterprise Auth PRO

Connect your company's identity provider. Google Workspace, Okta, Microsoft Entra, or any OIDC provider — with domain restriction and auto-provisioning.

  • Google Workspace, Okta, Microsoft Entra built-in
  • Any custom OIDC provider via URL config
  • Domain restriction: only @company.com can join
  • Auto-provision users on first SSO login
SSO Providers
G
Google
Active
M
Microsoft
Configure
O
Okta
Configure

Cross-runtime, by protocol — 17 tools

Every ATO agent is exposed as an MCP tool. Any MCP-aware runtime — Claude Code, Codex, Cursor, others — can dispatch to any ATO agent regardless of which runtime owns it.

$ npx ato-mcp

# Add to ~/.claude/settings.json:
{ "mcpServers": { "ato": { "command": "npx", "args": ["ato-mcp"] } } }

# Agent dispatch (cross-runtime)
list_agents — All ATO agents + groups
run_agent — Dispatch to any agent or group, transparently

# Context & Usage
get_context_usage — Context window breakdown
get_usage_stats — Token and cost analytics
get_mcp_status — MCP server health

# Skills Management
list_skills — All skills with token counts
toggle_skill — Enable/disable skills
get_skill_index_stats — Index & watcher status
rescan_skills — Force full rescan

# Runtime Health
get_runtime_status — Check any runtime
get_all_runtime_statuses — All runtimes at once
get_agent_logs — Execution logs / traces
get_runtime_path_cache — Cached CLI paths
refresh_runtime_paths — Re-discover CLIs
set_runtime_path — Manual CLI path

# Cache Management
get_cache_stats — Cache statistics
clear_cache — Flush cache

Built for developers

Desktop

offline-first · free
  • Tauri 2.x (Rust + React)
  • SQLite local database
  • LLM API key management
  • Audit logging
  • Agent monitor (basic)
  • Skills, automation, cron, MCP
Sync

Cloud (Pro features)

free with sign-up · early access
  • 7 microservices on Railway
  • PostgreSQL + SSO (OIDC)
  • Real-time agent monitoring
  • Smart alerts & token charts
  • Cloud trace retention + observability
  • Cloud sync of agents across devices

Available in English, Português, and Español

English EN
Português PT
Español ES

Download ATO

Free, open source, and ready for your platform.

> Early access: Pro features free with a cloud sign-up — cloud sync, trace retention, observability, evaluators.

# Install via Homebrew (macOS) $ brew tap WillNigri/ato $ brew install --cask ato # Or install the SDK for auto-tracing $ npm install @ato-sdk/js # Or install just the MCP server $ npx ato-mcp