Hook-based routing: sentinel-injected instructions via PostToolUse hooks
No due date • 1/23 issues closed

Complete observability coverage across all AutoSkillit systems: context utilization, skill/tool access, sub-agent tracking, and guard enforcement. Capture all diagnostic data in machine-readable formats (JSON, CSV, OTel-compatible) for downstream HTML report consumption.
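As a minimal sketch of the machine-readable capture path, the snippet below appends diagnostic events as JSON lines, a format an HTML report generator can consume incrementally. The record fields and file name are illustrative assumptions, not AutoSkillit's actual schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DiagnosticEvent:
    # Hypothetical record shape; field names are illustrative only.
    session_id: str
    kind: str       # e.g. "context_utilization", "tool_access", "guard_enforcement"
    payload: dict
    ts: float

def append_jsonl(event: DiagnosticEvent, path: str) -> None:
    """Append one event as a single JSON line (JSONL)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Example (hypothetical file name):
# append_jsonl(DiagnosticEvent("s1", "tool_access", {"tool": "Bash"}, 0.0), "diag.jsonl")
```

JSONL keeps each event independently parseable, so a partially written file from an interrupted headless session still yields every complete event.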
No due date • 6/13 issues closed

EHO Coordinator: AI-judgment-based oversight layer for monitoring and steering parallel agent sessions. See brainstorm gist for full design exploration.
No due date • 0/1 issues closed

Progressive resolution planner recipe: 3-pass sequential decomposition into GitHub-issue-ready work packages. Adapts the helper_agents multi-pass planning strategy to AutoSkillit's recipe model with sub-agent context filtering, sequential elaboration, and feature-gated delivery.
No due date • 56/59 issues closed

Multi-layer feature gate system for gating experimental features (tools, skills, tests, imports, recipes) behind configurable flags. First test case: franchise. Enables safe promotion to main/stable while in-development features remain invisible.
No due date • 17/17 issues closed

Smart test path filtering system: AST-based import analysis + git diff + a non-Python dependency manifest, with a bucket-based trigger system. Reduces test execution time in headless sessions by 40-76% while maintaining full CI coverage.
No due date • 21/53 issues closed

Level-3 (L3) Franchise Orchestrator: interactive Claude session that dispatches headless L2 food trucks via `dispatch_food_truck`. Canonical plan: `.autoskillit/temp/make-plan/franchise_l3_orchestrator_plan_2026-04-11_174213.md`. Ticket-ready concretization at Addendum CA-1 through CA-20.
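A hedged sketch of the dispatch loop, assuming headless L2 sessions are launched as subprocesses (`claude -p` is Claude Code's headless print mode). The real `dispatch_food_truck` API is defined in the canonical plan above; the `cmd` parameter exists here only so the sketch is self-contained and testable.

```python
import subprocess
import time

def dispatch_food_truck(task: str,
                        cmd: tuple[str, ...] = ("claude", "-p")) -> subprocess.Popen:
    """Launch one headless L2 session for a task (hypothetical sketch)."""
    return subprocess.Popen([*cmd, task])

def run_l3_orchestrator(tasks: list[str], max_parallel: int = 3,
                        cmd: tuple[str, ...] = ("claude", "-p")) -> None:
    """Keep at most max_parallel trucks running, then wait for all to finish."""
    active: list[subprocess.Popen] = []
    for task in tasks:
        # Block until a dispatch slot frees up (a truck process exits).
        while len([p for p in active if p.poll() is None]) >= max_parallel:
            time.sleep(0.1)
        active.append(dispatch_food_truck(task, cmd))
    for p in active:
        p.wait()
```

The interesting part of the real orchestrator is the interactive L3 session deciding *what* to dispatch; this sketch covers only the mechanical fan-out/join of headless sessions.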
No due date • 26/58 issues closed

Umbrella milestone for decoupling AutoSkillit from Claude Code as the sole backend. Covers two axes:

- **CLI Backend Abstraction**: support multiple agentic coding CLIs (Claude Code, Codex, Open Code) via a driver-based abstraction over headless sessions and interactive entry (`cook`/`init`), selectable by config.
- **LLM Provider Abstraction**: support multiple LLM providers (Anthropic, OpenAI, MiniMax, etc.) as the model behind sessions, including per-recipe-step provider selection.

## Tickets

### CLI Backend

- #55: Extract `CodingAgentBackend` protocol from current Claude Code coupling (prerequisite for all below)
- #23: Codex CLI backend support
- #820: Open Code CLI backend support

### LLM Provider

- #772: LLM provider abstraction layer
- #773: Per-recipe-step LLM provider selection
- #770: Evaluate MiniMax M2.7

## Priority

#55 is the top priority: the extraction layer enables everything else.
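One possible shape for the #55 extraction, sketched as a Python `Protocol` so drivers stay duck-typed; the method names, attributes, and config key are illustrative assumptions, not the final interface.

```python
from typing import Protocol

class CodingAgentBackend(Protocol):
    """Hypothetical driver interface #55 would extract from the current
    Claude Code coupling. Each CLI (Claude Code, Codex, Open Code) would
    ship one implementation."""

    name: str

    def run_headless(self, prompt: str, cwd: str) -> str:
        """Run one headless session and return its output."""
        ...

    def launch_interactive(self, cwd: str) -> None:
        """Start an interactive session (the `cook`/`init` entry points)."""
        ...

def backend_from_config(config: dict,
                        registry: dict[str, CodingAgentBackend]) -> CodingAgentBackend:
    """Select a driver by config, e.g. {"backend": "codex"}; the default
    key "claude-code" preserves today's behavior (assumed naming)."""
    return registry[config.get("backend", "claude-code")]
```

Because `Protocol` uses structural subtyping, existing Claude Code code paths can satisfy the interface without inheriting from it, which keeps the extraction incremental.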
No due date • 44/60 issues closed

Research recipe generalization: domain-agnostic in-silico research platform.
No due date • 82/99 issues closed