Linear-native planning, repo context, and local agent automation from one CLI.
Create backlog items, sync planning files, run reusable workflows, and supervise unattended ticket execution without leaving the terminal.
Install · Quick Start · Commands · Reference
The MetaStack CLI is a Rust terminal tool for engineers who want repository planning context, Linear workflows, and agent-backed automation to stay close to the code.
It is built for teams that want to:
- manage repo-scoped planning state under `.metastack/`
- move between Linear and local backlog files without context switching
- run local agents such as Codex or Claude with repository-aware prompts
- supervise unattended issue execution with `meta agents listen`
Most planning tools split work across issue trackers, docs, scripts, and ad hoc prompts. MetaStack pulls those workflows back into one place:
- `meta runtime config` saves install-scoped Linear and agent defaults.
- `meta runtime setup` bootstraps the repo and saves repo-scoped defaults under `.metastack/`.
- `meta context scan` turns the codebase into reusable planning context.
- `meta backlog spec`, `meta backlog plan`, `meta backlog split`, `meta backlog improve`, `meta backlog tech`, `meta linear issues refine`, and `meta agents workflows` generate structured backlog work.
- `meta merge` batches open GitHub PRs into one isolated aggregate merge run and publish step.
- `meta linear ...` and `meta backlog sync` keep Linear and local files aligned.
- `meta agents review` audits GitHub PRs in a guided dashboard, queues `metastack`-labeled PRs for explicit human approval, and can open remediation PRs when required.
- `meta agents retro` analyzes shipped PRs for follow-up backlog opportunities and opens a plan-style Linear ticket curation flow.
- `meta agents improve` inspects open PRs, accepts improvement instructions, and publishes stacked PRs targeting the source PR branch from an isolated workspace.
- `meta agents execute <ISSUE_ID>` runs a one-off headless agent session for a single Linear issue, persisting session state for later adoption by `meta agents listen`.
- `meta agents listen` runs unattended ticket execution in dedicated workspace clones instead of your source checkout. Execute-started sessions are visible in the listen dashboard but not auto-claimed.
- `meta workspace` inventories and cleans sibling workspace clones (listener, improve, and review) with automatic merged-workspace cleanup and batch reconciliation.
From the root of the repository:
```
cargo install --path . --force
```

This installs the `meta` command into your Cargo bin directory, typically `~/.cargo/bin`.
Cargo installs are intentionally not self-updatable with `meta upgrade`. Use the GitHub Release installer below when you want secure in-place updates.
Install the latest GitHub Release into `~/.local/bin`:

```
curl -fsSL https://raw.githubusercontent.com/metastack-systems/metastack-cli/main/scripts/install-meta.sh | sh
```

Install a pinned release instead:

```
curl -fsSL https://raw.githubusercontent.com/metastack-systems/metastack-cli/main/scripts/install-meta.sh | sh -s -- --version v0.1.0
```

Install into a custom bin directory without sudo:

```
curl -fsSL https://raw.githubusercontent.com/metastack-systems/metastack-cli/main/scripts/install-meta.sh | META_INSTALL_DIR="$HOME/bin" sh
```

Download the installer first when you do not want `curl | sh`:

```
curl -fsSL https://raw.githubusercontent.com/metastack-systems/metastack-cli/main/scripts/install-meta.sh -o install-meta.sh
sh install-meta.sh --version v0.1.0
```

After installation:

```
meta --help
```

Check whether a newer stable GitHub Release is available for the installed binary:

```
meta upgrade --check
```

Preview the verified replacement plan without mutating the install:

```
meta upgrade --dry-run
```

Apply the latest stable GitHub Release in place:

```
meta upgrade
```

Advanced version-management path:

```
meta upgrade --version 0.2.0 --dry-run
meta upgrade --version 0.3.0-rc.1 --prerelease
meta upgrade --version 0.1.0 --allow-downgrade
```

Inside a repository you want metastack to manage:
```
meta runtime config
meta runtime setup
meta backlog spec --root .
meta context scan
meta context show
meta backlog plan --request "Break the next release into Linear-ready tickets"
```

If you are ready to supervise issue execution:

```
meta agents listen --team MET --project "MetaStack CLI"
```

Before running `meta agents listen` with the built-in providers:
- Built-in Codex workers require `~/.codex/config.toml` to include:

  ```toml
  approval_policy = "never"
  sandbox_mode = "danger-full-access"
  ```

- Remove `[mcp_servers.linear]` from the Codex config when possible. The preflight warns when Linear MCP is detected.
- Built-in Claude workers require `claude` on `PATH`.
- Built-in Claude listen runs should not inherit `ANTHROPIC_API_KEY`; headless listen is expected to use the local Claude subscription instead of an API-key override.
- Run `meta agents listen --check` to validate the active listen provider prerequisites plus Linear reachability/auth without starting the daemon.
`meta runtime setup` bootstraps the repo-local `.metastack/` workspace:

```
.metastack/
  SPEC.md
  README.md
  meta.json
  agents/
    README.md
    briefs/
    sessions/
  backlog/
    README.md
    _TEMPLATE/
      README.md
      index.md
      checklist.md
      contacts.md
      decisions.md
      proposed-prs.md
      risks.md
      specification.md
      implementation.md
      validation.md
      context/
      tasks/
      artifacts/
  codebase/
    README.md
  workflows/
    README.md
  cron/
    README.md
```
The preferred public surface is domain-first. Legacy top-level commands such as `meta plan`, `meta technical`, `meta listen`, and `meta sync` remain available during the migration window and print a hint toward the preferred path.
| Command family | Use it for |
|---|---|
| `meta backlog` | Plan, analyze dependencies, batch into releases, create technical backlog children, and sync backlog work for the current repository |
| `meta linear` | Browse, create, edit, refine, and dashboard Linear work |
| `meta agents` | Run the unattended listener and reusable workflow playbooks |
| `meta context` | Inspect, map, doctor, scan, or reload the effective agent context |
| `meta runtime` | Configure install-scoped and repo-scoped defaults and supervise cron jobs |
| `meta dashboard` | Open Linear, agents, team, or ops-oriented dashboard views |
| `meta merge` | Discover open GitHub PRs, batch them in a one-shot dashboard, and publish one aggregate PR |
| `meta workspace` | List, clean, and prune sibling workspace clones (listener, improve, review) under the fixed workspace root |
| `meta upgrade` | Check and apply verified GitHub Release self-updates for release installs on macOS/Linux |
Long-form editors and preview panes in the terminal UI now share one scrolling model. When a multiline editor or preview has focus, Up, Down, PgUp, PgDn, Home, and End move within the wrapped content, and mouse-wheel scrolling applies to the focused pane instead of leaking into surrounding lists or forms.
This applies to flows such as `meta backlog spec`, `meta backlog plan`, `meta linear issues create`, `meta linear issues edit`, `meta dashboard linear`, `meta backlog tech`, `meta merge`, and related preview-driven dashboards that render long descriptions or generated file content.
Interactive TUIs also share one copy/export contract. Press `Ctrl+Y` to copy the focused field or pane, and when the local clipboard is unavailable MetaStack opens a terminal-safe export overlay instead of failing silently. The shared implementation and coverage matrix are documented in `docs/tui-copy-contract.md`.
Build and install the CLI into your local Cargo bin directory:

```
cargo install --path .
```

Make sure Cargo's bin directory is on your `PATH`:

```
export PATH="$HOME/.cargo/bin:$PATH"
```

A typical end-to-end loop looks like this:
- Run `meta runtime config` once to save install-scoped Linear auth and agent defaults.
- Run `meta runtime setup` once per repository to scaffold `.metastack/` and save repo defaults.
- Run `meta backlog spec` to create or refine the repo-local `.metastack/SPEC.md`.
- Run `meta context scan` to refresh the repo context under `.metastack/codebase/`.
- Use `meta backlog plan`, `meta backlog split`, or `meta backlog tech` to create structured backlog work.
- Use `meta linear ...`, `meta dashboard ...`, or `meta backlog sync` to coordinate with Linear.
- Use `meta merge` when you want to batch open GitHub PRs in one isolated aggregate merge run.
- Use `meta agents listen` when you want unattended ticket execution inside a dedicated workspace clone.
- Use `meta workspace` when you want to inspect or clean those listener-created clones later.
Engineer:

```
meta runtime setup --team MET --project "MetaStack CLI"
meta backlog spec --root .
meta context scan
meta backlog plan --request "Break the next release into Linear-ready tickets"
meta backlog split MET-35
meta backlog tech MET-35
```

Team lead:

```
meta linear issues list --team MET --state "In Progress"
meta linear issues refine MET-35 --passes 2
meta dashboard team --team MET --project "MetaStack CLI"
```

Ops-style operator:

```
meta agents listen --team MET --project "MetaStack CLI" --once
meta dashboard agents --team MET --project "MetaStack CLI" --render-once
meta dashboard ops
meta runtime cron status
```

Aggregate merge operator:

```
meta merge --json
meta merge
meta merge --no-interactive --pull-request 101 --pull-request 102 --validate "make quality"
```

Inspect or update the install-scoped MetaStack CLI config:
```
meta runtime config
meta runtime config --json
meta runtime config --api-key lin_api_work
meta runtime config --default-profile work
meta runtime config --default-agent codex --default-model gpt-5.4 --default-reasoning medium
meta runtime config --default-assignee viewer --default-state Backlog --default-priority 2 --default-label platform --default-label cli
meta runtime config --velocity-project "MetaStack CLI" --velocity-state Backlog --velocity-auto-assign viewer
meta runtime config --vim-mode enabled
meta runtime config --listen-context-budget-tokens 180000
meta runtime config --listen-retry-initial-backoff 3 --listen-retry-max-backoff 45
meta runtime config --listen-ci-poll-interval 30 --listen-ci-poll-timeout 900 --listen-ci-timeout-behavior block
meta runtime config --merge-validation-repair-attempts 8
meta runtime config --merge-validation-transient-retry-attempts 2
meta runtime config --merge-publication-retry-attempts 6
meta runtime config --route backlog --route-agent claude --route-model opus
meta runtime config --route backlog.plan --route-agent codex --route-model gpt-5.3-codex
meta runtime config --clear-route backlog.plan
meta runtime config --advanced-routing
meta runtime config --replay-onboarding
```

Legacy alias: `meta config`
`meta runtime config` writes a TOML config file to `$METASTACK_CONFIG` when set, otherwise:

- `$XDG_CONFIG_HOME/metastack/config.toml`
- `~/.config/metastack/config.toml`
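The lookup order above can be sketched as a small shell helper; this is an illustration of the documented precedence, not MetaStack's own code:

```shell
# Resolve the MetaStack config path the way the docs describe:
# $METASTACK_CONFIG wins, then the XDG location, then the home default.
config_path() {
  if [ -n "${METASTACK_CONFIG:-}" ]; then
    echo "$METASTACK_CONFIG"
  elif [ -n "${XDG_CONFIG_HOME:-}" ]; then
    echo "$XDG_CONFIG_HOME/metastack/config.toml"
  else
    echo "$HOME/.config/metastack/config.toml"
  fi
}
```

So `METASTACK_CONFIG=/tmp/meta.toml meta runtime config` would read and write `/tmp/meta.toml` instead of the XDG path.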
The persisted config can store:

- install-scoped Linear API key, default team, and default project values
- install-scoped backlog ticket defaults under `[backlog]`, including `default_assignee`, `default_state` (the default Linear workflow status for new standalone issues), `default_priority`, additive `default_labels`, and zero-prompt `velocity_defaults`
- install-scoped onboarding completion state
- install-scoped global defaults for listen label, listen assignment scope, listen refresh policy, listen poll interval, listen context budget, listen retry backoff, listen agent turn timeout, listen graceful shutdown window, post-publication GitHub CI settle polling, plan follow-up limit, and plan/technical issue labels
- install-scoped UI defaults under `[defaults.ui]`, including `vim_mode = true|false` for safe `h`/`j`/`k`/`l` navigation aliases
- named global Linear profiles under `[linear.profiles.<name>]`
- an optional global `linear.default_profile`
- global default provider/model/reasoning values for the built-in `codex`/`claude` catalog
- install-scoped merge defaults under `[merge]`, including validation repair, transient retry, and publication retry caps for `meta merge`
- advanced family-level agent routing under `[agents.routing.families.<family>]`
- advanced command-level agent routing under `[agents.routing.commands."<route>"]`
Agent-backed routes resolve install-scoped settings in this order:

1. command route override
2. command family override
3. repo default from `.metastack/meta.json` when present
4. global default

For an individual run, explicit CLI flags still win over the routed defaults: `--agent`/`--provider` first, then `--model`, then `--reasoning`.
First-run behavior:

- a fresh install routes normal `meta` commands into the shared onboarding wizard before the requested command runs
- `meta runtime setup`/`meta setup` are also intercepted until install onboarding completes, then return to being the manual repo-scoped editing flows
- `meta runtime config --replay-onboarding` and `meta config --replay-onboarding` rerun the same wizard for debugging or refreshes
- plain `meta runtime config`/`meta config` remain the manual install-scoped editing dashboards after onboarding
For the built-in providers, `--reasoning`, `default_reasoning`, and `route_reasoning` are validated against the selected provider/model catalog instead of being accepted as free text. The dashboards now render reasoning as a select field tied to the current provider/model choice.
When `vim_mode` is enabled, supported TUIs add `h`/`j`/`k`/`l` aliases only while a non-text control is focused. Search bars, single-line inputs, and multiline editors continue to insert literal characters, and search-first dashboards require leaving query focus before vim navigation takes over.
Built-in reasoning options shipped in-repo:

- `codex` (`gpt-5.4`, `gpt-5.3-codex`, `gpt-5.2-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5-codex`, `gpt-5-codex-mini`): `low`, `medium`, `high`
- `claude` (`sonnet`, `opus`, `haiku`, `sonnet[1m]`, `opusplan`): `low`, `medium`, `high`, `max`
Use `meta runtime config --advanced-routing` for the dedicated routing dashboard, or use `--route`, `--route-agent`, `--route-model`, `--route-reasoning`, and `--clear-route` for non-interactive edits.
Listen retry backoff lives under `[defaults.listen.retry]` with `initial_backoff_seconds` and `max_backoff_seconds`. When unset, the effective shared listen policy defaults to 2s initial and 60s max. Both values must stay within `1..=3600`, and the max backoff must be greater than or equal to the initial backoff. `meta runtime config --json` also renders the effective resolved values under `effective.listen_retry`.

Post-publication GitHub CI settle polling lives under `[defaults.listen]` as `ci_poll_interval_seconds`, `ci_poll_timeout_seconds`, and `ci_timeout_behavior`. When unset, the effective shared policy defaults to 30s, 900s, and `block`. The timeout must be greater than or equal to the poll interval. `meta runtime config --json` renders the resolved values under `effective.listen_ci_settle`.
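Assuming the key names quoted above, both listen policies could be pinned explicitly in the global config; the values shown are the documented defaults:

```toml
[defaults.listen]
# Post-publication GitHub CI settle polling.
ci_poll_interval_seconds = 30
ci_poll_timeout_seconds = 900    # must be >= the poll interval
ci_timeout_behavior = "block"

[defaults.listen.retry]
# Both values must stay within 1..=3600, with max >= initial.
initial_backoff_seconds = 2
max_backoff_seconds = 60
```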
Supported route families:

- `backlog`
- `context`
- `linear`
- `agents`
- `runtime.cron`
- `merge`
Supported command route keys:

- `backlog.spec`
- `backlog.plan`
- `backlog.improve`
- `backlog.split`
- `backlog.tech`
- `context.scan`
- `context.reload`
- `linear.issues.refine`
- `agents.listen`
- `agents.build`
- `agents.workflows.run`
- `runtime.cron.prompt`
- `merge.run`
Example global config:

```toml
[linear]
default_profile = "work"

[linear.profiles.work]
api_key = "lin_api_work"
api_url = "https://api.linear.app/graphql"
team = "MET"

[agents]
default_agent = "codex"
default_model = "gpt-5.4"
default_reasoning = "medium"

[backlog]
default_assignee = "viewer"
default_state = "Backlog"
default_priority = 2
default_labels = ["platform", "cli"]

[backlog.velocity_defaults]
project = "MetaStack CLI"
state = "Backlog"
auto_assign = "viewer"

[merge]
validation_repair_attempts = 6
validation_transient_retry_attempts = 3
publication_retry_attempts = 5

[agents.routing.families.backlog]
provider = "claude"
model = "opus"
reasoning = "high"

[agents.routing.commands."backlog.plan"]
provider = "codex"
model = "gpt-5.3-codex"
```

Scaffold repo-local `.metastack/` state and inspect or update repo-scoped defaults:
```
meta runtime setup
meta runtime setup --json
meta runtime setup --team MET --project "MetaStack CLI"
meta runtime setup --api-key lin_api_repo --team MET --project "MetaStack CLI"
meta runtime setup --provider codex --model gpt-5.4 --reasoning medium
meta runtime setup --listen-label agent --assignee-scope viewer-only --refresh-policy reuse-and-refresh --listen-context-budget-tokens 180000
meta runtime setup --default-assignee viewer --default-state Todo --default-priority 3 --default-label platform
meta runtime setup --velocity-project "MetaStack CLI" --velocity-state Backlog --velocity-auto-assign viewer
```

Inspect or apply GitHub Release self-updates for supported macOS/Linux release installs:
```
meta upgrade --check
meta upgrade --dry-run
meta upgrade
meta upgrade --version 0.2.0 --dry-run
meta upgrade --version 0.3.0-rc.1 --prerelease
meta upgrade --version 0.1.0 --allow-downgrade
```

Default behavior resolves the latest stable GitHub Release for the running platform, verifies the selected archive against the published `SHA256SUMS`, stages extraction outside the live install path, and replaces the installed `meta` binary only after verification succeeds.
`meta upgrade` refuses Cargo installs and source-checkout builds because those origins are not safe to mutate in place. Reinstall from GitHub Releases when you want an install that can self-update.

Use `--check` to inspect the latest stable release without mutating the current install. Use `--dry-run` to resolve the same release and print the planned replacement path without swapping the binary. The advanced path keeps the default UX strict while still allowing pinned versions, prerelease opt-in, and deliberate downgrades with explicit flags.
Legacy alias: `meta setup`
`meta runtime setup` is safe to rerun in an existing checkout. It:

- creates `.metastack/` when needed
- seeds `.metastack/backlog/_TEMPLATE/` from the canonical Markdown tree shipped in `src/artifacts/BACKLOG_TEMPLATE`
- lets the setup flow inherit shared Linear auth, or save a project-specific Linear API key in install-scoped CLI config when a project needs its own token
- validates any repo-selected profiles and built-in provider/model/reasoning combinations against the install-scoped catalog
- resolves `--project <NAME>` to a canonical Linear project ID before saving
- writes repo defaults only to `.metastack/meta.json`
Repo setup dashboards automatically honor the install-scoped `vim_mode` toggle from `meta runtime config`, but only for non-text controls such as select lists. Title, label, path, and prompt fields keep literal `h`/`j`/`k`/`l` editing behavior.
For listen setup, use `assignment_scope = "viewer_only"` to watch only issues assigned to the authenticated viewer, or `assignment_scope = "viewer_or_unassigned"` to also admit unassigned tickets. Existing repo config that still stores the legacy value `assignment_scope = "viewer"` continues to load as `viewer_or_unassigned` for compatibility. Use `meta agents listen --all-assignees` when a single run should ignore assignee scope without mutating repo setup.

Promoted repo-aware settings now resolve as CLI override -> repo default -> install default for the shared team/project, listen label/scope/refresh/poll interval, interactive plan follow-up limit, and plan/technical issue labels. Repo-scoped `.metastack/meta.json` still overrides the install-scoped defaults saved by onboarding.
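Assuming the scope key lives in the repo-scoped `listen` block (the exact key placement is an assumption, not confirmed by the scaffold example below), pinning the modern value would look like:

```json
{
  "listen": {
    "assignment_scope": "viewer_or_unassigned",
    "poll_interval_seconds": 30
  }
}
```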
For unattended `meta agents listen` runs, setup should be paired with a provider preflight:

- Codex requires `~/.codex/config.toml` with `approval_policy = "never"` and `sandbox_mode = "danger-full-access"`, and `[mcp_servers.linear]` should be removed or disabled.
- Claude requires `claude` on `PATH` and `ANTHROPIC_API_KEY` unset so the local subscription is used.
- Run `meta agents listen --check --root .` to verify the current machine before starting the daemon.
If setup finds canonical template files with local changes, interactive TTY runs prompt for overwrite, skip, or cancel. Non-interactive paths such as `--json` and direct flag updates stop with a clear error instead of silently overwriting those backlog template files.

Repo-dependent commands such as `meta backlog plan`, `meta backlog split`, `meta backlog tech`, `meta backlog sync`, and `meta agents listen` now require repo setup and point back to `meta runtime setup` when `.metastack/meta.json` is missing.
Example repo-scoped config:
{
"linear": {
"profile": "work",
"team": "MET",
"project_id": "project-42"
},
"agent": {
"provider": "codex",
"model": "gpt-5.4",
"reasoning": "medium"
},
"listen": {
"poll_interval_seconds": 30
},
"plan": {
"interactive_follow_up_questions": 6
}
}Precedence is consistent across the CLI:
- Linear-backed commands use CLI flag override -> install-scoped repo auth -> repo `.metastack/meta.json` profile -> global config -> `LINEAR_*` environment fallback.
- Agent-backed launches use CLI override -> repo `.metastack/meta.json` -> global config.
- Default issue status for standalone tickets resolves as CLI `--state` override -> `velocity_defaults.state` (zero-prompt) -> repo `backlog.default_state` -> global `backlog.default_state` -> built-in `"Backlog"`. Child tickets created by `meta backlog split` follow that standalone resolution path. Child tickets created by `meta backlog tech` inherit the parent issue's status instead of using the configured default; explicit `--state` overrides still take precedence.
- `meta linear issues create` also resolves the default issue status from repo and global config when no `--state` flag is provided.
- The CLI is read-only for workflow state selection: onboarding and config pickers query existing states from the Linear team but cannot create new ones. If a configured `default_state` does not match any state on the target team, the command fails with a clear error. Create new workflow states in the Linear UI first. See `docs/linear-workflow-state-creation.md` for the full decision.
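The status fallback chain above can be sketched as a tiny shell helper; this is an illustration of the documented precedence, not MetaStack's implementation:

```shell
# First non-empty candidate wins: CLI --state, velocity_defaults.state,
# repo backlog.default_state, then global backlog.default_state.
resolve_state() {
  for candidate in "$@"; do
    if [ -n "$candidate" ]; then
      echo "$candidate"
      return
    fi
  done
  echo "Backlog"  # built-in final fallback
}
```

For example, with no CLI flag and no zero-prompt default, a repo `default_state` of `Todo` wins over a global `Backlog`.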
Inspect open GitHub pull requests for the current checkout, select a batch in a one-shot ratatui dashboard, run an aggregate merge in an isolated workspace outside the source checkout, rerun validation, and open or update one aggregate PR back into the repository default branch.
`meta merge` requires:

- `gh` on `PATH`
- a repo that has already been bootstrapped with `meta runtime setup`
- a configured local agent for merge planning and conflict help
Common invocations:

```
meta merge --json
meta merge
meta merge --render-once --events space,down,space,enter
meta merge --no-interactive --pull-request 101 --pull-request 102 --validate "make quality"
meta merge --resume-run 20260320T150254Z
```

Behavior summary:
- `--json` emits the resolved GitHub repository metadata plus the open PR list used by the dashboard and planner.
- Plain `meta merge` opens a one-shot dashboard that lets you select multiple PRs, review the selected batch summary, scroll the focused preview pane with `Up`/`Down`, `PgUp`/`PgDn`, `Home`/`End`, or the mouse wheel when it overflows, launch immediately, then stay in a live progress screen until the merge run succeeds or fails.
- The focused `meta merge` preview keeps the PR metadata header and now renders the selected PR body with the shared TUI markdown renderer, preserving headings, lists, blockquotes, fenced code blocks, and blank lines.
- `--render-once` prints a deterministic dashboard snapshot for tests and proofs.
- `--no-interactive` skips the dashboard and runs the selected `--pull-request` values directly while printing textual phase updates to stdout.
- `--resume-run <RUN_ID>` reuses an existing aggregate branch and run artifact directory under `.metastack/merge-runs/<RUN_ID>/`, revalidates the preserved workspace, repushes the branch, and updates the aggregate PR instead of starting from scratch.
- `--validate <COMMAND>` overrides the post-merge validation commands. When omitted, `meta merge` prefers `make quality` when the repo Makefile exposes that target, otherwise `make all`, otherwise `cargo test` for Rust repositories.
- Validation now narrows repeated failures before rerunning the full suite. When `make quality` reports a specific Rust test or clippy failure, `meta merge` first reruns the exact failing target, fingerprints the failure, and stops treating the same signature as transient once it repeats. That avoids wasting loops on deterministic failures that only looked flaky on the first pass.
- Validation is no longer a hard publication gate. When validation stays red after bounded automated recovery, `meta merge` still pushes the aggregate branch, creates or updates the aggregate PR, and records the unresolved validation status in both the run artifacts and the PR body so repair work can continue without restarting the batch.
- Push and aggregate PR publication retry on transient remote errors, and the install-scoped merge knobs now cover all three control points: `[merge].validation_repair_attempts`, `[merge].validation_transient_retry_attempts`, and `[merge].publication_retry_attempts`.
- Both interactive and non-interactive runs publish the same major phases: workspace preparation, plan generation, merge application, validation, push, and PR publication. Merge application also records finer-grained per-PR substeps such as the active pull request and whether conflict assistance ran.
Each run writes local audit artifacts under `.metastack/merge-runs/<RUN_ID>/`, including:

- `context.json` with the repository, selected PR set, aggregate branch, and isolated workspace path
- `agent-plan-prompt.md` with the exact planner prompt sent to the configured local agent
- `plan.json` with the agent-selected merge order and conflict hotspots
- `progress.json` with the current phase, active substep detail, phase states, and the full structured event trail needed to reconstruct success and failure paths
- `merge-progress.json` with the structured run snapshot plus per-PR outcomes
- `validation.json` with each validation attempt, captured command output, and any repair commits recorded between attempts
- `aggregate-pr-body.md` with the Markdown body used when creating or updating the aggregate PR
- `publication.json` with the aggregate PR publication result
- `conflict-prompt-pr-<NUMBER>.md` and `conflict-resolution-pr-<NUMBER>.md` when agent-assisted conflict handling was required
- `validation-repair-prompt-attempt-<N>.md` and `validation-repair-output-attempt-<N>.md` when agent-assisted validation repair was required
Inspect the current repository, write a deterministic scan fact base, then launch the configured local agent to refresh the higher-level planning docs:

```
meta context scan
```

Legacy alias: `meta scan`
Outputs:

- `.metastack/codebase/SCAN.md`
- `.metastack/codebase/ARCHITECTURE.md`
- `.metastack/codebase/CONCERNS.md`
- `.metastack/codebase/CONVENTIONS.md`
- `.metastack/codebase/INTEGRATIONS.md`
- `.metastack/codebase/STACK.md`
- `.metastack/codebase/STRUCTURE.md`
- `.metastack/codebase/TESTING.md`
When stdout is attached to a TTY, `meta context scan` renders a compact progress dashboard. The underlying agent output is captured in `.metastack/agents/sessions/scan.log`.

`meta context scan` treats the resolved repository root as the default target scope for the run. In monorepos, that means the top-level directory you invoked as `--root` (or the current working directory when `--root` is omitted). The scan prompt stays focused on that repository only and should narrow to a subproject only when the user explicitly asks for it.
List, explain, and run reusable workflow playbooks. The CLI ships with built-in playbooks for backlog planning, ticket implementation, PR review, and incident triage, and it also loads repo-local playbooks from `.metastack/workflows/`.
```
meta agents workflows list
meta agents workflows explain backlog-planning
meta agents workflows run backlog-planning
meta agents workflows run ticket-implementation
meta agents workflows run ticket-implementation --no-interactive --param issue=MET-93
meta agents workflows run ticket-implementation --render-once --param issue=MET-93
```

Legacy alias: `meta workflows`

Compatibility alias under `meta agents`: `meta agents workflow ...`

```
meta agents workflow run ticket-implementation --render-once
```

Playbooks use Markdown with YAML front matter. The front matter defines the workflow name, summary, default provider, parameter contract, validation steps, optional instructions, and an optional Linear issue lookup parameter. See `src/artifacts/workflows/README.md` for the shipped format and `.metastack/workflows/README.md` for the repo-local scaffold.
Interactive terminal runs are TUI-first:

- TTY runs open a guided wizard that collects required workflow inputs step by step.
- After generation the command lands on a review dashboard instead of exiting immediately.
- `e` opens multiline edit mode for the generated Markdown.
- `s` opens a one-off save-path prompt whose default lives under `.metastack/workflows/generated/`.
- Existing files require explicit overwrite confirmation in the TUI, or `--overwrite` in the headless fallback.
Deterministic fallback rules:

- Use `--no-interactive` for scripts, CI, and tests.
- Runs without a TTY use the same fallback automatically unless `--render-once` is set.
- The fallback path still requires explicit `--param key=value` pairs for all required inputs.
- `--output <PATH>` saves the generated Markdown artifact directly.
- `--render-once` prints a deterministic snapshot of the wizard for snapshot-style tests.
- `--render-once --events ...` scripts wizard, review, edit, and save transitions for deterministic TUI proofs.
- Use `accept-edit`, `discard-edit`, and `paste=TEXT` in `--events` to prove edited save/cancel behavior explicitly.
Reference:
Inspect and refresh the effective context that agent-backed runs consume:

```
meta context show
meta context map
meta context doctor
meta context scan --json
meta context reload
```

- `show` prints the effective repo-scoped instructions, loaded project rules, and known codebase context sources
- `map` prints a repo-map style summary derived from the live repository tree
- `doctor` reports missing or stale inputs such as `.metastack/meta.json`, repo rules, instructions files, and generated codebase docs
- `reload` re-runs the context refresh path used by `meta scan`
- `scan --json` runs the scan pipeline without a terminal snapshot and emits a JSON report describing the refreshed codebase context plus the written and removed files
Use these flags when an outer agent or shell wrapper needs deterministic non-interactive behavior:

| Command | Promptless mode | JSON selector | Machine output behavior |
|---|---|---|---|
| `meta backlog plan` | `--no-interactive` | implicit in `--no-interactive` | success and failure emit JSON |
| `meta backlog split` | `--no-interactive` | implicit in `--no-interactive` | success and failure emit JSON |
| `meta backlog tech` | `--no-interactive` | implicit in `--no-interactive` | success and failure emit JSON |
| `meta backlog sync <subcommand>` | `--no-interactive` | `--json` or implicit in `--no-interactive` | direct subcommands emit JSON |
| `meta linear issues create` | `--no-interactive` | implicit in `--no-interactive` | success and failure emit JSON |
| `meta linear issues edit` | `--no-interactive` | implicit in `--no-interactive` | success and failure emit JSON |
| `meta linear issues refine` | n/a | `--json` | success and failure emit JSON |
| `meta context scan` | n/a | `--json` | success and failure emit JSON |
| `meta agents listen --once` | headless | `--json` | emits one poll-cycle JSON payload |
| `meta agents workflows run` | `--no-interactive` | n/a | human-readable output or `--render-once` snapshot |
| `meta runtime cron init` | `--no-interactive` | `--json` or implicit in `--no-interactive` | success and failure emit JSON |
Notes:

- Mutation commands default to structured JSON when `--no-interactive` is active.
- `--render-once` remains a terminal snapshot mode, not a machine-output mode, and conflicts with `--json`, `--no-interactive`, or `--once` where those modes overlap.
- Machine-readable failures use one stable top-level shape: `status`, `command`, and `error { code, message, context? }`.
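Based on that shape, a machine-readable failure would look roughly like the payload below; the `code`, `message`, and `context` values here are hypothetical, only the top-level field names come from the contract above:

```json
{
  "status": "error",
  "command": "backlog.plan",
  "error": {
    "code": "missing_repo_setup",
    "message": "repo setup is required; run `meta runtime setup` first",
    "context": { "root": "." }
  }
}
```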
This matrix is the contract for agent callers deciding whether to drive a command as JSON, as a promptless mutation, or as a text snapshot:
| Command | `--no-interactive` | `--json` | `--render-once` |
|---|---|---|---|
| `meta backlog plan` | required for promptless runs; implies JSON | n/a | not supported |
| `meta backlog split` | required for promptless runs; implies JSON | n/a | not supported |
| `meta backlog tech` | required for promptless runs; implies JSON | n/a | not supported |
| `meta backlog sync status` | optional | supported | supported on the dashboard form (`meta backlog sync --render-once`) |
| `meta backlog sync link` | optional for scripting; requires explicit selectors | supported and implied by `--no-interactive` | not supported |
| `meta backlog sync pull` | optional for scripting; requires explicit selectors | supported and implied by `--no-interactive` | not supported |
| `meta backlog sync push` | optional for scripting; requires explicit selectors | supported and implied by `--no-interactive` | not supported |
| `meta linear issues create` | required for promptless runs; implies JSON | n/a | supported for the create form |
| `meta linear issues edit` | required for promptless runs; implies JSON | n/a | supported for the edit form |
| `meta linear issues refine` | not needed; command is already headless | supported | not supported |
| `meta context scan` | not needed; command is already headless | supported | not supported |
| `meta agents listen --once` | not needed; `--once` is the headless poll mode | supported only with `--once`; returns one poll cycle | supported separately as a text dashboard snapshot |
| `meta agents workflows run` | required for promptless scripted runs with explicit params | not supported | supported for the workflow wizard snapshot |
| `meta runtime cron init` | required for promptless writes; implies JSON | supported | supported as a text dashboard snapshot |
| `meta runtime config` | not needed | supported | supported |
| `meta merge` | required for promptless execution with explicit PR selection | supported | supported |
Rules:

- Prefer `--no-interactive` for any mutation command that would otherwise prompt for missing input.
- Prefer `--json` for read-only or already-headless flows such as `meta context scan`, `meta linear issues refine`, and `meta agents listen --once`.
- Use `--render-once` only when a text snapshot of the TUI is useful for humans or snapshot-style tests; it is not part of the machine JSON contract.
- Where both modes exist, `--render-once` is mutually exclusive with `--json`, `--no-interactive`, and `--once` because snapshot output is a separate text contract.
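A hedged sketch of how an agent caller might encode those rules before shelling out. The command grouping below is an illustration of the matrix, not an exhaustive or authoritative list:

```python
# Commands that need --no-interactive for promptless mutation runs,
# versus flows that are already headless and just want --json.
# Illustrative grouping; consult the matrix for the full set.
PROMPTLESS_MUTATIONS = {"backlog plan", "backlog split", "backlog tech",
                        "linear issues create", "linear issues edit"}
HEADLESS_JSON = {"context scan", "linear issues refine"}

def machine_flags(command: str) -> list[str]:
    """Pick machine-mode flags for a meta subcommand per the matrix."""
    if command in PROMPTLESS_MUTATIONS:
        return ["--no-interactive"]   # implies JSON output
    if command in HEADLESS_JSON:
        return ["--json"]             # already headless
    if command == "agents listen":
        return ["--once", "--json"]   # one poll-cycle payload
    return []                         # interactive / text default

print(machine_flags("backlog plan"))
print(machine_flags("context scan"))
```

The point of centralizing this is that `--render-once` never appears in the machine path: snapshot output stays a human/test concern.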
Create repository-local cron jobs as Markdown plus YAML front matter, then supervise them from the CLI:
```
meta runtime cron init
meta runtime cron init nightly --no-interactive --schedule "0 * * * *" --command "cargo test" --prompt "Review the latest test output and fix any failures"
meta runtime cron --root target/cli-proof/cron init nightly --no-interactive --schedule "0 * * * *" --command "cargo test"
meta runtime cron list
meta runtime cron validate
meta runtime cron status
meta runtime cron start
meta runtime cron stop
meta runtime cron run nightly
meta runtime cron approvals
meta runtime cron approve <RUN_ID> --note "ship it"
meta runtime cron reject <RUN_ID> --reason "not ready"
meta runtime cron resume <RUN_ID>
```

Legacy alias: `meta cron`
Machine mode:
- `meta runtime cron init --no-interactive ...` now emits structured JSON by default
- `meta runtime cron init --json ...` forces JSON without changing the rest of the command contract
- `meta runtime cron list --json`, `validate --json`, and `approvals --json` emit structured inspection output
- `--render-once` stays a text snapshot path for the dashboard and is separate from machine JSON output
Side effects:
- ensures `.metastack/cron/` exists
- creates `.metastack/cron/<NAME>.md` job definitions
- discovers workflow definitions from the install-scoped data root first, then overrides them by name with repo-scoped definitions under `.metastack/cron/`
- creates `.metastack/cron/.runtime/` on demand for scheduler state, per-run logs, and persisted run-state JSON under `.metastack/cron/.runtime/runs/`
- persists approval checkpoints, retry history, and resumable step state for every workflow run
In the interactive cron editor, the prompt field submits on Enter, inserts a newline on Shift+Enter, and supports Up/Down, PgUp/PgDn, Home/End, plus mouse-wheel scrolling to keep long wrapped prompts reachable.
Legacy single-step cron job files still work:
```
---
schedule: "0 * * * *"
command: "cargo test"
agent: "codex"
shell: "/bin/sh"
working_directory: "."
timeout_seconds: 900
enabled: true
---
Review the command output and update the repository when needed.
```

Explicit workflow files use the same Markdown wrapper but can declare durable steps, retries, and approval checkpoints:
```
---
schedule: "0 * * * *"
mode: workflow
enabled: false
retry:
  max_attempts: 2
  backoff_seconds: 5
steps:
  - id: collect
    type: shell
    command: "cargo test"
  - id: summarize
    type: agent
    agent: "codex"
    prompt: "Review the collected test output and summarize failures."
  - id: approval
    type: approval
    approval_message: "Approve opening follow-up issues for the failing tests."
  - id: follow_up
    type: cli
    command: "linear"
    args: ["issues", "list", "--state", "Todo"]
    guardrails:
      allow: ["linear"]
      mutates: []
---
Operator notes and runbook text live in the Markdown body.
```

Use `meta runtime cron validate` before starting the daemon when you are editing workflow files, `meta runtime cron approvals` to inspect paused runs, and `meta runtime cron approve|reject|resume` to move persisted runs forward without replaying completed steps. Shipped disabled-by-default examples live under `src/artifacts/cron/`.
Turn a planning request into one or more Linear backlog issues:
```
meta backlog plan
meta backlog plan --no-interactive --request "Plan a dashboard for feature intake" --answer "Use the existing TUI patterns" --answer "Split the work into multiple tickets"
meta backlog plan --fast
meta backlog plan --fast --multi --questions 1
meta backlog plan --fast --no-interactive --request "Plan the onboarding rewrite"
meta backlog plan ENG-10144
meta backlog plan ENG-10144 --velocity
```

Legacy alias: `meta plan`
In a TTY, meta backlog plan opens one persistent ratatui planning session to capture the request, collect follow-up answers, and review the generated ticket breakdown before creating Backlog issues in Linear.
Fast mode keeps the same downstream issue creation path but switches the interaction model to a single pass. When --fast is active, the command captures the request, optionally asks at most one round of follow-up questions, prompts once for Anything Else To Add?, streams the draft preview while the agent is still generating it, and then shows an approve-or-reject review screen. Fast review intentionally removes merge groups, skip states, and regeneration controls.
Within one meta backlog plan run, the shared agent runtime now reuses a built-in Codex or Claude session across follow-up generation, ticket generation, and interactive revisions when the provider returns a resume handle. That continuation is run-scoped only: the command does not persist planning sessions under .metastack/ or share them with listen workers.
Multiline request and follow-up editors submit on Enter; use Shift+Enter when you need to insert a newline without advancing the workflow. Focused editors also support Up/Down, PgUp/PgDn, Home/End, and mouse-wheel scrolling so long wrapped prompts and follow-up answers stay editable after they overflow the visible pane.
For deterministic automation, pass --no-interactive with --request and repeated --answer values. In zero-prompt mode (--no-interactive or no TTY), backlog planning resolves ticket defaults in this order: explicit flags, remembered project/team for the canonical repo root, repo backlog.velocity_defaults, global backlog.velocity_defaults, repo defaults from .metastack/meta.json, global defaults from config.toml, then built-in behavior. Generated plan priority still wins over config priority unless you pass --priority.
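That precedence chain amounts to "first explicitly set value wins." A minimal sketch under that assumption; the layer values here are illustrative, not actual config keys:

```python
def resolve_default(*layers):
    """Return the first layer that actually set a value, else None.

    Mirrors the documented order: explicit flag, remembered
    project/team, repo velocity defaults, global velocity defaults,
    repo meta.json, global config.toml, then built-in behavior.
    """
    for value in layers:
        if value is not None:
            return value
    return None

# Hypothetical layers: no CLI flag, no remembered team,
# repo-level default set, global default present but outranked.
print(resolve_default(None, None, "MET", "ENG"))  # MET
```

Note the documented exception: generated plan priority still outranks config priority unless `--priority` is passed explicitly, so priority does not follow this plain chain.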
Fast planning adds three mode-specific controls:
- `--fast` enables the single-pass planning flow. Repo or install config can also enable it by default with `plan.default_mode = "fast"`.
- `--multi` only applies in fast mode and overrides the default single-ticket shape so the agent may emit multiple issues.
- `--questions <N>` only applies in fast mode and caps the one-round follow-up phase. `0` skips fast Q&A entirely.
Fast mode defaults to a single ticket unless you pass --multi or disable that preference through repo/install config with plan.fast_single_ticket = false. The follow-up cap resolves in the same precedence order as other plan defaults: explicit CLI flag, repo config, install config, then built-in default. Fast non-interactive runs do not accept --answer; they go straight from --request to generation so the flow stays single-pass.
Machine mode:
- `meta backlog plan --no-interactive ...` emits created or reshaped issue data as JSON instead of text
- missing-input and other machine-mode failures also emit JSON with `error.code`, `error.message`, and optional `error.context`
- the old harness `intuition product "desc" --velocity` maps to `meta backlog plan --no-interactive --request "desc" --answer "..."`
- the old harness `intuition product "desc" --velocity --dry-run` maps to `meta backlog plan --no-interactive --request "desc" --answer "..." --dry-run`
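The legacy-harness mapping is mechanical, so migration scripts can build the new argv directly. A sketch of that translation; the `migrate_intuition` helper name is made up for illustration:

```python
def migrate_intuition(request: str, answers: list[str],
                      dry_run: bool = False) -> list[str]:
    """Translate an old `intuition product` call into the new CLI argv."""
    argv = ["meta", "backlog", "plan", "--no-interactive",
            "--request", request]
    for answer in answers:          # one --answer per remembered reply
        argv += ["--answer", answer]
    if dry_run:
        argv.append("--dry-run")    # preserved from the legacy flag
    return argv

print(migrate_intuition("desc", ["..."], dry_run=True))
```

The `--velocity` flag from the old harness has no direct argument here because `--no-interactive` already implies the zero-prompt behavior it selected.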
meta backlog plan also accepts --state, --priority, repeated --label, and --assignee. Built-in plan labeling remains mandatory and additive, so config labels and explicit labels are appended rather than replacing it.
meta backlog plan <IDENTIFIER> reshapes an existing Linear issue in place instead of creating a new one. The command loads the current issue context, asks the configured planning agent for a stronger rewrite, and then updates the same ticket through issueUpdate.
Interactive reshape runs print a before/after diff preview and require confirmation before the update. Pass --velocity to skip that preview and auto-apply the rewrite. Reshape mode preserves assignee, labels, project, state, cycle, and priority, updates or creates the active ## Codex Workpad comment, and intentionally leaves local .metastack/backlog/<ISSUE>/ files unchanged in this slice.
--fast, --multi, and --questions do not apply to reshape mode. The command rejects those combinations instead of silently changing reshape behavior.
The planning prompt is repo-scoped by default: it derives the active project identity from the resolved repository root, plans for the full repository directory, and asks the agent to create backlog issues only for that repository unless the user explicitly narrows the request to a subproject.
Side effects:
- ensures `.metastack/backlog/_TEMPLATE/` exists
- creates one or more Linear backlog issues
- copies the full canonical template tree into `.metastack/backlog/<NEW_ISSUE_ID>/`
- writes each generated backlog item to `.metastack/backlog/<NEW_ISSUE_ID>/`
- uses `.metastack/backlog/<NEW_ISSUE_ID>/index.md` as the initial Linear issue description
- writes `.metastack/backlog/<NEW_ISSUE_ID>/.linear.json` to persist issue metadata
Create or improve the repo-local .metastack/SPEC.md for the active repository:
```
meta backlog spec --root .
meta backlog spec --root . --no-interactive --request "Add a repo-local SPEC workflow"
meta backlog spec --root . --no-interactive --request "Improve the current SPEC" --answer "Clarify the non-goals"
```

In a TTY, `meta backlog spec` opens a staged ratatui interview. On first run it asks what the repository should build, asks concise follow-up questions, and drafts `.metastack/SPEC.md`. On later runs it loads the existing SPEC, asks what should change, and revises that same file in place.
The command is repo-local by design: it targets only .metastack/SPEC.md, does not create Linear issues, and does not write .metastack/backlog/<ISSUE>/ packets. Generated markdown must include uppercase OVERVIEW, GOALS, FEATURES, and NON-GOALS headings.
For deterministic automation, pass --no-interactive with --request and optional repeated --answer values. Hidden --render-once hooks exist for snapshot testing of the major TUI states.
Split an existing Linear issue into a reviewed inverse-planning proposal, then optionally apply that proposal by creating child backlog issues, rewriting the parent into an umbrella ticket, and linking dependencies:
```
meta backlog split --api-key "$LINEAR_API_KEY" MET-35
meta backlog split --api-key "$LINEAR_API_KEY" --no-interactive MET-35
```

The command requires a configured local agent, or one of the built-in supported agents (codex / claude) available on PATH.
meta backlog split uses the same repo-root scope contract as meta backlog plan: the agent sees the active repository identity derived from the resolved root, defaults work to the top-level repository directory, and should only propose a narrower split when the user explicitly requested a subproject.
Interactive runs stay inside one ratatui review flow: source issue review, proposed child review and selection, dependency review, addendum entry, final confirmation, apply, or cancel. Approved runs create one .metastack/backlog/<CHILD>/ packet per created issue, rewrite the source ticket into an umbrella parent, and create dependency links for any reviewed suggestions that still resolve after final selection.
meta backlog split accepts --no-interactive, --state, --priority, repeated --label, and --assignee. In machine mode, meta backlog split --no-interactive <ISSUE> emits a structured proposal with child issues, a parent rewrite, and dependency suggestions as JSON under the backlog.split command envelope. Missing-input failures also emit structured JSON.
Create a technical sub-issue from an existing Linear parent issue and have the configured local agent turn the repo template into a concrete backlog item:
```
meta backlog tech --api-key "$LINEAR_API_KEY" MET-35
meta backlog derive --api-key "$LINEAR_API_KEY" MET-35
```

Legacy alias: `meta technical`
The command requires a configured local agent, or one of the built-in supported agents (codex / claude) available on PATH.
meta backlog tech uses the same repo-root scope contract as meta backlog plan: the agent sees the active repository identity derived from the resolved root, defaults work to the top-level repository directory, and should only produce a narrower technical backlog item when the user explicitly requested a subproject.
meta backlog tech also accepts --no-interactive, repeated --answer, --state, --priority, repeated --label, and --assignee. Interactive runs can now pause after acceptance-criteria selection for a dedicated follow-up-questions step, then loop in review with a refine action until you confirm, cancel, or hit the configured technical refinement cap. Child tickets inherit their parent issue's workflow status by default. When the parent has no status or an explicit --state override is passed, the command falls back to the configured default (repo > global > built-in Backlog). Parent issue project and priority are preserved over config defaults unless an explicit CLI override is passed, and the final project/team selection is persisted for later zero-prompt runs in the install-scoped data directory.
In machine mode, meta backlog tech --no-interactive <ISSUE> emits the created child issue, parent issue, and local backlog path as JSON. Repeated --answer values are consumed in the same order the agent asks follow-up questions, and mismatched answer counts fail before the command creates a child issue or writes a backlog packet. The technical generation contract now expects tagged JSON responses: {"kind":"questions","questions":[...]} or {"kind":"draft","files":[...]}.
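A hedged consumer-side sketch of that tagged contract; the handler labels are illustrative, and only the two `kind` shapes come from the documented generation contract:

```python
import json

def handle_tech_response(raw: str):
    """Dispatch on the documented tagged shapes:
    {"kind":"questions","questions":[...]} or
    {"kind":"draft","files":[...]}.
    """
    msg = json.loads(raw)
    kind = msg.get("kind")
    if kind == "questions":
        # Agent wants another follow-up round before drafting.
        return ("ask_user", msg["questions"])
    if kind == "draft":
        # Agent produced the backlog packet files to write.
        return ("write_packet", msg["files"])
    raise ValueError(f"unexpected response kind: {kind!r}")

print(handle_tech_response('{"kind":"questions","questions":["Scope?"]}'))
```

Tagging responses this way lets the CLI loop on `questions` until the agent finally emits a `draft`, without guessing from the payload structure.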
Across `meta backlog plan`, `meta backlog spec`, `meta backlog split`, and `meta backlog tech`, recovered generation failures now stay visible until the next real edit or resubmit instead of disappearing on routine navigation. If capture-mode execution fails with `agent returned empty response — check provider CLI version or agent configuration`, treat that as a provider CLI regression or local agent-command misconfiguration before debugging downstream JSON parsing.
In a TTY, the parent-issue picker now uses the shared Linear issue browser:
- type to search by identifier, title, state, project, or description
- matching is case-insensitive and ranks exact identifiers first, then identifier prefixes and exact token matches, then broader substring matches
- shared semantic styling highlights identifiers, titles, state, priority, project, and preview metadata while you review the selected parent issue
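The ranking tiers above can be sketched as a sort key. This is an approximation of the documented ordering (exact identifier, then identifier prefix or exact token, then broader substring), not the CLI's actual matcher:

```python
def rank(query: str, issue: dict) -> int:
    """Lower is better: 0 exact identifier, 1 identifier prefix or
    exact token match, 2 broader substring match, 3 no match."""
    q = query.lower()
    ident = issue["identifier"].lower()
    haystack = " ".join(
        issue.get(k, "") for k in ("identifier", "title", "state",
                                   "project", "description")).lower()
    if ident == q:
        return 0
    if ident.startswith(q) or q in haystack.split():
        return 1
    if q in haystack:
        return 2
    return 3

issues = [{"identifier": "MET-3", "title": "met helper"},
          {"identifier": "MET-35", "title": "Split planner"}]
print(sorted(issues, key=lambda i: rank("met-35", i))[0]["identifier"])
```

Sorting candidates by this key reproduces the case-insensitive "exact identifiers first" behavior the picker describes.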
Side effects:
- ensures `.metastack/backlog/_TEMPLATE/` exists
- asks the configured local agent to inspect the parent Linear issue and author the backlog files from `.metastack/backlog/_TEMPLATE/`
- creates a new Linear child issue under the referenced parent
- copies the full canonical template tree into `.metastack/backlog/<NEW_ISSUE_ID>/`
- writes the generated backlog item to `.metastack/backlog/<NEW_ISSUE_ID>/`
- uses `.metastack/backlog/<NEW_ISSUE_ID>/index.md` as the Linear issue description
- uploads the remaining managed backlog files as Linear attachments
Review repo-scoped backlog issues for issue hygiene gaps before execution:
```
meta backlog improve
meta backlog improve --mode basic
meta backlog improve ENG-10144 --mode advanced
meta backlog improve ENG-10144 --mode advanced --apply
```

`meta backlog improve` is the repo-scoped backlog triage pass for existing backlog issues. Use it when you want to scan the current backlog for missing labels, weak titles/descriptions, missing acceptance criteria, absent priority or estimate, and parent-child structure opportunities. The default `basic` mode stays conservative and focuses on metadata hygiene. `advanced` mode can rewrite issue content more deeply and propose or apply an existing parent issue when the repository backlog clearly supports that structure.
In a TTY, the command now stays inside one guided dashboard flow after issue selection. For each issue, the configured agent classifies the ticket into one of four states: `no_update_needed`, `ready_for_update`, `needs_planning`, or `needs_questions`. The engineer then explicitly accepts, skips, or rejects that recommendation before the command moves on. The decision panel always shows the primary Enter action for the current ticket, plus explicit skip and reject paths. When the agent needs direct follow-up answers, the dashboard captures them inline, reruns the same issue review, and only offers an apply path once the recommendation is concrete enough to mutate local backlog content or Linear.
Use meta linear issues refine when you already know which issue needs a critique/rewrite and the main goal is improving the issue description itself. Use meta backlog improve when you want a backlog-quality sweep that evaluates whether existing repo-scoped backlog issues are ready for execution.
Side effects:
- scans either explicit issue identifiers or repo-scoped issues in the configured backlog state
- writes `original.md`, `issue.json`, `local-index.md` when present, `proposal.json`, `proposal.md`, and `summary.json` under `.metastack/backlog/<ISSUE>/artifacts/improvement/<RUN_ID>/`
- in the interactive dashboard, keeps local and remote changes gated behind an explicit human decision for each issue
- keeps the default flow proposal-only, without mutating `.metastack/backlog/<ISSUE>/index.md` or the Linear issue
- with `--apply`, writes the local artifact trail first, then updates `.metastack/backlog/<ISSUE>/index.md` when the proposal includes a description rewrite, and finally pushes the proposed metadata/content updates back to Linear
Critique and rewrite one or more existing Linear issues that already belong to the active repository scope:
```
meta issues refine MET-35
meta issues refine MET-35 MET-36 --passes 2
meta issues refine MET-35 --apply
```

`meta issues refine` is the quality-improvement step after `meta plan` or `meta backlog tech`. It reuses the configured local agent to critique the current Linear description, persist each refinement pass under `.metastack/backlog/<ISSUE>/artifacts/refinement/<RUN_ID>/`, and generate a proposed rewrite. By default the command is critique-only.
Pass --apply only when you want to promote the final rewrite into .metastack/backlog/<ISSUE>/index.md and then push that rewritten description back to Linear. The command always writes the local before/after snapshots first so the refinement run stays auditable even if the remote mutation fails.
Machine mode:
- `meta linear issues refine --json ...` emits structured findings, artifact paths, and apply-state details
- failure paths also use the same structured JSON envelope as the other machine-facing commands
Side effects:
- validates that every requested issue matches the configured repo team/project scope
- writes `original.md`, per-pass findings JSON/Markdown, `final-proposed.md`, and `summary.json` under `.metastack/backlog/<ISSUE>/artifacts/refinement/<RUN_ID>/`
- keeps the default flow critique-only, without mutating `.metastack/backlog/<ISSUE>/index.md` or the Linear issue description
- with `--apply`, updates `.metastack/backlog/<ISSUE>/index.md` before attempting the Linear description update
- during `meta listen`, blocks `--apply` for the active ticket so the primary issue description is not overwritten in unattended execution
Browse backlog entries from local .metastack/backlog/, hydrate linked rows from Linear, and pull or push the selected linked entry without leaving the terminal:
```
meta backlog sync --api-key "$LINEAR_API_KEY"
meta backlog sync --api-key "$LINEAR_API_KEY" status
meta backlog sync --api-key "$LINEAR_API_KEY" status --fetch
meta backlog sync --api-key "$LINEAR_API_KEY" link MET-35 --entry manual-notes
meta backlog sync --api-key "$LINEAR_API_KEY" link MET-35 --entry manual-notes --pull
meta backlog sync --api-key "$LINEAR_API_KEY" pull MET-35
meta backlog sync --api-key "$LINEAR_API_KEY" pull --all
meta backlog sync --api-key "$LINEAR_API_KEY" push MET-35
meta backlog sync --api-key "$LINEAR_API_KEY" push MET-35 --update-description
meta backlog sync --api-key "$LINEAR_API_KEY" push --all
```

Legacy alias: `meta sync`
Side effects:
- bare `meta backlog sync` opens a ratatui backlog-entry dashboard sourced from local `.metastack/backlog/` state
- the dashboard starts with query focus; finish editing the search, then press `Tab` or `Enter` to move into the backlog list
- linked dashboard rows hydrate the mapped Linear issue from `.linear.json`, while unlinked rows stay visible with explicit `unlinked` state
- unlinked dashboard rows are local-only until you run `meta backlog sync link <ISSUE> --entry <SLUG>`
- `link` associates an existing `.metastack/backlog/<ENTRY>/` directory with a Linear issue by writing `.linear.json`
- `link` prompts for an unlinked backlog entry in a TTY when `--entry <SLUG>` is omitted
- `link --pull` immediately hydrates the linked entry from Linear after writing metadata
- `status` scans `.metastack/backlog/` and prints `identifier | title | status | last sync`
- `status` resolves only local change state by default; pass `--fetch` to check the current Linear issue and surface `remote-ahead` or `diverged`
- `pull` refreshes `.metastack/backlog/<ISSUE_ID>/index.md` from the Linear description
- `pull` restores CLI-managed attachment files into the same directory when present
- `pull` re-downloads every markdown image referenced by the issue description, parent description, and Linear comments into `.metastack/backlog/<ISSUE_ID>/artifacts/`
- `pull` writes `.metastack/backlog/<ISSUE_ID>/artifacts/ticket-images.md` as a localized-image manifest
- `pull` writes `.metastack/backlog/<ISSUE_ID>/context/ticket-discussion.md` with chronological author-attributed comment context
- `pull` filters generated workpad and `[harness-sync]` comments out of the persisted discussion context and stores `last_pulled_comment_ids` in `.metastack/backlog/<ISSUE_ID>/.linear.json`
- `pull` logs per-image download failures without failing the overall sync
- `pull` uses raw `Authorization: <LINEAR_API_KEY>` only for `uploads.linear.app` image downloads; other hosts are fetched without that special auth header
- `pull` reuses previously localized ticket images when the same generated artifact path is still current
- `pull` persists `.metastack/backlog/<ISSUE_ID>/.linear.json`, including `local_hash`, `remote_hash`, `last_sync_at`, and `last_pulled_comment_ids` alongside the existing issue metadata
- when `pull` sees a `remote-ahead` or `diverged` packet, it shows a diff between the local `index.md` and the incoming Linear description before any files are overwritten
- in a TTY, `pull` asks for confirmation before overwriting local backlog content; in non-interactive runs it exits non-zero instead of silently replacing changed files
- `pull --all` walks every linked backlog entry sequentially and prints a synced/skipped/error summary
- `push` replaces only CLI-managed attachments by default, leaving unrelated Linear attachments untouched
- `push` parses `.metastack/backlog/<ISSUE_ID>/checklist.md` when present and upserts a single `[harness-sync]` Linear comment with per-milestone and overall completion status
- `push` leaves the Linear issue description unchanged unless you pass `--update-description`
- `push --update-description` refuses to overwrite the Linear description when the stored baselines resolve to `remote-ahead` or `diverged`
- `push --all` walks every linked backlog entry sequentially, respects `--update-description`, and exits non-zero when any entry fails
- during `meta listen`, `push --update-description` is blocked for the active ticket so the primary issue description stays untouched
- pass `--no-interactive` with `link`, `pull`, or `push` when scripting; in that mode every required selector must be explicit
- direct subcommands emit JSON when `--no-interactive` is active; `status` also supports explicit `--json`
- `.metastack/meta.json` optionally accepts `sync.discussion_file_char_limit` and `sync.discussion_prompt_char_limit` to tune the persisted discussion file budget and the `meta listen` prompt excerpt budget
The sync dashboard and render-once snapshot now include a shared issue search bar:
- type while the issue list is focused to search by identifier, title, state, project, or description
- matching is case-insensitive and ranks exact identifiers first, then identifier prefixes and exact token matches, then broader substring matches
- the shared browser highlights matches in issue rows and previews and keeps sync-specific actions on the right-hand side
The sync dashboard and render-once snapshot also show each issue's local sync state:
- `synced`: current local and remote hashes still match the stored baselines
- `local-ahead`: local tracked backlog files changed since the last stored baseline, but the Linear issue did not
- `remote-ahead`: the Linear issue changed since the last stored baseline, but the local backlog packet did not
- `diverged`: both local backlog files and the Linear issue changed since the last stored baseline
- `unlinked`: the local packet is missing or the existing `.linear.json` predates hash baselines
Local hashes are derived deterministically from tracked files under `.metastack/backlog/<ISSUE>/`. Dotfiles, including `.linear.json`, are excluded, and generated discussion/image artifacts stay local-only so repeat no-op syncs remain `synced`.
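Under those rules, the state labels reduce to two boolean comparisons against the stored baselines. A minimal sketch assuming a baseline dict shaped like the `.linear.json` fields named above; this is an illustration, not the CLI's actual code:

```python
def sync_state(local_hash: str, remote_hash: str, baseline) -> str:
    """Classify a backlog packet against stored baselines.

    baseline is the persisted {"local_hash": ..., "remote_hash": ...}
    pair; a missing baseline means the packet is unlinked.
    """
    if baseline is None or "local_hash" not in baseline:
        return "unlinked"
    local_changed = local_hash != baseline["local_hash"]
    remote_changed = remote_hash != baseline["remote_hash"]
    if local_changed and remote_changed:
        return "diverged"
    if local_changed:
        return "local-ahead"
    if remote_changed:
        return "remote-ahead"
    return "synced"

print(sync_state("a1", "b1", {"local_hash": "a1", "remote_hash": "b2"}))
```

This two-flag split is also why `push --update-description` can refuse on `remote-ahead` or `diverged` without a network round trip: the decision needs only the stored baselines and the current hashes.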
Use Linear from the command line:
```
meta linear projects list --team MET
meta linear issues list --team MET --project "MetaStack CLI"
meta linear issues list --team MET --json
meta linear issues create --team MET
meta linear issues create --no-interactive --team MET --title "Add docs" --description "Cover command usage"
meta linear issues edit --issue MET-11
meta linear issues edit --no-interactive --issue MET-11 --state "In Progress"
meta linear issues refine MET-11
meta linear issues refine MET-11 --passes 2 --apply
meta dashboard linear --demo
meta dashboard team --team MET
```

Legacy aliases: `meta issues`, `meta projects`, `meta dashboard`
Notes:
- `meta linear issues list` opens an interactive issue browser unless you pass `--json`
- `meta linear issues list`, `meta dashboard linear`, and `meta dashboard team` share the same free-text search behavior when the issue list is focused: type to search by identifier, title, state, project, or description, with exact identifiers ranked ahead of broader matches
- the shared Linear dashboards keep their existing filters, and the search query narrows the visible issue set after those filters are applied
- All TUI preview and detail panes that display markdown-authored content (Linear issue previews, PR bodies, review reports, planning ticket details, improvement comparisons, and listen session descriptions) render through the shared markdown renderer at `src/tui/markdown.rs`, preserving headings, bullet/ordered lists, blockquotes, fenced code blocks, inline bold/italic/code, tables, and blank lines.
- `meta linear issues create` and `meta linear issues edit` open ratatui workflows when stdin/stdout are attached to a TTY
- `meta linear issues create --no-interactive ...` and `meta linear issues edit --no-interactive ...` emit structured JSON instead of text
- In the interactive create/edit forms, multiline descriptions advance on `Enter`, insert a newline on `Shift+Enter`, and support `Up`/`Down`, `PgUp`/`PgDn`, `Home`/`End`, plus mouse-wheel scrolling while the description pane is focused; the summary/review sidebar also scrolls with the mouse wheel when long descriptions overflow
- `meta linear issues refine` is non-interactive, uses the configured local agent, defaults to critique-only unless you pass `--apply`, and emits machine-readable results when `--json` is set
- `meta dashboard linear` is the preferred Linear dashboard path; bare `meta dashboard` remains a compatibility alias during migration
Required auth:
- `LINEAR_API_KEY`
- optional: `LINEAR_API_URL`
- optional: `LINEAR_TEAM`
Review open GitHub PRs with a holistic audit pipeline that gathers PR metadata, review state, changed files, diff scope, linked Linear ticket details, and repository context. Interactive TTY runs stay inside one guided dashboard flow:
- Direct review: `meta agents review <PR_NUMBER>` loads one PR into the dashboard, shows a review preview, and waits for explicit approval before the audit starts.
- Guided queue review: `meta agents review` (no PR number) discovers open PRs with the `metastack` label via `gh`, shows a searchable candidate queue, and waits for a human to approve starting each review session.
The interactive dashboard keeps candidates and live sessions in separate views so you can search, multi-select, and queue more PRs while earlier reviews are still running. Enter queues a normal review, Tab rotates focus between the candidate list, candidate preview, session list, and session detail panes, and R refreshes the candidate discovery set without leaving the dashboard. Once a review reaches Review Complete, switch to the Sessions view and press A to start the remediation agent PR workflow from the saved report.
Across review, retro, and follow-up ticket review panes, Ctrl+Y copies the focused search field
or detail pane through the shared TUI copy contract, with the same fallback export overlay when
the clipboard is unavailable.
```
# One-shot review
meta agents review 42 --root .
meta agents review 42 --root . --dry-run
meta agents review 42 --root . --agent claude --model opus

# Prerequisite check
meta agents review --check --root .

# Guided queue mode
meta agents review --root .
meta agents review --root . --once
meta agents review --root . --once --json
meta agents review --root . --render-once

# Non-interactive remediation dispatch
meta agents review --root . --fix-pr 42
meta agents review --root . --fix-pr 42 --json
meta agents review --root . --skip-pr 42
meta agents review --root . --skip-pr 42 --json
```

The review instructions are stored as source-controlled artifacts at `src/artifacts/REVIEW.md` and `src/artifacts/VIEW_LINEAR.md`, and loaded at compile time via `include_str!`. `--dry-run` and `--check` now print the resolved transport alongside provider, model, reasoning, route key, and config-source diagnostics so stdin-vs-argv behavior is visible before launch.
Each reviewed PR transitions through a per-PR state model:
- Selected - User picked the PR for review in the dashboard.
- Review In Progress - Review agent is running.
- Review Complete - Review finished; user must decide: create fix PR or skip.
- Fix Agent Pending - User chose "Create fix PR"; agent is queued.
- Fix Agent Running - Fix agent is applying changes in an isolated workspace.
- Fix Agent Complete - Remediation PR created and pushed.
- Skipped - User explicitly declined remediation.
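As a rough illustration (not the MetaStack source), the per-PR lifecycle above can be modeled as a small transition table. The snake_case state and event names here are assumptions; only the states themselves come from the list above (plus the `FixAgentFailed` terminal state described later for failed remediation runs):

```python
# Assumed event names; states mirror the per-PR state model above.
ALLOWED = {
    ("selected", "start_review"): "review_in_progress",
    ("review_in_progress", "review_done"): "review_complete",
    ("review_complete", "create_fix_pr"): "fix_agent_pending",
    ("review_complete", "skip"): "skipped",
    ("fix_agent_pending", "agent_started"): "fix_agent_running",
    ("fix_agent_running", "pr_pushed"): "fix_agent_complete",
    ("fix_agent_running", "agent_failed"): "fix_agent_failed",
}

def advance(state: str, event: str) -> str:
    """Return the next state, or raise on an illegal transition."""
    try:
        return ALLOWED[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} + {event}")
```

Because each reviewed PR carries its own state, several PRs can sit at different points of this table inside one dashboard session.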
When a review requires remediation, the interactive dashboard shows [a] Create fix PR and [n] Skip actions on the selected session. Press d to delete a stored session after confirmation when you no longer want it in the session view. Dispatching a fix agent does not exit the TUI; the dashboard remains active and shows live progress for the remediation run. Multiple reviewed PRs maintain independent state in the same session, and each new meta agents review run starts from a fresh session set instead of restoring prior runs into the dashboard.
For scripted and CI usage, --fix-pr N and --skip-pr N act on a previously reviewed PR without requiring the interactive TUI. Both produce JSON output with --json.
When the fix agent runs, it creates a workspace-safe follow-up branch from the original PR context, applies agent-authored fixes, commits and pushes them, opens a remediation PR that references the original reviewed PR, and posts a Linear comment. Failures surface back to the session state as FixAgentFailed with actionable error details.
Within a single run, re-entering the interactive dashboard restores the review and remediation state of PRs already processed in that run.
Analyze completed PRs for non-blocking follow-up backlog opportunities. Interactive TTY runs stay inside a guided retro dashboard:
- Direct retro: `meta agents retro <PR_NUMBER>` loads one PR into the dashboard and waits for explicit approval before the retro analysis starts.
- Guided queue retro: `meta agents retro` (no PR number) discovers both open and closed PRs with the `metastack` label via `gh`, shows a searchable candidate queue, and waits for a human to approve starting each retro session.
After a retro analysis finishes, selecting that session and pressing Enter opens a dedicated backlog-plan-style review screen. From there you can keep, skip, or merge suggested tickets before creating the curated batch in Linear.
```shell
meta agents retro 42 --root .
meta agents retro --root .
```

Interactive runs now show explicit loading states for auth, PR discovery, context assembly, agent review, and remediation so the current phase stays visible throughout the session.
Press F in the retro candidate view to open a lightweight filter panel overlay without leaving the dashboard. The panel lets you narrow the loaded candidate list by:
| Category | Semantics |
|---|---|
| State | open, closed, or both |
| Author | Match against the PR author login |
| Labels | Multi-select; candidates must contain all selected labels |
| Assignees | Multi-select; candidates must match any selected assignee, with an explicit (unassigned) option |
Filters combine conjunctively across categories: a candidate must satisfy every active category to remain visible. Applying or clearing filters updates the candidate list in place, keeps your current selection when it is still visible, and shows a [filtered] badge in the header when active.
Panel controls: Space toggle, Up/Down navigate, C clear all, F/Enter/Esc close.
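Under the semantics above, a minimal conjunctive matcher might look like this sketch. The candidate field names are assumptions for illustration, not MetaStack's actual data model; the empty string stands in for the explicit "(unassigned)" option:

```python
def matches(candidate, state=None, author=None, labels=(), assignees=()):
    # State: open, closed, or both (None = both).
    if state is not None and candidate["state"] != state:
        return False
    # Author: match against the PR author login.
    if author is not None and candidate["author"] != author:
        return False
    # Labels: candidate must contain ALL selected labels.
    if labels and not set(labels) <= set(candidate["labels"]):
        return False
    # Assignees: candidate must match ANY selected assignee,
    # with "" standing in for the explicit "(unassigned)" option.
    if assignees and not (set(candidate["assignees"]) & set(assignees)
                          or ("" in assignees and not candidate["assignees"])):
        return False
    return True
```

A candidate stays visible only when every active category returns true, which is exactly the conjunctive combination described above.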
Prerequisites:
- `gh` CLI installed and authenticated (`gh auth login`)
- Repository with a configured `.metastack/meta.json`
- For guided queue mode: PRs must carry the `metastack` label
Inspect open PRs for the current repository, describe an improvement request, and publish the result as a stacked PR targeting the source PR branch. Interactive TTY runs stay inside one TUI dashboard:
- The left panel lists open PRs discovered via `gh pr list`.
- The right panel lists persisted improve sessions loaded from `.metastack/agents/improve/sessions/state.json`.
- `Enter` on a PR opens a detail view with branch info and body preview.
- `Tab` toggles focus between the PR list and session list.
- `Enter` on a session opens a detail view with phase, instructions, and stacked PR link.
- `Backspace` returns from a detail view to the parent list.
- `Ctrl+Y` copies the focused list or detail pane with the shared terminal export fallback when direct clipboard access is unavailable.
Sessions persist across restarts. Each session records the source PR metadata, user instructions, execution phase, workspace path, improve branch, and stacked PR URL.
When an improve session executes, it:
- Clones the repository into an isolated sibling workspace under `<repo>-workspace/improve-<session-id>/`.
- Checks out the source PR branch and creates an `improve/<source-branch>` branch.
- Publishes a stacked PR targeting the source PR branch with a title and body linking back to the original PR.
```shell
meta agents improve --root .
meta agents improve --root . --render-once
meta agents improve --root . --render-once --events enter
meta agents improve --root . --render-once --events tab,enter
```

Prerequisites:
- `gh` CLI installed and authenticated (`gh auth login`)
- Repository with a GitHub remote
Run a one-off headless agent session for a single Linear issue. The execute command reuses the same bootstrap path as meta agents listen — workspace provisioning, backlog setup, workpad creation, and worker launch — but records the session origin as execute instead of listen. This means:
- The listen dashboard surfaces execute-origin sessions with an `execute-origin` label.
- The listen daemon does not auto-resume or auto-claim execute-origin sessions when their worker finishes. They remain blocked until an operator explicitly adopts them via `R` (resume) in the dashboard or `meta listen sessions resume`.
- Workspace safety guarantees remain identical: the source checkout is never used as the turn cwd.
Use meta agents execute when you want to kick off work on a specific ticket without leaving the continuous listen daemon running. The session will persist and be visible in the next meta agents listen dashboard.
Examples:
```shell
meta agents execute MET-45 --team MET --project "MetaStack CLI"
meta agents execute MET-45 --root . --max-turns 10
meta agents execute MET-45 --root . --json
```

Run an agent directly inside an existing workspace checkout without the Linear listener/bootstrap ceremony. Interactive TTY runs are now workspace-dashboard-first:
- Bare `meta agents build` opens the sibling workspace inventory under `<repo>-workspace/`, shows each available checkout, and lets you switch between workspaces before sending prompts.
- Selecting a workspace keeps its run history, live output, and current git status visible in the same session so you can move between multiple PR/ticket clones without restarting the command.
- The status pane now shows whether the selected workspace is clean or dirty, which files are currently changed, and what changed during the last run so engineers get direct confirmation that the agent is editing the intended checkout.
- Before each run starts, the dashboard fetches `origin`, rebases the selected workspace branch on top of its upstream branch, and surfaces that sync progress in the live output pane. If the rebase hits conflicts, the same configured build agent is prompted to resolve the conflicted files in place and leave the workspace ready for `git rebase --continue`.
- After a successful run, the dashboard stages the resulting workspace changes, creates a prompt-derived commit, and pushes the active branch back to GitHub with `--force-with-lease` so rebased workspace branches stay publishable without leaving the TUI flow.
Passing MET-45 or another workspace selector preselects that workspace in the dashboard, and
passing both a workspace and a prompt auto-starts the first run there. When the resolved provider
exposes a continuation handle, the dashboard keeps that resume state available for the next
compatible run in the same session. Use --no-interactive when you want a one-shot scripted run
with an explicit workspace and prompt.
Examples:
```shell
meta agents build
meta agents build MET-45 "fix the auth bug"
meta agents build --dir ../repo-workspace/MET-45 "tighten the failing tests"
meta agents build --dir ../repo-workspace/MET-45
meta agents build --dir ../repo-workspace/MET-45 --agent codex --model gpt-5.4
meta agents build --dir ../repo-workspace/MET-45 "run the focused QA pass" --no-interactive
```

Run the unattended agent daemon. The listener watches Todo issues, applies repo-scoped label and assignee filters, moves newly claimed work to In Progress, prepares a per-ticket standalone clone under a sibling `<repo>-workspace/` directory, bootstraps a `## Codex Workpad` comment on the Linear issue, downloads issue attachments into a local attachment-context manifest under `.metastack/agents/issue-context/<TICKET>/`, and launches a supervised listen worker inside that workspace.

The worker now follows an explicit phased loop: one execution turn tries to complete as much of the Linear ticket as possible, a review phase compares the current workspace against the Linear ticket acceptance criteria and validation requirements, continuation turns receive only the remaining-work delta, a final review decides whether the ticket is ready for deeper checks, a dedicated Verifying phase runs code review plus optional E2E and battle-test recipes, and an explicit Validating phase runs before any PR create, edit, or ready-promotion mutation.

The first turn keeps the prompt focused on the Linear ticket and core repo instructions, references large legacy overlay files instead of inlining them, and tells the agent to execute the ticket directly rather than expanding it into extra planning or backlog maintenance. Existing local backlog files are treated as lightweight tracking only unless the ticket explicitly asks for more; after each review phase, the listener updates both the active workpad comment and a managed progress checklist section in the local backlog `index.md`.

When the ticket branch is pushed, shared automation creates or updates that branch PR as a draft, keeps the `metastack` label attached, and promotes the same PR to ready for review during the existing review handoff without demoting an already-ready PR on continuation.
If no matching open branch PR exists during handoff, the worker leaves the PR state as none and continues safely without creating a new review PR at completion time.
Built-in codex and claude listen turns no longer depend on stdout EOF to make progress. Each turn runs under one shared process-group supervisor with install-scoped defaults from meta runtime config --listen-agent-turn-timeout <SECONDS> --listen-agent-graceful-shutdown <SECONDS> or [defaults.listen]. Unset values fall back to 1800s per turn and 5s of graceful shutdown before escalation. A timed-out subprocess is reported separately from a stalled turn: timeout reporting records the turn number, elapsed time, timeout limit, terminated PID, and whether the worker stopped at SIGTERM or escalated to SIGKILL, while stall reporting still refers only to repeated completed turns that made no meaningful progress.
Listen context pressure is resolved once per run and passed to every hidden worker launch with this precedence: meta agents listen --context-budget-tokens <TOKENS> / meta listen sessions resume --context-budget-tokens <TOKENS>, then repo .metastack/meta.json listen.context_budget_tokens, then install [defaults.listen].context_budget_tokens, then the built-in default 180000. The worker derives normal, elevated, high, and critical from cumulative known input tokens on completed turns only. elevated annotates continuation prompts only, high triggers one durable Context Checkpoint turn when the effective workpad body has no checkpoint yet, and critical keeps that one-time checkpoint behavior while capping only the remaining execution-turn budget to one. Checkpoint detection and later review-workpad rewrites use the same effective workpad body contract: pending_linear_sync.workpad_body first, then the active Linear ## Codex Workpad comment. The selected-session detail pane derives Context Pressure from the live session turn_history; the session table stays unchanged.
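One way to picture the pressure derivation above is a simple bucketing of cumulative known input tokens against the resolved budget. The 50%/75%/90% cut-offs in this sketch are assumptions for illustration only; the real thresholds are not documented here. Only the four level names and the 180000-token default come from the description above:

```python
DEFAULT_BUDGET = 180_000  # built-in default context budget

def pressure(completed_turn_input_tokens, budget=DEFAULT_BUDGET):
    # Derived from cumulative known input tokens on completed turns only.
    used = sum(completed_turn_input_tokens)
    ratio = used / budget
    if ratio >= 0.90:          # assumed threshold
        return "critical"
    if ratio >= 0.75:          # assumed threshold
        return "high"
    if ratio >= 0.50:          # assumed threshold
        return "elevated"
    return "normal"
```

Per the contract above, `elevated` only annotates continuation prompts, `high` triggers at most one durable Context Checkpoint turn, and `critical` additionally caps the remaining execution-turn budget to one.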
The verifying phase resolves a dedicated agents.listen.verification route through the same precedence and diagnostics helpers used by other agent-backed commands. Install-scoped [verification] settings control code-review verification, route-scoped E2E execution, battle-test sampling, and additive quality criteria. Before verification can report success, the worker also resolves the active branch PR, confirms the local workspace HEAD matches that PR's exact current headRefOid, and checks the GitHub Actions workflow named quality for that same SHA; missing PR metadata, local/remote SHA mismatches, stale runs from older SHAs, pending runs, and failed runs all fail verification closed with remediation. E2E recipe steps run with bounded stdout/stderr capture plus a per-step timeout so a hung verification command cannot stall the worker indefinitely. Recipe steps accept an optional timeout_seconds; omitted values default to 300, and timeout failures are reported as timed-out verification steps instead of as generic stalls. Each verification pass persists JSON and markdown reports in the listen store, then mirrors the latest compact summary into inspect output, dashboard detail, PR rendering, and workpad updates.
Listen and merge now share the same validation-profile resolver. Validation selection follows CLI override > repo-scoped .metastack/meta.json validation.commands > built-in repo heuristics, and the resolved profile can include an optional repo-scoped label for diagnostics. meta agents listen --check --root . prints the active validation profile plus the resolved verification route, provider, model, reasoning, and effective verification settings so operators can confirm the deterministic gate before starting the daemon.
The listener now also shares one install-scoped Linear retry contract and failure classifier across
preflight, daemon, and worker paths. Transient failures during viewer lookup, issue listing, or
session reconciliation no longer terminate the daemon. Instead, the listener keeps the last known
queue and session snapshot visible, records degraded Linear state in the install-scoped listen
store, and schedules the next retry using the shared backoff policy from meta runtime config.
Textual --once output, --once --json, the dashboard, and meta listen sessions inspect
surface the degraded failure kind, message, and retry timing.
The shared classifier distinguishes transient, authentication, permission, configuration, and other Linear failures. Authentication, permission, and configuration failures still surface as degraded operator-actionable state rather than being silently treated as retryable outages.
Repo-scoped listen validation defaults live in .metastack/meta.json:
```json
{
  "validation": {
    "commands": ["cargo test --test listen -- --test-threads=1"],
    "repair_attempts": 2,
    "profile": "listen-smoke"
  }
}
```

`repair_attempts` seeds two bounded repair loops: one budget for verification retries before PR mutation, and a separate budget for pre-PR local validation failures plus post-publication GitHub CI failures. When verification fails, the draft PR stays in place and the next execution turn receives concrete remediation. When local validation fails, the worker captures stdout/stderr excerpts, rewrites the continuation delta with concise repair context, decrements the validation/CI repair budget, and blocks PR mutation when that budget is exhausted.

After draft publication and again before ready handoff, the worker polls the active branch PR until GitHub CI passes, fails, reports no configured checks, or times out. Pending checks surface explicit waiting progress in session summaries, inspect output, and dashboard detail. Failed checks keep the same PR in place instead of creating a duplicate, rerun local validation before the next PR mutation, and preserve the `metastack` label. Timeouts honor the install-scoped `ci_timeout_behavior`: `block` stops review handoff, while `warn_and_proceed` records a warning and continues.
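The two independent budgets seeded by `repair_attempts` can be sketched like this. The class and method names are assumptions for illustration, not MetaStack internals:

```python
class RepairBudgets:
    """Two bounded repair loops, each seeded from repair_attempts."""
    def __init__(self, repair_attempts: int):
        self.verification = repair_attempts   # verification retries before PR mutation
        self.validation_ci = repair_attempts  # local validation + post-publish CI repairs

    def spend(self, budget: str) -> bool:
        """Consume one attempt from the named budget; False once exhausted."""
        remaining = getattr(self, budget)
        if remaining == 0:
            return False
        setattr(self, budget, remaining - 1)
        return True

    def pr_mutation_blocked(self) -> bool:
        # PR mutation is blocked once the validation/CI budget is exhausted.
        return self.validation_ci == 0
```

The point of the split is that verification retries and validation/CI repairs drain independently: exhausting one never silently consumes the other.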
Install-scoped verification defaults live in meta runtime config:
```toml
[verification]
code_review = true
e2e_verification = true
battle_test_count = 0
quality_criteria = []
```

Route-scoped verification assets live under the repo project directory, for example `.intuition/verification/recipes/agents.listen.yaml` and `.intuition/verification/inputs/agents.listen/`.
With repo setup assignment_scope = "viewer_only", listen watches only Todo issues assigned to the authenticated viewer. Use assignment_scope = "viewer_or_unassigned" to also watch unassigned Todo issues, or --all-assignees to disable assignee filtering for just the active run.
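The three assignment scopes plus the run-scoped `--all-assignees` opt-out reduce to a small eligibility predicate, sketched here with assumed field names (`None` standing in for an unassigned issue):

```python
def in_scope(issue_assignee_id, viewer_id, scope="viewer_only",
             all_assignees=False):
    """Decide whether an issue's assignee passes the listen assignee filter."""
    if all_assignees or scope == "any":
        return True  # --all-assignees disables assignee filtering for the run
    if scope == "viewer_only":
        return issue_assignee_id == viewer_id
    if scope == "viewer_or_unassigned":
        return issue_assignee_id in (viewer_id, None)
    raise ValueError(f"unknown assignment scope: {scope}")
```

Note that `--all-assignees` only changes the active run; the repo-scoped `assignment_scope` setting stays as configured.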
Legacy alias: meta listen
meta agents listen keeps the same repository identity as the source checkout, but the worker prompt is anchored to the provided workspace checkout as the only local write scope. Implementation, validation, and local backlog updates must stay inside that workspace for the active repository unless the issue explicitly asks for a narrower subproject.
The live terminal dashboard refreshes locally every second so session-state changes stay visible, while the configured listen poll interval continues to control how often Linear is queried. Steady-state listen runs stay entirely in the terminal TUI as an interactive session browser, --render-once emits a terminal snapshot, and --once --json emits one machine-readable poll-cycle payload without going through the ratatui snapshot path.
When a worker has already created local workspace progress, later Linear failures in issue refresh,
workpad sync, PR attachment, or review-state transitions are persisted as pending_linear_sync
state instead of discarding the session. The next meta agents listen, meta agents execute, or
meta listen sessions resume attempt replays that pending sync immediately after Linear recovers
and clears the deferred state on success.
When built-in codex and claude workers emit structured usage telemetry, meta agents listen accumulates session-level input and output tokens across repeated turns. Runtime summaries, detail panes, and default textual inspection output render session-level in, out, and total, while the session table keeps a compact total-only token column. The worker also appends one per-turn token summary line to the per-issue log and persists additive turn-history snapshots in the mirrored detail artifact so meta listen sessions inspect --turns can render the exact turn order, prompt mode (full_prompt or continuation), and per-turn token counts without reparsing raw provider JSON.

The listener also persists canonical provider, model, reasoning, token metadata, and optional blocked-taxonomy metadata into install-scoped session state plus mirrored detail artifacts so mixed-provider histories total correctly across Codex and Claude runs and blocked failures render consistently across the CLI and dashboard.

On startup, the listener performs a best-effort historical repair pass from canonical detail data, legacy state, and worker logs; when exact counts still cannot be recovered, the dashboard and textual summaries continue to show n/a. Persisted worker logs are a small compatibility surface for that repair pass: the current branded `--- intu listen turn ...` / `--- intu listen preflight failed @ ...` headers and the legacy `--- meta ...` equivalents remain readable, and preflight-failure blocks act only as repair boundaries rather than as recoverable canonical metadata on their own.
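The accumulation contract can be sketched as follows, with an assumed per-turn record shape; `None` stands in for counts the historical repair pass could not recover, which render as `n/a` rather than a misleading zero:

```python
def session_totals(turn_history):
    """Sum per-turn token counts into session-level in/out/total."""
    if any(t.get("in") is None or t.get("out") is None for t in turn_history):
        return None  # at least one turn unrecoverable -> render n/a
    tokens_in = sum(t["in"] for t in turn_history)
    tokens_out = sum(t["out"] for t in turn_history)
    return {"in": tokens_in, "out": tokens_out, "total": tokens_in + tokens_out}
```

Because the per-turn records carry canonical provider metadata, turns from Codex and Claude workers can be summed into one honest session total.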
The interactive dashboard has two primary panes: Agent Sessions (active, blocked, and completed listener
workers) and In Progress Issues - All Users (all Linear issues currently in In Progress). The In Progress Issues
pane displays each issue's short title, assignee, and whether an open GitHub PR is attached.
GitHub enrichment considers only open PRs; closed or merged PRs are not shown as active
attachments. Issues with no assignee or no PR attachment are handled gracefully.
Use Tab to switch focus between the Agent Sessions and In Progress Issues panes. When focused on
Agent Sessions, Left/Right (or h/l in vim mode) cycle between Active, Blocked, and
Completed session views. Press Enter on a selected item to open a detail/preview pane: session
detail shows milestones, references, prompt context, PR state, log excerpts, and a Block Detail
section with category, reason, retryable status, and suggested action when the selected session is
blocked; In Progress Issue detail shows the full issue description, assignee, PR link, and Linear
URL. Esc or Backspace closes detail mode, and PgUp/PgDn scrolls the focused detail pane.
Both panes can be independently hidden via CLI flags or config:
- `--hide-active-issues` hides the In Progress Issues pane for this run
- `--hide-preview` hides the preview/detail pane for this run
- Set `listen.dashboard_active_issues` or `listen.dashboard_preview` to `false` in `.metastack/meta.json` to change the default
When vim_mode is enabled, the dashboard also accepts h/l as aliases for left/right and
j/k as aliases for up/down. The session table renders a compact PR badge (none,
draft #N, ready #N) plus category-aware blocked STAGE labels such as Setup Err, Turn Err,
Gate Err, or Infra Err; legacy sessions without blocked metadata continue to render plain
Blocked. Press P to pause a running session, R to resume paused or retry blocked.
The resolved execution agent is shown in both the interactive dashboard header and the textual
--once runtime summary so operators can confirm which configured worker route the listener will
launch. --once --json continues to return the machine-readable poll-cycle payload without adding
presentation-only fields to the JSON shape.
Examples:
```shell
meta agents listen --demo --render-once
meta agents listen --check --root .
meta agents listen --team MET --once
meta agents listen --check --root . --all-assignees
meta agents listen --team MET --project "MetaStack CLI" --once
meta agents listen --team MET --project "MetaStack API"
meta agents listen --team MET --project "MetaStack CLI" --once --all-assignees
meta agents listen --team MET --project "MetaStack CLI"
meta runtime setup --listen-label agent --assignee-scope viewer-only --refresh-policy reuse-and-refresh
```

Listen prerequisites:
- Codex: `~/.codex/config.toml` must include:

  ```toml
  approval_policy = "never"
  sandbox_mode = "danger-full-access"
  ```

- Codex: remove `[mcp_servers.linear]` from the Codex config or disable it; the preflight warns when Linear MCP is detected.
- Claude: `claude` must be on `PATH`, and `ANTHROPIC_API_KEY` should be unset for unattended subscription-backed runs.
- `meta agents listen --check --root .` runs the same startup preflight, including Linear reachability/auth validation, without starting the daemon.
- `--check` also prints the effective assignee filter, for example `only Kames`, `Kames + unassigned`, or `all assignees`.
Outputs:
- `<parent>/<repo>-workspace/<TICKET>/`
- repo-scoped listen refresh policy in `.metastack/meta.json`
- `<parent>/<repo>-workspace/<TICKET>/.metastack/agents/briefs/<TICKET>.md`
- `<parent>/<repo>-workspace/<TICKET>/.metastack/agents/issue-context/<TICKET>/README.md`
- install-scoped MetaListen state under the global MetaStack data root, keyed by the canonical source project `.metastack` root
- install-scoped MetaListen logs under the same project store
- a live terminal dashboard in steady-state mode, or a render-once terminal snapshot when requested
When $METASTACK_CONFIG points to a custom config file, the listener store lives under that
config file's parent data/ directory. Otherwise the default install-scoped root is derived from
the existing config path rules, for example ~/.config/metastack/data/. Each project is stored in
listen/projects/<PROJECT_KEY>/ with project.json, session.json, an active-listener lock,
session-details/<ISSUE>.json, and per-issue logs. session.json stays compact and list-oriented;
the per-session detail files hold session milestones, workspace/backlog/workpad references,
prompt-context references, PR publication state, and short log excerpts for the drill-down pane.
Writes for project.json, session.json, session-details/*.json, verification JSON, and
active-listener.lock.json now use same-directory temp-file persistence so failed writes leave the
previous readable primary file intact. Startup-critical loads for project.json, session.json,
and active-listener.lock.json use deterministic sibling recovery (.bak before .tmp),
rewrite the recovered snapshot back into the primary path, and best-effort remove the consumed
recovery artifact afterward; optional detail artifacts remain best-effort companion state. Active
lock cleanup compares file identity before deleting on Unix hosts and falls back to best-effort
path deletion on non-Unix hosts. Unreadable or stale orphaned active locks are removed with
warnings, while a valid recovered live PID still blocks duplicate listener runs with the existing
failure behavior.
Missing or malformed detail files are treated as temporarily unavailable detail, not as a fatal
dashboard or reload error; the next successful listener refresh rewrites them. Historical repair
continues to read both branded intu and legacy meta turn/preflight listen headers from the
stored per-issue logs, but the compatibility promise stops there: the listener does not rewrite old
logs in place or attempt fuzzy recovery from unrelated text.
Stored-session management commands:
```shell
meta listen sessions list
meta listen sessions inspect
meta listen sessions inspect --turns
meta listen sessions clear
meta listen sessions resume --project-key <PROJECT_KEY> --once
```

`meta listen sessions ...` manages the install-scoped listener store only. It does not inventory or delete the sibling workspace clones themselves.
meta listen sessions inspect now expands the latest stored session with structured detail-artifact
fields when available, including PR URL/state, workspace/backlog/workpad references, recent
milestones, prompt-context references, compact log excerpts, optional blocked category/retryability
metadata, and a fallback Detail PR Ref: #N line when the detail artifact only carries a PR
number. Pass --turns to append the persisted per-turn token breakdown (turn N tokens: in ... | out ... | prompt_mode=...) from the detail artifact; without that flag the inspect output stays
compact.
The interactive selected-session detail pane follows the same fallback contract and shows PR Ref: #N when the detail artifact has a PR number but no published PR URL yet.
Within the live dashboard, P pauses the selected running worker, and R either resumes a paused
worker or retries a blocked session from its existing workspace state.
Ctrl+Y copies the focused listen pane or detail view and opens the shared terminal export overlay
when direct clipboard writes are unavailable.
Manage the sibling workspace clones created by meta agents listen, meta agents improve, and meta agents review. These commands always resolve the workspace root from the repository root with the fixed sibling convention:
<parent>/<repo>-workspace/
That root is intentionally not configurable.
When a listener session completes (the Linear ticket moves to a non-active state such as Done or Cancelled), the listener worker attempts to auto-clean the corresponding workspace clone immediately. Auto-clean succeeds only when the workspace has no uncommitted changes, no unpushed commits, and HEAD is not detached. When any safety check fails, the workspace is left in place and a manual-review-needed skip is logged. This ensures no local work is ever lost automatically.
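The three safety checks reduce to one conservative predicate. In this sketch the git probes show one possible way to detect each signal (the exact commands are assumptions, not MetaStack's implementation); the decision itself mirrors the rule above:

```python
import subprocess

def _git(workspace, *args):
    return subprocess.run(["git", "-C", workspace, *args],
                          capture_output=True, text=True)

def workspace_signals(workspace):
    """Collect the three safety signals from git (one possible probe set)."""
    dirty = bool(_git(workspace, "status", "--porcelain").stdout.strip())
    detached = _git(workspace, "symbolic-ref", "-q", "HEAD").returncode != 0
    ahead = _git(workspace, "rev-list", "--count", "@{upstream}..HEAD")
    unpushed = ahead.returncode != 0 or ahead.stdout.strip() != "0"
    return dirty, detached, unpushed

def safe_to_remove(dirty, detached, unpushed):
    # Any failed check keeps the clone in place for manual review.
    return not (dirty or detached or unpushed)
```

Note that a missing upstream counts as "unpushed" here, which matches the conservative bias: when in doubt, keep the workspace.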
The same ticket-scoped listen artifacts (session entry, detail file, log file) that manual meta workspace clean removes are also removed during auto-clean.
meta workspace prune reconciles previously missed merged workspaces across all managed workspace families:
- Listener clones (`<TICKET>/`): removed when the Linear ticket is Done or Cancelled and the workspace is safe.
- Improve workspaces (`improve-<session-id>/`): removed when the associated PR is merged or closed and the workspace is safe.
- Review remediation workspaces (`review-runs/pr-<number>/`): removed when the associated PR is merged or closed and the workspace is safe.
Workspaces with uncommitted changes, unpushed commits, or detached HEAD are always kept regardless of PR or ticket state.
Examples:
```shell
meta workspace list --root .
meta workspace clean ENG-10175 --root .
meta workspace clean --target-only --root .
meta workspace clean --target-only ENG-10175 --root .
meta workspace prune --dry-run --root .
meta workspace prune --root .
```

Behavior:
- `meta workspace list` prints one row per ticket clone with the ticket directory name, branch, disk usage, last modified timestamp, local git safety state, Linear state, and optional GitHub PR state.
- Done or Cancelled tickets are marked as safe removal candidates in the list output.
- GitHub PR enrichment is optional. When `gh` auth is unavailable, `list` and `prune` still succeed and mark PR data as unavailable while continuing from Linear completion state alone.
- `meta workspace clean <TICKET>` deletes one clone after confirmation unless `--force` is passed, and it always reports dirty or ahead safety signals before removal.
- `meta workspace clean --target-only` removes `target/` directories across all listener clones by default, or narrows to one ticket when a ticket identifier is also supplied.
- `meta workspace prune --dry-run` previews every clone, whether it would be removed or kept, why, and the estimated reclaimed space. Includes listener ticket clones, improve workspaces, and review remediation workspaces.
- `meta workspace prune` removes clones whose Linear tickets are Done or Cancelled (for listener clones) or whose associated PRs are merged or closed (for improve and review workspaces), keeps clones with open PRs when PR data is available, skips clones with unpushed commits, and prints a final `Removed N clones, freed X GB. Kept M clones.` summary.
- Clone deletion also removes only the matching ticket-scoped MetaListen session entry and per-ticket log artifact from the install-scoped project store, leaving unrelated sessions for the same repository intact.
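Putting the family rules and the safety override together, the prune decision per workspace can be sketched as a pure function (names assumed for illustration):

```python
def prune_decision(family, safe, linear_state=None, pr_state=None):
    """Decide keep/remove for one workspace clone during prune."""
    if not safe:
        return "keep"  # uncommitted, unpushed, or detached HEAD always wins
    if family == "listener":
        return "remove" if linear_state in ("Done", "Cancelled") else "keep"
    if family in ("improve", "review"):
        return "remove" if pr_state in ("merged", "closed") else "keep"
    return "keep"  # unknown families are never deleted
```

The safety check is evaluated first so ticket or PR completion can never override local, unpublished work.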
For built-in codex and claude listen workers, the install-scoped session.json state now keeps
the latest provider-native manual resume target separately from the Linear issue identity. The
dashboard SESSION column renders only the compact provider-native handle, while
meta listen sessions list and meta listen sessions inspect surface the latest provider plus the
full resume ID so operators can copy the correct codex or claude resume target directly.
Capture is latest-only and silent best effort: new listen turns overwrite the stored provider/ID
when capture succeeds, and leave those fields explicitly unavailable when it does not. The same
{ provider, id } record is mirrored into the per-session detail artifact so dashboard detail and
inspect render the same full handle, and built-in worker restarts reuse that stored
provider-native handle instead of falling back to a legacy session_id. Codex live token
hydration follows that same contract by resolving token files from the stored provider-native
handle or the session log's thread.started record rather than from legacy continuation
bookkeeping. Older stored records are not backfilled.
The same persisted session/detail artifacts also carry the latest timeout snapshot when a turn
times out, and inspect / dashboard detail keep that timeout summary distinct from stalled-turn
summaries.
Reference:
Linear commands also read repo-scoped defaults from .metastack/meta.json, plus optional project-specific Linear auth stored in install-scoped CLI config for the current repo root. Repo defaults should store the canonical Linear project ID; meta setup --project <NAME> resolves names to IDs before saving, while older name-based values are still resolved at read time for compatibility. When repo values are absent, MetaStack falls back to install-scoped onboarding defaults for the default project, listen label, listen assignment scope, listen refresh policy, listen poll interval, listen context budget, interactive plan follow-up question limit, interactive technical follow-up question limit, technical refinement round limit, and plan/technical issue labels.

meta listen also reads the optional listen.required_labels filter list, assignee filter, instructions file, default poll interval, and listen.context_budget_tokens from .metastack/meta.json; legacy listen.required_label values still load for compatibility, but new saves persist the list form and accept comma-separated labels in meta runtime setup. An issue is eligible when any configured listen label matches one of its Linear labels case-insensitively. Canonical assignee-scope values are any, viewer_only, and viewer_or_unassigned, while the legacy value viewer still loads as viewer_or_unassigned for compatibility. --all-assignees provides a run-scoped opt-out without changing repo config.

Interactive meta plan reads the optional plan.interactive_follow_up_questions override there, while meta backlog tech reads technical.interactive_follow_up_questions and technical.refinement_rounds. meta plan / meta backlog split resolve the repo-scoped planning label defaults to real Linear label IDs before issue creation, falling back to plan when unset. meta backlog tech resolves the technical label the same way, falling back to technical when unset.
Backlog ticket creation also merges optional global and repo `[backlog]` defaults under the precedence contract CLI override > repo override > global override > built-in behavior; zero-prompt runs additionally consult remembered project/team selections and `velocity_defaults` before the repo/global fallbacks. The optional `linear.ticket_context.discussion_prompt_chars` and `linear.ticket_context.discussion_persisted_chars` settings control the comment-character budgets used for agent-facing and persisted `context/ticket-discussion.md` output.

During `meta setup` saves and onboarding saves, MetaStack checks that the effective listen, plan, technical, and required listen labels exist on the selected team and creates any missing team labels so later issue creation stays deterministic. When `meta linear issues list` returns no rows, it prints the applied filters so hidden defaults remain visible.
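Pulling the dotted keys above together, a repo-scoped `.metastack/meta.json` might look roughly like this sketch. Only the dotted key names come from this document; the nesting, value types, and example values are assumptions, and the real schema may differ:

```json
{
  "listen": {
    "required_labels": ["listen", "automation"],
    "context_budget_tokens": 120000
  },
  "plan": {
    "interactive_follow_up_questions": 3
  },
  "technical": {
    "interactive_follow_up_questions": 3,
    "refinement_rounds": 2
  },
  "linear": {
    "ticket_context": {
      "discussion_prompt_chars": 8000,
      "discussion_persisted_chars": 4000
    }
  }
}
```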
Agent-backed commands use stable route keys so different workflows can resolve different defaults from the same install-scoped config. `meta backlog spec`, `meta backlog plan`, `meta backlog improve`, `meta backlog split`, `meta backlog tech`, `meta context scan`, `meta context reload`, `meta linear issues refine`, `meta agents build`, `meta agents workflows run`, `meta runtime cron run`, `meta agents listen`, and `meta merge run` all resolve provider/model/reasoning in this order:
- explicit CLI overrides such as `--agent`, `--provider`, `--model`, and `--reasoning`
- command route override
- command family override
- repo default from `.metastack/meta.json` when present
- global default
Workflow playbooks can still declare a built-in provider, but that value is now only used as the final fallback when the explicit, route, repo, and global config layers do not select one.
The built-in provider adapters are the single source of truth for metadata and launch behavior. They run `codex exec` and `claude -p`, pass `--model=<value>` automatically when a model is configured, validate reasoning against the selected provider/model, and expose resolution diagnostics before launch. Built-in `codex` and `claude` now default to stdin prompt delivery, so large review-family payloads stay off argv unless an explicit transport override selects `arg`. Before spawning a built-in provider, the CLI now checks the installed shell help surface for the emitted flags and fails fast with the resolved provider/model/reasoning plus transport and the exact attempted command if the local binary has drifted. Codex reasoning is passed as `-c reasoning.effort="<value>"`; Claude reasoning is passed as `--effort=<value>`.
For capture-oriented non-interactive runs such as `meta backlog plan`, the runtime requests machine-readable built-in output, unwraps the final assistant text before returning it to the caller, captures provider-native session IDs, and can resume the next phase inside the same command. If a resumed built-in launch fails with a narrow invalid-resume signal, the runtime clears that handle and retries the phase once as a fresh launch.
Sandbox and permission handling depends on the command path:
- `meta agents listen` uses unrestricted execution for built-in providers so unattended workers can run validation, git/GitHub flows, and Linear updates. Codex uses `--dangerously-bypass-approvals-and-sandbox`; Claude uses `--permission-mode=bypassPermissions`. `meta agents listen` also enables machine-readable provider output for built-in workers so the listener can capture the latest provider-native manual resume ID. Codex listen runs use `codex exec --json`, and Claude listen runs use `claude -p --verbose --output-format=stream-json`.
- Built-in listen worker restarts and `meta listen sessions resume` only reuse a stored manual resume target when provider-native metadata exists for the active built-in provider; operator-facing detail falls back to an explicit `unavailable`, not to legacy continuation bookkeeping.
- `meta context scan`, `meta backlog spec`, `meta backlog plan`, `meta backlog improve`, `meta backlog split`, `meta linear issues refine`, workflow runs, merge flows, and cron prompts keep the built-in Codex adapter on `--sandbox workspace-write --ask-for-approval never`.
Listen startup now runs a provider preflight before polling Linear, and worker pickup reruns it inside the workspace before the first agent turn. Codex checks require a readable `~/.codex/config.toml` with `approval_policy = "never"` and `sandbox_mode = "danger-full-access"`, and warn when `[mcp_servers.linear]` is configured. Claude checks require `claude` on PATH and fail fast when `ANTHROPIC_API_KEY` is set. Both providers also validate that the resolved built-in launch command exposes the required unrestricted mode for unattended listen runs.
This is intentionally stricter than Codex `--full-auto`: in codex-cli 0.115.0, `codex exec --help` documents `--full-auto` as `--sandbox workspace-write`, which is still too restrictive for unattended listen workers that need network, git, GitHub, and Linear mutations.
Agent launches receive the inputs listed below. For `meta plan`, `meta backlog split`, `meta backlog tech`, `meta issues refine`, `meta scan`, and `meta listen`, the rendered agent prompt also includes a shared repo-target contract derived from the resolved command root:
- the built-in workflow contract shipped in `src/artifacts/injected-agent-workflow-contract.md`
- the resolved `RepoTarget` scope block, including repo identity and root path
- optional repo overlays from root `AGENTS.md` and legacy `WORKFLOW.md`
- optional repo-scoped instructions configured in `.metastack/meta.json`
- for `meta listen`, an additional unattended workspace/workpad layer on top of that shared contract
- a combined payload via the configured transport (`arg` or `stdin`)
- `METASTACK_AGENT_NAME`
- `METASTACK_AGENT_PROMPT`
- `METASTACK_AGENT_INSTRUCTIONS`
- `METASTACK_AGENT_MODEL`
- `METASTACK_AGENT_REASONING`
- `METASTACK_AGENT_ROUTE_KEY`
- `METASTACK_AGENT_FAMILY_KEY`
- `METASTACK_AGENT_PROVIDER_SOURCE`
- `METASTACK_AGENT_MODEL_SOURCE`
- `METASTACK_AGENT_REASONING_SOURCE`
- `METASTACK_LINEAR_ATTACHMENT_CONTEXT_PATH` when the issue has downloaded attachment context
`meta agents workflows run --dry-run` now prints the resolved provider/model/reasoning plus their resolution sources. `meta context scan` also writes the same diagnostics into the scan agent log so misrouting can be proved from the persisted runtime evidence.
If you need to override the built-in launch command, you can still customize the persisted agent command in the config file:
```toml
[agents]
default_agent = "codex"
default_model = "gpt-5.3-codex"

[agents.commands.codex]
command = "codex"
args = ["exec", "{{model_arg}}"]
transport = "arg"
```

Run the canonical root validation flow with:
```shell
make quality
```

`make quality` is the local maintainer and pull-request gate. It runs:

- `cargo fmt --check`
- `cargo clippy --all-targets --all-features -- -D warnings`
- `cargo test`
- `cargo test --test release_artifacts`
The interactive planning integration proof in `tests/plan.rs` shells out to `expect`, so local `make quality` runs also require that binary on PATH in addition to Rust.
The focused `release_artifacts` proof keeps the GitHub Release packaging contract explicit in the root gate by verifying the release-script archive names, `SHA256SUMS`, and extracted `meta --version` output.
Repository-level reviewer evidence is ticket-scoped and belongs under `artifacts/validation/<TICKET>.md`. Packet-local execution proof stays in `.metastack/backlog/<ISSUE>/validation.md` alongside the backlog packet, and packet-local artifact indexes stay in `.metastack/backlog/<ISSUE>/artifacts/README.md`. Repo-root `validation.md` is a stable guide, not a mutable per-ticket ledger.
Run the full Rust test suite from the repository root with:
```shell
cargo test
```

The integration suite is split by command domain, so local iteration can stay focused:

- `cargo test --test config`
- `cargo test --test scan`
- `cargo test --test plan`
- `cargo test --test refine`
- `cargo test --test sync`
- `cargo test --test linear`
- `cargo test --test listen`
- `cargo test --test cron`
Maintainers can package the supported GitHub Release assets with:
```shell
make release-artifacts
```

Use `make release-artifacts` when you need the full versioned archives under `target/release-artifacts/<version>/`.
Reference: