feat(cli): add --output json flag to run command #31

Jdubin1417 wants to merge 3 commits into jamjet-labs:main from
Conversation
Adds a new `--output` option to the `jamjet run` command that accepts either `text` (default, human-readable) or `json` (machine-readable). When `--output json` is specified:

- Suppresses all Rich console output
- Always follows execution to completion
- Outputs a JSON object with:
  - `execution_id`: the execution identifier
  - `final_state`: the complete execution state at completion
  - `steps_executed`: count of nodes that started
  - `total_duration_us`: wall-clock duration in microseconds
  - `events`: full list of per-step events

This enables easy integration into CI pipelines and scripting.
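For CI scripting, a consumer can parse the JSON summary directly. A minimal sketch, assuming only the keys documented above; the payload values are made up for illustration:

```python
import json

# Hypothetical output captured from `jamjet run my-wf --output json`
raw = (
    '{"execution_id": "exec_abc123", "final_state": {"status": "completed"}, '
    '"steps_executed": 5, "total_duration_us": 1234567, "events": []}'
)

result = json.loads(raw)

# Fail the CI job unless the run completed
assert result["final_state"]["status"] == "completed"

print(result["execution_id"])                   # exec_abc123
print(result["total_duration_us"] / 1_000_000)  # 1.234567 (seconds)
```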
No actionable comments were generated in the recent review. 🎉
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Runtime as Executor
    participant EventsAPI
    participant Stdout

    User->>CLI: run --output json ...
    CLI->>CLI: validate options (output, timeout)
    CLI->>Runtime: start execution -> exec_id
    CLI->>Runtime: poll status until terminal OR timeout
    Runtime-->>CLI: final_state
    CLI->>EventsAPI: get_events(exec_id)
    EventsAPI-->>CLI: events (or [])
    CLI->>CLI: count node_started, compute total_duration_us
    CLI->>Stdout: print compact JSON result
    Stdout-->>User: JSON summary
```
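The flow in the diagram can be sketched end to end with plain asyncio. Everything here is a stand-in: `FakeClient`, the terminal-status set, and the zero-second sleep are assumptions for the demo, not the real jamjet client (which sleeps 1s between polls):

```python
import asyncio
import json
import time

class FakeClient:
    """Hypothetical in-memory stand-in for the runtime client."""
    def __init__(self):
        self._polls = 0

    async def start_execution(self, workflow):
        return {"execution_id": "exec_demo"}

    async def get_execution(self, exec_id):
        # Report "running" on the first poll, then a terminal state
        self._polls += 1
        status = "completed" if self._polls >= 2 else "running"
        return {"status": status, "output": {"ok": True}}

    async def get_events(self, exec_id):
        return {"events": [{"type": "node_started"}, {"type": "node_finished"}]}

async def run_json(client, workflow):
    start = time.monotonic_ns()
    exec_id = (await client.start_execution(workflow))["execution_id"]
    while True:
        state = await client.get_execution(exec_id)
        if state.get("status") in {"completed", "failed", "cancelled"}:
            break
        await asyncio.sleep(0)  # 1s in the real CLI; 0 here for the demo
    try:
        events = (await client.get_events(exec_id)).get("events", [])
    except Exception:
        events = []
    return {
        "execution_id": exec_id,
        "final_state": state,
        "steps_executed": sum(1 for e in events if e.get("type") == "node_started"),
        "total_duration_us": (time.monotonic_ns() - start) // 1000,
        "events": events,
    }

result = asyncio.run(run_json(FakeClient(), "demo-wf"))
print(json.dumps(result, separators=(",", ":")))
```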
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
sunilp left a comment

Code Review
Thanks for the contribution @Jdubin1417! The implementation is well-structured, focused, and correctly implements the core feature. A few things to address before merging:
Strengths
- Clean, focused 47-line change with no scope creep
- Correct Rich bypass using `print()` instead of `console.print()` for JSON output
- Good timing approach with `time.monotonic_ns()` (monotonic, high-res)
- Consistent `if not json_output` gating across all Rich output points
Critical
No timeout on the polling loop. The while True loop with 1-second sleeps will spin forever if the execution never reaches a terminal state (runtime crash, hung execution, etc.). For text mode this is tolerable (user can Ctrl+C), but for JSON/CI mode -- the primary use case -- an infinite hang with no output is a serious problem. CI jobs will time out with no JSON on stdout.
Recommendation: Add a `--timeout` option (default 300s) and break with an error state:

```python
max_wait = timeout  # new CLI option
elapsed = 0
while True:
    await asyncio.sleep(1)
    elapsed += 1
    state = await c.get_execution(exec_id)
    status = state.get("status", "unknown")
    if status in terminal:
        break
    if elapsed >= max_wait:
        state = {"status": "timeout", "detail": f"No terminal state after {max_wait}s"}
        break
```

Important
1. No error handling around `get_events()`. If this HTTP call fails, the command crashes with an unhandled traceback on stdout. In JSON mode, callers expect parseable JSON. Wrap in try/except:

```python
try:
    events_data = await c.get_events(exec_id)
    events = events_data.get("events", [])
except Exception:
    events = []
```

2. Validation error goes to stdout with Rich markup. The `console.print(f"[red]Error:[/red] ...")` writes Rich-formatted text to stdout. A CI caller doing `jamjet run --output jso ... | jq .` gets a jq parse error. Consider using Typer enum validation instead:
```python
class OutputFormat(str, enum.Enum):
    text = "text"
    json = "json"

output: OutputFormat = typer.Option(OutputFormat.text, "--output", "-o", ...)
```

This gives automatic `--help` docs, consistent error messages (to stderr), and type safety.
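Independent of Typer, the enum alone already rejects bad values, which is the behavior the CLI error would surface; a quick sketch:

```python
import enum

class OutputFormat(str, enum.Enum):
    text = "text"
    json = "json"

# Valid values map to members
assert OutputFormat("json") is OutputFormat.json

# Invalid values raise ValueError (what the CLI layer reports as a usage error)
try:
    OutputFormat("xml")
except ValueError:
    print("rejected: xml")
```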
3. No tests. Given this modifies user-facing CLI behavior, please add at minimum:

- Invalid `--output` values are rejected
- JSON output is valid JSON with expected keys
- JSON mode suppresses Rich/ANSI output
Minor

- `import time` could be a top-level import (consistent with `asyncio`, `json`, `sys`)
- `json.dumps(..., indent=2)` may be verbose for CI -- consider compact JSON as default
- `total_duration_us` includes connection setup time, not just server-side execution -- worth documenting
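On the compact-JSON point above: `indent=2` yields multi-line output, while tight `separators` keep the whole summary on one line for piping into tools like jq. A small sketch:

```python
import json

result = {"execution_id": "exec_abc123", "steps_executed": 5}

pretty = json.dumps(result, indent=2)
compact = json.dumps(result, separators=(",", ":"))

print(compact)  # {"execution_id":"exec_abc123","steps_executed":5}

# Pretty output spans multiple lines; compact stays on one line
assert "\n" in pretty and "\n" not in compact
```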
- Add `--timeout` option (default 300s) to polling loop with error state
- Wrap `get_events()` in try/except for robust JSON output
- Use Typer Enum (`OutputFormat`) for `--output` validation
- Add tests for enum validation, JSON output, and ANSI suppression
- Move `import time` to top-level imports
- Use compact JSON (no indent) as default
- Document that `total_duration_us` includes connection setup time
🧹 Nitpick comments (3)
sdk/python/tests/test_cli_json_output.py (3)

**40-57**: Consider extracting `FakeClient` to a reusable fixture.

The `FakeClient` class is duplicated nearly identically across three test methods. A shared fixture would reduce duplication and simplify maintenance.

♻️ Proposed refactor using a pytest fixture:

```python
@pytest.fixture
def fake_client(monkeypatch: pytest.MonkeyPatch):
    """Provides a FakeClient and patches jamjet.cli.main._client."""
    class FakeClient:
        def __init__(self, *a, **kw): pass
        async def __aenter__(self): return self
        async def __aexit__(self, *a): pass
        async def start_execution(self, **kw): return {"execution_id": "exec_test123"}
        async def get_execution(self, eid): return {"status": "completed", "output": {"result": "ok"}}
        async def get_events(self, eid): return {"events": []}
    monkeypatch.setattr("jamjet.cli.main._client", lambda runtime: FakeClient())
    return FakeClient
```

Then tests can simply request the `fake_client` fixture:

```python
def test_json_output_is_valid_json(self, fake_client) -> None:
    result = runner.invoke(app, ["run", "test-wf", "--output", "json", "--runtime", "http://fake:7700"])
    # ... assertions
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `sdk/python/tests/test_cli_json_output.py` around lines 40-57, extract the duplicate FakeClient class into a pytest fixture (e.g., fake_client) and use monkeypatch to patch jamjet.cli.main._client to return FakeClient(); update the tests that currently define class FakeClient (the async context methods __aenter__/__aexit__, start_execution, get_execution, get_events) to accept the new fake_client fixture instead of redeclaring the class so the shared fixture supplies the same behavior across tests.
**22-24**: Consider adding a CLI-level test for invalid `--output` values.

The enum test verifies `ValueError` is raised directly, but there's no test confirming that passing an invalid value like `--output xml` through the CLI produces a user-friendly error message. This would verify the Typer integration handles the enum validation gracefully.

💡 Example additional test:

```python
def test_invalid_output_format_via_cli(self) -> None:
    """CLI should reject invalid --output values with a clear error."""
    result = runner.invoke(app, ["run", "test-wf", "--output", "xml"])
    assert result.exit_code != 0
    # Typer should produce an error message about invalid choice
    assert "xml" in result.stdout or "Invalid value" in result.stdout
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `sdk/python/tests/test_cli_json_output.py` around lines 22-24, add a CLI-level test that invokes the Typer app with an invalid --output value to ensure the CLI surfaces a user-friendly error: create a new test function (e.g., test_invalid_output_format_via_cli) that calls runner.invoke(app, ["run", "test-wf", "--output", "xml"]) and asserts result.exit_code != 0 and that result.stdout (or result.stderr) contains either "xml" or "Invalid value"; reference the existing OutputFormat enum and the app and runner fixtures used elsewhere in the file so the test integrates with the same test setup.
**35-37**: Remove unused import and variable.

`asyncio` is imported but never used, and `captured: dict = {}` is declared but never referenced in the test.

🧹 Proposed cleanup:

```diff
 def test_json_output_is_valid_json(self, monkeypatch: pytest.MonkeyPatch) -> None:
     """Smoke test: --output json should produce parseable JSON (mocked)."""
-    import asyncio
-
-    captured: dict = {}
     # Mock the async client to avoid needing a running runtime
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `sdk/python/tests/test_cli_json_output.py` around lines 35-37, delete the unused "import asyncio" statement and the unused declaration "captured: dict = {}" so no unused symbols remain in the test module.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 60b403d8-7e36-4ee4-a4ab-ea5cdaddfacc
📒 Files selected for processing (2)

- sdk/python/jamjet/cli/main.py
- sdk/python/tests/test_cli_json_output.py
🚧 Files skipped from review as they are similar to previous changes (1)
- sdk/python/jamjet/cli/main.py
Summary

Adds a `--output json` / `-o json` flag to the `jamjet run` CLI command, enabling machine-readable output for scripting and CI pipelines.

Fixes #7

Changes

- `--output` option accepting `text` (default) or `json`
- JSON output includes `execution_id`, `final_state`, `steps_executed`, `total_duration_us`, `events`
- Rich console output suppressed with `--output json` for clean stdout
- Always follows execution to completion in JSON mode (ignores `--no-follow`)

Example
Output Schema

```json
{
  "execution_id": "exec_abc123",
  "final_state": { ... },
  "steps_executed": 5,
  "total_duration_us": 1234567,
  "events": [ ... ]
}
```

Testing
- `--help`
- `--output` values rejected with clear error

Disclosure: This contribution was developed with AI assistance (Claude via OpenClaw).
Summary by CodeRabbit

New Features

- `--output` / `-o` to choose `text` or `json`; JSON mode emits a compact single-line JSON with execution_id, final_state, steps_executed, total_duration_us, and events.
- `--timeout` to cap run polling and report a `timeout` final state.

Behavior Changes

Tests