prr-cli v0.1.0
Initial public release of prr-cli, a Node.js CLI for running parallel code reviews and concept brainstorms against OpenAI-compatible language models, including NVIDIA-hosted and OpenAI backends.
Highlights
- Run PR/code reviews from the CLI in either sequential or parallel mode.
- Use separate reviewer lanes for `defects` and `suggestions` workflows.
- Let tool-enabled reviewers fetch more file context on demand.
- Use structured outputs with `json_schema` and `json_object` modes when supported by the provider.
- Run `prr brainstorm <brief.md>` to review a concept brief or implementation plan instead of source code.
- Add optional `brainstormSynthesis` output to turn panel findings into a prioritized recommendation and a revised next brief.
- Include auto-selection logic that prefers verified elite tool-capable models.
- Include NVIDIA model probe tooling and experiment helpers for evaluating model/reviewer combinations.
What’s new in this release
Brainstorm mode
This release introduces a first-class brainstorm workflow:
- separate `brainstormers` config from the normal `reviewers` config
- markdown brief input instead of source-file review input
- no file selection, diff parsing, or tool use required for the core brainstorm loop
- optional synthesis stage for producing a tighter recommendation and a reusable markdown `nextBrief`
This makes it practical to fan out across many low-cost or free NVIDIA-hosted models for concept exploration before implementation starts.
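As a rough illustration of how the split might look in practice, here is a minimal config sketch. Only the `reviewers`, `brainstormers`, and `brainstormSynthesis` keys are taken from these notes; the entry shape, the `model` field, and the model IDs are assumptions for illustration, not the actual schema.

```json
{
  "reviewers": [
    { "model": "gpt-5" }
  ],
  "brainstormers": [
    { "model": "nvidia/llama-3.1-nemotron-70b-instruct" },
    { "model": "mistralai/mistral-nemo-12b-instruct" }
  ],
  "brainstormSynthesis": true
}
```

The intent is that `prr brainstorm <brief.md>` fans the brief out to every entry in `brainstormers`, and the optional synthesis stage folds the panel's findings into a recommendation plus a `nextBrief`.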
Review workflow improvements
- clearer reviewer-mode split between bug-finding and suggestion-focused lanes
- configurable suggestion caps
- improved GPT-5 guidance for `maxOutputTokens` and reasoning-token budgeting
- explicit support for leaving `maxOutputTokens` unset when you want the provider to decide the cap
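The improvements above could be combined in a reviewer config along these lines. `maxOutputTokens` and the `defects`/`suggestions` lane split come from these notes; the `mode` and `maxSuggestions` key names are hypothetical placeholders for whatever the actual schema calls them.

```json
{
  "reviewers": [
    { "model": "gpt-5", "mode": "defects" },
    { "model": "gpt-5-mini", "mode": "suggestions", "maxSuggestions": 5 }
  ]
}
```

Note that `maxOutputTokens` is omitted here, which per the notes below leaves the output cap up to the provider.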
Reliability and release hardening
- verified elite tool-capable model list and recommendation flow
- packaging tightened with an explicit npm `files` allowlist
- repository, homepage, and issue tracker metadata included in package metadata
- brainstorm synthesis parsing hardened so `nextBrief` is preserved correctly in the final structured output
Validation
Release validation included:
- `npm run lint`
- `npm test`
- `npm pack --dry-run`
- live OpenAI-compatible request validation
- live brainstorm smoke testing against the configured NVIDIA-backed example flow
Notes
- `brainstormSynthesis` remains optional, so existing brainstorm runs work without a synthesis stage.
- For NVIDIA and other non-GPT-5 models, leaving `maxOutputTokens` unset means `prr-cli` does not send `max_tokens`.
- For GPT-5-family models, `maxOutputTokens` maps to `max_completion_tokens`, so it should usually be left unset or set conservatively high.