96 changes: 83 additions & 13 deletions .deepreview
@@ -194,6 +194,47 @@ requirements_traceability:
Produce a structured review with Coverage Gaps, Test Stability
Violations, Traceability Issues, and a Summary with PASS/FAIL verdicts.

requirement_file_format:
description: "Validate RFC 2119 compliance, unique IDs, and sequential numbering in requirement spec files."
match:
include:
- "specs/**/*-REQ-*.md"
review:
strategy: individual
instructions: |
Review this requirements specification file for format correctness.

Check the following:

1. **RFC 2119 keywords**: Every requirement statement MUST use at least one
RFC 2119 keyword (MUST, MUST NOT, SHALL, SHALL NOT, SHOULD, SHOULD NOT,
MAY, REQUIRED, RECOMMENDED, OPTIONAL). Flag any numbered requirement
that lacks an RFC 2119 keyword — e.g., "The system generates a UUID"
should be "The system MUST generate a UUID."

2. **Unique requirement IDs**: Each section heading must follow the pattern
`### {PREFIX}-REQ-NNN.M: Title` where PREFIX matches the filename prefix
(e.g., JOBS-REQ for JOBS-REQ-001-*.md). Within each section, requirements
are numbered lists (1., 2., 3., ...). Flag any duplicate section IDs.

3. **Sequential numbering**: Within each section, numbered requirements
should be sequential without gaps (1, 2, 3 — not 1, 2, 4). Flag gaps
or out-of-order numbers.

4. **Section ID consistency**: The section ID prefix must match the file's
naming convention. For example, in `JOBS-REQ-001-mcp-workflow-tools.md`,
all sections should use `JOBS-REQ-001.X` (not `JOBS-REQ-002.X`).

5. **Testability**: Each requirement should be specific enough to be
verifiable — either by an automated test or a review rule. Flag vague
requirements that cannot be objectively evaluated (e.g., "The system
SHOULD be fast" — fast compared to what?).

Output Format:
- PASS: All requirements are properly formatted.
- FAIL: Issues found. List each with the section ID, requirement number,
and a concise description of the issue.
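
The heading, keyword, and numbering checks above are mechanical enough to sketch in code. A minimal Python illustration (the `lint_spec` helper and its regexes are assumptions for this sketch, not part of the reviewer):

```python
import re

# RFC 2119 requirement keywords (MUST, SHALL NOT, MAY, ...)
RFC2119 = re.compile(
    r"\b(MUST(?: NOT)?|SHALL(?: NOT)?|SHOULD(?: NOT)?|MAY|REQUIRED|RECOMMENDED|OPTIONAL)\b"
)
# Section headings of the form `### JOBS-REQ-001.2: Title`
HEADING = re.compile(r"^### ([A-Z]+-REQ-\d{3})\.(\d+): .+")

def lint_spec(text, file_prefix):
    """Return (section_id, issue) tuples for a requirement spec's text."""
    issues, seen_ids = [], set()
    section, expected_num = None, 0
    for line in text.splitlines():
        m = HEADING.match(line)
        if m:
            section = f"{m.group(1)}.{m.group(2)}"
            if section in seen_ids:
                issues.append((section, "duplicate section ID"))
            seen_ids.add(section)
            if m.group(1) != file_prefix:
                issues.append((section, f"prefix does not match {file_prefix}"))
            expected_num = 1
            continue
        item = re.match(r"^(\d+)\. (.+)", line)
        if item and section:
            num, body = int(item.group(1)), item.group(2)
            if num != expected_num:
                issues.append((section, f"expected item {expected_num}, found {num}"))
            expected_num = num + 1
            if not RFC2119.search(body):
                issues.append((section, f"item {num} lacks an RFC 2119 keyword"))
    return issues
```

Running this over a spec file surfaces the same gaps the review rule asks for: missing keywords, numbering gaps, and mismatched section prefixes.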

update_documents_relating_to_src_deepwork:
description: "Ensure project documentation stays current when DeepWork source files, plugins, or platform content change."
match:
@@ -388,43 +429,72 @@ deepreview_config_quality:
and a specific recommendation.

job_schema_instruction_compatibility:
description: "Verify deepwork_jobs instruction files, templates, and examples are compatible with the job schema."
description: "Verify deepwork_jobs job.yml inline instructions are compatible with the job schema."
match:
include:
- "src/deepwork/jobs/job.schema.json"
- "src/deepwork/standard_jobs/deepwork_jobs/steps/*.md"
- "src/deepwork/standard_jobs/deepwork_jobs/templates/*"
- "src/deepwork/standard_jobs/deepwork_jobs/job.yml"
- "src/deepwork/standard_jobs/deepwork_reviews/job.yml"
review:
strategy: matches_together
additional_context:
unchanged_matching_files: true
instructions: |
When the job schema or deepwork_jobs instruction files change, verify they
When the job schema or standard job definitions change, verify they
are still compatible with each other.

Read src/deepwork/jobs/job.schema.json to understand the current schema.
Then read each instruction file, template, and example in
src/deepwork/standard_jobs/deepwork_jobs/ and check:
Then read each standard job's job.yml and check:

1. **Field references**: Every field name mentioned in prose instructions,
templates, or examples must exist in the schema at the correct level.
Pay special attention to root-level vs step-level fields — a field
that exists on steps may not exist at the root, and vice versa.
1. **Field references**: Every field name referenced in inline step
instructions must exist in the schema at the correct level.
Pay special attention to step_arguments vs workflow vs step fields.

2. **Required vs optional**: If instructions say a field is required,
verify the schema agrees. If instructions say a field is optional,
verify the schema doesn't require it.

3. **Schema structure**: Template files and examples that show YAML
structure must match the schema's property names and nesting.
3. **Schema structure**: Any YAML examples shown in inline instructions
must match the schema's property names and nesting.

4. **Terminology consistency**: Instructions should use the same field
names as the schema (e.g., if the schema uses
"common_job_info_provided_to_all_steps_at_runtime", instructions
should not call it "description" or "job_description").

Output Format:
- PASS: All instruction files are compatible with the schema.
- PASS: All job definitions are compatible with the schema.
- FAIL: Incompatibilities found. List each with the file path, line
reference, the incompatible content, and what the schema actually says.
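
The field-level check in point 1 can be approximated by walking the schema's `properties` tree and recording which level each field lives at. A sketch using a hypothetical miniature schema rather than the real job.schema.json:

```python
import json

def schema_fields(schema, path=()):
    """Yield (level, field_name) pairs for every property in a JSON Schema dict."""
    for name, sub in schema.get("properties", {}).items():
        yield ".".join(path) or "<root>", name
        yield from schema_fields(sub, path + (name,))
        # Array-typed fields keep their item fields one level down
        if sub.get("type") == "array" and isinstance(sub.get("items"), dict):
            yield from schema_fields(sub["items"], path + (name,))

# Hypothetical miniature schema; the real job.schema.json is larger
schema = json.loads("""
{"properties": {
   "name": {"type": "string"},
   "steps": {"type": "array",
             "items": {"properties": {"instructions": {"type": "string"}}}}}}
""")
known = set(schema_fields(schema))
```

A field mentioned in inline instructions can then be checked as `("steps", "instructions") in known`, which catches names that exist at step level but not at the root, and vice versa.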

nix_claude_wrapper:
description: "Ensure flake.nix always wraps the claude command with the required plugin dirs."
match:
include:
- "flake.nix"
- ".envrc"
review:
strategy: matches_together
instructions: |
The nix dev shell must ensure that running `claude` locally automatically
loads the project's plugin directories via `--plugin-dir` flags. Verify:

1. **Wrapper exists**: flake.nix creates a wrapper (script or function)
that invokes the real `claude` binary with extra arguments.

2. **Required plugin dirs**: The wrapper MUST pass both of these
`--plugin-dir` flags:
- `--plugin-dir "$REPO_ROOT/plugins/claude"`
- `--plugin-dir "$REPO_ROOT/learning_agents"`

3. **PATH setup**: The wrapper must be discoverable — either via a
script placed on PATH (e.g. `.venv/bin/claude`) with `.envrc`
adding that directory to PATH, or via a shell function/alias.

4. **Real binary resolution**: The wrapper must resolve the real
`claude` binary correctly, avoiding infinite recursion (e.g. by
stripping the wrapper's directory from PATH before lookup).

Output Format:
- PASS: The claude wrapper is correctly configured with both plugin dirs.
- FAIL: Describe what is missing or broken.
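
The recursion guard in point 4 can be sketched in Python for illustration (the real wrapper would do the equivalent PATH surgery in shell; `resolve_real_binary` is a hypothetical helper, not code from flake.nix):

```python
import os
import shutil

def resolve_real_binary(name, wrapper_dir, path=None):
    """Find `name` on PATH while skipping the wrapper's own directory,
    so the wrapper never re-executes itself."""
    path = path if path is not None else os.environ.get("PATH", "")
    stripped = os.pathsep.join(
        d for d in path.split(os.pathsep)
        if os.path.abspath(d) != os.path.abspath(wrapper_dir)
    )
    return shutil.which(name, path=stripped)
```

With `.venv/bin` stripped from the lookup path, the wrapper resolves the real `claude` binary even though the wrapper script shadows it earlier on PATH.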
1 change: 1 addition & 0 deletions .envrc
@@ -1 +1,2 @@
use flake
PATH_add .venv/bin
10 changes: 1 addition & 9 deletions README.md
@@ -167,13 +167,6 @@ For workflows that need to interact with websites, you can use any browser autom

Here are some known issues that affect some early users — we're working on improving normal performance on these, but here are some known workarounds.

### Stop hooks firing unexpectedly

Occasionally, especially after updating a job or running the `deepwork_jobs learn` process after completing a task, Claude will get confused about which workflow's checks it's running. For now, if stop hooks fire when they shouldn't, you can:
- Ask Claude: `do we need to address any of these stop hooks or can we ignore them for now?`
- Ignore the stop hooks and keep going until the workflow steps are complete
- Run the `/clear` command to start a new context window (you'll have to re-run the job after this)

### Claude "just does the task" instead of using DeepWork

If Claude attempts to bypass the workflow and do the task on its own, tell it explicitly to use the skill. You can also manually run the step command:
@@ -198,8 +191,7 @@ your-project/
│ ├── tmp/ # Session state (created lazily)
│ └── jobs/ # Job definitions
│ └── job_name/
│ ├── job.yml # Job metadata
│ └── steps/ # Step instructions
│ └── job.yml # Job definition (self-contained with inline instructions)
```

</details>
2 changes: 1 addition & 1 deletion claude.md
@@ -57,7 +57,7 @@ deepwork/
│ │ │ ├── deepwork/SKILL.md
│ │ │ ├── review/SKILL.md
│ │ │ └── configure_reviews/SKILL.md
│ │ ├── hooks/ # hooks.json, post_commit_reminder.sh, post_compact.sh
│ │ ├── hooks/ # hooks.json, post_commit_reminder.sh, post_compact.sh, startup_context.sh
│ │ └── .mcp.json # MCP server config
│ └── gemini/ # Gemini CLI extension
│ └── skills/deepwork/SKILL.md