diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md index dbd7a3ae..3fde672d 100644 --- a/.claude/CLAUDE.md +++ b/.claude/CLAUDE.md @@ -13,6 +13,9 @@ Rust library for NP-hard problem reductions. Implements computational problems w - [write-rule-in-paper](skills/write-rule-in-paper/SKILL.md) -- Write or improve a reduction-rule entry in the Typst paper. Covers complexity citation, self-contained proof, detailed example, and verification. - [release](skills/release/SKILL.md) -- Create a new crate release. Determines version bump from diff, verifies tests/clippy, then runs `make release`. - [meta-power](skills/meta-power/SKILL.md) -- Batch-resolve all open `[Model]` and `[Rule]` issues autonomously: plan, implement, review, fix CI, merge — in dependency order (models first). +- [zero-to-infinity](skills/zero-to-infinity/SKILL.md) -- Discover and prioritize new problems and reduction rules based on user-ranked impact dimensions (academia, industry, cross-field, etc.), then file as GitHub issues. +- [add-issue-model](skills/add-issue-model/SKILL.md) -- File a well-formed `[Model]` GitHub issue with all 11 checklist items, citations, and repo verification. +- [add-issue-rule](skills/add-issue-rule/SKILL.md) -- File a well-formed `[Rule]` GitHub issue with all 9 checklist items, worked example, correctness argument, and nontriviality check. ## Commands ```bash diff --git a/.claude/skills/add-issue-model/SKILL.md b/.claude/skills/add-issue-model/SKILL.md new file mode 100644 index 00000000..e57f15b3 --- /dev/null +++ b/.claude/skills/add-issue-model/SKILL.md @@ -0,0 +1,138 @@ +--- +name: add-issue-model +description: Use when filing a GitHub issue for a new problem model, ensuring all template sections are complete with citations +--- + +# Add Issue — Model + +File a `[Model]` GitHub issue on CodingThrust/problem-reductions using the upstream "Problem" issue template. Ensures all sections are complete, cited, and verified against the repo. 
+ +## Input + +The caller (zero-to-infinity or user) provides: +- Problem name +- Brief description / definition sketch +- Reference URLs (if available) + +## Step 1: Verify Non-Existence + +Before anything else, confirm the model doesn't already exist: + +```bash +# Check implemented models +ls src/models/*/ | grep -i "<problem-name>" + +# Check open issues +gh issue list --state open --limit 200 --json title,number | grep -i "<problem-name>" + +# Check closed issues +gh issue list --state closed --limit 200 --json title,number | grep -i "<problem-name>" +``` + +**If found:** STOP. Report to caller that this model already exists (with issue number or file path). + +## Step 2: Research and Fill Template Sections + +Use `WebSearch` and `WebFetch` to fill all sections from the upstream template (`.github/ISSUE_TEMPLATE/problem.md`): + +| Section | What to fill | Guidance | +|---------|-------------|----------| +| **Motivation** | One sentence: why include this problem? | E.g. "Widely used in network design and has known reductions to QUBO." | +| **Definition — Name** | Use `Maximum*`/`Minimum*` prefix for optimization. Check CLAUDE.md "Problem Names" | E.g. `MaximumIndependentSet` | +| **Definition — Reference** | URL or citation for the formal definition | Must be a real, accessible URL | +| **Definition — Formal** | Input, feasibility constraints, and objective. Define ALL symbols before using them. Use LaTeX math (`$...$` inline, `$$...$$` display) | E.g. "Given $G=(V,E)$ where $V$ is vertex set and $E$ is edge set, find $S \subseteq V$ such that..." | +| **Variables — Count** | Number of variables in configuration vector | E.g. $n = |V|$ (one variable per vertex) | +| **Variables — Domain** | Per-variable domain | E.g. binary $\{0,1\}$ or $\{0,\ldots,K-1\}$ for $K$ colors | +| **Variables — Meaning** | What each variable represents | E.g.
$x_i = 1$ if vertex $i \in S$ | +| **Schema — Type name** | Rust struct name | Must match the Definition Name | +| **Schema — Variants** | Graph topology variants, weighted/unweighted | E.g. `SimpleGraph, GridGraph; weighted or unweighted` | +| **Schema — Fields table** | `\| Field \| Type \| Description \|` for each struct field | Connect fields to symbols defined in Definition | +| **Complexity** | Best known exact algorithm with concrete numbers | E.g. $O(1.1996^n)$ by Xiao & Nagamochi (2017). **No symbolic constants.** | +| **Complexity — References** | URL for complexity results | Must be citable | +| **Extra Remark** | Optional: historical context, applications, relationships | Can be brief or empty | +| **How to solve** | Check applicable boxes | BruteForce / ILP reduction / Other | +| **Example Instance** | Small but non-trivial instance with known optimal solution | Must be large enough to exercise constraints (avoid trivial cases). Will appear in paper. | + +**Citation rule:** Every complexity claim and reference MUST include a URL. + +**Formatting rule:** All mathematical expressions MUST use GitHub LaTeX rendering: `$...$` for inline math (e.g., $G=(V,E)$, $x_i$, $O(1.1996^n)$) and `$$...$$` for display equations. Never use plain text for math. 
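The formatting rule can also be checked mechanically before filing. A minimal lint sketch, assuming the draft body sits in a hypothetical `draft.md`; the pattern is a heuristic that only catches unwrapped `O(...)` expressions, not all plain-text math:

```bash
# Write a draft line with plain-text math, then lint it (heuristic only).
printf 'Best known exact algorithm: O(1.1996^n) by Xiao & Nagamochi.\n' > draft.md
if grep -qE '(^|[^$])O\([0-9]' draft.md; then
  echo "WARN: plain-text complexity expression found; wrap it in \$...\$"
fi
```

A properly wrapped `$O(1.1996^n)$` is not flagged, because the character before `O` is `$`.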
+ +## Step 3: Verify Algorithm Correctness + +For the Complexity section: +- Cross-check the complexity claim against at least 2 independent sources +- Ensure the complexity uses concrete numeric values (e.g., $1.1996^n$), not symbolic constants +- Verify the variable in the complexity expression maps to a natural size getter (e.g., $n = |V|$ → `num_vertices`) + +## Step 4: Draft and File Issue + +Draft the issue body matching the upstream template format exactly: + +```bash +gh issue create --repo CodingThrust/problem-reductions \ + --title "[Model] ProblemName" \ + --label "model" \ + --body "$(cat <<'ISSUE_EOF' +## Motivation + + + +## Definition + +**Name:** ProblemName +**Reference:** [citation](url) + + + +## Variables + +- **Count:** $n = |V|$ (one variable per vertex) +- **Per-variable domain:** binary $\{0,1\}$ +- **Meaning:** $x_i = 1$ if vertex $i$ is selected + +## Schema (data type) + +**Type name:** ProblemName +**Variants:** graph topology (SimpleGraph, ...), weighted or unweighted + +| Field | Type | Description | +|-------|------|-------------| +| graph | SimpleGraph | the graph $G=(V,E)$ | +| weights | Vec | vertex weights $w_i$ (weighted variant only) | + +## Complexity + +- **Best known exact algorithm:** $O(1.1996^n)$ by Author (Year), where $n = |V|$ +- **References:** [paper](url) + +## Extra Remark + + + +## How to solve + +- [x] It can be solved by (existing) bruteforce. +- [ ] It can be solved by reducing the integer programming, through #issue-number. +- [ ] Other, refer to ... + +## Example Instance + + +ISSUE_EOF +)" +``` + +Report the created issue number and URL. 
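`gh issue create` prints the new issue's URL to stdout, so the reporting step can capture it and derive the issue number from the last path segment. A sketch with a stand-in value in place of the live `gh` call:

```bash
# In the real flow: url=$(gh issue create ...)
url="https://github.com/CodingThrust/problem-reductions/issues/161"  # stand-in value
number="${url##*/}"   # drop everything through the final '/'
echo "Filed issue #$number: $url"
```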
+ +## Common Mistakes + +| Mistake | Fix | +|---------|-----| +| Using custom format instead of template | Must match `.github/ISSUE_TEMPLATE/problem.md` sections exactly | +| Missing complexity citation | Every algorithm claim needs author + year + URL | +| Symbolic constants in complexity | Use concrete numbers: $1.1996^n$ not $(2-\epsilon)^n$ | +| Plain text math | Use LaTeX: `$G=(V,E)$` not `G=(V,E)` | +| Undefined symbols in definition | Define ALL symbols (G, V, E, S, etc.) before using them | +| Trivial example instance | Use non-trivial instance (e.g., Petersen graph, not triangle) | +| Not checking repo first | Always run Step 1 before researching | +| Missing label | Use `--label "model"` to match template metadata | diff --git a/.claude/skills/add-issue-rule/SKILL.md b/.claude/skills/add-issue-rule/SKILL.md new file mode 100644 index 00000000..3b5d806a --- /dev/null +++ b/.claude/skills/add-issue-rule/SKILL.md @@ -0,0 +1,142 @@ +--- +name: add-issue-rule +description: Use when filing a GitHub issue for a new reduction rule, ensuring all template sections are complete with citations, worked examples, and correctness verification +--- + +# Add Issue — Rule + +File a `[Rule]` GitHub issue on CodingThrust/problem-reductions using the upstream "Rule" issue template. Ensures all sections are complete, with citations, a worked example, and a validation method. + +## Input + +The caller (zero-to-infinity or user) provides: +- Source problem name +- Target problem name +- Reference URLs (if available) + +## Step 1: Verify Non-Existence + +Before anything else, confirm the rule doesn't already exist: + +```bash +# Check implemented rules (filename pattern: source_target.rs) +ls src/rules/ | grep -i "<source>.*<target>" + +# Check open issues +gh issue list --state open --limit 200 --json title,number | grep -i "<source>.*<target>" + +# Check closed issues +gh issue list --state closed --limit 200 --json title,number | grep -i "<source>.*<target>" +``` + +**If found:** STOP.
Report to caller that this rule already exists. + +**Also verify both source and target models exist or have open issues:** +```bash +ls src/models/*/ | grep -i "<source>" +ls src/models/*/ | grep -i "<target>" +``` + +If a model doesn't exist and has no open issue, report it. The caller should file model issues first. + +## Step 2: Research and Fill Template Sections + +Use `WebSearch` and `WebFetch` to fill all sections from the upstream template (`.github/ISSUE_TEMPLATE/rule.md`): + +| Section | What to fill | Guidance | +|---------|-------------|----------| +| **Source** | Source problem name | Must exist in repo or have open issue. Browse: https://codingthrust.github.io/problem-reductions/ | +| **Target** | Target problem name | Must exist in repo or have open issue | +| **Motivation** | One sentence: why is this reduction useful? | E.g. "Enables solving MIS on quantum annealers via QUBO formulation." | +| **Reference** | URL, paper, or textbook citation | Must be a real, accessible reference | +| **Reduction Algorithm** | Three parts: (1) Define notation — list ALL symbols for source and target instances. (2) Variable mapping — how source variables map to target variables. (3) Constraint/objective transformation — formulas, penalty terms, etc. Use LaTeX math (`$...$` inline, `$$...$$` display). | Solution extraction follows from variable mapping, no need to describe separately | +| **Size Overhead** | Table: `\| Target metric (code name) \| Polynomial (using symbols) \|` | Code names must match the target problem's getter methods (e.g., `num_vertices`, `num_edges`) | +| **Validation Method** | How to verify correctness beyond closed-loop testing | E.g. compare with ProblemReductions.jl, external solver, known results | +| **Example** | Small but non-trivial source instance for the paper illustration | Must be small enough for brute-force but large enough to exercise the reduction meaningfully.
Provide as many details as possible — this appears in the paper and is used by AI to generate example code. | + +**Citation rule:** Every claim MUST include a URL. + +**Formatting rule:** All mathematical expressions MUST use GitHub LaTeX rendering: `$...$` for inline math (e.g., $G=(V,E)$, $x_i$, $Q_{ij}$) and `$$...$$` for display equations. Never use plain text for math. + +## Step 3: Verify Example Correctness + +For the Example section: +- Walk through the reduction step-by-step +- Show: source instance → apply reduction → target instance → solve target → verify solution maps back +- The example must be small enough to verify by hand (e.g., Petersen graph for graph problems) +- Provide concrete numbers, not just descriptions + +## Step 4: Verify Nontriviality + +The rule must be **nontrivial** (per issue #127 standards): +- NOT a simple identity mapping or type cast +- NOT a trivial embedding (just copying data) +- NOT a weight type conversion (i32 → f64) +- MUST involve meaningful structural transformation + +If the rule is trivial, STOP and report to caller. + +## Step 5: Draft and File Issue + +Draft the issue body matching the upstream template format exactly: + +```bash +gh issue create --repo CodingThrust/problem-reductions \ + --title "[Rule] Source to Target" \ + --label "rule" \ + --body "$(cat <<'ISSUE_EOF' +**Source:** SourceProblem +**Target:** TargetProblem +**Motivation:** +**Reference:** [citation](url) + +## Reduction Algorithm + +**Notation:** +- Source instance: $G=(V,E)$, $n=|V|$, $m=|E|$ +- Target instance: ... 
+ +**Variable mapping:** + + +**Constraint/objective transformation:** + + +## Size Overhead + +| Target metric (code name) | Polynomial (using symbols above) | +|----------------------------|----------------------------------| +| `num_vertices` | $n = |V|$ | +| `num_edges` | $m + \ldots$ | + +## Validation Method + + + +## Example + + + +Source: +Reduction: +Target: +Solution: +ISSUE_EOF +)" +``` + +Report the created issue number and URL. + +## Common Mistakes + +| Mistake | Fix | +|---------|-----| +| Using custom format instead of template | Must match `.github/ISSUE_TEMPLATE/rule.md` sections exactly | +| Filing trivial reductions | Check nontriviality in Step 4 | +| Missing model dependency | Verify both source and target exist in Step 1 | +| Example too complex or too trivial | Small enough for brute-force, large enough to be meaningful (e.g., Petersen graph) | +| Undefined symbols in algorithm | Define ALL notation before using it | +| Missing validation method | Must describe how to cross-check beyond closed-loop | +| Wrong overhead code names | Must match actual getter methods on target type | +| Missing label | Use `--label "rule"` to match template metadata | +| Plain text math | Use LaTeX: `$G=(V,E)$` not `G=(V,E)`, `$\sum w_{ij}$` not `sum w_ij` | diff --git a/.claude/skills/zero-to-infinity/SKILL.md b/.claude/skills/zero-to-infinity/SKILL.md new file mode 100644 index 00000000..2608f37d --- /dev/null +++ b/.claude/skills/zero-to-infinity/SKILL.md @@ -0,0 +1,240 @@ +--- +name: zero-to-infinity +description: Use when you want to discover and prioritize new problems and reduction rules to add to the codebase, based on user-ranked impact dimensions +--- + +# Zero to Infinity + +Discover high-impact problems and reduction rules, rank them by user priorities, and file them as GitHub issues — feeding the existing `issue-to-pr` / `meta-power` pipeline. + +## Overview + +This skill bridges "what should we add next?" with the implementation pipeline. 
It does NOT write code — it creates well-formed `[Model]` and `[Rule]` issues via the `add-issue-model` and `add-issue-rule` sub-skills. + +## Step 1: Survey — Rank Impact Dimensions + +Rank dimensions using **cascading elimination** — each round removes previously selected options. + +**Default dimensions:** + +| # | Dimension | Description | +|---|-----------|-------------| +| 0 | Academic Publications | Papers in JACM, SICOMP, and top CS venues studying this problem/reduction | +| 1 | Industrial Application | Real-world use cases (search engines, navigation, scheduling, compilers) | +| 2 | Cross-Field Application | Relevance to physics, chemistry, biology, or other scientific domains | +| 3 | Top-Scientists Interest | Featured in Karp's 21, Garey & Johnson, or by researchers like Aaronson | +| 4 | Graph Connectivity | Bridges disconnected components in the existing reduction graph | +| 5 | Pedagogical Value | Clean, illustrative reductions good for teaching | + +### Cascading Elimination Process + +Maintain a list of `remaining_dimensions` (initially all 6). For each round: + +1. Present `remaining_dimensions` as options via `AskUserQuestion`: "Which is your #K priority?" +2. User selects one → assign it rank K +3. Remove selected dimension from `remaining_dimensions` +4. Repeat until 2 remain → user picks between them, last one auto-assigned to bottom rank + +**Example flow:** +``` +Round 1 (6 options): "#1 priority?" → User picks "Cross-Field" +Round 2 (5 options): "#2 priority?" → User picks "Industry" (Cross-Field removed) +Round 3 (4 options): "#3 priority?" → User picks "TopSci" (Cross-Field + Industry removed) +Round 4 (3 options): "#4 priority?" → User picks "Academic" (3 removed) +Round 5 (2 options): "#5 priority?" → User picks one, last auto-assigned #6 +``` + +**IMPORTANT:** You MUST track which dimensions have been selected and exclude them from subsequent AskUserQuestion calls. Never show an already-ranked dimension again. 
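The elimination loop above can be sketched in shell; the auto-pick of the first remaining option stands in for the user's `AskUserQuestion` answer:

```bash
remaining=(Academic Industry CrossField TopSci GraphConn Pedagogical)
rank=1
while [ "${#remaining[@]}" -gt 1 ]; do
  echo "Round $rank (${#remaining[@]} options): ${remaining[*]}"
  pick=${remaining[0]}                # stand-in for the user's selection
  echo "  rank $rank -> $pick"
  remaining=("${remaining[@]:1}")     # drop the pick: it is never shown again
  rank=$((rank + 1))
done
echo "  rank $rank -> ${remaining[0]} (auto-assigned)"
```

After five rounds the single leftover dimension is auto-assigned the bottom rank, exactly as in the example flow.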
+ +**Scoring weights:** For N dimensions, rank k gets weight N - k + 1. + +User may also add custom dimensions during the first round. + +## Step 2: Discover — Inventory First, Then Search + +### Phase 1: Build Exclusion Set (MANDATORY FIRST STEP) + +Before any web search or analysis, build a complete inventory of what already exists: + +```bash +# Implemented models +ls src/models/*/ + +# Implemented rules +ls src/rules/ + +# Open issues (increase limit to 200) +gh issue list --state open --limit 200 --json title,number + +# NOTE: Only open issues are excluded (not closed — those may have been rejected/abandoned) +``` + +Build a named **exclusion set** containing: +- Every implemented model name (from filenames) +- Every implemented rule (source→target pairs from filenames) +- Every open issue title mentioning a problem or rule name (both `[Model]` and `[Rule]`) + +**New candidates must NOT overlap with this exclusion set.** A candidate is excluded if it matches ANY of: an implemented model/rule OR an open issue. Closed issues are NOT excluded (they may have been rejected or abandoned). + +**Pass this exclusion set to both discovery channels.** + +### Phase 2: Discover (parallel, with exclusion set) + +Run two channels in parallel (use `dispatching-parallel-agents` or concurrent subagents). Both channels receive the exclusion set and must filter results against it during discovery. 
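Exclusion-set matching is sensitive to spelling: model filenames, issue titles, and search hits name the same problem differently. A sketch of one possible normalization before comparison (the names and the `norm` helper are illustrative, not part of the skill):

```bash
norm() { printf '%s' "$1" | tr 'A-Z' 'a-z' | tr -d ' _-'; }  # MaxCut == max_cut == Max-Cut
exclusions="$(norm 'max_cut')
$(norm 'MaximumIndependentSet')"
candidate="MaxCut"
if printf '%s\n' "$exclusions" | grep -qx "$(norm "$candidate")"; then
  echo "excluded: $candidate"
else
  echo "new candidate: $candidate"
fi
```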
+ +#### Channel A: Web Search + +Search queries targeting the user's top-ranked dimensions: +- `"classical NP-complete problems Karp's 21 reductions"` +- `"NP-hard problems {top_dimension_keyword}"` (e.g., `"NP-hard problems condensed matter physics"`) +- `"polynomial reductions from {existing_problem}"` for each problem with few outgoing edges +- `"important reductions computational complexity textbook"` + +For each candidate, collect: +- Formal problem name +- Brief definition +- Known reductions to/from other problems +- Complexity class and best known algorithms +- Reference URLs + +**Filter:** Immediately discard any candidate in the exclusion set. + +#### Channel B: Reduction Graph Gap Analysis + +```bash +cat docs/data/reduction_graph.json +``` + +Identify: +- **Dead-end problems**: nodes with no outgoing reductions +- **Missing natural reductions**: pairs of related problems without a direct edge +- **Disconnected components**: subgraphs that could be bridged by a single reduction +- **Well-known textbook reductions** not yet implemented (Garey & Johnson, CLRS, Arora & Barak) + +**Filter:** Only suggest gaps where neither the model nor rule is in the exclusion set. + +### Phase 3: Final Deduplication + +Merge results from both channels. Remove any remaining duplicates (same problem/rule found by both channels). + +## Step 3: Rank — Score, Sort, and Prioritize + +For each candidate, assign a score (0-5) per dimension: + +| Score | Meaning | +|-------|---------| +| 0 | No relevance | +| 1 | Marginal | +| 2 | Some relevance | +| 3 | Moderate | +| 4 | Strong | +| 5 | Exceptional | + +**Total score** = sum of (dimension_score × dimension_weight) for all dimensions. + +### Filing Priority Order + +After scoring, group candidates by priority: + +``` +--- Group 1: Rules between existing models (highest value) --- +Rules where BOTH source and target models already exist in the codebase. +These can be implemented immediately without new models. 
+ +--- Group 2: Models + Rules (both needed) --- +A model that enables one or more high-value rules. File model first, rules second. +List the model and its dependent rules together. + +--- Group 3: Standalone models (lowest priority) --- +Models with no immediate rule connection to existing problems. +``` + +Within each group, sort by total score descending. + +### Nontrivial Filter + +**Exclude** candidate rules that are: +- Identity mappings or trivial embeddings +- Simple type/weight casts (i32 → f64) +- Variant promotions (SimpleGraph → HyperGraph) +- Any reduction without meaningful structural transformation + +Reference: issue #127's standard for non-trivial cross-domain reductions. + +### Present 10–20 Candidates + +Present the ranked table with **10–20 candidates** (default target: ~15, hard cap: 20). If discovery returns fewer than 10 quality candidates, present all. + +``` +| # | Group | Type | Name | Score | Top Dimensions Hit | +|---|-------|-------|-------------------|-------|----------------------------| +| | **Rules (models exist)** | +| 1 | 1 | Rule | 3SAT → MaxCut | 21 | Academic(5), TopSci(4) | +| 2 | 1 | Rule | MaxClique ↔ MaxIS | 17 | GraphConn(5), Pedagogical(5)| +| | **Models + Rules** | +| 3 | 2 | Model | Partition | 22 | Industry(4), TopSci(4) | +| 4 | 2 | Rule | Partition → BinPack| 21 | GraphConn(5), Industry(4) | +| | **Standalone models** | +| 5 | 3 | Model | VehicleRouting | 14 | Industry(5) | +... +``` + +Include a 1-line justification for each candidate's top scores. + +## Step 4: Select — User Picks Candidates + +Use `AskUserQuestion` with `multiSelect: true` to let the user choose which candidates to file as issues. + +Present each candidate as an option with its score, group, and type in the label. + +**Hint to user:** Filing rules is higher impact than filing standalone models, since rules connect the graph. 
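The weighted total from Step 3 is mechanical once per-dimension scores exist. A sketch for a single candidate, with hypothetical scores and the rank-k weights for N = 6:

```bash
weights=(6 5 4 3 2 1)  # rank k gets weight N - k + 1, here N = 6
scores=(5 4 0 4 0 2)   # hypothetical per-dimension scores for one candidate
total=0
for i in "${!weights[@]}"; do
  total=$((total + weights[i] * scores[i]))
done
echo "total=$total"
```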
+ +## Step 5: File — Dispatch Sub-Skills + +For each selected candidate, dispatch a subagent running the appropriate sub-skill: + +- **Model candidates:** Invoke `add-issue-model` with the problem name, definition, and references +- **Rule candidates:** Invoke `add-issue-rule` with source, target, and references + +**Parallelization:** Use `dispatching-parallel-agents` to file multiple issues concurrently. Each subagent independently: +1. Verifies non-existence (double-check) +2. Researches to fill the full checklist +3. Drafts the issue +4. Files via `gh issue create` +5. Reports the issue URL + +**Ordering constraint:** If a model and its dependent rules are both selected, file the model FIRST (sequential), then file rules (can be parallel with each other). + +Collect all filed issue URLs and present a summary table. + +## Step 6: Implement (Optional) + +After all issues are filed, ask: + +``` +Would you like to invoke meta-power to automatically implement these issues? +``` + +If yes, invoke the `meta-power` skill. If no, stop — the user can run `/meta-power` later. + +## Key Constraints + +- **No code writing**: This skill only creates issues. Implementation is delegated to downstream skills. +- **No duplicates**: Inventory check (Phase 1) is mandatory BEFORE any discovery. +- **Template compliance**: Every issue must fully satisfy the `add-model` or `add-rule` checklist. Incomplete issues get rejected by `issue-to-pr`. +- **Citations required**: Every claim about a problem's complexity, applications, or significance must include a reference URL. +- **Nontrivial rules only**: No identity mappings, type casts, or trivial embeddings. +- **User approval gates**: The user approves at two points — candidate selection (Step 4) and optionally at issue draft (via sub-skills). +- **Rules over models**: Prioritize rules between existing models over standalone models. 
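The Step 5 ordering constraint (dependent model first, then its rules in parallel) can be sketched as follows; `file_issue` is a stand-in for dispatching an `add-issue-*` sub-skill, and the names are hypothetical:

```bash
: > filed.log
file_issue() { echo "filed: $1" >> filed.log; }  # stand-in for a sub-skill dispatch
file_issue "[Model] Partition"                   # sequential: the rules below depend on it
file_issue "[Rule] Partition to BinPacking" &    # dependent rules may run in parallel
file_issue "[Rule] Partition to SubsetSum" &
wait                                             # collect before the summary table
```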
+ +## Common Mistakes + +| Mistake | Fix | +|---------|-----| +| Repeated survey options | Use cascading elimination — track and exclude selected dimensions | +| Filing issues without inventory check | Always run Phase 1 (exclusion set) BEFORE discovery | +| Presenting trivial rules | Apply nontrivial filter — no identity maps, type casts, or embeddings | +| Filing model when only rule is needed | Check if models already exist; file rules first | +| Too many candidates | Hard cap at 20; default target ~15 | +| Filing without sub-skill | Always dispatch via `add-issue-model` or `add-issue-rule` for template compliance | +| Showing >4 options in one AskUserQuestion | AskUserQuestion supports max 4 options; for candidate selection use multiSelect with up to 4 per call, or present in batches | diff --git a/docs/plans/2026-03-04-zero-to-infinity-design.md b/docs/plans/2026-03-04-zero-to-infinity-design.md new file mode 100644 index 00000000..02102c58 --- /dev/null +++ b/docs/plans/2026-03-04-zero-to-infinity-design.md @@ -0,0 +1,97 @@ +# Zero-to-Infinity Skill Design + +**Issue:** https://github.com/CodingThrust/problem-reductions/issues/161 +**Date:** 2026-03-04 + +## Overview + +A skill that discovers high-impact problems and reduction rules, ranks them by user priorities, and files them as GitHub issues — bridging the gap between "what should we add next?" and the existing `issue-to-pr` / `meta-power` pipeline. 
+ +## Pipeline + +``` +Step 1: Survey → User ranks impact dimensions (single prompt) +Step 2: Discover → Web search + reduction graph gap analysis (parallel) +Step 3: Rank → Score candidates against user weights, present table +Step 4: Select → User picks which candidates to file +Step 5: File → Create [Model]/[Rule] GitHub issues +Step 6: Implement → Optionally invoke meta-power +``` + +## Impact Dimensions + +| # | Dimension | Description | Scoring signal | +|---|-----------|-------------|----------------| +| 0 | Academic Publications | Papers in JACM, SICOMP, top venues | Paper count from web search | +| 1 | Industrial Application | Real-world use (search, navigation, scheduling) | Application domain count | +| 2 | Cross-Field Application | Physics, chemistry, biology relevance | Scientific domain count | +| 3 | Top-Scientists Interest | Karp's 21, Garey & Johnson, Aaronson | Named in canonical lists | +| 4 | Graph Connectivity | Bridges disconnected reduction graph components | Structural gap score | +| 5 | Pedagogical Value | Clean, illustrative reductions for teaching | Subjective assessment | + +Extensible: the user can add custom dimensions during the survey step. + +### Scoring + +User ranks dimensions 1-N in a single prompt. Weight for rank k (out of N) = N - k + 1. Each candidate gets a score per dimension (0-5), multiplied by the weight, then summed. + +## Discovery Channels + +### Channel A: Web Search + +Search queries (run via parallel subagents): +- "classical NP-complete problems Karp's 21" +- "NP-hard problems {top user dimension}" (e.g., "NP-hard problems industrial applications") +- "polynomial reductions from {existing_problem}" +- "important reductions in computational complexity" + +For each candidate found, gather: formal name, definition sketch, known reductions, complexity class, references. 
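One concrete shape for a gathered candidate record. The field names and the reference URL are illustrative placeholders rather than a required schema; the complexity line states the classical meet-in-the-middle bound for SubsetSum:

```bash
# Hypothetical per-candidate record produced by a search subagent.
cat > candidate.json <<'EOF'
{
  "name": "SubsetSum",
  "definition": "Given integers a_1..a_n and a target T, decide whether some subset sums to T",
  "known_reductions": ["Partition -> SubsetSum", "SubsetSum -> Knapsack"],
  "complexity": "NP-complete; exact in O(2^{n/2}) time via meet-in-the-middle",
  "references": ["https://example.org/replace-with-verified-citation"]
}
EOF
```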
+ +### Channel B: Reduction Graph Gap Analysis + +Read `reduction_graph.json` and identify: +- Problems with no outgoing reductions (dead ends) +- Natural reductions missing between related problems +- Disconnected components that could be bridged +- Well-known reductions from literature that aren't implemented + +## Deduplication + +Before presenting candidates, filter out: +- Already-implemented models (check `src/models/`) +- Already-implemented rules (check `src/rules/`) +- Open issues (check `gh issue list`) +- Recently closed issues (check `gh issue list --state closed`) + +## Ranking & Selection + +Present a ranked table with up to 10 candidates: + +``` +| # | Type | Name | Score | Top Dimensions Hit | +|---|-------|-------------------|-------|--------------------------| +| 1 | Model | SubsetSum | 23 | Academic(5), Industry(4) | +| 2 | Rule | 3SAT → MaxCut | 21 | Academic(5), TopSci(4) | +... +``` + +User multi-selects which to file. + +## Issue Filing + +For each selected candidate, generate a GitHub issue: +- `[Model]` issues: populate all 11 items from `add-model` Step 0 checklist +- `[Rule]` issues: populate all 9 items from `add-rule` Step 0 checklist + +Show draft to user for confirmation before filing via `gh issue create`. + +## Optional Implementation + +After filing, ask user whether to invoke `meta-power` to implement the filed issues automatically. 
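The dead-end scan from Channel B can be approximated on a flattened edge list even without a JSON tool. A toy sketch (the real `reduction_graph.json` schema may differ, so treat the TSV shape as an assumption):

```bash
# Toy edge list: one "source<TAB>target" pair per line.
printf 'MaxCut\tSpinGlass\nSpinGlass\tQUBO\n' > edges.tsv
for n in MaxCut SpinGlass QUBO; do
  cut -f1 edges.tsv | grep -qx "$n" || echo "dead end: $n"   # no outgoing reduction
done
```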
+ +## Conventions + +- No duplicate issues (deduplication check is mandatory) +- Issue content follows existing `[Model]`/`[Rule]` template conventions +- The skill does NOT implement code directly — it only creates issues +- All web search results are cited with URLs in the issue body diff --git a/docs/plans/2026-03-04-zero-to-infinity-v2-design.md b/docs/plans/2026-03-04-zero-to-infinity-v2-design.md new file mode 100644 index 00000000..1fdea820 --- /dev/null +++ b/docs/plans/2026-03-04-zero-to-infinity-v2-design.md @@ -0,0 +1,137 @@ +# Zero-to-Infinity v2 Design — 5 Fixes + +**Date:** 2026-03-04 +**Issue:** https://github.com/CodingThrust/problem-reductions/issues/161 + +## Context + +After manual testing of the zero-to-infinity skill, 5 issues were identified. This design addresses all of them. + +## Fix 1: Survey — Cascading Elimination + +**Problem:** All 6 dimensions shown for every rank selection, causing repeated options. + +**Solution:** Each round removes previously selected options: + +``` +Round 1 (6 options): "Which is your #1 priority?" + → User picks "Cross-Field Application" +Round 2 (5 options): "Which is your #2 priority?" (Cross-Field removed) + → User picks "Industrial Application" +Round 3 (4 options): "#3?" (Cross-Field + Industrial removed) + ...continue until 2 remain +Round N-1 (2 options): final pick, last one auto-assigned to bottom rank +``` + +The skill must explicitly instruct Claude to track selected dimensions and exclude them from subsequent AskUserQuestion calls. + +## Fix 2: Inventory-First Deduplication + +**Problem:** Deduplication was a sub-step of discovery, happening too late. 
+ +**Solution:** Restructure Step 2 into 3 ordered phases: + +### Phase 1: Build Exclusion Set (FIRST, before any search) + +```bash +# Implemented models +ls src/models/*/ + +# Implemented rules +ls src/rules/ + +# Open issues +gh issue list --state open --limit 200 --json title,number + +# Closed issues +gh issue list --state closed --limit 200 --json title,number +``` + +Build a named set of all known problems, rules, and issue titles. + +### Phase 2: Discover (parallel, with exclusion set) + +Both web search and graph gap analysis receive the exclusion set upfront and filter during discovery, not after. + +### Phase 3: Final Deduplication + +Merge results from both channels, remove any remaining duplicates. + +## Fix 3: Rules Over Models, Nontrivial Only + +**Problem:** No prioritization of rules vs models. Trivial reductions could appear. + +**Solution:** + +### Filing Priority Order +1. **Rules between existing models** — highest value, both endpoints already implemented +2. **Models needed by high-value rules** — file model first, then rule +3. **Standalone models** — lowest priority (no immediate rule connection) + +### Nontrivial Filter + +Exclude candidate rules that are: +- Identity mappings or trivial embeddings +- Simple type/weight casts (i32 → f64) +- Variant promotions (SimpleGraph → HyperGraph) + +Reference: issue #127's standard for non-trivial cross-domain reductions. + +### Presentation + +The ranked table groups candidates: +``` +--- Rules (models exist) --- +1. Rule: 3SAT → MaxCut Score: 21 +2. Rule: MaxClique ↔ MaxIS Score: 17 +--- Models + Rules (both needed) --- +3. Model: Partition Score: 22 +4. Rule: Partition → BinPacking Score: 21 +--- Models (standalone) --- +5. Model: VehicleRouting Score: 14 +``` + +## Fix 4: Candidate Limit 10–20 + +**Problem:** Hard limit of 10 was too restrictive. + +**Solution:** Present 10–20 candidates. Default target: ~15. Hard cap: 20 (to avoid overwhelming the user and taking too long to file). 
If discovery returns fewer than 10 quality candidates, present all. + +## Fix 5: Sub-Skills for Issue Filing + +**Problem:** Issue filing was inline in zero-to-infinity with no reusable structure. + +**Solution:** Create two new standalone skills: + +### `.claude/skills/add-issue-model/SKILL.md` + +**Input:** Problem name, brief description, references (from zero-to-infinity candidate data) + +**Process:** +1. Web search to fill all 11 items from add-model Step 0 checklist +2. Double-check the model doesn't already exist in repo (`src/models/`, open issues) +3. Enforce: citation for every complexity claim, concrete example, algorithm with reference +4. Draft issue body, show to user for confirmation +5. File via `gh issue create --title "[Model] ProblemName" --body ...` + +### `.claude/skills/add-issue-rule/SKILL.md` + +**Input:** Source problem, target problem, references (from zero-to-infinity candidate data) + +**Process:** +1. Web search to fill all 9 items from add-rule Step 0 checklist +2. Double-check the rule doesn't already exist in repo (`src/rules/`, open issues) +3. Enforce: citation, worked step-by-step example, correctness proof sketch +4. Draft issue body, show to user for confirmation +5. File via `gh issue create --title "[Rule] Source to Target" --body ...` + +### Integration with zero-to-infinity + +Step 5 dispatches parallel subagents, each running `add-issue-model` or `add-issue-rule` for its assigned candidate. The parent skill collects results and reports filed issue URLs. + +## Files Changed + +1. **Modified:** `.claude/skills/zero-to-infinity/SKILL.md` — all 5 fixes +2. **New:** `.claude/skills/add-issue-model/SKILL.md` — model issue filing sub-skill +3. **New:** `.claude/skills/add-issue-rule/SKILL.md` — rule issue filing sub-skill +4. 
**Modified:** `.claude/CLAUDE.md` — register 2 new skills diff --git a/docs/plans/2026-03-04-zero-to-infinity-v2-impl.md b/docs/plans/2026-03-04-zero-to-infinity-v2-impl.md new file mode 100644 index 00000000..7abdef12 --- /dev/null +++ b/docs/plans/2026-03-04-zero-to-infinity-v2-impl.md @@ -0,0 +1,610 @@ +# Zero-to-Infinity v2 Implementation Plan + +> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. + +**Goal:** Fix 5 issues in the zero-to-infinity skill found during manual testing, and create two new standalone sub-skills for issue filing. + +**Architecture:** Update SKILL.md with cascading survey, inventory-first dedup, rules-over-models prioritization, 10-20 candidate limit. Create add-issue-model and add-issue-rule as standalone skills that handle template-compliant issue creation. + +**Tech Stack:** Claude Code skills (Markdown), GitHub CLI + +--- + +### Task 1: Create add-issue-model skill + +**Files:** +- Create: `.claude/skills/add-issue-model/SKILL.md` + +**Step 1: Write the skill file** + +```markdown +--- +name: add-issue-model +description: Use when filing a GitHub issue for a new problem model, ensuring all 11 checklist items from add-model are complete with citations +--- + +# Add Issue — Model + +File a well-formed `[Model]` GitHub issue that passes the `issue-to-pr` validation. This skill ensures all 11 checklist items are complete, cited, and verified against the repo. 
+
+## Input
+
+The caller (zero-to-infinity or user) provides:
+- Problem name
+- Brief description / definition sketch
+- Reference URLs (if available)
+
+## Step 1: Verify Non-Existence
+
+Before anything else, confirm the model doesn't already exist:
+
+```bash
+# Check implemented models (look for matching filename)
+ls src/models/*/ | grep -i "<problem-name>"
+
+# Check open issues
+gh issue list --state open --limit 200 --json title,number | grep -i "<problem-name>"
+
+# Check closed issues
+gh issue list --state closed --limit 200 --json title,number | grep -i "<problem-name>"
+```
+
+**If found:** STOP. Report to caller that this model already exists (with issue number or file path).
+
+## Step 2: Research and Fill Checklist
+
+Use `WebSearch` and `WebFetch` to fill all 11 items from the [add-model](../add-model/SKILL.md) Step 0 checklist:
+
+| # | Item | How to fill |
+|---|------|-------------|
+| 1 | **Problem name** | Use optimization prefix convention: `Maximum*`, `Minimum*`, or no prefix. Check CLAUDE.md "Problem Names" |
+| 2 | **Mathematical definition** | Formal definition from textbook/paper. Must include input, output, and objective |
+| 3 | **Problem type** | Optimization (maximize/minimize) or Satisfaction (decision). 
Determines trait impl | +| 4 | **Type parameters** | Usually `G: Graph, W: WeightElement` for graph problems, or none | +| 5 | **Struct fields** | What the struct holds (graph, weights, parameters) | +| 6 | **Configuration space** | What `dims()` returns — e.g., `vec![2; n]` for binary selection over n items | +| 7 | **Feasibility check** | How to determine if a configuration is valid | +| 8 | **Objective function** | How to compute the metric from a valid configuration | +| 9 | **Best known exact algorithm** | Complexity with concrete numbers, author, year, citation URL | +| 10 | **Solving strategy** | BruteForce, ILP reduction, or custom solver | +| 11 | **Category** | `graph/`, `formula/`, `set/`, `algebraic/`, or `misc/` | + +**Citation rule:** Every complexity claim and algorithm reference MUST include a URL (paper, Wikipedia, lecture notes). + +## Step 3: Verify Algorithm Correctness + +For item 9 (best known exact algorithm): +- Cross-check the complexity claim against at least 2 independent sources +- Ensure the complexity uses concrete numeric values (e.g., `1.1996^n`), not symbolic constants +- Verify the variable in the complexity expression maps to a natural size getter (e.g., `n = |V|` → `num_vertices`) + +## Step 4: Draft and File Issue + +Draft the issue body with all 11 items clearly formatted: + +```bash +gh issue create --repo CodingThrust/problem-reductions \ + --title "[Model] ProblemName" \ + --body "$(cat <<'ISSUE_EOF' +## Problem Definition + +**1. Problem name:** `ProblemName` + +**2. Mathematical definition:** ... + +**3. Problem type:** Optimization (Maximize) / Satisfaction + +**4. Type parameters:** `G: Graph, W: WeightElement` / None + +**5. Struct fields:** +- `field: Type` — description + +**6. Configuration space:** `dims() = vec![2; n]` + +**7. Feasibility check:** ... + +**8. Objective function:** ... + +**9. Best known exact algorithm:** O(...) by Author (Year). [Reference](url) + +**10. 
Solving strategy:** BruteForce / ILP reduction + +**11. Category:** `graph/` / `formula/` / `set/` / `algebraic/` / `misc/` + +## References +- [Source 1](url1) +- [Source 2](url2) +ISSUE_EOF +)" +``` + +Report the created issue number and URL. + +## Common Mistakes + +| Mistake | Fix | +|---------|-----| +| Missing complexity citation | Every algorithm claim needs author + year + URL | +| Symbolic constants in complexity | Use concrete numbers: `1.1996^n` not `(2-epsilon)^n` | +| Wrong optimization prefix | Check CLAUDE.md "Problem Names" for conventions | +| Not checking repo first | Always run Step 1 before researching | +``` + +**Step 2: Verify file was created correctly** + +Read: `.claude/skills/add-issue-model/SKILL.md` +Expected: File exists with correct YAML frontmatter + +**Step 3: Commit** + +```bash +git add .claude/skills/add-issue-model/SKILL.md +git commit -m "feat: add add-issue-model skill for filing model issues" +``` + +--- + +### Task 2: Create add-issue-rule skill + +**Files:** +- Create: `.claude/skills/add-issue-rule/SKILL.md` + +**Step 1: Write the skill file** + +```markdown +--- +name: add-issue-rule +description: Use when filing a GitHub issue for a new reduction rule, ensuring all 9 checklist items from add-rule are complete with citations and worked examples +--- + +# Add Issue — Rule + +File a well-formed `[Rule]` GitHub issue that passes the `issue-to-pr` validation. This skill ensures all 9 checklist items are complete, with citations, a worked example, and a correctness argument. 
+
+## Input
+
+The caller (zero-to-infinity or user) provides:
+- Source problem name
+- Target problem name
+- Reference URLs (if available)
+
+## Step 1: Verify Non-Existence
+
+Before anything else, confirm the rule doesn't already exist:
+
+```bash
+# Check implemented rules (filename pattern: source_target.rs)
+ls src/rules/ | grep -i "<source>.*<target>"
+
+# Check open issues
+gh issue list --state open --limit 200 --json title,number | grep -i "<source>.*<target>"
+
+# Check closed issues
+gh issue list --state closed --limit 200 --json title,number | grep -i "<source>.*<target>"
+```
+
+**If found:** STOP. Report to caller that this rule already exists.
+
+**Also verify both source and target models exist:**
+```bash
+ls src/models/*/ | grep -i "<source>"
+ls src/models/*/ | grep -i "<target>"
+```
+
+If source or target model doesn't exist, report which model(s) are missing. The caller should file model issues first.
+
+## Step 2: Research and Fill Checklist
+
+Use `WebSearch` and `WebFetch` to fill all 9 items from the [add-rule](../add-rule/SKILL.md) Step 0 checklist:
+
+| # | Item | How to fill |
+|---|------|-------------|
+| 1 | **Source problem** | Full type with generics: `ProblemName` |
+| 2 | **Target problem** | Full type with generics |
+| 3 | **Reduction algorithm** | Step-by-step: how to transform source instance to target instance |
+| 4 | **Solution extraction** | How to map target solution back to source solution |
+| 5 | **Correctness argument** | Why the reduction preserves optimality/satisfiability |
+| 6 | **Size overhead** | Expressions for target size in terms of source size getters |
+| 7 | **Concrete example** | Small worked instance, tutorial style, step-by-step |
+| 8 | **Solving strategy** | How to solve the target (BruteForce, existing solver) |
+| 9 | **Reference** | Paper/textbook citation with URL |
+
+**Citation rule:** Every claim MUST include a URL. 
+ +## Step 3: Verify Example Correctness + +For item 7 (concrete example): +- Walk through the reduction step-by-step on paper +- Show: source instance → reduction → target instance → solve target → extract source solution +- Verify the extracted solution is valid and optimal for the source +- The example must be small enough to verify by hand (3-5 vertices/variables) + +## Step 4: Verify Nontriviality + +The rule must be **nontrivial** (per issue #127 standards): +- NOT a simple identity mapping or type cast +- NOT a trivial embedding (just copying data) +- NOT a weight type conversion (i32 → f64) +- MUST involve meaningful structural transformation + +If the rule is trivial, STOP and report to caller. + +## Step 5: Draft and File Issue + +```bash +gh issue create --repo CodingThrust/problem-reductions \ + --title "[Rule] Source to Target" \ + --body "$(cat <<'ISSUE_EOF' +## Reduction Definition + +**1. Source problem:** `SourceProblem` + +**2. Target problem:** `TargetProblem<...>` + +**3. Reduction algorithm:** +- Step 1: ... +- Step 2: ... + +**4. Solution extraction:** ... + +**5. Correctness argument:** ... + +**6. Size overhead:** +``` +field1 = "expression1" +field2 = "expression2" +``` + +**7. Concrete example:** +Source: ... +→ Reduction: ... +→ Target: ... +→ Solve: ... +→ Extract: ... + +**8. Solving strategy:** BruteForce / existing solver + +**9. Reference:** +- [Source](url) + +## References +- [Source 1](url1) +ISSUE_EOF +)" +``` + +Report the created issue number and URL. 
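A lightweight pre-flight check can catch an incomplete draft before `gh issue create` runs. This is only a sketch: the temp path and the stub body below are illustrative, not part of the skill's required workflow.

```bash
# Draft saved to a temp file (hypothetical path; adapt to your workflow).
cat > /tmp/issue_body.md <<'EOF'
**1. Source problem:** `SourceProblem`
**2. Target problem:** `TargetProblem`
**3. Reduction algorithm:** ...
**4. Solution extraction:** ...
**5. Correctness argument:** ...
**6. Size overhead:** ...
**7. Concrete example:** ...
**8. Solving strategy:** BruteForce
**9. Reference:** https://example.org
EOF

# Count the numbered checklist entries; a complete draft has all 9.
count=$(grep -cE '^\*\*[1-9]\.' /tmp/issue_body.md)
if [ "$count" -eq 9 ]; then
  echo "draft complete"
else
  echo "draft missing items: found $count of 9" >&2
fi
```

A complete draft prints `draft complete`; anything less is reported before the issue is filed.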
+ +## Common Mistakes + +| Mistake | Fix | +|---------|-----| +| Filing trivial reductions | Check nontriviality in Step 4 | +| Missing model dependency | Verify both source and target exist in Step 1 | +| Example too complex | Keep to 3-5 vertices/variables, verifiable by hand | +| Missing correctness argument | Must explain WHY, not just HOW | +| Wrong overhead expressions | Must reference getter methods that exist on source type | +``` + +**Step 2: Verify file was created correctly** + +Read: `.claude/skills/add-issue-rule/SKILL.md` +Expected: File exists with correct YAML frontmatter + +**Step 3: Commit** + +```bash +git add .claude/skills/add-issue-rule/SKILL.md +git commit -m "feat: add add-issue-rule skill for filing rule issues" +``` + +--- + +### Task 3: Rewrite zero-to-infinity SKILL.md with all 5 fixes + +**Files:** +- Modify: `.claude/skills/zero-to-infinity/SKILL.md` + +**Step 1: Rewrite the entire skill file** + +Replace the full contents of `.claude/skills/zero-to-infinity/SKILL.md` with the following (this incorporates all 5 fixes): + +```markdown +--- +name: zero-to-infinity +description: Use when you want to discover and prioritize new problems and reduction rules to add to the codebase, based on user-ranked impact dimensions +--- + +# Zero to Infinity + +Discover high-impact problems and reduction rules, rank them by user priorities, and file them as GitHub issues — feeding the existing `issue-to-pr` / `meta-power` pipeline. + +## Overview + +This skill bridges "what should we add next?" with the implementation pipeline. It does NOT write code — it creates well-formed `[Model]` and `[Rule]` issues via the `add-issue-model` and `add-issue-rule` sub-skills. + +## Step 1: Survey — Rank Impact Dimensions + +Rank dimensions using **cascading elimination** — each round removes previously selected options. 
+ +**Default dimensions:** + +| # | Dimension | Description | +|---|-----------|-------------| +| 0 | Academic Publications | Papers in JACM, SICOMP, and top CS venues studying this problem/reduction | +| 1 | Industrial Application | Real-world use cases (search engines, navigation, scheduling, compilers) | +| 2 | Cross-Field Application | Relevance to physics, chemistry, biology, or other scientific domains | +| 3 | Top-Scientists Interest | Featured in Karp's 21, Garey & Johnson, or by researchers like Aaronson | +| 4 | Graph Connectivity | Bridges disconnected components in the existing reduction graph | +| 5 | Pedagogical Value | Clean, illustrative reductions good for teaching | + +### Cascading Elimination Process + +Maintain a list of `remaining_dimensions` (initially all 6). For each round: + +1. Present `remaining_dimensions` as options via `AskUserQuestion`: "Which is your #K priority?" +2. User selects one → assign it rank K +3. Remove selected dimension from `remaining_dimensions` +4. Repeat until 2 remain → user picks between them, last one auto-assigned to bottom rank + +**Example flow:** +``` +Round 1 (6 options): "#1 priority?" → User picks "Cross-Field" +Round 2 (5 options): "#2 priority?" → User picks "Industry" (Cross-Field removed) +Round 3 (4 options): "#3 priority?" → User picks "TopSci" (Cross-Field + Industry removed) +Round 4 (3 options): "#4 priority?" → User picks "Academic" (3 removed) +Round 5 (2 options): "#5 priority?" → User picks one, last auto-assigned #6 +``` + +**IMPORTANT:** You MUST track which dimensions have been selected and exclude them from subsequent AskUserQuestion calls. Never show an already-ranked dimension again. + +**Scoring weights:** For N dimensions, rank k gets weight N - k + 1. + +User may also add custom dimensions during the first round. 
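The weight rule is easy to sanity-check. A minimal sketch, with dimension names taken from the example flow above:

```bash
# Ranked dimensions, highest priority first (order from the example flow).
ranked="Cross-Field Industry TopSci Academic GraphConn Pedagogical"

set -- $ranked   # word-split into positional parameters
n=$#             # N = 6 dimensions
k=1
for dim in "$@"; do
  # Rank k gets weight N - k + 1, so: 6, 5, 4, 3, 2, 1.
  echo "$dim weight=$((n - k + 1))"
  k=$((k + 1))
done
```

Rank 1 (`Cross-Field`) gets weight 6 and rank 6 (`Pedagogical`) gets weight 1, matching the N - k + 1 rule.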
+ +## Step 2: Discover — Inventory First, Then Search + +### Phase 1: Build Exclusion Set (MANDATORY FIRST STEP) + +Before any web search or analysis, build a complete inventory of what already exists: + +```bash +# Implemented models +ls src/models/*/ + +# Implemented rules +ls src/rules/ + +# Open issues (increase limit to 200) +gh issue list --state open --limit 200 --json title,number + +# Closed issues +gh issue list --state closed --limit 200 --json title,number +``` + +Build a named **exclusion set** containing: +- Every implemented model name (from filenames) +- Every implemented rule (source→target pairs from filenames) +- Every issue title mentioning a problem or rule name + +**Pass this exclusion set to both discovery channels.** + +### Phase 2: Discover (parallel, with exclusion set) + +Run two channels in parallel (use `dispatching-parallel-agents` or concurrent subagents). Both channels receive the exclusion set and must filter results against it during discovery. + +#### Channel A: Web Search + +Search queries targeting the user's top-ranked dimensions: +- `"classical NP-complete problems Karp's 21 reductions"` +- `"NP-hard problems {top_dimension_keyword}"` (e.g., `"NP-hard problems condensed matter physics"`) +- `"polynomial reductions from {existing_problem}"` for each problem with few outgoing edges +- `"important reductions computational complexity textbook"` + +For each candidate, collect: +- Formal problem name +- Brief definition +- Known reductions to/from other problems +- Complexity class and best known algorithms +- Reference URLs + +**Filter:** Immediately discard any candidate in the exclusion set. 
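The exclusion-set mechanics can be sketched as follows; the inline names stand in for real `ls`/`gh` output, which a real run would pipe in instead:

```bash
# Normalize known names (models, rules, issue titles) into one sorted file.
printf '%s\n' MaximumIndependentSet MaxCut SpinGlass \
  | tr '[:upper:]' '[:lower:]' | sort -u > /tmp/exclusion.txt

# Drop any discovered candidate whose name matches a known entry
# (-F fixed strings, -f pattern file, -i case-insensitive, -v keep non-matches).
printf '%s\n' MaxCut Partition VertexCover \
  | grep -i -v -F -f /tmp/exclusion.txt
```

Here `MaxCut` is dropped while `Partition` and `VertexCover` survive. Note that `-F` matches substrings, so short names can over-exclude (e.g. `maxcut` would also drop a hypothetical `MaxCutWidth`); add `-x` for whole-line matching if that matters.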
+ +#### Channel B: Reduction Graph Gap Analysis + +```bash +cat docs/data/reduction_graph.json +``` + +Identify: +- **Dead-end problems**: nodes with no outgoing reductions +- **Missing natural reductions**: pairs of related problems without a direct edge +- **Disconnected components**: subgraphs that could be bridged by a single reduction +- **Well-known textbook reductions** not yet implemented (Garey & Johnson, CLRS, Arora & Barak) + +**Filter:** Only suggest gaps where neither the model nor rule is in the exclusion set. + +### Phase 3: Final Deduplication + +Merge results from both channels. Remove any remaining duplicates (same problem/rule found by both channels). + +## Step 3: Rank — Score, Sort, and Prioritize + +For each candidate, assign a score (0-5) per dimension: + +| Score | Meaning | +|-------|---------| +| 0 | No relevance | +| 1 | Marginal | +| 2 | Some relevance | +| 3 | Moderate | +| 4 | Strong | +| 5 | Exceptional | + +**Total score** = sum of (dimension_score × dimension_weight) for all dimensions. + +### Filing Priority Order + +After scoring, group candidates by priority: + +``` +--- Group 1: Rules between existing models (highest value) --- +Rules where BOTH source and target models already exist in the codebase. +These can be implemented immediately without new models. + +--- Group 2: Models + Rules (both needed) --- +A model that enables one or more high-value rules. File model first, rules second. +List the model and its dependent rules together. + +--- Group 3: Standalone models (lowest priority) --- +Models with no immediate rule connection to existing problems. +``` + +Within each group, sort by total score descending. 
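Score aggregation and the descending sort within a group can be sketched with awk. The candidate names and per-dimension scores below are illustrative placeholders, listed in rank order:

```bash
# Column 1: candidate; columns 2..7: scores for ranks 1..6.
cat > /tmp/scores.txt <<'EOF'
3SAT->MaxCut 5 4 3 2 1 1
Partition 4 4 2 3 1 2
VehicleRouting 1 5 1 1 0 2
EOF

# total = sum of score * weight, where the rank-k weight is N - k + 1.
awk '{
  total = 0
  for (i = 2; i <= NF; i++) total += $i * (NF - i + 1)
  print total, $1
}' /tmp/scores.txt | sort -rn
```

With these placeholder scores the sketch prints `71 3SAT->MaxCut`, then `65 Partition`, then `40 VehicleRouting`.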
+ +### Nontrivial Filter + +**Exclude** candidate rules that are: +- Identity mappings or trivial embeddings +- Simple type/weight casts (i32 → f64) +- Variant promotions (SimpleGraph → HyperGraph) +- Any reduction without meaningful structural transformation + +Reference: issue #127's standard for non-trivial cross-domain reductions. + +### Present 10–20 Candidates + +Present the ranked table with **10–20 candidates** (default target: ~15, hard cap: 20). If discovery returns fewer than 10 quality candidates, present all. + +``` +| # | Group | Type | Name | Score | Top Dimensions Hit | +|---|-------|-------|-------------------|-------|----------------------------| +| | **Rules (models exist)** | +| 1 | 1 | Rule | 3SAT → MaxCut | 21 | Academic(5), TopSci(4) | +| 2 | 1 | Rule | MaxClique ↔ MaxIS | 17 | GraphConn(5), Pedagogical(5)| +| | **Models + Rules** | +| 3 | 2 | Model | Partition | 22 | Industry(4), TopSci(4) | +| 4 | 2 | Rule | Partition → BinPack| 21 | GraphConn(5), Industry(4) | +| | **Standalone models** | +| 5 | 3 | Model | VehicleRouting | 14 | Industry(5) | +... +``` + +Include a 1-line justification for each candidate's top scores. + +## Step 4: Select — User Picks Candidates + +Use `AskUserQuestion` with `multiSelect: true` to let the user choose which candidates to file as issues. + +Present each candidate as an option with its score, group, and type in the label. + +**Hint to user:** Filing rules is higher impact than filing standalone models, since rules connect the graph. + +## Step 5: File — Dispatch Sub-Skills + +For each selected candidate, dispatch a subagent running the appropriate sub-skill: + +- **Model candidates:** Invoke `add-issue-model` with the problem name, definition, and references +- **Rule candidates:** Invoke `add-issue-rule` with source, target, and references + +**Parallelization:** Use `dispatching-parallel-agents` to file multiple issues concurrently. Each subagent independently: +1. Verifies non-existence (double-check) +2. 
Researches to fill the full checklist +3. Drafts the issue +4. Files via `gh issue create` +5. Reports the issue URL + +**Ordering constraint:** If a model and its dependent rules are both selected, file the model FIRST (sequential), then file rules (can be parallel with each other). + +Collect all filed issue URLs and present a summary table. + +## Step 6: Implement (Optional) + +After all issues are filed, ask: + +``` +Would you like to invoke meta-power to automatically implement these issues? +``` + +If yes, invoke the `meta-power` skill. If no, stop — the user can run `/meta-power` later. + +## Key Constraints + +- **No code writing**: This skill only creates issues. Implementation is delegated to downstream skills. +- **No duplicates**: Inventory check (Phase 1) is mandatory BEFORE any discovery. +- **Template compliance**: Every issue must fully satisfy the `add-model` or `add-rule` checklist. Incomplete issues get rejected by `issue-to-pr`. +- **Citations required**: Every claim about a problem's complexity, applications, or significance must include a reference URL. +- **Nontrivial rules only**: No identity mappings, type casts, or trivial embeddings. +- **User approval gates**: The user approves at two points — candidate selection (Step 4) and optionally at issue draft (via sub-skills). +- **Rules over models**: Prioritize rules between existing models over standalone models. 
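The Step 5 ordering constraint (model first, dependent rules concurrently afterwards) amounts to one sequential call followed by backgrounded ones. A sketch with hypothetical stub functions standing in for the real subagent dispatches (the second rule name is invented for the example):

```bash
# Stubs standing in for add-issue-model / add-issue-rule subagents.
file_model() { echo "filed [Model] $1"; }
file_rule()  { echo "filed [Rule] $1"; }

# The model must land first (sequential)...
file_model "Partition"

# ...then its dependent rules may run concurrently with each other.
file_rule "Partition to BinPacking" &
file_rule "Partition to SubsetSum" &
wait
echo "all selected issues filed"
```

`wait` blocks until both backgrounded rule filings finish, so the summary line only prints once everything has landed.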
+ +## Common Mistakes + +| Mistake | Fix | +|---------|-----| +| Repeated survey options | Use cascading elimination — track and exclude selected dimensions | +| Filing issues without inventory check | Always run Phase 1 (exclusion set) BEFORE discovery | +| Presenting trivial rules | Apply nontrivial filter — no identity maps, type casts, or embeddings | +| Filing model when only rule is needed | Check if models already exist; file rules first | +| Too many candidates | Hard cap at 20; default target ~15 | +| Filing without sub-skill | Always dispatch via `add-issue-model` or `add-issue-rule` for template compliance | +| Showing >4 options in one AskUserQuestion | AskUserQuestion supports max 4 options; for candidate selection use multiSelect with up to 4 per call, or present in batches | +``` + +**Step 2: Verify the rewritten file** + +Read: `.claude/skills/zero-to-infinity/SKILL.md` +Expected: Contains "Cascading Elimination", "Phase 1: Build Exclusion Set", "Filing Priority Order", "10–20 candidates", "add-issue-model", "add-issue-rule" + +**Step 3: Commit** + +```bash +git add .claude/skills/zero-to-infinity/SKILL.md +git commit -m "fix: zero-to-infinity v2 — cascading survey, inventory-first dedup, rules-over-models, sub-skills" +``` + +--- + +### Task 4: Register new skills in CLAUDE.md + +**Files:** +- Modify: `.claude/CLAUDE.md` (line ~16, after zero-to-infinity entry) + +**Step 1: Add two new skill entries** + +After the existing `zero-to-infinity` line, add: + +```markdown +- [add-issue-model](skills/add-issue-model/SKILL.md) -- File a well-formed `[Model]` GitHub issue with all 11 checklist items, citations, and repo verification. +- [add-issue-rule](skills/add-issue-rule/SKILL.md) -- File a well-formed `[Rule]` GitHub issue with all 9 checklist items, worked example, correctness argument, and nontriviality check. 
+``` + +**Step 2: Verify** + +Read: `.claude/CLAUDE.md` lines 1-20 +Expected: Both new skills appear in the Skills list + +**Step 3: Commit** + +```bash +git add .claude/CLAUDE.md +git commit -m "docs: register add-issue-model and add-issue-rule skills in CLAUDE.md" +``` + +--- + +### Task 5: Push and update PR + +**Step 1: Push all commits** + +```bash +git push +``` + +**Step 2: Verify PR is updated** + +```bash +gh pr view --web +``` + +Expected: PR shows 4 new commits with all skill files.