diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md index 84877d21..ad5e592e 100644 --- a/.claude/CLAUDE.md +++ b/.claude/CLAUDE.md @@ -44,7 +44,8 @@ make diagrams # Generate SVG diagrams from Typst (light + dark) make examples # Generate example JSON for paper make compare # Generate and compare Rust mapping exports make jl-testdata # Regenerate Julia parity test data (requires julia) -make cli # Build the pred CLI tool (release mode) +make cli # Build the pred CLI tool (without MCP, fast) +make mcp # Build the pred CLI tool with MCP server support make cli-demo # Run closed-loop CLI demo (exercises all commands) make mcp-test # Run MCP server tests (unit + integration) make run-plan # Execute a plan with Claude autorun @@ -64,11 +65,12 @@ make release V=x.y.z # Tag and push a new release (CI publishes to crates.io) ### Core Modules - `src/models/` - Problem implementations organized by input structure: - - `graph/` - Problems on graphs (MIS, MaxClique, MaxCut, MinVC, MinDS, MaxMatching, MaximalIS, KColoring, TSP, SpinGlass, BicliqueCover) - - `formula/` - Logical formulas and circuits (SAT, k-SAT, CircuitSAT) - - `set/` - Set systems (MinSetCovering, MaxSetPacking) - - `algebraic/` - Matrices, linear systems, lattices (QUBO, ILP, CVP, BMF) - - `misc/` - Unique input structures (BinPacking, PaintShop, Factoring) + - `graph/` - Graph-input problems + - `formula/` - Boolean formulas and circuits + - `set/` - Set systems (universe + subsets) + - `algebraic/` - Matrices, linear systems, lattices + - `misc/` - Unique input structures + - Run `pred list` for the full catalog of problems, variants, and reductions; `pred show ` for details on a specific problem - `src/rules/` - Reduction rules + inventory registration - `src/solvers/` - BruteForce solver, ILP solver (feature-gated) - `src/traits.rs` - `Problem`, `OptimizationProblem`, `SatisfactionProblem` traits @@ -91,7 +93,8 @@ Problem (core trait — all problems must implement) ├── fn dims(&self) -> Vec // config space: 
[2, 2, 2] for 3 binary variables ├── fn evaluate(&self, config) -> Metric ├── fn variant() -> Vec<(&str, &str)> // e.g., [("graph","SimpleGraph"), ("weight","i32")] -└── fn num_variables(&self) -> usize // default: dims().len() +├── fn num_variables(&self) -> usize // default: dims().len() +└── fn problem_type() -> ProblemType // catalog bridge: registry lookup by NAME OptimizationProblem : Problem> (extension for optimization) │ @@ -111,7 +114,7 @@ enum Direction { Maximize, Minimize } ### Key Patterns - `variant_params!` macro implements `Problem::variant()` — e.g., `crate::variant_params![G, W]` for two type params, `crate::variant_params![]` for none (see `src/variant.rs`) -- `declare_variants!` proc macro registers concrete type instantiations with best-known complexity — must appear in every model file (see `src/models/graph/maximum_independent_set.rs`). Variable names in complexity strings are validated at compile time against actual getter methods. +- `declare_variants!` proc macro registers concrete type instantiations with best-known complexity and registry-backed dynamic dispatch metadata — every entry must specify `opt` or `sat`, and one entry per problem may be marked `default` (see `src/models/graph/maximum_independent_set.rs`). Variable names in complexity strings are validated at compile time against actual getter methods. - Problems parameterized by graph type `G` and optionally weight type `W` (problem-dependent) - `ReductionResult` provides `target_problem()` and `extract_solution()` - `Solver::find_best()` → `Option>` for optimization problems; `Solver::find_satisfying()` → `Option>` for `Metric = bool` @@ -142,25 +145,31 @@ impl ReduceTo for Source { ... 
} - `Expr::parse()` provides runtime parsing for cross-check tests that compare compiled vs symbolic evaluation ### Problem Names -Problem types use explicit optimization prefixes: -- `MaximumIndependentSet`, `MaximumClique`, `MaximumMatching`, `MaximumSetPacking` -- `MinimumVertexCover`, `MinimumDominatingSet`, `MinimumSetCovering` -- No prefix: `MaxCut`, `SpinGlass`, `QUBO`, `ILP`, `Satisfiability`, `KSatisfiability`, `CircuitSAT`, `Factoring`, `MaximalIS`, `PaintShop`, `BicliqueCover`, `BMF`, `KColoring`, `TravelingSalesman` +Problem types use explicit optimization prefixes (`Maximum...`, `Minimum...`) or no prefix. Run `pred list` for the full catalog. Common aliases (e.g., `MIS` → `MaximumIndependentSet`, `MVC` → `MinimumVertexCover`) are shown in the `Aliases` column. -### Problem Variant IDs +### Problem Variants Reduction graph nodes use variant key-value pairs from `Problem::variant()`: - Base: `MaximumIndependentSet` (empty variant = defaults) - Graph variant: `MaximumIndependentSet {graph: "KingsSubgraph", weight: "One"}` - Weight variant: `MaximumIndependentSet {graph: "SimpleGraph", weight: "f64"}` - Default variant ranking: `SimpleGraph`, `One`, `KN` are considered default values; variants with the most default values sort first - Nodes come exclusively from `#[reduction]` registrations; natural edges between same-name variants are inferred from the graph/weight subtype partial order +- Each primitive reduction is determined by the exact `(source_variant, target_variant)` endpoint pair +- `#[reduction]` accepts only `overhead = { ... 
}` + +### Extension Points +- New models register dynamic load/serialize/brute-force dispatch through `declare_variants!` in the model file, not by adding manual match arms in the CLI +- Exact registry dispatch lives in `src/registry/`; alias resolution and partial/default variant resolution live in `problemreductions-cli/src/problem_name.rs` +- `pred create` UX lives in `problemreductions-cli/src/commands/create.rs` +- Canonical paper and CLI examples live in `src/example_db/model_builders.rs` and `src/example_db/rule_builders.rs` ## Conventions ### File Naming - Reduction files: `src/rules/<source>_<target>.rs` (e.g., `maximumindependentset_qubo.rs`) - Model files: `src/models/<category>/<name>.rs` — category is by input structure: `graph/` (graph input), `formula/` (boolean formula/circuit), `set/` (universe + subsets), `algebraic/` (matrix/linear system/lattice), `misc/` (other) -- Example files: `examples/reduction_<source>_to_<target>.rs` (must have `pub fn run()` + `fn main() { run() }`) +- Canonical examples: builder functions in `src/example_db/rule_builders.rs` and `src/example_db/model_builders.rs` +- Example binaries in `examples/`: utility/export tools and pedagogical demos only (not per-reduction files) - Test naming: `test_<source>_to_<target>_closed_loop` ### Paper (docs/paper/reductions.typ) @@ -194,14 +203,15 @@ See Key Patterns above for solver API signatures. Follow the reference files for ### File Organization -Unit tests in `src/unit_tests/` linked via `#[path]` (see Core Modules above). Integration tests in `tests/suites/`, consolidated through `tests/main.rs`. Example tests in `tests/suites/examples.rs` using `include!` for direct invocation. +Unit tests in `src/unit_tests/` linked via `#[path]` (see Core Modules above). Integration tests in `tests/suites/`, consolidated through `tests/main.rs`. Canonical example-db coverage lives in `src/unit_tests/example_db.rs`. 
## Documentation Locations - `README.md` — Project overview and quickstart - `.claude/` — Claude Code instructions and skills - `docs/book/` — mdBook user documentation (built with `make doc`) - `docs/paper/reductions.typ` — Typst paper with problem definitions and reduction theorems -- `examples/` — Reduction example code (also used in paper and tests) +- `src/example_db/` — Canonical model/rule examples: `model_builders.rs`, `rule_builders.rs` (in-memory builders), `specs.rs` (per-module invariant specs), consumed by `pred create --example` and paper exports +- `examples/` — Export utilities, graph-analysis helpers, and pedagogical demos ## Documentation Requirements @@ -254,3 +264,4 @@ Overhead expressions describe how target problem size relates to source problem 2. Check that each field (e.g., `num_vertices`, `num_edges`, `num_sets`) matches the constructed target problem 3. Watch for common errors: universe elements mismatch (edge indices vs vertex indices), worst-case edge counts in intersection graphs (quadratic, not linear), constant factors in circuit constructions 4. Test with concrete small instances: construct a source problem, run the reduction, and compare target sizes against the formula +5. 
Ensure there is only one primitive reduction registration for each exact source/target variant pair; wrap shared helpers instead of registering duplicate endpoints diff --git a/.claude/skills/add-model/SKILL.md b/.claude/skills/add-model/SKILL.md index c6b5d91f..511d1cb0 100644 --- a/.claude/skills/add-model/SKILL.md +++ b/.claude/skills/add-model/SKILL.md @@ -36,8 +36,10 @@ Read these first to understand the patterns: - **Satisfaction problem:** `src/models/formula/sat.rs` - **Model tests:** `src/unit_tests/models/graph/maximum_independent_set.rs` - **Trait definitions:** `src/traits.rs` (`Problem`, `OptimizationProblem`, `SatisfactionProblem`) -- **CLI dispatch:** `problemreductions-cli/src/dispatch.rs` +- **Registry dispatch boundary:** `src/registry/mod.rs`, `src/registry/variant.rs` - **CLI aliases:** `problemreductions-cli/src/problem_name.rs` +- **CLI creation:** `problemreductions-cli/src/commands/create.rs` +- **Canonical model examples:** `src/example_db/model_builders.rs` ## Step 1: Determine the category @@ -92,16 +94,20 @@ Add `declare_variants!` at the bottom of the model file (after the trait impls, ```rust crate::declare_variants! 
{ - ProblemName => "1.1996^num_vertices", - ProblemName => "1.1996^num_vertices", + opt ProblemName => "1.1996^num_vertices", + default opt ProblemName => "1.1996^num_vertices", } ``` +- Each entry must include an explicit solver kind: + - `opt` for optimization problems (`BruteForce::find_best`) + - `sat` for satisfaction problems (`BruteForce::find_satisfying`) +- Mark exactly one concrete variant `default` when the problem has multiple registered variants - The complexity string references the getter method names from Step 1.5 (e.g., `num_vertices`) — variable names are validated at compile time against actual getters, so typos cause compile errors - One entry per supported `(graph, weight)` combination - The string is parsed as an `Expr` AST — supports `+`, `-`, `*`, `/`, `^`, `exp()`, `log()`, `sqrt()` - Use only concrete numeric values (e.g., `"1.1996^num_vertices"`, not `"(2-epsilon)^num_vertices"`) -- A compiled `complexity_eval_fn` is auto-generated alongside the symbolic expression +- A compiled `complexity_eval_fn` plus registry-backed load/serialize/solve dispatch metadata are auto-generated alongside the symbolic expression - See `src/models/graph/maximum_independent_set.rs` for the reference pattern ## Step 3: Register the model @@ -112,13 +118,14 @@ Update these files to register the new problem type: 2. `src/models/mod.rs` -- add to the appropriate re-export line 3. `src/lib.rs` or `prelude` -- if the type should be in `prelude::*`, add it there -## Step 4: Register in CLI +## Step 4: Register for CLI discovery -Update the CLI dispatch table so `pred` can load, solve, and serialize the new problem: +The CLI now loads, serializes, and brute-force solves problems through the core registry. Do **not** add manual match arms in `problemreductions-cli/src/dispatch.rs`. -1. 
**`problemreductions-cli/src/dispatch.rs`:** - - Add a match arm in `load_problem()` -- use `deser_opt::` for optimization or `deser_sat::` for satisfaction - - Add a match arm in `serialize_any_problem()` -- use `try_ser::` +1. **Registry-backed dispatch comes from `declare_variants!`:** + - Make sure every concrete variant you want the CLI to load is listed in `declare_variants!` + - Use the correct `opt`/`sat` marker per entry + - Mark the intended default variant with `default` when applicable 2. **`problemreductions-cli/src/problem_name.rs`:** - Add a lowercase alias mapping in `resolve_alias()` (e.g., `"newproblem" => "NewProblem".to_string()`) @@ -139,6 +146,15 @@ Update `problemreductions-cli/src/commands/create.rs` so `pred create ` +- paper/example exports +- example-db invariants tested in `src/unit_tests/example_db.rs` + ## Step 5: Write unit tests Create `src/unit_tests/models//.rs`: @@ -233,12 +249,13 @@ If running standalone (not inside `make run-plan`), invoke [review-implementatio | Missing `#[path]` test link | Add `#[cfg(test)] #[path = "..."] mod tests;` at file bottom | | Wrong `dims()` | Must match the actual configuration space (e.g., `vec![2; n]` for binary) | | Not registering in `mod.rs` | Must update both `/mod.rs` and `models/mod.rs` | -| Forgetting `declare_variants!` | Required for variant complexity metadata used by the paper's auto-generated table | -| Forgetting CLI dispatch | Must add match arms in `dispatch.rs` (`load_problem` + `serialize_any_problem`) | +| Forgetting `declare_variants!` | Required for variant complexity metadata and registry-backed load/serialize/solve dispatch | +| Wrong `declare_variants!` syntax | Every entry now needs `opt` or `sat`; one entry per problem may be marked `default` | | Forgetting CLI alias | Must add lowercase entry in `problem_name.rs` `resolve_alias()` | | Inventing short aliases | Only use well-established literature abbreviations (MIS, SAT, TSP); do NOT invent new ones | | Forgetting CLI 
create | Must add creation handler in `commands/create.rs` and flags in `cli.rs` | | Missing from CLI help table | Must add entry to "Flags by problem type" table in `cli.rs` `after_help` | | Schema lists derived fields | Schema should list constructor params, not internal fields (e.g., `matrix, k` not `matrix, m, n, k`) | +| Missing canonical model example | Add a builder in `src/example_db/model_builders.rs` and keep it aligned with paper/example workflows | | Forgetting trait_consistency | Must add entry in `test_all_problems_implement_trait_correctly` (and `test_direction` for optimization) in `src/unit_tests/trait_consistency.rs` | | Paper example not tested | Must include `test__paper_example` that verifies the exact instance, solution, and solution count shown in the paper | diff --git a/.claude/skills/add-rule/SKILL.md b/.claude/skills/add-rule/SKILL.md index 630df8b1..c8ac7e7c 100644 --- a/.claude/skills/add-rule/SKILL.md +++ b/.claude/skills/add-rule/SKILL.md @@ -32,8 +32,6 @@ If any item is missing, ask the user to provide it. Put a high standard on item Read these first to understand the patterns: - **Reduction rule:** `src/rules/minimumvertexcover_maximumindependentset.rs` - **Reduction tests:** `src/unit_tests/rules/minimumvertexcover_maximumindependentset.rs` -- **Example program:** `examples/reduction_minimumvertexcover_to_maximumindependentset.rs` -- **Example registration:** `tests/suites/examples.rs` - **Paper entry:** search `docs/paper/reductions.typ` for `MinimumVertexCover` `MaximumIndependentSet` - **Traits:** `src/rules/traits.rs` (`ReduceTo`, `ReductionResult`) @@ -84,6 +82,8 @@ impl ReduceTo for SourceType { } ``` +Each primitive reduction is determined by the exact source/target variant pair. Keep one primitive registration per endpoint pair and use only the `overhead` form of `#[reduction]`. 
+ ## Step 2: Register in mod.rs Add to `src/rules/mod.rs`: @@ -110,36 +110,15 @@ Additional recommended tests: Link via `#[cfg(test)] #[path = "..."] mod tests;` at the bottom of the rule file. -## Step 4: Write example program - -Create `examples/reduction__to_.rs`: - -Required structure: -- `pub fn run()` -- main logic (required for test harness) -- `fn main() { run() }` -- entry point -- Use regular comments (`//`), not doc comments -- Create source instance, reduce, solve, extract, verify, export JSON +## Step 4: Add canonical example to example_db -Register in `tests/suites/examples.rs`: -```rust -example_test!(reduction__to_); -// ... -example_fn!(test__to_, reduction__to_); -``` +Add a builder function in `src/example_db/rule_builders.rs` that constructs a small, canonical instance for this reduction. Follow the existing patterns in that file. Register the builder in `build_rule_examples()`. ## Step 5: Document in paper Write a `reduction-rule` entry in `docs/paper/reductions.typ`. **Reference example:** search for `reduction-rule("KColoring", "QUBO"` to see the gold-standard entry — use it as a template. For a minimal example, see MinimumVertexCover -> MaximumIndependentSet. -### 5a. Load example data - -```typst -#let src_tgt = load-example("_to_") -#let src_tgt_r = load-results("_to_") -#let src_tgt_sol = src_tgt_r.solutions.at(0) -``` - -### 5b. Write theorem body (rule statement) +### 5a. Write theorem body (rule statement) ```typst #reduction-rule("Source", "Target", @@ -152,7 +131,7 @@ Write a `reduction-rule` entry in `docs/paper/reductions.typ`. **Reference examp Three parts: complexity with citation, construction summary, overhead hint. -### 5c. Write proof body +### 5b. Write proof body Use these subsections with italic labels: @@ -170,17 +149,17 @@ Use these subsections with italic labels: Must be self-contained (all notation defined) and reproducible. -### 5d. Write worked example (extra block) +### 5c. 
Write worked example (extra block) Step-by-step walkthrough with concrete numbers from JSON data. Required steps: 1. Show source instance (dimensions, structure, graph visualization if applicable) 2. Walk through construction with intermediate values 3. Verify a concrete solution end-to-end -4. Solution count: `#src_tgt_r.solutions.len()` with combinatorial justification +4. Solution count: `#src_tgt.solutions.len()` with combinatorial justification Use `graph-colors`, `g-node()`, `g-edge()` for graph visualization — see reference examples. -### 5e. Build and verify +### 5d. Build and verify ```bash make examples # Regenerate example JSON @@ -207,15 +186,14 @@ If running standalone (not inside `make run-plan`), invoke [review-implementatio ## CLI Impact -Adding a reduction rule does NOT require CLI changes -- the reduction graph is auto-generated from `#[reduction]` macros and the CLI discovers paths dynamically. However, both source and target models must already be registered in the CLI dispatch table (see `add-model` skill). +Adding a reduction rule does NOT require CLI changes -- the reduction graph is auto-generated from `#[reduction]` macros and the CLI discovers paths dynamically. However, both source and target models must already be fully registered: `declare_variants!` in their model files, aliases as needed in `problem_name.rs`, and `pred create` support where applicable (see `add-model` skill). 
## File Naming - Rule file: `src/rules/<source>_<target>.rs` -- no underscores within a problem name - e.g., `maximumindependentset_qubo.rs`, `minimumvertexcover_maximumindependentset.rs` -- Example file: `examples/reduction_<source>_to_<target>.rs` - - e.g., `reduction_minimumvertexcover_to_maximumindependentset.rs` - Test file: `src/unit_tests/rules/<source>_<target>.rs` +- Canonical example: builder function in `src/example_db/rule_builders.rs` ## Common Mistakes @@ -223,8 +201,8 @@ Adding a reduction rule does NOT require CLI changes -- the reduction graph is a |---------|-----| | Forgetting `#[reduction(...)]` macro | Required for compile-time registration in the reduction graph | | Wrong overhead expression | Must accurately reflect the size relationship | +| Adding extra reduction metadata or duplicate primitive endpoint registration | Keep one primitive registration per endpoint pair and use only the `overhead` form of `#[reduction]` | | Missing `extract_solution` mapping state | Store any index maps needed in the ReductionResult struct | -| Example missing `pub fn run()` | Required for the test harness (`include!` pattern) | -| Not registering example in `tests/suites/examples.rs` | Must add both `example_test!` and `example_fn!` | +| Not adding canonical example to `example_db` | Add builder in `src/example_db/rule_builders.rs` | | Not regenerating reduction graph | Run `cargo run --example export_graph` after adding a rule | -| Source/target model not in CLI dispatch | Both problems must be registered -- use `add-model` skill first | +| Source/target model not fully registered | Both problems must already have `declare_variants!`, aliases as needed, and CLI create support -- use `add-model` skill first | diff --git a/.claude/skills/final-review/SKILL.md b/.claude/skills/final-review/SKILL.md index 0a9f4ffa..93f6aab2 100644 --- a/.claude/skills/final-review/SKILL.md +++ b/.claude/skills/final-review/SKILL.md @@ -109,7 +109,9 @@ Verify the PR includes all required components. 
Check: **For [Model] PRs:** - [ ] Model implementation (`src/models/...`) - [ ] Unit tests (`src/unit_tests/models/...`) -- [ ] `declare_variants!` macro with complexity +- [ ] `declare_variants!` macro with explicit `opt`/`sat` solver-kind markers and intended default variant +- [ ] CLI `pred create` support / help text as needed +- [ ] Canonical model example in `src/example_db/model_builders.rs` - [ ] Paper section in `docs/paper/reductions.typ` (`problem-def` entry) - [ ] `display-name` entry in paper - [ ] `trait_consistency.rs` entry in `test_all_problems_implement_trait_correctly` (+ `test_direction` for optimization) @@ -118,8 +120,8 @@ Verify the PR includes all required components. Check: - [ ] Reduction implementation (`src/rules/...`) - [ ] Unit tests (`src/unit_tests/rules/...`) - [ ] `#[reduction(overhead = {...})]` with correct expressions -- [ ] Example file (`examples/reduction_...`) -- [ ] Example test in `tests/suites/examples.rs` +- [ ] Uses only the `overhead` form of `#[reduction]` and does not duplicate a primitive exact endpoint registration +- [ ] Canonical rule example in `src/example_db/rule_builders.rs` - [ ] Paper section in `docs/paper/reductions.typ` (`reduction-rule` entry) Report missing items: diff --git a/.claude/skills/issue-to-pr/SKILL.md b/.claude/skills/issue-to-pr/SKILL.md index 58a97dcd..e8bc6160 100644 --- a/.claude/skills/issue-to-pr/SKILL.md +++ b/.claude/skills/issue-to-pr/SKILL.md @@ -257,7 +257,7 @@ Run /review-pipeline to process Copilot comments, fix CI, and run agentic tests. 
| Issue has failure labels | Fix the issue, re-run `/check-issue`, then retry | | Including implementation code in initial PR | First PR: plan only | | Generic plan | Use specifics from the issue, mapped to add-model/add-rule steps | -| Skipping CLI registration in plan | add-model requires CLI dispatch updates -- include in plan | +| Skipping CLI registration in plan | add-model still requires alias/create/example-db planning, but not manual CLI dispatch-table edits | | Not verifying facts from issue | Use WebSearch/WebFetch to cross-check claims | | Branch already exists on retry | Check with `git rev-parse --verify` before `git checkout -b` | | Dirty working tree | Verify `git status --porcelain` is empty before branching | diff --git a/.claude/skills/review-implementation/SKILL.md b/.claude/skills/review-implementation/SKILL.md index 6ecfe288..3e61fbd9 100644 --- a/.claude/skills/review-implementation/SKILL.md +++ b/.claude/skills/review-implementation/SKILL.md @@ -151,6 +151,7 @@ Merge both subagent outputs into a single report: ### Overhead Consistency Check - Rules: verify `#[reduction(overhead)]` expressions match actual sizes constructed in `reduce_to()` code +- Rules: verify the impl uses only the `overhead` form and does not introduce a duplicate primitive exact endpoint pair - Models: verify `dims()` and getter methods are consistent with struct fields - Result: PASS / FAIL diff --git a/.claude/skills/review-implementation/structural-reviewer-prompt.md b/.claude/skills/review-implementation/structural-reviewer-prompt.md index 577cfc6a..a8962fe9 100644 --- a/.claude/skills/review-implementation/structural-reviewer-prompt.md +++ b/.claude/skills/review-implementation/structural-reviewer-prompt.md @@ -37,12 +37,13 @@ Given: problem name `P` = `{PROBLEM_NAME}`, category `C` = `{CATEGORY}`, file st | 9 | Test has evaluation test | `Grep("fn test_.*evaluat", test_file)` | | 10 | Registered in `{C}/mod.rs` | `Grep("mod {F}", "src/models/{C}/mod.rs")` | | 11 | 
Re-exported in `models/mod.rs` | `Grep("{P}", "src/models/mod.rs")` | -| 12 | CLI `load_problem` arm | `Grep('"{P}"', "problemreductions-cli/src/dispatch.rs")` | -| 13 | CLI `serialize_any_problem` arm | `Grep('"{P}".*try_ser', "problemreductions-cli/src/dispatch.rs")` | -| 14 | CLI `resolve_alias` entry | `Grep("{P}", "problemreductions-cli/src/problem_name.rs")` | -| 15 | Paper `display-name` entry | `Grep('"{P}"', "docs/paper/reductions.typ")` | -| 16 | Paper `problem-def` block | `Grep('problem-def.*"{P}"', "docs/paper/reductions.typ")` | -| 17 | `trait_consistency` entry | `Grep("{P}", "src/unit_tests/trait_consistency.rs")` | +| 12 | `declare_variants!` entry exists | `Grep("declare_variants!|default opt|default sat|opt {P}|sat {P}", file)` | +| 13 | CLI `resolve_alias` entry | `Grep("{P}", "problemreductions-cli/src/problem_name.rs")` | +| 14 | CLI `create` support | `Grep('"{P}"', "problemreductions-cli/src/commands/create.rs")` | +| 15 | Canonical model example registered | `Grep("{P}", "src/example_db/model_builders.rs")` | +| 16 | Paper `display-name` entry | `Grep('"{P}"', "docs/paper/reductions.typ")` | +| 17 | Paper `problem-def` block | `Grep('problem-def.*"{P}"', "docs/paper/reductions.typ")` | +| 18 | `trait_consistency` entry | `Grep("{P}", "src/unit_tests/trait_consistency.rs")` | ## Rule Checklist @@ -60,12 +61,9 @@ Given: source `S` = `{SOURCE}`, target `T` = `{TARGET}`, rule file stem `R` = `{ | 6 | Test file exists | `Glob("src/unit_tests/rules/{R}.rs")` | | 7 | Closed-loop test present | `Grep("fn test_.*closed_loop\|fn test_.*to_.*basic", test_file)` | | 8 | Registered in `rules/mod.rs` | `Grep("mod {R}", "src/rules/mod.rs")` | -| 9 | Example file exists | `Glob("examples/{E}.rs")` | -| 10 | Example has `pub fn run()` | `Grep("pub fn run", example_file)` | -| 11 | Example has `fn main()` | `Grep("fn main", example_file)` | -| 12 | `example_test!` registered | `Grep("example_test!\\({E}\\)", "tests/suites/examples.rs")` | -| 13 | 
`example_fn!` registered | `Grep("example_fn!.*{E}", "tests/suites/examples.rs")` | -| 14 | Paper `reduction-rule` entry | `Grep('reduction-rule.*"{S}".*"{T}"', "docs/paper/reductions.typ")` | +| 9 | Canonical rule example registered | `Grep("{S}|{T}|{R}", "src/example_db/rule_builders.rs")` | +| 10 | Example-db lookup tests exist | `Grep("find_rule_example|build_rule_db", "src/unit_tests/example_db.rs")` | +| 11 | Paper `reduction-rule` entry | `Grep('reduction-rule.*"{S}".*"{T}"', "docs/paper/reductions.typ")` | ## Build Check @@ -116,7 +114,7 @@ Compare the implementation against the requirements in the original issue. The i | 3 | Solution extraction matches | Read `extract_solution()` and verify it matches the issue's **Solution extraction** | | 4 | Correctness preserved | Verify the reduction logic is consistent with the issue's **Correctness argument** | | 5 | Overhead expressions match | Compare `#[reduction(overhead = {...})]` against the issue's **Size overhead** | -| 6 | Example matches | Verify the example program uses the instance from the issue's **Concrete example** | +| 6 | Example matches | Verify the canonical example-db entry uses the instance from the issue's **Concrete example** | Flag any deviation as ISSUE -- the implementation must match what was specified in the issue unless there's a documented reason for the change. diff --git a/.claude/skills/review-pipeline/SKILL.md b/.claude/skills/review-pipeline/SKILL.md index 5ffc1c1d..4e345dc7 100644 --- a/.claude/skills/review-pipeline/SKILL.md +++ b/.claude/skills/review-pipeline/SKILL.md @@ -89,6 +89,12 @@ All subsequent steps run inside the worktree. ### 1a. Resolve Conflicts with Main +**IMPORTANT:** The `add-model` and `add-rule` skills evolve frequently. When merging main into a PR branch, conflicts in skill-generated code are common. Before resolving conflicts: + +1. 
Run `git diff origin/main...HEAD -- .claude/skills/add-model/ .claude/skills/add-rule/` to see if these skills changed on main since the PR was created. +2. If they changed, read the current versions on main (`git show origin/main:.claude/skills/add-model/SKILL.md` and `git show origin/main:.claude/skills/add-rule/SKILL.md`) to understand what's different. +3. When resolving conflicts in model/rule implementation files, prefer the patterns from main's current skills — the PR's implementation may be based on outdated skill instructions. + Check if the branch has merge conflicts with main: ```bash @@ -99,8 +105,9 @@ git merge origin/main --no-edit - If the merge succeeds cleanly: push the merge commit and continue. - If there are conflicts: 1. Inspect the conflicting files with `git diff --name-only --diff-filter=U`. - 2. Resolve conflicts (prefer the PR branch for new code, main for regenerated artifacts like JSON). - 3. Stage resolved files, commit, and push. + 2. Compare the current skill versions on main vs the PR branch to understand which patterns are current. + 3. Resolve conflicts (prefer main's patterns for skill-generated code, the PR branch for problem-specific logic, main for regenerated artifacts like JSON). + 4. Stage resolved files, commit, and push. - If conflicts are too complex to resolve automatically (e.g., overlapping logic changes in the same function): abort the merge (`git merge --abort`), leave the PR in review-agentic, and report: `PR #N has complex merge conflicts with main — needs manual resolution.` Then STOP processing this PR. ### 2. 
### 2. Fix Copilot Review Comments diff --git a/.claude/skills/write-model-in-paper/SKILL.md b/.claude/skills/write-model-in-paper/SKILL.md index 18b4ba5d..0ec8105f 100644 --- a/.claude/skills/write-model-in-paper/SKILL.md +++ b/.claude/skills/write-model-in-paper/SKILL.md @@ -14,6 +14,7 @@ Full authoring guide for writing a `problem-def` entry in `docs/paper/reductions Before using this skill, ensure: - The problem model is implemented (`src/models/<category>/<name>.rs`) - The problem is registered with schema and variant metadata +- A canonical example exists in `src/example_db/model_builders.rs` - JSON exports are up to date (`cargo run --example export_graph && cargo run --example export_schemas`) ## Reference Example diff --git a/.claude/skills/write-rule-in-paper/SKILL.md b/.claude/skills/write-rule-in-paper/SKILL.md index 390baba0..3542875e 100644 --- a/.claude/skills/write-rule-in-paper/SKILL.md +++ b/.claude/skills/write-rule-in-paper/SKILL.md @@ -17,22 +17,21 @@ Full authoring guide for writing a `reduction-rule` entry in `docs/paper/reducti Before using this skill, ensure: - The reduction is implemented and tested (`src/rules/<source>_<target>.rs`) -- An example program exists (`examples/reduction_<source>_to_<target>.rs`) +- A canonical example exists in `src/example_db/rule_builders.rs` - Example JSON is generated (`make examples`) - The reduction graph and schemas are up to date (`cargo run --example export_graph && cargo run --example export_schemas`) ## Step 1: Load Example Data ```typst -#let src_tgt = load-example("<source>_to_<target>") -#let src_tgt_r = load-results("<source>_to_<target>") -#let src_tgt_sol = src_tgt_r.solutions.at(0) +#let src_tgt = load-example("Source", "Target") +#let src_tgt_sol = src_tgt.solutions.at(0) ``` Where: -- `load-example(name)` loads `examples/{name}.json` — contains source/target problem instances -- `load-results(name)` loads `examples/{name}.result.json` — contains solution configs -- Access fields: `src_tgt.source.instance`, `src_tgt_sol.source_config`, `src_tgt_sol.target_config` +- 
`load-example(source, target)` looks up the canonical rule entry from the generated rule database +- The returned record contains `source`, `target`, `overhead`, and `solutions` +- Access fields: `src_tgt.source.instance`, `src_tgt.target.instance`, `src_tgt_sol.source_config`, `src_tgt_sol.target_config` ## Step 2: Write the Theorem Body (Rule Statement) @@ -159,7 +158,7 @@ Detailed by default. Only use a brief example for trivially obvious reductions ( *Step N -- Verify a solution.* [end-to-end verification] - *Count:* #src_tgt_r.solutions.len() optimal solutions ... + *Count:* #src_tgt.solutions.len() optimal solutions ... ], ) ``` @@ -178,7 +177,7 @@ Each step should: | First | Show the source instance (dimensions, structure). Include graph visualization if applicable. | | Middle | Walk through the construction. Show intermediate values. Explicitly quantify overhead. | | Second-to-last | Verify a concrete solution end-to-end (source config → target config, check validity). | -| Last | Solution count: `#src_tgt_r.solutions.len()` with brief combinatorial justification. | +| Last | Solution count: `#src_tgt.solutions.len()` with brief combinatorial justification. | ### 4d. 
Graph Visualization (if applicable) @@ -204,7 +203,7 @@ Each step should: #src_tgt_sol.target_config.map(str).join(", ") // Number of optimal solutions -#src_tgt_r.solutions.len() +#src_tgt.solutions.len() // Source instance fields #src_tgt.source.instance.num_vertices diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 2029f01d..0b4846ed 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -37,7 +37,7 @@ jobs: run: cargo build --features ilp-highs --verbose - name: Run tests - run: cargo test --features ilp-highs --verbose + run: cargo test --features "ilp-highs example-db" --verbose - name: Run doc tests run: cargo test --doc --features ilp-highs --verbose diff --git a/Cargo.toml b/Cargo.toml index 21c81f18..9ee646cd 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -13,6 +13,7 @@ categories = ["algorithms", "science"] [features] default = ["ilp-highs"] +example-db = [] ilp = ["ilp-highs"] # backward compat shorthand ilp-solver = [] # marker: enables ILP solver code ilp-highs = ["ilp-solver", "dep:good_lp", "good_lp/highs"] @@ -44,6 +45,11 @@ criterion = "0.8" name = "solver_benchmarks" harness = false +[[example]] +name = "export_examples" +path = "examples/export_examples.rs" +required-features = ["example-db"] + [profile.release] lto = true codegen-units = 1 diff --git a/Makefile b/Makefile index ec1708a8..0f85fe3c 100644 --- a/Makefile +++ b/Makefile @@ -39,7 +39,7 @@ build: # Run all tests (including ignored tests) test: - cargo test --features ilp-highs -- --include-ignored + cargo test --features "ilp-highs example-db" -- --include-ignored # Run MCP server tests mcp-test: ## Run MCP server tests @@ -95,13 +95,8 @@ mdbook: @sleep 1 && (command -v xdg-open >/dev/null && xdg-open http://localhost:3001 || open http://localhost:3001) # Generate all example JSON files for the paper -REDUCTION_EXAMPLES := $(patsubst examples/%.rs,%,$(wildcard examples/reduction_*.rs)) examples: - @mkdir -p docs/paper/examples - @for example in 
$(REDUCTION_EXAMPLES); do \ - echo "Running $$example..."; \ - cargo run --features ilp-highs --example $$example || exit 1; \ - done + cargo run --features "ilp-highs example-db" --example export_examples cargo run --features ilp-highs --example export_petersen_mapping # Export problem schemas to JSON @@ -152,10 +147,14 @@ endif git push origin main --tags @echo "v$(V) pushed — CI will publish to crates.io" -# Build and install the pred CLI tool +# Build and install the pred CLI tool (without MCP for fast builds) cli: cargo install --path problemreductions-cli +# Build and install the pred CLI tool with MCP server support +mcp: + cargo install --path problemreductions-cli --features mcp + # Generate Rust mapping JSON exports for all graphs and modes GRAPHS := diamond bull house petersen MODES := unweighted weighted triangular diff --git a/docs/agent-profiles/SKILLS.md b/docs/agent-profiles/SKILLS.md index b555c2c2..9df63925 100644 --- a/docs/agent-profiles/SKILLS.md +++ b/docs/agent-profiles/SKILLS.md @@ -1,5 +1,20 @@ # Skills +Example generation now goes through the example catalog and dedicated exporter. 
+When a workflow needs a paper/example instance, prefer the catalog path over ad hoc `examples/reduction_*.rs` binaries: + +- use `make examples` or `cargo run --features "ilp-highs example-db" --example export_examples` +- use `pred create --example ` to materialize a canonical model example as normal problem JSON +- use `pred create --example --to ` to materialize a canonical rule example as normal problem JSON +- when adding new example coverage, register a catalog entry instead of creating a new standalone reduction example file + +Post-refactor extension points: + +- new model load/serialize/brute-force dispatch comes from `declare_variants!` in the model file, with explicit `opt` or `sat` markers and an optional `default` +- alias resolution lives in `problemreductions-cli/src/problem_name.rs` +- `pred create` UX lives in `problemreductions-cli/src/commands/create.rs` +- canonical examples live in `src/example_db/model_builders.rs` and `src/example_db/rule_builders.rs` + - [issue-to-pr] — Convert a GitHub issue into a PR with an implementation plan - [add-model] — Add a new problem model to the codebase - [add-rule] — Add a new reduction rule to the codebase @@ -7,7 +22,11 @@ - [fix-pr] — Resolve PR review comments, CI failures, and coverage gaps - [check-issue] — Quality gate for Rule and Model GitHub issues - [topology-sanity-check] — Run sanity checks on the reduction graph: detect orphan problems and redundant rules +- [project-pipeline] — Pick the next ready issue, implement it, and move it through the project workflow +- [review-pipeline] — Process PRs in review-agentic: fix comments, fix CI, run agentic review, move to In Review +- [propose] — Interactive brainstorming that turns a new model or rule idea into a GitHub issue +- [final-review] — Interactive maintainer review for PRs in the In review column +- [dev-setup] — Install and configure the maintainer development environment - [write-model-in-paper] — Write or improve a problem-def entry in the 
Typst paper - [write-rule-in-paper] — Write or improve a reduction-rule entry in the Typst paper - [release] — Create a new crate release with version bump -- [meta-power] — Batch-resolve all open Model and Rule issues autonomously diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index da24a38e..84562305 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -15,12 +15,29 @@ #show: thmrules.with(qed-symbol: $square$) // === Example JSON helpers === -// Load example JSON files generated by `make examples`. -// Unified schema: { source: { problem, variant, instance }, target: { ... }, overhead: [...] } -#let load-example(name) = json("examples/" + name + ".json") +// Load canonical example databases generated by `make examples`. +#let rule-db = json("examples/generated/rules.json") + +#let load-example(source, target) = { + let matches = rule-db.rules.filter(r => r.source.problem == source and r.target.problem == target) + if matches.len() == 1 { + matches.at(0) + } else if matches.len() == 0 { + panic("Missing canonical rule example for " + source + " -> " + target) + } else { + panic("Ambiguous canonical rule example for " + source + " -> " + target) + } +} -// Load result JSON: { solutions: [{ source_config, target_config }, ...] 
} -#let load-results(name) = json("examples/" + name + ".result.json") +#let graph-num-vertices(instance) = instance.graph.inner.nodes.len() +#let graph-num-edges(instance) = instance.graph.inner.edges.len() +#let spin-num-spins(instance) = instance.fields.len() +#let sat-num-clauses(instance) = instance.clauses.len() +#let subsetsum-num-elements(instance) = instance.sizes.len() +#let circuit-num-gates(instance) = instance.circuit.assignments.len() +#let circuit-num-variables(instance) = instance.variables.len() + +#let example-name(source, target) = lower(source) + "_to_" + lower(target) #let problem-schemas = json("../src/reductions/problem_schemas.json") @@ -258,9 +275,6 @@ let tgt-lbl = label("def:" + target) let overhead = if edge != none and edge.overhead.len() > 0 { edge.overhead } else { none } let thm-lbl = label("thm:" + source + "-to-" + target) - // Derive example filename from source/target: "Source" → "source", then "source_to_target" - let example-name = lower(source) + "_to_" + lower(target) - covered-rules.update(old => old + ((source, target),)) [ @@ -273,7 +287,7 @@ proof[#proof-body] if example { - let data = load-example(example-name) + let data = load-example(source, target) pad(left: 1.5em, reduction-example(data, caption: example-caption)[#extra]) } } @@ -1345,16 +1359,15 @@ Each reduction is presented as a *Rule* (with linked problem names and overhead == Trivial Reductions -#let mvc_mis = load-example("minimumvertexcover_to_maximumindependentset") -#let mvc_mis_r = load-results("minimumvertexcover_to_maximumindependentset") -#let mvc_mis_sol = mvc_mis_r.solutions.at(0) +#let mvc_mis = load-example("MinimumVertexCover", "MaximumIndependentSet") +#let mvc_mis_sol = mvc_mis.solutions.at(0) #reduction-rule("MinimumVertexCover", "MaximumIndependentSet", example: true, example-caption: [Petersen graph ($n = 10$): VC $arrow.l.r$ IS], extra: [ Source VC: $C = {#mvc_mis_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => 
str(i)).join(", ")}$ (size #mvc_mis_sol.source_config.filter(x => x == 1).len()) #h(1em) Target IS: $S = {#mvc_mis_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$ (size #mvc_mis_sol.target_config.filter(x => x == 1).len()) \ - $|"VC"| + |"IS"| = #mvc_mis.source.instance.num_vertices = |V|$ #sym.checkmark + $|"VC"| + |"IS"| = #graph-num-vertices(mvc_mis.source.instance) = |V|$ #sym.checkmark ], )[ Vertex cover and independent set are set complements: removing a cover from $V$ leaves vertices with no edges between them (an independent set), and vice versa. Since $|S| + |C| = |V|$ is constant, maximizing one is equivalent to minimizing the other. The reduction preserves the graph and weights unchanged. @@ -1376,16 +1389,15 @@ Each reduction is presented as a *Rule* (with linked problem names and overhead _Solution extraction._ For VC solution $C$, return $S = V backslash C$, i.e.\ flip each variable: $s_v = 1 - c_v$. ] -#let mis_clique = load-example("maximumindependentset_to_maximumclique") -#let mis_clique_r = load-results("maximumindependentset_to_maximumclique") -#let mis_clique_sol = mis_clique_r.solutions.at(0) +#let mis_clique = load-example("MaximumIndependentSet", "MaximumClique") +#let mis_clique_sol = mis_clique.solutions.at(0) #reduction-rule("MaximumIndependentSet", "MaximumClique", example: true, example-caption: [Path graph $P_5$: IS $arrow.r$ Clique via complement], extra: [ Source IS: $S = {#mis_clique_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$ (size #mis_clique_sol.source_config.filter(x => x == 1).len()) #h(1em) Target Clique: $C = {#mis_clique_sol.target_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$ (size #mis_clique_sol.target_config.filter(x => x == 1).len()) \ - Source $|E| = #mis_clique.source.instance.num_edges$, complement $|overline(E)| = #mis_clique.target.instance.num_edges$ #sym.checkmark + Source $|E| = 
#graph-num-edges(mis_clique.source.instance)$, complement $|overline(E)| = #graph-num-edges(mis_clique.target.instance)$ #sym.checkmark ], )[ An independent set in $G$ is exactly a clique in the complement graph $overline(G)$: vertices with no edges between them in $G$ are pairwise adjacent in $overline(G)$. Both problems maximize total vertex weight, so optimal values are preserved. This is Karp's classical complement graph reduction. @@ -1448,16 +1460,15 @@ Each reduction is presented as a *Rule* (with linked problem names and overhead _Solution extraction._ Convert spins to binary: $x_i = (s_i + 1) \/ 2$, i.e.\ $s_i = +1 arrow.r x_i = 1$, $s_i = -1 arrow.r x_i = 0$. ] -#let sg_qubo = load-example("spinglass_to_qubo") -#let sg_qubo_r = load-results("spinglass_to_qubo") -#let sg_qubo_sol = sg_qubo_r.solutions.at(0) +#let sg_qubo = load-example("SpinGlass", "QUBO") +#let sg_qubo_sol = sg_qubo.solutions.at(0) #reduction-rule("SpinGlass", "QUBO", example: true, example-caption: [10-spin Ising model on Petersen graph], extra: [ - Source: $n = #sg_qubo.source.instance.num_spins$ spins, $h_i = 0$, couplings $J_(i j) in {plus.minus 1}$ \ + Source: $n = #spin-num-spins(sg_qubo.source.instance)$ spins, $h_i = 0$, couplings $J_(i j) in {plus.minus 1}$ \ Mapping: $s_i = 2x_i - 1$ converts spins ${-1, +1}$ to binary ${0, 1}$ \ - Ground state ($#sg_qubo_r.solutions.len()$-fold degenerate): $bold(x) = (#sg_qubo_sol.target_config.map(str).join(", "))$ #sym.checkmark + Ground state ($#sg_qubo.solutions.len()$-fold degenerate): $bold(x) = (#sg_qubo_sol.target_config.map(str).join(", "))$ #sym.checkmark ], )[ The Ising model and QUBO are both quadratic functions over finite domains: spins ${-1,+1}$ and binary variables ${0,1}$, respectively. The affine map $s_i = 2x_i - 1$ establishes a bijection between the two domains and preserves the quadratic structure. 
Substituting into the Ising Hamiltonian yields a QUBO objective that differs from the original energy by a constant, so ground states correspond exactly. @@ -1479,9 +1490,8 @@ The _penalty method_ @glover2019 @lucas2014 converts a constrained optimization $ f(bold(x)) = "obj"(bold(x)) + P sum_k g_k (bold(x))^2 $ where $P$ is a penalty weight large enough that any constraint violation costs more than the entire objective range. Since $g_k (bold(x))^2 >= 0$ with equality iff $g_k (bold(x)) = 0$, minimizers of $f$ are feasible and optimal for the original problem. Because binary variables satisfy $x_i^2 = x_i$, the resulting $f$ is a quadratic in $bold(x)$, i.e.\ a QUBO. -#let kc_qubo = load-example("kcoloring_to_qubo") -#let kc_qubo_r = load-results("kcoloring_to_qubo") -#let kc_qubo_sol = kc_qubo_r.solutions.at(0) +#let kc_qubo = load-example("KColoring", "QUBO") +#let kc_qubo_sol = kc_qubo.solutions.at(0) #reduction-rule("KColoring", "QUBO", example: true, example-caption: [House graph ($n = 5$, $|E| = 6$, $chi = 3$) with $k = 3$ colors], @@ -1506,7 +1516,7 @@ where $P$ is a penalty weight large enough that any constraint violation costs m *Step 4 -- Verify a solution.* The first valid 3-coloring is $(c_0, ..., c_4) = (#kc_qubo_sol.source_config.map(str).join(", "))$, shown in the figure above. The one-hot encoding is $bold(x) = (#kc_qubo_sol.target_config.map(str).join(", "))$. Check: each 3-bit group has exactly one 1 (valid one-hot #sym.checkmark), and for every edge the two endpoints have different colors (e.g.\ edge $0 dash 1$: colors $#kc_qubo_sol.source_config.at(0), #kc_qubo_sol.source_config.at(1)$ #sym.checkmark).\ - *Count:* #kc_qubo_r.solutions.len() valid colorings $= 3! times 3$. The triangle $2 dash 3 dash 4$ forces 3 distinct colors ($3! = 6$ permutations); for each, the base vertices $0, 1$ each have 3 compatible choices but share edge $0 dash 1$, leaving $3$ valid pairs. + *Count:* #kc_qubo.solutions.len() valid colorings $= 3! times 3$. 
The triangle $2 dash 3 dash 4$ forces 3 distinct colors ($3! = 6$ permutations); for each, the base vertices $0, 1$ each have 2 compatible choices but share edge $0 dash 1$, leaving $3$ valid pairs. ], )[ The $k$-coloring problem has two requirements: each vertex gets exactly one color, and adjacent vertices get different colors. Both can be expressed as quadratic penalties over binary variables. Introduce $n k$ binary variables $x_(v,c) in {0,1}$ (indexed by $v dot k + c$), where $x_(v,c) = 1$ means vertex $v$ receives color $c$. The first requirement becomes a _one-hot constraint_ penalizing vertices with zero or multiple colors; the second becomes an _edge conflict penalty_ penalizing same-color neighbors. The combined QUBO matrix $Q in RR^(n k times n k)$ encodes both penalties. @@ -1572,15 +1582,14 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Discard auxiliary variables: return $bold(x)[0..n]$. ] -#let ksat_ss = load-example("ksatisfiability_to_subsetsum") -#let ksat_ss_r = load-results("ksatisfiability_to_subsetsum") -#let ksat_ss_sol = ksat_ss_r.solutions.at(0) +#let ksat_ss = load-example("KSatisfiability", "SubsetSum") +#let ksat_ss_sol = ksat_ss.solutions.at(0) #reduction-rule("KSatisfiability", "SubsetSum", example: true, example-caption: [3-SAT with 3 variables and 2 clauses], extra: [ - Source: $n = #ksat_ss.source.instance.num_vars$ variables, $m = #ksat_ss.source.instance.num_clauses$ clauses \ - Target: #ksat_ss.target.instance.num_elements elements, target $= #ksat_ss.target.instance.target$ \ + Source: $n = #ksat_ss.source.instance.num_vars$ variables, $m = #sat-num-clauses(ksat_ss.source.instance)$ clauses \ + Target: #subsetsum-num-elements(ksat_ss.target.instance) elements, target $= #ksat_ss.target.instance.target$ \ Source config: #ksat_ss_sol.source_config #h(1em) Target config: #ksat_ss_sol.target_config ], )[ @@ -1617,16 +1626,15 @@ where $P$ is a penalty weight large enough
that any constraint violation costs m _Solution extraction._ Discard slack variables: return $bold(x)' [0..n]$. ] -#let qubo_ilp = load-example("qubo_to_ilp") -#let qubo_ilp_r = load-results("qubo_to_ilp") -#let qubo_ilp_sol = qubo_ilp_r.solutions.at(0) +#let qubo_ilp = load-example("QUBO", "ILP") +#let qubo_ilp_sol = qubo_ilp.solutions.at(0) #reduction-rule("QUBO", "ILP", example: true, example-caption: [4-variable QUBO with 3 quadratic terms], extra: [ Source: $n = #qubo_ilp.source.instance.num_vars$ binary variables, 3 off-diagonal terms \ - Target: #qubo_ilp.target.instance.num_vars ILP variables ($#qubo_ilp.source.instance.num_vars$ original $+ #(qubo_ilp.target.instance.num_vars - qubo_ilp.source.instance.num_vars)$ auxiliary), 9 McCormick constraints \ - Optimal: $bold(x) = (#qubo_ilp_sol.source_config.map(str).join(", "))$ ($#qubo_ilp_r.solutions.len()$-fold degenerate) #sym.checkmark + Target: #qubo_ilp.target.instance.num_vars ILP variables ($#qubo_ilp.source.instance.num_vars$ original $+ #(qubo_ilp.target.instance.num_vars - qubo_ilp.source.instance.num_vars)$ auxiliary), #qubo_ilp.target.instance.constraints.len() McCormick constraints \ + Optimal: $bold(x) = (#qubo_ilp_sol.source_config.map(str).join(", "))$ ($#qubo_ilp.solutions.len()$-fold degenerate) #sym.checkmark ], )[ QUBO minimizes a quadratic form $bold(x)^top Q bold(x)$ over binary variables. Every quadratic term $Q_(i j) x_i x_j$ can be _linearized_ by introducing an auxiliary variable $y_(i j)$ constrained to equal the product $x_i x_j$ via three McCormick inequalities. Diagonal terms $Q_(i i) x_i^2 = Q_(i i) x_i$ are already linear for binary $x_i$. The result is a binary ILP with a linear objective and $3 m$ constraints (where $m$ is the number of non-zero off-diagonal entries), whose minimizer corresponds exactly to the QUBO minimizer. 
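The McCormick linearization above is easy to sanity-check by exhaustive enumeration. Below is a minimal Python sketch, independent of the crate (the helper name `mccormick_feasible` is illustrative, not a repo API), confirming that the three inequalities pin the auxiliary variable to exactly the product x_i * x_j over binary inputs:

```python
from itertools import product

def mccormick_feasible(xi, xj, y):
    # The three McCormick inequalities for y = xi * xj over binaries:
    # y <= xi, y <= xj, y >= xi + xj - 1
    return y <= xi and y <= xj and y >= xi + xj - 1

for xi, xj in product((0, 1), repeat=2):
    # For every binary input pair, exactly one y value is feasible: the product.
    feasible = [y for y in (0, 1) if mccormick_feasible(xi, xj, y)]
    assert feasible == [xi * xj]
```

Since the variables are binary, the four input pairs are exhaustive, which is why the linearization is exact rather than a relaxation.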
@@ -1645,15 +1653,14 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Return the first $n$ variables (discard auxiliary $y_(i j)$). ] -#let cs_ilp = load-example("circuitsat_to_ilp") -#let cs_ilp_r = load-results("circuitsat_to_ilp") +#let cs_ilp = load-example("CircuitSAT", "ILP") #reduction-rule("CircuitSAT", "ILP", example: true, example-caption: [1-bit full adder to ILP], extra: [ - Circuit: #cs_ilp.source.instance.num_gates gates (2 XOR, 2 AND, 1 OR), #cs_ilp.source.instance.num_variables variables \ + Circuit: #circuit-num-gates(cs_ilp.source.instance) gates (2 XOR, 2 AND, 1 OR), #circuit-num-variables(cs_ilp.source.instance) variables \ Target: #cs_ilp.target.instance.num_vars ILP variables (circuit vars $+$ auxiliary), trivial objective \ - #cs_ilp_r.solutions.len() feasible solutions ($= 2^3$ valid input combinations for the full adder) #sym.checkmark + #cs_ilp.solutions.len() feasible solutions ($= 2^3$ valid input combinations for the full adder) #sym.checkmark ], )[ Each boolean gate (AND, OR, NOT, XOR) has a truth table that can be captured exactly by a small set of linear inequalities over binary variables. By Tseitin-style flattening, each internal expression node gets an auxiliary ILP variable constrained to match its gate's output, so the conjunction of all gate constraints is feasible if and only if the circuit is satisfiable. The ILP has a trivial objective (minimize 0), making it a pure feasibility problem. 
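The gate-to-inequalities idea is small enough to verify exhaustively. A hedged Python sketch (helper names `and_ok`/`or_ok` are illustrative; the XOR gadget needs four inequalities and is omitted here) showing that the standard AND and OR inequality sets admit exactly the truth-table output for each binary input pair:

```python
from itertools import product

def and_ok(x, y, z):
    # AND gadget: z <= x, z <= y, z >= x + y - 1
    return z <= x and z <= y and z >= x + y - 1

def or_ok(x, y, z):
    # OR gadget: z >= x, z >= y, z <= x + y
    return z >= x and z >= y and z <= x + y

for x, y in product((0, 1), repeat=2):
    # The only feasible z for each gadget is the gate's output bit.
    assert [z for z in (0, 1) if and_ok(x, y, z)] == [x & y]
    assert [z for z in (0, 1) if or_ok(x, y, z)] == [x | y]
```

Because each gadget's feasible set equals its truth table, conjoining the constraints for all gates is feasible exactly when the circuit is satisfiable.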
@@ -1675,16 +1682,15 @@ where $P$ is a penalty weight large enough that any constraint violation costs m == Non-Trivial Reductions -#let sat_mis = load-example("satisfiability_to_maximumindependentset") -#let sat_mis_r = load-results("satisfiability_to_maximumindependentset") -#let sat_mis_sol = sat_mis_r.solutions.at(0) +#let sat_mis = load-example("Satisfiability", "MaximumIndependentSet") +#let sat_mis_sol = sat_mis.solutions.at(0) #reduction-rule("Satisfiability", "MaximumIndependentSet", example: true, example-caption: [3-SAT with 5 variables and 7 clauses], extra: [ SAT assignment: $(x_1, ..., x_5) = (#sat_mis_sol.source_config.map(str).join(", "))$ \ - IS graph: #sat_mis.target.instance.num_vertices vertices ($= 3 times #sat_mis.source.instance.num_clauses$ literals), #sat_mis.target.instance.num_edges edges \ - IS of size #sat_mis.source.instance.num_clauses $= m$: one vertex per clause $arrow.r$ satisfying assignment #sym.checkmark + IS graph: #graph-num-vertices(sat_mis.target.instance) vertices ($= 3 times #sat-num-clauses(sat_mis.source.instance)$ literals), #graph-num-edges(sat_mis.target.instance) edges \ + IS of size #sat-num-clauses(sat_mis.source.instance) $= m$: one vertex per clause $arrow.r$ satisfying assignment #sym.checkmark ], )[ @karp1972 A satisfying assignment must make at least one literal true in every clause, and different clauses cannot assign contradictory values to the same variable. These two requirements map naturally to an independent set problem: _intra-clause cliques_ force exactly one literal per clause to be selected, while _conflict edges_ between complementary literals across clauses enforce consistency. The target IS size equals the number of clauses $m$, so an IS of size $m$ exists iff the formula is satisfiable. 
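The clique-plus-conflict-edge construction can be exercised on a toy formula. The following Python sketch (a stand-alone illustration under a hypothetical 2-clause instance, not repo code) builds the literal graph, brute-forces the maximum independent set, and reads back a satisfying assignment:

```python
from itertools import combinations

# Toy CNF: (x1 OR x2) AND (NOT x1 OR x2); a positive int is a positive literal.
clauses = [[1, 2], [-1, 2]]
verts = [(j, lit) for j, cl in enumerate(clauses) for lit in cl]

def conflict(u, v):
    # Edge if same clause (intra-clause clique) or complementary literals.
    return u[0] == v[0] or u[1] == -v[1]

def independent(s):
    return all(not conflict(a, b) for a, b in combinations(s, 2))

subsets = [[v for i, v in enumerate(verts) if mask >> i & 1]
           for mask in range(2 ** len(verts))]
best = max((s for s in subsets if independent(s)), key=len)
assert len(best) == len(clauses)  # IS of size m exists <=> formula satisfiable

# One selected literal per clause yields a satisfying assignment.
assignment = {abs(lit): int(lit > 0) for _, lit in best}
assert all(any((lit > 0) == (assignment.get(abs(lit), 0) == 1) for lit in cl)
           for cl in clauses)
```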
@@ -1700,16 +1706,15 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ For $v_(j,i) in S$ with literal $x_k$: set $x_k = 1$; for $overline(x_k)$: set $x_k = 0$. ] -#let sat_kc = load-example("satisfiability_to_kcoloring") -#let sat_kc_r = load-results("satisfiability_to_kcoloring") -#let sat_kc_sol = sat_kc_r.solutions.at(0) +#let sat_kc = load-example("Satisfiability", "KColoring") +#let sat_kc_sol = sat_kc.solutions.at(0) #reduction-rule("Satisfiability", "KColoring", example: true, example-caption: [5-variable SAT with 3 unit clauses to 3-coloring], extra: [ SAT assignment: $(x_1, ..., x_5) = (#sat_kc_sol.source_config.map(str).join(", "))$ \ - Construction: 3 base + $2 times #sat_kc.source.instance.num_vars$ variable gadgets + OR-gadgets $arrow.r$ #sat_kc.target.instance.num_vertices vertices, #sat_kc.target.instance.num_edges edges \ - #sat_kc_r.solutions.len() valid 3-colorings (color symmetry of satisfying assignments) #sym.checkmark + Construction: 3 base + $2 times #sat_kc.source.instance.num_vars$ variable gadgets + OR-gadgets $arrow.r$ #graph-num-vertices(sat_kc.target.instance) vertices, #graph-num-edges(sat_kc.target.instance) edges \ + #sat_kc.solutions.len() valid 3-colorings (color symmetry of satisfying assignments) #sym.checkmark ], )[ @garey1979 A 3-coloring partitions vertices into three classes. The key insight is that three colors suffice to encode Boolean logic: one color represents TRUE, one FALSE, and a third (AUX) serves as a neutral ground. Variable gadgets force each variable's positive and negative literals to receive opposite truth colors, while clause gadgets use an OR-chain that can only receive the TRUE color when at least one input literal is TRUE-colored. Connecting the output of each clause gadget to the FALSE vertex forces it to be TRUE-colored, encoding the requirement that every clause is satisfied. 
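The claim that three colors suffice to encode Boolean values can be checked on the variable gadget alone. A hedged Python sketch (vertex roles are illustrative) enumerating all proper 3-colorings of the base triangle plus one literal pair:

```python
from itertools import product

# Vertices: 0 = TRUE, 1 = FALSE, 2 = AUX, 3 = pos_i, 4 = neg_i
edges = [(0, 1), (1, 2), (0, 2),   # base triangle: forces three distinct colors
         (3, 4), (2, 3), (2, 4)]   # variable gadget: literals adjacent to each other and AUX

proper = [c for c in product(range(3), repeat=5)
          if all(c[u] != c[v] for u, v in edges)]
for c in proper:
    # In every proper 3-coloring the two literal vertices take the two
    # truth colors, one each, encoding x_i and NOT x_i with opposite values.
    assert {c[3], c[4]} == {c[0], c[1]}
assert len(proper) == 12  # 3! base colorings x 2 orientations of the literal pair
```

The gadget never admits the AUX color on a literal vertex, which is exactly the property the OR-chain clause gadgets rely on.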
@@ -1723,15 +1728,14 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Set $x_i = 1$ iff $"color"("pos"_i) = "color"("TRUE")$. ] -#let sat_ds = load-example("satisfiability_to_minimumdominatingset") -#let sat_ds_r = load-results("satisfiability_to_minimumdominatingset") -#let sat_ds_sol = sat_ds_r.solutions.at(0) +#let sat_ds = load-example("Satisfiability", "MinimumDominatingSet") +#let sat_ds_sol = sat_ds.solutions.at(0) #reduction-rule("Satisfiability", "MinimumDominatingSet", example: true, example-caption: [5-variable 7-clause 3-SAT to dominating set], extra: [ SAT assignment: $(x_1, ..., x_5) = (#sat_ds_sol.source_config.map(str).join(", "))$ \ - Vertex structure: $#sat_ds.target.instance.num_vertices = 3 times #sat_ds.source.instance.num_vars + #sat_ds.source.instance.num_clauses$ (variable triangles + clause vertices) \ + Vertex structure: $#graph-num-vertices(sat_ds.target.instance) = 3 times #sat_ds.source.instance.num_vars + #sat-num-clauses(sat_ds.source.instance)$ (variable triangles + clause vertices) \ Dominating set of size $n = #sat_ds.source.instance.num_vars$: one vertex per variable triangle #sym.checkmark ], )[ @@ -1752,15 +1756,14 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Correctness._ ($arrow.r.double$) Any $k$-SAT satisfying assignment satisfies the same clauses under SAT. ($arrow.l.double$) Any SAT satisfying assignment satisfies the same clauses (which all have width $k$). _Solution extraction._ Identity. 
] -#let sat_ksat = load-example("satisfiability_to_ksatisfiability") -#let sat_ksat_r = load-results("satisfiability_to_ksatisfiability") -#let sat_ksat_sol = sat_ksat_r.solutions.at(0) +#let sat_ksat = load-example("Satisfiability", "KSatisfiability") +#let sat_ksat_sol = sat_ksat.solutions.at(0) #reduction-rule("Satisfiability", "KSatisfiability", example: true, example-caption: [Mixed-size clauses (sizes 1 to 5) to 3-SAT], extra: [ - Source: #sat_ksat.source.instance.num_vars variables, #sat_ksat.source.instance.num_clauses clauses (sizes 1, 2, 3, 3, 4, 5) \ - Target 3-SAT: $#sat_ksat.target.instance.num_vars = #sat_ksat.source.instance.num_vars + 7$ variables, #sat_ksat.target.instance.num_clauses clauses (small padded, large split) \ + Source: #sat_ksat.source.instance.num_vars variables, #sat-num-clauses(sat_ksat.source.instance) clauses (sizes 1, 2, 3, 3, 4, 5) \ + Target 3-SAT: $#sat_ksat.target.instance.num_vars = #sat_ksat.source.instance.num_vars + 7$ variables, #sat-num-clauses(sat_ksat.target.instance) clauses (small padded, large split) \ First solution: $(x_1, ..., x_5) = (#sat_ksat_sol.source_config.map(str).join(", "))$, auxiliary vars are don't-cares #sym.checkmark ], )[ @@ -1791,15 +1794,14 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Return the values of the circuit input variables $x_1, dots, x_n$. 
] -#let cs_sg = load-example("circuitsat_to_spinglass") -#let cs_sg_r = load-results("circuitsat_to_spinglass") +#let cs_sg = load-example("CircuitSAT", "SpinGlass") #reduction-rule("CircuitSAT", "SpinGlass", example: true, example-caption: [1-bit full adder to Ising model], extra: [ - Circuit: #cs_sg.source.instance.num_gates gates (2 XOR, 2 AND, 1 OR), #cs_sg.source.instance.num_variables variables \ - Target: #cs_sg.target.instance.num_spins spins (each gate allocates I/O + auxiliary spins) \ - #cs_sg_r.solutions.len() ground states ($= 2^3$ valid input combinations for the full adder) #sym.checkmark + Circuit: #circuit-num-gates(cs_sg.source.instance) gates (2 XOR, 2 AND, 1 OR), #circuit-num-variables(cs_sg.source.instance) variables \ + Target: #spin-num-spins(cs_sg.target.instance) spins (each gate allocates I/O + auxiliary spins) \ + #cs_sg.solutions.len() ground states ($= 2^3$ valid input combinations for the full adder) #sym.checkmark ], )[ @whitfield2012 @lucas2014 Each logic gate can be represented as an Ising gadget --- a small set of spins with couplings $J_(i j)$ and fields $h_i$ chosen so that the gadget's ground states correspond exactly to the gate's truth table rows. Composing gadgets for all gates in the circuit yields a spin glass whose ground states encode precisely the satisfying assignments of the circuit. The energy gap between valid and invalid I/O patterns ensures that any global ground state respects every gate's logic. @@ -1829,23 +1831,22 @@ where $P$ is a penalty weight large enough that any constraint violation costs m caption: [Ising gadgets for logic gates. Ground states match truth tables.] 
) -#let fact_cs = load-example("factoring_to_circuitsat") -#let fact_cs_r = load-results("factoring_to_circuitsat") +#let fact_cs = load-example("Factoring", "CircuitSAT") #let fact-decode(config, start, count) = { let pow2 = (1, 2, 4, 8, 16, 32) range(count).fold(0, (acc, i) => acc + config.at(start + i) * pow2.at(i)) } -#let fact-nbf = fact_cs.source.instance.num_bits_first -#let fact-nbs = fact_cs.source.instance.num_bits_second +#let fact-nbf = fact_cs.source.instance.m +#let fact-nbs = fact_cs.source.instance.n #reduction-rule("Factoring", "CircuitSAT", example: true, - example-caption: [Factor $N = #fact_cs.source.instance.number$], + example-caption: [Factor $N = #fact_cs.source.instance.target$], extra: [ - Circuit: $#fact-nbf times #fact-nbs$ array multiplier with #fact_cs.target.instance.num_gates gates, #fact_cs.target.instance.num_variables variables \ - #fact_cs_r.solutions.len() solutions: #fact_cs_r.solutions.map(sol => { + Circuit: $#fact-nbf times #fact-nbs$ array multiplier with #circuit-num-gates(fact_cs.target.instance) gates, #circuit-num-variables(fact_cs.target.instance) variables \ + #fact_cs.solutions.len() solutions: #fact_cs.solutions.map(sol => { let p = fact-decode(sol.source_config, 0, fact-nbf) let q = fact-decode(sol.source_config, fact-nbf, fact-nbs) - $#p times #q = #fact_cs.source.instance.number$ + $#p times #q = #fact_cs.source.instance.target$ }).join(" and ") #sym.checkmark ], )[ @@ -1864,17 +1865,16 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Read off factor bits: $p = sum_i p_i 2^(i-1)$, $q = sum_j q_j 2^(j-1)$. 
] -#let mc_sg = load-example("maxcut_to_spinglass") -#let mc_sg_r = load-results("maxcut_to_spinglass") -#let mc_sg_sol = mc_sg_r.solutions.at(0) -#let mc_sg_cut = mc_sg.source.instance.edges.filter(e => mc_sg_sol.source_config.at(e.at(0)) != mc_sg_sol.source_config.at(e.at(1))).len() +#let mc_sg = load-example("MaxCut", "SpinGlass") +#let mc_sg_sol = mc_sg.solutions.at(0) +#let mc_sg_cut = mc_sg.source.instance.graph.inner.edges.filter(e => mc_sg_sol.source_config.at(e.at(0)) != mc_sg_sol.source_config.at(e.at(1))).len() #reduction-rule("MaxCut", "SpinGlass", example: true, example-caption: [Petersen graph ($n = 10$, unit weights) to Ising], extra: [ Direct 1:1 mapping: vertices $arrow.r$ spins, $J_(i j) = w_(i j) = 1$, $h_i = 0$ \ Partition: $S = {#mc_sg_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$ vs $overline(S) = {#mc_sg_sol.source_config.enumerate().filter(((i, x)) => x == 0).map(((i, x)) => str(i)).join(", ")}$ \ - Cut value $= #mc_sg_cut$ ($#mc_sg_r.solutions.len()$-fold degenerate) #sym.checkmark + Cut value $= #mc_sg_cut$ ($#mc_sg.solutions.len()$-fold degenerate) #sym.checkmark ], )[ @barahona1982 A maximum cut partitions vertices into two groups to maximize the total weight of edges crossing the partition. In the Ising model, two spins with opposite signs contribute $-J_(i j) s_i s_j = J_(i j)$ to the energy, while same-sign spins contribute $-J_(i j)$. Setting $J_(i j) = w_(i j)$ and $h_i = 0$ makes each cut edge lower the energy by $2 J_(i j)$ relative to an uncut edge, so the Ising ground state corresponds to the maximum cut. @@ -1886,16 +1886,15 @@ where $P$ is a penalty weight large enough that any constraint violation costs m _Solution extraction._ Partition $= {i : s_i = +1}$. 
] -#let sg_mc = load-example("spinglass_to_maxcut") -#let sg_mc_r = load-results("spinglass_to_maxcut") -#let sg_mc_sol = sg_mc_r.solutions.at(0) +#let sg_mc = load-example("SpinGlass", "MaxCut") +#let sg_mc_sol = sg_mc.solutions.at(0) #reduction-rule("SpinGlass", "MaxCut", example: true, example-caption: [10-spin Ising with alternating $J_(i j) in {plus.minus 1}$], extra: [ All $h_i = 0$: no ancilla needed, direct 1:1 vertex mapping \ Edge weights $w_(i j) = J_(i j) in {plus.minus 1}$ (alternating couplings) \ - Ground state ($#sg_mc_r.solutions.len()$-fold degenerate): partition $S = {#sg_mc_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$ #sym.checkmark + Ground state ($#sg_mc.solutions.len()$-fold degenerate): partition $S = {#sg_mc_sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)).join(", ")}$ #sym.checkmark ], )[ @barahona1982 @lucas2014 The Ising Hamiltonian $H = -sum J_(i j) s_i s_j - sum h_i s_i$ has two types of terms. The pairwise couplings $J_(i j)$ map directly to MaxCut edge weights, since minimizing $-J_(i j) s_i s_j$ favors opposite spins (cut edges) when $J_(i j) > 0$. The local fields $h_i$ have no direct MaxCut analogue, but can be absorbed by introducing a single ancilla vertex connected to every spin with weight $h_i$: fixing the ancilla's partition side effectively creates a linear bias on each spin. @@ -2048,9 +2047,8 @@ The following reductions to Integer Linear Programming are straightforward formu _Solution extraction._ For each position $k$, find vertex $v$ with $x_(v,k) = 1$ to recover the tour permutation; then select edges between consecutive positions. 
] -#let tsp_qubo = load-example("travelingsalesman_to_qubo") -#let tsp_qubo_r = load-results("travelingsalesman_to_qubo") -#let tsp_qubo_sol = tsp_qubo_r.solutions.at(0) +#let tsp_qubo = load-example("TravelingSalesman", "QUBO") +#let tsp_qubo_sol = tsp_qubo.solutions.at(0) #reduction-rule("TravelingSalesman", "QUBO", example: true, @@ -2065,7 +2063,7 @@ The following reductions to Integer Linear Programming are straightforward formu *Step 4 -- Verify a solution.* The QUBO ground state $bold(x) = (#tsp_qubo_sol.target_config.map(str).join(", "))$ encodes a valid tour. Reading the permutation: each 3-bit group has exactly one 1 (valid permutation #sym.checkmark). The tour cost equals $w_(01) + w_(02) + w_(12) = 1 + 2 + 3 = 6$.\ - *Count:* #tsp_qubo_r.solutions.len() optimal QUBO solutions $= 3! = 6$. On $K_3$ with distinct edge weights $1, 2, 3$, every Hamiltonian cycle has cost $1 + 2 + 3 = 6$ (all edges used), and 3 cyclic tours $times$ 2 directions yield $6$ permutation matrices. + *Count:* #tsp_qubo.solutions.len() optimal QUBO solutions $= 3! = 6$. On $K_3$ with distinct edge weights $1, 2, 3$, every tour uses all three edges and costs $1 + 2 + 3 = 6$; the single undirected Hamiltonian cycle admits $3$ starting positions $times$ $2$ directions $= 6$ permutation matrices. ], )[ Position-based QUBO encoding @lucas2014 maps a Hamiltonian tour to $n^2$ binary variables $x_(v,p)$, where $x_(v,p) = 1$ iff city $v$ is visited at position $p$. The QUBO Hamiltonian $H = H_A + H_B + H_C$ combines permutation constraints with the distance objective ($n^2$ variables indexed by $v dot n + p$). @@ -2208,25 +2206,39 @@ See #link("https://github.com/CodingThrust/problem-reductions/blob/main/examples The following table shows concrete variable overhead for example instances, generated from the reduction examples (`make examples`).
#let example-files = ( - "maximumindependentset_to_minimumvertexcover", "minimumvertexcover_to_maximumindependentset", - "maximumindependentset_to_maximumsetpacking", "maximummatching_to_maximumsetpacking", - "minimumvertexcover_to_minimumsetcovering", - "maxcut_to_spinglass", "spinglass_to_maxcut", - "spinglass_to_qubo", "qubo_to_spinglass", - "maximumindependentset_to_qubo", "kcoloring_to_qubo", - "maximumsetpacking_to_qubo", "ksatisfiability_to_qubo", "ilp_to_qubo", - "satisfiability_to_maximumindependentset", "satisfiability_to_kcoloring", "satisfiability_to_minimumdominatingset", "satisfiability_to_ksatisfiability", - "circuitsat_to_spinglass", "factoring_to_circuitsat", - "maximumsetpacking_to_ilp", "maximummatching_to_ilp", - "kcoloring_to_ilp", "factoring_to_ilp", - "minimumsetcovering_to_ilp", - "minimumdominatingset_to_ilp", "maximumclique_to_ilp", - "travelingsalesman_to_ilp", + (source: "MaximumIndependentSet", target: "MinimumVertexCover"), + (source: "MinimumVertexCover", target: "MaximumIndependentSet"), + (source: "MaximumIndependentSet", target: "MaximumSetPacking"), + (source: "MaximumMatching", target: "MaximumSetPacking"), + (source: "MinimumVertexCover", target: "MinimumSetCovering"), + (source: "MaxCut", target: "SpinGlass"), + (source: "SpinGlass", target: "MaxCut"), + (source: "SpinGlass", target: "QUBO"), + (source: "QUBO", target: "SpinGlass"), + (source: "MaximumIndependentSet", target: "QUBO"), + (source: "KColoring", target: "QUBO"), + (source: "MaximumSetPacking", target: "QUBO"), + (source: "KSatisfiability", target: "QUBO"), + (source: "ILP", target: "QUBO"), + (source: "Satisfiability", target: "MaximumIndependentSet"), + (source: "Satisfiability", target: "KColoring"), + (source: "Satisfiability", target: "MinimumDominatingSet"), + (source: "Satisfiability", target: "KSatisfiability"), + (source: "CircuitSAT", target: "SpinGlass"), + (source: "Factoring", target: "CircuitSAT"), + (source: "MaximumSetPacking", target: "ILP"), + 
(source: "MaximumMatching", target: "ILP"), + (source: "KColoring", target: "ILP"), + (source: "Factoring", target: "ILP"), + (source: "MinimumSetCovering", target: "ILP"), + (source: "MinimumDominatingSet", target: "ILP"), + (source: "MaximumClique", target: "ILP"), + (source: "TravelingSalesman", target: "ILP"), ) -#let examples = example-files.map(n => { - let d = load-example(n) - (name: n, data: d) +#let examples = example-files.map(entry => { + let d = load-example(entry.source, entry.target) + (name: example-name(entry.source, entry.target), data: d) }) #pagebreak() diff --git a/docs/src/cli.md b/docs/src/cli.md index c3e2f2e4..ca43e792 100644 --- a/docs/src/cli.md +++ b/docs/src/cli.md @@ -42,13 +42,25 @@ Available backends: `highs` (default), `coin-cbc`, `clarabel`, `scip`, `lpsolve` # Create a Maximum Independent Set problem pred create MIS --graph 0-1,1-2,2-3 -o problem.json +# Create a weighted instance (variant auto-upgrades to i32) +pred create MIS --graph 0-1,1-2,2-3 --weights 3,1,2,1 -o weighted.json + +# Or start from a canonical model example +pred create --example MIS/SimpleGraph/i32 -o example.json + +# Or from a canonical rule example +pred create --example MVC/SimpleGraph/i32 --to MIS/SimpleGraph/i32 -o example.json + +# Inspect what's inside a problem file +pred inspect problem.json + # Solve it (auto-reduces to ILP) pred solve problem.json # Or solve with brute-force pred solve problem.json --solver brute-force -# Evaluate a specific configuration +# Evaluate a specific configuration (shows Valid(N) or Invalid) pred evaluate problem.json --config 1,0,1,0 # Reduce to another problem type and solve via brute-force @@ -60,6 +72,10 @@ pred create MIS --graph 0-1,1-2,2-3 | pred solve - pred create MIS --graph 0-1,1-2,2-3 | pred reduce - --to QUBO | pred solve - ``` +> **Note:** When you provide `--weights` with non-unit values (e.g., `3,1,2,1`), the variant is +> automatically upgraded from the default unit-weight (`One`) to `i32`. 
You can also specify the +> weighted variant explicitly: `pred create MIS/SimpleGraph/i32 --graph 0-1 --weights 3,1`. + ## Global Flags | Flag | Description | @@ -103,7 +119,7 @@ Use `pred show <problem>` to see variants, reductions, and fields. ### `pred show` — Inspect a problem -Show variants, fields, size fields, and reductions for a problem type. Use short aliases like `MIS` for `MaximumIndependentSet`. +Show variants, fields, size fields, and reductions for a problem type. `show` operates at the **type level** — it displays all variants of a problem, not a specific node. Slash suffixes (e.g., `MIS/UnitDiskGraph`) are rejected; use `pred to` or `pred from` for variant-level exploration. Use short aliases like `MIS` for `MaximumIndependentSet`. ```bash $ pred show MIS @@ -215,11 +231,14 @@ Path (2 steps): Factoring → CircuitSAT → SpinGlass {graph: "SimpleGraph", we Show all paths or save for later use with `pred reduce --via`: ```bash -pred path MIS QUBO --all # all paths +pred path MIS QUBO --all # all paths (up to 20) +pred path MIS QUBO --all --max-paths 50 # increase limit pred path MIS QUBO -o path.json # save path for `pred reduce --via` pred path MIS QUBO --all -o paths/ # save all paths to a folder ``` +When using `--all`, the output is capped at `--max-paths` (default: 20). If more paths exist, the output indicates truncation.
+ Use `--cost` to change the optimization strategy: ```bash @@ -243,6 +262,9 @@ pred export-graph -o reduction_graph.json # save to file Construct a problem instance from CLI arguments and save as JSON: ```bash +pred create --example MIS/SimpleGraph/i32 -o model.json +pred create --example MVC/SimpleGraph/i32 --to MIS/SimpleGraph/i32 -o problem.json +pred create --example MVC/SimpleGraph/i32 --to MIS/SimpleGraph/i32 --example-side target -o target.json pred create MIS --graph 0-1,1-2,2-3 -o problem.json pred create MIS --graph 0-1,1-2,2-3 --weights 2,1,3,1 -o problem.json pred create SAT --num-vars 3 --clauses "1,2;-1,3" -o sat.json @@ -254,6 +276,12 @@ pred create Factoring --target 15 --bits-m 4 --bits-n 4 -o factoring.json pred create Factoring --target 21 --bits-m 3 --bits-n 3 -o factoring2.json ``` +Canonical examples are useful when you want a known-good instance from the paper/example database. +For model examples, `pred create --example <node>` emits the canonical instance for that +graph node. +For rule examples, `pred create --example <source> --to <target>` emits the source +instance by default; use `--example-side target` to emit the reduction target instance instead. + Generate random instances for graph-based problems: ```bash @@ -437,6 +465,8 @@ You can use short aliases instead of full problem names (shown in `pred list`): You can also specify variants with a slash: `MIS/UnitDiskGraph`, `SpinGlass/SimpleGraph`. +When a bare name (no slash) is used in commands like `path`, `to`, `from`, `create`, or `reduce`, it resolves to the **declared default variant** for that problem type. For example, `MIS` resolves to `MaximumIndependentSet/SimpleGraph/One`.
+ If you mistype a problem name, `pred` will suggest the closest match: ```bash diff --git a/examples/chained_reduction_ksat_to_mis.rs b/examples/chained_reduction_ksat_to_mis.rs deleted file mode 100644 index 5fd1aab2..00000000 --- a/examples/chained_reduction_ksat_to_mis.rs +++ /dev/null @@ -1,95 +0,0 @@ -// # Chained Reduction: 3-SAT → MIS via Reduction Chains -// -// Demonstrates the `find_cheapest_path` + `reduce_along_path` API to chain -// reductions automatically: KSatisfiability → Satisfiability → MIS. -// The target MIS is then reduced further to ILP and solved there. - -// ANCHOR: imports -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::rules::{MinimizeSteps, ReductionGraph}; -use problemreductions::solvers::ILPSolver; -use problemreductions::topology::SimpleGraph; -use problemreductions::types::ProblemSize; -use problemreductions::variant::K3; -// ANCHOR_END: imports - -pub fn run() { - // ANCHOR: example - let graph = ReductionGraph::new(); - - // Find variant-exact path from KSat to MIS - let src_var = ReductionGraph::variant_to_map(&KSatisfiability::::variant()); - let dst_var = - ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); - let rpath = graph - .find_cheapest_path( - "KSatisfiability", - &src_var, - "MaximumIndependentSet", - &dst_var, - &ProblemSize::new(vec![]), - &MinimizeSteps, - ) - .unwrap(); - - // Create: 3-SAT formula (a∨b∨¬c)∧(¬a∨¬b∨¬c)∧(¬a∨b∨c)∧(a∨¬b∨c) - let ksat = KSatisfiability::::new( - 3, - vec![ - CNFClause::new(vec![1, 2, -3]), - CNFClause::new(vec![-1, -2, -3]), - CNFClause::new(vec![-1, 2, 3]), - CNFClause::new(vec![1, -2, 3]), - ], - ); - - // Reduce: the reduction chain handles all intermediate steps - let chain = graph - .reduce_along_path(&rpath, &ksat as &dyn std::any::Any) - .unwrap(); - let target: &MaximumIndependentSet = chain.target_problem(); - - // Reduce the target MIS further to ILP through the registered rule graph. 
- let ilp_var = ReductionGraph::variant_to_map(&ILP::::variant()); - let ilp_path = graph - .find_cheapest_path( - "MaximumIndependentSet", - &dst_var, - "ILP", - &ilp_var, - &ProblemSize::new(vec![]), - &MinimizeSteps, - ) - .unwrap(); - let ilp_chain = graph - .reduce_along_path(&ilp_path, target as &dyn std::any::Any) - .unwrap(); - let ilp: &ILP = ilp_chain.target_problem(); - - // Solve the target MIS via the derived ILP. - let solver = ILPSolver::new(); - let ilp_solution = solver.solve(ilp).unwrap(); - let mis_solution = ilp_chain.extract_solution(&ilp_solution); - let original = chain.extract_solution(&mis_solution); - - // Verify: satisfies the original 3-SAT formula - assert!(ksat.evaluate(&original)); - // ANCHOR_END: example - - // ANCHOR: overhead - // Compose overheads symbolically along the path - // Maps source problem variables → final target problem variables - let composed = graph.compose_path_overhead(&rpath); - for (field, poly) in &composed.output_size { - println!(" {} = {}", field, poly); - } - // ANCHOR_END: overhead - - println!("3-SAT solution: {:?}", original); - println!("Reduction path: {:?}", rpath.type_names()); -} - -fn main() { - run() -} diff --git a/examples/detect_isolated_problems.rs b/examples/detect_isolated_problems.rs index 5e6c5c1e..c5445d62 100644 --- a/examples/detect_isolated_problems.rs +++ b/examples/detect_isolated_problems.rs @@ -1,90 +1,35 @@ //! Detect problems that have no reduction path connecting them to the main graph. //! -//! Finds: -//! 1. Completely isolated problem types (no reductions in or out) -//! 2. Disconnected components (groups not reachable from the largest component) -//! //! 
Run with: `cargo run --example detect_isolated_problems` +use problemreductions::rules::analysis::check_connectivity; use problemreductions::rules::ReductionGraph; -use std::collections::{BTreeMap, BTreeSet, VecDeque}; fn main() { let graph = ReductionGraph::new(); + let report = check_connectivity(&graph); - let mut types = graph.problem_types(); - types.sort(); - - // Build undirected adjacency at the problem-type level - let mut adj: BTreeMap<&str, BTreeSet<&str>> = BTreeMap::new(); - for &name in &types { - adj.entry(name).or_default(); - for edge in graph.outgoing_reductions(name) { - adj.entry(name).or_default().insert(edge.target_name); - adj.entry(edge.target_name).or_default().insert(name); - } - } - - // Find connected components via BFS - let mut visited: BTreeSet<&str> = BTreeSet::new(); - let mut components: Vec> = Vec::new(); - - for &name in &types { - if visited.contains(name) { - continue; - } - let mut component = Vec::new(); - let mut queue = VecDeque::new(); - queue.push_back(name); - visited.insert(name); - - while let Some(current) = queue.pop_front() { - component.push(current); - if let Some(neighbors) = adj.get(current) { - for &neighbor in neighbors { - if visited.insert(neighbor) { - queue.push_back(neighbor); - } - } - } - } - component.sort(); - components.push(component); - } - - // Sort components by size (largest first) - components.sort_by_key(|b| std::cmp::Reverse(b.len())); - - // Identify isolated types (no edges at all) - let isolated: Vec<&str> = types - .iter() - .copied() - .filter(|name| adj.get(name).is_some_and(|n| n.is_empty())) - .collect(); - - // Report println!("Reduction Graph Connectivity Report"); println!("===================================="); - println!("Total problem types: {}", types.len()); - println!("Total reductions: {}", graph.num_reductions()); - println!("Connected components: {}", components.len()); + println!("Total problem types: {}", report.total_types); + println!("Total reductions: {}", 
report.total_reductions); + println!("Connected components: {}", report.components.len()); println!(); - if !isolated.is_empty() { + if !report.isolated.is_empty() { println!( "Isolated problems ({}) — no reductions in or out:", - isolated.len() + report.isolated.len() ); - for name in &isolated { - let num_variants = graph.variants_for(name).len(); - println!(" {name} ({num_variants} variant(s))"); + for p in &report.isolated { + println!(" {} ({} variant(s))", p.name, p.num_variants); } println!(); } - if components.len() > 1 { + if report.components.len() > 1 { println!("Disconnected components:"); - for (i, comp) in components.iter().enumerate() { + for (i, comp) in report.components.iter().enumerate() { let marker = if i == 0 { " (main)" } else { "" }; println!("\n Component {}{marker} — {} types:", i + 1, comp.len()); for name in comp { @@ -98,19 +43,20 @@ fn main() { println!("All problem types with reductions are in a single connected component."); } - // Also report at the variant level println!(); println!("Variant-level detail for isolated problems:"); - for name in &isolated { - let variants = graph.variants_for(name); - for v in &variants { - let label = if v.is_empty() { - name.to_string() + for p in &report.isolated { + for (variant, complexity) in &p.variant_complexities { + let label = if variant.is_empty() { + p.name.to_string() } else { - let parts: Vec = v.iter().map(|(k, val)| format!("{k}: {val}")).collect(); - format!("{name} {{{}}}", parts.join(", ")) + let parts: Vec = variant + .iter() + .map(|(k, val)| format!("{k}: {val}")) + .collect(); + format!("{} {{{}}}", p.name, parts.join(", ")) }; - if let Some(c) = graph.variant_complexity(name, v) { + if let Some(c) = complexity { println!(" {label} complexity: {c}"); } else { println!(" {label}"); @@ -119,7 +65,7 @@ fn main() { } // Exit with non-zero if there are isolated types or multiple components - if !isolated.is_empty() || components.len() > 1 { + if !report.isolated.is_empty() || 
report.components.len() > 1 { std::process::exit(1); } } diff --git a/examples/detect_unreachable_from_3sat.rs b/examples/detect_unreachable_from_3sat.rs index b770ae6e..f4e0baf8 100644 --- a/examples/detect_unreachable_from_3sat.rs +++ b/examples/detect_unreachable_from_3sat.rs @@ -7,144 +7,73 @@ //! //! Run with: `cargo run --example detect_unreachable_from_3sat` +use problemreductions::rules::analysis::{check_reachability_from_3sat, UnreachableReason}; use problemreductions::rules::ReductionGraph; -use std::collections::{BTreeMap, BTreeSet, VecDeque}; - -const SOURCE: &str = "KSatisfiability"; fn main() { let graph = ReductionGraph::new(); + let report = check_reachability_from_3sat(&graph); - let mut types = graph.problem_types(); - types.sort(); - - // Build directed adjacency at the type level - let mut adj: BTreeMap<&str, BTreeSet<&str>> = BTreeMap::new(); - for &name in &types { - adj.entry(name).or_default(); - for edge in graph.outgoing_reductions(name) { - adj.entry(name).or_default().insert(edge.target_name); - } - } - - // BFS from 3-SAT (KSatisfiability) following directed edges - let mut reachable: BTreeMap<&str, usize> = BTreeMap::new(); // name -> min hops - let mut queue: VecDeque<(&str, usize)> = VecDeque::new(); - reachable.insert(SOURCE, 0); - queue.push_back((SOURCE, 0)); - - while let Some((current, hops)) = queue.pop_front() { - if let Some(neighbors) = adj.get(current) { - for &neighbor in neighbors { - if !reachable.contains_key(neighbor) { - reachable.insert(neighbor, hops + 1); - queue.push_back((neighbor, hops + 1)); - } - } - } - } - - // Classify unreachable problems - let unreachable_types: Vec<&str> = types - .iter() - .copied() - .filter(|name| !reachable.contains_key(name)) - .collect(); - - // Report println!("NP-Hardness Proof Chain Report (from 3-SAT)"); println!("============================================="); - println!("Total problem types: {}", types.len()); - println!("Reachable from 3-SAT: {}", reachable.len()); - 
println!("Not reachable: {}", unreachable_types.len()); + println!("Total problem types: {}", report.total_types); + println!("Reachable from 3-SAT: {}", report.reachable.len()); + println!("Not reachable: {}", report.unreachable.len()); println!(); // Show reachable problems sorted by hop distance - println!("Reachable from 3-SAT ({}):", reachable.len()); - let mut by_hops: Vec<(&&str, &usize)> = reachable.iter().collect(); + println!("Reachable from 3-SAT ({}):", report.reachable.len()); + let mut by_hops: Vec<(&&str, &usize)> = report.reachable.iter().collect(); by_hops.sort_by_key(|(name, hops)| (**hops, **name)); for (name, hops) in &by_hops { println!(" [{hops} hops] {name}"); } println!(); - if unreachable_types.is_empty() { + if report.unreachable.is_empty() { println!("All problems are reachable from 3-SAT."); return; } - // Categorize unreachable problems - let mut np_hard_missing: Vec<&str> = Vec::new(); - let mut p_time: Vec<&str> = Vec::new(); - let mut intermediate: Vec<&str> = Vec::new(); - let mut orphans: Vec<&str> = Vec::new(); - - // Known P-time problems and variants - let p_time_checks: &[(&str, Option<(&str, &str)>)] = &[ - ("MaximumMatching", None), - ("KSatisfiability", Some(("k", "K2"))), - ("KColoring", Some(("graph", "SimpleGraph"))), - ]; - - // Known intermediate-complexity problems - let intermediate_names: &[&str] = &["Factoring"]; - - for &name in &unreachable_types { - // Check if it's an orphan (no edges at all) - let out = graph.outgoing_reductions(name); - let inc = graph.incoming_reductions(name); - if out.is_empty() && inc.is_empty() { - orphans.push(name); - continue; - } - - // Check if it's a known P-time problem - let is_p = p_time_checks.iter().any(|(pname, variant_check)| { - if *pname != name { - return false; - } - match variant_check { - None => true, - Some((key, val)) => { - // Check if ALL variants of this problem are P-time - // (conservative: if any variant could be hard, don't classify as P) - let variants = 
graph.variants_for(name); - variants.len() == 1 && variants[0].get(*key).map(|s| s.as_str()) == Some(*val) - } - } - }); - if is_p { - p_time.push(name); - continue; - } - - // Check if it's known intermediate complexity - if intermediate_names.contains(&name) { - intermediate.push(name); - continue; - } - - // Otherwise it's NP-hard but missing a proof chain - np_hard_missing.push(name); - } + let missing: Vec<_> = report + .unreachable + .iter() + .filter(|p| p.reason == UnreachableReason::MissingProofChain) + .collect(); + let in_p: Vec<_> = report + .unreachable + .iter() + .filter(|p| p.reason == UnreachableReason::InP) + .collect(); + let intermediate: Vec<_> = report + .unreachable + .iter() + .filter(|p| p.reason == UnreachableReason::Intermediate) + .collect(); + let orphans: Vec<_> = report + .unreachable + .iter() + .filter(|p| p.reason == UnreachableReason::Orphan) + .collect(); - if !np_hard_missing.is_empty() { + if !missing.is_empty() { println!( "NP-hard but NO proof chain from 3-SAT ({}) — missing reductions:", - np_hard_missing.len() + missing.len() ); - for name in &np_hard_missing { - let out_count = graph.outgoing_reductions(name).len(); - let in_count = graph.incoming_reductions(name).len(); - println!(" {name} ({out_count} out, {in_count} in)"); + for p in &missing { + println!( + " {} ({} out, {} in)", + p.name, p.outgoing_count, p.incoming_count + ); } println!(); } - if !p_time.is_empty() { - println!("In P — correctly unreachable ({}):", p_time.len()); - for name in &p_time { - println!(" {name}"); + if !in_p.is_empty() { + println!("In P — correctly unreachable ({}):", in_p.len()); + for p in &in_p { + println!(" {}", p.name); } println!(); } @@ -154,22 +83,22 @@ fn main() { "Intermediate complexity — correctly unreachable ({}):", intermediate.len() ); - for name in &intermediate { - println!(" {name}"); + for p in &intermediate { + println!(" {}", p.name); } println!(); } if !orphans.is_empty() { println!("Orphans — no reductions at all 
({}):", orphans.len()); - for name in &orphans { - println!(" {name}"); + for p in &orphans { + println!(" {}", p.name); } println!(); } // Exit with non-zero if there are NP-hard problems missing proof chains - if !np_hard_missing.is_empty() { + if !missing.is_empty() { std::process::exit(1); } } diff --git a/examples/export_examples.rs b/examples/export_examples.rs new file mode 100644 index 00000000..af5ee8e0 --- /dev/null +++ b/examples/export_examples.rs @@ -0,0 +1,23 @@ +use problemreductions::example_db::{build_model_db, build_rule_db, default_generated_dir}; +use problemreductions::export::{write_model_db_to, write_rule_db_to}; +use std::fs; + +fn main() { + let output_dir = default_generated_dir(); + if output_dir.exists() { + fs::remove_dir_all(&output_dir).expect("Failed to clear generated examples directory"); + } + fs::create_dir_all(&output_dir).expect("Failed to create generated examples directory"); + + let rule_db = build_rule_db().expect("Failed to build canonical rule database"); + let model_db = build_model_db().expect("Failed to build canonical model database"); + + write_rule_db_to(&output_dir, &rule_db); + write_model_db_to(&output_dir, &model_db); + + println!( + "Exported {} rule examples and {} model examples", + rule_db.rules.len(), + model_db.models.len() + ); +} diff --git a/examples/hamiltonian_path.rs b/examples/hamiltonian_path.rs deleted file mode 100644 index 311a03c3..00000000 --- a/examples/hamiltonian_path.rs +++ /dev/null @@ -1,41 +0,0 @@ -use problemreductions::models::graph::HamiltonianPath; -use problemreductions::topology::SimpleGraph; -use problemreductions::{BruteForce, Problem}; - -pub fn run() { - // Instance 2 from issue: 6 vertices, 8 edges (non-trivial) - let graph = SimpleGraph::new( - 6, - vec![ - (0, 1), - (0, 2), - (1, 3), - (2, 3), - (3, 4), - (3, 5), - (4, 2), - (5, 1), - ], - ); - let problem = HamiltonianPath::new(graph); - - println!("HamiltonianPath instance:"); - println!(" Vertices: {}", 
problem.num_vertices()); - println!(" Edges: {}", problem.num_edges()); - - let json = serde_json::to_string_pretty(&problem).unwrap(); - println!(" JSON: {}", json); - - // Find all Hamiltonian paths - let solver = BruteForce::new(); - let solutions = solver.find_all_satisfying(&problem); - println!(" Solutions found: {}", solutions.len()); - - for (i, sol) in solutions.iter().enumerate() { - println!(" Path {}: {:?} (valid: {})", i, sol, problem.evaluate(sol)); - } -} - -fn main() { - run(); -} diff --git a/examples/reduction_binpacking_to_ilp.rs b/examples/reduction_binpacking_to_ilp.rs deleted file mode 100644 index 74b87e1b..00000000 --- a/examples/reduction_binpacking_to_ilp.rs +++ /dev/null @@ -1,113 +0,0 @@ -// # Bin Packing to ILP Reduction -// -// ## Mathematical Formulation -// Variables: x_{ij} in {0,1} (item i in bin j), y_j in {0,1} (bin j used). -// Constraints: -// Assignment: sum_j x_{ij} = 1 for each item i. -// Capacity: sum_i w_i * x_{ij} <= C * y_j for each bin j. -// Objective: minimize sum_j y_j. -// -// ## This Example -// - Instance: 5 items with weights [6, 5, 5, 4, 3], bin capacity 10 -// - Optimal: 3 bins (e.g., {6,4}, {5,5}, {3}) -// - Target ILP: 30 binary variables (25 assignment + 5 bin-open), 10 constraints -// -// ## Output -// Exports `docs/paper/examples/binpacking_to_ilp.json` and `binpacking_to_ilp.result.json`. - -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::solvers::ILPSolver; -use problemreductions::types::SolutionSize; - -pub fn run() { - // 1. Create BinPacking instance: 5 items, capacity 10 - let weights = vec![6, 5, 5, 4, 3]; - let capacity = 10; - let bp = BinPacking::new(weights.clone(), capacity); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&bp); - let ilp = reduction.target_problem(); - - // 3. 
Print transformation - println!("\n=== Problem Transformation ==="); - println!( - "Source: BinPacking with {} items, weights {:?}, capacity {}", - bp.num_items(), - bp.sizes(), - bp.capacity() - ); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - // 4. Solve target ILP using ILP solver (BruteForce would be too slow: 2^30 configs) - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - - println!("\n=== Solution ==="); - - // 5. Extract source solution - let bp_solution = reduction.extract_solution(&ilp_solution); - println!( - "Source BinPacking solution (bin assignments): {:?}", - bp_solution - ); - - // 6. Verify - let size = bp.evaluate(&bp_solution); - println!("Number of bins used: {:?}", size); - assert!(size.is_valid()); - assert_eq!(size, SolutionSize::Valid(3)); - println!("\nReduction verified successfully"); - - // 7. Collect solution and export JSON - let mut solutions = Vec::new(); - { - let source_sol = reduction.extract_solution(&ilp_solution); - let s = bp.evaluate(&source_sol); - assert!(s.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: ilp_solution.clone(), - }); - } - - let source_variant = variant_to_map(BinPacking::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = - lookup_overhead("BinPacking", &source_variant, "ILP", &target_variant).unwrap_or_default(); - - let data = ReductionData { - source: ProblemSide { - problem: BinPacking::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_items": bp.num_items(), - "sizes": bp.sizes(), - "capacity": bp.capacity(), - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - 
}; - - let results = ResultData { solutions }; - let name = "binpacking_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_circuitsat_to_ilp.rs b/examples/reduction_circuitsat_to_ilp.rs deleted file mode 100644 index 599a0960..00000000 --- a/examples/reduction_circuitsat_to_ilp.rs +++ /dev/null @@ -1,169 +0,0 @@ -// # Circuit-SAT to ILP Reduction -// -// ## Mathematical Equivalence -// Each logic gate (AND, OR, NOT, XOR) is encoded as linear constraints over -// binary variables. The expression tree is flattened by introducing an auxiliary -// variable per internal node (Tseitin-style). Any feasible ILP solution is a -// satisfying circuit assignment. -// -// ## This Example -// - Instance: 1-bit full adder circuit (a, b, cin -> sum, cout) -// - sum = a XOR b XOR cin (via intermediate t = a XOR b) -// - cout = (a AND b) OR (cin AND t) -// - 5 gates (2 XOR, 2 AND, 1 OR), ~8 variables -// - Source: CircuitSAT with 3 inputs -// - Target: ILP (feasibility, trivial objective) -// -// ## Output -// Exports `docs/paper/examples/circuitsat_to_ilp.json` and `circuitsat_to_ilp.result.json`. -// -// ## Usage -// ```bash -// cargo run --example reduction_circuitsat_to_ilp --features ilp-solver -// ``` - -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::models::formula::{Assignment, BooleanExpr, Circuit}; -use problemreductions::prelude::*; - -pub fn run() { - // 1. Create CircuitSAT instance: 1-bit full adder - // sum = a XOR b XOR cin, cout = (a AND b) OR (cin AND (a XOR b)) - // Decomposed into 5 gates with intermediate variables t, ab, cin_t. 
- let circuit = Circuit::new(vec![ - // Intermediate: t = a XOR b - Assignment::new( - vec!["t".to_string()], - BooleanExpr::xor(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]), - ), - // sum = t XOR cin - Assignment::new( - vec!["sum".to_string()], - BooleanExpr::xor(vec![BooleanExpr::var("t"), BooleanExpr::var("cin")]), - ), - // ab = a AND b - Assignment::new( - vec!["ab".to_string()], - BooleanExpr::and(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]), - ), - // cin_t = cin AND t - Assignment::new( - vec!["cin_t".to_string()], - BooleanExpr::and(vec![BooleanExpr::var("cin"), BooleanExpr::var("t")]), - ), - // cout = ab OR cin_t - Assignment::new( - vec!["cout".to_string()], - BooleanExpr::or(vec![BooleanExpr::var("ab"), BooleanExpr::var("cin_t")]), - ), - ]); - let circuit_sat = CircuitSAT::new(circuit); - - println!("=== Circuit-SAT to ILP Reduction ===\n"); - println!("Source circuit: 1-bit full adder (a, b, cin -> sum, cout)"); - println!( - " {} variables: {:?}", - circuit_sat.num_variables(), - circuit_sat.variable_names() - ); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&circuit_sat); - let ilp = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: CircuitSAT with {} variables", - circuit_sat.num_variables() - ); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_variables(), - ilp.constraints.len() - ); - println!(" Each logic gate becomes a set of linear constraints."); - println!(" XOR gates use 4 constraints each; AND/OR use k+1 constraints."); - println!(" Objective is trivial (minimize 0): feasibility = satisfying assignment."); - - // 3. Solve the target ILP problem - let solver = BruteForce::new(); - let ilp_solutions = solver.find_all_best(ilp); - println!("\n=== Solution ==="); - println!( - "Target ILP feasible solutions found: {}", - ilp_solutions.len() - ); - - // 4. 
Extract and verify source solutions - println!("\nAll extracted CircuitSAT solutions:"); - let mut valid_count = 0; - let mut solutions = Vec::new(); - for ilp_sol in &ilp_solutions { - let circuit_sol = reduction.extract_solution(ilp_sol); - let valid = circuit_sat.evaluate(&circuit_sol); - let var_names = circuit_sat.variable_names(); - let assignment_str: Vec = var_names - .iter() - .zip(circuit_sol.iter()) - .map(|(name, &val)| format!("{}={}", name, val)) - .collect(); - println!( - " ILP config {:?} -> Circuit: [{}], valid: {}", - ilp_sol, - assignment_str.join(", "), - valid - ); - if valid { - valid_count += 1; - solutions.push(SolutionPair { - source_config: circuit_sol, - target_config: ilp_sol.clone(), - }); - } - } - println!( - "\n{}/{} ILP solutions map to valid circuit assignments", - valid_count, - ilp_solutions.len() - ); - assert!( - valid_count > 0, - "At least one ILP solution must be a valid circuit assignment" - ); - - println!("\nReduction verified successfully"); - - // 5. 
Export JSON - let source_variant = variant_to_map(CircuitSAT::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead("CircuitSAT", &source_variant, "ILP", &target_variant) - .expect("CircuitSAT -> ILP overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: CircuitSAT::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_gates": circuit_sat.circuit().num_assignments(), - "num_variables": circuit_sat.num_variables(), - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_variables(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "circuitsat_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_circuitsat_to_spinglass.rs b/examples/reduction_circuitsat_to_spinglass.rs deleted file mode 100644 index 7d986cd0..00000000 --- a/examples/reduction_circuitsat_to_spinglass.rs +++ /dev/null @@ -1,165 +0,0 @@ -// # Circuit-SAT to Spin Glass Reduction -// -// ## Mathematical Equivalence -// Each logic gate (AND, OR, NOT, XOR) maps to a spin glass gadget whose ground -// states encode valid input-output combinations. The full circuit becomes a sum -// of gadget Hamiltonians; ground states correspond to satisfying assignments. -// -// ## This Example -// - Instance: 1-bit full adder circuit (a, b, cin -> sum, cout) -// - sum = a XOR b XOR cin (via intermediate t = a XOR b) -// - cout = (a AND b) OR (cin AND t) -// - 5 gates (2 XOR, 2 AND, 1 OR), ~8 variables -// - Source: CircuitSAT with 3 inputs -// - Target: SpinGlass -// -// ## Output -// Exports `docs/paper/examples/circuitsat_to_spinglass.json` and `circuitsat_to_spinglass.result.json`. 
- -use problemreductions::export::*; -use problemreductions::models::formula::{Assignment, BooleanExpr, Circuit}; -use problemreductions::prelude::*; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // 1. Create CircuitSAT instance: 1-bit full adder - // sum = a XOR b XOR cin, cout = (a AND b) OR (cin AND (a XOR b)) - // Decomposed into 5 gates with intermediate variables t, ab, cin_t. - let circuit = Circuit::new(vec![ - // Intermediate: t = a XOR b - Assignment::new( - vec!["t".to_string()], - BooleanExpr::xor(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]), - ), - // sum = t XOR cin - Assignment::new( - vec!["sum".to_string()], - BooleanExpr::xor(vec![BooleanExpr::var("t"), BooleanExpr::var("cin")]), - ), - // ab = a AND b - Assignment::new( - vec!["ab".to_string()], - BooleanExpr::and(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]), - ), - // cin_t = cin AND t - Assignment::new( - vec!["cin_t".to_string()], - BooleanExpr::and(vec![BooleanExpr::var("cin"), BooleanExpr::var("t")]), - ), - // cout = ab OR cin_t - Assignment::new( - vec!["cout".to_string()], - BooleanExpr::or(vec![BooleanExpr::var("ab"), BooleanExpr::var("cin_t")]), - ), - ]); - let circuit_sat = CircuitSAT::new(circuit); - - println!("=== Circuit-SAT to Spin Glass Reduction ===\n"); - println!("Source circuit: 1-bit full adder (a, b, cin -> sum, cout)"); - println!( - " {} variables: {:?}", - circuit_sat.num_variables(), - circuit_sat.variable_names() - ); - - // 2. 
Reduce to SpinGlass - let reduction = ReduceTo::>::reduce_to(&circuit_sat); - let sg = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: CircuitSAT with {} variables", - circuit_sat.num_variables() - ); - println!( - "Target: SpinGlass with {} spins, {} interactions", - sg.num_spins(), - sg.graph().num_edges() - ); - println!(" Each logic gate (AND, OR, XOR) becomes a spin glass gadget."); - println!(" Gadget ground states encode valid truth table entries."); - println!(" Full adder uses 5 gadgets for its 5 gate decomposition."); - - // 3. Solve the target SpinGlass problem - let solver = BruteForce::new(); - let sg_solutions = solver.find_all_best(sg); - println!("\n=== Solution ==="); - println!( - "Target SpinGlass ground states found: {}", - sg_solutions.len() - ); - - // 4. Extract and verify source solutions - println!("\nAll extracted CircuitSAT solutions:"); - let mut valid_count = 0; - let mut solutions = Vec::new(); - for sg_sol in &sg_solutions { - let circuit_sol = reduction.extract_solution(sg_sol); - let size = circuit_sat.evaluate(&circuit_sol); - let var_names = circuit_sat.variable_names(); - let assignment_str: Vec = var_names - .iter() - .zip(circuit_sol.iter()) - .map(|(name, &val)| format!("{}={}", name, val)) - .collect(); - // CircuitSAT is a satisfaction problem (bool), so evaluate returns bool directly - // The bool IS the validity - println!( - " SG config {:?} -> Circuit: [{}], valid: {}", - sg_sol, - assignment_str.join(", "), - size - ); - if size { - valid_count += 1; - solutions.push(SolutionPair { - source_config: circuit_sol, - target_config: sg_sol.clone(), - }); - } - } - println!( - "\n{}/{} SpinGlass ground states map to valid circuit assignments", - valid_count, - sg_solutions.len() - ); - assert!( - valid_count > 0, - "At least one ground state must be a valid circuit assignment" - ); - - println!("\nReduction verified successfully"); - - // 5. 
Export JSON - let source_variant = variant_to_map(CircuitSAT::variant()); - let target_variant = variant_to_map(SpinGlass::::variant()); - let overhead = lookup_overhead("CircuitSAT", &source_variant, "SpinGlass", &target_variant) - .expect("CircuitSAT -> SpinGlass overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: CircuitSAT::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_gates": circuit_sat.circuit().num_assignments(), - "num_variables": circuit_sat.num_variables(), - }), - }, - target: ProblemSide { - problem: SpinGlass::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_spins": sg.num_variables(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "circuitsat_to_spinglass"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_factoring_to_circuitsat.rs b/examples/reduction_factoring_to_circuitsat.rs deleted file mode 100644 index 7083d46c..00000000 --- a/examples/reduction_factoring_to_circuitsat.rs +++ /dev/null @@ -1,226 +0,0 @@ -// # Factoring to Circuit-SAT Reduction -// -// ## Mathematical Equivalence -// Builds an array multiplier circuit for p * q = N. The circuit is satisfiable -// iff N can be factored within the given bit bounds. -// -// ## This Example -// - Instance: Factor 35 = 5 × 7 (m=3 bits, n=3 bits) -// - Reference: Based on ProblemReductions.jl factoring example -// - Source: Factoring(3, 3, 35) -// - Target: CircuitSAT -// -// We solve the source Factoring problem directly with BruteForce (only 6 binary -// variables), then verify the reduction produces a valid CircuitSAT encoding by -// simulating the circuit forward from a known factorization to build a complete -// satisfying assignment. -// -// ## Output -// Exports `docs/paper/examples/factoring_to_circuitsat.json` and `factoring_to_circuitsat.result.json`. 
- -use problemreductions::export::*; -use problemreductions::models::formula::Circuit; -use problemreductions::prelude::*; -use std::collections::HashMap; - -/// Simulate a circuit forward: given input variable values, compute all internal -/// variable values by evaluating each assignment in order. -fn simulate_circuit( - circuit: &Circuit, - initial_assignments: &HashMap, -) -> HashMap { - let mut values = initial_assignments.clone(); - for assignment in &circuit.assignments { - let result = assignment.expr.evaluate(&values); - for output in &assignment.outputs { - values.insert(output.clone(), result); - } - } - values -} - -pub fn run() { - // 1. Create Factoring instance: factor 35 with 3-bit factors - // Possible: 5*7=35 or 7*5=35 - let factoring = Factoring::new(3, 3, 35); - - println!("=== Factoring to Circuit-SAT Reduction ===\n"); - println!( - "Source: Factor {} with {}-bit * {}-bit factors", - factoring.target(), - factoring.m(), - factoring.n() - ); - println!( - " {} total variables ({} bits for p, {} bits for q)", - factoring.num_variables(), - factoring.m(), - factoring.n() - ); - - // 2. Solve the source Factoring problem directly (only 6 binary variables) - let solver = BruteForce::new(); - let factoring_solutions = solver.find_all_best(&factoring); - println!("\nFactoring solutions found: {}", factoring_solutions.len()); - for sol in &factoring_solutions { - let (a, b) = factoring.read_factors(sol); - println!(" p={}, q={} -> {} * {} = {}", a, b, a, b, a * b); - } - - // 3. Reduce Factoring -> CircuitSAT - let reduction = ReduceTo::::reduce_to(&factoring); - let circuit_sat = reduction.target_problem(); - - println!("\n=== Factoring -> CircuitSAT ==="); - println!( - "CircuitSAT: {} variables, {} assignments (gates)", - circuit_sat.num_variables(), - circuit_sat.circuit().num_assignments() - ); - println!( - " The multiplier circuit computes p * q and constrains output = {}.", - factoring.target() - ); - - // 4. 
Verify using forward simulation - // Take a known valid factorization, set the input variables (p and q bits), - // and simulate the circuit to get all internal variable values. - let factoring_sol = &factoring_solutions[0]; - let (a, b) = factoring.read_factors(factoring_sol); - println!("\n=== Forward Simulation Verification ==="); - println!( - "Known factorization: {} * {} = {} (bits: {:?})", - a, - b, - a * b, - factoring_sol - ); - - // Set input variables: p1..p3 for first factor, q1..q3 for second factor - let mut input_values: HashMap = HashMap::new(); - for (i, &bit) in factoring_sol.iter().enumerate().take(factoring.m()) { - input_values.insert(format!("p{}", i + 1), bit == 1); - } - for (i, &bit) in factoring_sol[factoring.m()..] - .iter() - .enumerate() - .take(factoring.n()) - { - input_values.insert(format!("q{}", i + 1), bit == 1); - } - println!("Input variables: {:?}", input_values); - - // Simulate the circuit forward - let all_values = simulate_circuit(circuit_sat.circuit(), &input_values); - - // Convert to a config vector matching CircuitSAT variable order - let var_names = circuit_sat.variable_names(); - let circuit_config: Vec = var_names - .iter() - .map(|name| { - if *all_values.get(name).unwrap_or(&false) { - 1 - } else { - 0 - } - }) - .collect(); - - // Verify the circuit is satisfied - let circuit_satisfied = circuit_sat.evaluate(&circuit_config); - println!("Circuit satisfied: {}", circuit_satisfied); - assert!( - circuit_satisfied, - "Forward-simulated circuit assignment must satisfy all gates" - ); - - // Verify extraction round-trips correctly - let extracted = reduction.extract_solution(&circuit_config); - println!("Extracted factoring solution: {:?}", extracted); - let (ea, eb) = factoring.read_factors(&extracted); - println!("Extracted factors: {} * {} = {}", ea, eb, ea * eb); - assert_eq!( - ea * eb, - factoring.target(), - "Round-trip must preserve factorization" - ); - - // 5. 
Verify all factoring solutions can be simulated through the circuit - println!( - "\nVerifying all {} factoring solutions through circuit:", - factoring_solutions.len() - ); - let mut solutions = Vec::new(); - for sol in &factoring_solutions { - let (fa, fb) = factoring.read_factors(sol); - let mut inputs: HashMap = HashMap::new(); - for (i, &bit) in sol.iter().enumerate().take(factoring.m()) { - inputs.insert(format!("p{}", i + 1), bit == 1); - } - for (i, &bit) in sol[factoring.m()..].iter().enumerate().take(factoring.n()) { - inputs.insert(format!("q{}", i + 1), bit == 1); - } - let vals = simulate_circuit(circuit_sat.circuit(), &inputs); - let config: Vec = var_names - .iter() - .map(|name| { - if *vals.get(name).unwrap_or(&false) { - 1 - } else { - 0 - } - }) - .collect(); - let satisfied = circuit_sat.evaluate(&config); - println!( - " {} * {} = {}: circuit satisfied = {}", - fa, - fb, - fa * fb, - satisfied - ); - assert!(satisfied); - - solutions.push(SolutionPair { - source_config: sol.clone(), - target_config: config, - }); - } - - println!("\nReduction verified successfully: 35 = 5 * 7"); - - // 6. 
Export JSON - let source_variant = variant_to_map(Factoring::variant()); - let target_variant = variant_to_map(CircuitSAT::variant()); - let overhead = lookup_overhead("Factoring", &source_variant, "CircuitSAT", &target_variant) - .expect("Factoring -> CircuitSAT overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: Factoring::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "number": factoring.target(), - "num_bits_first": factoring.m(), - "num_bits_second": factoring.n(), - }), - }, - target: ProblemSide { - problem: CircuitSAT::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_variables": circuit_sat.num_variables(), - "num_gates": circuit_sat.circuit().num_assignments(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "factoring_to_circuitsat"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_factoring_to_ilp.rs b/examples/reduction_factoring_to_ilp.rs deleted file mode 100644 index 12c6e564..00000000 --- a/examples/reduction_factoring_to_ilp.rs +++ /dev/null @@ -1,107 +0,0 @@ -// # Factoring to ILP Reduction -// -// ## Mathematical Formulation -// Uses McCormick linearization for binary products with carry propagation. -// Variables: p_i, q_j (factor bits), z_ij (product bits), c_k (carries). -// Constraints: -// (1) McCormick: z_ij <= p_i, z_ij <= q_j, z_ij >= p_i + q_j - 1 -// (2) Bit equations: sum_{i+j=k} z_ij + c_{k-1} = N_k + 2*c_k -// (3) No overflow: c_{m+n-1} = 0 -// Objective: feasibility (minimize 0). -// -// ## This Example -// - Instance: Factor 35 = 5 × 7 (m=3 bits, n=3 bits) -// - NOTE: Uses ILPSolver (not BruteForce) since the ILP has many variables -// - Target ILP: ~21 variables (factor bits + product bits + carries) -// -// ## Output -// Exports `docs/paper/examples/factoring_to_ilp.json` for use in paper code blocks. 
- -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::solvers::ILPSolver; - -pub fn run() { - // 1. Create Factoring instance: find p (3-bit) x q (3-bit) = 35 - let problem = Factoring::new(3, 3, 35); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // 3. Print transformation - println!("\n=== Problem Transformation ==="); - println!( - "Source: Factoring with {} variables ({}+{} bits)", - problem.num_variables(), - problem.m(), - problem.n() - ); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - // 4. Solve ILP using ILPSolver (too many variables for BruteForce) - let solver = ILPSolver::new(); - let ilp_solution = solver - .solve(ilp) - .expect("ILP should be feasible for 35 = 5 * 7"); - println!("\n=== Solution ==="); - println!( - "ILP solution found (first 6 vars): {:?}", - &ilp_solution[..6] - ); - - // 5. Extract factoring solution - let extracted = reduction.extract_solution(&ilp_solution); - println!("Source Factoring solution: {:?}", extracted); - - // 6. Verify: read factors and confirm p * q = 35 - let (p, q) = problem.read_factors(&extracted); - println!("Factors: {} x {} = {}", p, q, p * q); - assert_eq!(p * q, 35); - println!("\nReduction verified successfully"); - - // 7. 
Collect solutions and export JSON - let solutions = vec![SolutionPair { - source_config: extracted, - target_config: ilp_solution, - }]; - - let source_variant = variant_to_map(Factoring::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead("Factoring", &source_variant, "ILP", &target_variant) - .expect("Factoring -> ILP overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: Factoring::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "number": problem.target(), - "num_bits_first": problem.m(), - "num_bits_second": problem.n(), - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "factoring_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_ilp_to_qubo.rs b/examples/reduction_ilp_to_qubo.rs deleted file mode 100644 index abf2ff19..00000000 --- a/examples/reduction_ilp_to_qubo.rs +++ /dev/null @@ -1,170 +0,0 @@ -// # Integer Linear Programming (Binary) to QUBO Reduction (Penalty Method) -// -// ## Mathematical Relationship -// A binary ILP problem: -// -// maximize c^T x -// subject to A x <= b -// x_i in {0, 1} -// -// is mapped to QUBO by introducing slack variables to convert inequality -// constraints into equalities, then penalizing constraint violations: -// -// H(x, s) = -c^T x + P * sum_j (a_j^T x + s_j - b_j)^2 -// -// where s_j are slack variables encoded in binary. The penalty P is chosen -// large enough to ensure feasibility is always preferred over infeasible -// solutions with better objective values. 
-// -// ## This Example -// - Instance: 6-variable binary knapsack problem -// - Items with weights [3, 2, 5, 4, 2, 3] and values [10, 7, 12, 8, 6, 9] -// - Constraint 1: 3x0 + 2x1 + 5x2 + 4x3 + 2x4 + 3x5 <= 10 (weight capacity) -// - Constraint 2: x0 + x1 + x2 <= 2 (category A limit) -// - Constraint 3: x3 + x4 + x5 <= 2 (category B limit) -// - Objective: maximize 10x0 + 7x1 + 12x2 + 8x3 + 6x4 + 9x5 -// - Expected: Select items that maximize total value while satisfying all -// weight and category constraints -// -// ## Outputs -// - `docs/paper/examples/ilp_to_qubo.json` — reduction structure -// - `docs/paper/examples/ilp_to_qubo.result.json` — solutions -// -// ## Usage -// ```bash -// cargo run --example reduction_ilp_to_qubo -// ``` - -use problemreductions::export::*; -use problemreductions::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; -use problemreductions::prelude::*; - -pub fn run() { - println!("=== ILP (Binary) -> QUBO Reduction ===\n"); - - // 6-variable binary knapsack problem - // Items with weights [3, 2, 5, 4, 2, 3] and values [10, 7, 12, 8, 6, 9] - // Constraint 1: knapsack weight capacity <= 10 - // Constraint 2: category A items (x0, x1, x2) limited to 2 - // Constraint 3: category B items (x3, x4, x5) limited to 2 - let ilp = ILP::::new( - 6, - vec![ - // Knapsack weight constraint: 3x0 + 2x1 + 5x2 + 4x3 + 2x4 + 3x5 <= 10 - LinearConstraint::le( - vec![(0, 3.0), (1, 2.0), (2, 5.0), (3, 4.0), (4, 2.0), (5, 3.0)], - 10.0, - ), - // Category A limit: x0 + x1 + x2 <= 2 - LinearConstraint::le(vec![(0, 1.0), (1, 1.0), (2, 1.0)], 2.0), - // Category B limit: x3 + x4 + x5 <= 2 - LinearConstraint::le(vec![(3, 1.0), (4, 1.0), (5, 1.0)], 2.0), - ], - vec![(0, 10.0), (1, 7.0), (2, 12.0), (3, 8.0), (4, 6.0), (5, 9.0)], - ObjectiveSense::Maximize, - ); - - let item_names = ["Item0", "Item1", "Item2", "Item3", "Item4", "Item5"]; - let weights = [3, 2, 5, 4, 2, 3]; - let values = [10, 7, 12, 8, 6, 9]; - - // Reduce to QUBO - let reduction = 
ReduceTo::>::reduce_to(&ilp); - let qubo = reduction.target_problem(); - - println!("Source: ILP (binary) with 6 variables, 3 constraints"); - println!(" Objective: maximize 10x0 + 7x1 + 12x2 + 8x3 + 6x4 + 9x5"); - println!(" Constraint 1: 3x0 + 2x1 + 5x2 + 4x3 + 2x4 + 3x5 <= 10 (weight capacity)"); - println!(" Constraint 2: x0 + x1 + x2 <= 2 (category A limit)"); - println!(" Constraint 3: x3 + x4 + x5 <= 2 (category B limit)"); - println!("Target: QUBO with {} variables", qubo.num_variables()); - println!( - " (6 original + {} slack variables for inequality constraints)", - qubo.num_variables() - 6 - ); - println!( - "Q matrix size: {}x{}", - qubo.matrix().len(), - qubo.matrix().len() - ); - - // Solve QUBO with brute force - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - - // Extract and verify solutions - println!("\nOptimal solutions:"); - let mut solutions = Vec::new(); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - let selected: Vec = extracted - .iter() - .enumerate() - .filter(|(_, &x)| x == 1) - .map(|(i, _)| item_names[i].to_string()) - .collect(); - let total_weight: i32 = extracted - .iter() - .enumerate() - .filter(|(_, &x)| x == 1) - .map(|(i, _)| weights[i]) - .sum(); - let total_value: i32 = extracted - .iter() - .enumerate() - .filter(|(_, &x)| x == 1) - .map(|(i, _)| values[i]) - .sum(); - println!( - " Selected items: {:?} (total weight: {}, total value: {})", - selected, total_weight, total_value - ); - - // Closed-loop verification: check solution is valid in original problem - let sol_size = ilp.evaluate(&extracted); - assert!( - sol_size.is_valid(), - "Solution must be valid in source problem" - ); - - solutions.push(SolutionPair { - source_config: extracted, - target_config: sol.clone(), - }); - } - - println!("\nVerification passed: all solutions are feasible and optimal"); - - // Export JSON - let source_variant = variant_to_map(ILP::::variant()); - let 
target_variant = variant_to_map(QUBO::::variant()); - let overhead = lookup_overhead("ILP", &source_variant, "QUBO", &target_variant) - .expect("ILP -> QUBO overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - }), - }, - target: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": qubo.matrix(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "ilp_to_qubo"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_kcoloring_to_ilp.rs b/examples/reduction_kcoloring_to_ilp.rs deleted file mode 100644 index 97be8995..00000000 --- a/examples/reduction_kcoloring_to_ilp.rs +++ /dev/null @@ -1,108 +0,0 @@ -// # K-Coloring to ILP Reduction -// -// ## Mathematical Formulation -// Variables: x_{v,c} in {0,1} for each vertex v and color c. -// Constraints: -// (1) sum_c x_{v,c} = 1 for each vertex v (exactly one color). -// (2) x_{u,c} + x_{v,c} <= 1 for each edge (u,v) and color c (different colors on adjacent). -// Objective: feasibility (minimize 0). -// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges) with 3 colors, χ=3 -// - Source KColoring: feasible, each vertex gets a color such that no adjacent vertices share a color -// - Target ILP: 30 binary variables (10 vertices * 3 colors), many constraints -// -// ## Output -// Exports `docs/paper/examples/kcoloring_to_ilp.json` and `kcoloring_to_ilp.result.json`. 
- -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::solvers::ILPSolver; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; -use problemreductions::variant::K3; - -pub fn run() { - // 1. Create KColoring instance: Petersen graph (10 vertices, 15 edges) with 3 colors, χ=3 - let (num_vertices, edges) = petersen(); - let coloring = KColoring::::new(SimpleGraph::new(num_vertices, edges.clone())); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&coloring); - let ilp = reduction.target_problem(); - - // 3. Print transformation - println!("\n=== Problem Transformation ==="); - println!( - "Source: KColoring<3> with {} variables", - coloring.num_variables() - ); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - // 4. Solve target ILP using HiGHS solver (BruteForce on 30 vars is too slow) - let solver = ILPSolver::new(); - let ilp_solution = solver.solve(ilp).expect("ILP should be feasible"); - println!("\n=== Solution ==="); - println!("ILP solution: {:?}", ilp_solution); - - // 5. Extract source solution - let coloring_solution = reduction.extract_solution(&ilp_solution); - println!("Source Coloring solution: {:?}", coloring_solution); - - // 6. Verify - // KColoring is a satisfaction problem (bool), so evaluate returns bool directly - let size = coloring.evaluate(&coloring_solution); - println!("Solution valid: {}", size); - assert!(size); - println!("\nReduction verified successfully"); - - // 7. 
Collect solutions and export JSON - let mut solutions = Vec::new(); - let source_sol = reduction.extract_solution(&ilp_solution); - // KColoring is a satisfaction problem (bool), so evaluate returns bool directly - let s = coloring.evaluate(&source_sol); - assert!(s); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: ilp_solution, - }); - - let source_variant = variant_to_map(KColoring::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead("KColoring", &source_variant, "ILP", &target_variant) - .expect("KColoring -> ILP overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: KColoring::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": coloring.graph().num_vertices(), - "num_edges": coloring.graph().num_edges(), - "num_colors": 3, - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "kcoloring_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_kcoloring_to_qubo.rs b/examples/reduction_kcoloring_to_qubo.rs deleted file mode 100644 index 42a6f03c..00000000 --- a/examples/reduction_kcoloring_to_qubo.rs +++ /dev/null @@ -1,125 +0,0 @@ -// # K-Coloring to QUBO Reduction (Penalty Method) -// -// ## Mathematical Relationship -// The K-Coloring problem on a graph G = (V, E) with K colors is mapped to QUBO -// using a one-hot encoding. Each vertex i has K binary variables x_{i,c} for -// c = 0..K-1, with penalties enforcing: -// -// 1. One-hot constraint: each vertex gets exactly one color -// P1 * sum_i (1 - sum_c x_{i,c})^2 -// -// 2. 
Edge constraint: adjacent vertices get different colors -// P2 * sum_{(i,j) in E} sum_c x_{i,c} * x_{j,c} -// -// The QUBO has n*K variables (n vertices, K colors). -// -// ## This Example -// - Instance: House graph (5 vertices, 6 edges) with 3 colors, χ=3 -// - Source: KColoring<3> on 5 vertices, 6 edges -// - QUBO variables: 15 (5 vertices x 3 colors, one-hot encoding) -// - BruteForce on 15 variables (2^15 = 32768) completes quickly -// -// ## Outputs -// - `docs/paper/examples/coloring_to_qubo.json` — reduction structure -// - `docs/paper/examples/coloring_to_qubo.result.json` — solutions -// -// ## Usage -// ```bash -// cargo run --example reduction_coloring_to_qubo -// ``` - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::house; -use problemreductions::topology::{Graph, SimpleGraph}; -use problemreductions::variant::K3; - -pub fn run() { - println!("=== K-Coloring -> QUBO Reduction ===\n"); - - // House graph: 5 vertices, 6 edges (square base + triangle roof), χ=3 - let (num_vertices, edges) = house(); - let kc = KColoring::::new(SimpleGraph::new(num_vertices, edges.clone())); - - // Reduce to QUBO - let reduction = ReduceTo::::reduce_to(&kc); - let qubo = reduction.target_problem(); - - let colors = ["Red", "Green", "Blue"]; - println!("Source: KColoring<3> on house graph (5 vertices, 6 edges)"); - println!( - "Target: QUBO with {} variables (one-hot: 5 vertices x 3 colors)", - qubo.num_variables() - ); - println!("Q matrix:"); - for row in qubo.matrix() { - let formatted: Vec = row.iter().map(|v| format!("{:6.1}", v)).collect(); - println!(" [{}]", formatted.join(", ")); - } - - // Solve QUBO with brute force - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - - // Extract and verify solutions - println!("\nValid 3-colorings: {}", qubo_solutions.len()); - let mut solutions = Vec::new(); - for sol in &qubo_solutions { - let extracted = 
reduction.extract_solution(sol); - let coloring: Vec = extracted - .iter() - .enumerate() - .map(|(i, &c)| format!("V{}={}", i, colors[c])) - .collect(); - println!(" {}", coloring.join(", ")); - - // Closed-loop verification: check solution is valid in original problem - let valid = kc.evaluate(&extracted); - assert!(valid, "Coloring must be valid in source problem"); - - solutions.push(SolutionPair { - source_config: extracted, - target_config: sol.clone(), - }); - } - - println!( - "\nVerification passed: {} valid colorings found", - qubo_solutions.len() - ); - - // Export JSON - let source_variant = variant_to_map(KColoring::::variant()); - let target_variant = variant_to_map(QUBO::::variant()); - let overhead = lookup_overhead("KColoring", &source_variant, "QUBO", &target_variant) - .expect("KColoring -> QUBO overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: KColoring::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": kc.graph().num_vertices(), - "num_edges": kc.graph().num_edges(), - "num_colors": 3, - }), - }, - target: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": qubo.matrix(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "kcoloring_to_qubo"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_ksatisfiability_to_qubo.rs b/examples/reduction_ksatisfiability_to_qubo.rs deleted file mode 100644 index 0886ef6e..00000000 --- a/examples/reduction_ksatisfiability_to_qubo.rs +++ /dev/null @@ -1,154 +0,0 @@ -// # K-Satisfiability (3-SAT) to QUBO Reduction (Penalty Method) -// -// ## Mathematical Relationship -// The Maximum K-Satisfiability problem maps a CNF formula with k-literal clauses -// to QUBO. Each clause C_j = (l_1 OR l_2 OR ... 
OR l_k) contributes a penalty -// term that is minimized when the clause is satisfied: -// -// H_j(x) = product_{l in C_j} (1 - l) -// -// where l = x_i for positive literal and l = (1 - x_i) for negated literal. -// The total QUBO Hamiltonian H = -sum_j H_j is minimized when the maximum -// number of clauses is satisfied. -// -// For 3-SAT clauses, the cubic penalty terms are quadratized using -// Rosenberg's substitution, introducing one auxiliary variable per clause. -// -// ## This Example -// - Instance: 3-SAT with 5 variables and 7 clauses -// - C1: x1 OR x2 OR NOT x3 -// - C2: NOT x1 OR x3 OR x4 -// - C3: x2 OR NOT x4 OR x5 -// - C4: NOT x2 OR x3 OR NOT x5 -// - C5: x1 OR NOT x3 OR x5 -// - C6: NOT x1 OR NOT x2 OR x4 -// - C7: x3 OR NOT x4 OR NOT x5 -// - QUBO variables: 5 original + 7 auxiliary = 12 total -// - Expected: Assignments satisfying all 7 clauses (if possible) or -// maximizing satisfied clauses -// -// ## Outputs -// - `docs/paper/examples/ksatisfiability_to_qubo.json` — reduction structure -// - `docs/paper/examples/ksatisfiability_to_qubo.result.json` — solutions -// -// ## Usage -// ```bash -// cargo run --example reduction_ksatisfiability_to_qubo -// ``` - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::variant::K3; - -pub fn run() { - println!("=== K-Satisfiability (3-SAT) -> QUBO Reduction ===\n"); - - // 7 clauses over 5 variables - let clauses = vec![ - CNFClause::new(vec![1, 2, -3]), // x1 OR x2 OR NOT x3 - CNFClause::new(vec![-1, 3, 4]), // NOT x1 OR x3 OR x4 - CNFClause::new(vec![2, -4, 5]), // x2 OR NOT x4 OR x5 - CNFClause::new(vec![-2, 3, -5]), // NOT x2 OR x3 OR NOT x5 - CNFClause::new(vec![1, -3, 5]), // x1 OR NOT x3 OR x5 - CNFClause::new(vec![-1, -2, 4]), // NOT x1 OR NOT x2 OR x4 - CNFClause::new(vec![3, -4, -5]), // x3 OR NOT x4 OR NOT x5 - ]; - let clause_strings = [ - "x1 OR x2 OR NOT x3".to_string(), - "NOT x1 OR x3 OR x4".to_string(), - "x2 OR NOT x4 OR x5".to_string(), - 
"NOT x2 OR x3 OR NOT x5".to_string(), - "x1 OR NOT x3 OR x5".to_string(), - "NOT x1 OR NOT x2 OR x4".to_string(), - "x3 OR NOT x4 OR NOT x5".to_string(), - ]; - - let ksat = KSatisfiability::::new(5, clauses); - - // Reduce to QUBO - let reduction = ReduceTo::::reduce_to(&ksat); - let qubo = reduction.target_problem(); - - println!("Source: KSatisfiability with 5 variables, 7 clauses"); - for (i, c) in clause_strings.iter().enumerate() { - println!(" C{}: {}", i + 1, c); - } - println!("Target: QUBO with {} variables", qubo.num_variables()); - println!("Q matrix:"); - for row in qubo.matrix() { - println!(" {:?}", row); - } - - // Solve QUBO with brute force - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - - // Extract and verify solutions - println!("\nOptimal solutions:"); - let num_clauses = ksat.clauses().len(); - let mut solutions = Vec::new(); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - let assignment: Vec = extracted - .iter() - .map(|&x| { - if x == 1 { - "ON".to_string() - } else { - "OFF".to_string() - } - }) - .collect(); - // KSatisfiability is a maximization problem (maximize satisfied clauses) - // evaluate returns number of satisfied clauses directly - let satisfied = ksat.evaluate(&extracted); - println!( - " Switches: [{}] -> {}/{} clauses satisfied", - assignment.join(", "), - satisfied, - num_clauses - ); - - solutions.push(SolutionPair { - source_config: extracted, - target_config: sol.clone(), - }); - } - - println!("\nVerification passed: all solutions maximize satisfied clauses"); - - // Export JSON - let source_variant = variant_to_map(KSatisfiability::::variant()); - let target_variant = variant_to_map(QUBO::::variant()); - let overhead = lookup_overhead("KSatisfiability", &source_variant, "QUBO", &target_variant) - .expect("KSatisfiability -> QUBO overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: 
KSatisfiability::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": ksat.num_vars(), - "num_clauses": ksat.clauses().len(), - "k": 3, - }), - }, - target: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": qubo.matrix(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "ksatisfiability_to_qubo"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_ksatisfiability_to_satisfiability.rs b/examples/reduction_ksatisfiability_to_satisfiability.rs deleted file mode 100644 index 620e8897..00000000 --- a/examples/reduction_ksatisfiability_to_satisfiability.rs +++ /dev/null @@ -1,138 +0,0 @@ -// # K-Satisfiability (3-SAT) to Satisfiability Reduction (Trivial Embedding) -// -// ## Mathematical Equivalence -// K-SAT is a special case of SAT where every clause has exactly k literals. -// The reduction is a trivial embedding: the K-SAT clauses are directly used -// as SAT clauses with no transformation needed. -// -// ## This Example -// - Instance: 3-SAT with 4 variables and 3 clauses (each with exactly 3 literals) -// - C1: x1 OR NOT x2 OR x3 -// - C2: NOT x1 OR x3 OR x4 -// - C3: x2 OR NOT x3 OR NOT x4 -// - Source K-SAT: satisfiable -// - Target: SAT with identical clauses (same variables, same clauses) -// -// ## Output -// Exports `docs/paper/examples/ksatisfiability_to_satisfiability.json` and -// `ksatisfiability_to_satisfiability.result.json`. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::variant::K3; - -pub fn run() { - println!("=== K-Satisfiability (3-SAT) -> Satisfiability Reduction ===\n"); - - // 1. 
Create a small 3-SAT instance: 4 variables, 3 clauses (each with exactly 3 literals) - let clauses = vec![ - CNFClause::new(vec![1, -2, 3]), // x1 OR NOT x2 OR x3 - CNFClause::new(vec![-1, 3, 4]), // NOT x1 OR x3 OR x4 - CNFClause::new(vec![2, -3, -4]), // x2 OR NOT x3 OR NOT x4 - ]; - let clause_strings = [ - "x1 OR NOT x2 OR x3", - "NOT x1 OR x3 OR x4", - "x2 OR NOT x3 OR NOT x4", - ]; - - let ksat = KSatisfiability::::new(4, clauses); - - println!( - "Source: KSatisfiability with {} variables, {} clauses", - ksat.num_vars(), - ksat.num_clauses() - ); - for (i, c) in clause_strings.iter().enumerate() { - println!(" C{}: {}", i + 1, c); - } - - // 2. Reduce to Satisfiability (trivial embedding) - let reduction = ReduceTo::::reduce_to(&ksat); - let sat = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Target: Satisfiability with {} variables, {} clauses", - sat.num_vars(), - sat.num_clauses() - ); - println!(" (Trivial embedding: K-SAT is a special case of SAT, no transformation needed)"); - - // Print target clauses - println!("\n Target SAT clauses:"); - for (i, clause) in sat.clauses().iter().enumerate() { - println!(" Clause {}: {:?}", i, clause.literals); - } - - // 3. Solve the target SAT problem (satisfaction problem) - let solver = BruteForce::new(); - let sat_solutions = solver.find_all_satisfying(sat); - println!("\n=== Solution ==="); - println!("Target SAT solutions found: {}", sat_solutions.len()); - - // 4. 
Extract and verify all solutions - let mut solutions = Vec::new(); - for sat_sol in &sat_solutions { - let ksat_sol = reduction.extract_solution(sat_sol); - let valid = ksat.evaluate(&ksat_sol); - let assignment: Vec = ksat_sol - .iter() - .enumerate() - .map(|(i, &v)| format!("x{}={}", i + 1, if v == 1 { "T" } else { "F" })) - .collect(); - println!(" [{}] -> valid: {}", assignment.join(", "), valid); - assert!(valid, "Extracted K-SAT solution must be valid"); - - solutions.push(SolutionPair { - source_config: ksat_sol, - target_config: sat_sol.to_vec(), - }); - } - - println!( - "\nAll {} SAT solutions map to valid K-SAT assignments", - sat_solutions.len() - ); - println!("Reduction verified successfully"); - - // 5. Export JSON - let source_variant = variant_to_map(KSatisfiability::::variant()); - let target_variant = variant_to_map(Satisfiability::variant()); - let overhead = lookup_overhead( - "KSatisfiability", - &source_variant, - "Satisfiability", - &target_variant, - ) - .expect("KSatisfiability -> Satisfiability overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: KSatisfiability::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": ksat.num_vars(), - "num_clauses": ksat.num_clauses(), - "k": 3, - }), - }, - target: ProblemSide { - problem: Satisfiability::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": sat.num_vars(), - "num_clauses": sat.num_clauses(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "ksatisfiability_to_satisfiability"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_ksatisfiability_to_subsetsum.rs b/examples/reduction_ksatisfiability_to_subsetsum.rs deleted file mode 100644 index dc1cadd9..00000000 --- a/examples/reduction_ksatisfiability_to_subsetsum.rs +++ /dev/null @@ -1,131 +0,0 @@ -// # 
K-Satisfiability (3-SAT) to SubsetSum Reduction (Karp 1972) -// -// ## Mathematical Relationship -// The classical Karp reduction encodes a 3-CNF formula as a SubsetSum instance -// using base-10 digit positions. Each integer has (n + m) digits where n is the -// number of variables and m is the number of clauses. Variable digits ensure -// exactly one truth value per variable; clause digits count satisfied literals, -// padded to 4 by slack integers. -// -// No carries occur because the maximum digit sum is at most 3 + 2 = 5 < 10. -// -// ## This Example -// - Instance: 3-SAT formula (x₁ ∨ x₂ ∨ x₃) ∧ (¬x₁ ∨ ¬x₂ ∨ x₃) -// - n = 3 variables, m = 2 clauses -// - SubsetSum: 10 integers (2n + 2m) with 5-digit (n + m) encoding -// - Target: T = 11144 -// -// ## Outputs -// - `docs/paper/examples/ksatisfiability_to_subsetsum.json` — reduction structure -// - `docs/paper/examples/ksatisfiability_to_subsetsum.result.json` — solutions -// -// ## Usage -// ```bash -// cargo run --example reduction_ksatisfiability_to_subsetsum -// ``` - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::variant::K3; - -pub fn run() { - println!("=== K-Satisfiability (3-SAT) -> SubsetSum Reduction ===\n"); - - // 3-SAT: (x₁ ∨ x₂ ∨ x₃) ∧ (¬x₁ ∨ ¬x₂ ∨ x₃) - let clauses = vec![ - CNFClause::new(vec![1, 2, 3]), // x₁ ∨ x₂ ∨ x₃ - CNFClause::new(vec![-1, -2, 3]), // ¬x₁ ∨ ¬x₂ ∨ x₃ - ]; - - let ksat = KSatisfiability::::new(3, clauses); - println!("Source: KSatisfiability with 3 variables, 2 clauses"); - println!(" C1: x1 OR x2 OR x3"); - println!(" C2: NOT x1 OR NOT x2 OR x3"); - - // Reduce to SubsetSum - let reduction = ReduceTo::::reduce_to(&ksat); - let subsetsum = reduction.target_problem(); - - println!( - "\nTarget: SubsetSum with {} elements, target = {}", - subsetsum.num_elements(), - subsetsum.target() - ); - println!("Elements: {:?}", subsetsum.sizes()); - - // Solve SubsetSum with brute force - let solver = BruteForce::new(); - let ss_solutions 
= solver.find_all_satisfying(subsetsum); - - println!("\nSatisfying solutions:"); - let mut solutions = Vec::new(); - for sol in &ss_solutions { - let extracted = reduction.extract_solution(sol); - let assignment: Vec<&str> = extracted - .iter() - .map(|&x| if x == 1 { "T" } else { "F" }) - .collect(); - let satisfied = ksat.evaluate(&extracted); - println!( - " x = [{}] -> formula {}", - assignment.join(", "), - if satisfied { - "SATISFIED" - } else { - "NOT SATISFIED" - } - ); - assert!(satisfied, "Extracted solution must satisfy the formula"); - - solutions.push(SolutionPair { - source_config: extracted, - target_config: sol.clone(), - }); - } - - println!( - "\nVerification passed: all {} SubsetSum solutions map to satisfying assignments", - ss_solutions.len() - ); - - // Export JSON - let source_variant = variant_to_map(KSatisfiability::::variant()); - let target_variant = variant_to_map(SubsetSum::variant()); - let overhead = lookup_overhead( - "KSatisfiability", - &source_variant, - "SubsetSum", - &target_variant, - ) - .expect("KSatisfiability -> SubsetSum overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: KSatisfiability::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": ksat.num_vars(), - "num_clauses": ksat.clauses().len(), - "k": 3, - }), - }, - target: ProblemSide { - problem: SubsetSum::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_elements": subsetsum.num_elements(), - "sizes": subsetsum.sizes().iter().map(ToString::to_string).collect::>(), - "target": subsetsum.target().to_string(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "ksatisfiability_to_subsetsum"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_longestcommonsubsequence_to_ilp.rs b/examples/reduction_longestcommonsubsequence_to_ilp.rs deleted file mode 
100644 index edaad1d5..00000000 --- a/examples/reduction_longestcommonsubsequence_to_ilp.rs +++ /dev/null @@ -1,113 +0,0 @@ -// # LongestCommonSubsequence to ILP Reduction -// -// ## Mathematical Formulation -// Uses the match-pair formulation (Blum et al., 2021). -// For each position pair (j1, j2) where s1[j1] == s2[j2], a binary variable m_{j1,j2}. -// Constraints: -// (1) Each s1 position matched at most once -// (2) Each s2 position matched at most once -// (3) Order preservation: no crossings among matched pairs -// Objective: maximize total matched pairs. -// -// ## This Example -// - Instance: s1 = "ABAC", s2 = "BACA" -// - 6 match pairs, LCS = "BAC" (length 3) -// -// ## Output -// Exports `docs/paper/examples/longestcommonsubsequence_to_ilp.json`. - -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::solvers::ILPSolver; - -pub fn run() { - // 1. Create LCS instance: s1 = "ABAC", s2 = "BACA" - let problem = LongestCommonSubsequence::new(vec![ - vec![b'A', b'B', b'A', b'C'], - vec![b'B', b'A', b'C', b'A'], - ]); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // 3. Print transformation - println!("\n=== Problem Transformation ==="); - println!( - "Source: LCS with {} strings, total length {}", - problem.num_strings(), - problem.total_length() - ); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - // 4. Solve ILP - let solver = ILPSolver::new(); - let ilp_solution = solver - .solve(ilp) - .expect("ILP should be feasible for ABAC/BACA"); - println!("\n=== Solution ==="); - println!("ILP solution: {:?}", &ilp_solution); - - // 5. Extract LCS solution - let extracted = reduction.extract_solution(&ilp_solution); - println!("Source LCS config: {:?}", extracted); - - // 6. 
Verify - let metric = problem.evaluate(&extracted); - assert!(metric.is_valid()); - let lcs_length = metric.unwrap(); - println!("LCS length: {}", lcs_length); - assert_eq!(lcs_length, 3); - println!("\nReduction verified successfully"); - - // 7. Collect solutions and export JSON - let solutions = vec![SolutionPair { - source_config: extracted, - target_config: ilp_solution, - }]; - - let source_variant = variant_to_map(LongestCommonSubsequence::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead( - "LongestCommonSubsequence", - &source_variant, - "ILP", - &target_variant, - ) - .expect("LCS -> ILP overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: LongestCommonSubsequence::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "strings": [ - [65, 66, 65, 67], - [66, 65, 67, 65], - ], - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "longestcommonsubsequence_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_maxcut_to_spinglass.rs b/examples/reduction_maxcut_to_spinglass.rs deleted file mode 100644 index 3c9e8538..00000000 --- a/examples/reduction_maxcut_to_spinglass.rs +++ /dev/null @@ -1,95 +0,0 @@ -// # Max-Cut to Spin Glass Reduction -// -// ## Mathematical Equivalence -// Max-Cut maps to Ising by setting J_{ij} = w_{ij} and h_i = 0. Maximizing the -// cut value sum w_{ij} (for i,j on different sides) equals minimizing the Ising -// energy sum J_{ij} s_i s_j, since s_i s_j = -1 exactly when vertices are on opposite sides.
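The cut/energy relation stated here can be checked exhaustively on a tiny graph: with J_{ij} = w_{ij}, cut(s) = (W - sum_{ij} w_{ij} s_i s_j) / 2, where W is the total edge weight. A standalone sketch, independent of the crate's own types (the helper names below are illustrative, not library APIs):

```rust
// Brute-force check of the Max-Cut <-> Ising identity on a weighted triangle:
// cut(s) = (W - sum_{(i,j) in E} w_ij * s_i * s_j) / 2 for spins in {-1, +1}.
fn cut_value(edges: &[(usize, usize, i32)], spins: &[i32]) -> i32 {
    // sum weights of edges whose endpoints sit on opposite sides
    edges
        .iter()
        .filter(|&&(u, v, _)| spins[u] != spins[v])
        .map(|&(_, _, w)| w)
        .sum()
}

fn ising_energy(edges: &[(usize, usize, i32)], spins: &[i32]) -> i32 {
    // H = sum J_ij s_i s_j with J_ij = w_ij and no field term
    edges.iter().map(|&(u, v, w)| w * spins[u] * spins[v]).sum()
}

fn main() {
    let edges = [(0usize, 1usize, 2i32), (1, 2, 3), (0, 2, 5)];
    let total_weight: i32 = edges.iter().map(|&(_, _, w)| w).sum();
    for mask in 0..(1u32 << 3) {
        let spins: Vec<i32> = (0..3).map(|i| if mask >> i & 1 == 1 { 1 } else { -1 }).collect();
        // identity: cut = (W - H) / 2, so the max cut is the min-energy state
        let cut = cut_value(&edges, &spins);
        assert_eq!(cut, (total_weight - ising_energy(&edges, &spins)) / 2);
    }
    println!("identity holds for all 8 spin configurations");
}
```

Each edge contributes w_{ij}(1 - s_i s_j)/2, which is w_{ij} on cut edges and 0 otherwise, so W - H is always even and the division is exact.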
-// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges) with unit edge weights -// - Source MaxCut: 10 vertices, 15 edges -// - Target SpinGlass: 10 spins -// -// ## Output -// Exports `docs/paper/examples/maxcut_to_spinglass.json` and `maxcut_to_spinglass.result.json`. -// -// See docs/paper/reductions.typ for the full reduction specification. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - let (num_vertices, edges) = petersen(); - let maxcut = MaxCut::<_, i32>::unweighted(SimpleGraph::new(num_vertices, edges.clone())); - - let reduction = ReduceTo::>::reduce_to(&maxcut); - let sg = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!("Source: MaxCut with {} variables", maxcut.num_variables()); - println!("Target: SpinGlass with {} variables", sg.num_variables()); - - let solver = BruteForce::new(); - let sg_solutions = solver.find_all_best(sg); - println!("\n=== Solution ==="); - println!("Target solutions found: {}", sg_solutions.len()); - - // Extract and verify solutions - let mut solutions = Vec::new(); - for target_sol in &sg_solutions { - let source_sol = reduction.extract_solution(target_sol); - let size = maxcut.evaluate(&source_sol); - // MaxCut is a maximization problem, infeasible configs return Invalid - assert!(size.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_sol.clone(), - }); - } - - let maxcut_solution = reduction.extract_solution(&sg_solutions[0]); - println!("Source MaxCut solution: {:?}", maxcut_solution); - - let size = maxcut.evaluate(&maxcut_solution); - println!("Solution size: {:?}", size); - // MaxCut is a maximization problem, infeasible configs return Invalid - assert!(size.is_valid()); - println!("\nReduction verified successfully"); - - // Export JSON - let edges: 
Vec<(usize, usize, i32)> = maxcut.edges(); - let source_variant = variant_to_map(MaxCut::::variant()); - let target_variant = variant_to_map(SpinGlass::::variant()); - let overhead = lookup_overhead("MaxCut", &source_variant, "SpinGlass", &target_variant) - .expect("MaxCut -> SpinGlass overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: MaxCut::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": maxcut.graph().num_vertices(), - "num_edges": maxcut.graph().num_edges(), - "edges": edges, - }), - }, - target: ProblemSide { - problem: SpinGlass::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_spins": sg.num_variables(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "maxcut_to_spinglass"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_maximumclique_to_ilp.rs b/examples/reduction_maximumclique_to_ilp.rs deleted file mode 100644 index af60e544..00000000 --- a/examples/reduction_maximumclique_to_ilp.rs +++ /dev/null @@ -1,111 +0,0 @@ -// # MaximumClique to ILP Reduction -// -// ## Mathematical Formulation -// Variables: x_v in {0,1} for each vertex v. -// Constraints: x_u + x_v <= 1 for each non-edge (u,v) not in E. -// Objective: maximize sum of w_v * x_v. -// -// ## This Example -// - Instance: Octahedron graph (K_{2,2,2}) with 6 vertices and 12 edges. -// - Source MaximumClique: max clique is size 3 -// - Target ILP: 6 binary variables, 3 non-edge constraints -// (non-edges: opposite vertex pairs (0,5), (1,4), (2,3)) -// -// ## Output -// Exports `docs/paper/examples/maximumclique_to_ilp.json` and `maximumclique_to_ilp.result.json`. 
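The deleted header's ILP formulation hinges on enumerating non-edges, one x_u + x_v <= 1 constraint per non-adjacent pair. A minimal sketch of that enumeration; the octahedron construction below assumes a labeling where vertex i is opposite vertex 5 - i, which may differ from the crate's `octahedral()`:

```rust
// Enumerate the non-edges of a graph; for MaxClique -> ILP each one yields
// a constraint x_u + x_v <= 1 (two non-adjacent vertices can't both be chosen).
fn non_edges(n: usize, edges: &[(usize, usize)]) -> Vec<(usize, usize)> {
    let mut out = Vec::new();
    for u in 0..n {
        for v in (u + 1)..n {
            let adjacent = edges.iter().any(|&(a, b)| (a, b) == (u, v) || (b, a) == (u, v));
            if !adjacent {
                out.push((u, v));
            }
        }
    }
    out
}

fn main() {
    // octahedron K_{2,2,2}: all pairs except the three "opposite" pairs u + v = 5
    let edges: Vec<(usize, usize)> = (0..6)
        .flat_map(|u| ((u + 1)..6).map(move |v| (u, v)))
        .filter(|&(u, v)| u + v != 5)
        .collect();
    assert_eq!(edges.len(), 12);
    // exactly three constraints: the opposite vertex pairs
    println!("{:?}", non_edges(6, &edges));
}
```

For this labeling the three non-edges are (0,5), (1,4), (2,3), matching the constraint count quoted in the example's header.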
- -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::octahedral; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // 1. Create MaximumClique instance: Octahedron (K_{2,2,2}), 6 vertices, 12 edges, clique number 3 - let (num_vertices, edges) = octahedral(); - let clique = MaximumClique::new( - SimpleGraph::new(num_vertices, edges.clone()), - vec![1i32; num_vertices], - ); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&clique); - let ilp = reduction.target_problem(); - - // 3. Print transformation - println!("\n=== Problem Transformation ==="); - println!( - "Source: MaximumClique with {} variables", - clique.num_variables() - ); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - // 4. Solve target ILP - let solver = BruteForce::new(); - let ilp_solutions = solver.find_all_best(ilp); - println!("\n=== Solution ==="); - println!("ILP solutions found: {}", ilp_solutions.len()); - - let ilp_solution = &ilp_solutions[0]; - println!("ILP solution: {:?}", ilp_solution); - - // 5. Extract source solution - let clique_solution = reduction.extract_solution(ilp_solution); - println!("Source MaximumClique solution: {:?}", clique_solution); - - // 6. Verify - let size = clique.evaluate(&clique_solution); - println!("Solution size: {:?}", size); - assert!(size.is_valid()); // Valid solution - println!("\nReduction verified successfully"); - - // 7. 
Collect solutions and export JSON - let mut solutions = Vec::new(); - for target_config in &ilp_solutions { - let source_sol = reduction.extract_solution(target_config); - let s = clique.evaluate(&source_sol); - assert!(s.is_valid()); // Valid solution - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_config.clone(), - }); - } - - let source_variant = variant_to_map(MaximumClique::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead("MaximumClique", &source_variant, "ILP", &target_variant) - .unwrap_or_default(); - - let data = ReductionData { - source: ProblemSide { - problem: MaximumClique::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": clique.graph().num_vertices(), - "num_edges": clique.graph().num_edges(), - "edges": clique.graph().edges(), - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "maximumclique_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_maximumclique_to_maximumindependentset.rs b/examples/reduction_maximumclique_to_maximumindependentset.rs deleted file mode 100644 index 74e29d20..00000000 --- a/examples/reduction_maximumclique_to_maximumindependentset.rs +++ /dev/null @@ -1,114 +0,0 @@ -// # Maximum Clique to Maximum Independent Set Reduction -// -// ## Complement Graph Reduction (Karp 1972) -// A set S is a clique in G iff S is an independent set in the complement -// graph. The reduction builds the complement graph and preserves weights. 
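The complement-graph equivalence described here (S is a clique in G iff S is independent in the complement) is easy to verify exhaustively on a small graph. A self-contained sketch with hypothetical helper names, checked over every vertex subset of P4:

```rust
// Karp's complement-graph reduction: a vertex set is a clique in G exactly
// when it is an independent set in the complement graph.
fn complement(n: usize, edges: &[(usize, usize)]) -> Vec<(usize, usize)> {
    let has = |u: usize, v: usize| edges.iter().any(|&(a, b)| (a, b) == (u, v) || (b, a) == (u, v));
    let mut out = Vec::new();
    for u in 0..n {
        for v in (u + 1)..n {
            if !has(u, v) {
                out.push((u, v));
            }
        }
    }
    out
}

fn is_clique(edges: &[(usize, usize)], members: &[usize]) -> bool {
    // every pair of members must be adjacent
    members.iter().enumerate().all(|(i, &u)| {
        members[i + 1..]
            .iter()
            .all(|&v| edges.iter().any(|&(a, b)| (a, b) == (u, v) || (a, b) == (v, u)))
    })
}

fn is_independent(edges: &[(usize, usize)], members: &[usize]) -> bool {
    // no edge may have both endpoints in the set
    edges.iter().all(|&(a, b)| !(members.contains(&a) && members.contains(&b)))
}

fn main() {
    let n = 4;
    let p4 = [(0usize, 1usize), (1, 2), (2, 3)]; // path graph P4
    let comp = complement(n, &p4); // complement edges: (0,2), (0,3), (1,3)
    for mask in 0..(1u32 << n) {
        let s: Vec<usize> = (0..n).filter(|&i| mask >> i & 1 == 1).collect();
        assert_eq!(is_clique(&p4, &s), is_independent(&comp, &s));
    }
    println!("clique/independent-set equivalence holds on all {} subsets", 1 << n);
}
```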
-// -// ## This Example -// - Instance: Path graph P4 with 4 vertices, edges {(0,1),(1,2),(2,3)} -// - Complement has edges {(0,2),(0,3),(1,3)} -// - Maximum clique = any edge = size 2 -// - Maximum independent set in complement = size 2 -// -// ## Output -// Exports `docs/paper/examples/maximumclique_to_maximumindependentset.json` and `maximumclique_to_maximumindependentset.result.json`. -// -// See docs/paper/reductions.typ for the full reduction specification. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // Path graph P4: 4 vertices, 3 edges - let clique = MaximumClique::new( - SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), - vec![1i32; 4], - ); - - let reduction = ReduceTo::>::reduce_to(&clique); - let is = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: MaximumClique with {} variables", - clique.num_variables() - ); - println!( - "Target: MaximumIndependentSet with {} variables", - is.num_variables() - ); - println!( - "Source edges: {}, Target (complement) edges: {}", - clique.graph().num_edges(), - is.graph().num_edges() - ); - - let solver = BruteForce::new(); - let is_solutions = solver.find_all_best(is); - println!("\n=== Solution ==="); - println!("Target solutions found: {}", is_solutions.len()); - - // Extract and verify solutions - let mut solutions = Vec::new(); - for target_sol in &is_solutions { - let source_sol = reduction.extract_solution(target_sol); - let size = clique.evaluate(&source_sol); - assert!(size.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol.clone(), - target_config: target_sol.clone(), - }); - } - - let clique_solution = reduction.extract_solution(&is_solutions[0]); - println!("Source Clique solution: {:?}", clique_solution); - - let size = clique.evaluate(&clique_solution); - println!("Solution size: {:?}", size); - assert!(size.is_valid()); - 
println!("\nReduction verified successfully"); - - // Export JSON - let source_edges = clique.graph().edges(); - let target_edges = is.graph().edges(); - let source_variant = variant_to_map(MaximumClique::::variant()); - let target_variant = variant_to_map(MaximumIndependentSet::::variant()); - let overhead = lookup_overhead( - "MaximumClique", - &source_variant, - "MaximumIndependentSet", - &target_variant, - ) - .expect("MaximumClique -> MaximumIndependentSet overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: MaximumClique::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": clique.graph().num_vertices(), - "num_edges": clique.graph().num_edges(), - "edges": source_edges, - }), - }, - target: ProblemSide { - problem: MaximumIndependentSet::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vertices": is.graph().num_vertices(), - "num_edges": is.graph().num_edges(), - "edges": target_edges, - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "maximumclique_to_maximumindependentset"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_maximumindependentset_to_ilp.rs b/examples/reduction_maximumindependentset_to_ilp.rs deleted file mode 100644 index 01d857a8..00000000 --- a/examples/reduction_maximumindependentset_to_ilp.rs +++ /dev/null @@ -1,116 +0,0 @@ -// # Independent Set to ILP via Reduction Path -// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges, 3-regular) -// - Source IS: max size 4 -// - Target: ILP reached through the reduction graph -// -// ## Output -// Exports `docs/paper/examples/maximumindependentset_to_ilp.json` and -// `maximumindependentset_to_ilp.result.json`. 
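This example routes MaximumIndependentSet to ILP through the reduction graph rather than a direct rule; under the MinimizeSteps objective that search is just a shortest-path query by hop count. A toy BFS sketch, with an illustrative edge list that is not the crate's actual reduction catalog:

```rust
use std::collections::{HashMap, VecDeque};

// BFS over a toy reduction graph; minimizing hops mirrors MinimizeSteps.
// Returns the problem names along a shortest path, source first.
fn shortest_reduction_path<'a>(
    edges: &[(&'a str, &'a str)],
    src: &'a str,
    dst: &'a str,
) -> Option<Vec<&'a str>> {
    let mut prev: HashMap<&str, &str> = HashMap::new();
    let mut queue = VecDeque::from([src]);
    prev.insert(src, src);
    while let Some(node) = queue.pop_front() {
        if node == dst {
            // walk predecessors back to the source, then reverse
            let mut path = vec![node];
            while *path.last().unwrap() != src {
                path.push(prev[path.last().unwrap()]);
            }
            path.reverse();
            return Some(path);
        }
        for &(a, b) in edges {
            if a == node && !prev.contains_key(b) {
                prev.insert(b, node);
                queue.push_back(b);
            }
        }
    }
    None
}

fn main() {
    // hypothetical reduction edges, not the library's real inventory
    let edges = [
        ("MaximumIndependentSet", "MinimumVertexCover"),
        ("MinimumVertexCover", "ILP"),
        ("MaximumIndependentSet", "QUBO"),
    ];
    println!("{:?}", shortest_reduction_path(&edges, "MaximumIndependentSet", "ILP"));
}
```

The real `find_cheapest_path` additionally weighs each edge by its overhead on the given `ProblemSize`; BFS is the degenerate case where every reduction step costs 1.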
- -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::rules::{MinimizeSteps, ReductionGraph}; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; -use problemreductions::types::ProblemSize; - -pub fn run() { - let (num_vertices, edges) = petersen(); - let is = MaximumIndependentSet::new( - SimpleGraph::new(num_vertices, edges.clone()), - vec![1i32; num_vertices], - ); - - let graph = ReductionGraph::new(); - let src_variant_bt = - ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); - let dst_variant_bt = ReductionGraph::variant_to_map(&ILP::::variant()); - let path = graph - .find_cheapest_path( - "MaximumIndependentSet", - &src_variant_bt, - "ILP", - &dst_variant_bt, - &ProblemSize::new(vec![]), - &MinimizeSteps, - ) - .expect("MaximumIndependentSet -> ILP path not found"); - let reduction = graph - .reduce_along_path(&path, &is as &dyn std::any::Any) - .expect("MaximumIndependentSet -> ILP path reduction failed"); - let ilp: &ILP = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: MaximumIndependentSet with {} variables", - is.num_variables() - ); - println!("Path: {}", path); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - let solver = BruteForce::new(); - let ilp_solutions = solver.find_all_best(ilp); - println!("\n=== Solution ==="); - println!("ILP solutions found: {}", ilp_solutions.len()); - - let ilp_solution = &ilp_solutions[0]; - println!("ILP solution: {:?}", ilp_solution); - - let is_solution = reduction.extract_solution(ilp_solution); - println!("Source IS solution: {:?}", is_solution); - - let size = is.evaluate(&is_solution); - println!("Solution size: {:?}", size); - assert!(size.is_valid()); - println!("\nReduction verified successfully"); - - let mut solutions = 
Vec::new(); - for target_config in &ilp_solutions { - let source_sol = reduction.extract_solution(target_config); - let s = is.evaluate(&source_sol); - assert!(s.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_config.clone(), - }); - } - - let source_variant = variant_to_map(MaximumIndependentSet::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = graph.compose_path_overhead(&path); - - let data = ReductionData { - source: ProblemSide { - problem: MaximumIndependentSet::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": is.graph().num_vertices(), - "num_edges": is.graph().num_edges(), - "edges": edges, - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - write_example("maximumindependentset_to_ilp", &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_maximumindependentset_to_maximumclique.rs b/examples/reduction_maximumindependentset_to_maximumclique.rs deleted file mode 100644 index 5f858b55..00000000 --- a/examples/reduction_maximumindependentset_to_maximumclique.rs +++ /dev/null @@ -1,107 +0,0 @@ -// # Independent Set to Clique Reduction -// -// ## Mathematical Equivalence -// S is an independent set in G iff S is a clique in the complement graph Ḡ. -// The reduction builds Ḡ by taking edges not in G. Solution extraction is -// identity: the same vertex set works for both problems. 
-// -// ## This Example -// - Instance: Path graph P5 (5 vertices, 4 edges) -// - Source MIS: max size 3 (e.g., {0, 2, 4}) -// - Target MaxClique on complement: max clique size 3 -// -// ## Output -// Exports `docs/paper/examples/maximumindependentset_to_maximumclique.json` and `.result.json`. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // Path graph: 0-1-2-3-4 - let source = MaximumIndependentSet::new( - SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]), - vec![1i32; 5], - ); - - let reduction = ReduceTo::>::reduce_to(&source); - let target = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: MaximumIndependentSet with {} vertices, {} edges", - source.graph().num_vertices(), - source.graph().num_edges() - ); - println!( - "Target: MaximumClique with {} vertices, {} edges (complement graph)", - target.num_vertices(), - target.num_edges() - ); - - let solver = BruteForce::new(); - let target_solutions = solver.find_all_best(target); - println!("\n=== Solution ==="); - println!("Target solutions found: {}", target_solutions.len()); - - let mut solutions = Vec::new(); - for target_sol in &target_solutions { - let source_sol = reduction.extract_solution(target_sol); - let size = source.evaluate(&source_sol); - assert!(size.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol.clone(), - target_config: target_sol.clone(), - }); - } - - let source_sol = reduction.extract_solution(&target_solutions[0]); - println!("Source IS solution: {:?}", source_sol); - let size = source.evaluate(&source_sol); - println!("Solution size: {:?}", size); - assert!(size.is_valid()); - println!("\nReduction verified successfully"); - - // Export JSON - let source_edges = source.graph().edges(); - let target_edges = target.graph().edges(); - let source_variant = variant_to_map(MaximumIndependentSet::::variant()); 
- let target_variant = variant_to_map(MaximumClique::::variant()); - let overhead = lookup_overhead( - "MaximumIndependentSet", - &source_variant, - "MaximumClique", - &target_variant, - ) - .expect("MaximumIndependentSet -> MaximumClique overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: MaximumIndependentSet::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": source.graph().num_vertices(), - "num_edges": source.graph().num_edges(), - "edges": source_edges, - }), - }, - target: ProblemSide { - problem: MaximumClique::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vertices": target.num_vertices(), - "num_edges": target.num_edges(), - "edges": target_edges, - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "maximumindependentset_to_maximumclique"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_maximumindependentset_to_maximumsetpacking.rs b/examples/reduction_maximumindependentset_to_maximumsetpacking.rs deleted file mode 100644 index 4a567417..00000000 --- a/examples/reduction_maximumindependentset_to_maximumsetpacking.rs +++ /dev/null @@ -1,139 +0,0 @@ -// # Independent Set to Set Packing Reduction -// -// ## Mathematical Equivalence -// For each vertex v, create a set S_v of edges incident to v. Universe U = E. -// Selecting vertex v means selecting S_v. Independent vertices have disjoint -// incident edge sets, so IS maps to set packing with identical optimal value. -// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges, 3-regular) -// - Source IS: max size 4 -// - Target MaximumSetPacking: max packing 4 -// -// ## Output -// Exports `docs/paper/examples/maximumindependentset_to_maximumsetpacking.json` and `maximumindependentset_to_maximumsetpacking.result.json`. 
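The incident-edge-set construction this header describes can be sketched directly: build S_v for each vertex over the universe U = E, then observe that two vertices are adjacent exactly when their sets share an edge. Helper names below are illustrative, not the crate's API:

```rust
// IS -> Set Packing: S_v = indices of edges incident to v, universe U = E.
// Independent vertices have pairwise disjoint incident-edge sets.
fn incident_edge_sets(n: usize, edges: &[(usize, usize)]) -> Vec<Vec<usize>> {
    let mut sets = vec![Vec::new(); n];
    for (e, &(u, v)) in edges.iter().enumerate() {
        sets[u].push(e); // edge index e is an element of the universe
        sets[v].push(e);
    }
    sets
}

fn disjoint(a: &[usize], b: &[usize]) -> bool {
    a.iter().all(|x| !b.contains(x))
}

fn main() {
    // path graph P4: {0, 2} is independent, {1, 2} is not
    let edges = [(0usize, 1usize), (1, 2), (2, 3)];
    let sets = incident_edge_sets(4, &edges);
    assert!(disjoint(&sets[0], &sets[2])); // non-adjacent => disjoint sets
    assert!(!disjoint(&sets[1], &sets[2])); // adjacent => sets share edge (1,2)
    println!("{:?}", sets);
}
```

Because adjacency and set intersection coincide, a maximum packing selects exactly the incident-edge sets of a maximum independent set, which is why the optimal values match (4 on the Petersen graph).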
-//
-// See docs/paper/reductions.typ for the full reduction specification.
-
-use problemreductions::export::*;
-use problemreductions::prelude::*;
-use problemreductions::topology::small_graphs::petersen;
-use problemreductions::topology::{Graph, SimpleGraph};
-
-pub fn run() {
-    println!("\n=== Independent Set -> Set Packing Reduction ===\n");
-
-    // Petersen graph: 10 vertices, 15 edges, 3-regular
-    let (num_vertices, edges) = petersen();
-    let source = MaximumIndependentSet::new(
-        SimpleGraph::new(num_vertices, edges.clone()),
-        vec![1i32; num_vertices],
-    );
-
-    println!("Source: MaximumIndependentSet on Petersen graph");
-    println!("  Vertices: {}", num_vertices);
-    println!("  Edges: {:?}", edges);
-
-    // Reduce to MaximumSetPacking
-    let reduction = ReduceTo::<MaximumSetPacking<i32>>::reduce_to(&source);
-    let target = reduction.target_problem();
-
-    println!("\nTarget: MaximumSetPacking");
-    println!("  Sets: {} sets", target.num_sets());
-    for (i, set) in target.sets().iter().enumerate() {
-        println!("  S_{} = {:?}", i, set);
-    }
-
-    // Solve the target problem
-    let solver = BruteForce::new();
-    let target_solutions = solver.find_all_best(target);
-
-    println!("\nBest target solutions: {}", target_solutions.len());
-
-    // Extract and verify each solution
-    let mut solutions = Vec::new();
-    for (i, target_sol) in target_solutions.iter().enumerate() {
-        let source_sol = reduction.extract_solution(target_sol);
-        let source_size = source.evaluate(&source_sol);
-        let target_size = target.evaluate(target_sol);
-
-        println!(
-            "  Solution {}: target={:?} (size={:?}), source={:?} (size={:?}, valid={})",
-            i,
-            target_sol,
-            target_size,
-            source_sol,
-            source_size,
-            source_size.is_valid()
-        );
-
-        assert!(
-            source_size.is_valid(),
-            "Extracted source solution must be valid"
-        );
-
-        solutions.push(SolutionPair {
-            source_config: source_sol,
-            target_config: target_sol.clone(),
-        });
-    }
-
-    // Use the first solution for additional assertions
-    let target_sol = &target_solutions[0];
-    let source_sol = reduction.extract_solution(target_sol);
-    let source_size = source.evaluate(&source_sol);
-    let target_size = target.evaluate(target_sol);
-
-    assert_eq!(
-        source_size,
-        problemreductions::types::SolutionSize::Valid(4),
-        "IS on Petersen graph has optimal size 4"
-    );
-    assert_eq!(
-        target_size,
-        problemreductions::types::SolutionSize::Valid(4),
-        "MaximumSetPacking should also have size 4"
-    );
-
-    // Export JSON
-    let source_variant = variant_to_map(MaximumIndependentSet::<SimpleGraph, i32>::variant());
-    let target_variant = variant_to_map(MaximumSetPacking::<i32>::variant());
-    let overhead = lookup_overhead(
-        "MaximumIndependentSet",
-        &source_variant,
-        "MaximumSetPacking",
-        &target_variant,
-    )
-    .expect("MaximumIndependentSet -> MaximumSetPacking overhead not found");
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumIndependentSet::<SimpleGraph, i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_vertices": source.graph().num_vertices(),
-                "num_edges": source.graph().num_edges(),
-                "edges": edges,
-            }),
-        },
-        target: ProblemSide {
-            problem: MaximumSetPacking::<i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_sets": target.num_sets(),
-                "sets": target.sets(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "maximumindependentset_to_maximumsetpacking";
-    write_example(name, &data, &results);
-
-    println!("\nDone: IS(Petersen) optimal=4 maps to MaximumSetPacking optimal=4");
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_maximumindependentset_to_minimumvertexcover.rs b/examples/reduction_maximumindependentset_to_minimumvertexcover.rs
deleted file mode 100644
index 1b4c594b..00000000
--- a/examples/reduction_maximumindependentset_to_minimumvertexcover.rs
+++ /dev/null
@@ -1,106 +0,0 @@
-// # Independent Set to Vertex Cover Reduction
-//
-// ## Mathematical Equivalence
-// S ⊆ V is an independent set iff V \ S is a vertex cover. The complement
-// operation preserves optimality since |IS| + |VC| = |V| is constant.
-//
-// ## This Example
-// - Instance: Petersen graph (10 vertices, 15 edges, 3-regular)
-// - Source IS: max size 4
-// - Target VC: min size 6
-//
-// ## Output
-// Exports `docs/paper/examples/maximumindependentset_to_minimumvertexcover.json` and `maximumindependentset_to_minimumvertexcover.result.json`.
-//
-// See docs/paper/reductions.typ for the full reduction specification.
-
-use problemreductions::export::*;
-use problemreductions::prelude::*;
-use problemreductions::topology::small_graphs::petersen;
-use problemreductions::topology::{Graph, SimpleGraph};
-
-pub fn run() {
-    // 1. Create IS instance: Petersen graph
-    let (num_vertices, edges) = petersen();
-    let is = MaximumIndependentSet::new(
-        SimpleGraph::new(num_vertices, edges.clone()),
-        vec![1i32; num_vertices],
-    );
-
-    // 2. Reduce to VC
-    let reduction = ReduceTo::<MinimumVertexCover<SimpleGraph, i32>>::reduce_to(&is);
-    let vc = reduction.target_problem();
-
-    // 3. Print transformation
-    println!("\n=== Problem Transformation ===");
-    println!(
-        "Source: MaximumIndependentSet with {} variables",
-        is.num_variables()
-    );
-    println!(
-        "Target: MinimumVertexCover with {} variables",
-        vc.num_variables()
-    );
-
-    // 4. Solve target
-    let solver = BruteForce::new();
-    let vc_solutions = solver.find_all_best(vc);
-    println!("\n=== Solution ===");
-    println!("Target solutions found: {}", vc_solutions.len());
-
-    // 5. Extract and verify solutions
-    let mut solutions = Vec::new();
-    for target_sol in &vc_solutions {
-        let source_sol = reduction.extract_solution(target_sol);
-        let size = is.evaluate(&source_sol);
-        // MaximumIndependentSet is a maximization problem, infeasible configs return Invalid
-        assert!(size.is_valid());
-        solutions.push(SolutionPair {
-            source_config: source_sol,
-            target_config: target_sol.clone(),
-        });
-    }
-    println!("Reduction verified successfully");
-
-    // 6. Export JSON
-    let source_variant = variant_to_map(MaximumIndependentSet::<SimpleGraph, i32>::variant());
-    let target_variant = variant_to_map(MinimumVertexCover::<SimpleGraph, i32>::variant());
-    let overhead = lookup_overhead(
-        "MaximumIndependentSet",
-        &source_variant,
-        "MinimumVertexCover",
-        &target_variant,
-    )
-    .expect("MaximumIndependentSet -> MinimumVertexCover overhead not found");
-    let vc_edges = vc.graph().edges();
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumIndependentSet::<SimpleGraph, i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_vertices": is.graph().num_vertices(),
-                "num_edges": is.graph().num_edges(),
-                "edges": edges,
-            }),
-        },
-        target: ProblemSide {
-            problem: MinimumVertexCover::<SimpleGraph, i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_vertices": vc.graph().num_vertices(),
-                "num_edges": vc.graph().num_edges(),
-                "edges": vc_edges,
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "maximumindependentset_to_minimumvertexcover";
-    write_example(name, &data, &results);
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_maximumindependentset_to_qubo.rs b/examples/reduction_maximumindependentset_to_qubo.rs
deleted file mode 100644
index 334cc812..00000000
--- a/examples/reduction_maximumindependentset_to_qubo.rs
+++ /dev/null
@@ -1,118 +0,0 @@
-// # Independent Set to QUBO via Reduction Path
-//
-// ## This Example
-// - Instance: Petersen graph (10 vertices, 15 edges, 3-regular)
-// - Source: MaximumIndependentSet with maximum size 4
-// - Target: QUBO reached through the reduction graph
-//
-// ## Output
-// Exports `docs/paper/examples/maximumindependentset_to_qubo.json` and
-// `maximumindependentset_to_qubo.result.json`.
-
-use problemreductions::export::*;
-use problemreductions::prelude::*;
-use problemreductions::rules::{Minimize, ReductionGraph};
-use problemreductions::topology::small_graphs::petersen;
-use problemreductions::topology::{Graph, SimpleGraph};
-use problemreductions::types::ProblemSize;
-
-pub fn run() {
-    println!("=== Independent Set -> QUBO Reduction ===\n");
-
-    let (num_vertices, edges) = petersen();
-    let is = MaximumIndependentSet::new(
-        SimpleGraph::new(num_vertices, edges.clone()),
-        vec![1i32; num_vertices],
-    );
-
-    let graph = ReductionGraph::new();
-    let src_variant_bt =
-        ReductionGraph::variant_to_map(&MaximumIndependentSet::<SimpleGraph, i32>::variant());
-    let dst_variant_bt = ReductionGraph::variant_to_map(&QUBO::<i32>::variant());
-    let path = graph
-        .find_cheapest_path(
-            "MaximumIndependentSet",
-            &src_variant_bt,
-            "QUBO",
-            &dst_variant_bt,
-            &ProblemSize::new(vec![
-                ("num_vertices", is.graph().num_vertices()),
-                ("num_edges", is.graph().num_edges()),
-            ]),
-            &Minimize("num_vars"),
-        )
-        .expect("MaximumIndependentSet -> QUBO path not found");
-    let reduction = graph
-        .reduce_along_path(&path, &is as &dyn std::any::Any)
-        .expect("MaximumIndependentSet -> QUBO path reduction failed");
-    let qubo: &QUBO<i32> = reduction.target_problem();
-
-    println!("Source: MaximumIndependentSet on Petersen graph (10 vertices, 15 edges)");
-    println!("Path: {}", path);
-    println!("Target: QUBO with {} variables", qubo.num_variables());
-    println!("Q matrix:");
-    for row in qubo.matrix() {
-        println!("  {:?}", row);
-    }
-
-    let solver = BruteForce::new();
-    let qubo_solutions = solver.find_all_best(qubo);
-
-    println!("\nOptimal solutions:");
-    let mut solutions = Vec::new();
-    for sol in &qubo_solutions {
-        let extracted = reduction.extract_solution(sol);
-        let sol_size = is.evaluate(&extracted);
-        assert!(
-            sol_size.is_valid(),
-            "Solution must be valid in source problem"
-        );
-
-        let selected: Vec<usize> = extracted
-            .iter()
-            .enumerate()
-            .filter(|(_, &x)| x == 1)
-            .map(|(i, _)| i)
-            .collect();
-        println!("  Vertices: {:?} (size {})", selected, selected.len());
-
-        solutions.push(SolutionPair {
-            source_config: extracted,
-            target_config: sol.clone(),
-        });
-    }
-
-    println!("\nVerification passed: all solutions are valid");
-
-    let source_variant = variant_to_map(MaximumIndependentSet::<SimpleGraph, i32>::variant());
-    let target_variant = variant_to_map(QUBO::<i32>::variant());
-    let overhead = graph.compose_path_overhead(&path);
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumIndependentSet::<SimpleGraph, i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_vertices": is.graph().num_vertices(),
-                "num_edges": is.graph().num_edges(),
-                "edges": edges,
-            }),
-        },
-        target: ProblemSide {
-            problem: QUBO::<i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_vars": qubo.num_vars(),
-                "matrix": qubo.matrix(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    write_example("maximumindependentset_to_qubo", &data, &results);
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_maximummatching_to_ilp.rs b/examples/reduction_maximummatching_to_ilp.rs
deleted file mode 100644
index 7a588c88..00000000
--- a/examples/reduction_maximummatching_to_ilp.rs
+++ /dev/null
@@ -1,108 +0,0 @@
-// # MaximumMatching to ILP Reduction
-//
-// ## Mathematical Formulation
-// Variables: x_e in {0,1} for each edge e.
-// Constraints: sum_{e incident to v} x_e <= 1 for each vertex v.
-// Objective: maximize sum of w_e * x_e.
-//
-// ## This Example
-// - Instance: Petersen graph (10 vertices, 15 edges), perfect matching of size 5
-// - Source MaximumMatching: max matching size 5
-// - Target ILP: 15 binary variables (one per edge), 10 vertex constraints
-//
-// ## Output
-// Exports `docs/paper/examples/maximummatching_to_ilp.json` and `maximummatching_to_ilp.result.json`.
-
-use problemreductions::export::*;
-use problemreductions::models::algebraic::ILP;
-use problemreductions::prelude::*;
-use problemreductions::topology::small_graphs::petersen;
-use problemreductions::topology::{Graph, SimpleGraph};
-
-pub fn run() {
-    // 1. Create MaximumMatching instance: Petersen graph with unit weights
-    let (num_vertices, edges) = petersen();
-    let matching =
-        MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(num_vertices, edges.clone()));
-
-    // 2. Reduce to ILP
-    let reduction = ReduceTo::<ILP<i32>>::reduce_to(&matching);
-    let ilp = reduction.target_problem();
-
-    // 3. Print transformation
-    println!("\n=== Problem Transformation ===");
-    println!(
-        "Source: MaximumMatching with {} variables (edges)",
-        matching.num_variables()
-    );
-    println!(
-        "Target: ILP with {} variables, {} constraints",
-        ilp.num_vars,
-        ilp.constraints.len()
-    );
-
-    // 4. Solve target ILP
-    let solver = BruteForce::new();
-    let ilp_solutions = solver.find_all_best(ilp);
-    println!("\n=== Solution ===");
-    println!("ILP solutions found: {}", ilp_solutions.len());
-
-    let ilp_solution = &ilp_solutions[0];
-    println!("ILP solution: {:?}", ilp_solution);
-
-    // 5. Extract source solution
-    let matching_solution = reduction.extract_solution(ilp_solution);
-    println!("Source MaximumMatching solution: {:?}", matching_solution);
-
-    // 6. Verify
-    let size = matching.evaluate(&matching_solution);
-    println!("Solution size: {:?}", size);
-    assert!(size.is_valid()); // Valid solution
-    println!("\nReduction verified successfully");
-
-    // 7. Collect solutions and export JSON
-    let mut solutions = Vec::new();
-    for target_config in &ilp_solutions {
-        let source_sol = reduction.extract_solution(target_config);
-        let s = matching.evaluate(&source_sol);
-        assert!(s.is_valid()); // Valid solution
-        solutions.push(SolutionPair {
-            source_config: source_sol,
-            target_config: target_config.clone(),
-        });
-    }
-
-    let source_variant = variant_to_map(MaximumMatching::<SimpleGraph, i32>::variant());
-    let target_variant = variant_to_map(ILP::<i32>::variant());
-    let overhead = lookup_overhead("MaximumMatching", &source_variant, "ILP", &target_variant)
-        .unwrap_or_default();
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumMatching::<SimpleGraph, i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_vertices": matching.graph().num_vertices(),
-                "num_edges": matching.graph().num_edges(),
-                "edges": edges,
-            }),
-        },
-        target: ProblemSide {
-            problem: ILP::<i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_vars": ilp.num_vars,
-                "num_constraints": ilp.constraints.len(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "maximummatching_to_ilp";
-    write_example(name, &data, &results);
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_maximummatching_to_maximumsetpacking.rs b/examples/reduction_maximummatching_to_maximumsetpacking.rs
deleted file mode 100644
index 3c0c488b..00000000
--- a/examples/reduction_maximummatching_to_maximumsetpacking.rs
+++ /dev/null
@@ -1,114 +0,0 @@
-// # MaximumMatching to Set Packing Reduction
-//
-// ## Mathematical Equivalence
-// Each edge e = (u,v) becomes a set S_e = {u, v}. Universe U = V.
-// A matching (edges with no shared vertices) maps to a packing (sets with
-// no shared elements) with the same weight.
-//
-// ## This Example
-// - Instance: Petersen graph (10 vertices, 15 edges), perfect matching of size 5
-// - Source matching: max size 5
-// - Target MaximumSetPacking: max packing 5
-//
-// ## Output
-// Exports `docs/paper/examples/maximummatching_to_maximumsetpacking.json` and `maximummatching_to_maximumsetpacking.result.json`.
-//
-// See docs/paper/reductions.typ for the full reduction specification.
-
-use problemreductions::export::*;
-use problemreductions::prelude::*;
-use problemreductions::topology::small_graphs::petersen;
-use problemreductions::topology::{Graph, SimpleGraph};
-
-pub fn run() {
-    println!("\n=== MaximumMatching -> Set Packing Reduction ===\n");
-
-    // Petersen graph with unit weights
-    let (num_vertices, edges) = petersen();
-    let source =
-        MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(num_vertices, edges.clone()));
-
-    println!("Source: MaximumMatching on Petersen graph");
-    println!("  Vertices: {}", num_vertices);
-    println!("  Edges: {:?}", edges);
-
-    // Reduce to MaximumSetPacking
-    let reduction = ReduceTo::<MaximumSetPacking<i32>>::reduce_to(&source);
-    let target = reduction.target_problem();
-
-    println!("\nTarget: MaximumSetPacking");
-    println!("  Sets: {} sets", target.num_sets());
-    for (i, set) in target.sets().iter().enumerate() {
-        println!("  S_{} = {:?}", i, set);
-    }
-
-    // Solve the target problem
-    let solver = BruteForce::new();
-    let target_solutions = solver.find_all_best(target);
-
-    println!("\nBest target solutions: {}", target_solutions.len());
-
-    // Extract and verify each solution
-    let mut solutions = Vec::new();
-    for (i, target_sol) in target_solutions.iter().enumerate() {
-        let source_sol = reduction.extract_solution(target_sol);
-        let source_size = source.evaluate(&source_sol);
-        let target_size = target.evaluate(target_sol);
-
-        println!(
-            "  Solution {}: target={:?} (size={:?}), source={:?} (size={:?})",
-            i, target_sol, target_size, source_sol, source_size
-        );
-        assert!(
-            source_size.is_valid(),
-            "Extracted source solution must be valid"
-        );
-
-        solutions.push(SolutionPair {
-            source_config: source_sol,
-            target_config: target_sol.clone(),
-        });
-    }
-
-    // Export JSON
-    let source_variant = variant_to_map(MaximumMatching::<SimpleGraph, i32>::variant());
-    let target_variant = variant_to_map(MaximumSetPacking::<i32>::variant());
-    let overhead = lookup_overhead(
-        "MaximumMatching",
-        &source_variant,
-        "MaximumSetPacking",
-        &target_variant,
-    )
-    .expect("MaximumMatching -> MaximumSetPacking overhead not found");
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumMatching::<SimpleGraph, i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_vertices": source.graph().num_vertices(),
-                "num_edges": source.graph().num_edges(),
-                "edges": edges,
-            }),
-        },
-        target: ProblemSide {
-            problem: MaximumSetPacking::<i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_sets": target.num_sets(),
-                "sets": target.sets(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "maximummatching_to_maximumsetpacking";
-    write_example(name, &data, &results);
-
-    println!("\nDone: MaximumMatching(Petersen) optimal=5 maps to MaximumSetPacking optimal=5");
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_maximumsetpacking_to_ilp.rs b/examples/reduction_maximumsetpacking_to_ilp.rs
deleted file mode 100644
index 4a832196..00000000
--- a/examples/reduction_maximumsetpacking_to_ilp.rs
+++ /dev/null
@@ -1,105 +0,0 @@
-// # Set Packing to ILP Reduction
-//
-// ## Mathematical Formulation
-// Variables: x_i in {0,1} for each set S_i.
-// Constraints: x_i + x_j <= 1 for each overlapping pair (i,j).
-// Objective: maximize sum of w_i * x_i.
-//
-// ## This Example
-// - Instance: 6 sets over universe {0,...,7}
-// - S0={0,1,2}, S1={2,3,4}, S2={4,5,6}, S3={6,7,0}, S4={1,3,5}, S5={0,4,7}
-// - Source MaximumSetPacking: max packing size 2
-// - Target ILP: 6 binary variables, one constraint per overlapping pair
-//
-// ## Output
-// Exports `docs/paper/examples/maximumsetpacking_to_ilp.json` and `maximumsetpacking_to_ilp.result.json`.
-
-use problemreductions::export::*;
-use problemreductions::models::algebraic::ILP;
-use problemreductions::prelude::*;
-
-pub fn run() {
-    let sets = vec![
-        vec![0, 1, 2],
-        vec![2, 3, 4],
-        vec![4, 5, 6],
-        vec![6, 7, 0],
-        vec![1, 3, 5],
-        vec![0, 4, 7],
-    ];
-    let sp = MaximumSetPacking::<i32>::new(sets.clone());
-
-    let reduction = ReduceTo::<ILP<i32>>::reduce_to(&sp);
-    let ilp = reduction.target_problem();
-
-    println!("\n=== Problem Transformation ===");
-    println!(
-        "Source: MaximumSetPacking with {} sets over universe {{0,...,7}}",
-        sp.num_variables()
-    );
-    for (i, s) in sets.iter().enumerate() {
-        println!("  S{} = {:?}", i, s);
-    }
-    println!(
-        "Target: ILP with {} variables, {} constraints",
-        ilp.num_vars,
-        ilp.constraints.len()
-    );
-
-    let solver = BruteForce::new();
-    let ilp_solutions = solver.find_all_best(ilp);
-    println!("\n=== Solution ===");
-    println!("ILP solutions found: {}", ilp_solutions.len());
-
-    let ilp_solution = &ilp_solutions[0];
-    println!("ILP solution: {:?}", ilp_solution);
-
-    let sp_solution = reduction.extract_solution(ilp_solution);
-    println!("Source MaximumSetPacking solution: {:?}", sp_solution);
-
-    let metric = sp.evaluate(&sp_solution);
-    println!("Solution metric: {:?}", metric);
-    println!("\nReduction verified successfully");
-
-    let mut solutions = Vec::new();
-    for target_config in &ilp_solutions {
-        let source_sol = reduction.extract_solution(target_config);
-        solutions.push(SolutionPair {
-            source_config: source_sol,
-            target_config: target_config.clone(),
-        });
-    }
-
-    let source_variant = variant_to_map(MaximumSetPacking::<i32>::variant());
-    let target_variant = variant_to_map(ILP::<i32>::variant());
-    let overhead = lookup_overhead("MaximumSetPacking", &source_variant, "ILP", &target_variant)
-        .unwrap_or_default();
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumSetPacking::<i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_sets": sp.num_sets(),
-                "sets": sp.sets(),
-            }),
-        },
-        target: ProblemSide {
-            problem: ILP::<i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_vars": ilp.num_vars,
-                "num_constraints": ilp.constraints.len(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "maximumsetpacking_to_ilp";
-    write_example(name, &data, &results);
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_maximumsetpacking_to_maximumindependentset.rs b/examples/reduction_maximumsetpacking_to_maximumindependentset.rs
deleted file mode 100644
index 7e719082..00000000
--- a/examples/reduction_maximumsetpacking_to_maximumindependentset.rs
+++ /dev/null
@@ -1,149 +0,0 @@
-// # Set Packing to Independent Set Reduction
-//
-// ## Mathematical Equivalence
-// Each set becomes a vertex; two vertices are adjacent if their sets overlap.
-// Selecting a collection of non-overlapping sets is equivalent to selecting
-// an independent set in the conflict graph. The optimal packing size equals
-// the maximum independent set size.
-//
-// ## This Example
-// - Instance: 5 sets over universe {0,...,7}, with varying sizes (2 and 3)
-// - S0 = {0, 1, 2}, S1 = {2, 3}, S2 = {4, 5, 6}, S3 = {1, 5, 7}, S4 = {3, 6}
-// - Conflict edges: (0,1) share 2, (0,3) share 1, (1,4) share 3, (2,3) share 5, (2,4) share 6
-// - Source MaximumSetPacking: max packing size 2 (e.g., S0+S2, S1+S3, S3+S4, ...)
-// - Target MaximumIndependentSet: 5 vertices, 5 edges, max IS size 2
-//
-// ## Output
-// Exports `docs/paper/examples/maximumsetpacking_to_maximumindependentset.json` and
-// `maximumsetpacking_to_maximumindependentset.result.json`.
-//
-// See docs/paper/reductions.typ for the full reduction specification.
-
-use problemreductions::export::*;
-use problemreductions::prelude::*;
-use problemreductions::topology::{Graph, SimpleGraph};
-
-pub fn run() {
-    println!("\n=== Set Packing -> Independent Set Reduction ===\n");
-
-    // 1. Create MaximumSetPacking instance: 5 sets over universe {0,...,7}
-    let sets = vec![
-        vec![0, 1, 2], // S0 (size 3)
-        vec![2, 3],    // S1 (size 2, overlaps S0 at 2)
-        vec![4, 5, 6], // S2 (size 3, disjoint from S0, S1)
-        vec![1, 5, 7], // S3 (size 3, overlaps S0 at 1, S2 at 5)
-        vec![3, 6],    // S4 (size 2, overlaps S1 at 3, S2 at 6)
-    ];
-    let num_sets = sets.len();
-    let sp = MaximumSetPacking::with_weights(sets.clone(), vec![1i32; num_sets]);
-
-    println!("Source: MaximumSetPacking with {} sets", num_sets);
-    for (i, s) in sets.iter().enumerate() {
-        println!("  S{} = {:?}", i, s);
-    }
-
-    // 2. Reduce to MaximumIndependentSet
-    let reduction = ReduceTo::<MaximumIndependentSet<SimpleGraph, i32>>::reduce_to(&sp);
-    let target = reduction.target_problem();
-
-    println!("\nTarget: MaximumIndependentSet");
-    println!("  Vertices: {}", target.graph().num_vertices());
-    println!(
-        "  Edges: {} {:?}",
-        target.graph().num_edges(),
-        target.graph().edges()
-    );
-
-    // 3. Solve the target problem
-    let solver = BruteForce::new();
-    let target_solutions = solver.find_all_best(target);
-
-    println!("\nBest target solutions: {}", target_solutions.len());
-
-    // 4. Extract and verify each solution
-    let mut solutions = Vec::new();
-    for (i, target_sol) in target_solutions.iter().enumerate() {
-        let source_sol = reduction.extract_solution(target_sol);
-        let source_size = sp.evaluate(&source_sol);
-        let target_size = target.evaluate(target_sol);
-
-        println!(
-            "  Solution {}: target={:?} (size={:?}), source={:?} (size={:?}, valid={})",
-            i,
-            target_sol,
-            target_size,
-            source_sol,
-            source_size,
-            source_size.is_valid()
-        );
-
-        assert!(
-            source_size.is_valid(),
-            "Extracted source solution must be valid"
-        );
-
-        solutions.push(SolutionPair {
-            source_config: source_sol,
-            target_config: target_sol.clone(),
-        });
-    }
-
-    // 5. Verify the optimal value
-    let target_sol = &target_solutions[0];
-    let source_sol = reduction.extract_solution(target_sol);
-    let source_size = sp.evaluate(&source_sol);
-    let target_size = target.evaluate(target_sol);
-
-    assert_eq!(
-        source_size,
-        problemreductions::types::SolutionSize::Valid(2),
-        "MaximumSetPacking optimal packing size is 2"
-    );
-    assert_eq!(
-        target_size,
-        problemreductions::types::SolutionSize::Valid(2),
-        "MaximumIndependentSet should also have size 2"
-    );
-
-    // 6. Export JSON
-    let source_variant = variant_to_map(MaximumSetPacking::<i32>::variant());
-    let target_variant = variant_to_map(MaximumIndependentSet::<SimpleGraph, i32>::variant());
-    let overhead = lookup_overhead(
-        "MaximumSetPacking",
-        &source_variant,
-        "MaximumIndependentSet",
-        &target_variant,
-    )
-    .expect("MaximumSetPacking -> MaximumIndependentSet overhead not found");
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumSetPacking::<i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_sets": sp.num_sets(),
-                "sets": sp.sets(),
-            }),
-        },
-        target: ProblemSide {
-            problem: MaximumIndependentSet::<SimpleGraph, i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_vertices": target.graph().num_vertices(),
-                "num_edges": target.graph().num_edges(),
-                "edges": target.graph().edges(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "maximumsetpacking_to_maximumindependentset";
-    write_example(name, &data, &results);
-
-    println!("\nDone: SetPacking(5 sets) optimal=2 maps to IS(5 vertices, 5 edges) optimal=2");
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_maximumsetpacking_to_qubo.rs b/examples/reduction_maximumsetpacking_to_qubo.rs
deleted file mode 100644
index 4ceacc8a..00000000
--- a/examples/reduction_maximumsetpacking_to_qubo.rs
+++ /dev/null
@@ -1,137 +0,0 @@
-// # Set Packing to QUBO Reduction (Penalty Method)
-//
-// ## Mathematical Relationship
-// The Maximum Set Packing problem selects the largest collection of
-// non-overlapping sets from a family of sets. It is mapped to QUBO as:
-//
-// H(x) = -sum_i x_i + P * sum_{i<j: S_i ∩ S_j ≠ ∅} x_i x_j, where P > 1 penalizes
-// overlapping sets. The QUBO minimization maximizes the number of selected
-// non-overlapping sets.
-//
-// ## This Example
-// - Instance: 6 sets over universe {0,...,7}
-// - S0 = {0, 1, 2}
-// - S1 = {2, 3, 4} (overlaps S0 at 2)
-// - S2 = {4, 5, 6} (overlaps S1 at 4)
-// - S3 = {6, 7, 0} (overlaps S2 at 6, S0 at 0)
-// - S4 = {1, 3, 5} (overlaps S0, S1, S2)
-// - S5 = {0, 4, 7} (overlaps S0, S1, S3)
-// - QUBO variables: 6 (one per set)
-// - Expected: Optimal packing selects 2 disjoint sets (e.g., {S0, S2} or {S1, S3})
-//
-// ## Output
-// Exports `docs/paper/examples/maximumsetpacking_to_qubo.json` and `maximumsetpacking_to_qubo.result.json`.
-//
-// ## Usage
-// ```bash
-// cargo run --example reduction_maximumsetpacking_to_qubo
-// ```
-
-use problemreductions::export::*;
-use problemreductions::prelude::*;
-
-pub fn run() {
-    println!("=== Set Packing -> QUBO Reduction ===\n");
-
-    // 6 sets over universe {0,...,7}
-    let sets = vec![
-        vec![0, 1, 2], // S0
-        vec![2, 3, 4], // S1 (overlaps S0 at 2)
-        vec![4, 5, 6], // S2 (overlaps S1 at 4)
-        vec![6, 7, 0], // S3 (overlaps S2 at 6, S0 at 0)
-        vec![1, 3, 5], // S4 (overlaps S0, S1, S2)
-        vec![0, 4, 7], // S5 (overlaps S0, S1, S3)
-    ];
-    let sp = MaximumSetPacking::<i32>::new(sets.clone());
-
-    // Reduce to QUBO
-    let reduction = ReduceTo::<QUBO<i32>>::reduce_to(&sp);
-    let qubo = reduction.target_problem();
-
-    println!("Source: MaximumSetPacking with 6 sets over universe {{0,...,7}}");
-    for (i, s) in sets.iter().enumerate() {
-        println!("  S{} = {:?}", i, s);
-    }
-    println!("Target: QUBO with {} variables", qubo.num_variables());
-    println!("Q matrix:");
-    for row in qubo.matrix() {
-        println!("  {:?}", row);
-    }
-
-    // Solve QUBO with brute force
-    let solver = BruteForce::new();
-    let qubo_solutions = solver.find_all_best(qubo);
-
-    // Extract and verify solutions
-    println!("\nOptimal solutions:");
-    let mut solutions = Vec::new();
-    for sol in &qubo_solutions {
-        let extracted = reduction.extract_solution(sol);
-        let selected: Vec<usize> = extracted
-            .iter()
-            .enumerate()
-            .filter(|(_, &x)| x == 1)
-            .map(|(i, _)| i)
-            .collect();
-        let packing_size = selected.len();
-        println!(
-            "  Selected sets: {:?} (packing size {})",
-            selected, packing_size
-        );
-
-        // Closed-loop verification: check solution is valid in original problem
-        let sol_size = sp.evaluate(&extracted);
-        assert!(
-            sol_size.is_valid(),
-            "Solution must be valid in source problem"
-        );
-
-        solutions.push(SolutionPair {
-            source_config: extracted,
-            target_config: sol.clone(),
-        });
-    }
-
-    println!("\nVerification passed: all solutions are valid set packings");
-
-    // Export JSON
-    let source_variant = variant_to_map(MaximumSetPacking::<i32>::variant());
-    let target_variant = variant_to_map(QUBO::<i32>::variant());
-    let overhead = lookup_overhead(
-        "MaximumSetPacking",
-        &source_variant,
-        "QUBO",
-        &target_variant,
-    )
-    .expect("MaximumSetPacking -> QUBO overhead not found");
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MaximumSetPacking::<i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_sets": sp.num_sets(),
-                "sets": sp.sets(),
-            }),
-        },
-        target: ProblemSide {
-            problem: QUBO::<i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_vars": qubo.num_vars(),
-                "matrix": qubo.matrix(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "maximumsetpacking_to_qubo";
-    write_example(name, &data, &results);
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_minimumdominatingset_to_ilp.rs b/examples/reduction_minimumdominatingset_to_ilp.rs
deleted file mode 100644
index a684986f..00000000
--- a/examples/reduction_minimumdominatingset_to_ilp.rs
+++ /dev/null
@@ -1,117 +0,0 @@
-// # Dominating Set to ILP Reduction
-//
-// ## Mathematical Formulation
-// Variables: x_v in {0,1} for each vertex v.
-// Constraints: x_v + sum_{u in N(v)} x_u >= 1 for each vertex v.
-// Objective: minimize sum of w_v * x_v.
-//
-// ## This Example
-// - Instance: Petersen graph (10 vertices, 15 edges), min dominating set size 3
-// - Source MinimumDominatingSet: min dominating set size 3
-// - Target ILP: 10 binary variables, 10 domination constraints
-//
-// ## Output
-// Exports `docs/paper/examples/minimumdominatingset_to_ilp.json` and `minimumdominatingset_to_ilp.result.json`.
-
-use problemreductions::export::*;
-use problemreductions::models::algebraic::ILP;
-use problemreductions::prelude::*;
-use problemreductions::topology::small_graphs::petersen;
-use problemreductions::topology::{Graph, SimpleGraph};
-
-pub fn run() {
-    // 1. Create MinimumDominatingSet instance: Petersen graph
-    let (num_vertices, edges) = petersen();
-    let ds = MinimumDominatingSet::new(
-        SimpleGraph::new(num_vertices, edges.clone()),
-        vec![1i32; num_vertices],
-    );
-
-    // 2. Reduce to ILP
-    let reduction = ReduceTo::<ILP<i32>>::reduce_to(&ds);
-    let ilp = reduction.target_problem();
-
-    // 3. Print transformation
-    println!("\n=== Problem Transformation ===");
-    println!(
-        "Source: MinimumDominatingSet with {} variables",
-        ds.num_variables()
-    );
-    println!(
-        "Target: ILP with {} variables, {} constraints",
-        ilp.num_vars,
-        ilp.constraints.len()
-    );
-
-    // 4. Solve target ILP
-    let solver = BruteForce::new();
-    let ilp_solutions = solver.find_all_best(ilp);
-    println!("\n=== Solution ===");
-    println!("ILP solutions found: {}", ilp_solutions.len());
-
-    let ilp_solution = &ilp_solutions[0];
-    println!("ILP solution: {:?}", ilp_solution);
-
-    // 5. Extract source solution
-    let ds_solution = reduction.extract_solution(ilp_solution);
-    println!("Source MinimumDominatingSet solution: {:?}", ds_solution);
-
-    // 6. Verify
-    let size = ds.evaluate(&ds_solution);
-    // MinimumDominatingSet is a minimization problem, infeasible configs return Invalid
-    println!("Solution size: {:?}", size);
-    assert!(size.is_valid());
-    println!("\nReduction verified successfully");
-
-    // 7. Collect solutions and export JSON
-    let mut solutions = Vec::new();
-    for target_config in &ilp_solutions {
-        let source_sol = reduction.extract_solution(target_config);
-        let s = ds.evaluate(&source_sol);
-        // MinimumDominatingSet is a minimization problem, infeasible configs return Invalid
-        assert!(s.is_valid());
-        solutions.push(SolutionPair {
-            source_config: source_sol,
-            target_config: target_config.clone(),
-        });
-    }
-
-    let source_variant = variant_to_map(MinimumDominatingSet::<SimpleGraph, i32>::variant());
-    let target_variant = variant_to_map(ILP::<i32>::variant());
-    let overhead = lookup_overhead(
-        "MinimumDominatingSet",
-        &source_variant,
-        "ILP",
-        &target_variant,
-    )
-    .unwrap_or_default();
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: MinimumDominatingSet::<SimpleGraph, i32>::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_vertices": ds.graph().num_vertices(),
-                "num_edges": ds.graph().num_edges(),
-                "edges": ds.graph().edges(),
-            }),
-        },
-        target: ProblemSide {
-            problem: ILP::<i32>::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_vars": ilp.num_vars,
-                "num_constraints": ilp.constraints.len(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "minimumdominatingset_to_ilp";
-    write_example(name, &data, &results);
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_minimumsetcovering_to_ilp.rs b/examples/reduction_minimumsetcovering_to_ilp.rs
deleted file mode 100644
index 0d8b726c..00000000
--- a/examples/reduction_minimumsetcovering_to_ilp.rs
+++ /dev/null
@@ -1,121 +0,0 @@
-// # Set Covering to ILP Reduction
-//
-// ## Mathematical Formulation
-// Variables: x_i in {0,1} for each set S_i.
-// Constraints: sum_{S_i containing e} x_i >= 1 for each element e in universe.
-// Objective: minimize sum of w_i * x_i.
-// -// ## This Example -// - Instance: Universe size 8, 6 sets -// - S0={0,1,2}, S1={2,3,4}, S2={4,5,6}, S3={6,7,0}, S4={1,3,5}, S5={0,4,7} -// - Source MinimumSetCovering: every element in {0,...,7} must be covered -// - Target ILP: 6 binary variables, 8 element-coverage constraints -// -// ## Output -// Exports `docs/paper/examples/minimumsetcovering_to_ilp.json` and `minimumsetcovering_to_ilp.result.json`. - -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; - -pub fn run() { - // 1. Create MinimumSetCovering instance: universe {0,...,7}, 6 sets - let sets = vec![ - vec![0, 1, 2], // S0 - vec![2, 3, 4], // S1 - vec![4, 5, 6], // S2 - vec![6, 7, 0], // S3 - vec![1, 3, 5], // S4 - vec![0, 4, 7], // S5 - ]; - let sc = MinimumSetCovering::::new(8, sets.clone()); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&sc); - let ilp = reduction.target_problem(); - - // 3. Print transformation - println!("\n=== Problem Transformation ==="); - println!( - "Source: MinimumSetCovering with {} sets over universe {{0,...,7}}", - sc.num_variables() - ); - for (i, s) in sets.iter().enumerate() { - println!(" S{} = {:?}", i, s); - } - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - // 4. Solve target ILP - let solver = BruteForce::new(); - let ilp_solutions = solver.find_all_best(ilp); - println!("\n=== Solution ==="); - println!("ILP solutions found: {}", ilp_solutions.len()); - - let ilp_solution = &ilp_solutions[0]; - println!("ILP solution: {:?}", ilp_solution); - - // 5. Extract source solution - let sc_solution = reduction.extract_solution(ilp_solution); - println!("Source MinimumSetCovering solution: {:?}", sc_solution); - - // 6. Verify - let size = sc.evaluate(&sc_solution); - println!("Solution size: {:?}", size); - assert!(size.is_valid()); // Valid solution - println!("\nReduction verified successfully"); - - // 7. 
Collect solutions and export JSON - let mut solutions = Vec::new(); - for target_config in &ilp_solutions { - let source_sol = reduction.extract_solution(target_config); - let s = sc.evaluate(&source_sol); - assert!(s.is_valid()); // Valid solution - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_config.clone(), - }); - } - - let source_variant = variant_to_map(MinimumSetCovering::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead( - "MinimumSetCovering", - &source_variant, - "ILP", - &target_variant, - ) - .unwrap_or_default(); - - let data = ReductionData { - source: ProblemSide { - problem: MinimumSetCovering::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_sets": sc.num_sets(), - "sets": sc.sets(), - "universe_size": sc.universe_size(), - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "minimumsetcovering_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_minimumvertexcover_to_ilp.rs b/examples/reduction_minimumvertexcover_to_ilp.rs deleted file mode 100644 index 186ec3d5..00000000 --- a/examples/reduction_minimumvertexcover_to_ilp.rs +++ /dev/null @@ -1,116 +0,0 @@ -// # Vertex Cover to ILP via Reduction Path -// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges), VC = 6 -// - Source VC: min size 6 -// - Target: ILP reached through the reduction graph -// -// ## Output -// Exports `docs/paper/examples/minimumvertexcover_to_ilp.json` and -// `minimumvertexcover_to_ilp.result.json`. 
- -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::rules::{MinimizeSteps, ReductionGraph}; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; -use problemreductions::types::ProblemSize; - -pub fn run() { - let (num_vertices, edges) = petersen(); - let vc = MinimumVertexCover::new( - SimpleGraph::new(num_vertices, edges.clone()), - vec![1i32; num_vertices], - ); - - let graph = ReductionGraph::new(); - let src_variant_bt = - ReductionGraph::variant_to_map(&MinimumVertexCover::::variant()); - let dst_variant_bt = ReductionGraph::variant_to_map(&ILP::::variant()); - let path = graph - .find_cheapest_path( - "MinimumVertexCover", - &src_variant_bt, - "ILP", - &dst_variant_bt, - &ProblemSize::new(vec![]), - &MinimizeSteps, - ) - .expect("MinimumVertexCover -> ILP path not found"); - let reduction = graph - .reduce_along_path(&path, &vc as &dyn std::any::Any) - .expect("MinimumVertexCover -> ILP path reduction failed"); - let ilp: &ILP = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: MinimumVertexCover with {} variables", - vc.num_variables() - ); - println!("Path: {}", path); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - let solver = BruteForce::new(); - let ilp_solutions = solver.find_all_best(ilp); - println!("\n=== Solution ==="); - println!("ILP solutions found: {}", ilp_solutions.len()); - - let ilp_solution = &ilp_solutions[0]; - println!("ILP solution: {:?}", ilp_solution); - - let vc_solution = reduction.extract_solution(ilp_solution); - println!("Source VC solution: {:?}", vc_solution); - - let size = vc.evaluate(&vc_solution); - println!("Solution size: {:?}", size); - assert!(size.is_valid()); - println!("\nReduction verified successfully"); - - let mut solutions = Vec::new(); - for 
target_config in &ilp_solutions { - let source_sol = reduction.extract_solution(target_config); - let s = vc.evaluate(&source_sol); - assert!(s.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_config.clone(), - }); - } - - let source_variant = variant_to_map(MinimumVertexCover::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = graph.compose_path_overhead(&path); - - let data = ReductionData { - source: ProblemSide { - problem: MinimumVertexCover::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": vc.graph().num_vertices(), - "num_edges": vc.graph().num_edges(), - "edges": vc.graph().edges(), - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - write_example("minimumvertexcover_to_ilp", &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_minimumvertexcover_to_maximumindependentset.rs b/examples/reduction_minimumvertexcover_to_maximumindependentset.rs deleted file mode 100644 index a718ccd3..00000000 --- a/examples/reduction_minimumvertexcover_to_maximumindependentset.rs +++ /dev/null @@ -1,113 +0,0 @@ -// # Vertex Cover to Independent Set Reduction -// -// ## Mathematical Equivalence -// C ⊆ V is a vertex cover iff V \ C is an independent set. The reduction -// creates an identical graph with identical weights. Solution extraction -// computes the complement: IS = V \ VC. 
-// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges), VC=6 -// - Source VC: min size 6 -// - Target IS: max size 4 -// -// ## Output -// Exports `docs/paper/examples/minimumvertexcover_to_maximumindependentset.json` and `minimumvertexcover_to_maximumindependentset.result.json`. -// -// See docs/paper/reductions.typ for the full reduction specification. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // Petersen graph: 10 vertices, 15 edges, VC=6 - let (num_vertices, edges) = petersen(); - let vc = MinimumVertexCover::new( - SimpleGraph::new(num_vertices, edges.clone()), - vec![1i32; num_vertices], - ); - - let reduction = ReduceTo::>::reduce_to(&vc); - let is = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: MinimumVertexCover with {} variables", - vc.num_variables() - ); - println!( - "Target: MaximumIndependentSet with {} variables", - is.num_variables() - ); - - let solver = BruteForce::new(); - let is_solutions = solver.find_all_best(is); - println!("\n=== Solution ==="); - println!("Target solutions found: {}", is_solutions.len()); - - // Extract and verify solutions - let mut solutions = Vec::new(); - for target_sol in &is_solutions { - let source_sol = reduction.extract_solution(target_sol); - let size = vc.evaluate(&source_sol); - // MinimumVertexCover is a minimization problem, infeasible configs return Invalid - assert!(size.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol.clone(), - target_config: target_sol.clone(), - }); - } - - let vc_solution = reduction.extract_solution(&is_solutions[0]); - println!("Source VC solution: {:?}", vc_solution); - - let size = vc.evaluate(&vc_solution); - println!("Solution size: {:?}", size); - // MinimumVertexCover is a minimization problem, infeasible configs 
return Invalid - assert!(size.is_valid()); - println!("\nReduction verified successfully"); - - // Export JSON - let vc_edges = vc.graph().edges(); - let is_edges = is.graph().edges(); - let source_variant = variant_to_map(MinimumVertexCover::::variant()); - let target_variant = variant_to_map(MaximumIndependentSet::::variant()); - let overhead = lookup_overhead( - "MinimumVertexCover", - &source_variant, - "MaximumIndependentSet", - &target_variant, - ) - .expect("MinimumVertexCover -> MaximumIndependentSet overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: MinimumVertexCover::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": vc.graph().num_vertices(), - "num_edges": vc.graph().num_edges(), - "edges": vc_edges, - }), - }, - target: ProblemSide { - problem: MaximumIndependentSet::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vertices": is.graph().num_vertices(), - "num_edges": is.graph().num_edges(), - "edges": is_edges, - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "minimumvertexcover_to_maximumindependentset"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_minimumvertexcover_to_minimumsetcovering.rs b/examples/reduction_minimumvertexcover_to_minimumsetcovering.rs deleted file mode 100644 index bd453ff2..00000000 --- a/examples/reduction_minimumvertexcover_to_minimumsetcovering.rs +++ /dev/null @@ -1,142 +0,0 @@ -// # Vertex Cover to Set Covering Reduction -// -// ## Mathematical Equivalence -// Universe U = {0, ..., |E|-1} (edge indices). For each vertex v, set -// S_v = edges incident to v. A vertex cover (every edge has an endpoint -// in the cover) maps to a set cover (every universe element in some set). 
-// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges), VC=6 -// - Source VC: min size 6 -// - Target MinimumSetCovering: min cover 6 -// -// ## Output -// Exports `docs/paper/examples/minimumvertexcover_to_minimumsetcovering.json` and `minimumvertexcover_to_minimumsetcovering.result.json`. -// -// See docs/paper/reductions.typ for the full reduction specification. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - println!("\n=== Vertex Cover -> Set Covering Reduction ===\n"); - - // Petersen graph: 10 vertices, 15 edges, VC=6 - let (num_vertices, edges) = petersen(); - let source = MinimumVertexCover::new( - SimpleGraph::new(num_vertices, edges.clone()), - vec![1i32; num_vertices], - ); - - println!("Source: MinimumVertexCover on Petersen graph"); - println!(" Vertices: {}", num_vertices); - println!(" Edges: {:?}", edges); - - // Reduce to MinimumSetCovering - let reduction = ReduceTo::>::reduce_to(&source); - let target = reduction.target_problem(); - - println!("\nTarget: MinimumSetCovering"); - println!(" Universe size: {}", target.universe_size()); - println!(" Sets: {} sets", target.num_sets()); - for (i, set) in target.sets().iter().enumerate() { - println!(" S_{} = {:?}", i, set); - } - - // Solve the target problem - let solver = BruteForce::new(); - let target_solutions = solver.find_all_best(target); - - println!("\nBest target solutions: {}", target_solutions.len()); - - // Extract and verify each solution - let mut solutions = Vec::new(); - for (i, target_sol) in target_solutions.iter().enumerate() { - let source_sol = reduction.extract_solution(target_sol); - let source_size = source.evaluate(&source_sol); - let target_size = target.evaluate(target_sol); - - // Both are minimization problems, infeasible configs return Invalid - println!( - " Solution {}: target={:?} 
(size={:?}), source={:?} (size={:?}, valid={})", - i, - target_sol, - target_size, - source_sol, - source_size, - source_size.is_valid() - ); - - assert!( - source_size.is_valid(), - "Extracted source solution must be valid" - ); - - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_sol.clone(), - }); - } - - // Use the first solution for verification - let target_sol = &target_solutions[0]; - let source_sol = reduction.extract_solution(target_sol); - let source_size = source.evaluate(&source_sol); - let target_size = target.evaluate(target_sol); - - assert_eq!( - source_size, - problemreductions::types::SolutionSize::Valid(6), - "VC on Petersen has optimal size 6" - ); - assert_eq!( - target_size, - problemreductions::types::SolutionSize::Valid(6), - "MinimumSetCovering should also have size 6" - ); - - // Export JSON - let source_variant = variant_to_map(MinimumVertexCover::::variant()); - let target_variant = variant_to_map(MinimumSetCovering::::variant()); - let overhead = lookup_overhead( - "MinimumVertexCover", - &source_variant, - "MinimumSetCovering", - &target_variant, - ) - .expect("MinimumVertexCover -> MinimumSetCovering overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: MinimumVertexCover::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": source.graph().num_vertices(), - "num_edges": source.graph().num_edges(), - "edges": edges, - }), - }, - target: ProblemSide { - problem: MinimumSetCovering::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_sets": target.num_sets(), - "sets": target.sets(), - "universe_size": target.universe_size(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "minimumvertexcover_to_minimumsetcovering"; - write_example(name, &data, &results); - - println!("\nDone: VC(Petersen) optimal=6 maps to MinimumSetCovering 
optimal=6"); -} - -fn main() { - run() -} diff --git a/examples/reduction_minimumvertexcover_to_qubo.rs b/examples/reduction_minimumvertexcover_to_qubo.rs deleted file mode 100644 index 54e9d40d..00000000 --- a/examples/reduction_minimumvertexcover_to_qubo.rs +++ /dev/null @@ -1,119 +0,0 @@ -// # Vertex Cover to QUBO via Reduction Path -// -// ## This Example -// - Instance: Petersen graph (10 vertices, 15 edges), VC = 6 -// - Source: MinimumVertexCover -// - Target: QUBO reached through the reduction graph -// -// ## Output -// Exports `docs/paper/examples/minimumvertexcover_to_qubo.json` and -// `minimumvertexcover_to_qubo.result.json`. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::rules::{Minimize, ReductionGraph}; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; -use problemreductions::types::ProblemSize; - -pub fn run() { - println!("=== Vertex Cover -> QUBO Reduction ===\n"); - - let (num_vertices, edges) = petersen(); - let vc = MinimumVertexCover::new( - SimpleGraph::new(num_vertices, edges.clone()), - vec![1i32; num_vertices], - ); - - let graph = ReductionGraph::new(); - let src_variant_bt = - ReductionGraph::variant_to_map(&MinimumVertexCover::::variant()); - let dst_variant_bt = ReductionGraph::variant_to_map(&QUBO::::variant()); - let path = graph - .find_cheapest_path( - "MinimumVertexCover", - &src_variant_bt, - "QUBO", - &dst_variant_bt, - &ProblemSize::new(vec![ - ("num_vertices", vc.graph().num_vertices()), - ("num_edges", vc.graph().num_edges()), - ]), - &Minimize("num_vars"), - ) - .expect("MinimumVertexCover -> QUBO path not found"); - let reduction = graph - .reduce_along_path(&path, &vc as &dyn std::any::Any) - .expect("MinimumVertexCover -> QUBO path reduction failed"); - let qubo: &QUBO = reduction.target_problem(); - - println!("Source: MinimumVertexCover on Petersen graph (10 vertices, 15 edges)"); - println!("Path: {}", 
path); - println!("Target: QUBO with {} variables", qubo.num_variables()); - println!("Q matrix:"); - for row in qubo.matrix() { - println!(" {:?}", row); - } - - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - - println!("\nOptimal solutions:"); - let mut solutions = Vec::new(); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - let selected: Vec = extracted - .iter() - .enumerate() - .filter(|(_, &x)| x == 1) - .map(|(i, _)| i) - .collect(); - let size = selected.len(); - println!(" Cover vertices: {:?} ({} vertices)", selected, size); - - let sol_size = vc.evaluate(&extracted); - assert!( - sol_size.is_valid(), - "Solution must be valid in source problem" - ); - - solutions.push(SolutionPair { - source_config: extracted, - target_config: sol.clone(), - }); - } - - println!("\nVerification passed: all solutions are valid"); - - let source_variant = variant_to_map(MinimumVertexCover::::variant()); - let target_variant = variant_to_map(QUBO::::variant()); - let overhead = graph.compose_path_overhead(&path); - - let data = ReductionData { - source: ProblemSide { - problem: MinimumVertexCover::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": vc.graph().num_vertices(), - "num_edges": vc.graph().num_edges(), - "edges": edges, - }), - }, - target: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": qubo.matrix(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - write_example("minimumvertexcover_to_qubo", &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_qubo_to_ilp.rs b/examples/reduction_qubo_to_ilp.rs deleted file mode 100644 index cbd976bc..00000000 --- a/examples/reduction_qubo_to_ilp.rs +++ /dev/null @@ -1,121 +0,0 @@ -// # QUBO to ILP Reduction (McCormick 
Linearization) -// -// ## Mathematical Relationship -// A QUBO problem: -// -// minimize x^T Q x, x ∈ {0,1}^n -// -// is linearized by replacing each product x_i·x_j (i < j) with an -// auxiliary binary variable y_ij and three McCormick constraints: -// y_ij ≤ x_i, y_ij ≤ x_j, y_ij ≥ x_i + x_j - 1 -// -// Diagonal terms Q_ii·x_i² = Q_ii·x_i are directly linear. -// -// ## This Example -// - Instance: 4-variable QUBO with a few quadratic terms -// Q = diag(-2, -3, -1, -4) with Q_{01}=1, Q_{12}=2, Q_{23}=-1 -// - Expected: optimal binary assignment minimizing x^T Q x -// -// ## Outputs -// - `docs/paper/examples/qubo_to_ilp.json` — reduction structure -// - `docs/paper/examples/qubo_to_ilp.result.json` — solutions -// -// ## Usage -// ```bash -// cargo run --example reduction_qubo_to_ilp --features ilp-solver -// ``` - -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; - -pub fn run() { - println!("=== QUBO -> ILP Reduction (McCormick) ===\n"); - - // 4-variable QUBO: diagonal (linear) + off-diagonal (quadratic) terms - let mut matrix = vec![vec![0.0; 4]; 4]; - matrix[0][0] = -2.0; - matrix[1][1] = -3.0; - matrix[2][2] = -1.0; - matrix[3][3] = -4.0; - matrix[0][1] = 1.0; // x0·x1 coupling - matrix[1][2] = 2.0; // x1·x2 coupling - matrix[2][3] = -1.0; // x2·x3 coupling - let qubo = QUBO::from_matrix(matrix); - - // Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&qubo); - let ilp = reduction.target_problem(); - - println!("Source: QUBO with {} variables", qubo.num_variables()); - println!(" Q diagonal: [-2, -3, -1, -4]"); - println!(" Q off-diagonal: (0,1)=1, (1,2)=2, (2,3)=-1"); - println!( - "Target: ILP with {} variables ({} original + {} auxiliary)", - ilp.num_variables(), - qubo.num_variables(), - ilp.num_variables() - qubo.num_variables() - ); - println!( - " {} constraints (3 McCormick per auxiliary variable)", - ilp.constraints.len() - ); - - // Solve ILP with brute force - let solver = 
BruteForce::new(); - let ilp_solutions = solver.find_all_best(ilp); - - println!("\nOptimal solutions:"); - let mut solutions = Vec::new(); - for sol in &ilp_solutions { - let extracted = reduction.extract_solution(sol); - let qubo_val = qubo.evaluate(&extracted); - println!(" x = {:?}, QUBO value = {}", extracted, qubo_val); - - // Closed-loop verification - assert!( - qubo_val < f64::MAX, - "Solution must be valid in source problem" - ); - - solutions.push(SolutionPair { - source_config: extracted, - target_config: sol.clone(), - }); - } - - println!("\nVerification passed: all solutions are feasible and optimal"); - - // Export JSON - let source_variant = variant_to_map(QUBO::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead("QUBO", &source_variant, "ILP", &target_variant) - .expect("QUBO -> ILP overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": qubo.matrix(), - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_variables(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "qubo_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_qubo_to_spinglass.rs b/examples/reduction_qubo_to_spinglass.rs deleted file mode 100644 index 99af9517..00000000 --- a/examples/reduction_qubo_to_spinglass.rs +++ /dev/null @@ -1,101 +0,0 @@ -// # QUBO to Spin Glass Reduction -// -// ## Mathematical Equivalence -// The reverse substitution x_i = (s_i + 1)/2 transforms binary QUBO variables -// back to Ising spins. The QUBO matrix Q maps to couplings J and fields h via -// Q_{ij} = -4J_{ij} for off-diagonal and Q_{ii} = 2*sum_j J_{ij} - 2h_i for diagonal. 
-// -// ## This Example -// - Instance: 10-variable QUBO with Petersen connectivity -// - Source QUBO: 10 binary variables -// - Target SpinGlass: 10 spins -// -// ## Output -// Exports `docs/paper/examples/qubo_to_spinglass.json` and -// `docs/paper/examples/qubo_to_spinglass.result.json` for use in paper code blocks. -// -// See docs/paper/reductions.typ for the full reduction specification. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::SimpleGraph; - -pub fn run() { - let (n, edges) = petersen(); - let mut matrix = vec![vec![0.0; n]; n]; - // Diagonal: linear terms - for (i, row) in matrix.iter_mut().enumerate() { - row[i] = -1.0 + 0.2 * i as f64; - } - // Off-diagonal: quadratic terms on Petersen edges - for (idx, &(u, v)) in edges.iter().enumerate() { - let (i, j) = if u < v { (u, v) } else { (v, u) }; - matrix[i][j] = if idx % 2 == 0 { 2.0 } else { -1.5 }; - } - let qubo = QUBO::from_matrix(matrix.clone()); - - let reduction = ReduceTo::>::reduce_to(&qubo); - let sg = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!("Source: QUBO with {} variables", qubo.num_variables()); - println!("Target: SpinGlass with {} variables", sg.num_variables()); - - let solver = BruteForce::new(); - let sg_solutions = solver.find_all_best(sg); - println!("\n=== Solution ==="); - println!("Target solutions found: {}", sg_solutions.len()); - - let qubo_solution = reduction.extract_solution(&sg_solutions[0]); - println!("Source QUBO solution: {:?}", qubo_solution); - - let size = qubo.evaluate(&qubo_solution); - println!("Solution energy: {}", size); - // QUBO is a minimization problem, infeasible configs return f64::MAX - assert!(size < f64::MAX); - println!("\nReduction verified successfully"); - - // Collect all solutions - let mut solutions = Vec::new(); - for target_sol in &sg_solutions { - let source_sol = 
reduction.extract_solution(target_sol); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_sol.clone(), - }); - } - - // Export JSON - let source_variant = variant_to_map(QUBO::::variant()); - let target_variant = variant_to_map(SpinGlass::::variant()); - let overhead = lookup_overhead("QUBO", &source_variant, "SpinGlass", &target_variant) - .expect("QUBO -> SpinGlass overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": matrix, - }), - }, - target: ProblemSide { - problem: SpinGlass::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_spins": sg.num_variables(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "qubo_to_spinglass"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_satisfiability_to_circuitsat.rs b/examples/reduction_satisfiability_to_circuitsat.rs deleted file mode 100644 index 823f5b57..00000000 --- a/examples/reduction_satisfiability_to_circuitsat.rs +++ /dev/null @@ -1,157 +0,0 @@ -// # SAT to CircuitSAT Reduction -// -// ## Mathematical Equivalence -// A CNF formula is converted into a boolean circuit by creating an OR gate for -// each clause and a final AND gate that requires all clause outputs to be true. -// The circuit is satisfiable iff the original CNF formula is satisfiable. -// -// ## This Example -// - Instance: 3-variable, 3-clause SAT formula -// - (x1 v ~x2 v x3) & (~x1 v x2) & (x2 v x3) -// - Source SAT: satisfiable -// - Target: CircuitSAT with OR gates per clause + AND gate -// -// ## Output -// Exports `docs/paper/examples/satisfiability_to_circuitsat.json` and `satisfiability_to_circuitsat.result.json`. 
-
-use problemreductions::export::*;
-use problemreductions::prelude::*;
-
-pub fn run() {
-    // 1. Create SAT instance: 3 variables, 3 clauses
-    //    (x1 v ~x2 v x3) & (~x1 v x2) & (x2 v x3)
-    let sat = Satisfiability::new(
-        3,
-        vec![
-            CNFClause::new(vec![1, -2, 3]), // x1 v ~x2 v x3
-            CNFClause::new(vec![-1, 2]),    // ~x1 v x2
-            CNFClause::new(vec![2, 3]),     // x2 v x3
-        ],
-    );
-
-    println!("=== SAT to CircuitSAT Reduction ===\n");
-    println!("Source SAT formula: 3-variable, 3-clause");
-    println!("  (x1 v ~x2 v x3) & (~x1 v x2) & (x2 v x3)");
-    println!(
-        "  {} variables, {} clauses",
-        sat.num_vars(),
-        sat.num_clauses()
-    );
-
-    // 2. Reduce to CircuitSAT
-    let reduction = ReduceTo::<CircuitSAT>::reduce_to(&sat);
-    let circuit_sat = reduction.target_problem();
-
-    println!("\n=== Problem Transformation ===");
-    println!(
-        "Source: Satisfiability with {} variables, {} clauses",
-        sat.num_vars(),
-        sat.num_clauses()
-    );
-    println!(
-        "Target: CircuitSAT with {} variables, {} assignments (gates)",
-        circuit_sat.num_variables(),
-        circuit_sat.circuit().num_assignments()
-    );
-    println!("  Variables: {:?}", circuit_sat.variable_names());
-    println!("  Each clause becomes an OR gate; a final AND gate combines them.");
-
-    // 3. Solve the target CircuitSAT problem (satisfaction problem)
-    let solver = BruteForce::new();
-    let circuit_solutions = solver.find_all_satisfying(circuit_sat);
-    println!("\n=== Solution ===");
-    println!(
-        "CircuitSAT satisfying assignments found: {}",
-        circuit_solutions.len()
-    );
-
-    // 4. Extract and verify source solutions
-    let sat_solution = reduction.extract_solution(&circuit_solutions[0]);
-    println!("First extracted SAT solution: {:?}", sat_solution);
-    println!(
-        "  Interpretation: x1={}, x2={}, x3={}",
-        sat_solution[0], sat_solution[1], sat_solution[2]
-    );
-
-    let satisfied = sat.evaluate(&sat_solution);
-    println!("SAT solution valid: {}", satisfied);
-    assert!(satisfied, "Extracted SAT solution must be valid");
-
-    // Verify all CircuitSAT solutions map to valid SAT assignments
-    let mut valid_count = 0;
-    let mut solutions = Vec::new();
-    for cs_sol in &circuit_solutions {
-        let sat_sol = reduction.extract_solution(cs_sol);
-        let s = sat.evaluate(&sat_sol);
-        if s {
-            valid_count += 1;
-        }
-        solutions.push(SolutionPair {
-            source_config: sat_sol,
-            target_config: cs_sol.to_vec(),
-        });
-    }
-    println!(
-        "All {} CircuitSAT solutions map to valid SAT assignments: {}",
-        circuit_solutions.len(),
-        valid_count == circuit_solutions.len()
-    );
-    assert_eq!(valid_count, circuit_solutions.len());
-
-    // Also verify that the extracted solutions cover all SAT solutions
-    let sat_all = solver.find_all_satisfying(&sat);
-    let extracted_set: std::collections::HashSet<Vec<usize>> = circuit_solutions
-        .iter()
-        .map(|cs| reduction.extract_solution(cs))
-        .collect();
-    let sat_set: std::collections::HashSet<Vec<usize>> = sat_all.into_iter().collect();
-    assert_eq!(
-        extracted_set, sat_set,
-        "Extracted solutions must match all SAT solutions"
-    );
-    println!(
-        "Unique SAT solutions extracted: {} (matches direct SAT solve)",
-        extracted_set.len()
-    );
-
-    println!("\nReduction verified successfully");
-
-    // 5. Export JSON
-    let source_variant = variant_to_map(Satisfiability::variant());
-    let target_variant = variant_to_map(CircuitSAT::variant());
-    let overhead = lookup_overhead(
-        "Satisfiability",
-        &source_variant,
-        "CircuitSAT",
-        &target_variant,
-    )
-    .expect("Satisfiability -> CircuitSAT overhead not found");
-
-    let data = ReductionData {
-        source: ProblemSide {
-            problem: Satisfiability::NAME.to_string(),
-            variant: source_variant,
-            instance: serde_json::json!({
-                "num_vars": sat.num_vars(),
-                "num_clauses": sat.num_clauses(),
-            }),
-        },
-        target: ProblemSide {
-            problem: CircuitSAT::NAME.to_string(),
-            variant: target_variant,
-            instance: serde_json::json!({
-                "num_variables": circuit_sat.num_variables(),
-                "num_gates": circuit_sat.circuit().num_assignments(),
-            }),
-        },
-        overhead: overhead_to_json(&overhead),
-    };
-
-    let results = ResultData { solutions };
-    let name = "satisfiability_to_circuitsat";
-    write_example(name, &data, &results);
-}
-
-fn main() {
-    run()
-}
diff --git a/examples/reduction_satisfiability_to_kcoloring.rs b/examples/reduction_satisfiability_to_kcoloring.rs
deleted file mode 100644
index 27ecc4dd..00000000
--- a/examples/reduction_satisfiability_to_kcoloring.rs
+++ /dev/null
@@ -1,157 +0,0 @@
-// # SAT to 3-Coloring Reduction (Garey & Johnson 1979)
-//
-// ## Mathematical Equivalence
-// Creates a graph with a base triangle (TRUE, FALSE, AUX), variable gadgets
-// (pos_i, neg_i connected to AUX), and clause gadgets using OR-gadgets.
-// phi is satisfiable iff the constructed graph is 3-colorable.
-// -// ## This Example -// - Instance: 5-variable, 3-clause SAT formula with unit clauses -// (OR-gadgets add 5 vertices per extra literal per clause, making BruteForce -// infeasible for multi-literal clauses; unit clauses keep it at 13 vertices) -// - Source SAT: satisfiable (x1=1, x3=0, x5=1, x2/x4 free) -// - Target: 3-Coloring with 13 vertices -// -// ## Output -// Exports `docs/paper/examples/satisfiability_to_kcoloring.json` and `satisfiability_to_kcoloring.result.json`. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::{Graph, SimpleGraph}; -use problemreductions::variant::K3; - -pub fn run() { - // 1. Create SAT instance: 5-variable, 3-clause formula with unit clauses - // The SAT→KColoring reduction creates OR-gadgets that add 5 vertices per literal - // beyond the first in each clause. BruteForce on 3-coloring is O(3^n), so we use - // unit clauses (1 literal each) to keep vertex count at 2*5+3 = 13 (3^13 ~ 1.6M). - let sat = Satisfiability::new( - 5, - vec![ - CNFClause::new(vec![1]), // x1 (unit clause) - CNFClause::new(vec![-3]), // ~x3 (unit clause) - CNFClause::new(vec![5]), // x5 (unit clause) - ], - ); - - println!("=== SAT to 3-Coloring Reduction (Garey & Johnson 1979) ===\n"); - println!("Source SAT formula: 5-variable, 3-clause SAT (unit clauses to fit BruteForce)"); - println!(" (x1) ^ (~x3) ^ (x5)"); - println!( - " {} variables, {} clauses", - sat.num_vars(), - sat.num_clauses() - ); - println!(" (Unit clauses avoid OR-gadgets, keeping vertex count manageable for BruteForce)"); - - // 2. 
Reduce to 3-Coloring - // SAT reduces to KColoring - let reduction = ReduceTo::>::reduce_to(&sat); - let coloring = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: Satisfiability with {} variables", - sat.num_variables() - ); - println!( - "Target: 3-Coloring with {} vertices, {} edges", - coloring.graph().num_vertices(), - coloring.graph().num_edges() - ); - println!(" Base triangle: TRUE(0), FALSE(1), AUX(2)"); - println!(" Variable gadgets: pos_i and neg_i vertices connected to AUX"); - println!(" Clause gadgets: OR-gadgets forcing output to TRUE color"); - - // 3. Solve the target 3-Coloring problem (satisfaction, not optimization) - let solver = BruteForce::new(); - // Find all satisfying 3-colorings by iterating through configs - let dims = coloring.dims(); - let all_configs: Vec> = problemreductions::config::DimsIterator::new(dims).collect(); - let coloring_solutions: Vec<&[usize]> = all_configs - .iter() - .filter(|config| coloring.evaluate(config)) - .map(|v| v.as_slice()) - .collect(); - let _ = solver; // Silence unused warning - println!("\n=== Solution ==="); - println!( - "Target 3-Coloring solutions found: {}", - coloring_solutions.len() - ); - - // 4. 
Extract and verify source solutions - let sat_solution = reduction.extract_solution(coloring_solutions[0]); - println!("Extracted SAT solution: {:?}", sat_solution); - println!( - " Interpretation: x1={}, x2={}, x3={}, x4={}, x5={}", - sat_solution[0], sat_solution[1], sat_solution[2], sat_solution[3], sat_solution[4] - ); - - let satisfied = sat.evaluate(&sat_solution); - println!("SAT solution valid: {}", satisfied); - assert!(satisfied, "Extracted SAT solution must be valid"); - - // Verify all coloring solutions map to valid SAT assignments - let mut valid_count = 0; - let mut solutions = Vec::new(); - for col_sol in &coloring_solutions { - let sat_sol = reduction.extract_solution(col_sol); - let s = sat.evaluate(&sat_sol); - if s { - valid_count += 1; - } - solutions.push(SolutionPair { - source_config: sat_sol, - target_config: col_sol.to_vec(), - }); - } - println!( - "All {} coloring solutions map to valid SAT assignments: {}", - coloring_solutions.len(), - valid_count == coloring_solutions.len() - ); - assert_eq!(valid_count, coloring_solutions.len()); - - println!("\nReduction verified successfully"); - - // 5. 
Export JSON - let source_variant = variant_to_map(Satisfiability::variant()); - let target_variant = variant_to_map(KColoring::::variant()); - let overhead = lookup_overhead( - "Satisfiability", - &source_variant, - "KColoring", - &target_variant, - ) - .expect("Satisfiability -> KColoring overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: Satisfiability::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": sat.num_vars(), - "num_clauses": sat.num_clauses(), - }), - }, - target: ProblemSide { - problem: KColoring::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vertices": coloring.graph().num_vertices(), - "num_edges": coloring.graph().num_edges(), - "num_colors": 3, - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "satisfiability_to_kcoloring"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_satisfiability_to_ksatisfiability.rs b/examples/reduction_satisfiability_to_ksatisfiability.rs deleted file mode 100644 index bc2d8d97..00000000 --- a/examples/reduction_satisfiability_to_ksatisfiability.rs +++ /dev/null @@ -1,165 +0,0 @@ -// # SAT to k-SAT Reduction (Cook-Levin) -// -// ## Mathematical Equivalence -// Small clauses (< k literals) are padded with auxiliary variables and their -// negations. Large clauses (> k literals) are split using auxiliary variables -// in a chain that preserves satisfiability. 
-// -// ## This Example -// - Instance: 5-variable, 6-clause SAT formula with mixed clause sizes (1, 2, 3, 3, 4, 5 literals) -// - 1-literal clause: padded to 3 -// - 2-literal clause: padded to 3 -// - 3-literal clauses: no change -// - 4-literal clause: split into two 3-literal clauses -// - 5-literal clause: split into three 3-literal clauses -// - Source SAT: satisfiable -// - Target: 3-SAT with 3 literals per clause -// -// ## Output -// Exports `docs/paper/examples/satisfiability_to_ksatisfiability.json` and `satisfiability_to_ksatisfiability.result.json`. - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::variant::K3; - -pub fn run() { - // 1. Create SAT instance with varied clause sizes to demonstrate padding and splitting: - // - 1 literal: padded to 3 - // - 2 literals: padded to 3 - // - 3 literals: no change (already 3-SAT) - // - 4 literals: split into two 3-literal clauses - // - 5 literals: split into three 3-literal clauses - let sat = Satisfiability::new( - 5, - vec![ - CNFClause::new(vec![1]), // 1 literal - will be padded - CNFClause::new(vec![2, -3]), // 2 literals - will be padded - CNFClause::new(vec![-1, 3, 4]), // 3 literals - no change - CNFClause::new(vec![2, -4, 5]), // 3 literals - no change - CNFClause::new(vec![1, -2, 3, -5]), // 4 literals - will be split - CNFClause::new(vec![-1, 2, -3, 4, 5]), // 5 literals - will be split - ], - ); - - println!("=== SAT to 3-SAT Reduction ===\n"); - println!("Source SAT formula: 5-variable, 6-clause SAT with mixed clause sizes"); - println!(" (x1) ^ (x2 v ~x3) ^ (~x1 v x3 v x4) ^ (x2 v ~x4 v x5) ^"); - println!(" (x1 v ~x2 v x3 v ~x5) ^ (~x1 v x2 v ~x3 v x4 v x5)"); - println!( - " {} variables, {} clauses", - sat.num_vars(), - sat.num_clauses() - ); - println!(" Clause sizes: 1, 2, 3, 3, 4, 5 (demonstrates padding and splitting)"); - - // 2. 
Reduce to 3-SAT (K=3) - let reduction = ReduceTo::>::reduce_to(&sat); - let ksat = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: Satisfiability with {} variables, {} clauses", - sat.num_vars(), - sat.num_clauses() - ); - println!( - "Target: 3-SAT with {} variables, {} clauses", - ksat.num_vars(), - ksat.num_clauses() - ); - println!( - " Additional variables: {} (ancilla/auxiliary)", - ksat.num_vars() - sat.num_vars() - ); - println!(" 1-literal (x1) padded: (x1 v a v b) ^ (x1 v a v ~b) ^ ... "); - println!(" 2-literal (x2 v ~x3) padded similarly with auxiliary variables"); - println!(" 4-literal (x1 v ~x2 v x3 v ~x5) split: two 3-literal clauses"); - println!(" 5-literal (~x1 v x2 v ~x3 v x4 v x5) split: three 3-literal clauses"); - - // Print target clauses - println!("\n Target 3-SAT clauses:"); - for (i, clause) in ksat.clauses().iter().enumerate() { - println!(" Clause {}: {:?}", i, clause.literals); - } - - // 3. Solve the target 3-SAT problem (satisfaction, not optimization) - let solver = BruteForce::new(); - let ksat_solutions = solver.find_all_satisfying(ksat); - println!("\n=== Solution ==="); - println!("Target 3-SAT solutions found: {}", ksat_solutions.len()); - - // 4. 
Extract and verify source solutions - let sat_solution = reduction.extract_solution(&ksat_solutions[0]); - println!("Extracted SAT solution: {:?}", sat_solution); - println!( - " Interpretation: x1={}, x2={}, x3={}, x4={}, x5={}", - sat_solution[0], sat_solution[1], sat_solution[2], sat_solution[3], sat_solution[4] - ); - - let satisfied = sat.evaluate(&sat_solution); - println!("SAT solution valid: {}", satisfied); - assert!(satisfied, "Extracted SAT solution must be valid"); - - // Verify all 3-SAT solutions map to valid SAT assignments - let mut valid_count = 0; - let mut solutions = Vec::new(); - for ks_sol in &ksat_solutions { - let sat_sol = reduction.extract_solution(ks_sol); - let s = sat.evaluate(&sat_sol); - if s { - valid_count += 1; - } - solutions.push(SolutionPair { - source_config: sat_sol, - target_config: ks_sol.to_vec(), - }); - } - println!( - "All {} 3-SAT solutions map to valid SAT assignments: {}", - ksat_solutions.len(), - valid_count == ksat_solutions.len() - ); - assert_eq!(valid_count, ksat_solutions.len()); - - println!("\nReduction verified successfully"); - - // 5. 
Export JSON - let source_variant = variant_to_map(Satisfiability::variant()); - let target_variant = variant_to_map(KSatisfiability::::variant()); - let overhead = lookup_overhead( - "Satisfiability", - &source_variant, - "KSatisfiability", - &target_variant, - ) - .expect("Satisfiability -> KSatisfiability overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: Satisfiability::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": sat.num_vars(), - "num_clauses": sat.num_clauses(), - }), - }, - target: ProblemSide { - problem: KSatisfiability::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ksat.num_vars(), - "num_clauses": ksat.num_clauses(), - "k": 3, - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "satisfiability_to_ksatisfiability"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_satisfiability_to_maximumindependentset.rs b/examples/reduction_satisfiability_to_maximumindependentset.rs deleted file mode 100644 index 40027025..00000000 --- a/examples/reduction_satisfiability_to_maximumindependentset.rs +++ /dev/null @@ -1,144 +0,0 @@ -// # SAT to Independent Set Reduction (Karp 1972) -// -// ## Mathematical Equivalence -// Given a CNF formula phi with m clauses, construct a graph G where each literal -// in each clause becomes a vertex. Intra-clause edges form cliques, cross-clause -// edges connect complementary literals. phi is satisfiable iff G has IS of size m. -// -// ## This Example -// - Instance: 5-variable, 7-clause 3-SAT formula -// - Source SAT: satisfiable -// - Target IS: size 7 (one vertex per clause), 21 vertices total -// -// ## Output -// Exports `docs/paper/examples/satisfiability_to_maximumindependentset.json` and `satisfiability_to_maximumindependentset.result.json`. 
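The clause-literal graph described in this header is simple enough to reproduce without the crate. A minimal standalone sketch (the helper name `sat_to_is` and the DIMACS-style signed-literal encoding are illustrative, not the crate's API):

```rust
// Karp's SAT -> Independent Set construction, sketched independently of the
// crate. Literals are encoded DIMACS-style: +i for x_i, -i for ~x_i.
fn sat_to_is(clauses: &[Vec<i32>]) -> (usize, Vec<(usize, usize)>) {
    // One vertex per literal occurrence; remember which clause and literal.
    let lits: Vec<(usize, i32)> = clauses
        .iter()
        .enumerate()
        .flat_map(|(c, cl)| cl.iter().map(move |&l| (c, l)))
        .collect();
    let mut edges = Vec::new();
    for i in 0..lits.len() {
        for j in (i + 1)..lits.len() {
            let ((ci, li), (cj, lj)) = (lits[i], lits[j]);
            // Clique inside each clause (pick at most one literal per clause),
            // plus a conflict edge between complementary literals.
            if ci == cj || li == -lj {
                edges.push((i, j));
            }
        }
    }
    (lits.len(), edges)
}
```

On the 7-clause instance above this yields the 21 vertices mentioned in the header: one triangle per clause plus one edge per complementary literal pair across clauses.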
- -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // 1. Create SAT instance: 5-variable, 7-clause 3-SAT formula - let sat = Satisfiability::new( - 5, - vec![ - CNFClause::new(vec![1, 2, -3]), // x1 v x2 v ~x3 - CNFClause::new(vec![-1, 3, 4]), // ~x1 v x3 v x4 - CNFClause::new(vec![2, -4, 5]), // x2 v ~x4 v x5 - CNFClause::new(vec![-2, 3, -5]), // ~x2 v x3 v ~x5 - CNFClause::new(vec![1, -3, 5]), // x1 v ~x3 v x5 - CNFClause::new(vec![-1, -2, 4]), // ~x1 v ~x2 v x4 - CNFClause::new(vec![3, -4, -5]), // x3 v ~x4 v ~x5 - ], - ); - - println!("=== SAT to Independent Set Reduction (Karp 1972) ===\n"); - println!("Source SAT formula: 5-variable, 7-clause 3-SAT"); - println!(" (x1 v x2 v ~x3) ^ (~x1 v x3 v x4) ^ (x2 v ~x4 v x5) ^"); - println!(" (~x2 v x3 v ~x5) ^ (x1 v ~x3 v x5) ^ (~x1 v ~x2 v x4) ^ (x3 v ~x4 v ~x5)"); - println!( - " {} variables, {} clauses", - sat.num_vars(), - sat.num_clauses() - ); - - // 2. Reduce to Independent Set - let reduction = ReduceTo::>::reduce_to(&sat); - let is = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: Satisfiability with {} variables", - sat.num_variables() - ); - println!( - "Target: MaximumIndependentSet with {} vertices, {} edges", - is.graph().num_vertices(), - is.graph().num_edges() - ); - println!(" Each literal occurrence becomes a vertex."); - println!(" Edges connect literals within the same clause (clique)"); - println!(" and complementary literals across clauses."); - - // 3. Solve the target IS problem - let solver = BruteForce::new(); - let is_solutions = solver.find_all_best(is); - println!("\n=== Solution ==="); - println!("Target IS solutions found: {}", is_solutions.len()); - - // 4. 
Extract and verify source solutions - let sat_solution = reduction.extract_solution(&is_solutions[0]); - println!("Extracted SAT solution: {:?}", sat_solution); - println!( - " Interpretation: x1={}, x2={}, x3={}, x4={}, x5={}", - sat_solution[0], sat_solution[1], sat_solution[2], sat_solution[3], sat_solution[4] - ); - - // Satisfiability is a satisfaction problem (bool), so evaluate returns bool directly - let size = sat.evaluate(&sat_solution); - println!("SAT solution valid: {}", size); - assert!(size, "Extracted SAT solution must be valid"); - - // Verify all IS solutions map to valid SAT assignments - let mut valid_count = 0; - let mut solutions = Vec::new(); - for is_sol in &is_solutions { - let sat_sol = reduction.extract_solution(is_sol); - // Satisfiability is a satisfaction problem (bool), so evaluate returns bool directly - let s = sat.evaluate(&sat_sol); - if s { - valid_count += 1; - } - solutions.push(SolutionPair { - source_config: sat_sol, - target_config: is_sol.clone(), - }); - } - println!( - "All {} IS solutions map to valid SAT assignments: {}", - is_solutions.len(), - valid_count == is_solutions.len() - ); - assert_eq!(valid_count, is_solutions.len()); - - println!("\nReduction verified successfully"); - - // 5. 
Export JSON - let source_variant = variant_to_map(Satisfiability::variant()); - let target_variant = variant_to_map(MaximumIndependentSet::::variant()); - let overhead = lookup_overhead( - "Satisfiability", - &source_variant, - "MaximumIndependentSet", - &target_variant, - ) - .expect("Satisfiability -> MaximumIndependentSet overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: Satisfiability::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": sat.num_vars(), - "num_clauses": sat.num_clauses(), - }), - }, - target: ProblemSide { - problem: MaximumIndependentSet::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vertices": is.graph().num_vertices(), - "num_edges": is.graph().num_edges(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "satisfiability_to_maximumindependentset"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_satisfiability_to_minimumdominatingset.rs b/examples/reduction_satisfiability_to_minimumdominatingset.rs deleted file mode 100644 index 78e7dafc..00000000 --- a/examples/reduction_satisfiability_to_minimumdominatingset.rs +++ /dev/null @@ -1,158 +0,0 @@ -// # SAT to Dominating Set Reduction (Garey & Johnson 1979) -// -// ## Mathematical Equivalence -// For each variable x_i, create a triangle (pos_i, neg_i, dummy_i). For each -// clause c_j, create a vertex connected to the literals it contains. phi is -// satisfiable iff the graph has a dominating set of size n. -// -// ## This Example -// - Instance: 5-variable, 7-clause 3-SAT formula -// - Source SAT: satisfiable -// - Target: Dominating set with 3*5 + 7 = 22 vertices -// -// ## Output -// Exports `docs/paper/examples/satisfiability_to_minimumdominatingset.json` and `satisfiability_to_minimumdominatingset.result.json`. 
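The gadget layout in this header (one triangle per variable, one vertex per clause) can be sketched outside the crate; the helper name `sat_to_ds` and the fixed vertex numbering are assumptions for illustration only:

```rust
// Garey-Johnson SAT -> Dominating Set gadget layout, sketched independently of
// the crate. Vertex layout mirrors the example: pos_i = 3i, neg_i = 3i + 1,
// dummy_i = 3i + 2, then one vertex per clause after all variable gadgets.
fn sat_to_ds(num_vars: usize, clauses: &[Vec<i32>]) -> (usize, Vec<(usize, usize)>) {
    let mut edges = Vec::new();
    for i in 0..num_vars {
        let (p, n, d) = (3 * i, 3 * i + 1, 3 * i + 2);
        edges.extend([(p, n), (p, d), (n, d)]); // variable triangle
    }
    for (j, clause) in clauses.iter().enumerate() {
        let cv = 3 * num_vars + j; // clause vertex
        for &lit in clause {
            let var = (lit.unsigned_abs() as usize) - 1;
            let lv = if lit > 0 { 3 * var } else { 3 * var + 1 };
            edges.push((cv, lv)); // clause vertex touches its literal vertices
        }
    }
    (3 * num_vars + clauses.len(), edges)
}
```

For n = 5 variables and m = 7 clauses this gives the 3n + m = 22 vertices stated above.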
- -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // 1. Create SAT instance: 5-variable, 7-clause 3-SAT formula - let sat = Satisfiability::new( - 5, - vec![ - CNFClause::new(vec![1, 2, -3]), // x1 v x2 v ~x3 - CNFClause::new(vec![-1, 3, 4]), // ~x1 v x3 v x4 - CNFClause::new(vec![2, -4, 5]), // x2 v ~x4 v x5 - CNFClause::new(vec![-2, 3, -5]), // ~x2 v x3 v ~x5 - CNFClause::new(vec![1, -3, 5]), // x1 v ~x3 v x5 - CNFClause::new(vec![-1, -2, 4]), // ~x1 v ~x2 v x4 - CNFClause::new(vec![3, -4, -5]), // x3 v ~x4 v ~x5 - ], - ); - - println!("=== SAT to Dominating Set Reduction (Garey & Johnson 1979) ===\n"); - println!("Source SAT formula: 5-variable, 7-clause 3-SAT"); - println!(" (x1 v x2 v ~x3) ^ (~x1 v x3 v x4) ^ (x2 v ~x4 v x5) ^"); - println!(" (~x2 v x3 v ~x5) ^ (x1 v ~x3 v x5) ^ (~x1 v ~x2 v x4) ^ (x3 v ~x4 v ~x5)"); - println!( - " {} variables, {} clauses", - sat.num_vars(), - sat.num_clauses() - ); - - // 2. Reduce to Dominating Set - let reduction = ReduceTo::>::reduce_to(&sat); - let ds = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!( - "Source: Satisfiability with {} variables", - sat.num_variables() - ); - println!( - "Target: MinimumDominatingSet with {} vertices, {} edges", - ds.graph().num_vertices(), - ds.graph().num_edges() - ); - println!(" Variable gadgets: 3 vertices per variable (pos, neg, dummy) forming triangles"); - println!(" Clause vertices: 1 per clause, connected to relevant literal vertices"); - println!(" Layout: vertices 0-14 are variable gadgets (5 triangles), vertices 15-21 are clause vertices"); - - // 3. Solve the target DS problem - let solver = BruteForce::new(); - let ds_solutions = solver.find_all_best(ds); - println!("\n=== Solution ==="); - println!("Target DS solutions found: {}", ds_solutions.len()); - - // 4. 
Extract and verify source solutions - let sat_solution = reduction.extract_solution(&ds_solutions[0]); - println!("Extracted SAT solution: {:?}", sat_solution); - println!( - " Interpretation: x1={}, x2={}, x3={}, x4={}, x5={}", - sat_solution[0], sat_solution[1], sat_solution[2], sat_solution[3], sat_solution[4] - ); - - // Satisfiability is a satisfaction problem (bool), so evaluate returns bool directly - let size = sat.evaluate(&sat_solution); - println!("SAT solution valid: {}", size); - assert!(size, "Extracted SAT solution must be valid"); - - // Verify all DS solutions map to valid SAT assignments - let mut valid_count = 0; - for ds_sol in &ds_solutions { - let sat_sol = reduction.extract_solution(ds_sol); - // Satisfiability is a satisfaction problem (bool), so evaluate returns bool directly - let s = sat.evaluate(&sat_sol); - if s { - valid_count += 1; - } - } - println!( - "{}/{} DS solutions map to valid SAT assignments", - valid_count, - ds_solutions.len() - ); - // Note: Not all optimal DS solutions necessarily map back to valid SAT solutions - // because some dominating sets may use dummy vertices. The important thing is that - // at least one does, verifying the reduction's correctness. - assert!( - valid_count > 0, - "At least one DS solution must map to a valid SAT assignment" - ); - - println!("\nReduction verified successfully"); - - // 5. Collect all valid solutions - let mut solutions = Vec::new(); - for ds_sol in &ds_solutions { - let sat_sol = reduction.extract_solution(ds_sol); - // Satisfiability is a satisfaction problem (bool), so evaluate returns bool directly - if sat.evaluate(&sat_sol) { - solutions.push(SolutionPair { - source_config: sat_sol, - target_config: ds_sol.clone(), - }); - } - } - - // 6. 
Export JSON - let source_variant = variant_to_map(Satisfiability::variant()); - let target_variant = variant_to_map(MinimumDominatingSet::::variant()); - let overhead = lookup_overhead( - "Satisfiability", - &source_variant, - "MinimumDominatingSet", - &target_variant, - ) - .expect("Satisfiability -> MinimumDominatingSet overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: Satisfiability::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vars": sat.num_vars(), - "num_clauses": sat.num_clauses(), - }), - }, - target: ProblemSide { - problem: MinimumDominatingSet::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vertices": ds.graph().num_vertices(), - "num_edges": ds.graph().num_edges(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "satisfiability_to_minimumdominatingset"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_spinglass_to_maxcut.rs b/examples/reduction_spinglass_to_maxcut.rs deleted file mode 100644 index 7bbe3715..00000000 --- a/examples/reduction_spinglass_to_maxcut.rs +++ /dev/null @@ -1,98 +0,0 @@ -// # Spin Glass to Max-Cut Reduction -// -// ## Mathematical Equivalence -// When external fields h_i = 0, the Ising Hamiltonian H = -sum J_{ij} s_i s_j maps -// directly to a Max-Cut problem: maximizing the cut value is equivalent to minimizing -// the Ising energy. When h_i != 0, an ancilla spin is added with w_{i,a} = h_i. -// -// ## This Example -// - Instance: Petersen graph with 10 spins, ±1 couplings, no external fields -// - Source SpinGlass: 10 spins on Petersen topology -// - Target MaxCut: 10 vertices (direct mapping, no ancilla) -// -// ## Output -// Exports `docs/paper/examples/spinglass_to_maxcut.json` and `spinglass_to_maxcut.result.json`. -// -// See docs/paper/reductions.typ for the full reduction specification. 
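The zero-field equivalence claimed in this header can be checked numerically in a few lines. Under one common sign convention (H = -sum J_ij s_i s_j with cut weights w_ij = -J_ij; the crate may use a different sign), the identity H(s) = W - 2*cut(s) holds for every spin configuration, where W is the total edge weight — so minimizing energy is exactly maximizing the cut. A standalone sketch (helper names are illustrative):

```rust
// Ising energy with zero external fields: H = -sum_ij J_ij * s_i * s_j.
fn ising_energy(spins: &[i32], couplings: &[((usize, usize), i32)]) -> i32 {
    -couplings
        .iter()
        .map(|&((u, v), j)| j * spins[u] * spins[v])
        .sum::<i32>()
}

// Weight of the cut induced by a +/-1 labelling of the vertices.
fn cut_value(side: &[i32], weights: &[((usize, usize), i32)]) -> i32 {
    weights
        .iter()
        .filter(|&&((u, v), _)| side[u] != side[v])
        .map(|&(_, w)| w)
        .sum()
}
```

Enumerating all configurations of a small instance confirms the identity, which is why no ancilla spin is needed when all fields h_i are zero.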
- -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - let (n, edges) = petersen(); - let couplings: Vec<((usize, usize), i32)> = edges - .iter() - .enumerate() - .map(|(i, &(u, v))| ((u, v), if i % 2 == 0 { 1 } else { -1 })) - .collect(); - let sg = SpinGlass::::new(n, couplings, vec![0; n]); - - let reduction = ReduceTo::>::reduce_to(&sg); - let maxcut = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!("Source: SpinGlass with {} variables", sg.num_variables()); - println!("Target: MaxCut with {} variables", maxcut.num_variables()); - - let solver = BruteForce::new(); - let maxcut_solutions = solver.find_all_best(maxcut); - println!("\n=== Solution ==="); - println!("Target solutions found: {}", maxcut_solutions.len()); - - // Extract and verify solutions - let mut solutions = Vec::new(); - for target_sol in &maxcut_solutions { - let source_sol = reduction.extract_solution(target_sol); - let size = sg.evaluate(&source_sol); - // SpinGlass is unconstrained, all configs are valid - assert!(size.is_valid()); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_sol.clone(), - }); - } - - let sg_solution = reduction.extract_solution(&maxcut_solutions[0]); - println!("Source SpinGlass solution: {:?}", sg_solution); - - let size = sg.evaluate(&sg_solution); - println!("Solution energy: {:?}", size); - // SpinGlass is unconstrained, all configs are valid - assert!(size.is_valid()); - println!("\nReduction verified successfully"); - - // Export JSON - let source_variant = variant_to_map(SpinGlass::::variant()); - let target_variant = variant_to_map(MaxCut::::variant()); - let overhead = lookup_overhead("SpinGlass", &source_variant, "MaxCut", &target_variant) - .expect("SpinGlass -> MaxCut overhead not found"); - - let data = ReductionData { - source: 
ProblemSide { - problem: SpinGlass::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_spins": sg.num_variables(), - }), - }, - target: ProblemSide { - problem: MaxCut::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vertices": maxcut.graph().num_vertices(), - "num_edges": maxcut.graph().num_edges(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "spinglass_to_maxcut"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_spinglass_to_qubo.rs b/examples/reduction_spinglass_to_qubo.rs deleted file mode 100644 index 311414dd..00000000 --- a/examples/reduction_spinglass_to_qubo.rs +++ /dev/null @@ -1,96 +0,0 @@ -// # Spin Glass to QUBO Reduction -// -// ## Mathematical Equivalence -// The substitution s_i = 2x_i - 1 transforms Ising spins s in {-1,+1} to binary -// variables x in {0,1}. Expanding the Ising Hamiltonian H(s) under this substitution -// yields a QUBO objective Q(x) plus a constant offset. -// -// ## This Example -// - Instance: Petersen graph with 10 spins, 15 frustrated ±1 couplings, zero fields -// - Source SpinGlass: 10 spins on Petersen topology -// - Target QUBO: 10 binary variables -// -// ## Output -// Exports `docs/paper/examples/spinglass_to_qubo.json` and -// `docs/paper/examples/spinglass_to_qubo.result.json` for use in paper code blocks. -// -// See docs/paper/reductions.typ for the full reduction specification. 
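The substitution described in this header expands term by term: with s_i = 2x_i - 1, each coupling gives -J*s_u*s_v = -J*(4*x_u*x_v - 2*x_u - 2*x_v + 1), i.e. an off-diagonal quadratic entry, two diagonal (linear) entries, and a constant. A standalone sketch of that bookkeeping, assuming the zero-field convention H = -sum J_ij s_i s_j (the helper name and upper-triangular storage are illustrative, not the crate's layout):

```rust
// Build a QUBO matrix Q and constant offset such that, for zero external
// fields, H(s) = x^T Q x + offset under the substitution s_i = 2*x_i - 1.
fn spinglass_to_qubo(n: usize, couplings: &[((usize, usize), f64)]) -> (Vec<Vec<f64>>, f64) {
    let mut q = vec![vec![0.0; n]; n];
    let mut offset = 0.0;
    for &((u, v), j) in couplings {
        // -J*s_u*s_v = -J*(4*x_u*x_v - 2*x_u - 2*x_v + 1)
        q[u][v] += -4.0 * j; // quadratic term (stored upper-triangular)
        q[u][u] += 2.0 * j;  // linear terms land on the diagonal
        q[v][v] += 2.0 * j;
        offset += -j;        // constant from the expansion
    }
    (q, offset)
}
```

Enumerating all binary configurations of a small instance verifies that x^T Q x + offset reproduces the Ising energy exactly, matching the "plus a constant offset" remark above.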
- -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::small_graphs::petersen; -use problemreductions::topology::SimpleGraph; - -pub fn run() { - let (n, edges) = petersen(); - // Alternating +/-1 couplings create frustration on odd cycles - let couplings: Vec<((usize, usize), f64)> = edges - .iter() - .enumerate() - .map(|(i, &(u, v))| ((u, v), if i % 2 == 0 { 1.0 } else { -1.0 })) - .collect(); - let sg = SpinGlass::::new(n, couplings, vec![0.0; n]); - - let reduction = ReduceTo::>::reduce_to(&sg); - let qubo = reduction.target_problem(); - - println!("\n=== Problem Transformation ==="); - println!("Source: SpinGlass with {} variables", sg.num_variables()); - println!("Target: QUBO with {} variables", qubo.num_variables()); - - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - println!("\n=== Solution ==="); - println!("Target solutions found: {}", qubo_solutions.len()); - - let sg_solution = reduction.extract_solution(&qubo_solutions[0]); - println!("Source SpinGlass solution: {:?}", sg_solution); - - let energy = sg.evaluate(&sg_solution); - println!("Solution energy: {:?}", energy); - assert!(energy.is_valid()); // Valid solution - println!("\nReduction verified successfully"); - - // Collect all solutions - let mut solutions = Vec::new(); - for target_sol in &qubo_solutions { - let source_sol = reduction.extract_solution(target_sol); - solutions.push(SolutionPair { - source_config: source_sol, - target_config: target_sol.clone(), - }); - } - - // Export JSON - let source_variant = variant_to_map(SpinGlass::::variant()); - let target_variant = variant_to_map(QUBO::::variant()); - let overhead = lookup_overhead("SpinGlass", &source_variant, "QUBO", &target_variant) - .expect("SpinGlass -> QUBO overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: SpinGlass::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - 
"num_spins": sg.num_variables(), - }), - }, - target: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": qubo.matrix(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "spinglass_to_qubo"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_travelingsalesman_to_ilp.rs b/examples/reduction_travelingsalesman_to_ilp.rs deleted file mode 100644 index a6bbd871..00000000 --- a/examples/reduction_travelingsalesman_to_ilp.rs +++ /dev/null @@ -1,108 +0,0 @@ -// # Traveling Salesman to ILP Reduction -// -// ## Mathematical Formulation -// Variables: x_{v,k} in {0,1} for vertex v and position k; -// auxiliary y variables for McCormick linearization of products. -// Constraints: assignment, non-edge consecutive prohibition, McCormick. -// Objective: minimize total edge weight of the tour. -// -// ## This Example -// - Instance: K4 complete graph with weights -// - Source: TravelingSalesman with 4 vertices, 6 edges -// - Target: ILP with position-based binary variables -// -// ## Output -// Exports `docs/paper/examples/travelingsalesman_to_ilp.json` and `travelingsalesman_to_ilp.result.json`. - -use problemreductions::export::*; -use problemreductions::models::algebraic::ILP; -use problemreductions::prelude::*; -use problemreductions::solvers::ILPSolver; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - // 1. Create TSP instance: K4 with weights - let problem = TravelingSalesman::new( - SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]), - vec![10, 15, 20, 35, 25, 30], - ); - - // 2. Reduce to ILP - let reduction = ReduceTo::>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // 3. 
Print transformation - println!("\n=== Problem Transformation ==="); - println!( - "Source: TravelingSalesman with {} variables ({} edges)", - problem.num_variables(), - problem.graph().num_edges() - ); - println!( - "Target: ILP with {} variables, {} constraints", - ilp.num_vars, - ilp.constraints.len() - ); - - // 4. Solve target ILP - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - - // 5. Extract source solution - let tsp_solution = reduction.extract_solution(&ilp_solution); - println!("\n=== Solution ==="); - println!("Edge selection: {:?}", tsp_solution); - - // 6. Verify - let metric = problem.evaluate(&tsp_solution); - println!("Tour cost: {:?}", metric); - assert!(metric.is_valid()); - - // Cross-check with brute force - let bf = BruteForce::new(); - let bf_solutions = bf.find_all_best(&problem); - let bf_metric = problem.evaluate(&bf_solutions[0]); - assert_eq!(metric, bf_metric, "ILP must match brute force optimum"); - println!("Brute force confirms optimality"); - - // 7. 
Collect solutions and export JSON - let solutions = vec![SolutionPair { - source_config: tsp_solution.clone(), - target_config: ilp_solution, - }]; - - let source_variant = variant_to_map(TravelingSalesman::::variant()); - let target_variant = variant_to_map(ILP::::variant()); - let overhead = lookup_overhead("TravelingSalesman", &source_variant, "ILP", &target_variant) - .unwrap_or_default(); - let edges: Vec<(usize, usize)> = problem.edges().iter().map(|&(u, v, _)| (u, v)).collect(); - - let data = ReductionData { - source: ProblemSide { - problem: TravelingSalesman::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": problem.graph().num_vertices(), - "num_edges": problem.graph().num_edges(), - "edges": edges, - }), - }, - target: ProblemSide { - problem: ILP::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": ilp.num_vars, - "num_constraints": ilp.constraints.len(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "travelingsalesman_to_ilp"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/examples/reduction_travelingsalesman_to_qubo.rs b/examples/reduction_travelingsalesman_to_qubo.rs deleted file mode 100644 index 8fde9e89..00000000 --- a/examples/reduction_travelingsalesman_to_qubo.rs +++ /dev/null @@ -1,142 +0,0 @@ -// # Traveling Salesman to QUBO Reduction (Penalty Method) -// -// ## Mathematical Relationship -// The TSP on a graph G = (V, E) with edge weights is mapped to QUBO using -// position-based encoding. Each vertex v and position k has a binary variable -// x_{v,k}, with penalties enforcing: -// -// 1. Assignment constraint: each vertex appears exactly once in the tour -// 2. Position constraint: each position has exactly one vertex -// 3. Edge constraint: consecutive positions use valid edges -// 4. 
Objective: total edge weight of the tour -// -// The QUBO has n^2 variables (n vertices x n positions). -// -// ## This Example -// - Instance: K3 complete graph with edge weights [1, 2, 3] -// (w01=1, w02=2, w12=3) -// - Source: TravelingSalesman on 3 vertices, 3 edges -// - QUBO variables: 9 (3^2 = 9, position encoding) -// - Optimal tour cost = 6 (all edges used: 1 + 2 + 3) -// -// ## Outputs -// - `docs/paper/examples/travelingsalesman_to_qubo.json` — reduction structure -// - `docs/paper/examples/travelingsalesman_to_qubo.result.json` — solutions -// -// ## Usage -// ```bash -// cargo run --example reduction_travelingsalesman_to_qubo -// ``` - -use problemreductions::export::*; -use problemreductions::prelude::*; -use problemreductions::topology::{Graph, SimpleGraph}; - -pub fn run() { - println!("=== TravelingSalesman -> QUBO Reduction ===\n"); - - // K3 complete graph with edge weights: w01=1, w02=2, w12=3 - let graph = SimpleGraph::new(3, vec![(0, 1), (0, 2), (1, 2)]); - let tsp = TravelingSalesman::new(graph, vec![1i32, 2, 3]); - - // Reduce to QUBO - let reduction = ReduceTo::::reduce_to(&tsp); - let qubo = reduction.target_problem(); - - println!( - "Source: TravelingSalesman on K3 ({} vertices, {} edges)", - tsp.graph().num_vertices(), - tsp.graph().num_edges() - ); - println!( - "Target: QUBO with {} variables (position encoding: 3 vertices x 3 positions)", - qubo.num_variables() - ); - println!("Q matrix:"); - for row in qubo.matrix() { - let formatted: Vec = row.iter().map(|v| format!("{:8.1}", v)).collect(); - println!(" [{}]", formatted.join(", ")); - } - - // Solve QUBO with brute force - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - - // Extract and verify solutions - println!("\nOptimal QUBO solutions: {}", qubo_solutions.len()); - let mut solutions = Vec::new(); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - let edge_names = ["(0,1)", "(0,2)", "(1,2)"]; - let selected: 
Vec<&str> = extracted - .iter() - .enumerate() - .filter(|(_, &v)| v == 1) - .map(|(i, _)| edge_names[i]) - .collect(); - println!(" Edges: {}", selected.join(", ")); - - // Closed-loop verification: check solution is valid in original problem - let metric = tsp.evaluate(&extracted); - assert!(metric.is_valid(), "Tour must be valid in source problem"); - println!(" Cost: {:?}", metric); - - solutions.push(SolutionPair { - source_config: extracted, - target_config: sol.clone(), - }); - } - - // Cross-check with brute force on original problem - let bf_solutions = solver.find_all_best(&tsp); - let bf_metric = tsp.evaluate(&bf_solutions[0]); - let qubo_metric = tsp.evaluate(&reduction.extract_solution(&qubo_solutions[0])); - assert_eq!( - bf_metric, qubo_metric, - "QUBO reduction must match brute force optimum" - ); - - println!( - "\nVerification passed: optimal tour cost matches brute force ({:?})", - bf_metric - ); - - // Export JSON - let source_variant = variant_to_map(TravelingSalesman::::variant()); - let target_variant = variant_to_map(QUBO::::variant()); - let overhead = lookup_overhead( - "TravelingSalesman", - &source_variant, - "QUBO", - &target_variant, - ) - .expect("TravelingSalesman -> QUBO overhead not found"); - - let data = ReductionData { - source: ProblemSide { - problem: TravelingSalesman::::NAME.to_string(), - variant: source_variant, - instance: serde_json::json!({ - "num_vertices": tsp.graph().num_vertices(), - "num_edges": tsp.graph().num_edges(), - }), - }, - target: ProblemSide { - problem: QUBO::::NAME.to_string(), - variant: target_variant, - instance: serde_json::json!({ - "num_vars": qubo.num_vars(), - "matrix": qubo.matrix(), - }), - }, - overhead: overhead_to_json(&overhead), - }; - - let results = ResultData { solutions }; - let name = "travelingsalesman_to_qubo"; - write_example(name, &data, &results); -} - -fn main() { - run() -} diff --git a/problemreductions-cli/Cargo.toml b/problemreductions-cli/Cargo.toml index 
4bf4a70e..18cf08af 100644 --- a/problemreductions-cli/Cargo.toml +++ b/problemreductions-cli/Cargo.toml @@ -15,7 +15,8 @@ name = "pred-sym" path = "src/bin/pred_sym.rs" [features] -default = ["highs", "mcp"] +default = ["highs"] +all = ["highs", "mcp"] highs = ["problemreductions/ilp-highs"] mcp = ["dep:rmcp", "dep:tokio", "dep:schemars", "dep:tracing", "dep:tracing-subscriber"] coin-cbc = ["problemreductions/ilp-coin-cbc"] @@ -25,7 +26,7 @@ lpsolve = ["problemreductions/ilp-lpsolve"] microlp = ["problemreductions/ilp-microlp"] [dependencies] -problemreductions = { version = "0.3.0", path = "..", default-features = false } +problemreductions = { version = "0.3.0", path = "..", default-features = false, features = ["example-db"] } clap = { version = "4", features = ["derive"] } anyhow = "1" serde = { version = "1", features = ["derive"] } @@ -33,7 +34,7 @@ serde_json = "1" num-bigint = "0.4" clap_complete = "4" owo-colors = { version = "4", features = ["supports-colors"] } -rmcp = { version = "0.16", features = ["server", "macros", "transport-io"], optional = true } +rmcp = { version = "1.2", features = ["server", "macros", "transport-io"], optional = true } tokio = { version = "1", features = ["full"], optional = true } schemars = { version = "1.0", optional = true } tracing = { version = "0.1", optional = true } diff --git a/problemreductions-cli/src/cli.rs b/problemreductions-cli/src/cli.rs index b7fbd473..0e67c051 100644 --- a/problemreductions-cli/src/cli.rs +++ b/problemreductions-cli/src/cli.rs @@ -1,4 +1,4 @@ -use clap::{CommandFactory, Parser, Subcommand}; +use clap::{CommandFactory, Parser, Subcommand, ValueEnum}; use std::path::PathBuf; #[derive(Parser)] @@ -46,25 +46,29 @@ pub struct Cli { #[derive(Subcommand)] pub enum Commands { - /// List all registered problem types + /// List all registered problem types (or reduction rules with --rules) #[command(after_help = "\ Examples: - pred list # print to terminal + pred list # list problem types + pred list 
--rules # list all reduction rules pred list -o problems.json # save as JSON")] - List, + List { + /// List reduction rules instead of problem types + #[arg(long)] + rules: bool, + }, - /// Show details for a problem type (variants, fields, reductions) + /// Show details for a problem type or variant (fields, reductions, complexity) #[command(after_help = "\ Examples: - pred show MIS # using alias - pred show MaximumIndependentSet # full name - pred show MIS/UnitDiskGraph # specific graph variant + pred show MIS # all variants for MIS + pred show MIS/UnitDiskGraph # specific variant + pred show MIS/UnitDiskGraph/i32 # fully qualified variant + pred show KSAT/K3 # KSatisfiability with K=3 -Use `pred list` to see all available problem types and aliases. -Use `pred to MIS --hops 2` to explore what reduces to MIS. -Use `pred from QUBO --hops 1` to explore what QUBO reduces to.")] +Use `pred list` to see all available problem types and variants.")] Show { - /// Problem name or alias (e.g., MIS, QUBO, MIS/UnitDiskGraph) + /// Problem name or variant (e.g., MIS, MIS/UnitDiskGraph, KSAT/K3) #[arg(value_parser = crate::problem_name::ProblemNameParser)] problem: String, }, @@ -126,6 +130,9 @@ Use `pred list` to see available problems.")] /// Show all paths instead of just the cheapest #[arg(long)] all: bool, + /// Maximum paths to return in --all mode + #[arg(long, default_value_t = 20)] + max_paths: usize, }, /// Export the reduction graph to JSON @@ -195,6 +202,12 @@ Setup: add one line to your shell rc file: }, } +#[derive(Clone, Debug, ValueEnum)] +pub enum ExampleSide { + Source, + Target, +} + #[derive(clap::Args)] #[command(after_help = "\ TIP: Run `pred create <problem>` (no other flags) to see problem-specific help. 
@@ -204,7 +217,7 @@ Flags by problem type: MIS, MVC, MaxClique, MinDomSet --graph, --weights MaxCut, MaxMatching, TSP --graph, --edge-weights MaximalIS --graph, --weights - SAT, 3SAT/KSAT --num-vars, --clauses [--k] + SAT, KSAT --num-vars, --clauses [--k] QUBO --matrix SpinGlass --graph, --couplings, --fields KColoring --graph, --k @@ -238,6 +251,9 @@ Random generation: --random --num-vertices N [--edge-prob 0.5] [--seed 42] Examples: + pred create --example MIS/SimpleGraph/i32 + pred create --example MVC/SimpleGraph/i32 --to MIS/SimpleGraph/i32 + pred create --example MVC/SimpleGraph/i32 --to MIS/SimpleGraph/i32 --example-side target pred create MIS --graph 0-1,1-2,2-3 --weights 1,1,1 pred create SAT --num-vars 3 --clauses \"1,2;-1,3\" pred create QUBO --matrix \"1,0.5;0.5,2\" @@ -246,9 +262,18 @@ Examples: pred create MIS --random --num-vertices 10 --edge-prob 0.3 pred create FVS --arcs \"0>1,1>2,2>0\" --weights 1,1,1")] pub struct CreateArgs { - /// Problem type (e.g., MIS, QUBO, SAT) + /// Problem type (e.g., MIS, QUBO, SAT). Omit when using --example. #[arg(value_parser = crate::problem_name::ProblemNameParser)] - pub problem: String, + pub problem: Option<String>, + /// Build a problem from the canonical example database using a structural problem spec. + #[arg(long, value_parser = crate::problem_name::ProblemNameParser)] + pub example: Option<String>, + /// Target problem spec for canonical rule example lookup. + #[arg(long = "to", value_parser = crate::problem_name::ProblemNameParser)] + pub example_target: Option<String>, + /// Which side of a rule example to emit [default: source]. 
+ #[arg(long, value_enum, default_value = "source")] + pub example_side: ExampleSide, /// Graph edge list (e.g., 0-1,1-2,2-3) #[arg(long)] pub graph: Option<String>, diff --git a/problemreductions-cli/src/commands/create.rs b/problemreductions-cli/src/commands/create.rs index 8cf415ff..b89d65c9 100644 --- a/problemreductions-cli/src/commands/create.rs +++ b/problemreductions-cli/src/commands/create.rs @@ -1,9 +1,10 @@ -use crate::cli::CreateArgs; +use crate::cli::{CreateArgs, ExampleSide}; use crate::dispatch::ProblemJsonOutput; use crate::output::OutputConfig; -use crate::problem_name::{parse_problem_spec, resolve_variant}; +use crate::problem_name::{resolve_problem_ref, unknown_problem_error}; use crate::util; use anyhow::{bail, Context, Result}; +use problemreductions::export::{ModelExample, ProblemRef, ProblemSide, RuleExample}; use problemreductions::models::algebraic::{ClosestVectorProblem, BMF}; use problemreductions::models::graph::{GraphPartitioning, HamiltonianPath}; use problemreductions::models::misc::{ @@ -62,6 +63,132 @@ fn all_data_flags_empty(args: &CreateArgs) -> bool { && args.alphabet_size.is_none() } +fn emit_problem_output(output: &ProblemJsonOutput, out: &OutputConfig) -> Result<()> { + let json = serde_json::to_value(output)?; + if let Some(ref path) = out.output { + let content = serde_json::to_string_pretty(&json).context("Failed to serialize JSON")?; + std::fs::write(path, &content) + .with_context(|| format!("Failed to write {}", path.display()))?; + out.info(&format!("Wrote {}", path.display())); + } else { + println!("{}", serde_json::to_string_pretty(&json)?); + } + Ok(()) +} + +fn format_problem_ref(problem: &ProblemRef) -> String { + if problem.variant.is_empty() { + return problem.name.clone(); + } + + let values = problem + .variant + .values() + .cloned() + .collect::<Vec<_>>() + .join("/"); + format!("{}/{}", problem.name, values) +} + +fn resolve_example_problem_ref( + input: &str, + rgraph: &problemreductions::rules::ReductionGraph, +) -> 
Result<ProblemRef> { + let problem = resolve_problem_ref(input, rgraph)?; + if rgraph.variants_for(&problem.name).is_empty() { + bail!("{}", unknown_problem_error(input)); + } + Ok(problem) +} + +fn problem_output_from_side(side: ProblemSide) -> ProblemJsonOutput { + ProblemJsonOutput { + problem_type: side.problem, + variant: side.variant, + data: side.instance, + } +} + +fn problem_output_from_model(example: ModelExample) -> ProblemJsonOutput { + ProblemJsonOutput { + problem_type: example.problem, + variant: example.variant, + data: example.instance, + } +} + +fn resolve_model_example( + example_spec: &str, + rgraph: &problemreductions::rules::ReductionGraph, +) -> Result<ModelExample> { + let model_db = problemreductions::example_db::build_model_db()?; + let problem = resolve_example_problem_ref(example_spec, rgraph)?; + model_db + .models + .into_iter() + .find(|model| model.problem_ref() == problem) + .ok_or_else(|| { + anyhow::anyhow!( + "No canonical model example exists for {}", + format_problem_ref(&problem) + ) + }) +} + +fn resolve_rule_example( + example_spec: &str, + target_spec: &str, + rgraph: &problemreductions::rules::ReductionGraph, +) -> Result<RuleExample> { + let rule_db = problemreductions::example_db::build_rule_db()?; + let source = resolve_example_problem_ref(example_spec, rgraph)?; + let target = resolve_example_problem_ref(target_spec, rgraph)?; + rule_db + .rules + .into_iter() + .find(|rule| rule.source.problem_ref() == source && rule.target.problem_ref() == target) + .ok_or_else(|| { + anyhow::anyhow!( + "No canonical rule example exists for {} -> {}", + format_problem_ref(&source), + format_problem_ref(&target) + ) + }) +} + +fn create_from_example(args: &CreateArgs, out: &OutputConfig) -> Result<()> { + let example_spec = args + .example + .as_deref() + .ok_or_else(|| anyhow::anyhow!("Missing --example problem spec"))?; + + if args.problem.is_some() { + bail!( + "Use either `pred create <problem>` or `pred create --example <spec>`, not both" + ); + } + if args.random || 
!all_data_flags_empty(args) { + bail!("`pred create --example` does not accept problem-construction flags"); + } + let rgraph = problemreductions::rules::ReductionGraph::new(); + + let output = if let Some(target_spec) = args.example_target.as_deref() { + let example = resolve_rule_example(example_spec, target_spec, &rgraph)?; + match args.example_side { + ExampleSide::Source => problem_output_from_side(example.source), + ExampleSide::Target => problem_output_from_side(example.target), + } + } else { + if matches!(args.example_side, ExampleSide::Target) { + bail!("`--example-side target` requires `--to <target>`"); + } + + problem_output_from_model(resolve_model_example(example_spec, &rgraph)?) + }; + + emit_problem_output(&output, out) +} + fn type_format_hint(type_name: &str, graph_type: Option<&str>) -> &'static str { match type_name { "G" => match graph_type { @@ -183,17 +310,17 @@ fn resolved_graph_type(variant: &BTreeMap<String, String>) -> &str { } pub fn create(args: &CreateArgs, out: &OutputConfig) -> Result<()> { - let spec = parse_problem_spec(&args.problem)?; - let canonical = &spec.name; + if args.example.is_some() { + return create_from_example(args, out); + } - // Resolve variant early so random and help can use it + let problem = args.problem.as_ref().ok_or_else(|| { + anyhow::anyhow!("Missing problem type.\n\nUsage: pred create <problem> [FLAGS]") + })?; let rgraph = problemreductions::rules::ReductionGraph::new(); - let known_variants = rgraph.variants_for(canonical); - let resolved_variant = if known_variants.is_empty() { - BTreeMap::new() - } else { - resolve_variant(&spec, &known_variants)? 
- }; + let resolved = resolve_problem_ref(problem, &rgraph)?; + let canonical = &resolved.name; + let resolved_variant = resolved.variant; let graph_type = resolved_graph_type(&resolved_variant); if args.random { @@ -296,7 +423,7 @@ pub fn create(args: &CreateArgs, out: &OutputConfig) -> Result<()> { let (graph, _) = parse_graph(args).map_err(|e| { anyhow::anyhow!( "{e}\n\nUsage: pred create {} --graph 0-1,1-2,2-3 [--edge-weights 1,1,1]", - args.problem + problem ) })?; let edge_weights = parse_edge_weights(args, graph.num_edges())?; @@ -369,7 +496,7 @@ pub fn create(args: &CreateArgs, out: &OutputConfig) -> Result<()> { let num_vars = args.num_vars.ok_or_else(|| { anyhow::anyhow!( "KSatisfiability requires --num-vars\n\n\ - Usage: pred create 3SAT --num-vars 3 --clauses \"1,2,3;-1,2,-3\"" + Usage: pred create KSAT --num-vars 3 --clauses \"1,2,3;-1,2,-3\"" ) })?; let clauses = parse_clauses(args)?; @@ -864,16 +991,22 @@ pub fn create(args: &CreateArgs, out: &OutputConfig) -> Result<()> { data, }; - let json = serde_json::to_value(&output)?; + emit_problem_output(&output, out) +} - if let Some(ref path) = out.output { - let content = serde_json::to_string_pretty(&json).context("Failed to serialize JSON")?; - std::fs::write(path, &content) - .with_context(|| format!("Failed to write {}", path.display()))?; - out.info(&format!("Wrote {}", path.display())); - } else { - // Print JSON to stdout so data is not lost (consistent with reduce) - println!("{}", serde_json::to_string_pretty(&json)?); +/// Reject non-unit weights when the resolved variant uses `weight=One`. 
+fn reject_nonunit_weights_for_one_variant( + canonical: &str, + graph_type: &str, + variant: &BTreeMap<String, String>, + weights: &[i32], +) -> Result<()> { + if variant.get("weight").map(|w| w.as_str()) == Some("One") && weights.iter().any(|&w| w != 1) { + bail!( + "Non-unit weights are not supported for the default unit-weight variant.\n\n\ + Use the weighted variant instead:\n \ + pred create {canonical}/{graph_type}/i32 --graph ... --weights ..." + ); } Ok(()) } @@ -891,6 +1024,12 @@ fn create_vertex_weight_problem( let n = positions.len(); let graph = KingsSubgraph::new(positions); let weights = parse_vertex_weights(args, n)?; + reject_nonunit_weights_for_one_variant( + canonical, + graph_type, + resolved_variant, + &weights, + )?; Ok(( ser_vertex_weight_problem_with(canonical, graph, weights)?, resolved_variant.clone(), @@ -901,6 +1040,12 @@ let n = positions.len(); let graph = TriangularSubgraph::new(positions); let weights = parse_vertex_weights(args, n)?; + reject_nonunit_weights_for_one_variant( + canonical, + graph_type, + resolved_variant, + &weights, + )?; Ok(( ser_vertex_weight_problem_with(canonical, graph, weights)?, resolved_variant.clone(), @@ -912,6 +1057,12 @@ let radius = args.radius.unwrap_or(1.0); let graph = UnitDiskGraph::new(positions, radius); let weights = parse_vertex_weights(args, n)?; + reject_nonunit_weights_for_one_variant( + canonical, + graph_type, + resolved_variant, + &weights, + )?; Ok(( + ser_vertex_weight_problem_with(canonical, graph, weights)?, + resolved_variant.clone(), @@ -922,10 +1073,16 @@ let (graph, n) = parse_graph(args).map_err(|e| { anyhow::anyhow!( "{e}\n\nUsage: pred create {} --graph 0-1,1-2,2-3 [--weights 1,1,1,1]", - args.problem + canonical ) })?; let weights = parse_vertex_weights(args, n)?; + reject_nonunit_weights_for_one_variant( + canonical, + graph_type, + resolved_variant, + &weights, + )?; let data = 
ser_vertex_weight_problem_with(canonical, graph, weights)?; Ok((data, resolved_variant.clone())) } @@ -1315,7 +1472,7 @@ fn create_random( anyhow::anyhow!( "--random requires --num-vertices\n\n\ Usage: pred create {} --random --num-vertices 10 [--edge-prob 0.3] [--seed 42]", - args.problem + canonical ) })?; @@ -1479,15 +1636,5 @@ fn create_random( data, }; - let json = serde_json::to_value(&output)?; - - if let Some(ref path) = out.output { - let content = serde_json::to_string_pretty(&json).context("Failed to serialize JSON")?; - std::fs::write(path, &content) - .with_context(|| format!("Failed to write {}", path.display()))?; - out.info(&format!("Wrote {}", path.display())); - } else { - println!("{}", serde_json::to_string_pretty(&json)?); - } - Ok(()) + emit_problem_output(&output, out) } diff --git a/problemreductions-cli/src/commands/evaluate.rs b/problemreductions-cli/src/commands/evaluate.rs index 6e78c935..563b848e 100644 --- a/problemreductions-cli/src/commands/evaluate.rs +++ b/problemreductions-cli/src/commands/evaluate.rs @@ -1,11 +1,23 @@ use crate::dispatch::{load_problem, read_input, ProblemJson}; use crate::output::OutputConfig; -use anyhow::Result; +use anyhow::{Context, Result}; use std::path::Path; pub fn evaluate(input: &Path, config_str: &str, out: &OutputConfig) -> Result<()> { let content = read_input(input)?; - let problem_json: ProblemJson = serde_json::from_str(&content)?; + let json: serde_json::Value = + serde_json::from_str(&content).context("Input is not valid JSON")?; + + if json.get("source").is_some() && json.get("target").is_some() && json.get("path").is_some() { + anyhow::bail!( + "Input is a reduction bundle, not a problem instance.\n\ + `pred evaluate` only works on problem files (from `pred create`).\n\ + To solve a bundle, use: pred solve <file>" + ); + } + + let problem_json: ProblemJson = + serde_json::from_value(json).context("Failed to parse problem JSON")?; let problem = load_problem( &problem_json.problem_type, diff --git 
a/problemreductions-cli/src/commands/graph.rs b/problemreductions-cli/src/commands/graph.rs index c7a66294..d409efaa 100644 --- a/problemreductions-cli/src/commands/graph.rs +++ b/problemreductions-cli/src/commands/graph.rs @@ -1,5 +1,5 @@ use crate::output::OutputConfig; -use crate::problem_name::{aliases_for, parse_problem_spec, resolve_variant}; +use crate::problem_name::{aliases_for, parse_problem_spec, resolve_problem_ref}; use anyhow::{Context, Result}; use problemreductions::registry::collect_schemas; use problemreductions::rules::{Minimize, MinimizeSteps, ReductionGraph, TraversalDirection}; @@ -15,27 +15,57 @@ pub fn list(out: &OutputConfig) -> Result<()> { let mut types = graph.problem_types(); types.sort(); - // Collect data for each problem - struct RowData { - name: String, - aliases: Vec<&'static str>, - num_variants: usize, - num_reduces_to: usize, + // Collect data: one row per variant, grouped by problem type. + struct VariantRow { + /// Full problem/variant name (e.g., "MIS/SimpleGraph/i32") + display: String, + /// Aliases (shown only on first variant of each problem) + aliases: String, + /// Whether this variant is the default + is_default: bool, + /// Number of outgoing reductions from this variant + rules: usize, + /// Best-known complexity + complexity: String, + } + + let mut rows_data: Vec<VariantRow> = Vec::new(); + for name in &types { + let variants = graph.variants_for(name); + let default_variant = graph.default_variant_for(name); + let aliases = aliases_for(name); + let alias_str = if aliases.is_empty() { + String::new() + } else { + aliases.join(", ") + }; + + for (i, v) in variants.iter().enumerate() { + let slash = variant_to_full_slash(v); + let display = if slash.is_empty() { + name.to_string() + } else { + format!("{name}{slash}") + }; + let is_default = default_variant.as_ref() == Some(v); + let rules = graph.outgoing_reductions(name).len(); + let complexity = graph + .variant_complexity(name, v) + .map(|c| big_o_of(&Expr::parse(c)))
.unwrap_or_default(); + rows_data.push(VariantRow { + display, + aliases: if i == 0 { + alias_str.clone() + } else { + String::new() + }, + is_default, + rules: if i == 0 { rules } else { 0 }, + complexity, + }); + } } - let data: Vec<RowData> = types - .iter() - .map(|name| { - let aliases = aliases_for(name); - let num_variants = graph.variants_for(name).len(); - let num_reduces_to = graph.outgoing_reductions(name).len(); - RowData { - name: name.to_string(), - aliases, - num_variants, - num_reduces_to, - } - }) - .collect(); let summary = format!( "Registered problems: {} types, {} reductions, {} variant nodes\n", @@ -47,22 +77,27 @@ let columns: Vec<(&str, Align, usize)> = vec![ ("Problem", Align::Left, 7), ("Aliases", Align::Left, 7), - ("Variants", Align::Right, 8), - ("Reduces to", Align::Right, 10), + ("Rules", Align::Right, 5), + ("Complexity", Align::Left, 10), ]; - let rows: Vec<Vec<String>> = data + let rows: Vec<Vec<String>> = rows_data .iter() .map(|r| { + let label = if r.is_default { + format!("{} *", r.display) + } else { + r.display.clone() + }; vec![ - r.name.clone(), - if r.aliases.is_empty() { - String::new() + label, + r.aliases.clone(), + if r.rules > 0 { + r.rules.to_string() } else { - r.aliases.join(", ") + String::new() }, - r.num_variants.to_string(), - r.num_reduces_to.to_string(), + r.complexity.clone(), ] }) .collect(); @@ -74,18 +109,20 @@ text.push_str(&crate::output::fmt_section(&summary)); text.push('\n'); text.push_str(&format_table(&columns, &rows, &color_fns)); - text.push_str("\nUse `pred show <problem>` to see variants, reductions, and fields.\n"); + text.push_str("\n* = default variant\n"); + text.push_str("Use `pred show <problem>` to see reductions and fields.\n"); let json = serde_json::json!({ "num_types": graph.num_types(), "num_reductions": graph.num_reductions(), "num_variant_nodes": graph.num_variant_nodes(), - "problems": data.iter().map(|r| { + "variants": 
rows_data.iter().map(|r| { serde_json::json!({ - "name": r.name, + "name": r.display, "aliases": r.aliases, - "num_variants": r.num_variants, - "num_reduces_to": r.num_reduces_to, + "default": r.is_default, + "rules": r.rules, + "complexity": r.complexity, }) }).collect::<Vec<_>>(), }); @@ -93,48 +130,104 @@ out.emit_with_default_name("pred_graph_list.json", &text, &json) } -pub fn show(problem: &str, out: &OutputConfig) -> Result<()> { - let spec = parse_problem_spec(problem)?; +pub fn list_rules(out: &OutputConfig) -> Result<()> { + use crate::output::{format_table, Align}; + let graph = ReductionGraph::new(); - let variants = graph.variants_for(&spec.name); - if variants.is_empty() { - anyhow::bail!("{}", crate::problem_name::unknown_problem_error(&spec.name)); + let mut types = graph.problem_types(); + types.sort(); + + struct RuleRow { + source: String, + target: String, + overhead: String, } - let mut text = format!("{}\n", crate::output::fmt_problem_name(&spec.name)); + let mut rows_data: Vec<RuleRow> = Vec::new(); + for name in &types { + for edge in graph.outgoing_reductions(name) { + let source_slash = variant_to_full_slash(&edge.source_variant); + let target_slash = variant_to_full_slash(&edge.target_variant); + let oh_parts = fmt_overhead_parts(&edge.overhead.output_size); + rows_data.push(RuleRow { + source: format!("{}{}", edge.source_name, source_slash), + target: format!("{}{}", edge.target_name, target_slash), + overhead: oh_parts.join(", "), + }); + } + } + + let summary = format!("Registered reduction rules: {}\n", rows_data.len()); + + let columns: Vec<(&str, Align, usize)> = vec![ + ("Source", Align::Left, 6), + ("Target", Align::Left, 6), + ("Overhead", Align::Left, 8), + ]; + + let rows: Vec<Vec<String>> = rows_data + .iter() + .map(|r| vec![r.source.clone(), r.target.clone(), r.overhead.clone()]) + .collect(); + + let color_fns: Vec> = vec![ + Some(crate::output::fmt_problem_name), + Some(crate::output::fmt_problem_name), 
+ None, + ]; + + let mut text = String::new(); + text.push_str(&crate::output::fmt_section(&summary)); + text.push('\n'); + text.push_str(&format_table(&columns, &rows, &color_fns)); + text.push_str("\nUse `pred show <problem>` for details on a specific problem.\n"); + + let json = serde_json::json!({ + "num_rules": rows_data.len(), + "rules": rows_data.iter().map(|r| { + serde_json::json!({ + "source": r.source, + "target": r.target, + "overhead": r.overhead, + }) + }).collect::<Vec<_>>(), + }); + + out.emit_with_default_name("pred_rules_list.json", &text, &json) +} + +pub fn show(problem: &str, out: &OutputConfig) -> Result<()> { + let graph = ReductionGraph::new(); + let resolved = resolve_problem_ref(problem, &graph)?; + let name = &resolved.name; + let variant = &resolved.variant; + + let default_variant = graph.default_variant_for(name); + let is_default = default_variant.as_ref() == Some(variant); + + let slash = variant_to_full_slash(variant); + let header = format!("{name}{slash}"); + let mut text = format!("{}\n", crate::output::fmt_problem_name(&header)); // Show description from schema let schemas = collect_schemas(); - let schema = schemas.iter().find(|s| s.name == spec.name); + let schema = schemas.iter().find(|s| s.name == *name); if let Some(s) = schema { if !s.description.is_empty() { text.push_str(&format!(" {}\n", s.description)); } } - // Show variants - text.push_str(&format!( - "\n{}\n", - crate::output::fmt_section(&format!("Variants ({}):", variants.len())) - )); - for v in &variants { - let slash = variant_to_full_slash(v); - let label = format!( - " {}", - crate::output::fmt_problem_name(&format!("{}{}", spec.name, slash)) - ); - if let Some(c) = graph.variant_complexity(&spec.name, v) { - text.push_str(&format!( - "{label} complexity: {}\n", - big_o_of(&Expr::parse(c)) - )); - } else { - text.push_str(&format!("{label}\n")); - } + // Show variant info + if let Some(c) = graph.variant_complexity(name, variant) { + text.push_str(&format!( + " Best Known 
                    Complexity: {}\n",
+                big_o_of(&Expr::parse(c))
+            ));
        }

-        // Show fields from schema (right after variants)
+        // Show fields from schema
        if let Some(s) = schema {
            text.push_str(&format!(
                "\n{}\n",
@@ -150,7 +243,7 @@ pub fn show(problem: &str, out: &OutputConfig) -> Result<()> {
        }

        // Show size fields (used with `pred path --cost minimize:<field>`)
-        let size_fields = graph.size_field_names(&spec.name);
+        let size_fields = graph.size_field_names(name);
        if !size_fields.is_empty() {
            text.push_str(&format!(
                "\n{}\n",
@@ -161,9 +254,17 @@ pub fn show(problem: &str, out: &OutputConfig) -> Result<()> {
            }
        }

-        // Show reductions from/to this problem
-        let outgoing = graph.outgoing_reductions(&spec.name);
-        let incoming = graph.incoming_reductions(&spec.name);
+        // Show reductions filtered to this specific variant
+        let outgoing: Vec<_> = graph
+            .outgoing_reductions(name)
+            .into_iter()
+            .filter(|e| &e.source_variant == variant)
+            .collect();
+        let incoming: Vec<_> = graph
+            .incoming_reductions(name)
+            .into_iter()
+            .filter(|e| &e.target_variant == variant)
+            .collect();

        text.push_str(&format!(
            "\n{}\n",
@@ -171,8 +272,7 @@ pub fn show(problem: &str, out: &OutputConfig) -> Result<()> {
        ));
        for e in &outgoing {
            text.push_str(&format!(
-                " {} {} {}",
-                fmt_node(&graph, e.source_name, &e.source_variant),
+                " {} {}",
                crate::output::fmt_outgoing("\u{2192}"),
                fmt_node(&graph, e.target_name, &e.target_variant),
            ));
@@ -189,10 +289,9 @@ pub fn show(problem: &str, out: &OutputConfig) -> Result<()> {
        ));
        for e in &incoming {
            text.push_str(&format!(
-                " {} {} {}",
+                " {} {}",
                fmt_node(&graph, e.source_name, &e.source_variant),
                crate::output::fmt_outgoing("\u{2192}"),
-                fmt_node(&graph, e.target_name, &e.target_variant),
            ));
            let oh_parts = fmt_overhead_parts(&e.overhead.output_size);
            if !oh_parts.is_empty() {
@@ -208,25 +307,18 @@ pub fn show(problem: &str, out: &OutputConfig) -> Result<()> {
                "overhead": overhead_to_json(&e.overhead.output_size),
            })
        };
-        let variants_json: Vec<serde_json::Value> = variants
-            .iter()
-            .map(|v| {
-                let complexity = graph.variant_complexity(&spec.name, v).unwrap_or("");
-                serde_json::json!({
-                    "variant": v,
-                    "complexity": complexity,
-                    "big_o": if complexity.is_empty() {
-                        String::new()
-                    } else {
-                        big_o_of(&Expr::parse(complexity))
-                    },
-                })
-            })
-            .collect();
+        let complexity = graph.variant_complexity(name, variant).unwrap_or("");
        let mut json = serde_json::json!({
-            "name": spec.name,
-            "variants": variants_json,
+            "name": name,
+            "variant": variant,
+            "default": is_default,
+            "complexity": complexity,
+            "big_o": if complexity.is_empty() {
+                String::new()
+            } else {
+                big_o_of(&Expr::parse(complexity))
+            },
            "size_fields": size_fields,
            "reduces_to": outgoing.iter().map(&edge_to_json).collect::<Vec<_>>(),
            "reduces_from": incoming.iter().map(&edge_to_json).collect::<Vec<_>>(),
@@ -237,7 +329,7 @@ pub fn show(problem: &str, out: &OutputConfig) -> Result<()> {
        }
    }

-    let default_name = format!("pred_show_{}.json", spec.name);
+    let default_name = format!("pred_show_{}.json", name);
    out.emit_with_default_name(&default_name, &text, &json)
}

@@ -369,7 +461,14 @@ fn format_path_json(
    })
}

-pub fn path(source: &str, target: &str, cost: &str, all: bool, out: &OutputConfig) -> Result<()> {
+pub fn path(
+    source: &str,
+    target: &str,
+    cost: &str,
+    all: bool,
+    max_paths: usize,
+    out: &OutputConfig,
+) -> Result<()> {
    let src_spec = parse_problem_spec(source)?;
    let dst_spec = parse_problem_spec(target)?;
    let graph = ReductionGraph::new();
@@ -390,32 +489,22 @@ pub fn path(source: &str, target: &str, cost: &str, all: bool, out: &OutputConfi
        );
    }

+    // Resolve source and target to exact variant nodes
+    let src_ref = resolve_problem_ref(source, &graph)?;
+    let dst_ref = resolve_problem_ref(target, &graph)?;
+
    if all {
-        // --all uses only the specified variant or the first (default) one
-        let sv = if src_spec.variant_values.is_empty() {
-            src_variants[0].clone()
-        } else {
-            resolve_variant(&src_spec, &src_variants)?
-        };
-        let dv = if dst_spec.variant_values.is_empty() {
-            dst_variants[0].clone()
-        } else {
-            resolve_variant(&dst_spec, &dst_variants)?
-        };
-        return path_all(&graph, &src_spec.name, &sv, &dst_spec.name, &dv, out);
+        return path_all(
+            &graph,
+            &src_ref.name,
+            &src_ref.variant,
+            &dst_ref.name,
+            &dst_ref.variant,
+            max_paths,
+            out,
+        );
    }

-    let src_resolved = if src_spec.variant_values.is_empty() {
-        src_variants.clone()
-    } else {
-        vec![resolve_variant(&src_spec, &src_variants)?]
-    };
-    let dst_resolved = if dst_spec.variant_values.is_empty() {
-        dst_variants.clone()
-    } else {
-        vec![resolve_variant(&dst_spec, &dst_variants)?]
-    };
-
    let input_size = ProblemSize::new(vec![]);

    // Parse cost function once (validate before the search loop)
@@ -435,37 +524,24 @@ pub fn path(source: &str, target: &str, cost: &str, all: bool, out: &OutputConfi
        );
    };

-    let mut best_path: Option<ReductionPath> = None;
-
-    for sv in &src_resolved {
-        for dv in &dst_resolved {
-            let found = match cost_choice {
-                CostChoice::Steps => graph.find_cheapest_path(
-                    &src_spec.name,
-                    sv,
-                    &dst_spec.name,
-                    dv,
-                    &input_size,
-                    &MinimizeSteps,
-                ),
-                CostChoice::Field(f) => graph.find_cheapest_path(
-                    &src_spec.name,
-                    sv,
-                    &dst_spec.name,
-                    dv,
-                    &input_size,
-                    &Minimize(f),
-                ),
-            };
-
-            if let Some(p) = found {
-                let is_better = best_path.as_ref().is_none_or(|bp| p.len() < bp.len());
-                if is_better {
-                    best_path = Some(p);
-                }
-            }
-        }
-    }
+    let best_path = match cost_choice {
+        CostChoice::Steps => graph.find_cheapest_path(
+            &src_ref.name,
+            &src_ref.variant,
+            &dst_ref.name,
+            &dst_ref.variant,
+            &input_size,
+            &MinimizeSteps,
+        ),
+        CostChoice::Field(f) => graph.find_cheapest_path(
+            &src_ref.name,
+            &src_ref.variant,
+            &dst_ref.name,
+            &dst_ref.variant,
+            &input_size,
+            &Minimize(f),
+        ),
+    };

    match best_path {
        Some(ref reduction_path) => {
@@ -494,9 +570,12 @@ fn path_all(
    src_variant: &BTreeMap<String, String>,
    dst_name: &str,
    dst_variant: &BTreeMap<String, String>,
+    max_paths: usize,
    out: &OutputConfig,
) -> Result<()> {
-    let mut all_paths = graph.find_all_paths(src_name, src_variant, dst_name, dst_variant);
+    // Fetch one extra to detect truncation
+    let mut all_paths =
+        graph.find_paths_up_to(src_name, src_variant, dst_name, dst_variant, max_paths + 1);

    if all_paths.is_empty() {
        anyhow::bail!(
@@ -514,22 +593,37 @@ fn path_all(
    // Sort by path length (shortest first)
    all_paths.sort_by_key(|p| p.len());

+    let truncated = all_paths.len() > max_paths;
+    if truncated {
+        all_paths.truncate(max_paths);
+    }
+
+    let returned = all_paths.len();
    let mut text = format!(
        "Found {} paths from {} to {}:\n",
-        all_paths.len(),
-        src_name,
-        dst_name
+        returned, src_name, dst_name
    );
    for (idx, p) in all_paths.iter().enumerate() {
        text.push_str(&format!("\n--- Path {} ---\n", idx + 1));
        text.push_str(&format_path_text(graph, p));
    }
+    if truncated {
+        text.push_str(&format!(
+            "\n(showing {max_paths} of more paths; use --max-paths to increase)\n"
+        ));
+    }

-    let json: serde_json::Value = all_paths
+    let paths_json: Vec<serde_json::Value> = all_paths
        .iter()
        .map(|p| format_path_json(graph, p))
-        .collect::<Vec<_>>()
-        .into();
+        .collect();
+
+    let json = serde_json::json!({
+        "paths": paths_json,
+        "truncated": truncated,
+        "returned": returned,
+        "max_paths": max_paths,
+    });

    if let Some(ref dir) = out.output {
        // -o specifies the output folder; save each path as a separate JSON file
@@ -544,10 +638,28 @@ fn path_all(
            std::fs::write(&file, &content)
                .with_context(|| format!("Failed to write {}", file.display()))?;
        }
+
+        // Write manifest
+        let manifest = serde_json::json!({
+            "paths": returned,
+            "truncated": truncated,
+            "max_paths": max_paths,
+        });
+        let manifest_file = dir.join("manifest.json");
+        let manifest_content =
+            serde_json::to_string_pretty(&manifest).context("Failed to serialize manifest")?;
+        std::fs::write(&manifest_file, &manifest_content)
+            .with_context(|| format!("Failed to write {}", manifest_file.display()))?;
+
        out.info(&format!(
-            "Wrote {} path files to {}",
-            all_paths.len(),
-            dir.display()
+            "Wrote {} path files to {}{}",
+            returned,
+            dir.display(),
+            if truncated {
+                " (truncated; use --max-paths to increase)".to_string()
+            } else {
+                String::new()
+            }
        ));
    } else if out.json {
        println!(
@@ -596,23 +708,14 @@ pub fn neighbors(
    direction_str: &str,
    out: &OutputConfig,
) -> Result<()> {
-    let spec = parse_problem_spec(problem)?;
    let graph = ReductionGraph::new();
-
-    let variants = graph.variants_for(&spec.name);
-    if variants.is_empty() {
-        anyhow::bail!("{}", crate::problem_name::unknown_problem_error(&spec.name));
-    }
+    let resolved = resolve_problem_ref(problem, &graph)?;
+    let spec_name = resolved.name.clone();
+    let variant = resolved.variant;

    let direction = parse_direction(direction_str)?;

-    let variant = if spec.variant_values.is_empty() {
-        variants[0].clone()
-    } else {
-        resolve_variant(&spec, &variants)?
-    };
-
-    let neighbors = graph.k_neighbors(&spec.name, &variant, max_hops, direction);
+    let neighbors = graph.k_neighbors(&spec_name, &variant, max_hops, direction);

    let dir_label = match direction {
        TraversalDirection::Outgoing => "outgoing",
@@ -621,11 +724,11 @@ pub fn neighbors(
    };

    // Build tree structure via BFS with parent tracking
-    let tree = graph.k_neighbor_tree(&spec.name, &variant, max_hops, direction);
+    let tree = graph.k_neighbor_tree(&spec_name, &variant, max_hops, direction);

-    let root_label = fmt_node(&graph, &spec.name, &variant);
+    let root_label = fmt_node(&graph, &spec_name, &variant);

-    let header_label = fmt_node(&graph, &spec.name, &variant);
+    let header_label = fmt_node(&graph, &spec_name, &variant);
    let mut text = format!(
        "{} — {}-hop neighbors ({})\n\n",
        header_label, max_hops, dir_label,
@@ -642,7 +745,7 @@ pub fn neighbors(
    ));

    let json = serde_json::json!({
-        "source": spec.name,
+        "source": spec_name,
        "hops": max_hops,
        "direction": direction_str,
        "neighbors": neighbors.iter().map(|n| {
@@ -654,7 +757,7 @@ pub fn neighbors(
        }).collect::<Vec<_>>(),
    });

-    let default_name = format!("pred_{}_{}_{}.json", direction_str, spec.name, max_hops);
+    let default_name = format!("pred_{}_{}_{}.json", direction_str, spec_name, max_hops);
    out.emit_with_default_name(&default_name, &text, &json)
}
diff --git a/problemreductions-cli/src/commands/reduce.rs b/problemreductions-cli/src/commands/reduce.rs
index 6a7949f3..2fc5c9be 100644
--- a/problemreductions-cli/src/commands/reduce.rs
+++ b/problemreductions-cli/src/commands/reduce.rs
@@ -3,7 +3,7 @@ use crate::dispatch::{
    ReductionBundle,
};
use crate::output::OutputConfig;
-use crate::problem_name::parse_problem_spec;
+use crate::problem_name::resolve_problem_ref;
use anyhow::{Context, Result};
use problemreductions::rules::{MinimizeSteps, ReductionGraph, ReductionPath, ReductionStep};
use problemreductions::types::ProblemSize;
@@ -86,12 +86,14 @@ pub fn reduce(
            }
            // If --to is given, validate it matches the path's target
            if let Some(target) = target {
-                let dst_spec = parse_problem_spec(target)?;
-                if last.name != dst_spec.name {
+                let dst_ref = resolve_problem_ref(target, &graph)?;
+                if last.name != dst_ref.name || last.variant != dst_ref.variant {
                    anyhow::bail!(
-                        "Path file ends with {} but --to specifies {}",
+                        "Path file ends with {}{} but --to specifies {}{}",
                        last.name,
-                        dst_spec.name,
+                        variant_to_full_slash(&last.variant),
+                        dst_ref.name,
+                        variant_to_full_slash(&dst_ref.variant),
                    );
                }
            }
@@ -106,36 +108,18 @@ pub fn reduce(
                    pred reduce problem.json --via path.json"
                )
            })?;
-            let dst_spec = parse_problem_spec(target)?;
-            let dst_variants = graph.variants_for(&dst_spec.name);
-            if dst_variants.is_empty() {
-                anyhow::bail!(
-                    "{}",
-                    crate::problem_name::unknown_problem_error(&dst_spec.name)
-                );
-            }
+            let dst_ref = resolve_problem_ref(target, &graph)?;

            // Auto-discover cheapest path
            let input_size = ProblemSize::new(vec![]);
-            let mut best_path = None;
-
-            for dv in &dst_variants {
-                if let Some(p) = graph.find_cheapest_path(
-                    source_name,
-                    &source_variant,
-                    &dst_spec.name,
-                    dv,
-                    &input_size,
-                    &MinimizeSteps,
-                ) {
-                    let is_better = best_path
-                        .as_ref()
-                        .is_none_or(|bp: &ReductionPath| p.len() < bp.len());
-                    if is_better {
-                        best_path = Some(p);
-                    }
-                }
-            }
+            let best_path = graph.find_cheapest_path(
+                source_name,
+                &source_variant,
+                &dst_ref.name,
+                &dst_ref.variant,
+                &input_size,
+                &MinimizeSteps,
+            );

            best_path.ok_or_else(|| {
                anyhow::anyhow!(
@@ -144,9 +128,9 @@ pub fn reduce(
                    pred path {} {} -o path.json\n\
                    pred reduce {} --via path.json -o reduced.json",
                    source_name,
-                    dst_spec.name,
+                    dst_ref.name,
                    source_name,
-                    dst_spec.name,
+                    dst_ref.name,
                    input.display(),
                )
            })?
diff --git a/problemreductions-cli/src/dispatch.rs b/problemreductions-cli/src/dispatch.rs
index fc2823fd..659bba48 100644
--- a/problemreductions-cli/src/dispatch.rs
+++ b/problemreductions-cli/src/dispatch.rs
@@ -1,21 +1,13 @@
-use anyhow::{bail, Context, Result};
-use problemreductions::models::algebraic::{ClosestVectorProblem, ILP};
-use problemreductions::models::misc::{
-    BinPacking, FlowShopScheduling, Knapsack, LongestCommonSubsequence,
-    ShortestCommonSupersequence, SubsetSum,
-};
-use problemreductions::prelude::*;
+use anyhow::{Context, Result};
+use problemreductions::models::algebraic::ILP;
+use problemreductions::registry::{DynProblem, LoadedDynProblem};
use problemreductions::rules::{MinimizeSteps, ReductionGraph};
-use problemreductions::solvers::{BruteForce, ILPSolver, Solver};
-use problemreductions::topology::{KingsSubgraph, SimpleGraph, TriangularSubgraph, UnitDiskGraph};
+use problemreductions::solvers::ILPSolver;
+use problemreductions::traits::Problem;
use problemreductions::types::ProblemSize;
-use problemreductions::variant::{K2, K3, KN};
-use serde::Serialize;
use serde_json::Value;
use std::any::Any;
use std::collections::BTreeMap;
-use std::fmt;
-use std::ops::Deref;
use std::path::Path;

use crate::problem_name::resolve_alias;
@@ -34,100 +26,12 @@ pub fn read_input(path: &Path) -> Result<Value> {
    }
}

-/// Type-erased problem for CLI dispatch.
-#[allow(dead_code)]
-pub trait DynProblem: Any {
-    fn evaluate_dyn(&self, config: &[usize]) -> String;
-    fn serialize_json(&self) -> Value;
-    fn as_any(&self) -> &dyn Any;
-    fn dims_dyn(&self) -> Vec<usize>;
-    fn problem_name(&self) -> &'static str;
-    fn variant_map(&self) -> BTreeMap<String, String>;
-    fn num_variables_dyn(&self) -> usize;
-}
-
-impl<T> DynProblem for T
-where
-    T: Problem + Serialize + 'static,
-    T::Metric: fmt::Debug,
-{
-    fn evaluate_dyn(&self, config: &[usize]) -> String {
-        format!("{:?}", self.evaluate(config))
-    }
-    fn serialize_json(&self) -> Value {
-        serde_json::to_value(self).expect("serialize failed")
-    }
-    fn as_any(&self) -> &dyn Any {
-        self
-    }
-    fn dims_dyn(&self) -> Vec<usize> {
-        self.dims()
-    }
-    fn problem_name(&self) -> &'static str {
-        T::NAME
-    }
-    fn variant_map(&self) -> BTreeMap<String, String> {
-        T::variant()
-            .into_iter()
-            .map(|(k, v)| (k.to_string(), v.to_string()))
-            .collect()
-    }
-    fn num_variables_dyn(&self) -> usize {
-        self.num_variables()
-    }
-}
-
-fn deser_opt<T>(data: Value) -> Result<LoadedProblem>
-where
-    T: OptimizationProblem + Serialize + serde::de::DeserializeOwned + 'static,
-    T::Metric: fmt::Debug,
-{
-    let problem: T = serde_json::from_value(data)?;
-    Ok(LoadedProblem {
-        inner: Box::new(problem),
-        brute_force_fn: bf_opt::<T>,
-    })
-}
-
-fn deser_sat<T>(data: Value) -> Result<LoadedProblem>
-where
-    T: Problem + Serialize + serde::de::DeserializeOwned + 'static,
-{
-    let problem: T = serde_json::from_value(data)?;
-    Ok(LoadedProblem {
-        inner: Box::new(problem),
-        brute_force_fn: bf_sat::<T>,
-    })
-}
-
-fn bf_opt<T>(any: &dyn Any) -> Option<SolveResult>
-where
-    T: OptimizationProblem + 'static,
-    T::Metric: fmt::Debug,
-{
-    let p = any.downcast_ref::<T>()?;
-    let config = BruteForce::new().find_best(p)?;
-    let evaluation = format!("{:?}", p.evaluate(&config));
-    Some(SolveResult { config, evaluation })
-}
-
-fn bf_sat<T>(any: &dyn Any) -> Option<SolveResult>
-where
-    T: Problem + 'static,
-{
-    let p = any.downcast_ref::<T>()?;
-    let config = BruteForce::new().find_satisfying(p)?;
-    let evaluation = format!("{:?}", p.evaluate(&config));
-    Some(SolveResult { config, evaluation })
-}
-
/// Loaded problem with type-erased solve capability.
pub struct LoadedProblem {
-    inner: Box<dyn DynProblem>,
-    brute_force_fn: fn(&dyn Any) -> Option<SolveResult>,
+    inner: LoadedDynProblem,
}

-impl Deref for LoadedProblem {
+impl std::ops::Deref for LoadedProblem {
    type Target = dyn DynProblem;
    fn deref(&self) -> &(dyn DynProblem + 'static) {
        &*self.inner
@@ -136,8 +40,11 @@ impl Deref for LoadedProblem {

impl LoadedProblem {
    pub fn solve_brute_force(&self) -> Result<SolveResult> {
-        (self.brute_force_fn)(self.inner.as_any())
-            .ok_or_else(|| anyhow::anyhow!("No solution found"))
+        let (config, evaluation) = self
+            .inner
+            .solve_brute_force()
+            .ok_or_else(|| anyhow::anyhow!("No solution found"))?;
+        Ok(SolveResult { config, evaluation })
    }

    /// Solve using the ILP solver. If the problem is not ILP, auto-reduce to ILP first.
@@ -186,13 +93,6 @@ impl LoadedProblem {
    }
}

-fn graph_variant(variant: &BTreeMap<String, String>) -> &str {
-    variant
-        .get("graph")
-        .map(|s| s.as_str())
-        .unwrap_or("SimpleGraph")
-}
-
/// Load a problem from JSON type/variant/data.
pub fn load_problem(
    name: &str,
@@ -200,72 +100,9 @@ pub fn load_problem(
    data: Value,
) -> Result<LoadedProblem> {
    let canonical = resolve_alias(name);
-    match canonical.as_str() {
-        "MaximumIndependentSet" => match graph_variant(variant) {
-            "KingsSubgraph" => deser_opt::<MaximumIndependentSet<KingsSubgraph, i32>>(data),
-            "TriangularSubgraph" => {
-                deser_opt::<MaximumIndependentSet<TriangularSubgraph, i32>>(data)
-            }
-            "UnitDiskGraph" => deser_opt::<MaximumIndependentSet<UnitDiskGraph, i32>>(data),
-            _ => deser_opt::<MaximumIndependentSet<SimpleGraph, i32>>(data),
-        },
-        "MinimumVertexCover" => deser_opt::<MinimumVertexCover<SimpleGraph, i32>>(data),
-        "MaximumClique" => deser_opt::<MaximumClique<SimpleGraph, i32>>(data),
-        "MaximumMatching" => deser_opt::<MaximumMatching<SimpleGraph, i32>>(data),
-        "MinimumDominatingSet" => deser_opt::<MinimumDominatingSet<SimpleGraph, i32>>(data),
-        "MinimumSumMulticenter" => deser_opt::<MinimumSumMulticenter<SimpleGraph, i32>>(data),
-        "GraphPartitioning" => deser_opt::<GraphPartitioning<SimpleGraph, i32>>(data),
-        "HamiltonianPath" => deser_sat::<HamiltonianPath<SimpleGraph>>(data),
-        "IsomorphicSpanningTree" => {
-            deser_sat::<IsomorphicSpanningTree>(data)
-        }
-        "MaxCut" => deser_opt::<MaxCut<SimpleGraph, i32>>(data),
-        "MaximalIS" => deser_opt::<MaximalIS<SimpleGraph, i32>>(data),
-        "TravelingSalesman" => deser_opt::<TravelingSalesman<SimpleGraph, i32>>(data),
-        "RuralPostman" => deser_sat::<RuralPostman<SimpleGraph>>(data),
-        "KColoring" => match variant.get("k").map(|s| s.as_str()) {
-            Some("K3") => deser_sat::<KColoring<K3, SimpleGraph>>(data),
-            _ => deser_sat::<KColoring<KN, SimpleGraph>>(data),
-        },
-        "MaximumSetPacking" => deser_opt::<MaximumSetPacking<i32>>(data),
-        "MinimumSetCovering" => deser_opt::<MinimumSetCovering<i32>>(data),
-        "QUBO" => deser_opt::<QUBO<f64>>(data),
-        "SpinGlass" => match variant.get("weight").map(|s| s.as_str()) {
-            Some("f64") => deser_opt::<SpinGlass<SimpleGraph, f64>>(data),
-            _ => deser_opt::<SpinGlass<SimpleGraph, i32>>(data),
-        },
-        "Satisfiability" => deser_sat::<Satisfiability>(data),
-        "KSatisfiability" => match variant.get("k").map(|s| s.as_str()) {
-            Some("K2") => deser_sat::<KSatisfiability<K2>>(data),
-            Some("K3") => deser_sat::<KSatisfiability<K3>>(data),
-            _ => deser_sat::<KSatisfiability<KN>>(data),
-        },
-        "CircuitSAT" => deser_sat::<CircuitSAT>(data),
-        "Factoring" => deser_opt::<Factoring>(data),
-        "ILP" => deser_opt::<ILP>(data),
-        "BicliqueCover" => deser_opt::<BicliqueCover>(data),
-        "BMF" => deser_opt::<BMF>(data),
-        "PaintShop" => deser_opt::<PaintShop>(data),
-        "BinPacking" => match variant.get("weight").map(|s| s.as_str()) {
-            Some("f64") => deser_opt::<BinPacking<f64>>(data),
-            _ => deser_opt::<BinPacking<i32>>(data),
-        },
-        "ClosestVectorProblem" => match variant.get("weight").map(|s| s.as_str()) {
-            Some("f64") => deser_opt::<ClosestVectorProblem<f64>>(data),
-            _ => deser_opt::<ClosestVectorProblem<i32>>(data),
-        },
-        "Knapsack" => deser_opt::<Knapsack>(data),
-        "OptimalLinearArrangement" => deser_sat::<OptimalLinearArrangement<SimpleGraph>>(data),
-        "SubgraphIsomorphism" => deser_sat::<SubgraphIsomorphism>(data),
-        "PartitionIntoTriangles" => deser_sat::<PartitionIntoTriangles<SimpleGraph>>(data),
-        "LongestCommonSubsequence" => deser_opt::<LongestCommonSubsequence>(data),
-        "MinimumFeedbackVertexSet" => deser_opt::<MinimumFeedbackVertexSet<SimpleGraph, i32>>(data),
-        "FlowShopScheduling" => deser_sat::<FlowShopScheduling>(data),
-        "SubsetSum" => deser_sat::<SubsetSum>(data),
-        "ShortestCommonSupersequence" => deser_sat::<ShortestCommonSupersequence>(data),
-        "MinimumFeedbackArcSet" => deser_opt::<MinimumFeedbackArcSet<SimpleGraph, i32>>(data),
-        _ => bail!("{}", crate::problem_name::unknown_problem_error(&canonical)),
-    }
+    let inner = problemreductions::registry::load_dyn(&canonical, variant, data)
+        .map_err(|e| anyhow::anyhow!(e))?;
+    Ok(LoadedProblem { inner })
}

/// Serialize a `&dyn Any` target problem given its name and variant.
@@ -275,80 +112,13 @@ pub fn serialize_any_problem(
    any: &dyn Any,
) -> Result<Value> {
    let canonical = resolve_alias(name);
-    match canonical.as_str() {
-        "MaximumIndependentSet" => match graph_variant(variant) {
-            "KingsSubgraph" => try_ser::<MaximumIndependentSet<KingsSubgraph, i32>>(any),
-            "TriangularSubgraph" => try_ser::<MaximumIndependentSet<TriangularSubgraph, i32>>(any),
-            "UnitDiskGraph" => try_ser::<MaximumIndependentSet<UnitDiskGraph, i32>>(any),
-            _ => try_ser::<MaximumIndependentSet<SimpleGraph, i32>>(any),
-        },
-        "MinimumVertexCover" => try_ser::<MinimumVertexCover<SimpleGraph, i32>>(any),
-        "MaximumClique" => try_ser::<MaximumClique<SimpleGraph, i32>>(any),
-        "MaximumMatching" => try_ser::<MaximumMatching<SimpleGraph, i32>>(any),
-        "MinimumDominatingSet" => try_ser::<MinimumDominatingSet<SimpleGraph, i32>>(any),
-        "MinimumSumMulticenter" => try_ser::<MinimumSumMulticenter<SimpleGraph, i32>>(any),
-        "GraphPartitioning" => try_ser::<GraphPartitioning<SimpleGraph, i32>>(any),
-        "HamiltonianPath" => try_ser::<HamiltonianPath<SimpleGraph>>(any),
-        "IsomorphicSpanningTree" => {
-            try_ser::<IsomorphicSpanningTree>(any)
-        }
-        "MaxCut" => try_ser::<MaxCut<SimpleGraph, i32>>(any),
-        "MaximalIS" => try_ser::<MaximalIS<SimpleGraph, i32>>(any),
-        "TravelingSalesman" => try_ser::<TravelingSalesman<SimpleGraph, i32>>(any),
-        "RuralPostman" => try_ser::<RuralPostman<SimpleGraph>>(any),
-        "KColoring" => match variant.get("k").map(|s| s.as_str()) {
-            Some("K3") => try_ser::<KColoring<K3, SimpleGraph>>(any),
-            _ => try_ser::<KColoring<KN, SimpleGraph>>(any),
-        },
-        "MaximumSetPacking" => match variant.get("weight").map(|s| s.as_str()) {
-            Some("f64") => try_ser::<MaximumSetPacking<f64>>(any),
-            _ => try_ser::<MaximumSetPacking<i32>>(any),
-        },
-        "MinimumSetCovering" => try_ser::<MinimumSetCovering<i32>>(any),
-        "QUBO" => try_ser::<QUBO<f64>>(any),
-        "SpinGlass" => match variant.get("weight").map(|s| s.as_str()) {
-            Some("f64") => try_ser::<SpinGlass<SimpleGraph, f64>>(any),
-            _ => try_ser::<SpinGlass<SimpleGraph, i32>>(any),
-        },
-        "Satisfiability" => try_ser::<Satisfiability>(any),
-        "KSatisfiability" => match variant.get("k").map(|s| s.as_str()) {
-            Some("K2") => try_ser::<KSatisfiability<K2>>(any),
-            Some("K3") => try_ser::<KSatisfiability<K3>>(any),
-            _ => try_ser::<KSatisfiability<KN>>(any),
-        },
-        "CircuitSAT" => try_ser::<CircuitSAT>(any),
-        "Factoring" => try_ser::<Factoring>(any),
-        "ILP" => try_ser::<ILP>(any),
-        "BicliqueCover" => try_ser::<BicliqueCover>(any),
-        "BMF" => try_ser::<BMF>(any),
-        "PaintShop" => try_ser::<PaintShop>(any),
-        "BinPacking" => match variant.get("weight").map(|s| s.as_str()) {
-            Some("f64") => try_ser::<BinPacking<f64>>(any),
-            _ => try_ser::<BinPacking<i32>>(any),
-        },
-        "ClosestVectorProblem" => match variant.get("weight").map(|s| s.as_str()) {
-            Some("f64") => try_ser::<ClosestVectorProblem<f64>>(any),
-            _ => try_ser::<ClosestVectorProblem<i32>>(any),
-        },
-        "Knapsack" => try_ser::<Knapsack>(any),
-        "OptimalLinearArrangement" => try_ser::<OptimalLinearArrangement<SimpleGraph>>(any),
-        "SubgraphIsomorphism" => try_ser::<SubgraphIsomorphism>(any),
-        "PartitionIntoTriangles" => try_ser::<PartitionIntoTriangles<SimpleGraph>>(any),
-        "LongestCommonSubsequence" => try_ser::<LongestCommonSubsequence>(any),
-        "MinimumFeedbackVertexSet" => try_ser::<MinimumFeedbackVertexSet<SimpleGraph, i32>>(any),
-        "FlowShopScheduling" => try_ser::<FlowShopScheduling>(any),
-        "SubsetSum" => try_ser::<SubsetSum>(any),
-        "ShortestCommonSupersequence" => try_ser::<ShortestCommonSupersequence>(any),
-        "MinimumFeedbackArcSet" => try_ser::<MinimumFeedbackArcSet<SimpleGraph, i32>>(any),
-        _ => bail!("{}", crate::problem_name::unknown_problem_error(&canonical)),
-    }
-}
-
-fn try_ser<T: Serialize + 'static>(any: &dyn Any) -> Result<Value> {
-    let problem = any
-        .downcast_ref::<T>()
-        .ok_or_else(|| anyhow::anyhow!("Type mismatch during serialization"))?;
-    Ok(serde_json::to_value(problem)?)
+    problemreductions::registry::serialize_any(&canonical, variant, any).ok_or_else(|| {
+        anyhow::anyhow!(
+            "Failed to serialize {} with variant {:?}",
+            canonical,
+            variant
+        )
+    })
}

/// JSON wrapper format for problem files.
@@ -403,3 +173,42 @@ fn solve_ilp(any: &dyn Any) -> Result<SolveResult> {
    let evaluation = format!("{:?}", problem.evaluate(&config));
    Ok(SolveResult { config, evaluation })
}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use problemreductions::models::graph::MaximumIndependentSet;
+    use problemreductions::models::misc::BinPacking;
+    use problemreductions::topology::SimpleGraph;
+
+    #[test]
+    fn test_load_problem_alias_uses_registry_dispatch() {
+        let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]);
+        let variant = BTreeMap::from([
+            ("graph".to_string(), "SimpleGraph".to_string()),
+            ("weight".to_string(), "i32".to_string()),
+        ]);
+        let loaded =
+            load_problem("MIS", &variant, serde_json::to_value(&problem).unwrap()).unwrap();
+        assert_eq!(loaded.problem_name(), "MaximumIndependentSet");
+    }
+
+    #[test]
+    fn test_load_problem_rejects_unresolved_weight_variant() {
+        let problem = BinPacking::new(vec![3i32, 3, 2, 2], 5i32);
+        let loaded = load_problem(
+            "BinPacking",
+            &BTreeMap::new(),
+            serde_json::to_value(&problem).unwrap(),
+        );
+        assert!(loaded.is_err());
+    }
+
+    #[test]
+    fn test_serialize_any_problem_round_trips_bin_packing() {
+        let problem = BinPacking::new(vec![3i32, 3, 2, 2], 5i32);
+        let variant = BTreeMap::from([("weight".to_string(), "i32".to_string())]);
+        let json = serialize_any_problem("BinPacking", &variant, &problem as &dyn Any).unwrap();
+        assert_eq!(json, serde_json::to_value(&problem).unwrap());
+    }
+}
diff --git a/problemreductions-cli/src/main.rs b/problemreductions-cli/src/main.rs
index b9e83221..b1f75d20 100644
--- a/problemreductions-cli/src/main.rs
+++ b/problemreductions-cli/src/main.rs
@@ -43,7 +43,13 @@ fn main() -> anyhow::Result<()> {
    };

    match cli.command {
-        Commands::List => commands::graph::list(&out),
+        Commands::List { rules } => {
+            if rules {
+                commands::graph::list_rules(&out)
+            } else {
+                commands::graph::list(&out)
+            }
+        }
        Commands::Show { problem } => commands::graph::show(&problem, &out),
        Commands::To { problem, hops } => commands::graph::neighbors(&problem, hops, "in", &out),
        Commands::From { problem, hops } => commands::graph::neighbors(&problem, hops, "out", &out),
@@ -52,7 +58,8 @@ fn main() -> anyhow::Result<()> {
            target,
            cost,
            all,
-        } => commands::graph::path(&source, &target, &cost, all, &out),
+            max_paths,
+        } => commands::graph::path(&source, &target, &cost, all, max_paths, &out),
        Commands::ExportGraph => commands::graph::export(&out),
        Commands::Inspect(args) => commands::inspect::inspect(&args.input, &out),
        Commands::Create(args) => commands::create::create(&args, &out),
diff --git a/problemreductions-cli/src/mcp/prompts.rs b/problemreductions-cli/src/mcp/prompts.rs
index 102e86b5..b0289060 100644
--- a/problemreductions-cli/src/mcp/prompts.rs
+++ b/problemreductions-cli/src/mcp/prompts.rs
@@ -9,12 +9,9 @@ pub fn list_prompts() -> Vec<Prompt> {
                "Explain a problem type: what it models, its variants, and how it connects to \
                 other problems",
            ),
-            Some(vec![PromptArgument {
-                name: "problem".into(),
-                title: None,
-                description: Some("Problem name or alias (e.g., MIS, QUBO, MaxCut)".into()),
-                required: Some(true),
-            }]),
+            Some(vec![PromptArgument::new("problem")
+                .with_description("Problem name or alias (e.g., MIS, QUBO, MaxCut)")
+                .with_required(true)]),
        ),
        Prompt::new(
            "model_my_problem",
@@ -22,12 +19,9 @@ pub fn list_prompts() -> Vec<Prompt> {
                "Map a real-world problem to the closest NP-hard problem type in the reduction \
                 graph",
            ),
-            Some(vec![PromptArgument {
-                name: "description".into(),
-                title: None,
-                description: Some("Free-text description of your real-world problem".into()),
-                required: Some(true),
-            }]),
+            Some(vec![PromptArgument::new("description")
+                .with_description("Free-text description of your real-world problem")
+                .with_required(true)]),
        ),
        Prompt::new(
            "compare",
@@ -36,18 +30,12 @@ pub fn list_prompts() -> Vec<Prompt> {
                 between them",
            ),
            Some(vec![
-                PromptArgument {
-                    name: "problem_a".into(),
-                    title: None,
-                    description: Some("First problem name or alias".into()),
-                    required: Some(true),
-                },
-                PromptArgument {
-                    name: "problem_b".into(),
-                    title: None,
-                    description: Some("Second problem name or alias".into()),
-                    required: Some(true),
-                },
+                PromptArgument::new("problem_a")
+                    .with_description("First problem name or alias")
+                    .with_required(true),
+                PromptArgument::new("problem_b")
+                    .with_description("Second problem name or alias")
+                    .with_required(true),
            ]),
        ),
        Prompt::new(
@@ -57,57 +45,38 @@ pub fn list_prompts() -> Vec<Prompt> {
                 and map the solution back",
            ),
            Some(vec![
-                PromptArgument {
-                    name: "source".into(),
-                    title: None,
-                    description: Some("Source problem name or alias".into()),
-                    required: Some(true),
-                },
-                PromptArgument {
-                    name: "target".into(),
-                    title: None,
-                    description: Some("Target problem name or alias".into()),
-                    required: Some(true),
-                },
+                PromptArgument::new("source")
+                    .with_description("Source problem name or alias")
+                    .with_required(true),
+                PromptArgument::new("target")
+                    .with_description("Target problem name or alias")
+                    .with_required(true),
            ]),
        ),
        Prompt::new(
            "solve",
            Some("Create and solve a problem instance, showing the optimal solution"),
            Some(vec![
-                PromptArgument {
-                    name: "problem_type".into(),
-                    title: None,
-                    description: Some("Problem name or alias (e.g., MIS, QUBO, MaxCut)".into()),
-                    required: Some(true),
-                },
-                PromptArgument {
-                    name: "instance".into(),
-                    title: None,
-                    description: Some(
-                        "Instance parameters (e.g., \"edges: 0-1,1-2\" or \"clauses: 1,2;-1,3\")"
-                            .into(),
-                    ),
-                    required: Some(true),
-                },
+                PromptArgument::new("problem_type")
+                    .with_description("Problem name or alias (e.g., MIS, QUBO, MaxCut)")
+                    .with_required(true),
+                PromptArgument::new("instance")
+                    .with_description(
+                        "Instance parameters (e.g., \"edges: 0-1,1-2\" or \"clauses: 1,2;-1,3\")",
+                    )
+                    .with_required(true),
            ]),
        ),
        Prompt::new(
            "find_reduction",
            Some("Find the best reduction path between two problems, with cost analysis"),
            Some(vec![
-                PromptArgument {
-                    name: "source".into(),
-                    title: None,
-                    description: Some("Source problem name or alias".into()),
-                    required: Some(true),
-                },
-                PromptArgument {
-                    name: "target".into(),
-                    title: None,
-                    description: Some("Target problem name or alias".into()),
-                    required: Some(true),
-                },
+                PromptArgument::new("source")
+                    .with_description("Source problem name or alias")
+                    .with_required(true),
+                PromptArgument::new("target")
+                    .with_description("Target problem name or alias")
+                    .with_required(true),
            ]),
        ),
        Prompt::new(
@@ -118,176 +87,131 @@
    ]
}

+fn prompt_result(description: &str, user_message: &str) -> GetPromptResult {
+    GetPromptResult::new(vec![PromptMessage::new_text(
+        PromptMessageRole::User,
+        user_message,
+    )])
+    .with_description(description)
+}
+
/// Return the content for the named prompt, or `None` if the name is unknown.
pub fn get_prompt(
    name: &str,
    arguments: &serde_json::Map<String, serde_json::Value>,
) -> Option<GetPromptResult> {
+    let get = |key: &str, default: &str| -> String {
+        arguments
+            .get(key)
+            .and_then(|v| v.as_str())
+            .unwrap_or(default)
+            .to_string()
+    };
+
    match name {
        "what_is" => {
-            let problem = arguments
-                .get("problem")
-                .and_then(|v| v.as_str())
-                .unwrap_or("MIS");
-
-            Some(GetPromptResult {
-                description: Some(format!("Explain the {} problem", problem)),
-                messages: vec![PromptMessage::new_text(
-                    PromptMessageRole::User,
-                    format!(
-                        "Explain the \"{problem}\" problem to me.\n\n\
-                         What does it model in the real world? What are its variants (graph types, \
-                         weight types)? What other problems can it reduce to, and which problems \
-                         reduce to it?\n\n\
-                         Give me a concise summary suitable for someone encountering this problem \
-                         for the first time, then show the technical details."
-                    ),
-                )],
-            })
+            let problem = get("problem", "MIS");
+            Some(prompt_result(
+                &format!("Explain the {problem} problem"),
+                &format!(
+                    "Explain the \"{problem}\" problem to me.\n\n\
+                     What does it model in the real world? What are its variants (graph types, \
+                     weight types)? What other problems can it reduce to, and which problems \
+                     reduce to it?\n\n\
+                     Give me a concise summary suitable for someone encountering this problem \
+                     for the first time, then show the technical details."
+                ),
+            ))
        }
        "model_my_problem" => {
-            let description = arguments
-                .get("description")
-                .and_then(|v| v.as_str())
-                .unwrap_or("(no description provided)");
-
-            Some(GetPromptResult {
-                description: Some("Map a real-world problem to an NP-hard problem type".into()),
-                messages: vec![PromptMessage::new_text(
-                    PromptMessageRole::User,
-                    format!(
-                        "I have a real-world problem and I need help identifying which NP-hard \
-                         problem type it maps to.\n\n\
-                         Here's my problem: \"{description}\"\n\n\
-                         Look through the available problem types in the reduction graph and \
-                         identify which one(s) best model my problem. Explain why it's a good \
-                         fit, what the variables and constraints map to, and suggest how I could \
-                         encode my specific instance."
-                    ),
-                )],
-            })
+            let description = get("description", "(no description provided)");
+            Some(prompt_result(
+                "Map a real-world problem to an NP-hard problem type",
+                &format!(
+                    "I have a real-world problem and I need help identifying which NP-hard \
+                     problem type it maps to.\n\n\
+                     Here's my problem: \"{description}\"\n\n\
+                     Look through the available problem types in the reduction graph and \
+                     identify which one(s) best model my problem. Explain why it's a good \
+                     fit, what the variables and constraints map to, and suggest how I could \
+                     encode my specific instance."
+                ),
+            ))
        }
        "compare" => {
-            let problem_a = arguments
-                .get("problem_a")
-                .and_then(|v| v.as_str())
-                .unwrap_or("MIS");
-            let problem_b = arguments
-                .get("problem_b")
-                .and_then(|v| v.as_str())
-                .unwrap_or("VertexCover");
-
-            Some(GetPromptResult {
-                description: Some(format!("Compare {} and {}", problem_a, problem_b)),
-                messages: vec![PromptMessage::new_text(
-                    PromptMessageRole::User,
-                    format!(
-                        "Compare \"{problem_a}\" and \"{problem_b}\".\n\n\
-                         How are they related? Is there a direct reduction between them, or do \
-                         they connect through intermediate problems? What are the key differences \
-                         in what they model? If one can be reduced to the other, what is the \
-                         overhead?"
-                    ),
-                )],
-            })
+            let a = get("problem_a", "MIS");
+            let b = get("problem_b", "VertexCover");
+            Some(prompt_result(
+                &format!("Compare {a} and {b}"),
+                &format!(
+                    "Compare \"{a}\" and \"{b}\".\n\n\
+                     How are they related? Is there a direct reduction between them, or do \
+                     they connect through intermediate problems? What are the key differences \
+                     in what they model? If one can be reduced to the other, what is the \
+                     overhead?"
+                ),
+            ))
        }
        "reduce" => {
-            let source = arguments
-                .get("source")
-                .and_then(|v| v.as_str())
-                .unwrap_or("MIS");
-            let target = arguments
-                .get("target")
-                .and_then(|v| v.as_str())
-                .unwrap_or("QUBO");
-
-            Some(GetPromptResult {
-                description: Some(format!(
-                    "Step-by-step reduction from {} to {}",
-                    source, target
-                )),
-                messages: vec![PromptMessage::new_text(
-                    PromptMessageRole::User,
-                    format!(
-                        "Walk me through reducing a \"{source}\" instance to \"{target}\", step \
-                         by step.\n\n\
-                         1. Find the reduction path and explain the overhead.\n\
-                         2. Create a small, concrete example instance of \"{source}\".\n\
-                         3. Reduce it to \"{target}\" and show what the transformed instance \
-                         looks like.\n\
-                         4. Solve the reduced instance.\n\
-                         5. Explain how the solution maps back to the original problem.\n\n\
-                         Use a small example so I can follow each transformation by hand."
-                    ),
-                )],
-            })
+            let source = get("source", "MIS");
+            let target = get("target", "QUBO");
+            Some(prompt_result(
+                &format!("Step-by-step reduction from {source} to {target}"),
+                &format!(
+                    "Walk me through reducing a \"{source}\" instance to \"{target}\", step \
+                     by step.\n\n\
+                     1. Find the reduction path and explain the overhead.\n\
+                     2. Create a small, concrete example instance of \"{source}\".\n\
+                     3. Reduce it to \"{target}\" and show what the transformed instance \
+                     looks like.\n\
+                     4. Solve the reduced instance.\n\
+                     5. Explain how the solution maps back to the original problem.\n\n\
+                     Use a small example so I can follow each transformation by hand."
+                ),
+            ))
        }
        "solve" => {
-            let problem_type = arguments
-                .get("problem_type")
-                .and_then(|v| v.as_str())
-                .unwrap_or("MIS");
-            let instance = arguments
-                .get("instance")
-                .and_then(|v| v.as_str())
-                .unwrap_or("edges: 0-1,1-2,2-0");
-
-            Some(GetPromptResult {
-                description: Some(format!("Solve a {} instance", problem_type)),
-                messages: vec![PromptMessage::new_text(
-                    PromptMessageRole::User,
-                    format!(
-                        "Create a {problem_type} instance with these parameters: {instance}\n\n\
-                         Solve it and show me:\n\
-                         - The problem instance details (size, structure)\n\
-                         - The optimal solution and its objective value\n\
-                         - Why this solution is optimal (briefly)"
-                    ),
-                )],
-            })
+            let problem_type = get("problem_type", "MIS");
+            let instance = get("instance", "edges: 0-1,1-2,2-0");
+            Some(prompt_result(
+                &format!("Solve a {problem_type} instance"),
+                &format!(
+                    "Create a {problem_type} instance with these parameters: {instance}\n\n\
+                     Solve it and show me:\n\
+                     - The problem instance details (size, structure)\n\
+                     - The optimal solution and its objective value\n\
+                     - Why this solution is optimal (briefly)"
+                ),
+            ))
        }
        "find_reduction" => {
-            let source = arguments
-                .get("source")
-                .and_then(|v| v.as_str())
-                .unwrap_or("SAT");
-            let target = arguments
-                .get("target")
-                .and_then(|v| v.as_str())
-                .unwrap_or("QUBO");
-
-            Some(GetPromptResult {
-                description: Some(format!("Find reduction path from {} to {}", source, target)),
-                messages: vec![PromptMessage::new_text(
-                    PromptMessageRole::User,
-                    format!(
-                        "Find the best way to reduce \"{source}\" to \"{target}\".\n\n\
-                         Show me the cheapest reduction path and explain the cost at each step. \
-                         Are there alternative paths? If so, compare them — which is better for \
-                         small instances vs. large instances?"
-                    ),
-                )],
-            })
+            let source = get("source", "SAT");
+            let target = get("target", "QUBO");
+            Some(prompt_result(
+                &format!("Find reduction path from {source} to {target}"),
+                &format!(
+                    "Find the best way to reduce \"{source}\" to \"{target}\".\n\n\
+                     Show me the cheapest reduction path and explain the cost at each step. \
+                     Are there alternative paths? If so, compare them — which is better for \
+                     small instances vs. large instances?"
+                ),
+            ))
        }
-        "overview" => Some(GetPromptResult {
-            description: Some("Overview of the NP-hard problem reduction landscape".into()),
-            messages: vec![PromptMessage::new_text(
-                PromptMessageRole::User,
-                "Give me an overview of the NP-hard problem reduction landscape.\n\n\
-                 How many problem types are registered? What are the major categories (graph, \
-                 SAT, optimization)? Which problems are the most connected hubs? Which problems \
-                 can reach the most targets through reductions?\n\n\
-                 Summarize the structure so I understand what's available and where to start \
-                 exploring."
-                    .to_string(),
-            )],
-        }),
+        "overview" => Some(prompt_result(
+            "Overview of the NP-hard problem reduction landscape",
+            "Give me an overview of the NP-hard problem reduction landscape.\n\n\
+             How many problem types are registered? What are the major categories (graph, \
+             SAT, optimization)? Which problems are the most connected hubs?
Which problems \ + can reach the most targets through reductions?\n\n\ + Summarize the structure so I understand what's available and where to start \ + exploring.", + )), _ => None, } diff --git a/problemreductions-cli/src/mcp/tests.rs b/problemreductions-cli/src/mcp/tests.rs index 380e7d3a..d2dd0fd0 100644 --- a/problemreductions-cli/src/mcp/tests.rs +++ b/problemreductions-cli/src/mcp/tests.rs @@ -31,7 +31,7 @@ mod tests { #[test] fn test_find_path() { let server = McpServer::new(); - let result = server.find_path_inner("MIS", "QUBO", "minimize-steps", false); + let result = server.find_path_inner("MIS", "QUBO", "minimize-steps", false, 20); assert!(result.is_ok()); let json: serde_json::Value = serde_json::from_str(&result.unwrap()).unwrap(); assert!(json["path"].as_array().unwrap().len() > 0); @@ -40,21 +40,79 @@ mod tests { #[test] fn test_find_path_all() { let server = McpServer::new(); - let result = server.find_path_inner("MIS", "QUBO", "minimize-steps", true); + let result = server.find_path_inner("MIS", "QUBO", "minimize-steps", true, 20); assert!(result.is_ok()); let json: serde_json::Value = serde_json::from_str(&result.unwrap()).unwrap(); - // --all returns an array of path objects - assert!(json.as_array().unwrap().len() > 0); + // --all returns a structured envelope + assert!(json["paths"].as_array().unwrap().len() > 0); + assert!(json["truncated"].is_boolean()); + assert!(json["returned"].is_u64()); + assert!(json["max_paths"].is_u64()); + } + + #[test] + fn test_find_path_all_structured_response() { + let server = McpServer::new(); + let result = server.find_path_inner("MIS", "QUBO", "minimize-steps", true, 20); + assert!(result.is_ok()); + let json: serde_json::Value = serde_json::from_str(&result.unwrap()).unwrap(); + // Verify the structured envelope fields + let paths = json["paths"].as_array().unwrap(); + assert!(!paths.is_empty()); + let returned = json["returned"].as_u64().unwrap() as usize; + assert_eq!(returned, paths.len()); + 
assert_eq!(json["max_paths"].as_u64().unwrap(), 20); + // Each path should have steps, path, and overall_overhead + let first = &paths[0]; + assert!(first["steps"].is_u64()); + assert!(first["path"].is_array()); + assert!(first["overall_overhead"].is_array()); } #[test] fn test_find_path_no_route() { let server = McpServer::new(); // Pick two problems with no path (if any). Use an unknown problem to trigger an error. - let result = server.find_path_inner("NonExistent", "QUBO", "minimize-steps", false); + let result = server.find_path_inner("NonExistent", "QUBO", "minimize-steps", false, 20); assert!(result.is_err()); } + #[test] + fn test_show_problem_rejects_slash_spec() { + let server = McpServer::new(); + let result = server.show_problem_inner("MIS/UnitDiskGraph"); + assert!(result.is_err()); + let err = result.unwrap_err().to_string(); + assert!( + err.contains("type level"), + "error should mention type level: {err}" + ); + } + + #[test] + fn test_show_problem_marks_default() { + let server = McpServer::new(); + let result = server.show_problem_inner("MIS"); + assert!(result.is_ok()); + let json: serde_json::Value = serde_json::from_str(&result.unwrap()).unwrap(); + let variants = json["variants"].as_array().unwrap(); + // At least one variant should be marked as default + let has_default = variants + .iter() + .any(|v| v["is_default"].as_bool() == Some(true)); + assert!( + has_default, + "expected at least one variant marked is_default=true" + ); + // All variants should have the is_default field + for v in variants { + assert!( + v["is_default"].is_boolean(), + "expected is_default field on variant: {v}" + ); + } + } + #[test] fn test_neighbors_out() { let server = McpServer::new(); @@ -294,4 +352,15 @@ mod tests { assert_eq!(json["kind"], "bundle"); assert_eq!(json["source"], "MaximumIndependentSet"); } + + #[test] + fn test_solve_sat_problem() { + let server = McpServer::new(); + let params = serde_json::json!({"num_vars": 2, "clauses": "1;-2"}); + let 
problem_json = server.create_problem_inner("SAT", ¶ms).unwrap(); + let result = server.solve_inner(&problem_json, Some("brute-force"), None); + assert!(result.is_ok()); + let json: serde_json::Value = serde_json::from_str(&result.unwrap()).unwrap(); + assert_eq!(json["solver"], "brute-force"); + } } diff --git a/problemreductions-cli/src/mcp/tools.rs b/problemreductions-cli/src/mcp/tools.rs index 2485467c..e35a076f 100644 --- a/problemreductions-cli/src/mcp/tools.rs +++ b/problemreductions-cli/src/mcp/tools.rs @@ -7,9 +7,7 @@ use problemreductions::models::graph::{ }; use problemreductions::models::misc::Factoring; use problemreductions::registry::collect_schemas; -use problemreductions::rules::{ - CustomCost, MinimizeSteps, ReductionGraph, ReductionPath, TraversalDirection, -}; +use problemreductions::rules::{CustomCost, MinimizeSteps, ReductionGraph, TraversalDirection}; use problemreductions::topology::{ Graph, KingsSubgraph, SimpleGraph, TriangularSubgraph, UnitDiskGraph, }; @@ -23,9 +21,7 @@ use std::collections::BTreeMap; use crate::dispatch::{ load_problem, serialize_any_problem, PathStep, ProblemJson, ProblemJsonOutput, ReductionBundle, }; -use crate::problem_name::{ - aliases_for, parse_problem_spec, resolve_variant, unknown_problem_error, -}; +use crate::problem_name::{aliases_for, resolve_problem_ref, unknown_problem_error}; // --------------------------------------------------------------------------- // Parameter structs — graph query tools @@ -57,6 +53,8 @@ pub struct FindPathParams { pub cost: Option, #[schemars(description = "Return all paths instead of just the cheapest")] pub all: Option, + #[schemars(description = "Maximum paths to return in all mode (default: 20)")] + pub max_paths: Option, } // --------------------------------------------------------------------------- @@ -161,56 +159,54 @@ impl McpServer { } pub fn show_problem_inner(&self, problem: &str) -> anyhow::Result { - let spec = parse_problem_spec(problem)?; let graph = 
ReductionGraph::new(); + let resolved = resolve_problem_ref(problem, &graph)?; + let name = &resolved.name; + let variant = &resolved.variant; - let variants = graph.variants_for(&spec.name); - if variants.is_empty() { - anyhow::bail!("{}", unknown_problem_error(&spec.name)); - } + let default_variant = graph.default_variant_for(name); + let is_default = default_variant.as_ref() == Some(variant); let schemas = collect_schemas(); - let schema = schemas.iter().find(|s| s.name == spec.name); + let schema = schemas.iter().find(|s| s.name == *name); - let outgoing = graph.outgoing_reductions(&spec.name); - let incoming = graph.incoming_reductions(&spec.name); - let size_fields = graph.size_field_names(&spec.name); + let outgoing: Vec<_> = graph + .outgoing_reductions(name) + .into_iter() + .filter(|e| &e.source_variant == variant) + .collect(); + let incoming: Vec<_> = graph + .incoming_reductions(name) + .into_iter() + .filter(|e| &e.target_variant == variant) + .collect(); + let size_fields = graph.size_field_names(name); + let complexity = graph.variant_complexity(name, variant).unwrap_or(""); - let variants_json: Vec = variants - .iter() - .map(|v| { - let complexity = graph.variant_complexity(&spec.name, v).unwrap_or(""); - serde_json::json!({ - "variant": v, - "complexity": complexity, + let edge_to_json = |e: &problemreductions::rules::ReductionEdgeInfo| { + let overhead: Vec = e + .overhead + .output_size + .iter() + .map(|(field, poly)| { + serde_json::json!({"field": field, "formula": poly.to_string()}) }) + .collect(); + serde_json::json!({ + "source": {"name": e.source_name, "variant": e.source_variant}, + "target": {"name": e.target_name, "variant": e.target_variant}, + "overhead": overhead, }) - .collect(); + }; let mut json = serde_json::json!({ - "name": spec.name, - "variants": variants_json, + "name": name, + "variant": variant, + "default": is_default, + "complexity": complexity, "size_fields": &size_fields, - "reduces_to": outgoing.iter().map(|e| { - 
let overhead: Vec = e.overhead.output_size.iter() - .map(|(field, poly)| serde_json::json!({"field": field, "formula": poly.to_string()})) - .collect(); - serde_json::json!({ - "source": {"name": e.source_name, "variant": e.source_variant}, - "target": {"name": e.target_name, "variant": e.target_variant}, - "overhead": overhead, - }) - }).collect::>(), - "reduces_from": incoming.iter().map(|e| { - let overhead: Vec = e.overhead.output_size.iter() - .map(|(field, poly)| serde_json::json!({"field": field, "formula": poly.to_string()})) - .collect(); - serde_json::json!({ - "source": {"name": e.source_name, "variant": e.source_variant}, - "target": {"name": e.target_name, "variant": e.target_variant}, - "overhead": overhead, - }) - }).collect::>(), + "reduces_to": outgoing.iter().map(&edge_to_json).collect::>(), + "reduces_from": incoming.iter().map(&edge_to_json).collect::>(), }); if let Some(s) = schema { if let (Some(obj), Ok(schema_val)) = (json.as_object_mut(), serde_json::to_value(s)) { @@ -227,26 +223,15 @@ impl McpServer { hops: usize, direction_str: &str, ) -> anyhow::Result { - let spec = parse_problem_spec(problem)?; let graph = ReductionGraph::new(); - - let variants = graph.variants_for(&spec.name); - if variants.is_empty() { - anyhow::bail!("{}", unknown_problem_error(&spec.name)); - } + let resolved = resolve_problem_ref(problem, &graph)?; let direction = parse_direction(direction_str)?; - let variant = if spec.variant_values.is_empty() { - variants[0].clone() - } else { - resolve_variant(&spec, &variants)? 
- }; - - let neighbors = graph.k_neighbors(&spec.name, &variant, hops, direction); + let neighbors = graph.k_neighbors(&resolved.name, &resolved.variant, hops, direction); let json = serde_json::json!({ - "source": spec.name, + "source": resolved.name, "hops": hops, "direction": direction_str, "neighbors": neighbors.iter().map(|n| { @@ -266,61 +251,51 @@ impl McpServer { target: &str, cost: &str, all: bool, + max_paths: usize, ) -> anyhow::Result { - let src_spec = parse_problem_spec(source)?; - let dst_spec = parse_problem_spec(target)?; let graph = ReductionGraph::new(); - - let src_variants = graph.variants_for(&src_spec.name); - let dst_variants = graph.variants_for(&dst_spec.name); - - if src_variants.is_empty() { - anyhow::bail!("{}", unknown_problem_error(&src_spec.name)); - } - if dst_variants.is_empty() { - anyhow::bail!("{}", unknown_problem_error(&dst_spec.name)); - } + let src_ref = resolve_problem_ref(source, &graph)?; + let dst_ref = resolve_problem_ref(target, &graph)?; if all { - let sv = if src_spec.variant_values.is_empty() { - src_variants[0].clone() - } else { - resolve_variant(&src_spec, &src_variants)? - }; - let dv = if dst_spec.variant_values.is_empty() { - dst_variants[0].clone() - } else { - resolve_variant(&dst_spec, &dst_variants)? 
- }; - let mut all_paths = graph.find_all_paths(&src_spec.name, &sv, &dst_spec.name, &dv); + // Fetch one extra to detect truncation + let mut all_paths = graph.find_paths_up_to( + &src_ref.name, + &src_ref.variant, + &dst_ref.name, + &dst_ref.variant, + max_paths + 1, + ); if all_paths.is_empty() { anyhow::bail!( "No reduction path from {} to {}", - src_spec.name, - dst_spec.name + src_ref.name, + dst_ref.name ); } all_paths.sort_by_key(|p| p.len()); - let json: serde_json::Value = all_paths + + let truncated = all_paths.len() > max_paths; + if truncated { + all_paths.truncate(max_paths); + } + let returned = all_paths.len(); + + let paths_json: Vec = all_paths .iter() .map(|p| format_path_json(&graph, p)) - .collect::>() - .into(); + .collect(); + + let json = serde_json::json!({ + "paths": paths_json, + "truncated": truncated, + "returned": returned, + "max_paths": max_paths, + }); return Ok(serde_json::to_string_pretty(&json)?); } // Single best path - let src_resolved = if src_spec.variant_values.is_empty() { - src_variants.clone() - } else { - vec![resolve_variant(&src_spec, &src_variants)?] - }; - let dst_resolved = if dst_spec.variant_values.is_empty() { - dst_variants.clone() - } else { - vec![resolve_variant(&dst_spec, &dst_variants)?] 
- }; - let input_size = ProblemSize::new(vec![]); let cost_field: Option = if cost == "minimize-steps" { @@ -334,44 +309,31 @@ impl McpServer { ); }; - let mut best_path: Option = None; - - for sv in &src_resolved { - for dv in &dst_resolved { - let found = match cost_field { - None => graph.find_cheapest_path( - &src_spec.name, - sv, - &dst_spec.name, - dv, - &input_size, - &MinimizeSteps, - ), - Some(ref f) => { - let cost_fn = CustomCost( - |overhead: &problemreductions::rules::ReductionOverhead, - size: &ProblemSize| { - overhead.evaluate_output_size(size).get(f).unwrap_or(0) as f64 - }, - ); - graph.find_cheapest_path( - &src_spec.name, - sv, - &dst_spec.name, - dv, - &input_size, - &cost_fn, - ) - } - }; - if let Some(p) = found { - let is_better = best_path.as_ref().is_none_or(|bp| p.len() < bp.len()); - if is_better { - best_path = Some(p); - } - } + let best_path = match cost_field { + None => graph.find_cheapest_path( + &src_ref.name, + &src_ref.variant, + &dst_ref.name, + &dst_ref.variant, + &input_size, + &MinimizeSteps, + ), + Some(ref f) => { + let cost_fn = CustomCost( + |overhead: &problemreductions::rules::ReductionOverhead, size: &ProblemSize| { + overhead.evaluate_output_size(size).get(f).unwrap_or(0) as f64 + }, + ); + graph.find_cheapest_path( + &src_ref.name, + &src_ref.variant, + &dst_ref.name, + &dst_ref.variant, + &input_size, + &cost_fn, + ) } - } + }; match best_path { Some(ref reduction_path) => { @@ -381,8 +343,8 @@ impl McpServer { None => { anyhow::bail!( "No reduction path from {} to {}", - src_spec.name, - dst_spec.name + src_ref.name, + dst_ref.name ); } } @@ -403,17 +365,10 @@ impl McpServer { problem_type: &str, params: &serde_json::Value, ) -> anyhow::Result { - let spec = parse_problem_spec(problem_type)?; - let canonical = spec.name.clone(); - - // Resolve variant from spec let rgraph = ReductionGraph::new(); - let known_variants = rgraph.variants_for(&canonical); - let resolved_variant = if known_variants.is_empty() { - 
BTreeMap::new() - } else { - resolve_variant(&spec, &known_variants)? - }; + let resolved = resolve_problem_ref(problem_type, &rgraph)?; + let canonical = resolved.name.clone(); + let resolved_variant = resolved.variant; let graph_type = resolved_variant .get("graph") .map(|s| s.as_str()) @@ -778,40 +733,21 @@ impl McpServer { let source_variant = source.variant_map(); let graph = ReductionGraph::new(); - let dst_spec = parse_problem_spec(target)?; - let dst_variants = graph.variants_for(&dst_spec.name); - if dst_variants.is_empty() { - anyhow::bail!("{}", unknown_problem_error(&dst_spec.name)); - } + let dst_ref = resolve_problem_ref(target, &graph)?; // Auto-discover cheapest path let input_size = ProblemSize::new(vec![]); - let mut best_path: Option = None; - - for dv in &dst_variants { - if let Some(p) = graph.find_cheapest_path( - source_name, - &source_variant, - &dst_spec.name, - dv, - &input_size, - &MinimizeSteps, - ) { - let is_better = best_path - .as_ref() - .is_none_or(|bp: &ReductionPath| p.len() < bp.len()); - if is_better { - best_path = Some(p); - } - } - } + let best_path = graph.find_cheapest_path( + source_name, + &source_variant, + &dst_ref.name, + &dst_ref.variant, + &input_size, + &MinimizeSteps, + ); let reduction_path = best_path.ok_or_else(|| { - anyhow::anyhow!( - "No reduction path from {} to {}", - source_name, - dst_spec.name - ) + anyhow::anyhow!("No reduction path from {} to {}", source_name, dst_ref.name) })?; // Execute reduction chain @@ -959,7 +895,8 @@ impl McpServer { fn find_path(&self, Parameters(params): Parameters) -> Result { let cost = params.cost.as_deref().unwrap_or("minimize-steps"); let all = params.all.unwrap_or(false); - self.find_path_inner(¶ms.source, ¶ms.target, cost, all) + let max_paths = params.max_paths.unwrap_or(20); + self.find_path_inner(¶ms.source, ¶ms.target, cost, all, max_paths) .map_err(|e| e.to_string()) } @@ -1040,27 +977,21 @@ impl McpServer { #[rmcp::tool_handler] impl rmcp::ServerHandler for 
McpServer { fn get_info(&self) -> rmcp::model::ServerInfo { - rmcp::model::ServerInfo { - protocol_version: rmcp::model::ProtocolVersion::V_2025_03_26, - capabilities: rmcp::model::ServerCapabilities { - tools: Some(rmcp::model::ToolsCapability::default()), - prompts: Some(rmcp::model::PromptsCapability::default()), - ..Default::default() - }, - server_info: rmcp::model::Implementation { - name: "problemreductions".into(), - version: env!("CARGO_PKG_VERSION").into(), - ..Default::default() - }, - instructions: Some( + let capabilities = rmcp::model::ServerCapabilities::builder() + .enable_tools() + .enable_prompts() + .build(); + let server_info = + rmcp::model::Implementation::new("problemreductions", env!("CARGO_PKG_VERSION")); + rmcp::model::ServerInfo::new(capabilities) + .with_server_info(server_info) + .with_instructions( "MCP server for NP-hard problem reductions. \ Graph query tools: list_problems, show_problem, neighbors, find_path, export_graph. \ Instance tools: create_problem to build instances, inspect_problem for details, \ evaluate to test configurations, reduce to transform between problem types, \ - solve to find optimal solutions." - .into(), - ), - } + solve to find optimal solutions.", + ) } async fn list_prompts( diff --git a/problemreductions-cli/src/problem_name.rs b/problemreductions-cli/src/problem_name.rs index bc2c1441..cc5796ef 100644 --- a/problemreductions-cli/src/problem_name.rs +++ b/problemreductions-cli/src/problem_name.rs @@ -1,4 +1,4 @@ -use std::collections::BTreeMap; +use std::collections::{BTreeMap, BTreeSet}; use std::ffi::OsStr; /// A parsed problem specification: name + optional variant values. @@ -10,161 +10,165 @@ pub struct ProblemSpec { pub variant_values: Vec, } -/// Alias entries: (alias, canonical_name). Only includes short aliases, -/// not the lowercase identity mappings. 
-pub const ALIASES: &[(&str, &str)] = &[ - ("MIS", "MaximumIndependentSet"), - ("MVC", "MinimumVertexCover"), - ("SAT", "Satisfiability"), - ("3SAT", "KSatisfiability"), - ("KSAT", "KSatisfiability"), - ("TSP", "TravelingSalesman"), - ("CVP", "ClosestVectorProblem"), - ("RPP", "RuralPostman"), - ("LCS", "LongestCommonSubsequence"), - ("MaxMatching", "MaximumMatching"), - ("OLA", "OptimalLinearArrangement"), - ("FVS", "MinimumFeedbackVertexSet"), - ("SCS", "ShortestCommonSupersequence"), - ("FAS", "MinimumFeedbackArcSet"), - ("pmedian", "MinimumSumMulticenter"), -]; - /// Resolve a short alias to the canonical problem name. +/// +/// Uses the catalog for both aliases and canonical names. pub fn resolve_alias(input: &str) -> String { - match input.to_lowercase().as_str() { - "mis" => "MaximumIndependentSet".to_string(), - "mvc" | "minimumvertexcover" => "MinimumVertexCover".to_string(), - "sat" | "satisfiability" => "Satisfiability".to_string(), - "3sat" => "KSatisfiability".to_string(), - "ksat" | "ksatisfiability" => "KSatisfiability".to_string(), - "qubo" => "QUBO".to_string(), - "graphpartitioning" => "GraphPartitioning".to_string(), - "isomorphicspanningtree" => "IsomorphicSpanningTree".to_string(), - "maxcut" => "MaxCut".to_string(), - "spinglass" => "SpinGlass".to_string(), - "ilp" => "ILP".to_string(), - "circuitsat" => "CircuitSAT".to_string(), - "factoring" => "Factoring".to_string(), - "maximumindependentset" => "MaximumIndependentSet".to_string(), - "maximumclique" => "MaximumClique".to_string(), - "maxmatching" | "maximummatching" => "MaximumMatching".to_string(), - "minimumdominatingset" => "MinimumDominatingSet".to_string(), - "minimumsetcovering" => "MinimumSetCovering".to_string(), - "maximumsetpacking" => "MaximumSetPacking".to_string(), - "kcoloring" => "KColoring".to_string(), - "maximalis" => "MaximalIS".to_string(), - "travelingsalesman" | "tsp" => "TravelingSalesman".to_string(), - "ruralpostman" | "rpp" => "RuralPostman".to_string(), - 
"paintshop" => "PaintShop".to_string(), - "bmf" => "BMF".to_string(), - "bicliquecover" => "BicliqueCover".to_string(), - "binpacking" => "BinPacking".to_string(), - "cvp" | "closestvectorproblem" => "ClosestVectorProblem".to_string(), - "knapsack" => "Knapsack".to_string(), - "optimallineararrangement" | "ola" => "OptimalLinearArrangement".to_string(), - "subgraphisomorphism" => "SubgraphIsomorphism".to_string(), - "partitionintotriangles" => "PartitionIntoTriangles".to_string(), - "lcs" | "longestcommonsubsequence" => "LongestCommonSubsequence".to_string(), - "fvs" | "minimumfeedbackvertexset" => "MinimumFeedbackVertexSet".to_string(), - "flowshopscheduling" => "FlowShopScheduling".to_string(), - "fas" | "minimumfeedbackarcset" => "MinimumFeedbackArcSet".to_string(), - "minimumsummulticenter" | "pmedian" => "MinimumSumMulticenter".to_string(), - "subsetsum" => "SubsetSum".to_string(), - "scs" | "shortestcommonsupersequence" => "ShortestCommonSupersequence".to_string(), - "hamiltonianpath" => "HamiltonianPath".to_string(), - _ => input.to_string(), // pass-through for exact names + if let Some(pt) = problemreductions::registry::find_problem_type_by_alias(input) { + return pt.canonical_name.to_string(); } + input.to_string() } /// Return the short aliases for a canonical problem name, if any. pub fn aliases_for(canonical: &str) -> Vec<&'static str> { - ALIASES - .iter() - .filter(|(_, name)| *name == canonical) - .map(|(alias, _)| *alias) - .collect() + problemreductions::registry::find_problem_type(canonical) + .map(|pt| pt.aliases.to_vec()) + .unwrap_or_default() +} + +/// Resolve a problem spec against the catalog schema only (no graph required). +/// +/// Returns a typed `ProblemRef` validated against the catalog's declared +/// dimensions and allowed values. Does NOT check reduction graph reachability. 
+#[cfg_attr(not(test), allow(dead_code))] +pub fn resolve_catalog_problem_ref( + input: &str, +) -> anyhow::Result { + problemreductions::registry::parse_catalog_problem_ref(input) + .map_err(|e| anyhow::anyhow!("{e}")) } /// Parse a problem spec string like "MIS/UnitDiskGraph/i32" into name + variant values. pub fn parse_problem_spec(input: &str) -> anyhow::Result { let parts: Vec<&str> = input.split('/').collect(); let raw_name = parts[0]; - let mut variant_values: Vec = parts[1..].iter().map(|s| s.to_string()).collect(); + let variant_values: Vec = parts[1..].iter().map(|s| s.to_string()).collect(); let name = resolve_alias(raw_name); - // Special case: "3SAT" implies K3 variant - if raw_name.to_lowercase() == "3sat" && variant_values.is_empty() { - variant_values.push("K3".to_string()); - } - Ok(ProblemSpec { name, variant_values, }) } -/// Build a variant BTreeMap by matching specified values against a problem's -/// known variants from the reduction graph. Uses value-based matching: -/// each specified value must appear as a value in the variant map. 
-pub fn resolve_variant( +fn format_variant(variant: &BTreeMap) -> String { + let parts = variant + .iter() + .map(|(key, value)| format!("{key}={value}")) + .collect::>() + .join(", "); + format!("{{{parts}}}") +} + +fn dimension_values( + known_variants: &[BTreeMap], +) -> BTreeMap> { + let mut by_dimension = BTreeMap::new(); + for variant in known_variants { + for (dimension, value) in variant { + by_dimension + .entry(dimension.clone()) + .or_insert_with(BTreeSet::new) + .insert(value.clone()); + } + } + by_dimension +} + +fn resolve_variant_updates( spec: &ProblemSpec, + default_variant: &BTreeMap, known_variants: &[BTreeMap], ) -> anyhow::Result> { if spec.variant_values.is_empty() { - // Return the first (default) variant, or empty - return Ok(known_variants.first().cloned().unwrap_or_default()); + return Ok(default_variant.clone()); } - // Value-based matching: find variant containing ALL specified values - let matches: Vec<_> = known_variants - .iter() - .filter(|v| { - spec.variant_values - .iter() - .all(|sv| v.values().any(|vv| vv == sv)) - }) - .collect(); - - match matches.len() { - 1 => Ok(matches[0].clone()), - 0 => anyhow::bail!( - "No variant of {} matches values {:?}. Known variants: {:?}", - spec.name, - spec.variant_values, - known_variants - ), - _ => { - // When ambiguous, use the same default ranking as the reduction graph: - // variants whose remaining (unmatched) fields are closest to defaults - // (SimpleGraph, One, KN) win. This matches variants_for() sort order. 
- let default_rank = |v: &BTreeMap| -> usize { - v.values() - .filter(|val| { - !spec.variant_values.contains(val) - && !["SimpleGraph", "One", "KN"].contains(&val.as_str()) - }) - .count() - }; - let min_rank = matches.iter().map(|v| default_rank(v)).min().unwrap(); - let best: Vec<_> = matches - .iter() - .filter(|v| default_rank(v) == min_rank) - .collect(); - if best.len() == 1 { - return Ok((*best[0]).clone()); + let token_index = dimension_values(known_variants); + let mut resolved = default_variant.clone(); + let mut updated_dimensions = BTreeSet::new(); + + for token in &spec.variant_values { + let matching_dimensions = token_index + .iter() + .filter(|(_, values)| values.contains(token)) + .map(|(dimension, _)| dimension.clone()) + .collect::>(); + + match matching_dimensions.as_slice() { + [] => anyhow::bail!("Unknown variant token \"{}\" for {}", token, spec.name), + [dimension] => { + if !updated_dimensions.insert(dimension.clone()) { + anyhow::bail!( + "Variant dimension \"{}\" was specified more than once", + dimension + ); + } + resolved.insert(dimension.clone(), token.clone()); + } + _ => { + let dimensions = matching_dimensions.join(" and "); + anyhow::bail!( + "Token \"{}\" is ambiguous for {}; matches dimensions {}", + token, + spec.name, + dimensions + ); } - anyhow::bail!( - "Ambiguous variant for {} with values {:?}. Matches: {:?}", - spec.name, - spec.variant_values, - matches - ) } } + + if known_variants.iter().any(|variant| variant == &resolved) { + Ok(resolved) + } else { + anyhow::bail!( + "Resolved variant {} is not declared for {}", + format_variant(&resolved), + spec.name + ) + } } +/// Parse the problem name from a spec string, resolving aliases. +/// +/// Accepts both bare names ("MIS") and slash specs ("MIS/UnitDiskGraph"). +/// Returns just the canonical name (alias-resolved). 
+#[cfg(test)] +pub fn parse_problem_type(input: &str) -> anyhow::Result { + let parts: Vec<&str> = input.split('/').collect(); + Ok(resolve_alias(parts[0])) +} + +/// Resolve a problem spec to a specific graph node using declared defaults. +/// +/// For bare names (no slash), returns the declared default variant. +/// For slash specs, resolves variant values against known variants. +pub fn resolve_problem_ref( + input: &str, + graph: &problemreductions::rules::ReductionGraph, +) -> anyhow::Result { + let spec = parse_problem_spec(input)?; + let known_variants = graph.variants_for(&spec.name); + + if known_variants.is_empty() { + anyhow::bail!("{}", unknown_problem_error(&spec.name)); + } + + let default_variant = graph + .default_variant_for(&spec.name) + .ok_or_else(|| anyhow::anyhow!("No default variant declared for {}", spec.name))?; + + let resolved = resolve_variant_updates(&spec, &default_variant, &known_variants)?; + Ok(ProblemRef { + name: spec.name, + variant: resolved, + }) +} + +use problemreductions::export::ProblemRef; + /// A value parser that accepts any string but provides problem names as /// completion candidates for shell completion scripts. #[derive(Clone)] @@ -183,11 +187,14 @@ impl clap::builder::TypedValueParser for ProblemNameParser { } fn possible_values(&self) -> Option>> { - let graph = problemreductions::rules::ReductionGraph::new(); - let mut names: Vec<&'static str> = graph.problem_types(); - for (alias, _) in ALIASES { - names.push(alias); + let mut names = Vec::new(); + for pt in problemreductions::registry::problem_types() { + names.push(pt.canonical_name); + for alias in pt.aliases { + names.push(alias); + } } + names.sort(); names.dedup(); Some(Box::new( @@ -198,24 +205,19 @@ impl clap::builder::TypedValueParser for ProblemNameParser { /// Find the closest matching problem names using edit distance. 
 pub fn suggest_problem_name(input: &str) -> Vec<String> {
-    let graph = problemreductions::rules::ReductionGraph::new();
-    let all_names = graph.problem_types();
-
     let input_lower = input.to_lowercase();
     let mut suggestions: Vec<(String, usize)> = Vec::new();
 
-    for name in all_names {
-        let dist = edit_distance(&input_lower, &name.to_lowercase());
+    for problem_type in problemreductions::registry::problem_types() {
+        let dist = edit_distance(&input_lower, &problem_type.canonical_name.to_lowercase());
         if dist <= 3 {
-            suggestions.push((name.to_string(), dist));
+            suggestions.push((problem_type.canonical_name.to_string(), dist));
         }
-    }
-
-    // Also check aliases
-    for (alias, canonical) in ALIASES {
-        let dist = edit_distance(&input_lower, &alias.to_lowercase());
-        if dist <= 2 {
-            suggestions.push((canonical.to_string(), dist));
+        for alias in problem_type.aliases {
+            let dist = edit_distance(&input_lower, &alias.to_lowercase());
+            if dist <= 2 {
+                suggestions.push((problem_type.canonical_name.to_string(), dist));
+            }
         }
     }
 
@@ -272,7 +274,8 @@ mod tests {
     assert_eq!(resolve_alias("mis"), "MaximumIndependentSet");
     assert_eq!(resolve_alias("MVC"), "MinimumVertexCover");
     assert_eq!(resolve_alias("SAT"), "Satisfiability");
-    assert_eq!(resolve_alias("3SAT"), "KSatisfiability");
+    // 3SAT is no longer a registered alias (removed to avoid confusion with KSatisfiability/KN)
+    assert_eq!(resolve_alias("3SAT"), "3SAT"); // pass-through
     assert_eq!(resolve_alias("QUBO"), "QUBO");
     assert_eq!(resolve_alias("MaxCut"), "MaxCut"); // Pass-through for full names
@@ -304,8 +307,15 @@ mod tests {
 }
 
 #[test]
-fn test_parse_problem_spec_3sat_alias() {
-    let spec = parse_problem_spec("3SAT").unwrap();
+fn test_parse_problem_spec_ksat_alias() {
+    let spec = parse_problem_spec("KSAT").unwrap();
+    assert_eq!(spec.name, "KSatisfiability");
+    assert!(spec.variant_values.is_empty());
+}
+
+#[test]
+fn test_parse_problem_spec_ksat_k3() {
+    let spec = parse_problem_spec("KSAT/K3").unwrap();
     assert_eq!(spec.name, "KSatisfiability");
     assert_eq!(spec.variant_values, vec!["K3"]);
 }
@@ -348,4 +358,126 @@ mod tests {
     assert_eq!(edit_distance("abc", "axc"), 1);
     assert_eq!(edit_distance("kitten", "sitting"), 3);
 }
+
+    // ---- parse_problem_type ----
+
+    #[test]
+    fn parse_problem_type_bare_name() {
+        // Bare name resolves alias
+        assert_eq!(parse_problem_type("MIS").unwrap(), "MaximumIndependentSet");
+        assert_eq!(parse_problem_type("QUBO").unwrap(), "QUBO");
+    }
+
+    #[test]
+    fn parse_problem_type_with_slash() {
+        // Slash specs extract the problem name portion
+        assert_eq!(
+            parse_problem_type("MIS/UnitDiskGraph").unwrap(),
+            "MaximumIndependentSet"
+        );
+    }
+
+    #[test]
+    fn parse_problem_type_ksat_alias() {
+        assert_eq!(parse_problem_type("KSAT").unwrap(), "KSatisfiability");
+    }
+
+    // ---- resolve_problem_ref ----
+
+    #[test]
+    fn resolve_problem_ref_bare_mis() {
+        // Bare MIS should resolve to the declared default variant
+        let graph = problemreductions::rules::ReductionGraph::new();
+        let r = resolve_problem_ref("MIS", &graph).unwrap();
+        assert_eq!(r.name, "MaximumIndependentSet");
+        assert_eq!(
+            r.variant.get("graph").map(|s| s.as_str()),
+            Some("SimpleGraph")
+        );
+        assert_eq!(r.variant.get("weight").map(|s| s.as_str()), Some("One"));
+    }
+
+    #[test]
+    fn resolve_problem_ref_with_slash_updates() {
+        // Slash spec resolves to a specific variant
+        let graph = problemreductions::rules::ReductionGraph::new();
+        let r = resolve_problem_ref("MIS/UnitDiskGraph", &graph).unwrap();
+        assert_eq!(r.name, "MaximumIndependentSet");
+        assert_eq!(
+            r.variant.get("graph").map(|s| s.as_str()),
+            Some("UnitDiskGraph")
+        );
+    }
+
+    #[test]
+    fn resolve_problem_ref_rejects_duplicate_dimension_updates() {
+        let graph = problemreductions::rules::ReductionGraph::new();
+        let err = resolve_problem_ref("MIS/One/i32", &graph).unwrap_err();
+        assert!(
+            err.to_string().contains("specified more than once"),
+            "expected duplicate-dimension error, got: {err}"
+        );
+    }
+
+    #[test]
+    fn resolve_problem_ref_unknown_problem() {
+        let graph = problemreductions::rules::ReductionGraph::new();
+        let err = resolve_problem_ref("NonExistent", &graph).unwrap_err();
+        assert!(err.to_string().contains("Unknown problem"));
+    }
+
+    // ---- catalog-backed resolution ----
+
+    #[test]
+    fn resolve_problem_ref_bare_mis_uses_catalog_default() {
+        // Bare MIS resolves through catalog to the declared default variant
+        let graph = problemreductions::rules::ReductionGraph::new();
+        let r = resolve_problem_ref("MIS", &graph).unwrap();
+        assert_eq!(r.name, "MaximumIndependentSet");
+        // Catalog declares SimpleGraph + One as defaults
+        assert_eq!(
+            r.variant.get("graph").map(|s| s.as_str()),
+            Some("SimpleGraph")
+        );
+        assert_eq!(r.variant.get("weight").map(|s| s.as_str()), Some("One"));
+    }
+
+    #[test]
+    fn parse_problem_type_extracts_name_from_variant_spec() {
+        // parse_problem_type extracts just the problem name from a variant spec
+        assert_eq!(
+            parse_problem_type("MIS/UnitDiskGraph/i32").unwrap(),
+            "MaximumIndependentSet"
+        );
+    }
+
+    #[test]
+    fn resolve_catalog_problem_ref_validates_against_schema() {
+        // Schema-valid values should resolve
+        let r = resolve_catalog_problem_ref("MIS/i32").unwrap();
+        assert_eq!(r.name(), "MaximumIndependentSet");
+        assert_eq!(r.variant().get("weight").map(|s| s.as_str()), Some("i32"));
+    }
+
+    #[test]
+    fn resolve_catalog_problem_ref_rejects_schema_invalid_variant() {
+        // HyperGraph is not in MIS's declared dimensions
+        let err = resolve_catalog_problem_ref("MIS/HyperGraph").unwrap_err();
+        assert!(
+            err.to_string().contains("Known variants"),
+            "error should mention known variants: {}",
+            err
+        );
+    }
+
+    #[test]
+    fn resolve_catalog_problem_ref_fills_defaults() {
+        // Bare MIS should fill in all defaults from catalog
+        let r = resolve_catalog_problem_ref("MIS").unwrap();
+        assert_eq!(
+            r.variant().get("graph").map(|s| s.as_str()),
+            Some("SimpleGraph")
+        );
+        assert_eq!(r.variant().get("weight").map(|s| s.as_str()), Some("One"));
+    }
 }
diff --git a/problemreductions-cli/tests/cli_tests.rs b/problemreductions-cli/tests/cli_tests.rs
index 13348726..92495f77 100644
--- a/problemreductions-cli/tests/cli_tests.rs
+++ b/problemreductions-cli/tests/cli_tests.rs
@@ -21,6 +21,40 @@ fn test_list() {
     assert!(stdout.contains("QUBO"));
 }
 
+#[test]
+fn test_list_rules() {
+    let output = pred().args(["list", "--rules"]).output().unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    assert!(stdout.contains("Registered reduction rules:"));
+    assert!(stdout.contains("Source"));
+    assert!(stdout.contains("Target"));
+    assert!(stdout.contains("Overhead"));
+    // Should contain a known reduction
+    assert!(
+        stdout.contains("MaximumIndependentSet"),
+        "should list MIS reductions"
+    );
+}
+
+#[test]
+fn test_list_rules_json() {
+    let output = pred().args(["list", "--rules", "--json"]).output().unwrap();
+    assert!(output.status.success());
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert!(json["num_rules"].as_u64().unwrap() > 0);
+    let rules = json["rules"].as_array().unwrap();
+    assert!(!rules.is_empty());
+    assert!(rules[0]["source"].is_string());
+    assert!(rules[0]["target"].is_string());
+    assert!(rules[0]["overhead"].is_string());
+}
+
 #[test]
 fn test_show() {
     let output = pred().args(["show", "MIS"]).output().unwrap();
@@ -31,11 +65,15 @@ fn test_show() {
 }
 
 #[test]
-fn test_show_variants() {
+fn test_show_variant_info() {
     let output = pred().args(["show", "MIS"]).output().unwrap();
     assert!(output.status.success());
     let stdout = String::from_utf8(output.stdout).unwrap();
-    assert!(stdout.contains("Variants"));
+    // Bare MIS shows default variant with complexity
+    assert!(
+        stdout.contains("Complexity:"),
+        "should show complexity: {stdout}"
+    );
 }
 
 #[test]
@@ -152,7 +190,7 @@ fn test_list_json() {
     assert!(tmp.exists());
     let content = std::fs::read_to_string(&tmp).unwrap();
     let json: serde_json::Value = serde_json::from_str(&content).unwrap();
-    assert!(json["problems"].is_array());
+    assert!(json["variants"].is_array());
 
     std::fs::remove_file(&tmp).ok();
 }
@@ -286,25 +324,33 @@ fn test_reduce() {
 
 #[test]
 fn test_reduce_via_path() {
-    // 1. Create problem
+    // 1. Create problem (use explicit variant to match path resolution)
     let problem_file = std::env::temp_dir().join("pred_test_reduce_via_in.json");
     let create_out = pred()
         .args([
             "-o",
             problem_file.to_str().unwrap(),
             "create",
-            "MIS",
+            "MIS/SimpleGraph/i32",
             "--graph",
             "0-1,1-2,2-3",
+            "--weights",
+            "1,1,1,1",
         ])
         .output()
         .unwrap();
     assert!(create_out.status.success());
 
-    // 2. Generate path file
+    // 2. Generate path file (use same variant as the problem)
     let path_file = std::env::temp_dir().join("pred_test_reduce_via_path.json");
     let path_out = pred()
-        .args(["path", "MIS", "QUBO", "-o", path_file.to_str().unwrap()])
+        .args([
+            "path",
+            "MIS/SimpleGraph/i32",
+            "QUBO",
+            "-o",
+            path_file.to_str().unwrap(),
+        ])
         .output()
         .unwrap();
     assert!(path_out.status.success());
@@ -350,9 +396,11 @@ fn test_reduce_via_infer_target() {
             "-o",
             problem_file.to_str().unwrap(),
             "create",
-            "MIS",
+            "MIS/SimpleGraph/i32",
             "--graph",
             "0-1,1-2,2-3",
+            "--weights",
+            "1,1,1,1",
         ])
         .output()
         .unwrap();
@@ -360,7 +408,13 @@ fn test_reduce_via_infer_target() {
 
     let path_file = std::env::temp_dir().join("pred_test_reduce_via_infer_path.json");
     let path_out = pred()
-        .args(["path", "MIS", "QUBO", "-o", path_file.to_str().unwrap()])
+        .args([
+            "path",
+            "MIS/SimpleGraph/i32",
+            "QUBO",
+            "-o",
+            path_file.to_str().unwrap(),
+        ])
         .output()
         .unwrap();
     assert!(path_out.status.success());
@@ -393,6 +447,67 @@ fn test_reduce_via_infer_target() {
     std::fs::remove_file(&output_file).ok();
 }
 
+#[test]
+fn test_reduce_via_rejects_target_variant_mismatch() {
+    let problem_file = std::env::temp_dir().join("pred_test_reduce_via_variant_in.json");
+    let create_out = pred()
+        .args([
+            "-o",
+            problem_file.to_str().unwrap(),
+            "create",
+            "MIS/SimpleGraph/i32",
+            "--graph",
+            "0-1,1-2,2-3",
+            "--weights",
+            "1,1,1,1",
+        ])
+        .output()
+        .unwrap();
+    assert!(create_out.status.success());
+
+    let path_file = std::env::temp_dir().join("pred_test_reduce_via_variant_path.json");
+    let path_out = pred()
+        .args([
+            "path",
+            "MIS/SimpleGraph/i32",
+            "ILP/bool",
+            "-o",
+            path_file.to_str().unwrap(),
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        path_out.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&path_out.stderr)
+    );
+
+    let reduce_out = pred()
+        .args([
+            "reduce",
+            problem_file.to_str().unwrap(),
+            "--to",
+            "ILP/i32",
+            "--via",
+            path_file.to_str().unwrap(),
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        !reduce_out.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&reduce_out.stderr)
+    );
+    let stderr = String::from_utf8_lossy(&reduce_out.stderr);
+    assert!(
+        stderr.contains("ILP") && stderr.contains("i32") && stderr.contains("bool"),
+        "expected variant mismatch details, got: {stderr}"
+    );
+
+    std::fs::remove_file(&problem_file).ok();
+    std::fs::remove_file(&path_file).ok();
+}
+
 #[test]
 fn test_reduce_missing_to_and_via() {
     let problem_file = std::env::temp_dir().join("pred_test_reduce_missing.json");
@@ -1004,7 +1119,7 @@ fn test_create_3sat() {
             "-o",
             output_file.to_str().unwrap(),
             "create",
-            "3SAT",
+            "KSAT/K3",
             "--num-vars",
             "3",
             "--clauses",
@@ -1092,6 +1207,54 @@ fn test_create_without_output() {
     assert!(json["data"].is_object());
 }
 
+#[test]
+fn test_create_from_example_source() {
+    let output = pred()
+        .args([
+            "create",
+            "--example",
+            "MVC/SimpleGraph/i32",
+            "--to",
+            "MIS/SimpleGraph/i32",
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MinimumVertexCover");
+    assert_eq!(json["variant"]["graph"], "SimpleGraph");
+}
+
+#[test]
+fn test_create_from_example_target() {
+    let output = pred()
+        .args([
+            "create",
+            "--example",
+            "MVC/SimpleGraph/i32",
+            "--to",
+            "MIS/SimpleGraph/i32",
+            "--example-side",
+            "target",
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MaximumIndependentSet");
+    assert_eq!(json["variant"]["graph"], "SimpleGraph");
+}
+
 // ---- Error cases ----
 
 #[test]
@@ -1103,6 +1266,82 @@ fn test_create_unknown_problem() {
     assert!(!output.status.success());
 }
 
+#[test]
+fn test_create_unknown_example_problem() {
+    let output = pred()
+        .args(["create", "--example", "not_a_real_example"])
+        .output()
+        .unwrap();
+    assert!(!output.status.success());
+    let stderr = String::from_utf8_lossy(&output.stderr);
+    assert!(stderr.contains("Unknown problem"));
+}
+
+#[test]
+fn test_create_model_example_mis() {
+    let output = pred()
+        .args(["create", "--example", "MIS/SimpleGraph/i32"])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MaximumIndependentSet");
+    assert_eq!(json["variant"]["graph"], "SimpleGraph");
+    assert_eq!(json["variant"]["weight"], "i32");
+}
+
+#[test]
+fn test_create_model_example_mis_shorthand() {
+    let output = pred()
+        .args(["create", "--example", "MIS"])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MaximumIndependentSet");
+    assert_eq!(json["variant"]["graph"], "SimpleGraph");
+    assert_eq!(json["variant"]["weight"], "One");
+}
+
+#[test]
+fn test_create_model_example_mis_weight_only() {
+    let output = pred()
+        .args(["create", "--example", "MIS/i32"])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MaximumIndependentSet");
+    assert_eq!(json["variant"]["graph"], "SimpleGraph");
+    assert_eq!(json["variant"]["weight"], "i32");
+}
+
+#[test]
+fn test_create_missing_model_example() {
+    let output = pred()
+        .args(["create", "--example", "GraphPartitioning/SimpleGraph"])
+        .output()
+        .unwrap();
+    assert!(!output.status.success());
+    let stderr = String::from_utf8_lossy(&output.stderr);
+    assert!(stderr.contains("No canonical model example exists"));
+}
+
 #[test]
 fn test_create_no_flags_shows_help() {
     // pred create MIS with no data flags shows schema-driven help and exits non-zero
@@ -1276,7 +1515,7 @@ fn test_path_unknown_cost() {
 
 #[test]
 fn test_path_overall_overhead_text() {
     // Use a multi-step path so the "Overall" section appears
-    let output = pred().args(["path", "3SAT", "MIS"]).output().unwrap();
+    let output = pred().args(["path", "KSAT/K3", "MIS"]).output().unwrap();
     assert!(output.status.success());
     let stdout = String::from_utf8(output.stdout).unwrap();
     assert!(
@@ -1289,7 +1528,7 @@ fn test_path_overall_overhead_json() {
     let tmp = std::env::temp_dir().join("pred_test_path_overall.json");
     let output = pred()
-        .args(["path", "3SAT", "MIS", "-o", tmp.to_str().unwrap()])
+        .args(["path", "KSAT/K3", "MIS", "-o", tmp.to_str().unwrap()])
         .output()
         .unwrap();
     assert!(output.status.success());
@@ -1317,7 +1556,7 @@ fn test_path_overall_overhead_composition() {
     // Step 2 (SAT→MIS): num_vertices = num_literals, num_edges = num_literals^2
     // Overall: num_vertices = num_literals, num_edges = num_literals^2
     let output = pred()
-        .args(["path", "3SAT", "MIS", "-o", tmp.to_str().unwrap()])
+        .args(["path", "KSAT/K3", "MIS", "-o", tmp.to_str().unwrap()])
         .output()
         .unwrap();
     assert!(output.status.success());
@@ -1368,12 +1607,15 @@ fn test_path_all_overall_overhead() {
     // Every path in --all --json output should have overall_overhead
     let output = pred()
-        .args(["path", "3SAT", "MIS", "--all", "--json"])
+        .args(["path", "KSAT/K3", "MIS", "--all", "--json"])
         .output()
         .unwrap();
     assert!(output.status.success());
     let stdout = String::from_utf8(output.stdout).unwrap();
-    let paths: Vec<serde_json::Value> = serde_json::from_str(&stdout).unwrap();
+    let envelope: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    let paths = envelope["paths"]
+        .as_array()
+        .expect("should have paths array");
     assert!(!paths.is_empty());
     for (i, p) in paths.iter().enumerate() {
         assert!(
@@ -1388,17 +1630,25 @@ fn test_path_all_overall_overhead() {
             i + 1
         );
     }
+    // Verify envelope metadata
+    assert!(envelope["returned"].is_number());
+    assert!(envelope["max_paths"].is_number());
+    assert!(envelope["truncated"].is_boolean());
 }
 
 #[test]
 fn test_path_single_step_no_overall_text() {
     // Single-step path should NOT show the Overall section
-    let output = pred().args(["path", "MIS", "MVC"]).output().unwrap();
+    // MaxCut -> SpinGlass is a genuine 1-step path with matching default variants
+    let output = pred()
+        .args(["path", "MaxCut", "SpinGlass"])
+        .output()
+        .unwrap();
     assert!(output.status.success());
     let stdout = String::from_utf8(output.stdout).unwrap();
     assert!(
         !stdout.contains("Overall"),
-        "single-step path should not show Overall"
+        "single-step path should not show Overall, got: {stdout}"
     );
 }
 
@@ -1414,8 +1664,9 @@ test_show_json_output() {
     let content = std::fs::read_to_string(&tmp).unwrap();
     let json: serde_json::Value = serde_json::from_str(&content).unwrap();
     assert_eq!(json["name"], "MaximumIndependentSet");
-    assert!(json["variants"].is_array());
+    assert!(json["variant"].is_object());
     assert!(json["reduces_to"].is_array());
+    assert!(json["default"].is_boolean());
 
     std::fs::remove_file(&tmp).ok();
 }
@@ -2707,7 +2958,7 @@ fn test_create_mis_triangular_subgraph() {
     let output = pred()
         .args([
             "create",
-            "MIS/TriangularSubgraph",
+            "MIS/TriangularSubgraph/i32",
             "--positions",
             "0,0;0,1;1,0;1,1",
         ])
@@ -2758,8 +3009,8 @@ fn test_create_mvc_kings_subgraph_unsupported_variant() {
     assert!(!output.status.success());
     let stderr = String::from_utf8(output.stderr).unwrap();
     assert!(
-        stderr.contains("No variant"),
-        "should mention variant mismatch: {stderr}"
+        stderr.contains("Unknown variant token \"KingsSubgraph\""),
+        "should mention unknown variant token: {stderr}"
     );
 }
 
@@ -2790,7 +3041,7 @@ fn test_create_mis_kings_subgraph_with_weights() {
     let output = pred()
         .args([
             "create",
-            "MIS/KingsSubgraph",
+            "MIS/KingsSubgraph/i32",
            "--positions",
            "0,0;1,0;1,1",
            "--weights",
@@ -2807,6 +3058,7 @@ fn test_create_mis_kings_subgraph_with_weights() {
     let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
     assert_eq!(json["type"], "MaximumIndependentSet");
     assert_eq!(json["variant"]["graph"], "KingsSubgraph");
+    assert_eq!(json["variant"]["weight"], "i32");
 }
 
 #[test]
@@ -2839,7 +3091,7 @@ fn test_create_random_triangular_subgraph() {
     let output = pred()
         .args([
             "create",
-            "MIS/TriangularSubgraph",
+            "MIS/TriangularSubgraph/i32",
             "--random",
             "--num-vertices",
             "8",
@@ -2916,3 +3168,384 @@ fn test_create_geometry_graph_missing_positions() {
         "should mention --positions: {stderr}"
     );
 }
+
+// ---- Round-trip: canonical examples through solve ----
+
+#[test]
+fn test_create_model_example_mis_round_trips_into_solve() {
+    let path = std::env::temp_dir().join(format!(
+        "pred_test_model_example_mis_{}.json",
+        std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .unwrap()
+            .as_nanos()
+    ));
+    let create = pred()
+        .args([
+            "create",
+            "--example",
+            "MIS/SimpleGraph/i32",
+            "-o",
+            path.to_str().unwrap(),
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        create.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&create.stderr)
+    );
+
+    let solve = pred()
+        .args(["solve", path.to_str().unwrap(), "--solver", "brute-force"])
+        .output()
+        .unwrap();
+    assert!(
+        solve.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&solve.stderr)
+    );
+
+    std::fs::remove_file(&path).ok();
+}
+
+#[test]
+fn test_create_rule_example_mvc_to_mis_round_trips_into_solve() {
+    let path = std::env::temp_dir().join(format!(
+        "pred_test_rule_example_mvc_to_mis_{}.json",
+        std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .unwrap()
+            .as_nanos()
+    ));
+    let create = pred()
+        .args([
+            "create",
+            "--example",
+            "MVC/SimpleGraph/i32",
+            "--to",
+            "MIS/SimpleGraph/i32",
+            "-o",
+            path.to_str().unwrap(),
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        create.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&create.stderr)
+    );
+
+    let solve = pred()
+        .args(["solve", path.to_str().unwrap(), "--solver", "brute-force"])
+        .output()
+        .unwrap();
+    assert!(
+        solve.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&solve.stderr)
+    );
+
+    std::fs::remove_file(&path).ok();
+}
+
+#[test]
+fn test_create_rule_example_mvc_to_mis_weight_only() {
+    let output = pred()
+        .args(["create", "--example", "MVC/i32", "--to", "MIS/i32"])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MinimumVertexCover");
+    assert_eq!(json["variant"]["graph"], "SimpleGraph");
+    assert_eq!(json["variant"]["weight"], "i32");
+}
+
+#[test]
+fn test_create_rule_example_mvc_to_mis_target_weight_only() {
+    let output = pred()
+        .args([
+            "create",
+            "--example",
+            "MVC/i32",
+            "--to",
+            "MIS/i32",
+            "--example-side",
+            "target",
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MaximumIndependentSet");
+    assert_eq!(json["variant"]["graph"], "SimpleGraph");
+    assert_eq!(json["variant"]["weight"], "i32");
+}
+
+// ---- Variant-level show semantics ----
+
+#[test]
+fn test_show_with_slash_spec() {
+    // `pred show MIS/UnitDiskGraph` should show that specific variant
+    let output = pred().args(["show", "MIS/UnitDiskGraph"]).output().unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    assert!(
+        stdout.contains("UnitDiskGraph"),
+        "should show UnitDiskGraph variant: {stdout}"
+    );
+}
+
+#[test]
+fn test_show_bare_name_uses_default() {
+    // `pred show MIS` resolves to default variant and marks it
+    let output = pred().args(["show", "MIS"]).output().unwrap();
+    assert!(output.status.success());
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    assert!(
+        stdout.contains("SimpleGraph"),
+        "bare MIS should resolve to SimpleGraph default: {stdout}"
+    );
+}
+
+#[test]
+fn test_show_ksat_works() {
+    // `pred show KSAT` should succeed (alias resolves to KSatisfiability default variant)
+    let output = pred().args(["show", "KSAT"]).output().unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    assert!(
+        stdout.contains("KSatisfiability"),
+        "should show KSatisfiability: {stdout}"
+    );
+}
+
+// ---- Capped multi-path ----
+
+#[test]
+fn test_path_all_max_paths_truncates() {
+    // With --max-paths 3, should limit to 3 paths and indicate truncation
+    let output = pred()
+        .args(["path", "MIS", "QUBO", "--all", "--max-paths", "3", "--json"])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let envelope: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    let paths = envelope["paths"]
+        .as_array()
+        .expect("should have paths array");
+    assert!(
+        paths.len() <= 3,
+        "should return at most 3 paths, got {}",
+        paths.len()
+    );
+    assert_eq!(envelope["max_paths"], 3);
+    // MIS -> QUBO has many paths, so truncation is expected
+    assert_eq!(
+        envelope["truncated"], true,
+        "should be truncated since MIS->QUBO has many paths"
+    );
+}
+
+#[test]
+fn test_path_all_max_paths_text_truncation_note() {
+    let output = pred()
+        .args(["path", "MIS", "QUBO", "--all", "--max-paths", "2"])
+        .output()
+        .unwrap();
+    assert!(output.status.success());
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    assert!(
+        stdout.contains("--max-paths"),
+        "truncation note should mention --max-paths: {stdout}"
+    );
+}
+
+// ---- Default variant resolution for create ----
+
+#[test]
+fn test_create_bare_mis_default_variant() {
+    // `pred create MIS --graph 0-1,1-2,2-3` should work with default variant
+    let output = pred()
+        .args(["create", "MIS", "--graph", "0-1,1-2,2-3"])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["type"], "MaximumIndependentSet");
+}
+
+// ---- Show JSON includes default annotation ----
+
+#[test]
+fn test_show_json_has_default_field() {
+    let output = pred().args(["show", "MIS", "--json"]).output().unwrap();
+    assert!(output.status.success());
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    // Bare MIS resolves to default variant
+    assert_eq!(
+        json["default"], true,
+        "bare MIS should be the default variant"
+    );
+    assert!(json["variant"].is_object(), "should have variant object");
+}
+
+// ---- path --all directory output includes manifest ----
+
+#[test]
+fn test_path_all_save_manifest() {
+    let dir = std::env::temp_dir().join("pred_test_all_paths_manifest");
+    let _ = std::fs::remove_dir_all(&dir);
+    let output = pred()
+        .args([
+            "path",
+            "MaxCut",
+            "QUBO",
+            "--all",
+            "-o",
+            dir.to_str().unwrap(),
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    assert!(dir.is_dir());
+
+    let manifest_file = dir.join("manifest.json");
+    assert!(manifest_file.exists(), "manifest.json should be created");
+    let manifest_content = std::fs::read_to_string(&manifest_file).unwrap();
+    let manifest: serde_json::Value = serde_json::from_str(&manifest_content).unwrap();
+    assert!(manifest["paths"].is_number());
+    assert!(manifest["max_paths"].is_number());
+    assert!(manifest["truncated"].is_boolean());
+
+    std::fs::remove_dir_all(&dir).ok();
+}
+
+#[test]
+fn test_create_nonunit_weights_require_weighted_variant() {
+    let output = pred()
+        .args([
+            "create",
+            "MIS",
+            "--graph",
+            "0-1,1-2,2-3",
+            "--weights",
+            "3,1,2,1",
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        !output.status.success(),
+        "non-unit weights should require /i32"
+    );
+    let stderr = String::from_utf8(output.stderr).unwrap();
+    assert!(
+        stderr.contains("Use the weighted variant instead"),
+        "stderr should point to the explicit weighted variant: {stderr}"
+    );
+    assert!(
+        stderr.contains("MaximumIndependentSet/SimpleGraph/i32"),
+        "stderr should include the exact weighted variant: {stderr}"
+    );
+}
+
+#[test]
+fn test_create_unit_weights_stays_one() {
+    // When all weights are 1, the variant should remain One.
+    let output = pred()
+        .args([
+            "create",
+            "MIS",
+            "--graph",
+            "0-1,1-2,2-3",
+            "--weights",
+            "1,1,1,1",
+        ])
+        .output()
+        .unwrap();
+    assert!(
+        output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&output.stderr)
+    );
+    let stdout = String::from_utf8(output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["variant"]["weight"], "One");
+}
+
+#[test]
+fn test_create_weighted_mis_round_trips_into_solve() {
+    // The explicit weighted MIS variant should be solvable end-to-end.
+    let create_output = pred()
+        .args([
+            "create",
+            "MIS/i32",
+            "--graph",
+            "0-1,1-2,2-3",
+            "--weights",
+            "3,1,2,1",
+        ])
+        .output()
+        .unwrap();
+    assert!(create_output.status.success());
+
+    let solve_output = pred()
+        .args(["solve", "-", "--solver", "brute-force"])
+        .stdin(std::process::Stdio::piped())
+        .stdout(std::process::Stdio::piped())
+        .stderr(std::process::Stdio::piped())
+        .spawn()
+        .and_then(|mut child| {
+            use std::io::Write;
+            child
+                .stdin
+                .take()
+                .unwrap()
+                .write_all(&create_output.stdout)
+                .unwrap();
+            child.wait_with_output()
+        })
+        .unwrap();
+    assert!(
+        solve_output.status.success(),
+        "stderr: {}",
+        String::from_utf8_lossy(&solve_output.stderr)
+    );
+    let stdout = String::from_utf8(solve_output.stdout).unwrap();
+    let json: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+    assert_eq!(json["evaluation"], "Valid(5)");
+}
diff --git a/problemreductions-macros/src/lib.rs b/problemreductions-macros/src/lib.rs
index 7226c1c2..0d790967 100644
--- a/problemreductions-macros/src/lib.rs
+++ b/problemreductions-macros/src/lib.rs
@@ -10,7 +10,7 @@ pub(crate) mod parser;
 use proc_macro::TokenStream;
 use proc_macro2::TokenStream as TokenStream2;
 use quote::quote;
-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};
 use syn::{parse_macro_input, GenericArgument, ItemImpl, Path, PathArguments, Type};
 
 /// Attribute macro for automatic reduction registration.
@@ -370,13 +370,24 @@ fn extract_target_from_trait(path: &Path) -> syn::Result {
 
 // --- declare_variants! proc macro ---
 
+/// Solver kind for dispatch generation.
+#[derive(Debug, Clone, Copy)]
+enum SolverKind {
+    /// Optimization problem — uses `find_best`.
+    Opt,
+    /// Satisfaction problem — uses `find_satisfying`.
+    Sat,
+}
+
 /// Input for the `declare_variants!` proc macro.
 struct DeclareVariantsInput {
     entries: Vec<DeclareVariantEntry>,
 }
 
-/// A single entry: `Type => "complexity_string"`.
+/// A single entry: `[default] opt|sat Type => "complexity_string"`.
 struct DeclareVariantEntry {
+    is_default: bool,
+    solver_kind: SolverKind,
     ty: Type,
     complexity: syn::LitStr,
 }
@@ -385,10 +396,48 @@ impl syn::parse::Parse for DeclareVariantsInput {
     fn parse(input: syn::parse::ParseStream) -> syn::Result<Self> {
         let mut entries = Vec::new();
         while !input.is_empty() {
+            // Optionally accept a `default` keyword before the type
+            let is_default = input.peek(syn::Token![default]);
+            if is_default {
+                input.parse::<syn::Token![default]>()?;
+            }
+
+            // Require `opt` or `sat` keyword
+            let solver_kind = if input.peek(syn::Ident) {
+                let fork = input.fork();
+                if let Ok(ident) = fork.parse::<syn::Ident>() {
+                    match ident.to_string().as_str() {
+                        "opt" => {
+                            input.parse::<syn::Ident>()?; // consume
+                            SolverKind::Opt
+                        }
+                        "sat" => {
+                            input.parse::<syn::Ident>()?; // consume
+                            SolverKind::Sat
+                        }
+                        _ => {
+                            return Err(syn::Error::new(
+                                ident.span(),
+                                "expected `opt` or `sat` before type name",
+                            ));
+                        }
+                    }
+                } else {
+                    return Err(input.error("expected `opt` or `sat` before type name"));
+                }
+            } else {
+                return Err(input.error("expected `opt` or `sat` before type name"));
+            };
+
             let ty: Type = input.parse()?;
             input.parse::<syn::Token![=>]>()?;
             let complexity: syn::LitStr = input.parse()?;
-            entries.push(DeclareVariantEntry { ty, complexity });
+            entries.push(DeclareVariantEntry {
+                is_default,
+                solver_kind,
+                ty,
+                complexity,
+            });
 
             if input.peek(syn::Token![,]) {
                 input.parse::<syn::Token![,]>()?;
@@ -429,11 +478,49 @@ pub fn declare_variants(input: TokenStream) -> TokenStream {
 
 /// Generate code for all `declare_variants!` entries.
 fn generate_declare_variants(input: &DeclareVariantsInput) -> syn::Result<TokenStream2> {
+    // Validate default markers per problem name.
+    // Group entries by their base type name (e.g., "MaximumIndependentSet").
+    let mut defaults_per_problem: HashMap<String, Vec<usize>> = HashMap::new();
+    let mut problem_names = HashSet::new();
+    for (i, entry) in input.entries.iter().enumerate() {
+        let base_name = extract_type_name(&entry.ty).unwrap_or_default();
+        problem_names.insert(base_name.clone());
+        if entry.is_default {
+            defaults_per_problem.entry(base_name).or_default().push(i);
+        }
+    }
+
+    // Check for multiple defaults for the same problem
+    for (name, indices) in &defaults_per_problem {
+        if indices.len() > 1 {
+            return Err(syn::Error::new(
+                proc_macro2::Span::call_site(),
+                format!(
+                    "`{name}` has more than one default variant; \
+                     only one entry per problem may be marked `default`"
+                ),
+            ));
+        }
+    }
+
+    for name in problem_names {
+        if !defaults_per_problem.contains_key(&name) {
+            return Err(syn::Error::new(
+                proc_macro2::Span::call_site(),
+                format!(
+                    "`{name}` must declare exactly one default variant; \
+                     mark one entry with `default`"
+                ),
+            ));
+        }
+    }
+
     let mut output = TokenStream2::new();
 
     for entry in &input.entries {
         let ty = &entry.ty;
         let complexity_str = entry.complexity.value();
+        let is_default = entry.is_default;
 
         // Parse the complexity expression to validate syntax
         let parsed = parser::parse_expr(&complexity_str).map_err(|e| {
@@ -470,6 +557,34 @@ fn generate_declare_variants(input: &DeclareVariantsInput) -> syn::Result<TokenStream2> {
+        let solve_body = match entry.solver_kind {
+            SolverKind::Opt => quote! {
+                let config = ::find_best(&solver, p)?;
+            },
+            SolverKind::Sat => quote! {
+                let config = ::find_satisfying(&solver, p)?;
+            },
+        };
+
+        let dispatch_fields = quote! {
+            factory: |data: serde_json::Value| -> Result<Box<dyn std::any::Any>, serde_json::Error> {
+                let p: #ty = serde_json::from_value(data)?;
+                Ok(Box::new(p))
+            },
+            serialize_fn: |any: &dyn std::any::Any| -> Option<serde_json::Value> {
+                let p = any.downcast_ref::<#ty>()?;
+                Some(serde_json::to_value(p).expect("serialize failed"))
+            },
+            solve_fn: |any: &dyn std::any::Any| -> Option<(Vec<usize>, String)> {
+                let p = any.downcast_ref::<#ty>()?;
+                let solver = crate::solvers::BruteForce::new();
+                #solve_body
+                let evaluation = format!("{:?}", crate::traits::Problem::evaluate(p, &config));
+                Some((config, evaluation))
+            },
+        };
+
         output.extend(quote! {
             impl crate::traits::DeclaredVariant for #ty {}
 
@@ -479,6 +594,8 @@ fn generate_declare_variants(input: &DeclareVariantsInput) -> syn::Result<TokenStream2> {
                 variant: ::variant(),
                 complexity: #complexity_str,
                 complexity_eval_fn: #complexity_eval_fn,
+                is_default: #is_default,
+                #dispatch_fields
             }
         }
@@ -507,3 +624,181 @@ fn generate_complexity_eval_fn(
     }
 })
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn declare_variants_accepts_single_default() {
+        let input: DeclareVariantsInput = syn::parse_quote! {
+            default opt Foo => "1",
+        };
+        assert!(generate_declare_variants(&input).is_ok());
+    }
+
+    #[test]
+    fn declare_variants_requires_one_default_per_problem() {
+        let input: DeclareVariantsInput = syn::parse_quote! {
+            opt Foo => "1",
+        };
+        let err = generate_declare_variants(&input).unwrap_err();
+        assert!(
+            err.to_string().contains("exactly one default"),
+            "expected 'exactly one default' in error, got: {}",
+            err
+        );
+    }
+
+    #[test]
+    fn declare_variants_rejects_multiple_defaults_for_one_problem() {
+        let input: DeclareVariantsInput = syn::parse_quote! {
+            default opt Foo => "1",
+            default opt Foo => "2",
+        };
+        let err = generate_declare_variants(&input).unwrap_err();
+        assert!(
+            err.to_string().contains("more than one default"),
+            "expected 'more than one default' in error, got: {}",
+            err
+        );
+    }
+
+    #[test]
+    fn declare_variants_rejects_missing_default_marker() {
+        let input: DeclareVariantsInput = syn::parse_quote! {
+            opt Foo => "1",
+        };
+        let err = generate_declare_variants(&input).unwrap_err();
+        assert!(
+            err.to_string().contains("exactly one default"),
+            "expected 'exactly one default' in error, got: {}",
+            err
+        );
+    }
+
+    #[test]
+    fn declare_variants_marks_only_explicit_default() {
+        let input: DeclareVariantsInput = syn::parse_quote! {
+            opt Foo => "1",
+            default opt Foo => "2",
+        };
+        let result = generate_declare_variants(&input);
+        assert!(result.is_ok());
+        let tokens = result.unwrap().to_string();
+        let true_count = tokens.matches("is_default : true").count();
+        let false_count = tokens.matches("is_default : false").count();
+        assert_eq!(true_count, 1, "should have exactly one default");
+        assert_eq!(false_count, 1, "should have exactly one non-default");
+    }
+
+    #[test]
+    fn declare_variants_accepts_solver_kind_markers() {
+        let input: DeclareVariantsInput = syn::parse_quote! {
+            default opt Foo => "1",
+            default sat Bar => "2",
+        };
+        assert!(generate_declare_variants(&input).is_ok());
+    }
+
+    #[test]
+    fn declare_variants_rejects_missing_solver_kind() {
+        let result = syn::parse_str::<DeclareVariantsInput>("Foo => \"1\"");
+        assert!(
+            result.is_err(),
+            "expected parse error for missing solver kind"
+        );
+    }
+
+    #[test]
+    fn declare_variants_generates_find_best_for_opt_entries() {
+        let input: DeclareVariantsInput = syn::parse_quote!
{ + default opt Foo => "1", + }; + let tokens = generate_declare_variants(&input).unwrap().to_string(); + assert!(tokens.contains("factory :"), "expected factory field"); + assert!( + tokens.contains("serialize_fn :"), + "expected serialize_fn field" + ); + assert!(tokens.contains("solve_fn :"), "expected solve_fn field"); + assert!( + !tokens.contains("factory : None"), + "factory should not be None" + ); + assert!( + !tokens.contains("serialize_fn : None"), + "serialize_fn should not be None" + ); + assert!( + !tokens.contains("solve_fn : None"), + "solve_fn should not be None" + ); + assert!(tokens.contains("find_best"), "expected find_best in tokens"); + } + + #[test] + fn declare_variants_generates_find_satisfying_for_sat_entries() { + let input: DeclareVariantsInput = syn::parse_quote! { + default sat Foo => "1", + }; + let tokens = generate_declare_variants(&input).unwrap().to_string(); + assert!(tokens.contains("factory :"), "expected factory field"); + assert!( + tokens.contains("serialize_fn :"), + "expected serialize_fn field" + ); + assert!(tokens.contains("solve_fn :"), "expected solve_fn field"); + assert!( + !tokens.contains("factory : None"), + "factory should not be None" + ); + assert!( + !tokens.contains("serialize_fn : None"), + "serialize_fn should not be None" + ); + assert!( + !tokens.contains("solve_fn : None"), + "solve_fn should not be None" + ); + assert!( + tokens.contains("find_satisfying"), + "expected find_satisfying in tokens" + ); + } + + #[test] + fn reduction_rejects_unexpected_attribute() { + let extra_attr = syn::Ident::new("extra", proc_macro2::Span::call_site()); + let parse_result = syn::parse2::(quote! 
{ + #extra_attr = "unexpected", overhead = { num_vertices = "num_vertices" } + }); + let err = match parse_result { + Ok(_) => panic!("unexpected reduction attribute should be rejected"), + Err(err) => err, + }; + assert!(err.to_string().contains("unknown attribute: extra")); + } + + #[test] + fn reduction_accepts_overhead_attribute() { + let attrs: ReductionAttrs = syn::parse_quote! { + overhead = { n = "n" } + }; + assert!(attrs.overhead.is_some()); + } + + #[test] + fn declare_variants_codegen_uses_required_dispatch_fields() { + let input: DeclareVariantsInput = syn::parse_quote! { + default opt Foo => "1", + }; + let tokens = generate_declare_variants(&input).unwrap().to_string(); + assert!(tokens.contains("factory :")); + assert!(tokens.contains("serialize_fn :")); + assert!(tokens.contains("solve_fn :")); + assert!(!tokens.contains("factory : None")); + assert!(!tokens.contains("serialize_fn : None")); + assert!(!tokens.contains("solve_fn : None")); + } +} diff --git a/src/example_db/mod.rs b/src/example_db/mod.rs new file mode 100644 index 00000000..94b0c7e8 --- /dev/null +++ b/src/example_db/mod.rs @@ -0,0 +1,106 @@ +//! Canonical example database assembly. +//! +//! `rule_builders` and `model_builders` are the canonical in-memory sources for +//! all example data. This module assembles, validates, and looks up structural +//! records from those builders — no filesystem round-trip or legacy bridge. 
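The `example_db` validators below (`validate_rule_uniqueness`, `validate_model_uniqueness`) reject duplicates by exploiting `BTreeSet::insert` returning `false` when the key is already present. A minimal standalone sketch of that pattern — the `(String, String)` key here is a stand-in for the crate's `ProblemRef` pairs, not the real type:

```rust
use std::collections::BTreeSet;

// Returns the first duplicate key, if any — the same insert-returns-false
// pattern the validate_*_uniqueness helpers use to reject duplicate
// canonical examples.
fn first_duplicate(keys: &[(String, String)]) -> Option<&(String, String)> {
    let mut seen = BTreeSet::new();
    keys.iter().find(|key| !seen.insert((*key).clone()))
}

fn main() {
    let keys = vec![
        ("MaxCut".to_string(), "SpinGlass".to_string()),
        ("Satisfiability".to_string(), "KColoring".to_string()),
        ("MaxCut".to_string(), "SpinGlass".to_string()), // duplicate
    ];
    assert_eq!(
        first_duplicate(&keys),
        Some(&("MaxCut".to_string(), "SpinGlass".to_string()))
    );
    assert_eq!(first_duplicate(&keys[..2]), None);
}
```

In the diff the same check is run once per database build, so sorting by key first (as `build_rule_db` does) keeps duplicates adjacent but is not required for correctness of the set-based check.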
+ +use crate::error::{ProblemError, Result}; +use crate::export::{ + examples_output_dir, ModelDb, ModelExample, ProblemRef, RuleDb, RuleExample, EXAMPLE_DB_VERSION, +}; +use std::collections::BTreeSet; +use std::path::PathBuf; + +mod model_builders; +mod rule_builders; +pub(crate) mod specs; + +fn rule_key(example: &RuleExample) -> (ProblemRef, ProblemRef) { + (example.source.problem_ref(), example.target.problem_ref()) +} + +fn model_key(example: &ModelExample) -> ProblemRef { + example.problem_ref() +} + +fn validate_rule_uniqueness(rules: &[RuleExample]) -> Result<()> { + let mut seen = BTreeSet::new(); + for rule in rules { + let key = rule_key(rule); + if !seen.insert(key.clone()) { + return Err(ProblemError::InvalidProblem(format!( + "Duplicate canonical rule example for {} {:?} -> {} {:?}", + key.0.name, key.0.variant, key.1.name, key.1.variant + ))); + } + } + Ok(()) +} + +fn validate_model_uniqueness(models: &[ModelExample]) -> Result<()> { + let mut seen = BTreeSet::new(); + for model in models { + let key = model_key(model); + if !seen.insert(key.clone()) { + return Err(ProblemError::InvalidProblem(format!( + "Duplicate canonical model example for {} {:?}", + key.name, key.variant + ))); + } + } + Ok(()) +} + +pub fn build_rule_db() -> Result { + let mut rules = rule_builders::build_rule_examples(); + rules.sort_by_key(rule_key); + validate_rule_uniqueness(&rules)?; + Ok(RuleDb { + version: EXAMPLE_DB_VERSION, + rules, + }) +} + +pub fn build_model_db() -> Result { + let mut models = model_builders::build_model_examples(); + models.sort_by_key(model_key); + validate_model_uniqueness(&models)?; + Ok(ModelDb { + version: EXAMPLE_DB_VERSION, + models, + }) +} + +pub fn find_rule_example(source: &ProblemRef, target: &ProblemRef) -> Result { + let db = build_rule_db()?; + db.rules + .into_iter() + .find(|rule| &rule.source.problem_ref() == source && &rule.target.problem_ref() == target) + .ok_or_else(|| { + ProblemError::InvalidProblem(format!( + "No 
canonical rule example exists for {} {:?} -> {} {:?}", + source.name, source.variant, target.name, target.variant + )) + }) +} + +pub fn find_model_example(problem: &ProblemRef) -> Result { + let db = build_model_db()?; + db.models + .into_iter() + .find(|model| &model.problem_ref() == problem) + .ok_or_else(|| { + ProblemError::InvalidProblem(format!( + "No canonical model example exists for {} {:?}", + problem.name, problem.variant + )) + }) +} + +pub fn default_generated_dir() -> PathBuf { + examples_output_dir() +} + +#[cfg(test)] +#[path = "../unit_tests/example_db.rs"] +mod tests; diff --git a/src/example_db/model_builders.rs b/src/example_db/model_builders.rs new file mode 100644 index 00000000..d6c79a47 --- /dev/null +++ b/src/example_db/model_builders.rs @@ -0,0 +1,12 @@ +use crate::export::ModelExample; + +pub fn build_model_examples() -> Vec { + crate::models::graph::canonical_model_example_specs() + .into_iter() + .chain(crate::models::formula::canonical_model_example_specs()) + .chain(crate::models::set::canonical_model_example_specs()) + .chain(crate::models::algebraic::canonical_model_example_specs()) + .chain(crate::models::misc::canonical_model_example_specs()) + .map(|spec| (spec.build)()) + .collect() +} diff --git a/src/example_db/rule_builders.rs b/src/example_db/rule_builders.rs new file mode 100644 index 00000000..273d5342 --- /dev/null +++ b/src/example_db/rule_builders.rs @@ -0,0 +1,63 @@ +use crate::export::RuleExample; + +pub fn build_rule_examples() -> Vec { + crate::rules::canonical_rule_example_specs() + .into_iter() + .map(|spec| (spec.build)()) + .collect() +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn builds_all_42_canonical_rule_examples() { + let examples = build_rule_examples(); + + assert_eq!(examples.len(), 42); + assert!(examples + .iter() + .all(|example| !example.source.problem.is_empty())); + assert!(examples + .iter() + .all(|example| !example.target.problem.is_empty())); + assert!(examples + .iter() + 
.all(|example| example.source.instance.is_object())); + assert!(examples + .iter() + .all(|example| example.target.instance.is_object())); + } + + #[test] + fn satisfiability_to_kcoloring_uses_full_problem_serialization() { + let specs = crate::rules::canonical_rule_example_specs(); + let spec = specs + .iter() + .find(|s| s.id == "satisfiability_to_kcoloring") + .unwrap(); + let example = (spec.build)(); + + assert_eq!(example.source.problem, "Satisfiability"); + assert_eq!(example.target.problem, "KColoring"); + assert!(example.source.instance.get("num_vars").is_some()); + assert!(example.target.instance.get("graph").is_some()); + } + + #[test] + fn factoring_to_circuitsat_contains_complete_solution_pairs() { + let specs = crate::rules::canonical_rule_example_specs(); + let spec = specs + .iter() + .find(|s| s.id == "factoring_to_circuitsat") + .unwrap(); + let example = (spec.build)(); + + assert!(!example.solutions.is_empty()); + assert!(example + .solutions + .iter() + .all(|pair| !pair.source_config.is_empty() && !pair.target_config.is_empty())); + } +} diff --git a/src/example_db/specs.rs b/src/example_db/specs.rs new file mode 100644 index 00000000..8904abae --- /dev/null +++ b/src/example_db/specs.rs @@ -0,0 +1,311 @@ +//! Shared example specification types and helper functions. +//! +//! These types describe canonical model and rule examples with metadata +//! that can be validated against the catalog and reduction registry. + +use crate::export::{ + lookup_overhead, overhead_to_json, variant_to_map, ModelExample, ProblemSide, RuleExample, + SampleEval, SolutionPair, +}; +use crate::models::algebraic::{VariableDomain, ILP}; +use crate::prelude::{OptimizationProblem, Problem, ReduceTo, ReductionResult}; +use crate::rules::{PathCostFn, ReductionGraph}; +use crate::solvers::{BruteForce, ILPSolver}; +use crate::types::ProblemSize; +use serde::Serialize; + +/// Specification for a canonical model example. 
+#[allow(dead_code)] +pub struct ModelExampleSpec { + /// Unique example identifier. + pub id: &'static str, + /// Builder function that produces the full exported example. + pub build: fn() -> ModelExample, +} + +/// Specification for a canonical rule example. +#[allow(dead_code)] +pub struct RuleExampleSpec { + /// Unique example identifier. + pub id: &'static str, + /// Builder function that produces the full exported example. + pub build: fn() -> RuleExample, +} + +// ---- Model example helpers ---- + +pub fn sample_eval
<P>
(problem: &P, config: Vec) -> SampleEval +where + P: Problem, + P::Metric: Serialize, +{ + let metric = + serde_json::to_value(problem.evaluate(&config)).expect("Failed to serialize metric"); + SampleEval { config, metric } +} + +pub fn optimization_example
<P>
(problem: P, samples: Vec>) -> ModelExample +where + P: OptimizationProblem + Serialize, + P::Metric: Serialize, +{ + let sample_evals = samples + .into_iter() + .map(|config| sample_eval(&problem, config)) + .collect(); + let optimal = BruteForce::new() + .find_all_best(&problem) + .into_iter() + .map(|config| sample_eval(&problem, config)) + .collect(); + ModelExample::from_problem(&problem, sample_evals, optimal) +} + +pub fn satisfaction_example
<P>
(problem: P, samples: Vec>) -> ModelExample +where + P: Problem + Serialize, +{ + let sample_evals = samples + .into_iter() + .map(|config| sample_eval(&problem, config)) + .collect(); + let satisfying = BruteForce::new() + .find_all_satisfying(&problem) + .into_iter() + .map(|config| sample_eval(&problem, config)) + .collect(); + ModelExample::from_problem(&problem, sample_evals, satisfying) +} + +pub fn explicit_example
<P>
( + problem: P, + samples: Vec>, + optimal_configs: Vec>, +) -> ModelExample +where + P: Problem + Serialize, + P::Metric: Serialize, +{ + let sample_evals = samples + .into_iter() + .map(|config| sample_eval(&problem, config)) + .collect(); + let optimal = optimal_configs + .into_iter() + .map(|config| sample_eval(&problem, config)) + .collect(); + ModelExample::from_problem(&problem, sample_evals, optimal) +} + +// ---- Rule example helpers ---- + +pub fn assemble_rule_example( + source: &S, + target: &T, + overhead: crate::rules::ReductionOverhead, + solutions: Vec, +) -> RuleExample +where + S: Problem + Serialize, + T: Problem + Serialize, +{ + RuleExample { + source: ProblemSide::from_problem(source), + target: ProblemSide::from_problem(target), + overhead: overhead_to_json(&overhead), + solutions, + } +} + +pub fn direct_overhead() -> crate::rules::ReductionOverhead +where + S: Problem, + T: Problem, +{ + let source_variant = variant_to_map(S::variant()); + let target_variant = variant_to_map(T::variant()); + lookup_overhead(S::NAME, &source_variant, T::NAME, &target_variant).unwrap_or_else(|| { + panic!( + "missing exact direct overhead for {} {:?} -> {} {:?}", + S::NAME, + source_variant, + T::NAME, + target_variant + ) + }) +} + +pub fn direct_best_example(source: S, keep: Keep) -> RuleExample +where + S: Problem + Serialize + ReduceTo, + T: OptimizationProblem + Serialize, + T::Metric: Serialize, + Keep: Fn(&S, &[usize]) -> bool, +{ + let reduction = ReduceTo::::reduce_to(&source); + let target = reduction.target_problem(); + let solutions = BruteForce::new() + .find_all_best(target) + .into_iter() + .filter_map(|target_config| { + let source_config = reduction.extract_solution(&target_config); + keep(&source, &source_config).then_some(SolutionPair { + source_config, + target_config, + }) + }) + .collect(); + assemble_rule_example(&source, target, direct_overhead::(), solutions) +} + +pub fn direct_satisfying_example(source: S, keep: Keep) -> RuleExample 
+where + S: Problem + Serialize + ReduceTo, + T: Problem + Serialize, + Keep: Fn(&S, &[usize]) -> bool, +{ + let reduction = ReduceTo::::reduce_to(&source); + let target = reduction.target_problem(); + let solutions = BruteForce::new() + .find_all_satisfying(target) + .into_iter() + .filter_map(|target_config| { + let source_config = reduction.extract_solution(&target_config); + keep(&source, &source_config).then_some(SolutionPair { + source_config, + target_config, + }) + }) + .collect(); + assemble_rule_example(&source, target, direct_overhead::(), solutions) +} + +pub fn direct_ilp_example(source: S, keep: Keep) -> RuleExample +where + S: Problem + Serialize + ReduceTo>, + ILP: Serialize, + V: VariableDomain, + Keep: Fn(&S, &[usize]) -> bool, +{ + let reduction = ReduceTo::>::reduce_to(&source); + let target = reduction.target_problem(); + let target_config = ILPSolver::new() + .solve(target) + .expect("canonical ILP target example should solve"); + let source_config = reduction.extract_solution(&target_config); + let solutions = if keep(&source, &source_config) { + vec![SolutionPair { + source_config, + target_config, + }] + } else { + Vec::new() + }; + assemble_rule_example(&source, target, direct_overhead::>(), solutions) +} + +pub fn path_best_example( + source: S, + input_size: ProblemSize, + cost: C, + keep: Keep, +) -> RuleExample +where + S: Problem + Serialize + 'static, + T: OptimizationProblem + Serialize + 'static, + T::Metric: Serialize, + C: PathCostFn, + Keep: Fn(&S, &[usize]) -> bool, +{ + let graph = ReductionGraph::new(); + let source_variant = variant_to_map(S::variant()); + let target_variant = variant_to_map(T::variant()); + let path = graph + .find_cheapest_path( + S::NAME, + &source_variant, + T::NAME, + &target_variant, + &input_size, + &cost, + ) + .expect("canonical path example should exist"); + let chain = graph + .reduce_along_path(&path, &source as &dyn std::any::Any) + .expect("canonical path example should execute"); + let target 
= chain.target_problem::(); + let solutions = BruteForce::new() + .find_all_best(target) + .into_iter() + .filter_map(|target_config| { + let source_config = chain.extract_solution(&target_config); + keep(&source, &source_config).then_some(SolutionPair { + source_config, + target_config, + }) + }) + .collect(); + assemble_rule_example( + &source, + target, + graph.compose_path_overhead(&path), + solutions, + ) +} + +pub fn path_ilp_example( + source: S, + input_size: ProblemSize, + cost: C, + keep: Keep, +) -> RuleExample +where + S: Problem + Serialize + 'static, + ILP: Serialize + 'static, + V: VariableDomain, + C: PathCostFn, + Keep: Fn(&S, &[usize]) -> bool, +{ + let graph = ReductionGraph::new(); + let source_variant = variant_to_map(S::variant()); + let target_variant = variant_to_map(ILP::::variant()); + let path = graph + .find_cheapest_path( + S::NAME, + &source_variant, + ILP::::NAME, + &target_variant, + &input_size, + &cost, + ) + .expect("canonical ILP path example should exist"); + let chain = graph + .reduce_along_path(&path, &source as &dyn std::any::Any) + .expect("canonical ILP path example should execute"); + let target = chain.target_problem::>(); + let target_config = ILPSolver::new() + .solve(target) + .expect("canonical ILP path target should solve"); + let source_config = chain.extract_solution(&target_config); + let solutions = if keep(&source, &source_config) { + vec![SolutionPair { + source_config, + target_config, + }] + } else { + Vec::new() + }; + assemble_rule_example( + &source, + target, + graph.compose_path_overhead(&path), + solutions, + ) +} + +pub fn keep_bool_source(source: &S, config: &[usize]) -> bool +where + S: Problem, +{ + source.evaluate(config) +} diff --git a/src/export.rs b/src/export.rs index 2ef553ed..0f71b7fa 100644 --- a/src/export.rs +++ b/src/export.rs @@ -1,61 +1,142 @@ -//! JSON export schema for reduction examples. -//! -//! Provides a unified serialization format for all reduction example programs. -//! 
Each example produces two files: -//! - `.json` — reduction structure (source, target, overhead) -//! - `.result.json` — runtime solutions -//! -//! The schema mirrors the internal types: `ReductionOverhead` for expressions, -//! `Problem::variant()` for problem variants, and `Problem::NAME` for problem names. +//! JSON export schema for example payloads. use crate::expr::Expr; use crate::rules::registry::ReductionOverhead; use crate::rules::ReductionGraph; -use serde::Serialize; -use std::collections::{BTreeMap, HashMap}; +use crate::traits::Problem; +use serde::{Deserialize, Serialize}; +use std::collections::BTreeMap; +use std::env; use std::fs; -use std::path::Path; +use std::path::{Path, PathBuf}; + +pub const EXAMPLES_DIR_ENV: &str = "PROBLEMREDUCTIONS_EXAMPLES_DIR"; /// One side (source or target) of a reduction. -#[derive(Serialize, Clone, Debug)] +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] pub struct ProblemSide { /// Problem name matching `Problem::NAME` (e.g., `"MaximumIndependentSet"`). pub problem: String, /// Variant attributes (e.g., `{"graph": "SimpleGraph", "weight": "One"}`). - pub variant: HashMap, + pub variant: BTreeMap, /// Problem-specific instance data (edges, matrix, clauses, etc.). pub instance: serde_json::Value, } +impl ProblemSide { + /// Build a serializable problem side from a typed problem. + pub fn from_problem
<P>
(problem: &P) -> Self + where + P: Problem + Serialize, + { + Self { + problem: P::NAME.to_string(), + variant: variant_to_map(P::variant()), + instance: serde_json::to_value(problem).expect("Failed to serialize problem instance"), + } + } + + /// Extract the structural identity of this problem side. + pub fn problem_ref(&self) -> ProblemRef { + ProblemRef { + name: self.problem.clone(), + variant: self.variant.clone(), + } + } +} + +/// Canonical structural identity for a problem node in the reduction graph. +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)] +pub struct ProblemRef { + pub name: String, + pub variant: BTreeMap, +} + /// One output field mapped to an expression. -#[derive(Serialize, Clone, Debug)] +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] pub struct OverheadEntry { pub field: String, + #[serde(skip_deserializing, default = "default_expr")] pub expr: Expr, pub formula: String, } -/// Top-level reduction structure (written to `.json`). -#[derive(Serialize, Clone, Debug)] -pub struct ReductionData { - pub source: ProblemSide, - pub target: ProblemSide, - pub overhead: Vec, +fn default_expr() -> Expr { + Expr::Const(0.0) } /// One source↔target solution pair. -#[derive(Serialize, Clone, Debug)] +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] pub struct SolutionPair { pub source_config: Vec, pub target_config: Vec, } -/// Runtime results (written to `.result.json`). -#[derive(Serialize, Clone, Debug)] -pub struct ResultData { +/// A complete rule example: reduction + solutions in one file. +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] +pub struct RuleExample { + pub source: ProblemSide, + pub target: ProblemSide, + pub overhead: Vec, pub solutions: Vec, } +/// A complete model example: instance + evaluations. 
+#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] +pub struct ModelExample { + pub problem: String, + pub variant: BTreeMap, + pub instance: serde_json::Value, + pub samples: Vec, + pub optimal: Vec, +} + +impl ModelExample { + /// Build a serializable model example from a typed problem plus evaluated configs. + pub fn from_problem
<P>
(problem: &P, samples: Vec, optimal: Vec) -> Self + where + P: Problem + Serialize, + { + Self { + problem: P::NAME.to_string(), + variant: variant_to_map(P::variant()), + instance: serde_json::to_value(problem).expect("Failed to serialize problem instance"), + samples, + optimal, + } + } + + /// Extract the structural identity of this model example. + pub fn problem_ref(&self) -> ProblemRef { + ProblemRef { + name: self.problem.clone(), + variant: self.variant.clone(), + } + } +} + +/// Canonical exported database of rule examples. +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] +pub struct RuleDb { + pub version: u32, + pub rules: Vec, +} + +/// Canonical exported database of model examples. +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] +pub struct ModelDb { + pub version: u32, + pub models: Vec, +} + +pub const EXAMPLE_DB_VERSION: u32 = 1; + +#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)] +pub struct SampleEval { + pub config: Vec, + pub metric: serde_json::Value, +} + /// Convert a `ReductionOverhead` to JSON-serializable entries. pub fn overhead_to_json(overhead: &ReductionOverhead) -> Vec { overhead @@ -70,46 +151,81 @@ pub fn overhead_to_json(overhead: &ReductionOverhead) -> Vec { } /// Look up `ReductionOverhead` for a direct reduction using `ReductionGraph::find_best_entry`. -/// -/// Finds the best matching registered reduction entry for the given source/target -/// names and source variant. Returns `None` if no compatible direct reduction exists. 
pub fn lookup_overhead( source_name: &str, - source_variant: &HashMap, + source_variant: &BTreeMap, target_name: &str, - _target_variant: &HashMap, + target_variant: &BTreeMap, ) -> Option { let graph = ReductionGraph::new(); - let src_bt: BTreeMap = source_variant - .iter() - .map(|(k, v)| (k.clone(), v.clone())) - .collect(); - let matched = graph.find_best_entry(source_name, target_name, &src_bt)?; + let matched = + graph.find_best_entry(source_name, source_variant, target_name, target_variant)?; Some(matched.overhead) } -/// Convert `Problem::variant()` output to a `HashMap`. -pub fn variant_to_map(variant: Vec<(&str, &str)>) -> HashMap { +/// Convert `Problem::variant()` output to a stable `BTreeMap`. +/// +/// Normalizes empty `"graph"` values to `"SimpleGraph"` for consistency +/// with the reduction graph convention. +pub fn variant_to_map(variant: Vec<(&str, &str)>) -> BTreeMap { variant .into_iter() - .map(|(k, v)| (k.to_string(), v.to_string())) + .map(|(k, v)| { + let value = if k == "graph" && v.is_empty() { + "SimpleGraph".to_string() + } else { + v.to_string() + }; + (k.to_string(), value) + }) .collect() } -/// Write both `.json` and `.result.json` to `docs/paper/examples/`. -pub fn write_example(name: &str, reduction: &ReductionData, results: &ResultData) { - let dir = Path::new("docs/paper/examples"); +/// Default output directory for generated example JSON. 
+pub fn examples_output_dir() -> PathBuf { + if let Some(dir) = env::var_os(EXAMPLES_DIR_ENV) { + PathBuf::from(dir) + } else { + PathBuf::from("docs/paper/examples/generated") + } +} + +fn write_json_file(dir: &Path, name: &str, payload: &T) { fs::create_dir_all(dir).expect("Failed to create examples directory"); + let path = dir.join(format!("{name}.json")); + let json = serde_json::to_string_pretty(payload).expect("Failed to serialize example"); + fs::write(&path, json).expect("Failed to write example JSON"); + println!("Exported: {}", path.display()); +} + +/// Write a merged rule example JSON file. +pub fn write_rule_example_to(dir: &Path, name: &str, example: &RuleExample) { + write_json_file(dir, name, example); +} + +/// Write a merged rule example JSON file to the configured output directory. +pub fn write_rule_example(name: &str, example: &RuleExample) { + write_rule_example_to(&examples_output_dir(), name, example); +} - let reduction_path = dir.join(format!("{}.json", name)); - let json = serde_json::to_string_pretty(reduction).expect("Failed to serialize reduction"); - fs::write(&reduction_path, json).expect("Failed to write reduction JSON"); - println!("Exported: {}", reduction_path.display()); +/// Write a model example JSON file to a target directory. +pub fn write_model_example_to(dir: &Path, name: &str, example: &ModelExample) { + write_json_file(dir, name, example); +} + +/// Write a model example JSON file to the configured output directory. +pub fn write_model_example(name: &str, example: &ModelExample) { + write_model_example_to(&examples_output_dir(), name, example); +} + +/// Write the canonical rule database to `rules.json`. 
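`examples_output_dir` above checks the `PROBLEMREDUCTIONS_EXAMPLES_DIR` environment variable before falling back to the in-repo default. A standalone sketch of that override pattern, with the env lookup injected as a parameter so it can be exercised without mutating process state (the injection-style `output_dir_from` helper is illustrative, not part of the diff):

```rust
use std::env;
use std::ffi::OsString;
use std::path::PathBuf;

const EXAMPLES_DIR_ENV: &str = "PROBLEMREDUCTIONS_EXAMPLES_DIR";

// Same fallback logic as `examples_output_dir`, with the env lookup
// passed in so the function stays testable without touching the real env.
fn output_dir_from(var: Option<OsString>) -> PathBuf {
    var.map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from("docs/paper/examples/generated"))
}

fn examples_output_dir() -> PathBuf {
    output_dir_from(env::var_os(EXAMPLES_DIR_ENV))
}

fn main() {
    assert_eq!(
        output_dir_from(None),
        PathBuf::from("docs/paper/examples/generated")
    );
    assert_eq!(
        output_dir_from(Some(OsString::from("/tmp/out"))),
        PathBuf::from("/tmp/out")
    );
    println!("default: {}", examples_output_dir().display());
}
```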
+pub fn write_rule_db_to(dir: &Path, db: &RuleDb) { + write_json_file(dir, "rules", db); +} - let results_path = dir.join(format!("{}.result.json", name)); - let json = serde_json::to_string_pretty(results).expect("Failed to serialize results"); - fs::write(&results_path, json).expect("Failed to write results JSON"); - println!("Exported: {}", results_path.display()); +/// Write the canonical model database to `models.json`. +pub fn write_model_db_to(dir: &Path, db: &ModelDb) { + write_json_file(dir, "models", db); } #[cfg(test)] diff --git a/src/lib.rs b/src/lib.rs index 4ecbe505..1f1c99c3 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -17,10 +17,14 @@ //! //! Use [`prelude`] for convenient imports. +extern crate self as problemreductions; + pub(crate) mod big_o; pub(crate) mod canonical; pub mod config; pub mod error; +#[cfg(feature = "example-db")] +pub mod example_db; pub mod export; pub(crate) mod expr; pub mod io; diff --git a/src/models/algebraic/bmf.rs b/src/models/algebraic/bmf.rs index 113c9829..4e91eda6 100644 --- a/src/models/algebraic/bmf.rs +++ b/src/models/algebraic/bmf.rs @@ -12,6 +12,9 @@ use serde::{Deserialize, Serialize}; inventory::submit! { ProblemSchemaEntry { name: "BMF", + display_name: "BMF", + aliases: &[], + dimensions: &[], module_path: module_path!(), description: "Boolean matrix factorization", fields: &[ @@ -229,7 +232,28 @@ impl OptimizationProblem for BMF { } crate::declare_variants! 
{
-    BMF => "2^(rows * rank + rank * cols)",
+    default opt BMF => "2^(rows * rank + rank * cols)",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "bmf",
+        build: || {
+            let problem = BMF::new(
+                vec![
+                    vec![true, true, false],
+                    vec![true, true, true],
+                    vec![false, true, true],
+                ],
+                2,
+            );
+            crate::example_db::specs::optimization_example(
+                problem,
+                vec![vec![1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1]],
+            )
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/algebraic/closest_vector_problem.rs b/src/models/algebraic/closest_vector_problem.rs
index 7588d634..e3595f00 100644
--- a/src/models/algebraic/closest_vector_problem.rs
+++ b/src/models/algebraic/closest_vector_problem.rs
@@ -3,7 +3,7 @@
 //! Given a lattice basis B and target vector t, find integer coefficients x
 //! minimizing ‖Bx - t‖₂.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize};
 use serde::{Deserialize, Serialize};
@@ -11,6 +11,9 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "ClosestVectorProblem",
+        display_name: "Closest Vector Problem",
+        aliases: &["CVP"],
+        dimensions: &[VariantDimension::new("weight", "i32", &["i32", "f64"])],
         module_path: module_path!(),
         description: "Find the closest lattice point to a target vector",
         fields: &[
@@ -248,8 +251,23 @@ where
 }
 
 crate::declare_variants! {
-    ClosestVectorProblem<i32> => "2^num_basis_vectors",
-    ClosestVectorProblem<f64> => "2^num_basis_vectors",
+    default opt ClosestVectorProblem<i32> => "2^num_basis_vectors",
+    opt ClosestVectorProblem<f64> => "2^num_basis_vectors",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "closest_vector_problem_i32",
+        build: || {
+            let problem = ClosestVectorProblem::new(
+                vec![vec![2, 0], vec![1, 2]],
+                vec![2.8, 1.5],
+                vec![VarBounds::bounded(-2, 4), VarBounds::bounded(-2, 4)],
+            );
+            crate::example_db::specs::optimization_example(problem, vec![vec![3, 3]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/algebraic/ilp.rs b/src/models/algebraic/ilp.rs
index fb263100..c947c82f 100644
--- a/src/models/algebraic/ilp.rs
+++ b/src/models/algebraic/ilp.rs
@@ -7,7 +7,7 @@
 //! - `ILP<bool>`: binary variables (0 or 1)
 //! - `ILP<i32>`: non-negative integer variables (0..2^31-1)
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize};
 use serde::{Deserialize, Serialize};
 use std::marker::PhantomData;
@@ -16,6 +16,9 @@
 inventory::submit! {
     ProblemSchemaEntry {
         name: "ILP",
+        display_name: "ILP",
+        aliases: &[],
+        dimensions: &[VariantDimension::new("variable", "bool", &["bool", "i32"])],
         module_path: module_path!(),
         description: "Optimize linear objective subject to linear constraints",
         fields: &[
@@ -272,8 +275,27 @@ impl OptimizationProblem for ILP {
 }
 
 crate::declare_variants! {
-    ILP<bool> => "2^num_vars",
-    ILP<i32> => "num_vars^num_vars",
+    default opt ILP<bool> => "2^num_vars",
+    opt ILP<i32> => "num_vars^num_vars",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "ilp_i32",
+        build: || {
+            let problem = ILP::<i32>::new(
+                2,
+                vec![
+                    LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 5.0),
+                    LinearConstraint::le(vec![(0, 4.0), (1, 7.0)], 28.0),
+                ],
+                vec![(0, -5.0), (1, -6.0)],
+                ObjectiveSense::Minimize,
+            );
+            crate::example_db::specs::explicit_example(problem, vec![vec![0, 4]], vec![vec![3, 2]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/algebraic/mod.rs b/src/models/algebraic/mod.rs
index 6cfc0069..a4945487 100644
--- a/src/models/algebraic/mod.rs
+++ b/src/models/algebraic/mod.rs
@@ -7,11 +7,21 @@
 //! - [`BMF`]: Boolean Matrix Factorization
 
 pub(crate) mod bmf;
-mod closest_vector_problem;
-mod ilp;
-mod qubo;
+pub(crate) mod closest_vector_problem;
+pub(crate) mod ilp;
+pub(crate) mod qubo;
 
 pub use bmf::BMF;
 pub use closest_vector_problem::{ClosestVectorProblem, VarBounds};
 pub use ilp::{Comparison, LinearConstraint, ObjectiveSense, VariableDomain, ILP};
 pub use qubo::QUBO;
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    let mut specs = Vec::new();
+    specs.extend(qubo::canonical_model_example_specs());
+    specs.extend(ilp::canonical_model_example_specs());
+    specs.extend(closest_vector_problem::canonical_model_example_specs());
+    specs.extend(bmf::canonical_model_example_specs());
+    specs
+}
diff --git a/src/models/algebraic/qubo.rs b/src/models/algebraic/qubo.rs
index 1e30f35b..602ee1bf 100644
--- a/src/models/algebraic/qubo.rs
+++ b/src/models/algebraic/qubo.rs
@@ -2,7 +2,7 @@
 //!
 //! QUBO minimizes a quadratic function over binary variables.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
 use serde::{Deserialize, Serialize};
@@ -10,6 +10,9 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "QUBO",
+        display_name: "QUBO",
+        aliases: &[],
+        dimensions: &[VariantDimension::new("weight", "f64", &["f64"])],
         module_path: module_path!(),
         description: "Minimize quadratic unconstrained binary objective",
         fields: &[
@@ -189,7 +192,22 @@ where
 }
 
 crate::declare_variants! {
-    QUBO<f64> => "2^num_vars",
+    default opt QUBO<f64> => "2^num_vars",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "qubo_f64",
+        build: || {
+            let problem = QUBO::from_matrix(vec![
+                vec![-1.0, 2.0, 0.0],
+                vec![0.0, -1.0, 2.0],
+                vec![0.0, 0.0, -1.0],
+            ]);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 1]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/formula/circuit.rs b/src/models/formula/circuit.rs
index 0c3be9bb..7279357d 100644
--- a/src/models/formula/circuit.rs
+++ b/src/models/formula/circuit.rs
@@ -11,6 +11,9 @@ use std::collections::HashMap;
 inventory::submit! {
     ProblemSchemaEntry {
         name: "CircuitSAT",
+        display_name: "Circuit SAT",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Find satisfying input to a boolean circuit",
         fields: &[
@@ -304,7 +307,34 @@ impl Problem for CircuitSAT {
 impl SatisfactionProblem for CircuitSAT {}
 
 crate::declare_variants! {
-    CircuitSAT => "2^num_variables",
+    default sat CircuitSAT => "2^num_variables",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "circuit_sat",
+        build: || {
+            let problem = CircuitSAT::new(Circuit::new(vec![
+                Assignment::new(
+                    vec!["a".to_string()],
+                    BooleanExpr::and(vec![BooleanExpr::var("x1"), BooleanExpr::var("x2")]),
+                ),
+                Assignment::new(
+                    vec!["b".to_string()],
+                    BooleanExpr::or(vec![BooleanExpr::var("x1"), BooleanExpr::var("x2")]),
+                ),
+                Assignment::new(
+                    vec!["c".to_string()],
+                    BooleanExpr::xor(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]),
+                ),
+            ]));
+            crate::example_db::specs::satisfaction_example(
+                problem,
+                vec![vec![0, 1, 1, 0, 1], vec![0, 1, 1, 1, 0]],
+            )
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/formula/ksat.rs b/src/models/formula/ksat.rs
index 4a542efb..d8d7d806 100644
--- a/src/models/formula/ksat.rs
+++ b/src/models/formula/ksat.rs
@@ -5,7 +5,7 @@
 //! version - for the optimization variant (MAX-K-SAT), see the separate
 //! MaxKSatisfiability type (if available).
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::traits::{Problem, SatisfactionProblem};
 use crate::variant::{KValue, K2, K3, KN};
 use serde::{Deserialize, Serialize};
@@ -15,6 +15,9 @@ use super::CNFClause;
 inventory::submit! {
     ProblemSchemaEntry {
         name: "KSatisfiability",
+        display_name: "K-Satisfiability",
+        aliases: &["KSAT"],
+        dimensions: &[VariantDimension::new("k", "KN", &["KN", "K2", "K3"])],
         module_path: module_path!(),
         description: "SAT with exactly k literals per clause",
         fields: &[
@@ -184,9 +187,28 @@ impl Problem for KSatisfiability {
 impl SatisfactionProblem for KSatisfiability {}
 
 crate::declare_variants! {
-    KSatisfiability<KN> => "2^num_variables",
-    KSatisfiability<K2> => "num_variables + num_clauses",
-    KSatisfiability<K3> => "1.307^num_variables",
+    default sat KSatisfiability<KN> => "2^num_variables",
+    sat KSatisfiability<K2> => "num_variables + num_clauses",
+    sat KSatisfiability<K3> => "1.307^num_variables",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "ksatisfiability_k3",
+        build: || {
+            use super::CNFClause;
+            let problem = KSatisfiability::<K3>::new(
+                3,
+                vec![
+                    CNFClause::new(vec![1, 2, 3]),
+                    CNFClause::new(vec![-1, -2, 3]),
+                    CNFClause::new(vec![1, -2, -3]),
+                ],
+            );
+            crate::example_db::specs::satisfaction_example(problem, vec![vec![1, 0, 1]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/formula/mod.rs b/src/models/formula/mod.rs
index f2a33cce..71459cfb 100644
--- a/src/models/formula/mod.rs
+++ b/src/models/formula/mod.rs
@@ -6,9 +6,18 @@
 //! - [`CircuitSAT`]: Boolean circuit satisfiability
 
 pub(crate) mod circuit;
-mod ksat;
-mod sat;
+pub(crate) mod ksat;
+pub(crate) mod sat;
 
 pub use circuit::{Assignment, BooleanExpr, BooleanOp, Circuit, CircuitSAT};
 pub use ksat::KSatisfiability;
 pub use sat::{CNFClause, Satisfiability};
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    let mut specs = Vec::new();
+    specs.extend(sat::canonical_model_example_specs());
+    specs.extend(ksat::canonical_model_example_specs());
+    specs.extend(circuit::canonical_model_example_specs());
+    specs
+}
diff --git a/src/models/formula/sat.rs b/src/models/formula/sat.rs
index 448f58fa..11875163 100644
--- a/src/models/formula/sat.rs
+++ b/src/models/formula/sat.rs
@@ -12,6 +12,9 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "Satisfiability",
+        display_name: "Satisfiability",
+        aliases: &["SAT"],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Find satisfying assignment for CNF formula",
         fields: &[
@@ -196,7 +199,7 @@ impl Problem for Satisfiability {
 impl SatisfactionProblem for Satisfiability {}
 
 crate::declare_variants! {
-    Satisfiability => "2^num_variables",
+    default sat Satisfiability => "2^num_variables",
 }
 
 /// Check if an assignment satisfies a SAT formula.
@@ -224,6 +227,24 @@
     })
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "satisfiability",
+        build: || {
+            let problem = Satisfiability::new(
+                3,
+                vec![
+                    CNFClause::new(vec![1, 2]),
+                    CNFClause::new(vec![-1, 3]),
+                    CNFClause::new(vec![-2, -3]),
+                ],
+            );
+            crate::example_db::specs::satisfaction_example(problem, vec![vec![1, 0, 1]])
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../../unit_tests/models/formula/sat.rs"]
 mod tests;
diff --git a/src/models/graph/biclique_cover.rs b/src/models/graph/biclique_cover.rs
index 77bb9e5c..8df44f6d 100644
--- a/src/models/graph/biclique_cover.rs
+++ b/src/models/graph/biclique_cover.rs
@@ -13,6 +13,9 @@ use std::collections::HashSet;
 inventory::submit! {
     ProblemSchemaEntry {
         name: "BicliqueCover",
+        display_name: "Biclique Cover",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Cover bipartite edges with k bicliques",
         fields: &[
@@ -244,7 +247,25 @@ impl OptimizationProblem for BicliqueCover {
 }
 
 crate::declare_variants! {
-    BicliqueCover => "2^num_vertices",
+    default opt BicliqueCover => "2^num_vertices",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "biclique_cover",
+        build: || {
+            use crate::topology::BipartiteGraph;
+            let problem = BicliqueCover::new(
+                BipartiteGraph::new(2, 3, vec![(0, 0), (0, 1), (1, 1), (1, 2)]),
+                2,
+            );
+            crate::example_db::specs::optimization_example(
+                problem,
+                vec![vec![1, 0, 0, 1, 1, 0, 1, 1, 0, 1]],
+            )
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/graph_partitioning.rs b/src/models/graph/graph_partitioning.rs
index b109a347..c6027fff 100644
--- a/src/models/graph/graph_partitioning.rs
+++ b/src/models/graph/graph_partitioning.rs
@@ -3,7 +3,7 @@
 //! The Graph Partitioning (Minimum Bisection) problem asks for a balanced partition
 //! of vertices into two equal halves minimizing the number of crossing edges.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize};
 use serde::{Deserialize, Serialize};
@@ -12,6 +12,11 @@
 inventory::submit! {
     ProblemSchemaEntry {
         name: "GraphPartitioning",
+        display_name: "Graph Partitioning",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+        ],
         module_path: module_path!(),
         description: "Find minimum cut balanced bisection of a graph",
         fields: &[
@@ -134,7 +139,7 @@ where
 }
 
 crate::declare_variants! {
-    GraphPartitioning<SimpleGraph> => "2^num_vertices",
+    default opt GraphPartitioning<SimpleGraph> => "2^num_vertices",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/hamiltonian_path.rs b/src/models/graph/hamiltonian_path.rs
index 7fabfc1c..4ecc617c 100644
--- a/src/models/graph/hamiltonian_path.rs
+++ b/src/models/graph/hamiltonian_path.rs
@@ -3,7 +3,7 @@
 //! The Hamiltonian Path problem asks whether a graph contains a simple path
 //! that visits every vertex exactly once.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{Problem, SatisfactionProblem};
 use crate::variant::VariantParam;
@@ -12,6 +12,11 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "HamiltonianPath",
+        display_name: "Hamiltonian Path",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+        ],
         module_path: module_path!(),
         description: "Find a Hamiltonian path in a graph",
         fields: &[
@@ -141,9 +146,32 @@ pub(crate) fn is_valid_hamiltonian_path<G: Graph>(graph: &G, config: &[usize]) -> bool {
     true
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "hamiltonian_path_simplegraph",
+        build: || {
+            let problem = HamiltonianPath::new(SimpleGraph::new(
+                6,
+                vec![
+                    (0, 1),
+                    (0, 2),
+                    (1, 3),
+                    (2, 3),
+                    (3, 4),
+                    (3, 5),
+                    (4, 2),
+                    (5, 1),
+                ],
+            ));
+            crate::example_db::specs::satisfaction_example(problem, vec![vec![0, 2, 4, 3, 1, 5]])
+        },
+    }]
+}
+
 // Use Bjorklund (2014) O*(1.657^n) as best known for general undirected graphs
 crate::declare_variants! {
-    HamiltonianPath<SimpleGraph> => "1.657^num_vertices",
+    default sat HamiltonianPath<SimpleGraph> => "1.657^num_vertices",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/isomorphic_spanning_tree.rs b/src/models/graph/isomorphic_spanning_tree.rs
index dff17693..6f8c61c9 100644
--- a/src/models/graph/isomorphic_spanning_tree.rs
+++ b/src/models/graph/isomorphic_spanning_tree.rs
@@ -12,6 +12,9 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "IsomorphicSpanningTree",
+        display_name: "Isomorphic Spanning Tree",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Does graph G contain a spanning tree isomorphic to tree T?",
         fields: &[
@@ -162,8 +165,21 @@ impl Problem for IsomorphicSpanningTree {
 impl SatisfactionProblem for IsomorphicSpanningTree {}
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "isomorphic_spanning_tree",
+        build: || {
+            let graph = SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]);
+            let tree = SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3)]);
+            let problem = IsomorphicSpanningTree::new(graph, tree);
+            crate::example_db::specs::satisfaction_example(problem, vec![vec![0, 1, 2, 3]])
+        },
+    }]
+}
+
 crate::declare_variants! {
-    IsomorphicSpanningTree => "factorial(num_vertices)",
+    default sat IsomorphicSpanningTree => "factorial(num_vertices)",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/kcoloring.rs b/src/models/graph/kcoloring.rs
index abe96618..d45bb3b0 100644
--- a/src/models/graph/kcoloring.rs
+++ b/src/models/graph/kcoloring.rs
@@ -3,7 +3,7 @@
 //! The K-Coloring problem asks whether a graph can be colored with K colors
 //! such that no two adjacent vertices have the same color.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{Problem, SatisfactionProblem};
 use crate::variant::{KValue, VariantParam, K2, K3, K4, K5, KN};
@@ -12,6 +12,12 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "KColoring",
+        display_name: "K-Coloring",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("k", "KN", &["KN", "K2", "K3", "K4", "K5"]),
+        ],
         module_path: module_path!(),
         description: "Find valid k-coloring of a graph",
         fields: &[
@@ -183,13 +189,26 @@ pub(crate) fn is_valid_coloring(
     true
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "kcoloring_k3_simplegraph",
+        build: || {
+            use crate::topology::SimpleGraph;
+            let graph = SimpleGraph::new(5, vec![(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]);
+            let problem = KColoring::<SimpleGraph, K3>::new(graph);
+            crate::example_db::specs::satisfaction_example(problem, vec![vec![0, 1, 1, 0, 2]])
+        },
+    }]
+}
+
 crate::declare_variants! {
-    KColoring<SimpleGraph, KN> => "2^num_vertices",
-    KColoring<SimpleGraph, K2> => "num_vertices + num_edges",
-    KColoring<SimpleGraph, K3> => "1.3289^num_vertices",
-    KColoring<SimpleGraph, K4> => "1.7159^num_vertices",
+    default sat KColoring<SimpleGraph, KN> => "2^num_vertices",
+    sat KColoring<SimpleGraph, K2> => "num_vertices + num_edges",
+    sat KColoring<SimpleGraph, K3> => "1.3289^num_vertices",
+    sat KColoring<SimpleGraph, K4> => "1.7159^num_vertices",
     // Best known: O*((2-ε)^n) for some ε > 0 (Zamir 2021), concrete ε unknown
-    KColoring<SimpleGraph, K5> => "2^num_vertices",
+    sat KColoring<SimpleGraph, K5> => "2^num_vertices",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/max_cut.rs b/src/models/graph/max_cut.rs
index 09b9a8d4..fdd5aa65 100644
--- a/src/models/graph/max_cut.rs
+++ b/src/models/graph/max_cut.rs
@@ -3,7 +3,7 @@
 //! The Maximum Cut problem asks for a partition of vertices into two sets
 //! that maximizes the total weight of edges crossing the partition.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -13,6 +13,12 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MaxCut",
+        display_name: "Max Cut",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find maximum weight cut in a graph",
         fields: &[
@@ -215,7 +221,19 @@ where
 }
 
 crate::declare_variants! {
-    MaxCut<SimpleGraph, i32> => "2^(2.372 * num_vertices / 3)",
+    default opt MaxCut<SimpleGraph, i32> => "2^(2.372 * num_vertices / 3)",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "max_cut_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(5, vec![(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]);
+            let problem = MaxCut::<_, i32>::unweighted(graph);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 0, 1, 0]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/maximal_is.rs b/src/models/graph/maximal_is.rs
index 06a2ec4a..a2156b9f 100644
--- a/src/models/graph/maximal_is.rs
+++ b/src/models/graph/maximal_is.rs
@@ -3,7 +3,7 @@
 //! The Maximal Independent Set problem asks for an independent set that
 //! cannot be extended by adding any other vertex.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -13,6 +13,12 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MaximalIS",
+        display_name: "Maximal IS",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find maximum weight maximal independent set",
         fields: &[
@@ -183,6 +189,21 @@ where
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "maximal_is_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]);
+            let problem = MaximalIS::new(graph, vec![1i32; 5]);
+            crate::example_db::specs::optimization_example(
+                problem,
+                vec![vec![0, 1, 0, 1, 0], vec![1, 0, 1, 0, 1]],
+            )
+        },
+    }]
+}
+
 /// Check if a set is a maximal independent set.
 ///
 /// # Panics
@@ -216,7 +237,7 @@ pub(crate) fn is_maximal_independent_set<G: Graph>(graph: &G, selected: &[bool]) -> bool {
 }
 
 crate::declare_variants! {
-    MaximalIS<SimpleGraph, i32> => "3^(num_vertices / 3)",
+    default opt MaximalIS<SimpleGraph, i32> => "3^(num_vertices / 3)",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/maximum_clique.rs b/src/models/graph/maximum_clique.rs
index 0de42f13..d15eafc8 100644
--- a/src/models/graph/maximum_clique.rs
+++ b/src/models/graph/maximum_clique.rs
@@ -3,7 +3,7 @@
 //! The MaximumClique problem asks for a maximum weight subset of vertices
 //! such that all vertices in the subset are pairwise adjacent.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -13,6 +13,12 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MaximumClique",
+        display_name: "Maximum Clique",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find maximum weight clique in a graph",
         fields: &[
@@ -171,7 +177,19 @@ fn is_clique_config<G: Graph>(graph: &G, config: &[usize]) -> bool {
 }
 
 crate::declare_variants! {
-    MaximumClique<SimpleGraph, i32> => "1.1996^num_vertices",
+    default opt MaximumClique<SimpleGraph, i32> => "1.1996^num_vertices",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "maximum_clique_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(5, vec![(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]);
+            let problem = MaximumClique::new(graph, vec![1i32; 5]);
+            crate::example_db::specs::optimization_example(problem, vec![vec![0, 0, 1, 1, 1]])
+        },
+    }]
 }
 
 /// Check if a set of vertices forms a clique.
diff --git a/src/models/graph/maximum_independent_set.rs b/src/models/graph/maximum_independent_set.rs
index c9dba5ab..0b8a3ddf 100644
--- a/src/models/graph/maximum_independent_set.rs
+++ b/src/models/graph/maximum_independent_set.rs
@@ -3,7 +3,7 @@
 //! The Independent Set problem asks for a maximum weight subset of vertices
 //! such that no two vertices in the subset are adjacent.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, KingsSubgraph, SimpleGraph, TriangularSubgraph, UnitDiskGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, One, SolutionSize, WeightElement};
@@ -13,6 +13,12 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MaximumIndependentSet",
+        display_name: "Maximum Independent Set",
+        aliases: &["MIS"],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph", "KingsSubgraph", "TriangularSubgraph", "UnitDiskGraph"]),
+            VariantDimension::new("weight", "One", &["One", "i32"]),
+        ],
         module_path: module_path!(),
         description: "Find maximum weight independent set in a graph",
         fields: &[
@@ -160,13 +166,82 @@ fn is_independent_set_config<G: Graph>(graph: &G, config: &[usize]) -> bool {
 }
 
 crate::declare_variants! {
-    MaximumIndependentSet<SimpleGraph, i32> => "1.1996^num_vertices",
-    MaximumIndependentSet<SimpleGraph, One> => "1.1996^num_vertices",
-    MaximumIndependentSet<KingsSubgraph, One> => "2^sqrt(num_vertices)",
-    MaximumIndependentSet<KingsSubgraph, i32> => "2^sqrt(num_vertices)",
-    MaximumIndependentSet<TriangularSubgraph, One> => "2^sqrt(num_vertices)",
-    MaximumIndependentSet<TriangularSubgraph, i32> => "2^sqrt(num_vertices)",
-    MaximumIndependentSet<UnitDiskGraph, One> => "2^sqrt(num_vertices)",
+    opt MaximumIndependentSet<SimpleGraph, i32> => "1.1996^num_vertices",
+    default opt MaximumIndependentSet<SimpleGraph, One> => "1.1996^num_vertices",
+    opt MaximumIndependentSet<KingsSubgraph, One> => "2^sqrt(num_vertices)",
+    opt MaximumIndependentSet<KingsSubgraph, i32> => "2^sqrt(num_vertices)",
+    opt MaximumIndependentSet<TriangularSubgraph, One> => "2^sqrt(num_vertices)",
+    opt MaximumIndependentSet<TriangularSubgraph, i32> => "2^sqrt(num_vertices)",
+    opt MaximumIndependentSet<UnitDiskGraph, One> => "2^sqrt(num_vertices)",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![
+        crate::example_db::specs::ModelExampleSpec {
+            id: "maximum_independent_set_simplegraph_one",
+            build: || {
+                // Petersen graph (10 vertices, 15 edges) — MIS = 4
+                let graph = SimpleGraph::new(
+                    10,
+                    vec![
+                        (0, 1),
+                        (1, 2),
+                        (2, 3),
+                        (3, 4),
+                        (4, 0),
+                        (5, 7),
+                        (7, 9),
+                        (9, 6),
+                        (6, 8),
+                        (8, 5),
+                        (0, 5),
+                        (1, 6),
+                        (2, 7),
+                        (3, 8),
+                        (4, 9),
+                    ],
+                );
+                let problem = MaximumIndependentSet::new(graph, vec![One; 10]);
+                crate::example_db::specs::optimization_example(
+                    problem,
+                    vec![vec![0, 1, 0, 1, 0, 1, 0, 0, 0, 1]],
+                )
+            },
+        },
+        crate::example_db::specs::ModelExampleSpec {
+            id: "maximum_independent_set_simplegraph_i32",
+            build: || {
+                // Petersen graph (10 vertices, 15 edges), non-uniform weights — MWIS = 10
+                let graph = SimpleGraph::new(
+                    10,
+                    vec![
+                        (0, 1),
+                        (1, 2),
+                        (2, 3),
+                        (3, 4),
+                        (4, 0),
+                        (5, 7),
+                        (7, 9),
+                        (9, 6),
+                        (6, 8),
+                        (8, 5),
+                        (0, 5),
+                        (1, 6),
+                        (2, 7),
+                        (3, 8),
+                        (4, 9),
+                    ],
+                );
+                let problem =
+                    MaximumIndependentSet::new(graph, vec![5, 1, 1, 1, 1, 3, 1, 1, 1, 3]);
+                crate::example_db::specs::optimization_example(
+                    problem,
+                    vec![vec![1, 0, 1, 0, 0, 0, 0, 0, 1, 1]],
+                )
+            },
+        },
+    ]
 }
 
 /// Check if a set of vertices forms an independent set.
diff --git a/src/models/graph/maximum_matching.rs b/src/models/graph/maximum_matching.rs
index e3c0ccaa..0e832215 100644
--- a/src/models/graph/maximum_matching.rs
+++ b/src/models/graph/maximum_matching.rs
@@ -3,7 +3,7 @@
 //! The Maximum Matching problem asks for a maximum weight set of edges
 //! such that no two edges share a vertex.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -14,6 +14,12 @@
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MaximumMatching",
+        display_name: "Maximum Matching",
+        aliases: &["MaxMatching"],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find maximum weight matching in a graph",
         fields: &[
@@ -220,7 +226,19 @@ where
 }
 
 crate::declare_variants! {
-    MaximumMatching<SimpleGraph, i32> => "num_vertices^3",
+    default opt MaximumMatching<SimpleGraph, i32> => "num_vertices^3",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "maximum_matching_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(5, vec![(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]);
+            let problem = MaximumMatching::<_, i32>::unit_weights(graph);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 0, 0, 1, 0]])
+        },
+    }]
 }
 
 /// Check if a selection of edges forms a valid matching.
diff --git a/src/models/graph/minimum_dominating_set.rs b/src/models/graph/minimum_dominating_set.rs
index c665e452..1e4b5b97 100644
--- a/src/models/graph/minimum_dominating_set.rs
+++ b/src/models/graph/minimum_dominating_set.rs
@@ -3,7 +3,7 @@
 //! The Dominating Set problem asks for a minimum weight subset of vertices
 //! such that every vertex is either in the set or adjacent to a vertex in the set.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -14,6 +14,12 @@
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MinimumDominatingSet",
+        display_name: "Minimum Dominating Set",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find minimum weight dominating set in a graph",
         fields: &[
@@ -170,7 +176,19 @@ where
 }
 
 crate::declare_variants! {
-    MinimumDominatingSet<SimpleGraph, i32> => "1.4969^num_vertices",
+    default opt MinimumDominatingSet<SimpleGraph, i32> => "1.4969^num_vertices",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "minimum_dominating_set_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(5, vec![(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]);
+            let problem = MinimumDominatingSet::new(graph, vec![1i32; 5]);
+            crate::example_db::specs::optimization_example(problem, vec![vec![0, 0, 1, 1, 0]])
+        },
+    }]
 }
 
 /// Check if a set of vertices is a dominating set.
diff --git a/src/models/graph/minimum_feedback_arc_set.rs b/src/models/graph/minimum_feedback_arc_set.rs
index d7c92085..bfb8e3ed 100644
--- a/src/models/graph/minimum_feedback_arc_set.rs
+++ b/src/models/graph/minimum_feedback_arc_set.rs
@@ -3,7 +3,7 @@
 //! The Feedback Arc Set problem asks for a minimum-weight subset of arcs
 //! whose removal makes a directed graph acyclic (a DAG).
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::DirectedGraph;
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -13,6 +13,11 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MinimumFeedbackArcSet",
+        display_name: "Minimum Feedback Arc Set",
+        aliases: &["FAS"],
+        dimensions: &[
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find minimum weight feedback arc set in a directed graph",
         fields: &[
@@ -171,7 +176,7 @@ fn is_valid_fas(graph: &DirectedGraph, config: &[usize]) -> bool {
 }
 
 crate::declare_variants! {
-    MinimumFeedbackArcSet<i32> => "2^num_vertices",
+    default opt MinimumFeedbackArcSet<i32> => "2^num_vertices",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/minimum_feedback_vertex_set.rs b/src/models/graph/minimum_feedback_vertex_set.rs
index 16347981..6ccaa94e 100644
--- a/src/models/graph/minimum_feedback_vertex_set.rs
+++ b/src/models/graph/minimum_feedback_vertex_set.rs
@@ -3,7 +3,7 @@
 //! The Feedback Vertex Set problem asks for a minimum weight subset of vertices
 //! whose removal makes the directed graph acyclic (a DAG).
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::DirectedGraph;
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -13,6 +13,11 @@ use serde::{Deserialize, Serialize};
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MinimumFeedbackVertexSet",
+        display_name: "Minimum Feedback Vertex Set",
+        aliases: &["FVS"],
+        dimensions: &[
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find minimum weight feedback vertex set in a directed graph",
         fields: &[
@@ -159,7 +164,25 @@ where
 }
 
 crate::declare_variants! {
-    MinimumFeedbackVertexSet<i32> => "1.9977^num_vertices",
+    default opt MinimumFeedbackVertexSet<i32> => "1.9977^num_vertices",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "minimum_feedback_vertex_set_i32",
+        build: || {
+            use crate::topology::DirectedGraph;
+            let problem = MinimumFeedbackVertexSet::new(
+                DirectedGraph::new(
+                    5,
+                    vec![(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 1), (4, 2)],
+                ),
+                vec![1i32; 5],
+            );
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 0, 0, 0]])
+        },
+    }]
 }
 
 /// Check if a set of vertices is a feedback vertex set (removing them makes the graph a DAG).
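[Reviewer note] The canonical `minimum_feedback_vertex_set_i32` example in the hunk above is easy to sanity-check independently. The sketch below is illustrative only — `is_dag` is an ad-hoc helper written for this check via Kahn's algorithm, not an API of this crate — and verifies that deleting vertex 0 from the example digraph leaves a DAG while the full graph is cyclic:

```rust
// Kahn's algorithm restricted to the vertices not marked as removed.
// Returns true iff the surviving subgraph is acyclic.
fn is_dag(n: usize, arcs: &[(usize, usize)], removed: &[bool]) -> bool {
    let mut indeg = vec![0usize; n];
    for &(u, v) in arcs {
        if !removed[u] && !removed[v] {
            indeg[v] += 1;
        }
    }
    // Start from all surviving vertices with in-degree zero.
    let mut queue: Vec<usize> = (0..n).filter(|&v| !removed[v] && indeg[v] == 0).collect();
    let mut seen = 0;
    while let Some(u) = queue.pop() {
        seen += 1;
        for &(a, b) in arcs {
            if a == u && !removed[b] {
                indeg[b] -= 1;
                if indeg[b] == 0 {
                    queue.push(b);
                }
            }
        }
    }
    // Acyclic iff every surviving vertex was topologically ordered.
    seen == (0..n).filter(|&v| !removed[v]).count()
}

fn main() {
    let arcs = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 1), (4, 2)];
    // The original digraph contains the cycle 0 -> 1 -> 2 -> 0.
    assert!(!is_dag(5, &arcs, &[false; 5]));
    // Removing vertex 0 (the example's solution config [1, 0, 0, 0, 0]) breaks all cycles.
    assert!(is_dag(5, &arcs, &[true, false, false, false, false]));
    println!("ok");
}
```

The same helper also confirms that no empty feedback vertex set works, which is why the example's single-vertex answer is plausible as an optimum.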
diff --git a/src/models/graph/minimum_sum_multicenter.rs b/src/models/graph/minimum_sum_multicenter.rs index bacbed94..6d9adaca 100644 --- a/src/models/graph/minimum_sum_multicenter.rs +++ b/src/models/graph/minimum_sum_multicenter.rs @@ -3,7 +3,7 @@ //! The p-median problem asks for K facility locations (centers) on a graph //! that minimize the total weighted distance from all vertices to their nearest center. -use crate::registry::{FieldInfo, ProblemSchemaEntry}; +use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension}; use crate::topology::{Graph, SimpleGraph}; use crate::traits::{OptimizationProblem, Problem}; use crate::types::{Direction, SolutionSize, WeightElement}; @@ -13,6 +13,12 @@ use serde::{Deserialize, Serialize}; inventory::submit! { ProblemSchemaEntry { name: "MinimumSumMulticenter", + display_name: "Minimum Sum Multicenter", + aliases: &["pmedian"], + dimensions: &[ + VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]), + VariantDimension::new("weight", "i32", &["i32"]), + ], module_path: module_path!(), description: "Find K centers minimizing total weighted distance (p-median problem)", fields: &[ @@ -254,7 +260,31 @@ where } crate::declare_variants! 
 {
-    MinimumSumMulticenter => "2^num_vertices",
+    default opt MinimumSumMulticenter => "2^num_vertices",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "minimum_sum_multicenter_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(
+                7,
+                vec![
+                    (0, 1),
+                    (1, 2),
+                    (2, 3),
+                    (3, 4),
+                    (4, 5),
+                    (5, 6),
+                    (0, 6),
+                    (2, 5),
+                ],
+            );
+            let problem = MinimumSumMulticenter::new(graph, vec![1i32; 7], vec![1i32; 8], 2);
+            crate::example_db::specs::optimization_example(problem, vec![vec![0, 0, 1, 0, 0, 1, 0]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/minimum_vertex_cover.rs b/src/models/graph/minimum_vertex_cover.rs
index 4a441f72..7dd20b2d 100644
--- a/src/models/graph/minimum_vertex_cover.rs
+++ b/src/models/graph/minimum_vertex_cover.rs
@@ -3,7 +3,7 @@
 //! The Vertex Cover problem asks for a minimum weight subset of vertices
 //! such that every edge has at least one endpoint in the subset.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -13,6 +13,12 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MinimumVertexCover",
+        display_name: "Minimum Vertex Cover",
+        aliases: &["MVC"],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find minimum weight vertex cover in a graph",
         fields: &[
@@ -157,7 +163,19 @@ fn is_vertex_cover_config(graph: &G, config: &[usize]) -> bool {
 }
 
 crate::declare_variants!
 {
-    MinimumVertexCover => "1.1996^num_vertices",
+    default opt MinimumVertexCover => "1.1996^num_vertices",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "minimum_vertex_cover_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(5, vec![(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]);
+            let problem = MinimumVertexCover::new(graph, vec![1i32; 5]);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 0, 1, 1]])
+        },
+    }]
 }
 
 /// Check if a set of vertices forms a vertex cover.
diff --git a/src/models/graph/mod.rs b/src/models/graph/mod.rs
index d63802b7..7d76de65 100644
--- a/src/models/graph/mod.rs
+++ b/src/models/graph/mod.rs
@@ -66,3 +66,25 @@ pub use rural_postman::RuralPostman;
 pub use spin_glass::SpinGlass;
 pub use subgraph_isomorphism::SubgraphIsomorphism;
 pub use traveling_salesman::TravelingSalesman;
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    let mut specs = Vec::new();
+    specs.extend(maximum_independent_set::canonical_model_example_specs());
+    specs.extend(minimum_vertex_cover::canonical_model_example_specs());
+    specs.extend(max_cut::canonical_model_example_specs());
+    specs.extend(hamiltonian_path::canonical_model_example_specs());
+    specs.extend(isomorphic_spanning_tree::canonical_model_example_specs());
+    specs.extend(kcoloring::canonical_model_example_specs());
+    specs.extend(minimum_dominating_set::canonical_model_example_specs());
+    specs.extend(maximum_matching::canonical_model_example_specs());
+    specs.extend(traveling_salesman::canonical_model_example_specs());
+    specs.extend(maximum_clique::canonical_model_example_specs());
+    specs.extend(maximal_is::canonical_model_example_specs());
+    specs.extend(minimum_feedback_vertex_set::canonical_model_example_specs());
+    specs.extend(minimum_sum_multicenter::canonical_model_example_specs());
+    specs.extend(spin_glass::canonical_model_example_specs());
+    specs.extend(biclique_cover::canonical_model_example_specs());
+    specs.extend(partition_into_triangles::canonical_model_example_specs());
+    specs
+}
diff --git a/src/models/graph/optimal_linear_arrangement.rs b/src/models/graph/optimal_linear_arrangement.rs
index bc4cd341..12c0f791 100644
--- a/src/models/graph/optimal_linear_arrangement.rs
+++ b/src/models/graph/optimal_linear_arrangement.rs
@@ -4,7 +4,7 @@
 //! function f: V -> {0, 1, ..., |V|-1} such that the total edge length
 //! sum_{{u,v} in E} |f(u) - f(v)| is at most K.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{Problem, SatisfactionProblem};
 use serde::{Deserialize, Serialize};
@@ -12,6 +12,11 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "OptimalLinearArrangement",
+        display_name: "Optimal Linear Arrangement",
+        aliases: &["OLA"],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+        ],
         module_path: module_path!(),
         description: "Find a vertex ordering on a line with total edge length at most K",
         fields: &[
@@ -159,7 +164,7 @@ where
 
 impl SatisfactionProblem for OptimalLinearArrangement {}
 
 crate::declare_variants! {
-    OptimalLinearArrangement => "2^num_vertices",
+    default sat OptimalLinearArrangement => "2^num_vertices",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/partition_into_triangles.rs b/src/models/graph/partition_into_triangles.rs
index b5649d01..f0356051 100644
--- a/src/models/graph/partition_into_triangles.rs
+++ b/src/models/graph/partition_into_triangles.rs
@@ -3,7 +3,7 @@
 //! Given a graph G = (V, E) where |V| = 3q, determine whether V can be
 //! partitioned into q triples, each forming a triangle (K3) in G.
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{Problem, SatisfactionProblem};
 use crate::variant::VariantParam;
@@ -12,6 +12,11 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "PartitionIntoTriangles",
+        display_name: "Partition Into Triangles",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+        ],
         module_path: module_path!(),
         description: "Partition vertices into triangles (K3 subgraphs)",
         fields: &[
@@ -152,7 +157,21 @@ where
 
 impl SatisfactionProblem for PartitionIntoTriangles {}
 
 crate::declare_variants! {
-    PartitionIntoTriangles => "2^num_vertices",
+    default sat PartitionIntoTriangles => "2^num_vertices",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "partition_into_triangles_simplegraph",
+        build: || {
+            let problem = PartitionIntoTriangles::new(SimpleGraph::new(
+                6,
+                vec![(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (0, 3)],
+            ));
+            crate::example_db::specs::satisfaction_example(problem, vec![vec![0, 0, 0, 1, 1, 1]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/rural_postman.rs b/src/models/graph/rural_postman.rs
index dbd8ae50..ed3909c1 100644
--- a/src/models/graph/rural_postman.rs
+++ b/src/models/graph/rural_postman.rs
@@ -4,7 +4,7 @@
 //! that includes each edge in a required subset E' and has total length
 //! at most a given bound B.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{Problem, SatisfactionProblem};
 use crate::types::WeightElement;
@@ -15,6 +15,12 @@ use std::collections::VecDeque;
 
 inventory::submit!
 {
     ProblemSchemaEntry {
         name: "RuralPostman",
+        display_name: "Rural Postman",
+        aliases: &["RPP"],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find a circuit covering required edges with total length at most B (Rural Postman Problem)",
         fields: &[
@@ -269,7 +275,7 @@ where
 }
 
 crate::declare_variants! {
-    RuralPostman => "2^num_vertices * num_vertices^2",
+    default sat RuralPostman => "2^num_vertices * num_vertices^2",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/spin_glass.rs b/src/models/graph/spin_glass.rs
index 30bf90ac..1c5d9230 100644
--- a/src/models/graph/spin_glass.rs
+++ b/src/models/graph/spin_glass.rs
@@ -2,7 +2,7 @@
 //!
 //! The Spin Glass problem minimizes the Ising Hamiltonian energy.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -11,6 +11,12 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "SpinGlass",
+        display_name: "Spin Glass",
+        aliases: &[],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32", "f64"]),
+        ],
         module_path: module_path!(),
         description: "Minimize Ising Hamiltonian on a graph",
         fields: &[
@@ -251,8 +257,30 @@ where
 }
 
 crate::declare_variants!
 {
-    SpinGlass => "2^num_spins",
-    SpinGlass => "2^num_spins",
+    default opt SpinGlass => "2^num_spins",
+    opt SpinGlass => "2^num_spins",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "spin_glass_simplegraph_i32",
+        build: || {
+            let problem = SpinGlass::<SimpleGraph, i32>::without_fields(
+                5,
+                vec![
+                    ((0, 1), 1),
+                    ((1, 2), 1),
+                    ((3, 4), 1),
+                    ((0, 3), 1),
+                    ((1, 3), 1),
+                    ((1, 4), 1),
+                    ((2, 4), 1),
+                ],
+            );
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 1, 1, 0]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/subgraph_isomorphism.rs b/src/models/graph/subgraph_isomorphism.rs
index 49366627..3234f864 100644
--- a/src/models/graph/subgraph_isomorphism.rs
+++ b/src/models/graph/subgraph_isomorphism.rs
@@ -13,6 +13,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "SubgraphIsomorphism",
+        display_name: "Subgraph Isomorphism",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Determine if host graph G contains a subgraph isomorphic to pattern graph H",
         fields: &[
@@ -177,7 +180,7 @@ impl Problem for SubgraphIsomorphism {
 
 impl SatisfactionProblem for SubgraphIsomorphism {}
 
 crate::declare_variants! {
-    SubgraphIsomorphism => "num_host_vertices ^ num_pattern_vertices",
+    default sat SubgraphIsomorphism => "num_host_vertices ^ num_pattern_vertices",
 }
 
 #[cfg(test)]
diff --git a/src/models/graph/traveling_salesman.rs b/src/models/graph/traveling_salesman.rs
index 7c7416a7..b285ba1c 100644
--- a/src/models/graph/traveling_salesman.rs
+++ b/src/models/graph/traveling_salesman.rs
@@ -3,7 +3,7 @@
 //! The Traveling Salesman problem asks for a minimum-weight cycle
 //! that visits every vertex exactly once.
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::topology::{Graph, SimpleGraph};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
@@ -13,6 +13,12 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "TravelingSalesman",
+        display_name: "Traveling Salesman",
+        aliases: &["TSP"],
+        dimensions: &[
+            VariantDimension::new("graph", "SimpleGraph", &["SimpleGraph"]),
+            VariantDimension::new("weight", "i32", &["i32"]),
+        ],
         module_path: module_path!(),
         description: "Find minimum weight Hamiltonian cycle in a graph (Traveling Salesman Problem)",
         fields: &[
@@ -252,8 +258,20 @@ pub(crate) fn is_hamiltonian_cycle(graph: &G, selected: &[bool]) -> bo
     visit_count == n
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "traveling_salesman_simplegraph_i32",
+        build: || {
+            let graph = SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]);
+            let problem = TravelingSalesman::new(graph, vec![1, 3, 2, 2, 3, 1]);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 1, 1, 0, 1]])
+        },
+    }]
+}
+
 crate::declare_variants! {
-    TravelingSalesman => "2^num_vertices",
+    default opt TravelingSalesman => "2^num_vertices",
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/bin_packing.rs b/src/models/misc/bin_packing.rs
index cfa1647c..49375b03 100644
--- a/src/models/misc/bin_packing.rs
+++ b/src/models/misc/bin_packing.rs
@@ -3,7 +3,7 @@
 //! The Bin Packing problem asks for an assignment of items to bins
 //! that minimizes the number of bins used while respecting capacity constraints.
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
 use serde::{Deserialize, Serialize};
@@ -11,6 +11,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "BinPacking",
+        display_name: "Bin Packing",
+        aliases: &[],
+        dimensions: &[VariantDimension::new("weight", "i32", &["i32", "f64"])],
         module_path: module_path!(),
         description: "Assign items to bins minimizing number of bins used, subject to capacity",
         fields: &[
@@ -151,8 +154,8 @@ fn count_bins(config: &[usize]) -> usize {
 }
 
 crate::declare_variants! {
-    BinPacking => "2^num_items",
-    BinPacking => "2^num_items",
+    default opt BinPacking => "2^num_items",
+    opt BinPacking => "2^num_items",
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/factoring.rs b/src/models/misc/factoring.rs
index f4408a72..9b4a38b5 100644
--- a/src/models/misc/factoring.rs
+++ b/src/models/misc/factoring.rs
@@ -11,6 +11,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "Factoring",
+        display_name: "Factoring",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Factor a composite integer into two factors",
         fields: &[
@@ -163,7 +166,18 @@ impl OptimizationProblem for Factoring {
 }
 
 crate::declare_variants!
 {
-    Factoring => "exp((m + n)^(1/3) * log(m + n)^(2/3))",
+    default opt Factoring => "exp((m + n)^(1/3) * log(m + n)^(2/3))",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "factoring",
+        build: || {
+            let problem = Factoring::new(2, 3, 15);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 1, 1, 0, 1]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/flow_shop_scheduling.rs b/src/models/misc/flow_shop_scheduling.rs
index 6568cced..d5e1286c 100644
--- a/src/models/misc/flow_shop_scheduling.rs
+++ b/src/models/misc/flow_shop_scheduling.rs
@@ -11,6 +11,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "FlowShopScheduling",
+        display_name: "Flow Shop Scheduling",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Determine if a flow-shop schedule for jobs on m processors meets a deadline",
         fields: &[
@@ -193,7 +196,7 @@ impl Problem for FlowShopScheduling {
 
 impl SatisfactionProblem for FlowShopScheduling {}
 
 crate::declare_variants! {
-    FlowShopScheduling => "factorial(num_jobs)",
+    default sat FlowShopScheduling => "factorial(num_jobs)",
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/knapsack.rs b/src/models/misc/knapsack.rs
index 49e3e90d..aea080f8 100644
--- a/src/models/misc/knapsack.rs
+++ b/src/models/misc/knapsack.rs
@@ -11,6 +11,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "Knapsack",
+        display_name: "Knapsack",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Select items to maximize total value subject to weight capacity constraint",
         fields: &[
@@ -135,7 +138,7 @@ impl OptimizationProblem for Knapsack {
 }
 
 crate::declare_variants!
 {
-    Knapsack => "2^(num_items / 2)",
+    default opt Knapsack => "2^(num_items / 2)",
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/longest_common_subsequence.rs b/src/models/misc/longest_common_subsequence.rs
index 8d4fd6e0..70b40424 100644
--- a/src/models/misc/longest_common_subsequence.rs
+++ b/src/models/misc/longest_common_subsequence.rs
@@ -12,6 +12,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "LongestCommonSubsequence",
+        display_name: "Longest Common Subsequence",
+        aliases: &["LCS"],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Find the longest string that is a subsequence of every input string",
         fields: &[
@@ -180,7 +183,7 @@ impl OptimizationProblem for LongestCommonSubsequence {
 }
 
 crate::declare_variants! {
-    LongestCommonSubsequence => "2^min_string_length",
+    default opt LongestCommonSubsequence => "2^min_string_length",
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/mod.rs b/src/models/misc/mod.rs
index 5ce862ad..86b71e75 100644
--- a/src/models/misc/mod.rs
+++ b/src/models/misc/mod.rs
@@ -27,3 +27,12 @@ pub use longest_common_subsequence::LongestCommonSubsequence;
 pub use paintshop::PaintShop;
 pub use shortest_common_supersequence::ShortestCommonSupersequence;
 pub use subset_sum::SubsetSum;
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    let mut specs = Vec::new();
+    specs.extend(factoring::canonical_model_example_specs());
+    specs.extend(paintshop::canonical_model_example_specs());
+    specs.extend(shortest_common_supersequence::canonical_model_example_specs());
+    specs
+}
diff --git a/src/models/misc/paintshop.rs b/src/models/misc/paintshop.rs
index 6f7a847e..495e9205 100644
--- a/src/models/misc/paintshop.rs
+++ b/src/models/misc/paintshop.rs
@@ -14,6 +14,9 @@ use std::collections::{HashMap, HashSet};
 
 inventory::submit!
 {
     ProblemSchemaEntry {
         name: "PaintShop",
+        display_name: "Paint Shop",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Minimize color changes in paint shop sequence",
         fields: &[
@@ -190,7 +193,24 @@ impl OptimizationProblem for PaintShop {
 }
 
 crate::declare_variants! {
-    PaintShop => "2^num_cars",
+    default opt PaintShop => "2^num_cars",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "paintshop",
+        build: || {
+            use crate::solvers::BruteForce;
+            let problem = PaintShop::new(vec!["A", "B", "A", "C", "B", "C"]);
+            let sample = BruteForce::new()
+                .find_all_best(&problem)
+                .into_iter()
+                .next()
+                .expect("paintshop example should solve");
+            crate::example_db::specs::optimization_example(problem, vec![sample])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/shortest_common_supersequence.rs b/src/models/misc/shortest_common_supersequence.rs
index a6da920f..a2c6b98d 100644
--- a/src/models/misc/shortest_common_supersequence.rs
+++ b/src/models/misc/shortest_common_supersequence.rs
@@ -17,6 +17,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "ShortestCommonSupersequence",
+        display_name: "Shortest Common Supersequence",
+        aliases: &["SCS"],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Find a common supersequence of bounded length for a set of strings",
         fields: &[
@@ -146,7 +149,19 @@ impl Problem for ShortestCommonSupersequence {
 
 impl SatisfactionProblem for ShortestCommonSupersequence {}
 
 crate::declare_variants!
 {
-    ShortestCommonSupersequence => "alphabet_size ^ bound",
+    default sat ShortestCommonSupersequence => "alphabet_size ^ bound",
+}
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "shortest_common_supersequence",
+        build: || {
+            let problem =
+                ShortestCommonSupersequence::new(3, vec![vec![0, 1, 2], vec![1, 0, 2]], 4);
+            crate::example_db::specs::satisfaction_example(problem, vec![vec![1, 0, 1, 2]])
+        },
+    }]
 }
 
 #[cfg(test)]
diff --git a/src/models/misc/subset_sum.rs b/src/models/misc/subset_sum.rs
index 38e315da..6fdc6d84 100644
--- a/src/models/misc/subset_sum.rs
+++ b/src/models/misc/subset_sum.rs
@@ -16,6 +16,9 @@ use serde::{Deserialize, Serialize};
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "SubsetSum",
+        display_name: "Subset Sum",
+        aliases: &[],
+        dimensions: &[],
         module_path: module_path!(),
         description: "Find a subset of positive integers that sums to exactly a target value",
         fields: &[
@@ -135,7 +138,7 @@ impl Problem for SubsetSum {
 
 impl SatisfactionProblem for SubsetSum {}
 
 crate::declare_variants!
 {
-    SubsetSum => "2^(num_elements / 2)",
+    default sat SubsetSum => "2^(num_elements / 2)",
 }
 
 mod decimal_biguint {
diff --git a/src/models/mod.rs b/src/models/mod.rs
index 72a1aa2a..e4448805 100644
--- a/src/models/mod.rs
+++ b/src/models/mod.rs
@@ -15,8 +15,8 @@ pub use graph::{
     BicliqueCover, GraphPartitioning, HamiltonianPath, IsomorphicSpanningTree, KColoring, MaxCut,
     MaximalIS, MaximumClique, MaximumIndependentSet, MaximumMatching, MinimumDominatingSet,
     MinimumFeedbackArcSet, MinimumFeedbackVertexSet, MinimumSumMulticenter, MinimumVertexCover,
-    OptimalLinearArrangement, PartitionIntoTriangles, RuralPostman, SpinGlass,
-    SubgraphIsomorphism, TravelingSalesman,
+    OptimalLinearArrangement, PartitionIntoTriangles, RuralPostman, SpinGlass, SubgraphIsomorphism,
+    TravelingSalesman,
 };
 pub use misc::{
     BinPacking, Factoring, FlowShopScheduling, Knapsack, LongestCommonSubsequence, PaintShop,
diff --git a/src/models/set/maximum_set_packing.rs b/src/models/set/maximum_set_packing.rs
index 0b35c75c..505073cc 100644
--- a/src/models/set/maximum_set_packing.rs
+++ b/src/models/set/maximum_set_packing.rs
@@ -3,7 +3,7 @@
 //! The Set Packing problem asks for a maximum weight collection of
 //! pairwise disjoint sets.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, One, SolutionSize, WeightElement};
 use num_traits::Zero;
@@ -13,6 +13,9 @@ use std::collections::HashSet;
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MaximumSetPacking",
+        display_name: "Maximum Set Packing",
+        aliases: &[],
+        dimensions: &[VariantDimension::new("weight", "One", &["One", "i32", "f64"])],
         module_path: module_path!(),
         description: "Find maximum weight collection of disjoint sets",
         fields: &[
@@ -174,9 +177,9 @@ where
 }
 
 crate::declare_variants!
 {
-    MaximumSetPacking => "2^num_sets",
-    MaximumSetPacking => "2^num_sets",
-    MaximumSetPacking => "2^num_sets",
+    default opt MaximumSetPacking => "2^num_sets",
+    opt MaximumSetPacking => "2^num_sets",
+    opt MaximumSetPacking => "2^num_sets",
 }
 
 /// Check if a selection forms a valid set packing (pairwise disjoint).
@@ -211,6 +214,18 @@ pub(crate) fn is_set_packing(sets: &[Vec<usize>], selected: &[bool]) -> bool {
     is_valid_packing(sets, &config)
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "maximum_set_packing_i32",
+        build: || {
+            let problem =
+                MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![1, 2], vec![2, 3], vec![3, 4]]);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 1, 0]])
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../../unit_tests/models/set/maximum_set_packing.rs"]
 mod tests;
diff --git a/src/models/set/minimum_set_covering.rs b/src/models/set/minimum_set_covering.rs
index c37f34d4..1c2801f2 100644
--- a/src/models/set/minimum_set_covering.rs
+++ b/src/models/set/minimum_set_covering.rs
@@ -3,7 +3,7 @@
 //! The Set Covering problem asks for a minimum weight collection of sets
 //! that covers all elements in the universe.
 
-use crate::registry::{FieldInfo, ProblemSchemaEntry};
+use crate::registry::{FieldInfo, ProblemSchemaEntry, VariantDimension};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize, WeightElement};
 use num_traits::Zero;
@@ -13,6 +13,9 @@ use std::collections::HashSet;
 
 inventory::submit! {
     ProblemSchemaEntry {
         name: "MinimumSetCovering",
+        display_name: "Minimum Set Covering",
+        aliases: &[],
+        dimensions: &[VariantDimension::new("weight", "i32", &["i32"])],
         module_path: module_path!(),
         description: "Find minimum weight collection covering the universe",
         fields: &[
@@ -179,7 +182,7 @@ where
 }
 
 crate::declare_variants!
 {
-    MinimumSetCovering => "2^num_sets",
+    default opt MinimumSetCovering => "2^num_sets",
 }
 
 /// Check if a selection of sets forms a valid set cover.
@@ -199,6 +202,18 @@ pub(crate) fn is_set_cover(universe_size: usize, sets: &[Vec<usize>], selected:
     (0..universe_size).all(|e| covered.contains(&e))
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    vec![crate::example_db::specs::ModelExampleSpec {
+        id: "minimum_set_covering_i32",
+        build: || {
+            let problem =
+                MinimumSetCovering::<i32>::new(5, vec![vec![0, 1, 2], vec![1, 3], vec![2, 3, 4]]);
+            crate::example_db::specs::optimization_example(problem, vec![vec![1, 0, 1]])
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../../unit_tests/models/set/minimum_set_covering.rs"]
 mod tests;
diff --git a/src/models/set/mod.rs b/src/models/set/mod.rs
index 0bee7cef..6576097a 100644
--- a/src/models/set/mod.rs
+++ b/src/models/set/mod.rs
@@ -9,3 +9,11 @@ pub(crate) mod minimum_set_covering;
 
 pub use maximum_set_packing::MaximumSetPacking;
 pub use minimum_set_covering::MinimumSetCovering;
+
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_model_example_specs() -> Vec<crate::example_db::specs::ModelExampleSpec> {
+    let mut specs = Vec::new();
+    specs.extend(maximum_set_packing::canonical_model_example_specs());
+    specs.extend(minimum_set_covering::canonical_model_example_specs());
+    specs
+}
diff --git a/src/registry/dyn_problem.rs b/src/registry/dyn_problem.rs
new file mode 100644
index 00000000..4321a280
--- /dev/null
+++ b/src/registry/dyn_problem.rs
@@ -0,0 +1,103 @@
+use serde::Serialize;
+use serde_json::Value;
+use std::any::Any;
+use std::collections::BTreeMap;
+use std::fmt;
+
+use crate::traits::Problem;
+
+/// Type-erased problem interface for dynamic dispatch.
+///
+/// Implemented via blanket impl for any `T: Problem + Serialize + 'static`.
+pub trait DynProblem: Any {
+    /// Evaluate a configuration and return the result as a debug string.
+    fn evaluate_dyn(&self, config: &[usize]) -> String;
+    /// Serialize the problem to a JSON value.
+    fn serialize_json(&self) -> Value;
+    /// Downcast to `&dyn Any` for type recovery.
+    fn as_any(&self) -> &dyn Any;
+    /// Return the configuration space dimensions.
+    fn dims_dyn(&self) -> Vec<usize>;
+    /// Return the problem name (`Problem::NAME`).
+    fn problem_name(&self) -> &'static str;
+    /// Return the variant key-value map.
+    fn variant_map(&self) -> BTreeMap<String, String>;
+    /// Return the number of variables.
+    fn num_variables_dyn(&self) -> usize;
+}
+
+impl<T> DynProblem for T
+where
+    T: Problem + Serialize + 'static,
+    T::Metric: fmt::Debug,
+{
+    fn evaluate_dyn(&self, config: &[usize]) -> String {
+        format!("{:?}", self.evaluate(config))
+    }
+
+    fn serialize_json(&self) -> Value {
+        serde_json::to_value(self).expect("serialize failed")
+    }
+
+    fn as_any(&self) -> &dyn Any {
+        self
+    }
+
+    fn dims_dyn(&self) -> Vec<usize> {
+        self.dims()
+    }
+
+    fn problem_name(&self) -> &'static str {
+        T::NAME
+    }
+
+    fn variant_map(&self) -> BTreeMap<String, String> {
+        T::variant()
+            .into_iter()
+            .map(|(k, v)| (k.to_string(), v.to_string()))
+            .collect()
+    }
+
+    fn num_variables_dyn(&self) -> usize {
+        self.num_variables()
+    }
+}
+
+/// Function pointer type for brute-force solve dispatch.
+pub type SolveFn = fn(&dyn Any) -> Option<(Vec<usize>, String)>;
+
+/// A loaded problem with type-erased solve capability.
+///
+/// Wraps a `Box<dyn DynProblem>` with a brute-force solve function pointer.
+pub struct LoadedDynProblem {
+    inner: Box<dyn DynProblem>,
+    solve_fn: SolveFn,
+}
+
+impl std::fmt::Debug for LoadedDynProblem {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("LoadedDynProblem")
+            .field("name", &self.inner.problem_name())
+            .finish()
+    }
+}
+
+impl LoadedDynProblem {
+    /// Create a new loaded dynamic problem.
+    pub fn new(inner: Box<dyn DynProblem>, solve_fn: SolveFn) -> Self {
+        Self { inner, solve_fn }
+    }
+
+    /// Solve the problem using brute force.
+    pub fn solve_brute_force(&self) -> Option<(Vec<usize>, String)> {
+        (self.solve_fn)(self.inner.as_any())
+    }
+}
+
+impl std::ops::Deref for LoadedDynProblem {
+    type Target = dyn DynProblem;
+
+    fn deref(&self) -> &(dyn DynProblem + 'static) {
+        &*self.inner
+    }
+}
diff --git a/src/registry/mod.rs b/src/registry/mod.rs
index e7bad24e..9732f97b 100644
--- a/src/registry/mod.rs
+++ b/src/registry/mod.rs
@@ -44,10 +44,57 @@
 //! println!("Problem: {}", info.name);
 //! ```
 
+mod dyn_problem;
 mod info;
+pub mod problem_ref;
+pub mod problem_type;
 mod schema;
 pub mod variant;
 
+pub use dyn_problem::{DynProblem, LoadedDynProblem, SolveFn};
 pub use info::{ComplexityClass, FieldInfo, ProblemInfo, ProblemMetadata};
-pub use schema::{collect_schemas, FieldInfoJson, ProblemSchemaEntry, ProblemSchemaJson};
-pub use variant::VariantEntry;
+pub use problem_ref::{parse_catalog_problem_ref, require_graph_variant, ProblemRef};
+pub use problem_type::{find_problem_type, find_problem_type_by_alias, problem_types, ProblemType};
+pub use schema::{
+    collect_schemas, FieldInfoJson, ProblemSchemaEntry, ProblemSchemaJson, VariantDimension,
+};
+pub use variant::{find_variant_entry, VariantEntry};
+
+use std::any::Any;
+use std::collections::BTreeMap;
+
+/// Load a problem from JSON by exact problem name and exact variant map.
+///
+/// No alias resolution or default fallback. Returns `Err` if the entry is not found.
+pub fn load_dyn(
+    name: &str,
+    variant: &BTreeMap<String, String>,
+    data: serde_json::Value,
+) -> Result<LoadedDynProblem, String> {
+    let entry = find_variant_entry(name, variant).ok_or_else(|| {
+        format!(
+            "No registered variant for `{name}` with variant {:?}",
+            variant
+        )
+    })?;
+
+    let inner =
+        (entry.factory)(data).map_err(|e| format!("Failed to deserialize `{name}`: {e}"))?;
+    Ok(LoadedDynProblem::new(inner, entry.solve_fn))
+}
+
+/// Serialize a `&dyn Any` by exact problem name and exact variant map.
+///
+/// Returns `None` if the entry is not found or the downcast fails.
+pub fn serialize_any(
+    name: &str,
+    variant: &BTreeMap<String, String>,
+    any: &dyn Any,
+) -> Option<serde_json::Value> {
+    let entry = find_variant_entry(name, variant)?;
+    (entry.serialize_fn)(any)
+}
+
+#[cfg(test)]
+#[path = "../unit_tests/registry/dispatch.rs"]
+mod dispatch_tests;
diff --git a/src/registry/problem_ref.rs b/src/registry/problem_ref.rs
new file mode 100644
index 00000000..07880e26
--- /dev/null
+++ b/src/registry/problem_ref.rs
@@ -0,0 +1,165 @@
+//! Typed internal problem references with catalog-validated variants.
+
+use super::problem_type::ProblemType;
+use std::collections::BTreeMap;
+
+/// A typed internal reference to a specific problem variant.
+///
+/// Unlike `export::ProblemRef` (a plain DTO), this type validates its
+/// variant dimensions against the catalog at construction time.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct ProblemRef {
+    /// Canonical problem name.
+    name: String,
+    /// Validated variant dimensions.
+    variant: BTreeMap<String, String>,
+}
+
+impl ProblemRef {
+    /// Create a `ProblemRef` from positional values, matching them against
+    /// the problem type's declared dimensions.
+    ///
+    /// Values are matched by checking which dimension's allowed_values contains
+    /// each positional value. Unmatched dimensions are filled with defaults.
+    ///
+    /// # Errors
+    ///
+    /// Returns an error if any value doesn't match a dimension's allowed values.
+    pub fn from_values<I, S>(problem_type: &ProblemType, values: I) -> Result<Self, String>
+    where
+        I: IntoIterator<Item = S>,
+        S: AsRef<str>,
+    {
+        // Start with all defaults
+        let mut variant: BTreeMap<String, String> = problem_type.default_variant();
+        let mut matched_dims: Vec<bool> = vec![false; problem_type.dimensions.len()];
+
+        for value in values {
+            let val = value.as_ref();
+            // Find which dimension this value belongs to
+            let dim_idx = problem_type
+                .dimensions
+                .iter()
+                .enumerate()
+                .find(|(i, dim)| !matched_dims[*i] && dim.allowed_values.contains(&val))
+                .map(|(i, _)| i);
+
+            match dim_idx {
+                Some(idx) => {
+                    matched_dims[idx] = true;
+                    let dim = &problem_type.dimensions[idx];
+                    variant.insert(dim.key.to_string(), val.to_string());
+                }
+                None => {
+                    let known: Vec<&str> = problem_type
+                        .dimensions
+                        .iter()
+                        .flat_map(|d| d.allowed_values.iter().copied())
+                        .collect();
+                    return Err(format!(
+                        "Unknown variant value \"{val}\" for {}. Known variants: {known:?}",
+                        problem_type.canonical_name,
+                    ));
+                }
+            }
+        }
+
+        Ok(Self {
+            name: problem_type.canonical_name.to_string(),
+            variant,
+        })
+    }
+
+    /// Create a `ProblemRef` from an explicit variant map, validating against the catalog.
+    pub fn from_map(
+        problem_type: &ProblemType,
+        variant: BTreeMap<String, String>,
+    ) -> Result<Self, String> {
+        // Validate all keys and values
+        for (key, value) in &variant {
+            let dim = problem_type
+                .dimensions
+                .iter()
+                .find(|d| d.key == key.as_str())
+                .ok_or_else(|| {
+                    format!(
+                        "Unknown dimension \"{key}\" for {}",
+                        problem_type.canonical_name
+                    )
+                })?;
+            if !dim.allowed_values.contains(&value.as_str()) {
+                return Err(format!(
+                    "Unknown value \"{value}\" for dimension \"{key}\" of {}. Known variants: {:?}",
+                    problem_type.canonical_name, dim.allowed_values
+                ));
+            }
+        }
+
+        // Fill in defaults for missing dimensions
+        let mut full_variant = problem_type.default_variant();
+        full_variant.extend(variant);
+
+        Ok(Self {
+            name: problem_type.canonical_name.to_string(),
+            variant: full_variant,
+        })
+    }
+
+    /// Get the canonical problem name.
+    pub fn name(&self) -> &str {
+        &self.name
+    }
+
+    /// Get the validated variant map.
+    pub fn variant(&self) -> &BTreeMap<String, String> {
+        &self.variant
+    }
+
+    /// Convert to an `export::ProblemRef` DTO.
+    pub fn to_export_ref(&self) -> crate::export::ProblemRef {
+        crate::export::ProblemRef {
+            name: self.name.clone(),
+            variant: self.variant.clone(),
+        }
+    }
+}
+
+/// Parse a slash-separated problem spec string against the catalog.
+///
+/// Only validates against catalog schema (names, aliases, dimensions).
+/// Does NOT check reduction graph reachability.
+pub fn parse_catalog_problem_ref(input: &str) -> Result<ProblemRef, String> {
+    let parts: Vec<&str> = input.split('/').collect();
+    let raw_name = parts[0];
+    let values: Vec<&str> = parts[1..].to_vec();
+
+    // Resolve name through catalog
+    let problem_type = super::problem_type::find_problem_type_by_alias(raw_name)
+        .ok_or_else(|| format!("Unknown problem type: \"{raw_name}\""))?;
+
+    let effective_values: Vec<String> = values.iter().map(|s| s.to_string()).collect();
+
+    ProblemRef::from_values(&problem_type, &effective_values)
+}
+
+/// Check whether a catalog-validated `ProblemRef` exists in the reduction graph.
+///
+/// Returns the export DTO if the variant is reachable, or an error describing
+/// which graph variants exist for the problem.
+pub fn require_graph_variant(
+    graph: &crate::rules::ReductionGraph,
+    problem_ref: &ProblemRef,
+) -> Result<crate::export::ProblemRef, String> {
+    let known_variants = graph.variants_for(problem_ref.name());
+    if known_variants.iter().any(|v| v == problem_ref.variant()) {
+        return Ok(problem_ref.to_export_ref());
+    }
+
+    Err(format!(
+        "Variant {:?} of {} is schema-valid but not reachable in the reduction graph. \
+         Known graph variants: {:?}",
+        problem_ref.variant(),
+        problem_ref.name(),
+        known_variants
+    ))
+}
diff --git a/src/registry/problem_type.rs b/src/registry/problem_type.rs
new file mode 100644
index 00000000..5337873c
--- /dev/null
+++ b/src/registry/problem_type.rs
@@ -0,0 +1,80 @@
+//! Problem type catalog: runtime lookup by name, alias, and variant validation.
+
+use super::schema::{ProblemSchemaEntry, VariantDimension};
+use super::FieldInfo;
+use std::collections::BTreeMap;
+
+/// A runtime view of a registered problem type from the catalog.
+#[derive(Debug, Clone)]
+pub struct ProblemType {
+    /// Canonical problem name (e.g., `"MaximumIndependentSet"`).
+    pub canonical_name: &'static str,
+    /// Human-readable display name (e.g., `"Maximum Independent Set"`).
+    pub display_name: &'static str,
+    /// Short aliases (e.g., `["MIS"]`).
+    pub aliases: &'static [&'static str],
+    /// Declared variant dimensions with defaults and allowed values.
+    pub dimensions: &'static [VariantDimension],
+    /// Human-readable description.
+    pub description: &'static str,
+    /// Struct fields.
+    pub fields: &'static [FieldInfo],
+}
+
+impl ProblemType {
+    /// Build a `ProblemType` view from a schema entry.
+    fn from_entry(entry: &'static ProblemSchemaEntry) -> Self {
+        Self {
+            canonical_name: entry.name,
+            display_name: entry.display_name,
+            aliases: entry.aliases,
+            dimensions: entry.dimensions,
+            description: entry.description,
+            fields: entry.fields,
+        }
+    }
+
+    /// Get the default variant map (each dimension set to its default value).
+    pub fn default_variant(&self) -> BTreeMap<String, String> {
+        self.dimensions
+            .iter()
+            .map(|d| (d.key.to_string(), d.default_value.to_string()))
+            .collect()
+    }
+}
+
+/// Find a problem type by exact canonical name.
+pub fn find_problem_type(name: &str) -> Option<ProblemType> {
+    inventory::iter::<ProblemSchemaEntry>
+        .into_iter()
+        .find(|entry| entry.name == name)
+        .map(ProblemType::from_entry)
+}
+
+/// Find a problem type by alias (case-insensitive).
+///
+/// Searches both canonical names and declared aliases.
+pub fn find_problem_type_by_alias(input: &str) -> Option<ProblemType> {
+    let lower = input.to_lowercase();
+    inventory::iter::<ProblemSchemaEntry>
+        .into_iter()
+        .find(|entry| {
+            entry.name.to_lowercase() == lower
+                || entry.aliases.iter().any(|a| a.to_lowercase() == lower)
+        })
+        .map(ProblemType::from_entry)
+}
+
+/// Return all registered problem types.
+pub fn problem_types() -> Vec<ProblemType> {
+    let mut types: Vec<ProblemType> = inventory::iter::<ProblemSchemaEntry>
+        .into_iter()
+        .map(ProblemType::from_entry)
+        .collect();
+    types.sort_by_key(|t| t.canonical_name);
+    types
+}
+
+#[cfg(test)]
+#[path = "../unit_tests/registry/problem_type.rs"]
+mod tests;
diff --git a/src/registry/schema.rs b/src/registry/schema.rs
index 4a362ec1..d78cebfa 100644
--- a/src/registry/schema.rs
+++ b/src/registry/schema.rs
@@ -3,10 +3,45 @@
 use super::FieldInfo;
 use serde::Serialize;
 
+/// A declared variant dimension for a problem type.
+///
+/// Describes one axis of variation (e.g., graph type, weight type) with
+/// its default value and the set of allowed values.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct VariantDimension {
+    /// Dimension key (e.g., `"graph"`, `"weight"`, `"k"`).
+    pub key: &'static str,
+    /// Default value for this dimension (e.g., `"SimpleGraph"`).
+    pub default_value: &'static str,
+    /// All allowed values for this dimension.
+    pub allowed_values: &'static [&'static str],
+}
+
+impl VariantDimension {
+    /// Create a new variant dimension.
+    pub const fn new(
+        key: &'static str,
+        default_value: &'static str,
+        allowed_values: &'static [&'static str],
+    ) -> Self {
+        Self {
+            key,
+            default_value,
+            allowed_values,
+        }
+    }
+}
+
 /// A registered problem schema entry for static inventory registration.
 pub struct ProblemSchemaEntry {
     /// Problem name (e.g., "MaximumIndependentSet").
     pub name: &'static str,
+    /// Human-readable display name (e.g., "Maximum Independent Set").
+    pub display_name: &'static str,
+    /// Short aliases for CLI/MCP lookup (e.g., `&["MIS"]`).
+    pub aliases: &'static [&'static str],
+    /// Declared variant dimensions with defaults and allowed values.
+    pub dimensions: &'static [VariantDimension],
     /// Module path from `module_path!()` (e.g., "problemreductions::models::graph::maximum_independent_set").
     pub module_path: &'static str,
     /// Human-readable description.
diff --git a/src/registry/variant.rs b/src/registry/variant.rs
index c5b25a7d..a4e6fd35 100644
--- a/src/registry/variant.rs
+++ b/src/registry/variant.rs
@@ -1,6 +1,9 @@
 //! Explicit variant registration via inventory.
 
 use std::any::Any;
+use std::collections::BTreeMap;
+
+use crate::registry::dyn_problem::{DynProblem, SolveFn};
 
 /// A registered problem variant entry.
 ///
@@ -17,6 +20,14 @@ pub struct VariantEntry {
     /// Takes a `&dyn Any` (must be `&ProblemType`), calls getter methods directly,
     /// and returns the estimated worst-case time as f64.
     pub complexity_eval_fn: fn(&dyn Any) -> f64,
+    /// Whether this entry is the declared default variant for its problem.
+    pub is_default: bool,
+    /// Factory: deserialize JSON into a boxed dynamic problem.
+    pub factory: fn(serde_json::Value) -> Result<Box<dyn DynProblem>, serde_json::Error>,
+    /// Serialize: downcast `&dyn Any` and serialize to JSON.
+    pub serialize_fn: fn(&dyn Any) -> Option<serde_json::Value>,
+    /// Solve: downcast `&dyn Any` and brute-force solve.
+    pub solve_fn: SolveFn,
 }
 
 impl VariantEntry {
@@ -24,6 +35,25 @@
     pub fn variant(&self) -> Vec<(&'static str, &'static str)> {
         (self.variant_fn)()
     }
+
+    /// Get the variant as a `BTreeMap`.
+    pub fn variant_map(&self) -> BTreeMap<String, String> {
+        self.variant()
+            .into_iter()
+            .map(|(k, v)| (k.to_string(), v.to_string()))
+            .collect()
+    }
+}
+
+/// Find a variant entry by exact problem name and exact variant map.
+///
+/// No alias resolution or default fallback. Both `name` and `variant` must match exactly.
+pub fn find_variant_entry(
+    name: &str,
+    variant: &BTreeMap<String, String>,
+) -> Option<&'static VariantEntry> {
+    inventory::iter::<VariantEntry>()
+        .find(|entry| entry.name == name && entry.variant_map() == *variant)
 }
 
 impl std::fmt::Debug for VariantEntry {
diff --git a/src/rules/analysis.rs b/src/rules/analysis.rs
index d16a1603..9dac694e 100644
--- a/src/rules/analysis.rs
+++ b/src/rules/analysis.rs
@@ -11,7 +11,7 @@
 use crate::canonical::canonical_form;
 use crate::expr::Expr;
 use crate::rules::graph::{ReductionGraph, ReductionPath};
 use crate::rules::registry::ReductionOverhead;
-use std::collections::BTreeMap;
+use std::collections::{BTreeMap, BTreeSet};
 use std::fmt;
 
 /// Result of comparing one primitive rule against one composite path.
@@ -448,6 +448,262 @@ fn all_edges(graph: &ReductionGraph)
+
+/// Result of checking reduction graph connectivity.
+#[derive(Debug, Clone)]
+pub struct ConnectivityReport {
+    /// Total number of problem types.
+    pub total_types: usize,
+    /// Total number of reductions.
+    pub total_reductions: usize,
+    /// Isolated problem types (no reductions in or out).
+    pub isolated: Vec<IsolatedProblem>,
+    /// Connected components (sorted largest first). Each component is a sorted
+    /// list of problem type names.
+    pub components: Vec<Vec<&'static str>>,
+}
+
+/// An isolated problem type with its variant count.
+#[derive(Debug, Clone)]
+pub struct IsolatedProblem {
+    pub name: &'static str,
+    pub num_variants: usize,
+    /// Per-variant complexity strings (if available).
+    pub variant_complexities: Vec<(BTreeMap<String, String>, Option<String>)>,
+}
+
+/// Check reduction graph connectivity: find isolated problems and connected components.
+pub fn check_connectivity(graph: &ReductionGraph) -> ConnectivityReport {
+    let mut types = graph.problem_types();
+    types.sort();
+
+    // Build undirected adjacency at the problem-type level
+    let mut adj: BTreeMap<&str, BTreeSet<&str>> = BTreeMap::new();
+    for &name in &types {
+        adj.entry(name).or_default();
+        for edge in graph.outgoing_reductions(name) {
+            adj.entry(name).or_default().insert(edge.target_name);
+            adj.entry(edge.target_name).or_default().insert(name);
+        }
+    }
+
+    // Find connected components via BFS
+    let mut visited: BTreeSet<&str> = BTreeSet::new();
+    let mut components: Vec<Vec<&str>> = Vec::new();
+
+    for &name in &types {
+        if visited.contains(name) {
+            continue;
+        }
+        let mut component = Vec::new();
+        let mut queue = std::collections::VecDeque::new();
+        queue.push_back(name);
+        visited.insert(name);
+
+        while let Some(current) = queue.pop_front() {
+            component.push(current);
+            if let Some(neighbors) = adj.get(current) {
+                for &neighbor in neighbors {
+                    if visited.insert(neighbor) {
+                        queue.push_back(neighbor);
+                    }
+                }
+            }
+        }
+        component.sort();
+        components.push(component);
+    }
+
+    components.sort_by_key(|c| std::cmp::Reverse(c.len()));
+
+    let isolated: Vec<IsolatedProblem> = types
+        .iter()
+        .copied()
+        .filter(|name| adj.get(name).is_some_and(|n| n.is_empty()))
+        .map(|name| {
+            let variants = graph.variants_for(name);
+            let variant_complexities = variants
+                .iter()
+                .map(|v| {
+                    let c = graph.variant_complexity(name, v).map(|e| e.to_string());
+                    (v.clone(), c)
+                })
+                .collect();
+            IsolatedProblem {
+                name,
+                num_variants: variants.len(),
+                variant_complexities,
+            }
+        })
+        .collect();
+
+    ConnectivityReport {
+        total_types: types.len(),
+        total_reductions: graph.num_reductions(),
+        isolated,
+        components,
+    }
+}
+
+/// Classification of a problem type that is unreachable from 3-SAT.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub enum UnreachableReason {
+    /// Known to be solvable in polynomial time.
+    InP,
+    /// Intermediate complexity (e.g., Factoring — believed neither in P nor NP-complete).
+    Intermediate,
+    /// No reductions at all (orphan).
+    Orphan,
+    /// NP-hard but missing a proof chain from 3-SAT.
+    MissingProofChain,
+}
+
+/// A problem type not reachable from 3-SAT via directed reduction paths.
+#[derive(Debug, Clone)]
+pub struct UnreachableProblem {
+    pub name: &'static str,
+    pub reason: UnreachableReason,
+    pub outgoing_count: usize,
+    pub incoming_count: usize,
+}
+
+/// Result of checking NP-hardness proof chains from 3-SAT.
+#[derive(Debug, Clone)]
+pub struct ReachabilityReport {
+    /// Total number of problem types.
+    pub total_types: usize,
+    /// Problem types reachable from 3-SAT, with minimum hop distance.
+    pub reachable: BTreeMap<&'static str, usize>,
+    /// Problem types not reachable, classified by reason.
+    pub unreachable: Vec<UnreachableProblem>,
+}
+
+impl ReachabilityReport {
+    /// Returns only the problems that are NP-hard but missing a proof chain.
+    pub fn missing_proof_chains(&self) -> Vec<&UnreachableProblem> {
+        self.unreachable
+            .iter()
+            .filter(|p| p.reason == UnreachableReason::MissingProofChain)
+            .collect()
+    }
+}
+
+/// Check which problems are reachable from 3-SAT (KSatisfiability) via directed
+/// reduction paths. Problems without such a path are classified as P-time,
+/// intermediate, orphan, or missing a proof chain.
+pub fn check_reachability_from_3sat(graph: &ReductionGraph) -> ReachabilityReport {
+    const SOURCE: &str = "KSatisfiability";
+
+    let mut types = graph.problem_types();
+    types.sort();
+
+    // Build directed adjacency at the type level
+    let mut adj: BTreeMap<&str, BTreeSet<&str>> = BTreeMap::new();
+    for &name in &types {
+        adj.entry(name).or_default();
+        for edge in graph.outgoing_reductions(name) {
+            adj.entry(name).or_default().insert(edge.target_name);
+        }
+    }
+
+    // BFS from 3-SAT following directed edges
+    let mut reachable: BTreeMap<&'static str, usize> = BTreeMap::new();
+    let mut queue: std::collections::VecDeque<(&str, usize)> = std::collections::VecDeque::new();
+    reachable.insert(SOURCE, 0);
+    queue.push_back((SOURCE, 0));
+
+    while let Some((current, hops)) = queue.pop_front() {
+        if let Some(neighbors) = adj.get(current) {
+            for &neighbor in neighbors {
+                if !reachable.contains_key(neighbor) {
+                    reachable.insert(neighbor, hops + 1);
+                    queue.push_back((neighbor, hops + 1));
+                }
+            }
+        }
+    }
+
+    // Known P-time problems and variants
+    let p_time_checks: &[(&str, Option<(&str, &str)>)] = &[
+        ("MaximumMatching", None),
+        ("KSatisfiability", Some(("k", "K2"))),
+        ("KColoring", Some(("graph", "SimpleGraph"))),
+    ];
+
+    let intermediate_names: &[&str] = &["Factoring"];
+
+    let mut unreachable_problems: Vec<UnreachableProblem> = Vec::new();
+
+    for &name in &types {
+        if reachable.contains_key(name) {
+            continue;
+        }
+
+        let out_count = graph.outgoing_reductions(name).len();
+        let in_count = graph.incoming_reductions(name).len();
+
+        // Orphan?
+        if out_count == 0 && in_count == 0 {
+            unreachable_problems.push(UnreachableProblem {
+                name,
+                reason: UnreachableReason::Orphan,
+                outgoing_count: 0,
+                incoming_count: 0,
+            });
+            continue;
+        }
+
+        // Known P-time?
+        let is_p = p_time_checks.iter().any(|(pname, variant_check)| {
+            if *pname != name {
+                return false;
+            }
+            match variant_check {
+                None => true,
+                Some((key, val)) => {
+                    let variants = graph.variants_for(name);
+                    variants.len() == 1 && variants[0].get(*key).map(|s| s.as_str()) == Some(*val)
+                }
+            }
+        });
+        if is_p {
+            unreachable_problems.push(UnreachableProblem {
+                name,
+                reason: UnreachableReason::InP,
+                outgoing_count: out_count,
+                incoming_count: in_count,
+            });
+            continue;
+        }
+
+        // Known intermediate?
+        if intermediate_names.contains(&name) {
+            unreachable_problems.push(UnreachableProblem {
+                name,
+                reason: UnreachableReason::Intermediate,
+                outgoing_count: out_count,
+                incoming_count: in_count,
+            });
+            continue;
+        }
+
+        // NP-hard but missing proof chain
+        unreachable_problems.push(UnreachableProblem {
+            name,
+            reason: UnreachableReason::MissingProofChain,
+            outgoing_count: out_count,
+            incoming_count: in_count,
+        });
+    }
+
+    ReachabilityReport {
+        total_types: types.len(),
+        reachable,
+        unreachable: unreachable_problems,
+    }
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/analysis.rs"]
 mod tests;
diff --git a/src/rules/binpacking_ilp.rs b/src/rules/binpacking_ilp.rs
index b44e29da..6a5199c0 100644
--- a/src/rules/binpacking_ilp.rs
+++ b/src/rules/binpacking_ilp.rs
@@ -96,6 +96,19 @@ impl ReduceTo> for BinPacking {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "binpacking_to_ilp",
+        build: || {
+            crate::example_db::specs::direct_ilp_example::<_, bool, _>(
+                BinPacking::new(vec![6, 5, 5, 4, 3], 10),
+                |_, _| true,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/binpacking_ilp.rs"]
 mod tests;
diff --git a/src/rules/circuit_ilp.rs b/src/rules/circuit_ilp.rs
index 1bf97af8..d6f2d730 100644
--- a/src/rules/circuit_ilp.rs
+++ b/src/rules/circuit_ilp.rs
@@ -220,6 +220,47 @@ impl ReduceTo> for CircuitSAT {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::formula::{Assignment, BooleanExpr, Circuit};
+
+    fn full_adder_circuit_sat() -> CircuitSAT {
+        let circuit = Circuit::new(vec![
+            Assignment::new(
+                vec!["t".to_string()],
+                BooleanExpr::xor(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]),
+            ),
+            Assignment::new(
+                vec!["sum".to_string()],
+                BooleanExpr::xor(vec![BooleanExpr::var("t"), BooleanExpr::var("cin")]),
+            ),
+            Assignment::new(
+                vec!["ab".to_string()],
+                BooleanExpr::and(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]),
+            ),
+            Assignment::new(
+                vec!["cin_t".to_string()],
+                BooleanExpr::and(vec![BooleanExpr::var("cin"), BooleanExpr::var("t")]),
+            ),
+            Assignment::new(
+                vec!["cout".to_string()],
+                BooleanExpr::or(vec![BooleanExpr::var("ab"), BooleanExpr::var("cin_t")]),
+            ),
+        ]);
+        CircuitSAT::new(circuit)
+    }
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "circuitsat_to_ilp",
+        build: || {
+            crate::example_db::specs::direct_best_example::<_, crate::models::algebraic::ILP, _>(
+                full_adder_circuit_sat(),
+                crate::example_db::specs::keep_bool_source,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/circuit_ilp.rs"]
 mod tests;
diff --git a/src/rules/circuit_spinglass.rs b/src/rules/circuit_spinglass.rs
index 86473227..a22a422c 100644
--- a/src/rules/circuit_spinglass.rs
+++ b/src/rules/circuit_spinglass.rs
@@ -439,6 +439,47 @@ impl ReduceTo> for CircuitSAT {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::formula::{Assignment, BooleanExpr, Circuit, CircuitSAT};
+
+    fn full_adder_circuit_sat() -> CircuitSAT {
+        let circuit = Circuit::new(vec![
+            Assignment::new(
+                vec!["t".to_string()],
+                BooleanExpr::xor(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]),
+            ),
+            Assignment::new(
+                vec!["sum".to_string()],
+                BooleanExpr::xor(vec![BooleanExpr::var("t"), BooleanExpr::var("cin")]),
+            ),
+            Assignment::new(
+                vec!["ab".to_string()],
+                BooleanExpr::and(vec![BooleanExpr::var("a"), BooleanExpr::var("b")]),
+            ),
+            Assignment::new(
+                vec!["cin_t".to_string()],
+                BooleanExpr::and(vec![BooleanExpr::var("cin"), BooleanExpr::var("t")]),
+            ),
+            Assignment::new(
+                vec!["cout".to_string()],
+                BooleanExpr::or(vec![BooleanExpr::var("ab"), BooleanExpr::var("cin_t")]),
+            ),
+        ]);
+        CircuitSAT::new(circuit)
+    }
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "circuitsat_to_spinglass",
+        build: || {
+            crate::example_db::specs::direct_best_example::<_, SpinGlass, _>(
+                full_adder_circuit_sat(),
+                crate::example_db::specs::keep_bool_source,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/circuit_spinglass.rs"]
 mod tests;
diff --git a/src/rules/coloring_ilp.rs b/src/rules/coloring_ilp.rs
index 2f0b9dfa..cf372f89 100644
--- a/src/rules/coloring_ilp.rs
+++ b/src/rules/coloring_ilp.rs
@@ -137,6 +137,23 @@ macro_rules! impl_kcoloring_to_ilp {
 
 impl_kcoloring_to_ilp!(K1, K2, K3, K4);
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::topology::SimpleGraph;
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "kcoloring_to_ilp",
+        build: || {
+            let (n, edges) = crate::topology::small_graphs::petersen();
+            let source = KColoring::<K3>::with_k(SimpleGraph::new(n, edges), 3);
+            crate::example_db::specs::direct_ilp_example::<_, bool, _>(
+                source,
+                crate::example_db::specs::keep_bool_source,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/coloring_ilp.rs"]
 mod tests;
diff --git a/src/rules/coloring_qubo.rs b/src/rules/coloring_qubo.rs
index e94975f0..03613b46 100644
--- a/src/rules/coloring_qubo.rs
+++ b/src/rules/coloring_qubo.rs
@@ -127,6 +127,23 @@ macro_rules! impl_kcoloring_to_qubo {
 
 impl_kcoloring_to_qubo!(K2, K3);
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::algebraic::QUBO;
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "kcoloring_to_qubo",
+        build: || {
+            let (n, edges) = crate::topology::small_graphs::house();
+            let source = KColoring::<K3>::with_k(SimpleGraph::new(n, edges), 3);
+            crate::example_db::specs::direct_best_example::<_, QUBO, _>(
+                source,
+                crate::example_db::specs::keep_bool_source,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/coloring_qubo.rs"]
 mod tests;
diff --git a/src/rules/factoring_circuit.rs b/src/rules/factoring_circuit.rs
index 1400a032..4508dea4 100644
--- a/src/rules/factoring_circuit.rs
+++ b/src/rules/factoring_circuit.rs
@@ -267,6 +267,70 @@ impl ReduceTo for Factoring {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::export::SolutionPair;
+    use crate::prelude::{ReduceTo, ReductionResult};
+    use crate::solvers::BruteForce;
+    use std::collections::HashMap;
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "factoring_to_circuitsat",
+        build: || {
+            fn simulate_circuit(
+                circuit: &crate::models::formula::Circuit,
+                initial_assignments: &HashMap<String, bool>,
+            ) -> HashMap<String, bool> {
+                let mut values = initial_assignments.clone();
+                for assignment in &circuit.assignments {
+                    let result = assignment.expr.evaluate(&values);
+                    for output in &assignment.outputs {
+                        values.insert(output.clone(), result);
+                    }
+                }
+                values
+            }
+
+            let source = Factoring::new(3, 3, 35);
+            let reduction = ReduceTo::<CircuitSAT>::reduce_to(&source);
+            let target = reduction.target_problem();
+            let source_solutions = BruteForce::new().find_all_best(&source);
+            let var_names = target.variable_names();
+            let solutions = source_solutions
+                .into_iter()
+                .map(|source_config| {
+                    let mut inputs: HashMap<String, bool> = HashMap::new();
+                    for (i, &bit) in source_config.iter().enumerate().take(source.m()) {
+                        inputs.insert(format!("p{}", i + 1), bit == 1);
+                    }
+                    for (i, &bit) in source_config[source.m()..]
+                        .iter()
+                        .enumerate()
+                        .take(source.n())
+                    {
+                        inputs.insert(format!("q{}", i + 1), bit == 1);
+                    }
+                    let values = simulate_circuit(target.circuit(), &inputs);
+                    let target_config = var_names
+                        .iter()
+                        .map(|name| usize::from(*values.get(name).unwrap_or(&false)))
+                        .collect();
+                    SolutionPair {
+                        source_config,
+                        target_config,
+                    }
+                })
+                .collect();
+            crate::example_db::specs::assemble_rule_example(
+                &source,
+                target,
+                crate::example_db::specs::direct_overhead::<Factoring, CircuitSAT>(),
+                solutions,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/factoring_circuit.rs"]
 mod tests;
diff --git a/src/rules/factoring_ilp.rs b/src/rules/factoring_ilp.rs
index 1184e2cc..37d95d32 100644
--- a/src/rules/factoring_ilp.rs
+++ b/src/rules/factoring_ilp.rs
@@ -221,6 +221,19 @@ impl ReduceTo> for Factoring {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "factoring_to_ilp",
+        build: || {
+            crate::example_db::specs::direct_ilp_example::<_, i32, _>(
+                Factoring::new(3, 3, 35),
+                |_, _| true,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/factoring_ilp.rs"]
 mod tests;
diff --git a/src/rules/graph.rs b/src/rules/graph.rs
index 3c964069..dc54d8b2 100644
--- a/src/rules/graph.rs
+++ b/src/rules/graph.rs
@@ -266,6 +266,8 @@ pub struct ReductionGraph {
     nodes: Vec,
     /// Map from base type name to all NodeIndex values for that name.
     name_to_nodes: HashMap<&'static str, Vec<NodeIndex>>,
+    /// Declared default variant for each problem name.
+    default_variants: HashMap<String, BTreeMap<String, String>>,
 }
 
 impl ReductionGraph {
@@ -305,18 +307,24 @@
             }
         };
 
+        // Collect declared default variants from VariantEntry inventory
+        let mut default_variants: HashMap<String, BTreeMap<String, String>> = HashMap::new();
+
         // Phase 1: Build nodes from VariantEntry inventory
         for entry in inventory::iter::<VariantEntry> {
             let variant = Self::variant_to_map(&entry.variant());
             ensure_node(
                 entry.name,
-                variant,
+                variant.clone(),
                 entry.complexity,
                 &mut nodes,
                 &mut graph,
                 &mut node_index,
                 &mut name_to_nodes,
             );
+            if entry.is_default {
+                default_variants.insert(entry.name.to_string(), variant);
+            }
         }
 
         // Phase 2: Build edges from ReductionEntry inventory
@@ -362,6 +370,7 @@
             graph,
             nodes,
             name_to_nodes,
+            default_variants,
         }
     }
 
@@ -516,6 +525,41 @@
         .collect()
     }
 
+    /// Find up to `limit` simple paths between two specific problem variants.
+    ///
+    /// Like [`find_all_paths`](Self::find_all_paths) but stops enumeration after
+    /// collecting `limit` paths. This avoids combinatorial explosion on dense graphs.
+    pub fn find_paths_up_to(
+        &self,
+        source: &str,
+        source_variant: &BTreeMap<String, String>,
+        target: &str,
+        target_variant: &BTreeMap<String, String>,
+        limit: usize,
+    ) -> Vec<ReductionPath> {
+        let src = match self.lookup_node(source, source_variant) {
+            Some(idx) => idx,
+            None => return vec![],
+        };
+        let dst = match self.lookup_node(target, target_variant) {
+            Some(idx) => idx,
+            None => return vec![],
+        };
+
+        let paths: Vec<Vec<NodeIndex>> = all_simple_paths::<
+            Vec<NodeIndex>,
+            _,
+            std::hash::RandomState,
+        >(&self.graph, src, dst, 0, None)
+        .take(limit)
+        .collect();
+
+        paths
+            .iter()
+            .map(|p| self.node_path_to_reduction_path(p))
+            .collect()
+    }
+
     /// Check if a direct reduction exists from S to T.
     pub fn has_direct_reduction(
         &self,
@@ -642,6 +686,16 @@
         variants
     }
 
+    /// Get the declared default variant for a problem type.
+    ///
+    /// Returns the variant that was marked `default` in `declare_variants!`.
+    /// If no entry was explicitly marked `default`, the first registered variant
+    /// for the problem is used as the implicit default.
+    /// Returns `None` if the problem type is not registered.
+    pub fn default_variant_for(&self, name: &str) -> Option<BTreeMap<String, String>> {
+        self.default_variants.get(name).cloned()
+    }
+
     /// Get the complexity expression for a specific variant.
     pub fn variant_complexity(
         &self,
@@ -1044,24 +1098,19 @@
     }
 
-    /// Find the best matching `ReductionEntry` for a (source_name, target_name) pair
-    /// given the caller's current source variant.
+    /// Find the matching `ReductionEntry` for a (source_name, target_name) pair
+    /// given exact source and target variants.
     ///
-    /// First tries an exact match on the source variant. If no exact match is found,
-    /// falls back to a name-only match (returning the first entry whose source and
-    /// target names match). This is intentional: specific variants (e.g., `K3`) may
-    /// not have their own `#[reduction]` entry, but the general variant (`KN`) covers
-    /// them with the same overhead expression. The fallback is safe because cross-name
-    /// reductions share the same overhead regardless of source variant; it is only
-    /// used by the JSON export pipeline (`export::lookup_overhead`).
+    /// Returns `Some(MatchedEntry)` only when both the source and target variants
+    /// match exactly. No fallback is attempted — callers that need fuzzy matching
+    /// should resolve variants before calling this method.
     pub fn find_best_entry(
         &self,
         source_name: &str,
+        source_variant: &BTreeMap<String, String>,
         target_name: &str,
-        current_variant: &BTreeMap<String, String>,
+        target_variant: &BTreeMap<String, String>,
     ) -> Option<MatchedEntry> {
-        let mut fallback: Option<MatchedEntry> = None;
-
         for entry in inventory::iter::<ReductionEntry> {
             if entry.source_name != source_name || entry.target_name != target_name {
                 continue;
             }
 
             let entry_source = Self::variant_to_map(&entry.source_variant());
             let entry_target = Self::variant_to_map(&entry.target_variant());
 
-            // Exact match on source variant — return immediately
-            if current_variant == &entry_source {
+            // Exact match on both source and target variant
+            if source_variant == &entry_source && target_variant == &entry_target {
                 return Some(MatchedEntry {
                     source_variant: entry_source,
                     target_variant: entry_target,
                     overhead: entry.overhead(),
                 });
             }
-
-            // Remember the first name-only match as a fallback
-            if fallback.is_none() {
-                fallback = Some(MatchedEntry {
-                    source_variant: entry_source,
-                    target_variant: entry_target,
-                    overhead: entry.overhead(),
-                });
-            }
         }
 
-        fallback
+        None
     }
 }
diff --git a/src/rules/ilp_qubo.rs b/src/rules/ilp_qubo.rs
index 8eb9fb80..44855967 100644
--- a/src/rules/ilp_qubo.rs
+++ b/src/rules/ilp_qubo.rs
@@ -167,6 +167,31 @@ impl ReduceTo> for ILP {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::algebraic::{LinearConstraint, ObjectiveSense};
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "ilp_to_qubo",
+        build: || {
+            let source = ILP::new(
+                6,
+                vec![
+                    LinearConstraint::le(
+                        vec![(0, 3.0), (1, 2.0), (2, 5.0), (3, 4.0), (4, 2.0), (5, 3.0)],
+                        10.0,
+                    ),
+                    LinearConstraint::le(vec![(0, 1.0), (1, 1.0), (2, 1.0)], 2.0),
+                    LinearConstraint::le(vec![(3, 1.0), (4, 1.0), (5, 1.0)], 2.0),
+                ],
+                vec![(0, 10.0), (1, 7.0), (2, 12.0), (3, 8.0), (4, 6.0), (5, 9.0)],
+                ObjectiveSense::Maximize,
+            );
+            crate::example_db::specs::direct_best_example::<_, QUBO, _>(source, |_, _| true)
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/ilp_qubo.rs"]
 mod tests;
diff --git a/src/rules/ksatisfiability_qubo.rs b/src/rules/ksatisfiability_qubo.rs
index 7bf640da..565ce867 100644
--- a/src/rules/ksatisfiability_qubo.rs
+++ b/src/rules/ksatisfiability_qubo.rs
@@ -324,6 +324,34 @@ impl ReduceTo> for KSatisfiability {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::algebraic::QUBO;
+    use crate::models::formula::CNFClause;
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "ksatisfiability_to_qubo",
+        build: || {
+            let source = KSatisfiability::<K3>::new(
+                5,
+                vec![
+                    CNFClause::new(vec![1, 2, -3]),
+                    CNFClause::new(vec![-1, 3, 4]),
+                    CNFClause::new(vec![2, -4, 5]),
+                    CNFClause::new(vec![-2, 3, -5]),
+                    CNFClause::new(vec![1, -3, 5]),
+                    CNFClause::new(vec![-1, -2, 4]),
+                    CNFClause::new(vec![3, -4, -5]),
+                ],
+            );
+            crate::example_db::specs::direct_best_example::<_, QUBO, _>(
+                source,
+                crate::example_db::specs::keep_bool_source,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/ksatisfiability_qubo.rs"]
 mod tests;
diff --git a/src/rules/ksatisfiability_subsetsum.rs b/src/rules/ksatisfiability_subsetsum.rs
index 508ec077..75e1a5a0 100644
--- a/src/rules/ksatisfiability_subsetsum.rs
+++ b/src/rules/ksatisfiability_subsetsum.rs
@@ -136,6 +136,29 @@ impl ReduceTo for KSatisfiability {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::formula::CNFClause;
+    use crate::models::misc::SubsetSum;
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "ksatisfiability_to_subsetsum",
+        build: || {
+            let source = KSatisfiability::<K3>::new(
+                3,
+                vec![
+                    CNFClause::new(vec![1, 2, 3]),
+                    CNFClause::new(vec![-1, -2, 3]),
+                ],
+            );
+            crate::example_db::specs::direct_satisfying_example::<_, SubsetSum, _>(
+                source,
+                |_, _| true,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/ksatisfiability_subsetsum.rs"]
 mod tests;
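Aside on the positional variant matching used by `ProblemRef::from_values` above: each positional value claims the first unmatched dimension whose allowed set contains it, and unmatched dimensions keep their defaults. A self-contained sketch of that strategy follows; the `Dim` struct and the dimension values here are illustrative stand-ins, not the crate's actual catalog types.

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for the catalog's VariantDimension
// (the real crate registers these statically via inventory).
struct Dim {
    key: &'static str,
    default_value: &'static str,
    allowed_values: &'static [&'static str],
}

/// Match positional values against dimensions: each value claims the first
/// unmatched dimension whose allowed set contains it; defaults fill the rest.
fn match_values(dims: &[Dim], values: &[&str]) -> Result<BTreeMap<String, String>, String> {
    // Start from all defaults.
    let mut variant: BTreeMap<String, String> = dims
        .iter()
        .map(|d| (d.key.to_string(), d.default_value.to_string()))
        .collect();
    let mut matched = vec![false; dims.len()];
    for &val in values {
        let idx = dims
            .iter()
            .enumerate()
            .find(|(i, d)| !matched[*i] && d.allowed_values.contains(&val))
            .map(|(i, _)| i);
        match idx {
            Some(i) => {
                matched[i] = true;
                variant.insert(dims[i].key.to_string(), val.to_string());
            }
            None => return Err(format!("Unknown variant value \"{val}\"")),
        }
    }
    Ok(variant)
}

fn main() {
    let dims = [
        Dim { key: "graph", default_value: "SimpleGraph", allowed_values: &["SimpleGraph", "GridGraph"] },
        Dim { key: "weight", default_value: "One", allowed_values: &["One", "i32"] },
    ];
    // A single positional value claims the weight dimension; graph keeps its default.
    let v = match_values(&dims, &["i32"]).unwrap();
    assert_eq!(v["graph"], "SimpleGraph");
    assert_eq!(v["weight"], "i32");
    // A value outside every allowed set is rejected.
    assert!(match_values(&dims, &["Hypergraph"]).is_err());
    println!("ok");
}
```

Because matching is by allowed-value membership rather than position, `pred`-style specs like `MIS/i32` can omit leading dimensions entirely.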
diff --git a/src/rules/longestcommonsubsequence_ilp.rs b/src/rules/longestcommonsubsequence_ilp.rs
index 4634045b..3394c73a 100644
--- a/src/rules/longestcommonsubsequence_ilp.rs
+++ b/src/rules/longestcommonsubsequence_ilp.rs
@@ -70,10 +70,12 @@ impl ReductionResult for ReductionLCSToILP {
     }
 }
 
-#[reduction(overhead = {
+#[reduction(
+    overhead = {
     num_vars = "num_chars_first * num_chars_second",
     num_constraints = "num_chars_first + num_chars_second + (num_chars_first * num_chars_second) ^ 2",
-})]
+    }
+)]
 impl ReduceTo> for LongestCommonSubsequence {
     type Result = ReductionLCSToILP;
 
@@ -158,6 +160,20 @@ impl ReduceTo> for LongestCommonSubsequence {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "longestcommonsubsequence_to_ilp",
+        build: || {
+            let source = LongestCommonSubsequence::new(vec![
+                vec![b'A', b'B', b'A', b'C'],
+                vec![b'B', b'A', b'C', b'A'],
+            ]);
+            crate::example_db::specs::direct_ilp_example::<_, bool, _>(source, |_, _| true)
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/longestcommonsubsequence_ilp.rs"]
 mod tests;
diff --git a/src/rules/maximumclique_ilp.rs b/src/rules/maximumclique_ilp.rs
index ed950dee..0a77d62e 100644
--- a/src/rules/maximumclique_ilp.rs
+++ b/src/rules/maximumclique_ilp.rs
@@ -78,6 +78,18 @@ impl ReduceTo> for MaximumClique {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "maximumclique_to_ilp",
+        build: || {
+            let (n, edges) = crate::topology::small_graphs::octahedral();
+            let source = MaximumClique::new(SimpleGraph::new(n, edges), vec![1i32; 6]);
+            crate::example_db::specs::direct_ilp_example::<_, bool, _>(source, |_, _| true)
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/maximumclique_ilp.rs"]
 mod tests;
diff --git a/src/rules/maximumclique_maximumindependentset.rs b/src/rules/maximumclique_maximumindependentset.rs
index 1d29579c..94d14754 100644
--- a/src/rules/maximumclique_maximumindependentset.rs
+++ b/src/rules/maximumclique_maximumindependentset.rs
@@ -66,6 +66,24 @@ impl ReduceTo for MaximumClique
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "maximumclique_to_maximumindependentset",
+        build: || {
+            let source = MaximumClique::new(
+                SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]),
+                vec![1i32; 4],
+            );
+            crate::example_db::specs::direct_best_example::<
+                _,
+                MaximumIndependentSet<SimpleGraph, i32>,
+                _,
+            >(source, |_, _| true)
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/maximumclique_maximumindependentset.rs"]
 mod tests;
diff --git a/src/rules/maximumindependentset_maximumclique.rs b/src/rules/maximumindependentset_maximumclique.rs
index 763b9d22..1523ba7d 100644
--- a/src/rules/maximumindependentset_maximumclique.rs
+++ b/src/rules/maximumindependentset_maximumclique.rs
@@ -61,6 +61,56 @@ impl ReduceTo for MaximumIndependentSet
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::algebraic::QUBO;
+    use crate::rules::{Minimize, MinimizeSteps};
+    use crate::types::ProblemSize;
+
+    fn mis_petersen() -> MaximumIndependentSet<SimpleGraph, i32> {
+        let (n, edges) = crate::topology::small_graphs::petersen();
+        MaximumIndependentSet::new(SimpleGraph::new(n, edges), vec![1i32; 10])
+    }
+
+    vec![
+        crate::example_db::specs::RuleExampleSpec {
+            id: "maximumindependentset_to_maximumclique",
+            build: || {
+                let source = MaximumIndependentSet::new(
+                    SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]),
+                    vec![1i32; 5],
+                );
+                crate::example_db::specs::direct_best_example::<_, MaximumClique, _>(
+                    source,
+                    |_, _| true,
+                )
+            },
+        },
+        crate::example_db::specs::RuleExampleSpec {
+            id: "maximumindependentset_to_ilp",
+            build: || {
+                crate::example_db::specs::path_ilp_example::<_, bool, _, _>(
+                    mis_petersen(),
+                    ProblemSize::new(vec![]),
+                    MinimizeSteps,
+                    |_, _| true,
+                )
+            },
+        },
+        crate::example_db::specs::RuleExampleSpec {
+            id: "maximumindependentset_to_qubo",
+            build: || {
+                crate::example_db::specs::path_best_example::<_, QUBO, _, _>(
+                    mis_petersen(),
+                    ProblemSize::new(vec![("num_vertices", 10), ("num_edges", 15)]),
+                    Minimize("num_vars"),
+                    |_, _| true,
+                )
+            },
+        },
+    ]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/maximumindependentset_maximumclique.rs"]
 mod tests;
diff --git a/src/rules/maximumindependentset_maximumsetpacking.rs b/src/rules/maximumindependentset_maximumsetpacking.rs
index 358fdac8..19069f47 100644
--- a/src/rules/maximumindependentset_maximumsetpacking.rs
+++ b/src/rules/maximumindependentset_maximumsetpacking.rs
@@ -121,6 +121,41 @@ macro_rules! impl_sp_to_is {
 
 impl_sp_to_is!(i32);
 impl_sp_to_is!(One);
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    vec![
+        crate::example_db::specs::RuleExampleSpec {
+            id: "maximumindependentset_to_maximumsetpacking",
+            build: || {
+                let (n, edges) = crate::topology::small_graphs::petersen();
+                let source = MaximumIndependentSet::new(SimpleGraph::new(n, edges), vec![1i32; 10]);
+                crate::example_db::specs::direct_best_example::<_, MaximumSetPacking, _>(
+                    source,
+                    |_, _| true,
+                )
+            },
+        },
+        crate::example_db::specs::RuleExampleSpec {
+            id: "maximumsetpacking_to_maximumindependentset",
+            build: || {
+                let sets = vec![
+                    vec![0, 1, 2],
+                    vec![2, 3],
+                    vec![4, 5, 6],
+                    vec![1, 5, 7],
+                    vec![3, 6],
+                ];
+                let source = MaximumSetPacking::with_weights(sets, vec![1i32; 5]);
+                crate::example_db::specs::direct_best_example::<
+                    _,
+                    MaximumIndependentSet<SimpleGraph, i32>,
+                    _,
+                >(source, |_, _| true)
+            },
+        },
+    ]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/maximumindependentset_maximumsetpacking.rs"]
 mod tests;
diff --git a/src/rules/maximummatching_ilp.rs b/src/rules/maximummatching_ilp.rs
index 7042518b..68ca17fb 100644
--- a/src/rules/maximummatching_ilp.rs
+++ b/src/rules/maximummatching_ilp.rs
@@ -78,6 +78,18 @@ impl ReduceTo> for MaximumMatching {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn
canonical_rule_example_specs() -> Vec { + vec![crate::example_db::specs::RuleExampleSpec { + id: "maximummatching_to_ilp", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let source = MaximumMatching::unit_weights(SimpleGraph::new(n, edges)); + crate::example_db::specs::direct_ilp_example::<_, bool, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/maximummatching_ilp.rs"] mod tests; diff --git a/src/rules/maximummatching_maximumsetpacking.rs b/src/rules/maximummatching_maximumsetpacking.rs index 623cfb19..a4b984bb 100644 --- a/src/rules/maximummatching_maximumsetpacking.rs +++ b/src/rules/maximummatching_maximumsetpacking.rs @@ -62,6 +62,23 @@ impl ReduceTo> for MaximumMatching { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + use crate::models::set::MaximumSetPacking; + + vec![crate::example_db::specs::RuleExampleSpec { + id: "maximummatching_to_maximumsetpacking", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let source = MaximumMatching::unit_weights(SimpleGraph::new(n, edges)); + crate::example_db::specs::direct_best_example::<_, MaximumSetPacking, _>( + source, + |_, _| true, + ) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/maximummatching_maximumsetpacking.rs"] mod tests; diff --git a/src/rules/maximumsetpacking_ilp.rs b/src/rules/maximumsetpacking_ilp.rs index 96c7f7a0..579be758 100644 --- a/src/rules/maximumsetpacking_ilp.rs +++ b/src/rules/maximumsetpacking_ilp.rs @@ -77,6 +77,24 @@ impl ReduceTo> for MaximumSetPacking { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![crate::example_db::specs::RuleExampleSpec { + id: "maximumsetpacking_to_ilp", + build: || { + let source = MaximumSetPacking::new(vec![ + vec![0, 1, 2], + vec![2, 3, 4], + vec![4, 5, 6], + vec![6, 7, 0], + vec![1, 3, 5], + vec![0, 4, 7], + ]); + 
crate::example_db::specs::direct_ilp_example::<_, bool, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/maximumsetpacking_ilp.rs"] mod tests; diff --git a/src/rules/maximumsetpacking_qubo.rs b/src/rules/maximumsetpacking_qubo.rs index 7ccbf781..29c30f74 100644 --- a/src/rules/maximumsetpacking_qubo.rs +++ b/src/rules/maximumsetpacking_qubo.rs @@ -61,6 +61,26 @@ impl ReduceTo> for MaximumSetPacking { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + use crate::models::set::MaximumSetPacking; + + vec![crate::example_db::specs::RuleExampleSpec { + id: "maximumsetpacking_to_qubo", + build: || { + let source = MaximumSetPacking::::new(vec![ + vec![0, 1, 2], + vec![2, 3, 4], + vec![4, 5, 6], + vec![6, 7, 0], + vec![1, 3, 5], + vec![0, 4, 7], + ]); + crate::example_db::specs::direct_best_example::<_, QUBO, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/maximumsetpacking_qubo.rs"] mod tests; diff --git a/src/rules/minimumdominatingset_ilp.rs b/src/rules/minimumdominatingset_ilp.rs index 978aad01..3f71767f 100644 --- a/src/rules/minimumdominatingset_ilp.rs +++ b/src/rules/minimumdominatingset_ilp.rs @@ -80,6 +80,18 @@ impl ReduceTo> for MinimumDominatingSet { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![crate::example_db::specs::RuleExampleSpec { + id: "minimumdominatingset_to_ilp", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let source = MinimumDominatingSet::new(SimpleGraph::new(n, edges), vec![1i32; 10]); + crate::example_db::specs::direct_ilp_example::<_, bool, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/minimumdominatingset_ilp.rs"] mod tests; diff --git a/src/rules/minimumsetcovering_ilp.rs b/src/rules/minimumsetcovering_ilp.rs index 7cab965c..10470777 100644 --- a/src/rules/minimumsetcovering_ilp.rs +++ 
b/src/rules/minimumsetcovering_ilp.rs @@ -81,6 +81,27 @@ impl ReduceTo> for MinimumSetCovering { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![crate::example_db::specs::RuleExampleSpec { + id: "minimumsetcovering_to_ilp", + build: || { + let source = MinimumSetCovering::new( + 8, + vec![ + vec![0, 1, 2], + vec![2, 3, 4], + vec![4, 5, 6], + vec![6, 7, 0], + vec![1, 3, 5], + vec![0, 4, 7], + ], + ); + crate::example_db::specs::direct_ilp_example::<_, bool, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/minimumsetcovering_ilp.rs"] mod tests; diff --git a/src/rules/minimumvertexcover_maximumindependentset.rs b/src/rules/minimumvertexcover_maximumindependentset.rs index a0912883..74d18911 100644 --- a/src/rules/minimumvertexcover_maximumindependentset.rs +++ b/src/rules/minimumvertexcover_maximumindependentset.rs @@ -91,6 +91,68 @@ impl ReduceTo> for MinimumVertexCover Vec { + use crate::models::algebraic::QUBO; + use crate::rules::{Minimize, MinimizeSteps}; + use crate::types::ProblemSize; + + fn vc_petersen() -> MinimumVertexCover { + let (n, edges) = crate::topology::small_graphs::petersen(); + MinimumVertexCover::new(SimpleGraph::new(n, edges), vec![1i32; 10]) + } + + fn mis_petersen() -> MaximumIndependentSet { + let (n, edges) = crate::topology::small_graphs::petersen(); + MaximumIndependentSet::new(SimpleGraph::new(n, edges), vec![1i32; 10]) + } + + vec![ + crate::example_db::specs::RuleExampleSpec { + id: "maximumindependentset_to_minimumvertexcover", + build: || { + crate::example_db::specs::direct_best_example::< + _, + MinimumVertexCover, + _, + >(mis_petersen(), |_, _| true) + }, + }, + crate::example_db::specs::RuleExampleSpec { + id: "minimumvertexcover_to_maximumindependentset", + build: || { + crate::example_db::specs::direct_best_example::< + _, + MaximumIndependentSet, + _, + >(vc_petersen(), |_, _| true) + }, + }, + crate::example_db::specs::RuleExampleSpec { + id: 
"minimumvertexcover_to_ilp", + build: || { + crate::example_db::specs::path_ilp_example::<_, bool, _, _>( + vc_petersen(), + ProblemSize::new(vec![]), + MinimizeSteps, + |_, _| true, + ) + }, + }, + crate::example_db::specs::RuleExampleSpec { + id: "minimumvertexcover_to_qubo", + build: || { + crate::example_db::specs::path_best_example::<_, QUBO, _, _>( + vc_petersen(), + ProblemSize::new(vec![("num_vertices", 10), ("num_edges", 15)]), + Minimize("num_vars"), + |_, _| true, + ) + }, + }, + ] +} + #[cfg(test)] #[path = "../unit_tests/rules/minimumvertexcover_maximumindependentset.rs"] mod tests; diff --git a/src/rules/minimumvertexcover_minimumsetcovering.rs b/src/rules/minimumvertexcover_minimumsetcovering.rs index 7d0c2908..121adceb 100644 --- a/src/rules/minimumvertexcover_minimumsetcovering.rs +++ b/src/rules/minimumvertexcover_minimumsetcovering.rs @@ -67,6 +67,21 @@ impl ReduceTo> for MinimumVertexCover } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![crate::example_db::specs::RuleExampleSpec { + id: "minimumvertexcover_to_minimumsetcovering", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let source = MinimumVertexCover::new(SimpleGraph::new(n, edges), vec![1i32; 10]); + crate::example_db::specs::direct_best_example::<_, MinimumSetCovering, _>( + source, + |_, _| true, + ) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/minimumvertexcover_minimumsetcovering.rs"] mod tests; diff --git a/src/rules/mod.rs b/src/rules/mod.rs index 0d2b85f6..7e7f9f17 100644 --- a/src/rules/mod.rs +++ b/src/rules/mod.rs @@ -6,66 +6,66 @@ pub mod registry; pub use cost::{CustomCost, Minimize, MinimizeSteps, PathCostFn}; pub use registry::{ReductionEntry, ReductionOverhead}; -mod circuit_spinglass; -mod coloring_qubo; -mod factoring_circuit; +pub(crate) mod circuit_spinglass; +pub(crate) mod coloring_qubo; +pub(crate) mod factoring_circuit; mod graph; mod kcoloring_casts; mod 
ksatisfiability_casts; -mod ksatisfiability_qubo; -mod ksatisfiability_subsetsum; -mod maximumclique_maximumindependentset; +pub(crate) mod ksatisfiability_qubo; +pub(crate) mod ksatisfiability_subsetsum; +pub(crate) mod maximumclique_maximumindependentset; mod maximumindependentset_casts; mod maximumindependentset_gridgraph; -mod maximumindependentset_maximumclique; -mod maximumindependentset_maximumsetpacking; +pub(crate) mod maximumindependentset_maximumclique; +pub(crate) mod maximumindependentset_maximumsetpacking; mod maximumindependentset_triangular; -mod maximummatching_maximumsetpacking; +pub(crate) mod maximummatching_maximumsetpacking; mod maximumsetpacking_casts; -mod maximumsetpacking_qubo; -mod minimumvertexcover_maximumindependentset; -mod minimumvertexcover_minimumsetcovering; -mod sat_circuitsat; -mod sat_coloring; -mod sat_ksat; -mod sat_maximumindependentset; -mod sat_minimumdominatingset; +pub(crate) mod maximumsetpacking_qubo; +pub(crate) mod minimumvertexcover_maximumindependentset; +pub(crate) mod minimumvertexcover_minimumsetcovering; +pub(crate) mod sat_circuitsat; +pub(crate) mod sat_coloring; +pub(crate) mod sat_ksat; +pub(crate) mod sat_maximumindependentset; +pub(crate) mod sat_minimumdominatingset; mod spinglass_casts; -mod spinglass_maxcut; -mod spinglass_qubo; +pub(crate) mod spinglass_maxcut; +pub(crate) mod spinglass_qubo; mod traits; -mod travelingsalesman_qubo; +pub(crate) mod travelingsalesman_qubo; pub mod unitdiskmapping; #[cfg(feature = "ilp-solver")] -mod binpacking_ilp; +pub(crate) mod binpacking_ilp; #[cfg(feature = "ilp-solver")] -mod circuit_ilp; +pub(crate) mod circuit_ilp; #[cfg(feature = "ilp-solver")] -mod coloring_ilp; +pub(crate) mod coloring_ilp; #[cfg(feature = "ilp-solver")] -mod factoring_ilp; +pub(crate) mod factoring_ilp; #[cfg(feature = "ilp-solver")] mod ilp_bool_ilp_i32; #[cfg(feature = "ilp-solver")] -mod ilp_qubo; +pub(crate) mod ilp_qubo; #[cfg(feature = "ilp-solver")] -mod longestcommonsubsequence_ilp; 
+pub(crate) mod longestcommonsubsequence_ilp; #[cfg(feature = "ilp-solver")] -mod maximumclique_ilp; +pub(crate) mod maximumclique_ilp; #[cfg(feature = "ilp-solver")] -mod maximummatching_ilp; +pub(crate) mod maximummatching_ilp; #[cfg(feature = "ilp-solver")] -mod maximumsetpacking_ilp; +pub(crate) mod maximumsetpacking_ilp; #[cfg(feature = "ilp-solver")] -mod minimumdominatingset_ilp; +pub(crate) mod minimumdominatingset_ilp; #[cfg(feature = "ilp-solver")] -mod minimumsetcovering_ilp; +pub(crate) mod minimumsetcovering_ilp; #[cfg(feature = "ilp-solver")] -mod qubo_ilp; +pub(crate) mod qubo_ilp; #[cfg(feature = "ilp-solver")] -mod travelingsalesman_ilp; +pub(crate) mod travelingsalesman_ilp; pub use graph::{ NeighborInfo, NeighborTree, ReductionChain, ReductionEdgeInfo, ReductionGraph, ReductionPath, @@ -73,6 +73,48 @@ pub use graph::{ }; pub use traits::{ReduceTo, ReductionAutoCast, ReductionResult}; +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + let mut specs = Vec::new(); + specs.extend(circuit_spinglass::canonical_rule_example_specs()); + specs.extend(coloring_qubo::canonical_rule_example_specs()); + specs.extend(factoring_circuit::canonical_rule_example_specs()); + specs.extend(ksatisfiability_qubo::canonical_rule_example_specs()); + specs.extend(ksatisfiability_subsetsum::canonical_rule_example_specs()); + specs.extend(maximumclique_maximumindependentset::canonical_rule_example_specs()); + specs.extend(maximumindependentset_maximumclique::canonical_rule_example_specs()); + specs.extend(maximumindependentset_maximumsetpacking::canonical_rule_example_specs()); + specs.extend(maximummatching_maximumsetpacking::canonical_rule_example_specs()); + specs.extend(maximumsetpacking_qubo::canonical_rule_example_specs()); + specs.extend(minimumvertexcover_maximumindependentset::canonical_rule_example_specs()); + specs.extend(minimumvertexcover_minimumsetcovering::canonical_rule_example_specs()); + 
specs.extend(sat_circuitsat::canonical_rule_example_specs()); + specs.extend(sat_coloring::canonical_rule_example_specs()); + specs.extend(sat_ksat::canonical_rule_example_specs()); + specs.extend(sat_maximumindependentset::canonical_rule_example_specs()); + specs.extend(sat_minimumdominatingset::canonical_rule_example_specs()); + specs.extend(spinglass_maxcut::canonical_rule_example_specs()); + specs.extend(spinglass_qubo::canonical_rule_example_specs()); + specs.extend(travelingsalesman_qubo::canonical_rule_example_specs()); + #[cfg(feature = "ilp-solver")] + { + specs.extend(binpacking_ilp::canonical_rule_example_specs()); + specs.extend(circuit_ilp::canonical_rule_example_specs()); + specs.extend(coloring_ilp::canonical_rule_example_specs()); + specs.extend(factoring_ilp::canonical_rule_example_specs()); + specs.extend(ilp_qubo::canonical_rule_example_specs()); + specs.extend(longestcommonsubsequence_ilp::canonical_rule_example_specs()); + specs.extend(maximumclique_ilp::canonical_rule_example_specs()); + specs.extend(maximummatching_ilp::canonical_rule_example_specs()); + specs.extend(maximumsetpacking_ilp::canonical_rule_example_specs()); + specs.extend(minimumdominatingset_ilp::canonical_rule_example_specs()); + specs.extend(minimumsetcovering_ilp::canonical_rule_example_specs()); + specs.extend(qubo_ilp::canonical_rule_example_specs()); + specs.extend(travelingsalesman_ilp::canonical_rule_example_specs()); + } + specs +} + /// Generates a variant-cast `ReduceTo` impl with `#[reduction]` registration. 
/// /// Variant casts convert a problem from one variant to another (e.g., diff --git a/src/rules/qubo_ilp.rs b/src/rules/qubo_ilp.rs index a0e2c5c5..b4669d5a 100644 --- a/src/rules/qubo_ilp.rs +++ b/src/rules/qubo_ilp.rs @@ -99,6 +99,25 @@ impl ReduceTo> for QUBO { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![crate::example_db::specs::RuleExampleSpec { + id: "qubo_to_ilp", + build: || { + let mut matrix = vec![vec![0.0; 4]; 4]; + matrix[0][0] = -2.0; + matrix[1][1] = -3.0; + matrix[2][2] = -1.0; + matrix[3][3] = -4.0; + matrix[0][1] = 1.0; + matrix[1][2] = 2.0; + matrix[2][3] = -1.0; + let source = QUBO::from_matrix(matrix); + crate::example_db::specs::direct_best_example::<_, ILP, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/qubo_ilp.rs"] mod tests; diff --git a/src/rules/registry.rs b/src/rules/registry.rs index 42a79e4f..a61dd785 100644 --- a/src/rules/registry.rs +++ b/src/rules/registry.rs @@ -157,6 +157,11 @@ impl std::fmt::Debug for ReductionEntry { inventory::collect!(ReductionEntry); +/// Return all registered reduction entries. 
+pub fn reduction_entries() -> Vec<&'static ReductionEntry> {
+    inventory::iter::<ReductionEntry>().collect()
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/registry.rs"]
 mod tests;
diff --git a/src/rules/sat_circuitsat.rs b/src/rules/sat_circuitsat.rs
index 7be3cfbc..cbb32beb 100644
--- a/src/rules/sat_circuitsat.rs
+++ b/src/rules/sat_circuitsat.rs
@@ -117,6 +117,29 @@ impl ReduceTo<CircuitSAT> for Satisfiability {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::formula::CNFClause;
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "satisfiability_to_circuitsat",
+        build: || {
+            let source = Satisfiability::new(
+                3,
+                vec![
+                    CNFClause::new(vec![1, -2, 3]),
+                    CNFClause::new(vec![-1, 2]),
+                    CNFClause::new(vec![2, 3]),
+                ],
+            );
+            crate::example_db::specs::direct_satisfying_example::<_, CircuitSAT, _>(
+                source,
+                |_, _| true,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/sat_circuitsat.rs"]
 mod tests;
diff --git a/src/rules/sat_coloring.rs b/src/rules/sat_coloring.rs
index a5e169f1..421101d8 100644
--- a/src/rules/sat_coloring.rs
+++ b/src/rules/sat_coloring.rs
@@ -323,6 +323,29 @@ impl ReduceTo> for Satisfiability {
     }
 }
 
+#[cfg(feature = "example-db")]
+pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
+    use crate::models::formula::{CNFClause, Satisfiability};
+
+    vec![crate::example_db::specs::RuleExampleSpec {
+        id: "satisfiability_to_kcoloring",
+        build: || {
+            let source = Satisfiability::new(
+                5,
+                vec![
+                    CNFClause::new(vec![1]),
+                    CNFClause::new(vec![-3]),
+                    CNFClause::new(vec![5]),
+                ],
+            );
+            crate::example_db::specs::direct_satisfying_example::<_, KColoring, _>(
+                source,
+                |_, _| true,
+            )
+        },
+    }]
+}
+
 #[cfg(test)]
 #[path = "../unit_tests/rules/sat_coloring.rs"]
 mod tests;
diff --git a/src/rules/sat_ksat.rs b/src/rules/sat_ksat.rs
index 41058764..cfc12c46 100644
--- a/src/rules/sat_ksat.rs
+++ b/src/rules/sat_ksat.rs
@@ -218,6 +218,51 @@ impl ReduceTo<Satisfiability> for KSatisfiability {
     }
 }
+#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + use crate::models::formula::CNFClause; + + vec![ + crate::example_db::specs::RuleExampleSpec { + id: "satisfiability_to_ksatisfiability", + build: || { + let source = Satisfiability::new( + 5, + vec![ + CNFClause::new(vec![1]), + CNFClause::new(vec![2, -3]), + CNFClause::new(vec![-1, 3, 4]), + CNFClause::new(vec![2, -4, 5]), + CNFClause::new(vec![1, -2, 3, -5]), + CNFClause::new(vec![-1, 2, -3, 4, 5]), + ], + ); + crate::example_db::specs::direct_satisfying_example::<_, KSatisfiability, _>( + source, + |_, _| true, + ) + }, + }, + crate::example_db::specs::RuleExampleSpec { + id: "ksatisfiability_to_satisfiability", + build: || { + let source = KSatisfiability::::new( + 4, + vec![ + CNFClause::new(vec![1, -2, 3]), + CNFClause::new(vec![-1, 3, 4]), + CNFClause::new(vec![2, -3, -4]), + ], + ); + crate::example_db::specs::direct_satisfying_example::<_, Satisfiability, _>( + source, + |_, _| true, + ) + }, + }, + ] +} + #[cfg(test)] #[path = "../unit_tests/rules/sat_ksat.rs"] mod tests; diff --git a/src/rules/sat_maximumindependentset.rs b/src/rules/sat_maximumindependentset.rs index de28824e..a92ebbc0 100644 --- a/src/rules/sat_maximumindependentset.rs +++ b/src/rules/sat_maximumindependentset.rs @@ -166,6 +166,40 @@ impl ReduceTo> for Satisfiability { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + use crate::models::formula::CNFClause; + + fn sat_seven_clause_example() -> Satisfiability { + Satisfiability::new( + 5, + vec![ + CNFClause::new(vec![1, 2, -3]), + CNFClause::new(vec![-1, 3, 4]), + CNFClause::new(vec![2, -4, 5]), + CNFClause::new(vec![-2, 3, -5]), + CNFClause::new(vec![1, -3, 5]), + CNFClause::new(vec![-1, -2, 4]), + CNFClause::new(vec![3, -4, -5]), + ], + ) + } + + vec![crate::example_db::specs::RuleExampleSpec { + id: "satisfiability_to_maximumindependentset", + build: || { + 
crate::example_db::specs::direct_best_example::< + _, + MaximumIndependentSet, + _, + >( + sat_seven_clause_example(), + crate::example_db::specs::keep_bool_source, + ) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/sat_maximumindependentset.rs"] mod tests; diff --git a/src/rules/sat_minimumdominatingset.rs b/src/rules/sat_minimumdominatingset.rs index ac0cf052..80323961 100644 --- a/src/rules/sat_minimumdominatingset.rs +++ b/src/rules/sat_minimumdominatingset.rs @@ -175,6 +175,34 @@ impl ReduceTo> for Satisfiability { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + use crate::models::formula::CNFClause; + + vec![crate::example_db::specs::RuleExampleSpec { + id: "satisfiability_to_minimumdominatingset", + build: || { + let source = Satisfiability::new( + 5, + vec![ + CNFClause::new(vec![1, 2, -3]), + CNFClause::new(vec![-1, 3, 4]), + CNFClause::new(vec![2, -4, 5]), + CNFClause::new(vec![-2, 3, -5]), + CNFClause::new(vec![1, -3, 5]), + CNFClause::new(vec![-1, -2, 4]), + CNFClause::new(vec![3, -4, -5]), + ], + ); + crate::example_db::specs::direct_best_example::< + _, + MinimumDominatingSet, + _, + >(source, crate::example_db::specs::keep_bool_source) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/sat_minimumdominatingset.rs"] mod tests; diff --git a/src/rules/spinglass_maxcut.rs b/src/rules/spinglass_maxcut.rs index 75ca9df8..40a146c4 100644 --- a/src/rules/spinglass_maxcut.rs +++ b/src/rules/spinglass_maxcut.rs @@ -179,6 +179,39 @@ impl ReduceTo> for SpinGlass { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![ + crate::example_db::specs::RuleExampleSpec { + id: "maxcut_to_spinglass", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let source = MaxCut::unweighted(SimpleGraph::new(n, edges)); + crate::example_db::specs::direct_best_example::<_, SpinGlass, _>( + source, + |_, _| true, + ) + }, + }, + 
crate::example_db::specs::RuleExampleSpec { + id: "spinglass_to_maxcut", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let couplings: Vec<((usize, usize), i32)> = edges + .iter() + .enumerate() + .map(|(i, &(u, v))| ((u, v), if i % 2 == 0 { 1 } else { -1 })) + .collect(); + let source = SpinGlass::new(n, couplings, vec![0; n]); + crate::example_db::specs::direct_best_example::<_, MaxCut, _>( + source, + |_, _| true, + ) + }, + }, + ] +} + #[cfg(test)] #[path = "../unit_tests/rules/spinglass_maxcut.rs"] mod tests; diff --git a/src/rules/spinglass_qubo.rs b/src/rules/spinglass_qubo.rs index d6b441b5..df1d6ced 100644 --- a/src/rules/spinglass_qubo.rs +++ b/src/rules/spinglass_qubo.rs @@ -146,6 +146,46 @@ impl ReduceTo> for SpinGlass { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![ + crate::example_db::specs::RuleExampleSpec { + id: "qubo_to_spinglass", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let mut matrix = vec![vec![0.0; n]; n]; + for (i, row) in matrix.iter_mut().enumerate() { + row[i] = -1.0 + 0.2 * i as f64; + } + for (idx, &(u, v)) in edges.iter().enumerate() { + let (i, j) = if u < v { (u, v) } else { (v, u) }; + matrix[i][j] = if idx % 2 == 0 { 2.0 } else { -1.5 }; + } + let source = QUBO::from_matrix(matrix); + crate::example_db::specs::direct_best_example::<_, SpinGlass, _>( + source, + |_, _| true, + ) + }, + }, + crate::example_db::specs::RuleExampleSpec { + id: "spinglass_to_qubo", + build: || { + let (n, edges) = crate::topology::small_graphs::petersen(); + let couplings: Vec<((usize, usize), f64)> = edges + .iter() + .enumerate() + .map(|(i, &(u, v))| ((u, v), if i % 2 == 0 { 1.0 } else { -1.0 })) + .collect(); + let source = SpinGlass::new(n, couplings, vec![0.0; n]); + crate::example_db::specs::direct_best_example::<_, QUBO, _>(source, |_, _| { + true + }) + }, + }, + ] +} + #[cfg(test)] #[path = 
"../unit_tests/rules/spinglass_qubo.rs"] mod tests; diff --git a/src/rules/travelingsalesman_ilp.rs b/src/rules/travelingsalesman_ilp.rs index 1cb02e68..6c9126a7 100644 --- a/src/rules/travelingsalesman_ilp.rs +++ b/src/rules/travelingsalesman_ilp.rs @@ -196,6 +196,20 @@ impl ReduceTo> for TravelingSalesman { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + vec![crate::example_db::specs::RuleExampleSpec { + id: "travelingsalesman_to_ilp", + build: || { + let source = TravelingSalesman::new( + SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]), + vec![10, 15, 20, 35, 25, 30], + ); + crate::example_db::specs::direct_ilp_example::<_, bool, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/travelingsalesman_ilp.rs"] mod tests; diff --git a/src/rules/travelingsalesman_qubo.rs b/src/rules/travelingsalesman_qubo.rs index 09a94b15..55a83889 100644 --- a/src/rules/travelingsalesman_qubo.rs +++ b/src/rules/travelingsalesman_qubo.rs @@ -162,6 +162,22 @@ impl ReduceTo> for TravelingSalesman { } } +#[cfg(feature = "example-db")] +pub(crate) fn canonical_rule_example_specs() -> Vec { + use crate::models::algebraic::QUBO; + + vec![crate::example_db::specs::RuleExampleSpec { + id: "travelingsalesman_to_qubo", + build: || { + let source = TravelingSalesman::new( + SimpleGraph::new(3, vec![(0, 1), (0, 2), (1, 2)]), + vec![1, 2, 3], + ); + crate::example_db::specs::direct_best_example::<_, QUBO, _>(source, |_, _| true) + }, + }] +} + #[cfg(test)] #[path = "../unit_tests/rules/travelingsalesman_qubo.rs"] mod tests; diff --git a/src/traits.rs b/src/traits.rs index b4f38dcb..3ae29839 100644 --- a/src/traits.rs +++ b/src/traits.rs @@ -22,6 +22,15 @@ pub trait Problem: Clone { /// Used for generating variant IDs in the reduction graph schema. /// Returns pairs like `[("graph", "SimpleGraph"), ("weight", "i32")]`. 
fn variant() -> Vec<(&'static str, &'static str)>; + + /// Look up this problem's catalog entry. + /// + /// Returns the full [`ProblemType`] metadata from the catalog registry. + /// The default implementation uses `Self::NAME` to perform the lookup. + fn problem_type() -> crate::registry::ProblemType { + crate::registry::find_problem_type(Self::NAME) + .unwrap_or_else(|| panic!("no catalog entry for Problem::NAME = {:?}", Self::NAME)) + } } /// Extension for problems with a numeric objective to optimize. diff --git a/src/types.rs b/src/types.rs index 1daca23c..2e615d0e 100644 --- a/src/types.rs +++ b/src/types.rs @@ -1,6 +1,7 @@ //! Common types used across the problemreductions library. -use serde::{Deserialize, Serialize}; +use serde::de::{self, Visitor}; +use serde::{Deserialize, Deserializer, Serialize, Serializer}; use std::fmt; /// Bound for objective value types (i32, f64, etc.) @@ -62,9 +63,84 @@ impl WeightElement for f64 { /// /// When used as the weight type parameter `W`, indicates that all weights /// are uniformly 1. `One::to_sum()` returns `1i32`. 
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Default, Serialize, Deserialize)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
 pub struct One;
 
+impl Serialize for One {
+    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
+    where
+        S: Serializer,
+    {
+        serializer.serialize_i32(1)
+    }
+}
+
+impl<'de> Deserialize<'de> for One {
+    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
+    where
+        D: Deserializer<'de>,
+    {
+        struct OneVisitor;
+
+        impl<'de> Visitor<'de> for OneVisitor {
+            type Value = One;
+
+            fn expecting(&self, formatter: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+                formatter.write_str("the unit weight `One` encoded as 1 or unit/null")
+            }
+
+            fn visit_i64<E>(self, value: i64) -> Result<Self::Value, E>
+            where
+                E: de::Error,
+            {
+                if value == 1 {
+                    Ok(One)
+                } else {
+                    Err(E::custom(format!("expected 1 for One, got {value}")))
+                }
+            }
+
+            fn visit_u64<E>(self, value: u64) -> Result<Self::Value, E>
+            where
+                E: de::Error,
+            {
+                if value == 1 {
+                    Ok(One)
+                } else {
+                    Err(E::custom(format!("expected 1 for One, got {value}")))
+                }
+            }
+
+            fn visit_unit<E>(self) -> Result<Self::Value, E>
+            where
+                E: de::Error,
+            {
+                Ok(One)
+            }
+
+            fn visit_none<E>(self) -> Result<Self::Value, E>
+            where
+                E: de::Error,
+            {
+                Ok(One)
+            }
+
+            fn visit_str<E>(self, value: &str) -> Result<Self::Value, E>
+            where
+                E: de::Error,
+            {
+                if value == "One" {
+                    Ok(One)
+                } else {
+                    Err(E::custom(format!("expected \"One\" for One, got {value}")))
+                }
+            }
+        }
+
+        deserializer.deserialize_any(OneVisitor)
+    }
+}
+
 impl WeightElement for One {
     type Sum = i32;
     const IS_UNIT: bool = true;
diff --git a/src/unit_tests/example_db.rs b/src/unit_tests/example_db.rs
new file mode 100644
index 00000000..e5d1e788
--- /dev/null
+++ b/src/unit_tests/example_db.rs
@@ -0,0 +1,229 @@
+use crate::example_db::{build_model_db, build_rule_db, find_model_example, find_rule_example};
+use crate::export::{lookup_overhead, ProblemRef, EXAMPLE_DB_VERSION};
+use std::collections::{BTreeMap, BTreeSet, HashSet};
+
+#[test]
+fn test_build_model_db_contains_curated_examples() {
+    let db =
build_model_db().expect("model db should build"); + assert_eq!(db.version, EXAMPLE_DB_VERSION); + assert!(!db.models.is_empty(), "model db should not be empty"); + assert!( + db.models + .iter() + .any(|model| model.problem == "MaximumIndependentSet"), + "model db should include a canonical MaximumIndependentSet example" + ); +} + +#[test] +fn test_find_model_example_mis_simplegraph_i32() { + let problem = ProblemRef { + name: "MaximumIndependentSet".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + + let example = find_model_example(&problem).expect("MIS example should exist"); + assert_eq!(example.problem, "MaximumIndependentSet"); + assert_eq!(example.variant, problem.variant); + assert!(example.instance.is_object()); + assert!( + !example.optimal.is_empty(), + "canonical example should include optima" + ); +} + +#[test] +fn test_find_rule_example_mvc_to_mis_contains_full_problem_json() { + let source = ProblemRef { + name: "MinimumVertexCover".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let target = ProblemRef { + name: "MaximumIndependentSet".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + + let example = find_rule_example(&source, &target).unwrap(); + assert!(example.source.instance.get("graph").is_some()); + assert!(example.target.instance.get("graph").is_some()); +} + +#[test] +fn test_find_rule_example_sat_to_kcoloring_contains_full_instances() { + let source = ProblemRef { + name: "Satisfiability".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "KColoring".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("k".to_string(), "K3".to_string()), + ]), + }; + + let example = 
find_rule_example(&source, &target).unwrap(); + assert!( + example.source.instance.get("clauses").is_some(), + "SAT source should have clauses field" + ); + assert!( + example.target.instance.get("graph").is_some(), + "KColoring target should have graph field" + ); +} + +#[test] +fn test_build_rule_db_has_unique_structural_keys() { + let db = build_rule_db().expect("rule db should build"); + let mut seen = BTreeSet::new(); + for rule in &db.rules { + let key = (rule.source.problem_ref(), rule.target.problem_ref()); + assert!( + seen.insert(key.clone()), + "Duplicate rule key: {} {:?} -> {} {:?}", + key.0.name, + key.0.variant, + key.1.name, + key.1.variant + ); + } +} + +#[test] +fn test_path_based_rule_example_does_not_require_direct_overhead() { + let source = ProblemRef { + name: "MaximumIndependentSet".to_string(), + variant: BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]), + }; + let target = ProblemRef { + name: "ILP".to_string(), + variant: BTreeMap::from([("variable".to_string(), "bool".to_string())]), + }; + + let example = find_rule_example(&source, &target).expect("path example should exist"); + assert!( + !example.overhead.is_empty(), + "path example should carry composed overhead" + ); + assert!( + lookup_overhead(&source.name, &source.variant, &target.name, &target.variant).is_none(), + "path example should not require a direct-edge overhead entry" + ); +} + +#[test] +fn test_build_model_db_has_unique_structural_keys() { + let db = build_model_db().expect("model db should build"); + let mut seen = BTreeSet::new(); + for model in &db.models { + let key = model.problem_ref(); + assert!( + seen.insert(key.clone()), + "Duplicate model key: {} {:?}", + key.name, + key.variant + ); + } +} + +#[test] +fn test_build_rule_db_nonempty() { + let db = build_rule_db().expect("rule db should build"); + assert!(!db.rules.is_empty(), "rule db should not be empty"); +} + +#[test] +fn 
test_build_model_db_nonempty() { + let db = build_model_db().expect("model db should build"); + assert!(!db.models.is_empty(), "model db should not be empty"); +} + +#[test] +fn canonical_model_example_ids_are_unique() { + let specs = crate::models::graph::canonical_model_example_specs(); + let specs: Vec<_> = specs + .into_iter() + .chain(crate::models::formula::canonical_model_example_specs()) + .chain(crate::models::set::canonical_model_example_specs()) + .chain(crate::models::algebraic::canonical_model_example_specs()) + .chain(crate::models::misc::canonical_model_example_specs()) + .collect(); + let mut seen = HashSet::new(); + for spec in &specs { + assert!( + seen.insert(spec.id), + "Duplicate model example id: {}", + spec.id + ); + } +} + +#[test] +fn canonical_rule_example_ids_are_unique() { + let specs = crate::rules::canonical_rule_example_specs(); + let mut seen = HashSet::new(); + for spec in &specs { + assert!( + seen.insert(spec.id), + "Duplicate rule example id: {}", + spec.id + ); + } +} + +// ---- Error path tests for example_db ---- + +#[test] +fn find_rule_example_nonexistent_returns_error() { + let source = ProblemRef { + name: "NonExistentProblem".to_string(), + variant: BTreeMap::new(), + }; + let target = ProblemRef { + name: "AlsoNonExistent".to_string(), + variant: BTreeMap::new(), + }; + let result = find_rule_example(&source, &target); + assert!(result.is_err()); + let err_msg = format!("{}", result.unwrap_err()); + assert!( + err_msg.contains("No canonical rule example"), + "error should mention no canonical rule: {err_msg}" + ); +} + +#[test] +fn find_model_example_nonexistent_returns_error() { + let problem = ProblemRef { + name: "NonExistentModel".to_string(), + variant: BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]), + }; + let result = find_model_example(&problem); + assert!(result.is_err()); + let err_msg = format!("{}", result.unwrap_err()); + assert!( + err_msg.contains("No canonical model example"), + "error 
should mention no canonical model: {err_msg}" + ); +} + +#[test] +fn default_generated_dir_returns_path() { + use crate::example_db::default_generated_dir; + let dir = default_generated_dir(); + // Should return a valid path (either from env or the default) + assert!(!dir.as_os_str().is_empty()); +} diff --git a/src/unit_tests/export.rs b/src/unit_tests/export.rs index f024b2f1..46aefc7d 100644 --- a/src/unit_tests/export.rs +++ b/src/unit_tests/export.rs @@ -93,51 +93,61 @@ fn test_lookup_overhead_unknown_reduction() { } #[test] -fn test_write_example_creates_files() { +fn test_write_canonical_example_dbs() { use std::fs; + use std::time::{SystemTime, UNIX_EPOCH}; - let data = ReductionData { - source: ProblemSide { - problem: "TestProblem".to_string(), - variant: variant_to_map(vec![("graph", "SimpleGraph")]), - instance: serde_json::json!({"num_vertices": 3}), - }, - target: ProblemSide { - problem: "TargetProblem".to_string(), - variant: variant_to_map(vec![]), - instance: serde_json::json!({"num_vars": 5}), - }, - overhead: vec![], - }; + let dir = std::env::temp_dir().join(format!( + "problemreductions-export-db-test-{}", + SystemTime::now() + .duration_since(UNIX_EPOCH) + .unwrap() + .as_nanos() + )); + fs::create_dir_all(&dir).unwrap(); - let results = ResultData { - solutions: vec![SolutionPair { - source_config: vec![1, 0, 1], - target_config: vec![1, 0, 1, 0, 0], + let rule_db = RuleDb { + version: EXAMPLE_DB_VERSION, + rules: vec![RuleExample { + source: ProblemSide { + problem: "SourceProblem".to_string(), + variant: variant_to_map(vec![("graph", "SimpleGraph")]), + instance: serde_json::json!({"n": 3}), + }, + target: ProblemSide { + problem: "TargetProblem".to_string(), + variant: variant_to_map(vec![("weight", "i32")]), + instance: serde_json::json!({"m": 4}), + }, + overhead: vec![], + solutions: vec![], + }], + }; + let model_db = ModelDb { + version: EXAMPLE_DB_VERSION, + models: vec![ModelExample { + problem: "ModelProblem".to_string(), + 
variant: variant_to_map(vec![("graph", "SimpleGraph")]), + instance: serde_json::json!({"n": 5}), + samples: vec![], + optimal: vec![], }], }; - write_example("_test_export", &data, &results); - - // Verify files exist and contain valid JSON - let reduction_path = "docs/paper/examples/_test_export.json"; - let results_path = "docs/paper/examples/_test_export.result.json"; + write_rule_db_to(&dir, &rule_db); + write_model_db_to(&dir, &model_db); - let reduction_json: serde_json::Value = - serde_json::from_str(&fs::read_to_string(reduction_path).unwrap()).unwrap(); - assert_eq!(reduction_json["source"]["problem"], "TestProblem"); - assert_eq!(reduction_json["target"]["problem"], "TargetProblem"); + let rules_json: serde_json::Value = + serde_json::from_str(&fs::read_to_string(dir.join("rules.json")).unwrap()).unwrap(); + let models_json: serde_json::Value = + serde_json::from_str(&fs::read_to_string(dir.join("models.json")).unwrap()).unwrap(); - let results_json: serde_json::Value = - serde_json::from_str(&fs::read_to_string(results_path).unwrap()).unwrap(); - assert_eq!( - results_json["solutions"][0]["source_config"], - serde_json::json!([1, 0, 1]) - ); + assert_eq!(rules_json["version"], EXAMPLE_DB_VERSION); + assert_eq!(rules_json["rules"][0]["source"]["problem"], "SourceProblem"); + assert_eq!(models_json["version"], EXAMPLE_DB_VERSION); + assert_eq!(models_json["models"][0]["problem"], "ModelProblem"); - // Clean up test files - let _ = fs::remove_file(reduction_path); - let _ = fs::remove_file(results_path); + let _ = fs::remove_dir_all(&dir); } #[test] @@ -153,48 +163,173 @@ fn test_problem_side_serialization() { assert!(json["instance"]["num_vertices"] == 4); } +// ---- variant_to_map normalization ---- + +#[test] +fn export_variant_to_map_normalizes_empty_graph() { + // When a variant has an empty graph value, variant_to_map should normalize + // it to "SimpleGraph" for consistency with the reduction graph convention. 
+ let map = variant_to_map(vec![("graph", ""), ("weight", "i32")]); + assert_eq!( + map["graph"], "SimpleGraph", + "variant_to_map should normalize empty graph to SimpleGraph" + ); + assert_eq!(map["weight"], "i32"); +} + +#[test] +fn export_variant_to_map_preserves_explicit_graph() { + let map = variant_to_map(vec![("graph", "PlanarGraph"), ("weight", "f64")]); + assert_eq!(map["graph"], "PlanarGraph"); + assert_eq!(map["weight"], "f64"); +} + +// ---- ProblemSide::from_problem / ModelExample::from_problem ---- + +#[test] +fn problem_side_from_typed_problem() { + use crate::models::graph::MaximumIndependentSet; + use crate::topology::SimpleGraph; + + let g = SimpleGraph::new(3, vec![(0, 1), (1, 2)]); + let mis = MaximumIndependentSet::new(g, vec![1, 1, 1]); + let side = ProblemSide::from_problem(&mis); + assert_eq!(side.problem, "MaximumIndependentSet"); + assert_eq!(side.variant["graph"], "SimpleGraph"); + assert!(side.instance.is_object()); +} + #[test] -fn test_reduction_data_serialization() { - let data = ReductionData { +fn model_example_from_typed_problem() { + use crate::models::graph::MaximumIndependentSet; + use crate::topology::SimpleGraph; + + let g = SimpleGraph::new(3, vec![(0, 1), (1, 2)]); + let mis = MaximumIndependentSet::new(g, vec![1, 1, 1]); + let sample = SampleEval { + config: vec![1, 0, 1], + metric: serde_json::json!("Valid(2)"), + }; + let example = ModelExample::from_problem(&mis, vec![sample.clone()], vec![sample]); + assert_eq!(example.problem, "MaximumIndependentSet"); + assert!(!example.samples.is_empty()); + assert!(!example.optimal.is_empty()); + assert!(example.instance.is_object()); +} + +#[test] +fn model_example_problem_ref() { + let example = ModelExample { + problem: "TestProblem".to_string(), + variant: variant_to_map(vec![("graph", "SimpleGraph")]), + instance: serde_json::json!({}), + samples: vec![], + optimal: vec![], + }; + let pref = example.problem_ref(); + assert_eq!(pref.name, "TestProblem"); + 
assert_eq!(pref.variant["graph"], "SimpleGraph"); +} + +#[test] +fn default_expr_returns_zero() { + let expr = default_expr(); + assert_eq!(expr, Expr::Const(0.0)); +} + +#[test] +fn examples_output_dir_fallback() { + // Without PROBLEMREDUCTIONS_EXAMPLES_DIR set, should fallback + let dir = examples_output_dir(); + let expected = std::path::PathBuf::from("docs/paper/examples/generated"); + // Clean env first to ensure deterministic result + if std::env::var_os(EXAMPLES_DIR_ENV).is_none() { + assert_eq!(dir, expected); + } +} + +#[test] +fn examples_output_dir_env_override() { + // Temporarily set the env var and check it's respected + let key = EXAMPLES_DIR_ENV; + let old = std::env::var_os(key); + std::env::set_var(key, "/tmp/custom_examples"); + let dir = examples_output_dir(); + assert_eq!(dir, std::path::PathBuf::from("/tmp/custom_examples")); + // Restore + match old { + Some(v) => std::env::set_var(key, v), + None => std::env::remove_var(key), + } +} + +#[test] +fn write_rule_example_to_creates_json_file() { + use std::fs; + let dir = std::env::temp_dir().join(format!( + "pr-export-rule-example-{}", + std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .unwrap() + .as_nanos() + )); + let example = RuleExample { source: ProblemSide { - problem: "IS".to_string(), + problem: "A".to_string(), variant: variant_to_map(vec![]), - instance: serde_json::json!({"n": 3}), + instance: serde_json::json!({"x": 1}), }, target: ProblemSide { - problem: "VC".to_string(), + problem: "B".to_string(), variant: variant_to_map(vec![]), - instance: serde_json::json!({"n": 3}), + instance: serde_json::json!({"y": 2}), }, - overhead: vec![OverheadEntry { - field: "num_vertices".to_string(), - expr: Expr::Var("n"), - formula: "n".to_string(), - }], + overhead: vec![], + solutions: vec![], }; - let json = serde_json::to_value(&data).unwrap(); - assert_eq!(json["overhead"][0]["field"], "num_vertices"); - assert_eq!(json["overhead"][0]["formula"], "n"); + 
write_rule_example_to(&dir, "test_rule", &example); + let path = dir.join("test_rule.json"); + assert!(path.exists()); + let content: serde_json::Value = + serde_json::from_str(&fs::read_to_string(&path).unwrap()).unwrap(); + assert_eq!(content["source"]["problem"], "A"); + let _ = fs::remove_dir_all(&dir); } #[test] -fn test_result_data_serialization() { - let results = ResultData { - solutions: vec![ - SolutionPair { - source_config: vec![1, 0], - target_config: vec![0, 1], - }, - SolutionPair { - source_config: vec![0, 1], - target_config: vec![1, 0], - }, - ], +fn write_model_example_to_creates_json_file() { + use std::fs; + let dir = std::env::temp_dir().join(format!( + "pr-export-model-example-{}", + std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .unwrap() + .as_nanos() + )); + let example = ModelExample { + problem: "TestModel".to_string(), + variant: variant_to_map(vec![("graph", "SimpleGraph")]), + instance: serde_json::json!({"n": 3}), + samples: vec![SampleEval { + config: vec![1, 0, 1], + metric: serde_json::json!("Valid(2)"), + }], + optimal: vec![], }; - let json = serde_json::to_value(&results).unwrap(); - assert_eq!(json["solutions"].as_array().unwrap().len(), 2); - assert_eq!( - json["solutions"][0]["source_config"], - serde_json::json!([1, 0]) - ); + write_model_example_to(&dir, "test_model", &example); + let path = dir.join("test_model.json"); + assert!(path.exists()); + let content: serde_json::Value = + serde_json::from_str(&fs::read_to_string(&path).unwrap()).unwrap(); + assert_eq!(content["problem"], "TestModel"); + let _ = fs::remove_dir_all(&dir); +} + +#[test] +fn lookup_overhead_rejects_target_variant_mismatch() { + let source = variant_to_map(vec![("graph", "SimpleGraph"), ("weight", "i32")]); + // MIS -> QUBO<f64> exists, but not MIS -> QUBO<i32> + let wrong_target = variant_to_map(vec![("weight", "i32")]); + let result = lookup_overhead("MaximumIndependentSet", &source, "QUBO", &wrong_target); + assert!(result.is_none(),
"Should reject wrong target variant"); } diff --git a/src/unit_tests/models/algebraic/qubo.rs b/src/unit_tests/models/algebraic/qubo.rs index 550810cf..d33f5b95 100644 --- a/src/unit_tests/models/algebraic/qubo.rs +++ b/src/unit_tests/models/algebraic/qubo.rs @@ -118,9 +118,15 @@ fn test_qubo_paper_example() { vec![0.0, -1.0, 2.0], vec![0.0, 0.0, -1.0], ]); - assert_eq!(Problem::evaluate(&problem, &[1, 0, 1]), SolutionSize::Valid(-2.0)); + assert_eq!( + Problem::evaluate(&problem, &[1, 0, 1]), + SolutionSize::Valid(-2.0) + ); let solver = BruteForce::new(); let best = solver.find_best(&problem).unwrap(); - assert_eq!(Problem::evaluate(&problem, &best), SolutionSize::Valid(-2.0)); + assert_eq!( + Problem::evaluate(&problem, &best), + SolutionSize::Valid(-2.0) + ); } diff --git a/src/unit_tests/models/formula/ksat.rs b/src/unit_tests/models/formula/ksat.rs index 420766fc..ebaa569a 100644 --- a/src/unit_tests/models/formula/ksat.rs +++ b/src/unit_tests/models/formula/ksat.rs @@ -230,11 +230,14 @@ fn test_size_getters() { #[test] fn test_ksat_paper_example() { // Paper: 3-SAT, (x1∨x2∨x3)∧(¬x1∨¬x2∨x3)∧(x1∨¬x2∨¬x3), assignment (1,0,1) - let problem = KSatisfiability::::new(3, vec![ - CNFClause::new(vec![1, 2, 3]), - CNFClause::new(vec![-1, -2, 3]), - CNFClause::new(vec![1, -2, -3]), - ]); + let problem = KSatisfiability::::new( + 3, + vec![ + CNFClause::new(vec![1, 2, 3]), + CNFClause::new(vec![-1, -2, 3]), + CNFClause::new(vec![1, -2, -3]), + ], + ); assert!(problem.evaluate(&[1, 0, 1])); let solver = BruteForce::new(); diff --git a/src/unit_tests/models/formula/sat.rs b/src/unit_tests/models/formula/sat.rs index d2f61cdc..cd78576f 100644 --- a/src/unit_tests/models/formula/sat.rs +++ b/src/unit_tests/models/formula/sat.rs @@ -211,11 +211,14 @@ fn test_is_valid_solution() { #[test] fn test_sat_paper_example() { // Paper: (x1∨x2)∧(¬x1∨x3)∧(¬x2∨¬x3), assignment (1,0,1) - let problem = Satisfiability::new(3, vec![ - CNFClause::new(vec![1, 2]), - CNFClause::new(vec![-1, 
3]), - CNFClause::new(vec![-2, -3]), - ]); + let problem = Satisfiability::new( + 3, + vec![ + CNFClause::new(vec![1, 2]), + CNFClause::new(vec![-1, 3]), + CNFClause::new(vec![-2, -3]), + ], + ); // (1,0,1) → x1=T, x2=F, x3=T assert!(problem.evaluate(&[1, 0, 1])); diff --git a/src/unit_tests/models/set/maximum_set_packing.rs b/src/unit_tests/models/set/maximum_set_packing.rs index 4025af01..9995c34f 100644 --- a/src/unit_tests/models/set/maximum_set_packing.rs +++ b/src/unit_tests/models/set/maximum_set_packing.rs @@ -164,9 +164,8 @@ fn test_universe_size_empty() { #[test] fn test_setpacking_paper_example() { // Paper: U={0..5}, sets {0,1},{1,2},{2,3},{3,4}, max packing {S_0,S_2} - let problem = MaximumSetPacking::<One>::new(vec![ - vec![0, 1], vec![1, 2], vec![2, 3], vec![3, 4], - ]); + let problem = + MaximumSetPacking::<One>::new(vec![vec![0, 1], vec![1, 2], vec![2, 3], vec![3, 4]]); let config = vec![1, 0, 1, 0]; // {S_0, S_2} let result = problem.evaluate(&config); assert!(result.is_valid()); diff --git a/src/unit_tests/models/set/minimum_set_covering.rs b/src/unit_tests/models/set/minimum_set_covering.rs index 7347cfaf..0063bc69 100644 --- a/src/unit_tests/models/set/minimum_set_covering.rs +++ b/src/unit_tests/models/set/minimum_set_covering.rs @@ -120,9 +120,7 @@ fn test_is_valid_solution() { #[test] fn test_setcovering_paper_example() { // Paper: U=5, sets {0,1,2},{1,3},{2,3,4}, min cover {S_0,S_2}, weight=2 - let problem = MinimumSetCovering::<One>::new(5, vec![ - vec![0, 1, 2], vec![1, 3], vec![2, 3, 4], - ]); + let problem = MinimumSetCovering::<One>::new(5, vec![vec![0, 1, 2], vec![1, 3], vec![2, 3, 4]]); let config = vec![1, 0, 1]; // {S_0, S_2} covers all of {0,1,2,3,4} let result = problem.evaluate(&config); assert!(result.is_valid()); diff --git a/src/unit_tests/reduction_graph.rs b/src/unit_tests/reduction_graph.rs index 9ba9e770..dc636116 100644 --- a/src/unit_tests/reduction_graph.rs +++ b/src/unit_tests/reduction_graph.rs @@ -461,3 +461,191 @@ fn
test_k_neighbors_zero_hops() { ); assert!(neighbors.is_empty()); } + +// ---- Default variant resolution ---- + +#[test] +fn default_variant_for_mis_uses_declared_default() { + let graph = ReductionGraph::new(); + let default = graph.default_variant_for("MaximumIndependentSet"); + assert!( + default.is_some(), + "MaximumIndependentSet should have a declared default variant" + ); + let variant = default.unwrap(); + assert_eq!( + variant.get("graph").map(|s| s.as_str()), + Some("SimpleGraph"), + "default MIS variant should use SimpleGraph" + ); + assert_eq!( + variant.get("weight").map(|s| s.as_str()), + Some("One"), + "default MIS variant should use One (unit weight)" + ); +} + +#[test] +fn default_variant_for_unknown_problem_returns_none() { + let graph = ReductionGraph::new(); + let default = graph.default_variant_for("NonExistentProblem"); + assert!( + default.is_none(), + "unknown problem should have no default variant" + ); +} + +#[test] +fn default_variant_for_mvc_uses_declared_default() { + let graph = ReductionGraph::new(); + let default = graph.default_variant_for("MinimumVertexCover"); + assert!( + default.is_some(), + "MinimumVertexCover should have a declared default variant" + ); + let variant = default.unwrap(); + assert_eq!( + variant.get("graph").map(|s| s.as_str()), + Some("SimpleGraph"), + "default MVC variant should use SimpleGraph" + ); + assert_eq!( + variant.get("weight").map(|s| s.as_str()), + Some("i32"), + "default MVC variant should use i32" + ); +} + +#[test] +fn default_variant_for_qubo_uses_declared_default() { + let graph = ReductionGraph::new(); + let default = graph.default_variant_for("QUBO"); + assert!( + default.is_some(), + "QUBO should have a declared default variant" + ); + let variant = default.unwrap(); + assert_eq!( + variant.get("weight").map(|s| s.as_str()), + Some("f64"), + "default QUBO variant should use f64" + ); +} + +#[test] +fn default_variant_for_ksat_uses_declared_default() { + let graph = ReductionGraph::new(); + 
let default = graph.default_variant_for("KSatisfiability"); + assert!( + default.is_some(), + "KSatisfiability should have a declared default variant" + ); + let variant = default.unwrap(); + assert_eq!( + variant.get("k").map(|s| s.as_str()), + Some("KN"), + "default KSatisfiability variant should use KN" + ); +} + +#[test] +fn default_variant_for_sat_returns_empty() { + // Satisfiability has no variant dimensions, so its default is an empty map + let graph = ReductionGraph::new(); + let default = graph.default_variant_for("Satisfiability"); + assert!( + default.is_some(), + "Satisfiability should have a declared default variant" + ); + assert!( + default.unwrap().is_empty(), + "Satisfiability default variant should be empty (no dimensions)" + ); +} + +// ---- Capped path enumeration ---- + +#[test] +fn find_paths_up_to_stops_after_limit() { + let graph = ReductionGraph::new(); + let src = ReductionGraph::variant_to_map(&MaximumIndependentSet::<SimpleGraph, i32>::variant()); + let dst = ReductionGraph::variant_to_map(&QUBO::<f64>::variant()); + + // Get all paths to know the total count + let all = graph.find_all_paths("MaximumIndependentSet", &src, "QUBO", &dst); + assert!(all.len() > 3, "need multiple paths for this test"); + + // With a limit of 3, should get exactly 3 + let limited = graph.find_paths_up_to("MaximumIndependentSet", &src, "QUBO", &dst, 3); + assert_eq!(limited.len(), 3, "should stop after 3 paths"); +} + +#[test] +fn find_paths_up_to_returns_all_when_limit_exceeds_total() { + let graph = ReductionGraph::new(); + let src = ReductionGraph::variant_to_map(&MaximumIndependentSet::<SimpleGraph, i32>::variant()); + let dst = ReductionGraph::variant_to_map(&MinimumVertexCover::<SimpleGraph, i32>::variant()); + + let all = graph.find_all_paths("MaximumIndependentSet", &src, "MinimumVertexCover", &dst); + let limited = graph.find_paths_up_to( + "MaximumIndependentSet", + &src, + "MinimumVertexCover", + &dst, + 1000, + ); + assert_eq!( + limited.len(), + all.len(), + "should return all paths when limit exceeds
total" + ); +} + +#[test] +fn find_paths_up_to_no_path() { + let graph = ReductionGraph::new(); + let src = ReductionGraph::variant_to_map(&QUBO::::variant()); + let dst = ReductionGraph::variant_to_map(&MaximumSetPacking::::variant()); + + let limited = graph.find_paths_up_to("QUBO", &src, "MaximumSetPacking", &dst, 10); + assert!(limited.is_empty()); +} + +// ---- Exact source+target variant matching ---- + +#[test] +fn find_best_entry_rejects_wrong_target_variant() { + let graph = ReductionGraph::new(); + let source = + ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); + // MIS -> MVC exists, but MVC does not + let wrong_target = BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "f64".to_string()), + ]); + let result = graph.find_best_entry( + "MaximumIndependentSet", + &source, + "MinimumVertexCover", + &wrong_target, + ); + assert!(result.is_none(), "Should reject wrong target variant"); +} + +#[test] +fn find_best_entry_accepts_exact_source_and_target_variant() { + let graph = ReductionGraph::new(); + let source = + ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); + let target = ReductionGraph::variant_to_map(&MinimumVertexCover::::variant()); + let result = graph.find_best_entry( + "MaximumIndependentSet", + &source, + "MinimumVertexCover", + &target, + ); + assert!( + result.is_some(), + "Should find exact match on both source and target variant" + ); +} diff --git a/src/unit_tests/registry/dispatch.rs b/src/unit_tests/registry/dispatch.rs new file mode 100644 index 00000000..bb8889c7 --- /dev/null +++ b/src/unit_tests/registry/dispatch.rs @@ -0,0 +1,122 @@ +use crate::models::graph::MaximumIndependentSet; +use crate::models::misc::SubsetSum; +use crate::registry::variant::find_variant_entry; +use crate::registry::{load_dyn, serialize_any, DynProblem, LoadedDynProblem}; +use crate::topology::SimpleGraph; +use crate::{Problem, Solver}; +use std::any::Any; +use 
std::collections::BTreeMap; + +fn solve_subset_sum(any: &dyn Any) -> Option<(Vec<usize>, String)> { + let p = any.downcast_ref::<SubsetSum>()?; + let config = crate::BruteForce::new().find_satisfying(p)?; + let eval = format!("{:?}", p.evaluate(&config)); + Some((config, eval)) +} + +#[test] +fn test_dyn_problem_blanket_impl_exposes_problem_metadata() { + let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]); + let dyn_problem: &dyn DynProblem = &problem; + + assert_eq!(dyn_problem.problem_name(), "MaximumIndependentSet"); + assert_eq!(dyn_problem.num_variables_dyn(), 3); + assert_eq!(dyn_problem.dims_dyn(), vec![2, 2, 2]); + assert_eq!(dyn_problem.variant_map()["graph"], "SimpleGraph"); + assert!(dyn_problem.serialize_json().is_object()); +} + +#[test] +fn test_loaded_dyn_problem_delegates_to_solve_fn() { + let problem = SubsetSum::new(vec![3u32, 7u32, 1u32], 4u32); + let loaded = LoadedDynProblem::new(Box::new(problem), solve_subset_sum); + let solved = loaded + .solve_brute_force() + .expect("expected satisfying solution"); + assert_eq!(solved.1, "true"); + assert_eq!(solved.0.len(), 3); +} + +#[test] +fn test_find_variant_entry_requires_exact_variant() { + let partial = BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]); + assert!(find_variant_entry("MaximumIndependentSet", &partial).is_none()); +} + +#[test] +fn test_load_dyn_round_trips_maximum_independent_set() { + let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]); + let variant = BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]); + let loaded = load_dyn( + "MaximumIndependentSet", + &variant, + serde_json::to_value(&problem).unwrap(), + ) + .unwrap(); + + assert_eq!(loaded.problem_name(), "MaximumIndependentSet"); + assert_eq!( + loaded.serialize_json(), + serde_json::to_value(&problem).unwrap() + ); + assert!(loaded.solve_brute_force().is_some()); +} + +#[test]
+fn test_load_dyn_solves_subset_sum() { + let problem = SubsetSum::new(vec![3u32, 7u32, 1u32], 4u32); + let variant = BTreeMap::new(); + let loaded = load_dyn( + "SubsetSum", + &variant, + serde_json::to_value(&problem).unwrap(), + ) + .unwrap(); + let solved = loaded.solve_brute_force().unwrap(); + assert_eq!(solved.1, "true"); +} + +#[test] +fn test_load_dyn_rejects_partial_variant() { + let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]); + let partial = BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]); + let err = load_dyn( + "MaximumIndependentSet", + &partial, + serde_json::to_value(&problem).unwrap(), + ) + .unwrap_err(); + + assert!(err.contains("MaximumIndependentSet")); +} + +#[test] +fn test_load_dyn_rejects_alias_name() { + let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]); + let variant = BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]); + assert!(load_dyn("MIS", &variant, serde_json::to_value(&problem).unwrap()).is_err()); +} + +#[test] +fn test_serialize_any_round_trips_exact_variant() { + let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]); + let variant = BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]); + let json = serialize_any("MaximumIndependentSet", &variant, &problem as &dyn Any).unwrap(); + assert_eq!(json, serde_json::to_value(&problem).unwrap()); +} + +#[test] +fn test_serialize_any_rejects_partial_variant() { + let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]); + let partial = BTreeMap::from([("graph".to_string(), "SimpleGraph".to_string())]); + assert!(serialize_any("MaximumIndependentSet", &partial, &problem as &dyn Any).is_none()); +} diff --git a/src/unit_tests/registry/problem_type.rs 
b/src/unit_tests/registry/problem_type.rs new file mode 100644 index 00000000..6215bda0 --- /dev/null +++ b/src/unit_tests/registry/problem_type.rs @@ -0,0 +1,232 @@ +use crate::registry::{ + find_problem_type, find_problem_type_by_alias, parse_catalog_problem_ref, problem_types, + ProblemRef, ProblemSchemaEntry, +}; +use std::collections::HashMap; + +#[test] +fn typed_problem_ref_fills_declared_defaults() { + let problem = find_problem_type("MaximumIndependentSet").unwrap(); + let problem_ref = ProblemRef::from_values(&problem, ["i32"]).unwrap(); + assert_eq!( + problem_ref.variant().get("graph").map(|s| s.as_str()), + Some("SimpleGraph") + ); + assert_eq!( + problem_ref.variant().get("weight").map(|s| s.as_str()), + Some("i32") + ); +} + +#[test] +fn catalog_rejects_unknown_dimension_values() { + let problem = find_problem_type("MaximumIndependentSet").unwrap(); + let err = ProblemRef::from_values(&problem, ["HyperGraph"]).unwrap_err(); + assert!( + err.contains("Known variants"), + "error should mention known variants: {err}" + ); +} + +#[test] +fn catalog_alias_lookup_is_case_insensitive() { + let problem = find_problem_type_by_alias("mis").unwrap(); + assert_eq!(problem.canonical_name, "MaximumIndependentSet"); +} + +#[test] +fn find_problem_type_returns_none_for_unknown() { + assert!(find_problem_type("NonExistentProblem").is_none()); +} + +#[test] +fn find_problem_type_by_alias_matches_canonical_name() { + let problem = find_problem_type_by_alias("MaximumIndependentSet").unwrap(); + assert_eq!(problem.canonical_name, "MaximumIndependentSet"); +} + +#[test] +fn problem_types_returns_all_registered() { + let types = problem_types(); + assert!( + types.len() > 10, + "expected many problem types, got {}", + types.len() + ); + // Should include MIS + assert!(types + .iter() + .any(|t| t.canonical_name == "MaximumIndependentSet")); +} + +#[test] +fn problem_ref_from_values_no_values_uses_all_defaults() { + let problem = 
find_problem_type("MaximumIndependentSet").unwrap(); + let problem_ref = ProblemRef::from_values(&problem, Vec::<&str>::new()).unwrap(); + assert_eq!( + problem_ref.variant().get("graph").map(|s| s.as_str()), + Some("SimpleGraph") + ); + assert_eq!( + problem_ref.variant().get("weight").map(|s| s.as_str()), + Some("One") + ); +} + +#[test] +fn problem_ref_from_values_graph_override() { + let problem = find_problem_type("MaximumIndependentSet").unwrap(); + let problem_ref = ProblemRef::from_values(&problem, ["UnitDiskGraph", "i32"]).unwrap(); + assert_eq!( + problem_ref.variant().get("graph").map(|s| s.as_str()), + Some("UnitDiskGraph") + ); + assert_eq!( + problem_ref.variant().get("weight").map(|s| s.as_str()), + Some("i32") + ); +} + +#[test] +fn parse_catalog_problem_ref_bare_mis() { + let r = parse_catalog_problem_ref("MIS").unwrap(); + assert_eq!(r.name(), "MaximumIndependentSet"); + assert_eq!( + r.variant().get("graph").map(|s| s.as_str()), + Some("SimpleGraph") + ); + assert_eq!(r.variant().get("weight").map(|s| s.as_str()), Some("One")); +} + +#[test] +fn parse_catalog_problem_ref_with_value() { + let r = parse_catalog_problem_ref("MIS/UnitDiskGraph").unwrap(); + assert_eq!(r.name(), "MaximumIndependentSet"); + assert_eq!( + r.variant().get("graph").map(|s| s.as_str()), + Some("UnitDiskGraph") + ); +} + +#[test] +fn parse_catalog_problem_ref_rejects_unknown() { + let err = parse_catalog_problem_ref("NonExistent").unwrap_err(); + assert!(err.contains("Unknown problem type")); +} + +#[test] +fn problem_ref_to_export_ref() { + let problem = find_problem_type("MaximumIndependentSet").unwrap(); + let problem_ref = ProblemRef::from_values(&problem, ["i32"]).unwrap(); + let export_ref = problem_ref.to_export_ref(); + assert_eq!(export_ref.name, "MaximumIndependentSet"); + assert_eq!( + export_ref.variant.get("weight").map(|s| s.as_str()), + Some("i32") + ); +} + +// ---- Catalog invariant tests ---- + +#[test] +fn every_public_problem_schema_has_display_name() { 
+ for entry in inventory::iter::<ProblemSchemaEntry> { + assert!( + !entry.display_name.is_empty(), + "Problem {} has empty display_name", + entry.name + ); + } +} + +#[test] +fn every_public_problem_schema_has_dimension_defaults() { + for entry in inventory::iter::<ProblemSchemaEntry> { + for dim in entry.dimensions { + assert!( + dim.allowed_values.contains(&dim.default_value), + "Problem {} dimension '{}' default '{}' not in allowed values {:?}", + entry.name, + dim.key, + dim.default_value, + dim.allowed_values, + ); + } + } +} + +#[test] +fn every_alias_is_globally_unique() { + let mut seen: HashMap<String, &'static str> = HashMap::new(); + for entry in inventory::iter::<ProblemSchemaEntry> { + for alias in entry.aliases { + let lower = alias.to_lowercase(); + if let Some(prev) = seen.get(&lower) { + panic!( + "Alias '{}' is used by both {} and {}", + alias, prev, entry.name, + ); + } + seen.insert(lower, entry.name); + } + } +} + +#[test] +fn catalog_dimensions_cover_all_declared_variants() { + use crate::registry::variant::VariantEntry; + + for entry in inventory::iter::<ProblemSchemaEntry> { + if entry.dimensions.is_empty() { + continue; + } + + // Collect all variant entries for this problem + let variants: Vec<_> = inventory::iter::<VariantEntry> + .into_iter() + .filter(|v| v.name == entry.name) + .collect(); + + for ve in &variants { + let variant_pairs = ve.variant(); + for (key, value) in &variant_pairs { + if let Some(dim) = entry.dimensions.iter().find(|d| d.key == *key) { + assert!( + dim.allowed_values.contains(value), + "Problem {} declared variant value '{}' for dimension '{}' \ is not in catalog allowed_values {:?}", + entry.name, + value, + key, + dim.allowed_values, + ); + } + } + } + } +} + +#[test] +fn graph_defaults_are_catalog_defaults_for_registered_variants() { + let graph = crate::rules::ReductionGraph::new(); + + for pt in problem_types() { + if pt.dimensions.is_empty() { + continue; + } + + let catalog_default = pt.default_variant(); + if let Some(graph_default) = graph.default_variant_for(pt.canonical_name) { + // Every catalog default
dimension should match the graph default + for (key, cat_val) in &catalog_default { + if let Some(graph_val) = graph_default.get(key) { + assert_eq!( + cat_val, graph_val, + "Problem {} dimension '{}': catalog default '{}' != graph default '{}'", + pt.canonical_name, key, cat_val, graph_val, + ); + } + } + } + } +} diff --git a/src/unit_tests/rules/analysis.rs b/src/unit_tests/rules/analysis.rs index d4a4cd1f..e2e65a41 100644 --- a/src/unit_tests/rules/analysis.rs +++ b/src/unit_tests/rules/analysis.rs @@ -1,5 +1,8 @@ use crate::expr::Expr; -use crate::rules::analysis::{compare_overhead, find_dominated_rules, ComparisonStatus}; +use crate::rules::analysis::{ + check_connectivity, check_reachability_from_3sat, compare_overhead, find_dominated_rules, + ComparisonStatus, UnreachableReason, +}; use crate::rules::graph::ReductionGraph; use crate::rules::registry::ReductionOverhead; @@ -317,3 +320,172 @@ fn test_no_duplicate_primitive_rules_per_variant_pair() { ); } } + +// ---- Connectivity checks ---- + +#[test] +fn test_check_connectivity_returns_valid_report() { + let graph = ReductionGraph::new(); + let report = check_connectivity(&graph); + + assert!(report.total_types > 0); + assert!(report.total_reductions > 0); + assert!(!report.components.is_empty()); + + // Components should be sorted largest-first + for w in report.components.windows(2) { + assert!(w[0].len() >= w[1].len()); + } + + // Each component should be internally sorted + for comp in &report.components { + let mut sorted = comp.clone(); + sorted.sort(); + assert_eq!(comp, &sorted); + } + + // All types should appear in exactly one component + let total_in_components: usize = report.components.iter().map(|c| c.len()).sum(); + assert_eq!(total_in_components, report.total_types); +} + +#[test] +fn test_isolated_problems_have_no_reductions() { + let graph = ReductionGraph::new(); + let report = check_connectivity(&graph); + + for p in &report.isolated { + assert!( + 
graph.outgoing_reductions(p.name).is_empty(), + "{} has outgoing reductions but is marked isolated", + p.name + ); + assert!( + graph.incoming_reductions(p.name).is_empty(), + "{} has incoming reductions but is marked isolated", + p.name + ); + assert!(p.num_variants > 0); + } +} + +// ---- Reachability checks ---- + +#[test] +fn test_reachability_from_3sat_returns_valid_report() { + let graph = ReductionGraph::new(); + let report = check_reachability_from_3sat(&graph); + + assert!(report.total_types > 0); + // 3-SAT (KSatisfiability) should be reachable at distance 0 + assert_eq!(report.reachable.get("KSatisfiability"), Some(&0)); + // Satisfiability should be reachable (KSat -> Sat exists) + assert!(report.reachable.contains_key("Satisfiability")); + // Total should add up + assert_eq!( + report.reachable.len() + report.unreachable.len(), + report.total_types + ); +} + +#[test] +fn test_reachability_classifies_known_problems() { + let graph = ReductionGraph::new(); + let report = check_reachability_from_3sat(&graph); + + // MaximumMatching is in P + if let Some(p) = report + .unreachable + .iter() + .find(|p| p.name == "MaximumMatching") + { + assert_eq!(p.reason, UnreachableReason::InP); + } + + // Factoring is intermediate + if let Some(p) = report.unreachable.iter().find(|p| p.name == "Factoring") { + assert_eq!(p.reason, UnreachableReason::Intermediate); + } +} + +#[test] +fn test_reachability_hop_distances_are_monotonic() { + let graph = ReductionGraph::new(); + let report = check_reachability_from_3sat(&graph); + + // For every reachable problem at distance d > 0, there must be a + // predecessor at distance d-1 that has an outgoing reduction to it + for (&name, &hops) in &report.reachable { + if hops == 0 { + continue; + } + let has_predecessor = graph.incoming_reductions(name).iter().any(|edge| { + report + .reachable + .get(edge.source_name) + .is_some_and(|&h| h < hops) + }); + assert!( + has_predecessor, + "{name} at distance {hops} has no predecessor 
with smaller distance" + ); + } +} + +#[test] +fn test_reachability_missing_proof_chains_filter() { + let graph = ReductionGraph::new(); + let report = check_reachability_from_3sat(&graph); + let missing = report.missing_proof_chains(); + // All items returned should have MissingProofChain reason + for p in &missing { + assert_eq!(p.reason, UnreachableReason::MissingProofChain); + } + // Count should match manual filter + let manual_count = report + .unreachable + .iter() + .filter(|p| p.reason == UnreachableReason::MissingProofChain) + .count(); + assert_eq!(missing.len(), manual_count); +} + +#[test] +fn test_connectivity_reports_components() { + let graph = ReductionGraph::new(); + let report = check_connectivity(&graph); + // There should be at least one component + assert!(!report.components.is_empty()); + // The largest component should be sorted + if let Some(largest) = report.components.first() { + let mut sorted = largest.clone(); + sorted.sort(); + assert_eq!(*largest, sorted, "components should be sorted"); + } + // Components should be sorted by size (descending) + for window in report.components.windows(2) { + assert!( + window[0].len() >= window[1].len(), + "components should be sorted largest-first" + ); + } +} + +#[test] +fn test_connectivity_isolated_problems_have_variant_info() { + let graph = ReductionGraph::new(); + let report = check_connectivity(&graph); + for iso in &report.isolated { + assert!( + iso.num_variants > 0, + "isolated problem {} should have at least one variant", + iso.name + ); + assert_eq!( + iso.variant_complexities.len(), + iso.num_variants, + "variant_complexities count should match num_variants for {}", + iso.name + ); + } +} diff --git a/src/unit_tests/rules/registry.rs b/src/unit_tests/rules/registry.rs index dfe762a2..c6b0f4cb 100644 --- a/src/unit_tests/rules/registry.rs +++ b/src/unit_tests/rules/registry.rs @@ -1,5 +1,6 @@ use super::*; use crate::expr::Expr; +use std::path::Path; /// Dummy reduce_fn for unit tests 
that don't exercise runtime reduction. fn dummy_reduce_fn(_: &dyn std::any::Any) -> Box<dyn std::any::Any> { @@ -283,3 +284,125 @@ fn test_complexity_eval_fn_cross_check_factoring() { let input = ProblemSize::new(vec![("m", problem.m()), ("n", problem.n())]); cross_check_complexity(entry, &problem as &dyn std::any::Any, &input); } + +type EndpointKey = (String, Vec<(String, String)>, String, Vec<(String, String)>); + +fn exact_endpoint_key(entry: &ReductionEntry) -> EndpointKey { + let source_variant = entry + .source_variant() + .into_iter() + .map(|(k, v)| (k.to_string(), v.to_string())) + .collect(); + let target_variant = entry + .target_variant() + .into_iter() + .map(|(k, v)| (k.to_string(), v.to_string())) + .collect(); + ( + entry.source_name.to_string(), + source_variant, + entry.target_name.to_string(), + target_variant, + ) +} + +fn walk_rust_files(dir: &Path, files: &mut Vec<std::path::PathBuf>) { + for entry in std::fs::read_dir(dir).unwrap() { + let entry = entry.unwrap(); + let path = entry.path(); + if path.is_dir() { + walk_rust_files(&path, files); + } else if path.extension().is_some_and(|ext| ext == "rs") { + files.push(path); + } + } +} + +fn reduction_attribute_has_extra_top_level_field(path: &Path) -> bool { + let contents = std::fs::read_to_string(path).unwrap(); + let mut in_reduction_attr = false; + let mut attr_text = String::new(); + + for line in contents.lines() { + if !in_reduction_attr + && (line.contains("#[reduction(") || line.contains("#[$crate::reduction(")) + { + in_reduction_attr = true; + attr_text.clear(); + } + if in_reduction_attr { + attr_text.push_str(line.trim()); + attr_text.push(' '); + } + if in_reduction_attr && line.contains(")]") { + let normalized = attr_text.split_whitespace().collect::<Vec<_>>().join(" "); + let body = normalized + .strip_prefix("#[reduction(") + .or_else(|| normalized.strip_prefix("#[$crate::reduction(")) + .unwrap_or(&normalized); + let body = body.strip_suffix(")]").unwrap_or(body).trim(); + if !body.starts_with("overhead =") { + return
true; + } + in_reduction_attr = false; + } + } + + false +} + +#[test] +fn every_registered_reduction_has_unique_exact_endpoints() { + let entries = reduction_entries(); + let mut seen = std::collections::HashMap::new(); + for entry in &entries { + let key = exact_endpoint_key(entry); + if let Some(prev) = seen.insert(key.clone(), entry) { + panic!( + "Duplicate exact reduction endpoint {:?}: {} {:?} -> {} {:?} vs {} {:?} -> {} {:?}", + key, + prev.source_name, + prev.source_variant(), + prev.target_name, + prev.target_variant(), + entry.source_name, + entry.source_variant(), + entry.target_name, + entry.target_variant(), + ); + } + } +} + +#[test] +fn every_registered_reduction_has_non_empty_names() { + for entry in reduction_entries() { + assert!( + !entry.source_name.is_empty(), + "Empty source_name for reduction targeting {}", + entry.target_name, + ); + assert!( + !entry.target_name.is_empty(), + "Empty target_name for reduction sourced from {}", + entry.source_name, + ); + } +} + +#[test] +fn repo_reductions_use_overhead_only_attribute() { + let mut rust_files = Vec::new(); + walk_rust_files(Path::new("src/rules"), &mut rust_files); + + let offenders: Vec<_> = rust_files + .into_iter() + .filter(|path| reduction_attribute_has_extra_top_level_field(path)) + .collect(); + + assert!( + offenders.is_empty(), + "extra top-level reduction attribute still present in: {:?}", + offenders, + ); +} diff --git a/src/unit_tests/traits.rs b/src/unit_tests/traits.rs index c59957d6..6fadbaa2 100644 --- a/src/unit_tests/traits.rs +++ b/src/unit_tests/traits.rs @@ -230,6 +230,19 @@ fn test_float_metric_problem() { assert_eq!(p.direction(), Direction::Maximize); } +// === Catalog bridge === + +#[test] +fn problem_type_bridge_returns_catalog_entry_for_registered_type() { + use crate::models::graph::MaximumIndependentSet; + use crate::topology::SimpleGraph; + + let pt = MaximumIndependentSet::<SimpleGraph>::problem_type(); + assert_eq!(pt.canonical_name, "MaximumIndependentSet"); +
assert!(!pt.display_name.is_empty()); + assert!(!pt.dimensions.is_empty()); +} + // === Clone constraint === #[test] diff --git a/src/unit_tests/types.rs b/src/unit_tests/types.rs index ceb7ee38..663fee0d 100644 --- a/src/unit_tests/types.rs +++ b/src/unit_tests/types.rs @@ -58,6 +58,15 @@ fn test_one() { assert_eq!(from_int, One); } +#[test] +fn test_one_json() { + let json = serde_json::to_value(vec![One, One]).unwrap(); + assert_eq!(json, serde_json::json!([1, 1])); + + let parsed: Vec<One> = serde_json::from_value(json).unwrap(); + assert_eq!(parsed, vec![One, One]); +} + #[test] fn test_direction() { let max_dir = Direction::Maximize; diff --git a/src/unit_tests/variant.rs b/src/unit_tests/variant.rs index 740f2fbf..7289134e 100644 --- a/src/unit_tests/variant.rs +++ b/src/unit_tests/variant.rs @@ -316,3 +316,165 @@ fn test_weight_cast_chain() { let f: f64 = i.cast_to_parent(); assert_eq!(f, 1.0); } + +// --- VariantSpec tests --- + +use crate::variant::VariantSpec; + +#[test] +fn variant_spec_basic_construction() { + let spec = VariantSpec::try_from_pairs(vec![("graph", "SimpleGraph"), ("weight", "i32")]) + .expect("valid pairs should succeed"); + let map = spec.as_map(); + assert_eq!(map.len(), 2); + assert_eq!(map["graph"], "SimpleGraph"); + assert_eq!(map["weight"], "i32"); +} + +#[test] +fn variant_spec_empty_construction() { + let spec = VariantSpec::try_from_pairs(Vec::<(&str, &str)>::new()) + .expect("empty pairs should succeed"); + assert!(spec.as_map().is_empty()); +} + +#[test] +fn variant_spec_rejects_duplicate_dimensions() { + let result = + VariantSpec::try_from_pairs(vec![("graph", "SimpleGraph"), ("graph", "PlanarGraph")]); + assert!(result.is_err()); + let err_msg = result.unwrap_err().to_string(); + assert!( + err_msg.contains("duplicate dimension"), + "error should mention duplicate dimension, got: {err_msg}" + ); +} + +#[test] +fn variant_spec_preserves_btreemap_order() { + // BTreeMap sorts by key, so insertion order doesn't matter + let spec = 
VariantSpec::try_from_pairs(vec![("weight", "i32"), ("graph", "SimpleGraph")]) + .expect("valid pairs"); + let keys: Vec<&String> = spec.as_map().keys().collect(); + assert_eq!(keys, vec!["graph", "weight"], "BTreeMap should sort keys"); +} + +#[test] +fn variant_spec_normalizes_empty_graph_to_simple_graph() { + // A variant with graph="" should normalize to graph="SimpleGraph" + let spec = + VariantSpec::try_from_pairs(vec![("graph", ""), ("weight", "i32")]).expect("valid pairs"); + let normalized = spec.normalize(); + assert_eq!( + normalized.as_map()["graph"], + "SimpleGraph", + "normalize() should fill in 'SimpleGraph' for empty graph dimension" + ); +} + +#[test] +fn variant_spec_normalize_preserves_explicit_values() { + // A variant with explicit values should not be changed by normalize + let spec = VariantSpec::try_from_pairs(vec![("graph", "PlanarGraph"), ("weight", "f64")]) + .expect("valid pairs"); + let normalized = spec.normalize(); + assert_eq!(normalized.as_map()["graph"], "PlanarGraph"); + assert_eq!(normalized.as_map()["weight"], "f64"); +} + +#[test] +fn variant_spec_is_default_for_default_values() { + // A variant with all default values (SimpleGraph, One) should be the default + let spec = VariantSpec::try_from_pairs(vec![("graph", "SimpleGraph"), ("weight", "One")]) + .expect("valid pairs"); + assert!( + spec.is_default(), + "variant with SimpleGraph+One should be the default" + ); +} + +#[test] +fn variant_spec_is_not_default_for_non_default_values() { + // A variant with non-default values should NOT be the default + let spec = VariantSpec::try_from_pairs(vec![("graph", "PlanarGraph"), ("weight", "i32")]) + .expect("valid pairs"); + assert!( + !spec.is_default(), + "variant with PlanarGraph+i32 should not be the default" + ); +} + +#[test] +fn variant_spec_try_from_map() { + let map = std::collections::BTreeMap::from([ + ("graph".to_string(), "SimpleGraph".to_string()), + ("weight".to_string(), "i32".to_string()), + ]); + let spec = 
VariantSpec::try_from_map(map.clone()).expect("should succeed for valid map"); + assert_eq!(spec.as_map(), &map); +} + +#[test] +fn variant_spec_into_map_returns_owned() { + let spec = VariantSpec::try_from_pairs(vec![("graph", "SimpleGraph"), ("weight", "One")]) + .expect("valid pairs"); + let map = spec.into_map(); + assert_eq!(map.len(), 2); + assert_eq!(map["graph"], "SimpleGraph"); + assert_eq!(map["weight"], "One"); +} + +#[test] +fn variant_spec_update_dimension_adds_new() { + let mut spec = + VariantSpec::try_from_pairs(vec![("graph", "SimpleGraph")]).expect("valid pairs"); + spec.update_dimension("weight", "i32"); + assert_eq!(spec.as_map().len(), 2); + assert_eq!(spec.as_map()["weight"], "i32"); +} + +#[test] +fn variant_spec_update_dimension_overwrites_existing() { + let mut spec = VariantSpec::try_from_pairs(vec![("graph", "SimpleGraph"), ("weight", "One")]) + .expect("valid pairs"); + spec.update_dimension("weight", "f64"); + assert_eq!(spec.as_map()["weight"], "f64"); +} + +#[test] +fn variant_spec_normalize_no_graph_dimension_unchanged() { + // A variant without a "graph" dimension should not be changed + let spec = VariantSpec::try_from_pairs(vec![("weight", "i32")]).expect("valid pairs"); + let normalized = spec.normalize(); + assert_eq!(normalized.as_map().len(), 1); + assert_eq!(normalized.as_map()["weight"], "i32"); +} + +#[test] +fn variant_spec_is_default_empty_variant() { + let spec = VariantSpec::try_from_pairs(Vec::<(&str, &str)>::new()) + .expect("empty pairs should succeed"); + assert!( + spec.is_default(), + "empty variant should be considered default" + ); +} + +#[test] +fn variant_spec_is_default_kn() { + let spec = VariantSpec::try_from_pairs(vec![("k", "KN")]).expect("valid pairs"); + assert!( + spec.is_default(), + "variant with KN should be considered default" + ); +} + +#[test] +fn variant_spec_is_not_default_mixed() { + let spec = VariantSpec::try_from_pairs(vec![("graph", "SimpleGraph"), ("weight", "i32")]) + .expect("valid 
pairs"); + assert!( + !spec.is_default(), + "variant with i32 weight should not be default" + ); +} diff --git a/src/variant.rs b/src/variant.rs index fcac4dc0..4acf665c 100644 --- a/src/variant.rs +++ b/src/variant.rs @@ -146,6 +146,91 @@ impl_variant_param!(K3, "k", parent: KN, cast: |_| KN, k: Some(3)); impl_variant_param!(K2, "k", parent: KN, cast: |_| KN, k: Some(2)); impl_variant_param!(K1, "k", parent: KN, cast: |_| KN, k: Some(1)); +// --- VariantSpec: canonical runtime representation of a problem variant --- + +use std::collections::BTreeMap; + +/// Canonical runtime representation of a problem variant. +/// +/// Used for validated runtime lookups and normalization. Unlike raw +/// `BTreeMap<String, String>`, a `VariantSpec` validates its dimensions +/// at construction time and can normalize default values. +#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)] +pub struct VariantSpec { + dims: BTreeMap<String, String>, +} + +/// Default dimension values used for normalization and default detection. +const DEFAULT_VALUES: &[&str] = &["SimpleGraph", "One", "KN"]; + +impl VariantSpec { + /// Create a `VariantSpec` from key-value pairs, rejecting duplicate dimensions. + /// + /// Returns an error if the same dimension key appears more than once. + pub fn try_from_pairs<I, K, V>(pairs: I) -> std::result::Result<Self, String> + where + I: IntoIterator<Item = (K, V)>, + K: Into<String>, + V: Into<String>, + { + let mut dims = BTreeMap::new(); + for (k, v) in pairs { + let key = k.into(); + let val = v.into(); + if dims.insert(key.clone(), val).is_some() { + return Err(format!("duplicate dimension: {}", key)); + } + } + Ok(Self { dims }) + } + + /// Create a `VariantSpec` from an existing `BTreeMap`. + pub fn try_from_map(map: BTreeMap<String, String>) -> std::result::Result<Self, String> { + Ok(Self { dims: map }) + } + + /// View the dimensions as a map. + pub fn as_map(&self) -> &BTreeMap<String, String> { + &self.dims + } + + /// Consume this `VariantSpec` and return the underlying map.
+ pub fn into_map(self) -> BTreeMap<String, String> { + self.dims + } + + /// Update or add a single dimension. + pub fn update_dimension(&mut self, key: impl Into<String>, value: impl Into<String>) { + self.dims.insert(key.into(), value.into()); + } + + /// Normalize the variant by filling in default values for empty dimensions. + /// + /// If a dimension has an empty string value, it is replaced with its + /// canonical default: + /// - `"graph"` → `"SimpleGraph"` + pub fn normalize(&self) -> Self { + let mut dims = self.dims.clone(); + if let Some(v) = dims.get_mut("graph") { + if v.is_empty() { + *v = "SimpleGraph".to_string(); + } + } + Self { dims } + } + + /// Check whether this variant uses only default dimension values. + /// + /// Returns `true` if every dimension value is one of the recognized + /// defaults: `"SimpleGraph"`, `"One"`, `"KN"`. An empty variant + /// (no dimensions) is also considered default. + pub fn is_default(&self) -> bool { + self.dims + .values() + .all(|v| DEFAULT_VALUES.contains(&v.as_str())) + } +} + #[cfg(test)] #[path = "unit_tests/variant.rs"] mod tests; diff --git a/tests/suites/examples.rs b/tests/suites/examples.rs index c405b28e..366d8ae1 100644 --- a/tests/suites/examples.rs +++ b/tests/suites/examples.rs @@ -1,208 +1,48 @@ -// Each example is included as a module and tested directly (no subprocess overhead). -// Individual #[test] functions let cargo's test harness run them in parallel. +// Test remaining example binaries to keep them compiling and correct. +// Examples with `pub fn run()` are included directly; others are run as subprocesses. -macro_rules! 
example_test { - ($mod_name:ident) => { - #[allow(unused)] - mod $mod_name { - include!(concat!("../../examples/", stringify!($mod_name), ".rs")); - } - }; +// --- Chained reduction demo (has pub fn run()) --- + +#[cfg(feature = "ilp-solver")] +#[allow(unused)] +mod chained_reduction_factoring_to_spinglass { + include!("../../examples/chained_reduction_factoring_to_spinglass.rs"); +} + +#[cfg(feature = "ilp-solver")] +#[test] +fn test_chained_reduction_factoring_to_spinglass() { + chained_reduction_factoring_to_spinglass::run(); +} + +// --- Subprocess tests for export utilities --- + +fn run_example(name: &str) { + let status = std::process::Command::new(env!("CARGO")) + .args(["run", "--example", name, "--features", "ilp-highs"]) + .status() + .unwrap_or_else(|e| panic!("Failed to run example {name}: {e}")); + assert!(status.success(), "Example {name} failed with {status}"); } -example_test!(hamiltonian_path); -example_test!(chained_reduction_factoring_to_spinglass); -example_test!(chained_reduction_ksat_to_mis); -example_test!(reduction_binpacking_to_ilp); -example_test!(reduction_circuitsat_to_ilp); -example_test!(reduction_circuitsat_to_spinglass); -example_test!(reduction_factoring_to_circuitsat); -example_test!(reduction_factoring_to_ilp); -example_test!(reduction_ilp_to_qubo); -example_test!(reduction_kcoloring_to_ilp); -example_test!(reduction_kcoloring_to_qubo); -example_test!(reduction_ksatisfiability_to_qubo); -example_test!(reduction_ksatisfiability_to_subsetsum); -example_test!(reduction_ksatisfiability_to_satisfiability); -example_test!(reduction_maxcut_to_spinglass); -example_test!(reduction_maximumclique_to_ilp); -example_test!(reduction_maximumclique_to_maximumindependentset); -example_test!(reduction_maximumindependentset_to_ilp); -example_test!(reduction_maximumindependentset_to_maximumclique); -example_test!(reduction_maximumindependentset_to_maximumsetpacking); -example_test!(reduction_maximumindependentset_to_minimumvertexcover); 
-example_test!(reduction_maximumindependentset_to_qubo); -example_test!(reduction_maximummatching_to_ilp); -example_test!(reduction_maximummatching_to_maximumsetpacking); -example_test!(reduction_maximumsetpacking_to_ilp); -example_test!(reduction_maximumsetpacking_to_maximumindependentset); -example_test!(reduction_maximumsetpacking_to_qubo); -example_test!(reduction_minimumdominatingset_to_ilp); -example_test!(reduction_minimumsetcovering_to_ilp); -example_test!(reduction_minimumvertexcover_to_ilp); -example_test!(reduction_minimumvertexcover_to_maximumindependentset); -example_test!(reduction_minimumvertexcover_to_minimumsetcovering); -example_test!(reduction_minimumvertexcover_to_qubo); -example_test!(reduction_qubo_to_ilp); -example_test!(reduction_qubo_to_spinglass); -example_test!(reduction_satisfiability_to_kcoloring); -example_test!(reduction_satisfiability_to_circuitsat); -example_test!(reduction_satisfiability_to_ksatisfiability); -example_test!(reduction_satisfiability_to_maximumindependentset); -example_test!(reduction_satisfiability_to_minimumdominatingset); -example_test!(reduction_longestcommonsubsequence_to_ilp); -example_test!(reduction_spinglass_to_maxcut); -example_test!(reduction_spinglass_to_qubo); -example_test!(reduction_travelingsalesman_to_ilp); -example_test!(reduction_travelingsalesman_to_qubo); +#[test] +fn test_export_graph() { + run_example("export_graph"); +} -macro_rules! 
example_fn { - ($test_name:ident, $mod_name:ident) => { - #[test] - fn $test_name() { - $mod_name::run(); - } - }; +#[test] +fn test_export_schemas() { + run_example("export_schemas"); } -example_fn!(test_hamiltonian_path, hamiltonian_path); -example_fn!( - test_chained_reduction_factoring_to_spinglass, - chained_reduction_factoring_to_spinglass -); -example_fn!( - test_chained_reduction_ksat_to_mis, - chained_reduction_ksat_to_mis -); -example_fn!(test_binpacking_to_ilp, reduction_binpacking_to_ilp); -example_fn!(test_circuitsat_to_ilp, reduction_circuitsat_to_ilp); -example_fn!( - test_circuitsat_to_spinglass, - reduction_circuitsat_to_spinglass -); -example_fn!( - test_factoring_to_circuitsat, - reduction_factoring_to_circuitsat -); -example_fn!(test_factoring_to_ilp, reduction_factoring_to_ilp); -example_fn!(test_ilp_to_qubo, reduction_ilp_to_qubo); -example_fn!(test_kcoloring_to_ilp, reduction_kcoloring_to_ilp); -example_fn!(test_kcoloring_to_qubo, reduction_kcoloring_to_qubo); -example_fn!( - test_ksatisfiability_to_qubo, - reduction_ksatisfiability_to_qubo -); -example_fn!( - test_ksatisfiability_to_subsetsum, - reduction_ksatisfiability_to_subsetsum -); -example_fn!( - test_ksatisfiability_to_satisfiability, - reduction_ksatisfiability_to_satisfiability -); -example_fn!(test_maxcut_to_spinglass, reduction_maxcut_to_spinglass); -example_fn!(test_maximumclique_to_ilp, reduction_maximumclique_to_ilp); -example_fn!( - test_maximumclique_to_maximumindependentset, - reduction_maximumclique_to_maximumindependentset -); -example_fn!( - test_maximumindependentset_to_ilp, - reduction_maximumindependentset_to_ilp -); -example_fn!( - test_maximumindependentset_to_maximumclique, - reduction_maximumindependentset_to_maximumclique -); -example_fn!( - test_maximumindependentset_to_maximumsetpacking, - reduction_maximumindependentset_to_maximumsetpacking -); -example_fn!( - test_maximumindependentset_to_minimumvertexcover, - 
reduction_maximumindependentset_to_minimumvertexcover -); -example_fn!( - test_maximumindependentset_to_qubo, - reduction_maximumindependentset_to_qubo -); -example_fn!( - test_maximummatching_to_ilp, - reduction_maximummatching_to_ilp -); -example_fn!( - test_maximummatching_to_maximumsetpacking, - reduction_maximummatching_to_maximumsetpacking -); -example_fn!( - test_maximumsetpacking_to_ilp, - reduction_maximumsetpacking_to_ilp -); -example_fn!( - test_maximumsetpacking_to_maximumindependentset, - reduction_maximumsetpacking_to_maximumindependentset -); -example_fn!( - test_maximumsetpacking_to_qubo, - reduction_maximumsetpacking_to_qubo -); -example_fn!( - test_minimumdominatingset_to_ilp, - reduction_minimumdominatingset_to_ilp -); -example_fn!( - test_minimumsetcovering_to_ilp, - reduction_minimumsetcovering_to_ilp -); -example_fn!( - test_minimumvertexcover_to_ilp, - reduction_minimumvertexcover_to_ilp -); -example_fn!( - test_minimumvertexcover_to_maximumindependentset, - reduction_minimumvertexcover_to_maximumindependentset -); -example_fn!( - test_minimumvertexcover_to_minimumsetcovering, - reduction_minimumvertexcover_to_minimumsetcovering -); -example_fn!( - test_minimumvertexcover_to_qubo, - reduction_minimumvertexcover_to_qubo -); -example_fn!(test_qubo_to_ilp, reduction_qubo_to_ilp); -example_fn!(test_qubo_to_spinglass, reduction_qubo_to_spinglass); -example_fn!( - test_satisfiability_to_circuitsat, - reduction_satisfiability_to_circuitsat -); -example_fn!( - test_satisfiability_to_kcoloring, - reduction_satisfiability_to_kcoloring -); -example_fn!( - test_satisfiability_to_ksatisfiability, - reduction_satisfiability_to_ksatisfiability -); -example_fn!( - test_satisfiability_to_maximumindependentset, - reduction_satisfiability_to_maximumindependentset -); -example_fn!( - test_satisfiability_to_minimumdominatingset, - reduction_satisfiability_to_minimumdominatingset -); -example_fn!( - test_longestcommonsubsequence_to_ilp, - 
reduction_longestcommonsubsequence_to_ilp -); -example_fn!(test_spinglass_to_maxcut, reduction_spinglass_to_maxcut); -example_fn!(test_spinglass_to_qubo, reduction_spinglass_to_qubo); -example_fn!( - test_travelingsalesman_to_ilp, - reduction_travelingsalesman_to_ilp -); -example_fn!( - test_travelingsalesman_to_qubo, - reduction_travelingsalesman_to_qubo -); +#[test] +fn test_export_petersen_mapping() { + run_example("export_petersen_mapping"); +} + +// Note: detect_isolated_problems and detect_unreachable_from_3sat are diagnostic +// tools that exit(1) when they find issues. They are run via `make` targets +// (topology-sanity-check), not as part of `cargo test`. + +// Note: export_examples requires the `example-db` feature which is not enabled +// in standard CI test runs. It is exercised via `make examples`.