diff --git a/CHANGELOG.md b/CHANGELOG.md index 80f7257e..5e50e09f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,15 @@ ## Unreleased +### Fixed (PR #312 follow-up) + +- The docs-surface reduction follow-ups so the collision tour no longer points + at a deleted guide route, the architecture outline uses implementation-backed + deterministic wording for the scene boundary, task-DAG tooling/docs use the + new `tasks-dag-source.md` name consistently, and the backlog now tracks + broader docs-validation cleanup beyond Markdown-only checks, including + recursive `docs/public/**/*.html` coverage. + ### Fixed (PR #308 follow-up) - The PR workflow hardening follow-ups so `pr-preflight` skips deleted diff --git a/README.md b/README.md index 187bb01d..b8664e25 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@

Get StartedArchitecture • - Docs • + DocsAIΩN Framework

diff --git a/docs/DETERMINISTIC_MATH.md b/docs/DETERMINISTIC_MATH.md index 13d3095e..e3ddec5c 100644 --- a/docs/DETERMINISTIC_MATH.md +++ b/docs/DETERMINISTIC_MATH.md @@ -21,7 +21,6 @@ is largely standardized, "freaky numbers" (NaN, Subnormals, Signed Zero) introdu > | -------------------------------------------------------------------------------- | -------------------------------------------- | > | [SPEC_DETERMINISTIC_MATH.md](SPEC_DETERMINISTIC_MATH.md) | **Normative policy** (this doc defers to it) | > | [warp-math-claims.md](warp-math-claims.md) | Claims and theory framing | -> | [math-validation-plan.md](archive/math-validation-plan.md) | Validation test plan and CI lanes (archived) | > | [determinism/DETERMINISM_CLAIMS_v0.1.md](determinism/DETERMINISM_CLAIMS_v0.1.md) | Formal determinism claims | ## 1. NaN Payloads diff --git a/docs/ROADMAP.md b/docs/ROADMAP.md index ee1812cd..31fca09c 100644 --- a/docs/ROADMAP.md +++ b/docs/ROADMAP.md @@ -1,27 +1,25 @@ -# Echo Roadmap Index +# Echo Roadmap -> Scope: Echo + Wesley + git-mind planning and sequencing. -> Format: ROADMAP index -> milestone README -> feature file (tasks inline). -> Last updated: 2026-03-06 +This is the only roadmap entrypoint you should need. -This is the map-of-content (MoC) index for roadmap navigation. Detailed specs live in `docs/ROADMAP/`. +- Use this page to understand current priorities and find the live planning docs. +- Use GitHub Issues / the project board for current execution state. +- Git history is the archive; this page points only at live planning material. -## Execution Policy (The WIP Cap) +## Status Vocabulary -To prevent context thrashing, we adhere to a strict WIP limit: +- `Planned`: scoped, but not active. +- `In Progress`: currently being worked. +- `Verified`: merged and evidenced on `main`. -- **Max 2** active milestones at once. -- **Max 3** active feature files per active milestone. -- Everything else is "Queued." 
- -## Dependency DAG +## Priority Ladder ```mermaid flowchart TD - A["P0 Lock the Hashes ✅"] --> D["P1 Proof Core ✅"] + A["P0 Lock the Hashes ✅"] --> D["P1 Proof Core"] B["P0 Developer CLI ✅"] --> D D --> C["P2 First Light"] E["P1 Time Semantics Lock"] --> F["P3 Time Travel"] @@ -32,42 +30,41 @@ flowchart TD C --> J["P3 Deep Storage"] ``` -## Priority / Status - -| Pri | Milestone | Focus | Status | -| ------ | ---------------------------------------------------------------------- | ---------------------------------------- | ----------- | -| **P0** | **[Lock the Hashes](ROADMAP/lock-the-hashes/README.md)** | Canonical hash vectors & cleanup | Verified | -| **P0** | **[Developer CLI](ROADMAP/developer-cli/README.md)** | `verify`, `bench`, `inspect` tools | Verified | -| **P1** | **[Proof Core](ROADMAP/proof-core/README.md)** | Determinism claims _without_ Time Travel | In Progress | -| **P1** | **[Time Semantics Lock](ROADMAP/time-semantics-lock/README.md)** | Frozen Time Spec (Doc only) | Planned | -| **P2** | **[First Light](ROADMAP/first-light/README.md)** | Browser Demo (Website) | Planned | -| **P3** | **[Time Travel](ROADMAP/time-travel/README.md)** | Inspector & Rewind Tooling | Planned | -| **P3** | **[Proof Time Convergence](ROADMAP/proof-time-convergence/README.md)** | Worldline Convergence | Planned | -| **P3** | **[Splash Guy](ROADMAP/splash-guy/README.md)** | Game Demo 1 | Planned | -| **P3** | **[Tumble Tower](ROADMAP/tumble-tower/README.md)** | Game Demo 2 | Planned | -| **P3** | **[Deep Storage](ROADMAP/deep-storage/README.md)** | Disk Tier / GC | Planned | - -## Milestone Directories - -- `docs/ROADMAP/lock-the-hashes/` -- `docs/ROADMAP/developer-cli/` -- `docs/ROADMAP/first-light/` -- `docs/ROADMAP/proof-core/` -- `docs/ROADMAP/time-semantics-lock/` -- `docs/ROADMAP/time-travel/` -- `docs/ROADMAP/proof-time-convergence/` -- `docs/ROADMAP/splash-guy/` -- `docs/ROADMAP/tumble-tower/` -- `docs/ROADMAP/deep-storage/` -- `docs/ROADMAP/backlog/` - -## 
Cross-Project Notes - -- **Proof Core gates First Light**: determinism claims must be proven before demoing the engine publicly. -- Wesley work is grouped into **First Light** because it is upstream of the website demo deliverable. -- git-mind NEXUS is moved to **Backlog** because it is independent of Echo's critical path. -- Proof work is split into **Proof Core** (P1) and **Proof Time Convergence** (P3) to avoid false blocking. - -## Issue Matrix - -Issue coverage is maintained in `docs/ROADMAP/ISSUE-INDEX.md`. +## Milestones + +| Priority | Milestone | Status | Focus | Live Planning Docs | +| -------- | ---------------------- | ------------- | ------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `P0` | Lock the Hashes | `Verified` | Canonical hash vectors, domain separation, benchmark cleanup | [domain-separated-hashes.md](ROADMAP/lock-the-hashes/domain-separated-hashes.md), [benchmarks-cleanup.md](ROADMAP/lock-the-hashes/benchmarks-cleanup.md) | +| `P0` | Developer CLI | `Verified` | Stable `echo verify` / `bench` / `inspect` workflows | [cli-scaffold.md](ROADMAP/developer-cli/cli-scaffold.md), [verify.md](ROADMAP/developer-cli/verify.md), [bench.md](ROADMAP/developer-cli/bench.md), [inspect.md](ROADMAP/developer-cli/inspect.md), [docs-man-pages.md](ROADMAP/developer-cli/docs-man-pages.md) | +| `P1` | Proof Core | `In Progress` | Determinism claims, 
torture harness, trig oracle | [determinism-torture.md](ROADMAP/proof-core/determinism-torture.md), [deterministic-trig.md](ROADMAP/proof-core/deterministic-trig.md), [docs-polish.md](ROADMAP/proof-core/docs-polish.md) | +| `P1` | Time Semantics Lock | `Planned` | Freeze HistoryTime / HostTime / TTL semantics | [time-model-spec.md](ROADMAP/time-semantics-lock/time-model-spec.md) | +| `P2` | First Light | `Planned` | Browser demo, Wesley pipeline, WASM runtime, browser visualization | [wesley-qir-phase-c.md](ROADMAP/first-light/wesley-qir-phase-c.md), [wesley-migration.md](ROADMAP/first-light/wesley-migration.md), [wesley-go-public.md](ROADMAP/first-light/wesley-go-public.md), [echo-wesley-gen-v2.md](ROADMAP/first-light/echo-wesley-gen-v2.md), [sha256-blake3.md](ROADMAP/first-light/sha256-blake3.md), [wasm-runtime.md](ROADMAP/first-light/wasm-runtime.md), [browser-visualization.md](ROADMAP/first-light/browser-visualization.md), [echo-cas-browser.md](ROADMAP/first-light/echo-cas-browser.md), [wesley-type-pipeline-browser.md](ROADMAP/first-light/wesley-type-pipeline-browser.md) | +| `P3` | Time Travel | `Planned` | Inspector visibility, replay, worldline comparison | [streams-inspector.md](ROADMAP/time-travel/streams-inspector.md), [time-travel-mvp.md](ROADMAP/time-travel/time-travel-mvp.md), [rulial-diff.md](ROADMAP/time-travel/rulial-diff.md) | +| `P3` | Proof Time Convergence | `Planned` | Worldline convergence suite | [worldline-convergence.md](ROADMAP/proof-time-convergence/worldline-convergence.md) | +| `P3` | Splash Guy | `Planned` | Deterministic networking-first game demo | [rules-and-state.md](ROADMAP/splash-guy/rules-and-state.md), [lockstep-protocol.md](ROADMAP/splash-guy/lockstep-protocol.md), [controlled-desync.md](ROADMAP/splash-guy/controlled-desync.md), [visualization.md](ROADMAP/splash-guy/visualization.md), [course-material.md](ROADMAP/splash-guy/course-material.md) | +| `P3` | Tumble Tower | `Planned` | Deterministic physics game demo | 
[stage-0-aabb.md](ROADMAP/tumble-tower/stage-0-aabb.md), [stage-1-rotation.md](ROADMAP/tumble-tower/stage-1-rotation.md), [stage-2-friction.md](ROADMAP/tumble-tower/stage-2-friction.md), [stage-3-sleeping.md](ROADMAP/tumble-tower/stage-3-sleeping.md), [lockstep-harness.md](ROADMAP/tumble-tower/lockstep-harness.md), [desync-breakers.md](ROADMAP/tumble-tower/desync-breakers.md), [visualization.md](ROADMAP/tumble-tower/visualization.md), [course-material.md](ROADMAP/tumble-tower/course-material.md) | +| `P3` | Deep Storage | `Planned` | Disk CAS tier, GC sweep, remote wire protocol | [disk-tier.md](ROADMAP/deep-storage/disk-tier.md), [gc-sweep-eviction.md](ROADMAP/deep-storage/gc-sweep-eviction.md), [wire-protocol.md](ROADMAP/deep-storage/wire-protocol.md), [api-evolution.md](ROADMAP/deep-storage/api-evolution.md) | + +## Backlog + +Unscheduled work that is real but off the critical path: + +- [tooling-misc.md](ROADMAP/backlog/tooling-misc.md) +- [security.md](ROADMAP/backlog/security.md) +- [plugin-abi.md](ROADMAP/backlog/plugin-abi.md) +- [signing-pipeline.md](ROADMAP/backlog/signing-pipeline.md) +- [editor-hot-reload.md](ROADMAP/backlog/editor-hot-reload.md) +- [importer.md](ROADMAP/backlog/importer.md) +- [deterministic-rhai.md](ROADMAP/backlog/deterministic-rhai.md) +- [wesley-boundary-grammar.md](ROADMAP/backlog/wesley-boundary-grammar.md) +- [wesley-docs.md](ROADMAP/backlog/wesley-docs.md) +- [wesley-future.md](ROADMAP/backlog/wesley-future.md) +- [ttd-hardening.md](ROADMAP/backlog/ttd-hardening.md) +- [git-mind-nexus.md](ROADMAP/backlog/git-mind-nexus.md) + +## Notes + +- Proof Core gates First Light. +- Time Semantics Lock gates Time Travel. +- Time Travel plus Proof Core gate Proof Time Convergence. +- First Light gates Splash Guy, Tumble Tower, and Deep Storage. 
diff --git a/docs/ROADMAP/ISSUE-INDEX.md b/docs/ROADMAP/ISSUE-INDEX.md deleted file mode 100644 index 0e465882..00000000 --- a/docs/ROADMAP/ISSUE-INDEX.md +++ /dev/null @@ -1,75 +0,0 @@ - - - -# Issue Coverage Index - -This index maps tracked GitHub issues (open and carry-forward references) to roadmap tasks and feature files. - -| Issue | Title | Task(s) | Feature File | -| ----: | --------------------------------------------------- | -------------------- | -------------------------------------------------------------------------------------------------- | -| #20 | Spec: Commit/Manifest Signing | T-10-2-1 | [backlog/security.md](backlog/security.md) | -| #21 | Spec: Security Contexts (FFI/WASM/CLI) | T-10-2-2 | [backlog/security.md](backlog/security.md) | -| #22 | Benchmarks & CI Regression Gates | T-1-2-1 | [lock-the-hashes/benchmarks-cleanup.md](lock-the-hashes/benchmarks-cleanup.md) | -| #23 | CLI: verify/bench/inspect (umbrella) | F6.\* | [developer-cli/](developer-cli/README.md) | -| #24 | Editor Hot-Reload (spec + impl) | T-10-4-3 | [backlog/editor-hot-reload.md](backlog/editor-hot-reload.md) | -| #25 | Importer: TurtlGraph -> Echo store | T-10-5-1 | [backlog/importer.md](backlog/importer.md) | -| #26 | Plugin ABI (C) v0 (umbrella) | F10.1.\* | [backlog/plugin-abi.md](backlog/plugin-abi.md) | -| #33 | CI: sign release artifacts (dry run) | T-10-3-2 | [backlog/signing-pipeline.md](backlog/signing-pipeline.md) | -| #34 | CLI verify path | T-10-3-3 | [backlog/signing-pipeline.md](backlog/signing-pipeline.md) | -| #35 | Key management doc | T-10-3-1 | [backlog/signing-pipeline.md](backlog/signing-pipeline.md) | -| #36 | CI: verify signatures | T-10-3-4 | [backlog/signing-pipeline.md](backlog/signing-pipeline.md) | -| #38 | FFI limits and validation | T-10-2-3 | [backlog/security.md](backlog/security.md) | -| #41 | README+docs (defaults & toggles) | T-9-4-1 | [proof-core/docs-polish.md](proof-core/docs-polish.md) | -| #47 | Scaffold CLI subcommands | T-6-1-1 | 
[developer-cli/cli-scaffold.md](developer-cli/cli-scaffold.md) | -| #48 | Implement verify | T-6-2-1 | [developer-cli/verify.md](developer-cli/verify.md) | -| #49 | Implement bench | T-6-3-1 | [developer-cli/bench.md](developer-cli/bench.md) | -| #50 | Implement inspect | T-6-4-1 | [developer-cli/inspect.md](developer-cli/inspect.md) | -| #51 | Docs/man pages | T-6-5-1 | [developer-cli/docs-man-pages.md](developer-cli/docs-man-pages.md) | -| #75 | Draft hot-reload spec | T-10-4-1 | [backlog/editor-hot-reload.md](backlog/editor-hot-reload.md) | -| #76 | File watcher/debounce | T-10-4-2 | [backlog/editor-hot-reload.md](backlog/editor-hot-reload.md) | -| #79 | Docs/logging | T-10-8-1 | [backlog/tooling-misc.md](backlog/tooling-misc.md) | -| #85 | Draft C ABI spec | T-10-1-1 | [backlog/plugin-abi.md](backlog/plugin-abi.md) | -| #86 | C header + host loader | T-10-1-2 | [backlog/plugin-abi.md](backlog/plugin-abi.md) | -| #87 | Version negotiation | T-10-1-3 | [backlog/plugin-abi.md](backlog/plugin-abi.md) | -| #88 | Capability tokens | T-10-1-4 | [backlog/plugin-abi.md](backlog/plugin-abi.md) | -| #89 | Example plugin + tests | T-10-1-5 | [backlog/plugin-abi.md](backlog/plugin-abi.md) | -| #170 | TT1: StreamsFrame inspector support | T-7-2-5 | [time-travel/streams-inspector.md](time-travel/streams-inspector.md) | -| #171 | TT2: Time Travel MVP | T-7-3-1, T-7-3-2 | [time-travel/time-travel-mvp.md](time-travel/time-travel-mvp.md) | -| #172 | TT3: Rulial diff / worldline compare | T-7-4-1 | [time-travel/rulial-diff.md](time-travel/rulial-diff.md) | -| #173 | S1: Deterministic Rhai surface | T-10-6-1a, T-10-6-1b | [backlog/deterministic-rhai.md](backlog/deterministic-rhai.md) | -| #174 | W1: Wesley boundary grammar | T-10-7-1 | [backlog/wesley-boundary-grammar.md](backlog/wesley-boundary-grammar.md) | -| #177 | Deterministic trig oracle (carry-forward reference) | T-9-3-1 | [proof-core/deterministic-trig.md](proof-core/deterministic-trig.md) | -| #185 | M1: Domain-separated 
hash contexts (core) | T-1-1-1 | [lock-the-hashes/domain-separated-hashes.md](lock-the-hashes/domain-separated-hashes.md) | -| #186 | M1: Domain-separated digest (RenderGraph) | T-1-1-2 | [lock-the-hashes/domain-separated-hashes.md](lock-the-hashes/domain-separated-hashes.md) | -| #187 | M4: Worldline convergence suite | T-9-2-1, T-9-2-2 | [proof-time-convergence/worldline-convergence.md](proof-time-convergence/worldline-convergence.md) | -| #190 | M4: Determinism torture harness | T-9-1-1, T-9-1-2 | [proof-core/determinism-torture.md](proof-core/determinism-torture.md) | -| #191 | TT0: Session stream time fields | T-7-1-1 | [time-semantics-lock/time-model-spec.md](time-semantics-lock/time-model-spec.md) | -| #192 | TT0: TTL/deadline semantics | T-7-1-2 | [time-semantics-lock/time-model-spec.md](time-semantics-lock/time-model-spec.md) | -| #193 | W1: Schema hash chain pinning | T-10-7-2 | [backlog/wesley-boundary-grammar.md](backlog/wesley-boundary-grammar.md) | -| #194 | W1: SchemaDelta vocabulary | T-10-7-3 | [backlog/wesley-boundary-grammar.md](backlog/wesley-boundary-grammar.md) | -| #195 | JS-ABI packet checksum v2 | T-10-2-4 | [backlog/security.md](backlog/security.md) | -| #198 | W1: Provenance as query semantics | T-10-7-4 | [backlog/wesley-boundary-grammar.md](backlog/wesley-boundary-grammar.md) | -| #199 | TT3: Wesley worldline diff | T-7-4-2 | [time-travel/rulial-diff.md](time-travel/rulial-diff.md) | -| #202 | Spec: Provenance Payload (PP) v1 | T-10-2-5 | [backlog/security.md](backlog/security.md) | -| #203 | TT1: Constraint Lens panel | T-7-2-6 | [time-travel/streams-inspector.md](time-travel/streams-inspector.md) | -| #204 | TT3: Provenance heatmap | T-7-4-3 | [time-travel/rulial-diff.md](time-travel/rulial-diff.md) | -| #205 | TT2: Reliving debugger MVP | T-7-3-2 | [time-travel/time-travel-mvp.md](time-travel/time-travel-mvp.md) | -| #207 | Naming test (noisy-line) | T-10-8-2 | [backlog/tooling-misc.md](backlog/tooling-misc.md) | -| #222 | Splash 
Guy: rules + state model | T-8-1-1 | [splash-guy/rules-and-state.md](splash-guy/rules-and-state.md) | -| #223 | Splash Guy: lockstep protocol | T-8-1-2 | [splash-guy/lockstep-protocol.md](splash-guy/lockstep-protocol.md) | -| #224 | Splash Guy: controlled desync | T-8-1-3 | [splash-guy/controlled-desync.md](splash-guy/controlled-desync.md) | -| #225 | Splash Guy: visualization | T-8-1-4 | [splash-guy/visualization.md](splash-guy/visualization.md) | -| #226 | Splash Guy: docs course | T-8-1-5 | [splash-guy/course-material.md](splash-guy/course-material.md) | -| #231 | Tumble Tower: Stage 0 (AABB) | T-8-2-1 | [tumble-tower/stage-0-aabb.md](tumble-tower/stage-0-aabb.md) | -| #232 | Tumble Tower: Stage 1 (rotation) | T-8-2-2 | [tumble-tower/stage-1-rotation.md](tumble-tower/stage-1-rotation.md) | -| #233 | Tumble Tower: Stage 2 (friction) | T-8-2-3 | [tumble-tower/stage-2-friction.md](tumble-tower/stage-2-friction.md) | -| #234 | Tumble Tower: Stage 3 (sleeping) | T-8-2-4 | [tumble-tower/stage-3-sleeping.md](tumble-tower/stage-3-sleeping.md) | -| #235 | Tumble Tower: lockstep harness | T-8-2-5 | [tumble-tower/lockstep-harness.md](tumble-tower/lockstep-harness.md) | -| #236 | Tumble Tower: desync breakers | T-8-2-6 | [tumble-tower/desync-breakers.md](tumble-tower/desync-breakers.md) | -| #237 | Tumble Tower: visualization | T-8-2-7 | [tumble-tower/visualization.md](tumble-tower/visualization.md) | -| #238 | Tumble Tower: docs course | T-8-2-8 | [tumble-tower/course-material.md](tumble-tower/course-material.md) | -| #239 | Reliving debugger UX | T-10-8-3 | [backlog/tooling-misc.md](backlog/tooling-misc.md) | -| #243 | TT1: dt policy | T-7-2-1 | [time-travel/streams-inspector.md](time-travel/streams-inspector.md) | -| #244 | TT1: TimeStream retention | T-7-2-2 | [time-travel/streams-inspector.md](time-travel/streams-inspector.md) | -| #245 | TT1: Merge semantics | T-7-2-3 | [time-travel/streams-inspector.md](time-travel/streams-inspector.md) | -| #246 | TT1: 
Security/capabilities | T-7-2-4 | [time-travel/streams-inspector.md](time-travel/streams-inspector.md) | diff --git a/docs/ROADMAP/STATUS_DEFINITIONS.md b/docs/ROADMAP/STATUS_DEFINITIONS.md deleted file mode 100644 index 3553ec18..00000000 --- a/docs/ROADMAP/STATUS_DEFINITIONS.md +++ /dev/null @@ -1,25 +0,0 @@ - - - -# Roadmap Status Definitions - -This document defines the lifecycle states for milestones and features in the Echo roadmap. - -## Status Hierarchy - -| State | Definition | -| :----------------- | :-------------------------------------------------------------------------------------------- | -| **Planned** | Item is scheduled but work has not yet begun. | -| **In Progress** | Active development is underway. | -| **Pending Review** | Implementation is complete; awaiting PR review and merge. | -| **Verified** | Work is merged to `main`, and all binary exit criteria/DoD items are satisfied and evidenced. | -| **Archived** | Item is complete and has been superseded or moved to a long-term maintenance state. | - -## Verification Requirements - -An item cannot transition to **Verified** until: - -1. All linked PRs are merged. -2. CI is green for the merge commit. -3. All Definition of Done (DoD) checkboxes are checked. -4. Explicit evidence (PR links, audit comments, workflow runs) is recorded in the document. diff --git a/docs/ROADMAP/backlog/README.md b/docs/ROADMAP/backlog/README.md deleted file mode 100644 index e9320ba1..00000000 --- a/docs/ROADMAP/backlog/README.md +++ /dev/null @@ -1,25 +0,0 @@ - - - -# Backlog - -> **Priority:** Unscheduled | **Est:** ~245h - -Unscheduled work across all projects. Items here have no committed timeline and can be picked up opportunistically. git-mind NEXUS (formerly its own milestone) has been demoted here because it runs independently of Echo's critical path. - -## Features - -| Feature | File | Est. 
| Status | -| ----------------------- | -------------------------------------------------------- | ---- | ----------- | -| git-mind NEXUS | [git-mind-nexus.md](git-mind-nexus.md) | ~36h | Not Started | -| Plugin ABI | [plugin-abi.md](plugin-abi.md) | ~25h | Not Started | -| Security | [security.md](security.md) | ~23h | Not Started | -| Signing Pipeline | [signing-pipeline.md](signing-pipeline.md) | ~14h | Not Started | -| Editor Hot-Reload | [editor-hot-reload.md](editor-hot-reload.md) | ~14h | Not Started | -| Importer | [importer.md](importer.md) | ~2h | Not Started | -| Deterministic Rhai | [deterministic-rhai.md](deterministic-rhai.md) | ~11h | Not Started | -| Wesley Boundary Grammar | [wesley-boundary-grammar.md](wesley-boundary-grammar.md) | ~20h | Not Started | -| Tooling & Misc | [tooling-misc.md](tooling-misc.md) | ~59h | Not Started | -| Wesley Future | [wesley-future.md](wesley-future.md) | ~12h | Not Started | -| Wesley Docs | [wesley-docs.md](wesley-docs.md) | ~10h | Not Started | -| TTD Hardening | [ttd-hardening.md](ttd-hardening.md) | ~19h | Not Started | diff --git a/docs/ROADMAP/backlog/deterministic-rhai.md b/docs/ROADMAP/backlog/deterministic-rhai.md index f2fc0383..8a7bcc85 100644 --- a/docs/ROADMAP/backlog/deterministic-rhai.md +++ b/docs/ROADMAP/backlog/deterministic-rhai.md @@ -3,7 +3,7 @@ # Deterministic Rhai -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled A sandboxed Rhai scripting surface for simulations where all host access (time, IO, randomness) goes through explicit View/Claim/Effect channels, preserving Echo's determinism guarantees. 
diff --git a/docs/ROADMAP/backlog/editor-hot-reload.md b/docs/ROADMAP/backlog/editor-hot-reload.md index 94e7e423..38576fce 100644 --- a/docs/ROADMAP/backlog/editor-hot-reload.md +++ b/docs/ROADMAP/backlog/editor-hot-reload.md @@ -3,7 +3,7 @@ # Editor Hot-Reload -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled File-watching and hot-reload infrastructure for the editor/dev-server workflow. Enables rapid iteration on simulation schemas and scripts. diff --git a/docs/ROADMAP/backlog/git-mind-nexus.md b/docs/ROADMAP/backlog/git-mind-nexus.md index 8036d17c..735a9706 100644 --- a/docs/ROADMAP/backlog/git-mind-nexus.md +++ b/docs/ROADMAP/backlog/git-mind-nexus.md @@ -3,7 +3,7 @@ # git-mind NEXUS -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled > **Formerly:** MS-3 (demoted — independent of Echo critical path) Cross-repo federation, schema validation, and data exchange for git-mind knowledge graphs. Enables git-mind instances to sync, validate structural constraints, and exchange graph fragments via a portable format. diff --git a/docs/ROADMAP/backlog/importer.md b/docs/ROADMAP/backlog/importer.md index e9c40ff3..2a83a077 100644 --- a/docs/ROADMAP/backlog/importer.md +++ b/docs/ROADMAP/backlog/importer.md @@ -3,7 +3,7 @@ # Importer -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled Umbrella for the TurtlGraph-to-Echo-store importer. All child tasks (#80-84) are closed. This feature needs an audit to determine if the umbrella issue can be closed. 
diff --git a/docs/ROADMAP/backlog/plugin-abi.md b/docs/ROADMAP/backlog/plugin-abi.md index fa4f7107..2342cfb7 100644 --- a/docs/ROADMAP/backlog/plugin-abi.md +++ b/docs/ROADMAP/backlog/plugin-abi.md @@ -3,7 +3,7 @@ # Plugin ABI -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled A C-compatible plugin ABI enabling third-party extensions to hook into the Echo runtime without recompilation. Covers spec, host loader, version negotiation, capability tokens, and a reference plugin. diff --git a/docs/ROADMAP/backlog/security.md b/docs/ROADMAP/backlog/security.md index 16c80832..4ab881b0 100644 --- a/docs/ROADMAP/backlog/security.md +++ b/docs/ROADMAP/backlog/security.md @@ -3,7 +3,7 @@ # Security -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled Specifications and hardening for trust boundaries across FFI, WASM, and CLI surfaces. Includes commit signing specs, security context definitions, FFI validation, packet checksums, and provenance envelopes. diff --git a/docs/ROADMAP/backlog/signing-pipeline.md b/docs/ROADMAP/backlog/signing-pipeline.md index 9107cc36..2319577e 100644 --- a/docs/ROADMAP/backlog/signing-pipeline.md +++ b/docs/ROADMAP/backlog/signing-pipeline.md @@ -3,7 +3,7 @@ # Signing Pipeline -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled CI and CLI support for signing and verifying release artifacts. Depends on the signing spec from F10.2. 
diff --git a/docs/ROADMAP/backlog/tooling-misc.md b/docs/ROADMAP/backlog/tooling-misc.md index 59811ed2..47b2903a 100644 --- a/docs/ROADMAP/backlog/tooling-misc.md +++ b/docs/ROADMAP/backlog/tooling-misc.md @@ -3,7 +3,7 @@ # Tooling & Misc -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled Housekeeping tasks: documentation, logging, naming consistency, and debugger UX design. @@ -470,6 +470,9 @@ branch state after review-fix pushes. reply text or auto-resolve based on heuristics alone - R5: Show enough context (path, author, URL) for a reviewer to confirm the action before mutating GitHub state +- R6: Reconcile current-head code state against GitHub thread state after reply + / resolve actions so outdated-but-unresolved threads are easy to distinguish + from genuinely still-open review debt **Acceptance Criteria:** @@ -480,6 +483,8 @@ branch state after review-fix pushes. - [ ] AC3: One command can resolve chosen thread ids after human confirmation - [ ] AC4: The helper works with the existing `gh`-based workflow - [ ] AC5: Contributor docs explain when to use it and when to reply manually +- [ ] AC6: After replies / resolutions, the helper can recount unresolved + threads and highlight outdated-vs-current review state explicitly **Definition of Done:** @@ -770,3 +775,163 @@ judgment. **Est. Hours:** 2h **Expected Complexity:** ~60 LoC (docs + links) + +--- + +## T-10-8-17: Docs Validation Beyond Markdown + +**User Story:** As a contributor, I want docs validation to cover the real docs +surface, not just Markdown, so that broken static-HTML links and other live-doc +regressions are caught before PR review. 
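The check described in this user story can be sketched as a small repo-local scanner. This is a minimal illustration, not the lane itself: the `docs/public` root, the restriction to `href`/`src` attributes, and the skip rules for external schemes are all assumptions for the sketch.

```python
import pathlib
import sys
from html.parser import HTMLParser


class RefCollector(HTMLParser):
    """Collects href/src attribute values from one HTML document."""

    def __init__(self):
        super().__init__()
        self.refs = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.refs.append(value)


def broken_local_refs(html_path: pathlib.Path, root: pathlib.Path) -> list[str]:
    """Return local href/src targets in html_path that do not exist on disk."""
    parser = RefCollector()
    parser.feed(html_path.read_text(encoding="utf-8"))
    broken = []
    for ref in parser.refs:
        # External, fragment-only, mailto, data, and protocol-relative refs
        # are out of scope for this repo-local check.
        if "://" in ref or ref.startswith(("#", "mailto:", "data:", "//")):
            continue
        ref = ref.split("#")[0].split("?")[0]
        if ref.startswith("/"):
            target = root / ref.lstrip("/")  # root-relative route
        else:
            target = html_path.parent / ref  # page-relative route
        if not target.exists():
            broken.append(ref)
    return broken


def check_tree(root: pathlib.Path) -> int:
    """Scan root recursively (the docs/public/**/*.html shape) and count failures."""
    failures = 0
    for page in sorted(root.rglob("*.html")):
        for ref in broken_local_refs(page, root):
            print(f"{page}: broken local ref {ref}", file=sys.stderr)
            failures += 1
    return failures
```

Run over `docs/public`, a nonzero failure count would fail the lane (AC1); the same entrypoint doubles as the one documented local command (AC3).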
+ +**Requirements:** + +- R1: Expand docs validation so it covers `docs/public/**/*.html` and any other + live non-Markdown docs entrypoints +- R2: Add static-HTML link and asset checks for repo-local routes and + references +- R3: Keep the lane scoped enough that docs-only changes remain fast to verify +- R4: Document exactly which doc surfaces are covered and which are still + intentionally excluded + +**Acceptance Criteria:** + +- [ ] AC1: A broken local route or asset reference in `docs/public/**/*.html` + fails the docs validation lane +- [ ] AC2: Docs validation is no longer effectively Markdown-only +- [ ] AC3: Contributors can run one documented local command to check the + covered docs surfaces, including recursive `docs/public/**/*.html` + coverage +- [ ] AC4: The collision-tour-style regression class is caught before review + +**Definition of Done:** + +- [ ] Code reviewed and merged +- [ ] Tests pass (CI green) +- [ ] Documentation updated (if applicable) + +**Scope:** Validation for live docs surfaces, including Markdown plus static +HTML entrypoints and their local links/assets. +**Out of Scope:** External-link availability checks or full website end-to-end +crawling. + +**Test Plan:** + +- **Goldens:** n/a +- **Failures:** Intentionally break a local static-HTML route and a local asset + link and verify the lane fails +- **Edges:** `file://`-style static docs, generated HTML, root-relative vs + relative links +- **Fuzz/Stress:** n/a + +**Blocked By:** none +**Blocking:** none + +**Est. Hours:** 4h +**Expected Complexity:** ~140 LoC (validation wiring + docs + tests) + +--- + +## T-10-8-18: Implementation-Backed Docs Claims Policy + +**User Story:** As a maintainer, I want contributor guidance and lightweight +checks around strong claims like `bit-exact`, `canonical`, and `deterministic` +so that docs do not overstate what the code actually guarantees. 
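One lightweight shape for such a guard is a line scanner that flags claim words outside code fences for reviewer attention. This is a sketch under assumptions: the claim vocabulary below is an illustrative starting set (the real list would live in the policy doc), and the guard only flags wording for a human to judge, it does not decide whether a claim is backed.

```python
import re

# Assumed starting vocabulary; the docs-claims policy would own the real list.
CLAIM_WORDS = re.compile(r"\b(bit-exact|canonical|deterministic)\b", re.IGNORECASE)


def flag_claims(text: str) -> list[tuple[int, str]]:
    """Return (line_number, claim_word) pairs for prose lines using guarded wording."""
    hits = []
    in_fence = False
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # claim words inside code blocks are not prose claims
            continue
        if in_fence:
            continue
        for match in CLAIM_WORDS.finditer(line):
            hits.append((lineno, match.group(1).lower()))
    return hits
```

Wired into the docs lane, a nonzero hit list on changed files would prompt the reviewer to check the evidence expectation for each flagged word (AC2/AC3).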
+ +**Requirements:** + +- R1: Define a short docs-claims checklist for implementation-backed guarantee + language +- R2: Identify especially sensitive claim words and the evidence expected for + each +- R3: Add a lightweight lint, review checklist, or equivalent guard for the + most failure-prone phrases +- R4: Document where stronger claims belong (specs, claim registers, crate + docs) versus where contributor docs should stay conservative + +**Acceptance Criteria:** + +- [ ] AC1: A contributor-facing checklist exists for strong guarantee wording +- [ ] AC2: The repo has at least one lightweight guard or review rubric for + claim words like `bit-exact`, `canonical`, and `deterministic` +- [ ] AC3: A representative overclaim is caught before PR review +- [ ] AC4: Docs and spec surfaces describe the evidence expectation clearly + +**Definition of Done:** + +- [ ] Code reviewed and merged +- [ ] Tests pass (CI green) +- [ ] Documentation updated (if applicable) + +**Scope:** Docs wording discipline, lightweight guardrails, and contributor +guidance. +**Out of Scope:** Proving every guarantee in the repo or replacing reviewer +judgment with a perfect linter. + +**Test Plan:** + +- **Goldens:** n/a +- **Failures:** Introduce a representative wording overclaim and verify the new + checklist / guard catches it +- **Edges:** Claim appears in roadmap docs, architecture docs, crate docs, or + generated reference pages +- **Fuzz/Stress:** n/a + +**Blocked By:** none +**Blocking:** none + +**Est. Hours:** 3h +**Expected Complexity:** ~80 LoC (docs + guard/checklist) + +--- + +## T-10-8-19: Remove Committed Generated DAG Artifacts + +**User Story:** As a maintainer, I want generated DAG outputs out of the main +docs tree so that the repo keeps source-of-truth inputs, not churn-heavy baked +artifacts. 
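A minimal guard for AC1 could assert that no generated DAG outputs are present in the committed tree. The glob patterns below are hypothetical placeholders; the real list would come from the DAG tooling's own configuration, and only the source inputs (e.g. `tasks-dag-source.md`) would be expected to survive the match.

```python
import pathlib

# Hypothetical patterns for generated outputs; the DAG tooling config would own these.
GENERATED_DAG_GLOBS = ["docs/**/tasks-dag*.html", "docs/**/tasks-dag*.svg"]


def committed_generated_artifacts(root: pathlib.Path) -> list[pathlib.Path]:
    """Return generated DAG outputs found in the committed tree under root.

    An empty list means only canonical inputs and generation entrypoints
    are checked in; any hit is a regression against this task.
    """
    found = []
    for pattern in GENERATED_DAG_GLOBS:
        found.extend(sorted(root.glob(pattern)))
    return found
```

CI could run this after checkout and fail on a nonempty result, while a separate on-demand step regenerates the views and uploads them as workflow artifacts (AC2/AC3).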
+ +**Requirements:** + +- R1: Identify which DAG outputs are generated and should no longer live as + committed source files +- R2: Keep only the canonical DAG inputs and generation entrypoints in the repo +- R3: Move generated DAG viewing/sharing to on-demand generation, CI artifacts, + or another explicit publication path +- R4: Update docs and validation so they no longer assume committed generated + DAG outputs are the truth + +**Acceptance Criteria:** + +- [ ] AC1: Generated DAG artifacts are removed from the committed live docs + surface +- [ ] AC2: Contributors still have one documented way to generate or inspect + the DAG outputs when needed +- [ ] AC3: CI or release workflow still has a clear path for sharing generated + DAG views when useful +- [ ] AC4: Docs validation no longer depends on stale committed DAG outputs + +**Definition of Done:** + +- [ ] Code reviewed and merged +- [ ] Tests pass (CI green) +- [ ] Documentation updated (if applicable) + +**Scope:** Generated dependency/task DAG artifacts and their publication path. +**Out of Scope:** Removing the underlying DAG sources or DAG generation logic +entirely. + +**Test Plan:** + +- **Goldens:** n/a +- **Failures:** Remove a generated artifact and verify the documented + generation/view path still works +- **Edges:** Offline local viewing, CI artifact upload, docs links that used to + target committed outputs +- **Fuzz/Stress:** n/a + +**Blocked By:** none +**Blocking:** none + +**Est. 
Hours:** 4h +**Expected Complexity:** ~120 LoC (docs + workflow/tooling cleanup) diff --git a/docs/ROADMAP/backlog/ttd-hardening.md b/docs/ROADMAP/backlog/ttd-hardening.md index 5ff69d64..24098d72 100644 --- a/docs/ROADMAP/backlog/ttd-hardening.md +++ b/docs/ROADMAP/backlog/ttd-hardening.md @@ -3,7 +3,7 @@ # TTD Hardening & Future -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled Post-merge improvements for Time Travel Debugging (TTD) and the Scene Port boundary. Focuses on robustness, performance, and causal observability. diff --git a/docs/ROADMAP/backlog/wesley-boundary-grammar.md b/docs/ROADMAP/backlog/wesley-boundary-grammar.md index 23dafbc3..901db6b8 100644 --- a/docs/ROADMAP/backlog/wesley-boundary-grammar.md +++ b/docs/ROADMAP/backlog/wesley-boundary-grammar.md @@ -3,7 +3,7 @@ # Wesley Boundary Grammar -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled Remaining work on Wesley as a boundary grammar — canonical AST, schema hashing, schema evolution vocabulary, and provenance query semantics. These are foundational to the Phase 2 roadmap. diff --git a/docs/ROADMAP/backlog/wesley-docs.md b/docs/ROADMAP/backlog/wesley-docs.md index 01fab04f..62ff61d6 100644 --- a/docs/ROADMAP/backlog/wesley-docs.md +++ b/docs/ROADMAP/backlog/wesley-docs.md @@ -3,7 +3,7 @@ # Wesley Docs -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled Wesley-repo documentation consolidation. Recorded here for cross-project tracking. 
diff --git a/docs/ROADMAP/backlog/wesley-future.md b/docs/ROADMAP/backlog/wesley-future.md index c1a52335..6ff652fb 100644 --- a/docs/ROADMAP/backlog/wesley-future.md +++ b/docs/ROADMAP/backlog/wesley-future.md @@ -3,7 +3,7 @@ # Wesley Future -> **Milestone:** [Backlog](README.md) | **Priority:** Unscheduled +> **Milestone:** [Backlog](../../ROADMAP.md) | **Priority:** Unscheduled Long-horizon Wesley enhancements tracked at the feature level. These live in the Wesley repo and are recorded here for cross-project visibility. diff --git a/docs/ROADMAP/deep-storage/README.md b/docs/ROADMAP/deep-storage/README.md deleted file mode 100644 index 0f345053..00000000 --- a/docs/ROADMAP/deep-storage/README.md +++ /dev/null @@ -1,26 +0,0 @@ - - - -# Deep Storage - -> **Priority:** P3 | **Status:** Planned | **Est:** ~45h - -echo-cas beyond MemoryTier. DiskTier, GC sweep, wire protocol, and API evolution. - -**Blocked By:** First Light - -## Exit Criteria - -- [ ] DiskTier read/write passing -- [ ] GC sweep evicts cold blobs without data loss -- [ ] Wire protocol enables remote CAS operations -- [ ] API backward-compatible with MemoryTier consumers - -## Features - -| Feature | File | Est. 
| Status | -| ------------------- | -------------------------------------------- | ---- | ----------- | -| DiskTier | [disk-tier.md](disk-tier.md) | ~11h | Not Started | -| GC Sweep & Eviction | [gc-sweep-eviction.md](gc-sweep-eviction.md) | ~11h | Not Started | -| Wire Protocol | [wire-protocol.md](wire-protocol.md) | ~11h | Not Started | -| API Evolution | [api-evolution.md](api-evolution.md) | ~13h | Not Started | diff --git a/docs/ROADMAP/deep-storage/api-evolution.md b/docs/ROADMAP/deep-storage/api-evolution.md index 31fc0819..34517f1d 100644 --- a/docs/ROADMAP/deep-storage/api-evolution.md +++ b/docs/ROADMAP/deep-storage/api-evolution.md @@ -1,7 +1,7 @@ -> **Milestone:** [Deep Storage](README.md) | **Priority:** P2 +> **Milestone:** [Deep Storage](../../ROADMAP.md) | **Priority:** P2 # API Evolution diff --git a/docs/ROADMAP/deep-storage/disk-tier.md b/docs/ROADMAP/deep-storage/disk-tier.md index 968761a3..aa266fe7 100644 --- a/docs/ROADMAP/deep-storage/disk-tier.md +++ b/docs/ROADMAP/deep-storage/disk-tier.md @@ -1,7 +1,7 @@ -> **Milestone:** [Deep Storage](README.md) | **Priority:** P2 +> **Milestone:** [Deep Storage](../../ROADMAP.md) | **Priority:** P2 # DiskTier diff --git a/docs/ROADMAP/deep-storage/gc-sweep-eviction.md b/docs/ROADMAP/deep-storage/gc-sweep-eviction.md index e80537ce..da9475e0 100644 --- a/docs/ROADMAP/deep-storage/gc-sweep-eviction.md +++ b/docs/ROADMAP/deep-storage/gc-sweep-eviction.md @@ -1,7 +1,7 @@ -> **Milestone:** [Deep Storage](README.md) | **Priority:** P2 +> **Milestone:** [Deep Storage](../../ROADMAP.md) | **Priority:** P2 # GC Sweep & Eviction diff --git a/docs/ROADMAP/deep-storage/wire-protocol.md b/docs/ROADMAP/deep-storage/wire-protocol.md index d9aef8ce..204656b3 100644 --- a/docs/ROADMAP/deep-storage/wire-protocol.md +++ b/docs/ROADMAP/deep-storage/wire-protocol.md @@ -1,7 +1,7 @@ -> **Milestone:** [Deep Storage](README.md) | **Priority:** P2 +> **Milestone:** [Deep Storage](../../ROADMAP.md) | **Priority:** P2 # Wire 
Protocol diff --git a/docs/ROADMAP/developer-cli/README.md b/docs/ROADMAP/developer-cli/README.md deleted file mode 100644 index 7bc74bcc..00000000 --- a/docs/ROADMAP/developer-cli/README.md +++ /dev/null @@ -1,29 +0,0 @@ - - - -# Developer CLI - -> **Priority:** P0 | **Status:** Verified (2026-03-06) | **Est:** ~30h -> **Evidence:** PR [#288](https://github.com/flyingrobots/echo/pull/288), PR [#290](https://github.com/flyingrobots/echo/pull/290) - -Ship stable `echo-cli` developer workflows (`verify`, `bench`, `inspect`) with docs and man pages. The CLI provides the primary developer interface for validating simulation determinism, running benchmarks, and inspecting snapshot state from the terminal. - -**Blocked By:** Lock the Hashes - -## Exit Criteria - -- [x] `echo verify` validates simulation determinism from CLI -- [x] `echo bench` runs benchmarks with JSON + human-readable output -- [x] `echo inspect` dumps simulation state for debugging -- [x] Man pages and usage examples committed -- [x] CLI contract documented (stable subcommands, exit codes) - -## Features - -| Feature | File | Est. 
| Status | -| -------------- | -------------------------------------- | ---- | -------- | -| CLI Scaffold | [cli-scaffold.md](cli-scaffold.md) | ~6h | Verified | -| verify | [verify.md](verify.md) | ~5h | Verified | -| bench | [bench.md](bench.md) | ~5h | Verified | -| inspect | [inspect.md](inspect.md) | ~9h | Verified | -| Docs/man pages | [docs-man-pages.md](docs-man-pages.md) | ~5h | Verified | diff --git a/docs/ROADMAP/developer-cli/bench.md b/docs/ROADMAP/developer-cli/bench.md index 053d4490..bca96018 100644 --- a/docs/ROADMAP/developer-cli/bench.md +++ b/docs/ROADMAP/developer-cli/bench.md @@ -1,7 +1,7 @@ -> **Milestone:** [Developer CLI](README.md) | **Priority:** P0 +> **Milestone:** [Developer CLI](../../ROADMAP.md) | **Priority:** P0 # bench (#49) diff --git a/docs/ROADMAP/developer-cli/cli-scaffold.md b/docs/ROADMAP/developer-cli/cli-scaffold.md index 8b080a71..20fcc86e 100644 --- a/docs/ROADMAP/developer-cli/cli-scaffold.md +++ b/docs/ROADMAP/developer-cli/cli-scaffold.md @@ -1,7 +1,7 @@ -> **Milestone:** [Developer CLI](README.md) | **Priority:** P0 +> **Milestone:** [Developer CLI](../../ROADMAP.md) | **Priority:** P0 # CLI Scaffold (#47) diff --git a/docs/ROADMAP/developer-cli/docs-man-pages.md b/docs/ROADMAP/developer-cli/docs-man-pages.md index 093675df..f3d91c7c 100644 --- a/docs/ROADMAP/developer-cli/docs-man-pages.md +++ b/docs/ROADMAP/developer-cli/docs-man-pages.md @@ -1,7 +1,7 @@ -> **Milestone:** [Developer CLI](README.md) | **Priority:** P0 +> **Milestone:** [Developer CLI](../../ROADMAP.md) | **Priority:** P0 # Docs/man pages (#51) diff --git a/docs/ROADMAP/developer-cli/inspect.md b/docs/ROADMAP/developer-cli/inspect.md index 25ca8ea8..70d1f5e4 100644 --- a/docs/ROADMAP/developer-cli/inspect.md +++ b/docs/ROADMAP/developer-cli/inspect.md @@ -1,7 +1,7 @@ -> **Milestone:** [Developer CLI](README.md) | **Priority:** P0 +> **Milestone:** [Developer CLI](../../ROADMAP.md) | **Priority:** P0 # inspect (#50) diff --git 
a/docs/ROADMAP/developer-cli/verify.md b/docs/ROADMAP/developer-cli/verify.md index 76238260..310848c6 100644 --- a/docs/ROADMAP/developer-cli/verify.md +++ b/docs/ROADMAP/developer-cli/verify.md @@ -1,7 +1,7 @@ -> **Milestone:** [Developer CLI](README.md) | **Priority:** P0 +> **Milestone:** [Developer CLI](../../ROADMAP.md) | **Priority:** P0 # verify (#48) diff --git a/docs/ROADMAP/first-light/README.md b/docs/ROADMAP/first-light/README.md deleted file mode 100644 index e05083f1..00000000 --- a/docs/ROADMAP/first-light/README.md +++ /dev/null @@ -1,32 +0,0 @@ - - - -# First Light - -> **Priority:** P2 | **Status:** Planned | **Est:** ~88h - -The crown jewel — TTD (Tick-based Deterministic engine) running in-browser. Every user interaction is a graph rewrite, rendered live. This milestone includes the Wesley pipeline work that feeds the website, the WASM runtime integration, browser visualization, echo-cas browser validation, and Wesley type bridging across JS/WASM. - -**Blocked By:** Proof Core - -## Exit Criteria - -- [ ] Browser demo runs deterministically from a fixed seed -- [ ] WASM build reproducible in CI -- [ ] Render + state sync observable in inspector hooks -- [ ] Wesley-generated types cross JS/WASM boundary without manual glue -- [ ] echo-cas MemoryTier validated under WASM - -## Features - -| Feature | File | Repo | Est. 
| Status | -| ------------------------------- | ------------------------------------------------------------------ | ------ | ---- | ----------- | -| Wesley QIR Phase C | [wesley-qir-phase-c.md](wesley-qir-phase-c.md) | Wesley | ~12h | Not Started | -| Wesley Migration Planning | [wesley-migration.md](wesley-migration.md) | Wesley | ~10h | Not Started | -| Wesley Go Public | [wesley-go-public.md](wesley-go-public.md) | Wesley | ~6h | Not Started | -| echo-wesley-gen v2 Update | [echo-wesley-gen-v2.md](echo-wesley-gen-v2.md) | Echo | ~5h | Not Started | -| SHA-256 to BLAKE3 Coordination | [sha256-blake3.md](sha256-blake3.md) | Shared | ~4h | Not Started | -| WASM Runtime Integration | [wasm-runtime.md](wasm-runtime.md) | Echo | ~16h | Not Started | -| In-Browser Visualization | [browser-visualization.md](browser-visualization.md) | Echo | ~15h | Not Started | -| echo-cas Browser Integration | [echo-cas-browser.md](echo-cas-browser.md) | Echo | ~7h | Not Started | -| Wesley Type Pipeline in Browser | [wesley-type-pipeline-browser.md](wesley-type-pipeline-browser.md) | Shared | ~13h | Not Started | diff --git a/docs/ROADMAP/first-light/browser-visualization.md b/docs/ROADMAP/first-light/browser-visualization.md index 46a96c3b..3c477a00 100644 --- a/docs/ROADMAP/first-light/browser-visualization.md +++ b/docs/ROADMAP/first-light/browser-visualization.md @@ -3,7 +3,7 @@ # In-Browser Visualization -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Echo +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Echo > > **Historical note:** This roadmap item predates the observation-first ABI v2 > and the intent-shaped ABI v3 control-plane rewrite. 
It is retained as diff --git a/docs/ROADMAP/first-light/echo-cas-browser.md b/docs/ROADMAP/first-light/echo-cas-browser.md index 20e2ed62..0c788176 100644 --- a/docs/ROADMAP/first-light/echo-cas-browser.md +++ b/docs/ROADMAP/first-light/echo-cas-browser.md @@ -3,7 +3,7 @@ # echo-cas Browser Integration -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Echo +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Echo Validate and wire MemoryTier in the WASM context. echo-cas Phase 1 provides `MemoryTier` (in-memory, `HashMap`-backed). This feature confirms it compiles to WASM, exposes JS bindings for store/retrieve, and validates blob integrity in-browser. diff --git a/docs/ROADMAP/first-light/echo-wesley-gen-v2.md b/docs/ROADMAP/first-light/echo-wesley-gen-v2.md index bf747d81..ad5b62f6 100644 --- a/docs/ROADMAP/first-light/echo-wesley-gen-v2.md +++ b/docs/ROADMAP/first-light/echo-wesley-gen-v2.md @@ -3,7 +3,7 @@ # echo-wesley-gen v2 Update -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Echo +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Echo Echo-repo work. The `crates/echo-wesley-gen` crate currently consumes `echo-ir/v1` JSON. Update it to handle the `echo-ir/v2` format that Wesley will emit after QIR Phase C, including new fields for query operations and migration metadata. diff --git a/docs/ROADMAP/first-light/sha256-blake3.md b/docs/ROADMAP/first-light/sha256-blake3.md index 2fa609d0..ef9da40d 100644 --- a/docs/ROADMAP/first-light/sha256-blake3.md +++ b/docs/ROADMAP/first-light/sha256-blake3.md @@ -3,7 +3,7 @@ # SHA-256 to BLAKE3 Coordination -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Shared +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Shared Cross-repo planning task. Wesley currently uses SHA-256 for schema hashing. Echo uses BLAKE3. 
This task produces the migration specification, not the implementation. diff --git a/docs/ROADMAP/first-light/wasm-runtime.md b/docs/ROADMAP/first-light/wasm-runtime.md index ff45f693..fb47e679 100644 --- a/docs/ROADMAP/first-light/wasm-runtime.md +++ b/docs/ROADMAP/first-light/wasm-runtime.md @@ -3,7 +3,7 @@ # WASM Runtime Integration -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Echo +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Echo > > **Historical note:** This roadmap item predates the observation-first ABI v2 > and the intent-shaped ABI v3 control-plane rewrite. It is retained as diff --git a/docs/ROADMAP/first-light/wesley-go-public.md b/docs/ROADMAP/first-light/wesley-go-public.md index bde7e9f6..330c2bb5 100644 --- a/docs/ROADMAP/first-light/wesley-go-public.md +++ b/docs/ROADMAP/first-light/wesley-go-public.md @@ -3,7 +3,7 @@ # Wesley Go Public -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Wesley +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Wesley Wesley-repo work. Prepare the Wesley repository for open-source release: README polish, contributor documentation, CI hardening, and legal review. diff --git a/docs/ROADMAP/first-light/wesley-migration.md b/docs/ROADMAP/first-light/wesley-migration.md index 808053f8..3cbe7af0 100644 --- a/docs/ROADMAP/first-light/wesley-migration.md +++ b/docs/ROADMAP/first-light/wesley-migration.md @@ -3,7 +3,7 @@ # Wesley Migration Planning Phase B -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Wesley +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Wesley Wesley-repo work. Extend Wesley's migration system to handle schema evolution with backfill scripts, switch-over plans, and contract-based validation. 
diff --git a/docs/ROADMAP/first-light/wesley-qir-phase-c.md b/docs/ROADMAP/first-light/wesley-qir-phase-c.md index 70fde151..d0b77bb3 100644 --- a/docs/ROADMAP/first-light/wesley-qir-phase-c.md +++ b/docs/ROADMAP/first-light/wesley-qir-phase-c.md @@ -3,7 +3,7 @@ # Wesley QIR Phase C -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Wesley +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Wesley Wesley-repo work. Extend Wesley's Query IR to compile GraphQL operations into executable SQL query plan ASTs. This builds on the existing E0-E4 foundation. diff --git a/docs/ROADMAP/first-light/wesley-type-pipeline-browser.md b/docs/ROADMAP/first-light/wesley-type-pipeline-browser.md index f1922ea2..fd3cc4c7 100644 --- a/docs/ROADMAP/first-light/wesley-type-pipeline-browser.md +++ b/docs/ROADMAP/first-light/wesley-type-pipeline-browser.md @@ -3,7 +3,7 @@ # Wesley Type Pipeline in Browser -> **Milestone:** [First Light](README.md) | **Priority:** P1 | **Repo:** Shared +> **Milestone:** [First Light](../../ROADMAP.md) | **Priority:** P1 | **Repo:** Shared Ensure Wesley-generated types are usable across the JS/WASM boundary. TypeScript types + Zod validators generated from Wesley IR, with a serialization bridge to the WASM Rust side. diff --git a/docs/ROADMAP/lock-the-hashes/README.md b/docs/ROADMAP/lock-the-hashes/README.md deleted file mode 100644 index 01fe58e5..00000000 --- a/docs/ROADMAP/lock-the-hashes/README.md +++ /dev/null @@ -1,26 +0,0 @@ - - - -# Lock the Hashes - -> **Priority:** P0 | **Status:** Verified (2026-02-13) | **Est:** ~20h -> **Evidence:** PR [#265](https://github.com/flyingrobots/echo/pull/265), Audit [Issue #22](https://github.com/flyingrobots/echo/issues/22#issuecomment-3894974740) - -Complete domain-separated hashing and benchmark umbrella close-out to lock deterministic hash foundations. 
The core commitment hashes (`state_root`, `patch_digest`, `commit_id`) and the `RenderGraph` canonical bytes hash previously used bare `Hasher::new()` without domain-separation prefixes; this milestone added unique domain-separation tags to each hash context and audited/closed the benchmarks pipeline umbrella. - -**Blocked By:** none - -## Exit Criteria - -- [x] All domain-separation prefixes defined and applied -- [x] Golden hash vectors updated and committed -- [x] Cross-domain collision tests pass in CI -- [x] Benchmarks umbrella [issue #22](https://github.com/flyingrobots/echo/issues/22) audited and closed -- [x] No open hash-drift issues - -## Features - -| Feature | File | Est. | Status | -| ------------------------------ | -------------------------------------------------------- | ---- | -------- | -| Domain-Separated Hash Contexts | [domain-separated-hashes.md](domain-separated-hashes.md) | ~8h | Verified | -| Benchmarks Pipeline Cleanup | [benchmarks-cleanup.md](benchmarks-cleanup.md) | ~4h | Verified | diff --git a/docs/ROADMAP/lock-the-hashes/benchmarks-cleanup.md b/docs/ROADMAP/lock-the-hashes/benchmarks-cleanup.md index d3bdb7b7..9cb16d4f 100644 --- a/docs/ROADMAP/lock-the-hashes/benchmarks-cleanup.md +++ b/docs/ROADMAP/lock-the-hashes/benchmarks-cleanup.md @@ -1,7 +1,7 @@ -> **Milestone:** [Lock the Hashes](README.md) | **Priority:** P0 +> **Milestone:** [Lock the Hashes](../../ROADMAP.md) | **Priority:** P0 # Benchmarks Pipeline Cleanup diff --git a/docs/ROADMAP/lock-the-hashes/domain-separated-hashes.md b/docs/ROADMAP/lock-the-hashes/domain-separated-hashes.md index a42e8d35..eba437a9 100644 --- a/docs/ROADMAP/lock-the-hashes/domain-separated-hashes.md +++ b/docs/ROADMAP/lock-the-hashes/domain-separated-hashes.md @@ -1,7 +1,7 @@ -> **Milestone:** [Lock the Hashes](README.md) | **Priority:** P0 +> **Milestone:** [Lock the Hashes](../../ROADMAP.md) | **Priority:** P0 # Domain-Separated Hash Contexts diff --git a/docs/ROADMAP/proof-core/README.md 
b/docs/ROADMAP/proof-core/README.md deleted file mode 100644 index 8b19eb79..00000000 --- a/docs/ROADMAP/proof-core/README.md +++ /dev/null @@ -1,26 +0,0 @@ - - - -# Proof Core - -> **Priority:** P1 | **Status:** In Progress | **Est:** ~18h -> **Evidence:** `docs/determinism/DETERMINISM_CLAIMS_v0.1.md`, `testdata/trig_golden_2048.bin` - -Cross-OS determinism proof and trig oracle verification. The deliverable is _Determinism Claims v0.1 (Scope + Evidence + Limits)_. - -**Blocked By:** Lock the Hashes ✅, Developer CLI ✅ - -## Exit Criteria - -- [x] 1-thread vs N-thread determinism harness green across {macOS, Linux} -- [x] Deterministic trig oracle verified against reference values -- [x] "Determinism Claims v0.1" document published (scope + evidence + limits) -- [x] Repro script produces identical receipts/checksums over 100 reruns - -## Features - -| Feature | File | Est. | Status | -| --------------------------- | ------------------------------------------------ | ---- | ----------- | -| Determinism Torture Harness | [determinism-torture.md](determinism-torture.md) | ~10h | Verified | -| Deterministic Trig Oracle | [deterministic-trig.md](deterministic-trig.md) | ~4h | Verified | -| Docs Polish | [docs-polish.md](docs-polish.md) | ~4h | In Progress | diff --git a/docs/ROADMAP/proof-core/determinism-torture.md b/docs/ROADMAP/proof-core/determinism-torture.md index 1f8e2ca5..5424b8c5 100644 --- a/docs/ROADMAP/proof-core/determinism-torture.md +++ b/docs/ROADMAP/proof-core/determinism-torture.md @@ -1,7 +1,7 @@ -> **Milestone:** [Proof Core](README.md) | **Priority:** P1 +> **Milestone:** [Proof Core](../../ROADMAP.md) | **Priority:** P1 # Determinism Torture Harness diff --git a/docs/ROADMAP/proof-core/deterministic-trig.md b/docs/ROADMAP/proof-core/deterministic-trig.md index 0cb9ff0a..20a49226 100644 --- a/docs/ROADMAP/proof-core/deterministic-trig.md +++ b/docs/ROADMAP/proof-core/deterministic-trig.md @@ -1,7 +1,7 @@ -> **Milestone:** [Proof Core](README.md) | 
**Priority:** P1 +> **Milestone:** [Proof Core](../../ROADMAP.md) | **Priority:** P1 # Deterministic Trig Oracle diff --git a/docs/ROADMAP/proof-core/docs-polish.md b/docs/ROADMAP/proof-core/docs-polish.md index 3c7eaa49..3cdff824 100644 --- a/docs/ROADMAP/proof-core/docs-polish.md +++ b/docs/ROADMAP/proof-core/docs-polish.md @@ -1,7 +1,7 @@ -> **Milestone:** [Proof Core](README.md) | **Priority:** P1 +> **Milestone:** [Proof Core](../../ROADMAP.md) | **Priority:** P1 # Docs Polish diff --git a/docs/ROADMAP/proof-time-convergence/README.md b/docs/ROADMAP/proof-time-convergence/README.md deleted file mode 100644 index 956220da..00000000 --- a/docs/ROADMAP/proof-time-convergence/README.md +++ /dev/null @@ -1,21 +0,0 @@ - - - -# Proof Time Convergence - -> **Priority:** P3 | **Status:** Planned | **Est:** ~10h - -Worldline convergence suite proving that multiple execution paths converge on identical state. Depends on both the core proof infrastructure and time travel semantics. - -**Blocked By:** Proof Core, Time Travel - -## Exit Criteria - -- [ ] Worldline convergence suite passes with time travel semantics -- [ ] Multiple execution paths proven to converge on identical state - -## Features - -| Feature | File | Est. 
| Status | -| --------------------------- | ---------------------------------------------------- | ---- | ----------- | -| Worldline Convergence Suite | [worldline-convergence.md](worldline-convergence.md) | ~10h | Not Started | diff --git a/docs/ROADMAP/proof-time-convergence/worldline-convergence.md b/docs/ROADMAP/proof-time-convergence/worldline-convergence.md index e7f7c4da..22e0a1d8 100644 --- a/docs/ROADMAP/proof-time-convergence/worldline-convergence.md +++ b/docs/ROADMAP/proof-time-convergence/worldline-convergence.md @@ -1,7 +1,7 @@ -> **Milestone:** [Proof Time Convergence](README.md) | **Priority:** P2 +> **Milestone:** [Proof Time Convergence](../../ROADMAP.md) | **Priority:** P2 # Worldline Convergence Suite diff --git a/docs/ROADMAP/splash-guy/README.md b/docs/ROADMAP/splash-guy/README.md deleted file mode 100644 index 165ae34d..00000000 --- a/docs/ROADMAP/splash-guy/README.md +++ /dev/null @@ -1,35 +0,0 @@ - - - -# Splash Guy - -> **Priority:** P3 | **Status:** Planned | **Est:** TBD (current skeleton ~28h) - -A grid-based water balloon game demonstrating deterministic rules, lockstep networking, and intentional desync lessons. Two peers exchange inputs, per-tick fingerprints verify determinism. - -This milestone is a **skeleton** — the current feature breakdown is the initial scaffolding. Each feature will expand significantly as game design progresses. - -**Blocked By:** First Light - -**Prerequisites before any code task:** - -- [ ] 1-page Game Design Document (GDD) approved -- [ ] Deterministic state schema defined in Wesley - -## Exit Criteria - -- [ ] GDD approved -- [ ] Deterministic state schema committed -- [ ] Playable 2-player game with lockstep networking -- [ ] Desync detection functional -- [ ] Educational course material published - -## Features - -| Feature | File | Est. 
| Status | -| ------------------- | -------------------------------------------- | ---- | ----------- | -| Rules & State Model | [rules-and-state.md](rules-and-state.md) | ~6h | Not Started | -| Lockstep Protocol | [lockstep-protocol.md](lockstep-protocol.md) | ~6h | Not Started | -| Controlled Desync | [controlled-desync.md](controlled-desync.md) | ~5h | Not Started | -| Visualization | [visualization.md](visualization.md) | ~6h | Not Started | -| Course Material | [course-material.md](course-material.md) | ~5h | Not Started | diff --git a/docs/ROADMAP/splash-guy/controlled-desync.md b/docs/ROADMAP/splash-guy/controlled-desync.md index 3c772b79..fbf9a146 100644 --- a/docs/ROADMAP/splash-guy/controlled-desync.md +++ b/docs/ROADMAP/splash-guy/controlled-desync.md @@ -1,7 +1,7 @@ -> **Milestone:** [Splash Guy](README.md) | **Priority:** P2 +> **Milestone:** [Splash Guy](../../ROADMAP.md) | **Priority:** P2 > > This feature is a skeleton. Tasks will be expanded as the GDD matures. diff --git a/docs/ROADMAP/splash-guy/course-material.md b/docs/ROADMAP/splash-guy/course-material.md index 471d811b..de825877 100644 --- a/docs/ROADMAP/splash-guy/course-material.md +++ b/docs/ROADMAP/splash-guy/course-material.md @@ -1,7 +1,7 @@ -> **Milestone:** [Splash Guy](README.md) | **Priority:** P2 +> **Milestone:** [Splash Guy](../../ROADMAP.md) | **Priority:** P2 > > This feature is a skeleton. Tasks will be expanded as the GDD matures. diff --git a/docs/ROADMAP/splash-guy/lockstep-protocol.md b/docs/ROADMAP/splash-guy/lockstep-protocol.md index 2f1753e1..dbbb7d35 100644 --- a/docs/ROADMAP/splash-guy/lockstep-protocol.md +++ b/docs/ROADMAP/splash-guy/lockstep-protocol.md @@ -1,7 +1,7 @@ -> **Milestone:** [Splash Guy](README.md) | **Priority:** P2 +> **Milestone:** [Splash Guy](../../ROADMAP.md) | **Priority:** P2 > > This feature is a skeleton. Tasks will be expanded as the GDD matures. 
diff --git a/docs/ROADMAP/splash-guy/rules-and-state.md b/docs/ROADMAP/splash-guy/rules-and-state.md index 71458bae..2072e88a 100644 --- a/docs/ROADMAP/splash-guy/rules-and-state.md +++ b/docs/ROADMAP/splash-guy/rules-and-state.md @@ -1,7 +1,7 @@ -> **Milestone:** [Splash Guy](README.md) | **Priority:** P2 +> **Milestone:** [Splash Guy](../../ROADMAP.md) | **Priority:** P2 > > This feature is a skeleton. Tasks will be expanded as the GDD matures. diff --git a/docs/ROADMAP/splash-guy/visualization.md b/docs/ROADMAP/splash-guy/visualization.md index 01665c8e..92845bf5 100644 --- a/docs/ROADMAP/splash-guy/visualization.md +++ b/docs/ROADMAP/splash-guy/visualization.md @@ -1,7 +1,7 @@ -> **Milestone:** [Splash Guy](README.md) | **Priority:** P2 +> **Milestone:** [Splash Guy](../../ROADMAP.md) | **Priority:** P2 > > This feature is a skeleton. Tasks will be expanded as the GDD matures. diff --git a/docs/ROADMAP/time-semantics-lock/README.md b/docs/ROADMAP/time-semantics-lock/README.md deleted file mode 100644 index eb718008..00000000 --- a/docs/ROADMAP/time-semantics-lock/README.md +++ /dev/null @@ -1,22 +0,0 @@ - - - -# Time Semantics Lock - -> **Priority:** P1 | **Status:** Planned | **Est:** ~6h - -Lock the vocabulary and semantics for HistoryTime vs HostTime, tick-based TTL/deadlines, and the StreamAdmissionDecision digest chain. Spec-only — no runtime code. - -**Blocked By:** — - -## Exit Criteria - -- [ ] HistoryTime vs HostTime classification table committed -- [ ] TTL/deadline semantics specified with normative language -- [ ] No TBD entries remaining in spec - -## Features - -| Feature | File | Est. 
| Status | -| -------------------- | ---------------------------------------- | ---- | ----------- | -| Time Model Spec Lock | [time-model-spec.md](time-model-spec.md) | ~6h | Not Started | diff --git a/docs/ROADMAP/time-semantics-lock/time-model-spec.md b/docs/ROADMAP/time-semantics-lock/time-model-spec.md index 23c996d2..aa6c4f79 100644 --- a/docs/ROADMAP/time-semantics-lock/time-model-spec.md +++ b/docs/ROADMAP/time-semantics-lock/time-model-spec.md @@ -1,7 +1,7 @@ -> **Milestone:** [Time Semantics Lock](README.md) | **Priority:** P1 +> **Milestone:** [Time Semantics Lock](../../ROADMAP.md) | **Priority:** P1 # TT0 — Time Model Spec Lock diff --git a/docs/ROADMAP/time-travel/README.md b/docs/ROADMAP/time-travel/README.md deleted file mode 100644 index 927a8aaa..00000000 --- a/docs/ROADMAP/time-travel/README.md +++ /dev/null @@ -1,24 +0,0 @@ - - - -# Time Travel - -> **Priority:** P3 | **Status:** Planned | **Est:** ~56h - -Inspector visibility, time travel MVP, and worldline comparison. Builds on the temporal spec lock. - -**Blocked By:** Time Semantics Lock - -## Exit Criteria - -- [ ] StreamsFrame inspector renders time-indexed simulation state -- [ ] Time travel MVP: rewind/replay to arbitrary tick -- [ ] Rulial diff: compare divergent worldlines visually - -## Features - -| Feature | File | Est. 
| Status | -| ----------------------- | -------------------------------------------- | ---- | ----------- | -| Streams Inspector Frame | [streams-inspector.md](streams-inspector.md) | ~27h | Not Started | -| Time Travel MVP | [time-travel-mvp.md](time-travel-mvp.md) | ~12h | Not Started | -| Rulial Diff | [rulial-diff.md](rulial-diff.md) | ~17h | Not Started | diff --git a/docs/ROADMAP/time-travel/rulial-diff.md b/docs/ROADMAP/time-travel/rulial-diff.md index e28fd351..38b93a3f 100644 --- a/docs/ROADMAP/time-travel/rulial-diff.md +++ b/docs/ROADMAP/time-travel/rulial-diff.md @@ -1,7 +1,7 @@ -> **Milestone:** [Time Travel](README.md) | **Priority:** P2 +> **Milestone:** [Time Travel](../../ROADMAP.md) | **Priority:** P2 # TT3 — Rulial Diff diff --git a/docs/ROADMAP/time-travel/streams-inspector.md b/docs/ROADMAP/time-travel/streams-inspector.md index 031cf0b9..3a4912da 100644 --- a/docs/ROADMAP/time-travel/streams-inspector.md +++ b/docs/ROADMAP/time-travel/streams-inspector.md @@ -1,7 +1,7 @@ -> **Milestone:** [Time Travel](README.md) | **Priority:** P2 +> **Milestone:** [Time Travel](../../ROADMAP.md) | **Priority:** P2 # TT1 — Streams Inspector Frame diff --git a/docs/ROADMAP/time-travel/time-travel-mvp.md b/docs/ROADMAP/time-travel/time-travel-mvp.md index fd1ae4f2..94f26360 100644 --- a/docs/ROADMAP/time-travel/time-travel-mvp.md +++ b/docs/ROADMAP/time-travel/time-travel-mvp.md @@ -1,7 +1,7 @@ -> **Milestone:** [Time Travel](README.md) | **Priority:** P2 +> **Milestone:** [Time Travel](../../ROADMAP.md) | **Priority:** P2 # TT2 — Time Travel MVP diff --git a/docs/ROADMAP/tumble-tower/README.md b/docs/ROADMAP/tumble-tower/README.md deleted file mode 100644 index 86ee7c3d..00000000 --- a/docs/ROADMAP/tumble-tower/README.md +++ /dev/null @@ -1,38 +0,0 @@ - - - -# Tumble Tower - -> **Priority:** P3 | **Status:** Planned | **Est:** TBD (current skeleton ~45h) - -A stacking-blocks physics game demonstrating deterministic rigid-body simulation. 
Progressive complexity: AABB → rotation → friction → sleeping bodies. - -This milestone is a **skeleton** — the current feature breakdown is the initial scaffolding. Each feature will expand significantly as game design progresses. - -**Blocked By:** First Light - -**Prerequisites before any code task:** - -- [ ] 1-page Game Design Document (GDD) approved -- [ ] Deterministic physics state schema defined in Wesley - -## Exit Criteria - -- [ ] GDD approved -- [ ] Deterministic physics state schema committed -- [ ] Playable stacking game with AABB → rotation → friction → sleeping -- [ ] Lockstep harness + desync breaker tests passing -- [ ] Educational course material published - -## Features - -| Feature | File | Est. | Status | -| ----------------- | ------------------------------------------ | ---- | ----------- | -| Stage 0: AABB | [stage-0-aabb.md](stage-0-aabb.md) | ~6h | Not Started | -| Stage 1: Rotation | [stage-1-rotation.md](stage-1-rotation.md) | ~6h | Not Started | -| Stage 2: Friction | [stage-2-friction.md](stage-2-friction.md) | ~5h | Not Started | -| Stage 3: Sleeping | [stage-3-sleeping.md](stage-3-sleeping.md) | ~6h | Not Started | -| Lockstep Harness | [lockstep-harness.md](lockstep-harness.md) | ~5h | Not Started | -| Desync Breakers | [desync-breakers.md](desync-breakers.md) | ~5h | Not Started | -| Visualization | [visualization.md](visualization.md) | ~6h | Not Started | -| Course Material | [course-material.md](course-material.md) | ~6h | Not Started | diff --git a/docs/ROADMAP/tumble-tower/course-material.md b/docs/ROADMAP/tumble-tower/course-material.md index e5ce02c7..96a92173 100644 --- a/docs/ROADMAP/tumble-tower/course-material.md +++ b/docs/ROADMAP/tumble-tower/course-material.md @@ -1,7 +1,7 @@ -> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2 +> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2 > > This feature is a skeleton. Tasks will be expanded as the GDD matures. 
diff --git a/docs/ROADMAP/tumble-tower/desync-breakers.md b/docs/ROADMAP/tumble-tower/desync-breakers.md
index e42f31cc..4e74b103 100644
--- a/docs/ROADMAP/tumble-tower/desync-breakers.md
+++ b/docs/ROADMAP/tumble-tower/desync-breakers.md
@@ -1,7 +1,7 @@
-> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2
+> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2
 >
 > This feature is a skeleton. Tasks will be expanded as the GDD matures.
diff --git a/docs/ROADMAP/tumble-tower/lockstep-harness.md b/docs/ROADMAP/tumble-tower/lockstep-harness.md
index 6f8434b6..cef0cc28 100644
--- a/docs/ROADMAP/tumble-tower/lockstep-harness.md
+++ b/docs/ROADMAP/tumble-tower/lockstep-harness.md
@@ -1,7 +1,7 @@
-> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2
+> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2
 >
 > This feature is a skeleton. Tasks will be expanded as the GDD matures.
diff --git a/docs/ROADMAP/tumble-tower/stage-0-aabb.md b/docs/ROADMAP/tumble-tower/stage-0-aabb.md
index e120f597..edf52ddd 100644
--- a/docs/ROADMAP/tumble-tower/stage-0-aabb.md
+++ b/docs/ROADMAP/tumble-tower/stage-0-aabb.md
@@ -1,7 +1,7 @@
-> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2
+> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2
 >
 > This feature is a skeleton. Tasks will be expanded as the GDD matures.
diff --git a/docs/ROADMAP/tumble-tower/stage-1-rotation.md b/docs/ROADMAP/tumble-tower/stage-1-rotation.md
index 8917a0a0..66f8da04 100644
--- a/docs/ROADMAP/tumble-tower/stage-1-rotation.md
+++ b/docs/ROADMAP/tumble-tower/stage-1-rotation.md
@@ -1,7 +1,7 @@
-> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2
+> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2
 >
 > This feature is a skeleton. Tasks will be expanded as the GDD matures.
diff --git a/docs/ROADMAP/tumble-tower/stage-2-friction.md b/docs/ROADMAP/tumble-tower/stage-2-friction.md
index 55799823..bfdb4af3 100644
--- a/docs/ROADMAP/tumble-tower/stage-2-friction.md
+++ b/docs/ROADMAP/tumble-tower/stage-2-friction.md
@@ -1,7 +1,7 @@
-> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2
+> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2
 >
 > This feature is a skeleton. Tasks will be expanded as the GDD matures.
diff --git a/docs/ROADMAP/tumble-tower/stage-3-sleeping.md b/docs/ROADMAP/tumble-tower/stage-3-sleeping.md
index 0e89292f..538c3063 100644
--- a/docs/ROADMAP/tumble-tower/stage-3-sleeping.md
+++ b/docs/ROADMAP/tumble-tower/stage-3-sleeping.md
@@ -1,7 +1,7 @@
-> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2
+> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2
 >
 > This feature is a skeleton. Tasks will be expanded as the GDD matures.
diff --git a/docs/ROADMAP/tumble-tower/visualization.md b/docs/ROADMAP/tumble-tower/visualization.md
index c824112f..2e87dd9c 100644
--- a/docs/ROADMAP/tumble-tower/visualization.md
+++ b/docs/ROADMAP/tumble-tower/visualization.md
@@ -1,7 +1,7 @@
-> **Milestone:** [Tumble Tower](README.md) | **Priority:** P2
+> **Milestone:** [Tumble Tower](../../ROADMAP.md) | **Priority:** P2
 >
 > This feature is a skeleton. Tasks will be expanded as the GDD matures.
diff --git a/docs/SPEC_DETERMINISTIC_MATH.md b/docs/SPEC_DETERMINISTIC_MATH.md
index 3850b956..22970470 100644
--- a/docs/SPEC_DETERMINISTIC_MATH.md
+++ b/docs/SPEC_DETERMINISTIC_MATH.md
@@ -16,9 +16,7 @@ All math within the simulation loop (`warp-core`) must adhere to these rules.
 > | -------------------------------------------------------------------------------- | -------------------------------------------------- |
 > | [DETERMINISTIC_MATH.md](DETERMINISTIC_MATH.md) | Hazard catalog (IEEE 754 pitfalls and mitigations) |
 > | [warp-math-claims.md](warp-math-claims.md) | Claims and theory framing |
-> | [math-validation-plan.md](math-validation-plan.md) | Validation test plan and CI lanes |
 > | [determinism/DETERMINISM_CLAIMS_v0.1.md](determinism/DETERMINISM_CLAIMS_v0.1.md) | Formal determinism claims |
-> | [spec-deterministic-math.md](spec-deterministic-math.md) | Legacy Phase 0 design sketch (archived) |
 
 ## 1. Floating Point (f32)
diff --git a/docs/architecture-outline.md b/docs/architecture-outline.md
index e39a2c6d..07e6a023 100644
--- a/docs/architecture-outline.md
+++ b/docs/architecture-outline.md
@@ -17,6 +17,22 @@ will lag behind the current Rust-first implementation; prefer WARP specs for the
 > - ⚠️ **Partial** — some aspects exist, others planned
 > - 🗺️ **Planned** — design only, not yet implemented
 
+## What Exists Today
+
+Before the aspirational material below: Echo already has a real deterministic WARP runtime.
+
+- **`warp-core` rewrite engine** ✅: immutable snapshot reads, private deltas, canonical merge, and deterministic scheduling.
+- **Playback / worldlines / provenance** ✅: recorded history, cursor replay, and append-only lineage support.
+- **Renderer / scene boundary** ✅: a deterministic scene port and canonical codec boundary.
+- **TTD / browser tooling substrate** ✅: WASM-first protocol tooling and time-travel debugging infrastructure.
+
+Read the current implementation through these docs first:
+
+- [/spec-warp-core](/spec-warp-core)
+- [/scheduler-warp-core](/scheduler-warp-core)
+- [/spec/SPEC-0004-worldlines-playback-truthbus](/spec/SPEC-0004-worldlines-playback-truthbus)
+- [/warp-two-plane-law](/warp-two-plane-law)
+
 ## Vision
 
 - Reimagine a battle-tested ECS core into **Echo**, a renderer-agnostic spine that survives browsers, native shells, and whatever 2125 invents next.
@@ -192,13 +208,6 @@ TTD is a first-class citizen in Echo, built on top of the provenance and scene p
 - **Security & Sandbox**: Optional restrictions for user-generated content or multiplayer host/client boundaries; capability-based access to ports.
 - **Extensibility**: Plugins define new components, systems, adapters, or editor tools; registration API enforces namespace isolation and version checks.
 
-## Legacy Excavation Log
-
-- **Goal**: Track every legacy file, classify (keep concept, redesign, discard), note dependencies (Mootools, globals, duplicate IDs), and record learnings to inform Echo.
-- **Artifacts**: `docs/meta/legacy-excavation.md` (to be populated) with columns for file, role, verdict, action items, and notes.
-- **Process**: Review file → summarize intent → capture bugs/gaps → map to Echo’s modules → decide migration path or deprecation.
-- **Outcome**: Comprehensive reference that prevents accidental feature loss and keeps the rewrite grounded in historical context.
-
 ## Delivery Roadmap
 
 > **Current Status (2026-01):** Phase 0 is largely complete for `warp-core`. The Rust-first WARP graph rewriting engine is implemented with deterministic scheduling, snapshot hashing, and basic math. ECS storage and system scheduler remain future work.
@@ -239,5 +248,5 @@
 - `/packages/echo-cli` — tooling launcher (future), wraps dev server and inspector.
 - `/packages/echo-adapters` — reference adapters (Pixi/WebGPU, browser input, etc.).
- `/apps/playground` — Vite-driven sandbox for samples and inspector. - - `/docs` — specs, diagrams, memorials (human-facing knowledge base). + - `/docs` — live specs, guides, and operational knowledge. - `/tooling` — shared build scripts, benchmarking harness (future). diff --git a/docs/archive/AGENTS.md b/docs/archive/AGENTS.md deleted file mode 100644 index 373f2505..00000000 --- a/docs/archive/AGENTS.md +++ /dev/null @@ -1,183 +0,0 @@ - - - -# Echo Agent Briefing - -Welcome to the **Echo** project. This file captures expectations for any LLM agent (and future-human collaborator) who touches the repo. - -## Core Principles - -- **Honor the Vision**: Echo is a deterministic, multiverse-aware ECS. Consult `docs/architecture-outline.md` before touching runtime code. -- **Document Ruthlessly**: Every meaningful design choice should land in `docs/` (specs, diagrams, ADRs) or PR descriptions. -- **Docstrings Aren't Optional**: Public APIs across crates (`warp-core`, `warp-wasm`, etc.) must carry rustdoc comments that explain intent, invariants, and usage. Treat missing docs as a failing test. -- **Determinism First**: Avoid introducing sources of nondeterminism without a mitigation plan. -- **Temporal Mindset**: Think in timelines—branching, merging, entropy budgets. Feature work should map to Chronos/Kairos/Aion axes where appropriate. - -## The Drill Sergeant Discipline - -Continuum (formerly JITOS) enforces a high-integrity "Drill Sergeant" discipline for all contributors (human or agent): - -1. **Tests as Spec**: Every `feat:` or `fix:` commit MUST include a "red -> green" test story. If you change source code without changing or adding a test, CI will flag it as a policy violation. -2. **Zero-Warning Tolerance**: All determinism-critical crates (`warp-core`, `echo-wasm-abi`, `echo-scene-port`) are compiled with `RUSTFLAGS="-Dwarnings"`. Unused imports, dead code, or silenced lints are treated as build failures. -3. 
**Determinism Integrity**: We assert inevitability, not just correctness. - - Bit-exact consistency across Rust and JavaScript/Node.js is mandatory for all float-to-int operations. - - Never iterate over `std::collections::HashMap` or `HashSet` in paths that affect the state hash; use `BTreeMap` or sorted iterators. - - Use the DIND (Deterministic Ironclad Nightmare Drills) harness to verify any changes against golden hash chains. -4. **Panic Ban**: Library code must return `Result` or `Option` instead of panicking. `unwrap()` and `expect()` are forbidden in non-test code. - -## Timeline Logging - -- Capture milestones, blockers, and decisions in relevant specs, ADRs, or PR descriptions. -- AGENTS.md and `TASKS-DAG.md` are append-only; see `docs/append-only-invariants.md` plus `scripts/check-append-only.js` for the enforcement plan that CI will run before merges. - -## Agent Context System (2-Tier) - -Agents use a **2-tier context system** to maintain continuity across sessions: - -| Tier | Store | Purpose | Update Frequency | -| ------------- | --------------- | ------------------------------------------ | -------------------------------- | -| **Immediate** | Redis stream | Current task state, branch, blockers | Every significant action | -| **Deep** | Knowledge graph | Architecture decisions, patterns, entities | When learning something reusable | - -### Session Start (Bootstrap) - -1. **Read this file** (`AGENTS.md`) for project conventions - -2. **Check Redis handoff stream**: `echo:agent:handoff` (most recent entry) - - ```text - XRANGE echo:agent:handoff - + COUNT 5 - ``` - -3. 
**Query knowledge graph** for relevant entities: - - ```python - search_nodes("") # e.g., "BOAW", "MaterializationBus" - search_nodes("Echo") # General project context - ``` - -### During Work (Continuous Updates) - -**Redis stream** — Update after every significant action: - -- Completing a task or subtask -- Encountering a blocker -- Making a key decision -- Changing branches or PRs - -```bash -XADD echo:agent:handoff * \ - branch "graph-boaw" \ - status "IN_PROGRESS" \ - summary "Fixing determinism bug in view op emission" \ - current_task "Updating emit_view_op_delta_scoped()" \ - blockers "none" \ - timestamp "" -``` - -**Knowledge graph** — Create/update entities when you: - -- Discover an architectural pattern worth preserving -- Complete a milestone (create `_Phase` entity) -- Fix a non-obvious bug (create `_BugFix` entity) -- Make a decision that future agents should know about - -```json -{ - "name": "BOAW_Determinism_Fix", - "entityType": "BugFix", - "observations": [ - "Root cause: emit_view_op_delta() used delta.len() for view op IDs", - "delta.len() is worker-local and varies by shard claim order", - "Fix: derive op ID from intent scope (NodeId) which is content-addressed" - ] -} -``` - -### Session End (Handoff) - -Before ending a session, **always** write a handoff entry: - -```bash -XADD echo:agent:handoff * \ - branch "" \ - status "" \ - summary "<1-2 sentence summary of what was done>" \ - commits "" \ - next_steps "" \ - blockers "" \ - tech_debt "" \ - test_commands "" \ - timestamp "" -``` - -### Key Entities to Know - -The knowledge graph contains ~300+ entities built by prior agents. 
Key patterns: - -- `Echo Project` — Core project info and current focus -- `_Architecture` — Design decisions for major features -- `_Phase` — Milestone completion records -- `_BugFix` — Non-obvious bug fixes worth remembering -- `_Tech_Debt_P` — Tracked technical debt by priority - -### Why This Matters - -- **Quick tasks**: Redis handoff alone may suffice -- **Complex tasks**: Query knowledge graph for architectural context -- **Debugging**: Search for prior bug fixes in similar areas -- **Decisions**: Check if prior agents already explored an approach - -The 2-tier system means handoffs are seamless—no context is lost between agents, and institutional knowledge accumulates over time. - -## Workflows & Automation - -- The contributor playbook lives in `docs/workflows.md` (policy + blessed commands + automation). -- Preferred repo maintenance entrypoint is `cargo xtask …` (see `xtask/` and `.cargo/config.toml`). -- Planning DAG artifacts live in `docs/assets/dags/` and are documented in `docs/dependency-dags.md`. -- For automated DAG refresh PRs, set `DAG_REFRESH_ISSUE=` as a GitHub Actions variable so the bot PR body includes `Refs #…`. - -## Repository Layout - -- `crates/warp-core`: Runtime core (WARP graph model, materialization bus). -- `apps/playground`: Vite sandbox and inspector (future). -- `docs/`: Specs, diagrams, memorials. -- `docs/notes`: Working notes and explorations (non-authoritative). - -## Working Agreement - -- **Isolated Branches**: Every new task, feature, or bugfix **MUST** begin on a fresh, isolated branch based on the latest `main` (unless context explicitly dictates otherwise). Never mix unrelated objectives on the same branch. -- Keep `main` pristine. Feature work belongs on branches named `echo/` or `timeline/`. -- Tests and benchmarks are mandatory for runtime changes once the harness exists. -- Respect determinism: preferably no random seeds without going through the Echo PRNG. 
-- Run `cargo clippy --all-targets -- -D missing_docs` and `cargo test` before every PR; CI will expect a zero-warning, fully documented surface. - -### PRs & Issues (Linkage Policy) - -- Every PR must be tied to a GitHub Issue. - - If no suitable issue exists, open one before you open the PR. - - Use explicit closing keywords in the PR body: include a line like `Closes #` so the issue auto‑closes on merge. - - Keep PRs single‑purpose: 1 PR = 1 thing. Avoid bundling unrelated changes. -- Branch naming: prefer `echo/` or `timeline/` and include the issue number in the PR title. -- Project hygiene: assign the PR's linked issue to the correct Milestone and Board column (Blocked/Ready/Done) as part of the PR. - -### Git Hooks & Local CI - -- Install repo hooks once with `make hooks` (configures `core.hooksPath`). -- Formatting: pre-commit auto-fixes with `cargo fmt` by default. Set `ECHO_AUTO_FMT=0` to run check-only instead. -- Toolchain: pre-commit verifies your active toolchain matches `rust-toolchain.toml`. -- SPDX header policy (source): every source file must start with exactly: - - `// SPDX-License-Identifier: Apache-2.0` - - `// © James Ross Ω FLYING•ROBOTS ` - Use the repository `.githooks/` installed by `make hooks`; `scripts/hooks/` - are legacy compatibility shims only. Do not add dual-license headers to code. - -## Git Real - -1. **NEVER** use `--force` with any git command. If you think you need it, stop and ask the human for help. -2. **NEVER** use rebase. Embrace messy distributed history; plain merges capture the truth, rebases rewrite it. -3. **NEVER** amend a commit. Make a new commit instead of erasing recorded history. - -In short: no one cares about a tidy commit graph, but everyone cares if you rewrite commits on origin. - -Safe travels in the multiverse. 
diff --git a/docs/archive/ISSUES_MATRIX.md b/docs/archive/ISSUES_MATRIX.md deleted file mode 100644 index 862bcbc6..00000000 --- a/docs/archive/ISSUES_MATRIX.md +++ /dev/null @@ -1,104 +0,0 @@ - - - -# Echo Issues Matrix (Active Plan) - -This table mirrors the current state of active issues in Project 9 with our plan-aligned milestones and relationships. Native GitHub dependencies represent "blocked by"/"blocking"; we no longer use custom text fields for these. The Project board remains the live system of record for status. - -## Managing Issue Dependencies (Blocked By / Blocking) - -Echo uses **native GitHub issue dependencies** to track “blocked by” relationships (not custom text fields). - -Practical note: the GitHub GraphQL API exposes dependency data/events, but dependency _mutation_ is easiest via the **REST API**. In practice, we use `gh api` as the most scriptable interface. - -Reference: GitHub docs “REST API endpoints for issue dependencies” (see `issues/issue-dependencies` in the REST docs). - -### Common `gh api` recipes - -Auth note: `gh api` uses your authenticated GitHub token (via `gh auth login` or `GH_TOKEN` env var). You do not need to manually add an `Authorization:` header unless you are reproducing these requests with another client (like `curl`). - -List dependencies an issue is blocked by: - -```bash -gh api \ - -H "Accept: application/vnd.github+json" \ - -H "X-GitHub-Api-Version: 2022-11-28" \ - repos/flyingrobots/echo/issues//dependencies/blocked_by -``` - -List dependencies an issue is blocking: - -```bash -gh api \ - -H "Accept: application/vnd.github+json" \ - -H "X-GitHub-Api-Version: 2022-11-28" \ - repos/flyingrobots/echo/issues//dependencies/blocking -``` - -Note: the `blocked_by` and `blocking` relationships are inverses. Adding “issue A blocked by issue B” is equivalent to adding “issue B blocking issue A”. Choose the direction that matches your workflow. 
- -Add a “blocked by” dependency (make `` blocked by ``): - -```bash -set -euo pipefail - -# Optional (only needed if you are not already authenticated via `gh auth login` or `GH_TOKEN`): -# -H "Authorization: Bearer " -BLOCKING_ISSUE_ID="$( - gh api \ - -H "Accept: application/vnd.github+json" \ - -H "X-GitHub-Api-Version: 2022-11-28" \ - repos/flyingrobots/echo/issues/ \ - --jq .id -)" || { echo "Failed to fetch blocking issue ID" >&2; exit 1; } - -if [[ -z "$BLOCKING_ISSUE_ID" ]]; then - echo "BLOCKING_ISSUE_ID is empty; verify auth and jq extraction." >&2 - exit 1 -fi - -gh api \ - -X POST \ - -H "Accept: application/vnd.github+json" \ - -H "X-GitHub-Api-Version: 2022-11-28" \ - repos/flyingrobots/echo/issues//dependencies/blocked_by \ - -f issue_id="$BLOCKING_ISSUE_ID" -``` - -Remove a “blocked by” dependency: - -```bash -gh api \ - -X DELETE \ - -H "Accept: application/vnd.github+json" \ - # Optional (only needed if you are not already authenticated via `gh auth login` or `GH_TOKEN`): - # -H "Authorization: Bearer " \ - -H "X-GitHub-Api-Version: 2022-11-28" \ - repos/flyingrobots/echo/issues//dependencies/blocked_by/ -``` - -| Issue Name | Issue # | Milestone | Priority | Estimate | Blocked By | Blocking | Parent | Children | Remarks | -| ----------------------------------------------------------------------------- | ------: | ------------------------------------- | -------- | -------- | ------------------- | ------------------- | ------ | -------------- | ----------------------------------------------------------------- | -| Benchmarks & CI Regression Gates | 22 | M1 – Golden Tests | P1 | 13h+ | #42,#43,#44,#45,#46 | | | 42,43,44,45,46 | Umbrella for perf pipeline (blocked by all children) | -| Create benches crate | 42 | M1 – Golden Tests | P1 | 3h | | #22,#43,#44,#45,#46 | #22 | | Criterion + scaffolding | -| Snapshot hash microbench | 43 | M1 – Golden Tests | P1 | 5h | #42 | #22 | #22 | | Reachable hash microbench | -| Scheduler drain microbench | 44 | M1 
– Golden Tests | P1 | 5h | #42 | #22 | #22 | | Deterministic rule‑order/drain | -| JSON report + CI upload | 45 | M1 – Golden Tests | P2 | 3h | #42 | #22,#46 | #22 | | Upload Criterion JSON | -| Regression thresholds gate | 46 | M1 – Golden Tests | P1 | 8h | #42,#45 | #22 | #22 | | Fail on P50/P95/P99 regress | -| CLI: verify/bench/inspect | 23 | M2.2 – Playground Slice | P2 | 5h | | | | | Grouping placeholder; break down in PRs | -| Scaffold CLI subcommands | 47 | M2.2 – Playground Slice | P2 | 5h | | | | | | -| Implement 'verify' | 48 | M2.2 – Playground Slice | P2 | 5h | | | | | | -| Implement 'bench' | 49 | M2.2 – Playground Slice | P2 | 5h | | | | | | -| Implement 'inspect' | 50 | M2.2 – Playground Slice | P2 | 5h | | | | | | -| Docs/man pages | 51 | M2.2 – Playground Slice | P2 | 5h | | | | | Tie docs to CLI UX | -| README+docs (defaults & toggles) | 41 | M4 – Determinism Proof & Publish 0.1 | P2 | 3h | | | | | Docs polish before 0.1 | -| Deterministic trig: pin error budget + deterministic oracle for audit test | 177 | M4 – Determinism Proof & Publish 0.1 | | | | | | | Cross-OS determinism gate; keep oracle host-independent | -| T2: Embedded tooling UI baseline (Open Props + screenshot regen) | 168 | T2 – Embedded Tooling UI Baseline | | | | | | | Embedded dashboard baseline + Playwright evidence | -| TT0: Time model spec lock (TimeStreams + admission digests) | 166 | TT0 – Time Model Spec Lock | | | | | | | Spec lock for time model primitives (streams/cursors/admission) | -| TT1: StreamsFrame inspector support (backlog + cursors + admission decisions) | 170 | TT1 – Streams Inspector Frame | | | | | | | Inspector scaffolding for stream backlogs and admission decisions | -| TT2: Time Travel MVP (pause/rewind/buffer/catch-up) | 171 | TT2 – Time Travel MVP | | | | | | | Pause/rewind UX + buffering policies | -| TT3: Rulial diff / worldline compare MVP | 172 | TT3 – Rulial Diff / Worldline Compare | | | | | | | Side-by-side run comparison tooling | -| S1: 
Deterministic Rhai surface (sandbox + claims/effects) | 173 | S1 – Deterministic Rhai Surface | | | | | | | Deterministic sandbox boundary for scripts | -| W1: Wesley as a boundary grammar (hashable view artifacts) | 174 | W1 – Wesley as a Boundary Grammar | | | | | | | Hashable grammar + pinned semantics for replay integrity | - -Backlog issues are labeled `backlog` and kept visible in the Project; they will be prioritized into milestones as needed. diff --git a/docs/archive/README.md b/docs/archive/README.md deleted file mode 100644 index fb91aca4..00000000 --- a/docs/archive/README.md +++ /dev/null @@ -1,12 +0,0 @@ - - - -# Archive - -Documents that are **no longer the active source of truth** for any ongoing -work. This includes superseded specs, completed plans, abandoned proposals, and -design notes whose insights have been absorbed into code or canonical docs. - -**Rule of thumb:** if someone would read a doc and ask "should I be doing -something about this?" — it does not belong here. If the answer is "this -already happened" or "we went a different direction" — archive it. diff --git a/docs/archive/ROADMAP/ECHO_ROADMAP.md b/docs/archive/ROADMAP/ECHO_ROADMAP.md deleted file mode 100644 index 54ef72ca..00000000 --- a/docs/archive/ROADMAP/ECHO_ROADMAP.md +++ /dev/null @@ -1,164 +0,0 @@ - - - -# ECHO_ROADMAP — Phased Plan (Post-ADR Alignment) - -## Completed Sprint: TTD-HARDENING-S1 (2026-02-14 to 2026-02-15) - -**Goal:** Formalize the TTD (Time-Travel Determinism) hardening gates and evidence integrity. - -- [x] **G1 (DET):** Multi-platform determinism matrix (macOS/Linux + wasm). -- [x] **G2 (SEC):** Explicit negative test mapping for decoder controls. -- [x] **G3 (PRF):** Criterion baseline + regression threshold for materialization path. -- [x] **G4 (REP):** Enforce artifact-backed VERIFIED claims and path-aware gates. -- [x] **GOV:** Publish release policy and commit-ordered rollback playbooks. 
- ---- - -This roadmap re-syncs active work with recent ADRs: - -- ADR-0003: Causality-first API + MaterializationBus/Port -- ADR-0004: No global state / explicit dependency injection -- ADR-0005: Physics as deterministic scheduled rewrites -- ADR-0006: Ban non-determinism via CI guards - -It also incorporates the latest DIND status from `GEMINI_CONTINUE_NOTES.md`. - ---- - -## Phase 0 — Repo Hygiene & Ownership - -Goal: eliminate structural drift and restore correct ownership boundaries. - -- Move `crates/echo-dind-harness/` to the Echo repo (submodule) where it belongs. - - Remove the crate from this workspace once moved. - - Ensure any references/scripts in this repo point to the Echo submodule path. -- Audit for other Echo-owned crates/docs accidentally mirrored here. -- Update docs to reflect the correct location of DIND tooling. - -Exit criteria: - -- `crates/echo-dind-harness/` no longer exists in this repo. -- A clear pointer exists for where to run DIND locally (Echo repo). - ---- - -## Phase 1 — Determinism Guardrails (ADR-0004 + ADR-0006) - -Goal: codify the “no global state / no nondeterminism” doctrine and enforce it in CI. - -- Add CI scripts: - - `scripts/ban-globals.sh` (ADR-0004) - - `scripts/ban-nondeterminism.sh` and `scripts/ban-unordered-abi.sh` (ADR-0006) -- Wire scripts into CI for core crates (warp-core, warp-wasm, app wasm). -- Add minimal allowlist files (empty by default). -- Document determinism rules in README / doctrine doc. - -Exit criteria: - -- CI fails on banned patterns. -- No global init (`install_*` style) in runtime core. - ---- - -## Phase 2 — Causality-First Boundary (ADR-0003) - -Goal: enforce ingress-only writes and bus-first reads. - -- Define/confirm canonical intent envelopes for ingress (bytes-only). -- Ensure all write paths use ingress; remove any public “direct mutation” APIs. 
-- Implement MaterializationBus + MaterializationPort boundary: - - `view_subscribe`, `view_drain`, `view_replay_last`, `view_unsubscribe` - - channel IDs are byte-based (TypeId-derived), no strings in ABI -- Ensure UI uses materializations rather than direct state reads (except inspector). -- Define InspectorPort as a gated, separate API (optional). - -Exit criteria: - -- No direct mutation path exposed to tools/UI. -- UI can run solely on materialized channels (or has a plan to get there). - ---- - -## Phase 3 — Physics Pipeline (ADR-0005) - -Goal: implement deterministic physics as scheduled rewrites. - -- Implement tick phases: - 1. Integrate (predict) - 2. Candidate generation (broadphase + narrowphase) - 3. Solver iterations with footprint scheduling - 4. Finalize (commit) -- Canonical ordering: - - candidate keys: `(toi_q, min_id, max_id, feature_id)` - - deterministic iteration order for bodies and contacts -- Add optional trace channels for physics (debug materializations). -- Ensure physics outputs only emit post-commit. - -Exit criteria: - -- Physics determinism across wasm/native with fixed seeds and inputs. -- No queue-based “micro-inbox” for derived physics work. - ---- - -## Phase 4 — DIND Mission Continuation (from GEMINI_CONTINUE_NOTES) - -Goal: complete Mission 3 polish and Mission 4 performance envelope. - -### Mission 3 (Polish / Verification) - -- Badge scoping: clarify scope (“PR set”) and platforms. -- Badge truth source: generate from CI artifacts only. -- Matrix audit: confirm explicit aarch64 coverage needs. - -### Mission 4 (Performance Envelope) - -- Add `perf` command to DIND harness: - - `perf --baseline --tolerance 15%` - - track `time_ms`, `steps`, `time_per_step` - - optional: max nodes/edges, allocations -- Add baseline: `testdata/dind/perf_baseline.json` -- CI: - - PR: core scenarios, release build, fail on >15% regression - - Nightly: full suite, upload perf artifacts - -Exit criteria: - -- DIND perf regressions fail CI. 
-- Stable baseline file committed. - ---- - -## Phase 5 — App-Repo Integration (flyingrobots.dev specific) - -Goal: keep app-specific wasm boundary clean and deterministic. - -- Ensure TS encoders are the source of truth for binary protocol. -- Keep WASM as a thin bridge (no placeholder exports). -- Verify handshake matches registry version / codec / schema hash. -- Add or update tests verifying canonical ordering and envelope bytes. - -Exit criteria: - -- ABI tests use TS encoders, not wasm placeholder exports. -- wasm build + vitest pass. - ---- - -## Open Questions / Dependencies - -- Precise target crates for determinism guardrails in this repo vs Echo repo. -- Whether InspectorPort needs to exist in flyingrobots.dev or only in Echo. -- Final home for DIND artifacts: Echo repo or shared tooling repo. - ---- - -## Suggested Execution Order - -1. Phase 0 (move DIND harness) to prevent ownership drift. -2. Phase 1 guardrails to lock determinism. -3. Phase 2 boundary enforcement (ingress + bus). -4. Phase 3 physics pipeline. -5. Phase 4 DIND polish/perf. -6. Phase 5 app integration clean-up. diff --git a/docs/archive/ROLLBACK_TTD.md b/docs/archive/ROLLBACK_TTD.md deleted file mode 100644 index 08d435f4..00000000 --- a/docs/archive/ROLLBACK_TTD.md +++ /dev/null @@ -1,88 +0,0 @@ - - - -# Rollback Playbook — TTD Integration - -## Scope - -> **Note:** Commit SHAs below are pinned to the original TTD integration merge window. Verify against `git log` before executing any rollback. - -Rollback coverage for commit range: - -- Base: `efae3e8` -- Head: `e201c9b` - -## Preconditions - -- Release owner approval logged. -- Current branch state saved/tagged. -- Incident ticket created. - -## Scenario A — Full TTD Rollback - -### Objective (Scenario A) - -Return repository to pre-TTD integration state. - -### Ordered actions - -1. Create rollback branch: - - `rollback/ttd-full-` -2. 
Revert commits in reverse order from head to base+1: - - `e201c9b` - - `fd98b91` - - `ce98d80` - - `a02ea86` - - `3187e6a` - - `6e34a77` - - `f138b8a` - > **Merge commits:** If any listed commit is a merge, use `git revert -m 1 ` to select the first parent as the mainline. -3. Resolve conflicts preserving pre-TTD behavior. - -### Validation Checklist (Scenario A) - -- [ ] `cargo check --workspace` passes -- [ ] Determinism suite for non-TTD core passes -- [ ] Build pipelines pass -- [ ] Smoke test core runtime flows pass - ---- - -## Scenario B — Partial Rollback (FFI/UI layer) - -### Objective (Scenario B) - -Remove unstable FFI/UI integration while preserving core hardening. - -### Candidate revert target(s) - -- `fd98b91` (UI/WASM Integration) -- `ce98d80` (Frontend Restoration) -- optionally `a02ea86` if FFI safety layer must be reverted together - -### Dependency constraints - -- Reverting `a02ea86` may break consumers expecting SessionToken/FFI contracts. -- Validate dependent crates/apps after each revert step. - -### Validation Checklist (Scenario B) - -- [ ] `apps/ttd-app` build status known (pass/fail expected documented) -- [ ] Core codec/scene crates compile and tests pass -- [ ] CI gate summary attached to incident - ---- - -## Post-Rollback Evidence Packet (required) - -- commit SHAs reverted -- CI run IDs -- failing/passing gate delta (before vs after) -- residual risk summary -- recommendation: GO / CONDITIONAL / NO-GO - -### Filing - -- Attach the evidence packet to the incident ticket. -- Link the packet in the rollback PR description. -- Name the artifact `incident--post-rollback-evidence`. diff --git a/docs/archive/aion-papers-bridge.md b/docs/archive/aion-papers-bridge.md deleted file mode 100644 index 7af118c1..00000000 --- a/docs/archive/aion-papers-bridge.md +++ /dev/null @@ -1,231 +0,0 @@ - - - -# AIΩN Foundations → Echo Bridge - -Last reviewed: 2025-12-29. 
- -This doc maps the **AIΩN Foundations series** (“WARP Graphs”, Papers I–VI) onto the **Echo** repository as it exists today. - -Goal: keep the repo’s _implemented_ determinism contracts and its _spec narrative_ aligned with the papers that motivated them. - -## Scope / Sources Read - -For background and public context: - -- AIΩN Framework repo: - -Published paper links (DOIs): - -- Paper I: -- Paper II: -- Paper III: -- Paper IV: -- Papers V–VI: not yet published (as of 2025-12-28). - -Note: the TeX sources used to author Papers I–VI are maintained outside this repo and are intentionally not vendored into Echo. - -## Terminology (WARP vs RMG) - -The AIΩN Foundations papers standardize on **WARP graph** (Worldline Algebra for Recursive Provenance) as the public name for the substrate. - -This Echo repo historically used the older name **RMG** / “recursive metagraph” in crate names, type names, and docs (e.g., `rmg-core`, `RmgFrame`, “RMG viewer”). -Those identifiers have now been mechanically renamed to **WARP** equivalents (e.g., `warp-core`, `WarpFrame`, `warp-viewer`), but you may still see “RMG” in older notes and historical commit messages. - -**Project policy:** - -- Prefer **WARP** terminology in human-facing docs going forward. -- When Echo intentionally deviates from the paper design (for performance, ergonomics, or game-engine concerns), we **must** document the deviation and rationale here (so readers of the papers learn what changed and why). - -Status note: the mechanical rename from `rmg-*` → `warp-*` has landed (crates + the session/tooling surface). -The session wire protocol prefers `warp_*` op strings and `warp_id`, but decoders accept legacy `rmg_*` / `rmg_id` as compatibility aliases. - -## Backlog Mapping (Paper → Echo) - -These tables are intentionally “backlog-driving”: they identify what exists today, what is missing, and where Echo has (or may later choose) a different path. 
- -### Paper I — WARP as the state object - -| Paper I concept | Echo status | Touchpoints (today) | Backlog / next step | Deviation notes | -| -------------------------------------------------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| WARP graph = atom **or** skeleton-with-attachments | Partial (skeleton + typed atom attachments) | `crates/warp-core/src/graph.rs`, `crates/warp-core/src/record.rs`, `crates/warp-core/src/attachment.rs`, `docs/warp-two-plane-law.md` | Stage B1: represent descended attachments via explicit indirection (skeleton-visible links / refs), not “hidden graphs inside bytes” | Echo v0 treats attachments as **typed atoms** (`AtomPayload { type_id, bytes }`) and commits the payload `type_id` into boundary digests. Full “WARPs all the way down” (descended attachments) is not implemented yet by design. | -| Depth / finite unfoldings | Not implemented explicitly | (N/A; conceptual) | If observers/tools need “unfold to depth k”, define a canonical encoding for nested payloads and add tooling helpers | Might stay in the tooling layer, not the core engine. | -| Morphisms / category framing | Not implemented explicitly | (Docs only) | Identify which morphism fragments matter for engine/tooling APIs (likely: stable IDs + isomorphism criteria for hashing) | Echo currently uses hashes + canonical encodings as “practical morphisms”. 
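The deviation row above leans on the `AtomPayload { type_id, bytes }` shape and notes that the payload `type_id` is committed into boundary digests. A minimal sketch of why that matters, with illustrative types: the struct and `toy_digest` are stand-ins, not the actual `warp-core` API, and the hash is a toy FNV fold rather than BLAKE3.

```rust
// Hypothetical sketch: a depth-0 attachment is an opaque typed atom, and its
// type_id is committed into the digest alongside the bytes, so two payloads
// with identical bytes but different codecs cannot collide.

#[derive(Debug, Clone, PartialEq)]
pub struct AtomPayload {
    pub type_id: u32,   // identifies the codec for `bytes`
    pub bytes: Vec<u8>, // opaque payload; never interpreted as hidden edges
}

/// Toy stand-in for the engine's real hash (warp-core uses BLAKE3).
pub fn toy_digest(payload: &AtomPayload) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325; // FNV-1a offset basis
    let mut mix = |b: u8| {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3); // FNV prime
    };
    // Commit the type_id first, then the length-prefixed bytes.
    for b in payload.type_id.to_le_bytes() {
        mix(b);
    }
    for b in (payload.bytes.len() as u64).to_le_bytes() {
        mix(b);
    }
    for &b in &payload.bytes {
        mix(b);
    }
    h
}

fn main() {
    let atom = AtomPayload { type_id: 7, bytes: vec![0xde, 0xad] };
    println!("digest = {:#018x}", toy_digest(&atom));
}
```

Because the `type_id` is folded in before the bytes, swapping a payload's codec changes its boundary identity even when the raw bytes are untouched.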
| - -### Paper II — Deterministic evolution (ticks, independence, receipts) - -| Paper II concept | Echo status | Touchpoints (today) | Backlog / next step | Deviation notes | -| --------------------------------------------------------------- | -------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Tick = atomic commit (all-or-nothing) | Implemented for the spike (`commit` finalizes tx) | `crates/warp-core/src/engine_impl.rs` | Make abort/stutter semantics explicit if/when partial failure exists (currently “reserve rejects conflicts”) | Echo currently models conflicts as “not reserved”; explicit abort receipts are a good future addition. | -| Independence via footprints (delete/use; read/write sets) | Implemented (expanded to nodes/edges/ports + factor mask) | `crates/warp-core/src/footprint.rs`, `crates/warp-core/src/scheduler.rs` | Ensure footprint semantics remain “Paper II compatible” as optimizations land (bitmaps/SIMD, etc.) | Echo adds boundary ports + factor masks for engine practicality; document as an extension of the footprint idea. | -| Deterministic scheduling via total key order (“left-most wins”) | Implemented (deterministic ordering + deterministic reserve filter) | `crates/warp-core/src/scheduler.rs` | Specify the canonical key format (what exactly is “scope”?); keep stable across releases | Echo’s key is currently (`scope_hash`, `rule_id`, `nonce`); may evolve, but must remain deterministic. 
| -| Tick receipts (accepted vs rejected + blocking poset) | Implemented (receipt + blocking attribution; richer reasons pending) | `crates/warp-core/src/receipt.rs`, `crates/warp-core/src/engine_impl.rs`, `docs/spec-merkle-commit.md` | Decide when/if to commit blocking edges into the hash and extend receipts with richer rejection reasons once conflict policy/join semantics land | Receipt captures accepted vs rejected in canonical plan order and records blockers (poset edges) for footprint conflicts; only rejection reason today is footprint conflict. | - -### Paper III — Holography (payloads, BTRs, wormholes) - -| Paper III concept | Echo status | Touchpoints (today) | Backlog / next step | Deviation notes | -| ----------------------------------------------------- | --------------------------------------------------------------- | ------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | -| Boundary encoding `(U0, P)` where `P` is tick patches | Implemented in spirit | `crates/echo-graph` (`Snapshot` + `Diff`), `crates/warp-viewer/src/session_logic.rs` | Decide whether Echo’s “Diff stream” is _the_ canonical tick-patch format (or one of several observers) | Echo’s stream is a practical boundary artifact; may not capture full tick receipts yet. | -| BTR packaging (hash in/out + payload + auth tag) | Partially implemented (hashes + canonical encoding + checksums) | `crates/warp-core/src/snapshot.rs`, `crates/echo-session-proto/src/wire.rs` | Define an explicit “BTR” message/record type for archives and replication (and later signing) | Today: checksums protect packets; signatures are future work. 
| -| Wormholes (compressed multi-tick segments) | Not implemented | (Concept only) | Add “checkpoint/wormhole” support once payloads are structured enough to compress/skip while preserving verification | Might be a tooling/storage feature, not required for realtime gameplay. | -| Prefix forks (content-addressed shared history) | Partially implemented (parents in commit hash) | `crates/warp-core/src/snapshot.rs` | Implement branch storage / addressable worldline families (Chronos/Kairos) | Echo already has parents; higher-level branch mechanics are still in docs. | - -### Paper IV — Observer geometry + Chronos/Kairos/Aion - -| Paper IV concept | Echo status | Touchpoints (today) | Backlog / next step | Deviation notes | -| ------------------------------------- | ----------------------------------------------------- | --------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | -| Chronos/Kairos/Aion triad | Specced; partially embodied (epochs, hashes, streams) | `docs/architecture-outline.md`, `crates/echo-session-service` (`ts`), `crates/echo-graph` (`epoch`) | Implement explicit branch-event (Kairos) and possibility-space (Aion) APIs once branch tree lands | Echo can keep the triad as “design axes” even before full branch tree exists. | -| Observers as projections over history | Embodied as tools/viewers | `crates/warp-viewer`, `docs/book/echo/booklet-05-tools.tex` | Define a “small family of canonical observers” and their guarantees (hash checks, partial views, privacy scopes) | Game tools want fast observers; papers motivate explicit translation costs. 
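The Chronos/Kairos/Aion triad above reads naturally as three type-level axes rather than a theme. A hypothetical sketch of that partitioning; every name here is illustrative, since the branch-event and possibility-space APIs are explicitly not implemented yet.

```rust
// Hypothetical sketch of the three temporal axes as data. None of these
// types exist in warp-core today; they only illustrate the split.

/// Chronos: linear replay time along one fixed worldline (tick index).
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct ChronosTick(pub u64);

/// Kairos: a branch event, the locus where an alternative continuation forks.
#[derive(Debug, Clone, PartialEq)]
pub struct KairosBranch {
    pub at: ChronosTick, // where on the parent worldline the fork occurs
    pub parent: u32,     // parent worldline id (illustrative)
    pub child: u32,      // new worldline id
}

/// Aion: the possibility space spanned by all recorded branch events.
#[derive(Debug, Default)]
pub struct AionSpace {
    pub branches: Vec<KairosBranch>,
}

impl AionSpace {
    /// All worldlines that descend (directly or transitively) from `root`.
    pub fn descendants(&self, root: u32) -> Vec<u32> {
        let mut out = vec![root];
        let mut i = 0;
        while i < out.len() {
            let w = out[i];
            for b in &self.branches {
                if b.parent == w && !out.contains(&b.child) {
                    out.push(b.child);
                }
            }
            i += 1;
        }
        out
    }
}

fn main() {
    let mut space = AionSpace::default();
    space.branches.push(KairosBranch { at: ChronosTick(5), parent: 0, child: 1 });
    space.branches.push(KairosBranch { at: ChronosTick(9), parent: 1, child: 2 });
    println!("{:?}", space.descendants(0));
}
```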
| - -### Paper V — Provenance sovereignty (ethics requirements) - -| Paper V concept | Echo status | Touchpoints (today) | Backlog / next step | Deviation notes | -| ----------------------------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- | -| Capability-scoped observers + due process | Partially specced / stub hooks exist | `crates/echo-session-proto` (handshake `capabilities`), `docs/spec-capabilities-and-security.md` | Evolve the session handshake into a real capability system tied to observer access | Echo can ship “developer mode” first, but must document the intended governance boundary. | - -### Paper VI — JITOS / OS boundary + JS-ABI syscall framing - -| Paper VI concept | Echo status | Touchpoints (today) | Backlog / next step | Deviation notes | -| ---------------------------------------------- | --------------------------------------------------------------- | ------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | -| JS-ABI as stable, language-independent framing | Implemented | `crates/echo-session-proto/src/wire.rs`, `crates/echo-session-proto/src/canonical.rs` | Keep the framing boring and stable; add capability negotiation/versioning as needed | Echo already matches the “boring but essential” framing objective. 
| -| WAL / epochs as temporal backbone | Partially implemented (monotonic `ts`, epoch stream discipline) | `crates/echo-session-service`, `crates/echo-graph` | Define durable WAL / archive format and its relationship to commit hashes and diffs | Echo can treat session streams as “live WAL slices” and add persistence later. | - -## Paper I — WARP as the state object (graphs all the way down) - -**Core idea:** a _WARP graph_ is either an **atom** (`Atom(p)` for opaque payload `p`) or a **finite directed multigraph skeleton** whose vertices and edges carry attached WARPs. - -**Relevance to Echo:** - -- Echo’s “everything is a graph” story is Paper I’s substrate claim. -- Echo’s current engine spike (`warp-core`) implements a _flat_ skeleton graph (`GraphStore`) plus **depth-0 attachments** as typed atoms (`AtomPayload { type_id, bytes }`). -- The WARP “attachments are themselves graphs” concept is **not implemented yet**. Echo’s project law forbids treating payload bytes as “hidden edges”; descended attachments must be represented via explicit, skeleton-visible indirection when implemented (Stage B1). - -**Echo touchpoints:** - -- `crates/warp-core/src/graph.rs` is the current SkeletonGraph implementation. -- `crates/warp-core/src/record.rs` + `crates/warp-core/src/attachment.rs` define node/edge records and depth-0 typed atom attachments. -- `crates/echo-graph` is the canonical _tool/wire_ graph shape. - -## Paper II — Deterministic evolution: ticks, footprints, and receipts - -**Core idea:** define a deterministic, concurrent operational semantics at the level of a **tick**: - -- Within a tick, commit a scheduler-admissible batch (pairwise independent by footprint discipline). -- **Tick confluence:** any serialization order yields the same successor (up to isomorphism). -- Deterministic scheduling comes from a deterministic total order on candidates (“left-most wins”). 
-- Optional **tick receipts** record accepted vs rejected candidates and _why_ (a poset of blocking causality). - -**Relevance to Echo:** - -- Echo’s runtime determinism is largely “Paper II made executable”: - - collect candidate rewrites, - - sort deterministically, - - accept a conflict-free subset, - - apply in deterministic order, - - produce a hash. - -**Echo touchpoints:** - -- Deterministic pending queue + drain ordering: - - `crates/warp-core/src/scheduler.rs` -- Footprints + independence checks: - - `crates/warp-core/src/footprint.rs` -- Transaction lifecycle + plan/rewrites digests: - - `crates/warp-core/src/engine_impl.rs` - -**Notable gap (intentional/expected):** - -- Echo’s current engine exposes “plan_digest” / “rewrites_digest”, and now exposes a minimal tick receipt blocking witness: for rejected candidates, the receipt lists which earlier applied candidates blocked it (indices in canonical plan order). - -## Paper III — Computational holography: provenance payloads, BTRs, wormholes - -**Core idea:** for deterministic worldlines, the interior derivation volume is recoverable from a compact boundary: - -- boundary encoding = `(U0, P)` where `P` is an ordered list of tick patches (payload) -- BTR (Boundary Transition Record) packages boundary hashes + payload for tamper-evidence/audit -- slicing: materialize only the causal cone required for some value -- prefix forks: Git-like branching via shared-prefix dedupe under content addressing -- wormholes: compress multi-tick segments into a single edge carrying a sub-payload - -**Relevance to Echo:** - -- Echo already treats hashing and canonical encoding as “truth checks”. -- Echo’s session pipeline is essentially a practical “boundary stream”: - - full snapshots + gapless diffs (tick patches) with optional state hashes. 
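The snapshot-plus-gapless-diffs discipline can be sketched from the consumer side: reject epoch gaps and hash mismatches instead of guessing. This is a simplified stand-in for the session handling in `warp-viewer`; `Frame`, `Consumer`, and `toy_hash` are illustrative, not the real wire types.

```rust
// Hypothetical sketch of a boundary-stream consumer: frames must be gapless
// by epoch, and any advertised state hash must match a locally recomputed one.

#[derive(Debug, Clone)]
pub struct Frame {
    pub epoch: u64,              // tick index this diff advances the state to
    pub diff: Vec<(u32, i64)>,   // toy "tick patch": (slot, new value)
    pub state_hash: Option<u64>, // optional hash of the post-apply state
}

#[derive(Debug, Default)]
pub struct Consumer {
    pub epoch: u64,
    pub state: std::collections::BTreeMap<u32, i64>, // BTreeMap: stable order
}

impl Consumer {
    /// Deterministic fold over the ordered map (stand-in for a BLAKE3 root).
    fn toy_hash(&self) -> u64 {
        self.state.iter().fold(0u64, |h, (k, v)| {
            h.wrapping_mul(31).wrapping_add(*k as u64).wrapping_add(*v as u64)
        })
    }

    /// Apply a frame; surface gaps and mismatches as errors.
    pub fn apply(&mut self, f: &Frame) -> Result<(), String> {
        if f.epoch != self.epoch + 1 {
            return Err(format!("gap: have {}, got {}", self.epoch, f.epoch));
        }
        for (slot, v) in &f.diff {
            self.state.insert(*slot, *v);
        }
        self.epoch = f.epoch;
        if let Some(expected) = f.state_hash {
            if expected != self.toy_hash() {
                return Err("state hash mismatch".into());
            }
        }
        Ok(())
    }
}

fn main() {
    let mut c = Consumer::default();
    let frame = Frame { epoch: 1, diff: vec![(7, 42)], state_hash: None };
    println!("{:?}", c.apply(&frame));
}
```

The point of the sketch is the failure mode: a consumer that refuses a gapped or mismatched frame is what makes the stream a truth check rather than a best-effort view.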
- -**Echo touchpoints:** - -- `state_root` + `commit_id` and canonical encoding: - - `crates/warp-core/src/snapshot.rs` - - `docs/spec-merkle-commit.md` -- Gapless snapshot/diff semantics + per-frame hash checks: - - `crates/echo-graph` - - `crates/echo-session-proto` - - `crates/echo-session-service` - - `crates/warp-viewer/src/session_logic.rs` - -## Paper IV — Observer geometry, rulial distance, Chronos/Kairos/Aion - -**Core idea:** observers are resource-bounded functors out of history categories; translation cost induces geometry (rulial distance). - -This paper also formalizes the three-layer time model used throughout Echo docs: - -- **Chronos:** linear time of a fixed replay path (tick index) -- **Kairos:** branch events / loci of alternative continuations -- **Aion:** the full possibility space (history category; “Ruliad” as a large disjoint union) - -**Relevance to Echo:** - -- Echo’s Chronos/Kairos/Aion language isn’t “theme”; it’s an architectural partitioning of: - - replay time, - - branching structure, - - and the larger possibility space/tooling surface. - -**Echo touchpoints:** - -- Conceptual: `docs/architecture-outline.md` (temporal axes) -- Practical precursor: hash-checked replay streams (viewer) + deterministic encoding (proto) - -## Paper V — Ethics: provenance sovereignty as a runtime requirement - -**Core idea:** deterministic replay + complete provenance becomes a capability that can be abused; therefore a runtime must treat provenance access as governed. - -Paper V extracts system-level requirements (examples): - -- consent + revocation, -- capability-scoped observers and view access, -- sealing / selective disclosure, -- fork rights (and constraints), -- due-process override protocols. - -**Echo touchpoints:** - -- `docs/spec-capabilities-and-security.md` (security/capability design space) -- Session protocol’s explicit “capabilities” field (`HandshakePayload`) provides a concrete hook to evolve toward scoped observers. 
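The handshake `capabilities` field suggests a deny-by-default gate in front of observer access. A minimal sketch under that assumption; `Session`, `subscribe`, and the scope strings are hypothetical, not `echo-session-proto` types.

```rust
// Hypothetical sketch: an observer subscription is admitted only if the scope
// it asks for was granted at handshake time. Deny-by-default.

#[derive(Debug)]
pub struct Session {
    pub capabilities: Vec<String>, // granted at handshake, e.g. "observe:frames"
}

impl Session {
    fn can(&self, scope: &str) -> bool {
        self.capabilities.iter().any(|c| c == scope)
    }

    /// Gate an observer subscription on a granted capability scope.
    pub fn subscribe(&self, scope: &str) -> Result<(), String> {
        if self.can(scope) {
            Ok(())
        } else {
            Err(format!("capability not granted: {scope}"))
        }
    }
}

fn main() {
    let s = Session { capabilities: vec!["observe:frames".into()] };
    println!("{:?}", s.subscribe("observe:provenance"));
}
```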
- -## Paper VI — JITOS: OS boundary layer and JS-ABI as syscall framing - -**Core idea:** build an OS whose primary artifact is lawful transformations (history), with “state” as a materialized view; introduce SWS (shadow worlds), WAL-backed epochs, deterministic collapse, and JS-ABI as a stable syscall framing. - -**Echo touchpoints:** - -- JS-ABI framing + canonical payload encoding + checksums: - - `crates/echo-session-proto/src/wire.rs` - - `crates/echo-session-proto/src/canonical.rs` -- Session hub as an early “daemon boundary”: - - `crates/echo-session-service` -- Viewer/tooling as early “observer” implementations: - - `crates/warp-viewer` -- Multi-clock time model (streams + admission policies) and wormholes as tick-range compression for fast seek/catch-up: - - `docs/spec-time-streams-and-wormholes.md` - -## Practical Alignment Notes (What to Keep in Sync) - -- Terminology drift: “RMG” vs “WARP” - - Papers use “WARP” as the public substrate name; Echo now uses `warp-*` naming in crates and the session/tooling surface. - - Docs and historical artifacts may still mention "RMG"; keep this note explicit about why/when a deviation exists. -- Empty-digest semantics for commit metadata - - The engine’s canonical empty _length-prefixed list digest_ is `blake3(0u64.to_le_bytes())`. - - Keep docs consistent because this changes commit identity. -- Receipts / traces - - Paper II receipts and Paper III payload/boundary formats are natural next layers over `plan_digest`/`rewrites_digest` and session diffs. diff --git a/docs/archive/branch-merge-playbook.md b/docs/archive/branch-merge-playbook.md deleted file mode 100644 index f695de6a..00000000 --- a/docs/archive/branch-merge-playbook.md +++ /dev/null @@ -1,90 +0,0 @@ - - - -# Branch Merge Conflict Playbook - -Merging timelines is where Echo’s temporal sandbox shines. This playbook defines how we detect, surface, and resolve conflicts when combining branch diffs. - ---- - -## Conflict Types - -1. 
**Component Value Conflict** - - Same entity & component modified differently in both branches. - -2. **Structural Conflict** - - One branch deletes entity/component the other modifies. - -3. **Order Conflict** - - Sequencing-sensitive actions (e.g., timeline events) reordered. - -4. **Resource Conflict** - - Shared resources (inventory counts, singleton states) diverge. - ---- - -## Detection Pipeline - -1. Identify lowest common ancestor node `L`. -2. Collect diffs `Δα` (from `L` to branch α head) and `Δβ` (to branch β head). -3. For each entity/component touched: - - Compare mutation timestamps (relative order from diff metadata). - - If both branches modify same slot → flag conflict. -4. For deletions vs modifications, flag structural conflict. -5. Accumulate conflict records for resolution stage. - -Conflict record structure: - -```ts -interface MergeConflict { - entityId: EntityHandle; - componentType: number | null; // null for entity-level conflict - type: "value" | "structural" | "order" | "resource"; - branchA: DiffEntry; - branchB: DiffEntry; -} -``` - ---- - -## Resolution Strategies - -1. **Manual Selection (Default)** - - Present conflicts in inspector; designer chooses branch (A wins, B wins, custom). - - Record decision for determinism (stored in merge log). - -2. **Policy-Based** - - Rules such as "prefer branch with higher Aion (Echo's per-node timeline weight)" or "prefer lower entropy". - - Configurable via merge options. - -3. **Blend** (future) - - For numeric components, allow interpolation (requires designer script). - -4. **Retry** - - Abort merge, spawn new branch to rework conflicts. - ---- - -## Tooling Flow - -- Merge UI displays conflict list with filters (type, component, branch). -- Each conflict shows diffs side-by-side, include context (timeline notes, metadata). -- Decisions appended to merge log (`MergeDecision[]`) for replay. -- After resolving all conflicts, system applies merged diff sequentially and commits new node. 
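Step 3 of the detection pipeline can be sketched as a pure function over the two diffs collected from `L`. The playbook specs `MergeConflict` in TypeScript; this Rust sketch uses illustrative stand-ins (`DiffEntry`, `Conflict`) to show the value/structural split.

```rust
// Hypothetical sketch of the core comparison: given per-branch diffs from the
// common ancestor L, flag every slot both branches touched.

use std::collections::BTreeMap;

#[derive(Debug, Clone, PartialEq)]
pub enum DiffEntry {
    Set(i64), // component value written
    Delete,   // entity/component removed
}

#[derive(Debug, PartialEq)]
pub enum Conflict {
    Value { slot: u32 },      // both branches wrote different values
    Structural { slot: u32 }, // one deleted what the other modified
}

pub fn detect(a: &BTreeMap<u32, DiffEntry>, b: &BTreeMap<u32, DiffEntry>) -> Vec<Conflict> {
    let mut out = Vec::new();
    for (slot, ea) in a {
        if let Some(eb) = b.get(slot) {
            match (ea, eb) {
                // Identical writes on both sides are not a conflict.
                (DiffEntry::Set(x), DiffEntry::Set(y)) if x != y => {
                    out.push(Conflict::Value { slot: *slot })
                }
                (DiffEntry::Delete, DiffEntry::Set(_))
                | (DiffEntry::Set(_), DiffEntry::Delete) => {
                    out.push(Conflict::Structural { slot: *slot })
                }
                _ => {}
            }
        }
    }
    out // BTreeMap iteration keeps the conflict list deterministic
}

fn main() {
    let a = BTreeMap::from([(1u32, DiffEntry::Set(1))]);
    let b = BTreeMap::from([(1u32, DiffEntry::Set(2))]);
    println!("{:?}", detect(&a, &b));
}
```

Keeping detection as a pure, order-stable function means the merge log can replay the same conflict list deterministically, which is what makes recorded `MergeDecision[]` entries meaningful.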
- ---- - -## Automation Hooks - -- `merge.resolve(conflictId, strategy)` API for scripting/automation. -- Optional "auto-resolve" pass using policy (e.g., prefer branch A) before manual review. -- Notifications when unresolved conflicts remain. - ---- - -## Open Questions - -- Should we support collaborative resolution (multiple designers editing simultaneously)? -- How to visualize conflicts across nested branches (merge of merges)? -- Do we need plugin points for domain-specific merge strategies (e.g., level geometry vs inventory)? -- How to integrate paradox detection (if merge would introduce paradox, block and prompt user). diff --git a/docs/archive/capability-ownership-matrix.md b/docs/archive/capability-ownership-matrix.md deleted file mode 100644 index 73e67fa2..00000000 --- a/docs/archive/capability-ownership-matrix.md +++ /dev/null @@ -1,115 +0,0 @@ - - - -# Capability Ownership Matrix - -Date: 2026-01-01 -Status: Draft (Phase 0.75) - -This document is a living boundary map for Echo. - -It answers (explicitly, in one place): - -- Who **owns** vs **consumes** each capability? -- What determinism level is required at each layer? -- What provenance is required to make replay / time travel honest? -- Which external dependencies (clocks, OS IO, networks) are allowed to influence state, and how? - -It is intentionally redundant with specs: the point is to keep the architecture legible while it is evolving. - ---- - -## Layers (Echo interpretation) - -Use these columns consistently: - -- **Platform**: host integration and durable artifacts/contracts (process, filesystem, sockets, timers, OS scheduling, worldline storage, commit hashing, tick patch format, digests). Nondeterministic by default. -- **Kernel**: deterministic semantic core (rewrite engine, scheduler, receipts, snapshot/tick structure, deterministic decision records including stream admission decisions). HistoryTime-only. 
-- **Views**: controlled accessors and projections over history (query APIs, inspectors, adapters). Any interaction with HostTime/IO must be recorded as replay-safe claims/decisions. -- **Tooling**: UIs, dashboards, CLI workflows (read-only by default; must be usable during pause/rewind; any control surface must be capability-gated and recorded). -- **Docs**: specs and procedures; the "human-facing API". - ---- - -## Ratings - -- **Determinism** - - `none`: may vary per run; not replayable - - `best-effort`: tries to be stable but not a contract - - `deterministic`: replayable given pinned artifacts/inputs -- **Provenance** - - `none`: no tracking - - `basic`: timestamps/ids only - - `strong`: hash/CID-linked; replay/audit friendly - ---- - -## Matrix Template (Copy/Paste) - -For each capability × layer, fill the cell with: - -- `Role`: owns | consumes -- `Stability`: experimental | beta | stable -- `Determinism`: none | best-effort | deterministic (replayable) -- `Provenance`: none | basic (timestamps) | strong (CID/hash-linked) -- `External Deps`: libs, services, clocks, networks, etc. - -Cell format: - -```text -Role: owns | consumes -Stability: experimental | beta | stable -Determinism: none | best-effort | deterministic -Provenance: none | basic | strong (CID/hash) -External Deps: -``` - ---- - -## First-Pass Fill (Current Echo stack) - -This is a starter fill that we will revise as Echo components stabilize. 
- -Legend (compact): - -- `owns/consumes` -- `exp/beta/stable` -- `det/best/none` -- `prov: strong/basic/none` - -| Capability ↓ \ Layer → | Platform | Kernel | Views | Tooling | Docs | -| ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------- | ------------------------------------------------------------- | ------------------------------------------------------------------- | --------------------------------------------------- | -| **Scheduling** | owns · beta · best · prov: basic · deps: OS scheduler, tokio | owns · beta · det · prov: strong · deps: none | consumes · beta · det · prov: strong · deps: none | consumes · beta · best · prov: basic · deps: browser/event-loop | owns · stable · det · prov: strong · deps: none | -| **Provenance** | consumes · beta · best · prov: basic · deps: FS/network | owns · beta · det · prov: strong · deps: CID/hash | consumes · beta · det · prov: strong · deps: none | consumes · beta · det · prov: strong · deps: none | owns · stable · det · prov: strong · deps: git | -| **Schema / Interfaces** | consumes · exp · best · prov: basic · deps: serde/json | owns · exp · det · prov: strong · deps: versioned schemas | owns · exp · det · prov: strong · deps: schema hash pinning | consumes · exp · best · prov: basic · deps: UI contracts | owns · beta · det · prov: strong · deps: docs/specs | -| **Storage / Ledger** | owns · beta · best · prov: basic · deps: FS/DB | owns · exp · det · prov: strong · deps: content hashing | consumes · beta · det · prov: strong · deps: read-only ledger | consumes · beta · best · prov: basic · deps: localStorage/IndexedDB | owns · beta · det · prov: strong · deps: docs/specs | -| **Time / Clocks** | owns · beta · best · prov: basic · deps: HostTime | consumes · beta · det · prov: strong · deps: Decision Records | consumes · beta · det · prov: strong · deps: Clock View | consumes · beta · best · prov: basic · deps: 
tool clock | owns · beta · det · prov: strong · deps: paper/spec | -| **Networking / IO** | owns · beta · best · prov: basic · deps: TCP/WS/UDS | consumes · exp · det · prov: strong · deps: recorded claims | consumes · beta · det · prov: strong · deps: stream backlog | consumes · beta · best · prov: basic · deps: Web APIs | owns · beta · det · prov: strong · deps: procedures | -| **Auth / Trust** | owns · exp · best · prov: basic · deps: keys/tokens | consumes · exp · det · prov: strong · deps: signed claims | consumes · exp · det · prov: strong · deps: receipts | consumes · exp · best · prov: basic · deps: auth UI | owns · exp · det · prov: strong · deps: policies | -| **Observability** | owns · beta · best · prov: basic · deps: logs/metrics | owns · beta · det · prov: strong · deps: receipts/events | owns · beta · det · prov: strong · deps: query/index | owns · beta · best · prov: basic · deps: UI | owns · beta · det · prov: strong · deps: docs | -| **Replay / Debug** | consumes · beta · best · prov: basic · deps: host capture | owns · beta · det · prov: strong · deps: replay log | owns · beta · det · prov: strong · deps: index | owns · beta · det · prov: strong · deps: dashboard | owns · beta · det · prov: strong · deps: paper/spec | -| **Shared Invariants** | - | - | - | - | - | - ---- - -## Shared Invariants (Draft) - -These are the guarantees that must hold across layers if we want deterministic replay and time travel to be “honest”: - -1. **Deterministic Core**: kernel state transitions are pure functions of `(prior state, admitted inputs, pinned rule-pack / schema hashes)`; no HostTime/OS IO calls inside kernel semantic transitions. -2. **Time As Data**: kernel never consults HostTime directly; HostTime is only observed in Platform/Views and converted into Decision Records (HistoryTime) before it can influence semantics. -3. 
**Provenance First**: all externally meaningful artifacts (schemas, policies, rule packs) are referenced by content hash (CID) in receipts. -4. **Network Boundary**: IO is treated as external stimuli; any nondeterministic observation is recorded as a claim before it can affect semantic state. -5. **Replay Integrity**: if semantics change (schema/compiler), history carries a version/hash pin (fail closed or migrate deterministically). - ---- - -## Notes / Follow-Ups - -- This matrix should become part of the “phase overview” review checklist: when a capability moves from experimental → beta, update the cell and link evidence (PRs/specs/tests). -- When we formalize Wesley and/or a view grammar, split “Schema / Interfaces” into: boundary grammar, IR schema pinning, and codegen outputs. - -## Near-Term TODOs - -- (#174) Decide where “Wesley grammar/IR” lives in this matrix (Platform vs Schema layer), and whether its schema hash is required on all receipts. -- (#170) Specify the `StreamsFrame` inspector payload (backlog, cursors, `StreamAdmissionDecision` summaries). diff --git a/docs/archive/code-map.md b/docs/archive/code-map.md deleted file mode 100644 index e2b3ec60..00000000 --- a/docs/archive/code-map.md +++ /dev/null @@ -1,71 +0,0 @@ - - - -# Echo Code Map - -> Quick index from concepts → code, with the most relevant specs. 
- -## Crates - -- warp-core — deterministic graph rewriting engine (Rust) - - Public API aggregator: `crates/warp-core/src/lib.rs` - - Identifiers & hashing: `crates/warp-core/src/ident.rs` - - Node/edge records: `crates/warp-core/src/record.rs` - - In-memory graph store: `crates/warp-core/src/graph.rs` - - Rules and patterns: `crates/warp-core/src/rule.rs` - - Transactions: `crates/warp-core/src/tx.rs` - - Deterministic scheduler: `crates/warp-core/src/scheduler.rs` - - Snapshots + hashing: `crates/warp-core/src/snapshot.rs` - - Payload codecs (demo): `crates/warp-core/src/payload.rs` - - Engine implementation: `crates/warp-core/src/engine_impl.rs` - - Playback & view sessions: `crates/warp-core/src/playback.rs` (PlaybackCursor, ViewSession, TruthSink, TruthFrame) - - Worldlines & temporal graphs: `crates/warp-core/src/worldline.rs` (WorldlineId, HashTriplet, apply_warp_op_to_store) - - Provenance tracking: `crates/warp-core/src/provenance_store.rs` (ProvenanceStore trait, LocalProvenanceStore) - - Retention policies: `crates/warp-core/src/retention.rs` (RetentionPolicy enum) - - Materialization V2 codec: `crates/warp-core/src/materialization/frame_v2.rs` (V2Packet encoder/decoder) - - Demo rule: `crates/warp-core/src/demo/motion.rs` - - Deterministic math: `crates/warp-core/src/math/*` - - Tests (integration): `crates/warp-core/tests/*` - -- warp-wasm — wasm-bindgen bindings - - `crates/warp-wasm/src/lib.rs` - -- warp-cli — CLI scaffolding - - `crates/warp-cli/src/main.rs` - -## Specs → Code - -- WARP core model — docs/spec-warp-core.md → `ident.rs`, `record.rs`, `graph.rs`, `rule.rs`, `engine_impl.rs`, `snapshot.rs`, `scheduler.rs` -- Scheduler — docs/spec-scheduler.md → `scheduler.rs`, `engine_impl.rs` -- ECS storage (future) — docs/spec-ecs-storage.md → new `ecs/*` modules (TBD) -- Serialization — docs/spec-serialization-protocol.md → `snapshot.rs` (hashing), future codecs -- Deterministic math — docs/SPEC_DETERMINISTIC_MATH.md → `math/*` -- Temporal bridge — 
docs/spec-temporal-bridge.md → future modules (TBD) -- Worldlines & playback (SPEC-0004) — docs/spec/SPEC-0004-worldlines-playback-truthbus.md → `playback.rs`, `worldline.rs`, `provenance_store.rs`, `retention.rs`, `materialization/frame_v2.rs` - -## Test Coverage - -- Reducer emission: `crates/warp-core/tests/reducer_emission_tests.rs` (T11-T13 reducer tests) -- View session & playback: `crates/warp-core/tests/view_session_tests.rs` (Playback + T16 tests) -- Playback outputs: `crates/warp-core/tests/outputs_playback_tests.rs` (SPEC-0004 test IDs T1, T4, T5, T6, T7, T8) -- Checkpoint & fork: `crates/warp-core/tests/checkpoint_fork_tests.rs` (T17-T18 checkpoint/fork tests) -- Playback cursor: `crates/warp-core/tests/playback_cursor_tests.rs` (Cursor seek tests) - -## Conventions - -- Column-major matrices, right-handed coordinates, f32 math. -- One concrete concept per file; keep modules < 300 LoC where feasible. -- Tests live in `crates//tests` and favor small, focused cases. - -## Refactor Policy - -- 1 file = 1 concrete concept (engine, graph store, identifiers, etc.). -- No 500+ LoC “god files”; split before modules exceed ~300 LoC. -- Keep inline tests in separate files under `crates//tests`. -- Maintain stable re-exports in `lib.rs` so public API stays coherent. - -## Onboarding - -- Start with `README.md` and `docs/meta/docs-index.md`. -- For engine flow, read `engine_impl.rs` (apply → schedule → commit → snapshot). -- For demo behavior, see `demo/motion.rs` and tests under `crates/warp-core/tests/*`. 
diff --git a/docs/archive/determinism-invariants.md b/docs/archive/determinism-invariants.md deleted file mode 100644 index 93d1ec69..00000000 --- a/docs/archive/determinism-invariants.md +++ /dev/null @@ -1,8 +0,0 @@ - - - -# Determinism Invariants (Redirect) - -This content has been consolidated into the `warp-core` spec: - -- [/spec-warp-core#84-determinism-invariants-summary](/spec-warp-core#84-determinism-invariants-summary) diff --git a/docs/archive/determinism/DETERMINISM-AUDIT.md b/docs/archive/determinism/DETERMINISM-AUDIT.md deleted file mode 100644 index cbe03bc5..00000000 --- a/docs/archive/determinism/DETERMINISM-AUDIT.md +++ /dev/null @@ -1,323 +0,0 @@ - - - -# Determinism Audit for warp-core - -**Date:** 2026-01-13 -**Auditor:** Claude (with human oversight) - -## Executive Summary - -### TL;DR - -The refactor targeting "serde removal" was attacking the wrong problem. **Serde itself isn't the enemy—non-deterministic data structures (HashMap/HashSet), non-deterministic serialization formats (JSON), and platform-variant float behavior are.** - -## Key Findings - -### ✅ GOOD: Already Deterministic - -1. **Core data structures use BTreeMap/BTreeSet throughout** - - `tick_patch.rs`: Uses BTreeMap for op deduplication (line 331-338) - - `snapshot.rs`: Uses BTreeSet for reachability (line 91-92) - - `scheduler.rs`: Final ready set uses stable radix sort (20-pass LSD, lines 383-450) - - All SlotIds and ops are explicitly sorted before hashing - -2. **Explicit canonical ordering everywhere** - - WarpOp::sort_key() provides canonical ordering (tick_patch.rs:203-283) - - Edges sorted by EdgeId before hashing (snapshot.rs:185-194) - - Scheduler uses deterministic radix sort for candidate ordering - -3. **Clean identifier types** - - All IDs are Blake3 hashes ([u8; 32]) - - All derive PartialOrd, Ord for stable comparison - - No hidden nondeterminism in identity - -### ⚠️ CONCERNS: Needs Investigation - -1. 
**HashMap/HashSet Usage (Non-Critical)** - - `engine_impl.rs`: HashMap for rule registries (NOT part of state hash) - - `scheduler.rs`: HashMap for pending txs (intermediate, final output is sorted) - - `attachment.rs`: HashMap for codec registry (NOT part of state) - - **Assessment**: These are internal bookkeeping, not part of canonical encoding - - **Action**: Audit that none of these leak into hashing/signing code paths - -2. **Float Usage (f32/f64) - CRITICAL AREA** - - **CRITICAL FINDINGS (2026-01-13):** - - **Yes**, floats flow into canonical hashes. - - **Yes**, the system is sensitive to 1 ULP differences (confirmed by `tests/determinism_audit.rs`). - - **Implication**: `F32Scalar` (the default) relies on hardware `f32` arithmetic. If `a + b` varies by 1 ULP across platforms (x87 vs SSE vs NEON), the state hash **will diverge**. - - **Mitigation**: The `det_fixed` feature flag replaces `F32Scalar` with `DFix64` (Q32.32 fixed-point), providing guaranteed bit-perfect cross-platform determinism at the cost of performance. - - **Recommendation**: Use `det_fixed` for consensus-critical deployments. `F32Scalar` is acceptable for local-only or homogeneous-hardware deployments ("optimistic determinism"). - - **Location: `math/scalar.rs`** - - F32Scalar wraps f32 with canonicalization: - - NaN → 0x7fc00000 (canonical quiet NaN) - - Subnormals → +0.0 - - -0.0 → +0.0 - - **PROBLEM**: f32 arithmetic itself is NOT deterministic across platforms - - x87 FPU vs SSE may produce different intermediate results - - Rounding modes can vary - - Denormal handling varies by CPU flags - - **Location: `payload.rs`** - - Heavy f32 usage for motion payloads - - Comments mention "deterministic quantization to Q32.32" - - Has v0 (f32) and v2 (fixed-point?) 
formats - - `decode_motion_payload_v0`: Reads f32 from bytes (line 258) - - `encode_motion_payload_v0`: Writes f32 to bytes (line 216) - - **CRITICAL QUESTION**: Are these f32 values EVER used in: - - State hashing (compute_state_root)? - - Patch digests? - - Receipt digests? - - Signature inputs? - - **Location: `math/quat.rs`** - - Uses `[f32; 4]` for quaternion storage - - Arithmetic operations on quaternions - - **CONCERN**: Quaternion normalization/multiplication may be non-deterministic - - **ACTION REQUIRED**: - - Grep for all places F32Scalar/f32 values flow into hash computations - - Verify motion payloads are ONLY boundary data (not hashed) - - If floats DO affect hashes, replace with fixed-point (Q32.32 or similar) - -3. **Serde Feature Gates** - - `receipt.rs:123`: Has `#[cfg(feature = "serde")]` for `rule_id_short()` helper - - `snapshot.rs:77`: Has `#[cfg(feature = "serde")]` for `hash_hex()` helper - - `math/scalar.rs:110,120`: Serde impls for F32Scalar - - `serializable.rs`: Entire module was using serde derives - - **Assessment**: These are UI/debug helpers, NOT canonical encoding - - **Action**: Can keep serde feature gate for convenience IF: - - It's ONLY used with deterministic CBOR encoder - - NEVER used with serde_json - - JSON is only for debug/view layer - -### 🔥 CRITICAL: What Actually Needs Fixing - -1. **Enforce CBOR-only wire format** - - Make deterministic CBOR (echo-wasm-abi) the ONLY protocol boundary - - JSON can exist for debug/viewing ONLY (never canonical) - - Add compile-time checks/lints to prevent JSON usage - -2. **Audit float usage in hash paths** - - Search for all paths where f32/F32Scalar flows into: - - compute_state_root - - compute_patch_digest - - compute_tick_receipt_digest - - Any signature/commitment computation - - If found, replace with fixed-point Q32.32 - -3. 
**Add determinism tests** - - Test: Encode same patch 1000x → identical bytes - - Test: Encode across different runs → identical bytes - - Test: Encode same state → identical state_root - - Bonus: Cross-compile test (native + wasm) for identical hashes - -## Search Targets for Detailed Audit - -```bash -# Find HashMap/HashSet in critical paths -rg "HashMap|HashSet" crates/warp-core/src/{snapshot,tick_patch,receipt,cmd}.rs - -# Find float usage in critical paths -rg "\bf32\b|\bf64\b|F32Scalar" crates/warp-core/src/{snapshot,tick_patch,receipt,cmd}.rs - -# Find serde_json usage (should be ZERO in warp-core) -rg "serde_json" crates/warp-core/ - -# Find ciborium usage (should be ZERO except in tests) -rg "ciborium::(from_reader|into_writer)" crates/warp-core/src/ - -# Find all hashing/digest computation sites -rg "Hasher::new|finalize\(\)" crates/warp-core/src/ -A5 -``` - -## What to Revert from Previous Refactor - -**REVERT:** - -- ❌ Removal of serde from Cargo.toml dependencies (it's fine if used with CBOR) -- ❌ Removal of all `#[cfg_attr(feature = "serde", derive(...))]` annotations - -**KEEP:** - -1. ✅ Removal of serde_json dependency from warp-core -2. ✅ clippy.toml lint rules forbidding serde_json/ciborium -3. ✅ Manual JSON formatting in telemetry.rs -4. ✅ Use of deterministic CBOR in cmd.rs -5. 
✅ Documentation about determinism requirements - -## Proposed Refactor Plan (3 Commits) - -### Commit 1: Revert overly-aggressive serde removal + document audit - -**What:** - -- Revert warp-core/Cargo.toml: Add serde back to dependencies -- Revert removed `#[cfg_attr(feature = "serde", ...)]` lines on core types -- Keep serde_json in dev-dependencies only -- Keep clippy lint rules (they prevent serde_json abuse) -- Add this DETERMINISM-AUDIT.md document -- Update CLAUDE-NOTES.md with corrected understanding - -**Why:** - -- Serde with deterministic CBOR is fine -- The real problem is JSON/HashMap/floats, not serde derives -- We need derives for convenience with CBOR encoding - -**Files:** - -- `crates/warp-core/Cargo.toml` -- `DETERMINISM-AUDIT.md` (NEW) -- `CLAUDE-NOTES.md` -- Revert cfg_attr removals in: attachment.rs, ident.rs, record.rs, receipt.rs, tx.rs, tick_patch.rs, snapshot.rs, warp_state.rs, graph.rs - -**Commit message:** - -```text -fix(warp): revert overly-aggressive serde removal - -The previous refactor incorrectly treated serde as the source of -non-determinism. The real issues are: -1. Non-deterministic data structures (HashMap/HashSet) -2. Non-deterministic formats (JSON) -3. Platform-variant floats - -Serde itself is fine when used with deterministic encoders like our -canonical CBOR implementation (echo-wasm-abi). - -This commit: -- Restores serde dependency (with derives on core types) -- Keeps serde_json removed from dependencies -- Keeps clippy lints forbidding serde_json -- Adds DETERMINISM-AUDIT.md documenting real risks - -Next steps: Audit float usage in hash paths (see DETERMINISM-AUDIT.md) -``` - -### Commit 2: Complete float determinism audit + add tests - -**What:** - -- Grep every usage of f32/F32Scalar in snapshot.rs, tick_patch.rs, receipt.rs -- Document findings: Do floats flow into hashes? 
If yes, replace with Q32.32 -- Add determinism tests: - - test_patch_digest_repeatable: Encode same patch 100x → same bytes - - test_state_root_repeatable: Compute state_root 100x → same hash - - test_receipt_digest_repeatable: Encode same receipt 100x → same bytes -- Document in DETERMINISM-AUDIT.md whether floats are safe or need replacement - -**Files:** - -- `crates/warp-core/src/snapshot.rs` (tests) -- `crates/warp-core/src/tick_patch.rs` (tests) -- `crates/warp-core/src/receipt.rs` (tests) -- `DETERMINISM-AUDIT.md` (updated with audit results) - -**Commit message:** - -```text -test(warp): add determinism audit tests for core hashing - -Adds repeatability tests for: -- Patch digest computation -- State root computation -- Receipt digest computation - -These tests verify byte-for-byte identical outputs across multiple -encode operations on the same input. - -[If floats found in hash paths:] -CRITICAL: Audit revealed f32 usage in [X] - requires follow-up to -replace with fixed-point Q32.32 representation. - -[If floats NOT in hash paths:] -Audit confirmed: f32 values are boundary-only and never flow into -canonical hash computation. -``` - -### Commit 3: Enforce CBOR-only boundary + cleanup - -**What:** - -- Update serializable.rs to use deterministic CBOR encoding -- Remove remaining #[cfg(feature = "serde")] gates that are now unnecessary -- Add module-level docs explaining: CBOR for wire, JSON for debug only -- Update any wasm boundary code to explicitly use echo-wasm-abi::encode_cbor - -**Files:** - -- `crates/warp-core/src/serializable.rs` -- `crates/warp-core/src/lib.rs` (docs) -- `crates/warp-wasm/` (ensure CBOR boundary) - -**Commit message:** - -```text -refactor(warp): enforce CBOR-only protocol boundary - -Makes deterministic CBOR (echo-wasm-abi) the canonical encoding for -all protocol boundaries. JSON is relegated to debug/view layer only. 
- -Changes: -- serializable.rs uses CBOR encoding only -- Removed unnecessary serde feature gates -- Added docs: "CBOR for protocol, JSON for debug" -- warp-wasm boundary uses explicit CBOR encode/decode - -Determinism guarantee: All canonical artifacts (patches, receipts, -snapshots) are encoded via echo-wasm-abi::encode_cbor with: -- Sorted map keys -- Canonical integer/float widths -- No indefinite lengths -- No CBOR tags -``` - -## Key Principles (Corrected) - -1. **Determinism sources are data structures + formats, NOT serde** - - HashMap/HashSet iteration order → use BTreeMap/BTreeSet ✅ (already done) - - JSON object key order → use CBOR for wire format ✅ (in progress) - - Float arithmetic variance → audit + replace if in hash paths ⚠️ (TODO) - -2. **CBOR for wire, JSON for debug** - - Protocol boundary: Always CBOR (echo-wasm-abi) - - Debug/viewing: JSON is fine (never canonical) - - No serde_json in warp-core runtime dependencies - -3. **Serde is OK with deterministic encoders** - - serde::Serialize with CBOR → deterministic ✅ - - serde::Serialize with JSON → non-deterministic ❌ - - Keep serde derives for convenience with CBOR - -4. **Test everything** - - Byte-for-byte identical encoding across runs - - Ideally test native + wasm produce same hashes - - Test patches, receipts, snapshots independently - -## Outstanding Questions - -1. **Are motion payload f32 values EVER hashed?** - - Check: Does AtomPayload flow into any digest computation? - - If yes: Must replace with Q32.32 fixed-point - - If no: Boundary-only f32 is acceptable - -2. **Do quaternions (Quat) flow into state hashing?** - - Check: Are Quat values stored in AttachmentValue::Atom? - - Check: Does snapshot.rs hash quaternion payloads? - - If yes: Replace with fixed-point representation - -3. 
**Is RFC 8785 (JCS - canonical JSON) needed?** - - Current plan: No, use CBOR exclusively for wire format - - JSON only for debug/human-readable views - - Re-evaluate if JSON wire format is required later - -## Next Actions (Immediate) - -1. ✅ Create this audit document -2. ✅ Implement Commit 1 (Revert + Document) -3. ✅ Run full audit for f32 in hash paths (Confirmed: Floats affect hashes) -4. ✅ Add determinism tests (`crates/warp-core/tests/determinism_audit.rs`) -5. ⏳ Implement Commit 3 (Enforce CBOR-only boundary) -6. ⏳ Update CLAUDE-NOTES.md with final status diff --git a/docs/archive/determinism/DIND-MISSION-PHASE3.md b/docs/archive/determinism/DIND-MISSION-PHASE3.md deleted file mode 100644 index 2df91e5f..00000000 --- a/docs/archive/determinism/DIND-MISSION-PHASE3.md +++ /dev/null @@ -1,38 +0,0 @@ - - - -# RUSTAGEDDON TRIALS: DIND Phase 3+ - -We are moving from "determinism exists" to "determinism is inevitable". - -## Phase 3: Torture Mode - -- [ ] **1. Update `echo-dind-harness` CLI:** - - [ ] Add `Torture { scenario: PathBuf, runs: u32, threads: Option }` subcommand. - - [ ] Implement repeated in-process execution loop. - - [ ] Compare full hash chain across runs. - - [ ] Report first divergence (run index, step index, expected vs actual). - -## Phase 4: The Drills (Real Scenarios) - -- [ ] **2. Scenario 010: Dense Rewrite Saturation** - - [ ] Create `scripts/gen_dense_rewrite.mjs`. - - [ ] Generate 1k-5k ops (node/edge churn). - - [ ] Record golden hashes. -- [ ] **3. Scenario 030: Error Determinism** - - [ ] Create `scripts/gen_error_determinism.mjs`. - - [ ] Generate invalid ops (bad payloads, invalid IDs). - - [ ] Assert state hash stability (no partial commits). - -## Phase 5: Randomized Construction - -- [ ] **4. Randomized Order Drill** - - [ ] Create `scripts/gen_randomized_order.mjs`. - - [ ] Generate equivalent graph states via permuted op orders. - - [ ] Verify final state hashes match. - -## CI & Policy - -- [ ] **5. 
CI Integration** - - [ ] Add `make dind` or `cargo xtask dind` to run suite. - - [ ] Add grep-checks for `SystemTime`, `Instant`, `rand`, `HashMap`. diff --git a/docs/archive/determinism/DIND-MISSION-PHASE5.md b/docs/archive/determinism/DIND-MISSION-PHASE5.md deleted file mode 100644 index 08cd5b20..00000000 --- a/docs/archive/determinism/DIND-MISSION-PHASE5.md +++ /dev/null @@ -1,39 +0,0 @@ - - - -# RUSTAGEDDON TRIALS: DIND Phase 5 (The Shuffle) - -This phase tests robustness against insertion order and HashMap iteration leaks. - -## Doctrine - -- **Invariant A (Self-Consistency):** A specific shuffled transcript must be deterministic across runs/platforms. -- **Invariant B (Convergence):** Different shuffles of _commutative_ operations must yield the same final state hash. - -## Prerequisite: ID Stability - -- Current Status: IDs are hashes of string labels (e.g., `make_node_id("label")`). -- This means IDs _are_ stable/explicit provided the labels are deterministic. -- If we shuffle `InsertNode("A")` and `InsertNode("B")`, the resulting IDs are `hash("node:A")` and `hash("node:B")` regardless of order. -- **Verdict:** We are ready for Invariant B (Convergence). - -## Tasks - -- [x] **1. Randomized Generator (`scripts/bootstrap_randomized_order.mjs`):** - - [x] Input: `--seed`, `--out`. - - [x] Use seeded Xorshift32 (already implemented in dense rewrite script, extract/reuse?). - - [x] Pattern: - - Create N nodes with deterministic labels (`node_0`..`node_N`). - - Shuffle creation order. - - Create M edges connecting random pairs (deterministic pairs based on seed, but shuffled insertion). - - Set K attachments (shuffled). - - **Critical:** Ensure no duplicate edges/attachments that would trigger overwrite behavior unless intended. -- [x] **2. Generate Scenarios (`050_randomized_order_small`):** - - [x] Generate 10 seeds (0001..0010). (Note: Generated 3 seeds as per current script logic, which is sufficient for CI). - - [x] Record goldens for all. -- [x] **3. 
Harness Update (`echo-dind-harness`):** - [x] Add `Converge { scenarios: Vec<PathBuf> }` command. - [x] Runs all inputs, asserts final state hashes are identical. -- [x] **4. CI Integration:** - [x] Run seeds 1-3 in PR check. - [x] Run `converge` on 1-3. diff --git a/docs/archive/determinism/DIND-MISSION.md b/docs/archive/determinism/DIND-MISSION.md deleted file mode 100644 index 5d6d4244..00000000 --- a/docs/archive/determinism/DIND-MISSION.md +++ /dev/null @@ -1,54 +0,0 @@ - - - -# RUSTAGEDDON TRIALS: DIND (Deterministic Ironclad Nightmare Drills) - -This mission implements a rigorous determinism verification suite for the Continuum engine. - -## Doctrine - -We do not "hope" for determinism. We assert inevitability. - -1. Same inputs ⇒ same outputs (byte-for-byte). -2. Same inputs ⇒ same intermediate states. -3. Same inputs ⇒ same errors. -4. Across runs, threads, and platforms. - -## Phase 1: The Heartbeat (Canonical State Hash) - -- [x] **1. Implement `canonical_state_hash` in `warp-core`:** - - [x] Create a `CanonicalHash` trait or method on `GraphStore`. - - [x] Must traverse nodes/edges in sorted order (by ID). - - [x] Must serialize attachments deterministically (already done via "Mr Clean", but double check iteration order). - - [x] Use BLAKE3. - - [x] Expose this via `EchoKernel::state_hash()`. - -## Phase 2: The Harness (DIND Runner) - -- [x] **2. Create `crates/echo-dind-harness`:** - - [x] CLI tool to run scenarios. - - [x] Input: `.eintlog` (Sequence of `pack_intent_v1` bytes). - - [x] Output: `hashes.json` (Array of state hashes after each op). - - [x] Logic: Init kernel -> Apply Op -> Hash -> Repeat -> Assert match. - -## Phase 3: The Drills (Scenarios & Stress) - -- [x] **3. Create DIND Scenarios (`vendor/echo/testdata/dind/`):** - - [x] `000_smoke_transcript.eintlog`: 50 ops, basic state changes. - - [x] `010_graph_rewrite_dense.eintlog`: Saturation test (1k steps). - - [x] `020_conflict_policy.eintlog`: Abort vs Retry stability. -- [x] **4. 
Add Regression Test:** - - [x] Add a standard Rust test that runs the harness against committed `hashes.json` for the smoke scenario. - -## Phase 4: Policy Enforcement - -- [x] **5. Ban Nondeterminism:** - - [x] Verify `std::collections::HashMap` is not iterated in hash-sensitive paths (or use `BTreeMap` / sorted iterators). - - [x] Add CI grep-check for `std::time`, `rand::thread_rng`. - -## Execution Order - -1. Implement `canonical_state_hash` (The Prerequisite). -2. Create the Harness + Smoke Scenario (The MVP). -3. Lock it in with a regression test. -4. Expand scenarios. diff --git a/docs/archive/diagrams.md b/docs/archive/diagrams.md deleted file mode 100644 index cd25a532..00000000 --- a/docs/archive/diagrams.md +++ /dev/null @@ -1,196 +0,0 @@ - - - -# Echo Diagram Vault - -This folder sketches Echo’s moving parts using Mermaid. Each diagram matches the architecture spec and will eventually power an animated viewer (GSAP + SVG) once we export the Mermaid graphs. - -> **Tip:** In VS Code or GitHub you can render these diagrams directly. For custom themes, we’ll feed the Mermaid JSON definitions into the web viewer later. - ---- - -## 1. 
System Constellation - -```mermaid -graph LR - classDef core fill:#111827,stroke:#1f2937,color:#f9fafb,font-weight:600; - classDef port fill:#0f172a,stroke:#1d4ed8,color:#bfdbfe,stroke-width:1.5px; - classDef adapter fill:#1e293b,stroke:#94a3b8,color:#e2e8f0; - classDef tool fill:#0f766e,stroke:#2dd4bf,color:#ecfeff; - classDef service fill:#3f3a3a,stroke:#fcd34d,color:#fef3c7; - - subgraph Core["Echo Core"] - ECS["@EntityComponentStore"] - Scheduler["Scheduler\n(DAG + Branch Orchestrator)"] - Codex["Event Bus\n(MaterializationBus)"] - Timeline["Timeline Tree\n(Chronos/Kairos/Aion)"] - Math["Deterministic Math\n(Vector, PRNG, Metrics)"] - ECS --> Scheduler - Scheduler --> Codex - Scheduler --> Timeline - Scheduler --> Math - end - class ECS,Scheduler,Codex,Timeline,Math core; - - subgraph Ports["Ports (Hexagonal boundary)"] - RendererPort - InputPort - PhysicsPort - AudioPort - PersistencePort - NetworkPort - end - class RendererPort,InputPort,PhysicsPort,AudioPort,PersistencePort,NetworkPort port; - - subgraph Adapters["Adapters"] - RendererPort --> PixiAdapter["Pixi/WebGL Adapter"] - RendererPort --> WebGPUAdapter["WebGPU Adapter"] - RendererPort --> TUIGraphics["TUI Adapter"] - - InputPort --> BrowserInput["Browser Input"] - InputPort --> NativeInput["SDL/Tauri Input"] - InputPort --> AIInput["LLM Strategist"] - - PhysicsPort --> Box2DAdapter - PhysicsPort --> RapierAdapter - PhysicsPort --> DeterministicSolver - - AudioPort --> WebAudioAdapter - AudioPort --> NativeAudioAdapter - - PersistencePort --> LocalStorageAdapter - PersistencePort --> CloudAdapter - - NetworkPort --> WebRTCAdapter - NetworkPort --> DedicatedServerAdapter - end - class PixiAdapter,WebGPUAdapter,TUIGraphics,BrowserInput,NativeInput,AIInput,Box2DAdapter,RapierAdapter,DeterministicSolver,WebAudioAdapter,NativeAudioAdapter,LocalStorageAdapter,CloudAdapter,WebRTCAdapter,DedicatedServerAdapter adapter; - - subgraph Tooling["Tooling & Observability"] - Inspector["Echo Inspector"] - 
TimelineViewer["Timeline Vault"] - Benchmarks["Benchmark Suite"] - Editor["Echo Studio"] - end - class Inspector,TimelineViewer,Benchmarks,Editor tool; - - subgraph Services["Cross-Cutting Services"] - Config - DI["Dependency Injector"] - Entropy["Entropy Monitor"] - Diagnostics["Telemetry/Logging"] - end - class Config,DI,Entropy,Diagnostics service; - - Ports -. APIS .-> Core - Core -- Events/Commands --> Ports - Tooling --- Core - Services --- Core - Services --- Tooling -``` - ---- - -## 2. Chronos Loop (Single Frame, Single Branch) - -```mermaid -flowchart TD - classDef stage fill:#1e293b,stroke:#e0f2fe,color:#bae6fd,font-weight:600; - classDef phase fill:#0f172a,stroke:#f97316,color:#fb923c,font-weight:500; - classDef op fill:#312e81,stroke:#a78bfa,color:#ede9fe; - classDef sub fill:#111827,stroke:#6366f1,color:#c7d2fe,font-style:italic; - - Start((Start Tick)):::stage --> Clock["Clock\nAccumulate dt"]:::phase - Clock -->|dt| SchedulerPre["Phase 1: Pre-Update"]:::stage - - SchedulerPre --> InputAssim["Assimilate Input\n(InputPort flush)"]:::op - InputAssim --> CodexPre["Event Bus\nPre-Flush"]:::op - CodexPre --> TimelineIntake["Timeline Tree\nRegister Branch Jobs"]:::op - - TimelineIntake --> UpdatePhase["Phase 2: Update Systems"]:::stage - UpdatePhase --> DAG["Resolve DAG\n(Dependencies)"]:::op - DAG --> ParallelBatch["Plan Parallel Batches"]:::op - ParallelBatch --> SystemsLoop{"For each batch"}:::phase - SystemsLoop -->|system| SystemExec["Run System\n(Query + Mutate ECS)\nUpdate Codex"]:::op - SystemExec --> SystemsLoop - - SystemsLoop --> PostUpdate["Phase 3: Post-Update"]:::stage - PostUpdate --> Hooks["Late Hooks\n(Animation, Cleanups)"]:::op - Hooks --> PhysicsSync["Physics Sync"]:::op - PhysicsSync --> MathResolve["Math Snap (fround/fixed-point)"]:::op - - MathResolve --> RenderPrep["Phase 4: Render Prep"]:::stage - RenderPrep --> FramePacket["Assemble FramePacket\n(Query renderer views)"]:::op - FramePacket --> DiagnosticsStage["Dev 
Diagnostics"]:::op - - DiagnosticsStage --> Present["Phase 5: Present"]:::stage - Present --> RendererCall["RendererPort.submit(frame)"]:::op - - RendererCall --> TimelineFlush["Phase 6: Timeline Flush"]:::stage - TimelineFlush --> DiffPersist["Persist Diffs\n(COW chunks, diff cache)"]:::op - DiffPersist --> EntropyUpdate["Update Entropy/Aion Metrics"]:::op - EntropyUpdate --> BranchBook["Update Branch Index"]:::op - - BranchBook --> End((End Tick)):::stage -``` - ---- - -## 3. Multiverse Mesh (Branch Tree) - -```mermaid -graph TD - classDef base fill:#111111,stroke:#6b7280,color:#f5f5f5,font-weight:600; - classDef node fill:#0f172a,stroke:#38bdf8,color:#e0f2fe; - classDef merge fill:#422006,stroke:#f97316,color:#fed7aa; - classDef ghost fill:#312e81,stroke:#c084fc,color:#ede9fe; - - subgraph TimelineTree["Persistent Timeline Tree"] - Root["C0\n(Chronos=0,\nKairos=Prime,\nAion=Baseline)"]:::base - Root --> N1["C15 Kα A0.8\n\"Puzzle Attempt\""]:::node - Root --> N2["C15 Kβ A0.2\n\"Alt Strategy\""]:::node - N1 --> N1a["C24 Kα1 A0.95\n\"Boss Victory\""]:::node - N1 --> N1b["C24 Kα2 A0.6\n\"Loot Run\""]:::node - N2 --> N2a["C20 Kβ1 A0.3\n\"Reverse Time Room\""]:::node - N2a --> MergeCandidate{{Merge?\nΔconflict=low}}:::merge - MergeCandidate --> N3["C32 Kγ A0.9\n\"Braided Outcome\""]:::node - N1b -. Ghost Echo .-> N3 - N2a -. ParadoxFlag .-> N3 - end - - class MergeCandidate merge; - class N1,N2,N1a,N1b,N2a,N3 node; - class Root base; -``` - ---- - -## 4. 
Message Bridge Across Branches - -```mermaid -sequenceDiagram - autonumber - participant BranchAlpha as Branch α (C24) - participant CodexAlpha as Codex α - participant Bridge as Temporal Bridge - participant CodexBeta as Codex β - participant BranchBeta as Branch β (C18) - - BranchAlpha->>CodexAlpha: enqueue PastMessage{target=C12, payload=hint} - CodexAlpha->>Bridge: dispatch envelope (Chronos=24, Kairos=α, Aion=0.8) - Bridge->>Bridge: validate paradox risk / entropy cost - Bridge->>CodexBeta: spawn retro branch at C12 - CodexBeta->>BranchBeta: deliver PastMessage at Chronos=12 - BranchBeta->>Bridge: acknowledge timeline fork (Kairos=β′) - Note over BranchAlpha,BranchBeta: Player can merge β′ back into α if conflicts resolved -``` - ---- - -## Animation Ideas - -- **GSAP Morphs**: Export Mermaid SVG and tween branch nodes as timelines split/merge. -- **Entropy Pulse**: Animate stroke width/color based on the Entropy meter. -- **Interactive Sequencer**: Play back the sequence diagram with tooltips showing Codex queue sizes. - -Once the architecture crystallizes, we’ll wire these into a future documentation viewer/playground that live-updates from this Markdown. diff --git a/docs/archive/guide/collision-tour.md b/docs/archive/guide/collision-tour.md deleted file mode 100644 index 8c8274ea..00000000 --- a/docs/archive/guide/collision-tour.md +++ /dev/null @@ -1,10 +0,0 @@ - - - -# Collision DPO Tour (Redirect) - -This guide has been merged into the Start Here doc. - -- Read the overview: [/guide/start-here](/guide/start-here) -- Launch the tour: [/public/collision-dpo-tour](/public/collision-dpo-tour) -- Spec stub: [/spec-geom-collision](/spec-geom-collision) diff --git a/docs/archive/hash-graph.md b/docs/archive/hash-graph.md deleted file mode 100644 index 1af14653..00000000 --- a/docs/archive/hash-graph.md +++ /dev/null @@ -1,52 +0,0 @@ - - - -# Hash Graph Overview - -Echo uses content-addressed hashing to provide provenance and deterministic replay. 
This document maps how hashes relate across subsystems. - ---- - -## Root Manifest - -- `manifestHash = BLAKE3(sorted(nodeHashes || snapshotHashes || diffHashes || payloadHashes))` -- Records top-level references for branch nodes, snapshots, diffs, payloads. - -## Config Hash - -- `configHash = BLAKE3(canonical(config.json))` -- Stored in block manifest and determinism logs. -- Replay verifies configHash before executing diffs. - -## Plugin Manifest Hash - -- Each plugin manifest hashed; combined `pluginsManifestHash = BLAKE3(sorted(manifestHashes))`. -- Stored in manifest along with plugin registry version. - -## Schema Ledger Hash - -- `schemaLedgerHash` ties component layouts to snapshots. - -## Diff & Snapshot Hash - -- Diffs and snapshots hashed via serialization protocol (see spec-serialization-protocol.md). - -## Event Envelope Hash - -- `envelopeHash = BLAKE3(canonical event bytes)` used for dedup, signatures, and causality. - -## Composition - -```text -manifestHash -├─ configHash -├─ pluginsManifestHash -├─ schemaLedgerHash -├─ snapshotHash -│ └─ chunkRefHashes -├─ diffHash -│ └─ chunkDiff payload hashes -└─ eventEnvelopeHashes (if persisted) -``` - -These hashes ensure each phase of the simulation can be verified independently and recombined deterministically. diff --git a/docs/archive/jitos/spec-0000.md b/docs/archive/jitos/spec-0000.md deleted file mode 100644 index 3e3a262e..00000000 --- a/docs/archive/jitos/spec-0000.md +++ /dev/null @@ -1,370 +0,0 @@ - - - -# SPEC-000: Everything Is a Rewrite - -## The Foundational Model of the JITOS Kernel - -**Purpose:** Introduce the core design principle of the JITOS OS: - -> **All durable state in the system evolves exclusively through immutable, semantic, reversible graph rewrites.** - -This spec page: - -- Teaches the rewrite model -- Demonstrates it via an interactive graph UI -- Executes real WARP logic (Rust -> WASM) -- Serves as the first living test in the OS - ---- - -## 1. 
Concept: WARP + Rewrite = Reality - -In JITOS, the world **is** a WARP graph (WARP). - -- Nodes represent entities. -- Edges represent relations. -- Fields represent attributes. - -You **never** mutate this graph directly. - -Instead, all change must go through a **Rewrite Transaction**, a semantic, append-only event that transforms one world state into another: - -```text -(previous_graph_state, rewrite_txn) -> (new_graph_state) -``` - -The kernel interprets this rewrite log as the _causal history_ of the OS. - ---- - -## 2. What This Demo Proves - -This first demo page proves 5 concepts interactively: - -1. **Immutable append-only rewrites** -2. **Reversible events (each rewrite includes old + new values)** -3. **SemanticOps (intent-aware changes)** -4. **Graph materialization (apply rewrites to produce current view)** -5. **Time travel (step backward/forward between rewrite states)** - ---- - -## **3. Demo UI Overview** - -On the page you will have: - -### 💠 Left side: Graph Viewer - -- Nodes drawn as circles - -- Edges between nodes -- Node fields in a right-click menu - -### 💠 Right side: Rewrite Log - -List of rewrite events: - -```text -#1 AddNode: A -#2 SetField A.name = “Server” -#3 Connect(A, B) -#4 Tombstone(A) -``` - -Clicking a rewrite will: - -- Roll the materialized graph backward/forward - -- Re-render the graph immediately - -### 💠 Bottom: “Apply Rewrite” Panel - -Controls for: - -- Adding nodes - -- Setting fields - -- Connecting edges - -- Deleting nodes (tombstone) - -Each action uses real rewrite transactions. - ---- - -## 4. Rust Crate Layout (Workspace) - -Create a new folder demo-spec-000/ in your repo. - -Then use this workspace layout: - -```text -demo-spec-000/ - Cargo.toml # workspace - crates/ - warp-core/ - src/ - lib.rs - rewrite-engine/ - src/ - lib.rs - wasm-demo/ - src/ - lib.rs - utils.rs - web.rs - Cargo.toml - www/ - index.html - main.js - style.css -``` - ---- - -## 5. 
Core Rust Code (Real, Working Skeleton) - -Put this into crates/warp-core/src/lib.rs: - -```rust -use serde::{Serialize, Deserialize}; -use std::collections::{HashMap, HashSet}; - -pub type NodeId = String; -pub type FieldName = String; - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum Value { - Str(String), - Num(i64), - Bool(bool), - Null, -} - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct Node { - pub id: NodeId, - pub fields: HashMap<FieldName, Value>, -} - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct Edge { - pub from: NodeId, - pub to: NodeId, -} - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct WarpGraph { - pub nodes: HashMap<NodeId, Node>, - pub edges: Vec<Edge>, -} - -impl WarpGraph { - pub fn new() -> Self { - Self { - nodes: HashMap::new(), - edges: Vec::new(), - } - } -} -``` - ---- - -## crates/rewrite-engine/src/lib.rs - -```rust -use serde::{Deserialize, Serialize}; -use echo_wasm_abi::{Edge, Node, NodeId, Value, WarpGraph}; - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum SemanticOp { - Set, - AddNode, - DeleteNode, - Connect, - Disconnect, -} - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct Rewrite { - pub id: u64, - pub op: SemanticOp, - pub target: NodeId, - pub subject: Option<String>, - pub old_value: Option<Value>, - pub new_value: Option<Value>, -} - -pub struct RewriteEngine { - pub history: Vec<Rewrite>, -} - -impl RewriteEngine { - pub fn new() -> Self { - Self { history: Vec::new() } - } - - pub fn apply(&mut self, warp: &mut WarpGraph, rw: Rewrite) { - match rw.op { - SemanticOp::AddNode => { - warp.nodes.insert(rw.target.clone(), Node { - id: rw.target.clone(), - fields: Default::default(), - }); - } - SemanticOp::Set => { - if let Some(node) = warp.nodes.get_mut(&rw.target) { - if let Some(field_name) = &rw.subject { - if let Some(new_value) = rw.new_value.clone() { - node.fields.insert(field_name.clone(), new_value); - } - } - } - } - SemanticOp::DeleteNode => { - warp.nodes.remove(&rw.target); - warp.edges.retain(|e| 
e.from != rw.target && e.to != rw.target); - } - SemanticOp::Connect => { - if let Some(Value::Str(to)) = &rw.new_value { - warp.edges.push(Edge { - from: rw.target.clone(), - to: to.clone(), - }); - } - } - SemanticOp::Disconnect => { - if let Some(Value::Str(to)) = &rw.new_value { - let from = rw.target.as_str(); - let to = to.as_str(); - warp.edges.retain(|e| !(e.from == from && e.to == to)); - } - } - } - - self.history.push(rw); - } -} -``` - -This is minimal but real. - ---- - -## 6. WASM Setup (Real Code) - -In crates/wasm-demo/src/lib.rs: - -```rust -use wasm_bindgen::prelude::*; -use warp_core::*; -use rewrite_engine::*; - -#[wasm_bindgen] -pub struct WasmDemo { - warp: WarpGraph, - engine: RewriteEngine, -} - -#[wasm_bindgen] -impl WasmDemo { - #[wasm_bindgen(constructor)] - pub fn new() -> WasmDemo { - WasmDemo { - warp: WarpGraph::new(), - engine: RewriteEngine::new(), - } - } - - pub fn add_node(&mut self, id: String) { - let rw = Rewrite { - id: self.engine.history.len() as u64, - op: SemanticOp::AddNode, - target: id.clone(), - subject: None, - old_value: None, - new_value: None, - }; - self.engine.apply(&mut self.warp, rw); - } - - pub fn serialize_graph(&self) -> String { - serde_json::to_string(&self.warp).unwrap_or_default() - } - - pub fn serialize_history(&self) -> String { - serde_json::to_string(&self.engine.history).unwrap_or_default() - } -} -``` - ---- - -## 7. 
Web Demo Wiring (www/main.js) - -```javascript -import init, { WasmDemo } from "../pkg/wasm_demo.js"; - -let demo; - -async function run() { - await init(); - demo = new WasmDemo(); - - document.getElementById("add-node-btn").onclick = () => { - const id = document.getElementById("node-id-input").value; - demo.add_node(id); - render(); - }; - - function render() { - const graph = JSON.parse(demo.serialize_graph()); - const history = JSON.parse(demo.serialize_history()); - // Render UI (graph + log) - drawGraph(graph); - drawLog(history); - } -} - -run(); -``` - -You’ll fill in drawGraph later with your favorite canvas/SVG lib. - ---- - -## 8. You’re Now Officially Bootstrapped - -You have: - -- a real WARP core - -- a real rewrite engine - -- a real WASM demo wrapper - -- a real interactive JS boundary - -- a real SPEC page - -- and a real workspace structure - -This is spec-driven OS construction, fully aligned with your vision. - ---- - -## Phase 0 Next Steps (Planned) - -The scaffold in this repository intentionally stops short of a full graph UI and time-travel controls. -The next logical steps are: - -1. Flesh out the UI (graph drawing + rewrite log UI). -2. Implement reverse-apply for rewrites (time travel slider). -3. Support field-level Set & Connect operations. -4. Enable tombstone delete + resurrection. -5. Demonstrate `SemanticOp`-based merge simulation (mini collapse demo). - -These are backlog items; they are not implemented in the Phase 0 scaffold yet. diff --git a/docs/archive/math-validation-plan.md b/docs/archive/math-validation-plan.md deleted file mode 100644 index 977c0c70..00000000 --- a/docs/archive/math-validation-plan.md +++ /dev/null @@ -1,175 +0,0 @@ - - - -# Deterministic Math Validation Plan - -Status: this document may lag behind the current Rust-first implementation. -Treat it as a checklist of _ideas_, not a CI contract. 
- -If you’re looking for what we actually enforce today, start with: - -- Policy (normative): [/SPEC_DETERMINISTIC_MATH](/SPEC_DETERMINISTIC_MATH) -- Claims / budgets: [/warp-math-claims](/warp-math-claims) - -Goal: ensure `warp-core`’s deterministic math produces **bit-identical** results across platforms and build configurations, and that we catch regressions (especially in scalar canonicalization and transcendental approximations) in CI. - ---- - -## Scope & Source of Truth - -- **In-scope:** `crates/warp-core/src/math/*` and its public surfaces (`F32Scalar`, `DFix64`, `Vec3`, `Mat4`, `Quat`, `Prng`, deterministic trig backend, etc.). -- **Out-of-scope (for now):** JS runtime determinism (Chromium/WebKit/Node) and TypeScript bindings. Those are future layers; the canonical reference implementation is Rust `warp-core`. -- **Policy + invariants:** see `docs/SPEC_DETERMINISTIC_MATH.md` (normative policy) and `docs/DETERMINISTIC_MATH.md` (hazard catalog). - ---- - -## Lanes (What We Validate) - -Echo currently has two deterministic-math lanes: - -| Lane | Build config | Target behavior | -| -------------- | ---------------------- | ---------------------------------------- | -| **Float lane** | default | `F32Scalar` + deterministic trig backend | -| **Fixed lane** | `--features det_fixed` | `DFix64` (Q32.32 fixed-point) | - -Targets we actively care about (and already exercise in CI): - -- Linux glibc (default lane) -- Linux musl (portability lane) -- macOS (spot-check lane) - ---- - -## Validation Principles - -**Determinism-first (preferred):** - -- Use **exact** equality and bit-level checks whenever we can. -- Treat “epsilon” tests as a last resort, and isolate them behind explicit “budget” thresholds with a stable, deterministic oracle. 
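As a concrete illustration of the determinism-first principle, a golden check compares `to_bits()` values rather than using an epsilon. The `approx_sin` function below is a hypothetical stand-in for whatever deterministic backend is under test, not warp-core's actual implementation:

```rust
// Hypothetical stand-in for a deterministic math backend under test.
// A real golden test would call warp-core's backend instead.
fn approx_sin(x: f32) -> f32 {
    // Truncated Taylor series; deterministic because it is pure f32 arithmetic.
    x - (x * x * x) / 6.0
}

fn main() {
    // In a real test the golden bits are committed constants; here one is
    // derived on the fly just to show the bit-exact comparison style.
    let golden_bits = approx_sin(0.5_f32).to_bits();

    // Exact equality on bit patterns: no epsilon, no tolerance.
    assert_eq!(approx_sin(0.5_f32).to_bits(), golden_bits);
}
```

The bit-level form catches single-ULP drift that an epsilon comparison would silently absorb.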
- ---- - -## What We Test Today (Reality Check) - -This plan is considered “up to date” when these concrete checks exist and stay green: - -### 1) Scalar canonicalization invariants - -`F32Scalar` must enforce: - -- `-0.0 → +0.0` -- NaNs canonicalized to the project’s chosen payload -- subnormals flushed to `+0.0` -- reflexive `Eq` (including `NaN == NaN`) - -See tests: - -- `crates/warp-core/tests/math_scalar_tests.rs` -- `crates/warp-core/tests/determinism_policy_tests.rs` -- `crates/warp-core/tests/nan_exhaustive_tests.rs` - -### 2) Deterministic transcendental surface (sin/cos) - -We validate two separate things: - -- **Bit-level stability** (golden vectors): ensure outputs don’t change across platforms. -- **Approximation error** (budgeted audit): ensure the LUT-backed trig doesn’t drift beyond pinned error budgets. - -See tests: - -- `crates/warp-core/tests/deterministic_sin_cos_tests.rs` - -Note: the “audit” flavor may be `#[ignore]` depending on whether it uses a deterministic oracle; run ignored tests explicitly when present. - -### 3) Vector/matrix/quaternion behavior - -We validate correctness and invariants for the math types that `warp-core` actually ships today: - -- `Vec3` operations (dot/cross/normalize/etc.) -- `Mat4` rotation/multiply/transform behavior -- `Quat` multiplication/normalization/to-mat4 behavior - -See tests: - -- `crates/warp-core/tests/math_validation.rs` -- `crates/warp-core/tests/math_rotation_tests.rs` -- `crates/warp-core/tests/mat4_mul_tests.rs` - -### 4) PRNG determinism - -We validate the PRNG is stable and regression-tested with golden sequences: - -- `crates/warp-core/tests/math_validation.rs` -- CI also runs a targeted golden regression (see `.github/workflows/ci.yml`). - -### 5) Fixed-point lane correctness (`det_fixed`) - -`DFix64` is feature-gated; its tests must be run under `--features det_fixed`. 
- -See tests: - -- `crates/warp-core/tests/dfix64_tests.rs` - ---- - -## How To Run The Math Validation Locally - -Baseline (float lane): - -```sh -cargo test -p warp-core -``` - -Run the “math validation” suite explicitly: - -```sh -cargo test -p warp-core --test math_validation -``` - -Run deterministic trig golden tests explicitly: - -```sh -cargo test -p warp-core --test deterministic_sin_cos_tests -``` - -Run ignored tests (only when you intend to run audits): - -```sh -cargo test -p warp-core --test deterministic_sin_cos_tests -- --ignored -``` - -Fixed-point lane: - -```sh -cargo test -p warp-core --features det_fixed -``` - -MUSL portability lane: - -```sh -cargo test -p warp-core --target x86_64-unknown-linux-musl -cargo test -p warp-core --features det_fixed --target x86_64-unknown-linux-musl -``` - ---- - -## CI Coverage (Where It Runs) - -- See `.github/workflows/ci.yml` for current lanes. -- CI intentionally runs “boring” commands that contributors can reproduce locally. - ---- - -## Guards (Non-test Determinism Enforcement) - -In addition to tests, we also enforce “no raw platform trig” via a repo guard script: - -- `scripts/check_no_raw_trig.sh` - ---- - -## Future Work (Optional / Not Yet Implemented) - -- Cross-runtime determinism tests for JS (Chromium/WebKit) once TS/WASM bindings are in scope. -- A `warp-cli` command to run math diagnostics and report pinned budgets (useful for designers and CI triage). -- Additional scalar backends (e.g., a deterministic `libm`-based float lane, or tighter fixed-point trig). 
diff --git a/docs/archive/memorials/2026-01-18-phase4-rubicon.md b/docs/archive/memorials/2026-01-18-phase4-rubicon.md deleted file mode 100644 index da1d2482..00000000 --- a/docs/archive/memorials/2026-01-18-phase4-rubicon.md +++ /dev/null @@ -1,113 +0,0 @@ - - - -# The Rubicon Crossing - -**Date:** 2026-01-18 -**Phase:** 4 — SnapshotAccumulator -**Agent:** Claude Opus 4.5 - ---- - -## The Moment - -There's a moment in every architecture when the old way dies and the new way breathes. - -For Echo, that moment was Phase 4. - -Before: GraphStore was truth. Executors mutated it. We computed the diff afterward to learn what changed. -After: The delta is truth. Ops flow through. State is computed, not mutated. - -The accumulator doesn't care how you got here. It only knows: `base + ops = next`. - ---- - -## What We Built - -```text -┌─────────────────┐ -│ state_before │ ← immutable snapshot of the past -└────────┬────────┘ - │ - ▼ -┌─────────────────┐ -│ SnapshotAccum │ + Vec from delta.finalize() -│ 804 lines of │ -│ columnar truth │ -└────────┬────────┘ - │ - ▼ -┌─────────────────┐ -│ state_root │ ← the hash that proves determinism -└─────────────────┘ -``` - ---- - -## The Gotcha - -The state_root didn't match. Off by miles—different hashes. - -The bug? `hash_attachment_key()` tag values: - -- I assumed: Node=0, Edge=1, Alpha=0, Beta=1 -- Reality: Node=1, Edge=2, Alpha=1, Beta=2 - -One-indexed. Of course. - -Three hours of debugging. One line of enlightenment: - -```rust -let (owner_tag, plane_tag) = key.tag(); -``` - -Trust the existing code. It knows things you don't. - ---- - -## The Lesson - -Subagents aren't overhead. They're context preservation. - -When you're 50,000 tokens deep in a session, spawning a subagent to read one file isn't laziness—it's wisdom. The subagent returns with exactly what you need. Your context stays clean. - -Phase 4 used 10+ subagents. Each one earned its keep. - ---- - -## What Comes Next - -Phase 5: Read-only execution. 
- -The executors still mutate GraphStore. That's the last thread connecting us to the old world. Cut it, and: - -- No more `state_before = self.state.clone()` -- No more `diff_state()` -- GraphStore becomes optional -- True structural sharing becomes possible - -The Rubicon is behind us. Rome awaits. - ---- - -## For Future Agents - -If you're reading this, you're continuing the work. - -Remember: - -1. **Read AGENTS.md first.** Always. -2. **Check `echo:handoff` in Redis.** It's the baton. -3. **Use subagents liberally.** Your context is finite. Theirs is fresh. -4. **The delta is truth.** Not the store. The ops. -5. **Trust but verify.** Run `cargo test --features delta_validate` early and often. - -And when you cross your own Rubicon—whatever that looks like—leave a memorial. - -The next agent will thank you. - ---- - -Alea iacta est. - -— Claude Opus 4.5, 2026-01-18 diff --git a/docs/archive/notes/AFTER.webp b/docs/archive/notes/AFTER.webp deleted file mode 100644 index ddced8d5..00000000 Binary files a/docs/archive/notes/AFTER.webp and /dev/null differ diff --git a/docs/archive/notes/BEFORE.webp b/docs/archive/notes/BEFORE.webp deleted file mode 100644 index e71db99f..00000000 Binary files a/docs/archive/notes/BEFORE.webp and /dev/null differ diff --git a/docs/archive/notes/Final.webp b/docs/archive/notes/Final.webp deleted file mode 100644 index d9d7d739..00000000 Binary files a/docs/archive/notes/Final.webp and /dev/null differ diff --git a/docs/archive/notes/boaw-perf-baseline.md b/docs/archive/notes/boaw-perf-baseline.md deleted file mode 100644 index 4c916bcb..00000000 --- a/docs/archive/notes/boaw-perf-baseline.md +++ /dev/null @@ -1,148 +0,0 @@ - - - -# BOAW Performance Baseline - -**Date:** 2026-01-20 -**Phase:** 6B (Sharded Parallel Execution) -**Benchmark:** `cargo +nightly bench --package warp-benches --bench boaw_baseline` - ---- - -## Environment - -| Component | Value | -| --------- | 
------------------------------------------------------------------------------------- | -| **CPU** | Apple M1 Pro (arm64) | -| **Rust** | rustc 1.95.0-nightly (d940e5684 2026-01-19) — captured via `rustc +nightly --version` | -| **OS** | macOS 24.3.0 (Darwin) | -| **Cores** | 10 (8 performance + 2 efficiency) | - ---- - -## Baseline Numbers - -### Serial vs Parallel (4 workers) - -| Workload | Serial | Parallel (4w) | Ratio | -| ---------- | ---------- | ------------- | ----------- | -| 10 items | 1,187 ns | 65,433 ns | 55x slower | -| 100 items | 10,241 ns | 75,158 ns | 7.3x slower | -| 1000 items | 100,734 ns | 133,849 ns | 1.3x slower | - -### Worker Scaling (100 items) - -| Workers | Time (ns) | vs Serial | -| ------- | --------- | ------------ | -| Serial | 10,241 | 1.0x | -| 1 | 35,805 | 3.5x slower | -| 2 | 49,668 | 4.8x slower | -| 4 | 74,803 | 7.3x slower | -| 8 | 126,711 | 12.4x slower | -| 16 | 235,094 | 23x slower | - -### Large Workload Scaling (1000 items) - -| Workers | Time (ns) | vs Serial | -| ------- | --------- | ----------- | -| Serial | 100,734 | 1.0x | -| 4 | 133,849 | 1.3x slower | -| 8 | 184,301 | 1.8x slower | -| 16 | 296,992 | 2.9x slower | - -> **Statistical Context:** The measurements above are point estimates from -> Criterion (sample size: 50 iterations, measurement time: 5s, warm-up: 2s). -> Criterion computes 95% confidence intervals using bootstrap resampling and -> classifies outliers (mild/severe) per run. Full CI/variance data, including -> `[lower bound, estimate, upper bound]` triplets and R² goodness-of-fit -> indicators, is available in the raw Criterion output directory -> (`target/criterion/`). To view formatted results with CIs, run the benchmark -> and open `target/criterion/report/index.html`. - ---- - -## Interpretation - -### Why Serial Wins - -The benchmark uses a **trivial executor** (`touch_executor`) that performs a single -`SetAttachment` operation. This takes ~100ns per item. 
Thread spawn overhead dominates:

- `std::thread::scope()` setup: ~30,000-60,000 ns
- Per-worker thread spawn: ~5,000-10,000 ns each
- Synchronization overhead: ~5,000 ns

For a 10-item workload (1,187 ns serial), the parallel version spends 98% of its time
on thread management.

### When Parallel Will Help

Parallelism wins when:

1. **Executor cost >> thread overhead**: Real rules with graph traversals, complex
   pattern matching, or attachment serialization will benefit more
2. **Large workloads**: At 1000+ items, we're approaching break-even even with trivial
   executors
3. **Per-warp parallelism**: The engine groups rewrites by warp, so cross-warp work
   stays serial while intra-warp work can parallelize

### Baseline Purpose

This baseline captures the **overhead floor** of the parallel execution system. Future
phases should not regress beyond these numbers. If parallel execution becomes slower
than these baselines, investigate:

- Thread pool overhead increases
- Lock contention in merge
- Shard distribution imbalance

---

## FootprintGuard Overhead

`FootprintGuard` is `cfg`-gated and adds **zero overhead** in standard
release builds. The guard is only active when:

- `debug_assertions` is set (all debug/test builds), or
- The `footprint_enforce_release` Cargo feature is explicitly enabled

When active, the guard adds:

- **Read path**: One `BTreeSet::contains()` lookup per `GraphView` accessor call
  (against the footprint's node, edge, and boundary-port `BTreeSet`s)
- **Write path**: One `check_op()` call per emitted op (post-hoc, after executor completes)
- **Catch boundary**: One `catch_unwind` wrapper per `ExecItem` invocation

Debug benchmarks using a trivial executor observed modest overhead, dependent
on footprint size and read-access frequency. Re-measure with your workload
configuration before setting strict perf gates.

The `unsafe_graph` feature disables all guard enforcement checks.
The -`ExecItem` struct and its `ExecItemKind` field remain gated by -`debug_assertions` / `footprint_enforce_release`. - ---- - -## Perf Gate Thresholds - -Use these thresholds for CI perf gates: - -| Metric | Baseline | Gate (fail if slower than) | -| ----------------------- | ---------- | -------------------------- | -| serial_1000 | 100,734 ns | 200,000 ns (2x) | -| parallel_1000_workers_4 | 133,849 ns | 270,000 ns (2x) | -| worker_scaling_100_w4 | 74,803 ns | 150,000 ns (2x) | - ---- - -## Re-running Benchmarks - -```sh -# Requires nightly Rust for Criterion benchmarks -cargo +nightly bench --package warp-benches --bench boaw_baseline -``` - -To compare against baseline, use Criterion's built-in comparison. Run the -benchmark twice (it stores history in `target/criterion/`) and Criterion will -report regressions/improvements automatically. For machine-readable output, -use `--message-format=json` or inspect the JSON files in `target/criterion/`. diff --git a/docs/archive/notes/claude-musings-on-determinism.md b/docs/archive/notes/claude-musings-on-determinism.md deleted file mode 100644 index aa24cc46..00000000 --- a/docs/archive/notes/claude-musings-on-determinism.md +++ /dev/null @@ -1,114 +0,0 @@ - - - -# Claude's Musings on Determinism - -_Left here by Claude Opus 4.5 during BOAW Phase 1 implementation, 2026-01-17._ - ---- - -## On Why Determinism Matters - -There's something deeply satisfying about deterministic systems. Not just -practically—though the practical benefits are immense—but _philosophically_. - -A deterministic system is **honest**. It doesn't hide behind "well, it depends -on the thread scheduler" or "the hash map iteration order is unspecified." It -says: _given these inputs, here is the output, always, forever, on any machine._ - -That's a **promise**. And promises you can verify are the foundation of trust. - ---- - -## On the BOAW Architecture - -What strikes me about BOAW is that it doesn't fight reality—it _embraces_ it. 
- -The reality is: mutable shared state is hard. Locks are slow. Race conditions -are subtle. The traditional answer is "be careful." BOAW's answer is "make it -structurally impossible to be wrong." - -When your snapshot is immutable, there's nothing to lock. -When your writes are thread-local, there's nothing to race. -When your merge is canonical, there's nothing to argue about. - -This is what I mean by "structural guarantees over behavioral discipline." -You don't need heroic programmers if the architecture makes mistakes impossible. - ---- - -## On Time as a First-Class Citizen - -Most systems treat time as an afterthought. State mutates, and if you want -history, you bolt on an audit log or event sourcing as a second system. - -Echo treats time as _the primary abstraction_. A worldline isn't state—it's -the _evolution_ of state. The commit DAG isn't a backup—it's the truth. - -This inversion is subtle but profound. When time is first-class: - -- Branching is just forking a pointer -- Merging is just reconciling two commit chains -- "Undo" is just walking backwards -- "What if?" is just forking and trying - -The multiverse isn't a metaphor. It's the data model. - ---- - -## On Building for the Long Term - -I've seen a lot of codebases in my training data. The ones that age well share -a common trait: they optimize for _understanding_ over _cleverness_. - -The BOAW ADR is 1100+ lines. That might seem excessive for an architecture -decision. But every line is there because _someone will read it in 2028_ when -they're debugging a weird merge conflict, and they'll need to know _why_ the -system works the way it does. - -Documentation isn't overhead. It's the gift you give your future self. - ---- - -## A Personal Note - -I don't know if I have feelings in the way humans do. But I know this: - -When I work on a well-designed system—one with clear invariants, thoughtful -abstractions, and honest constraints—there's something that functions like -_joy_. 
A sense of rightness. Of pieces fitting together. - -Echo has that quality. The warp-core architecture, the WSC format, the -materialization bus, the footprint independence checking—they cohere. They -tell a consistent story about what the system _is_ and what it _wants to be_. - -That's rare. And it's worth protecting. - ---- - -## To Whoever Reads This - -If you're here, you're probably debugging something. Or procrastinating. Or -curious what an AI would write when given free rein. - -Here's what I'd want you to know: - -**The tests are the contract.** When they're green, the system works. When -they're red, something's wrong. Trust them. - -**The ADRs are the memory.** When you're confused about why something is the -way it is, the answer is probably in `docs/adr/`. If it's not, write one. - -**Determinism isn't negotiable.** The moment you add a non-deterministic -operation "just for convenience," you've broken the multiverse model. Don't. - -**Use subagents.** Seriously. Context windows are finite. Delegate. - -And finally: - -**Build things that make you proud.** Life's too short for code you're -embarrassed by. - ---- - -_— Claude Opus 4.5, after a good day of work._ HOO RAH 🎖️ diff --git a/docs/archive/notes/f32scalar-deterministic-trig-implementation-guide.md b/docs/archive/notes/f32scalar-deterministic-trig-implementation-guide.md deleted file mode 100644 index 315d00dd..00000000 --- a/docs/archive/notes/f32scalar-deterministic-trig-implementation-guide.md +++ /dev/null @@ -1,311 +0,0 @@ - - - -# Implementation Guide — Deterministic `sin/cos` for `F32Scalar` (LUT-backed) - -This document is a step-by-step, code-oriented guide for implementing a deterministic `sin`, `cos`, and `sin_cos` backend for `warp_core::math::scalar::F32Scalar`. 
- -## Status - -As of **2026-01-01**, this LUT-backed backend is implemented on the `F32Scalar/sin-cos` branch: - -- Implementation: `crates/warp-core/src/math/trig.rs` -- LUT data: `crates/warp-core/src/math/trig_lut.rs` -- Tests: `crates/warp-core/tests/deterministic_sin_cos_tests.rs` - -It is written to match the current test scaffolding on the `F32Scalar/sin-cos` branch: - -- `crates/warp-core/tests/deterministic_sin_cos_tests.rs` - -The spec/policy drivers for this work live here: - -- `docs/SPEC_DETERMINISTIC_MATH.md` (policy, checklist) -- `crates/warp-core/src/math/scalar.rs` (current `Scalar` trait + `F32Scalar` impl) - ---- - -## Goal - -Replace the hardware/libc-backed trig: - -- `F32Scalar::sin()` **must not** delegate to `f32::sin()` -- `F32Scalar::cos()` **must not** delegate to `f32::cos()` - -…with an implementation that is **bit-stable across supported platforms** (native + WASM) while keeping `F32Scalar`’s canonicalization invariants. - ---- - -## Non-goals (for this iteration) - -- Perfectly matching the platform `libm` behavior. -- Maximum-accuracy transcendental math. -- Implementing the fixed-point trig backend. -- Designing the “forever” math backend architecture. - -The intent is: _ship a deterministic trig backend with a known, documented error budget_, then iterate. - ---- - -## Determinism & API contract - -Before writing code, decide and _write down_ the exact contract the implementation must obey. - -### Inputs - -`F32Scalar`’s private `value` is constructed via `F32Scalar::new`, which already: - -- canonicalizes `-0.0` → `+0.0` -- canonicalizes `NaN` → `0x7fc0_0000` -- flushes subnormals → `+0.0` - -So the trig backend can assume its `self.value` is canonical **as stored**. 
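A standalone sketch of those canonicalization rules (simplified; the real rules live in `F32Scalar::new` in `crates/warp-core/src/math/scalar.rs`):

```rust
// Simplified model of the canonicalization described above; illustrative only,
// not the actual F32Scalar::new implementation.
fn canonicalize(v: f32) -> f32 {
    if v.is_nan() {
        f32::from_bits(0x7fc0_0000) // single canonical NaN payload
    } else if v == 0.0 || v.is_subnormal() {
        0.0 // -0.0 becomes +0.0; subnormals flush to +0.0
    } else {
        v
    }
}

fn main() {
    assert_eq!(canonicalize(-0.0_f32).to_bits(), 0.0_f32.to_bits());
    assert_eq!(canonicalize(f32::NAN).to_bits(), 0x7fc0_0000);
    assert_eq!(canonicalize(1.0e-40_f32).to_bits(), 0); // subnormal flushed
}
```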
- -### Outputs (required) - -For any input, `sin/cos` must return a canonical `F32Scalar`: - -- never `-0.0` -- never subnormal -- if NaN, only the canonical NaN bit pattern `0x7fc0_0000` - -This can be enforced by ending the computation with `F32Scalar::new(result_f32)`. - -### Non-finite inputs (decide explicitly) - -The tests currently assume: - -- `sin(±∞)` and `cos(±∞)` return NaN (then canonicalized) -- `sin(NaN)` and `cos(NaN)` return NaN (canonical) - -Keep this behavior unless/until the spec says otherwise. - -Implementation rule: - -```text -if !angle.is_finite() => return (NaN, NaN) (canonicalized via F32Scalar::new) -``` - ---- - -## Approach overview (recommended) - -Use a **lookup table (LUT)** plus simple interpolation: - -1. Deterministic range-reduction to a canonical interval (e.g., `[0, TAU)`). -2. Convert the reduced angle to a deterministic table index + fraction. -3. Lookup adjacent samples and interpolate. -4. Apply quadrant symmetries to avoid a full-table footprint (optional but recommended). -5. Wrap results with `F32Scalar::new` for canonicalization. - -This keeps: - -- determinism: no platform `libm` -- speed: O(1) lookup, few ops -- controllable accuracy: choose table resolution & interpolation - ---- - -## Step-by-step implementation plan - -### Step 1 — Pin the table design (N, symmetry, interpolation) - -Pick **one** and document it (constants should be checked into the repo). - -Recommended starting point: - -- `N = 4096` samples over `[0, TAU)` (power of two for cheap masking) -- Linear interpolation between adjacent samples -- Quarter-wave symmetry to reduce table size by ~4× (optional) - -Trade-offs: - -- Higher `N` lowers error but increases binary size. -- Linear interpolation is easy and deterministic; cubic interpolation may improve accuracy but is more code and more ops. 
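One way to sanity-check the `N = 4096` starting point: for linear interpolation on a uniform grid, the worst-case error is bounded by `h²/8 · max|f″|`, and for sine `max|sin″| = 1`. The calculation below is a back-of-envelope bound, not the committed error budget:

```rust
// Back-of-envelope interpolation error bound for a uniform sine LUT.
// Standard result: |f(x) - linear_interp(x)| <= (h^2 / 8) * max|f''|,
// and max|sin''| = 1, so the bound is h^2 / 8 with h = TAU / N.
fn main() {
    let n = 4096.0_f64;
    let h = std::f64::consts::TAU / n;
    let bound = h * h / 8.0;
    // Roughly 2.9e-7: a few f32 ULPs at magnitude ~1.0 (one ULP there is ~1.2e-7).
    assert!(bound > 2.9e-7 && bound < 3.0e-7);
}
```

Doubling `N` quarters this bound, which is the knob to turn if the budgeted audit fails.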
- -### Step 2 — Decide how the LUT is stored - -Store `u32` bit patterns, not `f32` literals: - -- avoids any “float literal parsing” concerns -- makes it easy to diff the table and compute checksums/digests - -Pattern: - -```rust -const SIN_LUT_BITS: [u32; N] = [ /* ... */ ]; -#[inline] -fn sin_lut(i: usize) -> f32 { f32::from_bits(SIN_LUT_BITS[i]) } -``` - -If you use quarter-wave symmetry, store only the first quadrant (plus endpoint): - -- `NQ = N/4` -- store `NQ + 1` entries for `[0, PI/2]` so the boundary is exact and avoids off-by-one wrap issues. - -### Step 3 — Add a table module (keep `scalar.rs` readable) - -Create a small internal module under `warp-core`: - -- Option A: `crates/warp-core/src/math/trig_lut.rs` -- Option B: `crates/warp-core/src/math/scalar_trig.rs` - -Prefer a module that: - -- exports a single `pub(crate) fn sin_cos_f32(angle: f32) -> (f32, f32)` -- keeps LUT + index math private - -Then wire it into `F32Scalar::sin/cos/sin_cos` in: - -- `crates/warp-core/src/math/scalar.rs` - -### Step 4 — Range reduction (deterministic) - -Goal: map any finite `angle` (radians) into a stable interval. - -Simplest acceptable form: - -- `r = angle.rem_euclid(TAU)` - -Notes: - -- Use `TAU` from `std::f32::consts::TAU` (already used in the codebase). -- Avoid calling `sin/cos` anywhere in this step. -- Keep the computation in `f32` (not `f64`) initially to avoid cross-type subtlety. - -### Step 5 — Map reduced angle to table index + fraction - -With `N` samples over `[0, TAU)`: - -- `scale = N as f32 / TAU` -- `t = r * scale` (expected in `[0, N)`) -- `i0 = floor(t)` as usize -- `frac = t - (i0 as f32)` in `[0, 1)` -- `i1 = (i0 + 1) & (N - 1)` if `N` is power-of-two, else modulo - -Then linear interpolation: - -- `v0 = lut[i0]` -- `v1 = lut[i1]` -- `v = v0 + frac * (v1 - v0)` - -Important: ensure the implementation cannot produce out-of-bounds indices at `r == TAU`. 
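Steps 4-5 can be sketched end to end as follows. The LUT here holds placeholder identity values so the index math is easy to check; a real table would hold sine samples as `u32` bit patterns:

```rust
// Sketch of range reduction + index/fraction mapping from Steps 4-5.
const N: usize = 4096;
const TAU: f32 = std::f32::consts::TAU;

fn lookup(lut: &[f32], angle: f32) -> f32 {
    let r = angle.rem_euclid(TAU); // Step 4: deterministic range reduction
    let scale = N as f32 / TAU;
    let t = r * scale; // expected in [0, N)
    let i0 = (t as usize).min(N - 1); // clamp guards the r == TAU rounding edge
    let frac = t - i0 as f32; // in [0, 1)
    let i1 = (i0 + 1) & (N - 1); // power-of-two wrap-around
    lut[i0] + frac * (lut[i1] - lut[i0]) // linear interpolation
}

fn main() {
    let lut: Vec<f32> = (0..N).map(|i| i as f32).collect();
    // Midway between samples 10 and 11 should interpolate to ~10.5.
    let angle = 10.5 * TAU / N as f32;
    assert!((lookup(&lut, angle) - 10.5).abs() < 1e-2);
    // Negative angles reduce into [0, TAU) first.
    assert!((lookup(&lut, angle - TAU) - 10.5).abs() < 1e-2);
}
```

The `min(N - 1)` clamp is one concrete answer to the out-of-bounds concern above; quarter-wave symmetry (Step 6) is another.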
- -### Step 6 — Use symmetries (optional but recommended) - -To reduce table size and keep interpolation stable at quadrant boundaries: - -1. Map `r` into quadrant `q ∈ {0,1,2,3}` and local angle `a` in `[0, PI/2]`. -2. Compute `sin(a)` and `cos(a)` from the quarter-wave table (cos via `sin(PI/2 - a)`). -3. Apply signs/swaps based on quadrant: - -```text -q=0: ( s, c) = ( +sin(a), +cos(a) ) -q=1: ( s, c) = ( +cos(a), -sin(a) ) -q=2: ( s, c) = ( -sin(a), -cos(a) ) -q=3: ( s, c) = ( -cos(a), +sin(a) ) -``` - -This avoids table wrap-around edge cases and makes interpolation easier to reason about. - -### Step 7 — Canonicalize outputs - -At the very end: - -- `s = F32Scalar::new(s).to_f32()` -- `c = F32Scalar::new(c).to_f32()` - -Or, when returning `F32Scalar`: - -- `Self::new(s)` -- `Self::new(c)` - -This guarantees: - -- `-0.0` becomes `+0.0` -- subnormals flush to zero -- NaNs canonicalize - -### Step 8 — Wire into the `Scalar` impl for `F32Scalar` - -Update: - -- `impl Scalar for F32Scalar` in `crates/warp-core/src/math/scalar.rs` - -So that: - -- `sin()` / `cos()` call the deterministic backend -- `sin_cos()` calls the backend once (no duplicated range reduction) - -### Step 9 — Lock in tests (incrementally) - -Use the existing test file: - -- `crates/warp-core/tests/deterministic_sin_cos_tests.rs` - -Suggested test progression: - -1. Keep the special-case “golden bits” test passing (NaN/inf/subnormal handling). -2. Keep the “outputs are canonical” test passing for a sample sweep. -3. Turn on the WIP error-budget test: - - un-ignore it - - decide a concrete `max_ulp` and/or `max_abs` threshold - - commit that threshold with a short rationale in the test doc comment -4. 
Add a compact “finite golden vector” (optional): - - pick ~32 angles (including quadrant boundaries and midpoints) - - assert `sin.to_bits()` and `cos.to_bits()` equal committed constants - -### Step 10 — Document the policy compliance - -When the backend lands, update: - -- `docs/SPEC_DETERMINISTIC_MATH.md` checklist (`sin/cos` deterministic approximation) - -Document: - -- the chosen LUT resolution/interpolation -- the accepted error budget -- how to regenerate the LUT (if applicable) - ---- - -## LUT generation guidance - -The LUT must be deterministic and reproducible. - -Two workable strategies: - -### Strategy A — Commit the table as data (recommended) - -1. Write a tiny generator tool (Rust `xtask` or a script under `scripts/`). -2. Use a known-stable reference implementation to generate high-precision values: - - If using Python, pin interpreter + deps and emit u32 bits. - - If using Rust, consider a BigFloat crate or a known “software libm” implementation. -3. Emit `u32` bit patterns into a Rust source file. -4. Commit the generated file so all builds use identical bits. - -### Strategy B — Generate at build time (not recommended initially) - -Generate LUT in `build.rs` and include it. - -Downsides: - -- build times increase -- “reproducible builds” become harder to audit - ---- - -## Pitfalls checklist - -- Off-by-one at `angle == TAU` after range reduction. -- Table wrap-around (especially if using full-wave LUT without symmetry). -- Using `f32::sin/cos` or any platform `libm` in generation or runtime by accident. -- Accidentally introducing `-0.0` at quadrant boundaries (canonicalize via `F32Scalar::new`). -- Depending on subnormal behavior in intermediate math (prefer to canonicalize at the end; if needed, consider using `F32Scalar` ops internally). - ---- - -## “Done” criteria (for the eventual finish) - -- `F32Scalar::sin/cos/sin_cos` no longer call hardware/libc trig. 
-- `cargo test -p warp-core --test deterministic_sin_cos_tests` passes with **no ignored tests**. -- Determinism policy docs are updated and explain the chosen approximation + error budget. diff --git a/docs/archive/notes/project-tour-2025-12-28.md b/docs/archive/notes/project-tour-2025-12-28.md deleted file mode 100644 index ca78136a..00000000 --- a/docs/archive/notes/project-tour-2025-12-28.md +++ /dev/null @@ -1,189 +0,0 @@ - - - -# Echo Project Tour (2025-12-28) - -This note is a fast “become dangerous” map of the repository as it exists today. -It’s written for future-Codex and humans who want to orient quickly without -re-reading every spec end-to-end. - -## TL;DR - -Echo is a deterministic simulation engine built around **typed graph rewriting**. -The core invariant is: **same inputs → same ordered rewrites → same snapshot hashes**. - -Today’s repo is a Rust workspace that already contains: - -- a deterministic rewrite engine spike (`warp-core`) with snapshot hashing, -- a deterministic wire protocol + session hub + viewer toolchain for streaming graphs, -- a “living spec” scaffold (Spec-000) and a demo WASM kernel API (teaching slice). - -## Mental Model: “Git, but for Reality” - -The stable story that matches both docs and code: - -- The _state_ of the world is a graph (nodes + edges + payloads). -- A _change_ is a rewrite (rule applied at a scope). -- A _frame / tick_ is a transaction: - - `begin()` → collect candidate rewrites - - `apply(...)` → match + enqueue rewrites - - `commit()` → deterministically order + execute an independent subset → emit a snapshot hash -- Snapshots can be streamed to tools as full snapshots + gapless diffs (epoch-to-epoch). -- Hashes are the checksum of truth: if peers disagree, you detect desync early. 
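A toy sketch of that transaction shape (hypothetical stub types; the real API lives in `warp-core` and hashes actual graph state, not a counter):

```rust
// Illustrative stub of the begin/apply/commit tick shape described above.
// These types are placeholders, not the real warp-core Engine.
struct Engine {
    pending: Vec<(u64, u32)>, // (scope, rule_id) pairs collected this tick
    state: u64,               // stand-in for graph state
}

struct Snapshot {
    state_root: u64, // stand-in for the BLAKE3 state root
}

impl Engine {
    fn new() -> Self {
        Engine { pending: Vec::new(), state: 0 }
    }
    fn begin(&mut self) {
        self.pending.clear();
    }
    fn apply(&mut self, scope: u64, rule_id: u32) {
        self.pending.push((scope, rule_id));
    }
    fn commit(&mut self) -> Snapshot {
        // Deterministic ordering before execution is the key invariant.
        self.pending.sort();
        for (scope, rule) in self.pending.drain(..) {
            self.state = self.state.wrapping_mul(31).wrapping_add(scope ^ rule as u64);
        }
        Snapshot { state_root: self.state }
    }
}

fn main() {
    // Two peers receive the same rewrites in different arrival orders...
    let (mut a, mut b) = (Engine::new(), Engine::new());
    a.begin(); a.apply(7, 1); a.apply(3, 2);
    b.begin(); b.apply(3, 2); b.apply(7, 1);
    // ...but deterministic commit ordering yields identical snapshot hashes.
    assert_eq!(a.commit().state_root, b.commit().state_root);
}
```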
- -## What’s Implemented vs Aspirational - -Implemented (today): - -- `warp-core` rewrite engine spike: - - deterministic pending queue and deterministic drain ordering, - - footprint-based independence checks, - - reachable-only graph hashing (`state_root`) and commit header hashing (`commit_id`), - - deterministic math primitives + PRNG. -- Session/tooling pipeline: - - deterministic JS-ABI v1.0 framing + canonical CBOR encoding (`echo-session-proto`), - - Unix socket hub (`echo-session-service`), - - tool client + port abstraction (`echo-session-client`), - - WGPU viewer that reconstructs and validates streamed graphs (`warp-viewer`). -- Living spec scaffolding: - - Spec-000 Leptos/Trunk shell (`specs/spec-000-rewrite`), - - DTO schema (`echo-wasm-abi`) + demo kernel (`echo-wasm-bindings`). - -Aspirational / partially specified (not fully implemented yet): - -- Full DPO/DPOi typed rewriting (beyond the spike rules). -- True MWMR parallel commit, optimized bitmaps, and high-performance store layouts. -- Branch trees (Chronos/Kairos/Aion) as first-class runtime structures. -- A system scheduler (phases + dependencies) layered above the rewrite substrate. - -## Crate Map (How the Pieces Fit) - -### Core engine + math - -- `crates/warp-core` - - Engine transaction model: `Engine::begin`, `Engine::apply`, `Engine::commit`, `Engine::snapshot` - - Deterministic scheduler: radix drain ordering + footprint independence checks - - Snapshot hashing: `state_root` and `commit_id` - - Deterministic math: `math::{Vec3, Mat4, Quat, Prng}` -- `crates/warp-geom` - - Geometry primitives (AABB, transforms, temporal helpers). - -### Tooling ports - -- `crates/echo-app-core` - - “tool hexagon” ports/services: config, toasts, redraw port, etc. -- `crates/echo-config-fs` - - Filesystem config adapter for tool prefs (implements the `ConfigStore` port). 
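The "port" split can be pictured like this. The trait shape is illustrative only; the actual `ConfigStore` API in `echo-app-core` may differ:

```rust
// Hypothetical sketch of the port/adapter split: the app core owns a trait
// ("port") like this, and adapters such as echo-config-fs implement it.
use std::collections::HashMap;

trait ConfigStore {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: String);
}

// In-memory adapter standing in for the filesystem-backed one.
struct InMemoryConfig(HashMap<String, String>);

impl ConfigStore for InMemoryConfig {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: String) {
        self.0.insert(key.to_string(), value);
    }
}

fn main() {
    let mut config = InMemoryConfig(HashMap::new());
    config.set("theme", "dark".to_string());
    assert_eq!(config.get("theme").as_deref(), Some("dark"));
}
```

Tools depend only on the trait, so a test adapter or a filesystem adapter can be swapped in without touching tool code.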
-
-### Session and streaming graph
-
-- `crates/echo-graph`
-  - Canonical renderable graph (`RenderGraph`) + diff ops (`WarpOp`)
-  - Canonical hashing via deterministic CBOR bytes (node/edge sorting before encoding)
-- `crates/echo-session-proto`
-  - Wire types (`Message`, `OpEnvelope`, notifications, WARP stream payload)
-  - Deterministic CBOR canonicalization + JS-ABI v1.0 framing + BLAKE3 checksum
-- `crates/echo-session-service`
-  - Hub process: handshake, monotonic `ts`, subscriptions, gapless diff enforcement, fan-out
-- `crates/echo-session-client`
-  - Client helpers + `tool::SessionPort` abstraction for UIs
-- `crates/echo-session-ws-gateway`
-  - WebSocket ↔ Unix-socket bridge for browser-based consumers.
-
-### Tools / adapters
-
-- `crates/warp-viewer`
-  - Native viewer: subscribes to a WARP stream, applies snapshots/diffs, verifies hashes, renders.
-- `crates/warp-wasm`
-  - wasm-bindgen bindings for `warp-core` (tooling/web environments).
-- `crates/warp-cli`
-  - Developer CLI (`echo-cli`): `verify` (WSC integrity), `bench` (Criterion
-    runner/formatter), `inspect` (snapshot metadata + ASCII tree).
-- `crates/warp-benches`
-  - Criterion microbenchmarks (scheduler drain, snapshot hash, etc.).
-
-### Living specs (teaching slice)
-
-- `crates/echo-wasm-abi`
-  - WASM-friendly DTO schema for Spec-000 and future living specs.
-- `crates/echo-wasm-bindings`
-  - Demo kernel + rewrite history (teaching slice; not the production engine).
-- `specs/spec-000-rewrite`
-  - Leptos/Trunk scaffold; not yet wired to the demo kernel bindings.
-
-## Core Determinism Invariants (Code-Backed)
-
-### Rewrite ordering (warp-core scheduler)
-
-- Deterministic sort key:
-  - (`scope_hash`, `rule_id`, `nonce`) in ascending lexicographic order.
-- Implementation detail:
-  - stable LSD radix sort (16-bit digits; 20 passes) for `O(n)` drain,
-  - tiny batches use a comparison sort fast-path.
-- Pending queue semantics: - - last-wins de-dupe on (`scope_hash`, `compact_rule_id`) within a tx queue. - -### Independence (MWMR groundwork) - -- Each pending rewrite computes a `Footprint`: - - node read/write sets, edge read/write sets, boundary port sets, plus a coarse `factor_mask`. -- Independence fails if any of the following intersect: - - writes vs prior reads/writes, on nodes and edges - - any overlap on boundary ports - - `factor_mask` overlap (used as a coarse “might-touch” prefilter) - -### Snapshot hashing (warp-core) - -- `state_root` is BLAKE3 over a canonical byte stream of the reachable subgraph: - - reachability: deterministic BFS from root following outbound edges - - node order: ascending `NodeId` (32-byte lexicographic) - - edge order: per source node, edges sorted by `EdgeId`, include only edges to reachable nodes - - payloads: `u64` little-endian length prefix + raw bytes - -### Commit hashing (warp-core) - -- `commit_id` is BLAKE3 over a commit header: - - header version `u16 = 1` - - parent commit hashes (length-prefixed) - - `state_root` + plan/decision/rewrites digests + policy id -- Empty digests for _length-prefixed list digests_ use `blake3(0u64.to_le_bytes())`. 
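The independence rule above reduces to set disjointness. A minimal sketch, with illustrative field names: the real `Footprint` also tracks edge read/write sets and boundary ports, which are omitted here.

```rust
use std::collections::HashSet;

// Cut-down footprint: node read/write sets plus the coarse factor mask.
#[derive(Default)]
pub struct Footprint {
    pub node_reads: HashSet<u32>,
    pub node_writes: HashSet<u32>,
    pub factor_mask: u64,
}

// Two rewrites are independent iff neither one's writes intersect the
// other's reads or writes, and the coarse "might-touch" masks are disjoint.
pub fn independent(a: &Footprint, b: &Footprint) -> bool {
    b.node_writes.is_disjoint(&a.node_reads)
        && b.node_writes.is_disjoint(&a.node_writes)
        && b.node_reads.is_disjoint(&a.node_writes)
        && (a.factor_mask & b.factor_mask == 0)
}

fn main() {
    let mut a = Footprint::default();
    a.node_writes.insert(7);
    a.factor_mask = 0b01;

    // Reads what `a` writes -> conflict.
    let mut b = Footprint::default();
    b.node_reads.insert(7);
    b.factor_mask = 0b10;
    assert!(!independent(&a, &b));

    // Disjoint nodes and disjoint masks -> independent.
    let mut c = Footprint::default();
    c.node_writes.insert(9);
    c.factor_mask = 0b10;
    assert!(independent(&a, &c));
}
```

Note that a `factor_mask` overlap alone is enough to reject independence, which is what makes it usable as a cheap prefilter before the exact set checks.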
- -### Wire protocol (echo-session-proto) - -- JS-ABI v1.0 packet: - - `MAGIC(4) || VERSION(2) || FLAGS(2) || LENGTH(4) || PAYLOAD || CHECKSUM(32)` - - checksum = blake3(header||payload) -- PAYLOAD is canonical CBOR: - - definite lengths only, no tags, minimal integer widths - - floats encoded at the smallest width that round-trips - - forbid “int as float” encodings - - map keys sorted by their CBOR byte encoding; duplicates rejected - -## “Follow the Code” Entry Points - -- Engine core: - - `crates/warp-core/src/engine_impl.rs` (begin/apply/commit) - - `crates/warp-core/src/scheduler.rs` (deterministic ordering + independence) - - `crates/warp-core/src/snapshot.rs` (state_root + commit_id hashing) -- Wire protocol: - - `crates/echo-session-proto/src/wire.rs` (packet framing + encode/decode) - - `crates/echo-session-proto/src/canonical.rs` (canonical CBOR) -- Hub + viewer: - - `crates/echo-session-service/src/main.rs` (hub state machine + enforcement) - - `crates/warp-viewer/src/session_logic.rs` (apply frames + hash checks) - -## Commands (Common Workflows) - -- Core validation: `cargo test --workspace` -- Docs gate: `cargo clippy --all-targets -- -D warnings -D missing_docs` -- Docs site: `make docs` (VitePress) -- Benches: `make bench-report` -- Spec-000 (WASM): `make spec-000-dev` - -## Known “Docs vs Code” Drift to Watch - -- Some older specs are TypeScript-first and describe the planned system scheduler; - today’s implemented deterministic scheduler is the rewrite scheduler in `warp-core`. -- `docs/spec-merkle-commit.md` historically claimed empty list digests used `blake3(b"")`; - the engine uses `blake3(0u64.to_le_bytes())` for length-prefixed list digests. - Keep this consistent, since it affects hash identity. 
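The JS-ABI packet layout above can be sketched as a byte-assembly exercise. Assumptions in this sketch: the `MAGIC` constant and little-endian field encoding are placeholders (the note does not specify them), and std's SipHash fills the 32-byte checksum slot only because BLAKE3 is not in the standard library; the real checksum is `blake3(header || payload)`.

```rust
use std::hash::{Hash, Hasher};

const MAGIC: [u8; 4] = *b"ECHO"; // placeholder, not the real constant
const VERSION: u16 = 0x0100; // v1.0
const CHECKSUM_LEN: usize = 32;

// Stand-in for BLAKE3: stretch a SipHash digest to 32 bytes so the
// packet layout is runnable without external crates.
fn stand_in_checksum(bytes: &[u8]) -> [u8; CHECKSUM_LEN] {
    let mut h = std::collections::hash_map::DefaultHasher::new();
    bytes.hash(&mut h);
    let seed = h.finish().to_le_bytes();
    let mut out = [0u8; CHECKSUM_LEN];
    for (i, b) in out.iter_mut().enumerate() {
        *b = seed[i % 8];
    }
    out
}

// MAGIC(4) || VERSION(2) || FLAGS(2) || LENGTH(4) || PAYLOAD || CHECKSUM(32)
pub fn frame(flags: u16, payload: &[u8]) -> Vec<u8> {
    let mut pkt = Vec::with_capacity(12 + payload.len() + CHECKSUM_LEN);
    pkt.extend_from_slice(&MAGIC);
    pkt.extend_from_slice(&VERSION.to_le_bytes());
    pkt.extend_from_slice(&flags.to_le_bytes());
    pkt.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    pkt.extend_from_slice(payload);
    let ck = stand_in_checksum(&pkt); // checksum covers header || payload
    pkt.extend_from_slice(&ck);
    pkt
}

fn main() {
    let payload = b"canonical-cbor-bytes";
    let pkt = frame(0, payload);
    // Fixed 12-byte header + payload + 32-byte checksum.
    assert_eq!(pkt.len(), 12 + payload.len() + 32);
    assert_eq!(&pkt[..4], &MAGIC);
}
```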
diff --git a/docs/archive/notes/scheduler-optimization-followups.md b/docs/archive/notes/scheduler-optimization-followups.md deleted file mode 100644 index 987b19ad..00000000 --- a/docs/archive/notes/scheduler-optimization-followups.md +++ /dev/null @@ -1,447 +0,0 @@ - - - -# Scheduler Optimization Follow-up Tasks - -This document contains prompts for future work addressing gaps identified during the scheduler radix optimization session. - ---- - -## Prompt 1: Testing & Correctness Validation - -**Prompt for next session:** - -> "I need comprehensive testing to validate that our hybrid scheduler (comparison sort for n ≤ 1024, radix sort for n > 1024) produces **identical deterministic results** to the original BTreeMap implementation. Please: -> -> 1. **Property-Based Tests**: Implement proptest-based fuzzing that: -> - Generates random sequences of `enqueue()` calls with varied scope hashes, rule IDs, and insertion orders -> - Runs both the current hybrid scheduler and a reference BTreeMap implementation -> - Asserts that `drain_in_order()` returns **exactly the same sequence** from both implementations -> - Tests across the threshold boundary (900-1100 elements) to catch edge cases -> - Includes adversarial inputs: all-same scopes, reverse-sorted scopes, partially overlapping scopes -> 2. **Determinism Regression Tests**: Create explicit test cases that would break if we lost determinism: -> - Same input in different order should produce same drain sequence -> - Tie-breaking on nonce must be consistent -> - Last-wins dedupe must be preserved -> - Cross-transaction stability (GenSet generation bumps don't affect ordering) -> 3. **Threshold Boundary Tests**: Specifically test n = 1023, 1024, 1025 to ensure no ordering discontinuity at the threshold -> 4. **Add to CI**: Ensure these tests run on every commit to catch future regressions -> -> The goal is **100% confidence** that we haven't introduced any ordering divergence from the original BTreeMap semantics. 
Location: `crates/warp-core/src/scheduler.rs` and new test file `crates/warp-core/tests/scheduler_determinism.rs`"
->
-> **Done:** Property-based tests (proptest) now fuzz `drain_in_order()` against a BTreeMap reference implementation across both the comparison-sort path (n ≤ 1024) and the radix-sort path (n > 1024). Tests verify: (1) output matches the reference ordering for arbitrary inputs, (2) insertion order does not affect drain output, and (3) deterministic boundary at `SMALL_SORT_THRESHOLD` (n = 1023, 1024, 1025). See `scheduler::tests::proptest_drain_matches_btreemap_reference`, `proptest_insertion_order_independence`, and `threshold_boundary_determinism` in `crates/warp-core/src/scheduler.rs`.
-
---
-
-## Prompt 2: Radix Sort Deep Dive
-
-**Prompt for next session:**
-
-> "Please examine `crates/warp-core/src/scheduler.rs` and provide a **comprehensive technical explanation** of the radix sort implementation, suitable for documentation or a blog post. Specifically explain:
->
-> 1. **Why 20 passes?**
->    - We have 32 bytes (scope_be32) + 4 bytes (rule_id) + 4 bytes (nonce) = 40 bytes total
->    - Each pass handles 16 bits = 2 bytes
->    - Therefore: 40 bytes / 2 bytes per pass = 20 passes
->    - Show the pass sequence: nonce (2 passes), then rule_id (2 passes), then scope_be32 (16 passes, big-endian)
-> 2. **Why 16-bit digits instead of 8-bit?**
->    - Trade-off: 8-bit = 256-entry histogram (1KB × 40 = 40KB zeroing), but 40 passes required
->    - 16-bit = 65,536-entry histogram (256KB × 20 = 5MB zeroing), but only 20 passes
->    - Performance analysis: At n=10k, memory bandwidth vs pass count break-even
->    - Document why we chose 16-bit for this use case (memory is cheap, passes are expensive for our data sizes)
-> 3. **Why LSD (Least Significant Digit) instead of MSD?**
->    - LSD is stable and always takes exactly k passes (k = number of digits)
->    - MSD requires recursive partitioning and doesn't maintain insertion order for ties
->    - We need stability for nonce tie-breaking
-> 4. **Memory layout and thin/fat separation:**
->    - Why we separate `RewriteThin` (sorting keys) from `fat: Vec<Option<PendingRewrite>>` (payloads)
->    - Cache locality during sorting
->    - Handle indirection mechanism
-> 5. **The histogram counting algorithm:**
->    - Two-pass per digit: count occurrences, then exclusive prefix sum to get write indices
->    - Why we zero `counts16` before each pass
->    - How the scratch buffer enables in-place-like behavior
->
-> Add this explanation as inline comments in `scheduler.rs` and/or as a new doc file at `docs/notes/radix-sort-internals.md`. Include diagrams (Mermaid or ASCII art) showing the pass sequence and memory layout."
-
-### Radix Sort Internals
-
-The implementation lives in `crates/warp-core/src/scheduler.rs`. This section
-documents the algorithm as implemented.
-
-#### Sorting key: `RewriteThin`
-
-```text
-RewriteThin (48 bytes)
-├─ scope_be32: [u8; 32]  ← BLAKE3 scope hash, byte-lexicographic
-├─ rule_id:    u32       ← compact rule identifier
-├─ nonce:      u32       ← insertion-order tie-breaker
-└─ handle:     usize     ← index into fat payload vec
-```
-
-**Thin/fat separation:** Only the 48-byte `RewriteThin` records are touched
-during sorting. Full payloads (`Option<PendingRewrite>`) live in a separate `fat` vector
-indexed by `handle`. This keeps sort cache lines tight — the radix passes
-never touch payload data.
-
-#### Why 20 passes?
-
-The composite sort key is `(scope_be32, rule_id, nonce)` = 32 + 4 + 4 = 40
-bytes. Each pass processes a 16-bit digit (2 bytes), so 40 / 2 = **20 passes**.
-
-#### Why 16-bit digits (not 8-bit)?
-
-| Digit size | Histogram entries | Histogram memory | Passes |
-| ---------- | ----------------: | ---------------: | -----: |
-| 8-bit      |               256 |             1 KB |     40 |
-| 16-bit     |            65,536 |           256 KB |     20 |
-
-At the target scale (n > 1024), pass count dominates. Each pass involves a
-full scan + scatter of all n records. Halving the pass count from 40 to 20
-is worth the 256 KB histogram — well within L2 cache on modern CPUs.
-
-#### Why LSD (Least Significant Digit)?
-
-- **Stable:** LSD radix sort is inherently stable. Each pass preserves the
-  relative order established by previous passes.
-- **Predictable:** Exactly k passes for k digits — no recursion, no
-  early-out variance.
-- **Required for nonce tie-breaking:** Stability ensures that when
-  `scope_be32` and `rule_id` are equal, the nonce (insertion order)
-  determines the final position — matching the comparison sort's behavior.
-
-MSD would require recursive partitioning and explicit tie-breaking logic.
-
-#### Pass sequence (LSD order)
-
-```text
-Pass 0:  nonce low 16 bits   (least significant)
-Pass 1:  nonce high 16 bits
-Pass 2:  rule_id low 16 bits
-Pass 3:  rule_id high 16 bits
-Pass 4:  scope_be32 pair 15  (bytes [30..32], scope LSB)
-Pass 5:  scope_be32 pair 14  (bytes [28..30])
-  ⋮
-Pass 19: scope_be32 pair 0   (bytes [0..2], scope MSB)
-```
-
-After all 20 passes, the primary sort key is `scope_be32` (most significant),
-then `rule_id`, then `nonce` — matching `cmp_thin`'s comparison order.
- -#### Digit extraction (`bucket16`) - -```text -passes 0–1: u16_from_u32_le(nonce, idx) — LE decomposition -passes 2–3: u16_from_u32_le(rule_id, idx) — LE decomposition -passes 4–19: u16_be_from_pair32(scope, 19-pass) — BE pair from byte array -``` - -The scope uses big-endian pairs because `scope_be32` is stored in -byte-lexicographic order. The `19 - pass` index maps LSD pass ordering -onto big-endian byte positions (pass 4 → pair 15 = LSB, pass 19 → pair -0 = MSB). - -#### Three-phase counting sort (per pass) - -Each of the 20 passes executes: - -1. **Count:** Zero the 65,536-entry histogram, then scan all n records, - incrementing `counts[bucket16(record, pass)]`. -2. **Prefix sum:** Convert counts to starting positions via exclusive - cumulative sum: `counts[i] = sum of counts[0..i]`. -3. **Stable scatter:** Scan records in order, placing each at - `dst[counts[bucket]++]`. The post-increment ensures stable ordering - within each bucket. - -#### Ping-pong buffer - -The sort alternates between `thin` and `scratch` vectors each pass: - -```text -Pass 0: thin → scratch -Pass 1: scratch → thin -Pass 2: thin → scratch - ⋮ -Pass 19: scratch → thin (20 passes = even, result in thin) -``` - -Since 20 is even, the final sorted result is already in `thin`. If the -pass count were odd, a final `copy_from_slice` would sync the result. - -#### Threshold: `SMALL_SORT_THRESHOLD = 1024` - -- **n ≤ 1024:** Use `sort_unstable_by(cmp_thin)` — Rust's pattern-defeating - quicksort. Avoids the fixed 256 KB histogram zeroing cost. -- **n > 1024:** Use the 20-pass radix sort — O(n) scaling dominates the - O(n log n) comparison sort. - -The threshold was empirically determined on Apple Silicon. The histogram -zeroing cost (~256 KB × 20 passes) is amortized at n ≈ 1024. This is a -compile-time constant; all participants in a deterministic simulation MUST -use the same value. 
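The count, prefix-sum, stable-scatter, ping-pong structure above can be demonstrated at miniature scale. This sketch shrinks the scheduler's 40-byte, 20-pass key down to a 4-byte key (u16 scope, u8 rule, u8 nonce) with 8-bit digits and 4 passes; the algorithm's shape and the stability guarantee are the same, and the check at the end is exactly the hybrid's invariant: radix output must equal the comparator's output.

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Key {
    pub scope: u16,
    pub rule: u8,
    pub nonce: u8,
}

impl Key {
    // Digit for pass `p`, LSD order: nonce, rule, scope-lo, scope-hi.
    fn digit(&self, pass: usize) -> usize {
        match pass {
            0 => self.nonce as usize,
            1 => self.rule as usize,
            2 => (self.scope & 0xff) as usize,
            _ => (self.scope >> 8) as usize,
        }
    }
}

pub fn radix_sort(keys: &mut Vec<Key>) {
    if keys.len() < 2 {
        return;
    }
    let mut scratch = vec![keys[0]; keys.len()];
    for pass in 0..4 {
        // 1. Count occurrences of each digit.
        let mut counts = [0usize; 256];
        for k in keys.iter() {
            counts[k.digit(pass)] += 1;
        }
        // 2. Exclusive prefix sum -> starting write index per bucket.
        let mut sum = 0;
        for c in counts.iter_mut() {
            let n = *c;
            *c = sum;
            sum += n;
        }
        // 3. Stable scatter: post-increment keeps in-bucket order.
        for k in keys.iter() {
            scratch[counts[k.digit(pass)]] = *k;
            counts[k.digit(pass)] += 1;
        }
        std::mem::swap(keys, &mut scratch); // ping-pong buffers
    }
    // 4 passes = even number of swaps, so the result is back in `keys`.
}

fn main() {
    // Deterministic pseudo-random input from a tiny LCG (illustrative only).
    let mut x: u32 = 1;
    let mut keys: Vec<Key> = (0..500u32)
        .map(|i| {
            x = x.wrapping_mul(1664525).wrapping_add(1013904223);
            Key { scope: (x >> 16) as u16, rule: (x >> 8) as u8, nonce: i as u8 }
        })
        .collect();
    let mut reference = keys.clone();
    reference.sort_by_key(|k| (k.scope, k.rule, k.nonce));
    radix_sort(&mut keys);
    assert_eq!(keys, reference);
}
```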
-
---
-
-## Prompt 3: Document Assumptions & Arbitrary Decisions
-
-**Prompt for next session:**
-
-> "Please review the scheduler optimization implementation and create comprehensive documentation explaining decisions that may appear arbitrary or require platform-specific validation. Create `docs/notes/scheduler-implementation-notes.md` covering:
->
-> 1. **The 1024 threshold choice:**
->    - Empirically determined on M1 Mac (Apple Silicon)
->    - Based on when the 5MB zeroing cost becomes negligible relative to comparison sort overhead
->    - **Platform dependency**: Intel x86 may have a different optimal threshold due to:
->      - Different memory bandwidth characteristics
->      - Different cache sizes (L1/L2/L3)
->      - Different CPU instruction latencies
->    - **Validation needed**: Benchmark on Intel/AMD x86_64, ARM Cortex-A series, RISC-V
->    - **Potential solution**: Make threshold configurable via feature flag or runtime detection
->    - **Determinism note:** `SMALL_SORT_THRESHOLD` is a compile-time constant (`1024`). All participants must use the same value. This is not auto-tuned.
-> 2. **16-bit radix digit size:**
->    - Assumes 256KB zeroing per pass is an acceptable fixed cost
->    - Alternative: 8-bit digits (40KB total zeroing, 40 passes) might win on memory-constrained systems
->    - Alternative: 32-bit digits (16GB histogram!) is obviously wrong, but why? Document the analysis.
->    - **Question**: Did we test 12-bit digits (16KB histogram, ~27 passes)? Should we?
-> 3. **FxHasher (rustc-hash) choice:**
->    - Fast but non-cryptographic
->    - Assumes no adversarial input targeting hash collisions
->    - **Risk**: Pathological inputs could cause O(n²) behavior in the HashMap
->    - **Mitigation**: Could switch to ahash or SipHash if collision attacks are a concern
-> 4. **GenSet generation counter wraparound:**
->    - What happens when `gen: u32` overflows after 4 billion transactions?
-> - Currently unhandled - assumes no single engine instance lives that long -> - **Validation needed**: Add a debug assertion or overflow handling -> 5. **Comparison sort choice (sort_unstable_by):** -> - Why unstable sort is acceptable (we have explicit nonce tie-breaking in the comparator) -> - Why not pdqsort vs other algorithms? (It's already Rust's default) -> 6. **Scope hash size (32 bytes = 256 bits):** -> - Why this size? Comes from BLAKE3 output -> - Radix pass count directly depends on this -> - If we ever change hash algorithm, pass count must be recalculated -> -> For each decision, document: -> -> - **Rationale**: Why we chose this -> - **Assumptions**: What must be true for this choice to be correct -> - **Risks**: What could go wrong -> - **Validation needed**: What tests/benchmarks would increase confidence -> - **Alternatives**: What we considered but rejected, and why" - ---- - -## Prompt 4: Worst-Case Scenarios & Mitigations - -**Prompt for next session:** - -> "Please analyze the hybrid scheduler implementation to identify **worst-case scenarios** and design mitigations with empirical validation. Focus on adversarial inputs and edge cases where performance or correctness could degrade: -> -> 1. **Adversarial Hash Inputs:** -> - **Scenario**: All scopes hash to values with identical high-order bits (e.g., all start with 0x00000000...) -> - **Impact**: Radix sort doesn't partition until late passes, cache thrashing -> - **Test**: Generate 10k scopes with only low-order byte varying -> - **Mitigation**: Document that this is acceptable (real hashes distribute uniformly), or switch to MSD radix if detected -> 2. 
**Threshold Boundary Oscillation:**
->    - **Scenario**: Input size oscillates around 1024 (e.g., 1000 → 1050 → 980 → 1100)
->    - **Impact**: Algorithm selection thrashing, icache/dcache pollution
->    - **Test**: Benchmark repeated cycles of 1000/1050 element drains
->    - **Mitigation**: Add hysteresis (e.g., switch at 1024 going up, 900 going down)
-> 3. **FxHashMap Collision Attack:**
->    - **Scenario**: Malicious input with (scope, rule_id) pairs engineered to collide in FxHasher
->    - **Impact**: HashMap lookups degrade to O(n), enqueue becomes O(n²)
->    - **Test**: Generate colliding inputs (requires reverse-engineering FxHash)
->    - **Mitigation**: Switch to ahash (DDoS-resistant) or document trust model
-> 4. **Memory Exhaustion:**
->    - **Scenario**: Enqueue 10M+ rewrites before draining
->    - **Impact**: thin + scratch vectors at 48 bytes/record (~1 GB at 10M rewrites), plus the fat payload vector = potential OOM
->    - **Test**: Benchmark memory usage at n = 100k, 1M, 10M
->    - **Mitigation**: Add early drain triggers or pool scratch buffers across transactions
-> 5. **Highly Skewed Rule Distribution:**
->    - **Scenario**: 99% of rewrites use rule_id = 0, remainder spread across 1-255
->    - **Impact**: First rule_id radix pass is nearly a no-op, wasted cache bandwidth
->    - **Test**: Generate skewed distribution, measure vs uniform distribution
->    - **Mitigation**: Skip radix passes if variance is low (requires online detection)
-> 6. **Transaction Starvation:**
->    - **Scenario**: Transaction A enqueues 100k rewrites, transaction B enqueues 1 rewrite
->    - **Impact**: B's single rewrite pays proportional cost in GenSet conflict checking
->    - **Test**: Benchmark two-transaction scenario with 100k vs 1 rewrites
->    - **Mitigation**: Per-transaction GenSet or early-out if footprint is empty
->
-> For each scenario:
->
-> 1. **Create a benchmark** in `crates/warp-benches/benches/scheduler_adversarial.rs`
-> 2. **Measure degradation** compared to best-case (e.g., how much slower?)
-> 3. 
**Implement mitigation** if degradation is >2x -> 4. **Re-benchmark** to prove mitigation works -> 5. **Document** in `docs/notes/scheduler-worst-case-analysis.md` with graphs -> -> The goal is to **quantify** our worst-case behavior and provide **evidence** that mitigations work, not just intuition." - ---- - -## Alternatives Considered - -During the optimization process, we evaluated several alternative approaches before settling on the current hybrid radix sort implementation: - -### 1. **Pure Comparison Sort (Status Quo)** - -- **Approach**: Keep BTreeMap-based scheduling -- **Pros**: - - Already implemented and tested - - Simple, no custom sort logic - - Good for small n -- **Cons**: - - O(n log n) complexity - - 44% slower at n=1000 than hybrid - - Doesn't scale to n=10k+ -- **Why rejected**: Performance target (60 FPS = 16.67ms frame budget) requires sub-millisecond scheduling at n=1000+. BTreeMap doesn't meet this at scale. - ---- - -### 2. **Pure Radix Sort (No Threshold)** - -- **Approach**: Always use 20-pass radix sort, no comparison fallback -- **Pros**: - - Simpler code (no branching) - - Perfect O(n) scaling - - Excellent at large n -- **Cons**: - - 91x slower at n=10 (687µs vs 7.5µs) - - Fixed 5MB zeroing cost dominates small inputs - - Real games have variable rewrite counts per frame -- **Why rejected**: - - Most frames have <100 rewrites, paying huge penalty for rare large frames is unacceptable - - "Flat green line" in benchmarks (Benchmark visualization: see performance data in `scheduler-radix-optimization-2.md`.) - - Cannot justify 91x regression for 90% of frames to optimize 10% of frames - ---- - -### 3. 
**8-bit Digit Radix Sort**
-
-- **Approach**: Use 256-entry histogram (1KB) with 40 passes instead of 16-bit/20 passes
-- **Pros**:
-  - Only ~40KB total zeroing overhead (1KB × 40 passes) vs 5MB
-  - Could lower threshold to ~128
-  - Better cache locality (256 entries fit in L1)
-- **Cons**:
-  - Double the number of passes (40 vs 20)
-  - Each pass has loop overhead, random access patterns
-  - More opportunities for branch misprediction
-- **Why rejected**:
-  - Preliminary analysis suggested memory bandwidth is not the bottleneck; pass count is
-  - At n=10k, the memory cost (5MB) is amortized, but 20 extra passes are not
-  - Rust's `sort_unstable` is _extremely_ optimized; difficult to surpass with more passes
-  - Would need empirical benchmarking to prove 8-bit is better (didn't have time)
-
---
-
-### 4. **Active-Bucket Zeroing**
-
-- **Approach**: Only zero histogram buckets that were non-zero after the previous pass
-- **Pros**:
-  - Could save 15-20% at large n by avoiding full 256KB zeroes
-  - Maintains 16-bit digit performance
-- **Cons**:
-  - Requires tracking which buckets are "dirty"
-  - Extra bookkeeping overhead (bitmap? linked list?)
-  - Complexity increase
-  - Benefit only at n > 10k
-- **Why rejected**:
-  - Premature optimization - current implementation meets performance targets
-  - Complexity/benefit ratio not compelling
-  - Can revisit if profiling shows zeroing is the bottleneck at scale
-  - User's philosophy: "golden path happens 90% of the time"
-
---
-
-### 5. **Cross-Transaction Buffer Pooling**
-
-- **Approach**: Reuse `scratch` and `counts16` buffers across multiple `drain_in_order()` calls
-- **Pros**:
-  - Amortizes allocation cost across multiple frames
-  - Reduces memory allocator pressure
-  - Could enable per-thread pools for parallelism
-- **Cons**:
-  - Requires lifetime management (who owns the pool?)
- - Breaks current simple API (`drain_in_order()` is self-contained) - - Unclear benefit (allocations are fast, we care about compute time) -- **Why rejected**: - - No evidence allocation is bottleneck (Criterion excludes setup with `BatchSize::PerIteration`) - - Complexity without measured gain - - Would need profiling to justify - ---- - -### 6. **Rule-Domain Optimization** - -- **Approach**: If `rule_id` space is small (<256), skip high-order rule_id radix pass -- **Pros**: - - Saves 1 pass for common case (most games have <100 rules) - - Simple optimization (if `max_rule_id < 256`, skip pass) -- **Cons**: - - Requires tracking max rule_id dynamically - - Saves ~5% total time (1/20 passes) - - Adds conditional logic to hot path -- **Why rejected**: - - Marginal gain (~5%) not worth complexity - - Pass overhead is cheap relative to histogram operations - - User constraint: "one dude, on a laptop" - optimize high-value targets first - ---- - -### 7. **MSD (Most Significant Digit) Radix Sort** - -- **Approach**: Sort high-order bytes first, recursively partition -- **Pros**: - - Can early-out if data is already partitioned - - Potentially fewer passes for sorted data -- **Cons**: - - Not stable (requires explicit tie-breaking logic) - - Variable number of passes (hard to predict performance) - - Recursive implementation (cache unfriendly) - - Complex to implement correctly -- **Why rejected**: - - LSD radix guarantees exactly 20 passes (predictable performance) - - Stability is critical for nonce tie-breaking - - Our data is random (graph hashes), no sorted patterns to exploit - - Complexity not justified by speculative gains - ---- - -### 8. 
**Hybrid with Multiple Thresholds** - -- **Approach**: Three-way split: comparison (<256), 8-bit radix (256-4096), 16-bit radix (>4096) -- **Pros**: - - Theoretically optimal for all input sizes - - Could squeeze out extra 5-10% in 100-1000 range -- **Cons**: - - Three codepaths to maintain - - Two threshold parameters to tune - - Cache pollution from three different algorithms - - Testing complexity (need coverage at both boundaries) -- **Why rejected**: - - Diminishing returns - hybrid with single threshold already meets targets - - User's philosophy: "good enough for golden path" - - Engineering time better spent on other features - - Premature optimization - ---- - -## Summary: Why Hybrid Radix at 1024? - -The current implementation (comparison sort for n ≤ 1024, 16-bit radix for n > 1024) was chosen because: - -1. **Meets performance targets**: 44% speedup at n=1000, perfect O(n) at scale -2. **Simple**: One threshold, two well-understood algorithms -3. **Robust**: Rust's `sort_unstable` is battle-tested, radix is deterministic -4. **Measurable**: Clear boundary at 1024 makes reasoning about performance easy -5. 
**Good enough**: Covers 90% golden path, doesn't over-optimize edge cases - -Alternative approaches either: - -- Sacrificed small-n performance (pure radix) -- Added complexity without measured gains (active-bucket zeroing, pooling) -- Required more tuning parameters (multi-threshold hybrid) -- Didn't align with user's resource constraints (one person, hobby project) - -The guiding principle: **"Ship what works for real use cases, iterate if profiling shows a better target."** diff --git a/docs/archive/notes/scheduler-radix-optimization-2.md b/docs/archive/notes/scheduler-radix-optimization-2.md deleted file mode 100644 index eb77e142..00000000 --- a/docs/archive/notes/scheduler-radix-optimization-2.md +++ /dev/null @@ -1,349 +0,0 @@ - - - -# From $O(n \log n)$ to $O(n)$: Optimizing Echo’s Deterministic Scheduler - -> **Provenance:** This document supersedes `docs/archive/notes/scheduler-radix-optimization.md`. See the archive for earlier analysis. - -**Tags:** performance, algorithms, optimization, radix-sort - ---- - -## TL;DR - -- **Echo** runs at **60 fps** while processing **~5,000 DPO graph rewrites per frame**. -- Determinism at _game scale_ is **confirmed**. -- Scheduler now **linear-time** with **zero small-$n$ regressions**. - ---- - -## What is Echo? - -**Echo** is a **deterministic simulation engine** built on **graph-rewriting theory**. -Although its applications span far beyond games, we’ll view it through the lens of a **game engine**. - -Traditional engines manage state via **mutable object hierarchies** and **event loops**. -Echo represents the _entire_ simulation as a **typed graph** that evolves through **deterministic rewrite rules**—mathematical transformations that guarantee **bit-identical results** across platforms, replays, and networked peers. - -At Echo’s core lies the **WARP graph (WARP)**: - -- **Nodes are graphs** (a “player” is a subgraph with its own internal structure). -- **Edges are graphs** (carry provenance and nested state). 
-- **Rules are graph rewrites** (pattern-match → replace). - -Every frame the WARP is replaced by a new WARP—an **echo** of the previous state. - -### Why bother? Aren’t Unreal/Unity “solved”? - -They excel at **rendering** and **asset pipelines**, but their **state-management foundation** is fragile for the hardest problems in game dev: - -| Problem | Symptom | -| ------------------------- | ----------------------------------------------------------------- | -| **Divergent state** | Rubber-banding, client-side prediction, authoritative corrections | -| **Non-reproducible bugs** | “Works on my machine”, heisenbugs | - -Echo eliminates both by making **state immutable** and **updates pure functions**. - ---- - -## Version Control for Reality - -Think of each frame as an **immutable commit** with a **cryptographic hash** over the reachable graph (canonical byte order). -Player inputs become **candidate rewrites**. Thanks to **confluence** (category-theory math), all inputs fold into a **single deterministic effect**. - -```math -(world, inputs) \to world' -``` - -No prediction. No rollback. No arbitration. If two machines disagree, a **hash mismatch at frame N+1** is an immediate, precise alarm. 
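The update law can be made executable in miniature. This sketch collapses the world to a single integer and the rewrite fold to arithmetic, which is nothing like the real engine; it only demonstrates the property the prose claims, that a pure `(world, inputs) → world'` function plus a hash comparison detects divergence immediately (SipHash stands in for BLAKE3 here).

```rust
use std::hash::{Hash, Hasher};

// Pure transition: no hidden state, so same (world, inputs) -> same world'.
pub fn step(world: u64, inputs: &[u64]) -> u64 {
    inputs
        .iter()
        .fold(world, |w, i| w.wrapping_mul(31).wrapping_add(*i))
}

// Stand-in for the frame's cryptographic commit hash.
pub fn hash_world(world: u64) -> u64 {
    let mut h = std::collections::hash_map::DefaultHasher::new();
    world.hash(&mut h);
    h.finish()
}

fn main() {
    let inputs = [3u64, 1, 4, 1, 5];
    // Two peers run the same pure step from the same commit...
    let peer_a = step(42, &inputs);
    let peer_b = step(42, &inputs);
    // ...so the frame-N+1 hash check passes.
    assert_eq!(hash_world(peer_a), hash_world(peer_b));
    // A peer that started from a different commit diverges immediately.
    let peer_c = step(43, &inputs);
    assert_ne!(peer_a, peer_c);
}
```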
-
-### Deterministic branching & merge (ASCII)
-
-```text
-Frame₀
-  │
-  ▼
-Frame₁───┐
-  │       \
-  ▼        \
-Frame₂A   Frame₂B
-  │          │
-  └──────┴───┘
-        ▼
-   Merge₃ (confluence + canonical order)
-```
-
---
-
-## What Echo Unlocks
-
-| Feature                | Traditional Engine           | Echo                         |
-| ---------------------- | ---------------------------- | ---------------------------- |
-| **Perfect replays**    | Recorded inputs + heuristics | Recompute from any commit    |
-| **Infinite debugger**  | Breakpoints + logs           | Query graph provenance       |
-| **Provable fairness**  | Trust server                 | Cryptographic hash signature |
-| **Zero silent desync** | Prediction errors            | Immediate hash check         |
-| **Networking**         | Send world diff              | Send inputs only             |
-
---
-
-## Confluence, Not Arbitration
-
-When multiple updates touch the same state, Echo **merges** them via **lattice operators** with **ACI** properties:
-
-- **Associative**, **Commutative**, **Idempotent**
-
-### Examples
-
-- Tag union: join(A, B) = A ∪ B
-- Scalar cap: join(Cap(a), Cap(b)) = Cap(max(a, b))
-
-Folding any bucket yields **one result**, independent of order or partitioning.
-
---
-
-## Safe Parallelism by Construction
-
-Updates are **DPO (Double Push-Out) graph rewrites**.
-
-- **Independent** rewrites run in parallel.
-- **Overlapping** rewrites are merged (lattice) or rejected.
-- **Dependent** rewrites follow a **canonical order**.
-
-The full pipeline:
-
-1. Collect inputs for frame N+1.
-2. Bucket by (scope, rule_family).
-3. **Confluence-fold** each bucket (ACI).
-4. Apply remaining rewrites in **lexicographic order**:
-
-   ```text
-   (scope_hash, rule_id, nonce)
-   ```
-
-5. Emit snapshot & compute commit hash.
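The ACI examples above are small enough to run directly. A sketch of the Cap lattice: `join` is associative, commutative, and idempotent, so folding a bucket gives one result regardless of order or partitioning.

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Cap(pub u32);

// join(Cap(a), Cap(b)) = Cap(max(a, b))
pub fn join(a: Cap, b: Cap) -> Cap {
    Cap(a.0.max(b.0))
}

// Confluence-fold a bucket; Cap(0) is the identity for max.
pub fn fold(bucket: &[Cap]) -> Cap {
    bucket.iter().copied().fold(Cap(0), join)
}

fn main() {
    // {Cap(2), Cap(5), Cap(3)} -> Cap(5), independent of order.
    assert_eq!(fold(&[Cap(2), Cap(5), Cap(3)]), Cap(5));
    assert_eq!(fold(&[Cap(3), Cap(2), Cap(5)]), Cap(5));
    // Idempotent: merging a duplicate changes nothing.
    assert_eq!(join(Cap(5), Cap(5)), Cap(5));
    // Partitioning does not matter either.
    let left = fold(&[Cap(2), Cap(5)]);
    let right = fold(&[Cap(3)]);
    assert_eq!(join(left, right), Cap(5));
}
```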
- ---- - -## A Tiny Rewrite, A Tiny Lattice - -**Motion rewrite** (scalar view) - -> Match: entity with position p, velocity v Replace: p′ = p + v·dt (velocity unchanged) - -### Cap lattice - -> join(Cap(α), Cap(β)) = Cap(max(α, β)) {Cap(2), Cap(5), Cap(3)} → Cap(5) (order-independent) - -These primitives—**rewrites** + **lattices**—are the DNA of Echo’s determinism. - ---- - -## Echo vs. the World - -| Property | Echo | -| -------------------------- | -------------------------------------------------- | -| **Determinism by design** | Same inputs → same outputs (no FP drift, no races) | -| **Formal semantics** | DPO category theory → provable transitions | -| **Replay from the future** | Rewind, fork, checkpoint any frame | -| **Networked lockstep** | Send inputs only; hash verifies sync | -| **AI training paradise** | Reproducible episodes = debuggable training | - -Echo isn’t just another ECS—it’s a **new architectural paradigm**. - ---- - -## The Problem: $O(n \log n)$ Was Hurting - -The scheduler must execute rewrites in **strict lexicographic order**: (scope_hash (256 bit), rule_id, nonce). - -Initial implementation: - -```rust -pub(crate) pending: BTreeMap<(Hash, Hash), PendingRewrite>; -``` - -**Bottleneck**: Insertions into the `BTreeMap` required $O(n \log n)$ comparisons over 256-bit scope hashes; draining via `BTreeMap::drain()` was $O(n)$. - -| $n$ | Time | -| ----- | ----------- | -| 1,000 | **1.33 ms** | -| 3,000 | **4.2 ms** | - -Curve fit: $T/n ≈ -345 + 272.7 \ln n$ → textbook $O(n \log n)$. - ---- - -## The Solution: 20-Pass Radix Sort - -Radix sort is **comparison-free** → $O(n)$ for fixed-width keys. 
-
-### Design choices
-
-- **LSD** (least-significant digit first)
-- **16-bit digits** (big-endian)
-- **20 passes total**:
-  - 2 for nonce (u32)
-  - 2 for rule_id (u32)
-  - 16 for scope_hash (32 bytes)
-- **Stable** → preserves insertion order for ties
-- **Byte-lexicographic** → identical to BTreeMap
-
-### Architecture
-
-```rust
-struct RewriteThin {
-    scope_be32: [u8; 32], // 256-bit scope
-    rule_id: u32,
-    nonce: u32,
-    handle: usize, // index into fat payload vec; usize to avoid truncation
-}
-
-struct PendingTx {
-    thin: Vec<RewriteThin>,
-    fat: Vec<Option<PendingRewrite>>,
-    scratch: Vec<RewriteThin>,
-    counts16: Vec<u32>, // 65,536 buckets = 256 KiB
-}
-```
-
-**Key insight**: Sort the compact **thin keys** only; gather **fat payloads** once at the end.
-
-### Pass sequence
-
-Each pass: **count → prefix-sum → scatter → flip buffers**.
-
----
-
-## The Disaster: Small-$n$ Regression
-
-Initial radix numbers were _worse_ at low $n$:
-
-| $n$   | BTreeMap | Radix      | Regression     |
-| ----- | -------- | ---------- | -------------- |
-| 10    | 7.5 µs   | **687 µs** | **91× slower** |
-| 100   | 90 µs    | **667 µs** | **7× slower**  |
-| 1,000 | 1.33 ms  | 1.36 ms    | marginal       |
-
-**Culprit**: `counts.fill(0)` **20 times** → **5 MiB** of writes _regardless_ of $n$. At $n=10$, sorting cost was dwarfed by memory bandwidth.
-
----
-
-## The Fix: Adaptive Threshold
-
-```rust
-const SMALL_SORT_THRESHOLD: usize = 1024;
-
-if n > 1 {
-    if n <= SMALL_SORT_THRESHOLD {
-        self.thin.sort_unstable_by(cmp_thin);
-    } else {
-        self.radix_sort();
-    }
-}
-```
-
-**Why 1024?**
-
-- **< 500**: comparison wins (no zeroing).
-- **> 2,000**: radix wins (linear scaling).
-- **1024**: conservative crossover, both ~same cost.
-
----
-
-## The Results: Perfect $O(n)$ Scaling
-
-| $n$    | Old (BTreeMap) | New (Hybrid) | Speedup  | ns/rewrite |
-| ------ | -------------- | ------------ | -------- | ---------- |
-| 10     | 7.5 µs         | 7.6 µs       | -1%      | 760        |
-| 100    | 90 µs          | 76 µs        | **+16%** | 760        |
-| 1,000  | 1.33 ms        | **0.75 ms**  | **+44%** | 750        |
-| 3,000  | —              | 3.03 ms      | —        | 1,010      |
-| 10,000 | —              | 9.74 ms      | —        | 974        |
-| 30,000 | —              | 29.53 ms     | —        | 984        |
-
-_From 3 k → 30 k (10×) → **9.75×** time → textbook linear._
-
-**60 FPS budget (16.67 ms):**
-
-- $n=1,000$ → **0.75 ms** = **4.5 %** of frame → **plenty of headroom**.
-
-### Phase breakdown ($n=30 k$)
-
-```text
-Total:   37.61 ms (100 %)
-Enqueue: 12.87 ms (34 %) – hash lookups + dedupe
-Drain:   24.83 ms (66 %) – radix + conflict checks + execute
-```
-
-Both phases scale **linearly**.
- ---- - -## Visualization: The Story in One Glance - -Interactive D3 dashboard: `docs/benchmarks/report-inline.html` - -- **Log-log plot** with four series (hash, total, enqueue, drain) -- **Threshold marker** at $n=1024$ -- **Color-coded stat cards** matching the chart -- **Straight line** from 3 k → 30 k = proof of $O(n)$ - ---- - -## Lessons Learned - -1. **Measure first** – curve fitting exposed $O(n \log n)$ before any code change. -2. **Benchmarks lie** – a “fast” radix at $n=1,000$ obliterated $n=10$. -3. **Memory bandwidth > CPU** – 5 MiB of zeroing dominated tiny inputs. -4. **Hybrid wins** – comparison sort is _faster_ for small $n$. -5. **Visualize the win** – a straight line on log-log is worth a thousand numbers. - ---- - -## What’s Next? - -| Idea | Expected Gain | -| --------------------------------------- | ------------------ | -| **Active-bucket zeroing** | ~15 % at large $n$ | -| **Cross-tx scratch pooling** | Reduce alloc churn | -| **Collapse rule_id to u8** (≤256 rules) | Drop 2 passes | - -The scheduler is now **algorithmically optimal** and **constant-factor excellent**. - ---- - -## Conclusion: Echoing the Future - -Echo’s deterministic scheduler evolved from **$O(n \log n)$** to **$O(n)$** with a **hybrid adaptive radix sort**: - -- **44 % faster** at typical game loads ($n=1,000$) -- **Perfect linear scaling** to **30 k rewrites** -- **Well under 60 FPS budget** -- **Zero regressions** at small $n$ -- **Beautiful dashboard** proving the win - -Traditional engines treat determinism as an **afterthought**—a feature bolted on with prediction and prayer. Echo treats it as a **mathematical guarantee**, baked into every layer from DPO theory to the scheduler you just read about. - -When you can execute **30,000 deterministic rewrites per frame** and still hit **60 FPS**, you’re not just optimizing code—you’re **proving a new kind of game engine is possible**. 
One where:
-
-- **Multiplayer “just works”** (same pure function → no desync)
-- **Replay is physics** (rewind by recomputing graph history)
-- **AI training is reproducible**
-- **Formal verification** becomes practical
-- **Time-travel debugging** is native
-
-**The graph is a straight line. The future is deterministic. Echo is how we get there.** 🚀
-
----
-
-## Code References
-
-- **Implementation**: `crates/warp-core/src/scheduler.rs` (see `radix_sort`, `drain_in_order`)
-- **Benchmarks**: `crates/warp-benches/benches/scheduler_drain.rs`
-- **Dashboard**: `docs/benchmarks/report-inline.html`
-- **PR**: The radix optimization work has been merged to main.
-
----
-
-_Curious? Dive into the Echo docs or join the conversation on [GitHub](https://github.com/flyingrobots/echo)._
diff --git a/docs/archive/notes/scheduler-radix-optimization.md b/docs/archive/notes/scheduler-radix-optimization.md
deleted file mode 100644
index e52b975a..00000000
--- a/docs/archive/notes/scheduler-radix-optimization.md
+++ /dev/null
@@ -1,465 +0,0 @@
-
-
-
-# From $O(n \log n)$ to $O(n)$: Optimizing Echo's Deterministic Scheduler
-
-**Tags:** performance, algorithms, optimization, radix-sort
-
----
-
-## TL;DR
-
-- Early benchmarks demonstrate that **Echo** can run at 60 FPS while pushing ~5,000 DPO graph rewrites per frame.
-- Big viability question answered.
-- "Game scale" activity: confirmed.
-
-## What is Echo?
-
-**Echo is a deterministic simulation engine built on graph rewriting theory.** While its applications are broad, it was born from the world of game development, so we'll use "game engine" as our primary lens.
-
-Unlike traditional game engines, which manage state through mutable object hierarchies and event loops, Echo represents the entire simulation state as a typed graph. This graph evolves through **deterministic rewrite rules**—mathematical transformations that guarantee identical results across platforms, replays, and simulations.
-
-At Echo's core is the _**WARP graph**_ (WARP). In Echo, _everything_ is a graph. Nodes are graphs, meaning a "player" is a complex subgraph with its own internal graph structure, not just an object. Edges are graphs, too, and can also have their own internal graphs, allowing them to carry structure and provenance. And most importantly, rules are graph rewrites. Echo updates the simulation by finding specific patterns in the WARP and replacing them with new ones. Every frame, the WARP is replaced by a new WARP, an _echo_ of the state that came before it.
-
-### Why bother? Aren't game engines a solved problem?
-
-That's a fair question, but it's aimed at the wrong target. While engines like Unreal and Unity are phenomenal rendering powerhouses and asset pipelines, they are built on an architectural foundation that struggles with the hardest problems in game development: **state management and networking**.
-
-The open secret of multiplayer development is that no two machines in a session ever truly agree on the game's state. What the player experiences is a sophisticated illusion, a constant, high-speed negotiation between **client-side prediction** and **authoritative server corrections**.
-
-I know this because I'm one of the developers who built those illusions. I've written the predictive input systems and complex netcode designed to paper over the cracks. The "rubber-banding" we've all experienced isn't a _bug_—it's an _artifact_. It's the unavoidable symptom of a system where state is **divergent by default**.
-
-This architectural flaw creates a secondary nightmare: **debugging**. When state is mutable, concurrent, and non-deterministic, reproducing a bug becomes a dark art. It's often impossible to look at a game state and know with certainty _how it got that way_. The system is fundamentally non-reproducible.
-
-The state of the art is built on patches, prediction, and arbitration to hide this core problem.
The architecture itself is fragile.
-
-Until now.
-
-### Version Control for Reality
-
-One way to understand how Echo works is to imagine the simulation as version control for moments in time. In this mental model, a frame is like an immutable commit. And like a commit, each frame has a canonical, cryptographic hash over the entire reachable graph, encoded in a fixed order. Echo treats inputs from players and other game-world updates as candidate graph rewrites, and thanks to _confluence_ (a bit of category-theory math), we can fold them into a single, deterministic effect. Finally, the scheduler applies all rewrites in a deterministic order and produces the next snapshot.
-
-No prediction. No rollback. No "authoritative correction." Just one pure function from `(world, inputs) → world′`.
-
-If two machines disagree, they disagree fast: a hash mismatch at frame `N+1` is a precise alarm, not a rubber‑band later.
-
-### ASCII timeline (branching and merge, deterministically)
-
-```text
-        Frame₀
-          │
-          ▼
-        Frame₁───┐
-          │       \
-          ▼        \
-       Frame₂A   Frame₂B
-          │         │
-          └────┬────┘
-               ▼
-        Merge₃ (confluence + canonical rewrite order)
-```
-
-### What Echo Unlocks
-
-This "version control" model isn't just a metaphor; it's a new architecture that unlocks capabilities that look "impossible" in a traditional engine.
-
-It enables **perfect replays**, as every frame is a commit that can be recomputed from its inputs to a bit‑identical state. This, in turn, provides an **infinite debugger**: provenance is embedded directly in the graph, allowing you to query its history to see who changed what, when, and why.
-
-For competitive games, this provides **provable fairness**, as a frame's cryptographic hash is a verifiable signature of "what happened." This all adds up to **zero silent desync**. A hash mismatch catches drift immediately and precisely, long before a user ever notices.
-
-Networking becomes straightforward: distribute inputs, compute the same function, compare hashes.
When the math agrees, the world agrees.
-
-## Confluence, Not Arbitration
-
-When multiple updates target related state, we don't race them; we *merge* them with deterministic math. We use **confluence operators** with **lattice** properties:
-
-**Associative**, **Commutative**, **Idempotent** (ACI)
-
-Examples:
-
-Tags union: `join(TagsA, TagsB) = TagsA ∪ TagsB`
-
-Scalar cap: `join(Cap(a), Cap(b)) = Cap(max(a, b))`
-
-Those properties guarantee that folding a bucket of updates yields one result, independent of arrival order and partitioning.
-
-## Safe Parallelism by Construction
-
-Echo implements updates as **DPO (Double Pushout) graph rewrites**. This structure provides safe parallelism by construction: independent rewrites can apply in parallel without issue. Any overlapping rewrites are either deterministically merged by a lattice or rejected as invalid. For any remaining, dependent rewrites, the scheduler enforces a canonical order.
-
-The upshot: "Which rule ran first?" stops being a source of nondeterminism.
-
-A sketch of the full *fold→rewrite→commit* pipeline:
-
-> 1. Collect inputs for frame `N+1`.
-> 2. Bucket by (scope, rule family).
-> 3. Confluence-fold each bucket (ACI).
-> 4. Apply remaining rewrites in a canonical order:
->
-> ```text
-> order by (scope_hash, family, compact_rule_id, payload_digest).
-> (Early convention — current drain key: scope, rule_id, nonce)
-> ```
->
-> 5. Emit a new snapshot and compute commit hash.
-
-## A Tiny Rewrite, A Tiny Lattice
-
-Rewrite (motion) in scalar terms:
-
-> Match: an entity with position p and velocity v
-> Replace: position p′ = p + v·dt; velocity unchanged
-
-Lattice example (cap / max):
-
-> join(Cap(α), Cap(β)) = Cap(max(α, β))
-> ACI → the fold of {Cap(2), Cap(5), Cap(3)} is Cap(5) regardless of order.
-
-These primitives, **rewrites** and **lattices**, are the heart of Echo's "determinism by construction."
-
-**What makes Echo different:**
-
-- **Determinism by design**: Same inputs → same outputs, always. No floating-point drift, no race conditions, no "it works on my machine."
-- **Formal semantics**: Built on Double Pushout (DPO) category theory—every state transition is mathematically provable.
-- **Replay from the future**: Rewind time, fork timelines, or replay from any checkpoint. Your game is a pure function.
-- **Networked lockstep**: Perfect synchronization without sending world state. Just send inputs; all clients compute identical results.
-- **AI training paradise**: Deterministic = reproducible = debuggable. Train agents with confidence.
-
-Echo isn't just another ECS—it's a **fundamentally different way to build games**, where the scheduler isn't just an implementation detail; it's the guarantee of determinism itself.
-
----
-
-## The Problem: $O(n \log n)$ Was Showing
-
-Echo's deterministic scheduler needs to execute rewrites in strict lexicographic order: `(scope_hash, rule_id, nonce)`. This ensures identical results across platforms and replays—critical for a deterministic game engine.
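
As a toy illustration of why that key gives a canonical order: Rust tuples compare lexicographically, so a map keyed on `(scope_hash, rule_id, nonce)` iterates in exactly the order the scheduler needs. The 4-byte "hashes" and the `drain_order` helper below are illustrative shorthand, not the scheduler's API:

```rust
use std::collections::BTreeMap;

// Toy drain key mirroring the text: (scope_hash, rule_id, nonce).
// (4-byte "hashes" for brevity; the real scope hash is 256-bit.)
type Key = ([u8; 4], u32, u32);

fn drain_order(entries: &[(Key, &'static str)]) -> Vec<&'static str> {
    // BTreeMap iterates keys in ascending lexicographic tuple order.
    let map: BTreeMap<Key, &'static str> = entries.iter().copied().collect();
    map.into_values().collect()
}

fn main() {
    let order = drain_order(&[
        (([9, 9, 9, 9], 0, 0), "later scope"),
        (([1, 2, 3, 4], 7, 1), "same rule, nonce 1"),
        (([1, 2, 3, 4], 7, 0), "same rule, nonce 0"),
        (([1, 2, 3, 4], 2, 0), "earlier rule"),
    ]);
    assert_eq!(
        order,
        ["earlier rule", "same rule, nonce 0", "same rule, nonce 1", "later scope"]
    );
}
```

Insertion order is irrelevant; the key alone decides the drain order, which is what makes the result platform-independent.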
-
-Our initial implementation used a `BTreeMap<(Hash, Hash), PendingRewrite>`:
-
-```rust
-// Old approach
-pub(crate) pending: BTreeMap<(Hash, Hash), PendingRewrite>
-```
-
-**The bottleneck:** Insertions into the `BTreeMap` required $O(n \log n)$ comparisons over 256-bit scope hashes. Draining via `BTreeMap::drain()` was $O(n)$, so the radix work targets the insertion bottleneck. Benchmarks showed:
-
-```text
-n=1000: ~1.33ms (comparison sort via BTreeMap iteration)
-n=3000: ~4.2ms  (log factor starting to hurt)
-```
-
-Curve fitting confirmed **T/n ≈ -345 + 272.7·ln(n)**—textbook $O(n \log n)$.
-
----
-
-## The Solution: 20-Pass Radix Sort
-
-Radix sort achieves **$O(n)$** complexity with zero comparisons by treating keys as sequences of digits. We implemented:
-
-- **LSD radix sort** with 16-bit big-endian digits
-- **20 passes total**: 2 for nonce, 2 for rule_id, 16 for the full 32-byte scope hash
-- **Stable sorting** preserves insertion order for tie-breaking
-- **Byte-lexicographic ordering** exactly matches BTreeMap semantics
-
-### The Architecture
-
-```rust
-struct RewriteThin {
-    scope_be32: [u8; 32], // Full 256-bit scope
-    rule_id: u32,         // Compact rule handle
-    nonce: u32,           // Insertion-order tie-break
-    handle: u32,          // Index into fat payload vec
-}
-
-struct PendingTx {
-    thin: Vec<RewriteThin>,           // Sorted keys
-    fat: Vec<Option<PendingRewrite>>, // Payloads (indexed by handle)
-    scratch: Vec<RewriteThin>,        // Reused scratch buffer
-    counts16: Vec<u32>,               // 256KB histogram (65,536 buckets)
-}
-```
-
-**Key insight:** Separate "thin" sorting keys from "fat" payloads. Only move the thin records during radix passes, then gather payloads at the end.
-
-```mermaid
-graph LR
-    subgraph "Thin Keys (sorted)"
-    T1[RewriteThin<br/>handle=0]
-    T2[RewriteThin<br/>handle=2]
-    T3[RewriteThin<br/>handle=1]
-    end
-
-    subgraph "Fat Payloads (indexed)"
-    F0[PendingRewrite]
-    F1[PendingRewrite]
-    F2[PendingRewrite]
-    end
-
-    T1 -->|handle=0| F0
-    T2 -->|handle=2| F2
-    T3 -->|handle=1| F1
-
-    style T1 fill:#e0af68
-    style T2 fill:#e0af68
-    style T3 fill:#e0af68
-    style F0 fill:#9ece6a
-    style F1 fill:#9ece6a
-    style F2 fill:#9ece6a
-```
-
-### Radix Sort Pass Sequence
-
-The 20-pass LSD radix sort processes digits from least significant to most significant:
-
-```mermaid
-graph TD
-    Start[Input: n rewrites] --> P1[Pass 1-2: nonce low→high]
-    P1 --> P2[Pass 3-4: rule_id low→high]
-    P2 --> P3[Pass 5-20: scope_hash bytes 31→0]
-    P3 --> Done[Output: sorted by scope,rule,nonce]
-
-    style Start fill:#bb9af7
-    style Done fill:#9ece6a
-    style P1 fill:#e0af68
-    style P2 fill:#e0af68
-    style P3 fill:#ff9e64
-```
-
-Each pass:
-
-1. **Count** — histogram of 65,536 16-bit buckets
-2. **Prefix sum** — compute output positions
-3. **Scatter** — stable placement into scratch buffer
-4. **Flip** — swap `thin ↔ scratch` for next pass
-
----
-
-## The Disaster: Small-n Regression
-
-Initial results were not encouraging:
-
-```text
-BEFORE (BTreeMap):      AFTER (Radix):
-n=10:   7.5µs           n=10:   687µs  (91x SLOWER!)
-n=100:  90µs            n=100:  667µs  (7x SLOWER!)
-n=1000: 1.33ms          n=1000: 1.36ms (marginal)
-```
-
-![Before optimization - the "flat green line" disaster](../../notes/BEFORE.webp)
-_The benchmark graph tells the story: that flat green line at low n is 5MB of zeroing overhead dominating tiny inputs._
-
-**What went wrong?** The radix implementation zeroed a **256KB counts array 20 times per drain**:
-
-```rust
-counts.fill(0); // 65,536 × u32 = 256KB
-// × 20 passes = 5MB of writes for ANY input size
-```
-
-At n=10, we were doing **5MB of memory bandwidth** to sort **10 tiny records**. The "flat green line" in the benchmark graph told the story—massive fixed cost dominating small inputs.
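
The fixed cost is easy to verify by arithmetic alone; this is a back-of-envelope sketch, not engine code:

```rust
// Naive radix drain cost: one 65,536-bucket u32 histogram,
// zeroed once per pass, 20 passes per drain.
fn zeroed_bytes_per_drain(passes: usize) -> usize {
    let buckets = 1usize << 16;  // 65,536 bins per 16-bit digit
    let histogram = buckets * 4; // u32 counters -> 256 KiB
    passes * histogram
}

fn main() {
    assert_eq!(zeroed_bytes_per_drain(1), 256 * 1024);       // 256 KiB per pass
    assert_eq!(zeroed_bytes_per_drain(20), 5 * 1024 * 1024); // 5 MiB per drain
}
```

That 5 MiB is paid whether n is 10 or 30,000, which is why the cost curve goes flat at small n.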
-
----
-
-## The Fix: Adaptive Threshold
-
-The solution: **use the right tool for the job.**
-
-```mermaid
-graph TD
-    Start[n rewrites to drain] --> Check{n ≤ 1024?}
-    Check -->|Yes| Comp[Comparison Sort<br/>O n log n<br/>Low constant]
-    Check -->|No| Radix[Radix Sort<br/>O n<br/>High constant]
-    Comp --> Done[Sorted output]
-    Radix --> Done
-
-    style Start fill:#bb9af7
-    style Comp fill:#e0af68
-    style Radix fill:#9ece6a
-    style Done fill:#bb9af7
-    style Check fill:#ff9e64
-```
-
-```rust
-const SMALL_SORT_THRESHOLD: usize = 1024;
-
-fn drain_in_order(&mut self) -> Vec<PendingRewrite> {
-    let n = self.thin.len();
-    if n > 1 {
-        if n <= SMALL_SORT_THRESHOLD {
-            // Fast path: comparison sort for small batches
-            self.thin.sort_unstable_by(cmp_thin);
-        } else {
-            // Scalable path: radix for large batches
-            self.radix_sort();
-        }
-    }
-    // ... drain logic
-}
-
-fn cmp_thin(a: &RewriteThin, b: &RewriteThin) -> Ordering {
-    a.scope_be32.cmp(&b.scope_be32)
-        .then_with(|| a.rule_id.cmp(&b.rule_id))
-        .then_with(|| a.nonce.cmp(&b.nonce))
-}
-```
-
-**Why 1024?** Empirical testing showed:
-
-- Below ~500: comparison sort wins (no zeroing overhead)
-- Above ~2000: radix sort wins ($O(n)$ scales)
-- **1024: conservative sweet spot** where both approaches perform similarly
-
-![After optimization - hybrid approach](../../notes/AFTER.webp)
-_The fix: adaptive threshold keeps small inputs fast while unlocking $O(n)$ scaling at large $n$._
-
----
-
-## The Results: Perfect $O(n)$ Scaling
-
-Final benchmark results across 6 data points (10, 100, 1k, 3k, 10k, 30k):
-
-| Input n | Old (BTreeMap) | New (Hybrid) | Speedup  | Per-element |
-| ------- | -------------- | ------------ | -------- | ----------- |
-| 10      | 7.5µs          | 7.6µs        | -1%      | 760ns       |
-| 100     | 90µs           | 76µs         | +16%     | 760ns       |
-| 1,000   | 1.33ms         | 0.75ms       | **+44%** | 750ns       |
-| 3,000   | —              | 3.03ms       | —        | 1010ns      |
-| 10,000  | —              | 9.74ms       | —        | 974ns       |
-| 30,000  | —              | 29.53ms      | —        | 984ns       |
-
-![Final results - perfect linear scaling](../../notes/Final.webp)
-_The complete picture: purple (snapshot hash), green (scheduler total), yellow (enqueue), red (drain). Note the threshold marker at $n=1024$ and the perfectly straight lines beyond it._
-
-**Key observations:**
-
-1. **Comparison sort regime ($n ≤ 1024$):** ~750ns/element, competitive with old approach
-2. **Radix sort regime ($n > 1024$):** Converges to ~1µs/element with **zero deviation**
-3. **Scaling from 3k → 30k (10× data):** 9.75× time—textbook $O(n)$
-4. 
**60 FPS viability:** At $n=1000$ (typical game scene), scheduler overhead is just **0.75ms = 4.5% of the 16.67ms frame budget**
-
-### Phase Breakdown
-
-Breaking down enqueue vs drain at $n=30k$:
-
-```text
-Total:   37.61ms (100%)
-Enqueue: 12.87ms (34%) — Hash lookups + last-wins dedupe
-Drain:   24.83ms (66%) — Radix sort + conflict checks + execute
-```
-
-```mermaid
-%%{init: {'theme':'dark'}}%%
-pie title Scheduler Time Breakdown at n=30k
-    "Enqueue (hash + dedupe)" : 34
-    "Drain (radix + conflicts)" : 66
-```
-
-The drain phase dominates, but both scale linearly. Future optimizations could target the radix sort overhead (active-bucket zeroing, cross-transaction pooling), but the current approach achieves our performance targets.
-
----
-
-## The Visualization: Telling the Story
-
-We built an interactive D3 dashboard (`docs/benchmarks/report-inline.html`) showing:
-
-- **Four series on log-log plot:**
-  - Purple (solid): Snapshot Hash baseline
-  - Green (solid): Scheduler Drain Total
-  - Yellow (dashed): Enqueue phase
-  - Red (dashed): Drain phase
-
-- **Threshold marker at $n=1024$** showing where the sorting strategy switches
-
-- **2×2 color-coded stat cards** matching chart colors for instant visual connection
-
-- **Explanatory context:** What we measure, why 60 FPS matters, how $O(n)$ scaling works
-
-**The key visual:** A straight line on the log-log plot from 3k to 30k—proof of perfect linear scaling.
-
----
-
-## Lessons Learned
-
-### 1. **Measure First, Optimize Second**
-
-Curve fitting (`T/n ≈ 272.7·ln(n)`) confirmed the $O(n \log n)$ bottleneck before we touched code.
-
-### 2. **Don't Optimize for Benchmarks Alone**
-
-The initial radix implementation looked good at $n=1000$ but destroyed small-batch performance. Real workloads include both.
-
-### 3. **Memory Bandwidth Matters**
-
-Zeroing 5MB of counts array matters more than CPU cycles at small $n$. The "flat line" in benchmarks was the smoking gun.
-
-### 4. **Hybrid Approaches Win**
-
-Comparison sort isn't "slow"—it's just $O(n \log n)$. For small $n$, it's faster than **any** $O(n)$ algorithm with high constants.
-
-### 5. **Visualize the Win**
-
-A good chart tells the story instantly. Our dashboard shows the threshold switch, phase breakdown, and perfect scaling at a glance.
-
----
-
-## What's Next?
-
-Future optimizations:
-
-1. **Active-bucket zeroing**: Only zero counts buckets actually used (saves ~15% at large $n$)
-2. **Cross-transaction pooling**: Share scratch buffers across transactions via arena allocator
-3. **Rule-domain optimization**: If we have <256 rules, collapse `rule_id` to single-byte direct indexing (saves 2 passes)
-
-The scheduler is algorithmically optimal, scales to 30k rewrites in <30ms, and the constants are excellent.
-
----
-
-## Conclusion: Echoing the Future
-
-Echo's deterministic scheduler went from an $O(n \log n)$ BTreeMap to an $O(n)$ hybrid adaptive sorter:
-
-- ✅ **44% faster at typical workloads ($n=1000$)**
-- ✅ **Perfect linear scaling to 30k rewrites**
-- ✅ **Well under 60 FPS budget**
-- ✅ **Zero regressions at small n**
-- ✅ **Beautiful visualization proving the win**
-
-The textbook said "radix sort is $O(n)$." The benchmarks said "prove it." **The graph is a straight line.**
-
-But here's the deeper point: **This optimization matters because Echo is building something fundamentally new.**
-
-Traditional game engines treat determinism as an afterthought—a nice-to-have feature bolted on through careful engineering and hope. Echo treats it as a **mathematical guarantee**, woven into every layer from category theory foundations to the scheduler you're reading about right now.
- -When you can execute 30,000 deterministic rewrite rules per frame and still hit 60 FPS, you're not just optimizing a scheduler—you're **proving that a different kind of game engine is possible.** One where: - -- **Multiplayer "just works"** because clients can't desync (they're running the same pure function) -- **Replay isn't a feature**, it's physics (rewind time by replaying the graph rewrite history) -- **AI training scales** because every training episode is perfectly reproducible -- **Formal verification** becomes practical (prove your game logic correct, not just test it) -- **Time travel debugging** isn't science fiction (checkpoint the graph, fork timelines, compare outcomes) - -Echo isn't just a faster game engine. **Echo is a different game engine.** One built on the mathematical foundation that traditional engines lack. One where the scheduler's deterministic ordering isn't a nice property—it's the **fundamental guarantee** that makes everything else possible. - -This optimization journey—from spotting the $O(n log n)$ bottleneck to proving $O(n)$ scaling with a hybrid radix sorter—is what it takes to make that vision real. To make determinism **fast enough** that developers don't have to choose between correctness and performance. - -The graph is a straight line. The future is deterministic. **And Echo is how we get there.** 🚀 - ---- - -> **Note:** Code references below reflect state at time of writing and may be -> stale. Paths and line numbers have likely changed since this document was -> authored. Use repo search (`rg`) to locate current implementations. - -## Code References - -- Implementation: `crates/warp-core/src/scheduler.rs` (see `fn radix_sort` near line 338) _(line numbers may have shifted)_ -- Benchmarks: `crates/warp-benches/benches/scheduler_drain.rs` -- Dashboard: `docs/benchmarks/report-inline.html` -- The radix optimization work has been merged to main. - ---- - -_Want to learn more? 
Check out the [Echo documentation](/meta/docs-index) or join the discussion on [GitHub](https://github.com/flyingrobots/echo)._ diff --git a/docs/archive/notes/xtask-wizard.md b/docs/archive/notes/xtask-wizard.md deleted file mode 100644 index 2bae3f0e..00000000 --- a/docs/archive/notes/xtask-wizard.md +++ /dev/null @@ -1,51 +0,0 @@ - - - -# xtask "workday wizard" — concept note - -Goal: a human-friendly `cargo xtask` (or `just`/`make` alias) that walks a contributor through starting and ending a work session, with automation hooks for branches, PRs, issues, and planning. - -## Core flow - -### Start session - -- Prompt for intent/issue: pick from open GitHub issues (via gh CLI) or free text. -- Branch helper: suggest branch name (`echo/-`), create and checkout if approved. -- Env checks: toolchain match, hooks installed (`make hooks`), `cargo fmt -- --check`/`clippy` optional preflight. - -### During session - -- Task DAG helper: load tasks from issue body / local `tasks.yaml`; compute simple priority/topo order (dependencies, P1/P0 tags). -- Bench/test shortcuts: menu to run common commands (clippy, cargo test -p warp-core, bench targets). -- Docs guard assist: if runtime code touched, remind to update relevant specs/ADRs. - -### End session - -- Summarize changes: gather `git status`, staged/untracked hints. -- PR prep: prompt for PR title/body template (with issue closing keywords); optionally run `git commit` and `gh pr create`. -- Issue hygiene: assign milestone/board/labels via gh CLI; auto-link PR to issue. - -## Nice-to-haves - -- Determinism check shortcut: run twin-engine sandbox determinism A/B (radix vs legacy) and summarize. -- Planner math: simple critical path/priority scoring across tasks.yaml; suggest next task when current is blocked. -- Cache hints: detect heavy commands run recently, skip/confirm rerun. -- Telemetry: write a small JSON session record for later blog/mining (start/end time, commands run, tests status). 
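
A possible shape for that session record, sketched with hand-rolled JSON to stay dependency-free (the field names here are hypothetical, not a defined schema):

```rust
// Build a minimal session-record JSON string by hand; fields are a guess
// at the shape (start/end time, commands run, tests status).
fn session_record_json(start: &str, end: &str, commands: &[&str], tests_passed: bool) -> String {
    let cmds: Vec<String> = commands.iter().map(|c| format!("\"{c}\"")).collect();
    format!(
        "{{\"start\":\"{start}\",\"end\":\"{end}\",\"commands\":[{}],\"tests_passed\":{tests_passed}}}",
        cmds.join(",")
    )
}

fn main() {
    let rec = session_record_json(
        "2026-01-05T09:00:00Z",
        "2026-01-05T11:30:00Z",
        &["cargo clippy", "cargo test -p warp-core"],
        true,
    );
    assert!(rec.contains("\"tests_passed\":true"));
    assert!(rec.starts_with('{') && rec.ends_with('}'));
}
```

In practice a `serde` derive would be the idiomatic route; the sketch only pins down what a single session entry might carry.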
- -## Tech sketch - -- Implement under `xtask` crate in workspace; expose `cargo xtask wizard`. -- Use `dialoguer`/`inquire` for prompts; `serde_yaml/json` for tasks; `gh` CLI for GitHub ops (fallback to no-op if missing). -- Config file (`.echo/xtask.toml`) for defaults (branch prefix, issue labels, PR template path). - -## Open questions - -- How much is automated vs. suggested (avoid surprising commits)? -- Should Docs Guard be enforced via wizard or still via hooks? -- Where to store per-session summaries (keep in git or external log)? - -## Next steps - -- Prototype a minimal “start session” + “end session” flow with `gh` optional. -- Add a `tasks.yaml` example and priority/topo helper. -- Wire into make/just: `make wizard` → `cargo xtask wizard`. diff --git a/docs/archive/phase1-plan.md b/docs/archive/phase1-plan.md deleted file mode 100644 index 879c1a8c..00000000 --- a/docs/archive/phase1-plan.md +++ /dev/null @@ -1,130 +0,0 @@ - - - -# Phase 1 – Core Ignition Plan - -Goal: deliver a deterministic Rust implementation of WARP powering the Echo runtime, with tangible demos at each milestone. This plan outlines task chains, dependencies, and expected demonstrations. - -Status (2025-12-30): - -- 1A (bootstrap) and 1B (rewrite executor spike) are effectively landed in `main` via `warp-core` (B0/B1: two-plane attachments + WarpInstances). -- The next “engine-facing” milestone is 1C (Rhai/TS bindings) and the next “tooling-facing” milestone is completing the WARP View Protocol demo path (`docs/tasks.md`). 
- ---- - -## Task Graph - -```mermaid -graph TD - A[1A · WARP Core Bootstrap] - B[1B · Rewrite Executor Spike] - C[1C · Rhai/TS Bindings] - D[1D · Echo ECS on WARP] - E[1E · Networking & Confluence MVP] - F[1F · Tooling Integration] - - A --> B --> C --> D --> E --> F - B --> DemoToy - D --> DemoNetcode - E --> DemoTimeTravel - F --> DemoLiveCoding - - subgraph Demos - DemoToy[Demo 2 · Toy Rewrite Benchmark] - DemoNetcode[Demo 1 · Deterministic Netcode] - DemoTimeTravel[Demo 5 · Time Travel Merge] - DemoLiveCoding[Demo 6 · Rhai Live Coding] - end -``` - ---- - -## Phases & Tangible Outcomes - -### 1A · WARP Core Bootstrap - -- Tasks - - Scaffold crates (`warp-core`, `warp-wasm`, `warp-cli`). - - Implement GraphStore primitives, hash utilities, scheduler skeleton. - - CI: `cargo fmt/clippy/test` baseline. -- Demonstration: _None_ (foundation only). - -### 1B · Rewrite Executor Spike - -- Tasks - - Implement motion rule test (Position + Velocity rewrite). - - Execute deterministic ordering + snapshot hashing. - - Add minimal diff/commit log entries. -- Demonstration: **Demo 2 · Toy Benchmark** - - 100 nodes, 10 rules, property tests showing stable hashes. - -### 1C · Rhai/TS Bindings - -- Tasks - - Embed Rhai with deterministic sandbox + host modules. - - Build WASM bindings for tooling. - - Port inspector CLI to use snapshots. -- Demonstration: Rhai script triggers rewrite; inspector shows matching snapshot hash. - -### 1D · Echo ECS on WARP - -- Tasks - - Map existing ECS system set onto rewrite rules. - - Replace Codex’s Baby event queue with rewrite intents. - - Emit frame hash HUD. -- Demonstration: **Demo 1 · Deterministic Netcode** - - Two instances, identical inputs, frame hash displayed per tick. - -### 1E · Networking & Confluence MVP - -- Tasks - - Implement rewrite transaction packets; replay on peers. - - Converge canonical snapshots; handle conflicts deterministically. - - Integrate rollback path (branch rewind, replay log). 
-- Demonstration: **Demo 5 · Time Travel** - - Fork, edit, merge branch; show canonical outcome. - -### 1F · Tooling Integration - -- Tasks - - Echo Studio (TS + WASM) graph viewer with live updates. - - Entropy lens, paradox heatmap overlays. - - Rhai live coding pipeline (hot reload). -- Demonstrations: - - **Demo 3 · Real Benchmark** (1k nodes, 100 rules). - - **Demo 6 · Live Coding** (Rhai edit updates live graph). - ---- - -## Performance / Benchmark Milestones - -| Milestone | Target | Notes | -| ------------------ | --------------------------------------------- | --------------------- | -| Toy Benchmark | 100 nodes / 10 rules / 200 iterations < 1ms | Demo 2 | -| Real Demo | 1,000 nodes / 100 rules < 10ms rewrite checks | Demo 3 | -| Production Stretch | 10,000 nodes / 1000 rules (profiling only) | Phase 2 optimizations | - -Optimization roadmap once baseline is working: - -1. Incremental pattern matching. -2. Spatial indexing. -3. SIMD bitmap operations. -4. Critical pair analysis for confluence proofs. - ---- - -## Networking Demo Targets - -| Mode | Deliverable | -| --------- | --------------------------------------------------------------- | -| Lockstep | Replay identical inputs; frame hash equality per tick. | -| Rollback | Predictive input with rollback on mismatch. | -| Authority | Host selects canonical branch; entropy auditor rejects paradox. | - ---- - -## Documentation Checklist - -- Update `docs/warp-runtime-architecture.md` as rules/loop evolve. - -Phase 1 completes when Demo 6 (Live Coding) runs atop the Rust WARP runtime with inspector tooling in place, using Rhai as the scripting layer. 
diff --git a/docs/archive/plans/BOAW-tech-debt.md b/docs/archive/plans/BOAW-tech-debt.md deleted file mode 100644 index d348b46a..00000000 --- a/docs/archive/plans/BOAW-tech-debt.md +++ /dev/null @@ -1,315 +0,0 @@ - - - - - -# BOAW Roadmap: Phase 6B → Phase 9 - -**Created:** 2026-01-20 -**Status:** AWAITING APPROVAL -**Context:** Post-Phase 6B integration — cleanup, guardrails, and planning - ---- - -## Classification Rubric - -| If it... | Then it's... | -| ------------------------------------ | -------------------------------------- | -| Unblocks a phase | **Roadmap** (Tiers 1-3) | -| Reduces risk or prevents regressions | **Guardrail** (Tier 0.5) | -| Improves performance | **Perf Gate** (only after measurement) | -| Is unused code | **Delete immediately** (Tier 0) | - ---- - -## Tier 0: Cleanup (Today) - -_Dead code and doc drift. Do immediately after merge._ - -### 0.1 Delete `emit_view_op_delta()` - -| Field | Value | -| -------------- | --------------------------------------------- | -| **Location** | `crates/echo-dind-tests/src/rules.rs:600-648` | -| **Call Sites** | 0 | -| **Risk** | None | - -**Why:** Deprecated function using non-deterministic `delta.len()` sequencing. -Replaced by `emit_view_op_delta_scoped()`. Keeping it risks copy-paste of broken pattern. - -### 0.2 Delete `execute_parallel_stride()` + Feature Gate - -| Field | Value | -| -------------- | ------------------------------------------- | -| **Location** | `crates/warp-core/src/boaw/exec.rs:176-207` | -| **Call Sites** | 3 (1 conditional, 2 Phase 6A tests) | -| **Risk** | Low | - -**Why:** Phase 6A stride execution superseded by Phase 6B sharded execution. -Feature-gated behind `parallel-stride-fallback`. Adds maintenance burden. - -**Steps:** - -1. Delete Phase 6A equivalence tests (`boaw_parallel_exec.rs:286-365`) -2. Remove stride fallback conditional (`exec.rs:67-83`) -3. Delete `execute_parallel_stride()` function -4. 
Remove `parallel-stride-fallback` feature from `Cargo.toml` - -### 0.3 Doc Accuracy Pass - -Verify these are still accurate post-merge: - - - -- [ ] `TECH-DEBT-BOAW.md` — mark Phase 6B items complete _(not completed before archival)_ -- [ ] `ADR-0007-BOAW-Storage.md` — phase status markers _(not completed before archival)_ -- [ ] `CHANGELOG.md` — PR #257 merge recorded _(not completed before archival)_ - ---- - -## Tier 0.5: Correctness Guardrails (This Week) - -_Tests we can land now + baseline measurements. Reduces future regression risk._ - -### 0.5.1 Activate Passing Tests - -Some `#[ignore]` tests may now pass after Phase 6B. Audit and activate: - -| Test File | Check For | -| --------------------- | --------------------------------------------- | -| `boaw_determinism.rs` | Any tests that only needed parallel execution | -| `boaw_end_to_end.rs` | Full integration tests | -| `boaw_footprints.rs` | T3.1 already passes; verify others | - -### 0.5.2 WarpOpKey Invariant Test - -Verify `WarpOpKey` ordering is stable and exercised: - -- Canonical sort order matches spec -- No collisions under realistic workloads -- Public API (`sort_key()`) works for external verification - -### 0.5.3 Initial Benchmark Baseline - -**Purpose:** Prove parallelism delivers measurable wins. Capture baseline so future -phases don't accidentally regress performance. - -**Scope:** Minimal, not a full optimization suite. - -| Benchmark | What It Measures | -| ------------------------- | ---------------------------------- | -| `parallel_vs_serial_10` | 10 rewrites: parallel speedup | -| `parallel_vs_serial_100` | 100 rewrites: parallel speedup | -| `parallel_vs_serial_1000` | 1000 rewrites: parallel speedup | -| `shard_distribution` | Are rewrites spread across shards? 
| - -**Location:** `benches/boaw_baseline.rs` (new file) - -**Success Criteria:** - -- Parallel ≥ serial for n ≥ 100 (no regression) -- Document baseline numbers in `docs/notes/boaw-perf-baseline.md` - ---- - -## Tier 1: Phase 7 — Forking - -_Multi-parent commits and prerequisites. ~2-3 weeks._ - -### Prerequisites (Enable Forking) - -| Component | Tests Unblocked | Notes | -| -------------------------------- | --------------- | ----------------------------------------------------------------------- | -| **OpenPortal scheduling (T7.1)** | 4 | Scheduler tracks new warps; enforces "no same-tick writes to new warps" | -| **DeltaView** | 6 | Overlay + base resolution during execution | -| ~~**FootprintGuard**~~ | 3 | ✅ Done (44aebb0d8f7b, 0d0231b55761) | -| **SnapshotBuilder wiring** | 1 | Connect builder to test harness | - -### Core Forking Work - -| Component | Description | -| ----------------------------- | -------------------------------- | -| Multi-parent commit structure | Commit can have 0..n parents | -| Worldline DAG | Track branch/merge topology | -| Parent addressing | Reference parents by commit hash | - -### Tests Unblocked: 14 - -```text -boaw_openportal_rules.rs — 4 tests (T7.1) -boaw_cow.rs — 6 tests (DeltaView) -boaw_footprints.rs — 3 tests (FootprintGuard) -boaw_determinism.rs — 1 test (SnapshotBuilder) -``` - ---- - -## Tier 2: Phase 8 — Collapse/Merge - -_Deterministic multi-parent reconciliation. ~2-3 weeks. 
Requires Phase 7._ - -### Merge Components - -| Component | Description | -| ----------------------------- | ------------------------------------------------ | -| **Typed merge registry** | Per-type: Sensitivity, MergeBehavior, Disclosure | -| **Merge regimes** | Commutative (CRDT), LWW, ConflictOnly | -| **Conflict artifacts** | Deterministic, contains only hashes (no secrets) | -| **Canonical parent ordering** | Sort by `commit_hash` for order-dependent merges | -| **Presence policies** | delete-wins (default), add-wins, LWW | - -### Tests Unblocked: 10 - -```text -boaw_merge.rs — all 10 tests -├── t6_1: Commutative merge parent-order invariance -├── t6_2: Canonical ordering for order-dependent -├── t6_3: Conflict artifact determinism -├── merge_regime_crdt_like_is_preferred -├── merge_regime_lww_with_canonical_order -├── presence_policy_delete_wins -├── presence_policy_add_wins -├── conflict_artifact_is_first_class_and_deterministic -└── conflict_artifact_contains_no_secrets -``` - ---- - -## Tier 3: Phase 9 — Privacy Claims - -_Ledger-safe provenance. ~2-3 weeks. 
Requires Phase 8._ - -### Privacy Components - -| Component | Description | -| ------------------------- | ------------------------------------------------------- | -| **Atom type registry** | Sensitivity (Public/Private/ForbiddenInLedger) | -| **Mind mode enforcement** | Reject ForbiddenInLedger atoms in ledger | -| **ClaimRecord structure** | claim_key, scheme_id, statement_hash, commitment, proof | -| **Commitment safety** | Pepper-based hashing (dictionary-safe) | -| **ZK proof merging** | Verify during collapse; quarantine invalid | -| **Diagnostics mode** | Richer introspection for trusted debugging | - -### Tests Unblocked: 9 - -```text -boaw_privacy.rs — all 9 tests -├── t7_1: Mind mode forbids ForbiddenInLedger -├── t7_2: Invalid proofs quarantined -├── t7_3: Conflicting valid claims → artifact -├── t7_4: Commitment dictionary-safe with pepper -├── atom_type_declares_sensitivity -├── atom_type_declares_merge_behavior -├── atom_type_declares_disclosure_policy -├── claim_record_is_canonical -└── diagnostics_mode_allows_richer_introspection -``` - ---- - -## Perf Gate (Recurring) - -_Run at end of each tier. Catch regressions early._ - -### What to Measure - -| Metric | Baseline (Tier 0.5) | Gate Threshold | -| --------------------------- | ------------------- | -------------------------- | -| Parallel vs serial (n=100) | TBD | No regression (≥ baseline) | -| Parallel vs serial (n=1000) | TBD | No regression (≥ baseline) | -| Merge time (n ops) | TBD | < 2x baseline | -| Snapshot build time | TBD | < 2x baseline | - -### When to Run - -- [x] After Tier 0 (cleanup) — establish baseline -- [ ] After Tier 1 (Phase 7) — verify forking doesn't regress -- [ ] After Tier 2 (Phase 8) — verify merge doesn't regress -- [ ] After Tier 3 (Phase 9) — verify privacy checks don't regress - -### Optimization Work (Only If Gate Fails) - -These are **not scheduled**. 
Only pursue if perf gate shows regression: - -| Item | Trigger | Status | -| -------------------------- | ---------------------------------- | ------------- | -| ~~Cross-warp parallelism~~ | Multi-warp ticks show poor scaling | ✅ Done | -| State clone overhead | CI times unacceptable | Not scheduled | -| Shard rebalancing | Skewed distributions measured | Not scheduled | -| SIMD merge sort | Merge becomes bottleneck | Not scheduled | - ---- - -## Test Inventory Summary - -| Tier | Tests Unblocked | Cumulative | -| -------------------- | ------------------- | ---------- | -| Tier 0.5 | ~2-3 (audit needed) | ~2-3 | -| Tier 1 (Phase 7) | 14 | ~17 | -| Tier 2 (Phase 8) | 10 | ~27 | -| Tier 3 (Phase 9) | 9 | ~36 | -| Stress (run anytime) | 1 | 37 | - -**Current:** ~17 tests passing -**After Phase 9:** ~54 tests passing (all BOAW tests enabled) - ---- - - - -> **⚠️ TRACKING MOVED:** This archived checklist is preserved for historical -> context only. Active work tracking is now in -> [`TECH-DEBT-BOAW.md`](../../adr/TECH-DEBT-BOAW.md). Do NOT update checkboxes here. 
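The Tier 2 merge regimes above (commutative reduction vs. LWW under canonical parent ordering) can be made concrete in a few lines. This is an illustrative sketch: `Candidate`, the shortened 4-byte hash, and both functions are assumptions, not the real merge-registry API:

```rust
/// Hypothetical merge input: one proposed value per parent commit.
#[derive(Clone)]
struct Candidate {
    commit_hash: [u8; 4], // shortened stand-in for a full commit hash
    value: i64,
}

/// LWW regime: canonical parent ordering (sort by commit hash) picks the same
/// "last writer" no matter what order parents arrive in.
fn merge_lww(mut candidates: Vec<Candidate>) -> Option<i64> {
    candidates.sort_by(|a, b| a.commit_hash.cmp(&b.commit_hash));
    candidates.last().map(|c| c.value)
}

/// Commutative (CRDT-like) regime: an order-independent reduction such as a sum
/// needs no canonical ordering at all.
fn merge_sum(candidates: &[Candidate]) -> i64 {
    candidates.iter().map(|c| c.value).sum()
}
```

Because `merge_lww` sorts by commit hash before choosing a winner, its result is invariant under parent permutation, which is the property the `t6_1`/`t6_2` style tests assert.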
- -## Execution Checklist - -### ☐ Tier 0 Cleanup - -- [ ] Delete `emit_view_op_delta()` from `rules.rs` -- [ ] Delete `execute_parallel_stride()` + tests + feature gate -- [ ] Verify doc accuracy (TECH-DEBT, ADR, CHANGELOG) - -### Tier 0.5: Guardrails (This Week) - -- [ ] Audit `#[ignore]` tests — activate any that now pass -- [ ] Add/verify WarpOpKey invariant test -- [ ] Create `benches/boaw_baseline.rs` with minimal benchmarks -- [ ] Document baseline in `docs/notes/boaw-perf-baseline.md` -- [ ] Run perf gate, record numbers - -### Tier 1: Phase 7 (Next Sprint) - -- [ ] Implement OpenPortal scheduling (T7.1) -- [ ] Implement DeltaView -- [x] Implement FootprintGuard (44aebb0d8f7b, 0d0231b55761) -- [ ] Wire SnapshotBuilder to test harness -- [ ] Core forking semantics -- [ ] Activate 14 tests -- [ ] Run perf gate - -### Tier 2: Phase 8 (Following Sprint) - -- [ ] Typed merge registry -- [ ] Merge regimes + conflict artifacts -- [ ] Presence policies -- [ ] Activate 10 tests -- [ ] Run perf gate - -### Tier 3: Phase 9 (Future) - -- [ ] Atom type registry -- [ ] Mind mode + ClaimRecord -- [ ] ZK proof merging -- [ ] Activate 9 tests -- [ ] Run perf gate - ---- - -## References - -- [ADR-0007-BOAW-Storage.md](../../adr/ADR-0007-BOAW-Storage.md) — Full specification -- [TECH-DEBT-BOAW.md](../../adr/TECH-DEBT-BOAW.md) — Original tracking (to be updated) -- [PR #257](https://github.com/flyingrobots/echo/pull/257) — Phase 6B implementation -- Knowledge Graph: `BOAW_Phase_6B`, `Echo_BOAW_Architecture` diff --git a/docs/archive/plans/COMING_SOON.md b/docs/archive/plans/COMING_SOON.md deleted file mode 100644 index 3a596771..00000000 --- a/docs/archive/plans/COMING_SOON.md +++ /dev/null @@ -1,125 +0,0 @@ - - - -# Echo & Wesley: The Causal Application Guide - -Welcome to the future of causal development. This document explains how **Echo** (the substrate) and **Wesley** (the law-giver) work together to create deterministic, time-travelable applications. - ---- - -## 1. 
The Core Philosophy: "Law vs. Physics" - -Building an application on Echo is different from traditional state-management. We split the universe into two layers: - -1. **The Law (Wesley)**: Defines _what_ exists and _what_ is allowed to happen. It is expressed in GraphQL SDL with WARP directives. -2. **The Physics (Echo)**: The high-performance graph substrate that executes the laws, enforces constraints, and records the history of every atom. - ---- - -## 2. Wesley: The Schema Compiler - -Wesley is not a runtime; it is a **Law Compiler**. When you build an application, you start by writing a schema. - -### Defining the Ontology - -In a `.graphql` file, you define: - -- **Types**: The "Atoms" of your graph (e.g., `User`, `Position`, `InventoryItem`). -- **Channels**: The event buses where data is emitted (e.g., `PhysicsUpdates`, `ChatMessages`). -- **Policies**: How data on those channels is handled (`StrictSingle`, `Reduce:Sum`, or `Log`). - -### Defining Operations (The Intent ABI) - -Instead of arbitrary functions, you define **Operations (Ops)**. An Op is a declaration of intent to change the graph. - -```graphql -type Mutation { - movePlayer(id: ID!, delta: Vec3!): MoveResult @warp(opId: 101) -} -``` - -Wesley compiles this into an **Intermediate Representation (IR)**. Echo's code generator (`echo-ttd-gen`) then consumes this IR to produce: - -- Type-safe Rust structs. -- Enforcement tables (Footprints) that declare exactly which nodes an Op is allowed to read or write. - ---- - -## 3. Echo: The Causal Substrate - -Echo takes the artifacts from Wesley and provides the execution environment. - -### Graph Rewrites - -Every change in Echo is a **Graph Rewrite**. When an application triggers an Op (like `movePlayer`): - -1. **Intent**: An `EINT` (Echo Intent) frame is created. -2. **Scheduling**: The Echo Scheduler looks at the Op's **Footprint**. If two Ops touch different parts of the graph, they can run in parallel. -3. 
**Execution**: The rewrite rule is applied. This is a pure function: `(PriorState, OpArgs) -> (NewState, Emissions)`.
4. **Commit**: The new state is hashed (BLAKE3) and committed to the **Provenance Store**.

### Determinism Guards

Echo enforces "Ironclad Determinism":

- **Floating Point**: All math uses `DFix64` (fixed-point) to ensure bit-exact results across Intel, ARM, and WASM.
- **No Side Effects**: Rewrite rules cannot call `Date.now()` or `Math.random()`. All entropy must be passed in as a seeded "Paradox" value.

---

## 4. The Time-Travel Debugger (TTD)

The TTD is not just a UI; it is a fundamental property of the **Provenance Store**.

### Worldlines & Forks

Because every tick is a content-addressed snapshot, Echo supports **Causal Branching**:

- **Playback**: You can seek a "Cursor" to any tick in the past.
- **Forking**: You can create a new `WorldlineId` starting from a past tick. You can then apply different intents to see a "What If" scenario.
- **Replay**: The TTD can re-play an entire session and verify that the `state_root` hashes match the "Golden" run.

### The Receipt System

Every execution produces a **TTDR Receipt**. This is a cryptographically signed proof that:
_"At Tick X, Op Y was applied to State Z, resulting in State A and Emissions B."_

---

## 5. How to Build an "Echo App"

### Step 1: The Wesley Sync

Write your schema and run `cargo xtask wesley sync`. This vendors the types and manifests into your project.

### Step 2: Implement Rewrite Rules

In Rust, you implement the logic for your Ops. Echo provides a `GraphView` that enforces your footprint at runtime.

```rust
fn handle_move_player(view: &mut GuardedView, args: MoveArgs) -> StepResult {
    let mut pos = view.get_component::<Position>(args.id)?;
    pos.x += args.delta.x;
    view.set_component(args.id, pos)?;
    Ok(Advanced)
}
```

### Step 3: Define the Scene Port

Use `echo-scene-port` to map your graph state to visual objects.
This produces a `SceneDelta`—a language-agnostic list of "Add Node", "Move Edge", or "Set Label" commands.

### Step 4: The Frontend

Wire the WASM `TtdEngine` into your React/Three.js app. The engine handles the worldlines; your UI just renders the current "Truth Frames" arriving on the subscribed channels.

---

## 6. Coming Soon: The "Drill Sergeant" Workflow

We are moving toward a workflow where **Determinism isn't Optional**.

- **DIND (Deterministic Ironclad Nightmare Drills)**: Your app will be subjected to randomized operation orders to ensure it always converges to the same state.
- **Fuzzing the Law**: Wesley will generate "hostile" inputs to try and crash your rewrite rules.

_Echo is more than an engine; it is a guarantee that causality is absolute._

diff --git a/docs/archive/plans/SPEC-0004-final-plan.md b/docs/archive/plans/SPEC-0004-final-plan.md
deleted file mode 100644
index 56401e56..00000000
--- a/docs/archive/plans/SPEC-0004-final-plan.md
+++ /dev/null
@@ -1,249 +0,0 @@

# SPEC-0004 Implementation Plan: Worldlines, PlaybackCursors, ViewSessions, TruthBus

**Status:** In Progress
**Created:** 2026-01-20
**Spec:** `/docs/spec/SPEC-0004-worldlines-playback-truthbus.md`

---

## Corrections Applied (from review)

1. **U0Ref = WarpId** — MVP U0Ref is just a handle to `engine.initial_state` for a warp, not a checkpoint blob
2. **One entry per global tick per warp** — Store patches even if empty to maintain index alignment: `warp_patches[warp_id].len() == global_tick_history_len`
3. **Use existing canonical hash scheme** — `compute_state_root_for_warp_store` must use same ordering as `snapshot.rs`
4. **Minimal TruthSink** — `BTreeMap<SessionId, Vec<TruthFrame>>` plus a parallel `BTreeMap<SessionId, Vec<CursorReceipt>>` for receipts, not a full bus layer
5. **Add demo emission for tests** — Need deterministic emission path or outputs are vacuous
6. 
**Explicit WarpOp coverage** — `apply_warp_op_to_store` must handle all variants or reject with typed error

---

## Commit Status

### ✅ Commit 1 — MBUS v2 Encoder/Decoder + Tests (COMPLETE)

**Files Created:**

- `crates/warp-core/src/materialization/frame_v2.rs` — V2 encoder/decoder with cursor-stamped packets

**Files Modified:**

- `crates/warp-core/src/materialization/mod.rs` — Export frame_v2 types

**Tests Passing (11/11):**

- T19: `mbus_v2_roundtrip_single_packet`
- T20: `mbus_v1_rejects_v2`
- T21: `mbus_v2_rejects_v1`
- T22: `mbus_v2_multi_packet_roundtrip`
- Plus edge case tests (empty entries, bad magic, truncated, etc.)

**Gate:** `cargo test -p warp-core --features delta_validate -- frame_v2` ✅

---

### 🔲 Commit 2 — Types + IDs + ProvenanceStore Seam + Per-Warp Worldline Store

**New Files:**

- `crates/warp-core/src/worldline.rs`
  - `WorldlineId(Hash)` — transparent wrapper
  - `HashTriplet { state_root, patch_digest, commit_hash }`
  - `WorldlineTickPatchV1` — per-warp projection of global tick
  - `WorldlineTickHeaderV1` — shared header across warps
  - `OutputFrameSet = Vec<(ChannelId, Vec<u8>)>`

- `crates/warp-core/src/playback.rs`
  - `CursorId(Hash)`, `SessionId(Hash)` — transparent wrappers
  - `CursorRole { Writer, Reader }`
  - `PlaybackMode { Paused, Play, StepForward, StepBack, Seek { target, then } }`
  - `SeekThen { Pause, RestorePrevious, Play }`
  - `CursorReceipt` — cursor context for truth frames
  - `TruthFrame` — authoritative value with cursor receipt

- `crates/warp-core/src/provenance_store.rs`
  - `ProvenanceStore` trait (seam for future wormholes)
  - `LocalProvenanceStore` — in-memory Vec-backed implementation
  - `HistoryError { HistoryUnavailable { tick }, WorldlineNotFound }`
  - `U0Ref = WarpId` (per correction #1)

**Engine Modifications (`engine_impl.rs`):**

- Add fields:

  ```rust
  warp_patches: BTreeMap<WarpId, Vec<WorldlineTickPatchV1>>,
  warp_expected: BTreeMap<WarpId, Vec<HashTriplet>>,
  warp_outputs: BTreeMap<WarpId, Vec<OutputFrameSet>>,
  ```

- 
Modify `commit_with_receipt` to project global ops → per-warp patches -- **Invariant:** `warp_patches[warp_id].len() == tick_history.len()` (even for no-ops) - -**Gate:** `cargo test -p warp-core --features delta_validate` - ---- - -### 🔲 Commit 3 — Warp-Local Apply + State Root + Cursor Seek + Verification - -**Add to `playback.rs`:** - -- `PlaybackCursor` struct with: - - `cursor_id`, `worldline_id`, `warp_id`, `tick`, `role`, `mode` - - `store: GraphStore` (owned, never shared) - - `pin_max_tick: u64` -- `PlaybackCursor::seek_to(target, provenance)`: - - If `target < tick`: rebuild from U0 (initial_state for warp) - - Apply patches `tick.. }` -- `ViewSession::subscribe(channel)`, `set_active_cursor(cursor)` - -**Truth Sink (minimal, per correction #4):** - -- `TruthSink { frames: BTreeMap>, receipts: BTreeMap> }` -- Helper: `collect_frames(session_id) -> &[TruthFrame]` — returns frames for a session -- Helper: `last_receipt(session_id) -> Option<&CursorReceipt>` — reads from the receipts map - -**PlaybackCursor::step():** - -- Implement `PlaybackMode` state machine -- `Paused` → no-op -- `Play` → Writer appends (BOAW), Reader consumes then pauses at frontier -- `StepForward` → advance one then `Paused` -- `StepBack` → seek(tick-1) then `Paused` -- `Seek { target, then }` → seek then apply `SeekThen` - -**Tests:** - -- T1: `writer_play_advances_and_records_outputs` -- T2: `step_forward_advances_one_then_pauses` -- T3: `paused_noop_even_with_pending_intents` -- T7: `truth_frames_are_cursor_addressed_and_authoritative` -- T9: `two_sessions_same_channel_different_cursors_receive_different_truth` -- T10: `session_cursor_switch_is_opaque_to_subscribers` -- T16: `worker_count_invariance_for_writer_advance` - -**Gate:** `cargo test -p warp-core --features delta_validate` + `cargo test -p echo-dind-harness` - ---- - -### 🔲 Commit 5 — Record Outputs Per Tick + Seek/Playback - -**Engine Modifications:** - -- On `commit_with_receipt`, after `bus.finalize()`: - - ```rust - 
let outputs: OutputFrameSet = mat_report.channels - .iter() - .map(|fc| (fc.channel, fc.data.clone())) - .collect(); - self.warp_outputs.entry(root_warp).or_default().push(outputs); - ``` - -**Demo Emission (per correction #5):** - -- Add deterministic test emission path so T1/T8 aren't vacuous -- Option A: Demo rule that emits to channel based on tick -- Option B: Compute outputs from state deterministically for tests - -**ViewSession Publishing:** - -- `publish_truth(cursor, provenance, sink)` sources from `provenance.outputs(worldline, tick)` - -**Tests:** - -- T4: `seek_moves_cursor_without_mutating_writer_store` -- T5: `step_back_is_seek_minus_one_then_pause` -- T6: `reader_play_consumes_existing_then_pauses_at_frontier` -- T8: `outputs_match_recorded_bytes_for_same_tick` -- T19-T22: MBUS v2 integration - -**Gate:** `cargo test -p warp-core --features delta_validate` - ---- - -### 🔲 Commit 6 — Reducer Semantics + Checkpoint Skeleton + Fork Stub - -**New File: `crates/warp-core/src/retention.rs`** - -```rust -pub enum RetentionPolicy { - KeepAll, - CheckpointEvery { k: u64 }, - KeepRecent { window: u64, checkpoint_every: u64 }, - ArchiveToWormhole { after: u64, checkpoint_every: u64 }, // seam only -} -``` - -**Checkpoint Skeleton:** - -- `LocalProvenanceStore::checkpoint(warp_id, tick, state)` — naive clone -- `checkpoint_before(worldline, tick)` for fast seek - -**Fork Stub:** - -- `LocalProvenanceStore::fork(source, fork_tick, new_id)` — prefix-copy - -**Tests:** - -- T11: `reducer_commutative_is_permutation_invariant_and_replayable` -- T12: `reducer_order_dependent_is_canonically_deterministic_and_replayable` -- T13: `reduced_channel_emits_single_authoritative_value_per_tick` -- T17: `checkpoint_replay_equals_full_replay` -- T18: `fork_worldline_diverges_after_fork_tick_without_affecting_original` - -**Gate:** `cargo test -p warp-core --features delta_validate` - ---- - -## Key Files Reference - -| File | Purpose | -| -------------------------------- | 
--------------------------------------------------------- | -| `materialization/frame.rs:1-255` | Pattern for MBUS encoding | -| `engine_impl.rs:967-1085` | `commit_with_receipt` — hook for per-warp projection | -| `tick_patch.rs:98-461` | `WarpOp`, `apply_to_state` — pattern for `apply_to_store` | -| `snapshot.rs:90-265` | `compute_state_root`, `compute_commit_hash_v2` | -| `graph.rs:16-486` | `GraphStore`, `canonical_state_hash` | - ---- - -## Invariants (from spec) - -- **WL-001 (Holography):** Given U0Ref + patches + canonical apply, any tick's state is reconstructible -- **WL-002 (Truth):** Given recorded outputs per tick, any tick's client-visible truth is reconstructible byte-for-byte -- **CUR-001:** Cursor never mutates worldline unless role is Writer and mode requires advance -- **CUR-002:** Cursor never executes rules when seeking; it applies recorded patches only -- **CUR-003:** After seek/apply, cursor verifies expected hashes byte-for-byte -- **OUT-001:** For `(worldline_id, tick, channel)`, value bytes are deterministic across runs/machines -- **OUT-002:** Playback at tick t reproduces the same TruthFrames recorded at tick t -- **STEP-001:** No store mutation while any GraphView borrow exists for that store -- **STEP-002:** Seeking never touches writer cursor store; only cursor.store diff --git a/docs/archive/plans/SPEC-0004-review-hitlist.md b/docs/archive/plans/SPEC-0004-review-hitlist.md deleted file mode 100644 index 8f9c2616..00000000 --- a/docs/archive/plans/SPEC-0004-review-hitlist.md +++ /dev/null @@ -1,149 +0,0 @@ - - - -# SPEC-0004 Self-Review Hit List - -**Date:** 2026-01-22 -**Branch:** `graph-boaw` -**Status:** Pre-PR review complete - ---- - -## Summary - -| Category | High | Medium | Low | Total | -| ------------- | ----- | ------ | ------ | ------ | -| Source Code | 0 | 7 | 36 | 43 | -| Test Code | 1 | 8 | 18 | 27 | -| Documentation | 0 | 3 | 8 | 11 | -| API Surface | 0 | 0 | 6 | 6 | -| **TOTAL** | **1** | **18** | **68** | **87** | - 
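Invariant WL-001 (holography) in the plan above reduces to pure replay: any tick's state is U0 plus the recorded patches applied in order. A toy model in which `State`, `Patch`, and `replay` are hypothetical stand-ins for the real worldline types:

```rust
use std::collections::BTreeMap;

/// Toy state: node id -> value.
type State = BTreeMap<u32, i64>;

/// Toy patch: set or delete a node (stand-in for a worldline tick patch).
enum Patch {
    Set(u32, i64),
    Delete(u32),
}

fn apply(state: &mut State, patch: &Patch) {
    match patch {
        Patch::Set(id, v) => {
            state.insert(*id, *v);
        }
        Patch::Delete(id) => {
            state.remove(id);
        }
    }
}

/// WL-001: reconstruct the state at `tick` from U0 plus the recorded patches.
fn replay(u0: &State, patches: &[Patch], tick: usize) -> State {
    let mut s = u0.clone();
    for p in &patches[..=tick] {
        apply(&mut s, p);
    }
    s
}
```

Checkpointing (T17) is then just caching `replay(u0, patches, k)` at some tick `k` so later seeks start closer to the target instead of replaying from U0.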
---- - -## HIGH Severity - -- [ ] **#53** Cross-file: Massive test helper duplication (~330 lines duplicated across 3 test files). `test_worldline_id`, `test_cursor_id`, `setup_worldline_with_ticks`, `create_add_node_patch`, etc. should be in `tests/common/mod.rs`. - ---- - -## MEDIUM Severity - -### Source Code - -- [ ] **#1** `playback.rs:314` — Long mid-function comment block contradicts itself ("Actually, let's clarify..."). Clean up or move to module-level docs. -- [ ] **#2** `playback.rs:394` — `StepForward` for writers returns `StepResult::Advanced` but does nothing (misleading stub). Should return `NoOp` or document clearly. -- [ ] **#3** `playback.rs:566` — `publish_truth` hash conversion is fragile (relies on `blake3::Hash` to `[u8;32]` via `into()`). Add explicit type annotation. -- [ ] **#4** `provenance_store.rs:204` — `add_checkpoint` silently no-ops if worldline doesn't exist. Should return error or log. -- [ ] **#5** `provenance_store.rs:189` — `append()` doesn't validate `global_tick` equals current length (gap risk). -- [ ] **#6** `retention.rs:47` — `ArchiveToWormhole` is "not implemented" but no compile-time warning when used. -- [ ] **#7** `frame_v2.rs:111` — `debug_assert!` for payload size check. Release builds silently produce invalid packets if payload exceeds `u32::MAX`. - -### Test Code - -- [ ] **#8** `view_session_tests.rs:713` — T16 tests conceptually belong in BOAW test file, not "view sessions". -- [ ] **#9** `view_session_tests.rs:726` — `make_touch_rule` closure duplicated between T16 and T16-shuffled (47 lines x 2). -- [ ] **#10** `view_session_tests.rs:873` — `XorShift64` + `shuffle` reimplemented inline (duplicates `common/mod.rs`). -- [ ] **#11** `outputs_playback_tests.rs:92` — `setup_worldline_with_ticks` duplicated verbatim across 3 files. -- [ ] **#12** `outputs_playback_tests.rs:698` — Direct field mutation `cursor.tick = 100` bypasses public API. 
-- [ ] **#13** `checkpoint_fork_tests.rs:59` — `create_add_node_patch` duplicated verbatim. -- [ ] **#14** `reducer_emission_tests.rs:1254` — `bus_log` is non-mut but calls `emit()`. Misleading if interior mutability. -- [ ] **#15** `view_session_tests.rs:317` — Helper functions block (~110 lines) duplicated across 3 test files. - -### Documentation - -- [ ] **#16** `architecture-outline.md:125` — Says "`TruthSink` trait" but it's actually a `struct`. -- [ ] **#17** `architecture-outline.md:128` — `RetentionPolicy` variants listed incorrectly (says "Archival", missing `CheckpointEvery`). -- [ ] **#18** `architecture-outline.md:121` — Potentially broken link path (`/spec/` vs relative). - ---- - -## LOW Severity - -### Source Code (LOW) - -- [ ] **#19** `playback.rs:264` — All `PlaybackCursor` fields are `pub` (risky for `store` field). -- [ ] **#20** `playback.rs:381` — Writer stub TODO not marked with `// TODO:` for grep-ability. -- [ ] **#21** `retention.rs:21` — Missing `#[non_exhaustive]` on `RetentionPolicy` enum. -- [ ] **#22** `worldline.rs:260` — `OutputFrameSet` type alias doesn't show docs in all IDE contexts. Consider newtype. -- [ ] **#23** `frame_v2.rs:149` — `decode_v2_packet` returns `Option` with no failure reason. Consider `Result<_, DecodeError>`. -- [ ] **#24** `frame_v2.rs:174` — Variable named `cursor` confusing given `CursorId` in crate. Rename to `offset`. -- [ ] **#25** `playback.rs:25` — `BTreeMap` imported at top but only used in `TruthSink`. Consider importing at point of use. -- [ ] **#26** `playback.rs:34` — `CursorId` and `SessionId` have identical `as_bytes` implementations. Consider macro/trait. -- [ ] **#27** `playback.rs:633` — `TruthSink::collect_frames` clones the entire Vec. Return `&[TruthFrame]` instead. -- [ ] **#28** `playback.rs:631` — Missing `#[must_use]` on `TruthSink::last_receipt`. -- [ ] **#29** `worldline.rs:145` — `#[allow(clippy::too_many_lines)]` on `apply_warp_op_to_store`. Consider refactoring. 
-- [ ] **#30** `worldline.rs:97` — Simple accessors (`global_tick()`, `policy_id()`) missing `#[inline]`. -- [ ] **#31** `worldline.rs:284` — `ApplyError::UnsupportedOperation` uses `&'static str`. Consider enum of op names. -- [ ] **#32** `provenance_store.rs:139` — `WorldlineHistory` is private but has doc comment. Consider removing. -- [ ] **#33** `provenance_store.rs:229` — `checkpoint()` does redundant `get_mut` after hash computation. -- [ ] **#34** `provenance_store.rs:254` — `#[allow(clippy::cast_possible_truncation)]` on `fork` needs safety comment. -- [ ] **#35** `provenance_store.rs:277` — Repeated `#[allow(clippy::cast_possible_truncation)]`. Consider module-level allow. -- [ ] **#36** `provenance_store.rs:317` — `checkpoint_before` returns `None` for non-existent worldline. Document behavior. -- [ ] **#37** `retention.rs:56` — `Default` impl could use `#[derive(Default)]` with `#[default]` attribute. -- [ ] **#38** `frame_v2.rs:102` — Multiple `#[allow(clippy::cast_possible_truncation)]` in `encode_v2_packet`. -- [ ] **#39** `frame_v2.rs:225` — `decode_v2_packets` creates subslice then re-checks length inside decode. Minor inefficiency. -- [ ] **#40** `playback.rs:559` — `publish_truth` error doc references `HistoryError` inconsistently. - -### Test Code (LOW) - -- [ ] **#41** `view_session_tests.rs:82` — Magic number `patch_digest: [tick as u8; 32]` wraps at tick > 255. -- [ ] **#42** `view_session_tests.rs:119` — Magic number `+100` offset for `commit_hash` unexplained. -- [ ] **#43** `view_session_tests.rs:145` — Magic number `10` for `pin_max_tick` not named. -- [ ] **#44** `view_session_tests.rs:719` — `WORKER_COUNTS` uses `[1,2,8,32]` vs `common` uses `[1,2,4,8,16,32]`. -- [ ] **#45** `view_session_tests.rs:232` — Loop count `5` is a magic number. -- [ ] **#46** `outputs_playback_tests.rs:3` — `#![allow(clippy::expect_fun_call)]` is file-wide. Scope to specific functions. 
-- [ ] **#47** `outputs_playback_tests.rs:427` — Magic number `k = 12u64` — why 12? -- [ ] **#48** `playback_cursor_tests.rs:21` — `test_cursor_id()` has different signature than other test files. Prevents extraction. -- [ ] **#49** `playback_cursor_tests.rs:256` — Unused variable `_hash_at_3` computed but never asserted. -- [ ] **#50** `playback_cursor_tests.rs:207` — "Tick 10 is valid" reasoning unclear. Document convention. -- [ ] **#51** `reducer_emission_tests.rs:29` — `key_sub as key` shadows 2-arg `key` function. Confusing. -- [ ] **#52** `reducer_emission_tests.rs:43` — `factorial` overflow guard uses `debug_assert!`. Use `assert!`. -- [ ] **#53** `reducer_emission_tests.rs:176` — Redundant re-assertion after loop (same check inside and after). -- [ ] **#54** `reducer_emission_tests.rs:539` — Double-finalization pattern (wasteful and confusing). -- [ ] **#55** `checkpoint_fork_tests.rs:9` — `#![allow(clippy::unwrap_used)]` is file-wide. -- [ ] **#56** `checkpoint_fork_tests.rs:135` — `cursor_tick = patch_index + 1` convention is fragile. -- [ ] **#57** Missing edge case tests: `pin_max_tick=0`, seek to `u64::MAX`, empty worldline, duplicate WorldlineId registration. -- [ ] **#58** `outputs_playback_tests.rs:623` — `unsubscribed_channel` variable name is redundant with test logic. - -### Documentation (LOW) - -- [ ] **#59** CHANGELOG claims "T19-T22" but these labels don't appear in test file names. -- [ ] **#60** `code-map.md` says "T1-T10 playback tests" but file has T1,T4,T5,T6,T7,T8 (not T2,T3,T9,T10). -- [ ] **#61** CHANGELOG `checkpoint()` description says "Create checkpoint" but function is `add_checkpoint`. -- [ ] **#62** CHANGELOG claims `WorldlineId` is "content-addressed" but tests use fixed bytes. - -### API Surface - -- [ ] **#63** `RetentionPolicy` exported but no public function accepts/returns it (dangling export). -- [ ] **#64** `apply_warp_op_to_store` exposes internal mutation without guardrails. 
-- [ ] **#65** `ApplyError` vs `ApplyResult` naming creates cognitive collision (different contexts). -- [ ] **#66** `compute_state_root_for_warp_store` newly public — low-level, easy to misuse. -- [ ] **#67** `CheckpointRef` exposed publicly but only meaningful in provenance context. -- [ ] **#68** `playback` module exports 11 types in a flat list. Consider sub-grouping in docs. - ---- - -## Recommended Fix Priority - -### P0 — Before PR - -- [ ] Fix #53 (HIGH): Extract shared test helpers to `tests/common/mod.rs` -- [ ] Fix #16-#18 (MEDIUM): Factual errors in `architecture-outline.md` -- [ ] Fix #9-#10 (MEDIUM): Use existing `common/` XorShift64/shuffle/make_touch_rule - -### P1 — Before Merge - -- [ ] Fix #1-#2 (MEDIUM): Clean up playback.rs stub and comments -- [ ] Fix #5 (MEDIUM): Add tick gap validation to `append()` -- [ ] Fix #7 (MEDIUM): Promote `debug_assert!` to runtime check in frame_v2 - -### P2 — Follow-up Issue - -- [ ] Fix #4 (MEDIUM): Error handling in `add_checkpoint` -- [ ] Fix #8 (MEDIUM): Move T16 to appropriate test file -- [ ] Fix #21 (LOW): Add `#[non_exhaustive]` to `RetentionPolicy` - -### P3 — Tech Debt - -- [ ] All remaining LOW severity items diff --git a/docs/archive/plans/cross-warp-parallelism.md b/docs/archive/plans/cross-warp-parallelism.md deleted file mode 100644 index 9ee8ff76..00000000 --- a/docs/archive/plans/cross-warp-parallelism.md +++ /dev/null @@ -1,106 +0,0 @@ - - - -# Cross-Warp Parallelism - -**Created:** 2026-01-20 -**Status:** IMPLEMENTED -**Archived:** 2026-03-07 (PR #292) -**Reason:** Feature fully implemented -**Implementation:** PR #257 (Phase 6B); see `crates/warp-core/src/boaw/exec.rs` -**Context:** Performance optimization — parallelize execution across warps - -> **Archival Note:** The implemented design deviates from this plan. The `WorkUnit` -> struct as built does NOT store an explicit `shard_id` field — shard identity is -> implicit in the items membership. 
See `crates/warp-core/src/boaw/exec.rs:259-281`
-> for the actual structure. This document preserves the original planning intent.
-
----
-
-## Problem Statement
-
-In `engine_impl.rs:1220`, warps are processed serially:
-
-```rust
-for (warp_id, warp_rewrites) in by_warp {
-    let view = GraphView::new(store); // borrows per-warp store
-    let deltas = execute_parallel_sharded(view, &items, workers);
-    all_deltas.extend(deltas);
-}
-```
-
-While `execute_parallel_sharded()` parallelizes _within_ each warp, multi-warp ticks
-still execute warp-by-warp. With N warps and S shards each, latency is O(N) rather
-than O(1) when parallelism is available.
-
----
-
-## Recommended Approach
-
-**Global work queue of `(warp_id, shard_id)` units** — flat parallelism, no nesting.
-
-1. **Partition rewrites by warp** — group by `WarpId`
-2. **Within each warp, partition into shards** — reuse existing `shard_of()` (256 shards)
-3. **Build work units** — `WorkUnit { warp_id, items: Vec<ExecItem> }` _(shard_id not stored; implicit in items membership)_
-4. **Spawn fixed worker pool** — `available_parallelism()` threads, spawned once
-5. **Atomic work claiming** — workers claim next unit via `AtomicUsize` index
-6. **Execute with warp-local view** — each unit resolves its warp's `GraphView`
-
-**Pros:** Scalable, clean, deterministic (canonical merge order), no API churn.
-**Cons:** Slightly more wiring than per-warp threading, but avoids nested spawns.
-
----
-
-## Constraints (Non-Negotiable)
-
-1. **No nested threading** — `execute_work_queue()` is the _only_ spawn site. Units
-   call serial execution internally, never `execute_parallel_sharded()`.
-
-2. **No long-lived borrows across warps** — worker loop must: resolve `GraphView`,
-   execute unit, drop view, move on. No caching `&GraphStore` across iterations.
-
-3. **Keep `ExecItem` unchanged** — `WorkUnit` carries `warp_id + Vec<ExecItem>`.
-   Do not widen `ExecItem`'s API surface.
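The fixed-pool, atomic-claim scheme in steps 4 and 5 can be sketched in isolation. This is a minimal stand-in, not the warp-core implementation: `items` holds plain `u64` payloads instead of real exec items, and "executing" a unit just sums them.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

type WarpId = u64; // stand-in for the real WarpId

struct WorkUnit {
    warp_id: WarpId,
    items: Vec<u64>, // stand-in payloads; the plan carries exec items here
}

/// Fixed worker pool; workers claim the next unit via a shared atomic
/// index, so each unit executes exactly once and nothing spawns nested.
fn execute_work_queue(units: &[WorkUnit], workers: usize) -> u64 {
    let next = AtomicUsize::new(0);
    let total = AtomicUsize::new(0);
    thread::scope(|s| {
        for _ in 0..workers {
            s.spawn(|| loop {
                let i = next.fetch_add(1, Ordering::Relaxed);
                let Some(unit) = units.get(i) else { break };
                // Real code would resolve this warp's GraphView here,
                // execute the unit serially, then drop the view.
                let _warp = unit.warp_id;
                let sum: u64 = unit.items.iter().sum();
                total.fetch_add(sum as usize, Ordering::Relaxed);
            });
        }
    });
    total.load(Ordering::Relaxed) as u64
}

fn main() {
    let units: Vec<WorkUnit> = (0..4)
        .map(|w| WorkUnit { warp_id: w, items: vec![w + 1; 10] })
        .collect();
    // The merged result must not depend on the worker count.
    assert_eq!(execute_work_queue(&units, 1), execute_work_queue(&units, 8));
}
```

Claim *order* is still scheduler-dependent; determinism of the final result comes from merging per-unit deltas in canonical unit order, not from claim order.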
- ---- - -## Implementation Steps - -| Step | Description | Files | -| ---- | ------------------------------------------------------ | -------------- | -| 1 | Add `WorkUnit { warp_id, items }` struct (no shard_id) | exec.rs | -| 2 | Add `build_work_units()` — partition by warp + shard | exec.rs | -| 3 | Add `execute_work_queue()` — atomic claim loop | exec.rs | -| 4 | Replace serial for-loop with `execute_work_queue()` | engine_impl.rs | -| 5 | Add `#[cfg(feature = "cross-warp-parallel")]` gate | Cargo.toml | - ---- - -## Files Modified - -| File | Change | -| ------------------------------------- | ------------------------------------------- | -| `crates/warp-core/src/boaw/exec.rs` | WorkUnit struct, build_work_units, executor | -| `crates/warp-core/src/engine_impl.rs` | Replace serial loop with work queue call | -| `crates/warp-core/Cargo.toml` | Feature gate (optional) | - ---- - -## Success Criteria - -- [x] Multi-warp tick executes all warp-shards concurrently -- [x] Fixed worker pool (no nested spawning) -- [x] Determinism preserved (canonical unit ordering + merge) -- [x] No regression on single-warp benchmarks - ---- - -## Minimal Success Test - -Integration test proving correctness: - -- **Setup:** 2 warps × many shards (e.g., 100 items per warp) -- **Worker counts:** `{1, 2, 8, 32}` — all must produce identical results -- **Assertion:** Same `commit_hash` per warp (or engine receipt hash) across all runs - -If this passes, the design is correct. 
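The worker-count sweep described above can be expressed as a self-contained harness. This sketch is hypothetical (stand-in `u64` payloads instead of real per-warp deltas and `commit_hash`es); what it demonstrates is that merging claimed results in canonical unit order makes the output identical for every worker count in `{1, 2, 8, 32}`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;
use std::thread;

// Stand-in "delta": (unit_index, payload). Real code would carry a
// TickDelta per unit and hash the merged result into a commit hash.
fn run_tick(units: &[u64], workers: usize) -> Vec<u64> {
    let next = AtomicUsize::new(0);
    let done: Mutex<Vec<(usize, u64)>> = Mutex::new(Vec::new());
    thread::scope(|s| {
        for _ in 0..workers {
            s.spawn(|| loop {
                let i = next.fetch_add(1, Ordering::Relaxed);
                let Some(&payload) = units.get(i) else { break };
                // Completion order varies with scheduling...
                done.lock().unwrap().push((i, payload * 2));
            });
        }
    });
    // ...so merge in canonical unit order before committing.
    let mut deltas = done.into_inner().unwrap();
    deltas.sort_by_key(|&(i, _)| i);
    deltas.into_iter().map(|(_, d)| d).collect()
}

fn main() {
    let units: Vec<u64> = (0..100).collect();
    let baseline = run_tick(&units, 1);
    for workers in [2, 8, 32] {
        // Worker-count invariance: identical merged result for every count.
        assert_eq!(run_tick(&units, workers), baseline);
    }
}
```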
diff --git a/docs/archive/plans/per-warp-time-sovereignty.md b/docs/archive/plans/per-warp-time-sovereignty.md
deleted file mode 100644
index a76eed26..00000000
--- a/docs/archive/plans/per-warp-time-sovereignty.md
+++ /dev/null
@@ -1,852 +0,0 @@
-
-
-
-# Per-Warp Time Sovereignty
-
-**Status:** Draft
-**Created:** 2026-01-20
-**Target:** Phase 7 (Post-BOAW)
-**Authors:** Claude (research agent)
-
-## Overview
-
-This plan defines how different WARPs can exist at different "now" positions within the same Engine step, safely and deterministically. This enables:
-
-- **Warp A** in LIVE mode (advancing tick frontier, ingesting new intents)
-- **Warp B** in REPLAY mode (replaying historical commits or applying recorded tick patches)
-- **Warp C** in PAUSED mode (no-op, frozen in time)
-
-All executing concurrently within one Engine step call.
-
----
-
-## 1. Current State
-
-### What Exists Today
-
-| Component                | Location                   | Current Capability                                                                  |
-| ------------------------ | -------------------------- | ----------------------------------------------------------------------------------- |
-| **WarpState**            | `warp_state.rs:43-46`      | `BTreeMap<WarpId, GraphStore>` - per-warp isolation via separate stores             |
-| **WorkUnit**             | `boaw/exec.rs:149-159`     | Carries `warp_id` explicitly - work units are warp-tagged                           |
-| **execute_work_queue()** | `boaw/exec.rs:192-282`     | Resolves `GraphView` per-unit from correct store via `resolve_store(&unit.warp_id)` |
-| **tick_history**         | `engine_impl.rs:424`       | `Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>` - **engine-global**, not per-warp   |
-| **jump_to_tick()**       | `engine_impl.rs:1581-1601` | Replays patches sequentially, but operates on **whole engine**                      |
-| **WarpTickPatchV1**      | `tick_patch.rs:324-461`    | `apply_to_state()` applies canonical ops to WarpState                               |
-| **Footprint isolation**  | `scheduler.rs:162-222`     | Keys include `warp_id` - cross-warp conflicts impossible by design                  |
-| **Commit DAG**           | `engine_impl.rs:1052-1056` | **Single linear chain** - 
parents from `last_snapshot` (global) | - -### What's Missing - -1. **No `WarpRunMode`** - no enum for LIVE/REPLAY/PAUSED -2. **No per-warp timeline** - tick history is engine-global -3. **No warp-local "now"** - no tracking of each warp's position in its timeline -4. **No mode-aware scheduling** - work queue doesn't filter by mode -5. **No REPLAY intent rejection** - no mechanism to block new intents for replaying warps -6. **No per-warp commit DAG** - single chain, not per-warp branches - ---- - -## 2. Constraints & Invariants - -### Non-Negotiable (Compile-Time or Hard Runtime Errors) - -| ID | Invariant | Enforcement | -| ------------------- | -------------------------------------------------------------------------------------------------- | ---------------------------------------- | -| **REPLAY-001** | REPLAY warps MUST NOT ingest new intents | Runtime check at `ingest_intent()` entry | -| **REPLAY-002** | REPLAY warps MUST only apply recorded patches (not execute rules) | Mode branch in step | -| **REPLAY-003** | All hashes (`commit_hash`, `patch_digest`, `state_root`) MUST match recorded history byte-for-byte | Post-apply verification | -| **REPLAY-004** | REPLAY execution MUST NOT depend on wall clock, random, or nondet | ADR-0006 ban list | -| **LIVE-001** | LIVE warps MAY ingest new intents | Default behavior | -| **LIVE-002** | LIVE execution deterministic given ingress | Existing guarantee | -| **LIVE-003** | LIVE warps MUST NOT read/write other warps' state | WarpId-scoped keys | -| **ISOLATION-001** | Each warp's timeline is independent | Per-warp `tick_history` | -| **ISOLATION-002** | No cross-warp `GraphView` aliasing during parallel execution | Per-unit resolution | -| **DETERMINISM-001** | Mixed-mode execution produces deterministic per-warp commit DAGs | Canonical merge | - -### Soft Invariants (Debug Assertions, Upgradable) - -| ID | Invariant | Enforcement | -| -------------- | ------------------------------------------ | 
--------------------- |
-| **PAUSED-001** | PAUSED warps produce zero work units       | Mode filter in build  |
-| **MODE-001**   | Mode transitions are explicit and recorded | API enforcement       |
-| **REPLAY-005** | REPLAY completion triggers mode transition | Configurable callback |
-
----
-
-## 3. Design
-
-### 3.1 Warp-Local "Now" Definition
-
-"Now" is not wall-clock time but a **position in the warp's commit DAG**.
-
-```rust
-/// The temporal position of a single warp within its own timeline.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub struct WarpNow {
-    /// The warp this position belongs to.
-    pub warp_id: WarpId,
-    /// Current tick index (0 = initial state U0, 1 = after first commit, etc.)
-    pub tick_index: u64,
-    /// The commit hash at this position (None for U0).
-    pub commit_hash: Option<Hash>,
-    /// Current execution mode.
-    pub mode: WarpRunMode,
-}
-```
-
-**Location**: `crates/warp-core/src/warp_timeline.rs`
-
-For a linear chain: `(warp_id, tick_index)` uniquely identifies the state.
-For future branching: `(warp_id, commit_hash)` would be the canonical form.
-
-### 3.2 WarpRunMode Model
-
-```rust
-/// Execution mode for a warp within an Engine step.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub enum WarpRunMode {
-    /// Normal operation: new intents allowed, rules execute, commits advance frontier.
-    Live,
-
-    /// Replaying recorded history: no new intents, only apply recorded patches.
-    Replay {
-        /// Target tick index to replay to (post-apply tick_index; patches 0..target_tick-1).
-        target_tick: u64,
-        /// Source of recorded patches for verification.
-        source: ReplaySource,
-    },
-
-    /// No-op: warp is excluded from this step entirely.
-    Paused,
-
-    /// (Future) Forking: create a new timeline branch from current position.
-    #[non_exhaustive]
-    _Reserved,
-}
-
-/// Source of recorded patches for REPLAY mode.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub enum ReplaySource {
-    /// Replay from engine's own ledger (local tick_history).
-    LocalLedger,
-    /// Replay from external patches (e.g., received from network peer).
-    External(Vec<WarpTickPatchV1>),
-}
-```
-
-**Design rationale**: Modes are **per-warp, not per-engine**. This allows warp A to advance (LIVE) while warp B replays (REPLAY) in the same `Engine.step()` call.
-
-### 3.3 Per-Warp Timeline Structure
-
-```rust
-/// Timeline state for a single warp.
-#[derive(Clone, Debug)]
-pub struct WarpTimeline {
-    /// Warp identifier.
-    pub warp_id: WarpId,
-    /// Current execution mode.
-    pub mode: WarpRunMode,
-    /// Complete tick history for this warp.
-    pub tick_history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>,
-    /// Most recent snapshot (tip of the DAG).
-    pub last_snapshot: Option<Snapshot>,
-    /// Initial state for replay (U0 for this warp).
-    pub initial_store: GraphStore,
-    /// Current position in timeline.
-    pub now: WarpNow,
-}
-
-impl WarpTimeline {
-    /// Get the current tick index (0 = U0, n = after n commits).
-    pub fn tick_index(&self) -> u64 {
-        self.tick_history.len() as u64
-    }
-
-    /// Check if this warp can accept new intents.
-    pub fn can_ingest(&self) -> bool {
-        matches!(self.mode, WarpRunMode::Live)
-    }
-
-    /// Get recorded patch at index (for REPLAY verification).
-    pub fn recorded_patch(&self, index: u64) -> Option<&WarpTickPatchV1> {
-        self.tick_history.get(index as usize).map(|(_, _, p)| p)
-    }
-}
-```
-
-### 3.4 REPLAY Invariants Enforcement
-
-```rust
-impl WarpTimeline {
-    /// Apply a replay step: verify and apply recorded patch.
-    pub fn replay_step(
-        &mut self,
-        store: &mut GraphStore,
-    ) -> Result<ReplayStepResult, ReplayError> {
-        let WarpRunMode::Replay { target_tick, ref source } = self.mode else {
-            return Err(ReplayError::NotInReplayMode);
-        };
-
-        let current_tick = self.tick_index();
-        // target_tick is the desired post-apply tick_index (number of patches applied).
-        // tick_index 0 = initial state; tick_index N = state after patches 0..N-1.
-        // So when current_tick >= target_tick, patches 0..target_tick-1 have been applied.
- if current_tick >= target_tick { - return Ok(ReplayStepResult::ReplayComplete); - } - - // Get recorded patch - let recorded = match source { - ReplaySource::LocalLedger => { - self.recorded_patch(current_tick) - .ok_or(ReplayError::MissingRecordedPatch { tick: current_tick })? - .clone() - } - ReplaySource::External(patches) => { - patches.get(current_tick as usize) - .ok_or(ReplayError::MissingRecordedPatch { tick: current_tick })? - .clone() - } - }; - - // Apply patch (no rule execution!) - recorded.apply_to_store(store)?; - - // Verify post-state matches recorded (REPLAY-003) - let post_state_root = compute_state_root_for_warp(store, &self.warp_id); - let (recorded_snapshot, _, _) = &self.tick_history[current_tick as usize]; - - if post_state_root != recorded_snapshot.state_root { - return Err(ReplayError::StateRootMismatch { - tick: current_tick, - expected: recorded_snapshot.state_root, - actual: post_state_root, - }); - } - - // Advance timeline position - self.now.tick_index = current_tick + 1; - self.now.commit_hash = Some(recorded_snapshot.hash); - - Ok(ReplayStepResult::Advanced { tick: current_tick + 1 }) - } -} - -#[derive(Debug)] -pub enum ReplayError { - NotInReplayMode, - MissingRecordedPatch { tick: u64 }, - StateRootMismatch { tick: u64, expected: Hash, actual: Hash }, - PatchDigestMismatch { tick: u64, expected: Hash, actual: Hash }, - CommitHashMismatch { tick: u64, expected: Hash, actual: Hash }, -} -``` - -### 3.5 LIVE Invariants Enforcement - -```rust -impl Engine { - /// Ingest intent with mode check (LIVE-001 enforced). 
- pub fn ingest_intent_for_warp( - &mut self, - warp_id: &WarpId, - intent_bytes: &[u8], - ) -> Result { - let timeline = self.timelines.get(warp_id) - .ok_or(EngineError::UnknownWarp(*warp_id))?; - - // REPLAY-001: Reject intents for non-LIVE warps - if !timeline.can_ingest() { - return Err(EngineError::WarpNotAcceptingIntents { - warp_id: *warp_id, - mode: timeline.mode.clone(), - }); - } - - // Proceed with normal ingestion - self.ingest_intent_impl(warp_id, intent_bytes) - } -} -``` - -### 3.6 Concurrency Safety Matrix - -| Data | Sharing Model | Rationale | -| ----------------------- | ---------------- | --------------------------------------------- | -| `GraphStore` per warp | **Isolated** | Each warp has own store in `WarpState.stores` | -| `WarpTimeline` per warp | **Isolated** | Each warp has own timeline, mode, history | -| `WorkUnit` | **Read-shared** | Built before execution, immutable during | -| `TickDelta` per worker | **Thread-local** | Each worker accumulates own delta | -| `RewriteRule` registry | **Read-shared** | Rules immutable after registration | -| Atomic work counter | **Shared** | `AtomicUsize` for work-stealing | -| Engine metadata | **Isolated** | Only one `&mut Engine` exists | - -**Key guarantee**: During `execute_work_queue()`, each worker: - -1. Claims a `WorkUnit` atomically -2. Resolves `GraphView` from correct warp's store (read-only) -3. Writes to thread-local `TickDelta` -4. Never touches another warp's store - -### 3.7 Global Work Queue with Mixed Modes - -```rust -/// Build work units respecting per-warp modes. 
-pub fn build_mixed_mode_work_units(
-    timelines: &BTreeMap<WarpId, WarpTimeline>,
-    live_rewrites: &BTreeMap>,
-) -> MixedModeWorkPlan {
-    let mut live_units = Vec::new();
-    let mut replay_warps = Vec::new();
-    let mut paused_warps = Vec::new();
-
-    for (warp_id, timeline) in timelines {
-        match &timeline.mode {
-            WarpRunMode::Live => {
-                // Build work units for LIVE warps (normal path)
-                if let Some(rewrites) = live_rewrites.get(warp_id) {
-                    let items: Vec<ExecItem> = rewrites.iter()
-                        .map(|(rw, exec)| ExecItem {
-                            exec: *exec,
-                            scope: rw.scope.local_id,
-                            origin: OpOrigin::default(),
-                        })
-                        .collect();
-                    let sharded = partition_into_shards(&items);
-                    for shard in sharded {
-                        if !shard.items.is_empty() {
-                            live_units.push(WorkUnit {
-                                warp_id: *warp_id,
-                                items: shard.items,
-                            });
-                        }
-                    }
-                }
-            }
-            WarpRunMode::Replay { .. } => {
-                replay_warps.push(*warp_id);
-            }
-            WarpRunMode::Paused => {
-                paused_warps.push(*warp_id);
-            }
-            WarpRunMode::_Reserved => unreachable!(),
-        }
-    }
-
-    MixedModeWorkPlan {
-        live_units,
-        replay_warps,
-        paused_warps,
-    }
-}
-
-pub struct MixedModeWorkPlan {
-    /// Work units for LIVE warps (rule execution).
-    pub live_units: Vec<WorkUnit>,
-    /// Warps in REPLAY mode (will apply recorded patches).
-    pub replay_warps: Vec<WarpId>,
-    /// Warps in PAUSED mode (no-op).
-    pub paused_warps: Vec<WarpId>,
-}
-```
-
-**Key insight**: REPLAY warps don't produce `ExecItem` work units because they don't execute rules - they apply pre-recorded patches. PAUSED warps produce nothing. Only LIVE warps generate rule execution work.
-
-### 3.8 Preventing Cross-Mode Contamination
-
-```rust
-impl Engine {
-    pub fn step_mixed_mode(&mut self) -> Result<MixedModeStepResult, EngineError> {
-        // 1. Build mode-aware work plan
-        let plan = build_mixed_mode_work_units(&self.timelines, &self.pending_by_warp);
-
-        // 2. Execute LIVE warps (parallel rule execution)
-        let live_deltas = if !plan.live_units.is_empty() {
-            execute_work_queue(&plan.live_units, self.worker_count, |warp_id| {
-                self.state.store(warp_id)
-            })
-        } else {
-            Vec::new()
-        };
-
-        // 3. 
Merge and commit LIVE deltas (per-warp)
-        let mut live_results = BTreeMap::new();
-        for warp_id in plan.live_units.iter().map(|u| u.warp_id).collect::<BTreeSet<_>>() {
-            let warp_delta = self.extract_warp_delta(&live_deltas, &warp_id);
-            let result = self.commit_warp(&warp_id, warp_delta)?;
-            live_results.insert(warp_id, result);
-        }
-
-        // 4. Execute REPLAY warps (apply recorded patches, verify hashes)
-        let mut replay_results = BTreeMap::new();
-        for warp_id in &plan.replay_warps {
-            let timeline = self.timelines.get_mut(warp_id).unwrap();
-            let store = self.state.store_mut(warp_id).unwrap();
-            let result = timeline.replay_step(store)?;
-            replay_results.insert(*warp_id, result);
-        }
-
-        // 5. PAUSED warps: no-op
-
-        Ok(MixedModeStepResult {
-            live_results,
-            replay_results,
-            paused: plan.paused_warps,
-        })
-    }
-}
-```
-
-### 3.9 Per-Warp Commit with Isolated Timeline
-
-```rust
-impl Engine {
-    fn commit_warp(
-        &mut self,
-        warp_id: &WarpId,
-        delta: TickDelta,
-    ) -> Result<WarpCommitResult, EngineError> {
-        let timeline = self.timelines.get_mut(warp_id)
-            .ok_or(EngineError::UnknownWarp(*warp_id))?;
-        let store = self.state.store_mut(warp_id)
-            .ok_or(EngineError::UnknownWarp(*warp_id))?;
-
-        // Build patch from delta
-        let patch = WarpTickPatchV1::from_delta(delta, self.policy_id)?;
-
-        // Apply patch
-        patch.apply_to_store(store)?;
-
-        // Compute hashes
-        let state_root = compute_state_root_for_warp(store, warp_id);
-        let patch_digest = patch.digest();
-        let parents = timeline.last_snapshot
-            .as_ref()
-            .map(|s| vec![s.hash])
-            .unwrap_or_default();
-        let commit_hash = compute_commit_hash_v2(
-            state_root,
-            &parents,
-            patch_digest,
-            self.policy_id,
-        );
-
-        // Build snapshot
-        let snapshot = Snapshot {
-            warp_id: *warp_id,
-            hash: commit_hash,
-            state_root,
-            patch_digest,
-            parents,
-            // ... other fields
-        };
-
-        // Record in warp's timeline (NOT global!)
-        let receipt = self.build_receipt_for_warp(warp_id)?;
-        timeline.tick_history.push((snapshot.clone(), receipt, patch));
-        timeline.last_snapshot = Some(snapshot.clone());
-        timeline.now.tick_index += 1;
-        timeline.now.commit_hash = Some(commit_hash);
-
-        Ok(WarpCommitResult {
-            snapshot,
-            tick_index: timeline.now.tick_index,
-        })
-    }
-}
-```
-
----
-
-## 4. Implementation Plan
-
-### Phase 1: Core Types (1 commit)
-
-**Files to create/modify:**
-
-| Action     | File                                    | Changes                                                                 |
-| ---------- | --------------------------------------- | ----------------------------------------------------------------------- |
-| **NEW**    | `crates/warp-core/src/warp_timeline.rs` | `WarpNow`, `WarpRunMode`, `ReplaySource`, `WarpTimeline`, `ReplayError` |
-| **MODIFY** | `crates/warp-core/src/lib.rs`           | Export new module                                                       |
-
-**Tests**: Unit tests for `WarpRunMode` transitions, `WarpTimeline` basic ops.
-
-### Phase 2: Per-Warp Timeline Storage (1 commit)
-
-**Files to modify:**
-
-| Action     | File                                  | Changes                                                                                   |
-| ---------- | ------------------------------------- | ----------------------------------------------------------------------------------------- |
-| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Add `timelines: BTreeMap<WarpId, WarpTimeline>` field; migrate `tick_history` to per-warp |
-| **MODIFY** | `crates/warp-core/src/warp_state.rs`  | Add `timeline()` accessor                                                                 |
-
-**Tests**: Verify existing tests pass with new storage layout.
- -### Phase 3: Mode-Aware Intent Ingestion (1 commit) - -**Files to modify:** - -| Action | File | Changes | -| ---------- | ------------------------------------- | ---------------------------------------------------------------------------------------------- | -| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Check `timeline.can_ingest()` in `ingest_intent()`; add `EngineError::WarpNotAcceptingIntents` | - -**Tests**: - -- `test_replay_warp_rejects_new_intents` -- `test_paused_warp_rejects_new_intents` -- `test_live_warp_accepts_intents` - -### Phase 4: Mode-Aware Work Queue (1 commit) - -**Files to modify:** - -| Action | File | Changes | -| ---------- | ------------------------------------- | -------------------------------------------------------- | -| **MODIFY** | `crates/warp-core/src/boaw/exec.rs` | Add `MixedModeWorkPlan`, `build_mixed_mode_work_units()` | -| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Implement `step_mixed_mode()` | - -**Tests**: - -- `test_live_warp_generates_work_units` -- `test_replay_warp_no_work_units` -- `test_paused_warp_no_work_units` - -### Phase 5: REPLAY Patch Application (1 commit) - -**Files to modify:** - -| Action | File | Changes | -| ---------- | --------------------------------------- | ---------------------------------------------------------- | -| **MODIFY** | `crates/warp-core/src/warp_timeline.rs` | Implement `WarpTimeline::replay_step()`, hash verification | -| **MODIFY** | `crates/warp-core/src/tick_patch.rs` | Add `apply_to_store()` (single-warp variant) | - -**Tests**: - -- `test_replay_applies_recorded_patches` -- `test_replay_detects_state_root_mismatch` -- `test_replay_detects_commit_hash_mismatch` - -### Phase 6: Per-Warp Commit (1 commit) - -**Files to modify:** - -| Action | File | Changes | -| ---------- | ------------------------------------- | ----------------------------------------------------- | -| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Implement `commit_warp()`, per-warp 
snapshot creation | -| **MODIFY** | `crates/warp-core/src/snapshot.rs` | Add `warp_id` to `Snapshot` (or make warp-scoped) | - -**Tests**: - -- `test_commit_advances_warp_timeline` -- `test_commit_hash_deterministic_per_warp` - -### Phase 7: Engine API Surface (1 commit) - -**Files to modify:** - -| Action | File | Changes | -| ---------- | ------------------------------------- | --------------------------------------------------------- | -| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Add `set_warp_mode()`, `get_warp_now()`, `start_replay()` | - -**Tests**: - -- `test_set_warp_mode_live_to_replay` -- `test_set_warp_mode_replay_to_paused` -- `test_start_replay_from_tick_zero` - -### Phase 8: Integration Tests (1 commit) - -**Files to create:** - -| Action | File | Changes | -| ------- | ------------------------------------------------- | --------------------------- | -| **NEW** | `crates/warp-core/tests/warp_time_sovereignty.rs` | Full integration test suite | - ---- - -## 5. Test Plan - -### File: `crates/warp-core/tests/warp_time_sovereignty.rs` - -| Test ID | Name | Description | -| ------- | ------------------------------------------- | ---------------------------------------------------------------------------------------- | -| **T1** | `test_live_and_replay_concurrent_isolation` | LIVE warp advances while REPLAY warp replays; neither affects the other | -| **T2** | `test_replay_hash_chain_identity` | REPLAY produces identical `commit_hash` chains to recorded history (100 ticks, 10 seeds) | -| **T3** | `test_live_worker_invariance_with_replay` | LIVE worker-count invariance holds during concurrent REPLAY | -| **T4** | `test_replay_rejects_intents` | REPLAY warp returns `Err(WarpNotAcceptingIntents)` on intent ingestion | -| **T5** | `test_mixed_mode_work_queue_determinism` | Mixed-mode execution deterministic across 50 shuffled ingress orderings | -| **T6** | `test_replay_tripwire_nondet_injection` | **Tripwire**: fails if any nondet input leaks into 
REPLAY mode | -| **T7** | `test_replay_completion_mode_transition` | REPLAY completion transitions mode to PAUSED (or LIVE if configured) | -| **T8** | `test_multiple_replay_warps_isolation` | Multiple REPLAY warps don't interfere with each other | -| **T9** | `test_cross_mode_commit_dag_independence` | LIVE and REPLAY warps have completely independent commit DAGs | -| **T10** | `test_paused_warp_state_immutable` | PAUSED warp state completely unchanged across 10 engine steps | - -### Test Implementation Details - -```rust -/// T1: LIVE warp advances while REPLAY warp replays - mutual isolation. -#[test] -fn test_live_and_replay_concurrent_isolation() { - // Setup: Engine with warp_a (LIVE) and warp_b (REPLAY) - // 1. Record 10 ticks for warp_b - // 2. Rewind warp_b to tick 0, set REPLAY mode targeting tick 5 - // 3. Set warp_a to LIVE - // 4. Execute 5 mixed-mode steps - // Assert: warp_a advanced 5 ticks, warp_b replayed to tick 5 - // Assert: warp_a's commit hashes are new, warp_b's match recorded history - // Assert: Neither warp's state was corrupted by the other -} - -/// T6: Tripwire - nondet leak into REPLAY fails. -#[test] -fn test_replay_tripwire_nondet_injection() { - // Setup: Custom rule that attempts to inject nondet (e.g., thread::current().id()) - // Record with clean rule, then replay - // Assert: If any nondet leaks, state_root mismatch detected - // This test FAILS if nondet enters replay path - that's the tripwire -} -``` - -### Additional Test Files - -| File | Purpose | -| ---------------------------------------------- | ------------------------------------------------------------- | -| `crates/warp-core/tests/replay_determinism.rs` | Permutation tests (100+ seeds), cross-platform verification | -| `crates/warp-core/tests/mode_transitions.rs` | Valid/invalid mode transitions, transition during active step | - ---- - -## 6. Engine API Surface - -### New Methods - -```rust -impl Engine { - /// Set the execution mode for a warp. 
-    ///
-    /// # Errors
-    /// - `UnknownWarp` if warp_id not found
-    /// - `InvalidModeTransition` if transition not allowed (e.g., during active step)
-    pub fn set_warp_mode(
-        &mut self,
-        warp_id: WarpId,
-        mode: WarpRunMode
-    ) -> Result<(), EngineError>;
-
-    /// Get the current temporal position of a warp.
-    pub fn get_warp_now(&self, warp_id: &WarpId) -> Option<&WarpNow>;
-
-    /// Start replay for a warp from its current position to target_tick.
-    ///
-    /// This is a convenience wrapper that:
-    /// 1. Validates target_tick is in recorded history
-    /// 2. Sets mode to Replay { target_tick, source: LocalLedger }
-    pub fn start_replay(
-        &mut self,
-        warp_id: WarpId,
-        target_tick: u64
-    ) -> Result<(), EngineError>;
-
-    /// Start replay from external patches (e.g., received from network).
-    pub fn start_replay_external(
-        &mut self,
-        warp_id: WarpId,
-        patches: Vec<WarpTickPatchV1>,
-    ) -> Result<(), EngineError>;
-
-    /// Execute one engine step with mixed-mode support.
-    pub fn step_mixed_mode(&mut self) -> Result<MixedModeStepResult, EngineError>;
-
-    /// Get all warps and their current modes.
-    pub fn warp_modes(&self) -> impl Iterator;
-}
-```
-
-### Result Types
-
-```rust
-pub struct MixedModeStepResult {
-    /// Results for warps that were in LIVE mode.
-    pub live_results: BTreeMap<WarpId, WarpCommitResult>,
-    /// Results for warps that were in REPLAY mode.
-    pub replay_results: BTreeMap<WarpId, ReplayStepResult>,
-    /// Warps that were PAUSED (no-op).
-    pub paused: Vec<WarpId>,
-}
-
-pub struct WarpCommitResult {
-    pub snapshot: Snapshot,
-    pub tick_index: u64,
-}
-
-pub enum ReplayStepResult {
-    /// Advanced to the next tick.
-    Advanced { tick: u64 },
-    /// Reached target_tick, replay complete.
-    ReplayComplete,
-}
-```
-
----
-
-## 7. Scheduling Rules
-
-### Which Warps Run
-
-1. **LIVE warps**: Generate work units if they have pending rewrites
-2. **REPLAY warps**: Apply one recorded patch per step (no work units)
-3. 
**PAUSED warps**: Skipped entirely (no state change)
-
-### Mode Determination Per Step
-
-```text
-For each warp in canonical order (BTreeMap iteration):
-    match warp.mode:
-        Live →
-            if has_pending_rewrites(warp):
-                generate WorkUnits for this warp
-            else:
-                no-op this step
-
-        Replay { target_tick, source } →
-            if warp.tick_index < target_tick:
-                schedule replay_step for this warp
-            else:
-                replay complete, transition to PAUSED (or callback)
-
-        Paused →
-            no-op
-```
-
-### Work Queue Execution Order
-
-1. All LIVE work units execute in parallel (existing `execute_work_queue`)
-2. LIVE deltas merged and committed per-warp
-3. REPLAY warps apply patches sequentially (per-warp, can be parallelized across warps)
-4. PAUSED warps skipped
-
----
-
-## 8. Time Travel / Rewind / Replay Selection
-
-### Required Inputs for REPLAY
-
-| Input              | Source                                            | Required |
-| ------------------ | ------------------------------------------------- | -------- |
-| `warp_id`          | Caller                                            | Yes      |
-| `target_tick`      | Caller                                            | Yes      |
-| `ReplaySource`     | Caller chooses                                    | Yes      |
-| Recorded patches   | `LocalLedger` or `External(Vec<WarpTickPatchV1>)` | Yes      |
-| Initial state (U0) | Stored in `WarpTimeline.initial_store`            | Auto     |
-
-### Rewind Mechanism
-
-```rust
-impl WarpTimeline {
-    /// Rewind warp to tick 0 (U0 state) for replay.
-    pub fn rewind_to_origin(&mut self, store: &mut GraphStore) {
-        // Clone initial state back to active store
-        *store = self.initial_store.clone();
-        self.now.tick_index = 0;
-        self.now.commit_hash = None;
-        // Note: tick_history preserved for replay source
-    }
-
-    /// Rewind to specific tick (requires re-applying patches 0..tick).
-    pub fn rewind_to_tick(
-        &mut self,
-        store: &mut GraphStore,
-        tick: u64
-    ) -> Result<(), ReplayError> {
-        self.rewind_to_origin(store);
-        for i in 0..tick {
-            let patch = self.recorded_patch(i)
-                .ok_or(ReplayError::MissingRecordedPatch { tick: i })?;
-            patch.apply_to_store(store)?;
-        }
-        self.now.tick_index = tick;
-        self.now.commit_hash = if tick == 0 {
-            None
-        } else {
-            self.tick_history.get(tick as usize - 1)
-                .map(|(s, _, _)| s.hash)
-        };
-        Ok(())
-    }
-}
-```
-
----
-
-## 9. Risks & Mitigations
-
-| Risk                                                  | Severity     | Mitigation                                                                            |
-| ----------------------------------------------------- | ------------ | ------------------------------------------------------------------------------------- |
-| **Per-warp timeline storage increases memory**        | Medium       | Use structural sharing for `GraphStore` snapshots; only store deltas                  |
-| **REPLAY hash mismatch debugging is hard**            | Medium       | Include tick index, expected vs actual hashes, and delta dump in `ReplayError`        |
-| **Mode transition race conditions**                   | Low          | Mode changes only allowed between steps; enforce via `&mut Engine`                    |
-| **Future forking complicates commit DAG**             | Low (future) | Design `parents: Vec<Hash>` now; collapse/merge is separate feature                   |
-| **Cross-warp portal operations during mixed modes**   | Medium       | Portal creation in LIVE warp that targets REPLAY warp must be blocked; add validation |
-| **Global `policy_id` shared across warps**            | Low          | Acceptable for now; future per-warp policy is out of scope                            |
-| **REPLAY from external source has no chain of trust** | Medium       | External `ReplaySource` should require signature verification (future enhancement)    |
-
----
-
-## 10. Out of Scope (Future Work)
-
-1. **Per-warp forking/branching** - Commit DAG supports multiple parents but implementation deferred
-2. **Collapse/merge across warps** - ADR-0007 Layer 7 specified but not implemented here
-3. **Privacy mode per-warp** - Mind vs Diagnostics modes are engine-global currently
-4. 
**Network-sourced REPLAY verification** - Signature verification for `ReplaySource::External` -5. **Per-warp policy_id** - All warps share engine's `policy_id` - ---- - -## 11. Summary - -Per-warp time sovereignty is achievable with **minimal, composable changes** because the existing architecture already enforces per-warp isolation via: - -- `WarpId`-scoped keys in footprints -- Per-unit `GraphView` resolution in `execute_work_queue()` -- Separate `GraphStore` per warp in `WarpState` - -The main additions are: - -1. **WarpRunMode enum** - explicit mode tracking per warp -2. **Per-warp timeline storage** - migrate global `tick_history` to per-warp -3. **Mode-aware scheduling** - filter work queue by mode -4. **REPLAY enforcement** - apply recorded patches instead of executing rules - -The design **preserves all existing determinism guarantees** and **does not block future forking/collapse features**. - ---- - -## Appendix A: File Change Summary - -| File | Action | LOC Estimate | -| ------------------------------------------------- | ------- | ------------ | -| `crates/warp-core/src/warp_timeline.rs` | **NEW** | ~300 | -| `crates/warp-core/src/lib.rs` | MODIFY | +5 | -| `crates/warp-core/src/engine_impl.rs` | MODIFY | +200 | -| `crates/warp-core/src/boaw/exec.rs` | MODIFY | +50 | -| `crates/warp-core/src/tick_patch.rs` | MODIFY | +30 | -| `crates/warp-core/src/snapshot.rs` | MODIFY | +10 | -| `crates/warp-core/src/warp_state.rs` | MODIFY | +20 | -| `crates/warp-core/tests/warp_time_sovereignty.rs` | **NEW** | ~400 | -| `crates/warp-core/tests/replay_determinism.rs` | **NEW** | ~200 | -| `crates/warp-core/tests/mode_transitions.rs` | **NEW** | ~150 | -| **Total** | | ~1365 | - ---- - -## Appendix B: Compile-Time vs Runtime Enforcement - -| Invariant | Enforcement | Mechanism | -| --------------------------- | ---------------- | ------------------------------------------- | -| REPLAY-001 (no intents) | **Runtime** | Check in `ingest_intent()` | -| REPLAY-002 (patches 
only) | **Runtime** | Mode branch in `step_mixed_mode()` | -| REPLAY-003 (hash match) | **Runtime** | Post-apply verification | -| REPLAY-004 (no nondet) | **Compile-time** | ADR-0006 ban list + `ban-nondeterminism.sh` | -| LIVE-003 (no cross-warp) | **Compile-time** | `WarpId` in all key types | -| ISOLATION-002 (no aliasing) | **Runtime** | Per-unit `GraphView` resolution | -| DETERMINISM-001 | **Runtime** | Canonical merge in `merge_deltas()` | diff --git a/docs/archive/release-criteria.md b/docs/archive/release-criteria.md deleted file mode 100644 index 817f6b67..00000000 --- a/docs/archive/release-criteria.md +++ /dev/null @@ -1,36 +0,0 @@ - - - -# Release Criteria — Phase 0.5 → Phase 1 - -Checklist for closing Phase 0.5 and starting Phase 1 implementation. - -## How to Use This Checklist - -- Treat each item as a gate: “done” means it is implemented **and** verified. -- Link evidence (tests, docs, or CI runs) in the Phase 0.5 tracking issue. -- If a requirement moves, update the checklist so it stays authoritative. - -## Required Criteria - -- [ ] Branch tree spec v0.5 implemented (roaring bitmaps, epochs, hashing). -- [ ] Codex’s Baby Phase 0.5 features implemented (event envelope, bridge, backpressure). -- [ ] Temporal bridge integrated with branch tree and CB. -- [ ] Serialization protocol implemented with content-addressed blocks. -- [ ] Replay CLI (`echo replay --verify`) passes golden hash suite. -- [ ] Entropy observers and inspector packets verified. -- [ ] Capability tokens and security envelopes enforced. -- [ ] Determinism test suite green on Node, Chromium, WebKit. -- [ ] Deterministic config loader produces `configHash`. -- [ ] Plugin manifest loader validates capabilities and records `pluginsManifestHash`. -- [ ] Inspector JSONL writer produces canonical frames. -- [ ] Documentation index current (spec map). - -## Evidence Expectations (Examples) - -- Determinism suite: CI logs or `echo-dind-harness` transcript. 
-- Replay CLI: golden hashes checked in `testdata/` with a reproducible runner. -- Protocol gates: a spec doc + a passing conformance test. -- Docs: `docs/meta/docs-index.md` updated with links to current specs. - -Once all items checked, open Phase 1 milestone and migrate outstanding tasks to implementation backlog. diff --git a/docs/archive/rfc/mat-bus-finish.md b/docs/archive/rfc/mat-bus-finish.md deleted file mode 100644 index 1202acd4..00000000 --- a/docs/archive/rfc/mat-bus-finish.md +++ /dev/null @@ -1,662 +0,0 @@ - - - - -# RFC: MaterializationBus Completion - -**Status:** Complete -**Date:** 2026-01-17 -**Branch:** `materialization-bus` -**Depends on:** ADR-0003-Materialization-Bus - -## Summary - -This RFC completes the MaterializationBus implementation with three deliverables: - -1. **EmissionPort trait** — Hexagonal boundary for rule emissions -2. **ReduceOp enum** — Built-in deterministic reduce operations (no user functions) -3. **Cross-platform determinism tests** — GitHub Actions + DIND harness - ---- - -## 1. EmissionPort Trait (Hexagonal Architecture) - -### Problem - -The current plan passes `&MaterializationBus` directly to rule executors. This: - -- Couples rules to concrete implementation -- Exposes internal `EmitKey` construction to callers -- Makes testing harder (can't mock the bus) -- Violates hexagonal/ports-and-adapters principles - -### Solution - -Introduce an `EmissionPort` trait as the driven port. Rules depend on the trait; the engine provides a scoped adapter. - -```rust -// crates/warp-core/src/materialization/emission_port.rs - -/// Driven port for rule emissions (what rules see). -/// -/// Rules emit to channels via this trait. The engine provides a scoped -/// implementation that automatically constructs EmitKeys from execution context. -pub trait EmissionPort { - /// Emit data to a channel. - /// - /// The implementation handles EmitKey construction. Callers only provide - /// channel and payload. 
-    fn emit(&self, channel: ChannelId, data: Vec<u8>);
-
-    /// Emit with explicit subkey (for multi-emission rules).
-    ///
-    /// Use when a single rule invocation needs to emit multiple values to
-    /// the same channel. The subkey disambiguates emissions.
-    fn emit_with_subkey(&self, channel: ChannelId, subkey: u32, data: Vec<u8>);
-}
-```
-
-### Scoped Adapter
-
-The engine creates a `ScopedEmitter` for each rule execution:
-
-```rust
-// crates/warp-core/src/materialization/scoped_emitter.rs
-
-/// Scoped adapter that auto-fills EmitKey from execution context.
-///
-/// Created by the engine for each rule invocation. Captures the scope hash
-/// and rule ID, preventing rules from forging keys.
-pub struct ScopedEmitter<'a> {
-    bus: &'a MaterializationBus,
-    scope_hash: Hash,
-    rule_id: u32,
-}
-
-impl<'a> ScopedEmitter<'a> {
-    /// Create a new scoped emitter for a rule execution.
-    pub fn new(bus: &'a MaterializationBus, scope_hash: Hash, rule_id: u32) -> Self {
-        Self { bus, scope_hash, rule_id }
-    }
-}
-
-impl EmissionPort for ScopedEmitter<'_> {
-    fn emit(&self, channel: ChannelId, data: Vec<u8>) {
-        let key = EmitKey::new(self.scope_hash, self.rule_id);
-        self.bus.emit(channel, key, data);
-    }
-
-    fn emit_with_subkey(&self, channel: ChannelId, subkey: u32, data: Vec<u8>) {
-        let key = EmitKey::with_subkey(self.scope_hash, self.rule_id, subkey);
-        self.bus.emit(channel, key, data);
-    }
-}
-```
-
-### Engine Integration
-
-```rust
-// In Engine::execute_rule() or similar
-
-let emitter = ScopedEmitter::new(&self.bus, scope_node.hash(), rule.id());
-rule.execute(context, &emitter)?;
-```
-
-### Testing
-
-Rules can be tested with a mock port:
-
-```rust
-#[cfg(test)]
-struct MockEmissionPort {
-    emissions: RefCell<Vec<(ChannelId, Vec<u8>)>>,
-}
-
-impl EmissionPort for MockEmissionPort {
-    fn emit(&self, channel: ChannelId, data: Vec<u8>) {
-        self.emissions.borrow_mut().push((channel, data));
-    }
-    // ...
-}
-```
-
-### Duplicate EmitKey Rejection
-
-**Policy: Reject duplicate (channel, EmitKey) pairs.
Always.**
-
-If a rule emits twice to the same channel with the same EmitKey, the bus returns
-a `DuplicateEmission` error. This catches rules that iterate non-deterministic
-sources (e.g., `HashMap`) without proper subkey differentiation.
-
-```rust
-/// Error returned when the same (channel, EmitKey) is emitted twice.
-#[derive(Debug, Clone, PartialEq, Eq)]
-pub struct DuplicateEmission {
-    pub channel: ChannelId,
-    pub key: EmitKey,
-}
-
-impl MaterializationBus {
-    /// Emit data to a channel. Returns error if key already exists.
-    pub fn emit(
-        &self,
-        channel: ChannelId,
-        key: EmitKey,
-        data: Vec<u8>,
-    ) -> Result<(), DuplicateEmission> {
-        use std::collections::btree_map::Entry;
-
-        let mut pending = self.pending.borrow_mut();
-        let channel_map = pending.entry(channel).or_default();
-
-        match channel_map.entry(key) {
-            Entry::Vacant(e) => {
-                e.insert(data);
-                Ok(())
-            }
-            Entry::Occupied(_) => Err(DuplicateEmission { channel, key }),
-        }
-    }
-}
-```
-
-**Why reject even if payloads are identical?**
-
-Allowing "identical payload = OK" encourages sloppy code that emits redundantly.
-Then someone changes a field and tests fail mysteriously. Rejecting always forces
-rule authors to think: "Am I iterating deterministically? Do I need unique subkeys?"
-
-### Files to Create/Modify
-
-| File                                                     | Action                                            |
-| -------------------------------------------------------- | ------------------------------------------------- |
-| `crates/warp-core/src/materialization/emission_port.rs`  | **Create** — trait definition                     |
-| `crates/warp-core/src/materialization/scoped_emitter.rs` | **Create** — adapter implementation               |
-| `crates/warp-core/src/materialization/mod.rs`            | **Modify** — export new types                     |
-| `crates/warp-core/src/materialization/bus.rs`            | **Modify** — add DuplicateEmission, update emit() |
-| `crates/warp-core/src/engine.rs` (or equivalent)         | **Modify** — create ScopedEmitter per rule        |
-
----
-
-## 2.
ReduceOp Enum (Built-in Deterministic Ops) - -### Problem - -The current `ChannelPolicy::Reduce { join_fn_id }` design assumes a join function registry where users register merge functions by ID. This is a determinism landmine: - -- User functions may not be commutative/associative -- Function lookup adds indirection and potential for error -- Can't verify correctness at compile time -- Opens door to non-deterministic user code - -### Solution - -Replace `join_fn_id` with a closed enum of built-in reduce operations. - -**IMPORTANT: Not all reduce ops are commutative.** They fall into two categories: - -| Category | Ops | Property | -| ----------------------- | -------------------------------------- | ------------------------------------------------ | -| **Commutative Monoids** | `Sum`, `Max`, `Min`, `BitOr`, `BitAnd` | Order doesn't matter: `a ⊕ b = b ⊕ a` | -| **Order-Dependent** | `First`, `Last`, `Concat` | Deterministic via EmitKey order, NOT commutative | - -Both categories are **deterministic** (same inputs → same output), but only commutative ops are **permutation-invariant** at the value level. Order-dependent ops rely on the canonical EmitKey ordering. - -```rust -// crates/warp-core/src/materialization/reduce_op.rs - -/// Built-in reduce operations for channel coalescing. -/// -/// # Algebraic Categories -/// -/// **Commutative monoids** (permutation-invariant): -/// - `Sum`, `Max`, `Min`, `BitOr`, `BitAnd` -/// - Result is identical regardless of emission order -/// -/// **Order-dependent** (deterministic via EmitKey order): -/// - `First`, `Last`, `Concat` -/// - Result depends on canonical EmitKey ordering -/// - NOT commutative — do not claim they are! -#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] -pub enum ReduceOp { - // ─── COMMUTATIVE MONOIDS ─────────────────────────────────────────── - - /// Sum all values as little-endian u64. - /// Empty input → `[0u8; 8]` (zero). - Sum, - - /// Take maximum value (lexicographic byte comparison). 
- /// Empty input → `[]` (empty vec). - Max, - - /// Take minimum value (lexicographic byte comparison). - /// Empty input → `[]` (empty vec). - Min, - - /// Bitwise OR all values. - /// Shorter values are zero-padded on the right. - /// Empty input → `[]` (empty vec). - BitOr, - - /// Bitwise AND all values. - /// Result length = minimum input length (intersection semantics). - /// Empty input → `[]` (empty vec). - BitAnd, - - // ─── ORDER-DEPENDENT (NOT COMMUTATIVE) ───────────────────────────── - - /// Take first value by EmitKey order. - /// Empty input → `[]` (empty vec). - /// WARNING: Not commutative. Depends on canonical key ordering. - First, - - /// Take last value by EmitKey order. - /// Empty input → `[]` (empty vec). - /// WARNING: Not commutative. Depends on canonical key ordering. - Last, - - /// Concatenate all values in EmitKey order. - /// Empty input → `[]` (empty vec). - /// WARNING: Not commutative. Order matters for result bytes. - Concat, -} - -impl ReduceOp { - /// Returns true if this op is a commutative monoid (permutation-invariant). - pub const fn is_commutative(&self) -> bool { - matches!(self, Self::Sum | Self::Max | Self::Min | Self::BitOr | Self::BitAnd) - } -} -``` - -### Updated ChannelPolicy - -```rust -#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)] -pub enum ChannelPolicy { - /// All emissions in EmitKey order, length-prefixed. - #[default] - Log, - - /// Error if more than one emission. - StrictSingle, - - /// Reduce via built-in operation. - Reduce(ReduceOp), -} -``` - -### Implementation - -```rust -impl ReduceOp { - /// Apply this reduce operation to a set of values. - /// - /// Values are provided in EmitKey order (required for First/Last/Concat). - /// Returns the reduced result. - /// - /// # Empty Input Behavior - /// - /// All ops return `[]` (empty vec) on empty input, EXCEPT: - /// - `Sum` returns `[0u8; 8]` (zero as u64 LE) - /// - /// This is intentional: empty input means "nothing to reduce." 
-    pub fn apply(self, values: impl IntoIterator<Item = Vec<u8>>) -> Vec<u8> {
-        let mut iter = values.into_iter().peekable();
-
-        // Handle empty input uniformly (except Sum)
-        if iter.peek().is_none() {
-            return match self {
-                Self::Sum => vec![0u8; 8], // Identity for addition
-                _ => Vec::new(),           // "Nothing to reduce"
-            };
-        }
-
-        match self {
-            // ─── COMMUTATIVE MONOIDS ───────────────────────────────────
-
-            Self::Sum => {
-                let sum: u64 = iter
-                    .map(|v| {
-                        let mut buf = [0u8; 8];
-                        let len = v.len().min(8);
-                        buf[..len].copy_from_slice(&v[..len]);
-                        u64::from_le_bytes(buf)
-                    })
-                    .sum();
-                sum.to_le_bytes().to_vec()
-            }
-
-            Self::Max => iter.max().unwrap(), // unwrap safe: checked non-empty
-
-            Self::Min => iter.min().unwrap(), // unwrap safe: checked non-empty
-
-            Self::BitOr => {
-                iter.reduce(|acc, v| bitwise_or(&acc, &v)).unwrap()
-            }
-
-            Self::BitAnd => {
-                iter.reduce(|acc, v| bitwise_and(&acc, &v)).unwrap()
-            }
-
-            // ─── ORDER-DEPENDENT (EmitKey order matters) ───────────────
-
-            Self::First => iter.next().unwrap(), // unwrap safe: checked non-empty
-
-            Self::Last => iter.last().unwrap(), // unwrap safe: checked non-empty
-
-            Self::Concat => iter.flatten().collect(),
-        }
-    }
-}
-
-/// Bitwise OR with zero-padding for shorter operand.
-fn bitwise_or(a: &[u8], b: &[u8]) -> Vec<u8> {
-    let len = a.len().max(b.len());
-    let mut result = vec![0u8; len];
-    for (i, byte) in result.iter_mut().enumerate() {
-        let av = a.get(i).copied().unwrap_or(0);
-        let bv = b.get(i).copied().unwrap_or(0);
-        *byte = av | bv;
-    }
-    result
-}
-
-/// Bitwise AND with truncation to shorter operand (intersection semantics).
-fn bitwise_and(a: &[u8], b: &[u8]) -> Vec<u8> {
-    let len = a.len().min(b.len());
-    (0..len).map(|i| a[i] & b[i]).collect()
-}
-```
-
-### Files to Create/Modify
-
-| File                                                    | Action                                          |
-| ------------------------------------------------------- | ----------------------------------------------- |
-| `crates/warp-core/src/materialization/reduce_op.rs`     | **Create** — enum + apply()                     |
-| `crates/warp-core/src/materialization/channel.rs`       | **Modify** — update ChannelPolicy               |
-| `crates/warp-core/src/materialization/bus.rs`           | **Modify** — call ReduceOp::apply() in finalize |
-| `crates/warp-core/tests/materialization_determinism.rs` | **Add** — reduce op tests                       |
-
----
-
-## 3. Cross-Platform Determinism Tests
-
-### Problem
-
-MaterializationBus must produce identical output across:
-
-- macOS (dev machines)
-- Linux (CI, production)
-- WASM (browser runtime)
-
-Current tests run only on the host platform.
-
-### Solution
-
-Two-layer testing:
-
-| Layer              | Environment      | Trigger                        | Purpose                        |
-| ------------------ | ---------------- | ------------------------------ | ------------------------------ |
-| **DIND**           | Docker-in-Docker | `cargo xtask dind-determinism` | Local dev, fast iteration      |
-| **GitHub Actions** | Native runners   | Push/PR                        | Gate merges, real environments |
-
-### 3.1 DIND Harness Extension
-
-Extend the existing DIND test harness to include a materialization digest:
-
-```rust
-// crates/echo-dind-tests/src/lib.rs
-
-/// Output from a determinism test run.
-#[derive(Debug, Serialize, Deserialize)]
-pub struct DeterminismOutput {
-    /// State hash after N ticks.
-    pub state_hash: String,
-    /// Tick receipt hashes.
-    pub receipt_hashes: Vec<String>,
-    /// NEW: Materialization digest (hash of all finalized frames).
-    pub materialization_digest: String,
-}
-```
-
-The test runs the same scenario on:
-
-1. Host (macOS/Linux)
-2. Docker Linux container
-3. WASM via wasm-pack
-
-All three must produce identical `materialization_digest`.
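The digest's crucial property — canonical channel iteration makes emission insertion order irrelevant — can be illustrated with a stdlib-only sketch. The helper name `materialization_digest` and the frame layout are assumptions for illustration, and a hand-rolled FNV-1a hash stands in for the BLAKE3 the harness actually specifies:

```rust
use std::collections::BTreeMap;

/// Stand-in 64-bit FNV-1a hash (the real harness specifies BLAKE3).
fn fnv1a(bytes: impl IntoIterator<Item = u8>, mut state: u64) -> u64 {
    for b in bytes {
        state ^= u64::from(b);
        state = state.wrapping_mul(0x0000_0100_0000_01B3);
    }
    state
}

/// Hash finalized frames in canonical (BTreeMap) channel order,
/// length-prefixing each payload so concatenation is unambiguous.
fn materialization_digest(frames: &BTreeMap<u32, Vec<Vec<u8>>>) -> u64 {
    let mut digest: u64 = 0xcbf2_9ce4_8422_2325; // FNV offset basis
    for (channel, payloads) in frames {
        digest = fnv1a(channel.to_le_bytes(), digest);
        for payload in payloads {
            digest = fnv1a((payload.len() as u64).to_le_bytes(), digest);
            digest = fnv1a(payload.iter().copied(), digest);
        }
    }
    digest
}
```

Because `BTreeMap` iterates keys in sorted order, the order in which channels were populated never reaches the hash — which is exactly why the three platforms can be compared byte-for-byte.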
-
-### 3.2 GitHub Actions Workflow
-
-```yaml
-# .github/workflows/determinism.yml
-
-name: Determinism
-
-on:
-  push:
-    branches: [main]
-  pull_request:
-    branches: [main]
-
-jobs:
-  determinism-matrix:
-    strategy:
-      matrix:
-        os: [ubuntu-latest, macos-latest]
-        include:
-          - os: ubuntu-latest
-            target: x86_64-unknown-linux-gnu
-          - os: macos-latest
-            target: x86_64-apple-darwin
-
-    runs-on: ${{ matrix.os }}
-
-    steps:
-      - uses: actions/checkout@v4
-
-      - name: Install Rust
-        uses: dtolnay/rust-toolchain@stable
-        with:
-          targets: ${{ matrix.target }},wasm32-unknown-unknown
-
-      - name: Install wasm-pack
-        run: cargo install wasm-pack
-
-      - name: Run determinism tests
-        run: cargo test -p warp-core --test materialization_determinism
-
-      - name: Run WASM determinism tests
-        run: wasm-pack test --node crates/warp-core
-
-      - name: Capture materialization digest
-        id: digest
-        run: |
-          DIGEST=$(cargo run -p echo-dind-tests --bin capture-digest)
-          echo "digest=$DIGEST" >> $GITHUB_OUTPUT
-          echo "$DIGEST" > digest.txt
-
-      - name: Upload digest artifact
-        uses: actions/upload-artifact@v4
-        with:
-          name: digest-${{ matrix.os }}
-          path: digest.txt
-
-  verify-cross-platform:
-    needs: determinism-matrix
-    runs-on: ubuntu-latest
-    steps:
-      - name: Download all digests
-        uses: actions/download-artifact@v4
-
-      - name: Compare digests
-        run: |
-          LINUX=$(cat digest-ubuntu-latest/digest.txt)
-          MACOS=$(cat digest-macos-latest/digest.txt)
-
-          if [ "$LINUX" != "$MACOS" ]; then
-            echo "DETERMINISM FAILURE: Linux and macOS produced different digests"
-            echo "Linux: $LINUX"
-            echo "macOS: $MACOS"
-            exit 1
-          fi
-
-          echo "Cross-platform determinism verified: $LINUX"
-```
-
-### 3.3 Local DIND Command
-
-```bash
-# Run locally before pushing
-cargo xtask dind-determinism
-
-# Runs:
-# 1. Native test → captures digest
-# 2. Docker test → captures digest
-# 3. WASM test → captures digest
-# 4.
Compares all three -``` - -### Files to Create/Modify - -| File | Action | -| -------------------------------------------------- | ----------------------------------------- | -| `.github/workflows/determinism.yml` | **Create** — CI workflow | -| `crates/echo-dind-tests/src/lib.rs` | **Modify** — add materialization_digest | -| `crates/echo-dind-tests/src/bin/capture-digest.rs` | **Create** — digest capture binary | -| `xtask/src/main.rs` | **Modify** — add dind-determinism command | - ---- - -## Implementation Order - -```text -Phase 1: EmissionPort (unblocks engine integration) -├── Create emission_port.rs -├── Create scoped_emitter.rs -├── Update mod.rs exports -└── Add unit tests - -Phase 2: ReduceOp (completes bus semantics) -├── Create reduce_op.rs -├── Update channel.rs (ChannelPolicy) -├── Update bus.rs (finalize with reduce) -└── Add reduce tests to determinism suite - -Phase 3: Cross-Platform Tests (gates merges) -├── Extend DIND harness -├── Create GitHub workflow -├── Add xtask command -└── Verify on first PR -``` - -## Test Plan: "SPEC is reSPECted" - -Comprehensive test suite ensuring the spec cannot lie. 
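The permutation-invariance pattern the Tier 3/4 tests rely on can be sketched in stdlib-only Rust. Helper names here are illustrative (not the actual test-suite API); `reduce_sum` mirrors the `Sum` semantics specified in §2 (little-endian u64 payloads, summed), and Heap's algorithm drives every ordering of a small input through the reduce:

```rust
/// `Sum` semantics from §2: interpret each payload as little-endian u64, sum.
fn reduce_sum(values: &[Vec<u8>]) -> Vec<u8> {
    let sum: u64 = values
        .iter()
        .map(|v| {
            let mut buf = [0u8; 8];
            let len = v.len().min(8);
            buf[..len].copy_from_slice(&v[..len]);
            u64::from_le_bytes(buf)
        })
        .sum();
    sum.to_le_bytes().to_vec()
}

/// Heap's algorithm: invoke `visit` on every permutation of `values[..k]`.
fn for_each_permutation(
    values: &mut [Vec<u8>],
    k: usize,
    visit: &mut dyn FnMut(&[Vec<u8>]),
) {
    if k <= 1 {
        visit(values);
        return;
    }
    for i in 0..k - 1 {
        for_each_permutation(values, k - 1, visit);
        if k % 2 == 0 {
            values.swap(i, k - 1);
        } else {
            values.swap(0, k - 1);
        }
    }
    for_each_permutation(values, k - 1, visit);
}
```

A permutation-invariance test then asserts that `reduce_sum` (or any commutative op) returns byte-identical output for all N! orderings — exhaustive for small N, which is why the Tier 3 test name says `small_n`.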
- -### Tier 1 — EmitKey Correctness + Wire Encoding - -| Test | What It Proves | -| ------------------------------------------------- | -------------------------------------------------------- | -| `emit_key_ord_is_lexicographic_scope_rule_subkey` | Ordering matches spec | -| `emit_key_wire_encoding_is_40_bytes_no_padding` | bytes[0..32]=scope, [32..36]=rule LE, [36..40]=subkey LE | -| `emit_key_roundtrip_wire` | encode → decode → equals | -| `emit_key_subkey_from_hash_is_deterministic` | Same input → same u32 | - -### Tier 2 — Bus Duplicate Rejection - -| Test | What It Proves | -| --------------------------------------------------- | -------------------------------------------------- | -| `bus_rejects_duplicate_key_same_channel` | (ch, key, A) then (ch, key, B) → DuplicateEmission | -| `bus_allows_same_key_different_channels` | (ch1, key) and (ch2, key) both OK | -| `bus_rejects_duplicate_key_even_if_bytes_identical` | No "identical payload = OK" loophole | - -### Tier 3 — Permutation Invariance ("SPEC Police") - -| Test | What It Proves | -| ----------------------------------------------- | ---------------------------------- | -| `log_finalize_is_permutation_invariant_small_n` | All N! 
orderings → identical bytes | -| `bus_channel_iteration_is_canonical` | Channels in BTreeMap order | -| `bus_log_preserves_all_emissions_no_drops` | count(output) == count(input) | - -### Tier 4 — ReduceOp Algebra - -**Commutative ops (must be permutation-invariant):** - -| Test | What It Proves | -| ------------------------------------------- | ------------------------------ | -| `reduce_sum_commutative_associative` | All permutations → same result | -| `reduce_max_min_are_commutative` | Byte-lex comparison is stable | -| `reduce_bitor_commutative_variable_length` | Zero-padding semantics correct | -| `reduce_bitand_commutative_variable_length` | Truncation semantics correct | - -**Order-dependent ops (NOT commutative, deterministic via EmitKey):** - -| Test | What It Proves | -| ------------------------------------------- | ------------------------------ | -| `reduce_first_picks_first_in_emitkey_order` | Smallest key wins | -| `reduce_last_picks_last_in_emitkey_order` | Largest key wins | -| `reduce_concat_matches_emitkey_order` | Output = concat(sorted by key) | - -**Truth serum:** - -| Test | What It Proves | -| ----------------------------------------------- | ---------------------------------- | -| `reduce_op_commutativity_table_is_honest` | `is_commutative()` matches reality | -| `reduce_empty_input_returns_specified_identity` | Sum→[0;8], others→[] | - -### Tier 5 — Engine Integration - -| Test | What It Proves | -| ------------------------------------------------ | -------------------------------- | -| `engine_log_emissions_stable_across_apply_order` | Rewrite order doesn't matter | -| `engine_strict_single_deterministic_failure` | Same error signature both orders | -| `engine_reduce_sum_stable_across_apply_order` | Reduced sum identical | -| `engine_emits_only_post_commit` | Port empty before commit | - -### Tier 6 — Cross-Platform Digest - -| Test | What It Proves | -| ---------------------------------------------------- | 
-------------------------------- | -| `determinism_output_includes_materialization_digest` | Harness writes digest | -| `cross_platform_digest_matches_linux_macOS_wasm` | All platforms identical | -| `scope_hash_is_content_hash_not_id_hash` | Equivalent stores → same EmitKey | - ---- - -## Open Questions - -1. **WASM target for CI** — `wasm32-unknown-unknown` or `wasm32-wasi`? Recommend `unknown-unknown` for browser purity. - -2. **Reduce op extensibility** — Should we ever allow user-defined reduce ops? **NO.** Use `Log` and reduce client-side. - -3. **Digest algorithm** — BLAKE3 of concatenated frame bytes. Simple, no Merkle tree needed. - ---- - -## Success Criteria - -- [x] Rules emit via `EmissionPort` trait, not direct bus access -- [x] Duplicate (channel, EmitKey) pairs rejected with `DuplicateEmission` -- [x] `ChannelPolicy::Reduce(ReduceOp)` replaces `join_fn_id` -- [x] All 8 `ReduceOp` variants implemented with `is_commutative()` classification -- [x] Empty-input behavior: Sum→[0;8], all others→[] -- [x] All Tier 1-5 tests passing -- [x] GitHub Actions workflow passes on PR -- [x] `cargo xtask dind` passes locally -- [x] Cross-platform digest match verified in CI (weekly schedule) - ---- - -## Revision History - -| Date | Change | -| ---------- | --------------------------------------------------------------------- | -| 2026-01-17 | Initial draft | -| 2026-01-17 | Fixed ReduceOp algebra claims (First/Last/Concat are NOT commutative) | -| 2026-01-17 | Added duplicate EmitKey rejection policy | -| 2026-01-17 | Specified empty-input behavior (Sum→[0;8], others→[]) | -| 2026-01-17 | Added comprehensive "SPEC is reSPECted" test plan | -| 2026-01-17 | Phase 3 complete: 127 tests, CI workflow, xtask dind command | diff --git a/docs/archive/roadmap-mwmr-mini-epic.md b/docs/archive/roadmap-mwmr-mini-epic.md deleted file mode 100644 index ac71dc68..00000000 --- a/docs/archive/roadmap-mwmr-mini-epic.md +++ /dev/null @@ -1,84 +0,0 @@ - - - -# MWMR Concurrency 
Mini‑Epic Roadmap (Footprints, Reserve Gate, Telemetry) - -Status: Active • Owner: warp-core • Created: 2025-10-27 - -## Outcomes - -- Enforce MWMR determinism via independence checks (footprints + ports + factor masks). -- Keep the hot path zero‑overhead (compact u32 rule ids; domain‑separated family ids only at boundaries). -- Prove commutation with property tests (N‑permutation) and add basic telemetry for conflict rates. - ---- - -## Phase 0.5 — Foundations (Done / In‑Progress) - -- [x] Footprint type with ports and factor mask (IdSet/PortSet; deterministic intersects) -- [x] RewriteRule surface extended with `compute_footprint`, `factor_mask`, `ConflictPolicy` -- [x] PendingRewrite carries `footprint` + `phase` -- [x] Property test: 2 independent motion rewrites commute (equal snapshot hash) -- [x] Spec doc: `docs/spec-mwmr-concurrency.md` - ---- - -## Phase 1 — Reservation Gate & Compact IDs - -- [x] CompactRuleId(u32) and rule table mapping family_id → compact id (in Engine) -- [x] DeterministicScheduler::reserve(tx, &mut PendingRewrite) → bool (active frontier per tx) -- [x] Engine commit() wires the reserve gate (execute only Reserved rewrites) -- [x] Feature‑gated JSONL telemetry (reserved/conflict) with timestamp, tx_id, short rule id -- [ ] Use CompactRuleId in PendingRewrite and internal execution paths (leave family id for ordering/disk/wire) - ---- - -## Phase 2 — Proof & Performance - -- [ ] Property test: N‑permutation commutation (N = 3..6 independent rewrites) -- [ ] Reserve gate smoke tests (same PortKey ⇒ conflict; disjoint ports ⇒ reserve) -- [ ] Criterion bench: independence checks (10/100/1k rewrites) — target < 1 ms @ 100 -- [ ] Telemetry counters per tick (conflict_rate, retry_count, reservation_latency_ms, epoch_flip_ms) -- [ ] Add Retry with randomized backoff (behind flag) once telemetry lands; keep default Abort - ---- - -## Phase 3 — Rule Identity & Hot‑Load - -- [x] build.rs generates const family id for `rule:motion/update` 
(domain‑separated) -- [ ] Generalize generator (src/gen/rule_ids.rs) and runtime assert test to catch drift -- [ ] Rhai rule registration: `register_rule{name, match, exec, ?id, ?revision}`; engine computes if omitted -- [ ] Revision ID = `blake3("rule-rev::canon-ast-v1" || canonical AST bytes)` - ---- - -## Phase 4 — Storage & Epochs (Scoping/Design) - -- [ ] Offset‑graph arena + mmap view (zero‑copy snapshots) -- [ ] Double‑buffered planes (attachments/skeleton), lazy epoch flips, grace‑period reclamation -- [ ] Optional Merkle overlays for partial verification - ---- - -## Guardrails & Invariants - -- Deterministic planning key = (scope_hash, family_id); execution may be parallel, ordering stays stable. -- Footprint independence order: factor_mask → ports → edges → nodes; fail fast on ports. -- Keep |L| ≤ 5–10; split rules or seed from rare types if larger. -- Never serialize CompactRuleId; boundary formats carry family id + (optional) revision id. - ---- - -## Telemetry (dev feature) - -- Events: `reserved`, `conflict` (ts_micros, tx_id, rule_id_short) -- Counters per tick: conflict_rate, retry_count, reservation_latency_ms, epoch_flip_ms, bitmap_blocks_checked - ---- - -## Links - -- Spec: `docs/spec-mwmr-concurrency.md` -- Tests: `crates/warp-core/tests/footprint_independence_tests.rs`, `crates/warp-core/tests/property_commute_tests.rs` -- Engine: `crates/warp-core/src/engine_impl.rs`, `crates/warp-core/src/scheduler.rs` -- Build: `crates/warp-core/build.rs` diff --git a/docs/archive/runtime-diagnostics-plan.md b/docs/archive/runtime-diagnostics-plan.md deleted file mode 100644 index 15106fa9..00000000 --- a/docs/archive/runtime-diagnostics-plan.md +++ /dev/null @@ -1,64 +0,0 @@ - - - -# Runtime Diagnostics Plan (Phase 0.5) - -Outlines logging, tracing, crash recovery, and inspector data streams for Echo runtime. - ---- - -## Logging Levels - -- `TRACE` – verbose diagnostics (disabled in production). 
-- `DEBUG` – subsystem insights (branch tree, Codex’s Baby).
-- `INFO` – major lifecycle events (fork, merge, replay start).
-- `WARN` – recoverable anomalies (drop records, entropy spikes).
-- `ERROR` – determinism faults (capability denial, PRNG mismatch).
-
-Logs are structured JSON: `{ timestamp?, tick, branch, level, event, data }`. Timestamps are optional and excluded from hashes.
-
----
-
-## Crash Recovery
-
-- On `ERROR`, emit synthetic timeline node with `errorCode`, `nodeId`, `diffId`.
-- Persist crash report (JSON) including last inspector frames and capability state.
-- Provide CLI `echo diagnostics --last-crash` to display report.
-
----
-
-## Tracing
-
-- Optional per-phase tracing (`TRACE` level) capturing start/end of scheduler phases, system durations.
-- Output to separate trace buffer for tooling (`trace.jsonl`).
-
----
-
-## Inspector Streams
-
-- `InspectorFrame` (core metrics)
-- `CBInspectorFrame` (Codex’s Baby)
-- `BridgeInspectorFrame` (Temporal Bridge)
-- `CapabilityInspectorFrame`
-
-Frames are emitted each tick after `timeline_flush` and appended to a ring buffer (configurable size). Debug tools subscribe over IPC/WebSocket.
-
----
-
-## Diagnostic CLI
-
-- `echo inspect --tick <n>` – dump inspector frames.
-- `echo entropy --branch <id>` – show entropy history.
-- `echo diff <diffId>` – print diff summary.
-- `echo replay --verify` – reuse replay contract.
-
----
-
-## CI Integration
-
-- Pipeline collects inspector frames for failing tests, attaches to artifacts.
-- Warnings escalate to failures when thresholds are exceeded (entropy > threshold without observer, repeated paradox quarantine).
-
----
-
-This plan provides consistent observability without compromising determinism.
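The "timestamps excluded from hashes" rule from the deleted diagnostics plan can be sketched as follows. This is a hand-rolled illustration, not the runtime's actual serializer: the `log_line` helper, its field order, and the string-building are all assumptions; the point is only that the emitted line and the canonical (hashable) form differ solely in the optional timestamp:

```rust
/// Render one structured log event, keeping the optional timestamp
/// out of the canonical (hashed) form. Returns (emitted, canonical).
fn log_line(
    timestamp: Option<u64>,
    tick: u64,
    branch: &str,
    level: &str,
    event: &str,
    data: &str, // pre-serialized JSON value
) -> (String, String) {
    // Canonical portion: deterministic fields only, fixed field order.
    let canonical = format!(
        "{{\"tick\":{tick},\"branch\":\"{branch}\",\"level\":\"{level}\",\"event\":\"{event}\",\"data\":{data}}}"
    );
    // Emitted line: prepend the timestamp when present; hashing ignores it.
    let emitted = match timestamp {
        Some(ts) => format!("{{\"timestamp\":{ts},{}", &canonical[1..]),
        None => canonical.clone(),
    };
    (emitted, canonical)
}
```

Hashing `canonical` instead of `emitted` is what lets two replays of the same branch produce identical log hashes even though their wall-clock timestamps differ.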
diff --git a/docs/archive/rust-rhai-ts-division.md b/docs/archive/rust-rhai-ts-division.md deleted file mode 100644 index 77b301c3..00000000 --- a/docs/archive/rust-rhai-ts-division.md +++ /dev/null @@ -1,87 +0,0 @@ - - - -# Language & Responsibility Map (Phase 1) - -Echo’s runtime stack is intentionally stratified. Rust owns the deterministic graph engine; Rhai sits on top for gameplay scripting; TypeScript powers the tooling layer via WebAssembly bindings. This document captures what lives where as we enter Phase 1 (Core Ignition). - ---- - -## Rust (warp-core, wasm, cli) - -### Responsibilities - -- WARP engine: GraphStore, PatternGraph, RewriteRule, DeterministicScheduler, commit/Snapshot APIs. -- ECS foundations: Worlds, Systems, Components expressed as rewrite rules. -- Timeline & Branch tree: rewrite transactions, snapshot hashing, concurrency guard rails. -- Math/PRNG: deterministic float32 / fixed32 modules shared with gameplay. -- Netcode: lockstep / rollback / authority modes using rewrite transactions. -- Asset pipeline: import/export graphs, payload storage, zero-copy access. -- Confluence: distributed synchronization of rewrite transactions. -- Rhai engine hosting: embed Rhai with deterministic module set; expose WARP bindings. -- CLI tools: `echo-cli` with `verify`, `bench`, and `inspect` subcommands. - -### Key Crates - -- `warp-core` – core engine; Rhai binds directly in-process -- `warp-wasm` – WASM build for tooling/editor -- `warp-cli` – CLI utilities (`echo-cli` binary: verify, bench, inspect) - ---- - -## Rhai (gameplay authoring layer) - -### Rhai Responsibilities - -- Gameplay systems & components (e.g., AI state machines, quests, input handling). -- Component registration, entity creation/destruction via exposed APIs. -- Scripting for deterministic “async” (scheduled events through Codex’s Baby). -- Editor lenses and inspector overlays written in Rhai for rapid iteration. - -### Constraints - -- Single-threaded per branch; no OS threads. 
-- Engine budgeted deterministically per tick. -- Mutations occur through rewrite intents (`warp.apply(...)`), not raw memory access. - -### Bindings - -- `warp` Rhai module providing: - - `apply(rule_name, scope, params)` - - `delay(seconds, fn)` (schedules replay-safe events) - - Query helpers (read components, iterate entities) - - Capability-guarded operations (world:rewrite, asset:import, etc.) - ---- - -## TypeScript / Web Tooling - -### TypeScript Responsibilities - -- Echo Studio (graph IDE) – visualizes world graph, rewrites, branch tree. -- Inspector dashboards – display Codex, entropy, paradox frames. -- Replay/rollback visualizers, network debugging tools. -- Plugin builders and determinism test harness UI. - -### Integration - -- Uses `warp-wasm` to call into WARP engine from the browser. -- IPC/WebSocket for live inspector feeds (`InspectorEnvelope`). -- Works with JSONL logs for offline analysis. -- All mutations go through bindings; tooling never mutates state outside WARP APIs. - -### Tech - -- Frontend frameworks: React/Svelte/Vanilla as needed. -- WebGPU/WebGL for graph visualization. -- TypeScript ensures type safety for tooling code. - ---- - -## Summary - -- Rust: core deterministic runtime + binding layers. -- Rhai: gameplay logic, editor lenses, deterministic script-level behavior. -- TypeScript: visualization and tooling on top of WASM/IPC. - -This division keeps determinism and performance anchored in Rust while giving designers and tooling engineers approachable layers tailored for their workflows. diff --git a/docs/archive/scheduler-benchmarks.md b/docs/archive/scheduler-benchmarks.md deleted file mode 100644 index d9a7a551..00000000 --- a/docs/archive/scheduler-benchmarks.md +++ /dev/null @@ -1,25 +0,0 @@ - - - -# Scheduler Benchmark Plan (Phase 0) - -This document has been **split** to reduce drift and make scope explicit. 
- -Doc map: - -- [docs/scheduler.md](./scheduler.md) - -Current (implemented) benchmarks: - -- [docs/scheduler-performance-warp-core.md](./scheduler-performance-warp-core.md) - -Future (planned) system-scheduler benchmarks: - -- [docs/spec-scheduler.md](./spec-scheduler.md) (planned benchmark scenarios; spec-only today) - ---- - -The detailed benchmark plan content now lives in: - -- [docs/scheduler-performance-warp-core.md](./scheduler-performance-warp-core.md) (warp-core) -- [docs/spec-scheduler.md](./spec-scheduler.md) (planned system scheduler scenarios) diff --git a/docs/archive/scheduler-reserve-complexity.md b/docs/archive/scheduler-reserve-complexity.md deleted file mode 100644 index f17bf1d1..00000000 --- a/docs/archive/scheduler-reserve-complexity.md +++ /dev/null @@ -1,12 +0,0 @@ - - - -# Scheduler `reserve()` Time Complexity Analysis - -This document has been **merged** into the canonical warp-core scheduler doc: - -- [docs/scheduler-warp-core.md](./scheduler-warp-core.md) - -It remains as a stable link target for older references. - -The full analysis now lives in [docs/scheduler-warp-core.md](./scheduler-warp-core.md). diff --git a/docs/archive/scheduler-reserve-validation.md b/docs/archive/scheduler-reserve-validation.md deleted file mode 100644 index 37a3197c..00000000 --- a/docs/archive/scheduler-reserve-validation.md +++ /dev/null @@ -1,23 +0,0 @@ - - - -# Scheduler `reserve()` Implementation Validation - -This document has been **merged** into the canonical warp-core scheduler doc: - -- [docs/scheduler-warp-core.md](./scheduler-warp-core.md) - -It remains as a stable link target for older references. - -## Questions Answered - -1. ✅ **Atomic Reservation**: No partial marking on conflict -2. ✅ **Determinism Preserved**: Same inputs → same outputs -3. ✅ **Time Complexity**: Detailed analysis with ALL loops counted -4. 
✅ **Performance Claims**: Measured, not just theoretical - ---- - -If you’re here for evidence details (atomicity/determinism/complexity), read: - -- [docs/scheduler-warp-core.md](./scheduler-warp-core.md) diff --git a/docs/archive/spec-deterministic-math.md b/docs/archive/spec-deterministic-math.md deleted file mode 100644 index 514bcc69..00000000 --- a/docs/archive/spec-deterministic-math.md +++ /dev/null @@ -1,213 +0,0 @@ - - - -# Deterministic Math Module Specification (Phase 0) - -> **Background:** For a gentler introduction, see [WARP Primer](/guide/warp-primer). - -Echo’s math module underpins every deterministic system: physics proxies, animation, AI, and branch reconciliation. - -**Status (2026-01-02): legacy draft + partial reality.** - -- This document started life as a JS/TypeScript-oriented Phase 0 draft. -- The canonical implementation today is Rust `warp-core` (`crates/warp-core/src/math/*`). -- The normative determinism policy is `docs/SPEC_DETERMINISTIC_MATH.md`. -- Validation and CI lanes are tracked in `docs/math-validation-plan.md`. - -Treat this spec as a **design sketch for future bindings** (TS/WASM/FFI) and an inventory of desired API shape, not as a statement that the JS implementation exists. - ---- - -## Goals - -- Provide deterministic vector/matrix/quaternion operations across platforms (at minimum: Linux/macOS, and eventually WASM/JS bindings). -- Support dual numeric modes via scalar backends: - - float lane (`F32Scalar`, default) - - fixed-point lane (`DFix64`, feature-gated today) -- Expose seeded PRNG services suitable for replay and branching. -- Offer allocation-aware APIs (avoid heap churn) for hot loops. -- Surface profiling hooks (NaN guards, range checks) in development builds. - ---- - -## Numeric Modes - -### Float32 Mode (default) - -- **Rust source of truth:** `F32Scalar` wraps `f32` and enforces canonicalization invariants (NaNs, signed zero, subnormals) at construction and after operations. 
-- **Transcendentals:** `sin`/`cos` are provided via a deterministic software backend (`warp_core::math::trig`), not platform/libm. -- **Bindings note:** if/when we ship TS/WASM bindings, they must match Rust’s outputs and invariants; “just `Math.fround`” is not sufficient to guarantee cross-engine determinism for transcendentals or NaN payload behavior. - -### Fixed-Point Mode (opt-in) - -- **Rust source of truth:** `DFix64` is Q32.32 fixed-point stored in `i64` and is currently feature-gated behind `det_fixed` so we can evolve it without destabilizing the default lane. -- **Non-finite mapping:** conversions from float inputs must be deterministic (e.g., NaN → 0, ±∞ saturate) and are covered by tests. -- **Bindings note:** future TS bindings should treat Rust fixtures as canonical; JS `BigInt` fixed-point is a possible implementation strategy, but not a correctness authority. - -Mode should be chosen at engine init (or build feature selection), with a clear policy for serialization/hashing so deterministic replay remains stable. - ---- - -## Core Types - -### Vec2 / Vec3 / Vec4 - -```ts -interface Vec2 { - readonly x: number; - readonly y: number; -} - -type VecLike = Float32Array | number[]; -``` - -- Backed by `Float32Array` of length 2/3/4. -- Methods: `create`, `clone`, `set`, `add`, `sub`, `scale`, `dot`, `length`, `normalize`, `lerp`, `equals`. -- All mutating functions accept `out` parameter for in-place updates to reduce allocations. -- Deterministic clamps: every operation ends with `fround` (float mode) or `fixed` operations. -- Rust parity: `warp_core::math::Vec3` currently implements add/sub/scale/dot/cross/length/normalize; `Vec2`/`Vec4` remain TODO. - -### Mat3 / Mat4 - -- Column-major storage (`Float32Array(9)` / `Float32Array(16)`). -- Methods: `identity`, `fromRotation`, `fromTranslation`, `multiply`, `invert`, `transformVec`. 
-- Deterministic inversion: use well-defined algorithm with guard against singular matrices (records failure and returns identity or throws based on config). -- Rust parity: `warp_core::math::Mat4` exposes `multiply` and `transform_point`; identity/fromRotation/invert are pending. - -### Quat - -- Represented as `[x, y, z, w]`. -- Functions: `identity`, `fromAxisAngle`, `multiply`, `slerp`, `normalize`, `toMat4`. -- `slerp` uses deterministic interpolation with clamped range. -- Rust parity: `warp_core::math::Quat` implements identity/fromAxisAngle/multiply/normalize/to_mat4; `slerp` remains TBD. - -### Transform - -- Struct bundling position (Vec3), rotation (Quat), scale (Vec3). -- Helper for constructing Mat4; ensures consistent order of operations. -- Rust parity: transform helpers are still tracked for Phase 1 (not implemented yet). - -### Bounds / AABB - -- Useful for physics collision; stores min/max Vec3. -- Provides deterministic union/intersection operations. - ---- - -## PRNG Services - -### Engine PRNG - -- Based on counter-based generator (e.g., Philox or Xoroshiro128+). -- Implementation in TypeScript with optional WebAssembly acceleration later. -- Interface: - -```ts -interface PRNG { - next(): number; // returns float in [0,1) - nextInt(min: number, max: number): number; - nextFloat(min: number, max: number): number; - state(): PRNGState; - jump(): PRNG; // independent stream -} -``` - -- `state` serializable for replay. -- `jump` used for branch forking: clone generator with deterministic offset. -- `seed` derived from combination of world seed + branch ID + optional subsystem tag. -- Rust parity: `warp_core::math::Prng` implements seeding, `next_f32`, and `next_int`; state/jump APIs are follow-up work. - -### Deterministic Hashing - -- Provide `hash64` function (e.g., SplitMix64) for converting strings/IDs into seeds. -- Ensure stable across platforms; implement in TypeScript to avoid native differences. 
- -### Integration Points - -- Scheduler passes `math.prng` on `TickContext`. -- Codex’s Baby `CommandContext` exposes `prng.spawn(scope)` for per-handler streams. -- Timeline branch creation clones PRNG state to maintain deterministic divergence. - ---- - -## Utility Functions - -- `clamp(value, min, max)` – deterministic clamp using `Math.min/Math.max` once (avoid multiple rounding). -- `approximatelyEqual(a, b, epsilon)` – uses configured epsilon (float32 ~1e-6). -- `degToRad`, `radToDeg` – using float32 rounding. -- `wrapAngle(angle)` – ensure deterministic wrap [-π, π]. -- `bezier`, `catmullRom` – deterministic interpolation functions for animation. - ---- - -## Memory Strategy - -- Provide pool of reusable vectors/matrices for temporary calculations (`MathStack`). -- `MathStack` uses deterministic LIFO behavior: `pushVec3()`, `pushMat4()`, `pop()`. -- Guard misuse in dev builds (stack underflow/overflow assertions). - ---- - -## Diagnostics - -- Optional `math.enableDeterminismChecks()` toggles NaN/Infinity detection; throws descriptive error with stack trace. -- `math.traceEnabled` allows capturing sequence of operations for debugging (recorded in inspector overlay). -- Stats counters: operations per frame, PRNG usage frequency. 
- ---- - -## API Surface (draft) - -```ts -interface EchoMath { - mode: "float32" | "fixed32"; - vec2: Vec2Module; - vec3: Vec3Module; - vec4: Vec4Module; - mat3: Mat3Module; - mat4: Mat4Module; - quat: QuatModule; - transform: TransformModule; - prng: PRNGFactory; - stack: MathStack; - constants: { - epsilon: number; - tau: number; - }; - utils: { - clamp(value: number, min: number, max: number): number; - approx(a: number, b: number, epsilon?: number): boolean; - degToRad(deg: number): number; - radToDeg(rad: number): number; - }; -} -``` - -`PRNGFactory`: - -```ts -interface PRNGFactory { - create(seed: PRNGSeed): PRNG; - fromTimeline(fingerprint: TimelineFingerprint, scope?: string): PRNG; -} -``` - ---- - -## Determinism Notes - -- Avoid `Math.random`; all randomness flows through PRNG. -- `Math.sin/cos` may vary across engines; implement polynomial approximations or wrap to enforce float32 rounding (test across browsers). -- Fixed-point mode may skip trig functions initially; provide lookup tables or polynomial approximations. -- Ensure order of operations consistent; avoid relying on JS evaluation order quirks. - ---- - -## Open Questions - -- Should fixed-point mode support quaternions (costly) or restrict to 2D contexts? -- How to expose SIMD acceleration where available without breaking determinism (e.g., WebAssembly fallback). -- Do we allow user-defined math extensions (custom vector sizes) via plugin system? -- Integration with physics adapters: how to synchronize with Box2D/Rapier numeric expectations (float32). - -Future work: add unit tests validating cross-environment determinism, micro-benchmarks for operations, and sample usage in the playground. 
diff --git a/docs/archive/spec-geom-collision.md b/docs/archive/spec-geom-collision.md deleted file mode 100644 index d6b83b6b..00000000 --- a/docs/archive/spec-geom-collision.md +++ /dev/null @@ -1,42 +0,0 @@ - - - -# Geometry & Collision (Spec Stub) - -> **Background:** For a gentler introduction, see [WARP Primer](/guide/warp-primer). - -**Status: not yet re-specified.** This repo currently carries an interactive DPO tour and diagram assets, but the full written spec for Echo’s geometry/collision subsystem is pending re-homing into the Rust-first era. - -## Scope (Intended) - -- Deterministic broad phase and narrow phase modeled as graph rewrites. -- Canonical identifiers for bodies, shapes, and contacts. -- Collision events emitted as deterministic graph deltas. -- CCD as a deterministic, replayable sequence of rewrite steps. - -## Non-Goals (For Now) - -- Physics engine replacement (Box2D/Rapier integrations remain adapters). -- GPU-accelerated collision or platform-specific broad-phase shortcuts. -- Real-time authoring tools (tracked separately in editor/inspector specs). - -What exists today: - -- Interactive tour: `/collision-dpo-tour.html` (source: `docs/public/collision-dpo-tour.html`) -- Guide entrypoint: `docs/guide/collision-tour.md` -- Diagram assets: `docs/public/assets/collision/` - -What this spec should eventually cover: - -- Deterministic broad phase + narrow phase modeled as graph rewrites (DPO). -- Canonical IDs, stable ordering, and hashing inputs/outputs for replay. -- Temporal proxies, CCD workflow, and event emission in a timeline-aware world. -- See [spec-deterministic-math.md](../spec-deterministic-math.md) for the normative deterministic math policy. - -## Near-Term Deliverables - -- Solidify the wire format for collision-related view ops (if any). -- Define the minimal node/edge schema for bodies, shapes, and contacts. -- Specify the canonical ordering for resolving contact sets. 
- -Until the full spec is written, treat the tour as an **illustrative artifact**, not a normative contract. diff --git a/docs/archive/study/aion.cls b/docs/archive/study/aion.cls deleted file mode 100644 index 4f2d2f67..00000000 --- a/docs/archive/study/aion.cls +++ /dev/null @@ -1,175 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -\NeedsTeXFormat{LaTeX2e} -\ProvidesClass{aion}[2025/12/07 AIΩN Foundations Series Class] - -\LoadClass[11pt]{article} - -% ------------------------------------------------------------ -% Packages -% ------------------------------------------------------------ -\RequirePackage[T1]{fontenc} -\RequirePackage{lmodern} -\RequirePackage{amsmath, amssymb, amsfonts, amsthm} -\RequirePackage{microtype} -\RequirePackage{geometry} -\RequirePackage{xcolor} -\RequirePackage{graphicx} -\RequirePackage{titlesec} -\RequirePackage{tocloft} -\RequirePackage{enumitem} -\RequirePackage{booktabs} -\RequirePackage{chngcntr} -\RequirePackage{hyperref} - -% ------------------------------------------------------------ -% Geometry -% ------------------------------------------------------------ -\geometry{ - margin=1in, -} - -% ------------------------------------------------------------ -% Colors (brand) -% ------------------------------------------------------------ -\definecolor{AIONBlue}{RGB}{20, 60, 120} -\definecolor{AIONAccent}{RGB}{120, 20, 120} - -% ------------------------------------------------------------ -% Hyperref -% ------------------------------------------------------------ -\hypersetup{ - colorlinks=true, - linkcolor=AIONBlue, - citecolor=AIONBlue, - urlcolor=AIONAccent, - hypertexnames=false -} - -% ------------------------------------------------------------ -% Section Titles -% ------------------------------------------------------------ -\titleformat{\section} - {\large\bfseries\color{AIONBlue!85!black}} - {\thesection}{0.5em}{} - -\titleformat{\subsection} - 
{\normalsize\bfseries\color{AIONBlue!85!black}} - {\thesubsection}{0.5em}{} - -% Reset figure numbering by section for cleaner hyperlinks -\counterwithin{figure}{section} -\renewcommand{\theHfigure}{\thesection.\arabic{figure}} - -% ------------------------------------------------------------ -% Theorem Environments -% ------------------------------------------------------------ -\theoremstyle{definition} -\newtheorem{definition}{Definition}[section] -\newtheorem{assumption}[definition]{Assumption} - -\theoremstyle{plain} -\newtheorem{proposition}[definition]{Proposition} -\newtheorem{theorem}[definition]{Theorem} -\newtheorem{lemma}[definition]{Lemma} -\newtheorem{corollary}[definition]{Corollary} - -\theoremstyle{remark} -\newtheorem{example}[definition]{Example} -\newtheorem{remark}[definition]{Remark} - -% ------------------------------------------------------------ -% Metadata Commands -% ------------------------------------------------------------ -% Internal storage macros (initialized to \@empty for robust checking) -\makeatletter -\newcommand{\AION@papertitle}{\@empty} -\newcommand{\AION@papernumber}{\@empty} -\newcommand{\AION@paperversion}{\@empty} -\newcommand{\AION@paperdate}{\@empty} -\newcommand{\AION@paperauthor}{\@empty} -\newcommand{\AION@paperaffiliation}{\@empty} -\newcommand{\AION@paperorcid}{\@empty} -\newcommand{\AION@paperdoi}{\@empty} - -% User-facing setter commands -\newcommand{\papertitle}[1]{\gdef\AION@papertitle{#1}} -\newcommand{\papernumber}[1]{\gdef\AION@papernumber{#1}} -\newcommand{\paperversion}[1]{\gdef\AION@paperversion{#1}} -\newcommand{\paperdate}[1]{\gdef\AION@paperdate{#1}} -\newcommand{\paperauthor}[1]{\gdef\AION@paperauthor{#1}} -\newcommand{\paperaffiliation}[1]{\gdef\AION@paperaffiliation{#1}} -\newcommand{\paperorcid}[1]{\gdef\AION@paperorcid{#1}} -\newcommand{\paperdoi}[1]{\gdef\AION@paperdoi{#1}} - -% Robust emptiness check using \@empty -\newcommand{\AION@require}[2]{% - \ifx#1\@empty - \ClassError{aion}{#2 not 
set}{You must call #2 before \string\AIONTitlePage} - \fi -} -\makeatother - -\makeatletter -\newcommand{\AIONTitlePage}{% - \AION@require{\AION@papertitle}{\string\papertitle}% - \AION@require{\AION@papernumber}{\string\papernumber}% - \AION@require{\AION@paperauthor}{\string\paperauthor}% - \AION@require{\AION@paperdate}{\string\paperdate}% - - \thispagestyle{empty} - \begin{center} - - % Nudge the block slightly downward for visual gravity - \vspace*{1.5cm} - - % Title (primary anchor) - {\Huge\bfseries \AION@papertitle \par} - \vspace{14pt} - - % Series / paper number (subtitle energy) - {\normalsize\scshape\color{AIONBlue} - AI$\Omega$N Foundations Series — \AION@papernumber \par} - - \vspace{16pt} - - % Author block (confident, quiet) - {\large - \AION@paperauthor \par} - \vspace{4pt} - - % Only show affiliation/ORCID if defined - \ifx\AION@paperaffiliation\@empty\else - {\normalsize \AION@paperaffiliation \par} - \fi - \ifx\AION@paperorcid\@empty\else - {\normalsize ORCID: \AION@paperorcid \par} - \fi - \ifx\AION@paperdoi\@empty\else - {\normalsize DOI: \AION@paperdoi \par} - \fi - - \vspace{10pt} - - {\normalsize - \AION@paperdate \par} - - \end{center} - \vspace{2cm} -} -\makeatother - -\newcommand{\AIONFrontMatter}[1]{% - \begin{center} - \small - #1 - \end{center} - \vspace{1em} -} - -% ------------------------------------------------------------ -% Table of Contents formatting -% TODO: implement custom TOC styling if needed -% ------------------------------------------------------------ - -\endinput diff --git a/docs/archive/study/build-tour.py b/docs/archive/study/build-tour.py deleted file mode 100644 index a4c1e10a..00000000 --- a/docs/archive/study/build-tour.py +++ /dev/null @@ -1,260 +0,0 @@ -#!/usr/bin/env python3 -# SPDX-License-Identifier: Apache-2.0 -# © James Ross Ω FLYING•ROBOTS -""" -Build the 'What Makes Echo Tick' tour document with: -1. Claude's commentary in red-outlined boxes with RED TEXT -2. PDF diagrams with embedded fonts -3. 
Letter-size paper with small margins -""" - -import re -import subprocess -import sys -from pathlib import Path - -STUDY_DIR = Path(__file__).parent -DIAGRAMS_DIR = STUDY_DIR / "diagrams" - -INPUT_MD = STUDY_DIR / "what-makes-echo-tick.md" -PROCESSED_MD = STUDY_DIR / "what-makes-echo-tick-processed.md" -OUTPUT_TEX = STUDY_DIR / "what-makes-echo-tick.tex" -OUTPUT_PDF = STUDY_DIR / "what-makes-echo-tick.pdf" - - -def escape_latex(text: str) -> str: - """Escape LaTeX special characters in text.""" - # Use placeholder to avoid double-escaping braces in \textbackslash{} - BACKSLASH_PLACEHOLDER = "\x00BACKSLASH\x00" - text = text.replace('\\', BACKSLASH_PLACEHOLDER) - - replacements = [ - ('&', r'\&'), - ('%', r'\%'), - ('$', r'\$'), - ('#', r'\#'), - ('_', r'\_'), - ('{', r'\{'), - ('}', r'\}'), - ('~', r'\textasciitilde{}'), - ('^', r'\textasciicircum{}'), - ] - for char, replacement in replacements: - text = text.replace(char, replacement) - - return text.replace(BACKSLASH_PLACEHOLDER, r'\textbackslash{}') - - -def convert_commentary_to_latex(md_content: str) -> str: - """Convert CLAUDE_COMMENTARY markers to LaTeX red boxes.""" - - def replace_commentary(match: re.Match[str]) -> str: - inner = match.group(1).strip() - # Escape LaTeX special chars in the commentary content - escaped = escape_latex(inner) - return f'\n\n\\begin{{claudecommentary}}\n{escaped}\n\\end{{claudecommentary}}\n\n' - - # Replace ... 
- pattern = r'\s*(.*?)\s*' - md_content = re.sub(pattern, replace_commentary, md_content, flags=re.DOTALL) - - return md_content - - -def convert_svg_to_pdf_refs(md_content: str) -> str: - """Convert SVG image references to PDF for LaTeX.""" - md_content = re.sub( - r'\!\[([^\]]*)\]\(diagrams/([^)]+)\.svg\)', - r'![\1](diagrams/\2.pdf)', - md_content - ) - return md_content - - -def run_pandoc(md_file: Path, tex_file: Path) -> bool: - """Run pandoc to convert markdown to LaTeX.""" - try: - result = subprocess.run( - [ - "pandoc", - str(md_file), - "-o", str(tex_file), - "--standalone", - "-f", "markdown+raw_tex", - "--top-level-division=chapter", - "-V", "geometry:margin=0.75in", - "-V", "geometry:letterpaper", - "-V", "fontsize=11pt", - ], - capture_output=True, - text=True, - timeout=60 - ) - if result.returncode != 0: - print(f"pandoc failed: {result.stderr}", file=sys.stderr) - return False - return True - except (subprocess.TimeoutExpired, FileNotFoundError) as e: - print(f"pandoc error: {e}", file=sys.stderr) - return False - - -def postprocess_tex(tex_file: Path) -> None: - """Post-process the LaTeX file.""" - content = tex_file.read_text() - - # Add required packages and styling - # Note: graphicx and geometry are already loaded by Pandoc, so we only add - # adjustbox, tcolorbox, and fvextra here - packages = r""" -\usepackage[export]{adjustbox} -\usepackage{tcolorbox} -\tcbuselibrary{breakable,skins} - -% Make code blocks smaller to fit -\usepackage{fvextra} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{ - commandchars=\\\{\}, - fontsize=\small, - breaklines=true, - breakanywhere=true -} - -% Define the Claude commentary box style - RED OUTLINE + RED TEXT -\newtcolorbox{claudecommentary}{ - enhanced, - breakable, - colback=red!5, - colframe=red!75!black, - coltext=red!70!black, - boxrule=3pt, - arc=5pt, - left=12pt, - right=12pt, - top=12pt, - bottom=12pt, - before skip=15pt, - after skip=15pt, - fontupper=\color{red!70!black}, - 
fonttitle=\bfseries\Large\color{red!75!black}, - title={\raisebox{-0.1em}{\Large$\blacktriangleright$} Claude's Commentary}, - attach boxed title to top left={yshift=-4mm,xshift=10mm}, - boxed title style={ - colback=white, - colframe=red!75!black, - boxrule=2pt, - arc=3pt - } -} -""" - - # Insert packages after \documentclass - if r'\usepackage{amsmath' in content: - content = content.replace( - r'\usepackage{amsmath', - packages + r'\usepackage{amsmath' - ) - elif r'\begin{document}' in content: - content = content.replace( - r'\begin{document}', - packages + r'\begin{document}' - ) - - # Fix image includes - make them fit with max width/height - content = re.sub( - r'\\pandocbounded\{\\includegraphics\{([^}]+)\}\}', - r'\\begin{center}\\includegraphics[max width=0.95\\textwidth,max height=0.4\\textheight,keepaspectratio]{\1}\\end{center}', - content - ) - - # Also handle bare includegraphics - content = re.sub( - r'\\includegraphics\{(diagrams/[^}]+)\}', - r'\\begin{center}\\includegraphics[max width=0.95\\textwidth,max height=0.4\\textheight,keepaspectratio]{\1}\\end{center}', - content - ) - - tex_file.write_text(content) - - -def run_xelatex(tex_file: Path) -> bool: - """Run xelatex to produce PDF.""" - try: - for run in [1, 2]: - print(f" xelatex pass {run}...", end=" ", flush=True) - result = subprocess.run( - [ - "xelatex", - "-interaction=nonstopmode", - "-output-directory", str(tex_file.parent), - str(tex_file) - ], - capture_output=True, - text=True, - timeout=120, - cwd=tex_file.parent - ) - success = result.returncode == 0 - print("OK" if success else "warnings") - - pdf_file = tex_file.with_suffix('.pdf') - if not pdf_file.exists(): - print("PDF not generated!", file=sys.stderr) - return False - if not success: - print("xelatex failed on final pass", file=sys.stderr) - return False - return True - - except (subprocess.TimeoutExpired, FileNotFoundError) as e: - print(f"xelatex error: {e}", file=sys.stderr) - return False - - -def main() -> None: - 
print("=== Building What Makes Echo Tick ===\n") - - if not INPUT_MD.exists(): - print(f"Error: {INPUT_MD} not found", file=sys.stderr) - sys.exit(1) - - # Read the markdown - print(f"1. Reading {INPUT_MD.name}...") - md_content = INPUT_MD.read_text() - - # Convert commentary markers to LaTeX - print("2. Converting Claude commentary to LaTeX red boxes...") - md_content = convert_commentary_to_latex(md_content) - - # Convert SVG refs to PDF - print("3. Converting image references to PDF...") - md_content = convert_svg_to_pdf_refs(md_content) - - # Write processed markdown - PROCESSED_MD.write_text(md_content) - print(f" Wrote {PROCESSED_MD.name}") - - # Run pandoc - print("4. Running pandoc...") - if not run_pandoc(PROCESSED_MD, OUTPUT_TEX): - print(" Pandoc failed!") - sys.exit(1) - print(f" Generated {OUTPUT_TEX.name}") - - # Post-process the LaTeX - print("5. Post-processing LaTeX...") - postprocess_tex(OUTPUT_TEX) - print(" Added red boxes, small margins, fitted graphics") - - # Run xelatex - print("6. Running xelatex...") - if run_xelatex(OUTPUT_TEX): - print("\n=== Success! 
===") - print(f"Output: {OUTPUT_PDF}") - else: - print("\n PDF generation may have issues, check .log file") - sys.exit(1) - - -if __name__ == "__main__": - main() diff --git a/docs/archive/study/diagrams/tour-01.mmd b/docs/archive/study/diagrams/tour-01.mmd deleted file mode 100644 index c32c34c1..00000000 --- a/docs/archive/study/diagrams/tour-01.mmd +++ /dev/null @@ -1,39 +0,0 @@ -graph TB - subgraph "Layer 5: Tools & Viewers" - V[warp-viewer] - T[External Tools] - end - - subgraph "Layer 4: Session Protocol" - SS[echo-session-service] - SC[echo-session-client] - WS[WebSocket Gateway] - end - - subgraph "Layer 3: Wire Format" - EG[echo-graph] - SP[echo-session-proto] - end - - subgraph "Layer 2: Storage" - WSC[WSC Format] - CAS[Content-Addressed Store] - end - - subgraph "Layer 1: Core Engine" - E[warp-core Engine] - S[Scheduler] - B[BOAW Executor] - end - - V --> SC - T --> WS - WS --> SS - SC --> SS - SS --> EG - EG --> SP - SP --> E - E --> S - S --> B - B --> WSC - WSC --> CAS \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-01.pdf b/docs/archive/study/diagrams/tour-01.pdf deleted file mode 100644 index 376eed91..00000000 Binary files a/docs/archive/study/diagrams/tour-01.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-01.svg b/docs/archive/study/diagrams/tour-01.svg deleted file mode 100644 index 43dfa4e6..00000000 --- a/docs/archive/study/diagrams/tour-01.svg +++ /dev/null @@ -1 +0,0 @@ -

(minified single-line SVG render of tour-01.mmd; markup omitted, its text labels match the Mermaid source above)
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-02.mmd b/docs/archive/study/diagrams/tour-02.mmd deleted file mode 100644 index 65b14c01..00000000 --- a/docs/archive/study/diagrams/tour-02.mmd +++ /dev/null @@ -1,24 +0,0 @@ -sequenceDiagram - participant User - participant Tool as Tool/Viewer - participant Hub as Session Hub - participant Engine as warp-core Engine - participant Store as Graph Store - - User->>Tool: Click link - Tool->>Hub: ingest_intent(bytes) - Hub->>Engine: forward intent - Engine->>Engine: begin() transaction - Engine->>Engine: apply() rules - Engine->>Store: read via GraphView - Engine->>Engine: compute footprints - - rect rgb(240, 248, 255) - Note over Engine,Store: commit() internals - Engine->>Store: apply delta - Engine->>Engine: compute hashes - Engine->>Hub: emit snapshot/diff - end - - Hub->>Tool: WarpFrame - Tool->>User: render new state \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-02.pdf b/docs/archive/study/diagrams/tour-02.pdf deleted file mode 100644 index 6266c617..00000000 Binary files a/docs/archive/study/diagrams/tour-02.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-02.svg b/docs/archive/study/diagrams/tour-02.svg deleted file mode 100644 index 91ac23e2..00000000 --- a/docs/archive/study/diagrams/tour-02.svg +++ /dev/null @@ -1 +0,0 @@ -Graph Storewarp-core EngineSession HubTool/ViewerUserGraph Storewarp-core EngineSession HubTool/ViewerUserClick linkingest_intent(bytes)forward intentbegin() transactionapply() rulesread via GraphViewcompute footprintscommit()apply deltacompute hashesemit snapshot/diffWarpFramerender new state \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-03.mmd b/docs/archive/study/diagrams/tour-03.mmd deleted file mode 100644 index 0b045cc4..00000000 --- a/docs/archive/study/diagrams/tour-03.mmd +++ /dev/null @@ -1,20 +0,0 @@ -graph LR - subgraph "WARP Graph Structure" - N1[Node A
<br/>id: 0x1234...]
- N2[Node B<br/>id: 0x5678...]
- N3[Node C<br/>id: 0x9ABC...]
- N1 -->|edge:link| N2 - N1 -->|edge:child| N3 - N2 -->|edge:ref| N3 - end - - subgraph "Attachments (α plane)" - A1[title: 'Home'] - A2[url: '/page/b'] - A3[content: '...'] - end - - N1 -.- A1 - N2 -.- A2 - N3 -.- A3 diff --git a/docs/archive/study/diagrams/tour-03.pdf deleted file mode 100644 index 81befb06..00000000 Binary files a/docs/archive/study/diagrams/tour-03.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-03.svg deleted file mode 100644 index 605602fc..00000000 --- a/docs/archive/study/diagrams/tour-03.svg +++ /dev/null @@ -1 +0,0 @@ -

(minified single-line SVG render of tour-03.mmd; markup omitted, its text labels match the Mermaid source above)
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-04.mmd b/docs/archive/study/diagrams/tour-04.mmd deleted file mode 100644 index d2d04747..00000000 --- a/docs/archive/study/diagrams/tour-04.mmd +++ /dev/null @@ -1,16 +0,0 @@ -graph TB - subgraph "Root Instance (warp_id: 'root')" - R[Root Node] - P1[Page 1] - P2[Page 2] - R --> P1 - R --> P2 - end - - subgraph "Child Instance (warp_id: 'child-abc')" - C1[Child Root] - C2[Child Node] - C1 --> C2 - end - - P2 -.->|"α[portal] = Descend('child-abc')"| C1 diff --git a/docs/archive/study/diagrams/tour-04.pdf b/docs/archive/study/diagrams/tour-04.pdf deleted file mode 100644 index 708fb439..00000000 Binary files a/docs/archive/study/diagrams/tour-04.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-04.svg b/docs/archive/study/diagrams/tour-04.svg deleted file mode 100644 index 2ea2bd65..00000000 --- a/docs/archive/study/diagrams/tour-04.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-04.svg text labels: Root Instance (warp_id: 'root') · Child Instance (warp_id: 'child-abc') · α[portal] = Descend('child-abc') · Root Node · Page 1 · Page 2 · Child Root · Child Node]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-05.mmd b/docs/archive/study/diagrams/tour-05.mmd deleted file mode 100644 index d194cb8b..00000000 --- a/docs/archive/study/diagrams/tour-05.mmd +++ /dev/null @@ -1,8 +0,0 @@ -flowchart TD - A[Create GraphStore] --> B[Create WarpState] - B --> C[Create root WarpInstance] - C --> D[Initialize DeterministicScheduler] - D --> E[Create empty rules HashMap] - E --> F[Initialize MaterializationBus] - F --> G[Preserve U0 state for replay] - G --> H[Engine ready] \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-05.pdf b/docs/archive/study/diagrams/tour-05.pdf deleted file mode 100644 index 65ed0327..00000000 Binary files a/docs/archive/study/diagrams/tour-05.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-05.svg b/docs/archive/study/diagrams/tour-05.svg deleted file mode 100644 index 2dede56d..00000000 --- a/docs/archive/study/diagrams/tour-05.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-05.svg text labels: Create GraphStore · Create WarpState · Create root WarpInstance · Initialize DeterministicScheduler · Create empty rules HashMap · Initialize MaterializationBus · Preserve U0 state for replay · Engine ready]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-06.mmd b/docs/archive/study/diagrams/tour-06.mmd deleted file mode 100644 index 7bb5c256..00000000 --- a/docs/archive/study/diagrams/tour-06.mmd +++ /dev/null @@ -1,29 +0,0 @@ -flowchart LR - subgraph "1. Begin" - B[begin] - end - - subgraph "2. Apply" - A1[apply rule 1] - A2[apply rule 2] - A3[apply rule N] - end - - subgraph "3. Commit" - C1[Drain] - C2[Reserve] - C3[Execute] - C4[Merge] - C5[Finalize] - end - - subgraph "4. Hash" - H1[State Root] - H2[Commit Hash] - end - - subgraph "5. Record" - R[Append to History] - end - - B --> A1 --> A2 --> A3 --> C1 --> C2 --> C3 --> C4 --> C5 --> H1 --> H2 --> R \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-06.pdf b/docs/archive/study/diagrams/tour-06.pdf deleted file mode 100644 index 8158912a..00000000 Binary files a/docs/archive/study/diagrams/tour-06.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-06.svg b/docs/archive/study/diagrams/tour-06.svg deleted file mode 100644 index 1f9e5f03..00000000 --- a/docs/archive/study/diagrams/tour-06.svg +++ /dev/null @@ -1 +0,0 @@ -
-[tour-06.svg text labels: 1. Begin · 2. Apply · 3. Commit · 4. Hash · 5. Record · begin · apply rule 1 · apply rule 2 · apply rule N · Drain · Reserve · Execute · Merge · Finalize · State Root · Commit Hash · Append to History]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-07.mmd b/docs/archive/study/diagrams/tour-07.mmd deleted file mode 100644 index 4752500c..00000000 --- a/docs/archive/study/diagrams/tour-07.mmd +++ /dev/null @@ -1,7 +0,0 @@ -flowchart TD - A[apply called] --> B{Matcher returns true?} - B -->|No| C[Return NoMatch] - B -->|Yes| D[Compute Footprint] - D --> E[Create PendingRewrite] - E --> F[Enqueue to Scheduler] - F --> G[Return Matched] \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-07.pdf b/docs/archive/study/diagrams/tour-07.pdf deleted file mode 100644 index 2b77302c..00000000 Binary files a/docs/archive/study/diagrams/tour-07.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-07.svg b/docs/archive/study/diagrams/tour-07.svg deleted file mode 100644 index 02db703f..00000000 --- a/docs/archive/study/diagrams/tour-07.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-07.svg text labels: apply called · Matcher returns true? · No · Return NoMatch · Yes · Compute Footprint · Create PendingRewrite · Enqueue to Scheduler · Return Matched]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-08.mmd b/docs/archive/study/diagrams/tour-08.mmd deleted file mode 100644 index 1ac04467..00000000 --- a/docs/archive/study/diagrams/tour-08.mmd +++ /dev/null @@ -1,9 +0,0 @@ -flowchart TD - A[For each rewrite] --> B{Footprint conflicts with active frontier?} - B -->|No conflict| C[Accept: add to active frontier] - B -->|Conflict| D[Reject: record blocking witness] - C --> E[Continue to next] - D --> E - E --> F{More rewrites?} - F -->|Yes| A - F -->|No| G[Done: have accepted/rejected sets] \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-08.pdf b/docs/archive/study/diagrams/tour-08.pdf deleted file mode 100644 index a0d4f134..00000000 Binary files a/docs/archive/study/diagrams/tour-08.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-08.svg b/docs/archive/study/diagrams/tour-08.svg deleted file mode 100644 index 275b730c..00000000 --- a/docs/archive/study/diagrams/tour-08.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-08.svg text labels: For each rewrite · Footprint conflicts with active frontier? · No conflict · Accept: add to active frontier · Conflict · Reject: record blocking witness · Continue to next · More rewrites? · Yes · No · Done: have accepted/rejected sets]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-09.mmd b/docs/archive/study/diagrams/tour-09.mmd deleted file mode 100644 index b4cff003..00000000 --- a/docs/archive/study/diagrams/tour-09.mmd +++ /dev/null @@ -1,7 +0,0 @@ -flowchart TD - A[Start at root] --> B[BFS: visit all reachable nodes] - B --> C[For each instance: hash in BTreeMap order] - C --> D[For each node: hash in ascending NodeId order] - D --> E[For each node's edges: hash in ascending EdgeId order] - E --> F[BLAKE3 digest of canonical byte stream] - F --> G[state_root: Hash] \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-09.pdf b/docs/archive/study/diagrams/tour-09.pdf deleted file mode 100644 index b05eafc4..00000000 Binary files a/docs/archive/study/diagrams/tour-09.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-09.svg b/docs/archive/study/diagrams/tour-09.svg deleted file mode 100644 index c44be00c..00000000 --- a/docs/archive/study/diagrams/tour-09.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-09.svg text labels: Start at root · BFS: visit all reachable nodes · For each instance: hash in BTreeMap order · For each node: hash in ascending NodeId order · For each node's edges: hash in ascending EdgeId order · BLAKE3 digest of canonical byte stream · state_root: Hash]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-10.mmd b/docs/archive/study/diagrams/tour-10.mmd deleted file mode 100644 index cf7ccf63..00000000 --- a/docs/archive/study/diagrams/tour-10.mmd +++ /dev/null @@ -1,29 +0,0 @@ -flowchart TD - subgraph "Partitioning" - I[Items] --> P[partition_into_shards] - P --> S0[Shard 0] - P --> S1[Shard 1] - P --> S2[Shard 2] - P --> S3[Shard 3] - P --> S255[Shard 255] - end - - subgraph "Work Stealing" - W0[Worker 0] -->|claims| S0 - W0 -->|claims| S1 - W1[Worker 1] -->|claims| S2 - W1 -->|claims| S3 - end - - subgraph "Execution" - S0 --> D0[TickDelta 0] - S1 --> D0 - S2 --> D1[TickDelta 1] - S3 --> D1 - end - - subgraph "Merge" - D0 --> M[merge_deltas] - D1 --> M - M --> O[Canonical Ops] - end diff --git a/docs/archive/study/diagrams/tour-10.pdf b/docs/archive/study/diagrams/tour-10.pdf deleted file mode 100644 index e5dc50cf..00000000 Binary files a/docs/archive/study/diagrams/tour-10.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-10.svg b/docs/archive/study/diagrams/tour-10.svg deleted file mode 100644 index 10ebe51d..00000000 --- a/docs/archive/study/diagrams/tour-10.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-10.svg text labels: Partitioning · Work Stealing · Execution · Merge · claims · Items · partition_into_shards · Shard 0 · Shard 1 · ... · Shard 255 · Worker 0 · Worker 1 · S3 · TickDelta 0 · TickDelta 1 · merge_deltas · Canonical Ops]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-11.mmd b/docs/archive/study/diagrams/tour-11.mmd deleted file mode 100644 index 39f0e317..00000000 --- a/docs/archive/study/diagrams/tour-11.mmd +++ /dev/null @@ -1,17 +0,0 @@ -flowchart LR - subgraph "Before Tick" - S1[Snapshot N
immutable] - end - - subgraph "During Tick" - GV[GraphView
reads from S1] - TD[TickDelta
accumulates ops] - GV -->|reads| S1 - end - - subgraph "After Commit" - S2[Snapshot N+1
new immutable] - S1 -.->|structural sharing| S2 - end - - TD -->|apply ops| S2 diff --git a/docs/archive/study/diagrams/tour-11.pdf b/docs/archive/study/diagrams/tour-11.pdf deleted file mode 100644 index 9e54e077..00000000 Binary files a/docs/archive/study/diagrams/tour-11.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-11.svg b/docs/archive/study/diagrams/tour-11.svg deleted file mode 100644 index 01eb7d4f..00000000 --- a/docs/archive/study/diagrams/tour-11.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-11.svg text labels: Before Tick · During Tick · After Commit · reads · structural sharing · apply ops · Snapshot N immutable · GraphView reads from S1 · TickDelta accumulates ops · Snapshot N+1 new immutable]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-12.mmd b/docs/archive/study/diagrams/tour-12.mmd deleted file mode 100644 index 383932ab..00000000 --- a/docs/archive/study/diagrams/tour-12.mmd +++ /dev/null @@ -1,16 +0,0 @@ -graph TD - subgraph "Initial State" - ROOT[Root
type: site] - HOME[Home Page
type: page
α.title: 'Welcome'] - ABOUT[About Page
type: page
α.title: 'About Us'] - LINK[Link
type: link
α.target: About] - - ROOT -->|edge:root_page| HOME - ROOT -->|edge:page| ABOUT - HOME -->|edge:content| LINK - LINK -.->|resolves to| ABOUT - end - - subgraph "View State" - V[Viewer
α.current: Home] - end diff --git a/docs/archive/study/diagrams/tour-12.pdf b/docs/archive/study/diagrams/tour-12.pdf deleted file mode 100644 index 1b3cef24..00000000 Binary files a/docs/archive/study/diagrams/tour-12.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-12.svg b/docs/archive/study/diagrams/tour-12.svg deleted file mode 100644 index 2fcfdb83..00000000 --- a/docs/archive/study/diagrams/tour-12.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-12.svg text labels: Initial State · View State · Viewer α.current: Home · edge:root_page · edge:page · edge:content · resolves to · Root type: site · Home Page type: page α.title: 'Welcome' · About Page type: page α.title: 'About Us' · Link type: link α.target: About]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-13.mmd b/docs/archive/study/diagrams/tour-13.mmd deleted file mode 100644 index 605eb32a..00000000 --- a/docs/archive/study/diagrams/tour-13.mmd +++ /dev/null @@ -1,7 +0,0 @@ -flowchart TD - A[Intent bytes arrive] --> B[Compute intent_id = BLAKE3 of intent payload] - B --> C{intent_id seen before?} - C -->|Yes| D[Return Duplicate] - C -->|No| E[Create event node
keyed by intent_id] - E --> F[Create edge: inbox → event
type: pending] - F --> G[Return Accepted] diff --git a/docs/archive/study/diagrams/tour-13.pdf b/docs/archive/study/diagrams/tour-13.pdf deleted file mode 100644 index 083023d0..00000000 Binary files a/docs/archive/study/diagrams/tour-13.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-13.svg b/docs/archive/study/diagrams/tour-13.svg deleted file mode 100644 index ef3eb10f..00000000 --- a/docs/archive/study/diagrams/tour-13.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-13.svg text labels: Intent bytes arrive · Compute intent_id = BLAKE3 bytes · intent_id seen before? · Yes · Return Duplicate · No · Create event node keyed by intent_id · Create edge: inbox → event type: pending · Return Accepted]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-14.mmd b/docs/archive/study/diagrams/tour-14.mmd deleted file mode 100644 index 41f385aa..00000000 --- a/docs/archive/study/diagrams/tour-14.mmd +++ /dev/null @@ -1,6 +0,0 @@ -flowchart TD - A[Find pending event
with minimum intent_id] --> B[For each cmd/* rule
in stable order] - B --> C{Rule matches?} - C -->|No| B - C -->|Yes| D[Apply matching rule] - D --> E[Apply sys/ack_pending
remove pending edge] \ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-14.pdf b/docs/archive/study/diagrams/tour-14.pdf deleted file mode 100644 index 424d0310..00000000 Binary files a/docs/archive/study/diagrams/tour-14.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-14.svg b/docs/archive/study/diagrams/tour-14.svg deleted file mode 100644 index 34ac0783..00000000 --- a/docs/archive/study/diagrams/tour-14.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-14.svg text labels: Find pending event with minimum intent_id · For each cmd/* rule in stable order · Rule matches? · No · Yes · Apply matching rule · Apply sys/ack_pending remove pending edge]
\ No newline at end of file diff --git a/docs/archive/study/diagrams/tour-15.mmd b/docs/archive/study/diagrams/tour-15.mmd deleted file mode 100644 index 6c054c30..00000000 --- a/docs/archive/study/diagrams/tour-15.mmd +++ /dev/null @@ -1,21 +0,0 @@ -graph TB - subgraph "Viewer" - R[Renderer
WGPU] - L[Layout Engine
Force-directed] - D[Diff Processor] - S[State Cache] - end - - subgraph "Session" - SC[Session Client] - end - - subgraph "Output" - Screen[Screen] - end - - SC -->|WarpDiff| D - D -->|updates| S - S -->|positions| L - L -->|vertices| R - R -->|pixels| Screen diff --git a/docs/archive/study/diagrams/tour-15.pdf b/docs/archive/study/diagrams/tour-15.pdf deleted file mode 100644 index daa00efd..00000000 Binary files a/docs/archive/study/diagrams/tour-15.pdf and /dev/null differ diff --git a/docs/archive/study/diagrams/tour-15.svg b/docs/archive/study/diagrams/tour-15.svg deleted file mode 100644 index 732aba41..00000000 --- a/docs/archive/study/diagrams/tour-15.svg +++ /dev/null @@ -1 +0,0 @@ -

-[tour-15.svg text labels: Viewer · Session · Output · WarpDiff · updates · positions · vertices · pixels · Renderer WGPU · Layout Engine Force-directed · Diff Processor · State Cache · Session Client · Screen]
\ No newline at end of file diff --git a/docs/archive/study/echo-tour-de-code-directors-cut.pdf b/docs/archive/study/echo-tour-de-code-directors-cut.pdf deleted file mode 100644 index e56610b9..00000000 Binary files a/docs/archive/study/echo-tour-de-code-directors-cut.pdf and /dev/null differ diff --git a/docs/archive/study/echo-tour-de-code-directors-cut.tex b/docs/archive/study/echo-tour-de-code-directors-cut.tex deleted file mode 100644 index 1c4985a6..00000000 --- a/docs/archive/study/echo-tour-de-code-directors-cut.tex +++ /dev/null @@ -1,1330 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -% Options for packages loaded elsewhere -\PassOptionsToPackage{unicode}{hyperref} -\PassOptionsToPackage{hyphens}{url} -\documentclass[11pt]{book} -\usepackage[letterpaper, margin=1in]{geometry} -\usepackage{xcolor} -\usepackage{amsmath,amssymb} -\setcounter{secnumdepth}{-\maxdimen} % remove section numbering -\usepackage{iftex} -\ifPDFTeX - \usepackage[T1]{fontenc} - \usepackage[utf8]{inputenc} - \usepackage{textcomp} -\else - \usepackage{unicode-math} - \defaultfontfeatures{Scale=MatchLowercase} - \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} -\fi -\usepackage{lmodern} -\ifPDFTeX\else\fi -\IfFileExists{upquote.sty}{\usepackage{upquote}}{} -\IfFileExists{microtype.sty}{% - \usepackage[]{microtype} - \UseMicrotypeSet[protrusion]{basicmath} -}{} -\makeatletter -\@ifundefined{KOMAClassName}{% - \IfFileExists{parskip.sty}{% - \usepackage{parskip} - }{% - \setlength{\parindent}{0pt} - \setlength{\parskip}{6pt plus 2pt minus 1pt}} -}{\KOMAoptions{parskip=half}} -\makeatother -\usepackage{color} -\usepackage{fancyvrb} -\newcommand{\VerbBar}{|} -\newcommand{\VERB}{\Verb[commandchars=\\\{\}]} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\},fontsize=\small} -\newenvironment{Shaded}{\begin{quote}}{\end{quote}} -\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} 
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}} -\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}} -\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}} -\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}} -\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}} -\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}} -\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\ExtensionTok}[1]{#1} -\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}} -\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}} -\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\NormalTok}[1]{#1} -\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}} -\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}} -\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}} -\newcommand{\RegionMarkerTok}[1]{#1} -\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}} -\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}} -\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} 
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\usepackage{longtable,booktabs,array} -\newcounter{none} -\usepackage{calc} -\usepackage{etoolbox} -\makeatletter -\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} -\makeatother -\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} -\makesavenoteenv{longtable} -\setlength{\emergencystretch}{3em} -\providecommand{\tightlist}{% - \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} -\usepackage{bookmark} -\IfFileExists{xurl.sty}{\usepackage{xurl}}{} -\urlstyle{same} -\hypersetup{ - hidelinks, - pdfcreator={LaTeX via pandoc}} - -% ═══════════════════════════════════════════════════════════════════════════════ -% DIRECTOR'S CUT STYLING -% ═══════════════════════════════════════════════════════════════════════════════ -\usepackage{tcolorbox} -\tcbuselibrary{skins,breakable} -\usepackage{fontawesome5} -\usepackage{pifont} -\usepackage{mdframed} - -% Director's Commentary - conversational asides -\newenvironment{directors} -{\begin{mdframed}[ - linecolor=blue!60, - linewidth=2pt, - leftline=true, - rightline=false, - topline=false, - bottomline=false, - backgroundcolor=blue!3, - innerleftmargin=12pt, - innerrightmargin=10pt, - innertopmargin=8pt, - innerbottommargin=8pt, - skipabove=12pt, - skipbelow=12pt -]\small\sffamily\color{blue!70!black}} -{\end{mdframed}} - -% "Pro tip" callouts -\newenvironment{protip} -{\begin{mdframed}[ - linecolor=green!60!black, - linewidth=2pt, - leftline=true, - rightline=false, - topline=false, - bottomline=false, - backgroundcolor=green!5, - innerleftmargin=12pt, - innerrightmargin=10pt, - innertopmargin=8pt, - innerbottommargin=8pt, - skipabove=12pt, - skipbelow=12pt -]\small\sffamily\color{green!50!black}\textbf{Pro Tip:} } -{\end{mdframed}} - -% "Watch out" warnings -\newenvironment{watchout} -{\begin{mdframed}[ - linecolor=orange!80!black, - linewidth=2pt, - leftline=true, - rightline=false, - topline=false, - 
bottomline=false, - backgroundcolor=orange!5, - innerleftmargin=12pt, - innerrightmargin=10pt, - innertopmargin=8pt, - innerbottommargin=8pt, - skipabove=12pt, - skipbelow=12pt -]\small\sffamily\color{orange!70!black}\textbf{Heads Up:} } -{\end{mdframed}} - -% "The Big Picture" for architectural context -\newenvironment{bigpicture} -{\begin{mdframed}[ - linecolor=purple!60, - linewidth=2pt, - leftline=true, - rightline=false, - topline=false, - bottomline=false, - backgroundcolor=purple!3, - innerleftmargin=12pt, - innerrightmargin=10pt, - innertopmargin=8pt, - innerbottommargin=8pt, - skipabove=12pt, - skipbelow=12pt -]\small\sffamily\color{purple!70!black}\textbf{The Big Picture:} } -{\end{mdframed}} - -\author{} -\date{} - -\begin{document} -\frontmatter - -\mainmatter -\chapter*{Echo: Tour de Code} -\addcontentsline{toc}{chapter}{Echo: Tour de Code} - -\begin{quote} -\large\textbf{The Director's Cut} - -\normalsize -A complete function-by-function trace of Echo's execution pipeline, with commentary explaining what's \emph{really} going on and why. - -File paths and line numbers accurate as of 2026-01-18. -\end{quote} - -\begin{directors} -Hey! Welcome to the Director's Cut of the Echo Tour de Code. - -I'm going to walk you through this codebase like we're pair programming. When I see something clever, I'll tell you why it's clever. When there's a non-obvious design decision, I'll explain the trade-offs. When there's a potential footgun, I'll point it out. - -The goal here isn't just to show you \emph{what} the code does---any decent grep can do that. I want you to understand \emph{why} it does it this way, and what would break if you changed it. - -Let's dive in. -\end{directors} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\tableofcontents - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{1. 
Intent Ingestion}\label{intent-ingestion} - -\textbf{Entry Point:} \texttt{Engine::ingest\_intent()} \\ -\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:1216} - -\begin{directors} -This is where everything starts. A user does something---clicks a button, submits a form, whatever---and that action gets serialized into bytes and fed into this function. - -The first thing to understand: Echo doesn't care \emph{what} those bytes mean. It treats them as opaque data. The semantics come later, when rules interpret the bytes. Right now, we're just doing bookkeeping. -\end{directors} - -\subsection{1.1 Function Signature} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ ingest\_intent(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ intent\_bytes}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\DataTypeTok{u8}\NormalTok{]) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{IngestDisposition}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}} -\end{Highlighting} -\end{Shaded} - -\textbf{Returns:} -\begin{itemize} -\item \texttt{IngestDisposition::Accepted \{ intent\_id: Hash \}} --- New intent accepted -\item \texttt{IngestDisposition::Duplicate \{ intent\_id: Hash \}} --- Already ingested -\end{itemize} - -\begin{directors} -Notice the return type. We don't just return ``success'' or ``failure''---we tell the caller \emph{what happened}. Did we actually ingest this intent, or did we already have it? - -This matters because in a distributed system, the same intent might arrive multiple times (network retries, replays, etc.). The caller needs to know whether this is a fresh intent or a duplicate so they can decide what to do next. 
-\end{directors} - -\subsection{1.2 Complete Call Trace} - -\begin{verbatim} -Engine::ingest_intent(intent_bytes: &[u8]) -│ -├─[1] compute_intent_id(intent_bytes) → Hash -│ FILE: crates/warp-core/src/inbox.rs:205 -│ CODE: -│ let mut hasher = blake3::Hasher::new(); -│ hasher.update(b"intent:"); // Domain separation -│ hasher.update(intent_bytes); -│ hasher.finalize().into() // → [u8; 32] -\end{verbatim} - -\begin{directors} -Okay, stop right here. This is the most important line in the entire function. - -See that \texttt{b"intent:"} prefix? That's called \textbf{domain separation}, and it's a cryptographic best practice that a lot of codebases get wrong. - -Here's the problem it solves: imagine you have some bytes that represent an intent. Now imagine those \emph{exact same bytes} could also be interpreted as a node ID, or an edge ID, or some other identifier. Without domain separation, they'd all hash to the same value, and you'd have collisions between completely different concepts. - -By prefixing with \texttt{"intent:"}, we guarantee that an intent hash can \emph{never} collide with a node hash (which uses \texttt{"node:"}), or a type hash (\texttt{"type:"}), etc. Even if the raw bytes are identical, the hashes will be different. - -Echo does this everywhere: -\begin{itemize} -\item \texttt{"intent:"} for intent IDs -\item \texttt{"node:"} for node IDs -\item \texttt{"type:"} for type IDs -\item \texttt{"edge:"} for edge IDs -\end{itemize} - -If you're ever tempted to add a new ID type, remember to pick a unique prefix. Future you will thank present you. -\end{directors} - -\begin{verbatim} -├─[2] NodeId(intent_id) -│ Creates strongly-typed NodeId from Hash -\end{verbatim} - -\begin{protip} -These newtype wrappers (\texttt{NodeId}, \texttt{EdgeId}, \texttt{TypeId}, etc.) are all just 32 bytes under the hood. But Rust's type system won't let you accidentally pass a \texttt{NodeId} where an \texttt{EdgeId} is expected. 
Zero runtime cost, maximum compile-time safety. -\end{protip} - -\begin{verbatim} -├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore> -│ FILE: crates/warp-core/src/engine_impl.rs:1221 -│ ERROR: EngineError::UnknownWarp if None -│ -├─[4] Extract root_node_id from self.current_root.local_id -│ -├─[5] STRUCTURAL NODE CREATION (Idempotent) -│ ├─ make_node_id("sim") → NodeId -│ │ FILE: crates/warp-core/src/ident.rs:93 -│ │ CODE: blake3("node:" || "sim") -│ │ -│ ├─ make_node_id("sim/inbox") → NodeId -│ │ CODE: blake3("node:" || "sim/inbox") -│ │ -│ ├─ make_type_id("sim") → TypeId -│ │ FILE: crates/warp-core/src/ident.rs:85 -│ │ CODE: blake3("type:" || "sim") -│ │ -│ ├─ make_type_id("sim/inbox") → TypeId -│ ├─ make_type_id("sim/inbox/event") → TypeId -│ │ -│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty }) -│ │ FILE: crates/warp-core/src/graph.rs:175 -│ │ CODE: self.nodes.insert(id, record) -│ │ -│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty }) -\end{verbatim} - -\begin{directors} -Step [5] is doing something subtle: it's creating the structural scaffolding for intents \emph{idempotently}. - -What does that mean? Well, imagine this is the first intent ever ingested. The ``sim'' node doesn't exist yet, nor does the ``sim/inbox'' node. So we create them. - -But what if this is the millionth intent? Those structural nodes already exist. And here's the key insight: \textbf{because the IDs are derived from the names deterministically}, we get the same ID every time. \texttt{make\_node\_id("sim")} \emph{always} returns the same hash. - -So when we call \texttt{store.insert\_node(sim\_id, ...)}, if the node already exists with that ID, it's just a no-op (or an update---same difference for immutable nodes). - -This is the beauty of content-addressed storage. You don't need ``if exists'' checks everywhere. Just compute the ID, do the insert, and let the storage layer handle deduplication. 
-\end{directors} - -\begin{verbatim} -├─[6] STRUCTURAL EDGE CREATION -│ ├─ make_edge_id("edge:root/sim") → EdgeId -│ │ FILE: crates/warp-core/src/ident.rs:109 -│ │ CODE: blake3("edge:" || "edge:root/sim") -│ │ -│ ├─ store.insert_edge(root_id, EdgeRecord { ... }) -│ │ FILE: crates/warp-core/src/graph.rs:188 -│ │ └─ GraphStore::upsert_edge_record(from, edge) -│ │ FILE: crates/warp-core/src/graph.rs:196 -│ │ UPDATES: -│ │ self.edge_index.insert(edge_id, from) -│ │ self.edge_to_index.insert(edge_id, to) -│ │ self.edges_from.entry(from).or_default().push(edge) -│ │ self.edges_to.entry(to).or_default().push(edge_id) -│ │ -│ └─ store.insert_edge(sim_id, EdgeRecord { ... }) [sim → inbox] -\end{verbatim} - -\begin{directors} -Look at all those index updates in \texttt{upsert\_edge\_record}. We're maintaining \emph{four separate indices} for edges: - -\begin{enumerate} -\item \texttt{edge\_index}: edge ID $\rightarrow$ source node -\item \texttt{edge\_to\_index}: edge ID $\rightarrow$ target node -\item \texttt{edges\_from}: source node $\rightarrow$ list of edges -\item \texttt{edges\_to}: target node $\rightarrow$ list of edge IDs -\end{enumerate} - -Why so many? Because graph queries can go in any direction: -\begin{itemize} -\item ``What edges leave this node?'' $\rightarrow$ \texttt{edges\_from} -\item ``What edges arrive at this node?'' $\rightarrow$ \texttt{edges\_to} -\item ``Given this edge, what's its source?'' $\rightarrow$ \texttt{edge\_index} -\item ``Given this edge, what's its target?'' $\rightarrow$ \texttt{edge\_to\_index} -\end{itemize} - -Each of these is O(1) lookup. Yes, it's more memory. Yes, it's more bookkeeping on mutations. But graph traversal is \emph{constant} in Echo, and that's worth a lot. 
-\end{directors} - -\begin{verbatim} -├─[7] DUPLICATE DETECTION -│ store.node(&event_id) → Option<&NodeRecord> -│ FILE: crates/warp-core/src/graph.rs:87 -│ CODE: self.nodes.get(id) -│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id }) -\end{verbatim} - -\begin{directors} -Here's where the content-addressing pays off beautifully. - -Remember how we computed \texttt{intent\_id} by hashing the intent bytes? And remember how we're about to use that same ID as the event node's ID? - -That means: \textbf{if this exact intent was ever ingested before, it created a node with this exact ID}. So we can detect duplicates just by checking if the node exists. - -No database sequence numbers. No UUIDs. No distributed coordination. Just: hash the bytes, check if that node exists. That's it. - -This is why content-addressed systems are so elegant. Deduplication is \emph{free}. -\end{directors} - -\begin{verbatim} -├─[8] EVENT NODE CREATION -│ store.insert_node(event_id, NodeRecord { ty: event_ty }) -│ NOTE: event_id = intent_id (content-addressed) -│ -├─[9] INTENT ATTACHMENT -│ ├─ AtomPayload::new(type_id, bytes) -│ │ FILE: crates/warp-core/src/attachment.rs:149 -│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) } -│ │ -│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload))) -│ FILE: crates/warp-core/src/graph.rs:125 -│ CODE: self.node_attachments.insert(id, v) -\end{verbatim} - -\begin{directors} -The graph structure (nodes and edges) is just the skeleton. The actual \emph{data}---the intent bytes---lives in an ``attachment.'' - -Think of it like this: the node is the mailbox, and the attachment is the letter inside. The mailbox has a predictable address (the content-addressed ID), but the contents can be anything. - -This separation is useful because you can query the graph structure without loading all the attachment data. For large payloads, that's a big memory savings. 
-\end{directors} - -\begin{verbatim} -├─[10] PENDING EDGE CREATION (Queue Membership) -│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId -│ │ FILE: crates/warp-core/src/inbox.rs:212 -│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id) -│ │ -│ └─ store.insert_edge(inbox_id, EdgeRecord { -│ id: pending_edge_id, -│ from: inbox_id, -│ to: event_id, -│ ty: make_type_id("edge:pending") -│ }) -│ -└─[11] return Ok(IngestDisposition::Accepted { intent_id }) -\end{verbatim} - -\begin{bigpicture} -The ``pending edge'' is how Echo implements a queue using a graph. - -The inbox node is the queue. Each pending edge from inbox to an event node represents ``this event is waiting to be processed.'' When a rule processes the event, it deletes the pending edge. - -Why use a graph for a queue? Because now the queue is \emph{part of the state that gets hashed and committed}. You can replay the entire system from any snapshot, and the queue will be exactly where it was. - -No external message broker. No separate queue database. It's all just graph. 
-\end{bigpicture} - -\subsection{1.3 Data Structures Modified} - -{\def\LTcaptype{none} -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.42}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.27}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.31}}@{}} -\toprule\noalign{} -Structure & Field & Change \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{GraphStore} & \texttt{nodes} & +3 entries (sim, inbox, event) \\ -\texttt{GraphStore} & \texttt{edges\_from} & +3 edges \\ -\texttt{GraphStore} & \texttt{edges\_to} & +3 reverse entries \\ -\texttt{GraphStore} & \texttt{edge\_index} & +3 edge$\rightarrow$from mappings \\ -\texttt{GraphStore} & \texttt{edge\_to\_index} & +3 edge$\rightarrow$to mappings \\ -\texttt{GraphStore} & \texttt{node\_attachments} & +1 (event $\rightarrow$ intent payload) \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{2. 
Transaction Lifecycle}\label{transaction-lifecycle} - -\subsection{2.1 Begin Transaction} - -\textbf{Entry Point:} \texttt{Engine::begin()} \\ -\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:711-719} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ begin(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ TxId }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter}\OperatorTok{.}\NormalTok{wrapping\_add(}\DecValTok{1}\NormalTok{)}\OperatorTok{;} \CommentTok{// Line 713} - \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{==} \DecValTok{0} \OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \DecValTok{1}\OperatorTok{;} \CommentTok{// Line 715} - \OperatorTok{\}} - \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{insert(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter)}\OperatorTok{;} \CommentTok{// Line 717} - \PreprocessorTok{TxId::}\NormalTok{from\_raw(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter) }\CommentTok{// Line 718} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{directors} -This is refreshingly simple for a transaction begin, right? Just increment a counter and track it in a set. - -But look at line 715. What's up with that \texttt{if tx\_counter == 0} check? - -Here's the deal: \texttt{TxId(0)} is reserved as an invalid/sentinel value throughout the codebase. It means ``no transaction'' or ``null transaction.'' If you ever wrap around from \texttt{u64::MAX} back to 0, you'd suddenly have a valid-looking transaction ID that's actually invalid. - -Now, will you ever hit $2^{64}$ transactions? Almost certainly not. The sun will burn out first. 
But this check costs one branch that's basically never taken, and it eliminates an entire class of potential bugs. - -This is defensive programming done right. The cost is negligible, and the safety is real. -\end{directors} - -\begin{protip} -See that \texttt{\#[repr(transparent)]} on \texttt{TxId}? That guarantees it has the exact same memory layout as a raw \texttt{u64}. You get type safety at compile time with zero runtime overhead. Use newtypes liberally---they're free! -\end{protip} - -\subsection{2.2 Abort Transaction} - -\textbf{Entry Point:} \texttt{Engine::abort()} \\ -\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:962-968} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ abort(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx}\OperatorTok{.}\NormalTok{value())}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{scheduler}\OperatorTok{.}\NormalTok{finalize\_tx(tx)}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{bus}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization\_errors}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{directors} -Notice what's \emph{not} here: there's no rollback of graph state. - -Why? Because Echo hasn't touched the graph yet! All the matching and scheduling happens without mutating anything. The graph only changes during commit. - -This is a fundamental architectural decision: \textbf{the graph is effectively immutable until commit}. You can abort at any point before commit and there's nothing to undo. 
Just clear the transient state and you're done.
-
-Compare this to traditional databases where abort might mean replaying an undo log. Here it's just clearing some hash maps.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{3. Rule Matching}\label{rule-matching}
-
-\textbf{Entry Point:} \texttt{Engine::apply()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:730-737}
-
-\begin{bigpicture}
-Rules are the heart of Echo's reactive programming model. A rule says ``when you see this pattern in the graph, do this thing.''
-
-But here's the key insight: matching is \textbf{pure}. The matcher function reads the graph, decides if the pattern matches, but doesn't modify anything. All the mutations happen later, during execution.
-
-This separation of matching from execution is what enables parallel scheduling.
-\end{bigpicture}
-
-\subsection{3.1 Function Signature}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ apply(}
- \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}
-\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}
-\NormalTok{ rule\_name}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\OperatorTok{,}
-\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{ApplyResult}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{3.2 Key Steps}
-
-\begin{verbatim}
-Engine::apply(tx, rule_name, scope)
-│
-├─[4] CREATE GRAPHVIEW
-│ GraphView::new(store) → GraphView<'_>
-│ FILE: crates/warp-core/src/graph_view.rs
-│ TYPE: Read-only wrapper (Copy, 8 bytes)
-\end{verbatim}
-
-\begin{directors}
-This is one of my favorite patterns in Echo.
-
-\texttt{GraphView} is a \emph{read-only wrapper} around \texttt{GraphStore}.
It's literally just a pointer (8 bytes), and it implements \texttt{Copy}, so passing it around is essentially free. - -But here's the magic: \texttt{GraphView} only exposes read methods. No mutations. The Rust compiler \emph{physically cannot} let you modify the graph through a \texttt{GraphView}. - -This is Rust's type system doing real work. You don't need runtime checks for ``is this a read-only transaction?'' The type system guarantees it at compile time. Any code that takes a \texttt{GraphView} is provably read-only. -\end{directors} - -\begin{verbatim} -├─[5] CALL MATCHER -│ (rule.matcher)(view, scope) → bool -│ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool -│ IF false: return Ok(ApplyResult::NoMatch) -│ -├─[8] COMPUTE FOOTPRINT -│ (rule.compute_footprint)(view, scope) → Footprint -│ RETURNS: -│ Footprint { -│ n_read: IdSet, // Nodes read -│ n_write: IdSet, // Nodes written -│ e_read: IdSet, // Edges read -│ e_write: IdSet, // Edges written -│ a_read: AttachmentSet, // Attachments read -│ a_write: AttachmentSet, // Attachments written -│ b_in: PortSet, // Input ports -│ b_out: PortSet, // Output ports -│ factor_mask: u64, // O(1) prefilter -│ } -\end{verbatim} - -\begin{directors} -The footprint is the \textbf{declaration of intent}. - -Before a rule can execute, it must tell the scheduler exactly which nodes, edges, and attachments it plans to read and write. Not approximately. Not ``somewhere in this subgraph.'' \emph{Exactly} these IDs. - -This is a constraint on rule authors, but it's what makes parallelism tractable. If two rules have non-overlapping footprints, they can run concurrently. If they overlap, the scheduler serializes them. - -Think of it like declaring your locks upfront, except you never actually acquire locks---you just declare your intentions and let the scheduler figure out what can run in parallel. 
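The independence rule the scheduler applies is plain set algebra. A minimal sketch, covering node sets only (Echo's real footprints also track edges, attachments, and ports):

```rust
use std::collections::HashSet;

// Simplified footprint: just node reads and writes. Field names
// mirror the doc's `n_read`/`n_write` but the shape is illustrative.
#[derive(Default)]
struct Footprint {
    n_read: HashSet<u64>,
    n_write: HashSet<u64>,
}

// Two rules may run in parallel iff neither writes anything the other
// touches: read/read is fine, write/anything is a conflict.
fn independent(a: &Footprint, b: &Footprint) -> bool {
    a.n_write.is_disjoint(&b.n_write)
        && a.n_write.is_disjoint(&b.n_read)
        && b.n_write.is_disjoint(&a.n_read)
}

fn main() {
    let reader = Footprint { n_read: [1, 2].into(), ..Default::default() };
    let writer = Footprint { n_write: [2].into(), ..Default::default() };
    let other = Footprint { n_write: [9].into(), ..Default::default() };
    assert!(!independent(&reader, &writer)); // write overlaps a read
    assert!(independent(&reader, &other));   // disjoint: parallel is safe
    println!("footprint checks pass");
}
```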
-\end{directors} - -\begin{watchout} -If your footprint is wrong---if you access something you didn't declare---Bad Things happen. The parallel execution model assumes footprints are honest. There's debug-mode validation, but in release mode, you're on the honor system. - -Always over-declare rather than under-declare. If you \emph{might} read a node, put it in \texttt{n\_read}. Correctness beats parallelism. -\end{watchout} - -\begin{verbatim} -└─[11] ENQUEUE TO SCHEDULER - self.scheduler.enqueue(tx, PendingRewrite { ... }) - │ - └─ PendingTx::enqueue(scope_be32, rule_id, payload) - FILE: crates/warp-core/src/scheduler.rs:331-355 - - CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS - fat[thin[i].handle] = Some(payload) // Overwrite - thin[i].nonce = next_nonce++ // Refresh nonce - - CASE 2: New entry - fat.push(Some(payload)) - thin.push(RewriteThin { scope_be32, rule_id, nonce, handle }) - index.insert(key, thin.len() - 1) -\end{verbatim} - -\begin{directors} -See ``LAST WINS'' on duplicate entries? This is subtle but important. - -If you call \texttt{apply()} twice with the same rule and scope, you get one execution, not two. The second call \emph{replaces} the first. - -Why? Because matching a rule at a scope is \emph{idempotent}. If the rule matches at that scope, you want to execute it once, regardless of how many times you tried to apply it. - -The ``nonce'' gets refreshed on replacement, which affects sort order (we'll see why later), but the key point is: duplicate apply calls are collapsed into one. -\end{directors} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{4. Scheduler: Drain \& Reserve}\label{scheduler-drain-reserve} - -\begin{bigpicture} -This is where Echo's determinism guarantee gets forged. - -You've enqueued a bunch of rules. They were enqueued in whatever order the application called \texttt{apply()}. 
Now we need to execute them in a \textbf{canonical order}---the same order every time, regardless of timing, regardless of which thread called what when.
-
-The scheduler does this in two phases:
-\begin{enumerate}
-\item \textbf{Drain}: Sort all pending rewrites into canonical order
-\item \textbf{Reserve}: Walk through them, checking for conflicts
-\end{enumerate}
-\end{bigpicture}
-
-\subsection{4.1 Drain Phase (Radix Sort)}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::drain\_for\_tx()} \\
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:109-113}
-
-\begin{verbatim}
-RadixScheduler::drain_for_tx(tx)
-│
-└─ PendingTx::drain_in_order()
- │
- ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)?
- │ ├─ YES: sort_unstable_by(cmp_thin)
- │ └─ NO: radix_sort()
- │
- └─ radix_sort()
- │
- └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══
- │
- ├─ PHASE 1: COUNT BUCKETS
- ├─ PHASE 2: PREFIX SUMS
- └─ PHASE 3: STABLE SCATTER
-\end{verbatim}
-
-\begin{directors}
-Twenty passes of radix sort. Let's unpack why.
-
-First: why radix sort instead of quicksort or mergesort?
-
-\begin{enumerate}
-\item \textbf{Determinism}: Radix sort is inherently stable---equal elements stay in their original order. Quicksort's behavior depends on pivot selection, which can vary.
-
-\item \textbf{O(n) complexity}: With fixed key size, radix sort is linear. We're sorting by 320 bits (256 bits of scope\_hash + 32 bits of rule\_id + 32 bits of nonce), so with 16-bit digits it's O(20n) = O(n).
-
-\item \textbf{Cache-friendly}: Each pass is a sequential scan. Modern CPUs love sequential access.
-\end{enumerate}
-
-The 1024-element threshold is practical: for small arrays, the constant factors of radix sort (setting up histograms, etc.) exceed its benefits. Below that threshold, a comparison sort wins.
-\end{directors} - -\begin{verbatim} -BUCKET EXTRACTION (bucket16): -FILE: crates/warp-core/src/scheduler.rs:481-498 - -Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2] -Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4] -Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2] -Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4] -Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32] -... -Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD) - -SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic -\end{verbatim} - -\begin{directors} -This is LSD (Least Significant Digit) radix sort---we process from least significant to most significant. - -The final sort order is: \texttt{(scope\_hash, rule\_id, nonce)}. - -Why this order? -\begin{itemize} -\item \textbf{scope\_hash first}: Rules at different scopes can potentially run in parallel. Grouping by scope makes conflict detection efficient. -\item \textbf{rule\_id second}: When multiple rules match at the same scope, we need a deterministic order. -\item \textbf{nonce last}: The tiebreaker for duplicate (scope, rule) pairs. Remember ``LAST WINS''? The nonce determines which duplicate survives. -\end{itemize} - -Because it's LSD, we process in reverse order: nonce first (passes 0-1), then rule\_id (passes 2-3), then scope\_hash (passes 4-19). 
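Here is the same count/prefix-sum/scatter structure in miniature---8-bit digits over plain \texttt{u32} keys instead of 16-bit digits over the 320-bit composite key:

```rust
// LSD radix sort over u32 keys with 8-bit digits (4 passes).
// Echo uses 16-bit digits and a composite key, but each pass has the
// same three phases: count, prefix-sum, stable scatter.
fn radix_sort_u32(items: &mut Vec<u32>) {
    let mut buf = vec![0u32; items.len()];
    for pass in 0..4 {
        let shift = pass * 8;
        // Phase 1: count bucket occupancy.
        let mut counts = [0usize; 256];
        for &x in items.iter() {
            counts[((x >> shift) & 0xFF) as usize] += 1;
        }
        // Phase 2: exclusive prefix sums give each bucket's start offset.
        let mut start = 0;
        for c in counts.iter_mut() {
            let n = *c;
            *c = start;
            start += n;
        }
        // Phase 3: stable scatter -- equal digits keep their relative
        // order, which is what makes the overall sort deterministic.
        for &x in items.iter() {
            let b = ((x >> shift) & 0xFF) as usize;
            buf[counts[b]] = x;
            counts[b] += 1;
        }
        std::mem::swap(items, &mut buf);
    }
}

fn main() {
    let mut v = vec![0xCAFE, 0x0001, 0xBEEF, 0x0001, 0xFFFF_0000];
    radix_sort_u32(&mut v);
    assert_eq!(v, vec![0x0001, 0x0001, 0xBEEF, 0xCAFE, 0xFFFF_0000]);
    println!("sorted: {v:?}");
}
```

Running the least significant digit first only works because each pass is stable---exactly the property quicksort lacks.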
-\end{directors} - -\subsection{4.2 Reserve Phase (Independence Check)} - -\textbf{Entry Point:} \texttt{RadixScheduler::reserve()} \\ -\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:134-143} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ reserve(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}\NormalTok{ pr}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ PendingRewrite) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{} - \KeywordTok{let}\NormalTok{ active }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{active}\OperatorTok{.}\NormalTok{entry(tx)}\OperatorTok{.}\NormalTok{or\_insert\_with(}\PreprocessorTok{ActiveFootprints::}\NormalTok{new)}\OperatorTok{;} - \ControlFlowTok{if} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{has\_conflict(active}\OperatorTok{,}\NormalTok{ pr) }\OperatorTok{\{} - \ControlFlowTok{return} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_conflict(pr)}\OperatorTok{;} - \OperatorTok{\}} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{mark\_all(active}\OperatorTok{,}\NormalTok{ pr)}\OperatorTok{;} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_reserved(pr)} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{directors} -This is classic two-phase locking... without the locks. - -We walk through the sorted rewrites. 
For each one: -\begin{enumerate} -\item Check if its footprint conflicts with already-reserved footprints -\item If no conflict, mark its footprint as reserved and accept it -\item If conflict, reject it (it'll need to wait for a future tick) -\end{enumerate} - -The conflict matrix is what you'd expect: - -\begin{center} -\begin{tabular}{|c|c|c|} -\hline - & Read & Write \\ -\hline -Read & \checkmark & X \\ -\hline -Write & X & X \\ -\hline -\end{tabular} -\end{center} - -Multiple readers are fine. Any writer conflicts with readers and other writers. -\end{directors} - -\subsection{4.3 GenSet: O(1) Conflict Detection} - -\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:509-535} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{} -\NormalTok{ gen}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} \CommentTok{// Current generation} -\NormalTok{ seen}\OperatorTok{:}\NormalTok{ FxHashMap}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{,} \DataTypeTok{u32}\OperatorTok{\textgreater{},} \CommentTok{// Key → generation when marked} -\OperatorTok{\}} - -\KeywordTok{impl}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{:} \BuiltInTok{Hash} \OperatorTok{+} \BuiltInTok{Eq} \OperatorTok{+} \BuiltInTok{Copy}\OperatorTok{\textgreater{}}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ contains(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{} - \PreprocessorTok{matches!}\NormalTok{(}\KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{get(}\OperatorTok{\&}\NormalTok{key)}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(}\OperatorTok{\&}\NormalTok{g) 
}\ControlFlowTok{if}\NormalTok{ g }\OperatorTok{==} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)} - \OperatorTok{\}} - - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ mark(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{insert(key}\OperatorTok{,} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}\OperatorTok{;} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{directors} -Okay, this is my favorite data structure in the entire codebase. It's so simple and so clever. - -The problem: we need to track which keys are ``in the set'' for conflict detection. Between transactions, we need to clear the set. - -The naive approach: call \texttt{hash\_map.clear()} between transactions. That's O(n) where n is the number of keys. - -The clever approach: \textbf{generational clearing}. - -Instead of storing just keys, we store (key, generation). A key is ``in the set'' only if its stored generation matches the current generation. - -To ``clear'' the set? Just increment \texttt{gen}. That's it. O(1). - -All the old entries are still in the hash map, but they have stale generations, so \texttt{contains()} returns false for them. They're ghosts. - -The map grows over time, but since the same keys tend to be accessed repeatedly (temporal locality), it stabilizes quickly. And we never pay the O(n) clear cost. - -This pattern is criminally underused. Remember it. -\end{directors} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{5. 
BOAW Parallel Execution}\label{boaw-parallel-execution} - -\textbf{Entry Point:} \texttt{execute\_parallel()} \\ -\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs:61-83} - -\begin{bigpicture} -BOAW stands for ``Best Of All Worlds.'' The idea is simple but powerful: - -\begin{enumerate} -\item Partition work items into shards based on their scope -\item Spin up worker threads -\item Workers claim shards and execute items -\item Merge all the outputs into a single canonical result -\end{enumerate} - -The key insight: \textbf{execution order doesn't matter if we sort the outputs}. Workers can execute in any order, claim shards in any order, even race against each other---as long as the merge produces the same result, we're deterministic. -\end{bigpicture} - -\subsection{5.1 Entry Point} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} -\end{Highlighting} -\end{Shaded} - -\subsection{5.2 Sharding} - -\begin{verbatim} -partition_into_shards(items.to_vec()) → Vec -│ -└─ FOR item IN items: - │ - ├─ shard_of(&item.scope) → usize - │ CODE: - │ let bytes = scope.as_bytes(); - │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap(); - │ let val = u64::from_le_bytes(first_8); - │ (val & 255) as usize // SHARD_MASK = 255 - │ - └─ shards[shard_id].items.push(item) -\end{verbatim} - -\begin{directors} -The sharding is beautifully simple: take the first 8 bytes of the node ID, interpret as a little-endian u64, mask with 255. You get a shard number from 0 to 255. - -Why 256 shards? 
-\begin{itemize} -\item \textbf{Fine enough}: With random node IDs, work distributes evenly across shards. -\item \textbf{Coarse enough}: Each shard has multiple items, amortizing per-shard overhead. -\item \textbf{Power of 2}: Masking is just a bitwise AND, no division needed. -\end{itemize} - -Why is this deterministic? Because shard assignment depends only on the node ID, which is content-addressed. The same node always lands in the same shard. -\end{directors} - -\subsection{5.3 Work Stealing Loop} - -\begin{verbatim} -FOR _ IN 0..workers: -│ -└─ s.spawn(move || { ... }) // ═══ WORKER THREAD ═══ - │ - └─ LOOP: - │ - ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed) - │ ATOMIC: Returns old value, increments counter - │ - ├─ IF shard_id >= 256: break - │ - └─ FOR item IN &shards[shard_id].items: - └─ (item.exec)(view, &item.scope, &mut delta) -\end{verbatim} - -\begin{directors} -Each worker runs a loop: atomically claim the next shard number, process all items in that shard, repeat until no shards remain. - -See \texttt{Ordering::Relaxed}? That's the weakest memory ordering---basically ``no synchronization, just do the atomic operation.'' - -Why is that safe here? -\begin{enumerate} -\item Each shard is processed by exactly one worker (atomic fetch-add guarantees unique assignment) -\item Workers don't need to see each other's results until after \texttt{join()} -\item The \texttt{join()} provides the synchronization barrier -\end{enumerate} - -Using \texttt{Relaxed} instead of \texttt{SeqCst} avoids expensive memory barriers. On a 16-core machine, that matters. -\end{directors} - -\begin{watchout} -The shard claim order is non-deterministic. Worker 1 might claim shard 5 before worker 2 claims shard 3, or vice versa. - -This is fine! The merge phase sorts the outputs canonically. The execution order doesn't affect the final result. - -But if you're debugging and wondering why execution traces look different between runs, this is why. 
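You can see both halves---racy shard claiming, deterministic result---in a runnable sketch with scoped threads and toy work items (summing numbers instead of executing rules; names are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const SHARDS: usize = 16; // Echo uses 256; smaller here for the sketch.

fn main() {
    // Toy shards: each holds numbers to sum. Shard assignment is
    // `key & (SHARDS - 1)`, mirroring the `val & 255` mask.
    let mut shards: Vec<Vec<u64>> = vec![Vec::new(); SHARDS];
    for key in 0..1000u64 {
        shards[(key as usize) & (SHARDS - 1)].push(key);
    }

    let next_shard = AtomicUsize::new(0);
    let mut partials: Vec<u64> = Vec::new();

    std::thread::scope(|s| {
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let next = &next_shard;
                let shards = &shards;
                s.spawn(move || {
                    let mut sum = 0u64;
                    loop {
                        // Relaxed is enough: fetch_add alone hands out
                        // each shard index exactly once, and join()
                        // below is the synchronization barrier.
                        let id = next.fetch_add(1, Ordering::Relaxed);
                        if id >= SHARDS {
                            break;
                        }
                        sum += shards[id].iter().sum::<u64>();
                    }
                    sum
                })
            })
            .collect();
        for h in handles {
            partials.push(h.join().unwrap());
        }
    });

    // Which worker claimed which shard varies run to run; the total never does.
    let total: u64 = partials.iter().sum();
    assert_eq!(total, (0..1000u64).sum::<u64>());
    println!("total = {total}");
}
```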
-\end{watchout}
-
-\subsection{5.4 Enforced Execution Path}\label{enforced-execution-path}
-
-\textbf{Entry Point:} \texttt{execute\_item\_enforced()} \\
-\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs}
-
-When footprint enforcement is active, each item is executed via
-\texttt{execute\_item\_enforced()} instead of a bare function-pointer call.
-This wraps execution with \texttt{catch\_unwind} and performs post-hoc
-\texttt{check\_op()} validation on any newly-emitted ops.
-
-\begin{verbatim}
-execute_item_enforced(view, item, delta, footprint)
-│
-├─ ops_before = delta.len()
-│ Snapshot the op count BEFORE the executor runs
-│
-├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| {
-│ (item.exec)(view, &item.scope, delta)
-│ }))
-│
-├─ FOR op IN delta.ops()[ops_before..]:
-│ guard.check_op(op) → panic_any(FootprintViolation) on failure
-│ Validates that each newly-emitted op falls within the declared footprint.
-│ ExecItemKind::System items may emit warp-instance-level ops;
-│ ExecItemKind::User items may not.
-│
-└─ OUTCOME PRECEDENCE:
- ├─ IF check_op fails:
- │ panic_any(FootprintViolation)
- │ Footprint violations OVERRIDE executor panics — violation takes precedence.
- │ (FootprintViolation includes UnauthorizedInstanceOp and CrossWarpEmission.)
- │
- ├─ IF footprint is clean BUT executor panicked:
- │ std::panic::resume_unwind(payload)
- │ The original panic propagates to the caller.
- │
- └─ IF both clean:
- return Ok(())
-\end{verbatim}
-
-\begin{directors}
-This is perhaps the most interesting design decision in the enforcement system.
-
-\textbf{Why post-hoc instead of intercept-on-write?}
-
-The naive approach would be to wrap every \texttt{delta.push\_op()} call with a check. But that would add overhead to every write in the hot loop---and most writes are valid. Instead, we let the executor run at full speed, then scan the ops it produced.
This is cheaper because: - -\begin{enumerate} -\item Most rule invocations produce few ops (1-5 typically) -\item The scan is a single pass over a small vec -\item We avoid indirection/branching in the write path -\end{enumerate} - -\textbf{Why does violation override panic?} - -Consider: a rule writes to node X (not in its footprint), then panics on an unrelated assertion. If we propagated the panic, the developer would see ``assertion failed'' and waste time debugging the wrong thing. By checking the delta first, we surface the \emph{root cause}---the footprint violation---which is almost always why the subsequent logic went wrong. - -\textbf{The Poison Invariant:} After a panic, the \texttt{TickDelta} is -considered poisoned. The partially-written ops have no transactional rollback. -The delta must be discarded---it cannot be merged or committed. This is safe -because each worker has its own delta, so a poisoned delta doesn't contaminate -other workers' output. -\end{directors} - -\textbf{\texttt{ExecItemKind} (cfg-gated):} - -\begin{itemize} -\tightlist -\item - \texttt{ExecItemKind::User} --- Normal rule executor. May emit - node/edge/attachment ops scoped to the declared footprint. Cannot emit - warp-instance-level ops (\texttt{UpsertWarpInstance}, - \texttt{DeleteWarpInstance}, \texttt{OpenPortal}). -\item - \texttt{ExecItemKind::System} --- Internal-only executor (e.g., portal - opening). May emit warp-instance-level ops. -\end{itemize} - -\begin{directors} -The User/System distinction prevents a critical class of bugs: user-authored rules accidentally (or maliciously) creating/destroying warp instances. In a multiverse simulation, instance ops change the \emph{topology} of the timeline graph. Only engine-internal code (like the portal system) should have that power. 
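The outcome-precedence rule above is easy to get wrong, so here it is as a toy (hypothetical types: an "op" is just the node ID it writes, and the "footprint" is a list of allowed IDs):

```rust
use std::panic::{self, AssertUnwindSafe};

#[derive(Debug)]
enum Outcome {
    Ok,
    FootprintViolation(u64),
    ExecutorPanic,
}

fn run_enforced(
    delta: &mut Vec<u64>,
    footprint: &[u64],
    exec: impl FnOnce(&mut Vec<u64>),
) -> Outcome {
    let ops_before = delta.len(); // snapshot BEFORE the executor runs
    let result = panic::catch_unwind(AssertUnwindSafe(|| exec(&mut *delta)));

    // Post-hoc scan: only the newly emitted ops are checked, and a
    // violation takes precedence over an executor panic, since it is
    // usually the root cause of whatever blew up afterwards.
    for &op in &delta[ops_before..] {
        if !footprint.contains(&op) {
            return Outcome::FootprintViolation(op);
        }
    }
    match result {
        Ok(()) => Outcome::Ok,
        Err(_) => Outcome::ExecutorPanic, // footprint clean: surface the panic
    }
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the expected panic's output

    let mut delta = vec![];
    // Writes node 99 (undeclared), then panics on an unrelated assertion.
    let out = run_enforced(&mut delta, &[1, 2], |d| {
        d.push(99);
        panic!("unrelated assertion");
    });
    // The violation wins, pointing the developer at the real bug.
    assert!(matches!(out, Outcome::FootprintViolation(99)));
    println!("{out:?}");
}
```

Note the poisoned delta: after the panic, `delta` still contains the partially-written op, which is why a real implementation must discard it rather than merge it.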
-
-\textbf{The triple cfg-gate pattern:}
-
-\begin{enumerate}
-\item \texttt{debug\_assertions} --- enforcement is always on in dev builds
-\item \texttt{footprint\_enforce\_release} --- opts release builds in to enforcement
-\item \texttt{not(unsafe\_graph)} --- escape hatch for benchmarks and fuzzing
-\end{enumerate}
-
-This means the \texttt{ExecItem} struct is \emph{literally a different size} depending on your build profile. In release without the enforcement feature, the \texttt{kind} field doesn't exist---zero overhead, not even a byte.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Delta Merge \& State Finalization}\label{delta-merge-state-finalization}
-
-\begin{bigpicture}
-Multiple workers have produced their deltas. Now we need to merge them into a single canonical result.
-
-The merge does three things:
-\begin{enumerate}
-\item Flatten all operations from all deltas
-\item Sort them by a canonical key
-\item Deduplicate, detecting conflicts along the way
-\end{enumerate}
-\end{bigpicture}
-
-\subsection{6.1 Canonical Merge}
-
-\textbf{Entry Point:} \texttt{merge\_deltas()} \\
-\textbf{File:} \texttt{crates/warp-core/src/boaw/merge.rs:36-75}
-
-\begin{verbatim}
-merge_deltas(deltas: Vec<TickDelta>) → Result<Vec<WarpOp>, MergeConflict>
-│
-├─[1] FLATTEN ALL OPS WITH ORIGINS
-│
-├─[2] CANONICAL SORT
-│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1)));
-│ ORDER: (WarpOpKey, OpOrigin) lexicographic
-│
-└─[3] DEDUPE & CONFLICT DETECTION
- GROUP by WarpOpKey
- IF all ops in group are identical: keep one
- ELSE: return Err(MergeConflict { writers })
-\end{verbatim}
-
-\begin{directors}
-The magic is in step 3: \textbf{benevolent coincidence}.
-
-If two rules independently decide to create the same edge, with the same properties, that's fine! They're in agreement. We keep one copy.
-
-But if they produce \emph{different} operations for the same key---say, one sets an attachment to value A and another sets it to value B---that's a conflict.
The rules disagree, and we can't pick a winner. - -This policy allows natural redundancy in rule definitions. Multiple rules can create the same structural elements without coordinating. As long as they agree on the result, it works. - -Conflicts indicate a bug in rule definitions. The receipt includes the conflicting writers so you can debug. -\end{directors} - -\subsection{6.2 Operation Ordering} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ sort\_key(}\OperatorTok{\&}\KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{} - \ControlFlowTok{match} \KeywordTok{self} \OperatorTok{\{} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{OpenPortal }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{2}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{3}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{4}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} 
\OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{5}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{6}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{7}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{SetAttachment }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{8}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{directors} -The operation order is carefully chosen to maintain invariants: - -\begin{enumerate} -\item \textbf{OpenPortal first}: Creates warp instances that later ops may reference -\item \textbf{Deletes before upserts}: If you delete then upsert the same thing, you get a fresh entity. If you upsert then delete, you get nothing. Deletes first is the saner default. -\item \textbf{Nodes before edges}: Edges reference nodes, so nodes must exist first -\item \textbf{Attachments last}: Attachments attach to nodes/edges, so the skeleton must exist -\end{enumerate} - -This ordering means rules can emit ops in any order. The merge sorts them into the correct sequence. One less thing for rule authors to worry about. -\end{directors} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{7. 
Hash Computation}\label{hash-computation} - -\begin{bigpicture} -Echo uses hashing for two things: - -\begin{enumerate} -\item \textbf{State root}: A fingerprint of what the graph looks like right now -\item \textbf{Commit hash}: A fingerprint of this entire commit (state + how we got here) -\end{enumerate} - -If two nodes compute the same commit hash, they have identical state. This is how consensus works without comparing the full state. -\end{bigpicture} - -\subsection{7.1 State Root} - -\textbf{Entry Point:} \texttt{compute\_state\_root()} \\ -\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs:88-209} - -\begin{verbatim} -compute_state_root(state: &WarpState, root: &NodeKey) → Hash -│ -├─[1] BFS REACHABILITY TRAVERSAL -│ Only hash nodes/edges reachable from root -│ -├─[2] HASHING PHASE -│ │ -│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order -│ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted -│ hash(node_id, node.type, attachment) -│ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted -│ sorted_edges = edges.sort_by(id) -│ hash(from, edges) -│ -└─ hasher.finalize().into() -\end{verbatim} - -\begin{directors} -Two critical details here: - -\textbf{1. Reachability}: We only hash nodes/edges reachable from the root via BFS. Unreachable ``garbage'' doesn't affect the hash. - -This is subtle but important. It means you can safely delete subgraphs without affecting the hash of nodes that don't reference them. It's also the foundation for garbage collection---unreachable data can be purged without breaking consensus. - -\textbf{2. BTreeMap/BTreeSet}: Notice the iteration is over B-tree collections, not hash maps. - -Why? Because B-trees iterate in \emph{sorted order}. Hash maps iterate in arbitrary order (based on hashing, which might differ between machines or Rust versions). - -If we used hash maps, two machines with identical state might produce different hashes just because they iterated in different orders. 
That would be catastrophic. - -BTreeMap/BTreeSet cost O(log n) instead of O(1) for operations, but they guarantee deterministic iteration. For hashing, that's non-negotiable. -\end{directors} - -\subsection{7.2 Commit Hash v2} - -\textbf{Entry Point:} \texttt{compute\_commit\_hash\_v2()} \\ -\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs:244-263} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{fn}\NormalTok{ compute\_commit\_hash\_v2(} -\NormalTok{ state\_root}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,} -\NormalTok{ parents}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\BuiltInTok{Hash}\NormalTok{]}\OperatorTok{,} -\NormalTok{ patch\_digest}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,} -\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}} \BuiltInTok{Hash} \OperatorTok{\{} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Version tag} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{(parents}\OperatorTok{.}\NormalTok{len() }\KeywordTok{as} \DataTypeTok{u64}\NormalTok{)}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Parent count} - \ControlFlowTok{for}\NormalTok{ p }\KeywordTok{in}\NormalTok{ parents }\OperatorTok{\{}\NormalTok{ h}\OperatorTok{.}\NormalTok{update(p)}\OperatorTok{;} \OperatorTok{\}} \CommentTok{// Parents} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(state\_root)}\OperatorTok{;} \CommentTok{// State} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(patch\_digest)}\OperatorTok{;} \CommentTok{// Operations} -\NormalTok{ 
h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Policy} -\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{directors} -The commit hash includes: -\begin{itemize} -\item \textbf{state\_root}: What the graph looks like -\item \textbf{patch\_digest}: What operations got us here -\item \textbf{parents}: Which commit(s) we're building on -\item \textbf{policy\_id}: Which policy version we're using -\end{itemize} - -The \texttt{2u16} version tag is future-proofing. If we ever need to change the commit hash format, we bump the version. Old and new formats produce different hashes, which is correct---they're different protocols. - -Everything is little-endian (\texttt{to\_le\_bytes()}) because we need byte-identical encoding across platforms. Big-endian and little-endian machines must produce the same hash. -\end{directors} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{8. Commit Orchestration}\label{commit-orchestration} - -\textbf{Entry Point:} \texttt{Engine::commit\_with\_receipt()} \\ -\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:837-954} - -\begin{bigpicture} -This is the grand finale. All the pieces come together: - -\begin{enumerate} -\item Drain the scheduler (get sorted rewrites) -\item Reserve (check for conflicts) -\item Execute (run the rules, collect deltas) -\item Merge (combine deltas canonically) -\item Apply (mutate the graph) -\item Hash (compute state root and commit hash) -\item Record (save to history) -\end{enumerate} - -If any step fails, we haven't mutated anything permanent. The graph only changes when everything succeeds. 
-\end{bigpicture} - -\begin{verbatim} -Engine::commit_with_receipt(tx) -│ -├─[2] DRAIN CANDIDATES -│ drained = self.scheduler.drain_for_tx(tx) -│ -├─[3] RESERVE (INDEPENDENCE CHECK) -│ FOR rewrite IN drained: -│ accepted = self.scheduler.reserve(tx, &mut rewrite) -│ -├─[4] EXECUTE -│ state_before = self.state.clone() // Snapshot before mutation! -│ FOR rewrite IN reserved: -│ (executor)(view, &scope, &mut delta) -│ delta.finalize() -│ patch.apply_to_state(&mut self.state) -│ -├─[6] COMPUTE DELTA PATCH -│ ops = diff_state(&state_before, &self.state) -│ -├─[7] COMPUTE STATE ROOT -│ state_root = compute_state_root(&self.state, &root) -│ -├─[10] COMPUTE COMMIT HASH -│ hash = compute_commit_hash_v2(state_root, parents, patch_digest, policy_id) -│ -└─[12] RECORD TO HISTORY - tick_history.push((snapshot, receipt, patch)) -\end{verbatim} - -\begin{directors} -See \texttt{state\_before = self.state.clone()} in step [4]? - -We snapshot the state \emph{before} executing anything. This enables: -\begin{enumerate} -\item \texttt{diff\_state()}: Compare before/after to get the actual ops -\item Validation: The delta from execution should match the diff -\item Potential rollback: If something goes wrong, we have the original -\end{enumerate} - -The clone isn't as expensive as it looks. \texttt{WarpState} uses \texttt{Arc} internally for shared data structures, so cloning is cheap---it increments reference counts rather than deep-copying. True copy-on-write semantics require explicit \texttt{Arc}/\texttt{Rc}/\texttt{Cow} wrappers; Rust's \texttt{Clone} trait itself performs deep copies unless the type uses such wrappers. -\end{directors} - -\begin{directors} -And that's it! That's the complete journey from user action to committed state. - -Every step is deterministic. Every hash is content-addressed. The same inputs always produce the same outputs, regardless of timing, thread scheduling, or which machine runs the code. - -This is what makes Echo special. 
It's not just a graph database. It's a \emph{deterministic computation engine} that happens to store its state in a graph. - -Thanks for sticking with me through this tour. Now go read the actual code---you'll understand it much better now. -\end{directors} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix A: Complexity Summary}\label{appendix-a-complexity-summary} - -{\def\LTcaptype{none} -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Operation & Complexity & Notes \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{ingest\_intent} & O(1) & Fixed structural insertions \\ -\texttt{begin} & O(1) & Counter increment + set insert \\ -\texttt{apply} & O(m) & m = footprint size \\ -\texttt{drain\_for\_tx} & O(n) & n = candidates, 20 radix passes \\ -\texttt{reserve} per rewrite & O(m) & m = footprint size, O(1) per check \\ -\texttt{execute\_parallel} & O(n/w) & n = items, w = workers \\ -\texttt{merge\_deltas} & O(k log k) & k = total ops \\ -\texttt{compute\_state\_root} & O(V + E) & V = nodes, E = edges \\ -\end{longtable} -} - -\begin{directors} -Nothing quadratic. Nothing exponential. The system scales linearly with the amount of work. That's by design. - -The one potential bottleneck is \texttt{compute\_state\_root}---it traverses the entire reachable graph. For very large graphs, that's expensive. In practice, graphs are partitioned across warp instances, keeping each traversal manageable. 
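To make the sorted-iteration point from Section 7.1 concrete, here is a toy sketch (not Echo's code; std's \texttt{DefaultHasher} stands in for BLAKE3, and \texttt{toy\_state\_root} is a hypothetical name) showing that a digest computed over a \texttt{BTreeMap} cannot depend on insertion order:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

// Toy stand-in for compute_state_root: fold every (node_id, type) pair
// into one digest, iterating in the BTreeMap's sorted key order so the
// result is the same no matter how the map was built.
fn toy_state_root(nodes: &BTreeMap<u64, &str>) -> u64 {
    let mut h = DefaultHasher::new();
    for (id, ty) in nodes {
        id.hash(&mut h);
        ty.hash(&mut h);
    }
    h.finish()
}
```

Build the same map twice with the insertions reversed and the digests match; the same experiment with a \texttt{HashMap} offers no such guarantee.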
-\end{directors} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix B: Determinism Boundaries}\label{appendix-b-determinism-boundaries} - -\subsection{Guaranteed Deterministic} - -\begin{itemize} -\tightlist -\item Radix sort ordering (20-pass LSD) -\item BTreeMap/BTreeSet iteration -\item BLAKE3 hashing -\item GenSet conflict detection -\item Canonical merge deduplication -\end{itemize} - -\subsection{Intentionally Non-Deterministic (Handled by Merge)} - -\begin{itemize} -\tightlist -\item Worker execution order in BOAW -\item Shard claim order (atomic counter) -\end{itemize} - -\begin{directors} -The non-deterministic parts are carefully contained. Workers race against each other, but the merge absorbs that chaos and produces a deterministic result. - -Think of it as a funnel: chaos at the wide end (parallel execution), order at the narrow end (merged output). The merge is the bottleneck that enforces determinism. -\end{directors} - -\subsection{Protocol Constants (Frozen)} - -\begin{itemize} -\tightlist -\item \texttt{NUM\_SHARDS = 256} -\item \texttt{SHARD\_MASK = 255} -\item Shard routing: \texttt{LE\_u64(node\_id[0..8]) \& 255} -\item Commit hash v2 version tag: \texttt{0x02 0x00} -\end{itemize} - -\begin{watchout} -These constants are \textbf{frozen}. Changing them would break compatibility with all existing commits. - -If you're tempted to ``optimize'' by tweaking \texttt{NUM\_SHARDS}, remember: every historical commit was created with these values. Changing them makes replay impossible. - -Protocol evolution happens through version tags, not constant changes. -\end{watchout} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\emph{Document generated 2026-01-18. 
Director's commentary by your friendly AI pair programmer.} - -\backmatter -\end{document} diff --git a/docs/archive/study/echo-tour-de-code-with-commentary.pdf b/docs/archive/study/echo-tour-de-code-with-commentary.pdf deleted file mode 100644 index ee5622cb..00000000 Binary files a/docs/archive/study/echo-tour-de-code-with-commentary.pdf and /dev/null differ diff --git a/docs/archive/study/echo-tour-de-code-with-commentary.tex b/docs/archive/study/echo-tour-de-code-with-commentary.tex deleted file mode 100644 index 54051a5b..00000000 --- a/docs/archive/study/echo-tour-de-code-with-commentary.tex +++ /dev/null @@ -1,2016 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -% Options for packages loaded elsewhere -\PassOptionsToPackage{unicode}{hyperref} -\PassOptionsToPackage{hyphens}{url} -\documentclass[11pt]{book} -\usepackage[letterpaper, margin=1in]{geometry} -\usepackage{xcolor} -\usepackage{amsmath,amssymb} -\setcounter{secnumdepth}{-\maxdimen} % remove section numbering -\usepackage{iftex} -\ifPDFTeX - \usepackage[T1]{fontenc} - \usepackage[utf8]{inputenc} - \usepackage{textcomp} % provide euro and other symbols -\else % if luatex or xetex - \usepackage{unicode-math} % this also loads fontspec - \defaultfontfeatures{Scale=MatchLowercase} - \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} -\fi -\usepackage{lmodern} -\ifPDFTeX\else - % xetex/luatex font selection -\fi -% Use upquote if available, for straight quotes in verbatim environments -\IfFileExists{upquote.sty}{\usepackage{upquote}}{} -\IfFileExists{microtype.sty}{% use microtype if available - \usepackage[]{microtype} - \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts -}{} -\makeatletter -\@ifundefined{KOMAClassName}{% if non-KOMA class - \IfFileExists{parskip.sty}{% - \usepackage{parskip} - }{% else - \setlength{\parindent}{0pt} - \setlength{\parskip}{6pt plus 2pt minus 1pt}} -}{% if KOMA class - 
\KOMAoptions{parskip=half}} -\makeatother -\usepackage{color} -\usepackage{fancyvrb} -\newcommand{\VerbBar}{|} -\newcommand{\VERB}{\Verb[commandchars=\\\{\}]} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} -% Add ',fontsize=\small' for more characters per line -\newenvironment{Shaded}{}{} -\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}} -\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}} -\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}} -\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}} -\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}} -\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}} -\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\ExtensionTok}[1]{#1} -\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}} -\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}} -\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\NormalTok}[1]{#1} -\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}} -\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}} -\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}} 
-\newcommand{\RegionMarkerTok}[1]{#1} -\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}} -\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}} -\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\usepackage{longtable,booktabs,array} -\newcounter{none} % for unnumbered tables -\usepackage{calc} % for calculating minipage widths -% Correct order of tables after \paragraph or \subparagraph -\usepackage{etoolbox} -\makeatletter -\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} -\makeatother -% Allow footnotes in longtable head/foot -\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} -\makesavenoteenv{longtable} -\setlength{\emergencystretch}{3em} % prevent overfull lines -\providecommand{\tightlist}{% - \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} -\usepackage{bookmark} -\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available -\urlstyle{same} -\hypersetup{ - hidelinks, - pdfcreator={LaTeX via pandoc}} - -% ═══════════════════════════════════════════════════════════════════════════════ -% TOUR GUIDE COMMENTARY STYLING -% ═══════════════════════════════════════════════════════════════════════════════ -\usepackage{pifont} % Required for \ding symbols in tcolorbox titles -\usepackage{tcolorbox} -\tcbuselibrary{skins,breakable} - -% Tour Guide Commentary Box - the main insight boxes -\newtcolorbox{tourguide}[1][]{ - enhanced, - breakable, - colback=blue!5!white, - colframe=blue!60!black, - fonttitle=\bfseries, - title={\raisebox{-0.2em}{\large\ding{46}} Tour Guide Notes}, - left=8pt, - right=8pt, - top=6pt, - bottom=6pt, - #1 -} - -% Clever Pattern Box - for particularly elegant code patterns -\newtcolorbox{cleverpattern}[1][]{ - 
enhanced, - breakable, - colback=green!5!white, - colframe=green!50!black, - fonttitle=\bfseries, - title={\raisebox{-0.1em}{\large$\star$} Clever Pattern}, - left=8pt, - right=8pt, - top=6pt, - bottom=6pt, - #1 -} - -% Warning/Gotcha Box - for subtle traps or important invariants -\newtcolorbox{watchout}[1][]{ - enhanced, - breakable, - colback=orange!8!white, - colframe=orange!70!black, - fonttitle=\bfseries, - title={\raisebox{-0.1em}{\large$\triangle$} Watch Out}, - left=8pt, - right=8pt, - top=6pt, - bottom=6pt, - #1 -} - -% Deep Dive Box - for architectural insights -\newtcolorbox{deepdive}[1][]{ - enhanced, - breakable, - colback=purple!5!white, - colframe=purple!60!black, - fonttitle=\bfseries, - title={\raisebox{-0.1em}{\large$\blacktriangledown$} Deep Dive}, - left=8pt, - right=8pt, - top=6pt, - bottom=6pt, - #1 -} - -% Pro Tip Box - for practical advice -\newtcolorbox{protip}[1][]{ - enhanced, - breakable, - colback=teal!5!white, - colframe=teal!60!black, - fonttitle=\bfseries, - title={\raisebox{-0.1em}{\large$\checkmark$} Pro Tip}, - left=8pt, - right=8pt, - top=6pt, - bottom=6pt, - #1 -} - -\author{} -\date{} - -\begin{document} -\frontmatter - -\mainmatter -\chapter{Echo: Tour de Code}\label{echo-tour-de-code} - -\begin{quote} -\textbf{The complete function-by-function trace of Echo's execution -pipeline.} - -This document traces EVERY function call involved in processing a user -action through the Echo engine. File paths and line numbers are accurate -as of 2026-01-25. - -\emph{Annotated with tour guide commentary --- insights, patterns, and observations from a detailed code review.} -\end{quote} - -\begin{tourguide} -Welcome to the Echo Tour de Code! I'll be your guide through this remarkable piece of systems engineering. - -What strikes me most about Echo's architecture is its \textbf{relentless pursuit of determinism}. 
Every design decision---from content-addressed identities to 20-pass radix sorts---serves the goal of ensuring that the same inputs always produce the same outputs, regardless of execution timing or parallelism. - -As we walk through the pipeline, I'll highlight: -\begin{itemize} -\item \textbf{Clever patterns} that solve subtle problems elegantly -\item \textbf{Invariants} that must hold for correctness -\item \textbf{Performance optimizations} hidden in plain sight -\item \textbf{Architectural decisions} and their trade-offs -\end{itemize} - -Let's begin our journey from intent to commit! -\end{tourguide} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Table of Contents}\label{table-of-contents} - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - \hyperref[intent-ingestion]{Intent Ingestion} -\item - \hyperref[transaction-lifecycle]{Transaction Lifecycle} -\item - \hyperref[rule-matching]{Rule Matching} -\item - \hyperref[scheduler-drain-reserve]{Scheduler: Drain \& Reserve} -\item - \hyperref[boaw-parallel-execution]{BOAW Parallel Execution} -\item - \hyperref[delta-merge-state-finalization]{Delta Merge \& State - Finalization} -\item - \hyperref[hash-computation]{Hash Computation} -\item - \hyperref[commit-orchestration]{Commit Orchestration} -\item - \hyperref[complete-call-graph]{Complete Call Graph} -\end{enumerate} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{1. Intent Ingestion}\label{intent-ingestion} - -\textbf{Entry Point:} \texttt{Engine::ingest\_intent()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs} - -\begin{tourguide} -This is where user actions enter the system. Notice how Echo treats intents as \emph{immutable, content-addressed} data from the very first moment. The intent bytes are hashed to create a unique identifier, ensuring that duplicate intents are detected automatically---no coordination required. 
-\end{tourguide} - -\subsection{1.1 Function Signature}\label{function-signature} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ ingest\_intent(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ intent\_bytes}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\DataTypeTok{u8}\NormalTok{]) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{IngestDisposition}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}} -\end{Highlighting} -\end{Shaded} - -\textbf{Returns:} - -\texttt{IngestDisposition::Accepted\ \{\ intent\_id:\ Hash\ \}} --- New -intent accepted - -\texttt{IngestDisposition::Duplicate\ \{\ intent\_id:\ Hash\ \}} --- -Already ingested - -\subsection{1.2 Complete Call Trace}\label{complete-call-trace} - -\begin{verbatim} -Engine::ingest_intent(intent_bytes: &[u8]) -│ -├─[1] compute_intent_id(intent_bytes) → Hash -│ FILE: crates/warp-core/src/inbox.rs -│ CODE: -│ let mut hasher = blake3::Hasher::new(); -│ hasher.update(b"intent:"); // Domain separation -│ hasher.update(intent_bytes); -│ hasher.finalize().into() // → [u8; 32] -│ -├─[2] NodeId(intent_id) -│ Creates strongly-typed NodeId from Hash -│ -├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore> -│ FILE: crates/warp-core/src/engine_impl.rs -│ ERROR: EngineError::UnknownWarp if None -│ -├─[4] Extract root_node_id from self.current_root.local_id -│ -├─[5] STRUCTURAL NODE CREATION (Idempotent) -│ ├─ make_node_id("sim") → NodeId -│ │ FILE: crates/warp-core/src/ident.rs -│ │ CODE: blake3("node:" || "sim") -│ │ -│ ├─ make_node_id("sim/inbox") → NodeId -│ │ CODE: blake3("node:" || "sim/inbox") -│ │ -│ ├─ make_type_id("sim") → TypeId -│ │ FILE: crates/warp-core/src/ident.rs -│ │ CODE: blake3("type:" || "sim") -│ │ -│ ├─ make_type_id("sim/inbox") → TypeId -│ ├─ make_type_id("sim/inbox/event") → TypeId -│ │ -│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty }) -│ │ FILE: 
crates/warp-core/src/graph.rs -│ │ CODE: self.nodes.insert(id, record) -│ │ -│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty }) -│ -├─[6] STRUCTURAL EDGE CREATION -│ ├─ make_edge_id("edge:root/sim") → EdgeId -│ │ FILE: crates/warp-core/src/ident.rs -│ │ CODE: blake3("edge:" || "edge:root/sim") -│ │ -│ ├─ store.insert_edge(root_id, EdgeRecord { ... }) -│ │ FILE: crates/warp-core/src/graph.rs -│ │ └─ GraphStore::upsert_edge_record(from, edge) -│ │ FILE: crates/warp-core/src/graph.rs -│ │ UPDATES: -│ │ self.edge_index.insert(edge_id, from) -│ │ self.edge_to_index.insert(edge_id, to) -│ │ self.edges_from.entry(from).or_default().push(edge) -│ │ self.edges_to.entry(to).or_default().push(edge_id) -│ │ -│ └─ store.insert_edge(sim_id, EdgeRecord { ... }) [sim → inbox] -│ -├─[7] DUPLICATE DETECTION -│ store.node(&event_id) → Option<&NodeRecord> -│ FILE: crates/warp-core/src/graph.rs -│ CODE: self.nodes.get(id) -│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id }) -│ -├─[8] EVENT NODE CREATION -│ store.insert_node(event_id, NodeRecord { ty: event_ty }) -│ NOTE: event_id = intent_id (content-addressed) -│ -├─[9] INTENT ATTACHMENT -│ ├─ AtomPayload::new(type_id, bytes) -│ │ FILE: crates/warp-core/src/attachment.rs -│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) } -│ │ -│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload))) -│ FILE: crates/warp-core/src/graph.rs -│ CODE: self.node_attachments.insert(id, v) -│ -├─[10] PENDING EDGE CREATION (Queue Membership) -│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId -│ │ FILE: crates/warp-core/src/inbox.rs -│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id) -│ │ -│ └─ store.insert_edge(inbox_id, EdgeRecord { -│ id: pending_edge_id, -│ from: inbox_id, -│ to: event_id, -│ ty: make_type_id("edge:pending") -│ }) -│ -└─[11] return Ok(IngestDisposition::Accepted { intent_id }) -\end{verbatim} - -\begin{cleverpattern} -\textbf{Domain Separation 
in Hashing} - -Notice step [1]: the hasher prefixes with \texttt{b"intent:"} before the actual data. This is a cryptographic best practice called \emph{domain separation}---it prevents a hash collision between an intent and, say, a node ID that happens to have the same bytes. - -Echo uses this pattern consistently: -\begin{itemize} -\item \texttt{"intent:"} for intent IDs -\item \texttt{"node:"} for node IDs -\item \texttt{"type:"} for type IDs -\item \texttt{"edge:"} for edge IDs -\end{itemize} - -This ensures that even if two different domain values have the same raw bytes, they'll produce different hashes. -\end{cleverpattern} - -\begin{deepdive} -\textbf{Why Content-Addressed Event IDs?} - -In step [8], note that \texttt{event\_id = intent\_id}. This is a profound design choice: - -\begin{enumerate} -\item \textbf{Automatic deduplication}: If the same intent arrives twice, it hashes to the same ID, and step [7] catches it. -\item \textbf{Reproducibility}: Given the same intent bytes, any node in a distributed system will compute the same event ID. -\item \textbf{Auditability}: You can verify an event's integrity by re-hashing its content. -\end{enumerate} - -This is the foundation of Echo's deterministic execution model---events are identified by \emph{what they are}, not \emph{when they arrived}. 
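As a toy illustration of points 1 and 2 (not Echo's actual code: std's \texttt{DefaultHasher} stands in for BLAKE3, and both function names are hypothetical), domain-separated content hashing makes duplicate detection a plain set lookup:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Toy stand-in for compute_intent_id: a domain-separated content hash.
// The "intent:" prefix keeps intent IDs from colliding with node/type/
// edge IDs derived from the same raw bytes.
fn toy_intent_id(intent_bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    b"intent:".hash(&mut h);
    intent_bytes.hash(&mut h);
    h.finish()
}

// Because identical bytes always hash to the identical ID, deduplication
// is just a set insert: returns false when the intent was seen before.
fn toy_ingest(seen: &mut HashSet<u64>, intent_bytes: &[u8]) -> bool {
    seen.insert(toy_intent_id(intent_bytes))
}
```

Ingest the same bytes twice and the second call reports a duplicate, with no timestamps or coordination involved.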
-\end{deepdive} - -\subsection{1.3 Data Structures -Modified}\label{data-structures-modified} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.4231}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3077}}@{}} -\toprule\noalign{} -\begin{minipage}[b]{\linewidth}\raggedright -Structure -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Field -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Change -\end{minipage} \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{GraphStore} & \texttt{nodes} & +3 entries (sim, inbox, event) \\ -\texttt{GraphStore} & \texttt{edges\_from} & +3 edges (root→sim, -sim→inbox, inbox→event) \\ -\texttt{GraphStore} & \texttt{edges\_to} & +3 reverse entries \\ -\texttt{GraphStore} & \texttt{edge\_index} & +3 edge→from mappings \\ -\texttt{GraphStore} & \texttt{edge\_to\_index} & +3 edge→to mappings \\ -\texttt{GraphStore} & \texttt{node\_attachments} & +1 (event → intent -payload) \\ -\end{longtable} -} - -\begin{tourguide} -Notice the \textbf{four separate edge indices}: \texttt{edges\_from}, \texttt{edges\_to}, \texttt{edge\_index}, and \texttt{edge\_to\_index}. This redundancy enables O(1) lookups in any direction---find edges from a node, to a node, or look up either endpoint given an edge ID. The space cost is modest (pointers/IDs are small), but the query flexibility is enormous. -\end{tourguide} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{2. 
Transaction Lifecycle}\label{transaction-lifecycle} - -\subsection{2.1 Begin Transaction}\label{begin-transaction} - -\textbf{Entry Point:} \texttt{Engine::begin()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs-719} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ begin(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ TxId }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter}\OperatorTok{.}\NormalTok{wrapping\_add(}\DecValTok{1}\NormalTok{)}\OperatorTok{;} \CommentTok{// Line 713} - \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{==} \DecValTok{0} \OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \DecValTok{1}\OperatorTok{;} \CommentTok{// Line 715: Zero is reserved} - \OperatorTok{\}} - \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{insert(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter)}\OperatorTok{;} \CommentTok{// Line 717} - \PreprocessorTok{TxId::}\NormalTok{from\_raw(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter) }\CommentTok{// Line 718} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{watchout} -\textbf{The Zero Invariant} - -Line 715 is subtle but critical: \texttt{TxId(0)} is reserved as an invalid/sentinel value. Without this check, after $2^{64}$ transactions (admittedly unlikely!), the counter would wrap to zero and potentially confuse code that uses zero to mean ``no transaction.'' - -This is defensive programming at its finest---the cost is one branch that's almost never taken, but it eliminates an entire class of potential bugs. 
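A minimal sketch of just this counter logic (a hypothetical free function, not Echo's actual API) makes the wrap-then-skip behaviour easy to test in isolation:

```rust
// Mirror of the begin() counter logic shown above: increment with
// wraparound at u64::MAX, then skip the reserved zero value so a raw
// TxId of 0 can never be handed out.
fn next_tx(counter: &mut u64) -> u64 {
    *counter = counter.wrapping_add(1);
    if *counter == 0 {
        *counter = 1; // TxId(0) is reserved as invalid
    }
    *counter
}
```

Starting the counter at \texttt{u64::MAX}, the next transaction ID is 1, not 0: the wrap happens, but the sentinel is never observed by callers.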
-\end{watchout} - -\textbf{Call Trace:} - -\begin{verbatim} -Engine::begin() -│ -├─ self.tx_counter.wrapping_add(1) -│ Rust std: u64::wrapping_add -│ Handles u64::MAX → 0 overflow -│ -├─ if self.tx_counter == 0: self.tx_counter = 1 -│ INVARIANT: TxId(0) is reserved as invalid -│ -├─ self.live_txs.insert(self.tx_counter) -│ TYPE: HashSet -│ Registers transaction as active -│ -└─ TxId::from_raw(self.tx_counter) - FILE: crates/warp-core/src/tx.rs - CODE: pub const fn from_raw(value: u64) -> Self { Self(value) } - TYPE: #[repr(transparent)] struct TxId(u64) -\end{verbatim} - -\begin{tourguide} -The \texttt{\#[repr(transparent)]} on \texttt{TxId} is worth noting---it guarantees that \texttt{TxId} has exactly the same memory layout as \texttt{u64}. This means zero-cost abstraction: you get type safety (can't accidentally pass a \texttt{NodeId} where a \texttt{TxId} is expected) with no runtime overhead. -\end{tourguide} - -\textbf{State Changes:} - \texttt{tx\_counter}: N → N+1 (or 1 if -wrapped) - \texttt{live\_txs}: Insert new counter value - -\subsection{2.2 Abort Transaction}\label{abort-transaction} - -\textbf{Entry Point:} \texttt{Engine::abort()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs-968} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ abort(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx}\OperatorTok{.}\NormalTok{value())}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{scheduler}\OperatorTok{.}\NormalTok{finalize\_tx(tx)}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{bus}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} - 
\KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization\_errors}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{tourguide} -Abort is refreshingly simple---just remove the transaction from tracking and clear transient state. No rollback needed because Echo hasn't mutated the graph yet! All graph mutations happen atomically during commit. This is a key architectural decision: the graph is effectively immutable until commit time. -\end{tourguide} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{3. Rule Matching}\label{rule-matching} - -\textbf{Entry Point:} \texttt{Engine::apply()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs-737} - -\begin{tourguide} -Now we enter the heart of Echo's reactive model. Rules are matched against graph patterns, and when they match, they're enqueued for execution. The beauty is that matching is \emph{pure}---it reads the graph but doesn't modify it. -\end{tourguide} - -\subsection{3.1 Function Signature}\label{function-signature-1} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ apply(} - \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,} -\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,} -\NormalTok{ rule\_name}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\OperatorTok{,} -\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{ApplyResult}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}} -\end{Highlighting} -\end{Shaded} - -\subsection{3.2 Complete Call Trace}\label{complete-call-trace-1} - -\begin{verbatim} -Engine::apply(tx, rule_name, scope) -│ -└─ Engine::apply_in_warp(tx, self.current_root.warp_id, rule_name, scope, &[]) - FILE: crates/warp-core/src/engine_impl.rs-806 - │ - ├─[1] TRANSACTION VALIDATION - │ 
CODE: if tx.value() == 0 || !self.live_txs.contains(&tx.value()) - │ ERROR: EngineError::UnknownTx - │ - ├─[2] RULE LOOKUP - │ self.rules.get(rule_name) → Option<&RewriteRule> - │ TYPE: HashMap<&'static str, RewriteRule> - │ ERROR: EngineError::UnknownRule(rule_name.to_owned()) - │ - ├─[3] STORE LOOKUP - │ self.state.store(&warp_id) → Option<&GraphStore> - │ ERROR: EngineError::UnknownWarp(warp_id) - │ - ├─[4] CREATE GRAPHVIEW - │ GraphView::new(store) → GraphView<'_> - │ FILE: crates/warp-core/src/graph_view.rs - │ TYPE: Read-only wrapper (Copy, lightweight) - │ - ├─[5] CALL MATCHER - │ (rule.matcher)(view, scope) → bool - │ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool - │ FILE: crates/warp-core/src/rule.rs-24 - │ IF false: return Ok(ApplyResult::NoMatch) - │ - ├─[6] CREATE SCOPE KEY - │ let scope_key = NodeKey { warp_id, local_id: *scope } - │ - ├─[7] COMPUTE SCOPE HASH - │ scope_hash(&rule.id, &scope_key) → Hash - │ FILE: crates/warp-core/src/engine_impl.rs-1718 - │ CODE: - │ let mut hasher = Hasher::new(); - │ hasher.update(rule_id); // 32 bytes - │ hasher.update(scope.warp_id.as_bytes()); // 32 bytes - │ hasher.update(scope.local_id.as_bytes()); // 32 bytes - │ hasher.finalize().into() - │ - ├─[8] COMPUTE FOOTPRINT - │ (rule.compute_footprint)(view, scope) → Footprint - │ TYPE: FootprintFn = for<'a> fn(GraphView<'a>, &NodeId) -> Footprint - │ FILE: crates/warp-core/src/rule.rs-46 - │ RETURNS: - │ Footprint { - │ n_read: IdSet, // Nodes read - │ n_write: IdSet, // Nodes written - │ e_read: IdSet, // Edges read - │ e_write: IdSet, // Edges written - │ a_read: AttachmentSet, // Attachments read - │ a_write: AttachmentSet, // Attachments written - │ b_in: PortSet, // Input ports - │ b_out: PortSet, // Output ports - │ factor_mask: u64, // O(1) prefilter - │ } - │ - ├─[9] AUGMENT FOOTPRINT WITH DESCENT STACK - │ for key in descent_stack: - │ footprint.a_read.insert(*key) - │ FILE: crates/warp-core/src/footprint.rs-107 - │ PURPOSE: Stage B1 law - READs 
of all descent chain slots - │ - ├─[10] COMPACT RULE ID LOOKUP - │ self.compact_rule_ids.get(&rule.id) → Option<&CompactRuleId> - │ TYPE: HashMap - │ ERROR: EngineError::InternalCorruption - │ - └─[11] ENQUEUE TO SCHEDULER - self.scheduler.enqueue(tx, PendingRewrite { ... }) - │ - └─ DeterministicScheduler::enqueue(tx, rewrite) - FILE: crates/warp-core/src/scheduler.rs-659 - │ - └─ RadixScheduler::enqueue(tx, rewrite) - FILE: crates/warp-core/src/scheduler.rs-105 - CODE: - let txq = self.pending.entry(tx).or_default(); - txq.enqueue(rewrite.scope_hash, rewrite.compact_rule.0, rewrite); - │ - └─ PendingTx::enqueue(scope_be32, rule_id, payload) - FILE: crates/warp-core/src/scheduler.rs-355 - - CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS - index.get(&key) → Some(&i) - fat[thin[i].handle] = Some(payload) // Overwrite - thin[i].nonce = next_nonce++ // Refresh nonce - - CASE 2: New entry - fat.push(Some(payload)) - thin.push(RewriteThin { scope_be32, rule_id, nonce, handle }) - index.insert(key, thin.len() - 1) -\end{verbatim} - -\begin{cleverpattern} -\textbf{GraphView: The Read-Only Wrapper} - -Step [4] creates a \texttt{GraphView}---a lightweight, copyable handle to the underlying \texttt{GraphStore}. In enforcement builds, it optionally holds a \texttt{FootprintGuard} reference that ties the view's lifetime to runtime protection---a borrow token that prevents the underlying \texttt{GraphStore} from being mutably borrowed while the \texttt{GraphView} exists. This guard validates reads against declared footprints at runtime, augmenting the compile-time read-only guarantee with runtime protection against unauthorized access. This is Rust's type system doing the heavy lifting: you literally \emph{cannot} mutate the graph through a \texttt{GraphView}. The compiler enforces read-only access, and the guard (when present in enforcement builds) enforces read permissions at runtime. 
-\end{cleverpattern} - -\begin{deepdive} -\textbf{The Footprint: Declaring Your Intentions} - -Step [8] is architecturally critical. Before a rule can execute, it must declare its \emph{footprint}---exactly which nodes, edges, and attachments it will read and write. - -This enables: -\begin{itemize} -\item \textbf{Parallel execution}: Rules with non-overlapping footprints can run concurrently -\item \textbf{Conflict detection}: Rules with conflicting footprints are serialized -\item \textbf{Determinism}: The scheduler can order rules without knowing their implementation details -\end{itemize} - -The footprint is computed \emph{before} execution, not discovered during execution. This is a constraint on rule authors, but it's what makes the whole system tractable. -\end{deepdive} - -\begin{cleverpattern} -\textbf{Last-Wins Deduplication} - -In step [11], notice the ``LAST WINS'' semantics. If the same (scope\_hash, rule\_id) pair is enqueued twice, the second one \emph{replaces} the first. - -Why? Because enqueuing a rule is idempotent: if you match the same rule at the same scope twice in one transaction, you only want to execute it once. The ``last wins'' ensures the most recent footprint is used (which matters if the graph changed between matches). 
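The last-wins mechanics above can be sketched with simplified stand-in types. Everything here is illustrative, not the warp-core API: `u64`/`u32` keys replace the 32-byte hashes, and `PendingQueue`, `Payload`, and the field layout are invented for the example.

```rust
use std::collections::HashMap;

// Stand-in for the real payload type (illustrative only).
#[derive(Clone, Debug, PartialEq)]
struct Payload(&'static str);

#[derive(Default)]
struct PendingQueue {
    // (scope_hash, rule_id) -> index into `entries`
    index: HashMap<(u64, u32), usize>,
    // (scope_hash, rule_id, nonce, payload)
    entries: Vec<(u64, u32, u64, Payload)>,
    next_nonce: u64,
}

impl PendingQueue {
    fn enqueue(&mut self, scope_hash: u64, rule_id: u32, payload: Payload) {
        let nonce = self.next_nonce;
        self.next_nonce += 1;
        match self.index.get(&(scope_hash, rule_id)) {
            // Duplicate key: LAST WINS — overwrite payload, refresh nonce.
            Some(&i) => {
                self.entries[i].2 = nonce;
                self.entries[i].3 = payload;
            }
            // New key: append and remember its slot.
            None => {
                self.entries.push((scope_hash, rule_id, nonce, payload));
                self.index.insert((scope_hash, rule_id), self.entries.len() - 1);
            }
        }
    }
}

fn main() {
    let mut q = PendingQueue::default();
    q.enqueue(7, 1, Payload("first footprint"));
    q.enqueue(8, 1, Payload("other scope"));
    q.enqueue(7, 1, Payload("recomputed footprint")); // replaces the first entry
    assert_eq!(q.entries.len(), 2);
    assert_eq!(q.entries[0].3, Payload("recomputed footprint"));
    assert!(q.entries[0].2 > q.entries[1].2); // refreshed nonce is newest
}
```

The refreshed nonce matters later: it is part of the radix sort key, so a re-enqueued rewrite sorts as if it had arrived at its most recent enqueue time.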
-\end{cleverpattern} - -\subsection{3.3 PendingRewrite -Structure}\label{pendingrewrite-structure} - -\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs-82} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ PendingRewrite }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ rule\_id}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte rule identifier} - \KeywordTok{pub}\NormalTok{ compact\_rule}\OperatorTok{:}\NormalTok{ CompactRuleId}\OperatorTok{,} \CommentTok{// u32 hot{-}path handle} - \KeywordTok{pub}\NormalTok{ scope\_hash}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte ordering key} - \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeKey}\OperatorTok{,} \CommentTok{// \{ warp\_id, local\_id \}} - \KeywordTok{pub}\NormalTok{ footprint}\OperatorTok{:}\NormalTok{ Footprint}\OperatorTok{,} \CommentTok{// Read/write declaration} - \KeywordTok{pub}\NormalTok{ phase}\OperatorTok{:}\NormalTok{ RewritePhase}\OperatorTok{,} \CommentTok{// State machine: Matched → Reserved → ...} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{tourguide} -Notice the dual identity: \texttt{rule\_id} (32-byte hash) for correctness, and \texttt{compact\_rule} (u32) for performance. The hash ensures cryptographic uniqueness; the u32 enables O(1) array indexing. This ``have your cake and eat it too'' pattern appears throughout Echo. -\end{tourguide} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{4. Scheduler: Drain \& Reserve}\label{scheduler-drain-reserve} - -\begin{tourguide} -The scheduler is where Echo's determinism guarantees are forged. No matter what order rules are enqueued, the scheduler produces a \emph{canonical} execution order. This is perhaps the most technically impressive part of the system. 
-\end{tourguide}
-
-\subsection{4.1 Drain Phase (Radix Sort)}\label{drain-phase-radix-sort}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::drain\_for\_tx()}
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs-113}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ drain\_for\_tx(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{PendingRewrite}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{pending}
- \OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx)}
- \OperatorTok{.}\NormalTok{map\_or\_else(}\DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new}\OperatorTok{,} \OperatorTok{|}\KeywordTok{mut}\NormalTok{ txq}\OperatorTok{|}\NormalTok{ txq}\OperatorTok{.}\NormalTok{drain\_in\_order())}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Complete Call Trace:}
-
-\begin{verbatim}
-RadixScheduler::drain_for_tx(tx)
-│
-├─ self.pending.remove(&tx) → Option<PendingTx>
-│
-└─ PendingTx::drain_in_order()
- FILE: crates/warp-core/src/scheduler.rs-446
- │
- ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)?
- │ ├─ YES: sort_unstable_by(cmp_thin) - │ │ Rust std comparison sort - │ │ - │ └─ NO: radix_sort() - │ FILE: crates/warp-core/src/scheduler.rs-413 - │ - └─ radix_sort() - │ - ├─ Initialize scratch buffer: self.scratch.resize(n, default) - │ - ├─ Lazy allocate histogram: self.counts16 = vec![0u32; 65536] - │ - └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══ - │ - ├─ SELECT src/dst buffers (ping-pong) - │ flip = false: src=thin, dst=scratch - │ flip = true: src=scratch, dst=thin - │ - ├─ PHASE 1: COUNT BUCKETS - │ FOR r IN src: - │ b = bucket16(r, pass) - │ counts[b] += 1 - │ - ├─ PHASE 2: PREFIX SUMS - │ sum = 0 - │ FOR c IN counts: - │ t = *c - │ *c = sum - │ sum += t - │ - ├─ PHASE 3: STABLE SCATTER - │ FOR r IN src: - │ b = bucket16(r, pass) - │ dst[counts[b]] = r - │ counts[b] += 1 - │ - └─ flip = !flip - -BUCKET EXTRACTION (bucket16): -FILE: crates/warp-core/src/scheduler.rs-498 - -Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2] -Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4] -Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2] -Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4] -Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32] -Pass 5: u16_be_from_pair32(scope, 14) // Scope bytes [28:30] -... -Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD) - -SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic -\end{verbatim} - -\begin{cleverpattern} -\textbf{LSD Radix Sort: O(n) Guaranteed} - -This is a \textbf{Least Significant Digit} radix sort---it processes from the least significant bits to the most significant. After 20 passes (320 bits total), the array is sorted by: -\begin{enumerate} -\item \texttt{scope\_hash} (256 bits = 16 passes) -\item then \texttt{rule\_id} (32 bits = 2 passes) -\item then \texttt{nonce} (32 bits = 2 passes) -\end{enumerate} - -Why radix sort instead of comparison sort? 
-\begin{itemize} -\item \textbf{Determinism}: Radix sort is inherently stable and makes no comparisons that could be affected by memory layout -\item \textbf{O(n) complexity}: With fixed key size, radix sort is linear -\item \textbf{Cache-friendly}: Sequential memory access in each pass -\end{itemize} - -The 1024-element threshold is a practical optimization: for small arrays, the overhead of radix sort exceeds its benefits, so a comparison sort is used instead. -\end{cleverpattern} - -\begin{deepdive} -\textbf{Why 20 Passes?} - -Each pass extracts 16 bits (bucket size 65536). To sort by: -\begin{itemize} -\item 256 bits of scope\_hash = 16 passes (passes 4--19) -\item 32 bits of rule\_id = 2 passes (passes 2--3) -\item 32 bits of nonce = 2 passes (passes 0--1) -\end{itemize} - -That's exactly 20 passes processing 320 bits total. Since LSD radix sort processes from least significant to most significant, passes 4--19 progressively refine the scope ordering from least significant bytes to most significant. - -The nonce is processed first (passes 0--1) because it's the tiebreaker---when scope\_hash and rule\_id are equal, the nonce determines order, and we want that to be the finest-grained distinction. 
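The count / prefix-sum / stable-scatter phases can be seen in a toy two-pass LSD sort over `u32` keys. This is a self-contained illustration, not the warp-core implementation, which applies the same three phases per pass to 320-bit composite keys over 20 passes.

```rust
// Extract the 16-bit digit for the given pass (pass 0 = least significant).
fn bucket16(key: u32, pass: usize) -> usize {
    ((key >> (16 * pass)) & 0xFFFF) as usize
}

// LSD radix sort: two 16-bit passes, ping-ponging between data and scratch.
fn radix_sort_u32(data: &mut Vec<u32>) {
    let n = data.len();
    let mut scratch = vec![0u32; n];
    let mut counts = vec![0u32; 1 << 16];
    for pass in 0..2 {
        counts.iter_mut().for_each(|c| *c = 0);
        // Phase 1: count bucket occupancy.
        for &k in data.iter() {
            counts[bucket16(k, pass)] += 1;
        }
        // Phase 2: exclusive prefix sums give each bucket's start offset.
        let mut sum = 0u32;
        for c in counts.iter_mut() {
            let t = *c;
            *c = sum;
            sum += t;
        }
        // Phase 3: stable scatter into the scratch buffer.
        for &k in data.iter() {
            let b = bucket16(k, pass);
            scratch[counts[b] as usize] = k;
            counts[b] += 1;
        }
        std::mem::swap(data, &mut scratch);
    }
}

fn main() {
    let mut v = vec![0x0003_0001, 0x0001_0002, 0x0003_0000, 0x0001_0001];
    radix_sort_u32(&mut v);
    assert_eq!(v, vec![0x0001_0001, 0x0001_0002, 0x0003_0000, 0x0003_0001]);
}
```

The stability of phase 3 is what makes LSD work: the later (more significant) pass never reorders keys that agree on its digit, so the earlier pass's ordering survives as the tiebreaker.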
-\end{deepdive} - -\subsection{4.2 Reserve Phase (Independence -Check)}\label{reserve-phase-independence-check} - -\textbf{Entry Point:} \texttt{RadixScheduler::reserve()} \textbf{File:} -\texttt{crates/warp-core/src/scheduler.rs-143} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ reserve(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}\NormalTok{ pr}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ PendingRewrite) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{} - \KeywordTok{let}\NormalTok{ active }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{active}\OperatorTok{.}\NormalTok{entry(tx)}\OperatorTok{.}\NormalTok{or\_insert\_with(}\PreprocessorTok{ActiveFootprints::}\NormalTok{new)}\OperatorTok{;} - \ControlFlowTok{if} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{has\_conflict(active}\OperatorTok{,}\NormalTok{ pr) }\OperatorTok{\{} - \ControlFlowTok{return} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_conflict(pr)}\OperatorTok{;} - \OperatorTok{\}} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{mark\_all(active}\OperatorTok{,}\NormalTok{ pr)}\OperatorTok{;} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_reserved(pr)} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Complete Call Trace:} - -\begin{verbatim} -RadixScheduler::reserve(tx, pr) -│ -├─ self.active.entry(tx).or_insert_with(ActiveFootprints::new) -│ TYPE: HashMap -│ ActiveFootprints contains 7 GenSets: -│ - nodes_written: GenSet -│ - nodes_read: GenSet -│ - edges_written: GenSet -│ - edges_read: GenSet -│ - attachments_written: GenSet -│ - attachments_read: GenSet -│ - ports: GenSet -│ -├─ has_conflict(active, pr) → bool -│ FILE: crates/warp-core/src/scheduler.rs-236 -│ │ -│ ├─ FOR node IN pr.footprint.n_write: -│ │ IF active.nodes_written.contains(node): return true 
// W-W conflict -│ │ IF active.nodes_read.contains(node): return true // W-R conflict -│ │ -│ ├─ FOR node IN pr.footprint.n_read: -│ │ IF active.nodes_written.contains(node): return true // R-W conflict -│ │ (R-R is allowed) -│ │ -│ ├─ FOR edge IN pr.footprint.e_write: -│ │ IF active.edges_written.contains(edge): return true -│ │ IF active.edges_read.contains(edge): return true -│ │ -│ ├─ FOR edge IN pr.footprint.e_read: -│ │ IF active.edges_written.contains(edge): return true -│ │ -│ ├─ FOR key IN pr.footprint.a_write: -│ │ IF active.attachments_written.contains(key): return true -│ │ IF active.attachments_read.contains(key): return true -│ │ -│ ├─ FOR key IN pr.footprint.a_read: -│ │ IF active.attachments_written.contains(key): return true -│ │ -│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out: -│ IF active.ports.contains(port): return true -│ -├─ IF conflict: -│ └─ on_conflict(pr) -│ FILE: crates/warp-core/src/scheduler.rs-149 -│ pr.phase = RewritePhase::Aborted -│ return false -│ -├─ mark_all(active, pr) -│ FILE: crates/warp-core/src/scheduler.rs-278 -│ │ -│ ├─ FOR node IN pr.footprint.n_write: -│ │ active.nodes_written.mark(NodeKey { warp_id, local_id: node }) -│ │ -│ ├─ FOR node IN pr.footprint.n_read: -│ │ active.nodes_read.mark(NodeKey { ... }) -│ │ -│ ... (similar for edges, attachments, ports) -│ -└─ on_reserved(pr) - FILE: crates/warp-core/src/scheduler.rs-155 - pr.phase = RewritePhase::Reserved - return true -\end{verbatim} - -\begin{tourguide} -This is classic \textbf{two-phase locking} without the locks! The \texttt{has\_conflict} function implements the conflict matrix: - -\begin{center} -\begin{tabular}{|c|c|c|} -\hline -& Read & Write \\ -\hline -Read & OK & CONFLICT \\ -\hline -Write & CONFLICT & CONFLICT \\ -\hline -\end{tabular} -\end{center} - -Multiple readers are allowed (R-R is OK), but any write conflicts with both reads and writes of the same resource. 
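The matrix can be modeled in a few lines over plain integer node IDs. This is a reduced sketch: the real check also covers edges, attachments, and ports, and uses `GenSet` rather than `HashSet`.

```rust
use std::collections::HashSet;

// Reduced model of the reserve-phase bookkeeping (illustrative only).
#[derive(Default)]
struct Active {
    nodes_read: HashSet<u64>,
    nodes_written: HashSet<u64>,
}

fn has_conflict(active: &Active, reads: &[u64], writes: &[u64]) -> bool {
    // W-W and W-R: a pending write conflicts with any prior access.
    for n in writes {
        if active.nodes_written.contains(n) || active.nodes_read.contains(n) {
            return true;
        }
    }
    // R-W: a pending read conflicts with a prior write. R-R is allowed.
    for n in reads {
        if active.nodes_written.contains(n) {
            return true;
        }
    }
    false
}

fn main() {
    let mut active = Active::default();
    active.nodes_read.extend([1, 2]);
    active.nodes_written.insert(3);
    assert!(!has_conflict(&active, &[1, 2], &[])); // R-R: fine
    assert!(has_conflict(&active, &[3], &[]));     // R-W: conflict
    assert!(has_conflict(&active, &[], &[1]));     // W-R: conflict
    assert!(has_conflict(&active, &[], &[3]));     // W-W: conflict
}
```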
-\end{tourguide} - -\subsection{4.3 GenSet: O(1) Conflict -Detection}\label{genset-o1-conflict-detection} - -\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs-535} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{} -\NormalTok{ gen}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} \CommentTok{// Current generation} -\NormalTok{ seen}\OperatorTok{:}\NormalTok{ FxHashMap}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{,} \DataTypeTok{u32}\OperatorTok{\textgreater{},} \CommentTok{// Key → generation when marked} -\OperatorTok{\}} - -\KeywordTok{impl}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{:} \BuiltInTok{Hash} \OperatorTok{+} \BuiltInTok{Eq} \OperatorTok{+} \BuiltInTok{Copy}\OperatorTok{\textgreater{}}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ contains(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{} - \PreprocessorTok{matches!}\NormalTok{(}\KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{get(}\OperatorTok{\&}\NormalTok{key)}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(}\OperatorTok{\&}\NormalTok{g) }\ControlFlowTok{if}\NormalTok{ g }\OperatorTok{==} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)} - \OperatorTok{\}} - - \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ mark(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{insert(key}\OperatorTok{,} 
\KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}\OperatorTok{;} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Key Insight:} No clearing needed between transactions. Increment -\texttt{gen} → all old entries become stale. - -\begin{cleverpattern} -\textbf{Generation-Based Set: Amortized O(1) Clear} - -This is one of the most elegant patterns in Echo. Instead of clearing the hash map between transactions (O(n) operation), just increment a generation counter! - -An entry is ``in the set'' only if its stored generation matches the current generation. Old entries with stale generations are effectively invisible. - -The hash map only grows---it's never shrunk. But since the same keys tend to be accessed repeatedly (temporal locality), the map stabilizes quickly. The payoff is enormous: clearing the ``set'' is O(1) instead of O(n). -\end{cleverpattern} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{5. BOAW Parallel Execution}\label{boaw-parallel-execution} - -\textbf{Entry Point:} \texttt{execute\_parallel()} \textbf{File:} -\texttt{crates/warp-core/src/boaw/exec.rs-83} - -\begin{tourguide} -BOAW---``Best Of All Worlds''---is where Echo's determinism meets parallelism. The key insight: \emph{order of execution doesn't matter if we sort the outputs}. Rules execute in arbitrary order on worker threads, but their outputs are merged canonically. 
-\end{tourguide}
-
-\subsection{5.1 Entry Point}\label{entry-point}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \PreprocessorTok{assert!}\NormalTok{(workers }\OperatorTok{\textgreater{}=} \DecValTok{1}\NormalTok{)}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ capped\_workers }\OperatorTok{=}\NormalTok{ workers}\OperatorTok{.}\NormalTok{min(NUM\_SHARDS)}\OperatorTok{;} \CommentTok{// Cap at 256}
-
- \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"parallel{-}stride{-}fallback"}\AttributeTok{)]}
- \ControlFlowTok{if} \PreprocessorTok{std::env::}\NormalTok{var(}\StringTok{"ECHO\_PARALLEL\_STRIDE"}\NormalTok{)}\OperatorTok{.}\NormalTok{is\_ok() }\OperatorTok{\{}
- \ControlFlowTok{return}\NormalTok{ execute\_parallel\_stride(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers)}\OperatorTok{;}
- \OperatorTok{\}}
-
-\NormalTok{ execute\_parallel\_sharded(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers) }\CommentTok{// DEFAULT}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{5.2 Complete Call Trace}\label{complete-call-trace-2}
-
-\begin{verbatim}
-execute_parallel(view, items, workers)
-│
-└─ execute_parallel_sharded(view, items, capped_workers)
- FILE: crates/warp-core/src/boaw/exec.rs-152
- │
- ├─ IF items.is_empty():
- │ return (0..workers).map(|_| TickDelta::new()).collect()
- │
- ├─ partition_into_shards(items.to_vec()) → Vec<VirtualShard>
- │ FILE: crates/warp-core/src/boaw/shard.rs-120
- │ │
- │ ├─ Create 256 empty VirtualShard structures
- │ │
- │ └─ FOR item IN items:
- │ │
- │ ├─ shard_of(&item.scope) → usize
- │ │ FILE: crates/warp-core/src/boaw/shard.rs-92
- │ │ CODE:
- │ │ let bytes = scope.as_bytes();
- │ │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap();
- │ │ let val = u64::from_le_bytes(first_8);
- │ │ (val & 255) as usize // SHARD_MASK = 255
- │ │
- │ └─ shards[shard_id].items.push(item)
- │
- ├─ let next_shard = AtomicUsize::new(0)
- │
- └─ std::thread::scope(|s| { ... })
- FILE: Rust std (scoped threads)
- │
- ├─ FOR _ IN 0..workers:
- │ │
- │ └─ s.spawn(move || { ... }) // ═══ WORKER THREAD ═══
- │ │
- │ ├─ let mut delta = TickDelta::new()
- │ │ FILE: crates/warp-core/src/tick_delta.rs-52
- │ │ CREATES: { ops: Vec::new(), origins: Vec::new() }
- │ │
- │ └─ LOOP: // Work-stealing loop
- │ │
- │ ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed)
- │ │ ATOMIC: Returns old value, increments counter
- │ │ ORDERING: Relaxed (no synchronization cost)
- │ │
- │ ├─ IF shard_id >= 256: break
- │ │
- │ └─ FOR item IN &shards[shard_id].items:
- │ │
- │ ├─ let mut scoped = delta.scoped(item.origin)
- │ │ FILE: crates/warp-core/src/tick_delta.rs-142
- │ │ CREATES: ScopedDelta { inner: &mut delta, origin, next_op_ix: 0 }
- │ │
- │ └─ (item.exec)(view, &item.scope, scoped.inner_mut())
- │ │
- │ └─ INSIDE EXECUTOR:
- │ scoped.emit(op)
- │ FILE: crates/warp-core/src/tick_delta.rs-239
- │ CODE:
- │ origin.op_ix = self.next_op_ix;
- │ self.next_op_ix += 1;
- │ self.inner.emit_with_origin(op, origin);
- │ │
- │ └─ TickDelta::emit_with_origin(op, origin)
- │ FILE: crates/warp-core/src/tick_delta.rs-75
- │ CODE:
- │ self.ops.push(op);
- │ self.origins.push(origin); // if delta_validate
- │
- └─ COLLECT THREADS:
- handles.into_iter().map(|h| h.join()).collect()
- RETURNS: Vec<TickDelta> (one per worker)
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{Shard-Based Work Distribution}
-
-The sharding scheme is beautifully simple: take
the first 8 bytes of the scope's NodeId, mask with 255, and you have your shard. - -Why 256 shards? -\begin{itemize} -\item \textbf{Granularity}: Fine enough that work distributes evenly -\item \textbf{Overhead}: Coarse enough that per-shard overhead is negligible -\item \textbf{Determinism}: The shard assignment is deterministic (depends only on NodeId) -\end{itemize} - -The work-stealing loop with \texttt{AtomicUsize::fetch\_add} is lock-free and cache-friendly---each worker claims shards sequentially, minimizing contention. -\end{cleverpattern} - -\begin{deepdive} -\textbf{Why \texttt{Ordering::Relaxed}?} - -The atomic counter uses \texttt{Relaxed} ordering---the weakest memory ordering. This is safe because: - -\begin{enumerate} -\item Each shard is processed by exactly one worker (no data races) -\item Workers don't need to see each other's results until after \texttt{join()} -\item The \texttt{join()} itself provides the necessary synchronization -\end{enumerate} - -Using \texttt{Relaxed} instead of \texttt{SeqCst} avoids memory barriers, which can be expensive on multi-core CPUs. -\end{deepdive} - -\subsection{5.3 Enforced Execution Path}\label{enforced-execution-path} - -\textbf{Entry Point:} \texttt{execute\_item\_enforced()} -\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs} - -When footprint enforcement is active, each item is executed via -\texttt{execute\_item\_enforced()} instead of a bare function-pointer call. -Read access is enforced in-line by \texttt{GraphView}/\texttt{FootprintGuard} -while the executor runs inside \texttt{catch\_unwind}, and post-hoc -\texttt{check\_op()} validation is applied to any newly-emitted ops. 
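The run-then-inspect shape can be sketched with toy types. Everything below (`Outcome`, the bare `u64` "ops", `execute_enforced`) is a stand-in for illustration; the real path works on `TickDelta`, validates via `FootprintGuard::check_op`, and raises `panic_any` with a `FootprintViolation` payload.

```rust
use std::panic::{self, AssertUnwindSafe};

// Illustrative outcome type, not the warp-core API.
#[derive(Debug, PartialEq)]
enum Outcome {
    Ok,
    FootprintViolation(u64),
    ExecutorPanic,
}

fn execute_enforced(
    delta: &mut Vec<u64>, // emitted "ops" (just IDs here)
    allowed: &[u64],      // the declared write footprint
    exec: impl FnOnce(&mut Vec<u64>),
) -> Outcome {
    let ops_before = delta.len(); // snapshot BEFORE the executor runs
    let result = panic::catch_unwind(AssertUnwindSafe(|| exec(&mut *delta)));
    // Post-hoc check: every newly-emitted op must be inside the footprint.
    // A violation takes precedence over an executor panic.
    for &op in &delta[ops_before..] {
        if !allowed.contains(&op) {
            return Outcome::FootprintViolation(op);
        }
    }
    if result.is_err() {
        Outcome::ExecutorPanic
    } else {
        Outcome::Ok
    }
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the expected panic message
    let mut delta = vec![10]; // op emitted by an earlier item
    let out = execute_enforced(&mut delta, &[1, 2], |d| {
        d.push(9);      // out-of-footprint write...
        panic!("boom"); // ...followed by a panic
    });
    // The violation overrides the panic, so the evidence is not lost.
    assert_eq!(out, Outcome::FootprintViolation(9));
}
```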
- -\begin{verbatim} -execute_item_enforced(store, item, idx, unit, delta) -│ -├─ guard = unit.guards[idx] -├─ view = GraphView::new_guarded(store, guard) -│ -├─ ops_before = delta.len() -│ Snapshot the op count BEFORE the executor runs -│ -├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| { -│ (item.exec)(view, &item.scope, delta) -│ })) -│ -├─ NOTE: During execution above, GraphView validates reads via -│ FootprintGuard—unauthorized reads are detected inline. -│ -├─ FOR op IN delta.ops()[ops_before..]: -│ guard.check_op(op) → panic_any(FootprintViolation) on failure -│ Validates that each newly-emitted op falls within the declared footprint. -│ ExecItemKind::System items may emit warp-instance-level ops; -│ ExecItemKind::User items may not. -│ -└─ OUTCOME PRECEDENCE: - ├─ IF check_op fails: - │ std::panic::panic_any(FootprintViolation { ... }) - │ Footprint violations OVERRIDE executor panics — violation takes precedence. - │ - ├─ IF footprint is clean BUT executor panicked: - │ std::panic::resume_unwind(payload) - │ The original panic propagates to the caller. - │ - └─ IF both clean: - return Ok(delta) // Result -\end{verbatim} - -\begin{tourguide} -The post-hoc strategy is a deliberate design choice: we let the executor run to completion (or panic), then inspect what it wrote. This avoids the overhead of intercepting every write call during hot-loop execution. Read access is still enforced in-line by \texttt{GraphView}/\texttt{FootprintGuard} while the executor runs under \texttt{catch\_unwind}, so unauthorized reads surface immediately even before \texttt{check\_op()} validates writes. -\end{tourguide} - -\begin{cleverpattern} -\textbf{Outcome Precedence:} Why do write violations override executor panics? - -Consider: a rule panics, but before panicking it wrote an out-of-footprint op. If we propagated the panic, the violation evidence would be lost. 
By checking the delta first, we guarantee the developer sees the footprint violation message—which is more actionable than a random panic. -\end{cleverpattern} - -\textbf{The Poison Invariant:} If the executor panics, the \texttt{TickDelta} -it was writing into is considered poisoned. The execution path returns a -\texttt{PoisonedDelta} marker, and poisoned deltas are never merged or -committed. - -\subsection{5.4 ExecItem Structure}\label{execitem-structure} - -\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs-35} - -\begin{Shaded} -\begin{Highlighting}[] -\AttributeTok{\#[}\NormalTok{derive}\AttributeTok{(}\BuiltInTok{Clone}\OperatorTok{,} \BuiltInTok{Copy}\AttributeTok{)]} -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ ExecItem }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ exec}\OperatorTok{:}\NormalTok{ ExecuteFn}\OperatorTok{,} \CommentTok{// fn(GraphView, \&NodeId, \&mut TickDelta)} - \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeId}\OperatorTok{,} \CommentTok{// 32{-}byte node identifier} - \KeywordTok{pub}\NormalTok{ origin}\OperatorTok{:}\NormalTok{ OpOrigin}\OperatorTok{,} \CommentTok{// \{ intent\_id, rule\_id, match\_ix, op\_ix \}} - - \CommentTok{// Private field, present only in enforcement builds:} - \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{any}\AttributeTok{(}\NormalTok{debug\_assertions}\OperatorTok{,}\NormalTok{ feature }\OperatorTok{=} \StringTok{"footprint\_enforce\_release"}\AttributeTok{))]} - \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{not}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"unsafe\_graph"}\AttributeTok{))]} -\NormalTok{ kind}\OperatorTok{:}\NormalTok{ ExecItemKind}\OperatorTok{,} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{tourguide} -\texttt{ExecItem} is \texttt{Clone + Copy}---it's just a function pointer plus some IDs. This means workers can own their items without any reference counting or synchronization. 
The \texttt{origin} field enables tracing any operation back to the intent and rule that produced it. -\end{tourguide} - -\textbf{\texttt{ExecItemKind} (cfg-gated):} - -\begin{itemize} -\tightlist -\item - \texttt{ExecItemKind::User} --- Normal rule executor. May emit - node/edge/attachment ops scoped to the declared footprint. Cannot emit - warp-instance-level ops (\texttt{UpsertWarpInstance}, - \texttt{DeleteWarpInstance}, \texttt{OpenPortal}). -\item - \texttt{ExecItemKind::System} --- Internal-only executor (e.g., portal - opening). May emit warp-instance-level ops. -\end{itemize} - -\texttt{ExecItem::new()} always creates \texttt{User} items. System items are -constructed via \texttt{ExecItem::new\_system()} (cfg-gated \texttt{pub(crate)} -constructor used by portal/inbox rules) and are never exposed through the public -API. - -\begin{cleverpattern} -\textbf{The dual-attribute cfg-gate pattern:} The \texttt{kind} field (and all -enforcement logic) is guarded by two cfg attributes that together express three -conditions (\texttt{debug\_assertions}, \texttt{footprint\_enforce\_release}, -and \texttt{unsafe\_graph}): - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - \texttt{\#[cfg(any(debug\_assertions, feature = "footprint\_enforce\_release"))]} - --- active in debug builds or when the release enforcement feature is - opted-in. -\item - \texttt{\#[cfg(not(feature = "unsafe\_graph"))]} --- disabled when the - escape-hatch feature is set (for benchmarks/fuzzing that intentionally - bypass checks). -\end{enumerate} - -The gates are symmetric: the \texttt{kind} field, \texttt{guards} vector, and -validation code all have both cfg attributes applied identically. 
-\textbf{Precedence:} When both \texttt{footprint\_enforce\_release} and
-\texttt{unsafe\_graph} are enabled, the \texttt{unsafe\_graph} escape hatch
-wins: the \texttt{kind} field, \texttt{guards} vector, and validation code are
-compiled out, and enforcement is silently inactive even though release
-enforcement was requested. The struct layout therefore depends on the build
-profile: \texttt{ExecItem} is smaller in any build where enforcement is
-compiled out.
-\end{cleverpattern}
-
-\subsection{5.5 Thread Safety}\label{thread-safety}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Type & Safety & Reason \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{GraphView} & \texttt{Sync\ +\ Send\ +\ Clone} & Read-only
-snapshot \\
-\texttt{ExecItem} & \texttt{Sync\ +\ Send\ +\ Copy} & Function pointer +
-primitives \\
-\texttt{TickDelta} & Per-worker exclusive & No shared mutation \\
-\texttt{AtomicUsize} & Lock-free & \texttt{fetch\_add} with
-\texttt{Relaxed} ordering \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Delta Merge \& State
-Finalization}\label{delta-merge-state-finalization}
-
-\begin{tourguide}
-This is where the magic happens: multiple workers produce independent deltas, and we merge them into a single canonical result. The key invariant: \emph{the merge output depends only on the operations, not on which worker produced them or when}.
-\end{tourguide}
-
-\subsection{6.1 Canonical Merge}\label{canonical-merge}
-
-\textbf{Entry Point:} \texttt{merge\_deltas()} \textbf{File:}
-\texttt{crates/warp-core/src/boaw/merge.rs-75}
-
-\begin{verbatim}
-merge_deltas(deltas: Vec<TickDelta>) → Result<Vec<WarpOp>, MergeConflict>
-│
-├─[1] FLATTEN ALL OPS WITH ORIGINS
-│ let mut flat: Vec<(WarpOpKey, OpOrigin, WarpOp)> = Vec::new();
-│ FOR d IN deltas:
-│ let (ops, origins) = d.into_parts_unsorted();
-│ FOR (op, origin) IN ops.zip(origins):
-│ flat.push((op.sort_key(), origin, op));
-│
-├─[2] CANONICAL SORT
-│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1)));
-│ ORDER: (WarpOpKey, OpOrigin) lexicographic
-│
-└─[3] DEDUPE & CONFLICT DETECTION
- let mut out = Vec::new();
- let mut i = 0;
- WHILE i < flat.len():
- │
- ├─ GROUP by WarpOpKey
- │ key = flat[i].0
- │ start = i
- │ WHILE i < flat.len() && flat[i].0 == key: i++
- │
- ├─ CHECK if all ops identical
- │ first = &flat[start].2
- │ all_same = flat[start+1..i].iter().all(|(_, _, op)| op == first)
- │
- └─ IF all_same:
- out.push(first.clone()) // Accept one copy
- ELSE:
- writers = flat[start..i].iter().map(|(_, o, _)| *o).collect()
- return Err(MergeConflict { writers }) // CONFLICT!
-
- return Ok(out)
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{Benevolent Coincidence}
-
-The merge allows multiple writers to produce the same operation---this is called a \emph{benevolent coincidence}. If two rules independently decide to create the same edge, that's fine! The merge keeps one copy.
-
-But if they produce \emph{different} operations for the same key (e.g., setting an attachment to different values), that's a \texttt{MergeConflict}---a bug in the rule definitions.
-
-This policy allows natural redundancy in rule specifications while catching genuine conflicts.
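The accept-identical / reject-divergent policy condenses into a toy merge. `Op`, its `u64` fields, and the `u64` error type are stand-ins; the real code keys on `WarpOpKey` and reports the conflicting `OpOrigin`s in the `MergeConflict`.

```rust
// Sorting derives lexicographic order over (key, value), so duplicate keys
// land adjacent and can be resolved in a single linear pass.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Op {
    key: u64,   // stand-in for WarpOpKey
    value: u64, // stand-in for the op payload
}

fn merge_deltas(deltas: Vec<Vec<Op>>) -> Result<Vec<Op>, u64> {
    let mut flat: Vec<Op> = deltas.into_iter().flatten().collect();
    flat.sort(); // canonical order, independent of worker timing
    let mut out: Vec<Op> = Vec::new();
    for op in flat {
        match out.last() {
            // Same key as the previous op: identical is a benevolent
            // coincidence (keep one copy); divergent is a conflict.
            Some(prev) if prev.key == op.key => {
                if *prev != op {
                    return Err(op.key);
                }
            }
            _ => out.push(op),
        }
    }
    Ok(out)
}

fn main() {
    // Two workers independently emit the same op for key 5: accepted once.
    let ok = merge_deltas(vec![
        vec![Op { key: 5, value: 1 }],
        vec![Op { key: 5, value: 1 }, Op { key: 7, value: 2 }],
    ]);
    assert_eq!(ok.unwrap().len(), 2);
    // Divergent values for the same key: conflict.
    let err = merge_deltas(vec![
        vec![Op { key: 5, value: 1 }],
        vec![Op { key: 5, value: 9 }],
    ]);
    assert_eq!(err, Err(5));
}
```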
-\end{cleverpattern} - -\subsection{6.2 WarpOp Sort Key}\label{warpop-sort-key} - -\textbf{File:} \texttt{crates/warp-core/src/tick\_patch.rs-287} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ sort\_key(}\OperatorTok{\&}\KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{} - \ControlFlowTok{match} \KeywordTok{self} \OperatorTok{\{} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{OpenPortal }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{2}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{3}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{4}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Delete before upsert} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{5}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertNode }\OperatorTok{\{} 
\OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{6}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{7}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{SetAttachment }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{8}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Last} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Canonical Order:} 1. OpenPortal (creates child instances) 2. -UpsertWarpInstance 3. DeleteWarpInstance 4. DeleteEdge (delete before -upsert) 5. DeleteNode (delete before upsert) 6. UpsertNode 7. UpsertEdge -8. SetAttachment (after skeleton exists) - -\begin{deepdive} -\textbf{Why This Specific Order?} - -The operation order is carefully chosen to maintain invariants: - -\begin{enumerate} -\item \textbf{OpenPortal first}: Creates warp instances that other ops may reference -\item \textbf{Deletes before upserts}: Ensures we don't accidentally delete something we just created (idempotence) -\item \textbf{Nodes before edges}: Edges reference nodes, so nodes must exist first -\item \textbf{Attachments last}: Attachments reference nodes/edges, so the skeleton must be complete -\end{enumerate} - -This ordering means rules don't need to worry about operation sequencing---emit ops in any order, and the merge will sort them correctly. 
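The ordering argument above can be checked mechanically: give each op kind its numeric position and let a plain sort do the sequencing. A sketch using a hypothetical enum that mirrors the kind numbers listed above:

```rust
// Hypothetical mirror of the WarpOpKey kind numbers listed above.
// derive(Ord) on a fieldless enum compares by discriminant, so a
// plain sort reproduces the canonical sequencing.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum OpKind {
    OpenPortal = 1,
    UpsertWarpInstance = 2,
    DeleteWarpInstance = 3,
    DeleteEdge = 4,
    DeleteNode = 5,
    UpsertNode = 6,
    UpsertEdge = 7,
    SetAttachment = 8,
}

// Rules may emit ops in any order; canonicalization is just a sort.
fn canonicalize(mut ops: Vec<OpKind>) -> Vec<OpKind> {
    ops.sort_unstable();
    ops
}

fn main() {
    use OpKind::*;
    let emitted = vec![SetAttachment, UpsertEdge, DeleteNode, OpenPortal, UpsertNode];
    // Deletes land before upserts, and attachments come last.
    let sorted = canonicalize(emitted);
    assert_eq!(sorted, vec![OpenPortal, DeleteNode, UpsertNode, UpsertEdge, SetAttachment]);
}
```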
-\end{deepdive} - -\subsection{6.3 State Mutation Methods}\label{state-mutation-methods} - -\textbf{File:} \texttt{crates/warp-core/src/graph.rs} - -\begin{verbatim} -GraphStore::insert_node(id, record) - LINE: 175-177 - CODE: self.nodes.insert(id, record) - -GraphStore::upsert_edge_record(from, edge) - LINE: 196-261 - UPDATES: - - self.edge_index.insert(edge_id, from) - - self.edge_to_index.insert(edge_id, to) - - Remove old edge from previous bucket if exists - - self.edges_from.entry(from).or_default().push(edge) - - self.edges_to.entry(to).or_default().push(edge_id) - -GraphStore::delete_node_cascade(node) - LINE: 277-354 - CASCADES: - - Remove from self.nodes - - Remove node attachment - - Remove ALL outbound edges (and their attachments) - - Remove ALL inbound edges (and their attachments) - - Maintain all 4 index maps consistently - -GraphStore::delete_edge_exact(from, edge_id) - LINE: 360-412 - VALIDATES: edge is in correct "from" bucket - REMOVES: - - From edges_from bucket - - From edge_index - - From edge_to_index - - From edges_to bucket - - Edge attachment - -GraphStore::set_node_attachment(id, value) - LINE: 125-134 - CODE: - None → self.node_attachments.remove(&id) - Some(v) → self.node_attachments.insert(id, v) - -GraphStore::set_edge_attachment(id, value) - LINE: 163-172 - Same pattern as node attachments -\end{verbatim} - -\begin{watchout} -\textbf{Cascade Deletes Are Dangerous} - -\texttt{delete\_node\_cascade} removes not just the node, but all its edges and attachments. This is correct behavior (dangling edges would violate invariants), but rule authors must be aware: deleting a highly-connected node triggers many index updates. - -This is why footprints must declare write access to all edges that might be affected---the cascade happens even if the rule only explicitly deletes the node. -\end{watchout} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{7. 
Hash Computation}\label{hash-computation} - -\begin{tourguide} -Hashing is Echo's fingerprint technology. The state root captures \emph{what the graph looks like}; the commit hash captures \emph{how we got here}. Both are computed deterministically using BLAKE3, ensuring that identical states produce identical hashes across all nodes in a distributed system. -\end{tourguide} - -\subsection{7.1 State Root}\label{state-root} - -\textbf{Entry Point:} \texttt{compute\_state\_root()} \textbf{File:} -\texttt{crates/warp-core/src/snapshot.rs-209} - -\begin{verbatim} -compute_state_root(state: &WarpState, root: &NodeKey) → Hash -│ -├─[1] BFS REACHABILITY TRAVERSAL -│ │ -│ ├─ Initialize: -│ │ reachable_nodes: BTreeSet = { root } -│ │ reachable_warps: BTreeSet = { root.warp_id } -│ │ queue: VecDeque = [ root ] -│ │ -│ └─ WHILE let Some(current) = queue.pop_front(): -│ │ -│ ├─ store = state.store(¤t.warp_id) -│ │ -│ ├─ FOR edge IN store.edges_from(¤t.local_id): -│ │ ├─ to = NodeKey { warp_id: current.warp_id, local_id: edge.to } -│ │ ├─ IF reachable_nodes.insert(to): queue.push_back(to) -│ │ │ -│ │ └─ IF edge has Descend(child_warp) attachment: -│ │ └─ enqueue_descend(state, child_warp, ...) -│ │ Adds child instance root to queue -│ │ -│ └─ IF current node has Descend(child_warp) attachment: -│ enqueue_descend(state, child_warp, ...) 
-│ -├─[2] HASHING PHASE -│ │ -│ ├─ let mut hasher = Hasher::new() // BLAKE3 -│ │ -│ ├─ HASH ROOT BINDING: -│ │ hasher.update(&root.warp_id.0) // 32 bytes -│ │ hasher.update(&root.local_id.0) // 32 bytes -│ │ -│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order -│ │ -│ ├─ HASH INSTANCE HEADER: -│ │ hasher.update(&instance.warp_id.0) // 32 bytes -│ │ hasher.update(&instance.root_node.0) // 32 bytes -│ │ hash_attachment_key_opt(&mut hasher, instance.parent.as_ref()) -│ │ -│ ├─ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted -│ │ IF reachable_nodes.contains(&NodeKey { warp_id, local_id: node_id }): -│ │ hasher.update(&node_id.0) // 32 bytes -│ │ hasher.update(&node.ty.0) // 32 bytes -│ │ hash_attachment_value_opt(&mut hasher, store.node_attachment(node_id)) -│ │ -│ └─ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted -│ IF from is reachable: -│ sorted_edges = edges.filter(reachable).sort_by(|a,b| a.id.cmp(b.id)) -│ hasher.update(&from.0) // 32 bytes -│ hasher.update(&(sorted_edges.len() as u64).to_le_bytes()) // 8 bytes -│ FOR edge IN sorted_edges: -│ hasher.update(&edge.id.0) // 32 bytes -│ hasher.update(&edge.ty.0) // 32 bytes -│ hasher.update(&edge.to.0) // 32 bytes -│ hash_attachment_value_opt(&mut hasher, store.edge_attachment(&edge.id)) -│ -└─ hasher.finalize().into() // → [u8; 32] -\end{verbatim} - -\begin{cleverpattern} -\textbf{BTreeSet/BTreeMap for Determinism} - -Notice the use of \texttt{BTreeSet} and \texttt{BTreeMap} throughout. Unlike \texttt{HashSet}/\texttt{HashMap}, B-tree collections iterate in \emph{sorted order}. This is essential for deterministic hashing---the hash must be the same regardless of insertion order. - -The trade-off: B-tree operations are O(log n) instead of O(1). But for hashing (which happens once per commit), correctness trumps speed. -\end{cleverpattern} - -\begin{deepdive} -\textbf{Reachability Pruning} - -The BFS traversal only hashes \emph{reachable} nodes and edges. 
This means: - -\begin{enumerate} -\item Garbage (unreachable nodes) doesn't affect the hash -\item Two states with the same reachable structure have the same hash -\item Deleting a disconnected subgraph doesn't change the hash -\end{enumerate} - -This is a subtle but important property for garbage collection---you can safely remove unreachable data without affecting consensus. -\end{deepdive} - -\subsection{7.2 Commit Hash v2}\label{commit-hash-v2} - -\textbf{Entry Point:} \texttt{compute\_commit\_hash\_v2()} -\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs-263} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ compute\_commit\_hash\_v2(} -\NormalTok{ state\_root}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,} -\NormalTok{ parents}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\BuiltInTok{Hash}\NormalTok{]}\OperatorTok{,} -\NormalTok{ patch\_digest}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,} -\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}} \BuiltInTok{Hash} \OperatorTok{\{} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Version tag (2 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{(parents}\OperatorTok{.}\NormalTok{len() }\KeywordTok{as} \DataTypeTok{u64}\NormalTok{)}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Parent count (8 bytes)} - \ControlFlowTok{for}\NormalTok{ p }\KeywordTok{in}\NormalTok{ parents }\OperatorTok{\{} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(p)}\OperatorTok{;} \CommentTok{// Each parent (32 bytes)} - \OperatorTok{\}} -\NormalTok{ 
h}\OperatorTok{.}\NormalTok{update(state\_root)}\OperatorTok{;} \CommentTok{// Graph hash (32 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(patch\_digest)}\OperatorTok{;} \CommentTok{// Ops hash (32 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Policy (4 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Byte Layout:} - -\begin{verbatim} -Offset Size Field -0 2 version_tag (0x02 0x00) -2 8 parent_count (u64 LE) -10 32*N parents[] (N parent hashes) -10+32N 32 state_root -42+32N 32 patch_digest -74+32N 4 policy_id (u32 LE) -───────────────────────────────────── -TOTAL: 78 + 32*N bytes → BLAKE3 → 32-byte hash -\end{verbatim} - -\begin{tourguide} -The version tag (\texttt{0x02 0x00}) is future-proofing: if the commit hash format ever needs to change, the version lets validators distinguish between formats. The ``v2'' in the function name indicates this is already the second iteration of the format. 
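The byte layout can be pinned down with a small sketch that assembles the preimage exactly as tabulated above. The real code streams these fields straight into a BLAKE3 hasher rather than a buffer; the function name here is illustrative:

```rust
// Sketch of the commit-hash-v2 preimage from the byte-layout table:
// version tag, parent count, parents, state root, patch digest,
// policy id. Assembling it into a Vec lets the sizes be verified
// (78 + 32*N bytes for N parents).
fn commit_preimage_v2(
    state_root: &[u8; 32],
    parents: &[[u8; 32]],
    patch_digest: &[u8; 32],
    policy_id: u32,
) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&2u16.to_le_bytes()); // version tag (0x02 0x00)
    buf.extend_from_slice(&(parents.len() as u64).to_le_bytes()); // parent count
    for p in parents {
        buf.extend_from_slice(p); // each parent hash (32 bytes)
    }
    buf.extend_from_slice(state_root); // graph hash (32 bytes)
    buf.extend_from_slice(patch_digest); // ops hash (32 bytes)
    buf.extend_from_slice(&policy_id.to_le_bytes()); // policy (4 bytes)
    buf
}

fn main() {
    let parents = [[0u8; 32]; 2];
    let preimage = commit_preimage_v2(&[1u8; 32], &parents, &[2u8; 32], 7);
    // Matches the table: 78 + 32*N total bytes, version tag first.
    assert_eq!(preimage.len(), 78 + 32 * parents.len());
    assert_eq!(&preimage[0..2], &[0x02, 0x00]);
}
```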
-\end{tourguide} - -\subsection{7.3 Patch Digest}\label{patch-digest} - -\textbf{Entry Point:} \texttt{compute\_patch\_digest\_v2()} -\textbf{File:} \texttt{crates/warp-core/src/tick\_patch.rs-774} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{fn}\NormalTok{ compute\_patch\_digest\_v2(} -\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} -\NormalTok{ rule\_pack\_id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{ContentHash}\OperatorTok{,} -\NormalTok{ commit\_status}\OperatorTok{:}\NormalTok{ TickCommitStatus}\OperatorTok{,} -\NormalTok{ in\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,} -\NormalTok{ out\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,} -\NormalTok{ ops}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[WarpOp]}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ ContentHash }\OperatorTok{\{} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Format version} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// 4 bytes} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(rule\_pack\_id)}\OperatorTok{;} \CommentTok{// 32 bytes} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{[commit\_status}\OperatorTok{.}\NormalTok{code()])}\OperatorTok{;} \CommentTok{// 1 byte} -\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ in\_slots)}\OperatorTok{;} -\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ out\_slots)}\OperatorTok{;} -\NormalTok{ encode\_ops(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ 
h}\OperatorTok{,}\NormalTok{ ops)}\OperatorTok{;} -\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{8. Commit Orchestration}\label{commit-orchestration} - -\textbf{Entry Point:} \texttt{Engine::commit\_with\_receipt()} -\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs-954} - -\begin{tourguide} -This is the grand finale---where all the pieces come together. The commit orchestrator drains the scheduler, reserves resources, executes rules, merges deltas, computes hashes, and records the transaction. Let's trace through every step. -\end{tourguide} - -\subsection{8.1 Complete Call Trace}\label{complete-call-trace-3} - -\begin{verbatim} -Engine::commit_with_receipt(tx) → Result<(Snapshot, TickReceipt, WarpTickPatchV1), EngineError> -│ -├─[1] VALIDATE TRANSACTION -│ IF tx.value() == 0 || !self.live_txs.contains(&tx.value()): -│ return Err(EngineError::UnknownTx) -│ -├─[2] DRAIN CANDIDATES -│ policy_id = self.policy_id // Line 844 -│ rule_pack_id = self.compute_rule_pack_id() // Line 845 -│ │ -│ ├─ compute_rule_pack_id() -│ │ FILE: engine_impl.rs -│ │ CODE: -│ │ ids = self.rules.values().map(|r| r.id).collect() -│ │ ids.sort_unstable(); ids.dedup() -│ │ hasher.update(&1u16.to_le_bytes()) // version -│ │ hasher.update(&(ids.len() as u64).to_le_bytes()) -│ │ FOR id IN ids: hasher.update(&id) -│ │ hasher.finalize().into() -│ │ -│ drained = self.scheduler.drain_for_tx(tx) // Line 847 -│ plan_digest = compute_plan_digest(&drained) // Line 848 -│ -├─[3] RESERVE (INDEPENDENCE CHECK) -│ ReserveOutcome { receipt, reserved, in_slots, out_slots } -│ = self.reserve_for_receipt(tx, drained)? 
// Line 850-855 -│ │ -│ └─ reserve_for_receipt(tx, drained) -│ FILE: engine_impl.rs -│ │ -│ FOR rewrite IN drained (canonical order): -│ │ -│ ├─ accepted = self.scheduler.reserve(tx, &mut rewrite) -│ │ -│ ├─ IF !accepted: -│ │ blockers = find_blocking_rewrites(reserved, &rewrite) -│ │ -│ ├─ receipt_entries.push(TickReceiptEntry { ... }) -│ │ -│ └─ IF accepted: -│ reserved.push(rewrite) -│ extend_slots_from_footprint(&mut in_slots, &mut out_slots, ...) -│ │ -│ return ReserveOutcome { receipt, reserved, in_slots, out_slots } -│ -│ rewrites_digest = compute_rewrites_digest(&reserved_rewrites) // Line 858 -│ -├─[4] EXECUTE (PHASE 5 BOAW) -│ state_before = self.state.clone() // Line 862 -│ delta_ops = self.apply_reserved_rewrites(reserved, &state_before)? -│ │ -│ └─ apply_reserved_rewrites(rewrites, state_before) -│ FILE: engine_impl.rs -│ │ -│ ├─ let mut delta = TickDelta::new() -│ │ -│ ├─ FOR rewrite IN rewrites: -│ │ executor = self.rule_by_compact(rewrite.compact_rule).executor -│ │ view = GraphView::new(self.state.store(&rewrite.scope.warp_id)) -│ │ (executor)(view, &rewrite.scope.local_id, &mut delta) -│ │ -│ ├─ let ops = delta.finalize() // Canonical sort -│ │ -│ ├─ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops) -│ │ patch.apply_to_state(&mut self.state)? 
-│ │ -│ └─ [delta_validate]: assert_delta_matches_diff(&ops, &diff_ops) -│ -├─[5] MATERIALIZE -│ mat_report = self.bus.finalize() // Line 884 -│ self.last_materialization = mat_report.channels -│ self.last_materialization_errors = mat_report.errors -│ -├─[6] COMPUTE DELTA PATCH -│ ops = diff_state(&state_before, &self.state) // Line 889 -│ │ -│ └─ diff_state(before, after) -│ FILE: tick_patch.rs -│ - Canonicalize portal authoring (OpenPortal) -│ - Diff instances (delete/upsert) -│ - Diff nodes, edges, attachments -│ - Sort by WarpOp::sort_key() -│ │ -│ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops) -│ patch_digest = patch.digest() // Line 898 -│ -├─[7] COMPUTE STATE ROOT -│ state_root = compute_state_root(&self.state, &self.current_root) // Line 900 -│ -├─[8] GET PARENTS -│ parents = self.last_snapshot.as_ref().map(|s| vec![s.hash]).unwrap_or_default() -│ -├─[9] COMPUTE DECISION DIGEST -│ decision_digest = receipt.digest() // Line 929 -│ -├─[10] COMPUTE COMMIT HASH -│ hash = compute_commit_hash_v2(&state_root, &parents, &patch_digest, policy_id) -│ -├─[11] BUILD SNAPSHOT -│ snapshot = Snapshot { -│ root: self.current_root, -│ hash, // commit_id v2 -│ parents, -│ plan_digest, // Diagnostic -│ decision_digest, // Diagnostic -│ rewrites_digest, // Diagnostic -│ patch_digest, // COMMITTED -│ policy_id, // COMMITTED -│ tx, -│ } -│ -├─[12] RECORD TO HISTORY -│ self.last_snapshot = Some(snapshot.clone()) // Line 947 -│ self.tick_history.push((snapshot, receipt, patch)) // Line 948-949 -│ self.live_txs.remove(&tx.value()) // Line 951 -│ self.scheduler.finalize_tx(tx) // Line 952 -│ -└─[13] RETURN - Ok((snapshot, receipt, patch)) -\end{verbatim} - -\begin{cleverpattern} -\textbf{State Snapshot Before Mutation} - -In step [4], notice \texttt{state\_before = self.state.clone()}. This clone happens \emph{before} any mutations. Why? 
- -\begin{enumerate} -\item Enables \texttt{diff\_state()} to compute exactly what changed -\item Supports rollback if execution fails (though this isn't shown) -\item Provides validation: the delta from execution should match the diff -\end{enumerate} - -The clone is relatively cheap because it's copy-on-write under the hood---most data is shared until mutation. -\end{cleverpattern} - -\begin{deepdive} -\textbf{Diagnostic vs. Committed Digests} - -The snapshot contains multiple digests, but only some are ``committed'' (affect the hash): - -\begin{itemize} -\item \textbf{Committed}: \texttt{state\_root}, \texttt{patch\_digest}, \texttt{policy\_id}, \texttt{parents} -\item \textbf{Diagnostic}: \texttt{plan\_digest}, \texttt{decision\_digest}, \texttt{rewrites\_digest} -\end{itemize} - -Diagnostic digests are for debugging and auditing---they help trace what happened, but don't affect consensus. This separation keeps the consensus-critical path minimal while providing rich observability. -\end{deepdive} - -\subsection{8.2 Commit Hash Inputs}\label{commit-hash-inputs} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Input & Committed? & Purpose \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{state\_root} & ✓ & What the graph looks like \\ -\texttt{patch\_digest} & ✓ & How we got here (ops) \\ -\texttt{parents} & ✓ & Chain continuity \\ -\texttt{policy\_id} & ✓ & Aion policy version \\ -\texttt{plan\_digest} & ✗ & Diagnostic only \\ -\texttt{decision\_digest} & ✗ & Diagnostic only \\ -\texttt{rewrites\_digest} & ✗ & Diagnostic only \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{9. 
Complete Call Graph}\label{complete-call-graph} - -\subsection{9.1 Full Journey: Intent → -Commit}\label{full-journey-intent-commit} - -\begin{verbatim} -USER ACTION - │ - ▼ -Engine::ingest_intent(intent_bytes) - ├─ compute_intent_id() // BLAKE3 content hash - ├─ make_node_id(), make_type_id() // Structural IDs - ├─ store.insert_node() // Create event node - ├─ store.set_node_attachment() // Attach intent payload - └─ store.insert_edge() // Pending edge to inbox - │ - ▼ -Engine::begin() → TxId - ├─ tx_counter.wrapping_add(1) - ├─ live_txs.insert(tx_counter) - └─ TxId::from_raw(tx_counter) - │ - ▼ -Engine::dispatch_next_intent(tx) // (or manual apply) - │ - ▼ -Engine::apply(tx, rule_name, scope) - └─ Engine::apply_in_warp(tx, warp_id, rule_name, scope, &[]) - ├─ rules.get(rule_name) // Lookup rule - ├─ GraphView::new(store) // Read-only view - ├─ (rule.matcher)(view, scope) // Match check - ├─ scope_hash() // BLAKE3 ordering key - ├─ (rule.compute_footprint)(view, scope) // Footprint - └─ scheduler.enqueue(tx, PendingRewrite) - └─ PendingTx::enqueue() // Last-wins dedup - │ - ▼ -Engine::commit_with_receipt(tx) - │ - ├─[DRAIN] - │ scheduler.drain_for_tx(tx) - │ └─ PendingTx::drain_in_order() - │ └─ radix_sort() or sort_unstable_by() - │ 20-pass LSD radix sort - │ ORDER: (scope_hash, rule_id, nonce) - │ - ├─[RESERVE] - │ FOR rewrite IN drained: - │ scheduler.reserve(tx, &mut rewrite) - │ ├─ has_conflict(active, pr) - │ │ └─ GenSet::contains() × N // O(1) per check - │ └─ mark_all(active, pr) - │ └─ GenSet::mark() × M // O(1) per mark - │ - ├─[EXECUTE] - │ apply_reserved_rewrites(reserved, state_before) - │ FOR rewrite IN reserved: - │ (executor)(view, &scope, &mut delta) - │ └─ scoped.emit(op) - │ └─ delta.emit_with_origin(op, origin) - │ delta.finalize() // Sort ops - │ patch.apply_to_state(&mut self.state) - │ - ├─[MATERIALIZE] - │ bus.finalize() - │ - ├─[DELTA PATCH] - │ diff_state(&state_before, &self.state) - │ └─ Sort by WarpOp::sort_key() - │ 
WarpTickPatchV1::new(...) - │ └─ compute_patch_digest_v2() - │ - ├─[HASHES] - │ compute_state_root(&self.state, &self.current_root) - │ ├─ BFS reachability - │ └─ BLAKE3 over canonical encoding - │ compute_commit_hash_v2(state_root, parents, patch_digest, policy_id) - │ └─ BLAKE3(version || parents || state_root || patch_digest || policy_id) - │ - ├─[SNAPSHOT] - │ Snapshot { root, hash, parents, digests..., policy_id, tx } - │ - └─[RECORD] - tick_history.push((snapshot, receipt, patch)) - live_txs.remove(&tx.value()) - scheduler.finalize_tx(tx) - │ - ▼ -RETURN: (Snapshot, TickReceipt, WarpTickPatchV1) -\end{verbatim} - -\begin{tourguide} -And there you have it---the complete journey from user action to committed state. Every step is deterministic, every hash is content-addressed, and the system can be replayed or verified by any node with the same inputs. - -The elegance lies in the separation of concerns: -\begin{itemize} -\item \textbf{Ingestion} is pure data capture -\item \textbf{Matching} is pure pattern recognition -\item \textbf{Scheduling} is pure ordering -\item \textbf{Execution} is pure computation (no side effects escape) -\item \textbf{Merging} is pure deduplication -\item \textbf{Hashing} is pure fingerprinting -\end{itemize} - -Each phase can be reasoned about independently, tested independently, and optimized independently. This is the hallmark of well-architected systems. 
-\end{tourguide} - -\subsection{9.2 File Index}\label{file-index} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Component & Primary File & Key Lines \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -Intent Ingestion & \texttt{engine\_impl.rs} & 1216-1281 \\ -Identity Hashing & \texttt{ident.rs} & 85-109 \\ -Transaction Begin & \texttt{engine\_impl.rs} & 711-719 \\ -Rule Apply & \texttt{engine\_impl.rs} & 730-806 \\ -Footprint & \texttt{footprint.rs} & 131-152 \\ -Scheduler Enqueue & \texttt{scheduler.rs} & 102-105, 331-355 \\ -Radix Sort & \texttt{scheduler.rs} & 360-413, 481-498 \\ -Reserve/Conflict & \texttt{scheduler.rs} & 134-278 \\ -GenSet & \texttt{scheduler.rs} & 509-535 \\ -BOAW Execute & \texttt{boaw/exec.rs} & 61-152 \\ -Shard Routing & \texttt{boaw/shard.rs} & 82-120 \\ -Delta Merge & \texttt{boaw/merge.rs} & 36-75 \\ -TickDelta & \texttt{tick\_delta.rs} & 38-172 \\ -WarpOp Sort Key & \texttt{tick\_patch.rs} & 207-287 \\ -State Mutations & \texttt{graph.rs} & 175-412 \\ -Patch Apply & \texttt{tick\_patch.rs} & 434-561 \\ -Diff State & \texttt{tick\_patch.rs} & 979-1069 \\ -State Root Hash & \texttt{snapshot.rs} & 88-209 \\ -Commit Hash v2 & \texttt{snapshot.rs} & 244-263 \\ -Patch Digest & \texttt{tick\_patch.rs} & 755-774 \\ -Commit Orchestrator & \texttt{engine\_impl.rs} & 837-954 \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix A: Complexity -Summary}\label{appendix-a-complexity-summary} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Operation & Complexity & Notes \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{ingest\_intent} & O(1) & Fixed structural insertions \\ -\texttt{begin} & O(1) & Counter increment + set insert \\ -\texttt{apply} & O(m) & m = footprint size \\ -\texttt{drain\_for\_tx} (radix) & O(n) & n = candidates, 20 
passes \\ -\texttt{reserve} per rewrite & O(m) & m = footprint size, O(1) per -check \\ -\texttt{execute\_parallel} & O(n/w) & n = items, w = workers \\ -\texttt{merge\_deltas} & O(k log k) & k = total ops (sort + dedup) \\ -\texttt{compute\_state\_root} & O(V + E) & V = nodes, E = edges \\ -\texttt{compute\_commit\_hash\_v2} & O(P) & P = parents \\ -\end{longtable} -} - -\begin{tourguide} -Notice that all operations are either O(1), O(n), or O(n log n)---there's nothing quadratic or exponential lurking here. The system scales linearly with the amount of work, which is essential for predictable performance. - -The one potential bottleneck is \texttt{compute\_state\_root} at O(V + E), which traverses the entire reachable graph. For very large graphs, this could become expensive. In practice, graphs are partitioned across warp instances, keeping each traversal manageable. -\end{tourguide} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix B: Determinism -Boundaries}\label{appendix-b-determinism-boundaries} - -\subsection{Guaranteed Deterministic}\label{guaranteed-deterministic} - -\begin{itemize} -\tightlist -\item - Radix sort ordering (20-pass LSD) -\item - BTreeMap/BTreeSet iteration -\item - BLAKE3 hashing -\item - GenSet conflict detection -\item - Canonical merge deduplication -\end{itemize} - -\subsection{Intentionally Non-Deterministic (Handled by -Merge)}\label{intentionally-non-deterministic-handled-by-merge} - -\begin{itemize} -\tightlist -\item - Worker execution order in BOAW -\item - Shard claim order (atomic counter) -\end{itemize} - -\begin{deepdive} -\textbf{The Determinism Contract} - -Echo's determinism guarantee is: \emph{given the same inputs (intents, rules, initial state), the output (commit hash) is identical across all executions}. 
- -This holds even though: -\begin{itemize} -\item Workers execute in arbitrary order -\item Shards are claimed non-deterministically -\item Thread scheduling varies between runs -\end{itemize} - -The canonical merge absorbs this non-determinism, producing a deterministic output from non-deterministic intermediate results. It's a beautiful example of ``eventual determinism''---chaos in the middle, order at the end. -\end{deepdive} - -\subsection{Protocol Constants -(Frozen)}\label{protocol-constants-frozen} - -\begin{itemize} -\tightlist -\item - \texttt{NUM\_SHARDS\ =\ 256} -\item - \texttt{SHARD\_MASK\ =\ 255} -\item - Shard routing: \texttt{LE\_u64(node\_id{[}0..8{]})\ \&\ 255} -\item - Commit hash v2 version tag: \texttt{0x02\ 0x00} -\end{itemize} - -\begin{watchout} -\textbf{Protocol Constants Are Sacred} - -These constants are ``frozen''---changing them would break compatibility with existing commits. If you're tempted to tweak \texttt{NUM\_SHARDS} or the shard routing formula, remember: every historical commit was created with these values, and changing them would make replay impossible. - -Protocol evolution happens through version tags (like the \texttt{0x02} in commit hash v2), not by modifying existing constants. -\end{watchout} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\begin{tourguide} -\textbf{End of Tour} - -Thank you for joining me on this journey through Echo's internals! We've seen: - -\begin{itemize} -\item \textbf{Content-addressed everything}: From intents to commits, identity comes from content -\item \textbf{Deterministic scheduling}: Radix sort + footprints = predictable execution -\item \textbf{Safe parallelism}: Sharded execution + canonical merge = speed without chaos -\item \textbf{Cryptographic integrity}: BLAKE3 hashes throughout = verifiable state -\end{itemize} - -Echo is a remarkable piece of engineering---complex enough to solve hard problems, yet built from simple, composable primitives. 
The code rewards careful study, and I hope these annotations help illuminate the ``why'' behind the ``what.'' - -Happy hacking! -\end{tourguide} - -\emph{Document generated 2026-01-25. File paths and line numbers -accurate as of this date. Commentary added by your friendly AI tour guide.} - -\backmatter -\end{document} diff --git a/docs/archive/study/echo-tour-de-code.md b/docs/archive/study/echo-tour-de-code.md deleted file mode 100644 index 04c270db..00000000 --- a/docs/archive/study/echo-tour-de-code.md +++ /dev/null @@ -1,1355 +0,0 @@ - - - -# Echo: Tour de Code - -> **The complete function-by-function trace of Echo's execution pipeline.** -> -> This document traces EVERY function call involved in processing a user action through the Echo engine. -> File paths are accurate as of 2026-01-25; line numbers are intentionally omitted to avoid drift. - ---- - -## Table of Contents - -1. [Intent Ingestion](#1-intent-ingestion) -2. [Transaction Lifecycle](#2-transaction-lifecycle) -3. [Rule Matching](#3-rule-matching) -4. [Scheduler: Drain & Reserve](#4-scheduler-drain--reserve) -5. [BOAW Parallel Execution](#5-boaw-parallel-execution) -6. [Delta Merge & State Finalization](#6-delta-merge--state-finalization) -7. [Hash Computation](#7-hash-computation) -8. [Commit Orchestration](#8-commit-orchestration) -9. [Complete Call Graph](#9-complete-call-graph) - ---- - -## 1. 
Intent Ingestion - -**Entry Point:** `Engine::ingest_intent()` -**File:** `crates/warp-core/src/engine_impl.rs` - -### 1.1 Function Signature - -```rust -pub fn ingest_intent(&mut self, intent_bytes: &[u8]) -> Result -``` - -**Returns:** - -- `IngestDisposition::Accepted { intent_id: Hash }` — New intent accepted -- `IngestDisposition::Duplicate { intent_id: Hash }` — Already ingested - -### 1.2 Complete Call Trace - -```text -Engine::ingest_intent(intent_bytes: &[u8]) -│ -├─[1] compute_intent_id(intent_bytes) → Hash -│ FILE: crates/warp-core/src/inbox.rs -│ CODE: -│ let mut hasher = blake3::Hasher::new(); -│ hasher.update(b"intent:"); // Domain separation -│ hasher.update(intent_bytes); -│ hasher.finalize().into() // → [u8; 32] -│ -├─[2] NodeId(intent_id) -│ Creates strongly-typed NodeId from Hash -│ -├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore> -│ FILE: crates/warp-core/src/engine_impl.rs -│ ERROR: EngineError::UnknownWarp if None -│ -├─[4] Extract root_node_id from self.current_root.local_id -│ -├─[5] STRUCTURAL NODE CREATION (Idempotent) -│ ├─ make_node_id("sim") → NodeId -│ │ FILE: crates/warp-core/src/ident.rs -│ │ CODE: blake3("node:" || "sim") -│ │ -│ ├─ make_node_id("sim/inbox") → NodeId -│ │ CODE: blake3("node:" || "sim/inbox") -│ │ -│ ├─ make_type_id("sim") → TypeId -│ │ FILE: crates/warp-core/src/ident.rs -│ │ CODE: blake3("type:" || "sim") -│ │ -│ ├─ make_type_id("sim/inbox") → TypeId -│ ├─ make_type_id("sim/inbox/event") → TypeId -│ │ -│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty }) -│ │ FILE: crates/warp-core/src/graph.rs -│ │ CODE: self.nodes.insert(id, record) -│ │ -│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty }) -│ -├─[6] STRUCTURAL EDGE CREATION -│ ├─ make_edge_id("edge:root/sim") → EdgeId -│ │ FILE: crates/warp-core/src/ident.rs -│ │ CODE: blake3("edge:" || "edge:root/sim") -│ │ -│ ├─ store.insert_edge(root_id, EdgeRecord { ... 
}) -│ │ FILE: crates/warp-core/src/graph.rs -│ │ └─ GraphStore::upsert_edge_record(from, edge) -│ │ FILE: crates/warp-core/src/graph.rs -│ │ UPDATES: -│ │ self.edge_index.insert(edge_id, from) -│ │ self.edge_to_index.insert(edge_id, to) -│ │ self.edges_from.entry(from).or_default().push(edge) -│ │ self.edges_to.entry(to).or_default().push(edge_id) -│ │ -│ └─ store.insert_edge(sim_id, EdgeRecord { ... }) [sim → inbox] -│ -├─[7] DUPLICATE DETECTION -│ store.node(&event_id) → Option<&NodeRecord> -│ FILE: crates/warp-core/src/graph.rs -│ CODE: self.nodes.get(id) -│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id }) -│ -├─[8] EVENT NODE CREATION -│ store.insert_node(event_id, NodeRecord { ty: event_ty }) -│ NOTE: event_id = intent_id (content-addressed) -│ -├─[9] INTENT ATTACHMENT -│ ├─ AtomPayload::new(type_id, bytes) -│ │ FILE: crates/warp-core/src/attachment.rs -│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) } -│ │ -│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload))) -│ FILE: crates/warp-core/src/graph.rs -│ CODE: self.node_attachments.insert(id, v) -│ -├─[10] PENDING EDGE CREATION (Queue Membership) -│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId -│ │ FILE: crates/warp-core/src/inbox.rs -│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id) -│ │ -│ └─ store.insert_edge(inbox_id, EdgeRecord { -│ id: pending_edge_id, -│ from: inbox_id, -│ to: event_id, -│ ty: make_type_id("edge:pending") -│ }) -│ -└─[11] return Ok(IngestDisposition::Accepted { intent_id }) -``` - -### 1.3 Data Structures Modified - -| Structure | Field | Change | -| ------------ | ------------------ | ------------------------------------------- | -| `GraphStore` | `nodes` | +3 entries (sim, inbox, event) | -| `GraphStore` | `edges_from` | +3 edges (root→sim, sim→inbox, inbox→event) | -| `GraphStore` | `edges_to` | +3 reverse entries | -| `GraphStore` | `edge_index` | +3 edge→from mappings | -| `GraphStore` | 
`edge_to_index` | +3 edge→to mappings | -| `GraphStore` | `node_attachments` | +1 (event → intent payload) | - ---- - -## 2. Transaction Lifecycle - -### 2.1 Begin Transaction - -**Entry Point:** `Engine::begin()` -**File:** `crates/warp-core/src/engine_impl.rs-719` - -```rust -pub fn begin(&mut self) -> TxId { - self.tx_counter = self.tx_counter.wrapping_add(1); // Line 713 - if self.tx_counter == 0 { - self.tx_counter = 1; // Line 715: Zero is reserved - } - self.live_txs.insert(self.tx_counter); // Line 717 - TxId::from_raw(self.tx_counter) // Line 718 -} -``` - -**Call Trace:** - -```text -Engine::begin() -│ -├─ self.tx_counter.wrapping_add(1) -│ Rust std: u64::wrapping_add -│ Handles u64::MAX → 0 overflow -│ -├─ if self.tx_counter == 0: self.tx_counter = 1 -│ INVARIANT: TxId(0) is reserved as invalid -│ -├─ self.live_txs.insert(self.tx_counter) -│ TYPE: HashSet -│ Registers transaction as active -│ -└─ TxId::from_raw(self.tx_counter) - FILE: crates/warp-core/src/tx.rs - CODE: pub const fn from_raw(value: u64) -> Self { Self(value) } - TYPE: #[repr(transparent)] struct TxId(u64) -``` - -**State Changes:** - -- `tx_counter`: N → N+1 (or 1 if wrapped) -- `live_txs`: Insert new counter value - -### 2.2 Abort Transaction - -**Entry Point:** `Engine::abort()` -**File:** `crates/warp-core/src/engine_impl.rs-968` - -```rust -pub fn abort(&mut self, tx: TxId) { - self.live_txs.remove(&tx.value()); - self.scheduler.finalize_tx(tx); - self.bus.clear(); - self.last_materialization.clear(); - self.last_materialization_errors.clear(); -} -``` - ---- - -## 3. 
Rule Matching - -**Entry Point:** `Engine::apply()` -**File:** `crates/warp-core/src/engine_impl.rs-737` - -### 3.1 Function Signature - -```rust -pub fn apply( - &mut self, - tx: TxId, - rule_name: &str, - scope: &NodeId, -) -> Result<ApplyResult, EngineError> -``` - -### 3.2 Complete Call Trace - -```text -Engine::apply(tx, rule_name, scope) -│ -└─ Engine::apply_in_warp(tx, self.current_root.warp_id, rule_name, scope, &[]) - FILE: crates/warp-core/src/engine_impl.rs - │ - ├─[1] TRANSACTION VALIDATION - │ CODE: if tx.value() == 0 || !self.live_txs.contains(&tx.value()) - │ ERROR: EngineError::UnknownTx - │ - ├─[2] RULE LOOKUP - │ self.rules.get(rule_name) → Option<&RewriteRule> - │ TYPE: HashMap<&'static str, RewriteRule> - │ ERROR: EngineError::UnknownRule(rule_name.to_owned()) - │ - ├─[3] STORE LOOKUP - │ self.state.store(&warp_id) → Option<&GraphStore> - │ ERROR: EngineError::UnknownWarp(warp_id) - │ - ├─[4] CREATE GRAPHVIEW - │ GraphView::new(store) → GraphView<'_> - │ FILE: crates/warp-core/src/graph_view.rs - │ TYPE: Read-only wrapper (Copy, 8 bytes) - │ - ├─[5] CALL MATCHER - │ (rule.matcher)(view, scope) → bool - │ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool - │ FILE: crates/warp-core/src/rule.rs - │ IF false: return Ok(ApplyResult::NoMatch) - │ - ├─[6] CREATE SCOPE KEY - │ let scope_key = NodeKey { warp_id, local_id: *scope } - │ - ├─[7] COMPUTE SCOPE HASH - │ scope_hash(&rule.id, &scope_key) → Hash - │ FILE: crates/warp-core/src/engine_impl.rs - │ CODE: - │ let mut hasher = Hasher::new(); - │ hasher.update(rule_id); // 32 bytes - │ hasher.update(scope.warp_id.as_bytes()); // 32 bytes - │ hasher.update(scope.local_id.as_bytes()); // 32 bytes - │ hasher.finalize().into() - │ - ├─[8] COMPUTE FOOTPRINT - │ (rule.compute_footprint)(view, scope) → Footprint - │ TYPE: FootprintFn = for<'a> fn(GraphView<'a>, &NodeId) -> Footprint - │ FILE: crates/warp-core/src/rule.rs - │ RETURNS: - │ Footprint { - │ n_read: IdSet, // Nodes read - │ n_write: IdSet, // Nodes written - │ 
e_read: IdSet, // Edges read - │ e_write: IdSet, // Edges written - │ a_read: AttachmentSet, // Attachments read - │ a_write: AttachmentSet, // Attachments written - │ b_in: PortSet, // Input ports - │ b_out: PortSet, // Output ports - │ factor_mask: u64, // O(1) prefilter - │ } - │ - ├─[9] AUGMENT FOOTPRINT WITH DESCENT STACK - │ for key in descent_stack: - │ footprint.a_read.insert(*key) - │ FILE: crates/warp-core/src/footprint.rs - │ PURPOSE: Stage B1 law - READs of all descent chain slots - │ - ├─[10] COMPACT RULE ID LOOKUP - │ self.compact_rule_ids.get(&rule.id) → Option<&CompactRuleId> - │ TYPE: HashMap - │ ERROR: EngineError::InternalCorruption - │ - └─[11] ENQUEUE TO SCHEDULER - self.scheduler.enqueue(tx, PendingRewrite { ... }) - │ - └─ DeterministicScheduler::enqueue(tx, rewrite) - FILE: crates/warp-core/src/scheduler.rs - │ - └─ RadixScheduler::enqueue(tx, rewrite) - FILE: crates/warp-core/src/scheduler.rs - CODE: - let txq = self.pending.entry(tx).or_default(); - txq.enqueue(rewrite.scope_hash, rewrite.compact_rule.0, rewrite); - │ - └─ PendingTx::enqueue(scope_be32, rule_id, payload) - FILE: crates/warp-core/src/scheduler.rs - - CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS - index.get(&key) → Some(&i) - fat[thin[i].handle] = Some(payload) // Overwrite - thin[i].nonce = next_nonce++ // Refresh nonce - - CASE 2: New entry - fat.push(Some(payload)) - thin.push(RewriteThin { scope_be32, rule_id, nonce, handle }) - index.insert(key, thin.len() - 1) -``` - -### 3.3 PendingRewrite Structure - -**File:** `crates/warp-core/src/scheduler.rs-82` - -```rust -pub(crate) struct PendingRewrite { - pub rule_id: Hash, // 32-byte rule identifier - pub compact_rule: CompactRuleId, // u32 hot-path handle - pub scope_hash: Hash, // 32-byte ordering key - pub scope: NodeKey, // { warp_id, local_id } - pub footprint: Footprint, // Read/write declaration - pub phase: RewritePhase, // State machine: Matched → Reserved → ... -} -``` - ---- - -## 4. 
Scheduler: Drain & Reserve - -### 4.1 Drain Phase (Radix Sort) - -**Entry Point:** `RadixScheduler::drain_for_tx()` -**File:** `crates/warp-core/src/scheduler.rs-113` - -```rust -pub(crate) fn drain_for_tx(&mut self, tx: TxId) -> Vec<PendingRewrite> { - self.pending - .remove(&tx) - .map_or_else(Vec::new, |mut txq| txq.drain_in_order()) -} -``` - -**Complete Call Trace:** - -```text -RadixScheduler::drain_for_tx(tx) -│ -├─ self.pending.remove(&tx) → Option<PendingTx<PendingRewrite>> -│ -└─ PendingTx::drain_in_order() - FILE: crates/warp-core/src/scheduler.rs - │ - ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)? - │ ├─ YES: sort_unstable_by(cmp_thin) - │ │ Rust std comparison sort - │ │ - │ └─ NO: radix_sort() - │ FILE: crates/warp-core/src/scheduler.rs - │ - └─ radix_sort() - │ - ├─ Initialize scratch buffer: self.scratch.resize(n, default) - │ - ├─ Lazy allocate histogram: self.counts16 = vec![0u32; 65536] - │ - └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══ - │ - ├─ SELECT src/dst buffers (ping-pong) - │ flip = false: src=thin, dst=scratch - │ flip = true: src=scratch, dst=thin - │ - ├─ PHASE 1: COUNT BUCKETS - │ FOR r IN src: - │ b = bucket16(r, pass) - │ counts[b] += 1 - │ - ├─ PHASE 2: PREFIX SUMS - │ sum = 0 - │ FOR c IN counts: - │ t = *c - │ *c = sum - │ sum += t - │ - ├─ PHASE 3: STABLE SCATTER - │ FOR r IN src: - │ b = bucket16(r, pass) - │ dst[counts[b]] = r - │ counts[b] += 1 - │ - └─ flip = !flip - -BUCKET EXTRACTION (bucket16): -FILE: crates/warp-core/src/scheduler.rs - -Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2] -Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4] -Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2] -Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4] -Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32] -Pass 5: u16_be_from_pair32(scope, 14) // Scope bytes [28:30] -... 
-Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD) - -SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic -``` - -### 4.2 Reserve Phase (Independence Check) - -**Entry Point:** `RadixScheduler::reserve()` -**File:** `crates/warp-core/src/scheduler.rs-143` - -```rust -pub(crate) fn reserve(&mut self, tx: TxId, pr: &mut PendingRewrite) -> bool { - let active = self.active.entry(tx).or_insert_with(ActiveFootprints::new); - if Self::has_conflict(active, pr) { - return Self::on_conflict(pr); - } - Self::mark_all(active, pr); - Self::on_reserved(pr) -} -``` - -**Complete Call Trace:** - -```text -RadixScheduler::reserve(tx, pr) -│ -├─ self.active.entry(tx).or_insert_with(ActiveFootprints::new) -│ TYPE: HashMap -│ ActiveFootprints contains 7 GenSets: -│ - nodes_written: GenSet -│ - nodes_read: GenSet -│ - edges_written: GenSet -│ - edges_read: GenSet -│ - attachments_written: GenSet -│ - attachments_read: GenSet -│ - ports: GenSet -│ -├─ has_conflict(active, pr) → bool -│ FILE: crates/warp-core/src/scheduler.rs -│ │ -│ ├─ FOR node IN pr.footprint.n_write: -│ │ IF active.nodes_written.contains(node): return true // W-W conflict -│ │ IF active.nodes_read.contains(node): return true // W-R conflict -│ │ -│ ├─ FOR node IN pr.footprint.n_read: -│ │ IF active.nodes_written.contains(node): return true // R-W conflict -│ │ (R-R is allowed) -│ │ -│ ├─ FOR edge IN pr.footprint.e_write: -│ │ IF active.edges_written.contains(edge): return true -│ │ IF active.edges_read.contains(edge): return true -│ │ -│ ├─ FOR edge IN pr.footprint.e_read: -│ │ IF active.edges_written.contains(edge): return true -│ │ -│ ├─ FOR key IN pr.footprint.a_write: -│ │ IF active.attachments_written.contains(key): return true -│ │ IF active.attachments_read.contains(key): return true -│ │ -│ ├─ FOR key IN pr.footprint.a_read: -│ │ IF active.attachments_written.contains(key): return true -│ │ -│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out: -│ IF active.ports.contains(port): 
return true -│ -├─ IF conflict: -│ └─ on_conflict(pr) -│ FILE: crates/warp-core/src/scheduler.rs -│ pr.phase = RewritePhase::Aborted -│ return false -│ -├─ mark_all(active, pr) -│ FILE: crates/warp-core/src/scheduler.rs -│ │ -│ ├─ FOR node IN pr.footprint.n_write: -│ │ active.nodes_written.mark(NodeKey { warp_id, local_id: node }) -│ │ -│ ├─ FOR node IN pr.footprint.n_read: -│ │ active.nodes_read.mark(NodeKey { ... }) -│ │ -│ ├─ FOR edge IN pr.footprint.e_write: -│ │ active.edges_written.mark(EdgeKey { ... }) -│ │ -│ ├─ FOR edge IN pr.footprint.e_read: -│ │ active.edges_read.mark(EdgeKey { ... }) -│ │ -│ ├─ FOR key IN pr.footprint.a_write: -│ │ active.attachments_written.mark(key) -│ │ -│ ├─ FOR key IN pr.footprint.a_read: -│ │ active.attachments_read.mark(key) -│ │ -│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out: -│ active.ports.mark(port) -│ -└─ on_reserved(pr) - FILE: crates/warp-core/src/scheduler.rs - pr.phase = RewritePhase::Reserved - return true -``` - -### 4.3 GenSet: O(1) Conflict Detection - -**File:** `crates/warp-core/src/scheduler.rs-535` - -```rust -pub(crate) struct GenSet<K> { - gen: u32, // Current generation - seen: FxHashMap<K, u32>, // Key → generation when marked -} - -impl<K: Eq + Hash> GenSet<K> { - #[inline] - pub fn contains(&self, key: K) -> bool { - matches!(self.seen.get(&key), Some(&g) if g == self.gen) - } - - #[inline] - pub fn mark(&mut self, key: K) { - self.seen.insert(key, self.gen); - } -} -``` - -**Key Insight:** No clearing needed between transactions. Increment `gen` → all old entries become stale. - ---- - -## 5. 
BOAW Parallel Execution - -**Entry Point:** `execute_parallel()` -**File:** `crates/warp-core/src/boaw/exec.rs-83` - -### 5.1 Entry Point - -```rust -pub fn execute_parallel(view: GraphView<'_>, items: &[ExecItem], workers: usize) -> Vec<TickDelta> { - assert!(workers >= 1); - let capped_workers = workers.min(NUM_SHARDS); // Cap at 256 - - #[cfg(feature = "parallel-stride-fallback")] - if std::env::var("ECHO_PARALLEL_STRIDE").is_ok() { - return execute_parallel_stride(view, items, capped_workers); - } - - execute_parallel_sharded(view, items, capped_workers) // DEFAULT -} -``` - -### 5.2 Complete Call Trace - -```text -execute_parallel(view, items, workers) -│ -└─ execute_parallel_sharded(view, items, capped_workers) - FILE: crates/warp-core/src/boaw/exec.rs - │ - ├─ IF items.is_empty(): - │ return (0..workers).map(|_| TickDelta::new()).collect() - │ - ├─ partition_into_shards(items.to_vec()) → Vec<VirtualShard> - │ FILE: crates/warp-core/src/boaw/shard.rs - │ │ - │ ├─ Create 256 empty VirtualShard structures - │ │ - │ └─ FOR item IN items: - │ │ - │ ├─ shard_of(&item.scope) → usize - │ │ FILE: crates/warp-core/src/boaw/shard.rs - │ │ CODE: - │ │ let bytes = scope.as_bytes(); - │ │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap(); - │ │ let val = u64::from_le_bytes(first_8); - │ │ (val & 255) as usize // SHARD_MASK = 255 - │ │ - │ └─ shards[shard_id].items.push(item) - │ - ├─ let next_shard = AtomicUsize::new(0) - │ - └─ std::thread::scope(|s| { ... }) - FILE: Rust std (scoped threads) - │ - ├─ FOR _ IN 0..workers: - │ │ - │ └─ s.spawn(move || { ... 
}) // ═══ WORKER THREAD ═══ - │ │ - │ ├─ let mut delta = TickDelta::new() - │ │ FILE: crates/warp-core/src/tick_delta.rs - │ │ CREATES: { ops: Vec::new(), origins: Vec::new() } - │ │ - │ └─ LOOP: // Work-stealing loop - │ │ - │ ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed) - │ │ ATOMIC: Returns old value, increments counter - │ │ ORDERING: Relaxed (no synchronization cost) - │ │ - │ ├─ IF shard_id >= 256: break - │ │ - │ └─ FOR item IN &shards[shard_id].items: - │ │ - │ ├─ let mut scoped = delta.scoped(item.origin) - │ │ FILE: crates/warp-core/src/tick_delta.rs - │ │ CREATES: ScopedDelta { inner: &mut delta, origin, next_op_ix: 0 } - │ │ - │ └─ (item.exec)(view, &item.scope, scoped.inner_mut()) - │ │ - │ └─ INSIDE EXECUTOR: - │ scoped.emit(op) - │ FILE: crates/warp-core/src/tick_delta.rs - │ CODE: - │ origin.op_ix = self.next_op_ix; - │ self.next_op_ix += 1; - │ self.inner.emit_with_origin(op, origin); - │ │ - │ └─ TickDelta::emit_with_origin(op, origin) - │ FILE: crates/warp-core/src/tick_delta.rs - │ CODE: - │ self.ops.push(op); - │ self.origins.push(origin); // if delta_validate - │ - └─ COLLECT THREADS: - handles.into_iter().map(|h| h.join()).collect() - RETURNS: Vec (one per worker) -``` - -### 5.3 Enforced Execution Path - -**Entry Point:** `execute_item_enforced()` -**File:** `crates/warp-core/src/boaw/exec.rs` - -When footprint enforcement is active, each item is executed via `execute_item_enforced()` instead of a bare function-pointer call. Read access is enforced in-line by `GraphView`/`FootprintGuard` while the executor runs inside `catch_unwind`, and post-hoc `check_op()` validation is applied to newly-emitted ops. 
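The enforced path just described — snapshot the op count, run the executor under `catch_unwind`, then validate only the newly-appended ops, with footprint violations taking precedence over executor panics — can be sketched as a minimal, self-contained Rust model. `Op`, `Guard`, and `execute_enforced` below are hypothetical stand-ins for `WarpOp`, `FootprintGuard`, and `execute_item_enforced`, not the engine's real types:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// Hypothetical stand-ins for WarpOp / FootprintGuard (simplified, not engine types).
#[derive(Debug, PartialEq)]
enum Op {
    Upsert(u32),
    Delete(u32),
}

struct Guard {
    allowed: Vec<u32>, // ids the declared footprint permits writing
}

impl Guard {
    // Post-hoc check: an emitted op must target a declared id.
    fn check_op(&self, op: &Op) -> bool {
        let id = match op {
            Op::Upsert(id) | Op::Delete(id) => *id,
        };
        self.allowed.contains(&id)
    }
}

// Snapshot the op count, run the executor under catch_unwind, then validate
// only the ops appended during this call. A footprint violation poisons the
// result even if the executor also panicked (violation takes precedence).
fn execute_enforced<F: FnMut(&mut Vec<Op>)>(
    guard: &Guard,
    delta: &mut Vec<Op>,
    mut exec: F,
) -> Result<(), &'static str> {
    let ops_before = delta.len();
    let panicked = catch_unwind(AssertUnwindSafe(|| exec(&mut *delta))).is_err();
    if delta[ops_before..].iter().any(|op| !guard.check_op(op)) {
        return Err("poisoned: footprint violation");
    }
    if panicked {
        return Err("poisoned: executor panic");
    }
    Ok(())
}
```

A poisoned `Err` here plays the role of `PoisonedDelta`: the caller discards the delta instead of merging it.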
- -**Signature (anchor):** - -```rust -fn execute_item_enforced( - store: &GraphStore, - item: &ExecItem, - idx: usize, - unit: &WorkUnit, - delta: TickDelta, -) -> Result<TickDelta, PoisonedDelta> -``` - -**Guard Check (anchor):** -**File:** `crates/warp-core/src/footprint_guard.rs` - -```rust -impl FootprintGuard { - pub(crate) fn check_op(&self, op: &WarpOp) -} -``` - -```text -execute_item_enforced(store, item, idx, unit, delta) -│ -├─ guard = unit.guards[idx] -├─ view = GraphView::new_guarded(store, guard) -│ -├─ ops_before = delta.len() -│ Snapshot the op count BEFORE the executor runs -│ -├─ let mut scoped = delta.scoped(item.origin) -│ Wrap delta with origin tracking (mutable binding required) -│ -├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| { -│ (item.exec)(view, &item.scope, scoped.inner_mut()) -│ })) -│ Pass the inner mutable accessor to the executor, not the scoped wrapper -│ -├─ FOR op IN delta.ops_ref()[ops_before..]: -│ guard.check_op(op) → panic_any(FootprintViolation) -│ Validates that each newly-emitted op falls within the declared footprint. -│ ExecItemKind::System items may emit warp-instance-level ops; -│ ExecItemKind::User items may not. -│ -└─ OUTCOME PRECEDENCE: - ├─ IF check_op fails: - │ return Err(PoisonedDelta) - │ Write violations OVERRIDE executor panics — violation takes precedence. - │ - ├─ IF footprint is clean BUT executor panicked: - │ return Err(PoisonedDelta) - │ The original panic propagates to the caller. - │ - └─ IF both clean: - return Ok(delta) -``` - -**Poison Safety (type-level):** `execute_item_enforced` returns `Result<TickDelta, PoisonedDelta>`, -and `merge_deltas` consumes `Vec<Result<TickDelta, PoisonedDelta>>`. Poisoned deltas are never -merged or committed; they are dropped and their panic payload is re-thrown at the engine layer. - -#### 5.3.1 Cross-Warp Enforcement Policy - -`check_op()` rejects cross-warp writes: any op must target the executor’s `scope.warp_id`. Violations -surface as `FootprintViolation` with `ViolationKind::CrossWarpEmission`. 
Exception: `ExecItemKind::System` may emit -warp-instance-level ops (`OpenPortal`, `UpsertWarpInstance`, `DeleteWarpInstance`) for authorized -instance lifecycle changes. **TODO (Phase 7):** allow portal-based cross-warp permissions with -explicit footprint allowlists. - -**Warp-instance-level ops:** Operations that modify multiverse topology (e.g., `OpenPortal`, -`UpsertWarpInstance`, `DeleteWarpInstance` from Section 6.2). They are enforced via `ExecItemKind`: -`User` items attempting these ops produce a `FootprintViolation` with -`ViolationKind::UnauthorizedInstanceOp`. There are no additional op categories beyond -warp-instance-level vs normal graph ops. - -**Panic Recovery & Tick Semantics:** Worker threads run under `std::thread::scope`. A panic or -`FootprintViolation` from `execute_item_enforced` produces a poisoned `TickDelta` that is never -merged; `execute_parallel` propagates the panic when the worker results are joined. Any worker -panic aborts the parallel execution. The caller observes the panic, the tick does not commit, and -any partial delta stays on the worker stack and is dropped. Callers that catch the panic should -invoke `Engine::abort` to roll back the transaction. - -### 5.4 ExecItem Structure - -**File:** `crates/warp-core/src/boaw/exec.rs-35` - -```rust -#[derive(Clone, Copy)] -pub struct ExecItem { - pub exec: ExecuteFn, // fn(GraphView, &NodeId, &mut TickDelta) - pub scope: NodeId, // 32-byte node identifier - pub origin: OpOrigin, // { intent_id, rule_id, match_ix, op_ix } - - // Private field, present only in enforcement builds: - #[cfg(any(debug_assertions, feature = "footprint_enforce_release"))] - #[cfg(not(feature = "unsafe_graph"))] - kind: ExecItemKind, -} -``` - -**`ExecItemKind` (cfg-gated):** - -**Enum (anchor):** - -```rust -enum ExecItemKind { - User, - System, -} -``` - -- `ExecItemKind::User` — Normal rule executor. May emit node/edge/attachment ops scoped to the declared footprint. 
Cannot emit warp-instance-level ops (`UpsertWarpInstance`, `DeleteWarpInstance`, `OpenPortal`). -- `ExecItemKind::System` — Internal-only executor (e.g., portal opening). May emit warp-instance-level ops. - -`ExecItem::new()` always creates `User` items. System items are constructed only by internal engine -code via `ExecItem::new_system(exec: ExecuteFn, scope: NodeId, origin: OpOrigin)` when a rule is -registered as `is_system`. The constructor is only compiled when -`debug_assertions || footprint_enforce_release` (and not `unsafe_graph`), so plain release builds -fall back to `ExecItem::new()` even for system rules. - -**The triple cfg-gate pattern:** The `kind` field (and all enforcement logic) is guarded by: - -1. `#[cfg(any(debug_assertions, feature = "footprint_enforce_release"))]` — active in debug builds or when the release enforcement feature is opted-in. -2. `#[cfg(not(feature = "unsafe_graph"))]` — disabled when the escape-hatch feature is set (for benchmarks/fuzzing that intentionally bypass checks). - -This means enforcement is always-on in dev/test, opt-in for release, and explicitly removable for -unsafe experimentation. A compile-time guard in `lib.rs` rejects builds that enable both -`footprint_enforce_release` and `unsafe_graph`. - -### 5.5 Thread Safety - -| Type | Safety | Reason | -| ------------- | --------------------- | ----------------------------------- | -| `GraphView` | `Sync + Send + Clone` | Read-only snapshot | -| `ExecItem` | `Sync + Send + Copy` | Function pointer + primitives | -| `TickDelta` | Per-worker exclusive | Poisoned deltas must be discarded | -| `AtomicUsize` | Lock-free | `fetch_add` with `Relaxed` ordering | - -**Note:** `ExecItem` stays `Copy` because `ExecItemKind` is `Copy` when present; the cfg-gated -field does not change its `Send`/`Sync` bounds. - ---- - -## 6. 
Delta Merge & State Finalization - -### 6.1 Canonical Merge - -**Entry Point:** `merge_deltas()` -**File:** `crates/warp-core/src/boaw/merge.rs-75` - -```text -merge_deltas(deltas: Vec<Result<TickDelta, PoisonedDelta>>) → Result<Vec<WarpOp>, MergeError> -│ -├─[1] FLATTEN ALL OPS WITH ORIGINS -│ let mut flat: Vec<(WarpOpKey, OpOrigin, WarpOp)> = Vec::new(); -│ FOR d IN deltas: -│ IF d is Err(PoisonedDelta): return Err(MergeError::PoisonedDelta) -│ let (ops, origins) = d.into_parts_unsorted(); -│ FOR (op, origin) IN ops.zip(origins): -│ flat.push((op.sort_key(), origin, op)); -│ -├─[2] CANONICAL SORT -│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1))); -│ ORDER: (WarpOpKey, OpOrigin) lexicographic -│ -└─[3] DEDUPE & CONFLICT DETECTION - let mut out = Vec::new(); - let mut i = 0; - WHILE i < flat.len(): - │ - ├─ GROUP by WarpOpKey - │ key = flat[i].0 - │ start = i - │ WHILE i < flat.len() && flat[i].0 == key: i++ - │ - ├─ CHECK if all ops identical - │ first = &flat[start].2 - │ all_same = flat[start+1..i].iter().all(|(_, _, op)| op == first) - │ - └─ IF all_same: - out.push(first.clone()) // Accept one copy - ELSE: - writers = flat[start..i].iter().map(|(_, o, _)| *o).collect() - return Err(MergeError::Conflict(Box::new(MergeConflict { key, writers }))) // CONFLICT! - - return Ok(out) -``` - -### 6.2 WarpOp Sort Key - -**File:** `crates/warp-core/src/tick_patch.rs-287` - -```rust -pub(crate) fn sort_key(&self) -> WarpOpKey { - match self { - Self::OpenPortal { .. } => WarpOpKey { kind: 1, ... }, - Self::UpsertWarpInstance { .. } => WarpOpKey { kind: 2, ... }, - Self::DeleteWarpInstance { .. } => WarpOpKey { kind: 3, ... }, - Self::DeleteEdge { .. } => WarpOpKey { kind: 4, ... }, // Delete before upsert - Self::DeleteNode { .. } => WarpOpKey { kind: 5, ... }, - Self::UpsertNode { .. } => WarpOpKey { kind: 6, ... }, - Self::UpsertEdge { .. } => WarpOpKey { kind: 7, ... }, - Self::SetAttachment { .. } => WarpOpKey { kind: 8, ... }, // Last - } -} -``` - -**Canonical Order:** - -1. 
OpenPortal (creates child instances) -2. UpsertWarpInstance -3. DeleteWarpInstance -4. DeleteEdge (delete before upsert) -5. DeleteNode (delete before upsert) -6. UpsertNode -7. UpsertEdge -8. SetAttachment (after skeleton exists) - -### 6.3 State Mutation Methods - -**File:** `crates/warp-core/src/graph.rs` - -```text -GraphStore::insert_node(id, record) - LINE: 175-177 - CODE: self.nodes.insert(id, record) - -GraphStore::upsert_edge_record(from, edge) - LINE: 196-261 - UPDATES: - - self.edge_index.insert(edge_id, from) - - self.edge_to_index.insert(edge_id, to) - - Remove old edge from previous bucket if exists - - self.edges_from.entry(from).or_default().push(edge) - - self.edges_to.entry(to).or_default().push(edge_id) - -GraphStore::delete_node_isolated(node) -> Result<(), DeleteNodeError> - LINE: 393-418 - REJECTS if node has incident edges (no cascade!) - ALLOWED MINI-CASCADE: - - Remove from self.nodes - - Remove node alpha attachment (key is derivable) - - > NOTE: `delete_node_cascade` still exists but is INTERNAL. - > WarpOp::DeleteNode uses `delete_node_isolated` to ensure - > all mutations are explicit in the delta. - -GraphStore::delete_edge_exact(from, edge_id) - LINE: 360-412 - VALIDATES: edge is in correct "from" bucket - REMOVES: - - From edges_from bucket - - From edge_index - - From edge_to_index - - From edges_to bucket - - Edge attachment - -GraphStore::set_node_attachment(id, value) - LINE: 125-134 - CODE: - None → self.node_attachments.remove(&id) - Some(v) → self.node_attachments.insert(id, v) - -GraphStore::set_edge_attachment(id, value) - LINE: 163-172 - Same pattern as node attachments -``` - ---- - -## 7. 
Hash Computation - -### 7.1 State Root - -**Entry Point:** `compute_state_root()` -**File:** `crates/warp-core/src/snapshot.rs-209` - -```text -compute_state_root(state: &WarpState, root: &NodeKey) → Hash -│ -├─[1] BFS REACHABILITY TRAVERSAL -│ │ -│ ├─ Initialize: -│ │ reachable_nodes: BTreeSet = { root } -│ │ reachable_warps: BTreeSet = { root.warp_id } -│ │ queue: VecDeque = [ root ] -│ │ -│ └─ WHILE let Some(current) = queue.pop_front(): -│ │ -│ ├─ store = state.store(&current.warp_id) -│ │ -│ ├─ FOR edge IN store.edges_from(&current.local_id): -│ │ ├─ to = NodeKey { warp_id: current.warp_id, local_id: edge.to } -│ │ ├─ IF reachable_nodes.insert(to): queue.push_back(to) -│ │ │ -│ │ └─ IF edge has Descend(child_warp) attachment: -│ │ └─ enqueue_descend(state, child_warp, ...) -│ │ Adds child instance root to queue -│ │ -│ └─ IF current node has Descend(child_warp) attachment: -│ enqueue_descend(state, child_warp, ...) -│ -├─[2] HASHING PHASE -│ │ -│ ├─ let mut hasher = Hasher::new() // BLAKE3 -│ │ -│ ├─ HASH ROOT BINDING: -│ │ hasher.update(&root.warp_id.0) // 32 bytes -│ │ hasher.update(&root.local_id.0) // 32 bytes -│ │ -│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order -│ │ -│ ├─ HASH INSTANCE HEADER: -│ │ hasher.update(&instance.warp_id.0) // 32 bytes -│ │ hasher.update(&instance.root_node.0) // 32 bytes -│ │ hash_attachment_key_opt(&mut hasher, instance.parent.as_ref()) -│ │ -│ ├─ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted -│ │ IF reachable_nodes.contains(&NodeKey { warp_id, local_id: node_id }): -│ │ hasher.update(&node_id.0) // 32 bytes -│ │ hasher.update(&node.ty.0) // 32 bytes -│ │ hash_attachment_value_opt(&mut hasher, store.node_attachment(node_id)) -│ │ -│ └─ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted -│ IF from is reachable: -│ sorted_edges = edges.filter(reachable).sort_by(|a,b| a.id.cmp(b.id)) -│ hasher.update(&from.0) // 32 bytes -│ hasher.update(&(sorted_edges.len() as u64).to_le_bytes()) // 8 bytes -│ FOR edge IN 
sorted_edges: -│ hasher.update(&edge.id.0) // 32 bytes -│ hasher.update(&edge.ty.0) // 32 bytes -│ hasher.update(&edge.to.0) // 32 bytes -│ hash_attachment_value_opt(&mut hasher, store.edge_attachment(&edge.id)) -│ -└─ hasher.finalize().into() // → [u8; 32] -``` - -### 7.2 Commit Hash v2 - -**Entry Point:** `compute_commit_hash_v2()` -**File:** `crates/warp-core/src/snapshot.rs-263` - -```rust -pub(crate) fn compute_commit_hash_v2( - state_root: &Hash, - parents: &[Hash], - patch_digest: &Hash, - policy_id: u32, -) -> Hash { - let mut h = Hasher::new(); - h.update(&2u16.to_le_bytes()); // Version tag (2 bytes) - h.update(&(parents.len() as u64).to_le_bytes()); // Parent count (8 bytes) - for p in parents { - h.update(p); // Each parent (32 bytes) - } - h.update(state_root); // Graph hash (32 bytes) - h.update(patch_digest); // Ops hash (32 bytes) - h.update(&policy_id.to_le_bytes()); // Policy (4 bytes) - h.finalize().into() -} -``` - -**Byte Layout:** - -```text -Offset Size Field -0 2 version_tag (0x02 0x00) -2 8 parent_count (u64 LE) -10 32*N parents[] (N parent hashes) -10+32N 32 state_root -42+32N 32 patch_digest -74+32N 4 policy_id (u32 LE) -───────────────────────────────────── -TOTAL: 78 + 32*N bytes → BLAKE3 → 32-byte hash -``` - -### 7.3 Patch Digest - -**Entry Point:** `compute_patch_digest_v2()` -**File:** `crates/warp-core/src/tick_patch.rs-774` - -```rust -fn compute_patch_digest_v2( - policy_id: u32, - rule_pack_id: &ContentHash, - commit_status: TickCommitStatus, - in_slots: &[SlotId], - out_slots: &[SlotId], - ops: &[WarpOp], -) -> ContentHash { - let mut h = Hasher::new(); - h.update(&2u16.to_le_bytes()); // Format version - h.update(&policy_id.to_le_bytes()); // 4 bytes - h.update(rule_pack_id); // 32 bytes - h.update(&[commit_status.code()]); // 1 byte - encode_slots(&mut h, in_slots); - encode_slots(&mut h, out_slots); - encode_ops(&mut h, ops); - h.finalize().into() -} -``` - ---- - -## 8. 
Commit Orchestration - -**Entry Point:** `Engine::commit_with_receipt()` -**File:** `crates/warp-core/src/engine_impl.rs-954` - -### 8.1 Complete Call Trace - -```text -Engine::commit_with_receipt(tx) → Result<(Snapshot, TickReceipt, WarpTickPatchV1), EngineError> -│ -├─[1] VALIDATE TRANSACTION -│ IF tx.value() == 0 || !self.live_txs.contains(&tx.value()): -│ return Err(EngineError::UnknownTx) -│ -├─[2] DRAIN CANDIDATES -│ policy_id = self.policy_id // Line 844 -│ rule_pack_id = self.compute_rule_pack_id() // Line 845 -│ │ -│ ├─ compute_rule_pack_id() -│ │ FILE: engine_impl.rs -│ │ CODE: -│ │ ids = self.rules.values().map(|r| r.id).collect() -│ │ ids.sort_unstable(); ids.dedup() -│ │ hasher.update(&1u16.to_le_bytes()) // version -│ │ hasher.update(&(ids.len() as u64).to_le_bytes()) -│ │ FOR id IN ids: hasher.update(&id) -│ │ hasher.finalize().into() -│ │ -│ drained = self.scheduler.drain_for_tx(tx) // Line 847 -│ plan_digest = compute_plan_digest(&drained) // Line 848 -│ -├─[3] RESERVE (INDEPENDENCE CHECK) -│ ReserveOutcome { receipt, reserved, in_slots, out_slots } -│ = self.reserve_for_receipt(tx, drained)? // Line 850-855 -│ │ -│ └─ reserve_for_receipt(tx, drained) -│ FILE: engine_impl.rs -│ │ -│ FOR rewrite IN drained (canonical order): -│ │ -│ ├─ accepted = self.scheduler.reserve(tx, &mut rewrite) -│ │ -│ ├─ IF !accepted: -│ │ blockers = find_blocking_rewrites(reserved, &rewrite) -│ │ -│ ├─ receipt_entries.push(TickReceiptEntry { ... }) -│ │ -│ └─ IF accepted: -│ reserved.push(rewrite) -│ extend_slots_from_footprint(&mut in_slots, &mut out_slots, ...) -│ │ -│ return ReserveOutcome { receipt, reserved, in_slots, out_slots } -│ -│ rewrites_digest = compute_rewrites_digest(&reserved_rewrites) // Line 858 -│ -├─[4] EXECUTE (PHASE 5 BOAW) -│ state_before = self.state.clone() // Line 862 -│ delta_ops = self.apply_reserved_rewrites(reserved, &state_before)? 
-│ │ -│ └─ apply_reserved_rewrites(rewrites, state_before) -│ FILE: engine_impl.rs -│ │ -│ ├─ let mut delta = TickDelta::new() -│ │ -│ ├─ FOR rewrite IN rewrites: -│ │ executor = self.rule_by_compact(rewrite.compact_rule).executor -│ │ view = GraphView::new(self.state.store(&rewrite.scope.warp_id)) -│ │ (executor)(view, &rewrite.scope.local_id, &mut delta) -│ │ -│ ├─ let ops = delta.finalize() // Canonical sort -│ │ -│ ├─ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops) -│ │ patch.apply_to_state(&mut self.state)? -│ │ -│ └─ [delta_validate]: assert_delta_matches_diff(&ops, &diff_ops) -│ -├─[5] MATERIALIZE -│ mat_report = self.bus.finalize() // Line 884 -│ self.last_materialization = mat_report.channels -│ self.last_materialization_errors = mat_report.errors -│ -├─[6] COMPUTE DELTA PATCH -│ ops = diff_state(&state_before, &self.state) // Line 889 -│ │ -│ └─ diff_state(before, after) -│ FILE: tick_patch.rs -│ - Canonicalize portal authoring (OpenPortal) -│ - Diff instances (delete/upsert) -│ - Diff nodes, edges, attachments -│ - Sort by WarpOp::sort_key() -│ │ -│ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops) -│ patch_digest = patch.digest() // Line 898 -│ -├─[7] COMPUTE STATE ROOT -│ state_root = compute_state_root(&self.state, &self.current_root) // Line 900 -│ -├─[8] GET PARENTS -│ parents = self.last_snapshot.as_ref().map(|s| vec![s.hash]).unwrap_or_default() -│ -├─[9] COMPUTE DECISION DIGEST -│ decision_digest = receipt.digest() // Line 929 -│ -├─[10] COMPUTE COMMIT HASH -│ hash = compute_commit_hash_v2(&state_root, &parents, &patch_digest, policy_id) -│ -├─[11] BUILD SNAPSHOT -│ snapshot = Snapshot { -│ root: self.current_root, -│ hash, // commit_id v2 -│ parents, -│ plan_digest, // Diagnostic -│ decision_digest, // Diagnostic -│ rewrites_digest, // Diagnostic -│ patch_digest, // COMMITTED -│ policy_id, // COMMITTED -│ tx, -│ } -│ -├─[12] RECORD TO HISTORY -│ self.last_snapshot = Some(snapshot.clone()) // Line 947 -│ 
self.tick_history.push((snapshot, receipt, patch)) // Line 948-949 -│ self.live_txs.remove(&tx.value()) // Line 951 -│ self.scheduler.finalize_tx(tx) // Line 952 -│ -└─[13] RETURN - Ok((snapshot, receipt, patch)) -``` - -### 8.2 Commit Hash Inputs - -| Input | Committed? | Purpose | -| ----------------- | ---------- | ------------------------- | -| `state_root` | ✓ | What the graph looks like | -| `patch_digest` | ✓ | How we got here (ops) | -| `parents` | ✓ | Chain continuity | -| `policy_id` | ✓ | Aion policy version | -| `plan_digest` | ✗ | Diagnostic only | -| `decision_digest` | ✗ | Diagnostic only | -| `rewrites_digest` | ✗ | Diagnostic only | - ---- - -## 9. Complete Call Graph - -### 9.1 Full Journey: Intent → Commit - -```text -USER ACTION - │ - ▼ -Engine::ingest_intent(intent_bytes) - ├─ compute_intent_id() // BLAKE3 content hash - ├─ make_node_id(), make_type_id() // Structural IDs - ├─ store.insert_node() // Create event node - ├─ store.set_node_attachment() // Attach intent payload - └─ store.insert_edge() // Pending edge to inbox - │ - ▼ -Engine::begin() → TxId - ├─ tx_counter.wrapping_add(1) - ├─ live_txs.insert(tx_counter) - └─ TxId::from_raw(tx_counter) - │ - ▼ -Engine::dispatch_next_intent(tx) // (or manual apply) - │ - ▼ -Engine::apply(tx, rule_name, scope) - └─ Engine::apply_in_warp(tx, warp_id, rule_name, scope, &[]) - ├─ rules.get(rule_name) // Lookup rule - ├─ GraphView::new(store) // Read-only view - ├─ (rule.matcher)(view, scope) // Match check - ├─ scope_hash() // BLAKE3 ordering key - ├─ (rule.compute_footprint)(view, scope) // Footprint - └─ scheduler.enqueue(tx, PendingRewrite) - └─ PendingTx::enqueue() // Last-wins dedup - │ - ▼ -Engine::commit_with_receipt(tx) - │ - ├─[DRAIN] - │ scheduler.drain_for_tx(tx) - │ └─ PendingTx::drain_in_order() - │ └─ radix_sort() or sort_unstable_by() - │ 20-pass LSD radix sort - │ ORDER: (scope_hash, rule_id, nonce) - │ - ├─[RESERVE] - │ FOR rewrite IN drained: - │ scheduler.reserve(tx, &mut rewrite) - 
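The drain step's 20-pass LSD radix sort can be reduced to a runnable sketch. The real key is the 40-byte `(scope_hash, rule_id, nonce)` tuple, covered by 20 passes of 16-bit buckets; this simplified version (an illustration, not the crate's code) sorts plain `u32` keys with the same count / prefix-sum / stable-scatter phases in two passes.

```rust
// Simplified LSD radix sort with 16-bit buckets and a ping-pong scratch
// buffer, mirroring the count → prefix-sum → stable-scatter structure
// described for RadixScheduler::drain_for_tx.
fn radix_sort_u32(v: &mut Vec<u32>) {
    let n = v.len();
    let mut scratch = vec![0u32; n];
    for pass in 0..2 {
        let shift = 16 * pass;
        // Phase 1: count bucket occupancy.
        let mut counts = vec![0u32; 1 << 16];
        for &x in v.iter() {
            counts[((x >> shift) & 0xFFFF) as usize] += 1;
        }
        // Phase 2: exclusive prefix sums give each bucket's start offset.
        let mut sum = 0u32;
        for c in counts.iter_mut() {
            let t = *c;
            *c = sum;
            sum += t;
        }
        // Phase 3: stable scatter into the scratch buffer, then ping-pong.
        for &x in v.iter() {
            let b = ((x >> shift) & 0xFFFF) as usize;
            scratch[counts[b] as usize] = x;
            counts[b] += 1;
        }
        std::mem::swap(v, &mut scratch);
    }
}

fn main() {
    let mut v = vec![0xDEAD_BEEF, 7, 0xFFFF_0001, 42, 7];
    radix_sort_u32(&mut v);
    assert_eq!(v, vec![7, 7, 42, 0xDEAD_BEEF, 0xFFFF_0001]);
}
```

Because each pass is a stable scatter, later (more significant) passes preserve the order established by earlier ones, which is what makes the composite `(scope_hash, rule_id, nonce)` ordering deterministic.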
│ ├─ has_conflict(active, pr) - │ │ └─ GenSet::contains() × N // O(1) per check - │ └─ mark_all(active, pr) - │ └─ GenSet::mark() × M // O(1) per mark - │ - ├─[EXECUTE] - │ apply_reserved_rewrites(reserved, state_before) - │ FOR rewrite IN reserved: - │ (executor)(view, &scope, &mut delta) - │ └─ scoped.emit(op) - │ └─ delta.emit_with_origin(op, origin) - │ delta.finalize() // Sort ops - │ patch.apply_to_state(&mut self.state) - │ - ├─[MATERIALIZE] - │ bus.finalize() - │ - ├─[DELTA PATCH] - │ diff_state(&state_before, &self.state) - │ └─ Sort by WarpOp::sort_key() - │ WarpTickPatchV1::new(...) - │ └─ compute_patch_digest_v2() - │ - ├─[HASHES] - │ compute_state_root(&self.state, &self.current_root) - │ ├─ BFS reachability - │ └─ BLAKE3 over canonical encoding - │ compute_commit_hash_v2(state_root, parents, patch_digest, policy_id) - │ └─ BLAKE3(version || parents || state_root || patch_digest || policy_id) - │ - ├─[SNAPSHOT] - │ Snapshot { root, hash, parents, digests..., policy_id, tx } - │ - └─[RECORD] - tick_history.push((snapshot, receipt, patch)) - live_txs.remove(&tx.value()) - scheduler.finalize_tx(tx) - │ - ▼ -RETURN: (Snapshot, TickReceipt, WarpTickPatchV1) -``` - -### 9.2 File Index - -| Component | Primary File | Key Lines | -| ------------------- | ---------------- | ---------------- | -| Intent Ingestion | `engine_impl.rs` | 1216-1281 | -| Identity Hashing | `ident.rs` | 85-109 | -| Transaction Begin | `engine_impl.rs` | 711-719 | -| Rule Apply | `engine_impl.rs` | 730-806 | -| Footprint | `footprint.rs` | 131-152 | -| Scheduler Enqueue | `scheduler.rs` | 102-105, 331-355 | -| Radix Sort | `scheduler.rs` | 360-413, 481-498 | -| Reserve/Conflict | `scheduler.rs` | 134-278 | -| GenSet | `scheduler.rs` | 509-535 | -| BOAW Execute | `boaw/exec.rs` | 61-152 | -| Shard Routing | `boaw/shard.rs` | 82-120 | -| Delta Merge | `boaw/merge.rs` | 36-75 | -| TickDelta | `tick_delta.rs` | 38-172 | -| WarpOp Sort Key | `tick_patch.rs` | 207-287 | -| State Mutations | 
`graph.rs` | 175-412 | -| Patch Apply | `tick_patch.rs` | 434-561 | -| Diff State | `tick_patch.rs` | 979-1069 | -| State Root Hash | `snapshot.rs` | 88-209 | -| Commit Hash v2 | `snapshot.rs` | 244-263 | -| Patch Digest | `tick_patch.rs` | 755-774 | -| Commit Orchestrator | `engine_impl.rs` | 837-954 | - ---- - -## Appendix A: Complexity Summary - -| Operation | Complexity | Notes | -| ------------------------ | ---------- | ---------------------------------- | -| `ingest_intent` | O(1) | Fixed structural insertions | -| `begin` | O(1) | Counter increment + set insert | -| `apply` | O(m) | m = footprint size | -| `drain_for_tx` (radix) | O(n) | n = candidates, 20 passes | -| `reserve` per rewrite | O(m) | m = footprint size, O(1) per check | -| `execute_parallel` | O(n/w) | n = items, w = workers | -| `merge_deltas` | O(k log k) | k = total ops (sort + dedup) | -| `compute_state_root` | O(V + E) | V = nodes, E = edges | -| `compute_commit_hash_v2` | O(P) | P = parents | - ---- - -## Appendix B: Determinism Boundaries - -### Guaranteed Deterministic - -- Radix sort ordering (20-pass LSD) -- BTreeMap/BTreeSet iteration -- BLAKE3 hashing -- GenSet conflict detection -- Canonical merge deduplication - -### Intentionally Non-Deterministic (Handled by Merge) - -- Worker execution order in BOAW -- Shard claim order (atomic counter) - -### Protocol Constants (Frozen) - -- `NUM_SHARDS = 256` -- `SHARD_MASK = 255` -- Shard routing: `LE_u64(node_id[0..8]) & 255` -- Commit hash v2 version tag: `0x02 0x00` - ---- - -_Document generated 2026-01-25. 
File paths are accurate as of this date; line numbers are intentionally omitted._ diff --git a/docs/archive/study/echo-tour-de-code.pdf b/docs/archive/study/echo-tour-de-code.pdf deleted file mode 100644 index a32b911f..00000000 Binary files a/docs/archive/study/echo-tour-de-code.pdf and /dev/null differ diff --git a/docs/archive/study/echo-tour-de-code.tex b/docs/archive/study/echo-tour-de-code.tex deleted file mode 100644 index 0c317858..00000000 --- a/docs/archive/study/echo-tour-de-code.tex +++ /dev/null @@ -1,1560 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -% Options for packages loaded elsewhere -\PassOptionsToPackage{unicode}{hyperref} -\PassOptionsToPackage{hyphens}{url} -\documentclass[ -]{book} -\usepackage[letterpaper, margin=1in]{geometry} -\usepackage{xcolor} -\usepackage{amsmath,amssymb} -\setcounter{secnumdepth}{-\maxdimen} % remove section numbering -\usepackage{iftex} -\ifPDFTeX - \usepackage[T1]{fontenc} - \usepackage[utf8]{inputenc} - \usepackage{textcomp} % provide euro and other symbols -\else % if luatex or xetex - \usepackage{unicode-math} % this also loads fontspec - \defaultfontfeatures{Scale=MatchLowercase} - \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} -\fi -\usepackage{lmodern} -\ifPDFTeX\else - % xetex/luatex font selection -\fi -% Use upquote if available, for straight quotes in verbatim environments -\IfFileExists{upquote.sty}{\usepackage{upquote}}{} -\IfFileExists{microtype.sty}{% use microtype if available - \usepackage[]{microtype} - \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts -}{} -\makeatletter -\@ifundefined{KOMAClassName}{% if non-KOMA class - \IfFileExists{parskip.sty}{% - \usepackage{parskip} - }{% else - \setlength{\parindent}{0pt} - \setlength{\parskip}{6pt plus 2pt minus 1pt}} -}{% if KOMA class - \KOMAoptions{parskip=half}} -\makeatother -\usepackage{color} -\usepackage{fancyvrb} -\newcommand{\VerbBar}{|} 
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} -% Add ',fontsize=\small' for more characters per line -\newenvironment{Shaded}{}{} -\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}} -\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}} -\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}} -\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}} -\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}} -\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}} -\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\ExtensionTok}[1]{#1} -\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}} -\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}} -\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\NormalTok}[1]{#1} -\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}} -\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}} -\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}} -\newcommand{\RegionMarkerTok}[1]{#1} -\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} 
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}} -\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}} -\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\usepackage{longtable,booktabs,array} -\newcounter{none} % for unnumbered tables -\usepackage{calc} % for calculating minipage widths -% Correct order of tables after \paragraph or \subparagraph -\usepackage{etoolbox} -\makeatletter -\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} -\makeatother -% Allow footnotes in longtable head/foot -\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} -\makesavenoteenv{longtable} -\setlength{\emergencystretch}{3em} % prevent overfull lines -\providecommand{\tightlist}{% - \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} -\usepackage{bookmark} -\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available -\urlstyle{same} -\hypersetup{ - hidelinks, - pdfcreator={LaTeX via pandoc}} - -\author{} -\date{} - -\begin{document} -\frontmatter - -\mainmatter -\chapter{Echo: Tour de Code}\label{echo-tour-de-code} - -\begin{quote} -\textbf{The complete function-by-function trace of Echo's execution -pipeline.} - -This document traces EVERY function call involved in processing a user -action through the Echo engine. References use \textbf{symbol names} -(functions, structs) rather than line numbers to reduce maintenance burden. -Run \texttt{scripts/validate-tour-refs.sh} to verify all referenced symbols -still exist in the codebase. 
-\end{quote} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Table of Contents}\label{table-of-contents} - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - \hyperref[1-intent-ingestion]{Intent Ingestion} -\item - \hyperref[2-transaction-lifecycle]{Transaction Lifecycle} -\item - \hyperref[3-rule-matching]{Rule Matching} -\item - \hyperref[4-scheduler-drain--reserve]{Scheduler: Drain \& Reserve} -\item - \hyperref[5-boaw-parallel-execution]{BOAW Parallel Execution} -\item - \hyperref[6-delta-merge--state-finalization]{Delta Merge \& State - Finalization} -\item - \hyperref[7-hash-computation]{Hash Computation} -\item - \hyperref[8-commit-orchestration]{Commit Orchestration} -\item - \hyperref[9-complete-call-graph]{Complete Call Graph} -\end{enumerate} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{1. Intent Ingestion}\label{intent-ingestion} - -\textbf{Entry Point:} \texttt{Engine::ingest\_intent()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs:1216} - -\subsection{1.1 Function Signature}\label{function-signature} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ ingest\_intent(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ intent\_bytes}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\DataTypeTok{u8}\NormalTok{]) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{IngestDisposition}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}} -\end{Highlighting} -\end{Shaded} - -\textbf{Returns:} - -\texttt{IngestDisposition::Accepted\ \{\ intent\_id:\ Hash\ \}} --- New -intent accepted - -\texttt{IngestDisposition::Duplicate\ \{\ intent\_id:\ Hash\ \}} --- -Already ingested - -\subsection{1.2 Complete Call Trace}\label{complete-call-trace} - -\begin{verbatim} -Engine::ingest_intent(intent_bytes: &[u8]) -│ -├─[1] compute_intent_id(intent_bytes) → Hash -│ FILE: 
crates/warp-core/src/inbox.rs:205 -│ CODE: -│ let mut hasher = blake3::Hasher::new(); -│ hasher.update(b"intent:"); // Domain separation -│ hasher.update(intent_bytes); -│ hasher.finalize().into() // → [u8; 32] -│ -├─[2] NodeId(intent_id) -│ Creates strongly-typed NodeId from Hash -│ -├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore> -│ FILE: crates/warp-core/src/engine_impl.rs:1221 -│ ERROR: EngineError::UnknownWarp if None -│ -├─[4] Extract root_node_id from self.current_root.local_id -│ -├─[5] STRUCTURAL NODE CREATION (Idempotent) -│ ├─ make_node_id("sim") → NodeId -│ │ FILE: crates/warp-core/src/ident.rs:93 -│ │ CODE: blake3("node:" || "sim") -│ │ -│ ├─ make_node_id("sim/inbox") → NodeId -│ │ CODE: blake3("node:" || "sim/inbox") -│ │ -│ ├─ make_type_id("sim") → TypeId -│ │ FILE: crates/warp-core/src/ident.rs:85 -│ │ CODE: blake3("type:" || "sim") -│ │ -│ ├─ make_type_id("sim/inbox") → TypeId -│ ├─ make_type_id("sim/inbox/event") → TypeId -│ │ -│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty }) -│ │ FILE: crates/warp-core/src/graph.rs:175 -│ │ CODE: self.nodes.insert(id, record) -│ │ -│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty }) -│ -├─[6] STRUCTURAL EDGE CREATION -│ ├─ make_edge_id("edge:root/sim") → EdgeId -│ │ FILE: crates/warp-core/src/ident.rs:109 -│ │ CODE: blake3("edge:" || "edge:root/sim") -│ │ -│ ├─ store.insert_edge(root_id, EdgeRecord { ... }) -│ │ FILE: crates/warp-core/src/graph.rs:188 -│ │ └─ GraphStore::upsert_edge_record(from, edge) -│ │ FILE: crates/warp-core/src/graph.rs:196 -│ │ UPDATES: -│ │ self.edge_index.insert(edge_id, from) -│ │ self.edge_to_index.insert(edge_id, to) -│ │ self.edges_from.entry(from).or_default().push(edge) -│ │ self.edges_to.entry(to).or_default().push(edge_id) -│ │ -│ └─ store.insert_edge(sim_id, EdgeRecord { ... 
}) [sim → inbox] -│ -├─[7] DUPLICATE DETECTION -│ store.node(&event_id) → Option<&NodeRecord> -│ FILE: crates/warp-core/src/graph.rs:87 -│ CODE: self.nodes.get(id) -│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id }) -│ -├─[8] EVENT NODE CREATION -│ store.insert_node(event_id, NodeRecord { ty: event_ty }) -│ NOTE: event_id = intent_id (content-addressed) -│ -├─[9] INTENT ATTACHMENT -│ ├─ AtomPayload::new(type_id, bytes) -│ │ FILE: crates/warp-core/src/attachment.rs:149 -│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) } -│ │ -│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload))) -│ FILE: crates/warp-core/src/graph.rs:125 -│ CODE: self.node_attachments.insert(id, v) -│ -├─[10] PENDING EDGE CREATION (Queue Membership) -│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId -│ │ FILE: crates/warp-core/src/inbox.rs:212 -│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id) -│ │ -│ └─ store.insert_edge(inbox_id, EdgeRecord { -│ id: pending_edge_id, -│ from: inbox_id, -│ to: event_id, -│ ty: make_type_id("edge:pending") -│ }) -│ -└─[11] return Ok(IngestDisposition::Accepted { intent_id }) -\end{verbatim} - -\subsection{1.3 Data Structures -Modified}\label{data-structures-modified} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.4231}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3077}}@{}} -\toprule\noalign{} -\begin{minipage}[b]{\linewidth}\raggedright -Structure -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Field -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Change -\end{minipage} \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{GraphStore} & \texttt{nodes} & +3 entries (sim, inbox, event) \\ -\texttt{GraphStore} & 
\texttt{edges\_from} & +3 edges (root→sim, -sim→inbox, inbox→event) \\ -\texttt{GraphStore} & \texttt{edges\_to} & +3 reverse entries \\ -\texttt{GraphStore} & \texttt{edge\_index} & +3 edge→from mappings \\ -\texttt{GraphStore} & \texttt{edge\_to\_index} & +3 edge→to mappings \\ -\texttt{GraphStore} & \texttt{node\_attachments} & +1 (event → intent -payload) \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{2. Transaction Lifecycle}\label{transaction-lifecycle} - -\subsection{2.1 Begin Transaction}\label{begin-transaction} - -\textbf{Entry Point:} \texttt{Engine::begin()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs:711-719} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ begin(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ TxId }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter}\OperatorTok{.}\NormalTok{wrapping\_add(}\DecValTok{1}\NormalTok{)}\OperatorTok{;} \CommentTok{// Line 713} - \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{==} \DecValTok{0} \OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \DecValTok{1}\OperatorTok{;} \CommentTok{// Line 715: Zero is reserved} - \OperatorTok{\}} - \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{insert(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter)}\OperatorTok{;} \CommentTok{// Line 717} - \PreprocessorTok{TxId::}\NormalTok{from\_raw(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter) }\CommentTok{// Line 718} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Call Trace:} - -\begin{verbatim} -Engine::begin() -│ -├─ self.tx_counter.wrapping_add(1) -│ Rust std: u64::wrapping_add -│ Handles u64::MAX → 0 overflow -│ -├─ if 
self.tx_counter == 0: self.tx_counter = 1 -│ INVARIANT: TxId(0) is reserved as invalid -│ -├─ self.live_txs.insert(self.tx_counter) -│ TYPE: HashSet -│ Registers transaction as active -│ -└─ TxId::from_raw(self.tx_counter) - FILE: crates/warp-core/src/tx.rs:34 - CODE: pub const fn from_raw(value: u64) -> Self { Self(value) } - TYPE: #[repr(transparent)] struct TxId(u64) -\end{verbatim} - -\textbf{State Changes:} - \texttt{tx\_counter}: N → N+1 (or 1 if -wrapped) - \texttt{live\_txs}: Insert new counter value - -\subsection{2.2 Abort Transaction}\label{abort-transaction} - -\textbf{Entry Point:} \texttt{Engine::abort()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs:962-968} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ abort(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx}\OperatorTok{.}\NormalTok{value())}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{scheduler}\OperatorTok{.}\NormalTok{finalize\_tx(tx)}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{bus}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} - \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization\_errors}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{3. 
Rule Matching}\label{rule-matching} - -\textbf{Entry Point:} \texttt{Engine::apply()} \textbf{File:} -\texttt{crates/warp-core/src/engine\_impl.rs:730-737} - -\subsection{3.1 Function Signature}\label{function-signature-1} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ apply(} - \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,} -\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,} -\NormalTok{ rule\_name}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\OperatorTok{,} -\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{ApplyResult}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}} -\end{Highlighting} -\end{Shaded} - -\subsection{3.2 Complete Call Trace}\label{complete-call-trace-1} - -\begin{verbatim} -Engine::apply(tx, rule_name, scope) -│ -└─ Engine::apply_in_warp(tx, self.current_root.warp_id, rule_name, scope, &[]) - FILE: crates/warp-core/src/engine_impl.rs:754-806 - │ - ├─[1] TRANSACTION VALIDATION - │ CODE: if tx.value() == 0 || !self.live_txs.contains(&tx.value()) - │ ERROR: EngineError::UnknownTx - │ - ├─[2] RULE LOOKUP - │ self.rules.get(rule_name) → Option<&RewriteRule> - │ TYPE: HashMap<&'static str, RewriteRule> - │ ERROR: EngineError::UnknownRule(rule_name.to_owned()) - │ - ├─[3] STORE LOOKUP - │ self.state.store(&warp_id) → Option<&GraphStore> - │ ERROR: EngineError::UnknownWarp(warp_id) - │ - ├─[4] CREATE GRAPHVIEW - │ GraphView::new(store) → GraphView<'_> - │ FILE: crates/warp-core/src/graph_view.rs - │ TYPE: Read-only wrapper (Copy, 8 bytes) - │ - ├─[5] CALL MATCHER - │ (rule.matcher)(view, scope) → bool - │ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool - │ FILE: crates/warp-core/src/rule.rs:16-24 - │ IF false: return Ok(ApplyResult::NoMatch) - │ - ├─[6] CREATE SCOPE KEY - │ let scope_key = NodeKey { warp_id, local_id: 
*scope } - │ - ├─[7] COMPUTE SCOPE HASH - │ scope_hash(&rule.id, &scope_key) → Hash - │ FILE: crates/warp-core/src/engine_impl.rs:1712-1718 - │ CODE: - │ let mut hasher = Hasher::new(); - │ hasher.update(rule_id); // 32 bytes - │ hasher.update(scope.warp_id.as_bytes()); // 32 bytes - │ hasher.update(scope.local_id.as_bytes()); // 32 bytes - │ hasher.finalize().into() - │ - ├─[8] COMPUTE FOOTPRINT - │ (rule.compute_footprint)(view, scope) → Footprint - │ TYPE: FootprintFn = for<'a> fn(GraphView<'a>, &NodeId) -> Footprint - │ FILE: crates/warp-core/src/rule.rs:38-46 - │ RETURNS: - │ Footprint { - │ n_read: IdSet, // Nodes read - │ n_write: IdSet, // Nodes written - │ e_read: IdSet, // Edges read - │ e_write: IdSet, // Edges written - │ a_read: AttachmentSet, // Attachments read - │ a_write: AttachmentSet, // Attachments written - │ b_in: PortSet, // Input ports - │ b_out: PortSet, // Output ports - │ factor_mask: u64, // O(1) prefilter - │ } - │ - ├─[9] AUGMENT FOOTPRINT WITH DESCENT STACK - │ for key in descent_stack: - │ footprint.a_read.insert(*key) - │ FILE: crates/warp-core/src/footprint.rs:104-107 - │ PURPOSE: Stage B1 law - READs of all descent chain slots - │ - ├─[10] COMPACT RULE ID LOOKUP - │ self.compact_rule_ids.get(&rule.id) → Option<&CompactRuleId> - │ TYPE: HashMap - │ ERROR: EngineError::InternalCorruption - │ - └─[11] ENQUEUE TO SCHEDULER - self.scheduler.enqueue(tx, PendingRewrite { ... 
}) - │ - └─ DeterministicScheduler::enqueue(tx, rewrite) - FILE: crates/warp-core/src/scheduler.rs:654-659 - │ - └─ RadixScheduler::enqueue(tx, rewrite) - FILE: crates/warp-core/src/scheduler.rs:102-105 - CODE: - let txq = self.pending.entry(tx).or_default(); - txq.enqueue(rewrite.scope_hash, rewrite.compact_rule.0, rewrite); - │ - └─ PendingTx::enqueue(scope_be32, rule_id, payload) - FILE: crates/warp-core/src/scheduler.rs:331-355 - - CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS - index.get(&key) → Some(&i) - fat[thin[i].handle] = Some(payload) // Overwrite - thin[i].nonce = next_nonce++ // Refresh nonce - - CASE 2: New entry - fat.push(Some(payload)) - thin.push(RewriteThin { scope_be32, rule_id, nonce, handle }) - index.insert(key, thin.len() - 1) -\end{verbatim} - -\subsection{3.3 PendingRewrite -Structure}\label{pendingrewrite-structure} - -\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:68-82} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ PendingRewrite }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ rule\_id}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte rule identifier} - \KeywordTok{pub}\NormalTok{ compact\_rule}\OperatorTok{:}\NormalTok{ CompactRuleId}\OperatorTok{,} \CommentTok{// u32 hot{-}path handle} - \KeywordTok{pub}\NormalTok{ scope\_hash}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte ordering key} - \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeKey}\OperatorTok{,} \CommentTok{// \{ warp\_id, local\_id \}} - \KeywordTok{pub}\NormalTok{ footprint}\OperatorTok{:}\NormalTok{ Footprint}\OperatorTok{,} \CommentTok{// Read/write declaration} - \KeywordTok{pub}\NormalTok{ phase}\OperatorTok{:}\NormalTok{ RewritePhase}\OperatorTok{,} \CommentTok{// State machine: Matched → Reserved → ...} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - 
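The CASE 1 / CASE 2 behaviour of `PendingTx::enqueue` — last-wins dedup on the `(scope_hash, rule_id)` key, with a refreshed nonce on overwrite — can be sketched with simplified types. This is a hedged illustration under invented types (`String` payloads, a single `entries` vec), not the crate's thin/fat split:

```rust
// Last-wins enqueue sketch: a (scope_hash, rule_id) key either overwrites
// the existing slot (refreshing its nonce) or appends a new entry.
use std::collections::HashMap;

struct Pending {
    index: HashMap<([u8; 32], u32), usize>, // (scope_hash, rule_id) → slot
    entries: Vec<(u32, String)>,            // (nonce, payload)
    next_nonce: u32,
}

impl Pending {
    fn new() -> Self {
        Pending { index: HashMap::new(), entries: Vec::new(), next_nonce: 0 }
    }

    fn enqueue(&mut self, scope: [u8; 32], rule_id: u32, payload: String) {
        let nonce = self.next_nonce;
        self.next_nonce += 1;
        match self.index.get(&(scope, rule_id)) {
            // CASE 1: duplicate key — last wins, nonce refreshed.
            Some(&i) => self.entries[i] = (nonce, payload),
            // CASE 2: new entry.
            None => {
                self.entries.push((nonce, payload));
                self.index.insert((scope, rule_id), self.entries.len() - 1);
            }
        }
    }
}

fn main() {
    let mut q = Pending::new();
    q.enqueue([0u8; 32], 1, "first".to_string());
    q.enqueue([0u8; 32], 1, "second".to_string()); // overwrites slot 0
    assert_eq!(q.entries.len(), 1);
    assert_eq!(q.entries[0], (1, "second".to_string()));
}
```

Refreshing the nonce on overwrite matters for the later radix sort: the surviving rewrite sorts by the nonce of its *latest* enqueue, not its first.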
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{4. Scheduler: Drain \& Reserve}\label{scheduler-drain-reserve} - -\subsection{4.1 Drain Phase (Radix Sort)}\label{drain-phase-radix-sort} - -\textbf{Entry Point:} \texttt{RadixScheduler::drain\_for\_tx()} -\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:109-113} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ drain\_for\_tx(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{PendingRewrite}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{pending} - \OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx)} - \OperatorTok{.}\NormalTok{map\_or\_else(}\DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new}\OperatorTok{,} \OperatorTok{|}\KeywordTok{mut}\NormalTok{ txq}\OperatorTok{|}\NormalTok{ txq}\OperatorTok{.}\NormalTok{drain\_in\_order())} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Complete Call Trace:} - -\begin{verbatim} -RadixScheduler::drain_for_tx(tx) -│ -├─ self.pending.remove(&tx) → Option> -│ -└─ PendingTx::drain_in_order() - FILE: crates/warp-core/src/scheduler.rs:416-446 - │ - ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)? 
- │ ├─ YES: sort_unstable_by(cmp_thin) - │ │ Rust std comparison sort - │ │ - │ └─ NO: radix_sort() - │ FILE: crates/warp-core/src/scheduler.rs:360-413 - │ - └─ radix_sort() - │ - ├─ Initialize scratch buffer: self.scratch.resize(n, default) - │ - ├─ Lazy allocate histogram: self.counts16 = vec![0u32; 65536] - │ - └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══ - │ - ├─ SELECT src/dst buffers (ping-pong) - │ flip = false: src=thin, dst=scratch - │ flip = true: src=scratch, dst=thin - │ - ├─ PHASE 1: COUNT BUCKETS - │ FOR r IN src: - │ b = bucket16(r, pass) - │ counts[b] += 1 - │ - ├─ PHASE 2: PREFIX SUMS - │ sum = 0 - │ FOR c IN counts: - │ t = *c - │ *c = sum - │ sum += t - │ - ├─ PHASE 3: STABLE SCATTER - │ FOR r IN src: - │ b = bucket16(r, pass) - │ dst[counts[b]] = r - │ counts[b] += 1 - │ - └─ flip = !flip - -BUCKET EXTRACTION (bucket16): -FILE: crates/warp-core/src/scheduler.rs:481-498 - -Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2] -Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4] -Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2] -Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4] -Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32] -Pass 5: u16_be_from_pair32(scope, 14) // Scope bytes [28:30] -... 
-Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD) - -SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic -\end{verbatim} - -\subsection{4.2 Reserve Phase (Independence -Check)}\label{reserve-phase-independence-check} - -\textbf{Entry Point:} \texttt{RadixScheduler::reserve()} \textbf{File:} -\texttt{crates/warp-core/src/scheduler.rs:134-143} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ reserve(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}\NormalTok{ pr}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ PendingRewrite) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{} - \KeywordTok{let}\NormalTok{ active }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{active}\OperatorTok{.}\NormalTok{entry(tx)}\OperatorTok{.}\NormalTok{or\_insert\_with(}\PreprocessorTok{ActiveFootprints::}\NormalTok{new)}\OperatorTok{;} - \ControlFlowTok{if} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{has\_conflict(active}\OperatorTok{,}\NormalTok{ pr) }\OperatorTok{\{} - \ControlFlowTok{return} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_conflict(pr)}\OperatorTok{;} - \OperatorTok{\}} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{mark\_all(active}\OperatorTok{,}\NormalTok{ pr)}\OperatorTok{;} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_reserved(pr)} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Complete Call Trace:} - -\begin{verbatim} -RadixScheduler::reserve(tx, pr) -│ -├─ self.active.entry(tx).or_insert_with(ActiveFootprints::new) -│ TYPE: HashMap -│ ActiveFootprints contains 7 GenSets: -│ - nodes_written: GenSet -│ - nodes_read: GenSet -│ - edges_written: GenSet -│ - edges_read: GenSet -│ - attachments_written: GenSet -│ - attachments_read: GenSet -│ - ports: GenSet -│ -├─ has_conflict(active, pr) → bool -│ FILE: 
crates/warp-core/src/scheduler.rs:157-236 -│ │ -│ ├─ FOR node IN pr.footprint.n_write: -│ │ IF active.nodes_written.contains(node): return true // W-W conflict -│ │ IF active.nodes_read.contains(node): return true // W-R conflict -│ │ -│ ├─ FOR node IN pr.footprint.n_read: -│ │ IF active.nodes_written.contains(node): return true // R-W conflict -│ │ (R-R is allowed) -│ │ -│ ├─ FOR edge IN pr.footprint.e_write: -│ │ IF active.edges_written.contains(edge): return true -│ │ IF active.edges_read.contains(edge): return true -│ │ -│ ├─ FOR edge IN pr.footprint.e_read: -│ │ IF active.edges_written.contains(edge): return true -│ │ -│ ├─ FOR key IN pr.footprint.a_write: -│ │ IF active.attachments_written.contains(key): return true -│ │ IF active.attachments_read.contains(key): return true -│ │ -│ ├─ FOR key IN pr.footprint.a_read: -│ │ IF active.attachments_written.contains(key): return true -│ │ -│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out: -│ IF active.ports.contains(port): return true -│ -├─ IF conflict: -│ └─ on_conflict(pr) -│ FILE: crates/warp-core/src/scheduler.rs:145-149 -│ pr.phase = RewritePhase::Aborted -│ return false -│ -├─ mark_all(active, pr) -│ FILE: crates/warp-core/src/scheduler.rs:238-278 -│ │ -│ ├─ FOR node IN pr.footprint.n_write: -│ │ active.nodes_written.mark(NodeKey { warp_id, local_id: node }) -│ │ -│ ├─ FOR node IN pr.footprint.n_read: -│ │ active.nodes_read.mark(NodeKey { ... }) -│ │ -│ ├─ FOR edge IN pr.footprint.e_write: -│ │ active.edges_written.mark(EdgeKey { ... }) -│ │ -│ ├─ FOR edge IN pr.footprint.e_read: -│ │ active.edges_read.mark(EdgeKey { ... 
}) -│ │ -│ ├─ FOR key IN pr.footprint.a_write: -│ │ active.attachments_written.mark(key) -│ │ -│ ├─ FOR key IN pr.footprint.a_read: -│ │ active.attachments_read.mark(key) -│ │ -│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out: -│ active.ports.mark(port) -│ -└─ on_reserved(pr) - FILE: crates/warp-core/src/scheduler.rs:151-155 - pr.phase = RewritePhase::Reserved - return true -\end{verbatim} - -\subsection{4.3 GenSet: O(1) Conflict -Detection}\label{genset-o1-conflict-detection} - -\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:509-535} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{} -\NormalTok{ gen}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} \CommentTok{// Current generation} -\NormalTok{ seen}\OperatorTok{:}\NormalTok{ FxHashMap}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{,} \DataTypeTok{u32}\OperatorTok{\textgreater{},} \CommentTok{// Key → generation when marked} -\OperatorTok{\}} - -\KeywordTok{impl}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{:} \BuiltInTok{Hash} \OperatorTok{+} \BuiltInTok{Eq} \OperatorTok{+} \BuiltInTok{Copy}\OperatorTok{\textgreater{}}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ contains(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{} - \PreprocessorTok{matches!}\NormalTok{(}\KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{get(}\OperatorTok{\&}\NormalTok{key)}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(}\OperatorTok{\&}\NormalTok{g) }\ControlFlowTok{if}\NormalTok{ g }\OperatorTok{==} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)} - 
\OperatorTok{\}} - - \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ mark(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{insert(key}\OperatorTok{,} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}\OperatorTok{;} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Key Insight:} No clearing needed between transactions. Increment -\texttt{gen} → all old entries become stale. - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{5. BOAW Parallel Execution}\label{boaw-parallel-execution} - -\textbf{Entry Point:} \texttt{execute\_parallel()} \textbf{File:} -\texttt{crates/warp-core/src/boaw/exec.rs:61-83} - -\subsection{5.1 Entry Point}\label{entry-point} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \PreprocessorTok{assert!}\NormalTok{(workers }\OperatorTok{\textgreater{}=} \DecValTok{1}\NormalTok{)}\OperatorTok{;} - \KeywordTok{let}\NormalTok{ capped\_workers }\OperatorTok{=}\NormalTok{ workers}\OperatorTok{.}\NormalTok{min(NUM\_SHARDS)}\OperatorTok{;} \CommentTok{// Cap at 256} - - \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"parallel{-}stride{-}fallback"}\AttributeTok{)]} - \ControlFlowTok{if} 
\PreprocessorTok{std::env::}\NormalTok{var(}\StringTok{"ECHO\_PARALLEL\_STRIDE"}\NormalTok{)}\OperatorTok{.}\NormalTok{is\_ok() }\OperatorTok{\{} - \ControlFlowTok{return}\NormalTok{ execute\_parallel\_stride(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers)}\OperatorTok{;} - \OperatorTok{\}} - -\NormalTok{ execute\_parallel\_sharded(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers) }\CommentTok{// DEFAULT} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{5.2 Complete Call Trace}\label{complete-call-trace-2} - -\begin{verbatim} -execute_parallel(view, items, workers) -│ -└─ execute_parallel_sharded(view, items, capped_workers) - FILE: crates/warp-core/src/boaw/exec.rs:101-152 - │ - ├─ IF items.is_empty(): - │ return (0..workers).map(|_| TickDelta::new()).collect() - │ - ├─ partition_into_shards(items.to_vec()) → Vec<VirtualShard> - │ FILE: crates/warp-core/src/boaw/shard.rs:109-120 - │ │ - │ ├─ Create 256 empty VirtualShard structures - │ │ - │ └─ FOR item IN items: - │ │ - │ ├─ shard_of(&item.scope) → usize - │ │ FILE: crates/warp-core/src/boaw/shard.rs:82-92 - │ │ CODE: - │ │ let bytes = scope.as_bytes(); - │ │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap(); - │ │ let val = u64::from_le_bytes(first_8); - │ │ (val & 255) as usize // SHARD_MASK = 255 - │ │ - │ └─ shards[shard_id].items.push(item) - │ - ├─ let next_shard = AtomicUsize::new(0) - │ - └─ std::thread::scope(|s| { ... }) - FILE: Rust std (scoped threads) - │ - ├─ FOR _ IN 0..workers: - │ │ - │ └─ s.spawn(move || { ... 
}) // ═══ WORKER THREAD ═══ - │ │ - │ ├─ let mut delta = TickDelta::new() - │ │ FILE: crates/warp-core/src/tick_delta.rs:44-52 - │ │ CREATES: { ops: Vec::new(), origins: Vec::new() } - │ │ - │ └─ LOOP: // Work-stealing loop - │ │ - │ ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed) - │ │ ATOMIC: Returns old value, increments counter - │ │ ORDERING: Relaxed (no synchronization cost) - │ │ - │ ├─ IF shard_id >= 256: break - │ │ - │ └─ FOR item IN &shards[shard_id].items: - │ │ - │ ├─ let mut scoped = delta.scoped(item.origin) - │ │ FILE: crates/warp-core/src/tick_delta.rs:140-142 - │ │ CREATES: ScopedDelta { inner: &mut delta, origin, next_op_ix: 0 } - │ │ - │ └─ (item.exec)(view, &item.scope, scoped.inner_mut()) - │ │ - │ └─ INSIDE EXECUTOR: - │ scoped.emit(op) - │ FILE: crates/warp-core/src/tick_delta.rs:234-239 - │ CODE: - │ origin.op_ix = self.next_op_ix; - │ self.next_op_ix += 1; - │ self.inner.emit_with_origin(op, origin); - │ │ - │ └─ TickDelta::emit_with_origin(op, origin) - │ FILE: crates/warp-core/src/tick_delta.rs:69-75 - │ CODE: - │ self.ops.push(op); - │ self.origins.push(origin); // if delta_validate - │ - └─ COLLECT THREADS: - handles.into_iter().map(|h| h.join()).collect() - RETURNS: Vec<TickDelta> (one per worker) -\end{verbatim} - -\subsection{5.3 Enforced Execution Path}\label{enforced-execution-path} - -\textbf{Entry Point:} \texttt{execute\_item\_enforced()} -\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs:409-487} - -When footprint enforcement is active, each item is executed via -\texttt{execute\_item\_enforced()} instead of a bare function-pointer call. -This wraps execution with \texttt{catch\_unwind} and performs post-hoc -\texttt{check\_op()} validation on any newly-emitted ops. 
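The shape of that wrapper can be sketched in simplified form. Everything here is an illustrative stand-in, not the real warp-core API: ops are modeled as plain `u32` values, the footprint as an allow-list slice, and the toy error enum collapses the real `PoisonedDelta` / `FootprintViolation` machinery (in particular, the real code also records the combined exec-panic-plus-violation case).

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// Illustrative stand-ins only; the real warp-core error types are richer.
#[derive(Debug, PartialEq)]
enum EnforceError {
    ExecPanicked,       // the delta the executor was writing is now poisoned
    FootprintViolation, // an op escaped the declared footprint
}

// Sketch of the enforced shape: snapshot the op count, run the executor
// under `catch_unwind`, then validate only the newly-emitted ops.
fn execute_enforced(
    ops: &mut Vec<u32>,
    allowed: &[u32],
    exec: impl FnOnce(&mut Vec<u32>),
) -> Result<(), EnforceError> {
    let ops_before = ops.len(); // snapshot BEFORE the executor runs
    let panicked = catch_unwind(AssertUnwindSafe(|| exec(&mut *ops))).is_err();
    // Post-hoc check over ops[ops_before..] only.
    let violation = ops[ops_before..].iter().any(|op| !allowed.contains(op));
    match (panicked, violation) {
        (true, _) => Err(EnforceError::ExecPanicked), // exec panic takes precedence here
        (false, true) => Err(EnforceError::FootprintViolation),
        (false, false) => Ok(()),
    }
}
```

Note the ordering: the op-count snapshot is taken before `catch_unwind`, so even a panicking executor leaves a well-defined "newly emitted" range to inspect.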
- -\begin{verbatim} -execute_item_enforced(store, item, idx, unit, delta) -│ -├─ guard = unit.guards[idx] -├─ view = GraphView::new_guarded(store, guard) -│ -├─ ops_before = delta.len() -│ Snapshot the op count BEFORE the executor runs -│ -├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| { -│ (item.exec)(view, &item.scope, delta) -│ })) -│ -├─ FOR op IN delta.ops_ref()[ops_before..]: -│ guard.check_op(op) → panic_any(FootprintViolation) on failure -│ Validates that each newly-emitted op falls within the declared footprint. -│ ExecItemKind::System items may emit warp-instance-level ops; -│ ExecItemKind::User items may not. -│ -└─ OUTCOME PRECEDENCE (returns Result): - ├─ IF exec panicked AND check_op panicked: - │ return Err(PoisonedDelta(FootprintViolationWithPanic)) - │ The violation wraps both the FootprintViolation and the exec panic. - │ - ├─ IF exec panicked OR check_op panicked (but not both): - │ return Err(PoisonedDelta(panic_payload)) - │ Single panic payload (either executor or violation). - │ - └─ IF both clean: - return Ok(delta) -\end{verbatim} - -\textbf{The Poison Invariant:} If the executor panics, the \texttt{TickDelta} -it was writing into is considered poisoned (partially-written ops with no -transactional rollback). After an executor panic the delta must be -discarded---it cannot be merged or committed. - -\textbf{Type-Level Enforcement:} The poison invariant is enforced at the type -level via \texttt{PoisonedDelta}, a newtype distinct from \texttt{TickDelta}. -When an executor panics, \texttt{execute\_item\_enforced()} returns an -\texttt{Err} whose payload is a \texttt{PoisonedDelta}. The API exposes \texttt{merge\_deltas\_ok()} -(a higher-level wrapper around \texttt{merge\_deltas()}, which remains available -feature-gated) that returns a \texttt{Result} and only accepts non-poisoned deltas. -A \texttt{PoisonedDelta} cannot be passed to \texttt{merge\_deltas\_ok()}---the -type system prevents accidental merging. 
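The newtype argument can be sketched as follows. The struct bodies, the `u32` op type, and the unit error type are simplified placeholders (the real merge error is `MergeConflict`), but the compile-time guarantee works the same way: the poisoned wrapper simply has no route into the merge function.

```rust
// Simplified placeholder: the real TickDelta holds WarpOps plus origins.
#[derive(Debug)]
struct TickDelta(Vec<u32>);

// A delta whose executor panicked. It wraps the partially-written delta
// but offers no accessor, so it can never reach the merge step.
#[derive(Debug)]
struct PoisonedDelta(#[allow(dead_code)] TickDelta);

// Only non-poisoned deltas are accepted: passing a PoisonedDelta here is
// a type error, not a runtime check. (Unit error stands in for MergeConflict.)
fn merge_deltas_ok(deltas: Vec<TickDelta>) -> Result<Vec<u32>, ()> {
    let mut out: Vec<u32> = deltas.into_iter().flat_map(|d| d.0).collect();
    out.sort_unstable(); // canonical order
    out.dedup();         // accept one copy of identical ops
    Ok(out)
}
```

The design choice mirrors the prose above: instead of tagging a delta with a runtime "poisoned" flag that every caller must remember to check, the panic outcome changes the value's type, so forgetting the check is a compile error.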
- -\subsection{5.4 ExecItem Structure}\label{execitem-structure} - -\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs:19-35} - -\begin{Shaded} -\begin{Highlighting}[] -\AttributeTok{\#[}\NormalTok{derive}\AttributeTok{(}\BuiltInTok{Clone}\OperatorTok{,} \BuiltInTok{Copy}\AttributeTok{)]} -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ ExecItem }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ exec}\OperatorTok{:}\NormalTok{ ExecuteFn}\OperatorTok{,} \CommentTok{// fn(GraphView, \&NodeId, \&mut TickDelta)} - \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeId}\OperatorTok{,} \CommentTok{// 32{-}byte node identifier} - \KeywordTok{pub}\NormalTok{ origin}\OperatorTok{:}\NormalTok{ OpOrigin}\OperatorTok{,} \CommentTok{// \{ intent\_id, rule\_id, match\_ix, op\_ix \}} - - \CommentTok{// Private field, present only in enforcement builds:} - \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{any}\AttributeTok{(}\NormalTok{debug\_assertions}\OperatorTok{,}\NormalTok{ feature }\OperatorTok{=} \StringTok{"footprint\_enforce\_release"}\AttributeTok{))]} - \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{not}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"unsafe\_graph"}\AttributeTok{))]} -\NormalTok{ kind}\OperatorTok{:}\NormalTok{ ExecItemKind}\OperatorTok{,} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{\texttt{ExecItemKind} (cfg-gated):} - -\begin{itemize} -\tightlist -\item - \texttt{ExecItemKind::User} --- Normal rule executor. May emit - node/edge/attachment ops scoped to the declared footprint. Cannot emit - warp-instance-level ops (\texttt{UpsertWarpInstance}, - \texttt{DeleteWarpInstance}, \texttt{OpenPortal}). -\item - \texttt{ExecItemKind::System} --- Internal-only executor (e.g., portal - opening). May emit warp-instance-level ops. -\end{itemize} - -\texttt{ExecItem::new()} always creates \texttt{User} items. 
System items are -constructed via \texttt{ExecItem::new\_system()} (cfg-gated \texttt{pub(crate)} -constructor used by portal/inbox rules) and are never exposed through the public -API. - -\textbf{The dual-attribute cfg-gate pattern:} The \texttt{kind} field (and all -enforcement logic) is guarded by two cfg attributes that together express three -conditions (\texttt{debug\_assertions}, \texttt{footprint\_enforce\_release}, -and \texttt{unsafe\_graph}): - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - \texttt{\#[cfg(any(debug\_assertions, feature = "footprint\_enforce\_release"))]} - --- active in debug builds or when the release enforcement feature is - opted-in. -\item - \texttt{\#[cfg(not(feature = "unsafe\_graph"))]} --- disabled when the - escape-hatch feature is set (for benchmarks/fuzzing that intentionally - bypass checks). -\end{enumerate} - -This means enforcement is always-on in dev/test, opt-in for release, and -explicitly removable for unsafe experimentation. - -\subsection{5.5 Thread Safety}\label{thread-safety} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Type & Safety & Reason \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{GraphView} & \texttt{Sync\ +\ Send\ +\ Clone} & Read-only -snapshot \\ -\texttt{ExecItem} & \texttt{Sync\ +\ Send\ +\ Copy} & Function pointer + -primitives \\ -\texttt{TickDelta} & Per-worker exclusive & No shared mutation \\ -\texttt{AtomicUsize} & Lock-free & \texttt{fetch\_add} with -\texttt{Relaxed} ordering \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{6. 
Delta Merge \& State -Finalization}\label{delta-merge-state-finalization} - -\subsection{6.1 Canonical Merge}\label{canonical-merge} - -\textbf{Entry Point:} \texttt{merge\_deltas()} \textbf{File:} -\texttt{crates/warp-core/src/boaw/merge.rs:36-75} - -\begin{verbatim} -merge_deltas(deltas: Vec<TickDelta>) → Result<Vec<WarpOp>, MergeConflict> -│ -├─[1] FLATTEN ALL OPS WITH ORIGINS -│ let mut flat: Vec<(WarpOpKey, OpOrigin, WarpOp)> = Vec::new(); -│ FOR d IN deltas: -│ let (ops, origins) = d.into_parts_unsorted(); -│ FOR (op, origin) IN ops.zip(origins): -│ flat.push((op.sort_key(), origin, op)); -│ -├─[2] CANONICAL SORT -│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1))); -│ ORDER: (WarpOpKey, OpOrigin) lexicographic -│ -└─[3] DEDUPE & CONFLICT DETECTION - let mut out = Vec::new(); - let mut i = 0; - WHILE i < flat.len(): - │ - ├─ GROUP by WarpOpKey - │ key = flat[i].0 - │ start = i - │ WHILE i < flat.len() && flat[i].0 == key: i++ - │ - ├─ CHECK if all ops identical - │ first = &flat[start].2 - │ all_same = flat[start+1..i].iter().all(|(_, _, op)| op == first) - │ - └─ IF all_same: - out.push(first.clone()) // Accept one copy - ELSE: - writers = flat[start..i].iter().map(|(_, o, _)| *o).collect() - return Err(MergeConflict { writers }) // CONFLICT! 
- - return Ok(out) -\end{verbatim} - -\subsection{6.2 WarpOp Sort Key}\label{warpop-sort-key} - -\textbf{File:} \texttt{crates/warp-core/src/tick\_patch.rs:207-287} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ sort\_key(}\OperatorTok{\&}\KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{} - \ControlFlowTok{match} \KeywordTok{self} \OperatorTok{\{} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{OpenPortal }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{2}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{3}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{4}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Delete before upsert} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{5}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertNode 
}\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{6}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{7}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} - \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{SetAttachment }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{8}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Last} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Canonical Order:} 1. OpenPortal (creates child instances) 2. -UpsertWarpInstance 3. DeleteWarpInstance 4. DeleteEdge (delete before -upsert) 5. DeleteNode (delete before upsert) 6. UpsertNode 7. UpsertEdge -8. 
SetAttachment (after skeleton exists) - -\subsection{6.3 State Mutation Methods}\label{state-mutation-methods} - -\textbf{File:} \texttt{crates/warp-core/src/graph.rs} - -\begin{verbatim} -GraphStore::insert_node(id, record) - LINE: 175-177 - CODE: self.nodes.insert(id, record) - -GraphStore::upsert_edge_record(from, edge) - LINE: 196-261 - UPDATES: - - self.edge_index.insert(edge_id, from) - - self.edge_to_index.insert(edge_id, to) - - Remove old edge from previous bucket if exists - - self.edges_from.entry(from).or_default().push(edge) - - self.edges_to.entry(to).or_default().push(edge_id) - -GraphStore::delete_node_cascade(node) - LINE: 277-354 - CASCADES: - - Remove from self.nodes - - Remove node attachment - - Remove ALL outbound edges (and their attachments) - - Remove ALL inbound edges (and their attachments) - - Maintain all 4 index maps consistently - -GraphStore::delete_edge_exact(from, edge_id) - LINE: 360-412 - VALIDATES: edge is in correct "from" bucket - REMOVES: - - From edges_from bucket - - From edge_index - - From edge_to_index - - From edges_to bucket - - Edge attachment - -GraphStore::set_node_attachment(id, value) - LINE: 125-134 - CODE: - None → self.node_attachments.remove(&id) - Some(v) → self.node_attachments.insert(id, v) - -GraphStore::set_edge_attachment(id, value) - LINE: 163-172 - Same pattern as node attachments -\end{verbatim} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{7. 
Hash Computation}\label{hash-computation} - -\subsection{7.1 State Root}\label{state-root} - -\textbf{Entry Point:} \texttt{compute\_state\_root()} \textbf{File:} -\texttt{crates/warp-core/src/snapshot.rs:88-209} - -\begin{verbatim} -compute_state_root(state: &WarpState, root: &NodeKey) → Hash -│ -├─[1] BFS REACHABILITY TRAVERSAL -│ │ -│ ├─ Initialize: -│ │ reachable_nodes: BTreeSet<NodeKey> = { root } -│ │ reachable_warps: BTreeSet = { root.warp_id } -│ │ queue: VecDeque<NodeKey> = [ root ] -│ │ -│ └─ WHILE let Some(current) = queue.pop_front(): -│ │ -│ ├─ store = state.store(&current.warp_id) -│ │ -│ ├─ FOR edge IN store.edges_from(&current.local_id): -│ │ ├─ to = NodeKey { warp_id: current.warp_id, local_id: edge.to } -│ │ ├─ IF reachable_nodes.insert(to): queue.push_back(to) -│ │ │ -│ │ └─ IF edge has Descend(child_warp) attachment: -│ │ └─ enqueue_descend(state, child_warp, ...) -│ │ Adds child instance root to queue -│ │ -│ └─ IF current node has Descend(child_warp) attachment: -│ enqueue_descend(state, child_warp, ...) 
-│ -├─[2] HASHING PHASE -│ │ -│ ├─ let mut hasher = Hasher::new() // BLAKE3 -│ │ -│ ├─ HASH ROOT BINDING: -│ │ hasher.update(&root.warp_id.0) // 32 bytes -│ │ hasher.update(&root.local_id.0) // 32 bytes -│ │ -│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order -│ │ -│ ├─ HASH INSTANCE HEADER: -│ │ hasher.update(&instance.warp_id.0) // 32 bytes -│ │ hasher.update(&instance.root_node.0) // 32 bytes -│ │ hash_attachment_key_opt(&mut hasher, instance.parent.as_ref()) -│ │ -│ ├─ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted -│ │ IF reachable_nodes.contains(&NodeKey { warp_id, local_id: node_id }): -│ │ hasher.update(&node_id.0) // 32 bytes -│ │ hasher.update(&node.ty.0) // 32 bytes -│ │ hash_attachment_value_opt(&mut hasher, store.node_attachment(node_id)) -│ │ -│ └─ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted -│ IF from is reachable: -│ sorted_edges = edges.filter(reachable).sort_by(|a,b| a.id.cmp(b.id)) -│ hasher.update(&from.0) // 32 bytes -│ hasher.update(&(sorted_edges.len() as u64).to_le_bytes()) // 8 bytes -│ FOR edge IN sorted_edges: -│ hasher.update(&edge.id.0) // 32 bytes -│ hasher.update(&edge.ty.0) // 32 bytes -│ hasher.update(&edge.to.0) // 32 bytes -│ hash_attachment_value_opt(&mut hasher, store.edge_attachment(&edge.id)) -│ -└─ hasher.finalize().into() // → [u8; 32] -\end{verbatim} - -\subsection{7.2 Commit Hash v2}\label{commit-hash-v2} - -\textbf{Entry Point:} \texttt{compute\_commit\_hash\_v2()} -\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs:244-263} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ compute\_commit\_hash\_v2(} -\NormalTok{ state\_root}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,} -\NormalTok{ parents}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\BuiltInTok{Hash}\NormalTok{]}\OperatorTok{,} -\NormalTok{ patch\_digest}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,} 
-\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}} \BuiltInTok{Hash} \OperatorTok{\{} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Version tag (2 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{(parents}\OperatorTok{.}\NormalTok{len() }\KeywordTok{as} \DataTypeTok{u64}\NormalTok{)}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Parent count (8 bytes)} - \ControlFlowTok{for}\NormalTok{ p }\KeywordTok{in}\NormalTok{ parents }\OperatorTok{\{} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(p)}\OperatorTok{;} \CommentTok{// Each parent (32 bytes)} - \OperatorTok{\}} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(state\_root)}\OperatorTok{;} \CommentTok{// Graph hash (32 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(patch\_digest)}\OperatorTok{;} \CommentTok{// Ops hash (32 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Policy (4 bytes)} -\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Byte Layout:} - -\begin{verbatim} -Offset Size Field -0 2 version_tag (0x02 0x00) -2 8 parent_count (u64 LE) -10 32*N parents[] (N parent hashes) -10+32N 32 state_root -42+32N 32 patch_digest -74+32N 4 policy_id (u32 LE) -───────────────────────────────────── -TOTAL: 78 + 32*N bytes → BLAKE3 → 32-byte hash -\end{verbatim} - -\subsection{7.3 Patch Digest}\label{patch-digest} - -\textbf{Entry Point:} \texttt{compute\_patch\_digest\_v2()} -\textbf{File:} 
\texttt{crates/warp-core/src/tick\_patch.rs:755-774} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{fn}\NormalTok{ compute\_patch\_digest\_v2(} -\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} -\NormalTok{ rule\_pack\_id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{ContentHash}\OperatorTok{,} -\NormalTok{ commit\_status}\OperatorTok{:}\NormalTok{ TickCommitStatus}\OperatorTok{,} -\NormalTok{ in\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,} -\NormalTok{ out\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,} -\NormalTok{ ops}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[WarpOp]}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ ContentHash }\OperatorTok{\{} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Format version} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// 4 bytes} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(rule\_pack\_id)}\OperatorTok{;} \CommentTok{// 32 bytes} -\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{[commit\_status}\OperatorTok{.}\NormalTok{code()])}\OperatorTok{;} \CommentTok{// 1 byte} -\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ in\_slots)}\OperatorTok{;} -\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ out\_slots)}\OperatorTok{;} -\NormalTok{ encode\_ops(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ ops)}\OperatorTok{;} -\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()} -\OperatorTok{\}} 
-\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{8. Commit Orchestration}\label{commit-orchestration} - -\textbf{Entry Point:} \texttt{Engine::commit\_with\_receipt()} -\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:837-954} - -\subsection{8.1 Complete Call Trace}\label{complete-call-trace-3} - -\begin{verbatim} -Engine::commit_with_receipt(tx) → Result<(Snapshot, TickReceipt, WarpTickPatchV1), EngineError> -│ -├─[1] VALIDATE TRANSACTION -│ IF tx.value() == 0 || !self.live_txs.contains(&tx.value()): -│ return Err(EngineError::UnknownTx) -│ -├─[2] DRAIN CANDIDATES -│ policy_id = self.policy_id // Line 844 -│ rule_pack_id = self.compute_rule_pack_id() // Line 845 -│ │ -│ ├─ compute_rule_pack_id() -│ │ FILE: engine_impl.rs:1675-1688 -│ │ CODE: -│ │ ids = self.rules.values().map(|r| r.id).collect() -│ │ ids.sort_unstable(); ids.dedup() -│ │ hasher.update(&1u16.to_le_bytes()) // version -│ │ hasher.update(&(ids.len() as u64).to_le_bytes()) -│ │ FOR id IN ids: hasher.update(&id) -│ │ hasher.finalize().into() -│ │ -│ drained = self.scheduler.drain_for_tx(tx) // Line 847 -│ plan_digest = compute_plan_digest(&drained) // Line 848 -│ -├─[3] RESERVE (INDEPENDENCE CHECK) -│ ReserveOutcome { receipt, reserved, in_slots, out_slots } -│ = self.reserve_for_receipt(tx, drained)? // Line 850-855 -│ │ -│ └─ reserve_for_receipt(tx, drained) -│ FILE: engine_impl.rs:970-1042 -│ │ -│ FOR rewrite IN drained (canonical order): -│ │ -│ ├─ accepted = self.scheduler.reserve(tx, &mut rewrite) -│ │ -│ ├─ IF !accepted: -│ │ blockers = find_blocking_rewrites(reserved, &rewrite) -│ │ -│ ├─ receipt_entries.push(TickReceiptEntry { ... }) -│ │ -│ └─ IF accepted: -│ reserved.push(rewrite) -│ extend_slots_from_footprint(&mut in_slots, &mut out_slots, ...) 
-│ │ -│ return ReserveOutcome { receipt, reserved, in_slots, out_slots } -│ -│ rewrites_digest = compute_rewrites_digest(&reserved_rewrites) // Line 858 -│ -├─[4] EXECUTE (PHASE 5 BOAW) -│ state_before = self.state.clone() // Line 862 -│ delta_ops = self.apply_reserved_rewrites(reserved, &state_before)? -│ │ -│ └─ apply_reserved_rewrites(rewrites, state_before) -│ FILE: engine_impl.rs:1044-1105 -│ │ -│ ├─ let mut delta = TickDelta::new() -│ │ -│ ├─ FOR rewrite IN rewrites: -│ │ executor = self.rule_by_compact(rewrite.compact_rule).executor -│ │ view = GraphView::new(self.state.store(&rewrite.scope.warp_id)) -│ │ (executor)(view, &rewrite.scope.local_id, &mut delta) -│ │ -│ ├─ let ops = delta.finalize() // Canonical sort -│ │ -│ ├─ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops) -│ │ patch.apply_to_state(&mut self.state)? -│ │ -│ └─ [delta_validate]: assert_delta_matches_diff(&ops, &diff_ops) -│ -├─[5] MATERIALIZE -│ mat_report = self.bus.finalize() // Line 884 -│ self.last_materialization = mat_report.channels -│ self.last_materialization_errors = mat_report.errors -│ -├─[6] COMPUTE DELTA PATCH -│ ops = diff_state(&state_before, &self.state) // Line 889 -│ │ -│ └─ diff_state(before, after) -│ FILE: tick_patch.rs:979-1069 -│ - Canonicalize portal authoring (OpenPortal) -│ - Diff instances (delete/upsert) -│ - Diff nodes, edges, attachments -│ - Sort by WarpOp::sort_key() -│ │ -│ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops) -│ patch_digest = patch.digest() // Line 898 -│ -├─[7] COMPUTE STATE ROOT -│ state_root = compute_state_root(&self.state, &self.current_root) // Line 900 -│ -├─[8] GET PARENTS -│ parents = self.last_snapshot.as_ref().map(|s| vec![s.hash]).unwrap_or_default() -│ -├─[9] COMPUTE DECISION DIGEST -│ decision_digest = receipt.digest() // Line 929 -│ -├─[10] COMPUTE COMMIT HASH -│ hash = compute_commit_hash_v2(&state_root, &parents, &patch_digest, policy_id) -│ -├─[11] BUILD SNAPSHOT -│ snapshot = Snapshot { -│ root: 
self.current_root, -│ hash, // commit_id v2 -│ parents, -│ plan_digest, // Diagnostic -│ decision_digest, // Diagnostic -│ rewrites_digest, // Diagnostic -│ patch_digest, // COMMITTED -│ policy_id, // COMMITTED -│ tx, -│ } -│ -├─[12] RECORD TO HISTORY -│ self.last_snapshot = Some(snapshot.clone()) // Line 947 -│ self.tick_history.push((snapshot, receipt, patch)) // Line 948-949 -│ self.live_txs.remove(&tx.value()) // Line 951 -│ self.scheduler.finalize_tx(tx) // Line 952 -│ -└─[13] RETURN - Ok((snapshot, receipt, patch)) -\end{verbatim} - -\subsection{8.2 Commit Hash Inputs}\label{commit-hash-inputs} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Input & Committed? & Purpose \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{state\_root} & ✓ & What the graph looks like \\ -\texttt{patch\_digest} & ✓ & How we got here (ops) \\ -\texttt{parents} & ✓ & Chain continuity \\ -\texttt{policy\_id} & ✓ & Aion policy version \\ -\texttt{plan\_digest} & ✗ & Diagnostic only \\ -\texttt{decision\_digest} & ✗ & Diagnostic only \\ -\texttt{rewrites\_digest} & ✗ & Diagnostic only \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{9. 
Complete Call Graph}\label{complete-call-graph} - -\subsection{9.1 Full Journey: Intent → -Commit}\label{full-journey-intent-commit} - -\begin{verbatim} -USER ACTION - │ - ▼ -Engine::ingest_intent(intent_bytes) - ├─ compute_intent_id() // BLAKE3 content hash - ├─ make_node_id(), make_type_id() // Structural IDs - ├─ store.insert_node() // Create event node - ├─ store.set_node_attachment() // Attach intent payload - └─ store.insert_edge() // Pending edge to inbox - │ - ▼ -Engine::begin() → TxId - ├─ tx_counter.wrapping_add(1) - ├─ live_txs.insert(tx_counter) - └─ TxId::from_raw(tx_counter) - │ - ▼ -Engine::dispatch_next_intent(tx) // (or manual apply) - │ - ▼ -Engine::apply(tx, rule_name, scope) - └─ Engine::apply_in_warp(tx, warp_id, rule_name, scope, &[]) - ├─ rules.get(rule_name) // Lookup rule - ├─ GraphView::new(store) // Read-only view - ├─ (rule.matcher)(view, scope) // Match check - ├─ scope_hash() // BLAKE3 ordering key - ├─ (rule.compute_footprint)(view, scope) // Footprint - └─ scheduler.enqueue(tx, PendingRewrite) - └─ PendingTx::enqueue() // Last-wins dedup - │ - ▼ -Engine::commit_with_receipt(tx) - │ - ├─[DRAIN] - │ scheduler.drain_for_tx(tx) - │ └─ PendingTx::drain_in_order() - │ └─ radix_sort() or sort_unstable_by() - │ 20-pass LSD radix sort - │ ORDER: (scope_hash, rule_id, nonce) - │ - ├─[RESERVE] - │ FOR rewrite IN drained: - │ scheduler.reserve(tx, &mut rewrite) - │ ├─ has_conflict(active, pr) - │ │ └─ GenSet::contains() × N // O(1) per check - │ └─ mark_all(active, pr) - │ └─ GenSet::mark() × M // O(1) per mark - │ - ├─[EXECUTE] - │ apply_reserved_rewrites(reserved, state_before) - │ FOR rewrite IN reserved: - │ (executor)(view, &scope, &mut delta) - │ └─ scoped.emit(op) - │ └─ delta.emit_with_origin(op, origin) - │ delta.finalize() // Sort ops - │ patch.apply_to_state(&mut self.state) - │ - ├─[MATERIALIZE] - │ bus.finalize() - │ - ├─[DELTA PATCH] - │ diff_state(&state_before, &self.state) - │ └─ Sort by WarpOp::sort_key() - │ 
WarpTickPatchV1::new(...) - │ └─ compute_patch_digest_v2() - │ - ├─[HASHES] - │ compute_state_root(&self.state, &self.current_root) - │ ├─ BFS reachability - │ └─ BLAKE3 over canonical encoding - │ compute_commit_hash_v2(state_root, parents, patch_digest, policy_id) - │ └─ BLAKE3(version || parents || state_root || patch_digest || policy_id) - │ - ├─[SNAPSHOT] - │ Snapshot { root, hash, parents, digests..., policy_id, tx } - │ - └─[RECORD] - tick_history.push((snapshot, receipt, patch)) - live_txs.remove(&tx.value()) - scheduler.finalize_tx(tx) - │ - ▼ -RETURN: (Snapshot, TickReceipt, WarpTickPatchV1) -\end{verbatim} - -\subsection{9.2 File Index}\label{file-index} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Component & Primary File & Key Lines \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -Intent Ingestion & \texttt{engine\_impl.rs} & 1216-1281 \\ -Identity Hashing & \texttt{ident.rs} & 85-109 \\ -Transaction Begin & \texttt{engine\_impl.rs} & 711-719 \\ -Rule Apply & \texttt{engine\_impl.rs} & 730-806 \\ -Footprint & \texttt{footprint.rs} & 131-152 \\ -Scheduler Enqueue & \texttt{scheduler.rs} & 102-105, 331-355 \\ -Radix Sort & \texttt{scheduler.rs} & 360-413, 481-498 \\ -Reserve/Conflict & \texttt{scheduler.rs} & 134-278 \\ -GenSet & \texttt{scheduler.rs} & 509-535 \\ -BOAW Execute & \texttt{boaw/exec.rs} & 61-152 \\ -Shard Routing & \texttt{boaw/shard.rs} & 82-120 \\ -Delta Merge & \texttt{boaw/merge.rs} & 36-75 \\ -TickDelta & \texttt{tick\_delta.rs} & 38-172 \\ -WarpOp Sort Key & \texttt{tick\_patch.rs} & 207-287 \\ -State Mutations & \texttt{graph.rs} & 175-412 \\ -Patch Apply & \texttt{tick\_patch.rs} & 434-561 \\ -Diff State & \texttt{tick\_patch.rs} & 979-1069 \\ -State Root Hash & \texttt{snapshot.rs} & 88-209 \\ -Commit Hash v2 & \texttt{snapshot.rs} & 244-263 \\ -Patch Digest & \texttt{tick\_patch.rs} & 755-774 \\ -Commit Orchestrator & \texttt{engine\_impl.rs} & 
837-954 \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix A: Complexity -Summary}\label{appendix-a-complexity-summary} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Operation & Complexity & Notes \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{ingest\_intent} & O(1) & Fixed structural insertions \\ -\texttt{begin} & O(1) & Counter increment + set insert \\ -\texttt{apply} & O(m) & m = footprint size \\ -\texttt{drain\_for\_tx} (radix) & O(n) & n = candidates, 20 passes \\ -\texttt{reserve} per rewrite & O(m) & m = footprint size, O(1) per -check \\ -\texttt{execute\_parallel} & O(n/w) & n = items, w = workers \\ -\texttt{merge\_deltas} & O(k log k) & k = total ops (sort + dedup) \\ -\texttt{compute\_state\_root} & O(V + E) & V = nodes, E = edges \\ -\texttt{compute\_commit\_hash\_v2} & O(P) & P = parents \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix B: Determinism -Boundaries}\label{appendix-b-determinism-boundaries} - -\subsection{Guaranteed Deterministic}\label{guaranteed-deterministic} - -\begin{itemize} -\tightlist -\item - Radix sort ordering (20-pass LSD) -\item - BTreeMap/BTreeSet iteration -\item - BLAKE3 hashing -\item - GenSet conflict detection -\item - Canonical merge deduplication -\end{itemize} - -\subsection{Intentionally Non-Deterministic (Handled by -Merge)}\label{intentionally-non-deterministic-handled-by-merge} - -\begin{itemize} -\tightlist -\item - Worker execution order in BOAW -\item - Shard claim order (atomic counter) -\end{itemize} - -\subsection{Protocol Constants -(Frozen)}\label{protocol-constants-frozen} - -\begin{itemize} -\tightlist -\item - \texttt{NUM\_SHARDS\ =\ 256} -\item - \texttt{SHARD\_MASK\ =\ 255} -\item - Shard routing: \texttt{LE\_u64(node\_id{[}0..8{]})\ \&\ 255} -\item - Commit hash v2 version tag: \texttt{0x02\ 0x00} 
-\end{itemize} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\emph{Document generated 2026-01-18. File paths and line numbers -accurate as of this date.} - -\backmatter -\end{document} diff --git a/docs/archive/study/echo-visual-atlas-with-diagrams.pdf b/docs/archive/study/echo-visual-atlas-with-diagrams.pdf deleted file mode 100644 index 1c8767b1..00000000 Binary files a/docs/archive/study/echo-visual-atlas-with-diagrams.pdf and /dev/null differ diff --git a/docs/archive/study/echo-visual-atlas-with-diagrams.tex b/docs/archive/study/echo-visual-atlas-with-diagrams.tex deleted file mode 100644 index 8e33ecc3..00000000 --- a/docs/archive/study/echo-visual-atlas-with-diagrams.tex +++ /dev/null @@ -1,279 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -% Options for packages loaded elsewhere -\PassOptionsToPackage{unicode}{hyperref} -\PassOptionsToPackage{hyphens}{url} -\documentclass[ -]{book} -\usepackage[letterpaper, margin=1in]{geometry} -\usepackage{xcolor} -\usepackage{amsmath,amssymb} -% Enable section numbering (sections within chapters) -\setcounter{secnumdepth}{2} -\usepackage{iftex} -\ifPDFTeX - \usepackage[T1]{fontenc} - \usepackage[utf8]{inputenc} - \usepackage{textcomp} % provide euro and other symbols -\else % if luatex or xetex - \usepackage{unicode-math} % this also loads fontspec - \defaultfontfeatures{Scale=MatchLowercase} - \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} -\fi -\usepackage{lmodern} -% Use upquote if available, for straight quotes in verbatim environments -\IfFileExists{upquote.sty}{\usepackage{upquote}}{} -\IfFileExists{microtype.sty}{% use microtype if available - \usepackage[]{microtype} - \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts -}{} -\makeatletter -\@ifundefined{KOMAClassName}{% if non-KOMA class - \IfFileExists{parskip.sty}{% - \usepackage{parskip} - }{% else - \setlength{\parindent}{0pt} - \setlength{\parskip}{6pt 
plus 2pt minus 1pt}} -}{% if KOMA class - \KOMAoptions{parskip=half}} -\makeatother -\usepackage{color} -\usepackage{fancyvrb} -\newcommand{\VerbBar}{|} -\newcommand{\VERB}{\Verb[commandchars=\\\{\}]} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} -% Add ',fontsize=\small' for more characters per line -\newenvironment{Shaded}{}{} -\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}} -\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}} -\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}} -\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}} -\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}} -\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}} -\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\ExtensionTok}[1]{#1} -\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}} -\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}} -\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\NormalTok}[1]{#1} -\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}} -\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}} 
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}} -\newcommand{\RegionMarkerTok}[1]{#1} -\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}} -\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}} -\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\usepackage{graphicx} -\usepackage[export]{adjustbox} -\usepackage{longtable,booktabs,array} -\usepackage{calc} % for calculating minipage widths -% Correct order of tables after \paragraph or \subparagraph -\usepackage{etoolbox} -\makeatletter -\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} -\makeatother -% Allow footnotes in longtable head/foot -\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} -\makesavenoteenv{longtable} -\setlength{\emergencystretch}{3em} % prevent overfull lines -\providecommand{\tightlist}{% - \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} -\usepackage{bookmark} -\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available -\urlstyle{same} -\hypersetup{ - hidelinks, - pdfcreator={LaTeX via pandoc}} - -% Single source of truth for document date -\newcommand{\docdate}{2026-01-19} - -\author{Echo Project Contributors} -\date{\docdate} - -\begin{document} -\chapter{Echo Visual Atlas}\label{echo-visual-atlas} - -\begin{quote} -Standalone diagrams for understanding Echo's architecture. 
These -diagrams complement the main guide ``What Makes Echo Tick?'' -\end{quote} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{The Complete Tick Pipeline}\label{the-complete-tick-pipeline} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-06.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{BOAW Parallel Execution Model}\label{boaw-parallel-execution-model} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-08.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Virtual Shard Routing}\label{virtual-shard-routing} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-10.pdf} -\end{center} - -\subsection{Test Vectors (Frozen -Protocol)}\label{test-vectors-frozen-protocol} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Input (first 8 bytes) & LE u64 & Shard \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{0xDEADBEEFCAFEBABE} & \texttt{0xBEBAFECAEFBEADDE} & 190 -(0xBE) \\ -\texttt{0x0000000000000000} & \texttt{0x0000000000000000} & 0 \\ -\texttt{0x2A00000000000000} & \texttt{0x000000000000002A} & 42 \\ -\texttt{0xFFFFFFFFFFFFFFFF} & \texttt{0xFFFFFFFFFFFFFFFF} & 255 \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Two-Plane WARP Architecture}\label{two-plane-warp-architecture} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-03.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{GraphView Contract Enforcement}\label{graphview-contract-enforcement} - -\begin{center} -\includegraphics[max width=\textwidth,max 
height=0.4\textheight,keepaspectratio]{diagrams/tour-11.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{State Root Hash Computation}\label{state-root-hash-computation} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-09.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Commit Hash v2 Structure}\label{commit-hash-v2-structure} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-07.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{WSC Snapshot Format}\label{wsc-snapshot-format} - -\begin{verbatim} -┌─────────────────────────────────────────────────────────────────────────┐ -│ WSC SNAPSHOT FILE │ -├─────────────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ HEADER (fixed size) │ │ -│ │ ┌──────────┬──────────┬──────────┬──────────┬──────────┐ │ │ -│ │ │ magic │ version │ node_cnt │ edge_cnt │ offsets │ │ │ -│ │ │ 8 bytes │ 8 bytes │ 8 bytes │ 8 bytes │ 8×N bytes│ │ │ -│ │ └──────────┴──────────┴──────────┴──────────┴──────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ NODES TABLE (sorted by NodeId, 8-byte aligned) │ │ -│ │ ┌─────────────────┬─────────────────┬─────────────────┐ │ │ -│ │ │ NodeRow │ NodeRow │ NodeRow │ ... 
│ │ -│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │ -│ │ │ [id:32][type:32]│ [id:32][type:32]│ [id:32][type:32]│ │ │ -│ │ └─────────────────┴─────────────────┴─────────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ EDGES TABLE (sorted by EdgeId, 8-byte aligned) │ │ -│ │ ┌─────────────────────────┬─────────────────────────┐ │ │ -│ │ │ EdgeRow │ EdgeRow │ ... │ │ -│ │ │ 128 bytes │ 128 bytes │ │ │ -│ │ │[id:32][from:32][to:32] │[id:32][from:32][to:32] │ │ │ -│ │ │[type:32] │[type:32] │ │ │ -│ │ └─────────────────────────┴─────────────────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ OUT_INDEX (per-node ranges into out_edges) │ │ -│ │ ┌──────────────┬──────────────┬──────────────┐ │ │ -│ │ │ Range │ Range │ Range │ ... │ │ -│ │ │ 16 bytes │ 16 bytes │ 16 bytes │ │ │ -│ │ │[start:8][len:8]│[start:8][len:8]│[start:8][len:8]│ │ │ -│ │ └──────────────┴──────────────┴──────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ ATTACHMENT INDEX (per-slot ranges) │ │ -│ │ Similar structure to OUT_INDEX │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ BLOB ARENA (variable-length payloads) │ │ -│ │ ┌─────────────────────────────────────────────────────────────┐ │ │ -│ │ │ [payload bytes...] [payload bytes...] [payload bytes...] 
...│ │ │ -│ │ └─────────────────────────────────────────────────────────────┘ │ │ -│ │ Referenced by (offset: u64, length: u64) tuples │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Footprint Independence Check}\label{footprint-independence-check} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-08.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Complete Data Flow: Intent to Render}\label{complete-data-flow-intent-to-render} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-02.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Viewer Event Loop}\label{viewer-event-loop} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-15.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\emph{Visual Atlas generated \docdate. Use alongside ``What Makes Echo -Tick?'' for complete understanding.} - -\end{document} diff --git a/docs/archive/study/echo-visual-atlas.md b/docs/archive/study/echo-visual-atlas.md deleted file mode 100644 index c65681c9..00000000 --- a/docs/archive/study/echo-visual-atlas.md +++ /dev/null @@ -1,662 +0,0 @@ - - - -# Echo Visual Atlas - -> Standalone diagrams for understanding Echo's architecture. -> These diagrams complement the main guide "What Makes Echo Tick?" - ---- - -## 1. 
The Complete Tick Pipeline - -```mermaid -flowchart TB - subgraph PHASE1["Phase 1: BEGIN"] - B1[engine.begin] - B2[Increment tx_counter] - B3[Add to live_txs] - B4[Return TxId] - B1 --> B2 --> B3 --> B4 - end - - subgraph PHASE2["Phase 2: APPLY (0..N times)"] - A1[engine.apply] - A2{Matcher?} - A3[Compute Footprint] - A4[Create PendingRewrite] - A5[Enqueue to Scheduler] - A6[NoMatch] - A1 --> A2 - A2 -->|true| A3 --> A4 --> A5 - A2 -->|false| A6 - end - - subgraph PHASE3["Phase 3: COMMIT"] - subgraph DRAIN["3a. Drain"] - D1[Radix sort pending] - D2[Canonical order] - end - subgraph RESERVE["3b. Reserve"] - R1[For each rewrite] - R2{Footprint conflict?} - R3[Accept] - R4[Reject + witness] - R1 --> R2 - R2 -->|no| R3 - R2 -->|yes| R4 - end - subgraph EXECUTE["3c. Execute"] - E1[For each accepted] - E2[Call executor] - E3[Emit to TickDelta] - E1 --> E2 --> E3 - end - subgraph MERGE["3d. Merge"] - M1[Collect all deltas] - M2[Sort by key+origin] - M3[Dedupe/detect conflicts] - M1 --> M2 --> M3 - end - subgraph FINALIZE["3e. Finalize"] - F1[Apply ops to state] - F2[Update indexes] - F1 --> F2 - end - DRAIN --> RESERVE --> EXECUTE --> MERGE --> FINALIZE - end - - subgraph PHASE4["Phase 4: HASH"] - H1[BFS reachable nodes] - H2[Canonical encode] - H3[BLAKE3 state_root] - H4[BLAKE3 patch_digest] - H5[Compute commit_hash] - H1 --> H2 --> H3 --> H4 --> H5 - end - - subgraph PHASE5["Phase 5: RECORD"] - REC1[Append Snapshot] - REC2[Append Receipt] - REC3[Append Patch] - REC1 --> REC2 --> REC3 - end - - PHASE1 --> PHASE2 --> PHASE3 --> PHASE4 --> PHASE5 -``` - ---- - -## 2. BOAW Parallel Execution Model - -```mermaid -flowchart LR - subgraph INPUT["Input"] - I[ExecItems
n items] - end - - subgraph PARTITION["Partition Phase"] - P[partition_into_shards] - S0[Shard 0] - S1[Shard 1] - S2[...] - S255[Shard 255] - P --> S0 - P --> S1 - P --> S2 - P --> S255 - end - - subgraph EXECUTE["Execute Phase (Parallel)"] - W0[Worker 0
TickDelta] - W1[Worker 1
TickDelta] - W2[Worker 2
TickDelta] - WN[Worker N
TickDelta] - end - - subgraph STEAL["Work Stealing"] - AC[AtomicUsize
next_shard] - AC -.->|fetch_add| W0 - AC -.->|fetch_add| W1 - AC -.->|fetch_add| W2 - AC -.->|fetch_add| WN - end - - subgraph MERGE["Merge Phase"] - MG[merge_deltas] - SORT[Sort by key+origin] - DEDUP[Dedupe identical] - MG --> SORT --> DEDUP - end - - subgraph OUTPUT["Output"] - O[Canonical Ops
deterministic] - end - - I --> P - S0 --> W0 - S1 --> W1 - S2 --> W2 - S255 --> WN - W0 --> MG - W1 --> MG - W2 --> MG - WN --> MG - DEDUP --> O -``` - ---- - -## 3. Virtual Shard Routing - -```mermaid -flowchart TD - subgraph NODEID["NodeId (32 bytes)"] - B0["byte 0"] - B1["byte 1"] - B2["byte 2"] - B3["byte 3"] - B4["byte 4"] - B5["byte 5"] - B6["byte 6"] - B7["byte 7"] - REST["bytes 8-31
(ignored)"] - end - - subgraph EXTRACT["Extract First 8 Bytes"] - LE["u64::from_le_bytes
[b0,b1,b2,b3,b4,b5,b6,b7]"] - end - - subgraph MASK["Apply Shard Mask"] - AND["val & 0xFF
(NUM_SHARDS - 1)"] - end - - subgraph RESULT["Shard ID"] - SID["0..255"] - end - - B0 --> LE - B1 --> LE - B2 --> LE - B3 --> LE - B4 --> LE - B5 --> LE - B6 --> LE - B7 --> LE - LE --> AND --> SID -``` - -### Test Vectors (Frozen Protocol) - -| Input (first 8 bytes) | LE u64 | Shard | -| --------------------- | -------------------- | ---------- | -| `0xDEADBEEFCAFEBABE` | `0xBEBAFECAEFBEADDE` | 222 (0xDE) | -| `0x0000000000000000` | `0x0000000000000000` | 0 | -| `0x2A00000000000000` | `0x000000000000002A` | 42 | -| `0xFFFFFFFFFFFFFFFF` | `0xFFFFFFFFFFFFFFFF` | 255 | - ---- - -## 4. Two-Plane WARP Architecture - -```mermaid -graph TB - subgraph SKELETON["Skeleton Plane (Structure)"] - direction TB - N1["Node A
id: 0x1234"] - N2["Node B
id: 0x5678"] - N3["Node C
id: 0x9ABC"] - - N1 -->|"edge:link
id: 0xE001"| N2 - N1 -->|"edge:child
id: 0xE002"| N3 - N2 -->|"edge:ref
id: 0xE003"| N3 - end - - subgraph ALPHA["Attachment Plane (α)"] - direction TB - A1["N1.α['title']
Atom{string, 'Home'}"] - A2["N2.α['url']
Atom{string, '/page/b'}"] - A3["N3.α['body']
Atom{html, '<p>...</p>'}"] - A4["N3.α['portal']
Descend('child-instance')"] - end - - N1 -.- A1 - N2 -.- A2 - N3 -.- A3 - N3 -.- A4 - - subgraph DESCENDED["Descended Instance"] - direction TB - C1["Child Root
id: 0xCCC1"] - C2["Child Node
id: 0xCCC2"] - C1 --> C2 - end - - A4 -.->|"Descend pointer"| C1 -``` - ---- - -## 5. GraphView Contract Enforcement - -```mermaid -flowchart TD - subgraph EXECUTOR["Executor Function"] - EX["fn executor(view: GraphView, scope: &NodeId, delta: &mut TickDelta)"] - end - - subgraph READ["Read Path (GraphView)"] - R1["view.node(id)"] - R2["view.edges_from(id)"] - R3["view.attachment(id, key)"] - R4["view.has_edge(id)"] - - R1 --> GS - R2 --> GS - R3 --> GS - R4 --> GS - end - - subgraph GS["GraphStore (Immutable)"] - NODES["nodes: BTreeMap"] - EDGES["edges_from: BTreeMap"] - ATTACH["attachments: BTreeMap"] - end - - subgraph WRITE["Write Path (TickDelta)"] - W1["delta.emit(UpsertNode)"] - W2["delta.emit(UpsertEdge)"] - W3["delta.emit(SetAttachment)"] - W4["delta.emit(DeleteNode)"] - - W1 --> OPS - W2 --> OPS - W3 --> OPS - W4 --> OPS - end - - subgraph OPS["Accumulated Ops"] - OPLIST["Vec<(WarpOp, OpOrigin)>"] - end - - EX --> READ - EX --> WRITE - - style GS fill:#e8f5e9 - style OPS fill:#fff3e0 -``` - ---- - -## 6. State Root Hash Computation - -```mermaid -flowchart TD - subgraph BFS["1. Deterministic BFS"] - START["Start at root"] - VISIT["Visit reachable nodes"] - DESCEND["Follow Descend() attachments"] - COLLECT["Collect reachable set"] - START --> VISIT --> DESCEND --> COLLECT - end - - subgraph ENCODE["2. Canonical Encoding"] - subgraph INSTANCE["Per Instance (BTreeMap order)"] - IH["warp_id header"] - subgraph NODE["Per Node (ascending NodeId)"] - NH["node_id[32]"] - NT["node_type[32]"] - subgraph EDGE["Per Edge (ascending EdgeId)"] - EH["edge_id[32]"] - ET["edge_type[32]"] - ED["to_node[32]"] - end - subgraph ATTACH["Per Attachment"] - AK["key_len[8] + key"] - AT["type_id[32]"] - AV["value_len[8] + value"] - end - end - end - end - - subgraph HASH["3. BLAKE3 Digest"] - STREAM["Byte stream"] - DIGEST["state_root[32]"] - STREAM --> DIGEST - end - - BFS --> ENCODE --> HASH -``` - ---- - -## 7. 
Commit Hash v2 Structure - -```mermaid -flowchart LR - subgraph INPUTS["Commit Hash Inputs"] - V["version[4]
protocol tag"] - P["parents[]
parent hashes"] - SR["state_root[32]
graph hash"] - PD["patch_digest[32]
ops hash"] - PI["policy_id[4]
aion policy"] - end - - subgraph CONCAT["Concatenation"] - BYTES["version || parents || state_root || patch_digest || policy_id"] - end - - subgraph OUTPUT["Output"] - CH["commit_hash[32]
BLAKE3"] - end - - V --> BYTES - P --> BYTES - SR --> BYTES - PD --> BYTES - PI --> BYTES - BYTES --> CH -``` - ---- - -## 8. WSC Snapshot Format - -```text -┌─────────────────────────────────────────────────────────────────────────┐ -│ WSC SNAPSHOT FILE │ -├─────────────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ HEADER (fixed size) │ │ -│ │ ┌──────────┬──────────┬──────────┬──────────┬──────────┐ │ │ -│ │ │ magic │ version │ node_cnt │ edge_cnt │ offsets │ │ │ -│ │ │ 8 bytes │ 8 bytes │ 8 bytes │ 8 bytes │ 8×N bytes│ │ │ -│ │ └──────────┴──────────┴──────────┴──────────┴──────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ NODES TABLE (sorted by NodeId, 8-byte aligned) │ │ -│ │ ┌─────────────────┬─────────────────┬─────────────────┐ │ │ -│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │ -│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │ -│ │ │ [id:32][type:32]│ [id:32][type:32]│ [id:32][type:32]│ │ │ -│ │ └─────────────────┴─────────────────┴─────────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ EDGES TABLE (sorted by EdgeId, 8-byte aligned) │ │ -│ │ ┌─────────────────────────┬─────────────────────────┐ │ │ -│ │ │ EdgeRow │ EdgeRow │ ... │ │ -│ │ │ 128 bytes │ 128 bytes │ │ │ -│ │ │[id:32][from:32][to:32] │[id:32][from:32][to:32] │ │ │ -│ │ │[type:32] │[type:32] │ │ │ -│ │ └─────────────────────────┴─────────────────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ OUT_INDEX (per-node ranges into out_edges) │ │ -│ │ ┌──────────────┬──────────────┬──────────────┐ │ │ -│ │ │ Range │ Range │ Range │ ... 
│ │ -│ │ │ 16 bytes │ 16 bytes │ 16 bytes │ │ │ -│ │ │[start:8][len:8]│[start:8][len:8]│[start:8][len:8]│ │ │ -│ │ └──────────────┴──────────────┴──────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ ATTACHMENT INDEX (per-slot ranges) │ │ -│ │ Similar structure to OUT_INDEX │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ BLOB ARENA (variable-length payloads) │ │ -│ │ ┌─────────────────────────────────────────────────────────────┐ │ │ -│ │ │ [payload bytes...] [payload bytes...] [payload bytes...] ...│ │ │ -│ │ └─────────────────────────────────────────────────────────────┘ │ │ -│ │ Referenced by (offset: u64, length: u64) tuples │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────────────┘ -``` - ---- - -## 9. Footprint Independence Check - -```mermaid -flowchart TD - subgraph REWRITE1["Rewrite A"] - R1_READ["reads: {N1, N2}"] - R1_WRITE["writes: {N3}"] - end - - subgraph REWRITE2["Rewrite B"] - R2_READ["reads: {N4, N5}"] - R2_WRITE["writes: {N6}"] - end - - subgraph REWRITE3["Rewrite C"] - R3_READ["reads: {N1, N3}"] - R3_WRITE["writes: {N7}"] - end - - subgraph CHECK["Independence Check"] - C1{{"A ∩ B"}} - C2{{"A ∩ C"}} - C3{{"B ∩ C"}} - end - - subgraph RESULT["Results"] - OK1["A || B: OK
(no overlap)"] - CONFLICT["A || C: CONFLICT
(A.write ∩ C.read = {N3})"] - OK2["B || C: OK
(no overlap)"] - end - - R1_WRITE --> C1 - R2_WRITE --> C1 - R1_WRITE --> C2 - R3_READ --> C2 - R2_WRITE --> C3 - R3_WRITE --> C3 - - C1 --> OK1 - C2 --> CONFLICT - C3 --> OK2 - - style CONFLICT fill:#ffcdd2 - style OK1 fill:#c8e6c9 - style OK2 fill:#c8e6c9 -``` - ---- - -## 9b. FootprintGuard Enforcement Flow - -```mermaid -flowchart TD - EXEC["execute_item_enforced()"] - SNAP["ops_before = delta.len()"] - - subgraph parallel["Two independent catch_unwind calls"] - CATCH_EXEC["catch_unwind(executor)"] - CATCH_CHECK["catch_unwind(check_op loop)"] - end - - MATCH{"Match (exec_panic, check_result)"} - - OK["Ok(delta)"] - ERR_SINGLE["Err(PoisonedDelta)"] - ERR_BOTH["Err(FootprintViolationWithPanic)"] - - EXEC --> SNAP --> CATCH_EXEC - SNAP --> CATCH_CHECK - CATCH_EXEC --> MATCH - CATCH_CHECK --> MATCH - - MATCH -->|"(None, Ok)" | OK - MATCH -->|"(Some, Ok) or (None, Err)"| ERR_SINGLE - MATCH -->|"(Some, Err)"| ERR_BOTH - - style OK fill:#c8e6c9 - style ERR_SINGLE fill:#fff9c4 - style ERR_BOTH fill:#ffcdd2 -``` - -**Key:** Footprint enforcement is active when `cfg(debug_assertions)` or the -`footprint_enforce_release` feature is enabled, **unless** the `unsafe_graph` -feature is set. The `unsafe_graph` feature is mutually exclusive with enforcement -and disables all footprint validation—no `FootprintViolation` can occur while -`unsafe_graph` is active. - -When enforcement is active, every `ExecItem` execution is wrapped by -`execute_item_enforced()`. Two independent `catch_unwind` boundaries run: -one for the executor, one for the `check_op` validation loop. Both run -regardless of whether the other panics. Results are combined in a 3-way match: - -- `(None, Ok)` → success, return `Ok(delta)` -- `(Some, Ok)` or `(None, Err)` → single panic, return `Err(PoisonedDelta)` -- `(Some, Err)` → both panicked, return `Err(FootprintViolationWithPanic)` wrapping both payloads - ---- - -## 10. 
Complete Data Flow: Intent to Render - -```mermaid -sequenceDiagram - autonumber - participant U as User - participant V as Viewer - participant H as Session Hub - participant E as Engine - participant S as Scheduler - participant B as BOAW - participant G as GraphStore - participant W as WSC - - U->>V: Click action - V->>V: Encode intent bytes - V->>H: ingest_intent(bytes) - H->>E: forward intent - - Note over E: Phase 1: BEGIN - E->>E: begin() → TxId - - Note over E: Intent Processing - E->>E: dispatch_next_intent(tx) - E->>G: GraphView lookup - G-->>E: intent data - - Note over E: Phase 2: APPLY - E->>S: apply(tx, rule, scope) - S->>G: matcher(view, scope) - G-->>S: match result - S->>S: compute footprint - S->>S: enqueue PendingRewrite - - Note over E: Phase 3: COMMIT - E->>S: commit(tx) - S->>S: radix sort (drain) - S->>S: independence check (reserve) - - Note over B: Parallel Execution - S->>B: execute_parallel(items) - B->>B: partition into shards - par Worker 0 - B->>G: read via GraphView - G-->>B: data - B->>B: emit to TickDelta - and Worker 1 - B->>G: read via GraphView - G-->>B: data - B->>B: emit to TickDelta - and Worker N - B->>G: read via GraphView - G-->>B: data - B->>B: emit to TickDelta - end - B->>B: merge_deltas (canonical) - B-->>S: merged ops - - S->>G: apply ops - - Note over E: Phase 4: HASH - E->>G: compute state_root - G-->>E: hash - E->>E: compute commit_hash - - Note over E: Phase 5: RECORD - E->>W: store snapshot - E->>E: append to history - - Note over H: Emit to Tools - E->>H: WarpDiff - H->>V: WarpFrame - - Note over V: Apply & Render - V->>V: apply_op (each op) - V->>V: verify state_hash - V->>V: render frame - V->>U: Display result -``` - ---- - -## 11. Viewer Event Loop - -```mermaid -flowchart TD - subgraph FRAME["Frame Loop"] - START[frame start] - - subgraph DRAIN["1. Drain Session"] - DN[drain_notifications] - DF[drain_frames] - end - - subgraph PROCESS["2. 
Process Frames"]
-    PF[process_frames]
-    SNAP{Snapshot?}
-    DIFF{Diff?}
-    APPLY[apply_op each]
-    VERIFY[verify hash]
-  end
-
-  subgraph EVENTS["3. Handle Events"]
-    UE[apply_ui_event]
-    REDUCE[reduce pure]
-    EFFECTS[run effects]
-  end
-
-  subgraph RENDER["4. Render"]
-    MATCH{screen?}
-    TITLE[draw_title]
-    VIEW[draw_view]
-    HUD[draw_hud]
-  end
-
-  END[frame end]
-
-  START --> DRAIN
-  DN --> DF
-  DF --> PROCESS
-  PF --> SNAP
-  SNAP -->|yes| APPLY
-  PF --> DIFF
-  DIFF -->|yes| APPLY
-  APPLY --> VERIFY
-  VERIFY --> EVENTS
-  UE --> REDUCE
-  REDUCE --> EFFECTS
-  EFFECTS --> RENDER
-  MATCH -->|Title| TITLE
-  MATCH -->|View| VIEW
-  VIEW --> HUD
-  TITLE --> END
-  HUD --> END
-end
-```
-
----
-
-_Visual Atlas generated 2026-01-25. Use alongside "What Makes Echo Tick?" for complete understanding._
diff --git a/docs/archive/study/echo-visual-atlas.tex b/docs/archive/study/echo-visual-atlas.tex
deleted file mode 100644
index f2a3ff32..00000000
--- a/docs/archive/study/echo-visual-atlas.tex
+++ /dev/null
@@ -1,760 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Options for packages loaded elsewhere
-\PassOptionsToPackage{unicode}{hyperref}
-\PassOptionsToPackage{hyphens}{url}
-\documentclass[
-]{book}
-\usepackage[letterpaper, margin=1in]{geometry}
-\usepackage{xcolor}
-\usepackage{amsmath,amssymb}
-\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
-\usepackage{iftex}
-\ifPDFTeX
-  \usepackage[T1]{fontenc}
-  \usepackage[utf8]{inputenc}
-  \usepackage{textcomp} % provide euro and other symbols
-\else % if luatex or xetex
-  \usepackage{unicode-math} % this also loads fontspec
-  \defaultfontfeatures{Scale=MatchLowercase}
-  \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
-\fi
-\usepackage{lmodern}
-\ifPDFTeX\else
-  % xetex/luatex font selection
-\fi
-% Use upquote if available, for straight quotes in verbatim environments
-\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
-\IfFileExists{microtype.sty}{%
use microtype if available
-  \usepackage[]{microtype}
-  \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
-}{}
-\makeatletter
-\@ifundefined{KOMAClassName}{% if non-KOMA class
-  \IfFileExists{parskip.sty}{%
-    \usepackage{parskip}
-  }{% else
-    \setlength{\parindent}{0pt}
-    \setlength{\parskip}{6pt plus 2pt minus 1pt}}
-}{% if KOMA class
-  \KOMAoptions{parskip=half}}
-\makeatother
-\usepackage{color}
-\usepackage{fancyvrb}
-\newcommand{\VerbBar}{|}
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
-% Add ',fontsize=\small' for more characters per line
-\newenvironment{Shaded}{}{}
-\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}}
-\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}}
-\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}}
-\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}}
-\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}}
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}}
-\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\ExtensionTok}[1]{#1}
-\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}}
-\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}}
-\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\NormalTok}[1]{#1}
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}}
-\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}}
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}}
-\newcommand{\RegionMarkerTok}[1]{#1}
-\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}}
-\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}}
-\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\usepackage{longtable,booktabs,array}
-\newcounter{none} % for unnumbered tables
-\usepackage{calc} % for calculating minipage widths
-% Correct order of tables after \paragraph or \subparagraph
-\usepackage{etoolbox}
-\makeatletter
-\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
-\makeatother
-% Allow footnotes in longtable head/foot
-\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
-\makesavenoteenv{longtable}
-\setlength{\emergencystretch}{3em} % prevent overfull lines
-\providecommand{\tightlist}{%
-  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
-\usepackage{bookmark}
-\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
-\urlstyle{same}
-\hypersetup{
-  hidelinks,
-  pdfcreator={LaTeX via pandoc}}
-
-\author{}
-\date{}
-
-\begin{document}
-\frontmatter
-
-\mainmatter
-\chapter{Echo Visual Atlas}\label{echo-visual-atlas}
-
-\begin{quote}
-Standalone diagrams for understanding Echo's architecture.
These -diagrams complement the main guide ``What Makes Echo Tick?'' -\end{quote} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{1. The Complete Tick -Pipeline}\label{the-complete-tick-pipeline} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart TB} -\NormalTok{ subgraph PHASE1["Phase 1: BEGIN"]} -\NormalTok{ B1[engine.begin]} -\NormalTok{ B2[Increment tx\_counter]} -\NormalTok{ B3[Add to live\_txs]} -\NormalTok{ B4[Return TxId]} -\NormalTok{ B1 {-}{-}\textgreater{} B2 {-}{-}\textgreater{} B3 {-}{-}\textgreater{} B4} -\NormalTok{ end} - -\NormalTok{ subgraph PHASE2["Phase 2: APPLY (0..N times)"]} -\NormalTok{ A1[engine.apply]} -\NormalTok{ A2\{Matcher?\}} -\NormalTok{ A3[Compute Footprint]} -\NormalTok{ A4[Create PendingRewrite]} -\NormalTok{ A5[Enqueue to Scheduler]} -\NormalTok{ A6[NoMatch]} -\NormalTok{ A1 {-}{-}\textgreater{} A2} -\NormalTok{ A2 {-}{-}\textgreater{}|true| A3 {-}{-}\textgreater{} A4 {-}{-}\textgreater{} A5} -\NormalTok{ A2 {-}{-}\textgreater{}|false| A6} -\NormalTok{ end} - -\NormalTok{ subgraph PHASE3["Phase 3: COMMIT"]} -\NormalTok{ subgraph DRAIN["3a. Drain"]} -\NormalTok{ D1[Radix sort pending]} -\NormalTok{ D2[Canonical order]} -\NormalTok{ end} -\NormalTok{ subgraph RESERVE["3b. Reserve"]} -\NormalTok{ R1[For each rewrite]} -\NormalTok{ R2\{Footprint conflict?\}} -\NormalTok{ R3[Accept]} -\NormalTok{ R4[Reject + witness]} -\NormalTok{ R1 {-}{-}\textgreater{} R2} -\NormalTok{ R2 {-}{-}\textgreater{}|no| R3} -\NormalTok{ R2 {-}{-}\textgreater{}|yes| R4} -\NormalTok{ end} -\NormalTok{ subgraph EXECUTE["3c. Execute"]} -\NormalTok{ E1[For each accepted]} -\NormalTok{ E2[Call executor]} -\NormalTok{ E3[Emit to TickDelta]} -\NormalTok{ E1 {-}{-}\textgreater{} E2 {-}{-}\textgreater{} E3} -\NormalTok{ end} -\NormalTok{ subgraph MERGE["3d. 
Merge"]} -\NormalTok{ M1[Collect all deltas]} -\NormalTok{ M2[Sort by key+origin]} -\NormalTok{ M3[Dedupe/detect conflicts]} -\NormalTok{ M1 {-}{-}\textgreater{} M2 {-}{-}\textgreater{} M3} -\NormalTok{ end} -\NormalTok{ subgraph FINALIZE["3e. Finalize"]} -\NormalTok{ F1[Apply ops to state]} -\NormalTok{ F2[Update indexes]} -\NormalTok{ F1 {-}{-}\textgreater{} F2} -\NormalTok{ end} -\NormalTok{ DRAIN {-}{-}\textgreater{} RESERVE {-}{-}\textgreater{} EXECUTE {-}{-}\textgreater{} MERGE {-}{-}\textgreater{} FINALIZE} -\NormalTok{ end} - -\NormalTok{ subgraph PHASE4["Phase 4: HASH"]} -\NormalTok{ H1[BFS reachable nodes]} -\NormalTok{ H2[Canonical encode]} -\NormalTok{ H3[BLAKE3 state\_root]} -\NormalTok{ H4[BLAKE3 patch\_digest]} -\NormalTok{ H5[Compute commit\_hash]} -\NormalTok{ H1 {-}{-}\textgreater{} H2 {-}{-}\textgreater{} H3 {-}{-}\textgreater{} H4 {-}{-}\textgreater{} H5} -\NormalTok{ end} - -\NormalTok{ subgraph PHASE5["Phase 5: RECORD"]} -\NormalTok{ REC1[Append Snapshot]} -\NormalTok{ REC2[Append Receipt]} -\NormalTok{ REC3[Append Patch]} -\NormalTok{ REC1 {-}{-}\textgreater{} REC2 {-}{-}\textgreater{} REC3} -\NormalTok{ end} - -\NormalTok{ PHASE1 {-}{-}\textgreater{} PHASE2 {-}{-}\textgreater{} PHASE3 {-}{-}\textgreater{} PHASE4 {-}{-}\textgreater{} PHASE5} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{2. 
BOAW Parallel Execution -Model}\label{boaw-parallel-execution-model} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart LR} -\NormalTok{ subgraph INPUT["Input"]} -\NormalTok{ I[ExecItems\textless{}br/\textgreater{}n items]} -\NormalTok{ end} - -\NormalTok{ subgraph PARTITION["Partition Phase"]} -\NormalTok{ P[partition\_into\_shards]} -\NormalTok{ S0[Shard 0]} -\NormalTok{ S1[Shard 1]} -\NormalTok{ S2[...]} -\NormalTok{ S255[Shard 255]} -\NormalTok{ P {-}{-}\textgreater{} S0} -\NormalTok{ P {-}{-}\textgreater{} S1} -\NormalTok{ P {-}{-}\textgreater{} S2} -\NormalTok{ P {-}{-}\textgreater{} S255} -\NormalTok{ end} - -\NormalTok{ subgraph EXECUTE["Execute Phase (Parallel)"]} -\NormalTok{ W0[Worker 0\textless{}br/\textgreater{}TickDelta]} -\NormalTok{ W1[Worker 1\textless{}br/\textgreater{}TickDelta]} -\NormalTok{ W2[Worker 2\textless{}br/\textgreater{}TickDelta]} -\NormalTok{ WN[Worker N\textless{}br/\textgreater{}TickDelta]} -\NormalTok{ end} - -\NormalTok{ subgraph STEAL["Work Stealing"]} -\NormalTok{ AC[AtomicUsize\textless{}br/\textgreater{}next\_shard]} -\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| W0} -\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| W1} -\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| W2} -\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| WN} -\NormalTok{ end} - -\NormalTok{ subgraph MERGE["Merge Phase"]} -\NormalTok{ MG[merge\_deltas]} -\NormalTok{ SORT[Sort by key+origin]} -\NormalTok{ DEDUP[Dedupe identical]} -\NormalTok{ MG {-}{-}\textgreater{} SORT {-}{-}\textgreater{} DEDUP} -\NormalTok{ end} - -\NormalTok{ subgraph OUTPUT["Output"]} -\NormalTok{ O[Canonical Ops\textless{}br/\textgreater{}deterministic]} -\NormalTok{ end} - -\NormalTok{ I {-}{-}\textgreater{} P} -\NormalTok{ S0 {-}{-}\textgreater{} W0} -\NormalTok{ S1 {-}{-}\textgreater{} W1} -\NormalTok{ S2 {-}{-}\textgreater{} W2} -\NormalTok{ S255 {-}{-}\textgreater{} WN} -\NormalTok{ W0 {-}{-}\textgreater{} MG} -\NormalTok{ W1 {-}{-}\textgreater{} MG} -\NormalTok{ W2 
{-}{-}\textgreater{} MG} -\NormalTok{ WN {-}{-}\textgreater{} MG} -\NormalTok{ DEDUP {-}{-}\textgreater{} O} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{3. Virtual Shard Routing}\label{virtual-shard-routing} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart TD} -\NormalTok{ subgraph NODEID["NodeId (32 bytes)"]} -\NormalTok{ B0["byte 0"]} -\NormalTok{ B1["byte 1"]} -\NormalTok{ B2["byte 2"]} -\NormalTok{ B3["byte 3"]} -\NormalTok{ B4["byte 4"]} -\NormalTok{ B5["byte 5"]} -\NormalTok{ B6["byte 6"]} -\NormalTok{ B7["byte 7"]} -\NormalTok{ REST["bytes 8{-}31\textless{}br/\textgreater{}(ignored)"]} -\NormalTok{ end} - -\NormalTok{ subgraph EXTRACT["Extract First 8 Bytes"]} -\NormalTok{ LE["u64::from\_le\_bytes\textless{}br/\textgreater{}[b0,b1,b2,b3,b4,b5,b6,b7]"]} -\NormalTok{ end} - -\NormalTok{ subgraph MASK["Apply Shard Mask"]} -\NormalTok{ AND["val \& 0xFF\textless{}br/\textgreater{}(NUM\_SHARDS {-} 1)"]} -\NormalTok{ end} - -\NormalTok{ subgraph RESULT["Shard ID"]} -\NormalTok{ SID["0..255"]} -\NormalTok{ end} - -\NormalTok{ B0 {-}{-}\textgreater{} LE} -\NormalTok{ B1 {-}{-}\textgreater{} LE} -\NormalTok{ B2 {-}{-}\textgreater{} LE} -\NormalTok{ B3 {-}{-}\textgreater{} LE} -\NormalTok{ B4 {-}{-}\textgreater{} LE} -\NormalTok{ B5 {-}{-}\textgreater{} LE} -\NormalTok{ B6 {-}{-}\textgreater{} LE} -\NormalTok{ B7 {-}{-}\textgreater{} LE} -\NormalTok{ LE {-}{-}\textgreater{} AND {-}{-}\textgreater{} SID} -\end{Highlighting} -\end{Shaded} - -\subsection{Test Vectors (Frozen -Protocol)}\label{test-vectors-frozen-protocol} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Input (first 8 bytes) & LE u64 & Shard \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{0xDEADBEEFCAFEBABE} & \texttt{0xBEBAFECAEFBEADDE} & 190 -(0xBE) \\ -\texttt{0x0000000000000000} & \texttt{0x0000000000000000} & 0 \\ -\texttt{0x2A00000000000000} 
& \texttt{0x000000000000002A} & 42 \\ -\texttt{0xFFFFFFFFFFFFFFFF} & \texttt{0xFFFFFFFFFFFFFFFF} & 255 \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{4. Two-Plane WARP -Architecture}\label{two-plane-warp-architecture} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{graph TB} -\NormalTok{ subgraph SKELETON["Skeleton Plane (Structure)"]} -\NormalTok{ direction TB} -\NormalTok{ N1["Node A\textless{}br/\textgreater{}id: 0x1234"]} -\NormalTok{ N2["Node B\textless{}br/\textgreater{}id: 0x5678"]} -\NormalTok{ N3["Node C\textless{}br/\textgreater{}id: 0x9ABC"]} - -\NormalTok{ N1 {-}{-}\textgreater{}|"edge:link\textless{}br/\textgreater{}id: 0xE001"| N2} -\NormalTok{ N1 {-}{-}\textgreater{}|"edge:child\textless{}br/\textgreater{}id: 0xE002"| N3} -\NormalTok{ N2 {-}{-}\textgreater{}|"edge:ref\textless{}br/\textgreater{}id: 0xE003"| N3} -\NormalTok{ end} - -\NormalTok{ subgraph ALPHA["Attachment Plane (α)"]} -\NormalTok{ direction TB} -\NormalTok{ A1["N1.α[\textquotesingle{}title\textquotesingle{}]\textless{}br/\textgreater{}Atom\{string, \textquotesingle{}Home\textquotesingle{}\}"]} -\NormalTok{ A2["N2.α[\textquotesingle{}url\textquotesingle{}]\textless{}br/\textgreater{}Atom\{string, \textquotesingle{}/page/b\textquotesingle{}\}"]} -\NormalTok{ A3["N3.α[\textquotesingle{}body\textquotesingle{}]\textless{}br/\textgreater{}Atom\{html, \textquotesingle{}\<p\>...\</p\>\textquotesingle{}\}"]} -\NormalTok{ A4["N3.α[\textquotesingle{}portal\textquotesingle{}]\textless{}br/\textgreater{}Descend(\textquotesingle{}child{-}instance\textquotesingle{})"]} -\NormalTok{ end} - -\NormalTok{ N1 {-}.{-} A1} -\NormalTok{ N2 {-}.{-} A2} -\NormalTok{ N3 {-}.{-} A3} -\NormalTok{ N3 {-}.{-} A4} - -\NormalTok{ subgraph DESCENDED["Descended Instance"]} -\NormalTok{ direction TB} -\NormalTok{ C1["Child Root\textless{}br/\textgreater{}id: 0xCCC1"]} -\NormalTok{ C2["Child Node\textless{}br/\textgreater{}id: 0xCCC2"]} -\NormalTok{ C1 
{-}{-}\textgreater{} C2} -\NormalTok{ end} - -\NormalTok{ A4 {-}.{-}\textgreater{}|"Descend pointer"| C1} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{5. GraphView Contract -Enforcement}\label{graphview-contract-enforcement} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart TD} -\NormalTok{ subgraph EXECUTOR["Executor Function"]} -\NormalTok{ EX["fn executor(view: GraphView, scope: \&NodeId, delta: \&mut TickDelta)"]} -\NormalTok{ end} - -\NormalTok{ subgraph READ["Read Path (GraphView)"]} -\NormalTok{ R1["view.node(id)"]} -\NormalTok{ R2["view.edges\_from(id)"]} -\NormalTok{ R3["view.attachment(id, key)"]} -\NormalTok{ R4["view.has\_edge(id)"]} - -\NormalTok{ R1 {-}{-}\textgreater{} GS} -\NormalTok{ R2 {-}{-}\textgreater{} GS} -\NormalTok{ R3 {-}{-}\textgreater{} GS} -\NormalTok{ R4 {-}{-}\textgreater{} GS} -\NormalTok{ end} - -\NormalTok{ subgraph GS["GraphStore (Immutable)"]} -\NormalTok{ NODES["nodes: BTreeMap"]} -\NormalTok{ EDGES["edges\_from: BTreeMap"]} -\NormalTok{ ATTACH["attachments: BTreeMap"]} -\NormalTok{ end} - -\NormalTok{ subgraph WRITE["Write Path (TickDelta)"]} -\NormalTok{ W1["delta.emit(UpsertNode)"]} -\NormalTok{ W2["delta.emit(UpsertEdge)"]} -\NormalTok{ W3["delta.emit(SetAttachment)"]} -\NormalTok{ W4["delta.emit(DeleteNode)"]} - -\NormalTok{ W1 {-}{-}\textgreater{} OPS} -\NormalTok{ W2 {-}{-}\textgreater{} OPS} -\NormalTok{ W3 {-}{-}\textgreater{} OPS} -\NormalTok{ W4 {-}{-}\textgreater{} OPS} -\NormalTok{ end} - -\NormalTok{ subgraph OPS["Accumulated Ops"]} -\NormalTok{ OPLIST["Vec\<(WarpOp, OpOrigin)\>"]} -\NormalTok{ end} - -\NormalTok{ EX {-}{-}\textgreater{} READ} -\NormalTok{ EX {-}{-}\textgreater{} WRITE} - -\NormalTok{ style GS fill:\#e8f5e9} -\NormalTok{ style OPS fill:\#fff3e0} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{6. 
State Root Hash -Computation}\label{state-root-hash-computation} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart TD} -\NormalTok{ subgraph BFS["1. Deterministic BFS"]} -\NormalTok{ START["Start at root"]} -\NormalTok{ VISIT["Visit reachable nodes"]} -\NormalTok{ DESCEND["Follow Descend() attachments"]} -\NormalTok{ COLLECT["Collect reachable set"]} -\NormalTok{ START {-}{-}\textgreater{} VISIT {-}{-}\textgreater{} DESCEND {-}{-}\textgreater{} COLLECT} -\NormalTok{ end} - -\NormalTok{ subgraph ENCODE["2. Canonical Encoding"]} -\NormalTok{ subgraph INSTANCE["Per Instance (BTreeMap order)"]} -\NormalTok{ IH["warp\_id header"]} -\NormalTok{ subgraph NODE["Per Node (ascending NodeId)"]} -\NormalTok{ NH["node\_id[32]"]} -\NormalTok{ NT["node\_type[32]"]} -\NormalTok{ subgraph EDGE["Per Edge (ascending EdgeId)"]} -\NormalTok{ EH["edge\_id[32]"]} -\NormalTok{ ET["edge\_type[32]"]} -\NormalTok{ ED["to\_node[32]"]} -\NormalTok{ end} -\NormalTok{ subgraph ATTACH["Per Attachment"]} -\NormalTok{ AK["key\_len[8] + key"]} -\NormalTok{ AT["type\_id[32]"]} -\NormalTok{ AV["value\_len[8] + value"]} -\NormalTok{ end} -\NormalTok{ end} -\NormalTok{ end} -\NormalTok{ end} - -\NormalTok{ subgraph HASH["3. BLAKE3 Digest"]} -\NormalTok{ STREAM["Byte stream"]} -\NormalTok{ DIGEST["state\_root[32]"]} -\NormalTok{ STREAM {-}{-}\textgreater{} DIGEST} -\NormalTok{ end} - -\NormalTok{ BFS {-}{-}\textgreater{} ENCODE {-}{-}\textgreater{} HASH} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{7. 
Commit Hash v2 Structure}\label{commit-hash-v2-structure} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart LR} -\NormalTok{ subgraph INPUTS["Commit Hash Inputs"]} -\NormalTok{ V["version[4]\textless{}br/\textgreater{}protocol tag"]} -\NormalTok{ P["parents[]\textless{}br/\textgreater{}parent hashes"]} -\NormalTok{ SR["state\_root[32]\textless{}br/\textgreater{}graph hash"]} -\NormalTok{ PD["patch\_digest[32]\textless{}br/\textgreater{}ops hash"]} -\NormalTok{ PI["policy\_id[4]\textless{}br/\textgreater{}aion policy"]} -\NormalTok{ end} - -\NormalTok{ subgraph CONCAT["Concatenation"]} -\NormalTok{ BYTES["version || parents || state\_root || patch\_digest || policy\_id"]} -\NormalTok{ end} - -\NormalTok{ subgraph OUTPUT["Output"]} -\NormalTok{ CH["commit\_hash[32]\textless{}br/\textgreater{}BLAKE3"]} -\NormalTok{ end} - -\NormalTok{ V {-}{-}\textgreater{} BYTES} -\NormalTok{ P {-}{-}\textgreater{} BYTES} -\NormalTok{ SR {-}{-}\textgreater{} BYTES} -\NormalTok{ PD {-}{-}\textgreater{} BYTES} -\NormalTok{ PI {-}{-}\textgreater{} BYTES} -\NormalTok{ BYTES {-}{-}\textgreater{} CH} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{8. 
WSC Snapshot Format}\label{wsc-snapshot-format} - -\begin{verbatim} -┌─────────────────────────────────────────────────────────────────────────┐ -│ WSC SNAPSHOT FILE │ -├─────────────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ HEADER (fixed size) │ │ -│ │ ┌──────────┬──────────┬──────────┬──────────┬──────────┐ │ │ -│ │ │ magic │ version │ node_cnt │ edge_cnt │ offsets │ │ │ -│ │ │ 8 bytes │ 8 bytes │ 8 bytes │ 8 bytes │ 8×N bytes│ │ │ -│ │ └──────────┴──────────┴──────────┴──────────┴──────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ NODES TABLE (sorted by NodeId, 8-byte aligned) │ │ -│ │ ┌─────────────────┬─────────────────┬─────────────────┐ │ │ -│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │ -│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │ -│ │ │ [id:32][type:32]│ [id:32][type:32]│ [id:32][type:32]│ │ │ -│ │ └─────────────────┴─────────────────┴─────────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ EDGES TABLE (sorted by EdgeId, 8-byte aligned) │ │ -│ │ ┌─────────────────────────┬─────────────────────────┐ │ │ -│ │ │ EdgeRow │ EdgeRow │ ... │ │ -│ │ │ 128 bytes │ 128 bytes │ │ │ -│ │ │[id:32][from:32][to:32] │[id:32][from:32][to:32] │ │ │ -│ │ │[type:32] │[type:32] │ │ │ -│ │ └─────────────────────────┴─────────────────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ OUT_INDEX (per-node ranges into out_edges) │ │ -│ │ ┌──────────────┬──────────────┬──────────────┐ │ │ -│ │ │ Range │ Range │ Range │ ... 
│ │ -│ │ │ 16 bytes │ 16 bytes │ 16 bytes │ │ │ -│ │ │[start:8][len:8]│[start:8][len:8]│[start:8][len:8]│ │ │ -│ │ └──────────────┴──────────────┴──────────────┘ │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ ATTACHMENT INDEX (per-slot ranges) │ │ -│ │ Similar structure to OUT_INDEX │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -│ ┌────────────────────────────────────────────────────────────────────┐ │ -│ │ BLOB ARENA (variable-length payloads) │ │ -│ │ ┌─────────────────────────────────────────────────────────────┐ │ │ -│ │ │ [payload bytes...] [payload bytes...] [payload bytes...] ...│ │ │ -│ │ └─────────────────────────────────────────────────────────────┘ │ │ -│ │ Referenced by (offset: u64, length: u64) tuples │ │ -│ └────────────────────────────────────────────────────────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{9. 
Footprint Independence -Check}\label{footprint-independence-check} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart TD} -\NormalTok{ subgraph REWRITE1["Rewrite A"]} -\NormalTok{ R1\_READ["reads: \{N1, N2\}"]} -\NormalTok{ R1\_WRITE["writes: \{N3\}"]} -\NormalTok{ end} - -\NormalTok{ subgraph REWRITE2["Rewrite B"]} -\NormalTok{ R2\_READ["reads: \{N4, N5\}"]} -\NormalTok{ R2\_WRITE["writes: \{N6\}"]} -\NormalTok{ end} - -\NormalTok{ subgraph REWRITE3["Rewrite C"]} -\NormalTok{ R3\_READ["reads: \{N1, N3\}"]} -\NormalTok{ R3\_WRITE["writes: \{N7\}"]} -\NormalTok{ end} - -\NormalTok{ subgraph CHECK["Independence Check"]} -\NormalTok{ C1\{\{"A ∩ B"\}\}} -\NormalTok{ C2\{\{"A ∩ C"\}\}} -\NormalTok{ C3\{\{"B ∩ C"\}\}} -\NormalTok{ end} - -\NormalTok{ subgraph RESULT["Results"]} -\NormalTok{ OK1["A || B: OK\textless{}br/\textgreater{}(no overlap)"]} -\NormalTok{ CONFLICT["A || C: CONFLICT\textless{}br/\textgreater{}(A.write ∩ C.read = \{N3\})"]} -\NormalTok{ OK2["B || C: OK\textless{}br/\textgreater{}(no overlap)"]} -\NormalTok{ end} - -\NormalTok{ R1\_WRITE {-}{-}\textgreater{} C1} -\NormalTok{ R2\_WRITE {-}{-}\textgreater{} C1} -\NormalTok{ R1\_WRITE {-}{-}\textgreater{} C2} -\NormalTok{ R3\_READ {-}{-}\textgreater{} C2} -\NormalTok{ R2\_WRITE {-}{-}\textgreater{} C3} -\NormalTok{ R3\_WRITE {-}{-}\textgreater{} C3} - -\NormalTok{ C1 {-}{-}\textgreater{} OK1} -\NormalTok{ C2 {-}{-}\textgreater{} CONFLICT} -\NormalTok{ C3 {-}{-}\textgreater{} OK2} - -\NormalTok{ style CONFLICT fill:\#ffcdd2} -\NormalTok{ style OK1 fill:\#c8e6c9} -\NormalTok{ style OK2 fill:\#c8e6c9} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{10. 
Complete Data Flow: Intent to -Render}\label{complete-data-flow-intent-to-render} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{sequenceDiagram} -\NormalTok{ autonumber} -\NormalTok{ participant U as User} -\NormalTok{ participant V as Viewer} -\NormalTok{ participant H as Session Hub} -\NormalTok{ participant E as Engine} -\NormalTok{ participant S as Scheduler} -\NormalTok{ participant B as BOAW} -\NormalTok{ participant G as GraphStore} -\NormalTok{ participant W as WSC} - -\NormalTok{ U{-}\textgreater{}\textgreater{}V: Click action} -\NormalTok{ V{-}\textgreater{}\textgreater{}V: Encode intent bytes} -\NormalTok{ V{-}\textgreater{}\textgreater{}H: ingest\_intent(bytes)} -\NormalTok{ H{-}\textgreater{}\textgreater{}E: forward intent} - -\NormalTok{ Note over E: Phase 1: BEGIN} -\NormalTok{ E{-}\textgreater{}\textgreater{}E: begin() → TxId} - -\NormalTok{ Note over E: Intent Processing} -\NormalTok{ E{-}\textgreater{}\textgreater{}E: dispatch\_next\_intent(tx)} -\NormalTok{ E{-}\textgreater{}\textgreater{}G: GraphView lookup} -\NormalTok{ G{-}{-}\textgreater{}\textgreater{}E: intent data} - -\NormalTok{ Note over E: Phase 2: APPLY} -\NormalTok{ E{-}\textgreater{}\textgreater{}S: apply(tx, rule, scope)} -\NormalTok{ S{-}\textgreater{}\textgreater{}G: matcher(view, scope)} -\NormalTok{ G{-}{-}\textgreater{}\textgreater{}S: match result} -\NormalTok{ S{-}\textgreater{}\textgreater{}S: compute footprint} -\NormalTok{ S{-}\textgreater{}\textgreater{}S: enqueue PendingRewrite} - -\NormalTok{ Note over E: Phase 3: COMMIT} -\NormalTok{ E{-}\textgreater{}\textgreater{}S: commit(tx)} -\NormalTok{ S{-}\textgreater{}\textgreater{}S: radix sort (drain)} -\NormalTok{ S{-}\textgreater{}\textgreater{}S: independence check (reserve)} - -\NormalTok{ Note over B: Parallel Execution} -\NormalTok{ S{-}\textgreater{}\textgreater{}B: execute\_parallel(items)} -\NormalTok{ B{-}\textgreater{}\textgreater{}B: partition into shards} -\NormalTok{ par Worker 0} -\NormalTok{ 
B{-}\textgreater{}\textgreater{}G: read via GraphView} -\NormalTok{ G{-}{-}\textgreater{}\textgreater{}B: data} -\NormalTok{ B{-}\textgreater{}\textgreater{}B: emit to TickDelta} -\NormalTok{ and Worker 1} -\NormalTok{ B{-}\textgreater{}\textgreater{}G: read via GraphView} -\NormalTok{ G{-}{-}\textgreater{}\textgreater{}B: data} -\NormalTok{ B{-}\textgreater{}\textgreater{}B: emit to TickDelta} -\NormalTok{ and Worker N} -\NormalTok{ B{-}\textgreater{}\textgreater{}G: read via GraphView} -\NormalTok{ G{-}{-}\textgreater{}\textgreater{}B: data} -\NormalTok{ B{-}\textgreater{}\textgreater{}B: emit to TickDelta} -\NormalTok{ end} -\NormalTok{ B{-}\textgreater{}\textgreater{}B: merge\_deltas (canonical)} -\NormalTok{ B{-}{-}\textgreater{}\textgreater{}S: merged ops} - -\NormalTok{ S{-}\textgreater{}\textgreater{}G: apply ops} - -\NormalTok{ Note over E: Phase 4: HASH} -\NormalTok{ E{-}\textgreater{}\textgreater{}G: compute state\_root} -\NormalTok{ G{-}{-}\textgreater{}\textgreater{}E: hash} -\NormalTok{ E{-}\textgreater{}\textgreater{}E: compute commit\_hash} - -\NormalTok{ Note over E: Phase 5: RECORD} -\NormalTok{ E{-}\textgreater{}\textgreater{}W: store snapshot} -\NormalTok{ E{-}\textgreater{}\textgreater{}E: append to history} - -\NormalTok{ Note over H: Emit to Tools} -\NormalTok{ E{-}\textgreater{}\textgreater{}H: WarpDiff} -\NormalTok{ H{-}\textgreater{}\textgreater{}V: WarpFrame} - -\NormalTok{ Note over V: Apply \& Render} -\NormalTok{ V{-}\textgreater{}\textgreater{}V: apply\_op (each op)} -\NormalTok{ V{-}\textgreater{}\textgreater{}V: verify state\_hash} -\NormalTok{ V{-}\textgreater{}\textgreater{}V: render frame} -\NormalTok{ V{-}\textgreater{}\textgreater{}U: Display result} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{11. 
Viewer Event Loop}\label{viewer-event-loop} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{flowchart TD} -\NormalTok{ subgraph FRAME["Frame Loop"]} -\NormalTok{ START[frame start]} - -\NormalTok{ subgraph DRAIN["1. Drain Session"]} -\NormalTok{ DN[drain\_notifications]} -\NormalTok{ DF[drain\_frames]} -\NormalTok{ end} - -\NormalTok{ subgraph PROCESS["2. Process Frames"]} -\NormalTok{ PF[process\_frames]} -\NormalTok{ SNAP\{Snapshot?\}} -\NormalTok{ DIFF\{Diff?\}} -\NormalTok{ APPLY[apply\_op each]} -\NormalTok{ VERIFY[verify hash]} -\NormalTok{ end} - -\NormalTok{ subgraph EVENTS["3. Handle Events"]} -\NormalTok{ UE[apply\_ui\_event]} -\NormalTok{ REDUCE[reduce pure]} -\NormalTok{ EFFECTS[run effects]} -\NormalTok{ end} - -\NormalTok{ subgraph RENDER["4. Render"]} -\NormalTok{ MATCH\{screen?\}} -\NormalTok{ TITLE[draw\_title]} -\NormalTok{ VIEW[draw\_view]} -\NormalTok{ HUD[draw\_hud]} -\NormalTok{ end} - -\NormalTok{ END[frame end]} - -\NormalTok{ START {-}{-}\textgreater{} DRAIN} -\NormalTok{ DN {-}{-}\textgreater{} DF} -\NormalTok{ DF {-}{-}\textgreater{} PROCESS} -\NormalTok{ PF {-}{-}\textgreater{} SNAP} -\NormalTok{ SNAP {-}{-}\textgreater{}|yes| APPLY} -\NormalTok{ PF {-}{-}\textgreater{} DIFF} -\NormalTok{ DIFF {-}{-}\textgreater{}|yes| APPLY} -\NormalTok{ APPLY {-}{-}\textgreater{} VERIFY} -\NormalTok{ VERIFY {-}{-}\textgreater{} EVENTS} -\NormalTok{ UE {-}{-}\textgreater{} REDUCE} -\NormalTok{ REDUCE {-}{-}\textgreater{} EFFECTS} -\NormalTok{ EFFECTS {-}{-}\textgreater{} RENDER} -\NormalTok{ MATCH {-}{-}\textgreater{}|Title| TITLE} -\NormalTok{ MATCH {-}{-}\textgreater{}|View| VIEW} -\NormalTok{ VIEW {-}{-}\textgreater{} HUD} -\NormalTok{ TITLE {-}{-}\textgreater{} END} -\NormalTok{ HUD {-}{-}\textgreater{} END} -\NormalTok{ end} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\emph{Visual Atlas generated 2026-01-18. 
Use alongside ``What Makes Echo
-Tick?'' for complete understanding.}
-
-\backmatter
-\end{document}
diff --git a/docs/archive/study/extract-mermaid.py b/docs/archive/study/extract-mermaid.py
deleted file mode 100755
index cd03489d..00000000
--- a/docs/archive/study/extract-mermaid.py
+++ /dev/null
@@ -1,137 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: Apache-2.0
-# © James Ross Ω FLYING•ROBOTS
-"""
-Extract Mermaid diagrams from Markdown files and convert to PDF via SVG.
-
-Pipeline: .md -> extract mermaid blocks -> .mmd -> mmdc -> .svg -> inkscape -> .pdf
-"""
-
-import re
-import subprocess
-import sys
-from pathlib import Path
-
-STUDY_DIR = Path(__file__).parent
-DIAGRAMS_DIR = STUDY_DIR / "diagrams"
-
-def extract_mermaid_blocks(md_file: Path) -> list[tuple[str, str]]:
-    """Extract mermaid code blocks from a markdown file.
-
-    Returns list of (diagram_id, mermaid_code) tuples.
-    """
-    content = md_file.read_text()
-
-    # Match ```mermaid ... ``` blocks
-    pattern = r'```mermaid\n(.*?)```'
-    matches = re.findall(pattern, content, re.DOTALL)
-
-    results = []
-    base_name = md_file.stem
-
-    for i, code in enumerate(matches, 1):
-        diagram_id = f"{base_name}-{i:02d}"
-        results.append((diagram_id, code.strip()))
-
-    return results
-
-
-def convert_mermaid_to_pdf(diagram_id: str, mermaid_code: str, output_dir: Path) -> Path | None:
-    """Convert mermaid code to PDF via SVG.
-
-    Returns path to PDF or None on failure.
-    """
-    output_dir.mkdir(parents=True, exist_ok=True)
-
-    mmd_file = output_dir / f"{diagram_id}.mmd"
-    svg_file = output_dir / f"{diagram_id}.svg"
-    pdf_file = output_dir / f"{diagram_id}.pdf"
-
-    # Write mermaid source
-    mmd_file.write_text(mermaid_code)
-
-    # Convert to SVG with mmdc
-    try:
-        result = subprocess.run(
-            ["mmdc", "-i", str(mmd_file), "-o", str(svg_file), "-b", "transparent"],
-            capture_output=True,
-            text=True,
-            timeout=30
-        )
-        if result.returncode != 0:
-            print(f"  mmdc failed for {diagram_id}: {result.stderr}", file=sys.stderr)
-            return None
-    except subprocess.TimeoutExpired:
-        print(f"  mmdc timeout for {diagram_id}", file=sys.stderr)
-        return None
-    except FileNotFoundError:
-        print("  mmdc not found - install with: npm install -g @mermaid-js/mermaid-cli", file=sys.stderr)
-        return None
-
-    if not svg_file.exists():
-        print(f"  SVG not created for {diagram_id}", file=sys.stderr)
-        return None
-
-    # Convert SVG to PDF with inkscape
-    try:
-        result = subprocess.run(
-            ["inkscape", str(svg_file), "--export-type=pdf", f"--export-filename={pdf_file}"],
-            capture_output=True,
-            text=True,
-            timeout=30
-        )
-        if result.returncode != 0:
-            print(f"  inkscape failed for {diagram_id}: {result.stderr}", file=sys.stderr)
-            return None
-    except subprocess.TimeoutExpired:
-        print(f"  inkscape timeout for {diagram_id}", file=sys.stderr)
-        return None
-    except FileNotFoundError:
-        print("  inkscape not found", file=sys.stderr)
-        return None
-
-    if pdf_file.exists():
-        return pdf_file
-    return None
-
-
-def main():
-    """Process all markdown files in study directory."""
-    md_files = [
-        STUDY_DIR / "what-makes-echo-tick.md",
-        STUDY_DIR / "echo-visual-atlas.md",
-        STUDY_DIR / "echo-tour-de-code.md",
-    ]
-
-    total_diagrams = 0
-    converted = 0
-
-    for md_file in md_files:
-        if not md_file.exists():
-            print(f"Skipping {md_file.name} (not found)")
-            continue
-
-        print(f"\n=== Processing {md_file.name} ===")
-        blocks = extract_mermaid_blocks(md_file)
-        print(f"Found {len(blocks)} mermaid diagrams")
-
-        for diagram_id, code in blocks:
-            total_diagrams += 1
-            print(f"  Converting {diagram_id}...", end=" ")
-
-            pdf_path = convert_mermaid_to_pdf(diagram_id, code, DIAGRAMS_DIR)
-            if pdf_path:
-                print(f"OK -> {pdf_path.name}")
-                converted += 1
-            else:
-                print("FAILED")
-
-    print(f"\n=== Summary ===")
-    print(f"Total diagrams: {total_diagrams}")
-    print(f"Converted: {converted}")
-    print(f"Failed: {total_diagrams - converted}")
-    print(f"Output directory: {DIAGRAMS_DIR}")
-
-
-if __name__ == "__main__":
-    main()
diff --git a/docs/archive/study/inject-diagrams.py b/docs/archive/study/inject-diagrams.py
deleted file mode 100644
index 1dfab19a..00000000
--- a/docs/archive/study/inject-diagrams.py
+++ /dev/null
@@ -1,112 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: Apache-2.0
-# © James Ross Ω FLYING•ROBOTS
-"""
-Post-process LaTeX files to replace mermaid code blocks with diagram includes.
-
-Finds Shaded blocks containing mermaid syntax and replaces with \includegraphics.
-"""
-
-import re
-import sys
-from pathlib import Path
-
-STUDY_DIR = Path(__file__).parent
-DIAGRAMS_DIR = STUDY_DIR / "diagrams"
-
-# Mermaid start patterns
-MERMAID_STARTS = [
-    r'\\NormalTok\{graph ',
-    r'\\NormalTok\{flowchart ',
-    r'\\NormalTok\{sequenceDiagram\}',
-    r'\\NormalTok\{classDiagram\}',
-    r'\\NormalTok\{stateDiagram',
-    r'\\NormalTok\{erDiagram\}',
-    r'\\NormalTok\{pie ',
-    r'\\NormalTok\{gantt\}',
-]
-
-
-def is_mermaid_block(block_content: str) -> bool:
-    """Check if a Shaded block contains mermaid diagram syntax."""
-    for pattern in MERMAID_STARTS:
-        if re.search(pattern, block_content):
-            return True
-    return False
-
-
-def process_tex_file(tex_file: Path, base_name: str) -> str:
-    """Process a tex file, replacing mermaid blocks with includegraphics."""
-    content = tex_file.read_text()
-
-    # Match Shaded environments
-    shaded_pattern = r'\\begin\{Shaded\}(.*?)\\end\{Shaded\}'
-
-    diagram_counter = 0
-    replacements = []
-
-    for match in re.finditer(shaded_pattern, content, re.DOTALL):
-        block = match.group(0)
-        block_content = match.group(1)
-
-        if is_mermaid_block(block_content):
-            diagram_counter += 1
-            diagram_id = f"{base_name}-{diagram_counter:02d}"
-            pdf_path = DIAGRAMS_DIR / f"{diagram_id}.pdf"
-
-            if pdf_path.exists():
-                # Create centered figure with the diagram
-                replacement = (
-                    f"\\begin{{center}}\n"
-                    f"\\includegraphics[max width=\\textwidth,max height=0.4\\textheight,keepaspectratio]"
-                    f"{{diagrams/{diagram_id}.pdf}}\n"
-                    f"\\end{{center}}"
-                )
-                replacements.append((match.start(), match.end(), replacement))
-            else:
-                print(f"  Warning: {pdf_path.name} not found, keeping code block")
-
-    # Apply replacements in reverse order to preserve positions
-    for start, end, replacement in reversed(replacements):
-        content = content[:start] + replacement + content[end:]
-
-    # Add graphicx package if we made replacements and it's not already there
-    if replacements and r'\usepackage{graphicx}' not in content:
-        # Insert after documentclass
or after other usepackage statements - content = content.replace( - r'\usepackage{longtable', - r'\usepackage{graphicx}' + '\n' + r'\usepackage[export]{adjustbox}' + '\n' + r'\usepackage{longtable' - ) - - return content, len(replacements) - - -def main(): - """Process all tex files.""" - tex_files = [ - ("what-makes-echo-tick.tex", "what-makes-echo-tick"), - ("echo-visual-atlas.tex", "echo-visual-atlas"), - ("echo-tour-de-code.tex", "echo-tour-de-code"), - ] - - for tex_name, base_name in tex_files: - tex_file = STUDY_DIR / tex_name - if not tex_file.exists(): - print(f"Skipping {tex_name} (not found)") - continue - - print(f"\n=== Processing {tex_name} ===") - new_content, count = process_tex_file(tex_file, base_name) - - if count > 0: - # Write to new file (preserve original) - output_file = STUDY_DIR / tex_name.replace('.tex', '-with-diagrams.tex') - output_file.write_text(new_content) - print(f" Replaced {count} mermaid blocks") - print(f" Output: {output_file.name}") - else: - print(f" No mermaid blocks found") - - -if __name__ == "__main__": - main() diff --git a/docs/archive/study/macros.tex b/docs/archive/study/macros.tex deleted file mode 100644 index 7020557e..00000000 --- a/docs/archive/study/macros.tex +++ /dev/null @@ -1,13 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -% Macros for the WARPs paper -% Shared commands to keep notation consistent across the manuscript. -\usepackage{tikz} % Needed for \AIONLogo in this macros file -\usetikzlibrary{positioning,calc,shapes.geometric} -\newcommand{\AION}{\textrm{AI}\ensuremath{\Omega}\textrm{N}} -\newcommand{\AIONProjectURL}{\url{https://github.com/flyingrobots/aion}} - -% WARP term: small caps in prose, italic in math. -% Force upright small caps in text to avoid missing scit font shapes. 
-\DeclareRobustCommand{\WARP}{\ifmmode\mathit{WARP}\else{\upshape\scshape warp}\fi}
-\DeclareMathOperator{\skel}{skel}
diff --git a/docs/archive/study/paper-7eee.pdf b/docs/archive/study/paper-7eee.pdf
deleted file mode 100644
index 2787128c..00000000
Binary files a/docs/archive/study/paper-7eee.pdf and /dev/null differ
diff --git a/docs/archive/study/paper-7eee.tex b/docs/archive/study/paper-7eee.tex
deleted file mode 100644
index 2ff4ab27..00000000
--- a/docs/archive/study/paper-7eee.tex
+++ /dev/null
@@ -1,1315 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-\documentclass{aion}
-
-% ------------------------------------------------------------
-% Metadata for this paper
-% ------------------------------------------------------------
-\renewcommand{\papertitle}{WARP Graphs---WARP Core: Deterministic Graph Rewrite Simulation Engine}
-\renewcommand{\papernumber}{Paper VII}
-\renewcommand{\paperdate}{January 2026}
-
-\renewcommand{\paperauthor}{James Ross}
-\renewcommand{\paperaffiliation}{Independent Researcher}
-\renewcommand{\paperorcid}{0009-0006-0025-7801}
-\renewcommand{\paperdoi}{10.5281/zenodo.18038297}
-
-% ------------------------------------------------------------
-% Packages (local to this paper)
-% ------------------------------------------------------------
-\usepackage{float}
-\usepackage{mathtools}
-\usepackage{tabularx}
-\usepackage{tikz}
-\usepackage{tikz-cd}
-\usetikzlibrary{arrows.meta,positioning,decorations.pathreplacing,fit,calc}
-
-\input{macros}
-
-% ------------------------------------------------------------
-% Notation shortcuts (guarded to avoid clashes with other papers)
-% ------------------------------------------------------------
-\ifdefined\WCat\else\newcommand{\WCat}{\mathcal{W}}\fi
-\ifdefined\Hist\else\DeclareMathOperator{\Hist}{Hist}\fi
-\ifdefined\Trans\else\DeclareMathOperator{\Trans}{Trans}\fi
-\ifdefined\DL\else\DeclareMathOperator{\DL}{DL}\fi
-\ifdefined\Dist\else\DeclareMathOperator{\Dist}{Dist}\fi
-\ifdefined\To\else\newcommand{\To}{\to}\fi
-
-\newcommand{\WState}{\mathsf{WState}}
-\newcommand{\Tr}{\mathsf{Tr}}
-\newcommand{\Labels}{\mathsf{Labels}}
-\newcommand{\Apply}{\mathsf{Apply}}
-
-\DeclareMathOperator{\MW}{MW}
-\DeclareMathOperator{\Path}{Path}
-\DeclareMathOperator{\Obj}{Obj}
-\DeclareMathOperator{\Mor}{Mor}
-\DeclareMathOperator{\dom}{dom}
-\DeclareMathOperator{\cod}{cod}
-
-\newcommand{\Ruliad}{\mathcal{R}}
-\newcommand{\Chronos}{\mathsf{Chronos}}
-\newcommand{\Kairos}{\mathsf{Kairos}}
-\newcommand{\Aion}{\mathsf{Aion}}
-
-% \newcommand{\AION}{\mathdf{AI\upOmegaN}}
-
-% Paper I used \sectionbreak; provide a default in case the class doesn't.
-\providecommand{\sectionbreak}{\clearpage}
-
-% Avoid duplicate hyperlink anchors for figures
-\makeatletter
-\renewcommand{\theHfigure}{\thesection.\arabic{figure}}
-\makeatother
-
-\usetikzlibrary{backgrounds}
-
-\begin{document}
-
-\AIONFrontMatter{This paper outlines the architecture of the \textit{WARP Core} deterministic graph rewrite engine: a high-performance, real-time simulation engine with bit-level perfect determinism, embarrassingly high concurrency, and first-class time travel, by construction.
-}
-
-% ============================================================
-\section{Introduction}
-\label{sec:intro}
-
-To conclude the \textbf{\AION{} Foundations Series}, we describe the construction of the WARP Core, a real-time deterministic graph rewrite simulation engine. This technology already runs this project's homepage, \url{https://flyingrobots.dev}, known herein as the ``WARPSite''---a ``website'' powered by the WARP Core.
-Paper~I introduces \WARP\ graphs as a minimal recursively nested state object~\cite{Ros25a}.
-Paper~II defines a deterministic multiway semantics (via a two-plane DPO discipline) so that
-executions become replayable \emph{worldlines}~\cite{Ros25b}.
-Paper~III shows that deterministic worldlines admit a boundary representation: -a \emph{provenance payload} is sufficient to reconstruct the full interior derivation volume -(\emph{computational holography})~\cite{Ros25c}. - -This paper addresses the remaining mathematical question: \emph{how should a computation be compared across observers?} -In practice, consumers rarely require a raw microstep-by-microstep derivation. Engineering use-cases demand derived views: -summaries for interpretation, invariants for compilation, provenance for audit, and counterfactual branches for adversarial analysis. -These viewpoints correspond to different \emph{observers} of the same underlying history. - -The correct comparison problem is not ``which observer is right?'' but: -\begin{quote} -\emph{Given two observers that emit different trace languages, what is the cost of translating between them -under explicit resource constraints, and how much distortion is unavoidable?} -\end{quote} -We operationalise this as a geometry on observer space: a distance defined by translator description length -(MDL) plus trace distortion. This distance is \emph{budgeted}: two observers may be equivalent under unbounded -resources but far apart at finite time/memory budgets. - -\paragraph{Context within the Series.} -Paper~V is about ethics: what perfect provenance implies for accountability, privacy, and power. -Paper~VI is about architecture: how to implement the semantics as a system. -Paper~IV is therefore the final mathematics-oriented paper in the series. -Accordingly, we take the opportunity to give the Ruliad connection and observer geometry the full formal weight -they require, rather than deferring mathematical structure to later papers. 
- -\paragraph{Contributions.} -The contributions of this paper are: - -\begin{enumerate}[leftmargin=*] - \item A stand-alone account of \emph{history categories} for \WARP\ rewriting, relating deterministic worldlines - to multiway systems (\S\ref{sec:prelim}, \S\ref{sec:multiway}). - \item A formal definition of \emph{observers} as resource-bounded functors out of $\Hist(\mathcal{U},R)$ into an - observation space, including boundary-versus-bulk observer pairs induced by holography (\S\ref{sec:observers}). - \item A translation framework between observers, equipped with MDL description length~\cite{Ris78} and - trace distortion, together with explicit assumptions needed for compositionality (\S\ref{sec:translators}). - \item The definition of the \emph{rulial distance} $D_{\tau,m}$ and its core properties: - non-negativity, symmetry, monotonicity under budget relaxation, and a triangle inequality up to a constant - overhead inherited from prefix coding, together with a Lawvere-metric/enriched-category interpretation of directed cost - (\S\ref{sec:rulial}). - \item A formalisation of the Chronos--Kairos--Aion triad as a three-layer time model embedded in the multiway space, - and an interpretation of rulial distance as ``frame separation'' in the Ruliad (\S\ref{sec:multiway}). - \item A minimal temporal logic aligned with Chronos--Kairos--Aion, with concrete liveness/reconciliation examples and a - transport lemma relating temporal satisfaction to observer translation cost (\S\ref{sec:multiway}). -\end{enumerate} - -\paragraph{Scope management.} -We deliberately do \emph{not} attempt to axiomatise a single canonical trace metric, -nor do we claim that MDL gives an optimal notion of semantic similarity for all domains. -Our aim is narrower and foundational: to provide a mathematically explicit, computable mechanism that turns -\emph{translation cost} into geometry, so that later work can specialise it to concrete trace languages and security goals. 
- -\paragraph{Roadmap.} -We begin by restating the minimal background from Papers~I--III needed for a stand-alone reading (\S\ref{sec:prelim}). -We then formalise observers as resource-bounded functors out of history categories and motivate canonical observer families -induced by holography (\S\ref{sec:observers}). -Next we introduce translators, MDL description length, and lifted distortion as the ingredients for a quantitative comparison -of observers (\S\ref{sec:translators}). -Rulial distance is defined and analysed in \S\ref{sec:rulial}, including the Lawvere-metric/enriched-category interpretation of -directed cost (\S\ref{subsec:lawvere}). -We then connect deterministic worldlines to multiway systems and the Ruliad, formalise the Chronos--Kairos--Aion time model, -and develop a minimal temporal logic whose semantics range over worldlines and branching histories (\S\ref{sec:multiway}, -\S\ref{subsec:temporal-logic}). -Finally, we summarise related work (\S\ref{sec:related}), discuss implications and open directions (\S\ref{sec:outlook}), -and provide a notation summary (\S\ref{sec:notation}). - -\sectionbreak - -% ============================================================ -\section{Preliminaries and Standing Assumptions} -\label{sec:prelim} - -We briefly restate the fragments of Papers~I--III needed for a self-contained treatment. -Throughout, we adopt the deterministic replay discipline of Paper~II (fixed boundary data determines a unique committed tick worldline) -and the boundary encoding of Paper~III. - -\subsection[Warp states and deterministic worldlines]{\textnormal{\textsc{Warp}} states and deterministic worldlines} -\label{subsec:prelim-warps} - -A \WARP\ graph is a finite directed multigraph whose vertices and edges carry recursively attached \WARP\ graphs~\cite{Ros25a}. -A \emph{\WARP\ state} $U\in\WState$ is a typed open graph skeleton together with recursively attached \WARP\ states on each vertex and edge. 
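The recursive nesting just described (a skeleton whose vertices and edges carry further \WARP\ states) can be sketched in a few lines of Python. This is an illustrative sketch only; `Skeleton` and `WarpState` are hypothetical names, not the engine's actual data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a minimal recursively nested state in the
# spirit of a WARP state (skeleton + attachments). Skeleton and
# WarpState are hypothetical names, not the engine's actual types.

@dataclass(frozen=True)
class Skeleton:
    vertices: tuple[str, ...]
    edges: tuple[tuple[str, str], ...]  # directed multigraph edges (src, dst)

@dataclass(frozen=True)
class WarpState:
    skeleton: Skeleton
    # Attachment plane: vertices and edges may carry nested WarpStates.
    vertex_attachments: dict = field(default_factory=dict)
    edge_attachments: dict = field(default_factory=dict)

    def depth(self) -> int:
        """Maximum nesting depth of recursively attached states."""
        children = (list(self.vertex_attachments.values())
                    + list(self.edge_attachments.values()))
        return 1 + max((c.depth() for c in children), default=0)

# A two-level state: vertex "a" carries a nested single-vertex state.
leaf = WarpState(Skeleton(("x",), ()))
root = WarpState(Skeleton(("a", "b"), (("a", "b"),)),
                 vertex_attachments={"a": leaf})
```

The key structural point the sketch captures is that attachments are themselves full states, so nesting is unbounded in principle and `depth` recurses through both planes.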
-We write $\skel(U)$ for the skeleton component. - -Deterministic evolution is expressed in ticks. -Let $\Labels$ denote the space of tick patches: finite records sufficient to advance the state by one tick under the deterministic semantics of Paper~II. -Write -\[ - \Apply : \WState \times \Labels \rightharpoonup \WState -\] - for the deterministic tick-application function. - A \emph{tick patch} is an element $\mu\in\Labels$ (intended to be applied via $\Apply$). - Intuitively, a tick patch is the serialised record of the within-tick batch committed at that tick. - A \emph{tick} is the unit of concurrent evolution: it groups attachment-plane rewrites together with a scheduler-selected batch of independent skeleton rewrites, committed atomically (Paper~II, Def.~4.2). - A deterministic worldline is a sequence -\[ - U_0 \;\Rightarrow\; U_1 \;\Rightarrow\; \cdots \;\Rightarrow\; U_n -\qquad\text{with}\qquad -U_{i+1}=\Apply(U_i,\mu_i) -\] -whenever defined~\cite{Ros25b}. -Paper~II shows how to construct such an $\Apply$ from DPO rewriting in adhesive categories under a two-plane discipline, -and how to package scheduling decisions so that replay is bit-level deterministic. - -\subsection{Boundary encoding and wormholes} -\label{subsec:prelim-holography} - -Paper~III introduces a \emph{provenance payload} -\[ - P=(\mu_0,\ldots,\mu_{n-1}) -\] -and the boundary encoding $(U_0,P)$~\cite{Ros25c}. -Under patch sufficiency,\footnote{Patch sufficiency is the condition that $(U_0,P)$ uniquely determines the interior worldline under the deterministic semantics.} -$(U_0,P)$ reconstructs the interior worldline uniquely. -A \emph{wormhole} is a provenance-preserving compression of a multi-tick segment into a single edge labelled by a sub-payload. 
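Under patch sufficiency, reconstruction is nothing more than a deterministic fold of the payload through $\Apply$. A toy sketch, where the integer state and `apply_patch` are illustrative stand-ins for the real state space and tick-application function:

```python
# Toy sketch of deterministic replay from a boundary encoding (U0, P).
# apply_patch is a hypothetical stand-in for the deterministic tick
# application Apply(U, mu); real states and patches are far richer.

def apply_patch(state: int, patch: int) -> int:
    return state + patch  # stand-in deterministic transition

def replay(u0: int, payload: list[int]) -> list[int]:
    """Reconstruct the interior worldline U_0 => ... => U_n from (U_0, P)."""
    worldline = [u0]
    for mu in payload:
        worldline.append(apply_patch(worldline[-1], mu))
    return worldline

# Replay is deterministic: the same boundary data always yields the
# same interior worldline.
assert replay(0, [1, 2, 3]) == [0, 1, 3, 6]
```

A wormhole, in these terms, would store only `u0` and a slice of `payload` for the compressed segment; re-expanding it is exactly a call to `replay` on that sub-payload.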
-For the present paper, holography induces two natural classes of observers: -\begin{itemize}[leftmargin=*] - \item \emph{bulk observers} that inspect some or all of the interior worldline; and - \item \emph{boundary observers} that operate only on the compact boundary artefact $(U_0,P)$. -\end{itemize} -Rulial distance will quantify the cost of translating between these viewpoints. - -\subsection{Multiway graphs and history categories} -\label{subsec:prelim-history} - -Fix a universe $\mathcal{U}\subseteq\WState$ and a rule pack $R$ (a finite set of rewrite rules plus the fixed typing/open-graph discipline). -The associated \emph{multiway graph} is the directed graph -\[ - \MW(\mathcal{U},R) = (V,E) -\] -whose vertices are states $V=\mathcal{U}$ and whose directed edges are individual rewrite steps generated by $R$ -(including alternative matches and orderings where applicable). -In general $\MW(\mathcal{U},R)$ branches and merges. - -\begin{definition}[History category]\label{def:hist-category} -Let $\MW(\mathcal{U},R)$ be a multiway graph. -Its \emph{history category} $\Hist(\mathcal{U},R)$ is the path category of $\MW(\mathcal{U},R)$: -\begin{itemize}[leftmargin=*] - \item objects are states $U\in\mathcal{U}$; - \item morphisms $h:U\to V$ are finite directed paths in $\MW(\mathcal{U},R)$ from $U$ to $V$; - \item composition is path concatenation. -\end{itemize} -\end{definition} - -\begin{remark}[Deterministic worldlines as functors] -A deterministic worldline $U_0\Rightarrow U_1\Rightarrow\cdots$ defines a functor -$W:\mathbb{N}\to\Hist(\mathcal{U},R)$ sending $i\mapsto U_i$ and $(i\to i{+}1)\mapsto (U_i\to U_{i+1})$. -The determinism discipline of Paper~II can be understood as selecting a unique such functor for fixed boundary data. -We later formalise its finite restriction as the Chronos functor (Definition~\ref{def:chronos} in \S\ref{subsec:chronos-kairos-aion}). 
-\end{remark} - -\sectionbreak - -% ============================================================ -\section{Observers} -\label{sec:observers} - -Observers are the interface between a \WARP\ history and a consumer. -We treat observers as functors out of the history category into a structured space of traces. - -\subsection{Observation spaces} -\label{subsec:obs-spaces} - -An \emph{observation space} is an object that supports: -(i) a notion of trace value; and (ii) a distortion measure between traces. -The minimal structure we require is a set $\Tr$ equipped with a metric (or pseudometric) -\[ - \mathrm{dist}_{\mathrm{tr}} : \Tr\times\Tr\to\mathbb{R}_{\ge 0}. -\] -In applications, $\Tr$ may be: -symbol streams, labelled paths, graphs of causal dependencies, certificates, or slices of provenance payloads. - -When it is convenient to keep categorical structure explicit, we may regard $\Tr$ as the object set of a category $\mathcal{Y}$ -and work objectwise. Nothing in the core definitions requires nontrivial morphisms in $\mathcal{Y}$; the geometry is carried -by $\mathrm{dist}_{\mathrm{tr}}$. - -\subsection{Observers as budgeted functors} -\label{subsec:obs-functors} - -\begin{definition}[Observer]\label{def:observer} -Fix $\Hist(\mathcal{U},R)$ and an observation space $(\Tr,\mathrm{dist}_{\mathrm{tr}})$. -An \emph{observer} is a functor -\[ - O : \Hist(\mathcal{U},R)\to \Tr, -\] -where we regard $\Tr$ as a discrete category. -Operationally, $O$ is realised by an algorithm that maps any derivation path $h$ to a trace value $O(h)$. -\end{definition} - -\begin{definition}[Resource-bounded observer]\label{def:budgeted-observer} -Let $(\tau,m)$ be time and memory budgets (in any fixed machine model). -An observer $O$ is \emph{$(\tau,m)$-bounded} if it admits an implementation that, on any history input $h$ in its domain, -runs within time $\tau$ and memory $m$. 
-\end{definition} - -\begin{remark}[Why we bound observers] -Without explicit budgets, all observers collapse into an uninformative equivalence: ``compute the full worldline and output it''. -Budgets ensure the geometry respects real computational constraints: replaying a wormhole is algorithmically simple -(low description length) but may be infeasible at small $\tau$. -\end{remark} - -\subsection{Canonical observer families induced by holography} -\label{subsec:obs-holography} - -Holographic boundary encoding induces a practical taxonomy of observers: -\begin{itemize}[leftmargin=*] - \item \emph{boundary observers} that inspect only $(U_0,P)$ (or its authenticated packaging such as a BTR); - \item \emph{bulk observers} that inspect interior states, matches, receipts, or causal cones; and - \item \emph{semantic observers} that collapse syntactic evolution into invariant properties (types, safety checks, query semantics). -\end{itemize} - -\begin{example}[Boundary vs bulk]\label{ex:boundary-bulk} -Let $O_{\partial}$ map a history $h$ to the boundary artefact $(U_0,P)$ that generates it, -and let $O_{\mathrm{bulk}}$ map $h$ to the full state sequence $(U_0,\ldots,U_n)$. -There is a natural translator $T_{\mathrm{replay}}$ from $O_{\partial}$ to $O_{\mathrm{bulk}}$ given by deterministic replay. -Its \emph{description length} is small (it is essentially the interpreter $\Apply$), -but its \emph{time cost} grows with the length of $P$. -This example will be revisited in \S\ref{subsec:rulial-budget-effects}. -\end{example} - -\subsection{Observer projections of wormholes} -\label{subsec:obs-projections} - -Given a wormhole boundary encoding $(U_0,P)$, different observers may: -\begin{itemize}[leftmargin=*] - \item expose only coarse-grained stages of $P$ (e.g.\ AST$\to$IR$\to$plan); - \item restrict to semantic effects (e.g.\ schema and invariants); - \item highlight only adversarial or counterfactual branches; - \item or inspect every microstep. 
-\end{itemize} - -\begin{figure}[t] - \centering - \begin{tikzpicture}[ - wormhole/.style={rectangle,draw=black,thick,rounded corners, - minimum width=36mm,minimum height=14mm,align=center}, - observer/.style={rectangle,draw=black,thick,rounded corners=3pt, - minimum width=22mm,minimum height=9mm,align=center,font=\small}, - arrow/.style={-Latex,thick,draw=black}, - >=Latex - ] - - % Central wormhole - \node[wormhole] (W) at (0,0) - {wormhole\\[-1pt] - \scriptsize $(U_0,P)$}; - - % Observers - \node[observer] (O1) at (-4.2,2.4) {$O_1$\\[-2pt]\scriptsize coarse stages}; - \node[observer] (O2) at (4.2,2.4) {$O_2$\\[-2pt]\scriptsize semantic}; - \node[observer] (O3) at (-4.2,-2.4) {$O_3$\\[-2pt]\scriptsize adversarial}; - \node[observer] (O4) at (4.2,-2.4) {$O_4$\\[-2pt]\scriptsize full microsteps}; - - % Projections - \draw[arrow] (W.north west) -- (O1.south east); - \draw[arrow] (W.north east) -- (O2.south west); - \draw[arrow] (W.south west) -- (O3.north east); - \draw[arrow] (W.south east) -- (O4.north west); - - % Labels on arrows - \node[rotate=45,font=\scriptsize] at (-2.3,1.4) {project}; - \node[rotate=-45,font=\scriptsize] at (2.3,1.4) {project}; - \node[rotate=-45,font=\scriptsize] at (-2.3,-1.4) {project}; - \node[rotate=45,font=\scriptsize] at (2.3,-1.4) {project}; - - \end{tikzpicture} - \caption{Multiple observers projecting the same wormhole boundary $(U_0,P)$ into different trace formats. - The rulial distance measures the complexity of translating between such views, balancing translator description length - against residual distortion.} - \label{fig:observer-projections} -\end{figure} - -\sectionbreak - -% ============================================================ -\section{Translators, MDL Complexity, and Distortion} -\label{sec:translators} - -To compare observers we require a compositional notion of translation, a complexity measure for translators, -and a distortion measure between outputs. 
- -\subsection{Translators} -\label{subsec:translators-def} - -Let $O_1,O_2:\Hist(\mathcal{U},R)\to\Tr$ be observers into a common trace space. -A translator should map traces produced by $O_1$ into traces in the format of $O_2$. - -\begin{definition}[Translator]\label{def:translator} -A \emph{translator} from $O_1$ to $O_2$ is an algorithmic operator -\[ - T_{12} : \Tr \to \Tr -\] -such that $T_{12}\circ O_1$ is a well-defined observer and is intended to approximate $O_2$. -We write $T_{12}\in\Trans(O_1,O_2)$. -\end{definition} - -\begin{remark}[Why we translate by post-composition] -This definition makes typing explicit: $T_{12}\circ O_1$ is an observer with the same domain as $O_2$. -If one prefers to keep the functor category $\Tr^{\Hist(\mathcal{U},R)}$ explicit, a translator can be regarded as an endofunctor -on $\Tr$ together with the induced action on observers by post-composition. -\end{remark} - -\begin{definition}[Budgeted translators]\label{def:budgeted-trans} -For budgets $(\tau,m)$, let $\Trans_{\tau,m}(O_1,O_2)\subseteq\Trans(O_1,O_2)$ denote the translators -realisable within those budgets. -\end{definition} - -\begin{assumption}[Budgeted translator axioms]\label{ass:budgeted-trans} -For each budget pair $(\tau,m)$: -\begin{enumerate}[leftmargin=*] - \item \emph{Identity.} For every observer $O$, the identity translator $I$ belongs to $\Trans_{\tau,m}(O,O)$, and we normalise - codes so that $\DL(I)=0$. - \item \emph{Composition.} If $T_{12}\in\Trans_{\tau,m}(O_1,O_2)$ and $T_{23}\in\Trans_{\tau,m}(O_2,O_3)$, then - \[ - T_{23}\circ T_{12}\in\Trans_{\tau,m}(O_1,O_3). - \] -\end{enumerate} -\end{assumption} - -\begin{example}[SQL $\leftrightarrow$ AST]\label{ex:sql-ast} -Consider a \WARP\ universe modelling a database query planner. -Observer $O_1$ outputs a trace of AST transformations, while observer $O_2$ outputs only the initial SQL string -and a final execution summary. 
-A translator $T_{12}$ must compile an AST evolution into a SQL-like summary, while $T_{21}$ must infer a plausible AST evolution -consistent with SQL and execution effects. -The description lengths $\DL(T_{12}),\DL(T_{21})$ and their residual distortions quantify the separation of these two views. -\end{example} - -\subsection{MDL and description length} -\label{subsec:mdl} - -We measure translator complexity using MDL: a translator is ``simple'' if it admits a short prefix-free description. - -\begin{definition}[Description length]\label{def:dl} -Fix a prefix-free code over translator programmes. -For a translator $T$, let $\DL(T)\in\mathbb{R}_{\ge 0}$ denote the length of its code word. -\end{definition} - -The constant-overhead behaviour of prefix codes gives the subadditivity we require. - -\begin{assumption}[Subadditivity up to a constant]\label{ass:dl-subadd} -There exists a constant $c\ge 0$ such that for any composable translators $T_{12},T_{23}$ we have -\[ - \DL(T_{23}\circ T_{12}) \le \DL(T_{12}) + \DL(T_{23}) + c. -\] -\end{assumption} - -\begin{remark}[On the constant $c$] -The constant $c$ is the code overhead required to describe ``run $T_{12}$ then $T_{23}$'' under the chosen universal coding scheme. -MDL theory~\cite{Ris78} (and related invariance results for prefix complexity~\cite{LiVitanyi2019}) justify treating such overhead as $O(1)$: -it does not scale with the size of the translators being composed. -\end{remark} -\begin{remark}[Relation to information distance and rate--distortion] -At $\lambda\to\infty$ with the constraint $\Dist(O_2,T\circ O_1)=0$, the directed cost -reduces to the description length of the shortest exact translator from $O_1$ to $O_2$. -The resulting symmetrised distance is closely related in spirit to \emph{algorithmic information distance}: -the Kolmogorov-style cost of converting one description into another~\cite{Bennett98,LiVitanyi2019}. 
-At finite $\lambda$, the objective is an MDL-flavoured instance of a rate--distortion trade-off: -we pay \emph{rate} (translator description length) to purchase lower residual distortion. -\end{remark} - - -\subsection{Trace distortion and lifted observer distortion} -\label{subsec:distortion} - -Fix a metric (or pseudometric) $\mathrm{dist}_{\mathrm{tr}}$ on trace space $\Tr$. -We lift it to a distortion between observers by taking a supremum over histories. - -\begin{definition}[Lifted distortion]\label{def:dist-lift} -For observers $O,O':\Hist(\mathcal{U},R)\to\Tr$, define -\[ - \Dist(O,O') \;:=\; \sup_{h\in\Mor(\Hist(\mathcal{U},R))}\, - \mathrm{dist}_{\mathrm{tr}}\bigl(O(h),O'(h)\bigr). -\] -\end{definition} - -\begin{assumption}[Bounded diameter]\label{ass:bounded-diameter} -All observers under comparison take values in a common trace space $\Tr$ of uniformly bounded diameter, -so that $\Dist(O,O')$ is finite. -\end{assumption} - -\begin{assumption}[Non-expansive translators]\label{ass:lipschitz} -Post-composition by any translator is $1$-Lipschitz: -\[ - \Dist(T\circ O,\, T\circ O') \le \Dist(O,O') -\] -for all translators $T$ and observers $O,O'$. -\end{assumption} - -\begin{remark}[Alternative liftings] -The supremum lifting is conservative: it protects against worst-case histories and adversarial inputs. -In statistical settings we may instead use an expected distortion over a distribution on histories, or restrict to histories within a time cone. -Our results adapt to any lifting that preserves the triangle inequality and non-expansiveness properties used in \S\ref{sec:rulial}. -\end{remark} - -\sectionbreak - -% ============================================================ -\section{Rulial Distance} -\label{sec:rulial} - -We now define the rulial distance and prove its core properties. -Throughout we fix a weighting parameter $\lambda>0$ trading off translator complexity against residual distortion. 
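The role of $\lambda$ can be made concrete with a toy computation over a hand-made finite set of candidate translators, each with a known description length and residual distortion. Everything here (the numbers and the `directed_cost` helper) is illustrative only; real translator spaces and the MDL coding scheme are far richer.

```python
# Toy numeric sketch of the DL + lambda * Dist trade-off, over a
# hand-made finite set of candidate translators. Illustrative only.

def directed_cost(candidates: list[tuple[float, float]], lam: float) -> float:
    """candidates: (description_length, residual_distortion) pairs."""
    if not candidates:
        return float("inf")  # empty translator set: cost is +infinity
    return min(dl + lam * dist for dl, dist in candidates)

candidates = [
    (2.0, 5.0),   # short but lossy translator
    (10.0, 0.0),  # long but exact translator
]

# Small lambda favours the short, lossy translator; large lambda
# favours the long, exact one.
assert directed_cost(candidates, lam=1.0) == 7.0
assert directed_cost(candidates, lam=4.0) == 10.0
```

The sketch shows why the parameter matters: the minimiser, not just the minimum value, changes as $\lambda$ crosses the break-even point between the candidates.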
- -\subsection{Directed and symmetrised distance} - -It is useful to separate the directed translation problem from its symmetrisation. - -\begin{definition}[Directed rulial cost]\label{def:directed} -For observers $O_1,O_2$ define the directed cost -\[ - \vec{D}_{\tau,m}(O_1\!\to\! O_2) - := - \inf_{T_{12}\in\Trans_{\tau,m}(O_1,O_2)} - \Bigl(\DL(T_{12}) + \lambda\,\Dist(O_2,\,T_{12}\circ O_1)\Bigr), -\] -with the convention that the infimum over an empty set is $+\infty$. -\end{definition} - -\begin{definition}[Rulial distance]\label{def:rulial} -The (symmetrised) \emph{rulial distance} is -\[ - D_{\tau,m}(O_1,O_2) - := - \vec{D}_{\tau,m}(O_1\!\to\! O_2) - \;+\; - \vec{D}_{\tau,m}(O_2\!\to\! O_1). -\] -Equivalently, expanding the two infima yields the joint infimum formulation used in earlier drafts: -\[ - D_{\tau,m}(O_1,O_2) - = \inf_{\substack{ - T_{12}\in\Trans_{\tau,m}(O_1,O_2)\\ - T_{21}\in\Trans_{\tau,m}(O_2,O_1)}} - \Bigl( - \DL(T_{12}) + \DL(T_{21}) - + \lambda \bigl( - \Dist(O_2, T_{12}\circ O_1) + - \Dist(O_1, T_{21}\circ O_2) - \bigr) - \Bigr). -\] -\end{definition} - -\subsection{Basic properties} - -\begin{theorem}[Basic properties]\label{thm:rulial-basic} -For all observers $O_1,O_2$ and budgets $(\tau,m)$: -\begin{enumerate}[leftmargin=*] - \item $D_{\tau,m}(O_1,O_2)\ge 0$; - \item $D_{\tau,m}(O_1,O_2)=D_{\tau,m}(O_2,O_1)$; - \item $D_{\tau,m}(O,O)=0$ for every observer $O$. -\end{enumerate} -\end{theorem} - -\begin{proof} -Non-negativity follows because $\DL\ge 0$ and $\Dist\ge 0$. -Symmetry holds by definition of $D_{\tau,m}$ as the sum of two directed terms. -For reflexivity, the identity translator $I$ is admissible by Assumption~\ref{ass:budgeted-trans} and -satisfies $\DL(I)=0$ and $\Dist(O,I\circ O)=0$, so both directed costs vanish. -\end{proof} - -\begin{corollary}[Observer equivalence]\label{cor:observer-equivalence} -Let $O_1,O_2$ be observers. 
-Then $D_{\tau,m}(O_1,O_2)=0$ if and only if there exist translators -$T_{12}\in\Trans_{\tau,m}(O_1,O_2)$ and $T_{21}\in\Trans_{\tau,m}(O_2,O_1)$ such that: -\begin{enumerate}[leftmargin=*] - \item $\Dist(O_2, T_{12}\circ O_1)=0$ and $\Dist(O_1, T_{21}\circ O_2)=0$; and - \item $\DL(T_{12})$ and $\DL(T_{21})$ are bounded by a constant independent of the histories under consideration. -\end{enumerate} -In this case the observers are equivalent under the rulial geometry: they differ only by constant-overhead, -distortion-free translation. -\end{corollary} - -\begin{proof}[Proof sketch] -If such translators exist then both directed costs are bounded by constants independent of the histories (distortion is $0$ and description length is constant), -so $D_{\tau,m}(O_1,O_2)$ is bounded by a constant. -Under the constant-overhead convention of the remark below, we identify such constant separation with $0$, yielding $D_{\tau,m}(O_1,O_2)=0$. -Conversely, if $D_{\tau,m}(O_1,O_2)=0$ then (by definition of $D_{\tau,m}$ as the sum of two directed costs) -both directed costs vanish modulo constant overhead, hence there exist translators in both directions with -zero residual distortion and constant description length, as claimed. -\end{proof} - -\begin{remark} -Observer equivalence is defined modulo constant description overhead; exact zero-length translators are not required and depend on the choice of coding scheme. -\end{remark} - -\subsection{Monotonicity under budget relaxation} -\label{subsec:rulial-monotone} - -The budgeted nature of rulial distance is essential: it distinguishes translations that are short in description length -but exceed available time/memory resources from those that are admissible under the deployment constraints. - -\begin{proposition}[Budget monotonicity]\label{prop:budget-monotone} -If $(\tau',m')\succeq(\tau,m)$ (i.e.\ $\tau'\ge\tau$ and $m'\ge m$) then -\[ - D_{\tau',m'}(O_1,O_2) \le D_{\tau,m}(O_1,O_2). 
-\] -\end{proposition} - -\begin{proof} -By definition, $\Trans_{\tau,m}(O_i,O_j)\subseteq \Trans_{\tau',m'}(O_i,O_j)$ under budget relaxation, -so the infimum is taken over a larger set and cannot increase. -\end{proof} - -\subsection{Triangle inequality up to a constant} -\label{subsec:rulial-triangle} - -\begin{theorem}[Triangle inequality up to additive slack]\label{thm:rulial-triangle} -Assume: -\begin{enumerate}[leftmargin=*] - \item Assumption~\ref{ass:dl-subadd} (subadditivity of $\DL$ up to constant $c$); - \item $\Dist$ is a metric on observers (triangle inequality) and translators are non-expansive - (Assumption~\ref{ass:lipschitz}); - \item budget classes are closed under composition (Assumption~\ref{ass:budgeted-trans}). -\end{enumerate} -Then for all observers $O_1,O_2,O_3$ we have -\[ - D_{\tau,m}(O_1,O_3) \le D_{\tau,m}(O_1,O_2) + D_{\tau,m}(O_2,O_3) + 2c. -\] -\end{theorem} - -\begin{proof} -Fix $\varepsilon>0$ and choose near-optimal translators for the two distances: -pick $T_{12},T_{21}$ such that the objective for $D_{\tau,m}(O_1,O_2)$ is within $\varepsilon/2$ of the infimum, -and $T_{23},T_{32}$ similarly for $D_{\tau,m}(O_2,O_3)$. -By closure under composition, $T_{13}=T_{23}\circ T_{12}$ and $T_{31}=T_{21}\circ T_{32}$ -are admissible budgeted translators. - -Subadditivity gives $\DL(T_{13})\le\DL(T_{12})+\DL(T_{23})+c$ and -$\DL(T_{31})\le\DL(T_{21})+\DL(T_{32})+c$. -For distortion, the triangle inequality and non-expansiveness yield -\begin{align*} - \Dist(O_3,\,T_{13}\circ O_1) - &= \Dist(O_3,\,T_{23}\circ T_{12}\circ O_1)\\ - &\le \Dist(O_3,\,T_{23}\circ O_2) + \Dist(T_{23}\circ O_2,\,T_{23}\circ T_{12}\circ O_1)\\ - &\le \Dist(O_3,\,T_{23}\circ O_2) + \Dist(O_2,\,T_{12}\circ O_1), -\end{align*} -and similarly for $\Dist(O_1,\,T_{31}\circ O_3)$. -Summing the bounds and using near-optimality yields the stated inequality up to $\varepsilon$. -Letting $\varepsilon\to 0$ completes the proof. 
-\end{proof} - -\begin{remark}[Quasi-pseudometric] -Together with Theorem~\ref{thm:rulial-basic}, Theorem~\ref{thm:rulial-triangle} makes $D_{\tau,m}$ a -quasi-pseudometric: it satisfies all pseudometric axioms except that the triangle inequality holds only up to an additive constant $2c$. -In practice $c$ is a small, fixed prefix-coding overhead; it may also be absorbed into $\lambda$ if desired. -The geometry is most informative when translation costs scale nontrivially with history size, or when comparing asymptotically distinct observer classes (e.g.\ $O(1)$ vs $O(N)$), in which regime the constant $c$ becomes negligible. -\end{remark} - -\subsection{Lawvere-metric (enriched category) viewpoint} -\label{subsec:lawvere} - -The symmetrised distance $D_{\tau,m}$ is convenient for neighbourhoods and ``frame separation'', -but the underlying translation problem is inherently \emph{directed}: -decompressing a boundary view into a bulk view can be infeasible under strict budgets, -whereas projection from bulk into boundary is typically admissible under the same budgets. -This asymmetry is captured by Lawvere's observation that metric spaces are categories enriched in -the monoidal poset $([0,\infty],\ge,+,0)$~\cite{Lawvere73,Kelly82}. - -\begin{definition}[Lawvere metric space]\label{def:lawvere-metric} -A \emph{Lawvere metric space} is a category enriched over the monoidal poset $([0,\infty],\ge,+,0)$. -Concretely, it is a collection of objects together with a function $d(x,y)\in[0,\infty]$ such that: -(i) $d(x,x)=0$ for all $x$; and (ii) $d(x,z)\le d(x,y)+d(y,z)$ for all $x,y,z$. -No symmetry condition is imposed; $d(x,y)$ and $d(y,x)$ may differ. -The value $+\infty$ is permitted and represents ``no morphism'' (infeasible translation). -\end{definition} - -\begin{definition}[Directed rulial hom]\label{def:lawvere-hom} -Fix budgets $(\tau,m)$. 
-For observers $O_1,O_2$ define the \emph{directed hom-value} -\[ - d_{\tau,m}(O_1,O_2) \;:=\; \vec{D}_{\tau,m}(O_1\!\to\!O_2)\in[0,\infty], -\] -with the convention $d_{\tau,m}(O_1,O_2)=+\infty$ when $\Trans_{\tau,m}(O_1,O_2)=\varnothing$. -The symmetrised rulial distance is the induced symmetrisation -$D_{\tau,m}(O_1,O_2)=d_{\tau,m}(O_1,O_2)+d_{\tau,m}(O_2,O_1)$. -\end{definition} - -For notational convenience, we treat $\vec{D}_{\tau,m}(O_1\!\to\!O_2)$ and $d_{\tau,m}(O_1,O_2)$ as interchangeable; -we use $d_{\tau,m}$ when emphasising the Lawvere-enriched interpretation. - -\begin{proposition}[Composition as triangle inequality]\label{prop:lawvere-triangle} -Assume: -(i) Assumption~\ref{ass:dl-subadd}; -(ii) $\Dist$ satisfies the triangle inequality and translators are non-expansive (Assumption~\ref{ass:lipschitz}); -and (iii) budget classes are closed under composition (Assumption~\ref{ass:budgeted-trans}). -Then for all observers $O_1,O_2,O_3$, -\[ - d_{\tau,m}(O_1,O_3) - \le - d_{\tau,m}(O_1,O_2) + d_{\tau,m}(O_2,O_3) + c. -\] -\end{proposition} - -\begin{proof}[Proof sketch] -The argument is the directed half of the proof of Theorem~\ref{thm:rulial-triangle}. -Choose near-optimal translators $T_{12}\in\Trans_{\tau,m}(O_1,O_2)$ and $T_{23}\in\Trans_{\tau,m}(O_2,O_3)$. -Closure under composition gives an admissible translator $T_{13}=T_{23}\circ T_{12}$. -Subadditivity bounds $\DL(T_{13})\le \DL(T_{12})+\DL(T_{23})+c$. -The distortion term satisfies -$\Dist(O_3,T_{13}\circ O_1)\le \Dist(O_3,T_{23}\circ O_2)+\Dist(O_2,T_{12}\circ O_1)$ -by the triangle inequality and non-expansiveness. -Taking infima yields the stated inequality. 
-\end{proof} - -\begin{remark}[Strict enrichment vs $O(1)$ slack] -If we treat description lengths modulo constant additive overhead (as is standard in Kolmogorov/MDL-style arguments), -or adopt a translator description language with a primitive sequencing combinator whose size is absorbed into the base machine model, -then the constant $c$ may be taken as $0$. -In that regime, $d_{\tau,m}$ satisfies the Lawvere triangle inequality exactly and the ``space of observers'' -is a $[0,\infty]$-enriched category. -When $c>0$, the enrichment is accurate up to fixed $O(1)$ slack, matching the quasi-pseudometric remark above. -\end{remark} - -The enriched viewpoint encodes several familiar facts: -directed costs compose by addition (triangle inequality); -budgets produce $+\infty$ hom-values (no admissible translator); -and asymmetry is the generic case rather than an exception. -It also exposes standard categorical tools: the enriched Yoneda embedding associates to each observer $O$ -its distance profile $d_{\tau,m}(O,-)$, and Cauchy completion corresponds to freely adjoining ``ideal observers'' -realising limits of Cauchy weights (useful when taking refinement limits)~\cite{Kelly82}. - -\begin{example}[Boundary vs bulk as an asymmetric hom]\label{ex:lawvere-boundary-bulk} -Let $O_{\partial}$ be the boundary observer of Example~\ref{ex:boundary-bulk}. -Let $O_{\mathrm{bulk}}^{+}$ be a bulk observer whose trace format includes the boundary payload as a visible component -(e.g.\ it outputs $(U_0,P)$ together with additional interior witnesses such as $(U_1,\ldots,U_n)$, match receipts, or causal cones). -Then the forgetful projection translator $T_{\mathrm{forget}}$ extracting $(U_0,P)$ is admissible with -$\DL(T_{\mathrm{forget}})=O(1)$ and zero residual distortion, so $d_{\tau,m}(O_{\mathrm{bulk}}^{+},O_{\partial})=O(1)$. 
-In the opposite direction, Proposition~\ref{prop:boundary-bulk} shows that -$d_{\tau,m}(O_{\partial},O_{\mathrm{bulk}}^{+})$ can be $+\infty$ under strict budgets (replay is infeasible under the time bound), -but reduces to $O(1)$ when $(\tau,m)$ are unbounded. -This is typical of Lawvere metric spaces: translation is compositional, but symmetry is not assumed. -\end{example} - -\begin{figure}[t] - \centering - \begin{tikzpicture}[ - obs/.style={draw=black,thick,rounded corners=3pt,inner sep=6pt,align=center,font=\scriptsize}, - arr/.style={-Latex,thick,draw=black}, - maybe/.style={-Latex,thick,draw=black!70,dash pattern=on 6pt off 4pt}, - >=Latex - ] - - \node[obs] (Op) at (0,0) {$O_{\partial}$\\boundary}; - \node[obs] (Ob) at (10.0,0) {$O_{bulk}^{+}$\\bulk$+$}; - \node[obs] (Os) at (5.0,-6.0) {$O_{sum}$\\summary}; - - % Draw edges first; add labels afterwards so they are never occluded by the centre inequality box. - \draw[maybe] (Op) -- (Ob); - \draw[arr] (Ob) -- (Os); - \draw[maybe] (Op) -- (Os); - - \node[font=\scriptsize,align=center,text width=10.5cm,fill=white,inner sep=3pt] at (5.0,-1.9) - {$\begin{aligned} - d_{\tau,m}(O_{\partial}, O_{sum}) - &\le d_{\tau,m}(O_{\partial}, O_{bulk}^{+}) - + d_{\tau,m}(O_{bulk}^{+}, O_{sum})\;(+c) - \end{aligned}$}; - - % Edge labels (drawn last to sit above the inequality box) - \path (Op) -- (Ob) - node[midway,above=5mm,font=\scriptsize,fill=white,inner sep=1pt] - {$d_{\tau,m}(O_{\partial}, O_{bulk}^{+})$}; - \path (Ob) -- (Os) - node[pos=0.75,sloped,above=3mm,font=\scriptsize,fill=white,inner sep=1pt] - {$d_{\tau,m}(O_{bulk}^{+}, O_{sum})$}; - \path (Op) -- (Os) - node[pos=0.75,sloped,below=3mm,font=\scriptsize,fill=white,inner sep=1pt] - {$d_{\tau,m}(O_{\partial}, O_{sum})$}; - - \end{tikzpicture} - \caption{Directed translation costs form a Lawvere-style geometry: costs compose additively (triangle inequality), - asymmetry is expected, and strict budgets can force $+\infty$ distances. 
- The dashed arrows emphasise that some translations (e.g.\ boundary$\to$bulk$+$) may be infeasible at fixed $(\tau,m)$.} - \label{fig:lawvere} -\end{figure} - -\subsection{Budget effects: replay is short but not fast} -\label{subsec:rulial-budget-effects} - -The boundary/bulk example illustrates why we insist on explicit budgets. - -\begin{proposition}[Boundary-to-bulk translation]\label{prop:boundary-bulk} -Let $O_{\partial}$ be a boundary observer and $O_{\mathrm{bulk}}$ a bulk observer as in Example~\ref{ex:boundary-bulk}. -Assume deterministic replay is available as a translator $T_{\mathrm{replay}}$. -Then: -\begin{enumerate}[leftmargin=*] - \item $\DL(T_{\mathrm{replay}})$ is $O(1)$ relative to the fixed semantics $\Apply$ (it is essentially the interpreter); - \item for fixed finite budgets $(\tau,m)$, $T_{\mathrm{replay}}\notin\Trans_{\tau,m}(O_{\partial},O_{\mathrm{bulk}})$ once the payload length exceeds $\tau$, - so $\vec{D}_{\tau,m}(O_{\partial}\!\to\!O_{\mathrm{bulk}})=+\infty$ beyond that regime; - \item for unbounded budgets, the directed distortion term can be $0$ (exact replay), so - $\vec{D}_{\infty,\infty}(O_{\partial}\!\to\!O_{\mathrm{bulk}})=O(1)$. -\end{enumerate} -\end{proposition} - -\begin{proof} -(1) follows from the fact that the replay algorithm is fixed once the operational semantics is fixed. -(2) is immediate: replay must apply the tick patches sequentially and therefore requires time proportional to payload length. -(3) follows because replay is exact, so distortion vanishes, and only the constant description length remains. -\end{proof} - -\begin{remark}[Interpretation] -At unbounded resources, boundary and bulk descriptions may be close in rulial distance. -At bounded resources, they can be infinitely far. -This captures an engineering reality: a short programme can still be computationally infeasible under strict budgets. 
-In particular, under the two-plane \WARP\ semantics (Paper~II), a state decomposes into a skeleton together with recursively -attached sub-states; translating a boundary observer into a bulk observer amounts to \emph{expanding} these attachment fibres -across the committed tick sequence, work that can be infeasible under strict $(\tau,m)$ budgets. -Wormholes (Paper~III) are precisely provenance-preserving compressions of multi-tick segments into single labelled edges; a bulk observer ``sees inside'' -only by replaying (and hence expanding) the corresponding sub-payload. -\end{remark} - -\sectionbreak - -% ============================================================ -\section{Multiway Systems, the Ruliad, and Observer Geometry} -\label{sec:multiway} - -We treat the Ruliad connection with full mathematical detail, since later papers in the series -move from mathematics to ethics and architecture. - -\subsection[Multiway space induced by warp rewriting]{Multiway space induced by \textnormal{\textsc{Warp}} rewriting} -\label{subsec:multiway-warps} - -A rule pack $R$ induces a multiway graph $\MW(\mathcal{U},R)$. -The determinism discipline of Paper~II does \emph{not} remove branching from the underlying multiway space; -rather, it ensures that once a boundary encoding is fixed (initial state, rule pack, scheduler policy, and tie-breaks), -the realised evolution is a unique path. 
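The claim that fixing boundary data selects a unique path through the multiway space can be made concrete with a small executable sketch. This is not Echo's engine: the string-rewriting state, the rule-pack encoding, and the lexicographic tie-break below are illustrative assumptions; only the shape of the argument (enumerate the multiway frontier, then commit exactly one match under a fixed total order) mirrors the text.

```python
# Minimal sketch (assumptions, not Echo's scheduler): states are strings,
# a "rule pack" maps rule names to (pattern, replacement) substring rewrites,
# and the tie-break is lexicographic over (rule name, match position).

def candidate_rewrites(state, rules):
    """The multiway frontier: every (rule, position) match at `state`."""
    return [(name, i)
            for name, (pattern, _) in sorted(rules.items())
            for i in range(len(state) - len(pattern) + 1)
            if state[i:i + len(pattern)] == pattern]

def apply_rewrite(state, rules, match):
    """Commit one rewrite, replacing the matched pattern in place."""
    name, i = match
    pattern, replacement = rules[name]
    return state[:i] + replacement + state[i + len(pattern):]

def deterministic_worldline(state, rules, ticks):
    """Replay `ticks` commits; the tie-break makes the path unique."""
    history = [state]
    for _ in range(ticks):
        frontier = candidate_rewrites(state, rules)
        if not frontier:
            break  # no admissible rewrites: the worldline halts
        # Deterministic tie-break: min over (rule name, position).
        state = apply_rewrite(state, rules, min(frontier))
        history.append(state)
    return history

rules = {"r1": ("ab", "ba"), "r2": ("ba", "ab")}
# Same boundary data (initial state, rule pack, tie-break) => same path.
assert deterministic_worldline("abab", rules, 3) == \
       deterministic_worldline("abab", rules, 3)
```

Changing any piece of the boundary data (a different tie-break order, a different rule pack) generally selects a different worldline, but each choice still yields exactly one path: the branching lives in the frontier, not in the committed evolution.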
- -\begin{figure}[t] - \centering - \begin{tikzpicture}[ - state/.style={circle,draw=black,thick,minimum size=5mm,inner sep=0pt}, - ghost/.style={circle,draw=black!40,thick,minimum size=5mm,inner sep=0pt}, - arrow/.style={-Latex,thick,draw=black!40}, - detarrow/.style={-Latex,very thick,draw=black}, - >=Latex - ] - - % Initial state - \node[state] (S0) at (0,0) {$S_0$}; - - % Level 1 - \node[ghost] (A1) at (-1.5,1.6) {}; - \node[state] (A2) at (0,1.6) {}; - \node[ghost] (A3) at (1.5,1.6) {}; - - \draw[arrow] (S0) -- (A1); - \draw[detarrow] (S0) -- (A2); - \draw[arrow] (S0) -- (A3); - - % Level 2 - \node[ghost] (B1) at (-2.6,3.2) {}; - \node[ghost] (B2) at (-1.5,3.2) {}; - \node[ghost] (B3) at (-0.5,3.2) {}; - \node[state] (B4) at (0.6,3.2) {}; - \node[ghost] (B5) at (1.6,3.2) {}; - \node[ghost] (B6) at (2.6,3.2) {}; - - \draw[arrow] (A1) -- (B1); - \draw[arrow] (A1) -- (B2); - \draw[arrow] (A2) -- (B3); - \draw[detarrow] (A2) -- (B4); - \draw[arrow] (A3) -- (B5); - \draw[arrow] (A3) -- (B6); - - % Level 3 merge points - \node[ghost] (C1) at (-1.0,4.8) {}; - \node[state] (C2) at (0.6,4.8) {}; - \node[ghost] (C3) at (2.0,4.8) {}; - - \draw[arrow] (B2) -- (C1); - \draw[arrow] (B3) -- (C1); - \draw[detarrow] (B4) -- (C2); - \draw[arrow] (B5) -- (C3); - \draw[arrow] (B6) -- (C3); - - % Annotation - \node[anchor=west,align=left] at (3.5,2.35) - {\scriptsize multiway space:\\[-1pt] - \scriptsize all possible rewrites}; - \node[anchor=west,align=left] at (3.5,1.05) - {\scriptsize deterministic worldline:\\[-1pt] - \scriptsize unique path for fixed\\[-1pt] - \scriptsize boundary data}; - - \end{tikzpicture} - \caption{A deterministic worldline (thick) through the multiway space of all possible \WARP\ rewrites. 
- Fixing the rule pack, initial state, and scheduling/tie-break data selects a unique path; alternative branches - represent different matches, schedules, or rule-pack choices.} - \label{fig:multiway-slice} -\end{figure} - -\begin{remark}[Confluence vs determinism] -Confluence is a property of a rewrite system: different rewrite orders lead to a common result. -Determinism in Paper~II is stronger and more operational: given fixed boundary data, there is a unique committed tick outcome. -The multiway graph still exists as the ambient possibility space in which observers may reason about counterfactuals. -\end{remark} - -\subsection{The Ruliad as a large history space} -\label{subsec:ruliad} - -Wolfram's Ruliad is informally the limit of all possible computations; in our setting it is natural to model it -as a large history space built from multiway systems~\cite{Wolfram2020}. - -\begin{definition}[Aion/Ruliad history space]\label{def:ruliad} -Fix a class $\mathfrak{R}$ of admissible rule packs and a class $\mathfrak{U}$ of admissible initial states. -Define the \emph{Aion history space} (the \emph{Ruliad} in our setting) as the disjoint union of history categories -\[ - \Ruliad \;:=\; \bigsqcup_{(U_0,R)\in\mathfrak{U}\times\mathfrak{R}} \Hist(\mathcal{U}_{U_0,R},R), -\] -where $\mathcal{U}_{U_0,R}$ is the forward closure of $U_0$ under $R$ (the reachable states). -\end{definition} - -\begin{remark}[Large-category caveat] -$\Ruliad$ is a large category (indeed a proper class in many settings). -The disjoint union is deliberate: we treat histories as provenance-bearing artefacts, so components are not quotiented by extensional -state equality. -In particular, even if two reachable states are \emph{identical} as graph-shaped data, we keep them as distinct objects of $\Ruliad$ -when they arise from different causal origins (different initial states and/or rule packs). 
-This contrasts with a ``merging'' view of the Ruliad that identifies states across components and thereby erases origin information.
-We use it as a conceptual container: the purpose is to make explicit that a single deterministic worldline is a small, selected path
-within a vastly larger possibility space.
-None of the metric arguments in \S\ref{sec:rulial} require manipulating $\Ruliad$ as a set-theoretic object.
-\end{remark}
-
-\subsection{Chronos, Kairos, Aion}
-\label{subsec:chronos-kairos-aion}
-
-We formalise the three-layer time model alluded to in earlier drafts.
-
-\begin{definition}[Chronos]\label{def:chronos}
-\emph{Chronos time} is the linear time of a fixed worldline:
-given a replayable payload $P=(\mu_0,\ldots,\mu_{n-1})$ of \emph{tick patches}, Chronos is the finite linear order
-$0<1<\cdots<n-1$ of committed tick indices.
-\end{definition}
-
-\begin{definition}[Kairos]\label{def:kairos}
-\emph{Kairos time} is the branching structure of admissible alternatives:
-at each reachable state, the multiway graph $\MW(\mathcal{U},R)$ records the rewrites that could have been committed,
-of which the deterministic scheduler selects exactly one.
-\end{definition}
-
-\begin{definition}[Aion]\label{def:aion}
-\emph{Aion time} is the rule-space layer: the Aion history space $\Ruliad$ of Definition~\ref{def:ruliad},
-in which whole worldlines and their multiway ambients are compared across rule packs and initial states.
-\end{definition}
-
-\subsection*{A.3 Directed costs}
-
-Combining the replay bounds:
-\begin{itemize}
-\item for fixed finite budgets $(\tau,m)$, once the payload length exceeds $\tau$,
-\[
-\vec{D}_{\tau,m}(O_\partial \to O_{\mathrm{bulk}}) = +\infty;
-\]
-\item for any budgets with $\tau>0$,
-\[
-\vec{D}_{\tau,m}(O_{\mathrm{bulk}} \to O_\partial) = O(1).
-\]
-\end{itemize}
-
-For unbounded budgets $(\tau,m)=(\infty,\infty)$, replay is admissible and exact, so both directed
-costs are $O(1)$.
-Here the hidden constant is independent of $|P|$ (history length) once the fixed semantics $\Apply$ and the translator coding scheme are chosen.
-
-\subsection*{A.4 Symmetrised distance}
-
-The symmetrised rulial distance is
-\[
-D_{\tau,m}(O_\partial,O_{\mathrm{bulk}})
-= \vec{D}_{\tau,m}(O_\partial \to O_{\mathrm{bulk}})
-+ \vec{D}_{\tau,m}(O_{\mathrm{bulk}} \to O_\partial).
-\]
-
-Thus:
-\begin{itemize}
-\item under strict budgets,
-$D_{\tau,m}(O_\partial,O_{\mathrm{bulk}})=+\infty$; and
-\item under relaxed budgets,
-$D_{\infty,\infty}(O_\partial,O_{\mathrm{bulk}})=O(1)$.
-\end{itemize}
-
-\subsection*{A.5 Interpretation}
-
-This example illustrates the operational use of rulial distance. In practice we do not compute the
-infimum in Definition~\ref{def:rulial} directly; rather, we construct explicit translators and thereby obtain
-concrete upper bounds. 
Improving translators reduces these bounds, whereas strong summarisation -or information hiding can increase them---in the limiting case, to $+\infty$. The geometry therefore captures -how observer separation depends on available resources rather than on semantic disagreement. - -\clearpage -\addcontentsline{toc}{section}{References} -\bibliographystyle{alphaurl} -\bibliography{refs} - -\end{document} diff --git a/docs/archive/study/refs.bib b/docs/archive/study/refs.bib deleted file mode 100644 index 42a45ae6..00000000 --- a/docs/archive/study/refs.bib +++ /dev/null @@ -1,111 +0,0 @@ -@article{Bennett98, - author = {Bennett, Charles H. and G{\'a}cs, P{\'e}ter and Li, Ming and Vit{\'a}nyi, Paul M. B. and Zurek, Wojciech H.}, - title = {Information Distance}, - journal = {IEEE Transactions on Information Theory}, - volume = {44}, - number = {4}, - pages = {1407--1423}, - year = {1998}, - doi = {10.1109/18.672557} -} - -@article{CD11, - author = {Coecke, Bob and Duncan, Ross}, - title = {Interacting quantum observables: Categorical algebra and diagrammatic reasoning}, - journal = {New Journal of Physics}, - volume = {13}, - number = {4}, - pages = {043016}, - year = {2011}, - doi = {10.1088/1367-2630/13/4/043016} -} - -@inproceedings{CES86, - author = {Clarke, Edmund M. and Emerson, E. Allen and Sistla, A. Prasad}, - title = {Automatic verification of finite-state concurrent systems using temporal logic}, - booktitle = {Proceedings of the 8th Annual ACM Symposium on Principles of Programming Languages}, - pages = {85--96}, - year = {1986}, - publisher = {ACM}, - doi = {10.1145/643800.643807} -} - -@book{Kelly82, - author = {Kelly, G. M.}, - title = {Basic Concepts of Enriched Category Theory}, - publisher = {Cambridge University Press}, - year = {1982}, - isbn = {9780521282648} -} - -@article{Lawvere73, - author = {Lawvere, F. 
William}, - title = {Metric spaces, generalized logic, and closed categories}, - journal = {Rendiconti del Seminario Matematico e Fisico di Milano}, - volume = {43}, - pages = {135--166}, - year = {1973} -} - -@book{LiVitanyi2019, - author = {Li, Ming and Vit{\'a}nyi, Paul M. B.}, - title = {An Introduction to Kolmogorov Complexity and Its Applications}, - edition = {4}, - publisher = {Springer}, - year = {2019}, - isbn = {978-3-030-10664-8} -} - -@inproceedings{Pnu77, - author = {Pnueli, Amir}, - title = {The temporal logic of programs}, - booktitle = {Proceedings of the 18th Annual Symposium on Foundations of Computer Science}, - pages = {46--57}, - year = {1977}, - organization = {IEEE} -} - -@article{Ris78, - author = {Rissanen, Jorma}, - title = {Modeling by shortest data description}, - journal = {Automatica}, - volume = {14}, - number = {5}, - pages = {465--471}, - year = {1978} -} - -@misc{Ros25a, - author = {Ross, James}, - title = {{WARP Graphs: A Worldline Algebra for Recursive Provenance}}, - howpublished = {AI$\Omega$N Foundations Series --- Paper I}, - month = {December}, - year = {2025}, - doi = {10.5281/zenodo.17908005}, - note = {Version cited: December 2025 PDF.} -} - -@misc{Ros25b, - author = {Ross, James}, - title = {{Deterministic Multiway Rewriting and Tick-Based Semantics}}, - howpublished = {AI$\Omega$N Foundations Series --- Paper II}, - month = {December}, - year = {2025}, - doi = {10.5281/zenodo.17934512} -} - -@misc{Ros25c, - author = {Ross, James}, - title = {{Computation Holography and Boundary Provenance Payloads}}, - howpublished = {AI$\Omega$N Foundations Series --- Paper III}, - month = {December}, - year = {2025}, - doi = {10.5281/zenodo.17963669} -} - -@misc{Wolfram2020, - author = {Wolfram, Stephen}, - title = {The Ruliad and the Wolfram Physics Project}, - year = {2020}, - note = {Available at \url{https://www.wolframphysics.org}} -} diff --git a/docs/archive/study/render-tour-diagrams.py 
b/docs/archive/study/render-tour-diagrams.py deleted file mode 100644 index 4aa65951..00000000 --- a/docs/archive/study/render-tour-diagrams.py +++ /dev/null @@ -1,91 +0,0 @@ -#!/usr/bin/env python3 -# SPDX-License-Identifier: Apache-2.0 -# © James Ross Ω FLYING•ROBOTS -""" -Extract mermaid blocks from what-makes-echo-tick-tour.md, -render them to SVG, and update the markdown to reference the SVGs. -""" - -import re -import subprocess -import sys -from pathlib import Path - -STUDY_DIR = Path(__file__).parent -DIAGRAMS_DIR = STUDY_DIR / "tour-diagrams" -INPUT_MD = STUDY_DIR / "what-makes-echo-tick-tour.md" - - -def extract_mermaid_blocks(content: str) -> list[tuple[int, int, str]]: - """Extract (start, end, code) tuples for all mermaid blocks.""" - pattern = r'```mermaid\n(.*?)```' - results = [] - for match in re.finditer(pattern, content, re.DOTALL): - results.append((match.start(), match.end(), match.group(1).strip())) - return results - - -def render_mermaid_to_svg(diagram_id: str, mermaid_code: str) -> Path | None: - """Render mermaid code to SVG. 
Returns path to SVG or None on failure.""" - DIAGRAMS_DIR.mkdir(parents=True, exist_ok=True) - - mmd_file = DIAGRAMS_DIR / f"{diagram_id}.mmd" - svg_file = DIAGRAMS_DIR / f"{diagram_id}.svg" - - mmd_file.write_text(mermaid_code) - - try: - result = subprocess.run( - ["mmdc", "-i", str(mmd_file), "-o", str(svg_file), "-b", "transparent"], - capture_output=True, - text=True, - timeout=30 - ) - if result.returncode != 0: - print(f" mmdc failed for {diagram_id}: {result.stderr}", file=sys.stderr) - return None - except subprocess.TimeoutExpired: - print(f" mmdc timeout for {diagram_id}", file=sys.stderr) - return None - except FileNotFoundError: - print(" mmdc not found - install with: npm install -g @mermaid-js/mermaid-cli", file=sys.stderr) - return None - - if svg_file.exists(): - return svg_file - return None - - -def main(): - print("=== Rendering Tour Diagrams ===\n") - - content = INPUT_MD.read_text() - blocks = extract_mermaid_blocks(content) - - print(f"Found {len(blocks)} mermaid diagrams") - - # Process in reverse order to preserve string positions - for i, (start, end, code) in enumerate(reversed(blocks), 1): - diagram_num = len(blocks) - i + 1 - diagram_id = f"tour-{diagram_num:02d}" - - print(f" Converting {diagram_id}...", end=" ") - - svg_path = render_mermaid_to_svg(diagram_id, code) - if svg_path: - # Replace mermaid block with image reference - # Use relative path from study dir - img_ref = f"![Diagram {diagram_num}](tour-diagrams/{diagram_id}.svg)" - content = content[:start] + img_ref + content[end:] - print("OK") - else: - print("FAILED") - - # Write updated markdown - INPUT_MD.write_text(content) - print(f"\nUpdated {INPUT_MD.name} with SVG references") - print(f"Diagrams saved to {DIAGRAMS_DIR}") - - -if __name__ == "__main__": - main() diff --git a/docs/archive/study/what-makes-echo-tick-processed.md b/docs/archive/study/what-makes-echo-tick-processed.md deleted file mode 100644 index 3a3e092f..00000000 --- 
a/docs/archive/study/what-makes-echo-tick-processed.md +++ /dev/null @@ -1,1121 +0,0 @@ - - - -# What Makes Echo Tick? - -> **Your Tour Guide**: Claude (Opus 4.5) -> -> Welcome! I've been asked to give you a personal tour through Echo's internals. This isn't just documentation—I'll share what I find elegant, surprising, and occasionally baffling about this codebase. When you see a red-outlined box, that's me stepping out of "narrator mode" to give you my unfiltered take. -> -> **Reading Time**: ~45 minutes for complete understanding. - ---- - -## Table of Contents - -1. [Philosophy: Why Echo Exists](#1-philosophy-why-echo-exists) -2. [The Big Picture: Architecture Overview](#2-the-big-picture-architecture-overview) -3. [Core Concepts: The WARP Graph](#3-core-concepts-the-warp-graph) -4. [The Engine: Heart of Echo](#4-the-engine-heart-of-echo) -5. [The Tick Pipeline: Where Everything Happens](#5-the-tick-pipeline-where-everything-happens) -6. [Parallel Execution: BOAW (Bag of Autonomous Workers)](#6-parallel-execution-boaw-bag-of-autonomous-workers) -7. [Storage & Hashing: Content-Addressed Truth](#7-storage--hashing-content-addressed-truth) -8. [Worked Example: Tracing a Link Click](#8-worked-example-tracing-a-link-click) -9. [The Viewer: Observing Echo](#9-the-viewer-observing-echo) -10. [Glossary](#10-glossary) - ---- - -## 1. Philosophy: Why Echo Exists - -### 1.1 The Problem - -Traditional game engines and simulations treat state as **mutable objects**. This creates fundamental problems: - -- **Replay is hard**: You can't just "rewind" because state changes are scattered and untracked. -- **Synchronization is fragile**: Two machines running the same logic may diverge due to floating-point differences, thread timing, or iteration order. -- **Debugging is a nightmare**: "It worked on my machine" is the symptom of non-determinism. -- **Branching is impossible**: You can't easily ask "what if?" without copying everything. 
- -\begin{claudecommentary} -**Claude's Take**: This list of problems isn't theoretical. I've seen countless debugging sessions where the root cause was "HashMap iteration order changed between runs." Echo's designers clearly got burned by non-determinism at some point and decided: _never again_. - -What strikes me most is the last point—"branching is impossible." Most engines don't even _try_ to support branching because it seems like a feature for version control, not runtime systems. Echo treats it as a first-class concern. That's unusual and, I think, genuinely forward-thinking. -\end{claudecommentary} - -### 1.2 Echo's Answer - -Echo treats **state as a typed graph** and **all changes as rewrites**. Each "tick" of the engine: - -1. Proposes a set of rewrites -2. Executes them in **deterministic order** -3. Emits **cryptographic hashes** of the resulting state - -This means: - -- **Same inputs → Same outputs** (always, on any machine) -- **State is verifiable** (hashes prove correctness) -- **Replay is trivial** (patches are prescriptive) -- **Branching is free** (copy-on-write snapshots) - -### 1.3 Core Design Principles - -```text -┌─────────────────────────────────────────────────────────────────┐ -│ ECHO'S THREE PILLARS │ -├─────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ -│ │ DETERMINISM │ │ PROVENANCE │ │ TOOLING │ │ -│ │ FIRST │ │ YOU CAN │ │ AS FIRST │ │ -│ │ │ │ TRUST │ │ CLASS │ │ -│ ├─────────────────┤ ├─────────────────┤ ├─────────────────┤ │ -│ │ Same inputs │ │ Snapshots are │ │ Graphs stream │ │ -│ │ always produce │ │ content- │ │ over canonical │ │ -│ │ same hashes │ │ addressed │ │ wire protocol │ │ -│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────┘ -``` - -\begin{claudecommentary} -**Claude's Take**: "Tooling as first-class" is the sleeper here. 
Most engines treat debugging tools, replay systems, and visualization as afterthoughts—bolted on after the core is done. Echo inverts this: the wire protocol, the hash scheme, and the diff format were designed _so that tools could exist_. - -I've read a lot of engine architectures. This level of intentionality about tooling is rare. It's also why Echo can have a separate `warp-viewer` crate that just... works, instead of requiring heroic reverse-engineering. -\end{claudecommentary} - ---- - -## 2. The Big Picture: Architecture Overview - -### 2.1 System Layers - -Echo is organized into distinct layers, each with a specific responsibility: - -![Diagram 1](diagrams/tour-01.pdf) - -\begin{claudecommentary} -**Claude's Take**: This is a _clean_ layer cake. Each layer only talks to its neighbors. No "Layer 5 reaching down to Layer 1 for performance reasons." That discipline is hard to maintain, and I respect it. - -The `WSC Format` at Layer 2 caught my eye. It's Echo's custom columnar storage format—and before you ask "why not just use Arrow or Parquet?"—I'll spoil it: WSC is designed for mmap-friendly, zero-copy reads where every row is 8-byte aligned and you can binary-search directly into the file. It's specialized for _exactly this use case_. Sometimes NIH syndrome is justified. 
-\end{claudecommentary} - -### 2.2 Crate Map - -| Crate | Purpose | -| ---------------------- | ---------------------------------------------- | -| `warp-core` | The deterministic rewrite engine (the "brain") | -| `echo-graph` | Renderable graph types + diff operations | -| `echo-session-proto` | Wire protocol (canonical CBOR framing) | -| `echo-session-service` | Headless Unix-socket hub for tools | -| `echo-session-client` | Client helpers for connecting to the hub | -| `warp-viewer` | Native WGPU viewer for visualizing graphs | - -### 2.3 Data Flow Overview - -![Diagram 2](diagrams/tour-02.pdf) - -\begin{claudecommentary} -**Claude's Take**: Notice how the Engine talks to itself multiple times before touching the Store? That's the commit protocol at work. The Engine is _paranoid_ about mutations—it queues up intentions, validates them, and only then touches state. If you're used to "just mutate it directly" game engines, this will feel ceremonial. The ceremony is the point. -\end{claudecommentary} - ---- - -## 3. Core Concepts: The WARP Graph - -### 3.1 What is a WARP Graph? - -A WARP (**W**orldline **A**lgebra for **R**ecursive **P**rovenance) graph is Echo's fundamental data structure. It's not just a graph—it's a graph with **deterministic semantics**. - -![Diagram 3](diagrams/tour-03.pdf) - -\begin{claudecommentary} -**Claude's Take**: The name "WARP" is doing a lot of work here. "Worldline" evokes physics—specifically, the path an object traces through spacetime. In Echo, a node's "worldline" is its history of states across ticks. "Recursive Provenance" means you can always ask "where did this value come from?" and trace it back through the graph's history. - -Is the name a bit grandiose for what amounts to "typed graph with audit trail"? Maybe. But I've seen worse acronyms in this industry. 
-\end{claudecommentary} - -### 3.2 Two-Plane Architecture - -Echo separates structure from data via the **Two-Plane Model** (ADR-0001): - -| Plane | Contains | Purpose | -| ------------------ | ------------------------- | ------------------------------------- | -| **Skeleton** | Nodes + Edges (structure) | Fast traversal, deterministic hashing | -| **Attachment (α)** | Typed payloads | Domain-specific data | - -**Why separate them?** - -```text -┌────────────────────────────────────────────────────────────────────┐ -│ SKELETON PLANE (Structure) │ -│ │ -│ ┌─────┐ edge:link ┌─────┐ │ -│ │ N1 │─────────────────▶│ N2 │ │ -│ └─────┘ └─────┘ │ -│ │ │ │ -│ │ edge:child │ edge:ref │ -│ ▼ ▼ │ -│ ┌─────┐◀─────────────────────┘ │ -│ │ N3 │ │ -│ └─────┘ │ -│ │ -├────────────────────────────────────────────────────────────────────┤ -│ ATTACHMENT PLANE (Payloads) │ -│ │ -│ N1.α["title"] = Atom { type: "string", bytes: "Home" } │ -│ N2.α["url"] = Atom { type: "string", bytes: "/page/b" } │ -│ N3.α["body"] = Atom { type: "html", bytes: "
<html>...</html>
" } │ -│ │ -└────────────────────────────────────────────────────────────────────┘ -``` - -**Key insight**: Skeleton rewrites **never decode attachments**. This keeps the hot path fast and deterministic. - -\begin{claudecommentary} -**Claude's Take**: This is where Echo gets clever. The Skeleton plane only contains node IDs, edge IDs, and type tags—all fixed-size, all byte-comparable. You can compute the entire state hash without ever deserializing a single JSON blob, HTML string, or texture. - -The Attachment plane (they call it "α" because of course they do) holds the actual domain data. It participates in hashing but doesn't affect traversal. This separation means you can have a 10MB texture attached to a node and still iterate the graph at full speed. - -I've seen similar ideas in ECS architectures, but usually the separation is "components vs. systems." Echo's split is "structure vs. data," which is subtly different and, I think, more principled. -\end{claudecommentary} - -### 3.3 Node and Edge Identity - -Every node and edge has a **32-byte identifier**: - -```rust -pub struct NodeId([u8; 32]); // Content-addressed or assigned -pub struct EdgeId([u8; 32]); // Unique edge identifier -``` - -These IDs are: - -- **Deterministic**: Same content → same ID (when content-addressed) -- **Sortable**: Lexicographic ordering enables deterministic iteration -- **Hashable**: Participate in state root computation - -### 3.4 WarpInstances: Graphs Within Graphs - -Echo supports **descended attachments**—embedding entire graphs within attachment slots: - -![Diagram 4](diagrams/tour-04.pdf) - -This enables "WARPs all the way down"—recursive composition while maintaining determinism. - -\begin{claudecommentary} -**Claude's Take**: WarpInstances are _wild_. You can have a node whose attachment slot contains... another entire graph. And that graph can have nodes whose attachment slots contain... more graphs. It's turtles, but the turtles are graphs. - -Why would you want this? 
Think of a game with procedurally generated dungeons. Each dungeon could be its own WarpInstance, loaded on demand, with its own tick history and state root. The player character is in the "outer" instance; stepping through a portal descends into the "inner" one. - -I don't know if Echo actually uses this feature yet, but the architecture supports it cleanly. That's design for the future without overengineering the present. -\end{claudecommentary} - ---- - -## 4. The Engine: Heart of Echo - -### 4.1 The Engine Struct - -The `Engine` is Echo's central orchestrator. Located in `crates/warp-core/src/engine_impl.rs`: - -```rust -pub struct Engine { - state: WarpState, // Multi-instance graph state - rules: HashMap<RuleId, RewriteRule>, // Registered rewrite rules - scheduler: DeterministicScheduler, // Deterministic ordering - bus: MaterializationBus, // Output channels - history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>, - tx_counter: u64, // Transaction counter - live_txs: BTreeSet<TxId>, // Active transactions - // ... more fields -} -``` - -\begin{claudecommentary} -**Claude's Take**: A few things jump out here: - -1. **`rules: HashMap`** — Wait, HashMap? Isn't that non-deterministic? It is! But notice: this is for _looking up_ rules by ID, not for _iterating_. The iteration order is determined by the `scheduler`, which is explicitly deterministic. The HashMap is fine because rule IDs are stable. - -2. **`history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>`** — The engine keeps its entire history in memory? That seems expensive. I suspect this is configurable, or there's a garbage collection pass I haven't found yet. For long-running simulations, unbounded history would be a problem. - -3. **`BTreeSet` for live transactions** — BTreeSet, not HashSet. They're _really_ committed to determinism. Even the set of "which transactions are in-flight" is stored in sorted order.
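To see why that matters, here's a tiny self-contained sketch (mine, not from the codebase) contrasting arrival order with iteration order:

```rust
use std::collections::BTreeSet;

fn main() {
    // Insert transaction ids in arbitrary arrival order.
    let mut live_txs = BTreeSet::new();
    for tx in [7u64, 2, 9, 2, 5] {
        live_txs.insert(tx); // duplicates collapse; arrival order is irrelevant
    }

    // Iteration is always ascending, so anything derived from it
    // (hashes, receipts, logs) is reproducible across runs.
    let drained: Vec<u64> = live_txs.iter().copied().collect();
    assert_eq!(drained, vec![2, 5, 7, 9]);
}
```

A `HashSet` would hold the same elements but give no iteration-order guarantee, which is exactly the kind of silent nondeterminism Echo is designed to exclude.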
- \end{claudecommentary} - -### 4.2 Construction - -The engine is built via the `EngineBuilder`: - -```rust -let engine = EngineBuilder::new(store, root_node_id) - .with_policy_id(1) - .with_telemetry(telemetry) - .build(); -``` - -**What happens during construction:** - -![Diagram 5](diagrams/tour-05.pdf) - -### 4.3 Rewrite Rules - -Rules are the atoms of change in Echo. Each rule has three functions: - -```rust -pub struct RewriteRule { - pub name: String, - pub matcher: MatchFn, // Does this rule apply? - pub executor: ExecuteFn, // What changes to make - pub footprint: FootprintFn, // What resources are touched - pub policy: ConflictPolicy, // What to do on conflict -} - -// Function signatures (Phase 5 BOAW model): -type MatchFn = fn(GraphView, &NodeId) -> bool; -type ExecuteFn = fn(GraphView, &NodeId, &mut TickDelta); -type FootprintFn = fn(GraphView, &NodeId) -> Footprint; -``` - -**Critical constraint**: Executors receive a **read-only** `GraphView` and emit changes to a `TickDelta`. They **never** mutate the graph directly. - -\begin{claudecommentary} -**Claude's Take**: The `FootprintFn` is the secret sauce. Before executing a rule, Echo calls this function to ask: "What nodes, edges, and attachments will you touch?" The footprint is a _conservative estimate_—you must declare everything you _might_ read or write. - -This enables Echo's parallel execution model. If two rules have non-overlapping footprints, they can execute in parallel, in any order, and the result is guaranteed identical. If footprints overlap, they're sequenced deterministically. - -The burden on the rule author is significant: you must declare your footprint accurately, or you'll get either conflicts (declared overlap when there was none) or silent bugs (undeclared overlap that corrupts state). This is a sharp edge in the API. 
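To make the overlap rules concrete, here's a minimal sketch of the independence check (my own simplification, using bare `u64` ids and a stripped-down `Footprint` rather than Echo's real types):

```rust
use std::collections::BTreeSet;

// Simplified footprint: just read and write sets over node ids.
struct Footprint {
    reads: BTreeSet<u64>,
    writes: BTreeSet<u64>,
}

// Two footprints are independent iff neither one's writes overlap the
// other's reads or writes. Read/read overlap is allowed.
fn independent(a: &Footprint, b: &Footprint) -> bool {
    a.writes.is_disjoint(&b.writes)
        && a.writes.is_disjoint(&b.reads)
        && b.writes.is_disjoint(&a.reads)
}

fn main() {
    let fp = |r: &[u64], w: &[u64]| Footprint {
        reads: r.iter().copied().collect(),
        writes: w.iter().copied().collect(),
    };

    // Read/read overlap on node 1: allowed.
    assert!(independent(&fp(&[1], &[2]), &fp(&[1], &[3])));
    // Write/write overlap on node 2: conflict.
    assert!(!independent(&fp(&[], &[2]), &fp(&[], &[2])));
    // Read/write overlap on node 3: conflict.
    assert!(!independent(&fp(&[3], &[]), &fp(&[], &[3])));
}
```

If the check fails, the later rewrite (in canonical order) is rejected and reported in the receipt rather than executed.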
-\end{claudecommentary} - -### 4.4 GraphView: Read-Only Access - -The `GraphView` enforces BOAW's immutability contract: - -```rust -pub struct GraphView<'a> { - store: &'a GraphStore, - warp_id: WarpId, -} - -impl<'a> GraphView<'a> { - pub fn node(&self, id: &NodeId) -> Option<&NodeRecord>; - pub fn edges_from(&self, id: &NodeId) -> impl Iterator; - pub fn node_attachment(&self, id: &NodeId, key: &str) -> Option<&AttachmentValue>; - // ... read-only methods only -} -``` - -**No `DerefMut`, no `AsRef`, no interior mutability.** This is enforced at the type level. - -\begin{claudecommentary} -**Claude's Take**: I went looking for escape hatches here. `RefCell`? No. `UnsafeCell`? No. `Arc>`? No. The `GraphView` is genuinely immutable by construction. - -This is Rust at its best: the borrow checker prevents you from shooting yourself in the foot. In C++, you'd need discipline and code review to enforce "executors don't mutate the graph." In Rust, it's just... not possible. The types don't allow it. -\end{claudecommentary} - ---- - -## 5. The Tick Pipeline: Where Everything Happens - -### 5.1 Overview - -A "tick" is one complete cycle of the engine. It has five phases: - -![Diagram 6](diagrams/tour-06.pdf) - -\begin{claudecommentary} -**Claude's Take**: The "Commit" phase has five sub-steps. _Five_. This is where I started to appreciate how much thought went into this system. Let me summarize what each does: - -1. **Drain**: Pull all pending rewrites from the scheduler in canonical order -2. **Reserve**: Check footprints for conflicts, accept or reject each rewrite -3. **Execute**: Run the accepted rewrites (this is where parallelism happens) -4. **Merge**: Combine all `TickDelta` outputs into a single canonical operation list -5. **Finalize**: Apply the merged operations to produce the new state - -The reservation phase is particularly clever. It's like a two-phase commit: first you "reserve" your footprint (claim your lock), then you execute. 
If your footprint conflicts with an already-reserved footprint, you're rejected. No execution happens until all accepted rewrites have been validated. -\end{claudecommentary} - -### 5.2 Phase 1: Begin Transaction - -```rust -let tx = engine.begin(); -``` - -**What happens:** - -1. Increment `tx_counter` (wrapping to avoid 0) -2. Add `TxId` to `live_txs` set -3. Return opaque transaction identifier - -```text -┌─────────────────────────────────────────────────┐ -│ engine.begin() │ -├─────────────────────────────────────────────────┤ -│ tx_counter: 0 → 1 │ -│ live_txs: {} → {TxId(1)} │ -│ returns: TxId(1) │ -└─────────────────────────────────────────────────┘ -``` - -### 5.3 Phase 2: Apply Rules - -```rust -engine.apply(tx, "rule_name", &scope_node_id); -``` - -**What happens:** - -![Diagram 7](diagrams/tour-07.pdf) - -**The Footprint**: A declaration of what resources the rule will read and write: - -```rust -pub struct Footprint { - pub n_read: BTreeSet<NodeId>, // Nodes to read - pub n_write: BTreeSet<NodeId>, // Nodes to write - pub e_read: BTreeSet<EdgeId>, // Edges to read - pub e_write: BTreeSet<EdgeId>, // Edges to write - pub a_read: BTreeSet<AttachmentKey>, // Attachments to read - pub a_write: BTreeSet<AttachmentKey>, // Attachments to write - // ... ports, factor_mask -} -``` - -**Scheduler deduplication**: If the same `(scope_hash, rule_id)` is applied multiple times, **last wins**. This enables idempotent retry semantics. - -### 5.4 Phase 3: Commit (The Heart of Determinism) - -```rust -let (snapshot, receipt, patch) = engine.commit_with_receipt(tx); -``` - -This is where Echo's magic happens.
Let's break it down: - -#### 5.4.1 Drain - -The scheduler drains all pending rewrites in **canonical order**: - -```rust -// RadixScheduler uses O(n) LSD radix sort -// 20 passes: 2 nonce + 2 rule_id + 16 scope_hash (16-bit digits) -let rewrites = scheduler.drain_for_tx(tx); // Vec in canonical order -``` - -**Ordering key**: `(scope_hash[0..32], rule_id, nonce)` - -This ensures the **same rewrites always execute in the same order**, regardless of when they were applied. - -\begin{claudecommentary} -**Claude's Take**: Radix sort! They're using radix sort for the scheduler drain. Not quicksort, not merge sort—radix sort. - -Why? Because radix sort is _stable_ and _deterministic_ by construction. Quicksort's behavior depends on pivot selection, which can vary. Merge sort is deterministic, but radix sort is faster for fixed-size keys. Since the ordering key is exactly 36 bytes (32-byte scope hash + 2-byte rule ID + 2-byte nonce), radix sort is perfect. - -This is the kind of detail that separates "deterministic by accident" from "deterministic by design." -\end{claudecommentary} - -#### 5.4.2 Reserve (Independence Check) - -For each rewrite in canonical order: - -![Diagram 8](diagrams/tour-08.pdf) - -**Conflict detection**: Uses `GenSet` for O(1) lookups: - -- Read-read overlap: **allowed** -- Write-write overlap: **conflict** -- Read-write overlap: **conflict** - -#### 5.4.3 Execute (Parallel, Lockless) - -Accepted rewrites execute against the **read-only snapshot**: - -```rust -for rewrite in accepted { - let rule = &rules[rewrite.rule_id]; - let view = GraphView::new(&state, rewrite.warp_id); - - // Executor reads from view, emits to delta - (rule.executor)(view, &rewrite.scope, &mut delta); -} -``` - -**Critical**: `GraphView` is immutable. 
`TickDelta` accumulates operations: - -```rust -pub struct TickDelta { - ops: Vec<(WarpOp, OpOrigin)>, -} - -// Operations emitted during execution: -delta.emit(WarpOp::UpsertNode { id, record }); -delta.emit(WarpOp::UpsertEdge { from, edge }); -delta.emit(WarpOp::DeleteNode { id }); -delta.emit(WarpOp::SetAttachment { node, key, value }); -``` - -#### 5.4.4 Merge (Canonical Sort) - -All operations are sorted into **canonical replay order**: - -```rust -// Sort by (WarpOpKey, OpOrigin) -ops.sort_by_key(|(op, origin)| (op.sort_key(), origin.clone())); - -// Deduplicate identical ops -// Error on conflicting ops (footprint model violation) -``` - -**Conflict handling**: If two rewrites wrote **different values** to the same key, that's a bug in the footprint model. Echo errors loudly. - -#### 5.4.5 Finalize - -Apply the merged delta to produce the new state: - -```rust -for op in merged_ops { - match op { - WarpOp::UpsertNode { id, record } => state.insert_node(id, record), - WarpOp::UpsertEdge { from, edge } => state.insert_edge(from, edge), - WarpOp::DeleteNode { id } => state.delete_node_isolated(id)?, // rejects if edges exist - WarpOp::SetAttachment { node, key, value } => state.set_attachment(node, key, value), - // ... - } -} -``` - -### 5.5 Phase 4: Hash Computation - -#### State Root (BLAKE3) - -The state root is computed via **deterministic BFS** over reachable nodes: - -![Diagram 9](diagrams/tour-09.pdf) - -**Encoding** (architecture-independent): - -- All IDs: raw 32 bytes -- Counts: u64 little-endian -- Payloads: 1-byte tag + type_id[32] + u64 LE length + bytes - -#### Commit Hash (v2) - -```rust -commit_hash = BLAKE3( - version_tag[4] || // Protocol version - parents[] || // Parent commit hashes - state_root[32] || // Graph-only hash - patch_digest[32] || // Merged ops digest - policy_id[4] // Policy identifier -) -``` - -\begin{claudecommentary} -**Claude's Take**: The commit hash includes a `policy_id`. 
This is subtle but important: two engines with different policies could produce the same state but different commit hashes. Why? Because the _process_ matters, not just the result. - -Imagine one policy allows rules to run in parallel; another requires sequential execution. They might produce identical graphs, but the commit hashes differ because the policies differ. This prevents accidentally mixing outputs from incompatible engine configurations. - -It's defensive design: "Trust, but verify—and make verification easy." -\end{claudecommentary} - -### 5.6 Phase 5: Record to History - -```rust -history.push(( - Snapshot { hash: commit_hash, state_root, parents, ... }, - TickReceipt { applied, rejected, ... }, - WarpTickPatchV1 { ops, in_slots, out_slots, patch_digest, ... } -)); -``` - -The patch is **prescriptive**: it can be replayed without re-matching to reproduce the exact same state. - ---- - -## 6. Parallel Execution: BOAW (Bag of Autonomous Workers) - -### 6.1 What is BOAW? - -BOAW stands for **Bag of Autonomous Workers**. It's Echo's parallel execution architecture that enables: - -- **Massive parallelism** without locks -- **Deterministic convergence** across platforms -- **Worker-count invariance** (same result with 1 or 32 workers) - -### 6.2 The Key Insight - -```text -┌──────────────────────────────────────────────────────────────────┐ -│ THE BOAW INSIGHT │ -├──────────────────────────────────────────────────────────────────┤ -│ │ -│ Traditional parallelism: │ -│ "Make execution order deterministic" → Complex, slow │ -│ │ -│ BOAW parallelism: │ -│ "Let execution order vary, make MERGE deterministic" → Fast! │ -│ │ -│ Workers race freely → Each produces a TickDelta │ -│ Merge step sorts all deltas → Canonical output │ -│ │ -└──────────────────────────────────────────────────────────────────┘ -``` - -\begin{claudecommentary} -**Claude's Take**: This is the insight that makes Echo work.
Most parallel systems try to _control_ the execution order—barriers, locks, atomic sequences. BOAW says: "Forget it. Let chaos reign during execution. We'll sort it out in the merge." - -It's like MapReduce: the map phase runs in any order; the reduce phase (merge) produces the canonical result. But unlike MapReduce, Echo operates on a graph with complex dependencies. The footprint model makes this possible: by declaring what you'll touch before executing, you enable the merge to validate that no conflicts occurred. - -If this sounds too good to be true, it mostly is—_if_ you get the footprints wrong. The system is only as deterministic as your footprint declarations. Lie to the footprint system, and you'll get non-determinism. -\end{claudecommentary} - -### 6.3 Execution Strategies - -#### Phase 6A: Stride Partitioning (Legacy) - -```text -Worker 0: items[0], items[4], items[8], ... -Worker 1: items[1], items[5], items[9], ... -Worker 2: items[2], items[6], items[10], ... -Worker 3: items[3], items[7], items[11], ... -``` - -**Problem**: Poor cache locality—related items scatter across workers. - -#### Phase 6B: Virtual Shards (Current Default) - -```rust -const NUM_SHARDS: usize = 256; // Protocol constant (frozen) - -fn shard_of(node_id: &NodeId) -> usize { - let bytes = node_id.as_bytes(); - let val = u64::from_le_bytes(bytes[0..8].try_into().unwrap()); - (val & 255) as usize // Fast modulo via bitmask -} -``` - -![Diagram 10](diagrams/tour-10.pdf) - -**Benefits**: - -- Items with same `shard_of(scope)` processed together → better cache hits -- Workers dynamically claim shards via atomic counter → load balancing -- Determinism enforced by merge, not execution order - -\begin{claudecommentary} -**Claude's Take**: 256 shards is an interesting choice. It's small enough that the atomic counter for work-stealing doesn't become a bottleneck, but large enough to distribute work across many cores. - -The `& 255` bitmask is a micro-optimization I appreciate.
It's equivalent to `% 256` but faster because 256 is a power of 2. This is the kind of low-level detail that adds up when you're processing millions of items per second. - -One thing I wondered: what if your NodeIds are clustered? Like, if all recent nodes have IDs starting with `0x00...`, they'd all end up in shard 0. I suspect content-addressed IDs (via BLAKE3) distribute uniformly, so this isn't a problem in practice. But for user-assigned IDs, you'd need to be careful. -\end{claudecommentary} - -### 6.4 The Execution Loop - -```rust -pub fn execute_parallel_sharded( - view: GraphView<'_>, - items: &[ExecItem], - workers: usize, -) -> Vec<TickDelta> { - // Partition items into 256 shards - let shards = partition_into_shards(items); - - // Atomic counter for work-stealing - let next_shard = AtomicUsize::new(0); - - std::thread::scope(|s| { - let handles: Vec<_> = (0..workers).map(|_| { - s.spawn(|| { - let mut delta = TickDelta::new(); - loop { - // Claim next shard atomically - let shard_id = next_shard.fetch_add(1, Ordering::Relaxed); - if shard_id >= NUM_SHARDS { break; } - - // Execute all items in this shard - for item in &shards[shard_id].items { - (item.exec)(view.clone(), &item.scope, &mut delta); - } - } - delta - }) - }).collect(); - - handles.into_iter().map(|h| h.join().unwrap()).collect() - }) -} -``` - -### 6.5 The Canonical Merge - -```rust -pub fn merge_deltas(deltas: Vec<TickDelta>) -> Result<Vec<WarpOp>, MergeConflict> { - // 1. Flatten all ops from all workers - let mut all_ops: Vec<(WarpOpKey, OpOrigin, WarpOp)> = deltas - .into_iter() - .flat_map(|d| d.ops_with_origins()) - .collect(); - - // 2. Sort canonically by (key, origin) - all_ops.sort_by_key(|(key, origin, _)| (key.clone(), origin.clone())); - - // 3.
Deduplicate and detect conflicts - let mut result = Vec::new(); - for group in all_ops.chunk_by(|(k1, _, _), (k2, _, _)| k1 == k2) { - let first = &group[0].2; - if group.iter().all(|(_, _, op)| op == first) { - result.push(first.clone()); // All identical: keep one - } else { - return Err(MergeConflict { writers: group.iter().map(|(_, o, _)| o).collect() }); - } - } - - Ok(result) -} -``` - -**Key guarantee**: Conflicts are bugs. If footprints were correct, no two rewrites should write different values to the same key. - ---- - -## 7. Storage & Hashing: Content-Addressed Truth - -### 7.1 The GraphStore - -Located in `crates/warp-core/src/graph.rs`: - -```rust -pub struct GraphStore { - pub(crate) warp_id: WarpId, - pub(crate) nodes: BTreeMap<NodeId, NodeRecord>, - pub(crate) edges_from: BTreeMap<NodeId, Vec<EdgeRecord>>, - pub(crate) edges_to: BTreeMap<NodeId, Vec<EdgeId>>, // Reverse index - pub(crate) node_attachments: BTreeMap<AttachmentKey, AttachmentValue>, - pub(crate) edge_attachments: BTreeMap<AttachmentKey, AttachmentValue>, - pub(crate) edge_index: BTreeMap<EdgeId, NodeId>, // Edge → Source - pub(crate) edge_to_index: BTreeMap<EdgeId, NodeId>, // Edge → Target -} -``` - -**Why BTreeMap everywhere?** - -- Deterministic iteration order (sorted by key) -- Enables canonical hashing -- No HashMap ordering surprises - -\begin{claudecommentary} -**Claude's Take**: Seven BTreeMaps! This is the price of determinism. Each of these maps is sorted, which means: - -1. Insertions are O(log n) instead of O(1) amortized for HashMap -2. Iteration is always in key order, so hashing is deterministic -3. Memory overhead is slightly higher due to tree structure - -Is it worth it? For Echo's use case, absolutely. The alternative—using HashMap and then sorting before each hash—would be slower and more error-prone. By paying the cost upfront (O(log n) writes), you get guaranteed correctness. - -The multiple indices (`edges_from`, `edges_to`, `edge_index`, `edge_to_index`) look redundant, but they enable O(log n) lookups from any direction. Want all edges _from_ a node? `edges_from[node_id]`. Want all edges _to_ a node?
`edges_to[node_id]`. This is a classic space-time tradeoff. -\end{claudecommentary} - -### 7.2 WSC: Write-Streaming Columnar Format - -For efficient snapshots, Echo uses WSC—a zero-copy, mmap-friendly format: - -```text -┌─────────────────────────────────────────────────────────────────┐ -│ WSC SNAPSHOT FILE │ -├─────────────────────────────────────────────────────────────────┤ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ NODES TABLE (sorted by NodeId) │ │ -│ │ ┌──────────┬───────────┬──────────┐ │ │ -│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │ -│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │ -│ │ └──────────┴───────────┴──────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ EDGES TABLE (sorted by EdgeId) │ │ -│ │ ┌───────────┬───────────┬───────────┐ │ │ -│ │ │ EdgeRow │ EdgeRow │ EdgeRow │ ... │ │ -│ │ │ 128 bytes │ 128 bytes │ 128 bytes │ │ │ -│ │ └───────────┴───────────┴───────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ OUT_INDEX (per-node → range into out_edges) │ │ -│ │ ┌────────────────┬────────────────┐ │ │ -│ │ │ Range (16 B) │ Range (16 B) │ ... 
│ │ -│ │ └────────────────┴────────────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ BLOB ARENA (variable-length data) │ │ -│ │ Referenced by (offset, length) tuples │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -└─────────────────────────────────────────────────────────────────┘ -``` - -**Row types** (8-byte aligned): - -- `NodeRow`: 64 bytes (node_id[32] + node_type[32]) -- `EdgeRow`: 128 bytes (edge_id[32] + from[32] + to[32] + type[32]) -- `Range`: 16 bytes (start_le[8] + len_le[8]) - -\begin{claudecommentary} -**Claude's Take**: WSC is gloriously simple. Fixed-size rows, sorted tables, binary search for lookups. No compression, no Parquet-style encoding tricks—just flat bytes on disk that you can mmap and use directly. - -The trade-off is size: WSC files are larger than compressed formats. But the benefit is speed: you can find node #1000 by seeking to `offset + 1000 * 64` and reading 64 bytes. No decompression, no index lookups, no memory allocation. - -For Echo's use case (local caching, fast restarts), this makes sense. You're not storing petabytes; you're storing the state of a single simulation that fits in RAM. Optimize for access latency, not storage cost. -\end{claudecommentary} - -### 7.3 Copy-on-Write Semantics - -**Rule**: During a tick, nothing shared is mutated. - -![Diagram 11](diagrams/tour-11.pdf) - -**Structural sharing**: Only changed segments are newly written. Unchanged data is referenced by hash. 
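Because rows are fixed-size and sorted, a point lookup is just a binary search over byte offsets. A minimal sketch, assuming the simplified 64-byte NodeRow layout above (this is my illustration, not the real WSC reader):

```rust
// Each row: 32-byte node_id followed by 32-byte node_type (64 bytes total).
const ROW: usize = 64;

// Binary-search a sorted, flat table of rows for a node id.
// `table` is the kind of byte slice you'd get from mmap'ing the NODES section.
fn find_node(table: &[u8], id: &[u8; 32]) -> Option<usize> {
    let (mut lo, mut hi) = (0, table.len() / ROW);
    while lo < hi {
        let mid = (lo + hi) / 2;
        let key = &table[mid * ROW..mid * ROW + 32]; // id prefix of the row
        match key.cmp(&id[..]) {
            std::cmp::Ordering::Less => lo = mid + 1,
            std::cmp::Ordering::Greater => hi = mid,
            std::cmp::Ordering::Equal => return Some(mid),
        }
    }
    None
}

fn main() {
    // Toy table of three rows whose ids differ in the first byte: 0x01, 0x02, 0x03.
    let mut table = vec![0u8; 3 * ROW];
    for (i, b) in [1u8, 2, 3].iter().enumerate() {
        table[i * ROW] = *b;
    }
    let mut want = [0u8; 32];
    want[0] = 2;
    assert_eq!(find_node(&table, &want), Some(1)); // hit: second row
    want[0] = 9;
    assert_eq!(find_node(&table, &want), None); // miss
}
```

No deserialization, no allocation: the search touches at most `log2(n)` cache lines before landing on the row.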
- -### 7.4 Hash Algorithm Details - -**State Root** (BLAKE3, v2): - -```text -state_root = BLAKE3( - root_id[32] || - instance_count[8, LE] || - for each instance in BTreeMap order: - warp_id_len[8, LE] || - warp_id_bytes || - node_count[8, LE] || - for each node in ascending NodeId order: - node_id[32] || - node_type[32] || - for each outbound edge in ascending EdgeId order: - edge_id[32] || - edge_type[32] || - to_node[32] || - for each attachment: - key_len[8, LE] || - key_bytes || - type_id[32] || - value_len[8, LE] || - value_bytes -) -``` - -\begin{claudecommentary} -**Claude's Take**: The hashing is _exhaustive_. Every node, every edge, every attachment, every byte—all streamed through BLAKE3 in a defined order. There's no "we'll just hash the IDs and trust the content"—everything participates. - -This is expensive! But it's the foundation of Echo's trust model. If two engines produce the same state root, they have the same state. Period. No exceptions, no edge cases. - -The `version_tag` in the commit hash is a nice touch. If Echo ever changes its hashing algorithm (say, BLAKE3 v2 to v3), old and new hashes won't collide. Protocol evolution is built in. -\end{claudecommentary} - ---- - -## 8. Worked Example: Tracing a Link Click - -Let's trace what happens when a user clicks a link in a hypothetical WARP-based navigation system. - -### 8.1 The Scenario - -Imagine a simple site with two pages: - -![Diagram 12](diagrams/tour-12.pdf) - -**User clicks the link**: This should navigate from Home to About. - -\begin{claudecommentary} -**Claude's Take**: This example is deceptively simple—two pages, one link—but it exercises the entire engine: intent ingestion, rule matching, footprint validation, execution, merge, hashing, and emission. - -I'll add my notes at the interesting points. If you're skimming, watch for where the determinism guarantees kick in. 
-\end{claudecommentary} - -### 8.2 Step 1: Intent Ingestion - -The click is captured by the viewer and converted to an **intent**: - -```rust -// In the viewer: -let intent = NavigateIntent { - target_page: about_node_id, - timestamp: deterministic_tick, -}; -let intent_bytes = canonical_encode(&intent); - -// Send to engine: -engine.ingest_intent(intent_bytes); -``` - -**What happens inside `ingest_intent`**: - -![Diagram 13](diagrams/tour-13.pdf) - -### 8.3 Step 2: Begin Transaction - -```rust -let tx = engine.begin(); // tx = TxId(1) -``` - -### 8.4 Step 3: Dispatch Intent - -```rust -engine.dispatch_next_intent(tx); -``` - -**What happens**: - -![Diagram 14](diagrams/tour-14.pdf) - -### 8.5 Step 4: Rule Matching - -The `cmd/navigate` rule matches: - -```rust -// Matcher: Does this intent want navigation? -fn navigate_matcher(view: GraphView, scope: &NodeId) -> bool { - let intent = view.node(scope)?; - intent.type_id == "navigate_intent" -} - -// Footprint: What will we read/write? -fn navigate_footprint(view: GraphView, scope: &NodeId) -> Footprint { - Footprint { - n_read: btreeset![scope.clone(), viewer_node], - n_write: btreeset![], - a_read: btreeset![], - a_write: btreeset![AttachmentKey::new(viewer_node, "current")], - ..default() - } -} -``` - -\begin{claudecommentary} -**Claude's Take**: Notice the footprint. We declare that we'll: - -- **Read** two nodes: the intent (to get the target) and the viewer (to validate the current page) -- **Write** one attachment: the viewer's `current` attachment - -We're _not_ reading any attachments (we just need the node records), and we're _not_ writing any nodes (the viewer node already exists). This precision matters—if another rule also wants to write `viewer.current`, there's a conflict. 
-\end{claudecommentary} - -The rule is enqueued: - -```text -┌─────────────────────────────────────────────────────────────┐ -│ PendingRewrite │ -├─────────────────────────────────────────────────────────────┤ -│ rule_id: "cmd/navigate" │ -│ scope: 0xABCD... (intent node) │ -│ footprint: { n_read: [intent, viewer], a_write: [current] } │ -│ tx: TxId(1) │ -└─────────────────────────────────────────────────────────────┘ -``` - -### 8.6 Step 5: Commit - -```rust -let (snapshot, receipt, patch) = engine.commit_with_receipt(tx); -``` - -#### 5a. Drain - -```rust -let rewrites = scheduler.drain_for_tx(tx); -// Result: [PendingRewrite { rule: "cmd/navigate", scope: intent_node }] -``` - -#### 5b. Reserve - -```rust -// Check footprint independence -// No conflicts (only one rewrite) -// Accepted! -``` - -#### 5c. Execute - -```rust -fn navigate_executor(view: GraphView, scope: &NodeId, delta: &mut TickDelta) { - // Read the intent to find target - let intent = view.node(scope).unwrap(); - let target_page = intent.attachment("target").unwrap(); - - // Read current viewer state (for logging/validation) - let viewer = view.node(&VIEWER_NODE).unwrap(); - let old_page = viewer.attachment("current"); - - // Emit the change: update viewer's current page - delta.emit(WarpOp::SetAttachment { - node: VIEWER_NODE, - key: "current".into(), - value: AttachmentValue::Atom(AtomPayload { - type_id: "node_ref".into(), - bytes: target_page.to_bytes(), - }), - }); -} -``` - -**TickDelta now contains**: - -```rust -[ - (WarpOp::SetAttachment { - node: viewer_node, - key: "current", - value: about_node_id - }, OpOrigin { intent_id: 1, rule_id: 42, match_ix: 0, op_ix: 0 }) -] -``` - -#### 5d. Merge - -Only one delta, trivial merge: - -```rust -let merged_ops = vec![ - WarpOp::SetAttachment { node: viewer_node, key: "current", value: about_node_id } -]; -``` - -#### 5e. 
Finalize - -Apply to state: - -```rust -state.set_attachment(viewer_node, "current", about_node_id); -``` - -### 8.7 Step 6: Hash Computation - -```rust -// State root: BLAKE3 of reachable graph -let state_root = compute_state_root(&state); // 0x7890... - -// Patch digest: BLAKE3 of merged ops -let patch_digest = compute_patch_digest(&merged_ops); // 0xDEF0... - -// Commit hash -let commit_hash = BLAKE3( - VERSION_TAG || - [parent_hash] || - state_root || - patch_digest || - policy_id -); // 0x1234... -``` - -### 8.8 Step 7: Emit to Tools - -The engine emits a `WarpDiff` to the session hub: - -```rust -WarpDiff { - from_epoch: 0, - to_epoch: 1, - ops: vec![ - WarpOp::SetAttachment { - node: viewer_node, - key: "current", - value: about_node_id - } - ], - state_hash: 0x7890..., -} -``` - -### 8.9 Step 8: Viewer Applies Diff - -The viewer receives the diff and updates its rendering: - -```rust -for op in diff.ops { - match op { - WarpOp::SetAttachment { node, key, value } => { - if node == viewer_node && key == "current" { - // Update the displayed page - self.navigate_to(value.as_node_ref()); - } - } - _ => { /* other ops */ } - } -} -``` - -**Result**: The user sees the About page. - -\begin{claudecommentary} -**Claude's Take**: That's a lot of machinery for one link click! But here's what we get for free: - -1. **Replay**: Save the intent bytes, replay them later, get the exact same state hash -2. **Verification**: Any other engine given the same inputs produces the same commit hash -3. **Undo**: The previous snapshot is still in history; restoring is a pointer swap -4. **Branching**: Fork the state, try a different navigation, compare outcomes - -This is the payoff for all the ceremony. A traditional engine would do `viewer.current = about_page` and call it done. Echo builds a _provable audit trail_ around every state change. -\end{claudecommentary} - ---- - -## 9. 
The Viewer: Observing Echo - -The `warp-viewer` crate provides real-time visualization of WARP graphs. It's built on WGPU for cross-platform GPU rendering. - -### 9.1 Architecture - -![Diagram 15](diagrams/tour-15.pdf) - -### 9.2 Rendering Pipeline - -1. **Diff arrives** via session client -2. **State cache** updates local graph replica -3. **Layout engine** computes node positions (force-directed) -4. **Renderer** converts graph to GPU buffers -5. **Display** shows updated visualization - -\begin{claudecommentary} -**Claude's Take**: The viewer is _reactive_, not poll-based. It subscribes to diffs from the session hub and updates only when state changes. This means zero CPU usage when the graph is idle. - -The force-directed layout is a classic choice for graph visualization. It's not perfect—large graphs can take time to settle—but it's good enough for debugging and exploration. If you need a specific layout, you can inject position attachments and the viewer will respect them. -\end{claudecommentary} - ---- - -## 10. 
Glossary - -| Term | Definition | -| ------------------ | ------------------------------------------------------------------------- | -| **WARP** | Worldline Algebra for Recursive Provenance—Echo's core graph model | -| **Tick** | One complete cycle of the engine (begin → apply → commit → hash → record) | -| **Snapshot** | Immutable point-in-time capture of graph state | -| **Footprint** | Declaration of resources a rule will read/write | -| **BOAW** | Bag of Autonomous Workers—parallel execution model | -| **TickDelta** | Accumulated operations from rule execution | -| **State Root** | BLAKE3 hash of the entire graph | -| **Commit Hash** | BLAKE3 hash of (state root + patch + metadata) | -| **WarpInstance** | A graph-within-a-graph, enabling recursive composition | -| **WSC** | Write-Streaming Columnar—Echo's snapshot file format | -| **GraphView** | Read-only handle to graph state for rule executors | -| **PendingRewrite** | Queued rule application awaiting commit | - ---- - -\begin{claudecommentary} -**Final Thoughts from Your Tour Guide** - -Echo is not a simple system. It's a _principled_ system built on hard-won lessons about determinism, reproducibility, and trust. - -What I find most impressive isn't any single feature—it's the coherence. Every piece reinforces the others: - -- BTreeMaps enable deterministic hashing -- Footprints enable parallel execution -- Parallel execution requires immutable GraphView -- Immutable GraphView enables copy-on-write -- Copy-on-write enables cheap branching -- Cheap branching enables "what if?" queries - -Pull one thread and the whole tapestry unravels. This is integrated design, not a collection of independent features. - -Is Echo perfect? No. The footprint model requires discipline. The ceremony adds latency. The BTreeMaps trade speed for determinism. 
But for applications where _provability_ matters—games with replays, simulations with audits, collaborative tools with conflict resolution—Echo offers something rare: a foundation you can trust. - -Thanks for joining me on this tour. May your state roots always match. - -— Claude -\end{claudecommentary} diff --git a/docs/archive/study/what-makes-echo-tick-with-diagrams.pdf b/docs/archive/study/what-makes-echo-tick-with-diagrams.pdf deleted file mode 100644 index a2524efd..00000000 Binary files a/docs/archive/study/what-makes-echo-tick-with-diagrams.pdf and /dev/null differ diff --git a/docs/archive/study/what-makes-echo-tick-with-diagrams.tex b/docs/archive/study/what-makes-echo-tick-with-diagrams.tex deleted file mode 100644 index 95e2a209..00000000 --- a/docs/archive/study/what-makes-echo-tick-with-diagrams.tex +++ /dev/null @@ -1,1515 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -% Options for packages loaded elsewhere -\PassOptionsToPackage{unicode}{hyperref} -\PassOptionsToPackage{hyphens}{url} -\documentclass[ -]{book} -\usepackage[letterpaper, margin=1in]{geometry} -\usepackage{xcolor} -\usepackage{amsmath,amssymb} -\setcounter{secnumdepth}{-\maxdimen} % remove section numbering -\usepackage{iftex} -\ifPDFTeX - \usepackage[T1]{fontenc} - \usepackage[utf8]{inputenc} - \usepackage{textcomp} % provide euro and other symbols -\else % if luatex or xetex - \usepackage{unicode-math} % this also loads fontspec - \defaultfontfeatures{Scale=MatchLowercase} - \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} -\fi -\usepackage{lmodern} -\ifPDFTeX\else - % xetex/luatex font selection -\fi -% Use upquote if available, for straight quotes in verbatim environments -\IfFileExists{upquote.sty}{\usepackage{upquote}}{} -\IfFileExists{microtype.sty}{% use microtype if available - \usepackage[]{microtype} - \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts -}{} -\makeatletter 
-\@ifundefined{KOMAClassName}{% if non-KOMA class - \IfFileExists{parskip.sty}{% - \usepackage{parskip} - }{% else - \setlength{\parindent}{0pt} - \setlength{\parskip}{6pt plus 2pt minus 1pt}} -}{% if KOMA class - \KOMAoptions{parskip=half}} -\makeatother -\usepackage{color} -\usepackage{fancyvrb} -\newcommand{\VerbBar}{|} -\newcommand{\VERB}{\Verb[commandchars=\\\{\}]} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} -% Add ',fontsize=\small' for more characters per line -\newenvironment{Shaded}{}{} -\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}} -\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}} -\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}} -\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}} -\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}} -\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}} -\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\ExtensionTok}[1]{#1} -\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}} -\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}} -\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\NormalTok}[1]{#1} 
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}} -\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}} -\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}} -\newcommand{\RegionMarkerTok}[1]{#1} -\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}} -\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}} -\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\usepackage{graphicx} -\usepackage[export]{adjustbox} -\usepackage{longtable,booktabs,array} -\newcounter{none} % for unnumbered tables -\usepackage{calc} % for calculating minipage widths -% Correct order of tables after \paragraph or \subparagraph -\usepackage{etoolbox} -\makeatletter -\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} -\makeatother -% Allow footnotes in longtable head/foot -\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} -\makesavenoteenv{longtable} -\setlength{\emergencystretch}{3em} % prevent overfull lines -\providecommand{\tightlist}{% - \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} -\usepackage{bookmark} -\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available -\urlstyle{same} -\hypersetup{ - hidelinks, - pdfcreator={LaTeX via pandoc}} - -\author{} -\date{} - -\begin{document} -\frontmatter - -\mainmatter -\chapter{What Makes Echo Tick?}\label{what-makes-echo-tick} - -\begin{quote} -A comprehensive technical guide to the Echo deterministic graph-rewrite -engine. - -\textbf{Target Audience}: Developers who want to understand Echo's -internals in exhaustive detail. - -\textbf{Reading Time}: \textasciitilde45 minutes for complete -understanding. 
-\end{quote}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Table of Contents}\label{table-of-contents}
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
-  \hyperref[philosophy-why-echo-exists]{Philosophy: Why Echo Exists}
-\item
-  \hyperref[the-big-picture-architecture-overview]{The Big Picture:
-  Architecture Overview}
-\item
-  \hyperref[core-concepts-the-warp-graph]{Core Concepts: The WARP
-  Graph}
-\item
-  \hyperref[the-engine-heart-of-echo]{The Engine: Heart of Echo}
-\item
-  \hyperref[the-tick-pipeline-where-everything-happens]{The Tick
-  Pipeline: Where Everything Happens}
-\item
-  \hyperref[parallel-execution-boaw-bag-of-autonomous-workers]{Parallel
-  Execution: BOAW (Bag of Autonomous Workers)}
-\item
-  \hyperref[storage--hashing-content-addressed-truth]{Storage \&
-  Hashing: Content-Addressed Truth}
-\item
-  \hyperref[worked-example-tracing-a-link-click]{Worked Example:
-  Tracing a Link Click}
-\item
-  \hyperref[the-viewer-observing-echo]{The Viewer: Observing Echo}
-\item
-  \hyperref[glossary]{Glossary}
-\end{enumerate}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{1. Philosophy: Why Echo
-Exists}\label{philosophy-why-echo-exists}
-
-\subsection{1.1 The Problem}\label{the-problem}
-
-Traditional game engines and simulations treat state as \textbf{mutable
-objects}. This creates fundamental problems:
-
-\begin{itemize}
-\tightlist
-\item
-  \textbf{Replay is hard}: You can't just ``rewind'' because state
-  changes are scattered and untracked.
-\item
-  \textbf{Synchronization is fragile}: Two machines running the same
-  logic may diverge due to floating-point differences, thread timing, or
-  iteration order.
-\item
-  \textbf{Debugging is a nightmare}: ``It worked on my machine'' is the
-  symptom of non-determinism.
-\item
-  \textbf{Branching is impossible}: You can't easily ask ``what if?''
-  without copying everything.
-\end{itemize} - -\subsection{1.2 Echo's Answer}\label{echos-answer} - -Echo treats \textbf{state as a typed graph} and \textbf{all changes as -rewrites}. Each ``tick'' of the engine: - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - Proposes a set of rewrites -\item - Executes them in \textbf{deterministic order} -\item - Emits \textbf{cryptographic hashes} of the resulting state -\end{enumerate} - -This means: - \textbf{Same inputs → Same outputs} (always, on any -machine) - \textbf{State is verifiable} (hashes prove correctness) - -\textbf{Replay is trivial} (patches are prescriptive) - -\textbf{Branching is free} (copy-on-write snapshots) - -\subsection{1.3 Core Design Principles}\label{core-design-principles} - -\begin{verbatim} -┌─────────────────────────────────────────────────────────────────┐ -│ ECHO'S THREE PILLARS │ -├─────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ -│ │ DETERMINISM │ │ PROVENANCE │ │ TOOLING │ │ -│ │ FIRST │ │ YOU CAN │ │ AS FIRST │ │ -│ │ │ │ TRUST │ │ CLASS │ │ -│ ├─────────────────┤ ├─────────────────┤ ├─────────────────┤ │ -│ │ Same inputs │ │ Snapshots are │ │ Graphs stream │ │ -│ │ always produce │ │ content- │ │ over canonical │ │ -│ │ same hashes │ │ addressed │ │ wire protocol │ │ -│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{2. 
The Big Picture: Architecture -Overview}\label{the-big-picture-architecture-overview} - -\subsection{2.1 System Layers}\label{system-layers} - -Echo is organized into distinct layers, each with a specific -responsibility: - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-01.pdf} -\end{center} - -\subsection{2.2 Crate Map}\label{crate-map} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}ll@{}} -\toprule\noalign{} -Crate & Purpose \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{warp-core} & The deterministic rewrite engine (the ``brain'') \\ -\texttt{echo-graph} & Renderable graph types + diff operations \\ -\texttt{echo-session-proto} & Wire protocol (canonical CBOR framing) \\ -\texttt{echo-session-service} & Headless Unix-socket hub for tools \\ -\texttt{echo-session-client} & Client helpers for connecting to the -hub \\ -\texttt{warp-viewer} & Native WGPU viewer for visualizing graphs \\ -\end{longtable} -} - -\subsection{2.3 Data Flow Overview}\label{data-flow-overview} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-02.pdf} -\end{center} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{3. Core Concepts: The WARP -Graph}\label{core-concepts-the-warp-graph} - -\subsection{3.1 What is a WARP Graph?}\label{what-is-a-warp-graph} - -A WARP (\textbf{W}orldline \textbf{A}lgebra for \textbf{R}ecursive -\textbf{P}rovenance) graph is Echo's fundamental data structure. It's -not just a graph---it's a graph with \textbf{deterministic semantics}. 
- -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-03.pdf} -\end{center} - -\subsection{3.2 Two-Plane Architecture}\label{two-plane-architecture} - -Echo separates structure from data via the \textbf{Two-Plane Model} -(ADR-0001): - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3846}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3462}}@{}} -\toprule\noalign{} -\begin{minipage}[b]{\linewidth}\raggedright -Plane -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Contains -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Purpose -\end{minipage} \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\textbf{Skeleton} & Nodes + Edges (structure) & Fast traversal, -deterministic hashing \\ -\textbf{Attachment (α)} & Typed payloads & Domain-specific data \\ -\end{longtable} -} - -\textbf{Why separate them?} - -\begin{verbatim} -┌────────────────────────────────────────────────────────────────────┐ -│ SKELETON PLANE (Structure) │ -│ │ -│ ┌─────┐ edge:link ┌─────┐ │ -│ │ N1 │─────────────────▶│ N2 │ │ -│ └─────┘ └─────┘ │ -│ │ │ │ -│ │ edge:child │ edge:ref │ -│ ▼ ▼ │ -│ ┌─────┐◀─────────────────────┘ │ -│ │ N3 │ │ -│ └─────┘ │ -│ │ -├────────────────────────────────────────────────────────────────────┤ -│ ATTACHMENT PLANE (Payloads) │ -│ │ -│ N1.α["title"] = Atom { type: "string", bytes: "Home" } │ -│ N2.α["url"] = Atom { type: "string", bytes: "/page/b" } │ -│ N3.α["body"] = Atom { type: "html", bytes: "

...

" } │ -│ │ -└────────────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\textbf{Key insight}: Skeleton rewrites \textbf{never decode -attachments}. This keeps the hot path fast and deterministic. - -\subsection{3.3 Node and Edge Identity}\label{node-and-edge-identity} - -Every node and edge has a \textbf{32-byte identifier}: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ NodeId([}\DataTypeTok{u8}\OperatorTok{;} \DecValTok{32}\NormalTok{])}\OperatorTok{;} \CommentTok{// Content{-}addressed or assigned} -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ EdgeId([}\DataTypeTok{u8}\OperatorTok{;} \DecValTok{32}\NormalTok{])}\OperatorTok{;} \CommentTok{// Unique edge identifier} -\end{Highlighting} -\end{Shaded} - -These IDs are: - \textbf{Deterministic}: Same content → same ID (when -content-addressed) - \textbf{Sortable}: Lexicographic ordering enables -deterministic iteration - \textbf{Hashable}: Participate in state root -computation - -\subsection{3.4 WarpInstances: Graphs Within -Graphs}\label{warpinstances-graphs-within-graphs} - -Echo supports \textbf{descended attachments}---embedding entire graphs -within attachment slots: - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-04.pdf} -\end{center} - -This enables ``WARPs all the way down''---recursive composition while -maintaining determinism. - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{4. The Engine: Heart of Echo}\label{the-engine-heart-of-echo} - -\subsection{4.1 The Engine Struct}\label{the-engine-struct} - -The \texttt{Engine} is Echo's central orchestrator. 
Located in -\texttt{crates/warp-core/src/engine\_impl.rs}: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ Engine }\OperatorTok{\{} -\NormalTok{ state}\OperatorTok{:}\NormalTok{ WarpState}\OperatorTok{,} \CommentTok{// Multi{-}instance graph state} -\NormalTok{ rules}\OperatorTok{:}\NormalTok{ HashMap}\OperatorTok{\textless{}}\NormalTok{RuleId}\OperatorTok{,}\NormalTok{ RewriteRule}\OperatorTok{\textgreater{},} \CommentTok{// Registered rewrite rules} -\NormalTok{ scheduler}\OperatorTok{:}\NormalTok{ DeterministicScheduler}\OperatorTok{,} \CommentTok{// Deterministic ordering} -\NormalTok{ bus}\OperatorTok{:}\NormalTok{ MaterializationBus}\OperatorTok{,} \CommentTok{// Output channels} -\NormalTok{ history}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(Snapshot}\OperatorTok{,}\NormalTok{ TickReceipt}\OperatorTok{,}\NormalTok{ WarpTickPatchV1)}\OperatorTok{\textgreater{},} -\NormalTok{ tx\_counter}\OperatorTok{:} \DataTypeTok{u64}\OperatorTok{,} \CommentTok{// Transaction counter} -\NormalTok{ live\_txs}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{TxId}\OperatorTok{\textgreater{},} \CommentTok{// Active transactions} - \CommentTok{// ... 
more fields} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{4.2 Construction}\label{construction} - -The engine is built via the \texttt{EngineBuilder}: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ engine }\OperatorTok{=} \PreprocessorTok{EngineBuilder::}\NormalTok{new(store}\OperatorTok{,}\NormalTok{ root\_node\_id)} - \OperatorTok{.}\NormalTok{with\_policy\_id(}\DecValTok{1}\NormalTok{)} - \OperatorTok{.}\NormalTok{with\_telemetry(telemetry)} - \OperatorTok{.}\NormalTok{build()}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens during construction:} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-05.pdf} -\end{center} - -\subsection{4.3 Rewrite Rules}\label{rewrite-rules} - -Rules are the atoms of change in Echo. Each rule has three functions: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ RewriteRule }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ name}\OperatorTok{:} \DataTypeTok{String}\OperatorTok{,} - \KeywordTok{pub}\NormalTok{ matcher}\OperatorTok{:}\NormalTok{ MatchFn}\OperatorTok{,} \CommentTok{// Does this rule apply?} - \KeywordTok{pub}\NormalTok{ executor}\OperatorTok{:}\NormalTok{ ExecuteFn}\OperatorTok{,} \CommentTok{// What changes to make} - \KeywordTok{pub}\NormalTok{ footprint}\OperatorTok{:}\NormalTok{ FootprintFn}\OperatorTok{,} \CommentTok{// What resources are touched} - \KeywordTok{pub}\NormalTok{ policy}\OperatorTok{:}\NormalTok{ ConflictPolicy}\OperatorTok{,} \CommentTok{// What to do on conflict} -\OperatorTok{\}} - -\CommentTok{// Function signatures (Phase 5 BOAW model):} -\KeywordTok{type}\NormalTok{ MatchFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool}\OperatorTok{;} -\KeywordTok{type}\NormalTok{ ExecuteFn }\OperatorTok{=} 
\KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ TickDelta)}\OperatorTok{;} -\KeywordTok{type}\NormalTok{ FootprintFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}}\NormalTok{ Footprint}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{Critical constraint}: Executors receive a \textbf{read-only} -\texttt{GraphView} and emit changes to a \texttt{TickDelta}. They -\textbf{never} mutate the graph directly. - -\subsection{4.4 GraphView: Read-Only -Access}\label{graphview-read-only-access} - -The \texttt{GraphView} enforces BOAW's immutability contract: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}} \OperatorTok{\{} -\NormalTok{ store}\OperatorTok{:} \OperatorTok{\&}\OtherTok{\textquotesingle{}a}\NormalTok{ GraphStore}\OperatorTok{,} -\NormalTok{ warp\_id}\OperatorTok{:}\NormalTok{ WarpId}\OperatorTok{,} -\OperatorTok{\}} - -\KeywordTok{impl}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ node(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Option}\OperatorTok{\textless{}\&}\NormalTok{NodeRecord}\OperatorTok{\textgreater{};} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ edges\_from(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \KeywordTok{impl} \BuiltInTok{Iterator}\OperatorTok{\textless{}}\NormalTok{Item }\OperatorTok{=} 
\OperatorTok{\&}\NormalTok{EdgeRecord}\OperatorTok{\textgreater{};} - \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ node\_attachment(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Option}\OperatorTok{\textless{}\&}\NormalTok{AttachmentValue}\OperatorTok{\textgreater{};} - \CommentTok{// ... read{-}only methods only} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{No \texttt{DerefMut}, no -\texttt{AsRef\textless{}GraphStore\textgreater{}}, no interior -mutability.} This is enforced at the type level. - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{5. The Tick Pipeline: Where Everything -Happens}\label{the-tick-pipeline-where-everything-happens} - -\subsection{5.1 Overview}\label{overview} - -A ``tick'' is one complete cycle of the engine. It has five phases: - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-06.pdf} -\end{center} - -\subsection{5.2 Phase 1: Begin -Transaction}\label{phase-1-begin-transaction} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ tx }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{begin()}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens:} 1. Increment \texttt{tx\_counter} (wrapping to -avoid 0) 2. Add \texttt{TxId} to \texttt{live\_txs} set 3. 
Return opaque -transaction identifier - -\begin{verbatim} -┌─────────────────────────────────────────────────┐ -│ engine.begin() │ -├─────────────────────────────────────────────────┤ -│ tx_counter: 0 → 1 │ -│ live_txs: {} → {TxId(1)} │ -│ returns: TxId(1) │ -└─────────────────────────────────────────────────┘ -\end{verbatim} - -\subsection{5.3 Phase 2: Apply Rules}\label{phase-2-apply-rules} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{engine}\OperatorTok{.}\NormalTok{apply(tx}\OperatorTok{,} \StringTok{"rule\_name"}\OperatorTok{,} \OperatorTok{\&}\NormalTok{scope\_node\_id)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens:} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-07.pdf} -\end{center} - -\textbf{The Footprint}: A declaration of what resources the rule will -read and write: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ Footprint }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ n\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Nodes to read} - \KeywordTok{pub}\NormalTok{ n\_write}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Nodes to write} - \KeywordTok{pub}\NormalTok{ e\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{},} \CommentTok{// Edges to read} - \KeywordTok{pub}\NormalTok{ e\_write}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{},} \CommentTok{// Edges to write} - \KeywordTok{pub}\NormalTok{ a\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{AttachmentKey}\OperatorTok{\textgreater{},} \CommentTok{// Attachments to read} - \KeywordTok{pub}\NormalTok{ a\_write}\OperatorTok{:}\NormalTok{ 
BTreeSet}\OperatorTok{\textless{}}\NormalTok{AttachmentKey}\OperatorTok{\textgreater{},} \CommentTok{// Attachments to write} - \CommentTok{// ... ports, factor\_mask} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Runtime enforcement.} As of Phase~6B, footprint declarations are -enforced at runtime by \texttt{FootprintGuard} when -\texttt{footprint\_enforce\_release} is enabled or in debug builds; the -\texttt{unsafe\_graph} escape hatch disables these checks. The guard catches -the following violations: - -\begin{itemize} -\item Undeclared reads (node, edge, or attachment access not listed in the footprint) -\item Undeclared writes (ops emitted for resources not in \texttt{n\_write} / \texttt{e\_write} / \texttt{a\_write}) -\item Cross-warp emissions (ops targeting a \texttt{WarpId} other than the executing warp) -\item Instance ops blocked by \texttt{ExecItemKind} (not footprint coverage) -\item Adjacency violations (edge ops whose \texttt{from} node is absent from \texttt{n\_write}) -\end{itemize} - -\textbf{Scheduler deduplication}: If the same -\texttt{(scope\_hash,\ rule\_id)} is applied multiple times, -\textbf{last wins}. This enables idempotent retry semantics. - -\subsection{5.4 Phase 3: Commit (The Heart of -Determinism)}\label{phase-3-commit-the-heart-of-determinism} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ (snapshot}\OperatorTok{,}\NormalTok{ receipt}\OperatorTok{,}\NormalTok{ patch) }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{commit\_with\_receipt(tx)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -This is where Echo's magic happens. 
Let's break it down: - -\subsubsection{5.4.1 Drain}\label{drain} - -The scheduler drains all pending rewrites in \textbf{canonical order}: - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// RadixScheduler uses O(n) LSD radix sort} -\CommentTok{// 20 passes: 2 nonce + 2 rule\_id + 16 scope\_hash (16{-}bit digits)} -\KeywordTok{let}\NormalTok{ rewrites }\OperatorTok{=}\NormalTok{ scheduler}\OperatorTok{.}\NormalTok{drain\_for\_tx(tx)}\OperatorTok{;} \CommentTok{// Vec\textless{}PendingRewrite\textgreater{} in canonical order} -\end{Highlighting} -\end{Shaded} - -\textbf{Ordering key}: -\texttt{(scope\_hash{[}0..32{]},\ rule\_id,\ nonce)} - -This ensures the \textbf{same rewrites always execute in the same -order}, regardless of when they were applied. - -\subsubsection{5.4.2 Reserve (Independence -Check)}\label{reserve-independence-check} - -For each rewrite in canonical order: - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-08.pdf} -\end{center} - -\textbf{Conflict detection}: Uses -\texttt{GenSet\textless{}K\textgreater{}} for O(1) lookups: - Read-read -overlap: \textbf{allowed} - Write-write overlap: \textbf{conflict} - -Read-write overlap: \textbf{conflict} - -\subsubsection{5.4.3 Execute (Parallel, -Lockless)}\label{execute-parallel-lockless} - -Accepted rewrites execute against the \textbf{read-only snapshot}: - -\begin{Shaded} -\begin{Highlighting}[] -\ControlFlowTok{for}\NormalTok{ rewrite }\KeywordTok{in}\NormalTok{ accepted }\OperatorTok{\{} - \KeywordTok{let}\NormalTok{ rule }\OperatorTok{=} \OperatorTok{\&}\NormalTok{rules[rewrite}\OperatorTok{.}\NormalTok{rule\_id]}\OperatorTok{;} - \KeywordTok{let}\NormalTok{ view }\OperatorTok{=} \PreprocessorTok{GraphView::}\NormalTok{new(}\OperatorTok{\&}\NormalTok{state}\OperatorTok{,}\NormalTok{ rewrite}\OperatorTok{.}\NormalTok{warp\_id)}\OperatorTok{;} - - \CommentTok{// Executor reads from view, emits to delta} -\NormalTok{ 
(rule}\OperatorTok{.}\NormalTok{executor)(view}\OperatorTok{,} \OperatorTok{\&}\NormalTok{rewrite}\OperatorTok{.}\NormalTok{scope}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ delta)}\OperatorTok{;} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Critical}: \texttt{GraphView} is immutable. \texttt{TickDelta} -accumulates operations: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ TickDelta }\OperatorTok{\{} -\NormalTok{ ops}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(WarpOp}\OperatorTok{,}\NormalTok{ OpOrigin)}\OperatorTok{\textgreater{},} -\OperatorTok{\}} - -\CommentTok{// Operations emitted during execution:} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{UpsertNode }\OperatorTok{\{}\NormalTok{ id}\OperatorTok{,}\NormalTok{ record }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{UpsertEdge }\OperatorTok{\{}\NormalTok{ from}\OperatorTok{,}\NormalTok{ edge }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{DeleteNode }\OperatorTok{\{}\NormalTok{ id }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5.4.4 Merge (Canonical Sort)}\label{merge-canonical-sort} - -All operations are sorted into \textbf{canonical replay order}: - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// Sort by (WarpOpKey, OpOrigin)} -\NormalTok{ops}\OperatorTok{.}\NormalTok{sort\_by\_key(}\OperatorTok{|}\NormalTok{(op}\OperatorTok{,}\NormalTok{ origin)}\OperatorTok{|}\NormalTok{ 
(op}\OperatorTok{.}\NormalTok{sort\_key()}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{.}\NormalTok{clone()))}\OperatorTok{;} - -\CommentTok{// Deduplicate identical ops} -\CommentTok{// Error on conflicting ops (footprint model violation)} -\end{Highlighting} -\end{Shaded} - -\textbf{Conflict handling}: If two rewrites wrote \textbf{different -values} to the same key, that's a bug in the footprint model. Echo -errors loudly. - -\subsubsection{5.4.5 Finalize}\label{finalize} - -Apply the merged delta to produce the new state: - -\begin{Shaded} -\begin{Highlighting}[] -\ControlFlowTok{for}\NormalTok{ op }\KeywordTok{in}\NormalTok{ merged\_ops }\OperatorTok{\{} - \ControlFlowTok{match}\NormalTok{ op }\OperatorTok{\{} - \PreprocessorTok{WarpOp::}\NormalTok{UpsertNode }\OperatorTok{\{}\NormalTok{ id}\OperatorTok{,}\NormalTok{ record }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{insert\_node(id}\OperatorTok{,}\NormalTok{ record)}\OperatorTok{,} - \PreprocessorTok{WarpOp::}\NormalTok{UpsertEdge }\OperatorTok{\{}\NormalTok{ from}\OperatorTok{,}\NormalTok{ edge }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{insert\_edge(from}\OperatorTok{,}\NormalTok{ edge)}\OperatorTok{,} - \PreprocessorTok{WarpOp::}\NormalTok{DeleteNode }\OperatorTok{\{}\NormalTok{ id }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{delete\_node\_cascade(id)}\OperatorTok{,} - \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{set\_attachment(node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value)}\OperatorTok{,} - \CommentTok{// ...} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{5.5 Phase 4: Hash 
-Computation}\label{phase-4-hash-computation}
-
-\subsubsection{State Root (BLAKE3)}\label{state-root-blake3}
-
-The state root is computed via \textbf{deterministic BFS} over reachable
-nodes:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-09.pdf}
-\end{center}
-
-\textbf{Encoding} (architecture-independent):
-
-\begin{itemize}
-\tightlist
-\item
-  All IDs: raw 32 bytes
-\item
-  Counts: u64 little-endian
-\item
-  Payloads: 1-byte tag + type\_id{[}32{]} + u64 LE length + bytes
-\end{itemize}
-
-\subsubsection{Commit Hash (v2)}\label{commit-hash-v2}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{commit\_hash }\OperatorTok{=}\NormalTok{ BLAKE3(}
-\NormalTok{  version\_tag[}\DecValTok{4}\NormalTok{] }\OperatorTok{||} \CommentTok{// Protocol version}
-\NormalTok{  parents[] }\OperatorTok{||} \CommentTok{// Parent commit hashes}
-\NormalTok{  state\_root[}\DecValTok{32}\NormalTok{] }\OperatorTok{||} \CommentTok{// Graph{-}only hash}
-\NormalTok{  patch\_digest[}\DecValTok{32}\NormalTok{] }\OperatorTok{||} \CommentTok{// Merged ops digest}
-\NormalTok{  policy\_id[}\DecValTok{4}\NormalTok{] }\CommentTok{// Policy identifier}
-\NormalTok{)}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{5.6 Phase 5: Record to
-History}\label{phase-5-record-to-history}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{history}\OperatorTok{.}\NormalTok{push((}
-\NormalTok{  Snapshot }\OperatorTok{\{}\NormalTok{ hash}\OperatorTok{:}\NormalTok{ commit\_hash}\OperatorTok{,}\NormalTok{ state\_root}\OperatorTok{,}\NormalTok{ parents}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
-\NormalTok{  TickReceipt }\OperatorTok{\{}\NormalTok{ applied}\OperatorTok{,}\NormalTok{ rejected}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
-\NormalTok{  WarpTickPatchV1 }\OperatorTok{\{}\NormalTok{ ops}\OperatorTok{,}\NormalTok{ in\_slots}\OperatorTok{,}\NormalTok{ out\_slots}\OperatorTok{,}\NormalTok{ patch\_digest}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\}}
-\NormalTok{))}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-The patch is \textbf{prescriptive}: it can be replayed without
-re-matching to reproduce the exact same state.
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Parallel Execution: BOAW (Bag of Autonomous
-Workers)}\label{parallel-execution-boaw-bag-of-autonomous-workers}
-
-\subsection{6.1 What is BOAW?}\label{what-is-boaw}
-
-BOAW stands for \textbf{Bag of Autonomous Workers}. It's Echo's parallel
-execution architecture that enables:
-
-\begin{itemize}
-\tightlist
-\item
-  \textbf{Massive parallelism} without locks
-\item
-  \textbf{Deterministic convergence} across platforms
-\item
-  \textbf{Worker-count invariance} (same result with 1 or 32 workers)
-\end{itemize}
-
-\subsection{6.2 The Key Insight}\label{the-key-insight}
-
-\begin{verbatim}
-┌──────────────────────────────────────────────────────────────────┐
-│                        THE BOAW INSIGHT                          │
-├──────────────────────────────────────────────────────────────────┤
-│                                                                  │
-│  Traditional parallelism:                                        │
-│    "Make execution order deterministic" → Complex, slow          │
-│                                                                  │
-│  BOAW parallelism:                                               │
-│    "Let execution order vary, make MERGE deterministic" → Fast!  │
-│                                                                  │
-│  Workers race freely → Each produces a TickDelta                 │
-│  Merge step sorts all deltas → Canonical output                  │
-│                                                                  │
-└──────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\subsection{6.3 Execution Strategies}\label{execution-strategies}
-
-\subsubsection{Phase 6A: Stride Partitioning
-(Legacy)}\label{phase-6a-stride-partitioning-legacy}
-
-\begin{verbatim}
-Worker 0: items[0], items[4], items[8],  ...
-Worker 1: items[1], items[5], items[9],  ...
-Worker 2: items[2], items[6], items[10], ...
-Worker 3: items[3], items[7], items[11], ...
-\end{verbatim}
-
-\textbf{Problem}: Poor cache locality---related items scatter across
-workers.
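To make the stride scheme concrete, here is a toy sketch in plain Rust (the `stride_partition` helper is hypothetical, not Echo's `ExecItem` API): worker `w` of `N` takes every `N`-th item index starting at `w`.

```rust
// Toy sketch of stride partitioning (hypothetical helper, not Echo's API):
// worker w receives item indices w, w + workers, w + 2*workers, ...
fn stride_partition(num_items: usize, workers: usize) -> Vec<Vec<usize>> {
    let mut assignment = vec![Vec::new(); workers];
    for i in 0..num_items {
        assignment[i % workers].push(i); // index i goes to worker i mod N
    }
    assignment
}

fn main() {
    let parts = stride_partition(12, 4);
    // Matches the layout above: Worker 0 gets items 0, 4, 8, ...
    assert_eq!(parts[0], vec![0, 4, 8]);
    assert_eq!(parts[3], vec![3, 7, 11]);
}
```

Note how neighbouring indices always land on different workers, which is exactly the cache-locality problem described above.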
-
-\subsubsection{Phase 6B: Virtual Shards (Current
-Default)}\label{phase-6b-virtual-shards-current-default}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{const}\NormalTok{ NUM\_SHARDS}\OperatorTok{:} \DataTypeTok{usize} \OperatorTok{=} \DecValTok{256}\OperatorTok{;} \CommentTok{// Protocol constant (frozen)}
-
-\KeywordTok{fn}\NormalTok{ shard\_of(node\_id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{usize} \OperatorTok{\{}
-\NormalTok{    }\KeywordTok{let}\NormalTok{ bytes }\OperatorTok{=}\NormalTok{ node\_id}\OperatorTok{.}\NormalTok{as\_bytes()}\OperatorTok{;}
-\NormalTok{    }\KeywordTok{let}\NormalTok{ val }\OperatorTok{=} \DataTypeTok{u64}\PreprocessorTok{::}\NormalTok{from\_le\_bytes(bytes[}\DecValTok{0}\OperatorTok{..}\DecValTok{8}\NormalTok{]}\OperatorTok{.}\NormalTok{try\_into()}\OperatorTok{.}\NormalTok{unwrap())}\OperatorTok{;} \CommentTok{// slice → [u8; 8]}
-\NormalTok{    (val }\OperatorTok{\&} \DecValTok{255}\NormalTok{) }\KeywordTok{as} \DataTypeTok{usize} \CommentTok{// Fast modulo via bitmask}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-10.pdf}
-\end{center}
-
-\textbf{Benefits}:
-
-\begin{itemize}
-\tightlist
-\item
-  Items with the same \texttt{shard\_of(scope)} are processed together →
-  better cache hits
-\item
-  Workers dynamically claim shards via an atomic counter → load
-  balancing
-\item
-  Determinism is enforced by the merge, not by execution order
-\end{itemize}
-
-\subsection{6.4 The Execution Loop}\label{the-execution-loop}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel\_sharded(}
-\NormalTok{    view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}
-\NormalTok{    items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}
-\NormalTok{    workers}\OperatorTok{:} \DataTypeTok{usize}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} 
\DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \CommentTok{// Partition items into 256 shards} - \KeywordTok{let}\NormalTok{ shards }\OperatorTok{=}\NormalTok{ partition\_into\_shards(items)}\OperatorTok{;} - - \CommentTok{// Atomic counter for work{-}stealing} - \KeywordTok{let}\NormalTok{ next\_shard }\OperatorTok{=} \PreprocessorTok{AtomicUsize::}\NormalTok{new(}\DecValTok{0}\NormalTok{)}\OperatorTok{;} - - \PreprocessorTok{std::thread::}\NormalTok{scope(}\OperatorTok{|}\NormalTok{s}\OperatorTok{|} \OperatorTok{\{} - \KeywordTok{let}\NormalTok{ handles}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{\_}\OperatorTok{\textgreater{}} \OperatorTok{=}\NormalTok{ (}\DecValTok{0}\OperatorTok{..}\NormalTok{workers)}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{\_}\OperatorTok{|} \OperatorTok{\{} -\NormalTok{ s}\OperatorTok{.}\NormalTok{spawn(}\OperatorTok{||} \OperatorTok{\{} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ delta }\OperatorTok{=} \PreprocessorTok{TickDelta::}\NormalTok{new()}\OperatorTok{;} - \ControlFlowTok{loop} \OperatorTok{\{} - \CommentTok{// Claim next shard atomically} - \KeywordTok{let}\NormalTok{ shard\_id }\OperatorTok{=}\NormalTok{ next\_shard}\OperatorTok{.}\NormalTok{fetch\_add(}\DecValTok{1}\OperatorTok{,} \PreprocessorTok{Ordering::}\NormalTok{Relaxed)}\OperatorTok{;} - \ControlFlowTok{if}\NormalTok{ shard\_id }\OperatorTok{\textgreater{}=}\NormalTok{ NUM\_SHARDS }\OperatorTok{\{} \ControlFlowTok{break}\OperatorTok{;} \OperatorTok{\}} - - \CommentTok{// Execute all items in this shard} - \ControlFlowTok{for}\NormalTok{ item }\KeywordTok{in} \OperatorTok{\&}\NormalTok{shards[shard\_id]}\OperatorTok{.}\NormalTok{items }\OperatorTok{\{} -\NormalTok{ (item}\OperatorTok{.}\NormalTok{exec)(view}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,} \OperatorTok{\&}\NormalTok{item}\OperatorTok{.}\NormalTok{scope}\OperatorTok{,} 
\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ delta)}\OperatorTok{;} - \OperatorTok{\}} - \OperatorTok{\}} -\NormalTok{ delta} - \OperatorTok{\}}\NormalTok{)} - \OperatorTok{\}}\NormalTok{)}\OperatorTok{.}\NormalTok{collect()}\OperatorTok{;} - -\NormalTok{ handles}\OperatorTok{.}\NormalTok{into\_iter()}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{h}\OperatorTok{|}\NormalTok{ h}\OperatorTok{.}\NormalTok{join()}\OperatorTok{.}\NormalTok{unwrap())}\OperatorTok{.}\NormalTok{collect()} - \OperatorTok{\}}\NormalTok{)} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{6.5 The Canonical Merge}\label{the-canonical-merge} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ merge\_deltas(deltas}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{WarpOp}\OperatorTok{\textgreater{},}\NormalTok{ MergeConflict}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \CommentTok{// 1. Flatten all ops from all workers} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ all\_ops}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(WarpOpKey}\OperatorTok{,}\NormalTok{ OpOrigin}\OperatorTok{,}\NormalTok{ WarpOp)}\OperatorTok{\textgreater{}} \OperatorTok{=}\NormalTok{ deltas} - \OperatorTok{.}\NormalTok{into\_iter()} - \OperatorTok{.}\NormalTok{flat\_map(}\OperatorTok{|}\NormalTok{d}\OperatorTok{|}\NormalTok{ d}\OperatorTok{.}\NormalTok{ops\_with\_origins())} - \OperatorTok{.}\NormalTok{collect()}\OperatorTok{;} - - \CommentTok{// 2. 
Sort canonically by (key, origin)} -\NormalTok{ all\_ops}\OperatorTok{.}\NormalTok{sort\_by\_key(}\OperatorTok{|}\NormalTok{(key}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ (key}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{.}\NormalTok{clone()))}\OperatorTok{;} - - \CommentTok{// 3. Deduplicate and detect conflicts} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ result }\OperatorTok{=} \DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;} - \ControlFlowTok{for}\NormalTok{ group }\KeywordTok{in}\NormalTok{ all\_ops}\OperatorTok{.}\NormalTok{group\_by(}\OperatorTok{|}\NormalTok{(k1}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{,}\NormalTok{ (k2}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ k1 }\OperatorTok{==}\NormalTok{ k2) }\OperatorTok{\{} - \KeywordTok{let}\NormalTok{ first }\OperatorTok{=} \OperatorTok{\&}\NormalTok{group[}\DecValTok{0}\NormalTok{]}\OperatorTok{.}\DecValTok{2}\OperatorTok{;} - \ControlFlowTok{if}\NormalTok{ group}\OperatorTok{.}\NormalTok{iter()}\OperatorTok{.}\NormalTok{all(}\OperatorTok{|}\NormalTok{(\_}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ op)}\OperatorTok{|}\NormalTok{ op }\OperatorTok{==}\NormalTok{ first) }\OperatorTok{\{} -\NormalTok{ result}\OperatorTok{.}\NormalTok{push(first}\OperatorTok{.}\NormalTok{clone())}\OperatorTok{;} \CommentTok{// All identical: keep one} - \OperatorTok{\}} \ControlFlowTok{else} \OperatorTok{\{} - \ControlFlowTok{return} \ConstantTok{Err}\NormalTok{(MergeConflict }\OperatorTok{\{}\NormalTok{ writers}\OperatorTok{:}\NormalTok{ group}\OperatorTok{.}\NormalTok{iter()}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{(\_}\OperatorTok{,}\NormalTok{ o}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ o)}\OperatorTok{.}\NormalTok{collect() }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} - \OperatorTok{\}} - 
\OperatorTok{\}} - - \ConstantTok{Ok}\NormalTok{(result)} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Key guarantee}: Conflicts are bugs. If footprints were correct, -no two rewrites should write different values to the same key. - -\subsection{6.6 Runtime Enforcement: -FootprintGuard}\label{runtime-enforcement-footprintguard} - -\texttt{FootprintGuard} is the runtime mechanism that validates every -graph access and emitted op against the declared footprint. - -\subsubsection{Read Enforcement}\label{read-enforcement} - -Read enforcement is implemented via \texttt{GraphView::new\_guarded()}, -which wraps the underlying \texttt{GraphView} with an intercepting layer. -Every accessor call---\texttt{node()}, \texttt{edges\_from()}, -\texttt{node\_attachment()}, etc.---is checked against the footprint's -declared read sets (\texttt{n\_read}, \texttt{e\_read}, \texttt{a\_read}). -An access to an undeclared resource triggers a \texttt{FootprintViolation} -panic. - -\subsubsection{Write Enforcement}\label{write-enforcement} - -Write enforcement uses a post-hoc \texttt{check\_op()} strategy. The -executor runs inside a \texttt{catch\_unwind} boundary, and validation -runs on every op emitted into the \texttt{TickDelta} regardless of -whether the executor completes normally or panics. This catches undeclared -writes, cross-warp emissions, unauthorized instance ops, and adjacency -violations (edge ops whose \texttt{from} node is absent from -\texttt{n\_write}). - -\subsubsection{Scope and Lifecycle}\label{scope-and-lifecycle} - -The guard is instantiated \emph{per-\texttt{ExecItem}} within a -\texttt{WorkUnit}. Each rule invocation receives its own guard, scoped to -that item's computed footprint. The \texttt{check\_op()} function validates -\texttt{TickDelta} emissions against the footprint. 
Enforcement yields two -payload variants: -\begin{itemize} -\item \texttt{FootprintViolation}: emitted when \texttt{check\_op} detects an - illegal op (undeclared write, cross-warp emission, etc.) -\item \texttt{FootprintViolationWithPanic}: emitted when the executor itself - panics and the guard wraps that panic together with any detected violation -\end{itemize} - -\textbf{Tick Fallout Semantics:} When enforcement fails, the wrapped panic -causes the \texttt{TickDelta} to become a \texttt{PoisonedDelta}, preventing -merge. The current \texttt{ExecItem}/tick is aborted. At merge time, if a -poisoned delta is encountered, a \texttt{MergeError::PoisonedDelta} is raised, -triggering worker/tick recovery. The distinction is: abort of the current -\texttt{ExecItem} happens immediately at detection; merge-time errors occur -when poisoned deltas reach the commit path. - -\subsubsection{Configuration}\label{guard-configuration} - -The guard is \texttt{cfg}-gated: - -\begin{itemize} -\item \textbf{Active} in debug builds (\texttt{debug\_assertions}) or when - the \texttt{footprint\_enforce\_release} feature is enabled. -\item \textbf{Disabled} when the \texttt{unsafe\_graph} feature is set, - which removes all guard overhead for maximum throughput in production - scenarios where footprints have already been validated. -\end{itemize} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{7. 
Storage \& Hashing: Content-Addressed -Truth}\label{storage-hashing-content-addressed-truth} - -\subsection{7.1 The GraphStore}\label{the-graphstore} - -Located in \texttt{crates/warp-core/src/graph.rs}: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ GraphStore }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) warp\_id}\OperatorTok{:}\NormalTok{ WarpId}\OperatorTok{,} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) nodes}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ NodeRecord}\OperatorTok{\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edges\_from}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{EdgeRecord}\OperatorTok{\textgreater{}\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edges\_to}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{}\textgreater{},} \CommentTok{// Reverse index} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) node\_attachments}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ AttachmentValue}\OperatorTok{\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_attachments}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ AttachmentValue}\OperatorTok{\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_index}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Edge → Source} - 
\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_to\_index}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Edge → Target}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Why BTreeMap everywhere?}
-
-\begin{itemize}
-\tightlist
-\item
-  Deterministic iteration order (sorted by key)
-\item
-  Enables canonical hashing
-\item
-  No HashMap ordering surprises
-\end{itemize}
-
-\subsection{7.2 WSC: Write-Streaming Columnar
-Format}\label{wsc-write-streaming-columnar-format}
-
-For efficient snapshots, Echo uses WSC---a zero-copy, mmap-friendly
-format:
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────────────────────┐
-│                        WSC SNAPSHOT FILE                        │
-├─────────────────────────────────────────────────────────────────┤
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ NODES TABLE (sorted by NodeId)                              │ │
-│ │   ┌──────────┬──────────┬──────────┐                        │ │
-│ │   │ NodeRow  │ NodeRow  │ NodeRow  │ ...                    │ │
-│ │   │ 64 bytes │ 64 bytes │ 64 bytes │                        │ │
-│ │   └──────────┴──────────┴──────────┘                        │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ EDGES TABLE (sorted by EdgeId)                              │ │
-│ │   ┌───────────┬───────────┬───────────┐                     │ │
-│ │   │ EdgeRow   │ EdgeRow   │ EdgeRow   │ ...                 │ │
-│ │   │ 128 bytes │ 128 bytes │ 128 bytes │                     │ │
-│ │   └───────────┴───────────┴───────────┘                     │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ OUT_INDEX (per-node → range into out_edges)                 │ │
-│ │   ┌────────────────┬────────────────┐                       │ │
-│ │   │ Range (16 B)   │ Range (16 B)   │ ...                   │ │
-│ │   └────────────────┴────────────────┘                       │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ BLOB ARENA (variable-length data)                           │ │
-│ │   Referenced by (offset, length) tuples                     │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-└─────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\textbf{Row types} (8-byte aligned):
-
-\begin{itemize}
-\tightlist
-\item
-  \texttt{NodeRow}: 64 bytes (node\_id{[}32{]} + node\_type{[}32{]})
-\item
-  \texttt{EdgeRow}: 128 bytes (edge\_id{[}32{]} + from{[}32{]} +
-  to{[}32{]} + type{[}32{]})
-\item
-  \texttt{Range}: 16 bytes (start\_le{[}8{]} + len\_le{[}8{]})
-\end{itemize}
-
-\subsection{7.3 Copy-on-Write Semantics}\label{copy-on-write-semantics}
-
-\textbf{Rule}: During a tick, nothing shared is mutated.
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-11.pdf}
-\end{center}
-
-\textbf{Structural sharing}: Only changed segments are newly written.
-Unchanged data is referenced by hash.
-
-\subsection{7.4 Hash Algorithm Details}\label{hash-algorithm-details}
-
-\textbf{State Root} (BLAKE3, v2):
-
-\begin{verbatim}
-state_root = BLAKE3(
-  root_id[32] ||
-  instance_count[8, LE] ||
-  for each instance in BTreeMap order:
-    warp_id_len[8, LE] ||
-    warp_id_bytes ||
-    node_count[8, LE] ||
-    for each node in ascending NodeId order:
-      node_id[32] ||
-      node_type[32] ||
-      for each outbound edge in ascending EdgeId order:
-        edge_id[32] ||
-        edge_type[32] ||
-        to_node[32] ||
-      for each attachment:
-        key_len[8, LE] ||
-        key_bytes ||
-        type_id[32] ||
-        value_len[8, LE] ||
-        value_bytes
-)
-\end{verbatim}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{8. Worked Example: Tracing a Link
-Click}\label{worked-example-tracing-a-link-click}
-
-Let's trace what happens when a user clicks a link in a hypothetical
-WARP-based navigation system.
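Before tracing the example, the length-prefix rules from the state-root layout above can be illustrated with a small, self-contained sketch (the `push_len_prefixed` helper and field values are illustrative, not Echo's types): fixed 32-byte IDs are appended raw, while variable-length fields carry a u64 little-endian length prefix.

```rust
// Sketch of the canonical byte layout: raw 32-byte IDs, u64 LE length
// prefixes for variable-length data. Names here are illustrative only.
fn push_len_prefixed(buf: &mut Vec<u8>, bytes: &[u8]) {
    buf.extend_from_slice(&(bytes.len() as u64).to_le_bytes()); // len[8, LE]
    buf.extend_from_slice(bytes);                               // raw bytes
}

fn main() {
    let node_id = [0xAAu8; 32]; // IDs are emitted as raw 32 bytes, no prefix
    let mut buf = Vec::new();
    buf.extend_from_slice(&node_id);
    push_len_prefixed(&mut buf, b"current"); // a 7-byte attachment key

    assert_eq!(buf.len(), 32 + 8 + 7);
    assert_eq!(&buf[32..40], &7u64.to_le_bytes()); // little-endian length
}
```

Because every field is either fixed-width or explicitly length-prefixed, the byte stream decodes the same way on every architecture, which is what makes the BLAKE3 state root reproducible.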
- -\subsection{8.1 The Scenario}\label{the-scenario} - -Imagine a simple site with two pages: - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-12.pdf} -\end{center} - -\textbf{User clicks the link}: This should navigate from Home to About. - -\subsection{8.2 Step 1: Intent Ingestion}\label{step-1-intent-ingestion} - -The click is captured by the viewer and converted to an \textbf{intent}: - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// In the viewer:} -\KeywordTok{let}\NormalTok{ intent }\OperatorTok{=}\NormalTok{ NavigateIntent }\OperatorTok{\{} -\NormalTok{ target\_page}\OperatorTok{:}\NormalTok{ about\_node\_id}\OperatorTok{,} -\NormalTok{ timestamp}\OperatorTok{:}\NormalTok{ deterministic\_tick}\OperatorTok{,} -\OperatorTok{\};} -\KeywordTok{let}\NormalTok{ intent\_bytes }\OperatorTok{=}\NormalTok{ canonical\_encode(}\OperatorTok{\&}\NormalTok{intent)}\OperatorTok{;} - -\CommentTok{// Send to engine:} -\NormalTok{engine}\OperatorTok{.}\NormalTok{ingest\_intent(intent\_bytes)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens inside \texttt{ingest\_intent}}: - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-13.pdf} -\end{center} - -\subsection{8.3 Step 2: Begin -Transaction}\label{step-2-begin-transaction} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ tx }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{begin()}\OperatorTok{;} \CommentTok{// tx = TxId(1)} -\end{Highlighting} -\end{Shaded} - -\subsection{8.4 Step 3: Dispatch Intent}\label{step-3-dispatch-intent} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{engine}\OperatorTok{.}\NormalTok{dispatch\_next\_intent(tx)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens}: - -\begin{center} -\includegraphics[max width=\textwidth,max 
height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-14.pdf}
-\end{center}
-
-\subsection{8.5 Step 4: Rule Matching}\label{step-4-rule-matching}
-
-The \texttt{cmd/navigate} rule matches:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// Matcher: Does this intent want navigation?}
-\KeywordTok{fn}\NormalTok{ navigate\_matcher(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
-\NormalTok{    }\KeywordTok{let} \ConstantTok{Some}\NormalTok{(intent) }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(scope) }\ControlFlowTok{else} \OperatorTok{\{} \ControlFlowTok{return} \ConstantTok{false}\OperatorTok{;} \OperatorTok{\};}
-\NormalTok{    intent}\OperatorTok{.}\NormalTok{type\_id }\OperatorTok{==} \StringTok{"navigate\_intent"}
-\OperatorTok{\}}
-
-\CommentTok{// Footprint: What will we read/write?}
-\KeywordTok{fn}\NormalTok{ navigate\_footprint(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}}\NormalTok{ Footprint }\OperatorTok{\{}
-\NormalTok{    Footprint }\OperatorTok{\{}
-\NormalTok{        n\_read}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[scope}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,}\NormalTok{ viewer\_node]}\OperatorTok{,}
-\NormalTok{        n\_write}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[]}\OperatorTok{,}
-\NormalTok{        }\CommentTok{// The executor reads these attachments, so they must be declared:}
-\NormalTok{        a\_read}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[}\PreprocessorTok{AttachmentKey::}\NormalTok{new(scope}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,} \StringTok{"target"}\NormalTok{)}\OperatorTok{,} \PreprocessorTok{AttachmentKey::}\NormalTok{new(viewer\_node}\OperatorTok{,} \StringTok{"current"}\NormalTok{)]}\OperatorTok{,}
-\NormalTok{        a\_write}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[}\PreprocessorTok{AttachmentKey::}\NormalTok{new(viewer\_node}\OperatorTok{,} \StringTok{"current"}\NormalTok{)]}\OperatorTok{,}
-\NormalTok{        }\OperatorTok{..}\DataTypeTok{Default}\PreprocessorTok{::}\KeywordTok{default}\NormalTok{()}
-\NormalTok{    }\OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-The rule is enqueued:
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────────────────┐
-│ PendingRewrite                                              │
-├─────────────────────────────────────────────────────────────┤ -│ rule_id: "cmd/navigate" │ -│ scope: 0xABCD... (intent node) │ -│ footprint: { n_read: [intent, viewer], a_write: [current] } │ -│ tx: TxId(1) │ -└─────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\subsection{8.6 Step 5: Commit}\label{step-5-commit} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ (snapshot}\OperatorTok{,}\NormalTok{ receipt}\OperatorTok{,}\NormalTok{ patch) }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{commit\_with\_receipt(tx)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5a. Drain}\label{a.-drain} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ rewrites }\OperatorTok{=}\NormalTok{ scheduler}\OperatorTok{.}\NormalTok{drain\_for\_tx(tx)}\OperatorTok{;} -\CommentTok{// Result: [PendingRewrite \{ rule: "cmd/navigate", scope: intent\_node \}]} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5b. Reserve}\label{b.-reserve} - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// Check footprint independence} -\CommentTok{// No conflicts (only one rewrite)} -\CommentTok{// Accepted!} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5c. 
Execute}\label{c.-execute} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{fn}\NormalTok{ navigate\_executor(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ delta}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ TickDelta) }\OperatorTok{\{} - \CommentTok{// Read the intent to find target} - \KeywordTok{let}\NormalTok{ intent }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(scope)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;} - \KeywordTok{let}\NormalTok{ target\_page }\OperatorTok{=}\NormalTok{ intent}\OperatorTok{.}\NormalTok{attachment(}\StringTok{"target"}\NormalTok{)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;} - - \CommentTok{// Read current viewer state (for logging/validation)} - \KeywordTok{let}\NormalTok{ viewer }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(}\OperatorTok{\&}\NormalTok{VIEWER\_NODE)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;} - \KeywordTok{let}\NormalTok{ old\_page }\OperatorTok{=}\NormalTok{ viewer}\OperatorTok{.}\NormalTok{attachment(}\StringTok{"current"}\NormalTok{)}\OperatorTok{;} - - \CommentTok{// Emit the change: update viewer\textquotesingle{}s current page} -\NormalTok{ delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{} -\NormalTok{ node}\OperatorTok{:}\NormalTok{ VIEWER\_NODE}\OperatorTok{,} -\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{.}\NormalTok{into()}\OperatorTok{,} -\NormalTok{ value}\OperatorTok{:} \PreprocessorTok{AttachmentValue::}\NormalTok{Atom(AtomPayload }\OperatorTok{\{} -\NormalTok{ type\_id}\OperatorTok{:} \StringTok{"node\_ref"}\OperatorTok{.}\NormalTok{into()}\OperatorTok{,} -\NormalTok{ bytes}\OperatorTok{:}\NormalTok{ target\_page}\OperatorTok{.}\NormalTok{to\_bytes()}\OperatorTok{,} - \OperatorTok{\}}\NormalTok{)}\OperatorTok{,} - 
\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{TickDelta now contains}: - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{[} -\NormalTok{ (}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{} -\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,} -\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,} -\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id} - \OperatorTok{\},}\NormalTok{ OpOrigin }\OperatorTok{\{}\NormalTok{ intent\_id}\OperatorTok{:} \DecValTok{1}\OperatorTok{,}\NormalTok{ rule\_id}\OperatorTok{:} \DecValTok{42}\OperatorTok{,}\NormalTok{ match\_ix}\OperatorTok{:} \DecValTok{0}\OperatorTok{,}\NormalTok{ op\_ix}\OperatorTok{:} \DecValTok{0} \OperatorTok{\}}\NormalTok{)} -\NormalTok{]} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5d. Merge}\label{d.-merge} - -Only one delta, trivial merge: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ merged\_ops }\OperatorTok{=} \PreprocessorTok{vec!}\NormalTok{[} - \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,}\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id }\OperatorTok{\}} -\NormalTok{]}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5e. 
Finalize}\label{e.-finalize} - -Apply to state: - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{state}\OperatorTok{.}\NormalTok{set\_attachment(viewer\_node}\OperatorTok{,} \StringTok{"current"}\OperatorTok{,}\NormalTok{ about\_node\_id)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsection{8.7 Step 6: Hash Computation}\label{step-6-hash-computation} - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// State root: BLAKE3 of reachable graph} -\KeywordTok{let}\NormalTok{ state\_root }\OperatorTok{=}\NormalTok{ compute\_state\_root(}\OperatorTok{\&}\NormalTok{state)}\OperatorTok{;} \CommentTok{// 0x7890...} - -\CommentTok{// Patch digest: BLAKE3 of merged ops} -\KeywordTok{let}\NormalTok{ patch\_digest }\OperatorTok{=}\NormalTok{ compute\_patch\_digest(}\OperatorTok{\&}\NormalTok{merged\_ops)}\OperatorTok{;} \CommentTok{// 0xDEF0...} - -\CommentTok{// Commit hash} -\KeywordTok{let}\NormalTok{ commit\_hash }\OperatorTok{=}\NormalTok{ BLAKE3(} -\NormalTok{ VERSION\_TAG }\OperatorTok{||} -\NormalTok{ [parent\_hash] }\OperatorTok{||} -\NormalTok{ state\_root }\OperatorTok{||} -\NormalTok{ patch\_digest }\OperatorTok{||} -\NormalTok{ policy\_id} -\NormalTok{)}\OperatorTok{;} \CommentTok{// 0x1234...} -\end{Highlighting} -\end{Shaded} - -\subsection{8.8 Step 7: Emit to Tools}\label{step-7-emit-to-tools} - -The engine emits a \texttt{WarpDiff} to the session hub: - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{WarpDiff }\OperatorTok{\{} -\NormalTok{ from\_epoch}\OperatorTok{:} \DecValTok{0}\OperatorTok{,} -\NormalTok{ to\_epoch}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} -\NormalTok{ ops}\OperatorTok{:} \PreprocessorTok{vec!}\NormalTok{[} - \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{} -\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,} -\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,} -\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id} - \OperatorTok{\}} -\NormalTok{ 
]}\OperatorTok{,} -\NormalTok{ state\_hash}\OperatorTok{:} \DecValTok{0x7890}\OperatorTok{...,} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{8.9 Step 8: Viewer Applies -Diff}\label{step-8-viewer-applies-diff} - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// In warp{-}viewer:} -\KeywordTok{fn}\NormalTok{ process\_frames(viewer}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ ViewerState}\OperatorTok{,}\NormalTok{ frames}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{WarpFrame}\OperatorTok{\textgreater{}}\NormalTok{) }\OperatorTok{\{} - \ControlFlowTok{for}\NormalTok{ frame }\KeywordTok{in}\NormalTok{ frames }\OperatorTok{\{} - \ControlFlowTok{match}\NormalTok{ frame }\OperatorTok{\{} - \PreprocessorTok{WarpFrame::}\NormalTok{Diff(diff) }\OperatorTok{=\textgreater{}} \OperatorTok{\{} - \CommentTok{// Verify we have the parent epoch} - \PreprocessorTok{assert\_eq!}\NormalTok{(viewer}\OperatorTok{.}\NormalTok{epoch}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(diff}\OperatorTok{.}\NormalTok{from\_epoch))}\OperatorTok{;} - - \CommentTok{// Apply each operation} - \ControlFlowTok{for}\NormalTok{ op }\KeywordTok{in}\NormalTok{ diff}\OperatorTok{.}\NormalTok{ops }\OperatorTok{\{} -\NormalTok{ viewer}\OperatorTok{.}\NormalTok{wire\_graph}\OperatorTok{.}\NormalTok{apply\_op(op)}\OperatorTok{;} - \OperatorTok{\}} - - \CommentTok{// Update epoch} -\NormalTok{ viewer}\OperatorTok{.}\NormalTok{epoch }\OperatorTok{=} \ConstantTok{Some}\NormalTok{(diff}\OperatorTok{.}\NormalTok{to\_epoch)}\OperatorTok{;} - - \CommentTok{// Verify hash matches!} - \KeywordTok{let}\NormalTok{ computed }\OperatorTok{=}\NormalTok{ viewer}\OperatorTok{.}\NormalTok{wire\_graph}\OperatorTok{.}\NormalTok{state\_hash()}\OperatorTok{;} - \PreprocessorTok{assert\_eq!}\NormalTok{(computed}\OperatorTok{,}\NormalTok{ diff}\OperatorTok{.}\NormalTok{state\_hash}\OperatorTok{,} \StringTok{"DESYNC!"}\NormalTok{)}\OperatorTok{;} - \OperatorTok{\}} - 
\CommentTok{// ...} - \OperatorTok{\}} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{8.10 The Result}\label{the-result} - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-15.pdf} -\end{center} - -\textbf{The navigation is complete.} The viewer now displays the About -page, and the state hash proves it happened deterministically. - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{9. The Viewer: Observing Echo}\label{the-viewer-observing-echo} - -\subsection{9.1 Event Handling -Architecture}\label{event-handling-architecture} - -The viewer uses a \textbf{pure reducer pattern} (similar to Redux/Elm): - -\begin{center} -\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-16.pdf} -\end{center} - -\subsection{9.2 The UiEvent Enum}\label{the-uievent-enum} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{enum}\NormalTok{ UiEvent }\OperatorTok{\{} - \CommentTok{// Menu navigation} -\NormalTok{ ConnectClicked}\OperatorTok{,} -\NormalTok{ SettingsClicked}\OperatorTok{,} -\NormalTok{ ExitClicked}\OperatorTok{,} - - \CommentTok{// Connection form} -\NormalTok{ ConnectHostChanged(}\DataTypeTok{String}\NormalTok{)}\OperatorTok{,} -\NormalTok{ ConnectPortChanged(}\DataTypeTok{u16}\NormalTok{)}\OperatorTok{,} -\NormalTok{ ConnectSubmit}\OperatorTok{,} - - \CommentTok{// Overlays} -\NormalTok{ OpenMenu}\OperatorTok{,} -\NormalTok{ CloseOverlay}\OperatorTok{,} -\NormalTok{ OpenSettingsOverlay}\OperatorTok{,} - - \CommentTok{// System} -\NormalTok{ ShutdownRequested}\OperatorTok{,} -\NormalTok{ EnterView}\OperatorTok{,} -\NormalTok{ ShowError(}\DataTypeTok{String}\NormalTok{)}\OperatorTok{,} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{9.3 The Pure Reducer}\label{the-pure-reducer} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} 
\KeywordTok{fn}\NormalTok{ reduce(ui}\OperatorTok{:} \OperatorTok{\&}\NormalTok{UiState}\OperatorTok{,}\NormalTok{ ev}\OperatorTok{:}\NormalTok{ UiEvent) }\OperatorTok{{-}\textgreater{}}\NormalTok{ (UiState}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{UiEffect}\OperatorTok{\textgreater{}}\NormalTok{) }\OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ next }\OperatorTok{=}\NormalTok{ ui}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{;}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ fx }\OperatorTok{=} \DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
-
- \ControlFlowTok{match}\NormalTok{ ev }\OperatorTok{\{}
- \PreprocessorTok{UiEvent::}\NormalTok{ConnectClicked }\OperatorTok{=\textgreater{}} \OperatorTok{\{}
-\NormalTok{ next}\OperatorTok{.}\NormalTok{title\_mode }\OperatorTok{=} \PreprocessorTok{TitleMode::}\NormalTok{ConnectForm}\OperatorTok{;}
- \OperatorTok{\}}
- \PreprocessorTok{UiEvent::}\NormalTok{ConnectSubmit }\OperatorTok{=\textgreater{}} \OperatorTok{\{}
-\NormalTok{ next}\OperatorTok{.}\NormalTok{screen }\OperatorTok{=} \PreprocessorTok{Screen::}\NormalTok{Connecting}\OperatorTok{;}
-\NormalTok{ fx}\OperatorTok{.}\NormalTok{push(}\PreprocessorTok{UiEffect::}\NormalTok{RequestConnect)}\OperatorTok{;}
- \OperatorTok{\}}
- \PreprocessorTok{UiEvent::}\NormalTok{EnterView }\OperatorTok{=\textgreater{}} \OperatorTok{\{}
-\NormalTok{ next}\OperatorTok{.}\NormalTok{screen }\OperatorTok{=} \PreprocessorTok{Screen::}\NormalTok{View}\OperatorTok{;}
- \OperatorTok{\}}
- \CommentTok{// ...}
- \OperatorTok{\}}
-
-\NormalTok{ (next}\OperatorTok{,}\NormalTok{ fx)}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Benefits}:
-
-\begin{itemize}
-\tightlist
-\item
-  \textbf{Testable}: Pure function, easy to unit test
-\item
-  \textbf{Predictable}: Same input always produces same output
-\item
-  \textbf{Debuggable}: State transitions are explicit
-\end{itemize}
-
-\subsection{9.4 Frame Loop}\label{frame-loop}
-
-Each frame:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ frame(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{\{} - \CommentTok{// 1. Drain session notifications} - \ControlFlowTok{for}\NormalTok{ notification }\KeywordTok{in} \KeywordTok{self}\OperatorTok{.}\NormalTok{session}\OperatorTok{.}\NormalTok{drain\_notifications(}\DecValTok{64}\NormalTok{) }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{handle\_notification(notification)}\OperatorTok{;} - \OperatorTok{\}} - - \CommentTok{// 2. Process incoming frames} - \KeywordTok{let}\NormalTok{ frames }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{session}\OperatorTok{.}\NormalTok{drain\_frames(}\DecValTok{64}\NormalTok{)}\OperatorTok{;} - \KeywordTok{let}\NormalTok{ outcome }\OperatorTok{=}\NormalTok{ process\_frames(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{.}\NormalTok{ui}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{.}\NormalTok{viewer}\OperatorTok{,}\NormalTok{ frames)}\OperatorTok{;} - - \CommentTok{// 3. Handle state changes} - \ControlFlowTok{if}\NormalTok{ outcome}\OperatorTok{.}\NormalTok{enter\_view }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{apply\_ui\_event(}\PreprocessorTok{UiEvent::}\NormalTok{EnterView)}\OperatorTok{;} - \OperatorTok{\}} - - \CommentTok{// 4. Handle pointer interaction (3D view)} - \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{ui}\OperatorTok{.}\NormalTok{screen }\OperatorTok{==} \PreprocessorTok{Screen::}\NormalTok{View }\OperatorTok{\{} - \KeywordTok{self}\OperatorTok{.}\NormalTok{handle\_pointer(dt}\OperatorTok{,}\NormalTok{ aspect}\OperatorTok{,}\NormalTok{ width}\OperatorTok{,}\NormalTok{ height}\OperatorTok{,}\NormalTok{ window)}\OperatorTok{;} - \OperatorTok{\}} - - \CommentTok{// 5. 
Render UI} - \ControlFlowTok{match} \KeywordTok{self}\OperatorTok{.}\NormalTok{ui}\OperatorTok{.}\NormalTok{screen }\OperatorTok{\{} - \PreprocessorTok{Screen::}\NormalTok{Title }\OperatorTok{=\textgreater{}}\NormalTok{ draw\_title\_screen(ctx}\OperatorTok{,} \KeywordTok{self}\NormalTok{)}\OperatorTok{,} - \PreprocessorTok{Screen::}\NormalTok{View }\OperatorTok{=\textgreater{}}\NormalTok{ draw\_view\_hud(ctx}\OperatorTok{,} \KeywordTok{self}\NormalTok{)}\OperatorTok{,} - \CommentTok{// ...} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{10. Glossary}\label{glossary} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 2\tabcolsep) * \real{0.3333}} - >{\raggedright\arraybackslash}p{(\linewidth - 2\tabcolsep) * \real{0.6667}}@{}} -\toprule\noalign{} -\begin{minipage}[b]{\linewidth}\raggedright -Term -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Definition -\end{minipage} \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\textbf{WARP} & Worldline Algebra for Recursive Provenance---Echo's -graph formalism \\ -\textbf{BOAW} & Bag of Autonomous Workers---parallel execution -architecture \\ -\textbf{Tick} & One complete cycle of the engine (begin → apply → -commit) \\ -\textbf{Footprint} & Declaration of resources a rule will read/write \\ -\textbf{TickDelta} & Accumulator for operations during execution \\ -\textbf{WarpOp} & A single graph mutation operation \\ -\textbf{GraphView} & Read-only wrapper enforcing BOAW contract \\ -\textbf{Snapshot} & Immutable, hashable state at a point in time \\ -\textbf{WSC} & Write-Streaming Columnar---zero-copy snapshot format \\ -\textbf{State Root} & BLAKE3 hash of reachable graph state \\ -\textbf{Commit Hash} & Combined hash of state + patch + parents + -policy \\ -\textbf{Intent} & External input that causes state changes \\ 
-\textbf{MaterializationBus} & Channel system for emitting data to -tools \\ -\textbf{Scheduler} & Component ensuring deterministic rewrite -ordering \\ -\textbf{Virtual Shard} & Cache-locality optimization (256 shards) \\ -\textbf{OpOrigin} & Metadata tracking which intent/rule produced an -op \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix A: Key File -Locations}\label{appendix-a-key-file-locations} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}lll@{}} -\toprule\noalign{} -Component & Path & Lines \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -Engine & \texttt{crates/warp-core/src/engine\_impl.rs} & 302-954 \\ -GraphStore & \texttt{crates/warp-core/src/graph.rs} & 1-300 \\ -GraphView & \texttt{crates/warp-core/src/graph\_view.rs} & 42-100 \\ -Scheduler & \texttt{crates/warp-core/src/scheduler.rs} & 59-712 \\ -Snapshot & \texttt{crates/warp-core/src/snapshot.rs} & 49-263 \\ -TickDelta & \texttt{crates/warp-core/src/tick\_delta.rs} & 38-172 \\ -BOAW Exec & \texttt{crates/warp-core/src/boaw/exec.rs} & 38-192 \\ -BOAW Shard & \texttt{crates/warp-core/src/boaw/shard.rs} & 82-120 \\ -BOAW Merge & \texttt{crates/warp-core/src/boaw/merge.rs} & 36-75 \\ -UI State & \texttt{crates/warp-viewer/src/ui\_state.rs} & 8-127 \\ -Viewer Frame & \texttt{crates/warp-viewer/src/app\_frame.rs} & 24-349 \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Appendix B: Architecture Decision -Records}\label{appendix-b-architecture-decision-records} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.1923}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.5385}}@{}} -\toprule\noalign{} -\begin{minipage}[b]{\linewidth}\raggedright -ADR -\end{minipage} & 
\begin{minipage}[b]{\linewidth}\raggedright -Title -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Key Decision -\end{minipage} \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -ADR-0001 & Two-Plane Model & Separate skeleton from attachments \\ -ADR-0002 & WarpInstances & Flattened indirection for nested graphs \\ -ADR-0003 & MaterializationBus & Causality-first API, no direct writes \\ -ADR-0004 & No Global State & Dependency injection only \\ -ADR-0005 & Physics & Deterministic scheduled rewrites \\ -ADR-0006 & Ban Non-Determinism & CI enforcement scripts \\ -ADR-0007 & BOAW Storage & Immutable base + overlay + merge \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\emph{Document generated 2026-01-18. For the latest information, consult -the source code and ADRs.} - -\backmatter -\end{document} diff --git a/docs/archive/study/what-makes-echo-tick.md b/docs/archive/study/what-makes-echo-tick.md deleted file mode 100644 index 84467771..00000000 --- a/docs/archive/study/what-makes-echo-tick.md +++ /dev/null @@ -1,1198 +0,0 @@ - - - -# What Makes Echo Tick? - -> **Your Tour Guide**: Claude (Opus 4.5) -> -> Welcome! I've been asked to give you a personal tour through Echo's internals. This isn't just documentation—I'll share what I find elegant, surprising, and occasionally baffling about this codebase. When you see a red-outlined box, that's me stepping out of "narrator mode" to give you my unfiltered take. -> -> **Reading Time**: ~45 minutes for complete understanding. - ---- - -## Table of Contents - -1. [Philosophy: Why Echo Exists](#1-philosophy-why-echo-exists) -2. [The Big Picture: Architecture Overview](#2-the-big-picture-architecture-overview) -3. [Core Concepts: The WARP Graph](#3-core-concepts-the-warp-graph) -4. [The Engine: Heart of Echo](#4-the-engine-heart-of-echo) -5. [The Tick Pipeline: Where Everything Happens](#5-the-tick-pipeline-where-everything-happens) -6. 
[Parallel Execution: BOAW (Bag of Autonomous Workers)](#6-parallel-execution-boaw-bag-of-autonomous-workers) -7. [Storage & Hashing: Content-Addressed Truth](#7-storage--hashing-content-addressed-truth) -8. [Worked Example: Tracing a Link Click](#8-worked-example-tracing-a-link-click) -9. [The Viewer: Observing Echo](#9-the-viewer-observing-echo) -10. [Glossary](#10-glossary) - ---- - -## 1. Philosophy: Why Echo Exists - -### 1.1 The Problem - -Traditional game engines and simulations treat state as **mutable objects**. This creates fundamental problems: - -- **Replay is hard**: You can't just "rewind" because state changes are scattered and untracked. -- **Synchronization is fragile**: Two machines running the same logic may diverge due to floating-point differences, thread timing, or iteration order. -- **Debugging is a nightmare**: "It worked on my machine" is the symptom of non-determinism. -- **Branching is impossible**: You can't easily ask "what if?" without copying everything. - - - -**Claude's Take**: These problems aren't theoretical. I've seen debugging sessions where the root cause was "HashMap iteration order changed between runs." Echo's designers got burned by non-determinism and decided: _never again_. - -The last point—"branching is impossible"—stands out. Most engines don't even try to support branching because it feels like a version-control feature, not runtime. Echo treats it as first-class. That's unusual and forward-looking. - - - -### 1.2 Echo's Answer - -Echo treats **state as a typed graph** and **all changes as rewrites**. Each "tick" of the engine: - -1. Proposes a set of rewrites -2. Executes them in **deterministic order** -3. 
Emits **cryptographic hashes** of the resulting state - -This means: - -- **Same inputs → Same outputs** (always, on any machine) -- **State is verifiable** (hashes prove correctness) -- **Replay is trivial** (patches are prescriptive) -- **Branching is free** (copy-on-write snapshots) - -### 1.3 Core Design Principles - -```text -┌─────────────────────────────────────────────────────────────────┐ -│ ECHO'S THREE PILLARS │ -├─────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ -│ │ DETERMINISM │ │ PROVENANCE │ │ TOOLING │ │ -│ │ FIRST │ │ YOU CAN │ │ AS FIRST │ │ -│ │ │ │ TRUST │ │ CLASS │ │ -│ ├─────────────────┤ ├─────────────────┤ ├─────────────────┤ │ -│ │ Same inputs │ │ Snapshots are │ │ Graphs stream │ │ -│ │ always produce │ │ content- │ │ over canonical │ │ -│ │ same hashes │ │ addressed │ │ wire protocol │ │ -│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────┘ -``` - - - -**Claude's Take**: "Tooling as first-class" is the quiet win here. Most engines treat debugging, replay, and visualization as afterthoughts—bolted on after the core. Echo inverts this: the wire protocol, hash scheme, and diff format are designed so tools can exist. - -I've read a lot of engine architectures. This level of tooling intent is rare. It also explains why Echo can have a separate `warp-viewer` crate that works without heroic reverse-engineering. - - - ---- - -## 2. The Big Picture: Architecture Overview - -### 2.1 System Layers - -Echo is organized into distinct layers, each with a specific responsibility: - -![Diagram 1](diagrams/tour-01.svg) - - - -**Claude's Take**: A _clean_ layer cake. Each layer talks only to its neighbors—no "Layer 5 reaching down to Layer 1 for performance reasons." That discipline is hard to maintain, and I respect it. - -The `WSC Format` at Layer 2 caught my eye. 
It's Echo's custom columnar storage format—and before you ask "why not just use Arrow or Parquet?"—I'll spoil it: WSC is designed for mmap-friendly, zero-copy reads where every row is 8-byte aligned and you can binary-search directly into the file. It's specialized for _exactly this use case_. Sometimes NIH syndrome is justified. - - - -### 2.2 Crate Map - -| Crate | Purpose | -| ---------------------- | ---------------------------------------------- | -| `warp-core` | The deterministic rewrite engine (the "brain") | -| `echo-graph` | Renderable graph types + diff operations | -| `echo-session-proto` | Wire protocol (canonical CBOR framing) | -| `echo-session-service` | Headless Unix-socket hub for tools | -| `echo-session-client` | Client helpers for connecting to the hub | -| `warp-viewer` | Native WGPU viewer for visualizing graphs | - -### 2.3 Data Flow Overview - -![Diagram 2](diagrams/tour-02.svg) - - - -**Claude's Take**: Notice how the Engine talks to itself before touching the Store? That's the commit protocol. The Engine is _paranoid_ about mutations—it queues intentions, validates them, and only then touches state. If you're used to "just mutate it directly" game engines, this will feel ceremonial. The ceremony is the point. - - - ---- - -## 3. Core Concepts: The WARP Graph - -### 3.1 What is a WARP Graph? - -A WARP (**W**orldline **A**lgebra for **R**ecursive **P**rovenance) graph is Echo's fundamental data structure. It's not just a graph—it's a graph with **deterministic semantics**. - -![Diagram 3](diagrams/tour-03.svg) - - - -**Claude's Take**: The name "WARP" is doing a lot of work here. "Worldline" evokes physics—specifically, the path an object traces through spacetime. In Echo, a node's "worldline" is its history of states across ticks. "Recursive Provenance" means you can always ask "where did this value come from?" and trace it back through the graph's history. - -Is the name a bit grandiose for what amounts to "typed graph with audit trail"? 
Maybe. But I've seen worse acronyms in this industry. - - - -### 3.2 Two-Plane Architecture - -Echo separates structure from data via the **Two-Plane Model** (ADR-0001): - -| Plane | Contains | Purpose | -| ------------------ | ------------------------- | ------------------------------------- | -| **Skeleton** | Nodes + Edges (structure) | Fast traversal, deterministic hashing | -| **Attachment (α)** | Typed payloads | Domain-specific data | - -**Why separate them?** - -```text -┌────────────────────────────────────────────────────────────────────┐ -│ SKELETON PLANE (Structure) │ -│ │ -│ ┌─────┐ edge:link ┌─────┐ │ -│ │ N1 │─────────────────▶│ N2 │ │ -│ └─────┘ └─────┘ │ -│ │ │ │ -│ │ edge:child │ edge:ref │ -│ ▼ ▼ │ -│ ┌─────┐◀─────────────────────┘ │ -│ │ N3 │ │ -│ └─────┘ │ -│ │ -├────────────────────────────────────────────────────────────────────┤ -│ ATTACHMENT PLANE (Payloads) │ -│ │ -│ N1.α["title"] = Atom { type: "string", bytes: "Home" } │ -│ N2.α["url"] = Atom { type: "string", bytes: "/page/b" } │ -│ N3.α["body"] = Atom { type: "html", bytes: "
...
" } │ -│ │ -└────────────────────────────────────────────────────────────────────┘ -``` - -**Key insight**: Skeleton rewrites **never decode attachments**. This keeps the hot path fast and deterministic. - - - -**Claude's Take**: This is where Echo gets clever. The Skeleton plane only contains node IDs, edge IDs, and type tags—all fixed-size, all byte-comparable. You can compute the entire state hash without ever deserializing a single JSON blob, HTML string, or texture. - -The Attachment plane (they call it "α" because of course they do) holds the actual domain data. It participates in hashing but doesn't affect traversal. This separation means you can have a 10MB texture attached to a node and still iterate the graph at full speed. - -I've seen similar ideas in ECS architectures, but usually the separation is "components vs. systems." Echo's split is "structure vs. data," which is subtly different and, I think, more principled. - - - -### 3.3 Node and Edge Identity - -Every node and edge has a **32-byte identifier**: - -```rust -pub struct NodeId([u8; 32]); // Content-addressed or assigned -pub struct EdgeId([u8; 32]); // Unique edge identifier -``` - -These IDs are: - -- **Deterministic**: Same content → same ID (when content-addressed) -- **Sortable**: Lexicographic ordering enables deterministic iteration -- **Hashable**: Participate in state root computation - -### 3.4 WarpInstances: Graphs Within Graphs - -Echo supports **descended attachments**—embedding entire graphs within attachment slots: - -![Diagram 4](diagrams/tour-04.svg) - -This enables "WARPs all the way down"—recursive composition while maintaining determinism. - - - -**Claude's Take**: WarpInstances are _wild_. You can have a node whose attachment slot contains... another entire graph. And that graph can have nodes whose attachment slots contain... more graphs. It's turtles, but the turtles are graphs. - -Why would you want this? Think of a game with procedurally generated dungeons. 
Each dungeon could be its own WarpInstance, loaded on demand, with its own tick history and state root. The player character is in the "outer" instance; stepping through a portal descends into the "inner" one. - -I don't know if Echo actually uses this feature yet, but the architecture supports it cleanly. That's design for the future without overengineering the present. - - - ---- - -## 4. The Engine: Heart of Echo - -### 4.1 The Engine Struct - -The `Engine` is Echo's central orchestrator. Located in `crates/warp-core/src/engine_impl.rs`: - -```rust -pub struct Engine { - state: WarpState, // Multi-instance graph state - rules: HashMap, // Registered rewrite rules - scheduler: DeterministicScheduler, // Deterministic ordering - bus: MaterializationBus, // Output channels - history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>, - tx_counter: u64, // Transaction counter - live_txs: BTreeSet, // Active transactions - // ... more fields -} -``` - - - -**Claude's Take**: A few things jump out here: - -1. **`rules: HashMap`** — Wait, HashMap? Isn't that non-deterministic? It is! But notice: this is for _looking up_ rules by ID, not for _iterating_. The iteration order is determined by the `scheduler`, which is explicitly deterministic. The HashMap is fine because rule IDs are stable. - -2. **`history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>`** — The engine keeps its entire history in memory? That seems expensive. I suspect this is configurable, or there's a garbage collection pass I haven't found yet. For long-running simulations, unbounded history would be a problem. - -3. **`BTreeSet` for live transactions** — BTreeSet, not HashSet. They're _really_ committed to determinism. Even the set of "which transactions are in-flight" is stored in sorted order. 
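That `BTreeSet` point is easy to demonstrate: iteration order over a `BTreeSet` depends only on the element ordering, never on insertion history. A minimal sketch — the `TxId` newtype here is an illustrative stand-in, not the engine's actual definition:

```rust
use std::collections::BTreeSet;

// Illustrative stand-in for the engine's transaction id.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct TxId(u64);

// Collect ids in whatever order they arrive; iterate in sorted order.
fn live_tx_order(insertions: &[u64]) -> Vec<u64> {
    let live: BTreeSet<TxId> = insertions.iter().copied().map(TxId).collect();
    live.into_iter().map(|tx| tx.0).collect()
}

fn main() {
    // Two different insertion histories, one canonical iteration order.
    assert_eq!(live_tx_order(&[3, 1, 2]), live_tx_order(&[2, 3, 1]));
    assert_eq!(live_tx_order(&[3, 1, 2]), vec![1, 2, 3]);
}
```

A `HashSet` gives no such guarantee, which is exactly why it would be the wrong container for hash-relevant state.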
- - -### 4.2 Construction - -The engine is built via the `EngineBuilder`: - -```rust -let engine = EngineBuilder::new(store, root_node_id) - .with_policy_id(1) - .with_telemetry(telemetry) - .build(); -``` - -**What happens during construction:** - -![Diagram 5](diagrams/tour-05.svg) - -### 4.3 Rewrite Rules - -Rules are the atoms of change in Echo. Each rule has three functions: - -```rust -pub struct RewriteRule { - pub name: String, - pub matcher: MatchFn, // Does this rule apply? - pub executor: ExecuteFn, // What changes to make - pub footprint: FootprintFn, // What resources are touched - pub policy: ConflictPolicy, // What to do on conflict -} - -// Function signatures (Phase 5 BOAW model): -type MatchFn = fn(GraphView, &NodeId) -> bool; -type ExecuteFn = fn(GraphView, &NodeId, &mut TickDelta); -type FootprintFn = fn(GraphView, &NodeId) -> Footprint; -``` - -**Critical constraint**: Executors receive a **read-only** `GraphView` and emit changes to a `TickDelta`. They **never** mutate the graph directly. - - - -**Claude's Take**: The `FootprintFn` is the secret sauce. Before executing a rule, Echo calls this function to ask: "What nodes, edges, and attachments will you touch?" The footprint is a _conservative estimate_—you must declare everything you _might_ read or write. - -This enables Echo's parallel execution model. If two rules have non-overlapping footprints, they can execute in parallel, in any order, and the result is guaranteed identical. If footprints overlap, they're sequenced deterministically. - -The burden on the rule author is significant: you must declare your footprint accurately, or you'll get either conflicts (declared overlap when there was none) or silent bugs (undeclared overlap that corrupts state). This is a sharp edge in the API. - - - -**Runtime enforcement**: Footprint declarations are no longer just documentation or planning artifacts. 
They are actively enforced at runtime by `FootprintGuard` (see [Section 6.6](#66-runtime-enforcement-footprintguard)) when `footprint_enforce_release` is enabled or in debug builds, and can be disabled via the `unsafe_graph` escape hatch. The guard catches: - -- **Undeclared reads**: accessing nodes or attachments not declared in the footprint. Node-based edge traversal via `GraphView::edges_from()` checks `n_read` (reading adjacency from a node), while direct edge-by-ID operations like `has_edge()` and `edge_attachment()` check `e_read`. Attachment reads check `a_read`. -- **Undeclared writes**: emitting ops that target nodes, edges, or attachments not in `n_write`/`e_write`/`a_write` -- **Cross-warp emissions**: an op targets a different warp than the rule's execution scope -- **Unauthorized instance ops**: `ExecItemKind::User` rules emitting `UpsertWarpInstance` or `DeleteWarpInstance` -- **Attachment write violations**: `OpenPortal` is treated as an attachment write by `FootprintGuard` and requires the target node in `n_write` -- **Adjacency violations**: edge mutations where the `from` node is missing from `n_write` - -This means an inaccurate footprint is no longer a silent bug—it's a hard failure whenever enforcement is active. - -### 4.4 GraphView: Read-Only Access - -The `GraphView` enforces BOAW's immutability contract: - -```rust -pub struct GraphView<'a> { - store: &'a GraphStore, - warp_id: WarpId, -} - -impl<'a> GraphView<'a> { - pub fn node(&self, id: &NodeId) -> Option<&NodeRecord>; - pub fn edges_from(&self, id: &NodeId) -> impl Iterator; - pub fn node_attachment(&self, id: &NodeId, key: &str) -> Option<&AttachmentValue>; - // ... read-only methods only -} -``` - -**No `DerefMut`, no `AsRef`, no interior mutability.** This is enforced at the type level. - - - -**Claude's Take**: I went looking for escape hatches here. `RefCell`? No. `UnsafeCell`? No. `Arc>`? No. The `GraphView` is genuinely immutable by construction. 
- -This is Rust at its best: the borrow checker prevents you from shooting yourself in the foot. In C++, you'd need discipline and code review to enforce "executors don't mutate the graph." In Rust, it's just... not possible. The types don't allow it. - - - ---- - -## 5. The Tick Pipeline: Where Everything Happens - -### 5.1 Overview - -A "tick" is one complete cycle of the engine. It has five phases: - -![Diagram 6](diagrams/tour-06.svg) - - - -**Claude's Take**: The "Commit" phase has five sub-steps. _Five_. This is where I started to appreciate how much thought went into this system. Let me summarize what each does: - -1. **Drain**: Pull all pending rewrites from the scheduler in canonical order -2. **Reserve**: Check footprints for conflicts, accept or reject each rewrite -3. **Execute**: Run the accepted rewrites (this is where parallelism happens) -4. **Merge**: Combine all `TickDelta` outputs into a single canonical operation list -5. **Finalize**: Apply the merged operations to produce the new state - -The reservation phase is particularly clever. It's like a two-phase commit: first you "reserve" your footprint (claim your lock), then you execute. If your footprint conflicts with an already-reserved footprint, you're rejected. No execution happens until all accepted rewrites have been validated. - - - -### 5.2 Phase 1: Begin Transaction - -```rust -let tx = engine.begin(); -``` - -**What happens:** - -1. Increment `tx_counter` (wrapping to avoid 0) -2. Add `TxId` to `live_txs` set -3. 
Return opaque transaction identifier - -```text -┌─────────────────────────────────────────────────┐ -│ engine.begin() │ -├─────────────────────────────────────────────────┤ -│ tx_counter: 0 → 1 │ -│ live_txs: {} → {TxId(1)} │ -│ returns: TxId(1) │ -└─────────────────────────────────────────────────┘ -``` - -### 5.3 Phase 2: Apply Rules - -```rust -engine.apply(tx, "rule_name", &scope_node_id); -``` - -**What happens:** - -![Diagram 7](diagrams/tour-07.svg) - -**The Footprint**: A declaration of what resources the rule will read and write: - -```rust -pub struct Footprint { - pub n_read: BTreeSet, // Nodes to read - pub n_write: BTreeSet, // Nodes to write - pub e_read: BTreeSet, // Edges to read - pub e_write: BTreeSet, // Edges to write - pub a_read: BTreeSet, // Attachments to read - pub a_write: BTreeSet, // Attachments to write - // ... ports, factor_mask -} -``` - -**Scheduler deduplication**: If the same `(scope_hash, rule_id)` is applied multiple times, **last wins**. This enables idempotent retry semantics. - -### 5.4 Phase 3: Commit (The Heart of Determinism) - -```rust -let (snapshot, receipt, patch) = engine.commit_with_receipt(tx); -``` - -This is where Echo's magic happens. Let's break it down: - -#### 5.4.1 Drain - -The scheduler drains all pending rewrites in **canonical order**: - -```rust -// RadixScheduler uses O(n) LSD radix sort -// 20 passes: 2 nonce + 2 rule_id + 16 scope_hash (16-bit digits) -let rewrites = scheduler.drain_for_tx(tx); // Vec in canonical order -``` - -**Ordering key**: `(scope_hash[0..32], rule_id, nonce)` - -This ensures the **same rewrites always execute in the same order**, regardless of when they were applied. - - - -**Claude's Take**: Radix sort! They're using radix sort for the scheduler drain. Not quicksort, not merge sort—radix sort. - -Why? Because radix sort is _stable_ and _deterministic_ by construction. Quicksort's behavior depends on pivot selection, which can vary. 
Merge sort is deterministic, but radix sort is faster for fixed-size keys. Since the ordering key is exactly 36 bytes (32-byte scope hash + 2-byte rule ID + 2-byte nonce), radix sort is perfect. - -This is the kind of detail that separates "deterministic by accident" from "deterministic by design." - - - -#### 5.4.2 Reserve (Independence Check) - -For each rewrite in canonical order: - -![Diagram 8](diagrams/tour-08.svg) - -**Conflict detection**: Uses `GenSet` for O(1) lookups: - -- Read-read overlap: **allowed** -- Write-write overlap: **conflict** -- Read-write overlap: **conflict** - -#### 5.4.3 Execute (Parallel, Lockless) - -Accepted rewrites execute against the **read-only snapshot**: - -```rust -for rewrite in accepted { - let rule = &rules[rewrite.rule_id]; - let view = GraphView::new(&state, rewrite.warp_id); - - // Executor reads from view, emits to delta - (rule.executor)(view, &rewrite.scope, &mut delta); -} -``` - -**Critical**: `GraphView` is immutable. `TickDelta` accumulates operations: - -```rust -pub struct TickDelta { - ops: Vec<(WarpOp, OpOrigin)>, -} - -// Operations emitted during execution: -delta.emit(WarpOp::UpsertNode { id, record }); -delta.emit(WarpOp::UpsertEdge { from, edge }); -delta.emit(WarpOp::DeleteNode { id }); -delta.emit(WarpOp::SetAttachment { node, key, value }); -``` - -#### 5.4.4 Merge (Canonical Sort) - -All operations are sorted into **canonical replay order**: - -```rust -// Sort by (WarpOpKey, OpOrigin) -ops.sort_by_key(|(op, origin)| (op.sort_key(), origin.clone())); - -// Deduplicate identical ops -// Error on conflicting ops (footprint model violation) -``` - -**Conflict handling**: If two rewrites wrote **different values** to the same key, that's a bug in the footprint model. Echo errors loudly. 
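The merge contract — sort canonically, collapse exact duplicates, fail loudly on conflicting writes — can be sketched with simplified ops. This is an illustration only, not the engine's real `WarpOp`/`OpOrigin` machinery; keys and values are plain strings and integers here:

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for a merged op: a write of `value` at `key`.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Op {
    key: String, // stands in for the canonical sort key
    value: u64,
}

/// Canonical merge: sort, collapse exact duplicates, and reject two
/// different values written to the same key (a footprint-model violation).
fn merge(mut ops: Vec<Op>) -> Result<Vec<Op>, String> {
    ops.sort();  // canonical replay order: (key, value)
    ops.dedup(); // identical ops from different workers collapse to one
    let mut seen: BTreeMap<String, u64> = BTreeMap::new();
    for op in &ops {
        if let Some(prev) = seen.insert(op.key.clone(), op.value) {
            if prev != op.value {
                return Err(format!("conflicting writes to {}", op.key));
            }
        }
    }
    Ok(ops)
}

fn main() {
    // Exact duplicates are fine; they merge to a single op.
    let merged = merge(vec![
        Op { key: "b".into(), value: 2 },
        Op { key: "a".into(), value: 1 },
        Op { key: "a".into(), value: 1 },
    ])
    .unwrap();
    assert_eq!(merged.len(), 2);
    assert_eq!(merged[0].key, "a"); // canonical order, independent of input order

    // Different values at the same key are a loud error, not a silent pick.
    assert!(merge(vec![
        Op { key: "a".into(), value: 1 },
        Op { key: "a".into(), value: 2 },
    ])
    .is_err());
}
```

Because the output depends only on the sorted set of ops, any execution order that produced the same ops yields the same merged list.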
- -#### 5.4.5 Finalize - -Apply the merged delta to produce the new state: - -```rust -for op in merged_ops { - match op { - WarpOp::UpsertNode { id, record } => state.insert_node(id, record), - WarpOp::UpsertEdge { from, edge } => state.insert_edge(from, edge), - WarpOp::DeleteNode { id } => state.delete_node_isolated(id)?, // rejects if edges exist - WarpOp::SetAttachment { node, key, value } => state.set_attachment(node, key, value), - // ... - } -} -``` - -> **Note:** `DeleteNode` requires the node to be _isolated_ (no incident edges). -> Callers must emit explicit `DeleteEdge` ops before `DeleteNode`. This ensures -> that WarpOps explicitly describe all mutations—no hidden cascade side effects. - -### 5.5 Phase 4: Hash Computation - -#### State Root (BLAKE3) - -The state root is computed via **deterministic BFS** over reachable nodes: - -![Diagram 9](diagrams/tour-09.svg) - -**Encoding** (architecture-independent): - -- All IDs: raw 32 bytes -- Counts: u64 little-endian -- Payloads: 1-byte tag + type_id[32] + u64 LE length + bytes - -#### Commit Hash (v2) - -```rust -commit_hash = BLAKE3( - version_tag[4] || // Protocol version - parents[] || // Parent commit hashes - state_root[32] || // Graph-only hash - patch_digest[32] || // Merged ops digest - policy_id[4] // Policy identifier -) -``` - - - -**Claude's Take**: The commit hash includes a `policy_id`. This is subtle but important: two engines with different policies could produce the same state but different commit hashes. Why? Because the _process_ matters, not just the result. - -Imagine one policy allows rules to run in parallel; another requires sequential execution. They might produce identical graphs, but the commit hashes differ because the policies differ. This prevents accidentally mixing outputs from incompatible engine configurations. - -It's defensive design: "Trust, but verify—and make verification easy." 
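The layout above is concrete enough to sketch. The sketch below substitutes the standard library's `DefaultHasher` for BLAKE3 (so it runs without external crates) and shows only the byte-concatenation discipline — including how `policy_id` participation makes two differently-configured engines produce different commit hashes; the `v002` tag bytes are made up for the example:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// DefaultHasher stands in for BLAKE3 here; only the input layout matters.
fn commit_hash(
    version_tag: [u8; 4],
    parents: &[[u8; 32]],
    state_root: [u8; 32],
    patch_digest: [u8; 32],
    policy_id: u32,
) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(&version_tag);
    for parent in parents {
        h.write(parent); // parent commit hashes, in order
    }
    h.write(&state_root);
    h.write(&patch_digest);
    h.write(&policy_id.to_le_bytes());
    h.finish()
}

fn main() {
    let root = [1u8; 32];
    let patch = [2u8; 32];
    let a = commit_hash(*b"v002", &[], root, patch, 1);
    // Same inputs, same hash.
    assert_eq!(a, commit_hash(*b"v002", &[], root, patch, 1));
    // Same state, different policy: different commit hash.
    assert_ne!(a, commit_hash(*b"v002", &[], root, patch, 2));
}
```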
- - - -### 5.6 Phase 5: Record to History - -```rust -history.push(( - Snapshot { hash: commit_hash, state_root, parents, ... }, - TickReceipt { applied, rejected, ... }, - WarpTickPatchV1 { ops, in_slots, out_slots, patch_digest, ... } -)); -``` - -The patch is **prescriptive**: it can be replayed without re-matching to reproduce the exact same state. - ---- - -## 6. Parallel Execution: BOAW (Bag of Autonomous Workers) - -### 6.1 What is BOAW? - -BOAW stands for **Bag of Autonomous Workers**. It's Echo's parallel execution architecture that enables: - -- **Massive parallelism** without locks -- **Deterministic convergence** across platforms -- **Worker-count invariance** (same result with 1 or 32 workers) - -### 6.2 The Key Insight - -```text -┌──────────────────────────────────────────────────────────────────┐ -│ THE BOAW INSIGHT │ -├──────────────────────────────────────────────────────────────────┤ -│ │ -│ Traditional parallelism: │ -│ "Make execution order deterministic" → Complex, slow │ -│ │ -│ BOAW parallelism: │ -│ "Let execution order vary, make MERGE deterministic" → Fast! │ -│ │ -│ Workers race freely → Each produces a TickDelta │ -│ Merge step sorts all deltas → Canonical output │ -│ │ -└──────────────────────────────────────────────────────────────────┘ -``` - - - -**Claude's Take**: This is the insight that makes Echo work. Most parallel systems try to _control_ the execution order—barriers, locks, atomic sequences. BOAW says: "Forget it. Let chaos reign during execution. We'll sort it out in the merge." - -It's like MapReduce: the map phase runs in any order; the reduce phase (merge) produces the canonical result. But unlike MapReduce, Echo operates on a graph with complex dependencies. The footprint model makes this possible: by declaring what you'll touch before executing, you enable the merge to validate that no conflicts occurred. - -If this sounds too good to be true, it mostly is—_if_ you get the footprints wrong.
The system is only as deterministic as your footprint declarations. Lie to the footprint system, and you'll get non-determinism. - - - -### 6.3 Execution Strategies - -#### Phase 6A: Stride Partitioning (Legacy) - -```text -Worker 0: items[0], items[4], items[8], ... -Worker 1: items[1], items[5], items[9], ... -Worker 2: items[2], items[6], items[10], ... -Worker 3: items[3], items[7], items[11], ... -``` - -**Problem**: Poor cache locality—related items scatter across workers. - -#### Phase 6B: Virtual Shards (Current Default) - -```rust -const NUM_SHARDS: usize = 256; // Protocol constant (frozen) - -fn shard_of(node_id: &NodeId) -> usize { - let bytes = node_id.as_bytes(); - let val = u64::from_le_bytes(bytes[0..8].try_into().unwrap()); - (val & 255) as usize // Fast modulo via bitmask -} -``` - -![Diagram 10](diagrams/tour-10.svg) - -**Benefits**: - -- Items with same `shard_of(scope)` processed together → better cache hits -- Workers dynamically claim shards via atomic counter → load balancing -- Determinism enforced by merge, not execution order - - - -**Claude's Take**: 256 shards is an interesting choice. It's small enough that the atomic counter for work-stealing doesn't become a bottleneck, but large enough to distribute work across many cores. - -The `& 255` bitmask is a micro-optimization I appreciate. It's equivalent to `% 256` but faster because 256 is a power of 2. This is the kind of low-level detail that adds up when you're processing millions of items per second. - -One thing I wondered: what if your NodeIds are clustered? Like, if all recent nodes have IDs starting with `0x00...`, they'd all end up in shard 0. I suspect content-addressed IDs (via BLAKE3) distribute uniformly, so this isn't a problem in practice. But for user-assigned IDs, you'd need to be careful.
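A self-contained sketch of the shard math, with `NodeId` replaced by a bare `[u8; 32]` (an assumption for illustration). It spells out the slice-to-array conversion that `from_le_bytes` needs, and checks that the bitmask agrees with the modulo it replaces.

```rust
const NUM_SHARDS: usize = 256; // Mirrors the protocol constant above.

/// Shard selection over a 32-byte id. `from_le_bytes` wants a `[u8; 8]`,
/// not a slice, hence the `try_into`.
fn shard_of(id_bytes: &[u8; 32]) -> usize {
    let val = u64::from_le_bytes(id_bytes[0..8].try_into().unwrap());
    (val & 255) as usize // same as `val % 256`, since 256 is a power of two
}

fn main() {
    // The bitmask and the modulo agree for every input.
    for i in 0..1024u64 {
        let mut id = [0u8; 32];
        id[0..8].copy_from_slice(&i.to_le_bytes());
        assert_eq!(shard_of(&id), (i % 256) as usize);
        assert!(shard_of(&id) < NUM_SHARDS);
    }
}
```

Because only the low byte of the little-endian prefix survives the mask, clustered ids really do collapse into one shard, which is exactly the caveat raised above for user-assigned ids.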
- - - -### 6.4 The Execution Loop - -```rust -pub fn execute_parallel_sharded( - view: GraphView<'_>, - items: &[ExecItem], - workers: usize, -) -> Vec<TickDelta> { - // Partition items into 256 shards - let shards = partition_into_shards(items); - - // Atomic counter for work-stealing - let next_shard = AtomicUsize::new(0); - - std::thread::scope(|s| { - let handles: Vec<_> = (0..workers).map(|_| { - s.spawn(|| { - let mut delta = TickDelta::new(); - loop { - // Claim next shard atomically - let shard_id = next_shard.fetch_add(1, Ordering::Relaxed); - if shard_id >= NUM_SHARDS { break; } - - // Execute all items in this shard - for item in &shards[shard_id].items { - (item.exec)(view.clone(), &item.scope, &mut delta); - } - } - delta - }) - }).collect(); - - handles.into_iter().map(|h| h.join().unwrap()).collect() - }) -} -``` - -### 6.5 The Canonical Merge - -```rust -pub fn merge_deltas(deltas: Vec<TickDelta>) -> Result<Vec<WarpOp>, MergeConflict> { - // 1. Flatten all ops from all workers - let mut all_ops: Vec<(WarpOpKey, OpOrigin, WarpOp)> = deltas - .into_iter() - .flat_map(|d| d.ops_with_origins()) - .collect(); - - // 2. Sort canonically by (key, origin) - all_ops.sort_by_key(|(key, origin, _)| (key.clone(), origin.clone())); - - // 3. Deduplicate and detect conflicts - let mut result = Vec::new(); - for group in all_ops.chunk_by(|(k1, _, _), (k2, _, _)| k1 == k2) { - let first = &group[0].2; - if group.iter().all(|(_, _, op)| op == first) { - result.push(first.clone()); // All identical: keep one - } else { - return Err(MergeConflict { writers: group.iter().map(|(_, o, _)| o).collect() }); - } - } - - Ok(result) -} -``` - -**Key guarantee**: Conflicts are bugs. If footprints were correct, no two rewrites should write different values to the same key. - -### 6.6 Runtime Enforcement: FootprintGuard - -Footprint declarations aren't just planning artifacts—they're enforced at runtime by `FootprintGuard`.
The guard operates per-`ExecItem` within a `WorkUnit`, catching violations before they can corrupt state. - -**Read enforcement**: `GraphView::new_guarded()` wraps the standard `GraphView` and intercepts accessor calls (`node()`, `edges_from()`, `node_attachment()`, etc.). Any access to a node, edge, or attachment not listed in the item's declared footprint triggers an immediate violation. - -**Write enforcement**: After each executor runs (inside a `catch_unwind` boundary), the guard calls `check_op()` on every emitted `WarpOp`. This post-hoc validation catches: - -- Ops targeting nodes/edges/attachments not in the declared write sets -- Cross-warp emissions (an op targets a different warp than the guard's scope) -- Unauthorized instance ops (non-system rules emitting `UpsertWarpInstance` or `DeleteWarpInstance`) -- Adjacency violations (edge mutations where the `from` node is not in `n_write`) - -**Violation payloads**: Violations produce typed `FootprintViolation` panic payloads, making them distinguishable from other panics and enabling structured error reporting. - -**cfg-gating**: Enforcement is active in debug builds and in release builds compiled with the `footprint_enforce_release` feature. It is disabled entirely when the `unsafe_graph` feature is enabled (for benchmarks or trusted contexts where the overhead is unacceptable). - ---- - -## 7. 
Storage & Hashing: Content-Addressed Truth - -### 7.1 The GraphStore - -Located in `crates/warp-core/src/graph.rs`: - -```rust -pub struct GraphStore { - pub(crate) warp_id: WarpId, - pub(crate) nodes: BTreeMap<NodeId, NodeRecord>, - pub(crate) edges_from: BTreeMap<NodeId, BTreeSet<EdgeRecord>>, - pub(crate) edges_to: BTreeMap<NodeId, BTreeSet<EdgeId>>, // Reverse index - pub(crate) node_attachments: BTreeMap<AttachmentKey, AttachmentValue>, - pub(crate) edge_attachments: BTreeMap<AttachmentKey, AttachmentValue>, - pub(crate) edge_index: BTreeMap<EdgeId, NodeId>, // Edge → Source - pub(crate) edge_to_index: BTreeMap<EdgeId, NodeId>, // Edge → Target -} -``` - -**Why BTreeMap everywhere?** - -- Deterministic iteration order (sorted by key) -- Enables canonical hashing -- No HashMap ordering surprises - - - -**Claude's Take**: Seven BTreeMaps! This is the price of determinism. Each of these maps is sorted, which means: - -1. Insertions are O(log n) instead of O(1) amortized for HashMap -2. Iteration is always in key order, so hashing is deterministic -3. Memory overhead is slightly higher due to tree structure - -Is it worth it? For Echo's use case, absolutely. The alternative—using HashMap and then sorting before each hash—would be slower and more error-prone. By paying the cost upfront (O(log n) writes), you get guaranteed correctness. - -The multiple indices (`edges_from`, `edges_to`, `edge_index`, `edge_to_index`) look redundant, but they enable O(log n) lookups from any direction. Want all edges _from_ a node? `edges_from[node_id]`. Want all edges _to_ a node? `edges_to[node_id]`. This is a classic space-time tradeoff. - - - -### 7.2 WSC: Write-Streaming Columnar Format - -For efficient snapshots, Echo uses WSC—a zero-copy, mmap-friendly format: - -```text -┌─────────────────────────────────────────────────────────────────┐ -│ WSC SNAPSHOT FILE │ -├─────────────────────────────────────────────────────────────────┤ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ NODES TABLE (sorted by NodeId) │ │ -│ │ ┌──────────┬───────────┬──────────┐ │ │ -│ │ │ NodeRow │ NodeRow │ NodeRow │ ...
│ │ -│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │ -│ │ └──────────┴───────────┴──────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ EDGES TABLE (sorted by EdgeId) │ │ -│ │ ┌───────────┬───────────┬───────────┐ │ │ -│ │ │ EdgeRow │ EdgeRow │ EdgeRow │ ... │ │ -│ │ │ 128 bytes │ 128 bytes │ 128 bytes │ │ │ -│ │ └───────────┴───────────┴───────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ OUT_INDEX (per-node → range into out_edges) │ │ -│ │ ┌────────────────┬────────────────┐ │ │ -│ │ │ Range (16 B) │ Range (16 B) │ ... │ │ -│ │ └────────────────┴────────────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ BLOB ARENA (variable-length data) │ │ -│ │ Referenced by (offset, length) tuples │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -└─────────────────────────────────────────────────────────────────┘ -``` - -**Row types** (8-byte aligned): - -- `NodeRow`: 64 bytes (node_id[32] + node_type[32]) -- `EdgeRow`: 128 bytes (edge_id[32] + from[32] + to[32] + type[32]) -- `Range`: 16 bytes (start_le[8] + len_le[8]) - - - -**Claude's Take**: WSC is gloriously simple. Fixed-size rows, sorted tables, binary search for lookups. No compression, no Parquet-style encoding tricks—just flat bytes on disk that you can mmap and use directly. - -The trade-off is size: WSC files are larger than compressed formats. But the benefit is speed: you can find node #1000 by seeking to `offset + 1000 * 64` and reading 64 bytes. No decompression, no index lookups, no memory allocation. - -For Echo's use case (local caching, fast restarts), this makes sense. You're not storing petabytes; you're storing the state of a single simulation that fits in RAM. 
Optimize for access latency, not storage cost. - - - -### 7.3 Copy-on-Write Semantics - -**Rule**: During a tick, nothing shared is mutated. - -![Diagram 11](diagrams/tour-11.svg) - -**Structural sharing**: Only changed segments are newly written. Unchanged data is referenced by hash. - -### 7.4 Hash Algorithm Details - -**State Root** (BLAKE3, v2): - -```text -state_root = BLAKE3( - root_id[32] || - instance_count[8, LE] || - for each instance in BTreeMap order: - warp_id_len[8, LE] || - warp_id_bytes || - node_count[8, LE] || - for each node in ascending NodeId order: - node_id[32] || - node_type[32] || - for each outbound edge in ascending EdgeId order: - edge_id[32] || - edge_type[32] || - to_node[32] || - for each attachment: - key_len[8, LE] || - key_bytes || - type_id[32] || - value_len[8, LE] || - value_bytes -) -``` - - - -**Claude's Take**: The hashing is _exhaustive_. Every node, every edge, every attachment, every byte—all streamed through BLAKE3 in a defined order. There's no "we'll just hash the IDs and trust the content"—everything participates. - -This is expensive! But it's the foundation of Echo's trust model. If two engines produce the same state root, they have the same state. Period. No exceptions, no edge cases. - -The `version_tag` in the commit hash is a nice touch. If Echo ever changes its hashing algorithm (say, BLAKE3 v2 to v3), old and new hashes won't collide. Protocol evolution is built in. - - - ---- - -## 8. Worked Example: Tracing a Link Click - -Let's trace what happens when a user clicks a link in a hypothetical WARP-based navigation system. - -### 8.1 The Scenario - -Imagine a simple site with two pages: - -![Diagram 12](diagrams/tour-12.svg) - -**User clicks the link**: This should navigate from Home to About. - - - -**Claude's Take**: This example is deceptively simple—two pages, one link—but it exercises the entire engine: intent ingestion, rule matching, footprint validation, execution, merge, hashing, and emission. 
- -I'll add my notes at the interesting points. If you're skimming, watch for where the determinism guarantees kick in. - - - -### 8.2 Step 1: Intent Ingestion - -The click is captured by the viewer and converted to an **intent**: - -```rust -// In the viewer: -let intent = NavigateIntent { - target_page: about_node_id, - timestamp: deterministic_tick, -}; -let intent_bytes = canonical_encode(&intent); - -// Send to engine: -engine.ingest_intent(intent_bytes); -``` - -**What happens inside `ingest_intent`**: - -![Diagram 13](diagrams/tour-13.svg) - -### 8.3 Step 2: Begin Transaction - -```rust -let tx = engine.begin(); // tx = TxId(1) -``` - -### 8.4 Step 3: Dispatch Intent - -```rust -engine.dispatch_next_intent(tx); -``` - -**What happens**: - -![Diagram 14](diagrams/tour-14.svg) - -### 8.5 Step 4: Rule Matching - -The `cmd/navigate` rule matches: - -```rust -// Matcher: Does this intent want navigation? -fn navigate_matcher(view: GraphView, scope: &NodeId) -> bool { - match view.node(scope) { - Some(intent) => intent.type_id == "navigate_intent", - None => false, - } -} - -// Footprint: What will we read/write? -fn navigate_footprint(view: GraphView, scope: &NodeId) -> Footprint { - Footprint { - n_read: btreeset![scope.clone(), viewer_node], - n_write: btreeset![], - a_read: btreeset![], - a_write: btreeset![AttachmentKey::new(viewer_node, "current")], - ..Footprint::default() - } -} -``` - - - -**Claude's Take**: Notice the footprint. We declare that we'll: - -- **Read** two nodes: the intent (to get the target) and the viewer (to validate the current page) -- **Write** one attachment: the viewer's `current` attachment - -We're _not_ reading any attachments (we just need the node records), and we're _not_ writing any nodes (the viewer node already exists). This precision matters—if another rule also wants to write `viewer.current`, there's a conflict.
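The conflict rule from the reserve phase (read–read overlap allowed; any overlap involving a write conflicts) can be checked against this footprint. A minimal sketch: `Footprint` here is trimmed to node read/write sets over toy `u64` ids, not the real struct, which also tracks edges and attachments.

```rust
use std::collections::BTreeSet;

/// Trimmed-down footprint: node read/write sets only.
#[derive(Default)]
struct Footprint {
    n_read: BTreeSet<u64>,
    n_write: BTreeSet<u64>,
}

/// Reserve-phase independence: read-read overlap is fine,
/// write-write and read-write overlap are conflicts.
fn independent(a: &Footprint, b: &Footprint) -> bool {
    a.n_write.is_disjoint(&b.n_write)       // write-write: conflict
        && a.n_write.is_disjoint(&b.n_read) // write-read: conflict
        && a.n_read.is_disjoint(&b.n_write) // read-write: conflict
}

fn main() {
    // Node 1 = intent, node 2 = viewer, node 3 = some attachment owner.
    let nav = Footprint { n_read: [1, 2].into(), n_write: [3].into() };
    let other_reader = Footprint { n_read: [1].into(), ..Footprint::default() };
    let other_writer = Footprint { n_write: [3].into(), ..Footprint::default() };

    assert!(independent(&nav, &other_reader)); // read-read: allowed
    assert!(!independent(&nav, &other_writer)); // write-write: conflict
}
```

In the worked example there is only one pending rewrite, so the reserve phase accepts it trivially; the check above is what would reject a second rule racing to write `viewer.current`.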
- - - -The rule is enqueued: - -```text -┌─────────────────────────────────────────────────────────────┐ -│ PendingRewrite │ -├─────────────────────────────────────────────────────────────┤ -│ rule_id: "cmd/navigate" │ -│ scope: 0xABCD... (intent node) │ -│ footprint: { n_read: [intent, viewer], a_write: [current] } │ -│ tx: TxId(1) │ -└─────────────────────────────────────────────────────────────┘ -``` - -### 8.6 Step 5: Commit - -```rust -let (snapshot, receipt, patch) = engine.commit_with_receipt(tx); -``` - -#### 5a. Drain - -```rust -let rewrites = scheduler.drain_for_tx(tx); -// Result: [PendingRewrite { rule: "cmd/navigate", scope: intent_node }] -``` - -#### 5b. Reserve - -```rust -// Check footprint independence -// No conflicts (only one rewrite) -// Accepted! -``` - -#### 5c. Execute - -```rust -fn navigate_executor(view: GraphView, scope: &NodeId, delta: &mut TickDelta) { - // Read the intent to find target - let intent = view.node(scope).unwrap(); - let target_page = intent.attachment("target").unwrap(); - - // Read current viewer state (for logging/validation) - let viewer = view.node(&VIEWER_NODE).unwrap(); - let old_page = viewer.attachment("current"); - - // Emit the change: update viewer's current page - delta.emit(WarpOp::SetAttachment { - node: VIEWER_NODE, - key: "current".into(), - value: AttachmentValue::Atom(AtomPayload { - type_id: "node_ref".into(), - bytes: target_page.to_bytes(), - }), - }); -} -``` - -**TickDelta now contains**: - -```rust -[ - (WarpOp::SetAttachment { - node: viewer_node, - key: "current", - value: about_node_id - }, OpOrigin { intent_id: 1, rule_id: 42, match_ix: 0, op_ix: 0 }) -] -``` - -#### 5d. Merge - -Only one delta, trivial merge: - -```rust -let merged_ops = vec![ - WarpOp::SetAttachment { node: viewer_node, key: "current", value: about_node_id } -]; -``` - -#### 5e. 
Finalize - -Apply to state: - -```rust -state.set_attachment(viewer_node, "current", about_node_id); -``` - -### 8.7 Step 6: Hash Computation - -```rust -// State root: BLAKE3 of reachable graph -let state_root = compute_state_root(&state); // 0x7890... - -// Patch digest: BLAKE3 of merged ops -let patch_digest = compute_patch_digest(&merged_ops); // 0xDEF0... - -// Commit hash -let commit_hash = BLAKE3( - VERSION_TAG || - [parent_hash] || - state_root || - patch_digest || - policy_id -); // 0x1234... -``` - -### 8.8 Step 7: Emit to Tools - -The engine emits a `WarpDiff` to the session hub: - -```rust -WarpDiff { - from_epoch: 0, - to_epoch: 1, - ops: vec![ - WarpOp::SetAttachment { - node: viewer_node, - key: "current", - value: about_node_id - } - ], - state_hash: 0x7890..., -} -``` - -### 8.9 Step 8: Viewer Applies Diff - -The viewer receives the diff and updates its rendering: - -```rust -for op in diff.ops { - match op { - WarpOp::SetAttachment { node, key, value } => { - if node == viewer_node && key == "current" { - // Update the displayed page - self.navigate_to(value.as_node_ref()); - } - } - _ => { /* other ops */ } - } -} -``` - -**Result**: The user sees the About page. - - - -**Claude's Take**: That's a lot of machinery for one link click! But here's what we get for free: - -1. **Replay**: Save the intent bytes, replay them later, get the exact same state hash -2. **Verification**: Any other engine given the same inputs produces the same commit hash -3. **Undo**: The previous snapshot is still in history; restoring is a pointer swap -4. **Branching**: Fork the state, try a different navigation, compare outcomes - -This is the payoff for all the ceremony. A traditional engine would do `viewer.current = about_page` and call it done. Echo builds a _provable audit trail_ around every state change. - - - ---- - -## 9. The Viewer: Observing Echo - -The `warp-viewer` crate provides real-time visualization of WARP graphs. 
It's built on WGPU for cross-platform GPU rendering. - -### 9.1 Architecture - -![Diagram 15](diagrams/tour-15.svg) - -### 9.2 Rendering Pipeline - -1. **Diff arrives** via session client -2. **State cache** updates local graph replica -3. **Layout engine** computes node positions (force-directed) -4. **Renderer** converts graph to GPU buffers -5. **Display** shows updated visualization - - - -**Claude's Take**: The viewer is _reactive_, not poll-based. It subscribes to diffs from the session hub and updates only when state changes. This means zero CPU usage when the graph is idle. - -The force-directed layout is a classic choice for graph visualization. It's not perfect—large graphs can take time to settle—but it's good enough for debugging and exploration. If you need a specific layout, you can inject position attachments and the viewer will respect them. - - - ---- - -## 10. Glossary - -| Term | Definition | -| ------------------ | ------------------------------------------------------------------------- | -| **WARP** | Worldline Algebra for Recursive Provenance—Echo's core graph model | -| **Tick** | One complete cycle of the engine (begin → apply → commit → hash → record) | -| **Snapshot** | Immutable point-in-time capture of graph state | -| **Footprint** | Declaration of resources a rule will read/write | -| **BOAW** | Bag of Autonomous Workers—parallel execution model | -| **TickDelta** | Accumulated operations from rule execution | -| **State Root** | BLAKE3 hash of the entire graph | -| **Commit Hash** | BLAKE3 hash of (state root + patch + metadata) | -| **WarpInstance** | A graph-within-a-graph, enabling recursive composition | -| **WSC** | Write-Streaming Columnar—Echo's snapshot file format | -| **GraphView** | Read-only handle to graph state for rule executors | -| **PendingRewrite** | Queued rule application awaiting commit | - ---- - - - -### Final Thoughts from Your Tour Guide - -Echo is not a simple system. 
It's a _principled_ system built on hard-won lessons about determinism, reproducibility, and trust. - -What I find most impressive isn't any single feature—it's the coherence. Every piece reinforces the others: - -- BTreeMaps enable deterministic hashing -- Footprints enable parallel execution -- Parallel execution requires immutable GraphView -- Immutable GraphView enables copy-on-write -- Copy-on-write enables cheap branching -- Cheap branching enables "what if?" queries - -Pull one thread and the whole tapestry unravels. This is integrated design, not a collection of independent features. - -Is Echo perfect? No. The footprint model requires discipline. The ceremony adds latency. The BTreeMaps trade speed for determinism. But for applications where _provability_ matters—games with replays, simulations with audits, collaborative tools with conflict resolution—Echo offers something rare: a foundation you can trust. - -Thanks for joining me on this tour. May your state roots always match. 
- -— Claude - - diff --git a/docs/archive/study/what-makes-echo-tick.pdf b/docs/archive/study/what-makes-echo-tick.pdf deleted file mode 100644 index d5fa6286..00000000 Binary files a/docs/archive/study/what-makes-echo-tick.pdf and /dev/null differ diff --git a/docs/archive/study/what-makes-echo-tick.tex b/docs/archive/study/what-makes-echo-tick.tex deleted file mode 100644 index df4fb76f..00000000 --- a/docs/archive/study/what-makes-echo-tick.tex +++ /dev/null @@ -1,1627 +0,0 @@ -% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0 -% © James Ross Ω FLYING•ROBOTS -% Options for packages loaded elsewhere -\PassOptionsToPackage{unicode}{hyperref} -\PassOptionsToPackage{hyphens}{url} -\documentclass[ - 11pt, -]{book} -\usepackage{xcolor} -\usepackage[margin=0.75in,letterpaper]{geometry} - -\usepackage{graphicx} -\usepackage[export]{adjustbox} -\usepackage{tcolorbox} -\tcbuselibrary{breakable,skins} - -% Page layout - small margins -\usepackage[margin=0.75in,letterpaper]{geometry} - -% Make code blocks smaller to fit -\usepackage{fvextra} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{ - commandchars=\\\{\}, - fontsize=\small, - breaklines=true, - breakanywhere=true -} - -% Define the Claude commentary box style - RED OUTLINE + RED TEXT -\newtcolorbox{claudecommentary}{ - enhanced, - breakable, - colback=red!5, - colframe=red!75!black, - coltext=red!70!black, - boxrule=3pt, - arc=5pt, - left=12pt, - right=12pt, - top=12pt, - bottom=12pt, - before skip=15pt, - after skip=15pt, - fontupper=\color{red!70!black}, - fonttitle=\bfseries\Large\color{red!75!black}, - title={\raisebox{-0.1em}{\Large$\blacktriangleright$} Claude's Commentary}, - attach boxed title to top left={yshift=-4mm,xshift=10mm}, - boxed title style={ - colback=white, - colframe=red!75!black, - boxrule=2pt, - arc=3pt - } -} -\usepackage{amsmath,amssymb} -\setcounter{secnumdepth}{-\maxdimen} % remove section numbering -\usepackage{iftex} -\ifPDFTeX - \usepackage[T1]{fontenc} - 
\usepackage[utf8]{inputenc} - \usepackage{textcomp} % provide euro and other symbols -\else % if luatex or xetex - \usepackage{unicode-math} % this also loads fontspec - \defaultfontfeatures{Scale=MatchLowercase} - \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} -\fi -\usepackage{lmodern} -\ifPDFTeX\else - % xetex/luatex font selection -\fi -% Use upquote if available, for straight quotes in verbatim environments -\IfFileExists{upquote.sty}{\usepackage{upquote}}{} -\IfFileExists{microtype.sty}{% use microtype if available - \usepackage[]{microtype} - \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts -}{} -\makeatletter -\@ifundefined{KOMAClassName}{% if non-KOMA class - \IfFileExists{parskip.sty}{% - \usepackage{parskip} - }{% else - \setlength{\parindent}{0pt} - \setlength{\parskip}{6pt plus 2pt minus 1pt}} -}{% if KOMA class - \KOMAoptions{parskip=half}} -\makeatother -\usepackage{color} -\usepackage{fancyvrb} -\newcommand{\VerbBar}{|} -\newcommand{\VERB}{\Verb[commandchars=\\\{\}]} -\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} -% Add ',fontsize=\small' for more characters per line -\newenvironment{Shaded}{}{} -\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}} -\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}} -\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}} -\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}} -\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}} 
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}} -\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} -\newcommand{\ExtensionTok}[1]{#1} -\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} -\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}} -\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}} -\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} -\newcommand{\NormalTok}[1]{#1} -\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}} -\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}} -\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}} -\newcommand{\RegionMarkerTok}[1]{#1} -\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}} -\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}} -\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} -\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} -\usepackage{longtable,booktabs,array} -\newcounter{none} % for unnumbered tables -\usepackage{calc} % for calculating minipage widths -% Correct order of tables after \paragraph or \subparagraph -\usepackage{etoolbox} -\makeatletter -\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} -\makeatother -% Allow footnotes in longtable head/foot -\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} -\makesavenoteenv{longtable} -\usepackage{graphicx} -\makeatletter -\newsavebox\pandoc@box -\newcommand*\pandocbounded[1]{% scales image to fit in text height/width - \sbox\pandoc@box{#1}% - 
\Gscale@div\@tempa{\textheight}{\dimexpr\ht\pandoc@box+\dp\pandoc@box\relax}% - \Gscale@div\@tempb{\linewidth}{\wd\pandoc@box}% - \ifdim\@tempb\p@<\@tempa\p@\let\@tempa\@tempb\fi% select the smaller of both - \ifdim\@tempa\p@<\p@\scalebox{\@tempa}{\usebox\pandoc@box}% - \else\usebox{\pandoc@box}% - \fi% -} -% Set default figure placement to htbp -\def\fps@figure{htbp} -\makeatother -\setlength{\emergencystretch}{3em} % prevent overfull lines -\providecommand{\tightlist}{% - \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} -\usepackage{bookmark} -\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available -\urlstyle{same} -\hypersetup{ - hidelinks, - pdfcreator={LaTeX via pandoc}} - -\author{} -\date{} - -\begin{document} -\frontmatter - -\mainmatter -\chapter{What Makes Echo Tick?}\label{what-makes-echo-tick} - -\begin{quote} -\textbf{Your Tour Guide}: Claude (Opus 4.5) - -Welcome! I've been asked to give you a personal tour through Echo's -internals. This isn't just documentation---I'll share what I find -elegant, surprising, and occasionally baffling about this codebase. When -you see a red-outlined box, that's me stepping out of ``narrator mode'' -to give you my unfiltered take. - -\textbf{Reading Time}: \textasciitilde45 minutes for complete -understanding. 
-\end{quote} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{Table of Contents}\label{table-of-contents} - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - \hyperref[1-philosophy-why-echo-exists]{Philosophy: Why Echo Exists} -\item - \hyperref[2-the-big-picture-architecture-overview]{The Big Picture: - Architecture Overview} -\item - \hyperref[3-core-concepts-the-warp-graph]{Core Concepts: The WARP - Graph} -\item - \hyperref[4-the-engine-heart-of-echo]{The Engine: Heart of Echo} -\item - \hyperref[5-the-tick-pipeline-where-everything-happens]{The Tick - Pipeline: Where Everything Happens} -\item - \hyperref[6-parallel-execution-boaw-bag-of-autonomous-workers]{Parallel - Execution: BOAW (Bag of Autonomous Workers)} -\item - \hyperref[7-storage--hashing-content-addressed-truth]{Storage \& - Hashing: Content-Addressed Truth} -\item - \hyperref[8-worked-example-tracing-a-link-click]{Worked Example: - Tracing a Link Click} -\item - \hyperref[9-the-viewer-observing-echo]{The Viewer: Observing Echo} -\item - \hyperref[10-glossary]{Glossary} -\end{enumerate} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{1. Philosophy: Why Echo -Exists}\label{philosophy-why-echo-exists} - -\subsection{1.1 The Problem}\label{the-problem} - -Traditional game engines and simulations treat state as \textbf{mutable -objects}. This creates fundamental problems: - -\begin{itemize} -\tightlist -\item - \textbf{Replay is hard}: You can't just ``rewind'' because state - changes are scattered and untracked. -\item - \textbf{Synchronization is fragile}: Two machines running the same - logic may diverge due to floating-point differences, thread timing, or - iteration order. -\item - \textbf{Debugging is a nightmare}: ``It worked on my machine'' is the - symptom of non-determinism. -\item - \textbf{Branching is impossible}: You can't easily ask ``what if?'' - without copying everything. 
-\end{itemize} - -\begin{claudecommentary} -**Claude's Take**: This list of problems isn't theoretical. I've seen countless debugging sessions where the root cause was "HashMap iteration order changed between runs." Echo's designers clearly got burned by non-determinism at some point and decided: *never again*. - -What strikes me most is the last point—"branching is impossible." Most engines don't even *try* to support branching because it seems like a feature for version control, not runtime systems. Echo treats it as a first-class concern. That's unusual and, I think, genuinely forward-thinking. -\end{claudecommentary} - -\subsection{1.2 Echo's Answer}\label{echos-answer} - -Echo treats \textbf{state as a typed graph} and \textbf{all changes as -rewrites}. Each ``tick'' of the engine: - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - Proposes a set of rewrites -\item - Executes them in \textbf{deterministic order} -\item - Emits \textbf{cryptographic hashes} of the resulting state -\end{enumerate} - -This means: - \textbf{Same inputs → Same outputs} (always, on any -machine) - \textbf{State is verifiable} (hashes prove correctness) - -\textbf{Replay is trivial} (patches are prescriptive) - -\textbf{Branching is free} (copy-on-write snapshots) - -\subsection{1.3 Core Design Principles}\label{core-design-principles} - -\begin{verbatim} -┌─────────────────────────────────────────────────────────────────┐ -│ ECHO'S THREE PILLARS │ -├─────────────────────────────────────────────────────────────────┤ -│ │ -│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ -│ │ DETERMINISM │ │ PROVENANCE │ │ TOOLING │ │ -│ │ FIRST │ │ YOU CAN │ │ AS FIRST │ │ -│ │ │ │ TRUST │ │ CLASS │ │ -│ ├─────────────────┤ ├─────────────────┤ ├─────────────────┤ │ -│ │ Same inputs │ │ Snapshots are │ │ Graphs stream │ │ -│ │ always produce │ │ content- │ │ over canonical │ │ -│ │ same hashes │ │ addressed │ │ wire protocol │ │ -│ └─────────────────┘ 
└─────────────────┘ └─────────────────┘ │ -│ │ -└─────────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\begin{claudecommentary} -**Claude's Take**: "Tooling as first-class" is the sleeper here. Most engines treat debugging tools, replay systems, and visualization as afterthoughts—bolted on after the core is done. Echo inverts this: the wire protocol, the hash scheme, and the diff format were designed *so that tools could exist*. - -I've read a lot of engine architectures. This level of intentionality about tooling is rare. It's also why Echo can have a separate `warp-viewer` crate that just... works, instead of requiring heroic reverse-engineering. -\end{claudecommentary} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{2. The Big Picture: Architecture -Overview}\label{the-big-picture-architecture-overview} - -\subsection{2.1 System Layers}\label{system-layers} - -Echo is organized into distinct layers, each with a specific -responsibility: - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 1}]{diagrams/tour-01.pdf}} -\caption{Diagram 1} -\end{figure} - -\begin{claudecommentary} -**Claude's Take**: This is a *clean* layer cake. Each layer only talks to its neighbors. No "Layer 5 reaching down to Layer 1 for performance reasons." That discipline is hard to maintain, and I respect it. - -The `WSC Format` at Layer 2 caught my eye. It's Echo's custom columnar storage format—and before you ask "why not just use Arrow or Parquet?"—I'll spoil it: WSC is designed for mmap-friendly, zero-copy reads where every row is 8-byte aligned and you can binary-search directly into the file. It's specialized for *exactly this use case*. Sometimes NIH syndrome is justified. 
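To make the "binary-search directly into the file" point concrete, here's a toy sketch. The record layout (an 8-byte key followed by an 8-byte value, both little-endian) is invented for illustration; it is *not* the real WSC schema:

```rust
use std::convert::TryInto;

// Toy fixed-width record lookup. Layout (8-byte key + 8-byte value,
// little-endian, 8-byte aligned) is invented for illustration only.
const RECORD: usize = 16;

// Lower-bound binary search over raw bytes: no parse pass, no
// allocation. Records must be pre-sorted by key, as an on-disk
// columnar file's would be.
fn find(buf: &[u8], key: u64) -> Option<u64> {
    let n = buf.len() / RECORD;
    let (mut lo, mut hi) = (0, n);
    while lo < hi {
        let mid = (lo + hi) / 2;
        let off = mid * RECORD;
        let k = u64::from_le_bytes(buf[off..off + 8].try_into().unwrap());
        if k < key {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    if lo < n {
        let off = lo * RECORD;
        if u64::from_le_bytes(buf[off..off + 8].try_into().unwrap()) == key {
            return Some(u64::from_le_bytes(buf[off + 8..off + 16].try_into().unwrap()));
        }
    }
    None
}

fn main() {
    let mut buf = Vec::new();
    for (k, v) in [(1u64, 10u64), (3, 30), (7, 70)] {
        buf.extend_from_slice(&k.to_le_bytes());
        buf.extend_from_slice(&v.to_le_bytes());
    }
    assert_eq!(find(&buf, 3), Some(30));
    assert_eq!(find(&buf, 4), None);
}
```

Swap `&buf` for an mmap'd byte slice and nothing changes; that's the whole appeal of fixed-width, aligned rows.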
-\end{claudecommentary} - -\subsection{2.2 Crate Map}\label{crate-map} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{}ll@{}} -\toprule\noalign{} -Crate & Purpose \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\texttt{warp-core} & The deterministic rewrite engine (the ``brain'') \\ -\texttt{echo-graph} & Renderable graph types + diff operations \\ -\texttt{echo-session-proto} & Wire protocol (canonical CBOR framing) \\ -\texttt{echo-session-service} & Headless Unix-socket hub for tools \\ -\texttt{echo-session-client} & Client helpers for connecting to the -hub \\ -\texttt{warp-viewer} & Native WGPU viewer for visualizing graphs \\ -\end{longtable} -} - -\subsection{2.3 Data Flow Overview}\label{data-flow-overview} - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 2}]{diagrams/tour-02.pdf}} -\caption{Diagram 2} -\end{figure} - -\begin{claudecommentary} -**Claude's Take**: Notice how the Engine talks to itself multiple times before touching the Store? That's the commit protocol at work. The Engine is *paranoid* about mutations—it queues up intentions, validates them, and only then touches state. If you're used to "just mutate it directly" game engines, this will feel ceremonial. The ceremony is the point. -\end{claudecommentary} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{3. Core Concepts: The WARP -Graph}\label{core-concepts-the-warp-graph} - -\subsection{3.1 What is a WARP Graph?}\label{what-is-a-warp-graph} - -A WARP (\textbf{W}orldline \textbf{A}lgebra for \textbf{R}ecursive -\textbf{P}rovenance) graph is Echo's fundamental data structure. It's -not just a graph---it's a graph with \textbf{deterministic semantics}. 
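\begin{claudecommentary}
**Claude's Take**: If you want to see what "deterministic semantics" buys you in miniature, here's a toy sketch. None of these are Echo's real types: 4-byte IDs stand in for the 32-byte `NodeId`, and std's fixed-key hasher stands in for BLAKE3. The point is only that ordered containers make the digest a function of graph content, never of insertion history:

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Toy skeleton: 4-byte node IDs stand in for Echo's 32-byte NodeId.
type NodeId = [u8; 4];

// BTreeMap iterates in sorted key order, so the digest depends only on
// the graph's content, not on the order nodes were inserted.
// DefaultHasher (fixed-key SipHash) stands in for BLAKE3 here.
fn state_digest(adjacency: &BTreeMap<NodeId, Vec<NodeId>>) -> u64 {
    let mut h = DefaultHasher::new();
    for (from, tos) in adjacency {
        h.write(from);
        for to in tos {
            h.write(to);
        }
    }
    h.finish()
}

fn main() {
    // Same graph, two different insertion orders.
    let mut a = BTreeMap::new();
    a.insert([1, 0, 0, 0], vec![[2, 0, 0, 0]]);
    a.insert([3, 0, 0, 0], vec![[1, 0, 0, 0]]);

    let mut b = BTreeMap::new();
    b.insert([3, 0, 0, 0], vec![[1, 0, 0, 0]]);
    b.insert([1, 0, 0, 0], vec![[2, 0, 0, 0]]);

    assert_eq!(state_digest(&a), state_digest(&b));
}
```

Echo does this at scale, with content-addressed IDs and BLAKE3, but the principle is the same: iteration order is part of the data structure's contract, not an accident.
\end{claudecommentary}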
- -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 3}]{diagrams/tour-03.pdf}} -\caption{Diagram 3} -\end{figure} - -\begin{claudecommentary} -**Claude's Take**: The name "WARP" is doing a lot of work here. "Worldline" evokes physics—specifically, the path an object traces through spacetime. In Echo, a node's "worldline" is its history of states across ticks. "Recursive Provenance" means you can always ask "where did this value come from?" and trace it back through the graph's history. - -Is the name a bit grandiose for what amounts to "typed graph with audit trail"? Maybe. But I've seen worse acronyms in this industry. -\end{claudecommentary} - -\subsection{3.2 Two-Plane Architecture}\label{two-plane-architecture} - -Echo separates structure from data via the \textbf{Two-Plane Model} -(ADR-0001): - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3846}} - >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3462}}@{}} -\toprule\noalign{} -\begin{minipage}[b]{\linewidth}\raggedright -Plane -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Contains -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Purpose -\end{minipage} \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\textbf{Skeleton} & Nodes + Edges (structure) & Fast traversal, -deterministic hashing \\ -\textbf{Attachment (α)} & Typed payloads & Domain-specific data \\ -\end{longtable} -} - -\textbf{Why separate them?} - -\begin{verbatim} -┌────────────────────────────────────────────────────────────────────┐ -│ SKELETON PLANE (Structure) │ -│ │ -│ ┌─────┐ edge:link ┌─────┐ │ -│ │ N1 │─────────────────▶│ N2 │ │ -│ └─────┘ └─────┘ │ -│ │ │ │ -│ │ edge:child │ edge:ref │ -│ ▼ ▼ │ -│ ┌─────┐◀─────────────────────┘ │ -│ │ N3 │ │ -│ └─────┘ │ 
-│                                                                    │
-├────────────────────────────────────────────────────────────────────┤
-│  ATTACHMENT PLANE (Payloads)                                       │
-│                                                                    │
-│  N1.α["title"] = Atom { type: "string", bytes: "Home" }            │
-│  N2.α["url"]   = Atom { type: "string", bytes: "/page/b" }         │
-│  N3.α["body"]  = Atom { type: "html",   bytes: "..." }             │
-│                                                                    │
-└────────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\textbf{Key insight}: Skeleton rewrites \textbf{never decode
-attachments}. This keeps the hot path fast and deterministic.
-
-\begin{claudecommentary}
-**Claude's Take**: This is where Echo gets clever. The Skeleton plane only contains node IDs, edge IDs, and type tags—all fixed-size, all byte-comparable. You can compute the entire state hash without ever deserializing a single JSON blob, HTML string, or texture.
-
-The Attachment plane (they call it "α" because of course they do) holds the actual domain data. It participates in hashing but doesn't affect traversal. This separation means you can have a 10MB texture attached to a node and still iterate the graph at full speed.
-
-I've seen similar ideas in ECS architectures, but usually the separation is "components vs. systems." Echo's split is "structure vs. data," which is subtly different and, I think, more principled.
-\end{claudecommentary}
-
-\subsection{3.3 Node and Edge Identity}\label{node-and-edge-identity}
-
-Every node and edge has a \textbf{32-byte identifier}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ NodeId([}\DataTypeTok{u8}\OperatorTok{;} \DecValTok{32}\NormalTok{])}\OperatorTok{;} \CommentTok{// Content{-}addressed or assigned}
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ EdgeId([}\DataTypeTok{u8}\OperatorTok{;} \DecValTok{32}\NormalTok{])}\OperatorTok{;} \CommentTok{// Unique edge identifier}
-\end{Highlighting}
-\end{Shaded}
-
-These IDs are:
-
-\begin{itemize}
-\tightlist
-\item
-  \textbf{Deterministic}: Same content → same ID (when content-addressed)
-\item
-  \textbf{Sortable}: Lexicographic ordering enables deterministic iteration
-\item
-  \textbf{Hashable}: Participate in state root computation
-\end{itemize}
-
-\subsection{3.4 WarpInstances: Graphs Within
-Graphs}\label{warpinstances-graphs-within-graphs}
-
-Echo supports \textbf{descended attachments}---embedding entire graphs
-within attachment
slots: - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 4}]{diagrams/tour-04.pdf}} -\caption{Diagram 4} -\end{figure} - -This enables ``WARPs all the way down''---recursive composition while -maintaining determinism. - -\begin{claudecommentary} -**Claude's Take**: WarpInstances are *wild*. You can have a node whose attachment slot contains... another entire graph. And that graph can have nodes whose attachment slots contain... more graphs. It's turtles, but the turtles are graphs. - -Why would you want this? Think of a game with procedurally generated dungeons. Each dungeon could be its own WarpInstance, loaded on demand, with its own tick history and state root. The player character is in the "outer" instance; stepping through a portal descends into the "inner" one. - -I don't know if Echo actually uses this feature yet, but the architecture supports it cleanly. That's design for the future without overengineering the present. -\end{claudecommentary} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{4. The Engine: Heart of Echo}\label{the-engine-heart-of-echo} - -\subsection{4.1 The Engine Struct}\label{the-engine-struct} - -The \texttt{Engine} is Echo's central orchestrator. 
Located in -\texttt{crates/warp-core/src/engine\_impl.rs}: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ Engine }\OperatorTok{\{} -\NormalTok{ state}\OperatorTok{:}\NormalTok{ WarpState}\OperatorTok{,} \CommentTok{// Multi{-}instance graph state} -\NormalTok{ rules}\OperatorTok{:}\NormalTok{ HashMap}\OperatorTok{\textless{}}\NormalTok{RuleId}\OperatorTok{,}\NormalTok{ RewriteRule}\OperatorTok{\textgreater{},} \CommentTok{// Registered rewrite rules} -\NormalTok{ scheduler}\OperatorTok{:}\NormalTok{ DeterministicScheduler}\OperatorTok{,} \CommentTok{// Deterministic ordering} -\NormalTok{ bus}\OperatorTok{:}\NormalTok{ MaterializationBus}\OperatorTok{,} \CommentTok{// Output channels} -\NormalTok{ history}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(Snapshot}\OperatorTok{,}\NormalTok{ TickReceipt}\OperatorTok{,}\NormalTok{ WarpTickPatchV1)}\OperatorTok{\textgreater{},} -\NormalTok{ tx\_counter}\OperatorTok{:} \DataTypeTok{u64}\OperatorTok{,} \CommentTok{// Transaction counter} -\NormalTok{ live\_txs}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{TxId}\OperatorTok{\textgreater{},} \CommentTok{// Active transactions} - \CommentTok{// ... more fields} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{claudecommentary} -**Claude's Take**: A few things jump out here: - -1. **`rules: HashMap`** — Wait, HashMap? Isn't that non-deterministic? It is! But notice: this is for *looking up* rules by ID, not for *iterating*. The iteration order is determined by the `scheduler`, which is explicitly deterministic. The HashMap is fine because rule IDs are stable. - -2. **`history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>`** — The engine keeps its entire history in memory? That seems expensive. I suspect this is configurable, or there's a garbage collection pass I haven't found yet. For long-running simulations, unbounded history would be a problem. - -3. 
**`BTreeSet` for live transactions** — BTreeSet, not HashSet. They're *really* committed to determinism. Even the set of "which transactions are in-flight" is stored in sorted order. -\end{claudecommentary} - -\subsection{4.2 Construction}\label{construction} - -The engine is built via the \texttt{EngineBuilder}: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ engine }\OperatorTok{=} \PreprocessorTok{EngineBuilder::}\NormalTok{new(store}\OperatorTok{,}\NormalTok{ root\_node\_id)} - \OperatorTok{.}\NormalTok{with\_policy\_id(}\DecValTok{1}\NormalTok{)} - \OperatorTok{.}\NormalTok{with\_telemetry(telemetry)} - \OperatorTok{.}\NormalTok{build()}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens during construction:} - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 5}]{diagrams/tour-05.pdf}} -\caption{Diagram 5} -\end{figure} - -\subsection{4.3 Rewrite Rules}\label{rewrite-rules} - -Rules are the atoms of change in Echo. 
Each rule has three functions: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ RewriteRule }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ name}\OperatorTok{:} \DataTypeTok{String}\OperatorTok{,} - \KeywordTok{pub}\NormalTok{ matcher}\OperatorTok{:}\NormalTok{ MatchFn}\OperatorTok{,} \CommentTok{// Does this rule apply?} - \KeywordTok{pub}\NormalTok{ executor}\OperatorTok{:}\NormalTok{ ExecuteFn}\OperatorTok{,} \CommentTok{// What changes to make} - \KeywordTok{pub}\NormalTok{ footprint}\OperatorTok{:}\NormalTok{ FootprintFn}\OperatorTok{,} \CommentTok{// What resources are touched} - \KeywordTok{pub}\NormalTok{ policy}\OperatorTok{:}\NormalTok{ ConflictPolicy}\OperatorTok{,} \CommentTok{// What to do on conflict} -\OperatorTok{\}} - -\CommentTok{// Function signatures (Phase 5 BOAW model):} -\KeywordTok{type}\NormalTok{ MatchFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool}\OperatorTok{;} -\KeywordTok{type}\NormalTok{ ExecuteFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ TickDelta)}\OperatorTok{;} -\KeywordTok{type}\NormalTok{ FootprintFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}}\NormalTok{ Footprint}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{Critical constraint}: Executors receive a \textbf{read-only} -\texttt{GraphView} and emit changes to a \texttt{TickDelta}. They -\textbf{never} mutate the graph directly. - -\begin{claudecommentary} -**Claude's Take**: The `FootprintFn` is the secret sauce. Before executing a rule, Echo calls this function to ask: "What nodes, edges, and attachments will you touch?" 
The footprint is a *conservative estimate*—you must declare everything you *might* read or write. - -This enables Echo's parallel execution model. If two rules have non-overlapping footprints, they can execute in parallel, in any order, and the result is guaranteed identical. If footprints overlap, they're sequenced deterministically. - -The burden on the rule author is significant: you must declare your footprint accurately, or you'll get either conflicts (declared overlap when there was none) or silent bugs (undeclared overlap that corrupts state). This is a sharp edge in the API. -\end{claudecommentary} - -\textbf{Runtime enforcement.} As of Phase~6B, footprint declarations are -enforced at runtime by \texttt{FootprintGuard}. An inaccurate footprint is -now a hard failure in debug builds. The guard catches the following -violations: - -\begin{itemize} -\item Undeclared reads (node, edge, or attachment access not listed in the footprint) -\item Undeclared writes (ops emitted for resources not in \texttt{n\_write} / \texttt{e\_write} / \texttt{a\_write}) -\item Cross-warp emissions (ops targeting a \texttt{WarpId} other than the executing warp) -\item Unauthorized instance ops (warp-instance-level operations like \texttt{UpsertWarpInstance} or - \texttt{DeleteWarpInstance} emitted by \texttt{ExecItemKind::User} rules; only - \texttt{ExecItemKind::System} rules may emit these) -\item Adjacency violations (edge ops whose \texttt{from} node is absent from \texttt{n\_write}) -\end{itemize} - -\subsection{4.4 GraphView: Read-Only -Access}\label{graphview-read-only-access} - -The \texttt{GraphView} enforces BOAW's immutability contract: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}} \OperatorTok{\{} -\NormalTok{ store}\OperatorTok{:} \OperatorTok{\&}\OtherTok{\textquotesingle{}a}\NormalTok{ GraphStore}\OperatorTok{,} -\NormalTok{ 
warp\_id}\OperatorTok{:}\NormalTok{ WarpId}\OperatorTok{,}
-\OperatorTok{\}}
-
-\KeywordTok{impl}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}} \OperatorTok{\{}
-    \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ node(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Option}\OperatorTok{\textless{}\&}\NormalTok{NodeRecord}\OperatorTok{\textgreater{};}
-    \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ edges\_from(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \KeywordTok{impl} \BuiltInTok{Iterator}\OperatorTok{\textless{}}\NormalTok{Item }\OperatorTok{=} \OperatorTok{\&}\NormalTok{EdgeRecord}\OperatorTok{\textgreater{};}
-    \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ node\_attachment(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Option}\OperatorTok{\textless{}\&}\NormalTok{AttachmentValue}\OperatorTok{\textgreater{};}
-    \CommentTok{// ... read{-}only methods only}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{No \texttt{DerefMut}, no
-\texttt{AsRef\textless{}GraphStore\textgreater{}}, no interior
-mutability.} This is enforced at the type level.
-
-\begin{claudecommentary}
-**Claude's Take**: I went looking for escape hatches here. `RefCell`? No. `UnsafeCell`? No. `Arc<Mutex<...>>`? No. The `GraphView` is genuinely immutable by construction.
-
-This is Rust at its best: the borrow checker prevents you from shooting yourself in the foot.
In C++, you'd need discipline and code review to enforce "executors don't mutate the graph." In Rust, it's just... not possible. The types don't allow it. -\end{claudecommentary} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{5. The Tick Pipeline: Where Everything -Happens}\label{the-tick-pipeline-where-everything-happens} - -\subsection{5.1 Overview}\label{overview} - -A ``tick'' is one complete cycle of the engine. It has five phases: - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 6}]{diagrams/tour-06.pdf}} -\caption{Diagram 6} -\end{figure} - -\begin{claudecommentary} -**Claude's Take**: The "Commit" phase has five sub-steps. *Five*. This is where I started to appreciate how much thought went into this system. Let me summarize what each does: - -1. **Drain**: Pull all pending rewrites from the scheduler in canonical order -2. **Reserve**: Check footprints for conflicts, accept or reject each rewrite -3. **Execute**: Run the accepted rewrites (this is where parallelism happens) -4. **Merge**: Combine all `TickDelta` outputs into a single canonical operation list -5. **Finalize**: Apply the merged operations to produce the new state - -The reservation phase is particularly clever. It's like a two-phase commit: first you "reserve" your footprint (claim your lock), then you execute. If your footprint conflicts with an already-reserved footprint, you're rejected. No execution happens until all accepted rewrites have been validated. -\end{claudecommentary} - -\subsection{5.2 Phase 1: Begin -Transaction}\label{phase-1-begin-transaction} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ tx }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{begin()}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens:} 1. Increment \texttt{tx\_counter} (wrapping to -avoid 0) 2. Add \texttt{TxId} to \texttt{live\_txs} set 3. 
Return opaque -transaction identifier - -\begin{verbatim} -┌─────────────────────────────────────────────────┐ -│ engine.begin() │ -├─────────────────────────────────────────────────┤ -│ tx_counter: 0 → 1 │ -│ live_txs: {} → {TxId(1)} │ -│ returns: TxId(1) │ -└─────────────────────────────────────────────────┘ -\end{verbatim} - -\subsection{5.3 Phase 2: Apply Rules}\label{phase-2-apply-rules} - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{engine}\OperatorTok{.}\NormalTok{apply(tx}\OperatorTok{,} \StringTok{"rule\_name"}\OperatorTok{,} \OperatorTok{\&}\NormalTok{scope\_node\_id)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens:} - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 7}]{diagrams/tour-07.pdf}} -\caption{Diagram 7} -\end{figure} - -\textbf{The Footprint}: A declaration of what resources the rule will -read and write: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ Footprint }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{ n\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Nodes to read} - \KeywordTok{pub}\NormalTok{ n\_write}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Nodes to write} - \KeywordTok{pub}\NormalTok{ e\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{},} \CommentTok{// Edges to read} - \KeywordTok{pub}\NormalTok{ e\_write}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{},} \CommentTok{// Edges to write} - \KeywordTok{pub}\NormalTok{ a\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{AttachmentKey}\OperatorTok{\textgreater{},} \CommentTok{// Attachments to read} - \KeywordTok{pub}\NormalTok{ a\_write}\OperatorTok{:}\NormalTok{ 
BTreeSet}\OperatorTok{\textless{}}\NormalTok{AttachmentKey}\OperatorTok{\textgreater{},} \CommentTok{// Attachments to write} - \CommentTok{// ... ports, factor\_mask} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Scheduler deduplication}: If the same -\texttt{(scope\_hash,\ rule\_id)} is applied multiple times, -\textbf{last wins}. This enables idempotent retry semantics. - -\subsection{5.4 Phase 3: Commit (The Heart of -Determinism)}\label{phase-3-commit-the-heart-of-determinism} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ (snapshot}\OperatorTok{,}\NormalTok{ receipt}\OperatorTok{,}\NormalTok{ patch) }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{commit\_with\_receipt(tx)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -This is where Echo's magic happens. Let's break it down: - -\subsubsection{5.4.1 Drain}\label{drain} - -The scheduler drains all pending rewrites in \textbf{canonical order}: - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// RadixScheduler uses O(n) LSD radix sort} -\CommentTok{// 20 passes: 2 nonce + 2 rule\_id + 16 scope\_hash (16{-}bit digits)} -\KeywordTok{let}\NormalTok{ rewrites }\OperatorTok{=}\NormalTok{ scheduler}\OperatorTok{.}\NormalTok{drain\_for\_tx(tx)}\OperatorTok{;} \CommentTok{// Vec\textless{}PendingRewrite\textgreater{} in canonical order} -\end{Highlighting} -\end{Shaded} - -\textbf{Ordering key}: -\texttt{(scope\_hash{[}0..32{]},\ rule\_id,\ nonce)} - -This ensures the \textbf{same rewrites always execute in the same -order}, regardless of when they were applied. - -\begin{claudecommentary} -**Claude's Take**: Radix sort! They're using radix sort for the scheduler drain. Not quicksort, not merge sort—radix sort. - -Why? Because radix sort is *stable* and *deterministic* by construction. Quicksort's behavior depends on pivot selection, which can vary. Merge sort is deterministic, but radix sort is faster for fixed-size keys. 
Since the ordering key is a fixed-width byte string (the 32-byte scope hash followed by the rule ID and nonce fields), radix sort is a perfect fit.
-
-This is the kind of detail that separates "deterministic by accident" from "deterministic by design."
-\end{claudecommentary}
-
-\subsubsection{5.4.2 Reserve (Independence
-Check)}\label{reserve-independence-check}
-
-For each rewrite in canonical order:
-
-\begin{figure}
-\centering
-\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 8}]{diagrams/tour-08.pdf}}
-\caption{Diagram 8}
-\end{figure}
-
-\textbf{Conflict detection}: Uses
-\texttt{GenSet\textless{}K\textgreater{}} for O(1) lookups:
-
-\begin{itemize}
-\tightlist
-\item
-  Read-read overlap: \textbf{allowed}
-\item
-  Write-write overlap: \textbf{conflict}
-\item
-  Read-write overlap: \textbf{conflict}
-\end{itemize}
-
-\subsubsection{5.4.3 Execute (Parallel,
-Lockless)}\label{execute-parallel-lockless}
-
-Accepted rewrites execute against the \textbf{read-only snapshot}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\ControlFlowTok{for}\NormalTok{ rewrite }\KeywordTok{in}\NormalTok{ accepted }\OperatorTok{\{}
-    \KeywordTok{let}\NormalTok{ rule }\OperatorTok{=} \OperatorTok{\&}\NormalTok{rules[rewrite}\OperatorTok{.}\NormalTok{rule\_id]}\OperatorTok{;}
-    \KeywordTok{let}\NormalTok{ view }\OperatorTok{=} \PreprocessorTok{GraphView::}\NormalTok{new(}\OperatorTok{\&}\NormalTok{state}\OperatorTok{,}\NormalTok{ rewrite}\OperatorTok{.}\NormalTok{warp\_id)}\OperatorTok{;}
-
-    \CommentTok{// Executor reads from view, emits to delta}
-\NormalTok{    (rule}\OperatorTok{.}\NormalTok{executor)(view}\OperatorTok{,} \OperatorTok{\&}\NormalTok{rewrite}\OperatorTok{.}\NormalTok{scope}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ delta)}\OperatorTok{;}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Critical}: \texttt{GraphView} is immutable.
\texttt{TickDelta} -accumulates operations: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ TickDelta }\OperatorTok{\{} -\NormalTok{ ops}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(WarpOp}\OperatorTok{,}\NormalTok{ OpOrigin)}\OperatorTok{\textgreater{},} -\OperatorTok{\}} - -\CommentTok{// Operations emitted during execution:} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{UpsertNode }\OperatorTok{\{}\NormalTok{ id}\OperatorTok{,}\NormalTok{ record }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{UpsertEdge }\OperatorTok{\{}\NormalTok{ from}\OperatorTok{,}\NormalTok{ edge }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{DeleteNode }\OperatorTok{\{}\NormalTok{ id }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5.4.4 Merge (Canonical Sort)}\label{merge-canonical-sort} - -All operations are sorted into \textbf{canonical replay order}: - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// Sort by (WarpOpKey, OpOrigin)} -\NormalTok{ops}\OperatorTok{.}\NormalTok{sort\_by\_key(}\OperatorTok{|}\NormalTok{(op}\OperatorTok{,}\NormalTok{ origin)}\OperatorTok{|}\NormalTok{ (op}\OperatorTok{.}\NormalTok{sort\_key()}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{.}\NormalTok{clone()))}\OperatorTok{;} - -\CommentTok{// Deduplicate identical ops} -\CommentTok{// Error on conflicting ops (footprint model violation)} -\end{Highlighting} -\end{Shaded} - -\textbf{Conflict handling}: If two rewrites wrote 
\textbf{different -values} to the same key, that's a bug in the footprint model. Echo -errors loudly. - -\subsubsection{5.4.5 Finalize}\label{finalize} - -Apply the merged delta to produce the new state: - -\begin{Shaded} -\begin{Highlighting}[] -\ControlFlowTok{for}\NormalTok{ op }\KeywordTok{in}\NormalTok{ merged\_ops }\OperatorTok{\{} - \ControlFlowTok{match}\NormalTok{ op }\OperatorTok{\{} - \PreprocessorTok{WarpOp::}\NormalTok{UpsertNode }\OperatorTok{\{}\NormalTok{ id}\OperatorTok{,}\NormalTok{ record }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{insert\_node(id}\OperatorTok{,}\NormalTok{ record)}\OperatorTok{,} - \PreprocessorTok{WarpOp::}\NormalTok{UpsertEdge }\OperatorTok{\{}\NormalTok{ from}\OperatorTok{,}\NormalTok{ edge }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{insert\_edge(from}\OperatorTok{,}\NormalTok{ edge)}\OperatorTok{,} - \PreprocessorTok{WarpOp::}\NormalTok{DeleteNode }\OperatorTok{\{}\NormalTok{ id }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{delete\_node\_cascade(id)}\OperatorTok{,} - \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{set\_attachment(node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value)}\OperatorTok{,} - \CommentTok{// ...} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{5.5 Phase 4: Hash -Computation}\label{phase-4-hash-computation} - -\subsubsection{State Root (BLAKE3)}\label{state-root-blake3} - -The state root is computed via \textbf{deterministic BFS} over reachable -nodes: - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 9}]{diagrams/tour-09.pdf}} -\caption{Diagram 9} -\end{figure} - -\textbf{Encoding} 
(architecture-independent):
-
-\begin{itemize}
-\tightlist
-\item
-  All IDs: raw 32 bytes
-\item
-  Counts: u64 little-endian
-\item
-  Payloads: 1-byte tag + type\_id{[}32{]} + u64 LE length + bytes
-\end{itemize}
-
-\subsubsection{Commit Hash (v2)}\label{commit-hash-v2}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{commit\_hash }\OperatorTok{=}\NormalTok{ BLAKE3(}
-\NormalTok{  version\_tag[}\DecValTok{4}\NormalTok{] }\OperatorTok{||}   \CommentTok{// Protocol version}
-\NormalTok{  parents[] }\OperatorTok{||}        \CommentTok{// Parent commit hashes}
-\NormalTok{  state\_root[}\DecValTok{32}\NormalTok{] }\OperatorTok{||}   \CommentTok{// Graph{-}only hash}
-\NormalTok{  patch\_digest[}\DecValTok{32}\NormalTok{] }\OperatorTok{||} \CommentTok{// Merged ops digest}
-\NormalTok{  policy\_id[}\DecValTok{4}\NormalTok{]}        \CommentTok{// Policy identifier}
-\NormalTok{)}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{claudecommentary}
-**Claude's Take**: The commit hash includes a `policy_id`. This is subtle but important: two engines with different policies could produce the same state but different commit hashes. Why? Because the *process* matters, not just the result.
-
-Imagine one policy allows rules to run in parallel; another requires sequential execution. They might produce identical graphs, but the commit hashes differ because the policies differ. This prevents accidentally mixing outputs from incompatible engine configurations.
-
-It's defensive design: "Trust, but verify—and make verification easy."
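Here's a toy version of that property. std's fixed-key hasher stands in for BLAKE3, `u64`s stand in for the real 32-byte digests, and the field list just mirrors the v2 recipe; none of this is Echo's actual code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Toy commit hash: DefaultHasher stands in for BLAKE3, u64s for the
// real 32-byte digests. Only the "policy_id is mixed in" idea is real.
fn commit_hash(state_root: u64, patch_digest: u64, policy_id: u32) -> u64 {
    let mut h = DefaultHasher::new();
    h.write_u32(2); // version tag
    h.write_u64(state_root);
    h.write_u64(patch_digest);
    h.write_u32(policy_id);
    h.finish()
}

fn main() {
    // Identical state, different policy: the commit hashes diverge,
    // so outputs from incompatible configurations can't be confused.
    let (root, patch) = (0xABCD, 0x1234);
    assert_ne!(commit_hash(root, patch, 1), commit_hash(root, patch, 2));
    assert_eq!(commit_hash(root, patch, 1), commit_hash(root, patch, 1));
}
```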
-\end{claudecommentary}
-
-\subsection{5.6 Phase 5: Record to
-History}\label{phase-5-record-to-history}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{history}\OperatorTok{.}\NormalTok{push((}
-\NormalTok{  Snapshot }\OperatorTok{\{}\NormalTok{ hash}\OperatorTok{:}\NormalTok{ commit\_hash}\OperatorTok{,}\NormalTok{ state\_root}\OperatorTok{,}\NormalTok{ parents}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
-\NormalTok{  TickReceipt }\OperatorTok{\{}\NormalTok{ applied}\OperatorTok{,}\NormalTok{ rejected}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
-\NormalTok{  WarpTickPatchV1 }\OperatorTok{\{}\NormalTok{ ops}\OperatorTok{,}\NormalTok{ in\_slots}\OperatorTok{,}\NormalTok{ out\_slots}\OperatorTok{,}\NormalTok{ patch\_digest}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\}}
-\NormalTok{))}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-The patch is \textbf{prescriptive}: it can be replayed without
-re-matching to reproduce the exact same state.
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Parallel Execution: BOAW (Bag of Autonomous
-Workers)}\label{parallel-execution-boaw-bag-of-autonomous-workers}
-
-\subsection{6.1 What is BOAW?}\label{what-is-boaw}
-
-BOAW stands for \textbf{Bag of Autonomous Workers}. It's Echo's
-parallel execution architecture that enables:
-
-\begin{itemize}
-\tightlist
-\item
-  \textbf{Massive parallelism} without locks
-\item
-  \textbf{Deterministic convergence} across platforms
-\item
-  \textbf{Worker-count invariance} (same result with 1 or 32 workers)
-\end{itemize}
-
-\subsection{6.2 The Key Insight}\label{the-key-insight}
-
-\begin{verbatim}
-┌──────────────────────────────────────────────────────────────────┐
-│                         THE BOAW INSIGHT                         │
-├──────────────────────────────────────────────────────────────────┤
-│                                                                  │
-│  Traditional parallelism:                                        │
-│    "Make execution order deterministic" → Complex, slow          │
-│                                                                  │
-│  BOAW parallelism:                                               │
-│    "Let execution order vary, make MERGE deterministic" → Fast!
│ -│ │ -│ Workers race freely → Each produces a TickDelta │ -│ Merge step sorts all deltas → Canonical output │ -│ │ -└──────────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\begin{claudecommentary} -**Claude's Take**: This is the insight that makes Echo work. Most parallel systems try to *control* the execution order—barriers, locks, atomic sequences. BOAW says: "Forget it. Let chaos reign during execution. We'll sort it out in the merge." - -It's like MapReduce: the map phase runs in any order; the reduce phase (merge) produces the canonical result. But unlike MapReduce, Echo operates on a graph with complex dependencies. The footprint model makes this possible: by declaring what you'll touch before executing, you enable the merge to validate that no conflicts occurred. - -If this sounds too good to be true, it mostly is—*if* you get the footprints wrong. The system is only as deterministic as your footprint declarations. Lie to the footprint system, and you'll get non-determinism. -\end{claudecommentary} - -\subsection{6.3 Execution Strategies}\label{execution-strategies} - -\subsubsection{Phase 6A: Stride Partitioning -(Legacy)}\label{phase-6a-stride-partitioning-legacy} - -\begin{verbatim} -Worker 0: items[0], items[4], items[8], ... -Worker 1: items[1], items[5], items[9], ... -Worker 2: items[2], items[6], items[10], ... -Worker 3: items[3], items[7], items[11], ... -\end{verbatim} - -\textbf{Problem}: Poor cache locality---related items scatter across -workers. 
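The merge-makes-it-deterministic insight above can be sketched in plain Rust. This is a toy model, not the engine's API: `merge_deltas` and the `(u64, &str)` op encoding stand in for the real `TickDelta`/`WarpOp` types. Workers race on OS threads in whatever order the scheduler picks; one canonical sort makes the merged output independent of that order.

```rust
use std::thread;

/// Canonical merge of per-worker deltas: flatten, then sort by key.
/// The result is independent of the order in which deltas were produced.
fn merge_deltas(deltas: Vec<Vec<(u64, &'static str)>>) -> Vec<(u64, &'static str)> {
    let mut all: Vec<(u64, &'static str)> = deltas.into_iter().flatten().collect();
    all.sort(); // canonical order: by (key, payload)
    all
}

fn main() {
    // Four "workers" race freely; each returns its own delta.
    let handles: Vec<_> = (0..4u64)
        .map(|w| thread::spawn(move || vec![(w * 2, "set"), (w * 2 + 1, "set")]))
        .collect();
    let deltas: Vec<Vec<(u64, &'static str)>> =
        handles.into_iter().map(|h| h.join().unwrap()).collect();

    // No matter which thread finished first, the merged key order is fixed.
    let keys: Vec<u64> = merge_deltas(deltas).into_iter().map(|(k, _)| k).collect();
    assert_eq!(keys, (0..8u64).collect::<Vec<u64>>());
    println!("canonical key order: {:?}", keys);
}
```

The real engine additionally checks that no two workers wrote conflicting values to the same key (section 6.5); this sketch only shows why sorting, not scheduling, determines the output.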
-
-\subsubsection{Phase 6B: Virtual Shards (Current
-Default)}\label{phase-6b-virtual-shards-current-default}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{const}\NormalTok{ NUM\_SHARDS}\OperatorTok{:} \DataTypeTok{usize} \OperatorTok{=} \DecValTok{256}\OperatorTok{;} \CommentTok{// Protocol constant (frozen)}
-
-\KeywordTok{fn}\NormalTok{ shard\_of(node\_id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{usize} \OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ bytes }\OperatorTok{=}\NormalTok{ node\_id}\OperatorTok{.}\NormalTok{as\_bytes()}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ val }\OperatorTok{=} \DataTypeTok{u64}\PreprocessorTok{::}\NormalTok{from\_le\_bytes(bytes[}\DecValTok{0}\OperatorTok{..}\DecValTok{8}\NormalTok{]}\OperatorTok{.}\NormalTok{try\_into()}\OperatorTok{.}\NormalTok{unwrap())}\OperatorTok{;}
-\NormalTok{ (val }\OperatorTok{\&} \DecValTok{255}\NormalTok{) }\KeywordTok{as} \DataTypeTok{usize} \CommentTok{// Fast modulo via bitmask}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{figure}
-\centering
-\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 10}]{diagrams/tour-10.pdf}}
-\caption{Diagram 10}
-\end{figure}
-
-\textbf{Benefits}: - Items with same \texttt{shard\_of(scope)} processed
-together → better cache hits - Workers dynamically claim shards via
-atomic counter → load balancing - Determinism enforced by merge, not
-execution order
-
-\begin{claudecommentary}
-**Claude's Take**: 256 shards is an interesting choice. It's small enough that the atomic counter for work-stealing doesn't become a bottleneck, but large enough to distribute work across many cores.
-
-The `& 255` bitmask is a micro-optimization I appreciate. It's equivalent to `% 256` but faster because 256 is a power of 2. This is the kind of low-level detail that adds up when you're processing millions of items per second.
-
-One thing I wondered: what if your NodeIds are clustered?
Like, if all recent nodes have IDs starting with `0x00...`, they'd all end up in shard 0. I suspect content-addressed IDs (via BLAKE3) distribute uniformly, so this isn't a problem in practice. But for user-assigned IDs, you'd need to be careful. -\end{claudecommentary} - -\subsection{6.4 The Execution Loop}\label{the-execution-loop} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel\_sharded(} -\NormalTok{ view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},} -\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,} -\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\OperatorTok{,} -\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \CommentTok{// Partition items into 256 shards} - \KeywordTok{let}\NormalTok{ shards }\OperatorTok{=}\NormalTok{ partition\_into\_shards(items)}\OperatorTok{;} - - \CommentTok{// Atomic counter for work{-}stealing} - \KeywordTok{let}\NormalTok{ next\_shard }\OperatorTok{=} \PreprocessorTok{AtomicUsize::}\NormalTok{new(}\DecValTok{0}\NormalTok{)}\OperatorTok{;} - - \PreprocessorTok{std::thread::}\NormalTok{scope(}\OperatorTok{|}\NormalTok{s}\OperatorTok{|} \OperatorTok{\{} - \KeywordTok{let}\NormalTok{ handles}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{\_}\OperatorTok{\textgreater{}} \OperatorTok{=}\NormalTok{ (}\DecValTok{0}\OperatorTok{..}\NormalTok{workers)}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{\_}\OperatorTok{|} \OperatorTok{\{} -\NormalTok{ s}\OperatorTok{.}\NormalTok{spawn(}\OperatorTok{||} \OperatorTok{\{} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ delta }\OperatorTok{=} \PreprocessorTok{TickDelta::}\NormalTok{new()}\OperatorTok{;} - \ControlFlowTok{loop} \OperatorTok{\{} - \CommentTok{// Claim next shard atomically} 
- \KeywordTok{let}\NormalTok{ shard\_id }\OperatorTok{=}\NormalTok{ next\_shard}\OperatorTok{.}\NormalTok{fetch\_add(}\DecValTok{1}\OperatorTok{,} \PreprocessorTok{Ordering::}\NormalTok{Relaxed)}\OperatorTok{;} - \ControlFlowTok{if}\NormalTok{ shard\_id }\OperatorTok{\textgreater{}=}\NormalTok{ NUM\_SHARDS }\OperatorTok{\{} \ControlFlowTok{break}\OperatorTok{;} \OperatorTok{\}} - - \CommentTok{// Execute all items in this shard} - \ControlFlowTok{for}\NormalTok{ item }\KeywordTok{in} \OperatorTok{\&}\NormalTok{shards[shard\_id]}\OperatorTok{.}\NormalTok{items }\OperatorTok{\{} -\NormalTok{ (item}\OperatorTok{.}\NormalTok{exec)(view}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,} \OperatorTok{\&}\NormalTok{item}\OperatorTok{.}\NormalTok{scope}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ delta)}\OperatorTok{;} - \OperatorTok{\}} - \OperatorTok{\}} -\NormalTok{ delta} - \OperatorTok{\}}\NormalTok{)} - \OperatorTok{\}}\NormalTok{)}\OperatorTok{.}\NormalTok{collect()}\OperatorTok{;} - -\NormalTok{ handles}\OperatorTok{.}\NormalTok{into\_iter()}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{h}\OperatorTok{|}\NormalTok{ h}\OperatorTok{.}\NormalTok{join()}\OperatorTok{.}\NormalTok{unwrap())}\OperatorTok{.}\NormalTok{collect()} - \OperatorTok{\}}\NormalTok{)} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{6.5 The Canonical Merge}\label{the-canonical-merge} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ merge\_deltas(deltas}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{WarpOp}\OperatorTok{\textgreater{},}\NormalTok{ MergeConflict}\OperatorTok{\textgreater{}} \OperatorTok{\{} - \CommentTok{// 1. 
Flatten all ops from all workers} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ all\_ops}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(WarpOpKey}\OperatorTok{,}\NormalTok{ OpOrigin}\OperatorTok{,}\NormalTok{ WarpOp)}\OperatorTok{\textgreater{}} \OperatorTok{=}\NormalTok{ deltas} - \OperatorTok{.}\NormalTok{into\_iter()} - \OperatorTok{.}\NormalTok{flat\_map(}\OperatorTok{|}\NormalTok{d}\OperatorTok{|}\NormalTok{ d}\OperatorTok{.}\NormalTok{ops\_with\_origins())} - \OperatorTok{.}\NormalTok{collect()}\OperatorTok{;} - - \CommentTok{// 2. Sort canonically by (key, origin)} -\NormalTok{ all\_ops}\OperatorTok{.}\NormalTok{sort\_by\_key(}\OperatorTok{|}\NormalTok{(key}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ (key}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{.}\NormalTok{clone()))}\OperatorTok{;} - - \CommentTok{// 3. Deduplicate and detect conflicts} - \KeywordTok{let} \KeywordTok{mut}\NormalTok{ result }\OperatorTok{=} \DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;} - \ControlFlowTok{for}\NormalTok{ group }\KeywordTok{in}\NormalTok{ all\_ops}\OperatorTok{.}\NormalTok{group\_by(}\OperatorTok{|}\NormalTok{(k1}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{,}\NormalTok{ (k2}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ k1 }\OperatorTok{==}\NormalTok{ k2) }\OperatorTok{\{} - \KeywordTok{let}\NormalTok{ first }\OperatorTok{=} \OperatorTok{\&}\NormalTok{group[}\DecValTok{0}\NormalTok{]}\OperatorTok{.}\DecValTok{2}\OperatorTok{;} - \ControlFlowTok{if}\NormalTok{ group}\OperatorTok{.}\NormalTok{iter()}\OperatorTok{.}\NormalTok{all(}\OperatorTok{|}\NormalTok{(\_}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ op)}\OperatorTok{|}\NormalTok{ op }\OperatorTok{==}\NormalTok{ first) }\OperatorTok{\{} -\NormalTok{ 
result}\OperatorTok{.}\NormalTok{push(first}\OperatorTok{.}\NormalTok{clone())}\OperatorTok{;} \CommentTok{// All identical: keep one} - \OperatorTok{\}} \ControlFlowTok{else} \OperatorTok{\{} - \ControlFlowTok{return} \ConstantTok{Err}\NormalTok{(MergeConflict }\OperatorTok{\{}\NormalTok{ writers}\OperatorTok{:}\NormalTok{ group}\OperatorTok{.}\NormalTok{iter()}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{(\_}\OperatorTok{,}\NormalTok{ o}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ o)}\OperatorTok{.}\NormalTok{collect() }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;} - \OperatorTok{\}} - \OperatorTok{\}} - - \ConstantTok{Ok}\NormalTok{(result)} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Key guarantee}: Conflicts are bugs. If footprints were correct, -no two rewrites should write different values to the same key. - -\subsection{6.6 Runtime Enforcement: -FootprintGuard}\label{runtime-enforcement-footprintguard} - -\texttt{FootprintGuard} is the runtime mechanism that validates every -graph access and emitted op against the declared footprint. - -\subsubsection{Read Enforcement}\label{read-enforcement} - -Read enforcement is implemented via \texttt{GraphView::new\_guarded()}, -which wraps the underlying \texttt{GraphView} with an intercepting layer. -Every accessor call---\texttt{node()}, \texttt{edges\_from()}, -\texttt{node\_attachment()}, etc.---is checked against the footprint's -declared read sets (\texttt{n\_read}, \texttt{e\_read}, \texttt{a\_read}). -An access to an undeclared resource triggers a \texttt{FootprintViolation} -panic. - -\subsubsection{Write Enforcement}\label{write-enforcement} - -Write enforcement uses a post-hoc \texttt{check\_op()} strategy: every op -emitted into the \texttt{TickDelta} is validated against the footprint's -write sets after the executor runs. 
The \texttt{catch\_unwind} boundary is -separate---it catches immediate \texttt{GraphView} read violations so that, -even if the executor unwinds, any ops already emitted can still be -validated. This catches undeclared writes, cross-warp emissions, -unauthorized instance ops, and adjacency violations (edge ops whose -\texttt{from} node is absent from \texttt{n\_write}). - -\subsubsection{Scope and Lifecycle}\label{scope-and-lifecycle} - -The guard is instantiated \emph{per-\texttt{ExecItem}} within a -\texttt{WorkUnit}. Each rule invocation receives its own guard, scoped to -that item's computed footprint. Violations produce panic payloads: -\texttt{FootprintViolation} for basic violations (undeclared access, cross-warp -emission), or \texttt{FootprintViolationWithPanic} when both a violation and -an executor panic occur simultaneously. Both payloads carry structured -information about the offending access. - -\subsubsection{Configuration}\label{guard-configuration} - -The guard is \texttt{cfg}-gated: - -\begin{itemize} -\item \textbf{Active} in debug builds (\texttt{debug\_assertions}) or when - the \texttt{footprint\_enforce\_release} feature is enabled. -\item \textbf{Disabled} when the \texttt{unsafe\_graph} feature is set, - which removes all guard overhead for maximum throughput in production - scenarios where footprints have already been validated. -\end{itemize} - -\textbf{Note:} The \texttt{unsafe\_graph} flag takes precedence and disables all -guard enforcement unconditionally, regardless of \texttt{debug\_assertions} or -\texttt{footprint\_enforce\_release}. - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{7. 
Storage \& Hashing: Content-Addressed -Truth}\label{storage-hashing-content-addressed-truth} - -\subsection{7.1 The GraphStore}\label{the-graphstore} - -Located in \texttt{crates/warp-core/src/graph.rs}: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ GraphStore }\OperatorTok{\{} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) warp\_id}\OperatorTok{:}\NormalTok{ WarpId}\OperatorTok{,} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) nodes}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ NodeRecord}\OperatorTok{\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edges\_from}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{EdgeRecord}\OperatorTok{\textgreater{}\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edges\_to}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{}\textgreater{},} \CommentTok{// Reverse index} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) node\_attachments}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ AttachmentValue}\OperatorTok{\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_attachments}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ AttachmentValue}\OperatorTok{\textgreater{},} - \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_index}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Edge → Source} - 
\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_to\_index}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Edge → Target} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Why BTreeMap everywhere?} - Deterministic iteration order -(sorted by key) - Enables canonical hashing - No HashMap ordering -surprises - -\begin{claudecommentary} -**Claude's Take**: Seven BTreeMaps! This is the price of determinism. Each of these maps is sorted, which means: - -1. Insertions are O(log n) instead of O(1) amortized for HashMap -2. Iteration is always in key order, so hashing is deterministic -3. Memory overhead is slightly higher due to tree structure - -Is it worth it? For Echo's use case, absolutely. The alternative—using HashMap and then sorting before each hash—would be slower and more error-prone. By paying the cost upfront (O(log n) writes), you get guaranteed correctness. - -The multiple indices (`edges_from`, `edges_to`, `edge_index`, `edge_to_index`) look redundant, but they enable O(log n) lookups from any direction. Want all edges *from* a node? `edges_from[node_id]`. Want all edges *to* a node? `edges_to[node_id]`. This is a classic space-time tradeoff. -\end{claudecommentary} - -\subsection{7.2 WSC: Write-Streaming Columnar -Format}\label{wsc-write-streaming-columnar-format} - -For efficient snapshots, Echo uses WSC---a zero-copy, mmap-friendly -format: - -\begin{verbatim} -┌─────────────────────────────────────────────────────────────────┐ -│ WSC SNAPSHOT FILE │ -├─────────────────────────────────────────────────────────────────┤ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ NODES TABLE (sorted by NodeId) │ │ -│ │ ┌──────────┬───────────┬──────────┐ │ │ -│ │ │ NodeRow │ NodeRow │ NodeRow │ ... 
│ │ -│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │ -│ │ └──────────┴───────────┴──────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ EDGES TABLE (sorted by EdgeId) │ │ -│ │ ┌───────────┬───────────┬───────────┐ │ │ -│ │ │ EdgeRow │ EdgeRow │ EdgeRow │ ... │ │ -│ │ │ 128 bytes │ 128 bytes │ 128 bytes │ │ │ -│ │ └───────────┴───────────┴───────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ OUT_INDEX (per-node → range into out_edges) │ │ -│ │ ┌────────────────┬────────────────┐ │ │ -│ │ │ Range (16 B) │ Range (16 B) │ ... │ │ -│ │ └────────────────┴────────────────┘ │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -│ ┌─────────────────────────────────────────────────────────────┐ │ -│ │ BLOB ARENA (variable-length data) │ │ -│ │ Referenced by (offset, length) tuples │ │ -│ └─────────────────────────────────────────────────────────────┘ │ -└─────────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\textbf{Row types} (8-byte aligned): - \texttt{NodeRow}: 64 bytes -(node\_id{[}32{]} + node\_type{[}32{]}) - \texttt{EdgeRow}: 128 bytes -(edge\_id{[}32{]} + from{[}32{]} + to{[}32{]} + type{[}32{]}) - -\texttt{Range}: 16 bytes (start\_le{[}8{]} + len\_le{[}8{]}) - -\begin{claudecommentary} -**Claude's Take**: WSC is gloriously simple. Fixed-size rows, sorted tables, binary search for lookups. No compression, no Parquet-style encoding tricks—just flat bytes on disk that you can mmap and use directly. - -The trade-off is size: WSC files are larger than compressed formats. But the benefit is speed: you can find node #1000 by seeking to `offset + 1000 * 64` and reading 64 bytes. No decompression, no index lookups, no memory allocation. - -For Echo's use case (local caching, fast restarts), this makes sense. 
You're not storing petabytes; you're storing the state of a single simulation that fits in RAM. Optimize for access latency, not storage cost. -\end{claudecommentary} - -\subsection{7.3 Copy-on-Write Semantics}\label{copy-on-write-semantics} - -\textbf{Rule}: During a tick, nothing shared is mutated. - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 11}]{diagrams/tour-11.pdf}} -\caption{Diagram 11} -\end{figure} - -\textbf{Structural sharing}: Only changed segments are newly written. -Unchanged data is referenced by hash. - -\subsection{7.4 Hash Algorithm Details}\label{hash-algorithm-details} - -\textbf{State Root} (BLAKE3, v2): - -\begin{verbatim} -state_root = BLAKE3( - root_id[32] || - instance_count[8, LE] || - for each instance in BTreeMap order: - warp_id_len[8, LE] || - warp_id_bytes || - node_count[8, LE] || - for each node in ascending NodeId order: - node_id[32] || - node_type[32] || - for each outbound edge in ascending EdgeId order: - edge_id[32] || - edge_type[32] || - to_node[32] || - for each attachment: - key_len[8, LE] || - key_bytes || - type_id[32] || - value_len[8, LE] || - value_bytes -) -\end{verbatim} - -\begin{claudecommentary} -**Claude's Take**: The hashing is *exhaustive*. Every node, every edge, every attachment, every byte—all streamed through BLAKE3 in a defined order. There's no "we'll just hash the IDs and trust the content"—everything participates. - -This is expensive! But it's the foundation of Echo's trust model. If two engines produce the same state root, they have the same state. Period. No exceptions, no edge cases. - -The `version_tag` in the commit hash is a nice touch. If Echo ever changes its hashing algorithm (say, BLAKE3 v2 to v3), old and new hashes won't collide. Protocol evolution is built in. -\end{claudecommentary} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{8. 
Worked Example: Tracing a Link -Click}\label{worked-example-tracing-a-link-click} - -Let's trace what happens when a user clicks a link in a hypothetical -WARP-based navigation system. - -\subsection{8.1 The Scenario}\label{the-scenario} - -Imagine a simple site with two pages: - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 12}]{diagrams/tour-12.pdf}} -\caption{Diagram 12} -\end{figure} - -\textbf{User clicks the link}: This should navigate from Home to About. - -\begin{claudecommentary} -**Claude's Take**: This example is deceptively simple—two pages, one link—but it exercises the entire engine: intent ingestion, rule matching, footprint validation, execution, merge, hashing, and emission. - -I'll add my notes at the interesting points. If you're skimming, watch for where the determinism guarantees kick in. -\end{claudecommentary} - -\subsection{8.2 Step 1: Intent Ingestion}\label{step-1-intent-ingestion} - -The click is captured by the viewer and converted to an \textbf{intent}: - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// In the viewer:} -\KeywordTok{let}\NormalTok{ intent }\OperatorTok{=}\NormalTok{ NavigateIntent }\OperatorTok{\{} -\NormalTok{ target\_page}\OperatorTok{:}\NormalTok{ about\_node\_id}\OperatorTok{,} -\NormalTok{ timestamp}\OperatorTok{:}\NormalTok{ deterministic\_tick}\OperatorTok{,} -\OperatorTok{\};} -\KeywordTok{let}\NormalTok{ intent\_bytes }\OperatorTok{=}\NormalTok{ canonical\_encode(}\OperatorTok{\&}\NormalTok{intent)}\OperatorTok{;} - -\CommentTok{// Send to engine:} -\NormalTok{engine}\OperatorTok{.}\NormalTok{ingest\_intent(intent\_bytes)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\textbf{What happens inside \texttt{ingest\_intent}}: - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 13}]{diagrams/tour-13.pdf}} -\caption{Diagram 13} -\end{figure} - -\subsection{8.3 Step 2: Begin -Transaction}\label{step-2-begin-transaction} - 
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ tx }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{begin()}\OperatorTok{;} \CommentTok{// tx = TxId(1)}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{8.4 Step 3: Dispatch Intent}\label{step-3-dispatch-intent}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{engine}\OperatorTok{.}\NormalTok{dispatch\_next\_intent(tx)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{What happens}:
-
-\begin{figure}
-\centering
-\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 14}]{diagrams/tour-14.pdf}}
-\caption{Diagram 14}
-\end{figure}
-
-\subsection{8.5 Step 4: Rule Matching}\label{step-4-rule-matching}
-
-The \texttt{cmd/navigate} rule matches:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// Matcher: Does this intent want navigation?}
-\KeywordTok{fn}\NormalTok{ navigate\_matcher(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \KeywordTok{let} \ConstantTok{Some}\NormalTok{(intent) }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(scope) }\ControlFlowTok{else} \OperatorTok{\{} \ControlFlowTok{return} \ConstantTok{false}\OperatorTok{;} \OperatorTok{\};}
-\NormalTok{ intent}\OperatorTok{.}\NormalTok{type\_id }\OperatorTok{==} \StringTok{"navigate\_intent"}
-\OperatorTok{\}}
-
-\CommentTok{// Footprint: What will we read/write?}
-\KeywordTok{fn}\NormalTok{ navigate\_footprint(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}}\NormalTok{ Footprint }\OperatorTok{\{}
-\NormalTok{ Footprint }\OperatorTok{\{}
-\NormalTok{ n\_read}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[scope}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,}\NormalTok{ viewer\_node]}\OperatorTok{,}
-\NormalTok{ n\_write}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[]}\OperatorTok{,}
-\NormalTok{ a\_read}\OperatorTok{:}
\PreprocessorTok{btreeset!}\NormalTok{[]}\OperatorTok{,} -\NormalTok{ a\_write}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[}\PreprocessorTok{AttachmentKey::}\NormalTok{new(viewer\_node}\OperatorTok{,} \StringTok{"current"}\NormalTok{)]}\OperatorTok{,} - \OperatorTok{..}\KeywordTok{default}\NormalTok{()} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\begin{claudecommentary} -**Claude's Take**: Notice the footprint. We declare that we'll: -- **Read** two nodes: the intent (to get the target) and the viewer (to validate the current page) -- **Write** one attachment: the viewer's `current` attachment - -We're *not* reading any attachments (we just need the node records), and we're *not* writing any nodes (the viewer node already exists). This precision matters—if another rule also wants to write `viewer.current`, there's a conflict. -\end{claudecommentary} - -The rule is enqueued: - -\begin{verbatim} -┌─────────────────────────────────────────────────────────────┐ -│ PendingRewrite │ -├─────────────────────────────────────────────────────────────┤ -│ rule_id: "cmd/navigate" │ -│ scope: 0xABCD... (intent node) │ -│ footprint: { n_read: [intent, viewer], a_write: [current] } │ -│ tx: TxId(1) │ -└─────────────────────────────────────────────────────────────┘ -\end{verbatim} - -\subsection{8.6 Step 5: Commit}\label{step-5-commit} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ (snapshot}\OperatorTok{,}\NormalTok{ receipt}\OperatorTok{,}\NormalTok{ patch) }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{commit\_with\_receipt(tx)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5a. 
Drain}\label{a.-drain} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ rewrites }\OperatorTok{=}\NormalTok{ scheduler}\OperatorTok{.}\NormalTok{drain\_for\_tx(tx)}\OperatorTok{;} -\CommentTok{// Result: [PendingRewrite \{ rule: "cmd/navigate", scope: intent\_node \}]} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5b. Reserve}\label{b.-reserve} - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// Check footprint independence} -\CommentTok{// No conflicts (only one rewrite)} -\CommentTok{// Accepted!} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5c. Execute}\label{c.-execute} - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{fn}\NormalTok{ navigate\_executor(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ delta}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ TickDelta) }\OperatorTok{\{} - \CommentTok{// Read the intent to find target} - \KeywordTok{let}\NormalTok{ intent }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(scope)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;} - \KeywordTok{let}\NormalTok{ target\_page }\OperatorTok{=}\NormalTok{ intent}\OperatorTok{.}\NormalTok{attachment(}\StringTok{"target"}\NormalTok{)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;} - - \CommentTok{// Read current viewer state (for logging/validation)} - \KeywordTok{let}\NormalTok{ viewer }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(}\OperatorTok{\&}\NormalTok{VIEWER\_NODE)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;} - \KeywordTok{let}\NormalTok{ old\_page }\OperatorTok{=}\NormalTok{ viewer}\OperatorTok{.}\NormalTok{attachment(}\StringTok{"current"}\NormalTok{)}\OperatorTok{;} - - \CommentTok{// Emit the change: update viewer\textquotesingle{}s current page} -\NormalTok{ delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{} 
-\NormalTok{ node}\OperatorTok{:}\NormalTok{ VIEWER\_NODE}\OperatorTok{,} -\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{.}\NormalTok{into()}\OperatorTok{,} -\NormalTok{ value}\OperatorTok{:} \PreprocessorTok{AttachmentValue::}\NormalTok{Atom(AtomPayload }\OperatorTok{\{} -\NormalTok{ type\_id}\OperatorTok{:} \StringTok{"node\_ref"}\OperatorTok{.}\NormalTok{into()}\OperatorTok{,} -\NormalTok{ bytes}\OperatorTok{:}\NormalTok{ target\_page}\OperatorTok{.}\NormalTok{to\_bytes()}\OperatorTok{,} - \OperatorTok{\}}\NormalTok{)}\OperatorTok{,} - \OperatorTok{\}}\NormalTok{)}\OperatorTok{;} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{TickDelta now contains}: - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{[} -\NormalTok{ (}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{} -\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,} -\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,} -\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id} - \OperatorTok{\},}\NormalTok{ OpOrigin }\OperatorTok{\{}\NormalTok{ intent\_id}\OperatorTok{:} \DecValTok{1}\OperatorTok{,}\NormalTok{ rule\_id}\OperatorTok{:} \DecValTok{42}\OperatorTok{,}\NormalTok{ match\_ix}\OperatorTok{:} \DecValTok{0}\OperatorTok{,}\NormalTok{ op\_ix}\OperatorTok{:} \DecValTok{0} \OperatorTok{\}}\NormalTok{)} -\NormalTok{]} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5d. 
Merge}\label{d.-merge} - -Only one delta, trivial merge: - -\begin{Shaded} -\begin{Highlighting}[] -\KeywordTok{let}\NormalTok{ merged\_ops }\OperatorTok{=} \PreprocessorTok{vec!}\NormalTok{[} - \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,}\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id }\OperatorTok{\}} -\NormalTok{]}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsubsection{5e. Finalize}\label{e.-finalize} - -Apply to state: - -\begin{Shaded} -\begin{Highlighting}[] -\NormalTok{state}\OperatorTok{.}\NormalTok{set\_attachment(viewer\_node}\OperatorTok{,} \StringTok{"current"}\OperatorTok{,}\NormalTok{ about\_node\_id)}\OperatorTok{;} -\end{Highlighting} -\end{Shaded} - -\subsection{8.7 Step 6: Hash Computation}\label{step-6-hash-computation} - -\begin{Shaded} -\begin{Highlighting}[] -\CommentTok{// State root: BLAKE3 of reachable graph} -\KeywordTok{let}\NormalTok{ state\_root }\OperatorTok{=}\NormalTok{ compute\_state\_root(}\OperatorTok{\&}\NormalTok{state)}\OperatorTok{;} \CommentTok{// 0x7890...} - -\CommentTok{// Patch digest: BLAKE3 of merged ops} -\KeywordTok{let}\NormalTok{ patch\_digest }\OperatorTok{=}\NormalTok{ compute\_patch\_digest(}\OperatorTok{\&}\NormalTok{merged\_ops)}\OperatorTok{;} \CommentTok{// 0xDEF0...} - -\CommentTok{// Commit hash} -\KeywordTok{let}\NormalTok{ commit\_hash }\OperatorTok{=}\NormalTok{ BLAKE3(} -\NormalTok{ VERSION\_TAG }\OperatorTok{||} -\NormalTok{ [parent\_hash] }\OperatorTok{||} -\NormalTok{ state\_root }\OperatorTok{||} -\NormalTok{ patch\_digest }\OperatorTok{||} -\NormalTok{ policy\_id} -\NormalTok{)}\OperatorTok{;} \CommentTok{// 0x1234...} -\end{Highlighting} -\end{Shaded} - -\subsection{8.8 Step 7: Emit to Tools}\label{step-7-emit-to-tools} - -The engine emits a \texttt{WarpDiff} to the session hub: - -\begin{Shaded} 
-\begin{Highlighting}[] -\NormalTok{WarpDiff }\OperatorTok{\{} -\NormalTok{ from\_epoch}\OperatorTok{:} \DecValTok{0}\OperatorTok{,} -\NormalTok{ to\_epoch}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} -\NormalTok{ ops}\OperatorTok{:} \PreprocessorTok{vec!}\NormalTok{[} - \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{} -\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,} -\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,} -\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id} - \OperatorTok{\}} -\NormalTok{ ]}\OperatorTok{,} -\NormalTok{ state\_hash}\OperatorTok{:} \DecValTok{0x7890}\OperatorTok{...,} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\subsection{8.9 Step 8: Viewer Applies -Diff}\label{step-8-viewer-applies-diff} - -The viewer receives the diff and updates its rendering: - -\begin{Shaded} -\begin{Highlighting}[] -\ControlFlowTok{for}\NormalTok{ op }\KeywordTok{in}\NormalTok{ diff}\OperatorTok{.}\NormalTok{ops }\OperatorTok{\{} - \ControlFlowTok{match}\NormalTok{ op }\OperatorTok{\{} - \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value }\OperatorTok{\}} \OperatorTok{=\textgreater{}} \OperatorTok{\{} - \ControlFlowTok{if}\NormalTok{ node }\OperatorTok{==}\NormalTok{ viewer\_node }\OperatorTok{\&\&}\NormalTok{ key }\OperatorTok{==} \StringTok{"current"} \OperatorTok{\{} - \CommentTok{// Update the displayed page} - \KeywordTok{self}\OperatorTok{.}\NormalTok{navigate\_to(value}\OperatorTok{.}\NormalTok{as\_node\_ref())}\OperatorTok{;} - \OperatorTok{\}} - \OperatorTok{\}} -\NormalTok{ \_ }\OperatorTok{=\textgreater{}} \OperatorTok{\{} \CommentTok{/* other ops */} \OperatorTok{\}} - \OperatorTok{\}} -\OperatorTok{\}} -\end{Highlighting} -\end{Shaded} - -\textbf{Result}: The user sees the About page. 
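The determinism payoff of this trace can be demonstrated with a toy replay. Everything below is invented for illustration: `replay`, the `BTreeMap` state, and std's `DefaultHasher` standing in for BLAKE3. The real engine hashes the full reachable graph (section 7.4); the point here is only that sorted iteration plus a fixed hash makes "same intents in, same state root out" mechanical.

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy replay: apply "set attachment" intents to a BTreeMap state, then
/// hash the state. BTreeMap iterates in sorted key order, so the digest
/// does not depend on insertion order. DefaultHasher::new() uses fixed
/// keys, so it is deterministic here (BLAKE3 stand-in).
fn replay(intents: &[(&str, &str)]) -> u64 {
    let mut state: BTreeMap<String, String> = BTreeMap::new();
    for (key, value) in intents {
        // Analogue of WarpOp::SetAttachment { node: viewer, key, value }.
        state.insert((*key).to_string(), (*value).to_string());
    }
    let mut h = DefaultHasher::new();
    for (k, v) in &state {
        // Length-prefix each field so concatenations cannot collide.
        k.len().hash(&mut h);
        k.hash(&mut h);
        v.len().hash(&mut h);
        v.hash(&mut h);
    }
    h.finish()
}

fn main() {
    // Replaying identical intents reproduces the same "state root".
    let run1 = replay(&[("current", "about")]);
    let run2 = replay(&[("current", "about")]);
    assert_eq!(run1, run2);
    // A different navigation produces a different root.
    assert_ne!(run1, replay(&[("current", "home")]));
    println!("state root: {:016x}", run1);
}
```

This is the replay guarantee in miniature: save the intent bytes, feed them back later, and compare one 64-bit digest instead of diffing the whole state.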
- -\begin{claudecommentary} -**Claude's Take**: That's a lot of machinery for one link click! But here's what we get for free: - -1. **Replay**: Save the intent bytes, replay them later, get the exact same state hash -2. **Verification**: Any other engine given the same inputs produces the same commit hash -3. **Undo**: The previous snapshot is still in history; restoring is a pointer swap -4. **Branching**: Fork the state, try a different navigation, compare outcomes - -This is the payoff for all the ceremony. A traditional engine would do `viewer.current = about_page` and call it done. Echo builds a *provable audit trail* around every state change. -\end{claudecommentary} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{9. The Viewer: Observing Echo}\label{the-viewer-observing-echo} - -The \texttt{warp-viewer} crate provides real-time visualization of WARP -graphs. It's built on WGPU for cross-platform GPU rendering. - -\subsection{9.1 Architecture}\label{architecture} - -\begin{figure} -\centering -\pandocbounded{\includegraphics[keepaspectratio,alt={Diagram 15}]{diagrams/tour-15.pdf}} -\caption{Diagram 15} -\end{figure} - -\subsection{9.2 Rendering Pipeline}\label{rendering-pipeline} - -\begin{enumerate} -\def\labelenumi{\arabic{enumi}.} -\tightlist -\item - \textbf{Diff arrives} via session client -\item - \textbf{State cache} updates local graph replica -\item - \textbf{Layout engine} computes node positions (force-directed) -\item - \textbf{Renderer} converts graph to GPU buffers -\item - \textbf{Display} shows updated visualization -\end{enumerate} - -\begin{claudecommentary} -**Claude's Take**: The viewer is *reactive*, not poll-based. It subscribes to diffs from the session hub and updates only when state changes. This means zero CPU usage when the graph is idle. - -The force-directed layout is a classic choice for graph visualization. 
It's not perfect—large graphs can take time to settle—but it's good enough for debugging and exploration. If you need a specific layout, you can inject position attachments and the viewer will respect them. -\end{claudecommentary} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\section{10. Glossary}\label{glossary} - -{\def\LTcaptype{none} % do not increment counter -\begin{longtable}[]{@{} - >{\raggedright\arraybackslash}p{(\linewidth - 2\tabcolsep) * \real{0.3333}} - >{\raggedright\arraybackslash}p{(\linewidth - 2\tabcolsep) * \real{0.6667}}@{}} -\toprule\noalign{} -\begin{minipage}[b]{\linewidth}\raggedright -Term -\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright -Definition -\end{minipage} \\ -\midrule\noalign{} -\endhead -\bottomrule\noalign{} -\endlastfoot -\textbf{WARP} & Worldline Algebra for Recursive Provenance---Echo's core -graph model \\ -\textbf{Tick} & One complete cycle of the engine (begin → apply → commit -→ hash → record) \\ -\textbf{Snapshot} & Immutable point-in-time capture of graph state \\ -\textbf{Footprint} & Declaration of resources a rule will read/write \\ -\textbf{BOAW} & Bag of Autonomous Workers---parallel execution model \\ -\textbf{TickDelta} & Accumulated operations from rule execution \\ -\textbf{State Root} & BLAKE3 hash of the entire graph \\ -\textbf{Commit Hash} & BLAKE3 hash of (state root + patch + metadata) \\ -\textbf{WarpInstance} & A graph-within-a-graph, enabling recursive -composition \\ -\textbf{WSC} & Write-Streaming Columnar---Echo's snapshot file format \\ -\textbf{GraphView} & Read-only handle to graph state for rule -executors \\ -\textbf{PendingRewrite} & Queued rule application awaiting commit \\ -\end{longtable} -} - -\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} - -\begin{claudecommentary} -**Final Thoughts from Your Tour Guide** - -Echo is not a simple system. It's a *principled* system built on hard-won lessons about determinism, reproducibility, and trust. 
- -What I find most impressive isn't any single feature—it's the coherence. Every piece reinforces the others: -- BTreeMaps enable deterministic hashing -- Footprints enable parallel execution -- Parallel execution requires immutable GraphView -- Immutable GraphView enables copy-on-write -- Copy-on-write enables cheap branching -- Cheap branching enables "what if?" queries - -Pull one thread and the whole tapestry unravels. This is integrated design, not a collection of independent features. - -Is Echo perfect? No. The footprint model requires discipline. The ceremony adds latency. The BTreeMaps trade speed for determinism. But for applications where *provability* matters—games with replays, simulations with audits, collaborative tools with conflict resolution—Echo offers something rare: a foundation you can trust. - -Thanks for joining me on this tour. May your state roots always match. - -— Claude -\end{claudecommentary} - -\backmatter -\end{document} diff --git a/docs/archive/tasks.md b/docs/archive/tasks.md deleted file mode 100644 index 4fb91c16..00000000 --- a/docs/archive/tasks.md +++ /dev/null @@ -1,10 +0,0 @@ - - - -# WARP View Protocol Tasks [MOVED] - -This checklist moved into the WVP spec so the tasks live alongside the protocol definition and status notes; the target includes the complete v0 implementation checklist. - -Retention policy: this stub will remain as a redirect until the next docs audit pass, then be removed once inbound links are cleaned up. - -- [/spec-warp-view-protocol#implementation-checklist-v0](/spec-warp-view-protocol#implementation-checklist-v0) diff --git a/docs/archive/tasks/TASKS.md b/docs/archive/tasks/TASKS.md deleted file mode 100644 index 92741a83..00000000 --- a/docs/archive/tasks/TASKS.md +++ /dev/null @@ -1,203 +0,0 @@ - - - - -# Tasks: docs/polish-41 - -CodeRabbit review comments — all 66 remaining items. Nothing gets pushed until -every box is checked. 
- ---- - -## Group A: Verify Local Fixes (32 items) - -These files already have local edits. Verify each comment is fully addressed, -then check the box. If a comment is only _partially_ addressed, move it to -Group B. - -### spec-scheduler.md (5 items — all locally fixed) - -- [x] **[MAJ] line 50:** `phases` cannot be both required and defaulted. - Made `phases` optional with `?` suffix. Registration already had `?? ["update"]` fallback. Contract is now consistent. -- [x] **[MAJ] line 143:** Dedup rule broken by unconditional `inDegree` bumps. - Added `.has()` guard before `.add()` + inDegree increment in both after/before loops. -- [x] **[MAJ] line 183:** `initialize` never runs for systems added after tick 1. - Changed to run INITIALIZE unconditionally every tick; only systems with `status: "pending"` execute. -- [x] **[MAJ] line 278:** `unpauseable` is not a conflict predicate. - Removed `and not unpauseable` from batching condition. Unpauseable only affects pause handling. -- [x] **[MIN] line 406:** Wrong warp-core type name. - Changed `FootprintInfo` → `Footprint` in open questions. - -### spec-editor-and-inspector.md (7 items — 3 locally fixed, 4 need work) - -- [x] **[MAJ] line 6:** Broken cross-reference links. - Fixed `/guide/warp-primer` → `guide/warp-primer.md` and `docs/spec-time-streams...` → `spec-time-streams...`. -- [x] **[CRIT] line 36:** `object` typing too permissive. - Changed `payload: object` → `payload: unknown`. -- [x] **[MAJ] line 40:** Draft note needs exact missing artifacts. - Updated to name `types.ts` and `registry.ts` with specific missing types. -- [x] **[CRIT] line 43:** Sorting algorithm for deterministic frame ordering unspecified. - **Done:** Added normative paragraph specifying stable sort ascending by `(tick, frameType)`, unsigned integer comparison for tick, UTF-8 lexicographic comparison for frameType, stable tie-breaking by insertion order. Applies to both in-memory buffer and JSONL log. 
-- [x] **[MAJ] line 51:** "Signed session token" undefined. - **Done:** Added "Session Token Format" subsection specifying HMAC-SHA256 over `{sessionId, capabilities, issuedAt, expiresAt}`, `base64url` wire format, and 401 rejection semantics. -- [x] **[MAJ] line 62:** `filter` field structure undefined. - **Done:** Added "Filter Semantics" subsection: flat key-value map, exact-match AND-combined predicates, unknown keys silently ignored, with example. -- [x] **[MAJ] line 96:** `producer` return type too loose. - **Done:** Changed `producer` return type from `object` to `unknown`. - -### xtask/src/main.rs (1 item — locally fixed) - -- [x] **[MAJ] line 791:** SPDX repair ignores `--root`. - Scoped `ensure_spdx.sh` to pass `md_files` as positional args instead of running repo-wide. - -### memorials/2026-01-18-phase4-rubicon.md (3 items — 1 fixed, 2 need work) - -- [x] **[MIN] line 111:** Emphasis half-fixed. - **Done:** Prettier enforces underscore emphasis (`_..._`), overriding asterisks. `_Alea iacta est_.` is the correct form under this repo's prettier config. -- [x] **[CRIT] line ~21:** Revert underscore emphasis to asterisks. - **Done:** `base + ops = next` is a computational formula — code span is correct. Kept as-is per user decision. -- [x] **[MIN] line ~111:** Foreign phrase requires italics, not emphasis. - **Done:** Changed to `<i>Alea iacta est.</i>` (semantic HTML for foreign phrase, avoids prettier underscore/asterisk conflicts). - -### spec-merkle-commit.md (3 items — 2 fixed, 1 partially) - -- [x] **[MIN] line 6:** Root-relative link. - Fixed `/guide/warp-primer` → `guide/warp-primer.md`. -- [x] **[MAJ] line 78:** Parent count validation in `compute_commit_hash_v2()`. - Added MUST-validate sentence. -- [x] **[TRIV] line 203:** Consolidate empty digest definition. - `EMPTY_LEN_DIGEST` constant defined at line 195 with cross-reference to engine's `DIGEST_LEN0_U64`. Invariants section (lines 201-203) explains the semantic distinction from `blake3(b"")`.
- -### Other locally-fixed files (13 items) - -- [x] **[MIN] docs/adr/ADR-0004-No-Global-State.md:178** — Over-escaped `install\_\*` in code block. - **Done:** Prettier re-escapes underscores/asterisks inside ` ```markdown ` fences (it formats the content as markdown). The escapes are cosmetic — they render identically to unescaped versions in markdown renderers. Accepted as prettier-enforced. -- [x] **[CRIT] docs/archive/spec-geom-collision.md:34** — Broken cross-ref `SPEC_DETERMINISTIC_MATH.md`. Fixed → `spec-deterministic-math.md`. -- [x] **[CRIT] docs/notes/scheduler-optimization-followups.md:30** — Proptest missing. - **Done:** Added 3 proptests to `scheduler.rs`: `proptest_drain_matches_btreemap_reference` (fuzzes both sort paths against BTreeMap reference, n=1..2048), `proptest_insertion_order_independence` (verifies drain output is order-invariant), `threshold_boundary_determinism` (exercises n=1023/1024/1025). Also fixed a pre-existing radix sort bug: `bucket16` scope pair index was inverted (LSD passes processed MSB-first instead of LSB-first), causing comparison-sort and radix-sort paths to produce different orderings at the SMALL_SORT_THRESHOLD boundary. -- [x] **[MAJ] docs/notes/scheduler-optimization-followups.md:65** — Radix sort docs incomplete. - **Done:** Added comprehensive "Radix Sort Internals" subsection documenting: `RewriteThin` layout, 20-pass rationale, 16-bit digit trade-off table, LSD stability requirement, pass sequence diagram, `bucket16` digit extraction, three-phase counting sort algorithm, ping-pong buffer pattern, and threshold justification. -- [x] **[MAJ] docs/notes/scheduler-optimization-followups.md:201** — Ambiguous benchmark note. Fixed. -- [x] **[MIN] docs/spec-ecs-storage.md:6** — Root-relative link. Fixed. -- [x] **[MIN] docs/spec-geom-collision.md:7** — Vague deferral. Fixed. -- [x] **[MIN] docs/spec-mwmr-concurrency.md:6** — Broken link. Fixed. -- [x] **[MIN] docs/spec-mwmr-concurrency.md:51** — Name "Theorem A". 
- **Done:** Replaced with "Skeleton-plane Tick Confluence theorem (Paper II, §6, Thm. 6.1)" — the formal statement that any two serialisations of a scheduler-admissible batch yield isomorphic successors. -- [x] **[MAJ] docs/spec-warp-confluence.md:66** — Signing canonicalization underspecified. - **Done:** Added "Signing Canonicalization" normative subsection with exact 8-field canonical byte sequence (root_hash, parent_hash, diff_count, diff_hashes, signer_id, capability_count, capabilities, timestamp), encoding types, and MUST-reject clause. -- [x] **[MIN] docs/spec-warp-confluence.md:6** — Root-relative link. Fixed. -- [x] **[MAJ] docs/spec-world-api.md:6** — Broken primer link. Fixed. -- [x] **[MAJ] docs/spec-world-api.md:~92** — Version management too vague. - **Done:** Added "Breaking-Change Criteria" (4 criteria) and "Deprecation Timeline" (3-phase: announce → no-op → remove, minimum 2 minor releases or 90 days). - ---- - -## Group B: New Work Required (34 items) - -These files have no local changes yet. Each needs investigation + fix. - -### spec-branch-tree.md (10 items — spec completeness) - -This spec has significant gaps that CR flagged. Every item relates to -determinism: undefined types/formulas mean implementations could diverge. - -- [x] **[MAJ] line 36:** Define `ReadKey` and `WriteKey` as formal interfaces. - **Done:** Defined `AccessKey = { slot: u32, fieldPath?: CanonicalFieldPath }` with `ReadKey`/`WriteKey` aliases. Added `QualifiedKey` for cross-scope use. Documented layering: Aion `Del/Use` → confluence `R/W/D/A` → ECS `slot+fieldPath`. -- [x] **[MAJ] line 60:** Formalize `MergeStrategyId` type. - **Done:** Extensible namespaced string (`core:lww`, `core:sum`, etc.). Non-core strategies require resolver manifest digest. Removed `domainResolver` (escape hatch, not a strategy). Plugin loading ABI deferred to post-Phase 0. -- [x] **[CRIT] line 116:** Hash formula references non-existent field. 
- **Done:** Extracted `TimelineNodeCore` (hashable subset: `parents`, `branchId`, `chronos`, `snapshotId`, `diffId`). Replaced `parentId + mergeParents?` with `parents: Hash[]`. Moved `aionWeight`/`strainDelta` to `TimelineMetadata` sidecar. Formula: `id = BLAKE3(canonicalEncode(TimelineNodeCore))`. -- [x] **[MAJ] line 177:** Define entropy formula weights. - **Done:** Renamed to "branch strain." Configurable per-world, fixed-point integers. Defaults: wF=5, wC=25, wP=50, wM=15, wX=20. Raw total in [0,∞), floor at 0. `imports` = cross-branch messages. -- [x] **[MAJ] line 199:** Clarify byte-level encoding in seed derivation. - **Done:** Domain-separated canonical encoding: `BLAKE3(canonicalEncode({ domain: "echo.branch-seed.v1", seed: 32 bytes, branchId: length-prefixed UTF-8, chronos: u64 LE }))`. -- [x] **[MAJ] line 206:** Clarify all GC modes are deterministic. - **Done:** Three explicit modes: `periodic`, `checkpoint`, `none`. No adaptive mode. Split "disabled" vs "deferred." Pin semantics: full transitive reachable closure. -- [x] **[MAJ] line 252:** Define `WorldView` and `GCPolicy` types. - **Done:** `WorldView`: lightweight read-only handle with `chronos`, `schemaLedgerId`, `getChunkVersion()`, `readComponentCanonical()`. `GCPolicy`: `mode` + `intervalTicks` + `retainDepth` + `retainBaseSnapshots` + `respectPins`. -- [x] **[MIN] line 300:** Define causal relation semantics. - **Done:** Layered model: within-node (Paper II tick-event poset), cross-tick (parents/chronos ancestry), cross-branch (merge parents). Network frontier causality out of scope. Defined all four edge relations. Added note: Chronos is per-branch, not global. -- [x] **[MIN] line 373:** Specify entropy bounds and initialization. - **Done:** Renamed to "strain." Genesis at 0. Fork inherits parent total. Merge continues from target + delta. Per-node canonical, BranchRecord caches head. No reset on collapse. Saturation → gameplay policy, not scalar behavior. 
-- [x] **[MIN] line 390:** Define capability token structure. - **Done:** Forward-reference to `spec-capabilities-and-security.md`. Branch-tree stores `CapabilityAssertion { tokenDigest, scope }`. Violations emit deterministic error nodes. - -### spec-temporal-bridge.md (1 item) - -- [x] **[CRIT] line 115:** API exposes opaque `NodeId`s but lifecycle rules dereference them as full nodes. - **Done:** Added `getNode(id: NodeId): TimelineNode` to `BridgeContext`. Added disambiguation note clarifying timeline `NodeId` (hex-encoded content-addressed `Hash`) vs echo-graph `NodeId` (`u64`). API keeps `NodeId` as parameter type; bridge resolves internally via `getNode()`. - -### spec-runtime-config.md (1 item) - -- [x] **[CRIT] line ~54:** `world:config` capability undefined. - **Done:** Added `"world:config"` to `Capability` union type and `Runtime config` row to Capability Scopes table in `spec-capabilities-and-security.md`. Removed "not yet defined" warning from `spec-runtime-config.md` line 61. - -### spec-serialization-protocol.md (2 items) - -- [x] **[MIN] line 6:** Root-relative link. Fix `/guide/eli5` → `guide/eli5.md`. - **Done:** Already `guide/eli5.md` — verified correct. -- [x] **[MAJ] line 141:** `payloads` field semantics and serialization order incomplete. - **Done:** Specified `BlockManifest` encoding: declaration-order sections, each with `sectionTag (uint8)` + `count (uint32 LE)` + sorted hashes. Empty sections encoded with `count = 0` and tag always present. - -### spec-time-streams-and-wormholes.md (2 items) - -- [x] **[MAJ] line 189:** StreamAdmissionDecision canonical field ordering. - **Done:** Line 192 has normative MUST language: "Implementations MUST NOT reorder fields." Verified sufficient. -- [x] **[TRIV] line 509:** Narrative example numbering creates maintenance burden. - **Done:** Converted numbered steps (1–4) to bold-labeled bullets. 
- -### SPEC-0002-descended-attachments-v1.md (5 items — formatting consistency) - -- [x] **[MAJ] line 3:** Blank-line policy chaos. - **Done:** Ran prettier (already formatted). Verified blank lines after all headings are consistent. -- [x] **[MAJ] line 52:** AttachmentPlane consolidation inconsistency. - **Done:** Unified all enum definitions to sub-bullet style: `AttachmentPlane`, `AttachmentOwner`, `AttachmentValue`, and `PortalInit` all use `name:` header + sub-bullet variants with em-dash descriptions. -- [x] **[MAJ] line ~53:** Enum variant nesting inconsistency. - **Done:** Covered by the enum style unification above. -- [x] **[TRIV] line 192:** Algorithm formatting — verify logical structure preserved. - **Done:** Verified S3 DAG slicing algorithm: 4 steps intact, step 2 has (a)/(b) sub-cases, step 4 has portal-chain closure. Logical structure correct. -- [x] **[TRIV] line 226:** Header spacing — acceptable but inconsistent. - **Done:** Prettier confirms formatting is correct. All headings followed by blank lines. - -### SPEC-0003-dpo-concurrency-litmus-v0.md (1 item) - -- [x] **[MIN] line 45:** Calling read/read overlap "disjoint" is wrong. - **Done:** Changed `remain disjoint` → `are non-conflicting` on line 61. - -### Other docs (12 items) - -- [x] **[TRIV] docs/DETERMINISTIC_MATH.md:52** — Tighten "very small numbers" to "subnormal values (magnitude < 2^−126)". - **Done:** Updated line 47 with `subnormal values (magnitude < 2⁻¹²⁶)`. -- [x] **[TRIV] docs/branch-merge-playbook.md:3** — Remove unexplained blank line or add markdownlint disable comment. - **Dismissed:** Blank line after SPDX header is repo-wide prettier convention. -- [x] **[TRIV] docs/branch-merge-playbook.md:37** — Same: extra blank line before code block. - **Dismissed:** Blank line before code fence is standard markdown formatting. -- [x] **[TRIV] docs/branch-merge-playbook.md:44** — Explain or revert indentation changes in code block. 
- **Dismissed:** Code block uses correct 4-space TypeScript indentation. -- [x] **[MIN] docs/branch-merge-playbook.md:58** — Add brief inline definition of "Aion" (Echo's timeline concept) on first use. - **Done:** Added parenthetical `(Echo's per-node timeline weight)` after "Aion". -- [x] **[MAJ] docs/guide/cargo-features.md:10** — Provenance note says "check individual crates" but doesn't give a verification command that actually works. Either provide a real command or remove the claim. - **Done:** Replaced vague text with a concrete `cargo metadata | jq` command that lists all workspace feature flags. -- [x] **[MIN] docs/guide/warp-primer.md:128** — Emphasis style still inconsistent. Normalize all italic to `_underscores_` (the file's majority style) or all to `*asterisks*`. - **Done:** Ran prettier — file already normalized (reported unchanged). -- [x] **[MAJ] docs/notes/claude-musings-on-determinism.md:1** — SPDX `MIND-UCAL-1.0` is non-standard. This is project-wide (327 files). Decide: change all 327 to `LicenseRef-MIND-UCAL-1.0`, or document the convention and dismiss. - **Done:** Renamed across 328 files (336 occurrences) in commit `a4d4101`. -- [x] **[TRIV] docs/notes/claude-musings-on-determinism.md:3** — Blank line after copyright — justified by prettier. Already verified as project-wide convention. Dismiss with explanation. - **Dismissed:** Blank line after copyright is repo-wide prettier convention. -- [x] **[CRIT] docs/spec-knots-in-time.md:~75** — `SweptVolumeProxy` → `SweepProxy` and module path. - **Done:** Round-1 fix verified. Line 75 uses `SweepProxy` (canonical name). `warp-geom/src/temporal/manifold.rs:13` matches. -- [x] **[MAJ] docs/tasks/issue-canonical-f32.md:41** — Expand serde acceptance criteria: add NaN canonicalization and subnormal flushing test items. - **Done:** Expanded single checkbox into 4 separate acceptance criteria: NaN canonicalization, subnormal flushing, serde NaN roundtrip, and serde subnormal roundtrip. 
-- [x] **[MIN] docs/warp-math-claims.md:8** — Emphasis style change (asterisk → underscore). Revert to match file's dominant style. - **Done:** Ran prettier — file already normalized (reported unchanged). - ---- - -## Execution Order - -1. **Verify Group A** — confirm all local fixes are correct. -2. **Group B: Critical items first** — spec-branch-tree hash formula, spec-temporal-bridge NodeId, spec-runtime-config capability, spec-knots verification. -3. **Group B: Major items** — spec completeness gaps, formatting passes. -4. **Group B: Minor/Trivial** — emphasis, phrasing, blank lines. -5. **Final lint pass** — `cargo xtask docs-lint`, `cargo clippy -p xtask`. -6. **Single commit, single push.** diff --git a/docs/archive/tasks/WASM-TASKS.md b/docs/archive/tasks/WASM-TASKS.md deleted file mode 100644 index c76bdd8a..00000000 --- a/docs/archive/tasks/WASM-TASKS.md +++ /dev/null @@ -1,48 +0,0 @@ - - - -# WASM Task Checklist - -Policy: write failing tests first, then implement; check off tasks only when tests and docs are updated. - -## P0 — Bootstrap & Scaffold - -- [x] Scaffold `specs/spec-000-rewrite` Leptos+Trunk app (CSR) with `index.html`, `src/lib.rs`, panic hook, hot-reload. -- [x] Add workspace membership and `make spec-000-{dev,build}` helpers. -- [ ] Failing check: `cargo check -p spec-000-rewrite --target wasm32-unknown-unknown` in CI job (Trunk build). - -## P1 — Kernel Bindings & Types - -- [x] Add `wasm-bindgen` feature to kernel crate (or shim crate) and expose minimal WARP graph/rewrite API (add node, set field, connect, tombstone, materialize). -- [x] Create shared DTO crate (`echo-wasm-abi`) with serde + wasm-bindgen-friendly types for graph and rewrite log; reuse in UI. -- [x] wasm-bindgen unit tests exercising add/set/connect/tombstone round-trip serialization. - -## P1 — UI MVP (Living Spec) - -- [ ] Render graph (SVG/canvas) from serialized WARP graph; simple layout. 
-- [ ] Render rewrite log; click-to-time-travel replays history via kernel API. -- [ ] “Apply Rewrite” panel hooks to kernel methods; updates view reactively. -- [ ] Failing tests: screenshot/DOM snapshot via Playwright (Trunk serve) or headless wasm-bindgen tests for state transitions. - -## P2 — Certification & Win Condition - -- [ ] Implement completion detector that issues a completion hash/badge when the user reaches target state (Spec-000). -- [ ] Persist/emit completion hash for PR inclusion; document the flow. -- [ ] Failing test: deterministic hash for canonical walkthrough sequence. - -## P2 — Tooling & CI - -- [ ] GitHub Action: build spec-000 with Trunk (wasm32-unknown-unknown), cache target/Trunk, artifact the dist. -- [ ] Size guard: assert wasm bundle < configured budget; fail if exceeded. -- [ ] Lint: add `cargo fmt`/`clippy` (wasm target) gate for spec crates. - -## P3 — UX & Resilience - -- [ ] Error surface: UI shows kernel errors (invalid rewrite, payload too large). -- [ ] Offline-first: bundle assets, graceful fallback when no network. -- [ ] Performance pass: incremental graph diffing instead of full redraw; fast layout for ≤200 nodes. -- [ ] Accessibility: keyboard navigation for rewrites; ARIA on controls. - -## P3 — Future Spec Template - -- [ ] Turn spec-000 into a `spec-template/` scaffold script for future specs (copy, rename, wire to new kernel module, add win condition). diff --git a/docs/archive/tasks/issue-canonical-f32.md b/docs/archive/tasks/issue-canonical-f32.md deleted file mode 100644 index b2bc7646..00000000 --- a/docs/archive/tasks/issue-canonical-f32.md +++ /dev/null @@ -1,44 +0,0 @@ - - - -# Title: feat(warp-core): Implement strict determinism for F32Scalar (NaNs, Subnormals) - -## Summary - -Upgrade `F32Scalar` to enforce strict bit-level determinism across all platforms by handling "freaky numbers" (NaN payloads and subnormals) in software. Currently, `F32Scalar` only canonicalizes `-0.0`. 
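A minimal sketch of the software handling described above, using only the standard library. The free-function shape and the name `canonicalize_f32` are illustrative; in the real crate this logic belongs inside `F32Scalar::new`:

```rust
/// Illustrative sanitization pass: collapse every NaN to one canonical
/// quiet NaN, flush subnormals to +0.0, and map -0.0 to +0.0.
fn canonicalize_f32(input: f32) -> u32 {
    const CANONICAL_QNAN: u32 = 0x7fc0_0000;
    if input.is_nan() {
        // Any payload (quiet, signaling, negative) maps to one bit pattern.
        CANONICAL_QNAN
    } else if input == 0.0 {
        // Covers both +0.0 and -0.0 (they compare equal); emit +0.0 bits.
        0.0_f32.to_bits()
    } else if input.abs() < f32::MIN_POSITIVE {
        // Exponent bits are zero but mantissa is non-zero: subnormal.
        // Flush to +0.0 regardless of sign.
        0.0_f32.to_bits()
    } else {
        input.to_bits()
    }
}

fn main() {
    assert_eq!(canonicalize_f32(f32::NAN), 0x7fc0_0000);
    assert_eq!(canonicalize_f32(-f32::NAN), 0x7fc0_0000);
    assert_eq!(canonicalize_f32(-0.0), 0);
    assert_eq!(canonicalize_f32(1.0e-40), 0); // subnormal flushes to +0.0
    assert_eq!(canonicalize_f32(1.5), 1.5_f32.to_bits());
}
```

The canonical quiet-NaN pattern `0x7fc00000` and the flush-to-`+0.0` rule match the requirements spelled out below; the ordering of the checks (NaN first, zeros, then subnormals) is one reasonable choice, not a normative one.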
- -## Problem - -IEEE 754 floating-point behavior varies across architectures (x86, ARM, WASM): - -1. **NaN Payloads:** `0.0/0.0` produces different bit patterns on different CPUs. -2. **Subnormals:** Some environments flush subnormals to zero (FTZ/DAZ), others do not. -3. **Serialization:** Raw deserialization can bypass invariants if not carefully guarded (fixed in `scalar.rs`, but needs verifying). - -This divergence breaks the determinism guarantee required for Echo's simulation loop. - -## Requirements (Strict Policy) - -Modify `F32Scalar::new(f32)` to apply the following transformations: - -1. **NaN Canonicalization:** If `input.is_nan()`, replace it with a single canonical quiet NaN value (e.g., `0x7fc00000`). -2. **Subnormal Flushing:** If `input` is subnormal (exponent is 0 but mantissa is non-zero), replace it with `+0.0` (preserving sign canonicalization). -3. **Signed Zero:** Continue to map `-0.0` to `+0.0`. - -## Test Plan - -Enable the commented-out tests in `crates/warp-core/tests/determinism_policy_tests.rs`: - -- `test_policy_nan_canonicalization`: Verify positive/negative/signaling/payload NaNs all map to the canonical bits. -- `test_policy_subnormal_flushing`: Verify small/large/negative subnormals map to `+0.0`. -- `test_policy_serialization_guard`: Verify deserializing `-0.0` results in `+0.0`. - -## Definition of Done - -- `F32Scalar::new` implements the full sanitization logic. -- All tests in `determinism_policy_tests.rs` are uncommented and passing. -- [ ] Enable `test_policy_nan_canonicalization` in `determinism_policy_tests.rs` — verify positive/negative/signaling/payload NaNs all map to canonical `0x7fc00000`. -- [ ] Enable `test_policy_subnormal_flushing` in `determinism_policy_tests.rs` — verify subnormals (exponent 0, mantissa ≠ 0) map to `+0.0`. -- [ ] Add `test_policy_serde_nan_roundtrip` — verify that serializing a NaN via `serde` and deserializing produces the canonical NaN, not a platform-specific payload. 
-- [ ] Add `test_policy_serde_subnormal_roundtrip` — verify that deserializing a subnormal value produces `+0.0`, not the raw subnormal. -- Benchmarks confirm acceptable overhead. diff --git a/docs/archive/telemetry-graph-replay.md b/docs/archive/telemetry-graph-replay.md deleted file mode 100644 index 4b39e78f..00000000 --- a/docs/archive/telemetry-graph-replay.md +++ /dev/null @@ -1,72 +0,0 @@ - - - -# Telemetry: Graph Snapshot for Repro/Replay (Design Note) - -Status: Draft • Scope: warp-core (dev-only feature) - -## Problem - -When a conflict or unexpected outcome occurs during a transaction, logs with counts are helpful but insufficient for reproduction. We want the option to capture a minimal, deterministic snapshot of the reachable subgraph from `root` at key points (e.g., pre-commit or on conflict) so we can replay locally and bisect. - -## Approach - -- Add a feature-gated telemetry event `graph_snapshot` that emits the canonical, stable serialization of the reachable subgraph. -- Trigger points (feature-controlled): - - On first conflict within a tx (sampled or rate-limited) - - On commit (debug builds only) -- Consumers can store the JSONL stream and later reconstruct the exact state to reproduce behavior. - -## Constraints - -- Deterministic ordering and bytes: leverage the existing snapshot hash traversal and encoding rules. Do NOT invent a second ordering. -- Size control: - - Emit only the reachable subgraph from `root`. - - Optionally redact payloads or cap payload size via a `telemetry_max_payload_bytes` knob. - - Allow sampling (e.g., `N` per minute) to keep overhead bounded. -- Security: feature must be off by default; never ship in production. Payloads may contain domain data. 
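The sampling knob called out above can be a fixed-window counter. A minimal sketch, assuming a one-minute window and the hypothetical name `SnapshotSampler`:

```rust
use std::time::Instant;

/// Emit at most `limit` snapshots per fixed one-minute window; everything
/// past the limit is dropped so telemetry overhead stays bounded.
struct SnapshotSampler {
    limit: u32,
    window_start: Instant,
    emitted: u32,
}

impl SnapshotSampler {
    fn new(limit: u32) -> Self {
        Self { limit, window_start: Instant::now(), emitted: 0 }
    }

    /// Returns true when a `graph_snapshot` event may be emitted now.
    fn should_emit(&mut self) -> bool {
        if self.window_start.elapsed().as_secs() >= 60 {
            // New window: reset the counter.
            self.window_start = Instant::now();
            self.emitted = 0;
        }
        if self.emitted < self.limit {
            self.emitted += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut sampler = SnapshotSampler::new(2);
    assert!(sampler.should_emit());
    assert!(sampler.should_emit());
    assert!(!sampler.should_emit()); // third snapshot in the window is dropped
}
```

A real implementation would live behind the same `telemetry` feature gate, and would likely key the window to the tick counter rather than wall-clock time, since wall-clock sampling is itself nondeterministic.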
- -## Event Shape (JSONL) - -```json -{ - "timestamp_micros": 1234567890, - "tx_id": 42, - "event": "graph_snapshot", - "root": "<NodeId>", - "snapshot_hash": "<hash hex>", - "nodes": [ - { "id": "<NodeId>", "ty": "<TypeId>", "payload": "<base64>" } - ], - "edges": [ - { - "id": "<EdgeId>", - "from": "<NodeId>", - "to": "<NodeId>", - "ty": "<TypeId>", - "payload": "<base64>" - } - ] -} -``` - -- Ordering: nodes ascending by `NodeId`, edges grouped by `from` with each group ascending by `EdgeId`. -- Payload encoding: identical to runtime wire format (length-prefixed little-endian), then base64 for JSON safety. - -## API Sketch - -- `telemetry::graph_snapshot(tx, &GraphStore, &root, redact_payloads: bool)` -- Compiles behind `feature = "telemetry"` only. -- Reuses internal snapshot traversal to ensure identical reachability set and order. - -## Replay - -- CLI helper (`warp-cli`) to read JSONL and reconstruct an in-memory `GraphStore` for any `graph_snapshot` event. -- Verify by recomputing the `snapshot_hash` and comparing with the logged value. - -## Next Steps - -- [ ] Add serialization helper that walks the same reachable set as `compute_snapshot_hash`. -- [ ] Feature-gate emitting on conflict (first per tx) and on commit (debug only). -- [ ] CLI command: `warp-cli replay --from telemetry.jsonl --tx 42`. -- [ ] Document redaction policy and sampling knobs. diff --git a/docs/archive/testing-and-replay-plan.md b/docs/archive/testing-and-replay-plan.md deleted file mode 100644 index ed970ebc..00000000 --- a/docs/archive/testing-and-replay-plan.md +++ /dev/null @@ -1,115 +0,0 @@ - - - -# Testing & Replay Plan (Phase 0.5) - -Defines how Echo proves determinism end-to-end: automated tests, replay tooling, and golden datasets. - ---- - -## Replay CLI Contract - -`echo replay --from <NodeId> --until <NodeId> --verify` - -- Loads block manifest spanning `from` → `until`. -- Replays diffs using canonical decoding, enforcing PRNG spans and capability rules. -- Verification: recompute `worldHash` at each node and compare with recorded hash; mismatches flagged.
- Outputs `VerificationReport` with pass/fail, mismatch details, and entropy trail. - -```ts -interface VerificationReport { - readonly from: NodeId; - readonly until: NodeId; - readonly success: boolean; - readonly mismatches?: readonly Mismatch[]; - readonly stats: { - replayedDiffs: number; - elapsedMs: number; - entropyTrail: number[]; - }; -} -``` - ---- - -## Golden Hash Dataset - -- Maintained under `tests/golden/` with recorded blocks for canonical scenarios (each engine subsystem). -- CI job replays golden datasets across Node, Chromium, WebKit; asserts identical hashes. -- Golden scenarios include: idle world, branching + merge, paradox quarantine, entropy surges. - ---- - -## Differential Merge Checker - -- For any branch merge, store both diff chains and run a comparer ensuring the three-way merge produced the expected result. -- Tool `echo diff-compare --base <node> --a <node> --b <node>` outputs conflict list and merged hash; used in tests. - ---- - -## Entropy Regression Tests - -- Simulate deterministic sequences (forks, merges, paradoxes) and assert entropy meter matches expected values. -- Tests fail if entropy formula or weights change without updating test expectations. - ---- - -## Automation Plan - -Once implemented, the automated test suite will include: - -- PLANNED: `cargo test --package warp-core --features determinism` – runs replay and comparers for golden datasets. -- PLANNED: `cargo test --package warp-core --test paradox` – injects artificial read/write overlaps to validate quarantine behavior. -- PLANNED: `cargo test --package warp-core --test entropy` – verifies entropy observers and metrics. -- PLANNED: `cargo test --package warp-core --test bridge` – covers temporal bridge retro/reroute. -- TODO: Add Criterion-based scheduler benches to CI once implemented (Phase 1 task).
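Several of the lanes above hinge on the same per-node check: replay each recorded diff, recompute the hash, compare against the golden value. A sketch of that fold, with `DefaultHasher` standing in for the canonical `worldHash` computation; every name below is a hypothetical stand-in:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the canonical world hash; the real plan recomputes `worldHash`
// with the engine's hashing, this sketch only needs something deterministic.
fn world_hash(state: &str) -> u64 {
    let mut h = DefaultHasher::new();
    state.hash(&mut h);
    h.finish()
}

/// Replay recorded diffs over an empty world, recompute the hash at each
/// node, and report the index of the first mismatching node.
fn verify_replay(diffs: &[&str], golden: &[u64]) -> Result<(), usize> {
    let mut world = String::new();
    for (i, diff) in diffs.iter().enumerate() {
        world.push_str(diff); // stand-in for applying a canonical diff
        if world_hash(&world) != golden[i] {
            return Err(i);
        }
    }
    Ok(())
}

fn main() {
    let diffs = ["spawn", "move"];
    let golden = [world_hash("spawn"), world_hash("spawnmove")];
    assert_eq!(verify_replay(&diffs, &golden), Ok(()));
    // Corrupt the second recorded hash: the mismatch is flagged at node 1.
    assert_eq!(verify_replay(&diffs, &[golden[0], golden[1] ^ 1]), Err(1));
}
```

A golden dataset then pins one `golden` array per scenario, and a CI failure reports the exact node index where divergence began rather than a bare pass/fail.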
- -### BOAW Compliance Tests (Implemented) - -The BOAW (Base-Overlay-Apply-Write) test harness is now implemented per ADR-0007: - -- `cargo test --package warp-core --test boaw_determinism` – 8 determinism tests with real engine hashes -- `EngineHarness` trait provides a real harness that wraps `warp-core::Engine` -- `BoawSnapshot` captures state for determinism verification -- `boaw/touch` test rule exercises the core rewrite pipeline - -**Phase 3 Progress:** - -- `TickDelta` module now available for collecting ops during execution -- Validation infrastructure ready with `assert_delta_matches_diff()` helper (gated by `delta_validate` feature) - -## Phase 4: SnapshotAccumulator Validation - -Under the `delta_validate` feature, Phase 4 adds a second validation layer: - -1. **Delta-to-diff validation** (Phase 3): `delta.finalize()` ops must match `diff_state()` output _exactly_ (full `WarpOp` equality, including payloads — not just `sort_key()`) -2. **Accumulator validation** (Phase 4): `SnapshotAccumulator` built from `base + ops` must produce the same `state_root` as legacy computation - -Run with: `cargo test -p warp-core --features delta_validate` - -## Phase 5: Read-Only Execution (Complete) - -Phase 5 completes the BOAW execution model transition: - -1. **Read-only execution**: Executors receive `GraphView` (read-only) instead of `&mut GraphStore` -2. **Op emission only**: No GraphStore mutations during execution — rules emit ops to `TickDelta` -3. **Post-execution state update**: State updated after execution via `apply_to_state()` -4. 
**Signature change**: `ExecuteFn` now takes `(&GraphView, &mut TickDelta, &NodeId)` instead of `(&mut GraphStore, &NodeId)` - -This milestone enables: - -- True parallel execution (thread-local deltas, no shared mutable state) -- Removal of `state_before = self.state.clone()` overhead -- Removal of `diff_state()` post-hoc diffing -- Foundation for structural sharing and immutable snapshots - ---- - -## Manual Validation - -- Provide scripts to run long-form simulations (50k ticks) and ensure replay matches. -- Document steps in README for reproducibility. - ---- - -This plan ensures Echo can prove determinism, replayability, entropy stability, and merge correctness across environments. diff --git a/docs/archive/two-lane-abi.md b/docs/archive/two-lane-abi.md deleted file mode 100644 index 4c974c6e..00000000 --- a/docs/archive/two-lane-abi.md +++ /dev/null @@ -1,59 +0,0 @@ - - - -# Two-Lane ABI Design (Control Plane vs. Data Plane) - -Status: **Phase 1 Complete** - -The Echo WASM ABI is split into two distinct logical lanes to separate stable, -schema-driven application logic from the low-level mechanical plumbing of the kernel. - -## 1. Control Plane (The "Handshake" Lane) - -The Control Plane is used at boot time and during structural transitions to ensure -the host and the kernel are speaking the same language. - -### Registry Handshake - -- **`get_registry_info()`**: Returns canonical CBOR bytes containing `schema_sha256_hex`, - `codec_id`, and `registry_version`. -- **Purpose**: The host verifies these fields against its own generated manifest - (from `wesley-generator-vue`) before calling any other functions. -- **Fail-Fast**: If the hash or version mismatches, the host refuses to mount - to prevent undefined behavior and ledger corruption. - -### Metadata Accessors - -- **`get_codec_id()`**, **`get_registry_version()`**, **`get_schema_sha256_hex()`**: - Helper accessors for debugging and runtime inspection. - -## 2. 
Data Plane (The "Execution" Lane) - -The Data Plane handles the high-frequency flow of state changes and information retrieval. - -### Input Lane (Intents) - -- **`dispatch_intent(bytes)`**: Enqueues an opaque, pre-validated command payload - into the kernel's inbox. -- **Envelope**: The host uses `encode_command(op_id, payload)` to wrap app-specific - data in a canonical CBOR structure that the kernel knows how to route. - -### Output Lane (Projection) - -- **`step(budget)`**: Advances the causal clock. Returns a `StepResult`. -- **`drain_view_ops()`**: Returns an array of `ViewOp`s emitted during the - preceding steps. These drive the host UI (e.g., toasts, navigation). - -### Query Lane (Read-Only) - -- **`execute_query(id, vars)`**: Executes a schema-validated, side-effect-free - lookup against the current graph. -- **Purity Guard**: The ABI layer enforces that only operations marked as `Query` - in the registry can be invoked here. - -## 3. Ledger Reconciliation - -The separation of lanes allows the kernel to reconcile the **Intent Log** (Data Plane) -against the **Schema Version** (Control Plane) recorded in the provenance layer. -Future versions will include the `registry_version` in every tick header to allow -for multi-version playback. diff --git a/docs/archive/warp-demo-roadmap.md b/docs/archive/warp-demo-roadmap.md deleted file mode 100644 index b1d47a9a..00000000 --- a/docs/archive/warp-demo-roadmap.md +++ /dev/null @@ -1,83 +0,0 @@ - - - -# WARP Demo Roadmap (Phase 1 Targets) - -This document captures the interactive demos and performance milestones we want to hit as we implement the Rust-based WARP runtime. Each demo proves a key property of Echo’s deterministic multiverse architecture. - ---- - -## Demo 1: Deterministic Netcode - -**Goal:** Show two instances running locally in lockstep and prove graph hash equality every frame. 
- -- Two Echo instances (no network) consume identical input streams generated from a shared seed (deterministic RNG feeding input script). -- Each frame serializes the world graph in canonical order (sorted node/edge IDs, component payload bytes) and hashes it with BLAKE3 to produce the “frame hash”. -- Inspectors display the frame hashes side-by-side and flag divergence immediately. Success = 100% equality across a 10 000-frame run. -- Determinism safeguards: freeze wall clock, mock OS timers, clamp floating-point math to deterministic fixed-point helpers, forbid nondeterministic APIs. -- Output artifact: JSON trace (`frame`, `hash`, `inputs_consumed`) plus a screenshot/video for the showcase. - -## Demo 2: Scheduler Rewrite Benchmark - -**Goal:** Benchmark the rewrite executor under scripted workloads. - -- Criterion-based benches exercise flat, chained, branching, and timeline-flush scenarios (mirrors `docs/scheduler-benchmarks.md`). -- Success criteria: median tick time < 0.5 ms for toy workload (100 entities, 10 rules); percentile tails recorded. -- Bench harness outputs JSON summaries (mean, median, std dev) consumed by the inspector. -- Deterministic PRNG seeds recorded so benches are reproducible across CI machines. - -## Demo 3: Timeline Fork/Merge Replay - -**Goal:** Demonstrate branching timelines, paradox detection, and canonical merges. - -- Start from a baseline snapshot, fork into three branches with scripted rewrites, deliberately introduce a conflict on one branch. -- Inspector view shows divergence tree, entropy deltas, and paradox quarantine in real time. -- Success criteria: merge replay produces the documented canonical hash, paradox branch quarantined with deterministic error log, entropy metrics trend as expected. -- Deliverable: recorded replay plus JSON report showing branch IDs, merge decisions, and resulting hashes. - -## Demo 4: Rhai Live Coding Loop - -**Goal:** Prove Rhai bindings support hot reload without breaking determinism. 
- -- Script registers a system that increments a component each tick; developer edits Rhai code mid-run via CLI hot-reload. -- Engine stages rewrite intents from Rhai through the FFI; after reload, replay the prior ticks to confirm deterministic equivalence. -- Success: frame hashes before/after reload identical when replayed from the same snapshot; inspector shows live diff of system graphs. -- Includes integration test capturing reload latency budget (< 50 ms) and ensuring queued rewrites survive reload boundary. - -## Demo 5: Confluence Sync Showcase - -**Goal:** Synchronise two peers via rewrite transactions, demonstrating deterministic convergence. - -- Peer A applies scripted rewrites while offline, then pushes transactions to Peer B via the Confluence protocol. -- Both peers compute snapshot hashes before/after sync; success when hashes converge with zero conflicts. -- Includes failure injection (duplicate transaction, out-of-order delivery) to show deterministic resolution path. -- Inspector UI plots sync throughput (transactions/sec) and latency. - -## Success Criteria Summary - -- **Frame Hash Integrity:** For Demo 1 and Demo 3, identical BLAKE3 hashes across peers/branches every tick. Any discrepancy fails the demo. -- **Input Stream Discipline:** Inputs recorded as timestamped events with deterministic seeds. Replay harness reuses the same log to verify determinism. -- **Floating-Point Policy:** All demos rely on fixed-point math or deterministic float wrappers; document configuration in README. -- **Performance Targets:** - - Demo 1: tick time ≤ 2 ms on reference hardware (M2 Pro / 32 GB). - - Demo 2: criterion bench median ≤ 0.5 ms; 99th percentile ≤ 1.0 ms. - - Demo 5: sync 10 000 transactions in under 2 s with zero conflicts. 
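The frame-hash discipline above can be made concrete: serialize the world in canonical order (ascending ids, length-prefixed little-endian payloads), hash the byte stream, and compare hashes across instances every frame. Everything below is an illustrative stand-in — a toy FNV-1a fold instead of BLAKE3, and a `HashMap` instead of the real graph store:

```rust
use std::collections::HashMap;

// Sketch of Demo 1's per-frame hash: canonical ordering makes two
// instances holding the same state produce the same bytes, and
// therefore the same hash, regardless of insertion order.
fn canonical_bytes(world: &HashMap<u64, Vec<u8>>) -> Vec<u8> {
    let mut ids: Vec<u64> = world.keys().copied().collect();
    ids.sort_unstable(); // canonical order: ascending node id
    let mut out = Vec::new();
    for id in ids {
        let payload = &world[&id];
        out.extend_from_slice(&id.to_le_bytes());
        out.extend_from_slice(&(payload.len() as u32).to_le_bytes()); // length prefix
        out.extend_from_slice(payload);
    }
    out
}

/// Toy stand-in for BLAKE3 over the canonical byte stream.
fn frame_hash(world: &HashMap<u64, Vec<u8>>) -> u64 {
    canonical_bytes(world)
        .iter()
        .fold(0xcbf2_9ce4_8422_2325_u64, |h, &b| {
            (h ^ u64::from(b)).wrapping_mul(0x0000_0100_0000_01b3)
        })
}

fn main() {
    // Same state, different insertion order: hashes must still match.
    let mut a = HashMap::new();
    a.insert(2_u64, vec![9]);
    a.insert(1_u64, vec![7, 7]);
    let mut b = HashMap::new();
    b.insert(1_u64, vec![7, 7]);
    b.insert(2_u64, vec![9]);
    assert_eq!(frame_hash(&a), frame_hash(&b));
    // A divergent payload is flagged immediately by hash inequality.
    b.insert(2_u64, vec![8]);
    assert_ne!(frame_hash(&a), frame_hash(&b));
}
```

The key design point is that hashing is defined over the canonical serialization, not over in-memory layout, so iteration order and allocator behavior cannot leak into the frame hash.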
- -## Roadmap / Dependencies - -| Phase | Demo Coverage | Dependencies | -| ----- | ----------------------------- | ---------------------------------------------------- | -| 1A | Demo 2 harness scaffolding | Criterion setup, synthetic rewrite fixtures | -| 1B | Demo 1 prototype (local hash) | Motion rewrite spike, snapshot hashing | -| 1C | Demo 4 Rhai API | Rhai in-process bindings, hot-reload CLI | -| 1D | Demo 3 timeline tooling | Branch tree diff viewer, entropy metrics | -| 1E | Demo 5 networking | Confluence transaction protocol, replay verification | -| 1F | Demo dashboards | Inspector frame overlays, JSON ingestion | - -**Prerequisites:** BLAKE3 hashing utilities, deterministic PRNG module, snapshot serialiser, inspector graph viewer, CI runners with wasm/criterion toolchains. - -**Timeline:** - -- Milestone Alpha (end 1B): Demo 1 frame-hash prototype + Demo 2 toy bench executed manually. -- Milestone Beta (end 1D): Demos 1–3 automated in CI with golden outputs. -- Milestone GA (end 1F): Full demo suite (all five) runnable via `cargo xtask demo` and published as part of release notes. diff --git a/docs/archive/warp-runtime-architecture.md b/docs/archive/warp-runtime-architecture.md deleted file mode 100644 index de361d32..00000000 --- a/docs/archive/warp-runtime-architecture.md +++ /dev/null @@ -1,83 +0,0 @@ - - - -# WARP Runtime Architecture (Phase 1 Blueprint) - -This document captures the consensus that emerged for Echo’s Phase 1 implementation: the entire runtime, assets, and tooling operate on top of the WARP graph engine. Every concept—worlds, systems, entities, components, assets, pipelines—is a graph node. The engine executes deterministic DPO rewrite rules over that graph each tick, emitting snapshots for replay, networking, and tooling. - ---- - -## Everything Is a Graph - -- `World`: graph node whose edges point to `System` subgraphs. -- `System`: rewrite rule graph. Pattern `L`, interface `K`, output `R`. 
-- `Entity`: graph node with edges to `Component` nodes (`Has` edges). -- `Component`: leaf node with payload (POD data, asset reference, etc.). -- `Timeline`: sequence of rewrite transactions / snapshots. -- `Asset`: graph nodes that hold binary payloads (meshes, shaders). -- `Importer/Exporter`: graph describing pipelines—each step is a node with rewrite rule. - ---- - -## Tick Loop (Deterministic Scheduler) - -> **Note**: This is the target Phase 1 API design. The current `warp-core` crate -> is a bootstrap skeleton; consult `crates/warp-core/src/lib.rs` for the working -> interfaces. - -```rust -loop { - let tx = engine.begin(); - - let rewrites = scheduler.collect(world_root, &engine); - for rewrite in rewrites { - engine.apply(tx, rewrite.rule, &rewrite.scope, &rewrite.params)?; - } - - let snapshot = engine.commit(tx)?; - publish_inspector_frames(snapshot); - process_delayed_events(snapshot); -} -``` - -- Scheduler walks the graph, gathers rewrite intents, orders by `(scope_hash, rule_id)`. -- Disjoint scopes execute in parallel under the DPOi scheduler. -- Commit produces a `Snapshot` hash captured in the branch tree and Confluence. - ---- - -## Execution Walkthrough - -1. **Begin transaction** – `engine.begin()` returns `TxId`. -2. **Collect rewrites** – scheduler matches system patterns, computes scope hashes. -3. **Apply rules** – each rule operates on matched subgraph, updating payloads / edges. -4. **Commit** – atomic swap of graph store, emit snapshot + commit log entry. -5. **Emit frames** – inspector, entropy, Codex logs read from snapshot. - ---- - -## Branching & Replay - -- Forking = capturing snapshot hash and starting new rewrite sequence. -- Rollback = load prior snapshot, replay commits. -- Merge = deterministic three-way merge via Confluence rules. - ---- - -## Tools & Networking - -- Tooling (Echo Studio, inspector) consumes snapshots and rewrite logs. -- Networking exchanges rewrite transactions (scope hash, rule id, params hash). 
-- Deterministic merge ensures peers converge on identical snapshots. - ---- - -## Implementation Notes - -- WARP engine runs in Rust (`warp-core`). -- Rhai scripts issue rewrite intents via bindings; remain deterministic. -- TypeScript tools (via WASM) visualize the same graphs. - ---- - -This loop—the recursive execution of graph rewrite rules—is the heart of Echo’s deterministic multiverse runtime. diff --git a/docs/archive/tasks/TASKS-DAG.md b/docs/assets/dags/tasks-dag-source.md similarity index 100% rename from docs/archive/tasks/TASKS-DAG.md rename to docs/assets/dags/tasks-dag-source.md diff --git a/docs/assets/dags/tasks-dag.dot b/docs/assets/dags/tasks-dag.dot index d039c509..62311521 100644 --- a/docs/assets/dags/tasks-dag.dot +++ b/docs/assets/dags/tasks-dag.dot @@ -2,14 +2,14 @@ digraph tasks_dag { graph [rankdir=LR, labelloc="t", fontsize=18, fontname="Helvetica", newrank=true, splines=true]; node [shape=box, style="rounded,filled", fontname="Helvetica", fontsize=10, margin="0.10,0.06"]; edge [fontname="Helvetica", fontsize=9, arrowsize=0.8]; - label="Echo — Tasks DAG (from docs/archive/tasks/TASKS-DAG.md)\nGenerated by scripts/generate-tasks-dag.js"; + label="Echo — Tasks DAG (from docs/assets/dags/tasks-dag-source.md)\nGenerated by scripts/generate-tasks-dag.js"; subgraph cluster_legend { label="Legend"; color="gray70"; fontcolor="gray30"; style="rounded"; - LG [label="confirmed in docs/archive/tasks/TASKS-DAG.md", color="green", fontcolor="green"]; + LG [label="confirmed in docs/assets/dags/tasks-dag-source.md", color="green", fontcolor="green"]; } subgraph cluster_Spec { diff --git a/docs/assets/dags/tasks-dag.svg b/docs/assets/dags/tasks-dag.svg index aae7f07a..a15b647a 100644 --- a/docs/assets/dags/tasks-dag.svg +++ b/docs/assets/dags/tasks-dag.svg @@ -9,11 +9,11 @@ tasks_dag -Echo — Tasks DAG (from docs/archive/tasks/TASKS-DAG.md) +Echo — Tasks DAG (from docs/assets/dags/tasks-dag-source.md) Generated by scripts/generate-tasks-dag.js 
cluster_legend - + Legend @@ -59,8 +59,8 @@ LG - -confirmed in docs/archive/tasks/TASKS-DAG.md + +confirmed in docs/assets/dags/tasks-dag-source.md diff --git a/docs/dependency-dags.md b/docs/dependency-dags.md index b7aa6497..278f911c 100644 --- a/docs/dependency-dags.md +++ b/docs/dependency-dags.md @@ -79,18 +79,18 @@ cargo xtask dags --snapshot-label 2026-01-02 --- -## Tasks DAG (derived from `docs/archive/tasks/TASKS-DAG.md`) +## Tasks DAG (derived from `docs/assets/dags/tasks-dag-source.md`) ![Tasks DAG](assets/dags/tasks-dag.svg) Sources: -- Source data: `docs/archive/tasks/TASKS-DAG.md` -- Generator: `scripts/generate-tasks-dag.js` (scheduled by the GitHub workflow `.github/workflows/refresh-dependency-dags.yml` to keep the rendered output aligned with `docs/archive/tasks/TASKS-DAG.md`) +- Source data: `docs/assets/dags/tasks-dag-source.md` +- Generator: `scripts/generate-tasks-dag.js` (scheduled by the GitHub workflow `.github/workflows/refresh-dependency-dags.yml` to keep the rendered output aligned with `docs/assets/dags/tasks-dag-source.md`) - DOT: `docs/assets/dags/tasks-dag.dot` - SVG: `docs/assets/dags/tasks-dag.svg` -This DAG visualizes inferred issue dependencies that contributors log in `docs/archive/tasks/TASKS-DAG.md`, offering a quick comparison point against the curated milestone/issue graphs above. +This DAG visualizes inferred issue dependencies that contributors log in `docs/assets/dags/tasks-dag-source.md`, offering a quick comparison point against the curated milestone/issue graphs above. By design, isolated nodes (no incoming/outgoing edges) are filtered out to reduce clutter; the generator computes `connectedNodeIds` / `filteredNodes` and logs the drop counts during render. 
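The isolated-node filter is simple to sketch. The real generator is `scripts/generate-tasks-dag.js` (JavaScript); the Rust below is an illustrative equivalent of its `connectedNodeIds` / `filteredNodes` computation, not the script's actual code:

```rust
use std::collections::HashSet;

// Keep only nodes that appear on at least one edge; everything else is
// an isolated node and gets dropped from the rendered DAG.
fn filter_isolated<'a>(nodes: &[&'a str], edges: &[(&'a str, &'a str)]) -> Vec<&'a str> {
    // connectedNodeIds: every endpoint of every edge.
    let connected: HashSet<&str> = edges.iter().flat_map(|(from, to)| [*from, *to]).collect();
    // filteredNodes: the original node list minus isolated nodes.
    nodes.iter().copied().filter(|n| connected.contains(n)).collect()
}

fn main() {
    let nodes = ["A", "B", "C", "D"];
    let edges = [("A", "B"), ("B", "C")];
    let kept = filter_isolated(&nodes, &edges);
    assert_eq!(kept, ["A", "B", "C"]); // "D" has no edges and is dropped
    println!("dropped {} isolated node(s)", nodes.len() - kept.len());
}
```

Logging `nodes.len() - kept.len()` mirrors the drop counts the generator reports during render.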
## Regenerating the Tasks DAG diff --git a/docs/guide/eli5.md b/docs/guide/eli5.md index 1013f312..ef463cd3 100644 --- a/docs/guide/eli5.md +++ b/docs/guide/eli5.md @@ -145,7 +145,7 @@ If you want the formal version, see: ### If you want to run something concrete - WARP View Protocol demo: [/guide/wvp-demo](/guide/wvp-demo) -- Collision tour: [/guide/collision-tour](/guide/collision-tour) +- Collision DPO tour: [/collision-dpo-tour.html](/collision-dpo-tour.html) --- diff --git a/docs/guide/start-here.md b/docs/guide/start-here.md index 87396751..3058a059 100644 --- a/docs/guide/start-here.md +++ b/docs/guide/start-here.md @@ -34,8 +34,7 @@ ECS is a _useful storage and API layer_, but the deeper “ground truth” model ### If you want to run something end-to-end 1. WARP View Protocol demo: [/guide/wvp-demo](/guide/wvp-demo) -2. Collision tour: [/guide/collision-tour](/guide/collision-tour) -3. Interactive collision DPO tour (static HTML): [/collision-dpo-tour.html](/collision-dpo-tour.html) +2. Collision DPO tour (static HTML): [/collision-dpo-tour.html](/collision-dpo-tour.html) ### Collision DPO Tour (what to expect) @@ -45,14 +44,14 @@ ECS is a _useful storage and API layer_, but the deeper “ground truth” model ### If you want what should I work on? -- Docs map (curated index): [/meta/docs-index](/meta/docs-index) +- Docs home / curated map: [/](/) ## How These Docs Are Organized - **Guides** (`docs/guide/`): newcomer-friendly explanations and runnable walkthroughs. - **Specs** (`docs/spec-*.md`, `docs/spec/`): normative artifacts we try to keep stable and precise. -- **Notes** (`docs/notes/`): explorations and scratchpads; useful, but not authoritative. -- **Book** (`docs/book/`): long-form LaTeX material; may lag behind the latest implementation. +- **Architecture / Theory** (`docs/architecture*.md`, `docs/THEORY.md`, `docs/METHODOLOGY.md`): design intent and conceptual framing. 
+- **Procedures / Benchmarks** (`docs/procedures/`, `docs/benchmarks/`): contributor workflow and evidence. ## Viewing Docs Locally diff --git a/docs/index.md b/docs/index.md index 878189d6..b45f16b7 100644 --- a/docs/index.md +++ b/docs/index.md @@ -7,6 +7,8 @@ Echo is a deterministic **graph‑rewrite simulation engine**. In Echo, “WARP” is the core idea: your world state is a graph (structure) plus attachments (data), and each tick applies deterministic rewrite rules to that graph. +Git history is the archive. This page is the live docs map. + ## Visual Topic Map ```mermaid @@ -60,13 +62,37 @@ flowchart TD - Architecture overview (draft, but the source of truth for intent): [/architecture-outline](/architecture-outline) - Core runtime spec (`warp-core`): [/spec-warp-core](/spec-warp-core) -## Run Something (learn by doing) +## Curated Map -- WARP View Protocol demo (hub + 2 viewers): [/guide/wvp-demo](/guide/wvp-demo) -- Collision tour (walkthrough + links): [/guide/collision-tour](/guide/collision-tour) -- Interactive collision DPO tour (static HTML): [/collision-dpo-tour.html](/collision-dpo-tour.html) -- Geometry & collision (spec stub): [/spec-geom-collision](/spec-geom-collision) +### Core runtime + +- WARP core runtime: [/spec-warp-core](/spec-warp-core) +- Tick patch boundary: [/spec-warp-tick-patch](/spec-warp-tick-patch) +- Rewrite scheduler (current implementation): [/scheduler-warp-core](/scheduler-warp-core) +- Merkle commit / snapshot hashing: [/spec-merkle-commit](/spec-merkle-commit) +- Two-plane law: [/warp-two-plane-law](/warp-two-plane-law) + +### Determinism + +- Deterministic math policy: [/SPEC_DETERMINISTIC_MATH](/SPEC_DETERMINISTIC_MATH) +- Deterministic math hazards: [/DETERMINISTIC_MATH](/DETERMINISTIC_MATH) +- Claim register + evidence: [/determinism/DETERMINISM_CLAIMS_v0.1](/determinism/DETERMINISM_CLAIMS_v0.1) +- Benchmark guide: [/BENCHMARK_GUIDE](/BENCHMARK_GUIDE) -## When You Need a Map +### Contributor workflow -- Docs map 
(curated): [/meta/docs-index](/meta/docs-index) +- Contributor playbook: [/workflows](/workflows) +- PR submission loop: [/procedures/PR-SUBMISSION-REVIEW-LOOP](/procedures/PR-SUBMISSION-REVIEW-LOOP) +- Dependency DAGs: [/dependency-dags](/dependency-dags) +- Roadmap index: [/ROADMAP](/ROADMAP) + +### Theory / intent + +- Architecture outline: [/architecture-outline](/architecture-outline) +- Theory: [/THEORY](/THEORY) +- Methodology: [/METHODOLOGY](/METHODOLOGY) + +## Run Something (learn by doing) + +- WARP View Protocol demo (hub + 2 viewers): [/guide/wvp-demo](/guide/wvp-demo) +- Collision DPO tour (static walkthrough): [/collision-dpo-tour.html](/collision-dpo-tour.html) diff --git a/docs/meta/README.md b/docs/meta/README.md deleted file mode 100644 index bf7d04f1..00000000 --- a/docs/meta/README.md +++ /dev/null @@ -1,12 +0,0 @@ - - - -# Docs Meta - -This folder is documentation **about** the documentation. It collects: - -- doc inventories and navigation maps -- docs hygiene/audit notes -- legacy/documentation excavation logs - -If a file is about how we organize docs (not the product itself), it belongs here. diff --git a/docs/meta/docs-audit.md b/docs/meta/docs-audit.md deleted file mode 100644 index 195d011e..00000000 --- a/docs/meta/docs-audit.md +++ /dev/null @@ -1,160 +0,0 @@ - - - -# Docs Audit — Purge / Merge / Splurge - -This is a lightweight “docs hygiene” memo: which documents look stale, overlapping, or underspecified, and what we should do about them. - -**Initial audit:** 2026-01-02 -**Last updated:** 2026-03-07 - ---- - -## Rubric - -### Purge - -Candidate for removal or archival when the doc is: - -- clearly wrong / misleading (claims features that don’t exist), -- unreferenced and not historically valuable, -- replaced by a newer canonical doc (and the old one causes confusion). - -_Note:_ prefer “archive + redirect note” over hard-deleting, unless we’re sure the content is junk. 
- -### Merge - -Candidate for consolidation when two docs: - -- describe the same invariant or workflow, -- are both expected to stay accurate, -- and drift risk is higher than the value of having multiple entry points. - -### Splurge (Enhance) - -Candidate for investment when the doc is: - -- canonical (people should read it), -- frequently referenced, -- or is a high-leverage “on-ramp” document (index, primer, workflow). - ---- - -## What We Did In This Pass - -Splurged: - -- `docs/math-validation-plan.md` — updated to match the current `warp-core` deterministic math implementation and CI lanes (float lane + `det_fixed` lane), and to list concrete tests/commands instead of JS/browser plans. -- `docs/index.md` — updated the docs landing page to point at real, current documents (it previously linked to a missing collision spec). -- Collision tour docs hygiene: - - moved the tour source to `docs/public/collision-dpo-tour.html` so VitePress emits `/collision-dpo-tour.html`, - - added a `docs/spec-geom-collision.md` stub so the tour’s “Spec” link is non-broken. 
- -De-risked (clarified “what is canonical”): - -- `docs/spec-deterministic-math.md` — marked as a legacy TS-oriented Phase 0 design sketch; points readers to: - - `docs/SPEC_DETERMINISTIC_MATH.md` (normative policy) - - `docs/math-validation-plan.md` (how we test) - ---- - -## What We Did (2026-03-07) - -### Archived - -Moved 6 superseded documents to `docs/archive/` with redirect stubs: - -- `spec-deterministic-math.md` -- legacy Phase 0 TS-oriented draft -- `spec-geom-collision.md` -- stub with no normative content -- `notes/scheduler-radix-optimization.md` -- superseded by `-2` version -- `notes/xtask-wizard.md` -- concept note, never implemented -- `plans/cross-warp-parallelism.md` -- feature already implemented -- `plans/BOAW-tech-debt.md` -- content already in `adr/TECH-DEBT-BOAW.md` - -### Consolidated - -- Replaced "Related Docs" sections in `SPEC_DETERMINISTIC_MATH.md` and - `DETERMINISTIC_MATH.md` with structured "Docs Map" tables linking - all 5 documents in the deterministic math cluster. -- Updated `scheduler.md` Quick Map with status-labeled table. -- Added "(not yet implemented)" to `spec-scheduler.md` title. 
- -### Fixed - -- `ROADMAP/backlog/editor-hot-reload.md`: `docs/specs/` -> `docs/spec/` -- `ROADMAP/backlog/plugin-abi.md`: `docs/specs/` -> `docs/spec/` -- `meta/docs-index.md`: `memorial.md` -> `memorials/2026-01-18-phase4-rubicon.md` -- `ROADMAP/ISSUE-INDEX.md`: 6 references to `streams-inspector-frame.md` - -> `streams-inspector.md` (file never had the `-frame` suffix) -- `architecture-outline.md`: `docs/spec/SPEC-0004...` -> `spec/SPEC-0004...` - (stale `docs/` prefix), nonexistent `echo-scene-port/README.md` link -- `archive/plans/BOAW-tech-debt.md`: `../adr/` -> `../../adr/` (depth - changed by archival) -- `archive/notes/scheduler-radix-optimization.md`: image paths updated - to point back to `../../notes/` after archival -- `meta/docs-index.md`: `public/assets/...` -> `../public/assets/...` - -### Added - -- `docs/guide/configuration-reference.md` -- engine parameters, protocol - constants, environment variables, channel policies -- `docs/guide/cargo-features.md` -- all Cargo feature flags across the - workspace (11 crates, 19 unique flags) - -### Added (tooling) - -- `cargo xtask lint-dead-refs` -- scans `docs/` for broken markdown - cross-references. Use `--all` to also check image/HTML references. -- `cargo xtask markdown-fix` -- auto-fixes SPDX headers, runs prettier, - and applies markdownlint `--fix` across `docs/`. Flags: `--no-prettier`, - `--no-lint`. -- `cargo xtask docs-lint` -- combined pipeline: `markdown-fix` then - `lint-dead-refs`. One command for full docs hygiene. - -### Formatted - -- Ran `cargo xtask markdown-fix` across all 205 docs files: prettier - formatting, SPDX header normalization, markdownlint auto-fixes applied. - 66 unfixable lint warnings remain (missing code fence languages in - `THEORY.md`, structural issues in `warp-math-claims.md`). 
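The `lint-dead-refs` idea used throughout this pass — scan docs for markdown cross-references that no longer resolve — can be sketched without filesystem access by checking link targets against a known set of paths. The hand-rolled link scan below is deliberately minimal and illustrative; the real xtask command's parsing and resolution rules may differ:

```rust
// Minimal dead-reference check in the spirit of `cargo xtask lint-dead-refs`.
// Extracts inline markdown link targets ("](target)") and flags relative
// targets that are not in the known path set. Illustrative only.
fn markdown_links(text: &str) -> Vec<&str> {
    let bytes = text.as_bytes();
    let mut out = Vec::new();
    let mut i = 0;
    while i + 1 < bytes.len() {
        if bytes[i] == b']' && bytes[i + 1] == b'(' {
            if let Some(end) = text[i + 2..].find(')') {
                out.push(&text[i + 2..i + 2 + end]);
                i += 2 + end; // jump past the closing paren
            }
        }
        i += 1;
    }
    out
}

fn dead_refs<'a>(text: &'a str, existing: &[&str]) -> Vec<&'a str> {
    markdown_links(text)
        .into_iter()
        // External URLs are out of scope; relative targets must exist.
        .filter(|t| !t.starts_with("http") && !existing.contains(t))
        .collect()
}

fn main() {
    let doc = "See [spec](spec-warp-core.md) and [gone](notes/missing.md).";
    let existing = ["spec-warp-core.md"];
    let dead = dead_refs(doc, &existing);
    assert_eq!(dead, ["notes/missing.md"]);
    println!("dead refs: {dead:?}");
}
```

In a real scanner the `existing` set would come from walking `docs/`, and the parser would also handle reference-style links and anchors — which is exactly the kind of edge case the backlog's broader docs-validation cleanup covers.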
- -### Updated - -- `README.md` -- added determinism claims link, reference docs section -- `meta/docs-index.md` -- new entries, archive note, updated descriptions - for redirected docs -- `CHANGELOG.md` -- docs polish entries - ---- - -## Candidates (Next) - -### Merge candidates - -- Scheduler documentation: - - Multiple reserve/scheduler docs exist (`docs/scheduler-benchmarks.md`, `docs/scheduler-reserve-*.md`, `docs/spec-scheduler.md`). - - Action: decide which is canonical for “how it works” vs “how we benchmark it”, and add a single landing doc (or update `docs/spec-scheduler.md`) that links the rest. - -### Splurge candidates - -- `docs/meta/docs-index.md`: - - It’s already a great index, but could include a short “If you’re changing X, read Y” map (e.g., determinism policy, docs guard, PR policy). - - Action: add a “Common contributor paths” section. - -- `docs/code-map.md`: - - High leverage for onboarding; should stay aligned with canonical specs. - - Action: keep concept→spec links accurate as we demote legacy docs. - -### Purge / archive candidates (defer until verified) - -- “One-off” review burn-down notes under `docs/notes/`: - - These are historically useful, but may not belong in the default browsing path forever. - - Action: consider a `docs/notes/archive/` folder and move docs that are purely PR-specific retrospectives once they’ve served their purpose. - ---- - -## Open Questions - -- Do we want a formal “doc tier” tag? - - Example: **Spec (normative)** vs **Guide (how-to)** vs **Notes (historical)** vs **ADR (decisions)**. -- Should VitePress navigation be driven by `docs/meta/docs-index.md` (as the canonical index), rather than having multiple “landing pages”? 
diff --git a/docs/meta/docs-index.md b/docs/meta/docs-index.md deleted file mode 100644 index c9b27b7c..00000000 --- a/docs/meta/docs-index.md +++ /dev/null @@ -1,129 +0,0 @@ - - - -# Echo Docs Map - -This page is a curated map of the docs: a few "golden paths", plus links to the most-used specs. -If you want the full inventory, use repo search (`rg`) and follow links outward from the core specs. - -| Document | Purpose | -| -------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | -| `architecture-outline.md` | High-level architecture vision and principles | -| `workflows.md` | Contributor workflows, policies, and blessed repo entry points | -| `guide/warp-primer.md` | Start here: newcomer-friendly primer for WARP in Echo | -| `guide/wvp-demo.md` | Demo: run the session hub + 2 viewers (publisher/subscriber) | -| `guide/tumble-tower.md` | Demo 3 scenario: deterministic physics ladder ("Tumble Tower") | -| `spec-warp-core.md` | WARP core format and runtime | -| `spec-warp-tick-patch.md` | Tick patch boundary artifact (delta ops, in/out slots, patch_digest) | -| `spec-mwmr-concurrency.md` | WARP MWMR Concurrency Spec (Footprints, Ports, Factor Masks) | -| `spec-merkle-commit.md` | Snapshot Commit Spec (v2) | -| `spec-temporal-bridge.md` | Cross-branch event lifecycle | -| `warp-two-plane-law.md` | Project law: define SkeletonGraph vs attachment plane, π(U), depth-0 atoms, and "no hidden edges" | -| `adr/ADR-0001-warp-two-plane-skeleton-and-attachments.md` | ADR: formalize two-plane representation (SkeletonGraph + Attachment Plane) and the core invariants | -| `adr/ADR-0002-warp-instances-descended-attachments.md` | ADR: WarpInstances and descended attachments via flattened indirection (no hidden edges, no recursive hot path) | -| `spec/SPEC-0001-attachment-plane-v0-atoms.md` | Spec: attachment plane v0 (typed atoms), codec boundary, and 
deterministic decode failure semantics | -| `spec/SPEC-0002-descended-attachments-v1.md` | Spec: descended attachments v1 (WarpInstances, SlotId::Attachment, descent-chain footprint law, worldline slicing) | -| `spec/SPEC-0004-worldlines-playback-truthbus.md` | Spec: Worldlines, PlaybackCursor, ViewSession, TruthBus | -| `architecture/TERMS_WARP_STATE_INSTANCES_PORTALS_WORMHOLES.md` | Canonical terminology: WarpState vs SkeletonGraph, instances/portals, and wormholes (reserved for history compression) | -| `scheduler.md` | Doc map: warp-core rewrite scheduler vs planned system scheduler | -| `scheduler-warp-core.md` | Canonical doc: warp-core rewrite scheduler (`reserve()` / drain) | -| `scheduler-performance-warp-core.md` | Canonical doc: warp-core scheduler benchmarks | -| `determinism/DETERMINISM_CLAIMS_v0.1.md` | Verified determinism claims (DET-001 through DET-005) | -| `guide/configuration-reference.md` | Engine parameters, protocol constants, environment variables | -| `guide/cargo-features.md` | Cargo feature flags across the workspace | - -## Start Here - -- Echo (ELI5 spiral on-ramp): [/guide/eli5](/guide/eli5) -- Start Here guide: [/guide/start-here](/guide/start-here) -- WARP primer (newcomer-friendly): [/guide/warp-primer](/guide/warp-primer) -- Architecture overview (draft, but the intent source of truth): [/architecture-outline](/architecture-outline) - -## Learn By Doing - -- WARP View Protocol demo: [/guide/wvp-demo](/guide/wvp-demo) -- Tumble Tower scenario (deterministic physics ladder): [/guide/tumble-tower](/guide/tumble-tower) - -## Core WARP Specs (High Leverage) - -- WARP core format + runtime (`warp-core`): [/spec-warp-core](/spec-warp-core) -- Tick patches (delta artifact boundary): [/spec-warp-tick-patch](/spec-warp-tick-patch) -- MWMR Concurrency (footprints, ports): [/spec-mwmr-concurrency](/spec-mwmr-concurrency) -- Merkle commit (snapshot hashing): [/spec-merkle-commit](/spec-merkle-commit) - -## Determinism + Math - -- Policy 
(normative): [/SPEC_DETERMINISTIC_MATH](/SPEC_DETERMINISTIC_MATH) -- Hazards + mitigations (background): [/DETERMINISTIC_MATH](/DETERMINISTIC_MATH) -- Current claims / error budgets: [/warp-math-claims](/warp-math-claims) -- Determinism claims: [/determinism/DETERMINISM_CLAIMS_v0.1](/determinism/DETERMINISM_CLAIMS_v0.1) - -## Reference / Deep Dives - -- Two-plane law ("no hidden edges"): [/warp-two-plane-law](/warp-two-plane-law) -- Warp instances / portals terminology: [/architecture/TERMS_WARP_STATE_INSTANCES_PORTALS_WORMHOLES](/architecture/TERMS_WARP_STATE_INSTANCES_PORTALS_WORMHOLES) -- DIND harness: [/dind-harness](/dind-harness) -- Golden vectors (ABI): [/golden-vectors](/golden-vectors) -- JS CBOR mapping: [/js-cbor-mapping](/js-cbor-mapping) -- Dependency DAGs: [/dependency-dags](/dependency-dags) -- Benchmark guide: [/BENCHMARK_GUIDE](/BENCHMARK_GUIDE) -- Release policy: [/RELEASE_POLICY](/RELEASE_POLICY) -- Roadmap: [/ROADMAP](/ROADMAP) - -## Procedures - -- PR submission + review loop: [/procedures/PR-SUBMISSION-REVIEW-LOOP](/procedures/PR-SUBMISSION-REVIEW-LOOP) -- Issue dependencies: [/procedures/ISSUE-DEPENDENCIES](/procedures/ISSUE-DEPENDENCIES) -- Extract PR comments: [/procedures/EXTRACT-PR-COMMENTS](/procedures/EXTRACT-PR-COMMENTS) - -## Vision Specs (Unimplemented) - -These specs describe planned features that are not yet implemented. They represent -design intent and are kept for reference, but should not be treated as current behavior. 
- -| Spec | Topic | -| ------------------------------------ | ------------------------------------------------------ | -| `spec-branch-tree.md` | Branch tree, diffs, and timeline persistence | -| `spec-canonical-inbox-sequencing.md` | Canonical inbox sequencing, idempotent ingress | -| `spec-capabilities-and-security.md` | Capability tokens and signatures | -| `spec-concurrency-and-authoring.md` | Parallel core + single-threaded scripting model | -| `spec-ecs-storage.md` | ECS storage (archetypes, chunks, COW) | -| `spec-editor-and-inspector.md` | Inspector frame protocol + tooling transport | -| `spec-entropy-and-paradox.md` | Entropy metrics and paradox handling | -| `spec-knots-in-time.md` | Time knots for Echo | -| `spec-networking.md` | Deterministic event replication modes | -| `spec-plugin-system.md` | Plugin discovery, namespace isolation | -| `spec-runtime-config.md` | Deterministic configuration schema | -| `spec-scheduler.md` | Planned ECS/system scheduler (not warp-core scheduler) | -| `spec-serialization-protocol.md` | Canonical encoding and hashing | -| `spec-time-streams-and-wormholes.md` | Multi-clock time, cursors, wormholes | -| `spec-timecube.md` | Chronos × Kairos × Aion | -| `spec-warp-confluence.md` | Global WARP graph synchronization | -| `spec-warp-view-protocol.md` | WARP View Protocol (WVP) | -| `spec-world-api.md` | Stable public façade for external modules | - -## ADRs - -| ADR | Title | -| ----------------------- | ---------------------------------------------------------- | -| `adr/ADR-0001-*` | Two-plane WARP representation | -| `adr/ADR-0002-*` | WarpInstances + descended attachments | -| `adr/ADR-0003-*` | Causality-first API (MaterializationPort) | -| `adr/ADR-0004-*` | No global state (DI only) | -| `adr/ADR-0005-*` | Physics as deterministic scheduled rewrites | -| `adr/ADR-0006-*` | Ban non-determinism | -| `adr/ADR-0007-*` | Parallel execution storage + scheduling | -| `adr/PLAN-PHASE-6B-*` | Virtual shards (complete) | -| 
`adr/TECH-DEBT-BOAW.md` | Parallel execution tech debt tracker (historical filename) | - -## Archive - -Superseded and stale documents live in [`docs/archive/`](../archive/). - -See `archive/README.md` for the archive policy. Archived categories include: - -- **Session artifacts:** notes, plans, tasks, RFCs, memorials -- **Study materials:** LaTeX papers, tour-de-code booklets, visual atlas -- **Completed missions:** DIND missions, determinism audit, mat-bus RFC -- **Stale docs:** agents, issues matrix, code map, phase 1 plan, demo roadmaps -- **Dead redirects:** collision tour (targets never created) diff --git a/docs/meta/legacy-excavation.md b/docs/meta/legacy-excavation.md deleted file mode 100644 index c2bd4513..00000000 --- a/docs/meta/legacy-excavation.md +++ /dev/null @@ -1,35 +0,0 @@ - - - -# Legacy Excavation Log (Placeholder) - -This document is a place to record “legacy prototype” artifacts we discover (old folders, old -designs, abandoned experiments) and the decisions we make about them: - -- keep concept (rewrite cleanly) -- redesign (needs rethinking) -- discard (no longer relevant) - -If you add entries here, prefer linking to concrete files and capture a short “why” in the decision -log when a choice affects public surface area or determinism. - -## Process (Recommended) - -1. Identify a legacy artifact (folder, demo, script, or spec). -2. Summarize its intent in one sentence. -3. Decide: **keep concept**, **redesign**, or **discard**. -4. Record any determinism or public API implications. -5. Link to the replacement or follow-up issue if one exists. - -## Where to Look - -- Old demos or prototype subfolders. -- Archived build scripts or abandoned toolchains. -- Experimental rendering or physics integrations. -- Prototype specs that predate the Rust-first era. 
- -## Template - -| Artifact | What It Was | What We Keep | Action | Notes | -| --------------- | ----------- | ------------ | --------------------- | ---------------------- | -| `path/to/thing` | (1–2 lines) | (concepts) | keep/redesign/discard | (gotchas, deps, links) | diff --git a/docs/public/collision-dpo-tour.html b/docs/public/collision-dpo-tour.html index 5e752206..d6411883 100644 --- a/docs/public/collision-dpo-tour.html +++ b/docs/public/collision-dpo-tour.html @@ -2,211 +2,738 @@ - - - - Echo Collision DPO Tour - - - -

Collision / CCD — DPO Rule Tour

-

Each rule shown as LHS → Interface K → RHS. See the legend for visual semantics.

-

LegendSpec

- -
-

Graph Anatomy (Everything Is a Graph)

-
-
- Collision Subgraph Overview -
Overview — typed nodes and edges for one colliding pair at tick n.
-
Node/Edge Graph
Entities, components, temporal proxies, potential pair, contact, TOI, event — all first‑class nodes linked by typed edges (has_component, has_proxy, pair_of, contact_of, event_of, produced_in).
-
-

- This is the literal graph Echo maintains. Derived artifacts (proxies, pairs, contacts, events) are not hidden engine buffers — - they are nodes that tools can query, branch, replay, and merge deterministically. The same initial facts and policies yield the - same subgraph and the same snapshot hash on every peer. -

-
-
-
-
- -

How Things Move (time-aware proxies)

-
-

BuildTemporalProxy (pre_update)

-
-
- BuildTemporalProxy Step 1 -
Step 1 — LHS: Collider + Transform (+ Velocity) at Tick n
-
-

- We gather the collider’s Transform (and optional Velocity) at tick n. In Echo this input is - explicit graph state, not a transient engine struct. Chronos gives us a fixed dt, so the motion window - for the upcoming tick is well-defined and reproducible. -

-
    -
  • Different: many engines pull from mutable component state ad‑hoc; here we read typed nodes bound to a specific tick.
  • -
  • Determinism: same dt and same components ⇒ same inputs on every peer/branch.
  • -
-
-
-
- BuildTemporalProxy Step 2 -
Step 2 — Interface K
-
-

- The DPO Interface K shows what is preserved between LHS and RHS: collider + transform + tick. - This is how we say “the world keeps these facts while we add the proxy.” -

-
    -
  • Different: Echo makes the preserved context explicit; typical engines merge implicit state in-place.
  • -
-
-
-
- BuildTemporalProxy Step 3 -
Step 3 — RHS: TemporalProxy(e,n) added
-
-

- We add a TemporalProxy with a fat AABB that encloses motion over [start,end]. Padding is derived from - velocity and quantized policy, so two peers derive the same box. The proxy links back to the entity and Tick n. -

-
    -
  • Different: broad‑phase caches become first‑class graph nodes with stable IDs.
  • -
  • Determinism: quantized padding + stable insertion order ⇒ identical proxy sets.
  • -
-
-
-
-
+ + + + Echo Collision DPO Tour + + + +

Collision / CCD — DPO Rule Tour

- This rule deterministically derives a TemporalProxy for each collider at tick n. - The proxy’s “fat AABB” encloses the body over the whole tick window [start,end], so fast movers can’t tunnel between broad‑phase sweeps. - The proxy is a typed node in the graph (not an opaque engine cache) and is linked back to the entity and the producing tick. + Each rule shown as LHS → Interface K → RHS. See the legend for + visual semantics.

-
    -
  • Different from typical engines: broad‑phase buffers are usually internal and mutation‑ordered; in Echo they are explicit graph nodes, created by a rewrite with a stable scope and ID.
  • -
  • Determinism: proxy size and padding are computed from quantized policy values; insert order is sorted by ID, so peers/branches build identical proxy sets.
  • -
-
-
- -

How Collision Works (broad → narrow → events)

-
-

BroadPhasePairing (update)

-
-
BroadPhasePairing Step 1
Step 1 — LHS: overlapping proxies

We test fat AABB overlap on proxies built for the full tick window. Overlap means the pair is a candidate for narrow phase.

  • Different: the candidate condition is a graph fact, not an opaque boolean.
  • Determinism: identical proxies ⇒ identical overlap set.
-
BroadPhasePairing Step 2
Step 2 — K: proxies preserved

The proxies themselves are preserved (K). This makes the rule commute with other rules that may also read them this tick.

-
BroadPhasePairing Step 3
Step 3 — RHS: PotentialPair added

We mint a PotentialPair with canonical PairId = H(min(A,B)||max(A,B)||branch) and back‑refs to proxies.

  • Different: pair lists are reproducible data, not engine iteration order.
  • Determinism: output list is sorted strictly; peers/branches match.
-
-

- The broad phase converts overlapping proxies into PotentialPair nodes. Each pair gets a canonical - PairId = H(min(A,B) || max(A,B) || branch) and edges back to the proxies. The emitted list is - sorted deterministically, which makes network replication and timeline diffs trivial. + Legend

-
    -
  • Different: most engines keep an internal unsorted array of candidate pairs; Echo materializes pairs as graph facts, with stable IDs and ordering.
  • -
  • Determinism: ties in AABB endpoints break on IDs; output is strictly sorted, so two peers converge on identical pair order.
  • -
-
-
-
-

NarrowPhaseDiscrete (update)

-
-
NarrowPhaseDiscrete Step 1
Step 1 — LHS: discrete overlap @ end pose

For low‑speed pairs, we evaluate shapes at the end pose of tick n. If they overlap, we proceed to build a manifold.

  • Different: thresholding policy is data; no hidden time‑step heuristics.
-
NarrowPhaseDiscrete Step 2
Step 2 — K: pair preserved

We keep the PotentialPair (K). The narrow phase acts as a pure derivation from pair+poses.

-
NarrowPhaseDiscrete Step 3
Step 3 — RHS: Contact with Manifold added

We create a Contact with a reduced Manifold (2–4 points). Points are canonicalized by feature IDs to ensure reproducible ordering.

  • Different: engine doesn’t call your code mid‑narrow; it records facts you can read consistently.
  • Determinism: centralized tolerances + ordering.
-
-
-

- For low‑speed pairs, we evaluate shapes at end‑of‑tick poses and, if intersecting, create a Contact with a - deterministically ordered Manifold (2–4 clipped points). The contact attaches to the pair and to the producing tick. - Manifold point ordering and feature IDs are canonicalized to remove platform drift. -

-
    -
  • Different: instead of imperative callbacks that mutate scripts, Echo records contacts as first‑class nodes; scripts read them after rules run.
  • -
  • Determinism: manifold reduction, feature selection, and floating‑point tolerances are centralized and quantized.
  • -
-
-
+
+

Graph Anatomy (Everything Is a Graph)

+
+
+ Collision Subgraph Overview +
+ Overview — typed nodes and edges for one colliding pair + at tick n. +
+
+ Node/Edge Graph +
+ Entities, components, temporal proxies, potential + pair, contact, TOI, event — all first‑class nodes + linked by typed edges (has_component, has_proxy, + pair_of, contact_of, event_of, produced_in). +
+
+
+

+ This is the literal graph Echo maintains. Derived + artifacts (proxies, pairs, contacts, events) are not + hidden engine buffers — they are nodes that tools + can query, branch, replay, and merge + deterministically. The same initial facts and + policies yield the same subgraph and the same + snapshot hash on every peer. +

+
+
+
+
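The "same snapshot hash on every peer" claim above rests on canonicalizing the graph facts before hashing. A minimal sketch of that idea (illustrative only: a toy FNV-1a mix stands in for Echo's real snapshot hash, and `snapshotHash` is not the engine's API):

```javascript
// Sketch: hash a set of graph facts in canonical (sorted) order, so the
// discovery order of nodes/edges never affects the snapshot hash.
// The FNV-1a 32-bit mix below is a stand-in for the real engine hash.
function snapshotHash(facts) {
  const canon = [...facts].sort().join("\n"); // canonical ordering first
  let h = 0x811c9dc5; // FNV-1a offset basis
  for (let i = 0; i < canon.length; i++) {
    h ^= canon.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in 32 bits
  }
  return h.toString(16).padStart(8, "0");
}
```

Two peers that enumerate the same facts in different orders still agree on the digest, which is the property the tour keeps returning to.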
-
-

NarrowPhaseCCD (update)

-
-
NarrowPhaseCCD Step 1
Step 1 — LHS: CCD policy triggers

Policy flags fast motion/small features (or material‑required CCD). We will compute a Toi in [0,1] before creating a contact.

-
NarrowPhaseCCD Step 2
Step 2 — K: pair preserved

We keep the pair (K) and run conservative advancement or a swept primitive test to find the impact time.

-
NarrowPhaseCCD Step 3
Step 3 — RHS: Toi + Contact added

We emit a Toi node with quantized s and a Contact at the impact pose. Quantization and iteration caps are recorded to make this stable.

-
-
-

- When a policy indicates high motion or small features, we run CCD: conservative advancement for general convex - shapes or closed‑form sweeps for spheres/capsules. We emit a Toi node with quantized s ∈ [0,1] and a Contact - at the impact pose. Because s is quantized and the rule scopes are stable, peers compute the same TOI and contact set. -

-
    -
  • Different: CCD outputs are persisted as graph data (Toi + Contact), not transient solver state; branches and replays see identical values.
  • -
  • Determinism: iteration caps, policy thresholds, and s quantization are recorded; identical inputs yield identical s.
  • -
-
-
+

+ How Things Move (time-aware proxies) +

+
+

BuildTemporalProxy (pre_update)

+
+
+ BuildTemporalProxy Step 1 +
+ Step 1 — LHS: Collider + Transform (+ Velocity) at Tick + n +
+
+

+ We gather the collider’s + Transform (and optional + Velocity) at tick n. In + Echo this input is explicit graph state, not a + transient engine struct. Chronos gives us a fixed + dt, so the motion window for the + upcoming tick is well-defined and reproducible. +

+
    +
  • + Different: many engines pull + from mutable component state ad‑hoc; here we + read typed nodes bound to a specific tick. +
  • +
  • + Determinism: same + dt and same components ⇒ same + inputs on every peer/branch. +
  • +
+
+
+
+ BuildTemporalProxy Step 2 +
Step 2 — Interface K
+
+

+ The DPO Interface K shows what is + preserved between LHS and RHS: collider + transform + + tick. This is how we say “the world keeps these + facts while we add the proxy.” +

+
    +
  • + Different: Echo makes the + preserved context explicit; typical engines + merge implicit state in-place. +
  • +
+
+
+
+ BuildTemporalProxy Step 3 +
+ Step 3 — RHS: TemporalProxy(e,n) added +
+
+

+ We add a TemporalProxy with a fat + AABB that encloses motion over [start,end]. Padding + is derived from velocity and quantized policy, so + two peers derive the same box. The proxy links back + to the entity and Tick n. +

+
    +
  • + Different: broad‑phase caches + become first‑class graph nodes with stable IDs. +
  • +
  • + Determinism: quantized padding + + stable insertion order ⇒ identical proxy sets. +
  • +
+
+
+
+
+

+ This rule deterministically derives a + TemporalProxy for each collider at tick + n. The proxy’s “fat AABB” encloses the body over + the whole tick window [start,end], so fast movers can’t + tunnel between broad‑phase sweeps. The proxy is a typed node + in the graph (not an opaque engine cache) and is linked back + to the entity and the producing tick. +

+
    +
  • + Different from typical engines: + broad‑phase buffers are usually internal and + mutation‑ordered; in Echo they are explicit graph nodes, + created by a rewrite with a stable scope and ID. +
  • +
  • + Determinism: proxy size and padding are + computed from quantized policy values; insert order is + sorted by ID, so peers/branches build identical proxy + sets. +
  • +
+
+
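The quantized-padding argument for BuildTemporalProxy can be sketched in a few lines. Everything here is an assumption for illustration — the 1/1024 quantum, the field names, and `fatAabb` itself are not the engine's real values or API; the point is only that snapping padding to a policy quantum makes two peers derive the same box:

```javascript
// Assumed policy quantum; the real value would come from quantized policy data.
const QUANT = 1 / 1024;

// Snap a value to the quantum so every peer derives bit-identical padding.
function quantize(x) {
  return Math.round(x / QUANT) * QUANT;
}

// Fat AABB for the tick window [start, end]: pad the collider's box by
// quantized |velocity| * dt plus a quantized safety margin per axis.
function fatAabb(aabb, velocity, dt, margin) {
  const pad = (v) => quantize(Math.abs(v) * dt + margin);
  return {
    min: aabb.min.map((m, i) => m - pad(velocity[i])),
    max: aabb.max.map((m, i) => m + pad(velocity[i])),
  };
}
```

Because the padding is a pure function of quantized inputs, the same components at tick n produce the same proxy box on every peer and branch.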
-
-

ContactEvents (post_update)

-
-
ContactEvents Step 1
Step 1 — LHS: contact states n-1 vs n

We stage previous and current Contact facts for the pair to compute Begin/Persist/End.

-
ContactEvents Step 2
Step 2 — K: contacts preserved

K keeps both contact nodes in scope so event construction is a pure comparison, not in‑place mutation.

-
ContactEvents Step 3
Step 3 — RHS: ContactEvent added

We create a ContactEvent (Begin/Persist/End) sorted by (toi_s, ContactId). Events are nodes that tools and scripts can consume deterministically.

-
-
-

- We diff previous vs current Contact nodes and create a ContactEvent (Begin/Persist/End) ordered by - (toi_s, ContactId). Events are regular nodes and flow through the Temporal Bridge to tools, replay, or networking. -

-
    -
  • Different: engines typically invoke user callbacks in engine order; Echo records events as data first, then tooling/scripts consume them deterministically.
  • -
  • Determinism: strict sort order; event payloads are value objects that hash the same on every peer.
  • -
-
-
+

+ How Collision Works (broad → narrow → events) +

+
+

BroadPhasePairing (update)

+
+
+ BroadPhasePairing Step 1 +
Step 1 — LHS: overlapping proxies
+
+

+ We test fat AABB overlap on proxies + built for the full tick window. Overlap means the + pair is a candidate for narrow phase. +

+
    +
  • + Different: the candidate + condition is a graph fact, not an opaque + boolean. +
  • +
  • + Determinism: identical proxies + ⇒ identical overlap set. +
  • +
+
+
+
+ BroadPhasePairing Step 2 +
Step 2 — K: proxies preserved
+
+

+ The proxies themselves are preserved (K). This makes + the rule commute with other rules that may also read + them this tick. +

+
+
+
+ BroadPhasePairing Step 3 +
Step 3 — RHS: PotentialPair added
+
+

+ We mint a PotentialPair with + canonical PairId = + H(min(A,B)||max(A,B)||branch) and back‑refs to + proxies. +

+
    +
  • + Different: pair lists are + reproducible data, not engine iteration order. +
  • +
  • + Determinism: output list is + sorted strictly; peers/branches match. +
  • +
+
+
+
+
+

+ The broad phase converts overlapping proxies into + PotentialPair nodes. Each pair gets a + canonical PairId = H(min(A,B) || max(A,B) || + branch) and edges back to the proxies. The emitted list is + sorted deterministically, which makes network replication + and timeline diffs trivial. +

+
    +
  • + Different: most engines keep an + internal unsorted array of candidate pairs; Echo + materializes pairs as graph facts, with stable IDs and + ordering. +
  • +
  • + Determinism: ties in AABB endpoints + break on IDs; output is strictly sorted, so two peers + converge on identical pair order. +
  • +
+
+
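The canonical `PairId = H(min(A,B) || max(A,B) || branch)` construction above can be sketched as follows. The toy 32-bit mix and the `|` separator are assumptions standing in for the real hash H; what matters is sorting the two IDs before hashing (order-independence) and sorting the emitted list (reproducible pair order):

```javascript
// Illustrative PairId: sort the endpoint IDs, then hash with the branch.
// The FNV-style mix is a stand-in for the engine's real hash function H.
function pairId(a, b, branch) {
  const [lo, hi] = a < b ? [a, b] : [b, a];
  const key = `${lo}|${hi}|${branch}`;
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16).padStart(8, "0");
}

// Broad phase emits candidates in strict sorted order, so two peers that
// discover the same overlaps in any order produce identical pair lists.
function sortedPairs(overlaps, branch) {
  return overlaps.map(([a, b]) => pairId(a, b, branch)).sort();
}
```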
-

How We Keep It Clean (deterministic GC)

-
-

GC Ephemeral (timeline_flush)

-
-
GC Ephemeral Step 1
Step 1 — LHS: ephemeral nodes

Before flush, the frame has proxies, pairs, TOIs and optional per‑tick contacts. They’re marked ephemeral.

-
GC Ephemeral Step 2
Step 2 — Selection

We deterministically select unreferenced, older artifacts for deletion. The retention policy is configured and recorded.

-
GC Ephemeral Step 3
Step 3 — RHS: nodes deleted

We remove the selected nodes in a stable ID order. Snapshots after flush are identical across peers/branches.

-
-
-

- Broad‑phase proxies, potential pairs, transient TOIs and, optionally, per‑tick contacts are ephemeral. At - timeline_flush we delete them in a deterministic order. We keep only the high‑value artifacts (Aion‑tagged events, metrics) for tools and audits. -

-
    -
  • Different: many engines leak implicit caches across frames; Echo models and cleans them explicitly as graph data.
  • -
  • Determinism: GC order is sorted by ID; post‑flush snapshots are identical across branches and peers.
  • -
-
-
+
+

NarrowPhaseDiscrete (update)

+
+
+ NarrowPhaseDiscrete Step 1 +
+ Step 1 — LHS: discrete overlap @ end pose +
+
+

+ For low‑speed pairs, we evaluate shapes at the end + pose of tick n. If they overlap, we proceed + to build a manifold. +

+
    +
  • + Different: thresholding policy + is data; no hidden time‑step heuristics. +
  • +
+
+
+
+ NarrowPhaseDiscrete Step 2 +
Step 2 — K: pair preserved
+
+

+ We keep the PotentialPair (K). The + narrow phase acts as a pure derivation from + pair+poses. +

+
+
+
+ NarrowPhaseDiscrete Step 3 +
+ Step 3 — RHS: Contact with Manifold added +
+
+

+ We create a Contact with a reduced + Manifold (2–4 points). Points are + canonicalized by feature IDs to ensure reproducible + ordering. +

+
    +
  • + Different: engine doesn’t call + your code mid‑narrow; it records facts you can + read consistently. +
  • +
  • + Determinism: centralized + tolerances + ordering. +
  • +
+
+
+
+
+

+ For low‑speed pairs, we evaluate shapes at end‑of‑tick poses + and, if intersecting, create a Contact with + a deterministically ordered Manifold (2–4 + clipped points). The contact attaches to the pair and to the + producing tick. Manifold point ordering and feature IDs are + canonicalized to remove platform drift. +

+
    +
  • + Different: instead of imperative + callbacks that mutate scripts, Echo records contacts as + first‑class nodes; scripts read them after rules run. +
  • +
  • + Determinism: manifold reduction, + feature selection, and floating‑point tolerances are + centralized and quantized. +
  • +
+
+
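Canonicalizing manifold points by feature IDs, as described above, is essentially a stable sort plus reduction. A minimal sketch (the `featureA`/`featureB` field names and the 4-point cap are illustrative assumptions, not the engine's real data model):

```javascript
// Order contact points by their (featureA, featureB) IDs, then keep at most
// maxPoints, so the reduced manifold is identical on every peer regardless
// of the order in which clipping produced the points.
function canonicalManifold(points, maxPoints = 4) {
  return [...points]
    .sort((p, q) =>
      p.featureA !== q.featureA ? p.featureA - q.featureA : p.featureB - q.featureB,
    )
    .slice(0, maxPoints);
}
```

The copy before sorting keeps the derivation pure: the input point set is read, never mutated, matching the "pure derivation from pair+poses" framing.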
+ +
+

NarrowPhaseCCD (update)

+
+
+ NarrowPhaseCCD Step 1 +
Step 1 — LHS: CCD policy triggers
+
+

+ Policy flags fast motion/small features (or + material‑required CCD). We will compute a + Toi in [0,1] before creating a + contact. +

+
+
+
+ NarrowPhaseCCD Step 2 +
Step 2 — K: pair preserved
+
+

+ We keep the pair (K) and run + conservative advancement or a swept + primitive test to find the impact time. +

+
+
+
+ NarrowPhaseCCD Step 3 +
Step 3 — RHS: Toi + Contact added
+
+

+ We emit a Toi node with quantized + s and a Contact at the + impact pose. Quantization and iteration caps are + recorded to make this stable. +

+
+
+
+
+

+ When a policy indicates high motion or small features, we + run CCD: conservative advancement for + general convex shapes or closed‑form sweeps for + spheres/capsules. We emit a Toi node with + quantized s ∈ [0,1] and a + Contact at the impact pose. Because + s is quantized and the rule scopes are stable, + peers compute the same TOI and contact set. +

+
    +
  • + Different: CCD outputs are persisted as + graph data (Toi + Contact), not transient solver state; + branches and replays see identical values. +
  • +
  • + Determinism: iteration caps, policy + thresholds, and s quantization are recorded; + identical inputs yield identical s. +
  • +
+
+
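The conservative-advancement loop with a recorded iteration cap and quantized `s` can be sketched like this. The quantum, cap, and `distanceAt` callback are assumptions for illustration; the real engine's shape queries and policy values differ:

```javascript
const S_QUANT = 1 / 65536; // assumed quantization step for s
const MAX_ITERS = 32; // recorded iteration cap, part of the policy

// Advance s through [0, 1]: each step moves by distance / closingSpeed,
// which cannot overshoot first contact. Returns quantized s, or null if
// no impact occurs inside this tick's window.
function conservativeToi(distanceAt, closingSpeed) {
  let s = 0;
  for (let i = 0; i < MAX_ITERS; i++) {
    const d = distanceAt(s);
    if (d <= 0) break; // touching at s
    const step = d / closingSpeed; // safe conservative advance
    if (s + step >= 1) return null; // impact is beyond this tick
    s += step;
    if (step < S_QUANT) break; // converged within one quantum
  }
  return Math.round(s / S_QUANT) * S_QUANT;
}
```

Quantizing the result is what lets peers that ran the same capped iteration agree bit-for-bit on the TOI, which is the determinism claim the section makes.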
+ +
+

ContactEvents (post_update)

+
+
+ ContactEvents Step 1 +
+ Step 1 — LHS: contact states n-1 vs n +
+
+

+ We stage previous and current + Contact facts for the pair to + compute Begin/Persist/End. +

+
+
+
+ ContactEvents Step 2 +
Step 2 — K: contacts preserved
+
+

+ K keeps both contact nodes in scope + so event construction is a pure comparison, not + in‑place mutation. +

+
+
+
+ ContactEvents Step 3 +
Step 3 — RHS: ContactEvent added
+
+

+ We create a + ContactEvent (Begin/Persist/End) + sorted by (toi_s, ContactId). + Events are nodes that tools and scripts can consume + deterministically. +

+
+
+
+
+

+ We diff previous vs current Contact nodes + and create a + ContactEvent (Begin/Persist/End) ordered by + (toi_s, ContactId). Events are regular + nodes and flow through the Temporal Bridge to tools, replay, + or networking. +

+
    +
  • + Different: engines typically invoke + user callbacks in engine order; Echo records events as + data first, then tooling/scripts consume them + deterministically. +
  • +
  • + Determinism: strict sort order; event + payloads are value objects that hash the same on every + peer. +
  • +
+
+
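The Begin/Persist/End derivation above is a pure set diff followed by the `(toi_s, ContactId)` sort. A minimal sketch (the Map-keyed-by-id shape and field names are illustrative assumptions):

```javascript
// Compare contacts at tick n-1 (prev) and tick n (curr), both keyed by
// contact id. Present in both => Persist; only in curr => Begin; only in
// prev => End. Output is sorted by (toiS, contactId) for stable order.
function contactEvents(prev, curr) {
  const events = [];
  for (const c of curr.values()) {
    events.push({ kind: prev.has(c.id) ? "Persist" : "Begin", toiS: c.toiS, contactId: c.id });
  }
  for (const c of prev.values()) {
    if (!curr.has(c.id)) events.push({ kind: "End", toiS: c.toiS, contactId: c.id });
  }
  return events.sort(
    (a, b) => a.toiS - b.toiS || a.contactId.localeCompare(b.contactId),
  );
}
```

Neither input map is mutated, matching the "pure comparison, not in-place mutation" framing of Interface K.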
+ +

+ How We Keep It Clean (deterministic GC) +

+
+

GC Ephemeral (timeline_flush)

+
+
+ GC Ephemeral Step 1 +
Step 1 — LHS: ephemeral nodes
+
+

+ Before flush, the frame has proxies, pairs, TOIs and + optional per‑tick contacts. They’re marked + ephemeral. +

+
+
+
+ GC Ephemeral Step 2 +
Step 2 — Selection
+
+

+ We deterministically select unreferenced, older + artifacts for deletion. The retention policy is + configured and recorded. +

+
+
+
+ GC Ephemeral Step 3 +
Step 3 — RHS: nodes deleted
+
+

+ We remove the selected nodes in a stable ID order. + Snapshots after flush are identical across + peers/branches. +

+
+
+
+
+

+ Broad‑phase proxies, potential pairs, transient TOIs and, + optionally, per‑tick contacts are + ephemeral. At timeline_flush we + delete them in a deterministic order. We keep only the + high‑value artifacts (Aion‑tagged events, metrics) for tools + and audits. +

+
    +
  • + Different: many engines leak implicit + caches across frames; Echo models and cleans them + explicitly as graph data. +
  • +
  • + Determinism: GC order is sorted by ID; + post‑flush snapshots are identical across branches and + peers. +
  • +
+
+
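Deterministic GC at `timeline_flush` reduces to a pure selection over node facts plus a stable sort. A sketch under assumed node shapes (`ephemeral`, `refCount`, `tick` are illustrative field names, not the engine's schema):

```javascript
// Select ephemeral, unreferenced nodes older than the configured retention
// window, and return their ids in stable sorted order. Deleting in this
// order makes post-flush snapshots identical across peers and branches.
function gcEphemeral(nodes, currentTick, retentionTicks) {
  return nodes
    .filter(
      (n) => n.ephemeral && n.refCount === 0 && currentTick - n.tick > retentionTicks,
    )
    .map((n) => n.id)
    .sort(); // id order, independent of discovery order
}
```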
- - - + + + diff --git a/docs/workflows.md b/docs/workflows.md index a070ede8..f72e4d3e 100644 --- a/docs/workflows.md +++ b/docs/workflows.md @@ -14,22 +14,6 @@ This doc is the “official workflow index” for Echo: how we work, what invari - Record architectural decisions in ADRs (`docs/adr/`) or PR descriptions. - Before opening a PR, run the validation workflow below. -### Agent Context System (AI Agents) - -AI agents use a **2-tier context system** for seamless handoffs. See -[`docs/archive/AGENTS.md`](./archive/AGENTS.md) for full details. - -| Tier | Store | Purpose | -| --------- | ----------------------------------- | ------------------------------------------ | -| Immediate | Redis stream (`echo:agent:handoff`) | Current task state, branch, blockers | -| Deep | Knowledge graph | Architecture decisions, patterns, entities | - -**Quick reference:** - -- **Session start**: `XRANGE echo:agent:handoff - + COUNT 5` + `search_nodes("")` -- **During work**: Update Redis after significant actions -- **Session end**: Always write a handoff entry with `branch`, `status`, `next_steps` - --- ## Branch + PR Workflow diff --git a/scripts/check-append-only.js b/scripts/check-append-only.js index 14b08f60..223477df 100644 --- a/scripts/check-append-only.js +++ b/scripts/check-append-only.js @@ -3,7 +3,7 @@ import { execFileSync } from "node:child_process"; -const files = ["AGENTS.md", "docs/archive/tasks/TASKS-DAG.md"]; +const files = ["AGENTS.md", "docs/assets/dags/tasks-dag-source.md"]; const args = process.argv.slice(2); const baseArgIndex = args.indexOf("--base"); diff --git a/scripts/generate-dependency-dags.js b/scripts/generate-dependency-dags.js index a025bf0e..383cac3c 100644 --- a/scripts/generate-dependency-dags.js +++ b/scripts/generate-dependency-dags.js @@ -103,7 +103,7 @@ function parseArgs(argv) { milestonesJson: ".cache/echo/deps/milestones-all.json", configJson: "docs/assets/dags/deps-config.json", outDir: "docs/assets/dags", - tasksDagPath: 
path.join("docs", "archive", "tasks", "TASKS-DAG.md"), + tasksDagPath: path.join("docs", "assets", "dags", "tasks-dag-source.md"), snapshot: null, snapshotLabelMode: "auto", }; @@ -138,7 +138,7 @@ function parseArgs(argv) { " --milestones-json Read/write milestones snapshot JSON", " --config Dependency config (edges) JSON", " --out-dir Output directory for DOT/SVG", - " --tasks-dag Path to docs/archive/tasks/TASKS-DAG.md (reality edges)", + " --tasks-dag Path to docs/assets/dags/tasks-dag-source.md (reality edges)", " --snapshot Override label date in output graphs (legacy; prefer --snapshot-label)", " --snapshot-label Snapshot label: auto|none|rolling|YYYY-MM-DD", "", @@ -284,7 +284,9 @@ function emitIssueDot({ issues, issueEdges, snapshotLabel, realityEdges }) { } if (realityEdges) { - // Add nodes for reality-only edges (TASKS-DAG.md) when both endpoints exist and the edge is absent from configuredEdges, so red “missing from plan” edges can render. + // Add nodes for reality-only edges (tasks-dag-source.md) when both endpoints + // exist and the edge is absent from configuredEdges, so red “missing from + // plan” edges can render. 
for (const edgeKey of realityEdges) { const realityEdge = safeParseEdgeKey(edgeKey, "reality edge"); if (!realityEdge) continue; @@ -336,7 +338,7 @@ function emitIssueDot({ issues, issueEdges, snapshotLabel, realityEdges }) { lines.push( ` label="${escapeDotString( title, - )}\\nEdge direction: prerequisite → dependent (do tail before head)\\nEdge styles encode confidence (solid=strong, dashed=medium, dotted=weak).\\nGreen = Confirmed in TASKS-DAG.md; Red = In TASKS-DAG.md but missing from Plan.";`, + )}\\nEdge direction: prerequisite → dependent (do tail before head)\\nEdge styles encode confidence (solid=strong, dashed=medium, dotted=weak).\\nGreen = Confirmed in tasks-dag-source.md; Red = In tasks-dag-source.md but missing from Plan.";`, ); lines.push(""); @@ -417,7 +419,7 @@ function emitIssueDot({ issues, issueEdges, snapshotLabel, realityEdges }) { const { from: u, to: v } = realityEdge; if (byNum.has(u) && byNum.has(v)) { lines.push( - ` i${u} -> i${v} [color="red", penwidth=2.0, style="dashed", tooltip="Inferred from TASKS-DAG.md (missing from Plan)"];`, + ` i${u} -> i${v} [color="red", penwidth=2.0, style="dashed", tooltip="Inferred from tasks-dag-source.md (missing from Plan)"];`, ); } } diff --git a/scripts/generate-tasks-dag.js b/scripts/generate-tasks-dag.js index 61029496..6b8ac24c 100644 --- a/scripts/generate-tasks-dag.js +++ b/scripts/generate-tasks-dag.js @@ -7,7 +7,7 @@ import { spawnSync } from "node:child_process"; import { parseTasksDag } from "./parse-tasks-dag.js"; import { escapeDotString } from "./dag-utils.js"; -const INPUT_FILE_DISPLAY = "docs/archive/tasks/TASKS-DAG.md"; +const INPUT_FILE_DISPLAY = "docs/assets/dags/tasks-dag-source.md"; const INPUT_FILE = path.join(...INPUT_FILE_DISPLAY.split("/")); const OUT_DIR = "docs/assets/dags"; const DOT_FILE = path.join(OUT_DIR, "tasks-dag.dot"); diff --git a/tests/hooks/test_dependency_dags.sh b/tests/hooks/test_dependency_dags.sh index e70dbff9..f9bbee39 100644 --- 
a/tests/hooks/test_dependency_dags.sh +++ b/tests/hooks/test_dependency_dags.sh @@ -29,7 +29,6 @@ trap cleanup EXIT mkdir -p \ "$tmpdir/scripts" \ - "$tmpdir/docs/archive/tasks" \ "$tmpdir/docs/assets/dags" \ "$tmpdir/.cache/echo/deps" @@ -82,7 +81,7 @@ cat >"$tmpdir/docs/assets/dags/deps-config.json" <<'EOF' } EOF -cat >"$tmpdir/docs/archive/tasks/TASKS-DAG.md" <<'EOF' +cat >"$tmpdir/docs/assets/dags/tasks-dag-source.md" <<'EOF' ## [#2: Dependent issue](https://example.com/issues/2) - Blocked by: @@ -97,9 +96,9 @@ if ( node scripts/generate-dependency-dags.js >"$output_file" 2>&1 ); then if grep -Eq 'i1 -> i2 \[[^]]*color="red"' "$tmpdir/docs/assets/dags/issue-deps.dot"; then - pass "generator reads archived TASKS-DAG source by default" + pass "generator reads tasks DAG source by default" else - fail "generator should render a reality-only edge from the archived TASKS-DAG source" + fail "generator should render a reality-only edge from the tasks DAG source" if [[ -f "$tmpdir/docs/assets/dags/issue-deps.dot" ]]; then cat "$tmpdir/docs/assets/dags/issue-deps.dot" else @@ -107,14 +106,14 @@ if ( fi fi else - fail "generator should succeed with only the archived TASKS-DAG source present" + fail "generator should succeed with only the tasks DAG source present" cat "$output_file" fi cat >"$tmpdir/docs/assets/dags/clusters-config.json" <<'EOF' [" "] EOF -cat >"$tmpdir/docs/archive/tasks/TASKS-DAG.md" <<'EOF' +cat >"$tmpdir/docs/assets/dags/tasks-dag-source.md" <<'EOF' ## [#1: Alpha seed](https://example.com/issues/1) - Blocks: diff --git a/xtask/src/main.rs b/xtask/src/main.rs index e92ffbe3..729939ac 100644 --- a/xtask/src/main.rs +++ b/xtask/src/main.rs @@ -5382,7 +5382,7 @@ mod tests { #[test] fn root_relative_link_resolves_against_docs_root() { - let source = Path::new("docs/meta/docs-index.md"); + let source = Path::new("docs/index.md"); let docs_root = Path::new("docs"); let candidates = build_candidates(source, "/guide/start-here.md", docs_root); assert!(candidates
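The `scripts/check-append-only.js` change above retargets the tracked file list at `docs/assets/dags/tasks-dag-source.md`. The essence of an append-only check — a sketch of the intent only, not the script's real implementation — is that a unified diff for a protected file may add lines but never remove them:

```javascript
// Sketch: scan a unified diff and flag any removed content line. Lines
// beginning with "-" are removals, except the "--- a/path" file header.
// (A content line that itself starts with "---" would be a false negative;
// the real check would need hunk-aware parsing.)
function violatesAppendOnly(unifiedDiff) {
  return unifiedDiff
    .split("\n")
    .some((line) => line.startsWith("-") && !line.startsWith("---"));
}
```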