>` (payloads)
-> - Cache locality during sorting
-> - Handle indirection mechanism
-> 5. **The histogram counting algorithm:**
-> - Two-pass per digit: count occurrences, then exclusive prefix sum to get write indices
-> - Why we zero `counts16` before each pass
-> - How the scratch buffer enables in-place-like behavior
->
-> Add this explanation as inline comments in `scheduler.rs` and/or as a new doc file at `docs/notes/radix-sort-internals.md`. Include diagrams (Mermaid or ASCII art) showing the pass sequence and memory layout."
-
-### Radix Sort Internals
-
-The implementation lives in `crates/warp-core/src/scheduler.rs`. This section
-documents the algorithm as implemented.
-
-#### Sorting key: `RewriteThin`
-
-```text
-RewriteThin (48 bytes)
-├─ scope_be32: [u8; 32] ← BLAKE3 scope hash, byte-lexicographic
-├─ rule_id: u32 ← compact rule identifier
-├─ nonce: u32 ← insertion-order tie-breaker
-└─ handle: usize ← index into fat payload vec
-```
-
-**Thin/fat separation:** Only the 48-byte `RewriteThin` records are touched
-during sorting. Full payloads (`Option<PendingRewrite>`) live in a separate `fat` vector
-indexed by `handle`. This keeps sort cache lines tight — the radix passes
-never touch payload data.
-
-#### Why 20 passes?
-
-The composite sort key is `(scope_be32, rule_id, nonce)` = 32 + 4 + 4 = 40
-bytes. Each pass processes a 16-bit digit (2 bytes), so 40 / 2 = **20 passes**.
-
-#### Why 16-bit digits (not 8-bit)?
-
-| Digit size | Histogram entries | Histogram memory | Passes |
-| ---------- | ----------------: | ---------------: | -----: |
-| 8-bit | 256 | 1 KB | 40 |
-| 16-bit | 65,536 | 256 KB | 20 |
-
-At the target scale (n > 1024), pass count dominates. Each pass involves a
-full scan + scatter of all n records. Halving the pass count from 40 to 20
-is worth the 256 KB histogram — well within L2 cache on modern CPUs.
-
-#### Why LSD (Least Significant Digit)?
-
-- **Stable:** LSD radix sort is inherently stable. Each pass preserves the
- relative order established by previous passes.
-- **Predictable:** Exactly k passes for k digits — no recursion, no
- early-out variance.
-- **Required for nonce tie-breaking:** Stability ensures that when
- `scope_be32` and `rule_id` are equal, the nonce (insertion order)
- determines the final position — matching the comparison sort's behavior.
-
-MSD would require recursive partitioning and explicit tie-breaking logic.
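To see why stability makes LSD work, here is a tiny standalone demonstration (not from `scheduler.rs`): sorting stably by the least significant digit first, then stably by the most significant digit, yields the full lexicographic order, because the second pass preserves the order established by the first within equal buckets.

```rust
fn main() {
    // Two-digit keys as (hi, lo) pairs.
    let mut v: Vec<(u8, u8)> = vec![(2, 1), (1, 2), (2, 0), (1, 0)];
    v.sort_by_key(|&(_, lo)| lo); // pass 0: least significant digit (stable sort)
    v.sort_by_key(|&(hi, _)| hi); // pass 1: most significant digit (stable sort)
    // Result is fully sorted lexicographically by (hi, lo).
    assert_eq!(v, vec![(1, 0), (1, 2), (2, 0), (2, 1)]);
}
```

Rust's `sort_by_key` is stable, which is exactly the property each radix pass must provide for the composite key to come out right.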
-
-#### Pass sequence (LSD order)
-
-```text
-Pass 0: nonce low 16 bits (least significant)
-Pass 1: nonce high 16 bits
-Pass 2: rule_id low 16 bits
-Pass 3: rule_id high 16 bits
-Pass 4: scope_be32 pair 15 (bytes [30..32], scope LSB)
-Pass 5: scope_be32 pair 14 (bytes [28..30])
- ⋮
-Pass 19: scope_be32 pair 0 (bytes [0..2], scope MSB)
-```
-
-After all 20 passes, the primary sort key is `scope_be32` (most significant),
-then `rule_id`, then `nonce` — matching `cmp_thin`'s comparison order.
-
-#### Digit extraction (`bucket16`)
-
-```text
-passes 0–1: u16_from_u32_le(nonce, idx) — LE decomposition
-passes 2–3: u16_from_u32_le(rule_id, idx) — LE decomposition
-passes 4–19: u16_be_from_pair32(scope, 19-pass) — BE pair from byte array
-```
-
-The scope uses big-endian pairs because `scope_be32` is stored in
-byte-lexicographic order. The `19 - pass` index maps LSD pass ordering
-onto big-endian byte positions (pass 4 → pair 15 = LSB, pass 19 → pair
-0 = MSB).
-
-#### Three-phase counting sort (per pass)
-
-Each of the 20 passes executes:
-
-1. **Count:** Zero the 65,536-entry histogram, then scan all n records,
- incrementing `counts[bucket16(record, pass)]`.
-2. **Prefix sum:** Convert counts to starting positions via exclusive
- cumulative sum: `counts[i] = sum of counts[0..i]`.
-3. **Stable scatter:** Scan records in order, placing each at
- `dst[counts[bucket]++]`. The post-increment ensures stable ordering
- within each bucket.
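The three phases above can be sketched as one generic counting-sort pass (an illustrative stand-in, not the `scheduler.rs` code; names and signatures are assumptions):

```rust
// One stable counting-sort pass: count -> exclusive prefix sum -> scatter.
fn counting_sort_pass(src: &[u32], dst: &mut [u32], digit: impl Fn(u32) -> usize, buckets: usize) {
    let mut counts = vec![0usize; buckets];
    // Phase 1: count occurrences of each digit.
    for &x in src {
        counts[digit(x)] += 1;
    }
    // Phase 2: exclusive prefix sum -> starting write index per bucket.
    let mut sum = 0;
    for c in counts.iter_mut() {
        let n = *c;
        *c = sum;
        sum += n;
    }
    // Phase 3: stable scatter; post-increment keeps in-bucket source order.
    for &x in src {
        let b = digit(x);
        dst[counts[b]] = x;
        counts[b] += 1;
    }
}

fn main() {
    let src = [0x0203u32, 0x0101, 0x0102, 0x0201];
    let mut dst = [0u32; 4];
    // Sort by high byte only; ties keep source order (stability).
    counting_sort_pass(&src, &mut dst, |x| (x >> 8) as usize, 256);
    assert_eq!(dst, [0x0101, 0x0102, 0x0203, 0x0201]);
}
```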
-
-#### Ping-pong buffer
-
-The sort alternates between `thin` and `scratch` vectors each pass:
-
-```text
-Pass 0: thin → scratch
-Pass 1: scratch → thin
-Pass 2: thin → scratch
- ⋮
-Pass 19: scratch → thin (20 passes = even, result in thin)
-```
-
-Since 20 is even, the final sorted result is already in `thin`. If the
-pass count were odd, a final `copy_from_slice` would sync the result.
-
-#### Threshold: `SMALL_SORT_THRESHOLD = 1024`
-
-- **n ≤ 1024:** Use `sort_unstable_by(cmp_thin)` — Rust's pattern-defeating
- quicksort. Avoids the fixed 256 KB histogram zeroing cost.
-- **n > 1024:** Use the 20-pass radix sort — O(n) scaling dominates the
- O(n log n) comparison sort.
-
-The threshold was empirically determined on Apple Silicon. The histogram
-zeroing cost (~256 KB × 20 passes) is amortized at n ≈ 1024. This is a
-compile-time constant; all participants in a deterministic simulation MUST
-use the same value.
-
----
-
-## Prompt 3: Document Assumptions & Arbitrary Decisions
-
-**Prompt for next session:**
-
-> "Please review the scheduler optimization implementation and create comprehensive documentation explaining decisions that may appear arbitrary or require platform-specific validation. Create `docs/notes/scheduler-implementation-notes.md` covering:
->
-> 1. **The 1024 threshold choice:**
-> - Empirically determined on M1 Mac (Apple Silicon)
-> - Based on when 5MB zeroing cost becomes negligible relative to comparison sort overhead
-> - **Platform dependency**: Intel x86 may have different optimal threshold due to:
-> - Different memory bandwidth characteristics
-> - Different cache sizes (L1/L2/L3)
-> - Different CPU instruction latencies
-> - **Validation needed**: Benchmark on Intel/AMD x86_64, ARM Cortex-A series, RISC-V
-> - **Potential solution**: Make threshold configurable via feature flag or runtime detection
-> - **Determinism note:** `SMALL_SORT_THRESHOLD` is a compile-time constant (`1024`). All participants must use the same value. This is not auto-tuned.
-> 2. **16-bit radix digit size:**
-> - Assumes 256KB zeroing is acceptable fixed cost
> - Alternative: 8-bit digits (~40 KB total zeroing at 1 KB/pass, 40 passes) might win on memory-constrained systems
-> - Alternative: 32-bit digits (16GB histogram!) is obviously wrong, but why? Document the analysis.
> - **Question**: Did we test 12-bit digits (16 KB histogram at 4-byte counts, ~27 passes)? Should we?
-> 3. **FxHasher (rustc-hash) choice:**
-> - Fast but non-cryptographic
-> - Assumes no adversarial input targeting hash collisions
-> - **Risk**: Pathological inputs could cause O(n²) behavior in the HashMap
-> - **Mitigation**: Could switch to ahash or SipHash if collision attacks are a concern
-> 4. **GenSet generation counter wraparound:**
-> - What happens when `gen: u32` overflows after 4 billion transactions?
-> - Currently unhandled - assumes no single engine instance lives that long
-> - **Validation needed**: Add a debug assertion or overflow handling
-> 5. **Comparison sort choice (sort_unstable_by):**
-> - Why unstable sort is acceptable (we have explicit nonce tie-breaking in the comparator)
> - Why pdqsort rather than other algorithms? (It's already Rust's default)
-> 6. **Scope hash size (32 bytes = 256 bits):**
-> - Why this size? Comes from BLAKE3 output
-> - Radix pass count directly depends on this
-> - If we ever change hash algorithm, pass count must be recalculated
->
-> For each decision, document:
->
-> - **Rationale**: Why we chose this
-> - **Assumptions**: What must be true for this choice to be correct
-> - **Risks**: What could go wrong
-> - **Validation needed**: What tests/benchmarks would increase confidence
-> - **Alternatives**: What we considered but rejected, and why"
-
----
-
-## Prompt 4: Worst-Case Scenarios & Mitigations
-
-**Prompt for next session:**
-
-> "Please analyze the hybrid scheduler implementation to identify **worst-case scenarios** and design mitigations with empirical validation. Focus on adversarial inputs and edge cases where performance or correctness could degrade:
->
-> 1. **Adversarial Hash Inputs:**
-> - **Scenario**: All scopes hash to values with identical high-order bits (e.g., all start with 0x00000000...)
-> - **Impact**: Radix sort doesn't partition until late passes, cache thrashing
-> - **Test**: Generate 10k scopes with only low-order byte varying
-> - **Mitigation**: Document that this is acceptable (real hashes distribute uniformly), or switch to MSD radix if detected
-> 2. **Threshold Boundary Oscillation:**
-> - **Scenario**: Input size oscillates around 1024 (e.g., 1000 → 1050 → 980 → 1100)
-> - **Impact**: Algorithm selection thrashing, icache/dcache pollution
-> - **Test**: Benchmark repeated cycles of 1000/1050 element drains
-> - **Mitigation**: Add hysteresis (e.g., switch at 1024 going up, 900 going down)
-> 3. **FxHashMap Collision Attack:**
-> - **Scenario**: Malicious input with (scope, rule_id) pairs engineered to collide in FxHasher
-> - **Impact**: HashMap lookups degrade to O(n), enqueue becomes O(n²)
-> - **Test**: Generate colliding inputs (requires reverse-engineering FxHash)
-> - **Mitigation**: Switch to ahash (DDoS-resistant) or document trust model
-> 4. **Memory Exhaustion:**
-> - **Scenario**: Enqueue 10M+ rewrites before draining
> - **Impact**: thin + scratch vectors at 48 bytes per record (~960 MB combined at 10M), plus the fat payload vector = potential OOM
-> - **Test**: Benchmark memory usage at n = 100k, 1M, 10M
-> - **Mitigation**: Add early drain triggers or pool scratch buffers across transactions
-> 5. **Highly Skewed Rule Distribution:**
-> - **Scenario**: 99% of rewrites use rule_id = 0, remainder spread across 1-255
-> - **Impact**: First rule_id radix pass is nearly no-op, wasted cache bandwidth
-> - **Test**: Generate skewed distribution, measure vs uniform distribution
-> - **Mitigation**: Skip radix passes if variance is low (requires online detection)
-> 6. **Transaction Starvation:**
-> - **Scenario**: Transaction A enqueues 100k rewrites, transaction B enqueues 1 rewrite
-> - **Impact**: B's single rewrite pays proportional cost in GenSet conflict checking
-> - **Test**: Benchmark two-transaction scenario with 100k vs 1 rewrites
-> - **Mitigation**: Per-transaction GenSet or early-out if footprint is empty
->
-> For each scenario:
->
-> 1. **Create a benchmark** in `crates/warp-benches/benches/scheduler_adversarial.rs`
-> 2. **Measure degradation** compared to best-case (e.g., how much slower?)
-> 3. **Implement mitigation** if degradation is >2x
-> 4. **Re-benchmark** to prove mitigation works
-> 5. **Document** in `docs/notes/scheduler-worst-case-analysis.md` with graphs
->
-> The goal is to **quantify** our worst-case behavior and provide **evidence** that mitigations work, not just intuition."
-
----
-
-## Alternatives Considered
-
-During the optimization process, we evaluated several alternative approaches before settling on the current hybrid radix sort implementation:
-
-### 1. **Pure Comparison Sort (Status Quo)**
-
-- **Approach**: Keep BTreeMap-based scheduling
-- **Pros**:
- - Already implemented and tested
- - Simple, no custom sort logic
- - Good for small n
-- **Cons**:
- - O(n log n) complexity
- - 1.33 ms at n=1000 vs 0.75 ms for hybrid (hybrid cuts drain time by 44%)
- - Doesn't scale to n=10k+
-- **Why rejected**: Performance target (60 FPS = 16.67ms frame budget) requires sub-millisecond scheduling at n=1000+. BTreeMap doesn't meet this at scale.
-
----
-
-### 2. **Pure Radix Sort (No Threshold)**
-
-- **Approach**: Always use 20-pass radix sort, no comparison fallback
-- **Pros**:
- - Simpler code (no branching)
- - Perfect O(n) scaling
- - Excellent at large n
-- **Cons**:
- - 91x slower at n=10 (687µs vs 7.5µs)
- - Fixed 5MB zeroing cost dominates small inputs
- - Real games have variable rewrite counts per frame
-- **Why rejected**:
- - Most frames have <100 rewrites, paying huge penalty for rare large frames is unacceptable
- - "Flat green line" in benchmarks (Benchmark visualization: see performance data in `scheduler-radix-optimization-2.md`.)
- - Cannot justify 91x regression for 90% of frames to optimize 10% of frames
-
----
-
-### 3. **8-bit Digit Radix Sort**
-
-- **Approach**: Use 256-entry histogram (1KB) with 40 passes instead of 16-bit/20 passes
-- **Pros**:
- - Only ~40 KB total zeroing (1 KB × 40 passes) vs 5 MB
- - Could lower threshold to ~128
- - Better cache locality (256 entries fit in L1)
-- **Cons**:
- - Double the number of passes (40 vs 20)
- - Each pass has loop overhead, random access patterns
- - More opportunities for branch misprediction
-- **Why rejected**:
- - Preliminary analysis suggested memory bandwidth not the bottleneck, pass count is
- - At n=10k, memory cost (5MB) is amortized, but 20 extra passes are not
- - Rust's `sort_unstable` is _extremely_ optimized; difficult to surpass with more passes
- - Would need empirical benchmarking to prove 8-bit is better (didn't have time)
-
----
-
-### 4. **Active-Bucket Zeroing**
-
-- **Approach**: Only zero histogram buckets that were non-zero after previous pass
-- **Pros**:
- - Could save 15-20% at large n by avoiding full 256KB zeroes
- - Maintains 16-bit digit performance
-- **Cons**:
- - Requires tracking which buckets are "dirty"
- - Extra bookkeeping overhead (bitmap? linked list?)
- - Complexity increase
- - Benefit only at n > 10k
-- **Why rejected**:
- - Premature optimization - current implementation meets performance targets
- - Complexity/benefit ratio not compelling
- - Can revisit if profiling shows zeroing is bottleneck at scale
- - User's philosophy: "golden path happens 90% of the time"
-
----
-
-### 5. **Cross-Transaction Buffer Pooling**
-
-- **Approach**: Reuse `scratch` and `counts16` buffers across multiple `drain_in_order()` calls
-- **Pros**:
- - Amortizes allocation cost across multiple frames
- - Reduces memory allocator pressure
- - Could enable per-thread pools for parallelism
-- **Cons**:
- - Requires lifetime management (who owns the pool?)
- - Breaks current simple API (`drain_in_order()` is self-contained)
- - Unclear benefit (allocations are fast, we care about compute time)
-- **Why rejected**:
- - No evidence allocation is bottleneck (Criterion excludes setup with `BatchSize::PerIteration`)
- - Complexity without measured gain
- - Would need profiling to justify
-
----
-
-### 6. **Rule-Domain Optimization**
-
- - **Approach**: If the `rule_id` domain fits in 16 bits (< 65,536 rules), skip the high-order rule_id radix pass
-- **Pros**:
- - Saves 1 pass for common case (most games have <100 rules)
- - Simple optimization (if `max_rule_id < 65_536`, skip the high pass)
-- **Cons**:
- - Requires tracking max rule_id dynamically
- - Saves ~5% total time (1/20 passes)
- - Adds conditional logic to hot path
-- **Why rejected**:
- - Marginal gain (~5%) not worth complexity
- - Pass overhead is cheap relative to histogram operations
- - User constraint: "one dude, on a laptop" - optimize high-value targets first
-
----
-
-### 7. **MSD (Most Significant Digit) Radix Sort**
-
-- **Approach**: Sort high-order bytes first, recursively partition
-- **Pros**:
- - Can early-out if data is already partitioned
- - Potentially fewer passes for sorted data
-- **Cons**:
- - Not stable (requires explicit tie-breaking logic)
- - Variable number of passes (hard to predict performance)
- - Recursive implementation (cache unfriendly)
- - Complex to implement correctly
-- **Why rejected**:
- - LSD radix guarantees exactly 20 passes (predictable performance)
- - Stability is critical for nonce tie-breaking
- - Our data is random (graph hashes), no sorted patterns to exploit
- - Complexity not justified by speculative gains
-
----
-
-### 8. **Hybrid with Multiple Thresholds**
-
-- **Approach**: Three-way split: comparison (<256), 8-bit radix (256-4096), 16-bit radix (>4096)
-- **Pros**:
- - Theoretically optimal for all input sizes
- - Could squeeze out extra 5-10% in 100-1000 range
-- **Cons**:
- - Three codepaths to maintain
- - Two threshold parameters to tune
- - Cache pollution from three different algorithms
- - Testing complexity (need coverage at both boundaries)
-- **Why rejected**:
- - Diminishing returns - hybrid with single threshold already meets targets
- - User's philosophy: "good enough for golden path"
- - Engineering time better spent on other features
- - Premature optimization
-
----
-
-## Summary: Why Hybrid Radix at 1024?
-
-The current implementation (comparison sort for n ≤ 1024, 16-bit radix for n > 1024) was chosen because:
-
-1. **Meets performance targets**: 44% speedup at n=1000, perfect O(n) at scale
-2. **Simple**: One threshold, two well-understood algorithms
-3. **Robust**: Rust's `sort_unstable` is battle-tested, radix is deterministic
-4. **Measurable**: Clear boundary at 1024 makes reasoning about performance easy
-5. **Good enough**: Covers 90% golden path, doesn't over-optimize edge cases
-
-Alternative approaches either:
-
-- Sacrificed small-n performance (pure radix)
-- Added complexity without measured gains (active-bucket zeroing, pooling)
-- Required more tuning parameters (multi-threshold hybrid)
-- Didn't align with user's resource constraints (one person, hobby project)
-
-The guiding principle: **"Ship what works for real use cases, iterate if profiling shows a better target."**
diff --git a/docs/archive/notes/scheduler-radix-optimization-2.md b/docs/archive/notes/scheduler-radix-optimization-2.md
deleted file mode 100644
index eb77e142..00000000
--- a/docs/archive/notes/scheduler-radix-optimization-2.md
+++ /dev/null
@@ -1,349 +0,0 @@
-
-
-
-# From $O(n \log n)$ to $O(n)$: Optimizing Echo’s Deterministic Scheduler
-
-> **Provenance:** This document supersedes `docs/archive/notes/scheduler-radix-optimization.md`. See the archive for earlier analysis.
-
-**Tags:** performance, algorithms, optimization, radix-sort
-
----
-
-## TL;DR
-
-- **Echo** runs at **60 fps** while processing **~5,000 DPO graph rewrites per frame**.
-- Determinism at _game scale_ is **confirmed**.
-- Scheduler now **linear-time** with **zero small-$n$ regressions**.
-
----
-
-## What is Echo?
-
-**Echo** is a **deterministic simulation engine** built on **graph-rewriting theory**.
-Although its applications span far beyond games, we’ll view it through the lens of a **game engine**.
-
-Traditional engines manage state via **mutable object hierarchies** and **event loops**.
-Echo represents the _entire_ simulation as a **typed graph** that evolves through **deterministic rewrite rules**—mathematical transformations that guarantee **bit-identical results** across platforms, replays, and networked peers.
-
-At Echo’s core lies the **WARP graph (WARP)**:
-
-- **Nodes are graphs** (a “player” is a subgraph with its own internal structure).
-- **Edges are graphs** (carry provenance and nested state).
-- **Rules are graph rewrites** (pattern-match → replace).
-
-Every frame the WARP is replaced by a new WARP—an **echo** of the previous state.
-
-### Why bother? Aren’t Unreal/Unity “solved”?
-
-They excel at **rendering** and **asset pipelines**, but their **state-management foundation** is fragile for the hardest problems in game dev:
-
-| Problem | Symptom |
-| ------------------------- | ----------------------------------------------------------------- |
-| **Divergent state** | Rubber-banding, client-side prediction, authoritative corrections |
-| **Non-reproducible bugs** | “Works on my machine”, heisenbugs |
-
-Echo eliminates both by making **state immutable** and **updates pure functions**.
-
----
-
-## Version Control for Reality
-
-Think of each frame as an **immutable commit** with a **cryptographic hash** over the reachable graph (canonical byte order).
-Player inputs become **candidate rewrites**. Thanks to **confluence** (category-theory math), all inputs fold into a **single deterministic effect**.
-
-```math
-(world, inputs) \to world'
-```
-
-No prediction. No rollback. No arbitration. If two machines disagree, a **hash mismatch at frame N+1** is an immediate, precise alarm.
-
-### Deterministic branching & merge (ASCII)
-
-```text
-Frame₀
- │
- ▼
- Frame₁───┐
- │ \
- ▼ \
- Frame₂A Frame₂B
- │ │
- └──────┬────┘
- ▼
- Merge₃ (confluence + canonical order)
-```
-
----
-
-## What Echo Unlocks
-
-| Feature | Traditional Engine | Echo |
-| ---------------------- | ---------------------------- | ---------------------------- |
-| **Perfect replays** | Recorded inputs + heuristics | Recompute from any commit |
-| **Infinite debugger** | Breakpoints + logs | Query graph provenance |
-| **Provable fairness** | Trust server | Cryptographic hash signature |
-| **Zero silent desync** | Prediction errors | Immediate hash check |
-| **Networking** | Send world diff | Send inputs only |
-
----
-
-## Confluence, Not Arbitration
-
-When multiple updates touch the same state, Echo **merges** them via **lattice operators** with **ACI** properties:
-
-- **Associative**, **Commutative**, **Idempotent**
-
-### Examples
-
-- Tag union: join(A, B) = A ∪ B
-- Scalar cap: join(Cap(a), Cap(b)) = Cap(max(a, b))
-
-Folding any bucket yields **one result**, independent of order or partitioning.
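A minimal sketch of these ACI joins (illustrative types, not Echo's actual API; tags are modeled as `u32` here):

```rust
use std::collections::BTreeSet;

// Tag union: associative, commutative, idempotent.
fn join_tags(a: &BTreeSet<u32>, b: &BTreeSet<u32>) -> BTreeSet<u32> {
    a.union(b).copied().collect()
}

// Scalar cap: join(Cap(a), Cap(b)) = Cap(max(a, b)).
fn join_cap(a: u32, b: u32) -> u32 {
    a.max(b)
}

fn main() {
    // Folding a bucket of caps gives the same result in any order.
    let caps = [2u32, 5, 3];
    assert_eq!(caps.iter().copied().fold(0, join_cap), 5);
    assert_eq!(caps.iter().rev().copied().fold(0, join_cap), 5);

    // ACI properties for tag union.
    let a: BTreeSet<u32> = BTreeSet::from([1, 2]);
    let b: BTreeSet<u32> = BTreeSet::from([2, 3]);
    assert_eq!(join_tags(&a, &b), join_tags(&b, &a)); // commutative
    assert_eq!(join_tags(&a, &a), a);                 // idempotent
}
```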
-
----
-
-## Safe Parallelism by Construction
-
-Updates are **DPO (Double Push-Out) graph rewrites**.
-
-- **Independent** rewrites run in parallel.
-- **Overlapping** rewrites are merged (lattice) or rejected.
-- **Dependent** rewrites follow a **canonical order**.
-
-The full pipeline:
-
-1. Collect inputs for frame N+1.
-2. Bucket by (scope, rule_family).
-3. **Confluence-fold** each bucket (ACI).
-4. Apply remaining rewrites in **lexicographic order**:
-
-```text
-(scope_hash, rule_id, nonce)
-```
-
-5. Emit snapshot & compute commit hash.
-
----
-
-## A Tiny Rewrite, A Tiny Lattice
-
-**Motion rewrite** (scalar view)
-
-> Match: entity with position p, velocity v
->
-> Replace: p′ = p + v·dt (velocity unchanged)
-
-### Cap lattice
-
-> join(Cap(α), Cap(β)) = Cap(max(α, β))
->
-> {Cap(2), Cap(5), Cap(3)} → Cap(5) (order-independent)
-
-These primitives—**rewrites** + **lattices**—are the DNA of Echo’s determinism.
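The motion rewrite above can be sketched as a pure function (the fixed-point integer representation and type names here are assumptions; the document does not show Echo's actual numeric types):

```rust
// Hypothetical scalar view of the motion rewrite:
// match (p, v), replace p' = p + v * dt, velocity unchanged.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Motion {
    p: i64, // fixed-point position
    v: i64, // fixed-point velocity
}

fn motion_rewrite(m: Motion, dt: i64) -> Motion {
    Motion { p: m.p + m.v * dt, v: m.v }
}

fn main() {
    let m = Motion { p: 100, v: 5 };
    let next = motion_rewrite(m, 16);
    assert_eq!(next, Motion { p: 180, v: 5 });
    // Purity: same input, same output. This is the basis for bit-identical replay.
    assert_eq!(motion_rewrite(m, 16), next);
}
```

Integer fixed-point (rather than `f32`/`f64`) is the conventional choice for this kind of cross-platform determinism, since it sidesteps floating-point drift.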
-
----
-
-## Echo vs. the World
-
-| Property | Echo |
-| -------------------------- | -------------------------------------------------- |
-| **Determinism by design** | Same inputs → same outputs (no FP drift, no races) |
-| **Formal semantics** | DPO category theory → provable transitions |
-| **Replay from the future** | Rewind, fork, checkpoint any frame |
-| **Networked lockstep** | Send inputs only; hash verifies sync |
-| **AI training paradise** | Reproducible episodes = debuggable training |
-
-Echo isn’t just another ECS—it’s a **new architectural paradigm**.
-
----
-
-## The Problem: $O(n \log n)$ Was Hurting
-
-The scheduler must execute rewrites in **strict lexicographic order**: (scope_hash (256 bit), rule_id, nonce).
-
-Initial implementation:
-
-```rust
-pub(crate) pending: BTreeMap<(Hash, Hash), PendingRewrite>;
-```
-
-**Bottleneck**: Each insertion into the `BTreeMap` cost $O(\log n)$ comparisons over 256-bit scope hashes, so $n$ insertions cost $O(n \log n)$ per frame; draining via `BTreeMap::drain()` was $O(n)$.
-
-| $n$ | Time |
-| ----- | ----------- |
-| 1,000 | **1.33 ms** |
-| 3,000 | **4.2 ms** |
-
-Curve fit: $T/n ≈ -345 + 272.7 \ln n$ → textbook $O(n \log n)$.
-
----
-
-## The Solution: 20-Pass Radix Sort
-
-Radix sort is **comparison-free** → $O(n)$ for fixed-width keys.
-
-### Design choices
-
-- **LSD** (least-significant digit first)
-- **16-bit digits** (big-endian)
-- **20 passes total**:
- - 2 for nonce (u32)
- - 2 for rule_id (u32)
- - 16 for scope_hash (32 bytes)
-- **Stable** → preserves insertion order for ties
-- **Byte-lexicographic** → identical to BTreeMap
-
-### Architecture
-
-```rust
-struct RewriteThin {
- scope_be32: [u8; 32], // 256-bit scope
- rule_id: u32,
- nonce: u32,
- handle: usize, // index into fat payload vec; usize to avoid truncation
-}
-
-struct PendingTx {
-    thin: Vec<RewriteThin>,
-    fat: Vec<Option<PendingRewrite>>,
-    scratch: Vec<RewriteThin>,
-    counts16: Vec<u32>, // 65,536 buckets = 256 KiB
-}
-```
-
-**Key insight**: Sort **thin keys** (48 bytes) only; gather **fat payloads** once at the end.
-
-### Pass sequence
-
-Each pass: **count → prefix-sum → scatter → flip buffers**.
-
----
-
-## The Disaster: Small-$n$ Regression
-
-Initial radix numbers were _worse_ at low $n$:
-
-| $n$ | BTreeMap | Radix | Regression |
-| ----- | -------- | ---------- | -------------- |
-| 10 | 7.5 µs | **687 µs** | **91× slower** |
-| 100 | 90 µs | **667 µs** | **7× slower** |
-| 1,000 | 1.33 ms | 1.36 ms | marginal |
-
-**Culprit**: counts.fill(0) **20 times** → **5 MiB** of writes _regardless_ of $n$. At $n=10$, sorting cost was dwarfed by memory bandwidth.
-
----
-
-## The Fix: Adaptive Threshold
-
-```rust
-const SMALL_SORT_THRESHOLD: usize = 1024;
-
-if n > 1 {
- if n <= SMALL_SORT_THRESHOLD {
- self.thin.sort_unstable_by(cmp_thin);
- } else {
- self.radix_sort();
- }
-}
-```
-
-**Why 1024?**
-
-- **< 500**: comparison wins (no zeroing).
-- **> 2,000**: radix wins (linear scaling).
-- **1024**: conservative crossover, both ~same cost.
-
----
-
-## The Results: Perfect $O(n)$ Scaling
-
-| $n$ | Old (BTreeMap) | New (Hybrid) | Speedup | ns/rewrite |
-| ------ | -------------- | ------------ | -------- | ---------- |
-| 10 | 7.5 µs | 7.6 µs | -1% | 760 |
-| 100 | 90 µs | 76 µs | **+16%** | 760 |
-| 1,000 | 1.33 ms | **0.75 ms** | **+44%** | 750 |
-| 3,000 | — | 3.03 ms | — | 1,010 |
-| 10,000 | — | 9.74 ms | — | 974 |
-| 30,000 | — | 29.53 ms | — | 984 |
-
-_From 3 k → 30 k (10×) → **9.75×** time → textbook linear._
-
-**60 FPS budget (16.67 ms):**
-
-- $n=1,000$ → **0.75 ms** = **4.5 %** of frame → **plenty of headroom**.
-
-### Phase breakdown ($n=30 k$)
-
-```text
-Total: 37.61 ms (100 %)
-Enqueue: 12.87 ms (34 %) – hash lookups + dedupe
-Drain: 24.83 ms (66 %) – radix + conflict checks + execute
-```
-
-Both phases scale **linearly**.
-
----
-
-## Visualization: The Story in One Glance
-
-Interactive D3 dashboard: `docs/benchmarks/report-inline.html`
-
-- **Log-log plot** with four series (hash, total, enqueue, drain)
-- **Threshold marker** at $n=1024$
-- **Color-coded stat cards** matching the chart
-- **Straight line** from 3 k → 30 k = proof of $O(n)$
-
----
-
-## Lessons Learned
-
-1. **Measure first** – curve fitting exposed $O(n \log n)$ before any code change.
-2. **Benchmarks lie** – a “fast” radix at $n=1,000$ obliterated $n=10$.
-3. **Memory bandwidth > CPU** – 5 MiB of zeroing dominated tiny inputs.
-4. **Hybrid wins** – comparison sort is _faster_ for small $n$.
-5. **Visualize the win** – a straight line on log-log is worth a thousand numbers.
-
----
-
-## What’s Next?
-
-| Idea | Expected Gain |
-| --------------------------------------- | ------------------ |
-| **Active-bucket zeroing** | ~15 % at large $n$ |
-| **Cross-tx scratch pooling** | Reduce alloc churn |
-| **Collapse rule_id to u8** (≤256 rules) | Drop 2 passes |
-
-The scheduler is now **algorithmically optimal** and **constant-factor excellent**.
-
----
-
-## Conclusion: Echoing the Future
-
-Echo’s deterministic scheduler evolved from **$O(n \log n)$** to **$O(n)$** with a **hybrid adaptive radix sort**:
-
-- **44 % faster** at typical game loads ($n=1,000$)
-- **Perfect linear scaling** to **30 k rewrites**
-- **Well under 60 FPS budget**
-- **Zero regressions** at small $n$
-- **Beautiful dashboard** proving the win
-
-Traditional engines treat determinism as an **afterthought**—a feature bolted on with prediction and prayer. Echo treats it as a **mathematical guarantee**, baked into every layer from DPO theory to the scheduler you just read about.
-
-When you can execute **30,000 deterministic rewrites per frame** and still hit **60 FPS**, you’re not just optimizing code—you’re **proving a new kind of game engine is possible**. One where:
-
-- **Multiplayer “just works”** (same pure function → no desync)
-- **Replay is physics** (rewind by recomputing graph history)
-- **AI training is reproducible**
-- **Formal verification** becomes practical
-- **Time-travel debugging** is native
-
-**The graph is a straight line. The future is deterministic. Echo is how we get there.** 🚀
-
----
-
-## Code References
-
-- **Implementation**: crates/warp-core/src/scheduler.rs (see `radix_sort`, `drain_in_order`)
-- **Benchmarks**: crates/warp-benches/benches/scheduler_drain.rs
-- **Dashboard**: `docs/benchmarks/report-inline.html`
-- **PR**: The radix optimization work has been merged to main.
-
----
-
-_Curious? Dive into the Echo docs or join the conversation on [GitHub](https://github.com/flyingrobots/echo)._
diff --git a/docs/archive/notes/scheduler-radix-optimization.md b/docs/archive/notes/scheduler-radix-optimization.md
deleted file mode 100644
index e52b975a..00000000
--- a/docs/archive/notes/scheduler-radix-optimization.md
+++ /dev/null
@@ -1,465 +0,0 @@
-
-
-
-# From $O(n \log n)$ to $O(n)$: Optimizing Echo's Deterministic Scheduler
-
-**Tags:** performance, algorithms, optimization, radix-sort
-
----
-
-## TL;DR
-
-- Early benchmarks demonstrate that **Echo** can run at 60 fps while pushing ~5,000 DPO graph rewrites per frame
-- Big viability question answered
-- "Game scale" activity: confirmed
-
-## What is Echo?
-
-**Echo is a deterministic simulation engine built on graph rewriting theory.** While its applications are broad, it was born from the world of game development, so we'll use "game engine" as our primary lens.
-
-Unlike traditional game engines, which manage state through mutable object hierarchies and event loops, Echo represents the entire simulation state as a typed graph. This graph evolves through **deterministic rewrite rules**—mathematical transformations that guarantee identical results across platforms, replays, and simulations.
-
-At Echo's core is the _**WARP graph**_ (WARP). In Echo, _everything_ is a graph. Nodes are graphs, meaning a "player" is a complex subgraph with its own internal graph structure, not just an object. Edges are graphs, too, and can also have their own internal graphs, allowing expressiveness that carries structure and provenance. And most importantly, rules are graph rewrites. Echo updates the simulation by finding specific patterns in the WARP and replacing them with new ones. Every frame, the WARP is replaced by a new WARP, an _echo_ of the state that came before it.
-
-### Why bother? Aren't game engines a solved problem?
-
-That's a fair question, but it’s aimed at the wrong target. While engines like Unreal and Unity are phenomenal rendering powerhouses and asset pipelines, they are built on an architectural foundation that struggles with the hardest problems in game development: **state management and networking**.
-
-The open secret of multiplayer development is that no two machines in a session ever truly agree on the game's state. What the player experiences is a sophisticated illusion, a constant, high-speed negotiation between **client-side prediction** and **authoritative server corrections**.
-
-I know this because I'm one of the developers who built those illusions. I've written the predictive input systems and complex netcode designed to paper over the cracks. The "rubber-banding" we've all experienced isn't a _bug_—it's an _artifact_. It's the unavoidable symptom of a system where state is **divergent by default**.
-
-This architectural flaw creates a secondary nightmare: **debugging**. When state is mutable, concurrent, and non-deterministic, reproducing a bug becomes a dark art. It's often impossible to look at a game state and know with certainty _how it got that way_. The system is fundamentally non-reproducible.
-
-The state of the art is built on patches, prediction, and arbitration to hide this core problem. The architecture itself is fragile.
-
-Until now.
-
-### Version Control for Reality
-
-One way to understand how Echo works is to imagine the simulation as version control for moments in time. In this mental model, a frame is like an immutable commit. And like a commit each frame has a canonical, cryptographic hash over the entire reachable graph, encoded in a fixed order. Echo treats inputs from players and other game world updates as candidate graph rewrites, and thanks to _confluence_, some category theory math, we can fold them into a single, deterministic effect. Finally, the scheduler applies all rewrites in a deterministic order and produces the next snapshot.
-
-No prediction. No rollback. No "authoritative correction." Just one pure function from `(world, inputs) → world′`.
-
-If two machines disagree, they disagree fast: a hash mismatch at frame `N+1` is a precise alarm, not a rubber‑band later.
-
-### ASCII timeline (deterministic branching and merging)
-
-```text
- Frame₀
- │
- ▼
- Frame₁───┐
- │ \
- ▼ \
- Frame₂A Frame₂B
- │ │
- └────┬────┘
- ▼
- Merge₃ (confluence + canonical rewrite order)
-```
-
-### What Echo Unlocks
-
-This "version control" model isn't just a metaphor; it's a new architecture that unlocks capabilities that look "impossible" in a traditional engine.
-
-It enables **perfect replays**, as every frame is a commit that can be recomputed from its inputs to a bit‑identical state. This, in turn, provides an **infinite debugger**: provenance is embedded directly in the graph, allowing you to query its history to see who changed what, when, and why.
-
-For competitive games, this provides **provable fairness**, as a frame's cryptographic hash is a verifiable signature of "what happened." This all adds up to **zero silent desync**. A hash mismatch catches drift immediately and precisely, long before a user ever notices.
-
-Networking becomes straightforward: distribute inputs, compute the same function, compare hashes. When the math agrees, the world agrees.
-
-## Confluence, Not Arbitration
-
-When multiple updates target related state, we don't race them; we *merge* them with deterministic math. We use **confluence operators** with **lattice** properties:
-
-**Associative**, **Commutative**, **Idempotent** (ACI)
-
-Examples:
-
-- Tags union: `join(TagsA, TagsB) = TagsA ∪ TagsB`
-- Scalar cap: `join(Cap(a), Cap(b)) = Cap(max(a, b))`
-
-Those properties guarantee that folding a bucket of updates yields one result, independent of arrival order and partitioning.
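-
-A minimal Rust sketch of the two examples above (the `Join` trait and type names are illustrative, not Echo's actual API):
-
-```rust
-use std::collections::BTreeSet;
-
-// Illustrative trait for an ACI (associative, commutative, idempotent) join.
-trait Join {
-    fn join(self, other: Self) -> Self;
-}
-
-// Tags union: join is set union.
-#[derive(Clone, PartialEq, Debug)]
-struct Tags(BTreeSet<String>);
-
-impl Join for Tags {
-    fn join(self, other: Self) -> Self {
-        Tags(self.0.union(&other.0).cloned().collect())
-    }
-}
-
-// Scalar cap: join keeps the maximum.
-#[derive(Clone, Copy, PartialEq, Debug)]
-struct Cap(u32);
-
-impl Join for Cap {
-    fn join(self, other: Self) -> Self {
-        Cap(self.0.max(other.0))
-    }
-}
-
-// Folding a bucket of updates: ACI makes the result order-independent.
-fn fold_bucket<T: Join>(updates: Vec<T>) -> Option<T> {
-    updates.into_iter().reduce(|a, b| a.join(b))
-}
-
-fn main() {
-    let a = fold_bucket(vec![Cap(2), Cap(5), Cap(3)]);
-    let b = fold_bucket(vec![Cap(5), Cap(3), Cap(2)]); // different arrival order
-    assert_eq!(a, b); // both Some(Cap(5))
-
-    let t1 = Tags(["red"].map(String::from).into_iter().collect());
-    let t2 = Tags(["blue"].map(String::from).into_iter().collect());
-    assert_eq!(t1.clone().join(t2.clone()), t2.join(t1)); // commutative
-}
-```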
-
-## Safe Parallelism by Construction
-
-Echo implements updates as **DPO (Double Pushout) graph rewrites**. This structure provides safe parallelism by construction: independent rewrites can apply in parallel without issue. Any overlapping rewrites are either deterministically merged by a lattice or rejected as invalid. For any remaining, dependent rewrites, the scheduler enforces a canonical order.
-
-The upshot: "Which rule ran first?" stops being a source of nondeterminism.
-
-A sketch of the full *fold→rewrite→commit* pipeline:
-
-> 1. Collect inputs for frame `N+1`.
-> 2. Bucket by (scope, rule family).
-> 3. Confluence fold each bucket (ACI).
-> 4. Apply remaining rewrites in a canonical order:
->
->    ```text
->    order by (scope_hash, family, compact_rule_id, payload_digest).
->    (Early convention — current drain key: scope, rule_id, nonce)
->    ```
->
-> 5. Emit a new snapshot and compute commit hash.
-
-## A Tiny Rewrite, A Tiny Lattice
-
-Rewrite (motion) in Scalar terms:
-
-> Match: an entity with position p and velocity v
-> Replace: position p′ = p + v·dt; velocity unchanged
-
-Lattice example (cap / max):
-
-> join(Cap(α), Cap(β)) = Cap(max(α, β))
-> ACI → the fold of {Cap(2), Cap(5), Cap(3)} is Cap(5) regardless of order.
-
-These primitives, **rewrites** and **lattices**, are the heart of Echo's "determinism by construction."
-
-**What makes Echo different:**
-
-- **Determinism by design**: Same inputs → same outputs, always. No floating-point drift, no race conditions, no "it works on my machine."
-- **Formal semantics**: Built on Double Pushout (DPO) category theory—every state transition is mathematically provable.
-- **Replay from the future**: Rewind time, fork timelines, or replay from any checkpoint. Your game is a pure function.
-- **Networked lockstep**: Perfect synchronization without sending world state. Just send inputs; all clients compute identical results.
-- **AI training paradise**: Deterministic = reproducible = debuggable. Train agents with confidence.
-
-Echo isn't just another ECS—it's a **fundamentally different way to build games**, where the scheduler isn't just an implementation detail, it's the guarantee of determinism itself.
-
----
-
-## The Problem: $O(n \log n)$ Was Showing
-
-Echo's deterministic scheduler needs to execute rewrites in strict lexicographic order: `(scope_hash, rule_id, nonce)`. This ensures identical results across platforms and replays—critical for a deterministic game engine.
-
-Our initial implementation used a `BTreeMap<(Hash, Hash), PendingRewrite>`:
-
-```rust
-// Old approach
-pub(crate) pending: BTreeMap<(Hash, Hash), PendingRewrite>
-```
-
-**The bottleneck:** Each `BTreeMap` insertion cost $O(\log n)$ comparisons over 256-bit scope hashes — $O(n \log n)$ in total to build the map. Draining via `BTreeMap::drain()` was $O(n)$. The radix sort optimization eliminates the insertion bottleneck. Benchmarks showed:
-
-```text
-n=1000: ~1.33ms (comparison sort via BTreeMap iteration)
-n=3000: ~4.2ms (log factor starting to hurt)
-```
-
-Curve fitting confirmed **T/n ≈ -345 + 272.7·ln(n)**—textbook $O(n \log n)$.
-
----
-
-## The Solution: 20-Pass Radix Sort
-
-Radix sort achieves **$O(n)$** complexity with zero comparisons by treating keys as sequences of digits. We implemented:
-
-- **LSD radix sort** with 16-bit big-endian digits
-- **20 passes total**: 2 for nonce, 2 for rule_id, 16 for full 32-byte scope hash
-- **Stable sorting** preserves insertion order for tie-breaking
-- **Byte-lexicographic ordering** exactly matches BTreeMap semantics
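-
-The pass-to-digit mapping can be sketched as follows (a simplified illustration, not the exact `warp-core` code; the struct mirrors the one shown under "The Architecture"):
-
-```rust
-// Illustrative sketch: map LSD pass index 0..20 to a 16-bit digit.
-// Passes run least-significant first: nonce, then rule_id, then the
-// 32-byte scope hash in big-endian byte pairs from tail to head.
-#[allow(dead_code)]
-struct RewriteThin {
-    scope_be32: [u8; 32],
-    rule_id: u32,
-    nonce: u32,
-    handle: usize,
-}
-
-fn digit16(t: &RewriteThin, pass: usize) -> u16 {
-    match pass {
-        // Passes 0-1: nonce, low 16 bits first.
-        0 => (t.nonce & 0xFFFF) as u16,
-        1 => (t.nonce >> 16) as u16,
-        // Passes 2-3: rule_id.
-        2 => (t.rule_id & 0xFFFF) as u16,
-        3 => (t.rule_id >> 16) as u16,
-        // Passes 4-19: scope hash, byte pairs from bytes 30..32 up to 0..2.
-        p @ 4..=19 => {
-            let off = 30 - 2 * (p - 4);
-            u16::from_be_bytes([t.scope_be32[off], t.scope_be32[off + 1]])
-        }
-        _ => unreachable!("only 20 passes"),
-    }
-}
-
-fn main() {
-    let t = RewriteThin {
-        scope_be32: core::array::from_fn(|i| i as u8),
-        rule_id: 0x0003_0004,
-        nonce: 0x0001_0002,
-        handle: 0,
-    };
-    assert_eq!(digit16(&t, 0), 2); // nonce low bits sort first (LSD)
-    assert_eq!(digit16(&t, 4), u16::from_be_bytes([30, 31])); // scope tail
-}
-```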
-
-### The Architecture
-
-```rust
-struct RewriteThin {
-    scope_be32: [u8; 32], // Full 256-bit scope
-    rule_id: u32,         // Compact rule handle
-    nonce: u32,           // Insertion-order tie-break
-    handle: usize,        // Index into fat payload vec
-}
-
-struct PendingTx {
-    thin: Vec<RewriteThin>,           // Sorted keys
-    fat: Vec<Option<PendingRewrite>>, // Payloads (indexed by handle)
-    scratch: Vec<RewriteThin>,        // Reused scratch buffer
-    counts16: Vec<u32>,               // 256 KB histogram (65,536 buckets)
-}
-```
-
-**Key insight:** Separate "thin" sorting keys from "fat" payloads. Only the 48-byte thin records move during radix passes; payloads are gathered once at the end.
-
-```mermaid
-graph LR
- subgraph "Thin Keys (sorted)"
- T1[RewriteThin handle=0]
- T2[RewriteThin handle=2]
- T3[RewriteThin handle=1]
- end
-
- subgraph "Fat Payloads (indexed)"
- F0[PendingRewrite]
- F1[PendingRewrite]
- F2[PendingRewrite]
- end
-
- T1 -->|handle=0| F0
- T2 -->|handle=2| F2
- T3 -->|handle=1| F1
-
- style T1 fill:#e0af68
- style T2 fill:#e0af68
- style T3 fill:#e0af68
- style F0 fill:#9ece6a
- style F1 fill:#9ece6a
- style F2 fill:#9ece6a
-```
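-
-The final gather step can be sketched like this (simplified types; a generic `P` stands in for `PendingRewrite`):
-
-```rust
-// Simplified sketch of the final gather: thin records are sorted, fat
-// payloads still sit in insertion order; walk the sorted thins and take
-// each payload exactly once via its handle.
-struct Thin {
-    handle: usize, // plus the sort-key fields, omitted here
-}
-
-fn drain_sorted<P>(thin: &[Thin], fat: &mut [Option<P>]) -> Vec<P> {
-    thin.iter()
-        .map(|t| fat[t.handle].take().expect("each handle drained once"))
-        .collect()
-}
-
-fn main() {
-    // Thins already sorted; handles index the insertion-ordered fat vec.
-    let thin = vec![Thin { handle: 2 }, Thin { handle: 0 }, Thin { handle: 1 }];
-    let mut fat = vec![Some("a"), Some("b"), Some("c")];
-    assert_eq!(drain_sorted(&thin, &mut fat), vec!["c", "a", "b"]);
-}
-```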
-
-### Radix Sort Pass Sequence
-
-The 20-pass LSD radix sort processes digits from least significant to most significant:
-
-```mermaid
-graph TD
- Start[Input: n rewrites] --> P1[Pass 1-2: nonce low→high]
- P1 --> P2[Pass 3-4: rule_id low→high]
- P2 --> P3[Pass 5-20: scope_hash bytes 31→0]
- P3 --> Done[Output: sorted by scope,rule,nonce]
-
- style Start fill:#bb9af7
- style Done fill:#9ece6a
- style P1 fill:#e0af68
- style P2 fill:#e0af68
- style P3 fill:#ff9e64
-```
-
-Each pass:
-
-1. **Count** — histogram of 65536 16-bit buckets
-2. **Prefix sum** — compute output positions
-3. **Scatter** — stable placement into scratch buffer
-4. **Flip** — swap `thin ↔ scratch` for next pass
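-
-The four steps above amount to one stable counting-sort pass per 16-bit digit. A sketch on bare `u32` keys (the real code reuses a 65,536-entry counts buffer and flips `thin ↔ scratch` instead of allocating):
-
-```rust
-// One LSD pass: count, exclusive prefix sum, stable scatter.
-fn radix_pass(input: &[u32], digit: impl Fn(&u32) -> u16) -> Vec<u32> {
-    let mut counts = vec![0usize; 1 << 16];
-    // 1. Count occurrences of each digit.
-    for x in input {
-        counts[digit(x) as usize] += 1;
-    }
-    // 2. Exclusive prefix sum: counts[d] becomes the first output index for d.
-    let mut sum = 0;
-    for c in counts.iter_mut() {
-        let n = *c;
-        *c = sum;
-        sum += n;
-    }
-    // 3. Stable scatter: equal digits keep their relative order.
-    let mut out = vec![0u32; input.len()];
-    for x in input {
-        let d = digit(x) as usize;
-        out[counts[d]] = *x;
-        counts[d] += 1;
-    }
-    out
-}
-
-fn main() {
-    // Two passes (low then high 16 bits) fully sort u32 keys.
-    let v = vec![0x0002_0001, 0x0001_0002, 0x0001_0001];
-    let lo = radix_pass(&v, |x| (*x & 0xFFFF) as u16);
-    let hi = radix_pass(&lo, |x| (*x >> 16) as u16);
-    assert_eq!(hi, vec![0x0001_0001, 0x0001_0002, 0x0002_0001]);
-}
-```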
-
----
-
-## The Disaster: Small-n Regression
-
-Initial results were not encouraging:
-
-```text
-BEFORE (BTreeMap): AFTER (Radix):
-n=10: 7.5µs n=10: 687µs (91x SLOWER!)
-n=100: 90µs n=100: 667µs (7x SLOWER!)
-n=1000: 1.33ms n=1000: 1.36ms (marginal)
-```
-
-
-_The benchmark graph tells the story: that flat green line at low n is 5MB of zeroing overhead dominating tiny inputs._
-
-**What went wrong?** The radix implementation zeroed a **256KB counts array 20 times per drain**:
-
-```rust
-counts.fill(0); // 65,536 × u32 = 256KB
-// × 20 passes = 5MB of writes for ANY input size
-```
-
-At n=10, we were pushing **5MB of memory traffic** to sort **10 tiny records**—a massive fixed cost dominating small inputs.
-
----
-
-## The Fix: Adaptive Threshold
-
-The solution: **use the right tool for the job.**
-
-```mermaid
-graph TD
- Start[n rewrites to drain] --> Check{n ≤ 1024?}
- Check -->|Yes| Comp[Comparison Sort O n log n Low constant]
- Check -->|No| Radix[Radix Sort O n High constant]
- Comp --> Done[Sorted output]
- Radix --> Done
-
- style Start fill:#bb9af7
- style Comp fill:#e0af68
- style Radix fill:#9ece6a
- style Done fill:#bb9af7
- style Check fill:#ff9e64
-```
-
-```rust
-const SMALL_SORT_THRESHOLD: usize = 1024;
-
-fn drain_in_order(&mut self) -> Vec<PendingRewrite> {
- let n = self.thin.len();
- if n > 1 {
- if n <= SMALL_SORT_THRESHOLD {
- // Fast path: comparison sort for small batches
- self.thin.sort_unstable_by(cmp_thin);
- } else {
- // Scalable path: radix for large batches
- self.radix_sort();
- }
- }
- // ... drain logic
-}
-
-
-fn cmp_thin(a: &RewriteThin, b: &RewriteThin) -> Ordering {
- a.scope_be32.cmp(&b.scope_be32)
- .then_with(|| a.rule_id.cmp(&b.rule_id))
- .then_with(|| a.nonce.cmp(&b.nonce))
-}
-```
-
-**Why 1024?** Empirical testing showed:
-
-- Below ~500: comparison sort wins (no zeroing overhead)
-- Above ~2000: radix sort wins ($O(n)$ scales)
-- **1024: conservative sweet spot** where both approaches perform similarly
-
-
-_The fix: adaptive threshold keeps small inputs fast while unlocking $O(n)$ scaling at large $n$._
-
----
-
-## The Results: Perfect $O(n)$ Scaling
-
-Final benchmark results across 6 data points (10, 100, 1k, 3k, 10k, 30k):
-
-| Input n | Old (BTreeMap) | New (Hybrid) | Speedup | Per-element |
-| ------- | -------------- | ------------ | -------- | ----------- |
-| 10 | 7.5µs | 7.6µs | -1% | 760ns |
-| 100 | 90µs | 76µs | +16% | 760ns |
-| 1,000 | 1.33ms | 0.75ms | **+44%** | 750ns |
-| 3,000 | — | 3.03ms | — | 1010ns |
-| 10,000 | — | 9.74ms | — | 974ns |
-| 30,000 | — | 29.53ms | — | 984ns |
-
-
-_The complete picture: purple (snapshot hash), green (scheduler total), yellow (enqueue), red (drain). Note the threshold marker at $n=1024$ and the perfectly straight lines beyond it._
-
-**Key observations:**
-
-1. **Comparison sort regime ($n ≤ 1024$):** ~750ns/element, competitive with old approach
-2. **Radix sort regime ($n > 1024$):** Converges to ~1µs/element with minimal deviation
-3. **Scaling from 3k → 30k (10× data):** 9.75× time—textbook $O(n)$
-4. **60 FPS viability:** At $n=1000$ (typical game scene), scheduler overhead is just **0.75ms = 4.5% of 16.67ms frame budget**
-
-### Phase Breakdown
-
-Breaking down enqueue vs drain at $n=30k$:
-
-```text
-Total: 37.61ms (100%)
-Enqueue: 12.87ms (34%) — Hash lookups + last-wins dedupe
-Drain: 24.83ms (66%) — Radix sort + conflict checks + execute
-```
-
-```mermaid
-%%{init: {'theme':'dark'}}%%
-pie title Scheduler Time Breakdown at n=30k
- "Enqueue (hash + dedupe)" : 34
- "Drain (radix + conflicts)" : 66
-```
-
-The drain phase dominates, but both scale linearly. Future optimizations could target the radix sort overhead (active-bucket zeroing, cross-transaction pooling), but the current approach achieves our performance targets.
-
----
-
-## The Visualization: Telling the Story
-
-We built an interactive D3 dashboard (`docs/benchmarks/report-inline.html`) showing:
-
-- **Four series on log-log plot:**
- - Purple (solid): Snapshot Hash baseline
- - Green (solid): Scheduler Drain Total
- - Yellow (dashed): Enqueue phase
- - Red (dashed): Drain phase
-
-- **Threshold marker at $n=1024$** showing where the sorting strategy switches
-
-- **2×2 color-coded stat cards** matching chart colors for instant visual connection
-
-- **Explanatory context:** What we measure, why 60 FPS matters, how $O(n)$ scaling works
-
-**The key visual:** A straight line on the log-log plot from 3k to 30k—proof of perfect linear scaling.
-
----
-
-## Lessons Learned
-
-### 1. **Measure First, Optimize Second**
-
-Curve fitting (`T/n ≈ 272.7·ln(n)`) confirmed the $O(n \log n)$ bottleneck before we touched code.
-
-### 2. **Don't Optimize for Benchmarks Alone**
-
-The initial radix implementation looked good at $n=1000$ but destroyed small-batch performance. Real workloads include both.
-
-### 3. **Memory Bandwidth Matters**
-
-Writing 5MB of zeros (256KB × 20 passes) costs more than the actual sorting work at small $n$. The "flat line" in benchmarks was the smoking gun.
-
-### 4. **Hybrid Approaches Win**
-
-Comparison sort isn't "slow"—it's just $O(n \log n)$. For small $n$, it's faster than **any** $O(n)$ algorithm with high constants.
-
-### 5. **Visualize the Win**
-
-A good chart tells the story instantly. Our dashboard shows the threshold switch, phase breakdown, and perfect scaling at a glance.
-
----
-
-## What's Next?
-
-Future optimizations:
-
-1. **Active-bucket zeroing**: Only zero counts buckets actually used (saves ~15% at large $n$)
-2. **Cross-transaction pooling**: Share scratch buffers across transactions via arena allocator
-3. **Rule-domain optimization**: If we have <256 rules, collapse `rule_id` to single-byte direct indexing (saves 2 passes)
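-
-Item 1 could take a shape like this (an illustrative sketch, not the planned implementation): track which buckets a pass touched and clear only those, instead of `counts.fill(0)` over all 65,536 entries.
-
-```rust
-// Sketch of active-bucket zeroing: remember first-touched buckets so
-// cleanup cost scales with distinct digits seen, not histogram size.
-fn count_with_touched(digits: &[u16], counts: &mut [u32], touched: &mut Vec<u16>) {
-    for &d in digits {
-        if counts[d as usize] == 0 {
-            touched.push(d); // first hit: remember for cheap cleanup
-        }
-        counts[d as usize] += 1;
-    }
-}
-
-fn clear_touched(counts: &mut [u32], touched: &mut Vec<u16>) {
-    for d in touched.drain(..) {
-        counts[d as usize] = 0;
-    }
-}
-
-fn main() {
-    let mut counts = vec![0u32; 1 << 16];
-    let mut touched = Vec::new();
-    count_with_touched(&[7, 7, 42], &mut counts, &mut touched);
-    assert_eq!(counts[7], 2);
-    clear_touched(&mut counts, &mut touched);
-    assert!(counts.iter().all(|&c| c == 0)); // zeroed only 2 buckets
-}
-```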
-
-The scheduler is algorithmically optimal, scales to 30k rewrites in <30ms, and the constants are excellent.
-
----
-
-## Conclusion: Echoing the Future
-
-Echo's deterministic scheduler went from $O(n \log n)$ BTreeMap to $O(n)$ hybrid adaptive sorter:
-
-- ✅ **44% faster at typical workloads ($n=1000$)**
-- ✅ **Perfect linear scaling to 30k rewrites**
-- ✅ **Well under 60 FPS budget**
-- ✅ **Zero regressions at small n**
-- ✅ **Beautiful visualization proving the win**
-
-The textbook said "radix sort is $O(n)$." The benchmarks said "prove it." **The graph is a straight line.**
-
-But here's the deeper point: **This optimization matters because Echo is building something fundamentally new.**
-
-Traditional game engines treat determinism as an afterthought—a nice-to-have feature bolted on through careful engineering and hope. Echo treats it as a **mathematical guarantee**, woven into every layer from category theory foundations to the scheduler you're reading about right now.
-
-When you can execute 30,000 deterministic rewrite rules per frame and still hit 60 FPS, you're not just optimizing a scheduler—you're **proving that a different kind of game engine is possible.** One where:
-
-- **Multiplayer "just works"** because clients can't desync (they're running the same pure function)
-- **Replay isn't a feature**, it's physics (rewind time by replaying the graph rewrite history)
-- **AI training scales** because every training episode is perfectly reproducible
-- **Formal verification** becomes practical (prove your game logic correct, not just test it)
-- **Time travel debugging** isn't science fiction (checkpoint the graph, fork timelines, compare outcomes)
-
-Echo isn't just a faster game engine. **Echo is a different game engine.** One built on the mathematical foundation that traditional engines lack. One where the scheduler's deterministic ordering isn't a nice property—it's the **fundamental guarantee** that makes everything else possible.
-
-This optimization journey—from spotting the $O(n log n)$ bottleneck to proving $O(n)$ scaling with a hybrid radix sorter—is what it takes to make that vision real. To make determinism **fast enough** that developers don't have to choose between correctness and performance.
-
-The graph is a straight line. The future is deterministic. **And Echo is how we get there.** 🚀
-
----
-
-> **Note:** Code references below reflect state at time of writing and may be
-> stale. Paths and line numbers have likely changed since this document was
-> authored. Use repo search (`rg`) to locate current implementations.
-
-## Code References
-
-- Implementation: `crates/warp-core/src/scheduler.rs` (see `fn radix_sort` near line 338) _(line numbers may have shifted)_
-- Benchmarks: `crates/warp-benches/benches/scheduler_drain.rs`
-- Dashboard: `docs/benchmarks/report-inline.html`
-- The radix optimization work has been merged to main.
-
----
-
-_Want to learn more? Check out the [Echo documentation](/meta/docs-index) or join the discussion on [GitHub](https://github.com/flyingrobots/echo)._
diff --git a/docs/archive/notes/xtask-wizard.md b/docs/archive/notes/xtask-wizard.md
deleted file mode 100644
index 2bae3f0e..00000000
--- a/docs/archive/notes/xtask-wizard.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-
-# xtask "workday wizard" — concept note
-
-Goal: a human-friendly `cargo xtask` (or `just`/`make` alias) that walks a contributor through starting and ending a work session, with automation hooks for branches, PRs, issues, and planning.
-
-## Core flow
-
-### Start session
-
-- Prompt for intent/issue: pick from open GitHub issues (via gh CLI) or free text.
-- Branch helper: suggest branch name (`echo/-`), create and checkout if approved.
-- Env checks: toolchain match, hooks installed (`make hooks`), `cargo fmt -- --check`/`clippy` optional preflight.
-
-### During session
-
-- Task DAG helper: load tasks from issue body / local `tasks.yaml`; compute simple priority/topo order (dependencies, P1/P0 tags).
-- Bench/test shortcuts: menu to run common commands (clippy, cargo test -p warp-core, bench targets).
-- Docs guard assist: if runtime code touched, remind to update relevant specs/ADRs.
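-
-The priority/topo helper could be as simple as Kahn-style ordering with a priority tie-break (a sketch under assumed shapes — task names, a `depends_on` map, and numeric priorities where 0 = P0 are all illustrative):
-
-```rust
-use std::collections::BTreeMap;
-
-// Sketch of the tasks.yaml helper: emit tasks in dependency order,
-// breaking ties among ready tasks by priority (lower = more urgent).
-fn topo_order(deps: &BTreeMap<&str, Vec<&str>>, prio: &BTreeMap<&str, u8>) -> Vec<String> {
-    let mut remaining: BTreeMap<&str, Vec<&str>> =
-        deps.iter().map(|(&t, d)| (t, d.clone())).collect();
-    let mut out: Vec<String> = Vec::new();
-    while !remaining.is_empty() {
-        // Ready = every dependency already emitted; pick best priority.
-        let next = *remaining
-            .iter()
-            .filter(|(_, d)| d.iter().all(|x| out.contains(&x.to_string())))
-            .map(|(t, _)| t)
-            .min_by_key(|t| prio.get(*t).copied().unwrap_or(u8::MAX))
-            .expect("cycle in task graph");
-        remaining.remove(next);
-        out.push(next.to_string());
-    }
-    out
-}
-
-fn main() {
-    let deps: BTreeMap<_, _> =
-        [("ship", vec!["test"]), ("test", vec!["build"]), ("build", vec![]), ("docs", vec![])]
-            .into_iter()
-            .collect();
-    let prio: BTreeMap<_, _> =
-        [("ship", 0), ("test", 0), ("build", 0), ("docs", 1)].into_iter().collect();
-    // docs is ready immediately but P1, so the P0 chain goes first.
-    assert_eq!(topo_order(&deps, &prio), vec!["build", "test", "ship", "docs"]);
-}
-```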
-
-### End session
-
-- Summarize changes: gather `git status`, staged/untracked hints.
-- PR prep: prompt for PR title/body template (with issue closing keywords); optionally run `git commit` and `gh pr create`.
-- Issue hygiene: assign milestone/board/labels via gh CLI; auto-link PR to issue.
-
-## Nice-to-haves
-
-- Determinism check shortcut: run twin-engine sandbox determinism A/B (radix vs legacy) and summarize.
-- Planner math: simple critical path/priority scoring across tasks.yaml; suggest next task when current is blocked.
-- Cache hints: detect heavy commands run recently, skip/confirm rerun.
-- Telemetry: write a small JSON session record for later blog/mining (start/end time, commands run, tests status).
-
-## Tech sketch
-
-- Implement under `xtask` crate in workspace; expose `cargo xtask wizard`.
-- Use `dialoguer`/`inquire` for prompts; `serde_yaml/json` for tasks; `gh` CLI for GitHub ops (fallback to no-op if missing).
-- Config file (`.echo/xtask.toml`) for defaults (branch prefix, issue labels, PR template path).
-
-## Open questions
-
-- How much is automated vs. suggested (avoid surprising commits)?
-- Should Docs Guard be enforced via wizard or still via hooks?
-- Where to store per-session summaries (keep in git or external log)?
-
-## Next steps
-
-- Prototype a minimal “start session” + “end session” flow with `gh` optional.
-- Add a `tasks.yaml` example and priority/topo helper.
-- Wire into make/just: `make wizard` → `cargo xtask wizard`.
diff --git a/docs/archive/phase1-plan.md b/docs/archive/phase1-plan.md
deleted file mode 100644
index 879c1a8c..00000000
--- a/docs/archive/phase1-plan.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
-
-# Phase 1 – Core Ignition Plan
-
-Goal: deliver a deterministic Rust implementation of WARP powering the Echo runtime, with tangible demos at each milestone. This plan outlines task chains, dependencies, and expected demonstrations.
-
-Status (2025-12-30):
-
-- 1A (bootstrap) and 1B (rewrite executor spike) are effectively landed in `main` via `warp-core` (B0/B1: two-plane attachments + WarpInstances).
-- The next “engine-facing” milestone is 1C (Rhai/TS bindings) and the next “tooling-facing” milestone is completing the WARP View Protocol demo path (`docs/tasks.md`).
-
----
-
-## Task Graph
-
-```mermaid
-graph TD
- A[1A · WARP Core Bootstrap]
- B[1B · Rewrite Executor Spike]
- C[1C · Rhai/TS Bindings]
- D[1D · Echo ECS on WARP]
- E[1E · Networking & Confluence MVP]
- F[1F · Tooling Integration]
-
- A --> B --> C --> D --> E --> F
- B --> DemoToy
- D --> DemoNetcode
- E --> DemoTimeTravel
- F --> DemoLiveCoding
-
- subgraph Demos
- DemoToy[Demo 2 · Toy Rewrite Benchmark]
- DemoNetcode[Demo 1 · Deterministic Netcode]
- DemoTimeTravel[Demo 5 · Time Travel Merge]
- DemoLiveCoding[Demo 6 · Rhai Live Coding]
- end
-```
-
----
-
-## Phases & Tangible Outcomes
-
-### 1A · WARP Core Bootstrap
-
-- Tasks
- - Scaffold crates (`warp-core`, `warp-wasm`, `warp-cli`).
- - Implement GraphStore primitives, hash utilities, scheduler skeleton.
- - CI: `cargo fmt/clippy/test` baseline.
-- Demonstration: _None_ (foundation only).
-
-### 1B · Rewrite Executor Spike
-
-- Tasks
- - Implement motion rule test (Position + Velocity rewrite).
- - Execute deterministic ordering + snapshot hashing.
- - Add minimal diff/commit log entries.
-- Demonstration: **Demo 2 · Toy Benchmark**
- - 100 nodes, 10 rules, property tests showing stable hashes.
-
-### 1C · Rhai/TS Bindings
-
-- Tasks
- - Embed Rhai with deterministic sandbox + host modules.
- - Build WASM bindings for tooling.
- - Port inspector CLI to use snapshots.
-- Demonstration: Rhai script triggers rewrite; inspector shows matching snapshot hash.
-
-### 1D · Echo ECS on WARP
-
-- Tasks
- - Map existing ECS system set onto rewrite rules.
- - Replace Codex’s Baby event queue with rewrite intents.
- - Emit frame hash HUD.
-- Demonstration: **Demo 1 · Deterministic Netcode**
- - Two instances, identical inputs, frame hash displayed per tick.
-
-### 1E · Networking & Confluence MVP
-
-- Tasks
- - Implement rewrite transaction packets; replay on peers.
- - Converge canonical snapshots; handle conflicts deterministically.
- - Integrate rollback path (branch rewind, replay log).
-- Demonstration: **Demo 5 · Time Travel**
- - Fork, edit, merge branch; show canonical outcome.
-
-### 1F · Tooling Integration
-
-- Tasks
- - Echo Studio (TS + WASM) graph viewer with live updates.
- - Entropy lens, paradox heatmap overlays.
- - Rhai live coding pipeline (hot reload).
-- Demonstrations:
- - **Demo 3 · Real Benchmark** (1k nodes, 100 rules).
- - **Demo 6 · Live Coding** (Rhai edit updates live graph).
-
----
-
-## Performance / Benchmark Milestones
-
-| Milestone | Target | Notes |
-| ------------------ | --------------------------------------------- | --------------------- |
-| Toy Benchmark | 100 nodes / 10 rules / 200 iterations < 1ms | Demo 2 |
-| Real Demo | 1,000 nodes / 100 rules < 10ms rewrite checks | Demo 3 |
-| Production Stretch | 10,000 nodes / 1000 rules (profiling only) | Phase 2 optimizations |
-
-Optimization roadmap once baseline is working:
-
-1. Incremental pattern matching.
-2. Spatial indexing.
-3. SIMD bitmap operations.
-4. Critical pair analysis for confluence proofs.
-
----
-
-## Networking Demo Targets
-
-| Mode | Deliverable |
-| --------- | --------------------------------------------------------------- |
-| Lockstep | Replay identical inputs; frame hash equality per tick. |
-| Rollback | Predictive input with rollback on mismatch. |
-| Authority | Host selects canonical branch; entropy auditor rejects paradox. |
-
----
-
-## Documentation Checklist
-
-- Update `docs/warp-runtime-architecture.md` as rules/loop evolve.
-
-Phase 1 completes when Demo 6 (Live Coding) runs atop the Rust WARP runtime with inspector tooling in place, using Rhai as the scripting layer.
diff --git a/docs/archive/plans/BOAW-tech-debt.md b/docs/archive/plans/BOAW-tech-debt.md
deleted file mode 100644
index d348b46a..00000000
--- a/docs/archive/plans/BOAW-tech-debt.md
+++ /dev/null
@@ -1,315 +0,0 @@
-
-
-
-
-
-# BOAW Roadmap: Phase 6B → Phase 9
-
-**Created:** 2026-01-20
-**Status:** AWAITING APPROVAL
-**Context:** Post-Phase 6B integration — cleanup, guardrails, and planning
-
----
-
-## Classification Rubric
-
-| If it... | Then it's... |
-| ------------------------------------ | -------------------------------------- |
-| Unblocks a phase | **Roadmap** (Tiers 1-3) |
-| Reduces risk or prevents regressions | **Guardrail** (Tier 0.5) |
-| Improves performance | **Perf Gate** (only after measurement) |
-| Is unused code | **Delete immediately** (Tier 0) |
-
----
-
-## Tier 0: Cleanup (Today)
-
-_Dead code and doc drift. Do immediately after merge._
-
-### 0.1 Delete `emit_view_op_delta()`
-
-| Field | Value |
-| -------------- | --------------------------------------------- |
-| **Location** | `crates/echo-dind-tests/src/rules.rs:600-648` |
-| **Call Sites** | 0 |
-| **Risk** | None |
-
-**Why:** Deprecated function using non-deterministic `delta.len()` sequencing.
-Replaced by `emit_view_op_delta_scoped()`. Keeping it risks copy-paste of broken pattern.
-
-### 0.2 Delete `execute_parallel_stride()` + Feature Gate
-
-| Field | Value |
-| -------------- | ------------------------------------------- |
-| **Location** | `crates/warp-core/src/boaw/exec.rs:176-207` |
-| **Call Sites** | 3 (1 conditional, 2 Phase 6A tests) |
-| **Risk** | Low |
-
-**Why:** Phase 6A stride execution superseded by Phase 6B sharded execution.
-Feature-gated behind `parallel-stride-fallback`. Adds maintenance burden.
-
-**Steps:**
-
-1. Delete Phase 6A equivalence tests (`boaw_parallel_exec.rs:286-365`)
-2. Remove stride fallback conditional (`exec.rs:67-83`)
-3. Delete `execute_parallel_stride()` function
-4. Remove `parallel-stride-fallback` feature from `Cargo.toml`
-
-### 0.3 Doc Accuracy Pass
-
-Verify these are still accurate post-merge:
-
-
-
-- [ ] `TECH-DEBT-BOAW.md` — mark Phase 6B items complete _(not completed before archival)_
-- [ ] `ADR-0007-BOAW-Storage.md` — phase status markers _(not completed before archival)_
-- [ ] `CHANGELOG.md` — PR #257 merge recorded _(not completed before archival)_
-
----
-
-## Tier 0.5: Correctness Guardrails (This Week)
-
-_Tests we can land now + baseline measurements. Reduces future regression risk._
-
-### 0.5.1 Activate Passing Tests
-
-Some `#[ignore]` tests may now pass after Phase 6B. Audit and activate:
-
-| Test File | Check For |
-| --------------------- | --------------------------------------------- |
-| `boaw_determinism.rs` | Any tests that only needed parallel execution |
-| `boaw_end_to_end.rs` | Full integration tests |
-| `boaw_footprints.rs` | T3.1 already passes; verify others |
-
-### 0.5.2 WarpOpKey Invariant Test
-
-Verify `WarpOpKey` ordering is stable and exercised:
-
-- Canonical sort order matches spec
-- No collisions under realistic workloads
-- Public API (`sort_key()`) works for external verification
-
-### 0.5.3 Initial Benchmark Baseline
-
-**Purpose:** Prove parallelism delivers measurable wins. Capture baseline so future
-phases don't accidentally regress performance.
-
-**Scope:** Minimal, not a full optimization suite.
-
-| Benchmark | What It Measures |
-| ------------------------- | ---------------------------------- |
-| `parallel_vs_serial_10` | 10 rewrites: parallel speedup |
-| `parallel_vs_serial_100` | 100 rewrites: parallel speedup |
-| `parallel_vs_serial_1000` | 1000 rewrites: parallel speedup |
-| `shard_distribution` | Are rewrites spread across shards? |
-
-**Location:** `benches/boaw_baseline.rs` (new file)
-
-**Success Criteria:**
-
-- Parallel ≥ serial for n ≥ 100 (no regression)
-- Document baseline numbers in `docs/notes/boaw-perf-baseline.md`
-
----
-
-## Tier 1: Phase 7 — Forking
-
-_Multi-parent commits and prerequisites. ~2-3 weeks._
-
-### Prerequisites (Enable Forking)
-
-| Component | Tests Unblocked | Notes |
-| -------------------------------- | --------------- | ----------------------------------------------------------------------- |
-| **OpenPortal scheduling (T7.1)** | 4 | Scheduler tracks new warps; enforces "no same-tick writes to new warps" |
-| **DeltaView** | 6 | Overlay + base resolution during execution |
-| ~~**FootprintGuard**~~ | 3 | ✅ Done (44aebb0d8f7b, 0d0231b55761) |
-| **SnapshotBuilder wiring** | 1 | Connect builder to test harness |
-
-### Core Forking Work
-
-| Component | Description |
-| ----------------------------- | -------------------------------- |
-| Multi-parent commit structure | Commit can have 0..n parents |
-| Worldline DAG | Track branch/merge topology |
-| Parent addressing | Reference parents by commit hash |
-
-### Tests Unblocked: 14
-
-```text
-boaw_openportal_rules.rs — 4 tests (T7.1)
-boaw_cow.rs — 6 tests (DeltaView)
-boaw_footprints.rs — 3 tests (FootprintGuard)
-boaw_determinism.rs — 1 test (SnapshotBuilder)
-```
-
----
-
-## Tier 2: Phase 8 — Collapse/Merge
-
-_Deterministic multi-parent reconciliation. ~2-3 weeks. Requires Phase 7._
-
-### Merge Components
-
-| Component | Description |
-| ----------------------------- | ------------------------------------------------ |
-| **Typed merge registry** | Per-type: Sensitivity, MergeBehavior, Disclosure |
-| **Merge regimes** | Commutative (CRDT), LWW, ConflictOnly |
-| **Conflict artifacts** | Deterministic, contains only hashes (no secrets) |
-| **Canonical parent ordering** | Sort by `commit_hash` for order-dependent merges |
-| **Presence policies** | delete-wins (default), add-wins, LWW |
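-
-One possible shape for the regime dispatch (names and types are illustrative, not the planned API; real conflict artifacts would carry hashes, not values):
-
-```rust
-// Illustrative sketch of merge regimes: commutative joins ignore parent
-// order, LWW relies on canonical (commit-hash-sorted) parent order, and
-// ConflictOnly surfaces a deterministic conflict artifact.
-#[derive(Debug, PartialEq)]
-enum Merged<T> {
-    Value(T),
-    Conflict { left: T, right: T }, // real artifact: hashes only, no secrets
-}
-
-enum MergeBehavior {
-    Commutative,  // e.g. CRDT-style max/union
-    Lww,          // last writer wins, in canonical parent order
-    ConflictOnly, // never auto-merge
-}
-
-fn merge(behavior: &MergeBehavior, a: u64, b: u64) -> Merged<u64> {
-    match behavior {
-        MergeBehavior::Commutative => Merged::Value(a.max(b)),
-        // Caller passes parents in canonical order; the later parent wins.
-        MergeBehavior::Lww => Merged::Value(b),
-        MergeBehavior::ConflictOnly => {
-            if a == b {
-                Merged::Value(a)
-            } else {
-                Merged::Conflict { left: a, right: b }
-            }
-        }
-    }
-}
-
-fn main() {
-    assert_eq!(merge(&MergeBehavior::Commutative, 5, 2), Merged::Value(5));
-    assert_eq!(merge(&MergeBehavior::Commutative, 2, 5), Merged::Value(5)); // order-free
-    assert_eq!(merge(&MergeBehavior::Lww, 2, 5), Merged::Value(5));
-    assert_eq!(
-        merge(&MergeBehavior::ConflictOnly, 2, 5),
-        Merged::Conflict { left: 2, right: 5 }
-    );
-}
-```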
-
-### Tests Unblocked: 10
-
-```text
-boaw_merge.rs — all 10 tests
-├── t6_1: Commutative merge parent-order invariance
-├── t6_2: Canonical ordering for order-dependent
-├── t6_3: Conflict artifact determinism
-├── merge_regime_crdt_like_is_preferred
-├── merge_regime_lww_with_canonical_order
-├── presence_policy_delete_wins
-├── presence_policy_add_wins
-├── conflict_artifact_is_first_class_and_deterministic
-└── conflict_artifact_contains_no_secrets
-```
-
----
-
-## Tier 3: Phase 9 — Privacy Claims
-
-_Ledger-safe provenance. ~2-3 weeks. Requires Phase 8._
-
-### Privacy Components
-
-| Component | Description |
-| ------------------------- | ------------------------------------------------------- |
-| **Atom type registry** | Sensitivity (Public/Private/ForbiddenInLedger) |
-| **Mind mode enforcement** | Reject ForbiddenInLedger atoms in ledger |
-| **ClaimRecord structure** | claim_key, scheme_id, statement_hash, commitment, proof |
-| **Commitment safety** | Pepper-based hashing (dictionary-safe) |
-| **ZK proof merging** | Verify during collapse; quarantine invalid |
-| **Diagnostics mode** | Richer introspection for trusted debugging |
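-
-The commitment-safety idea can be sketched as follows (illustration only: a real implementation would use a keyed cryptographic hash such as BLAKE3's keyed mode, not `DefaultHasher`, and the function names here are hypothetical):
-
-```rust
-use std::collections::hash_map::DefaultHasher;
-use std::hash::{Hash, Hasher};
-
-// A secret pepper mixed into the commitment defeats dictionary attacks
-// on low-entropy values: without the pepper, an attacker cannot enumerate
-// candidate values and match their hashes.
-fn commit(pepper: &[u8; 32], value: &str) -> u64 {
-    let mut h = DefaultHasher::new();
-    pepper.hash(&mut h); // secret, per-ledger pepper
-    value.hash(&mut h);
-    h.finish()
-}
-
-fn main() {
-    let pepper_a = [1u8; 32];
-    let pepper_b = [2u8; 32];
-    // Same value, different peppers → unlinkable commitments.
-    assert_ne!(commit(&pepper_a, "hp=100"), commit(&pepper_b, "hp=100"));
-    // Same pepper and value → reproducible commitment.
-    assert_eq!(commit(&pepper_a, "hp=100"), commit(&pepper_a, "hp=100"));
-}
-```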
-
-### Tests Unblocked: 9
-
-```text
-boaw_privacy.rs — all 9 tests
-├── t7_1: Mind mode forbids ForbiddenInLedger
-├── t7_2: Invalid proofs quarantined
-├── t7_3: Conflicting valid claims → artifact
-├── t7_4: Commitment dictionary-safe with pepper
-├── atom_type_declares_sensitivity
-├── atom_type_declares_merge_behavior
-├── atom_type_declares_disclosure_policy
-├── claim_record_is_canonical
-└── diagnostics_mode_allows_richer_introspection
-```
-
----
-
-## Perf Gate (Recurring)
-
-_Run at end of each tier. Catch regressions early._
-
-### What to Measure
-
-| Metric | Baseline (Tier 0.5) | Gate Threshold |
-| --------------------------- | ------------------- | -------------------------- |
-| Parallel vs serial (n=100) | TBD | No regression (≥ baseline) |
-| Parallel vs serial (n=1000) | TBD | No regression (≥ baseline) |
-| Merge time (n ops) | TBD | < 2x baseline |
-| Snapshot build time | TBD | < 2x baseline |
-
-### When to Run
-
-- [x] After Tier 0 (cleanup) — establish baseline
-- [ ] After Tier 1 (Phase 7) — verify forking doesn't regress
-- [ ] After Tier 2 (Phase 8) — verify merge doesn't regress
-- [ ] After Tier 3 (Phase 9) — verify privacy checks don't regress
-
-### Optimization Work (Only If Gate Fails)
-
-These are **not scheduled**. Only pursue if perf gate shows regression:
-
-| Item | Trigger | Status |
-| -------------------------- | ---------------------------------- | ------------- |
-| ~~Cross-warp parallelism~~ | Multi-warp ticks show poor scaling | ✅ Done |
-| State clone overhead | CI times unacceptable | Not scheduled |
-| Shard rebalancing | Skewed distributions measured | Not scheduled |
-| SIMD merge sort | Merge becomes bottleneck | Not scheduled |
-
----
-
-## Test Inventory Summary
-
-| Tier | Tests Unblocked | Cumulative |
-| -------------------- | ------------------- | ---------- |
-| Tier 0.5 | ~2-3 (audit needed) | ~2-3 |
-| Tier 1 (Phase 7) | 14 | ~17 |
-| Tier 2 (Phase 8) | 10 | ~27 |
-| Tier 3 (Phase 9) | 9 | ~36 |
-| Stress (run anytime) | 1 | 37 |
-
-**Current:** ~17 tests passing
-**After Phase 9:** ~54 tests passing (all BOAW tests enabled)
-
----
-
-
-
-> **⚠️ TRACKING MOVED:** This archived checklist is preserved for historical
-> context only. Active work tracking is now in
-> [`TECH-DEBT-BOAW.md`](../../adr/TECH-DEBT-BOAW.md). Do NOT update checkboxes here.
-
-## Execution Checklist
-
-### ☐ Tier 0 Cleanup
-
-- [ ] Delete `emit_view_op_delta()` from `rules.rs`
-- [ ] Delete `execute_parallel_stride()` + tests + feature gate
-- [ ] Verify doc accuracy (TECH-DEBT, ADR, CHANGELOG)
-
-### Tier 0.5: Guardrails (This Week)
-
-- [ ] Audit `#[ignore]` tests — activate any that now pass
-- [ ] Add/verify WarpOpKey invariant test
-- [ ] Create `benches/boaw_baseline.rs` with minimal benchmarks
-- [ ] Document baseline in `docs/notes/boaw-perf-baseline.md`
-- [ ] Run perf gate, record numbers
-
-### Tier 1: Phase 7 (Next Sprint)
-
-- [ ] Implement OpenPortal scheduling (T7.1)
-- [ ] Implement DeltaView
-- [x] Implement FootprintGuard (44aebb0d8f7b, 0d0231b55761)
-- [ ] Wire SnapshotBuilder to test harness
-- [ ] Core forking semantics
-- [ ] Activate 14 tests
-- [ ] Run perf gate
-
-### Tier 2: Phase 8 (Following Sprint)
-
-- [ ] Typed merge registry
-- [ ] Merge regimes + conflict artifacts
-- [ ] Presence policies
-- [ ] Activate 10 tests
-- [ ] Run perf gate
-
-### Tier 3: Phase 9 (Future)
-
-- [ ] Atom type registry
-- [ ] Mind mode + ClaimRecord
-- [ ] ZK proof merging
-- [ ] Activate 9 tests
-- [ ] Run perf gate
-
----
-
-## References
-
-- [ADR-0007-BOAW-Storage.md](../../adr/ADR-0007-BOAW-Storage.md) — Full specification
-- [TECH-DEBT-BOAW.md](../../adr/TECH-DEBT-BOAW.md) — Original tracking (to be updated)
-- [PR #257](https://github.com/flyingrobots/echo/pull/257) — Phase 6B implementation
-- Knowledge Graph: `BOAW_Phase_6B`, `Echo_BOAW_Architecture`
diff --git a/docs/archive/plans/COMING_SOON.md b/docs/archive/plans/COMING_SOON.md
deleted file mode 100644
index 3a596771..00000000
--- a/docs/archive/plans/COMING_SOON.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
-
-# Echo & Wesley: The Causal Application Guide
-
-Welcome to the future of causal development. This document explains how **Echo** (the substrate) and **Wesley** (the law-giver) work together to create deterministic, time-travelable applications.
-
----
-
-## 1. The Core Philosophy: "Law vs. Physics"
-
-Building an application on Echo is different from traditional state-management. We split the universe into two layers:
-
-1. **The Law (Wesley)**: Defines _what_ exists and _what_ is allowed to happen. It is expressed in GraphQL SDL with WARP directives.
-2. **The Physics (Echo)**: The high-performance graph substrate that executes the laws, enforces constraints, and records the history of every atom.
-
----
-
-## 2. Wesley: The Schema Compiler
-
-Wesley is not a runtime; it is a **Law Compiler**. When you build an application, you start by writing a schema.
-
-### Defining the Ontology
-
-In a `.graphql` file, you define:
-
-- **Types**: The "Atoms" of your graph (e.g., `User`, `Position`, `InventoryItem`).
-- **Channels**: The event buses where data is emitted (e.g., `PhysicsUpdates`, `ChatMessages`).
-- **Policies**: How data on those channels is handled (`StrictSingle`, `Reduce:Sum`, or `Log`).
-
-### Defining Operations (The Intent ABI)
-
-Instead of arbitrary functions, you define **Operations (Ops)**. An Op is a declaration of intent to change the graph.
-
-```graphql
-type Mutation {
- movePlayer(id: ID!, delta: Vec3!): MoveResult @warp(opId: 101)
-}
-```
-
-Wesley compiles this into an **Intermediate Representation (IR)**. Echo's code generator (`echo-ttd-gen`) then consumes this IR to produce:
-
-- Type-safe Rust structs.
-- Enforcement tables (Footprints) that declare exactly which nodes an Op is allowed to read or write.
-
----
-
-## 3. Echo: The Causal Substrate
-
-Echo takes the artifacts from Wesley and provides the execution environment.
-
-### Graph Rewrites
-
-Every change in Echo is a **Graph Rewrite**. When an application triggers an Op (like `movePlayer`):
-
-1. **Intent**: An `EINT` (Echo Intent) frame is created.
-2. **Scheduling**: The Echo Scheduler looks at the Op's **Footprint**. If two Ops touch different parts of the graph, they can run in parallel.
-3. **Execution**: The rewrite rule is applied. This is a pure function: `(PriorState, OpArgs) -> (NewState, Emissions)`.
-4. **Commit**: The new state is hashed (BLAKE3) and committed to the **Provenance Store**.
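The pure-function contract in step 3 can be sketched as follows. All types here (`State`, `MoveArgs`, `Emission`) are illustrative stand-ins, not Echo's real types; the point is that the output depends only on the inputs, so replay is bit-exact.

```rust
/// Sketch of the rewrite contract: (PriorState, OpArgs) -> (NewState, Emissions).
/// No clocks, no randomness: the result is a pure function of its arguments.
#[derive(Clone, Debug, PartialEq)]
struct State {
    x: i64,
}

struct MoveArgs {
    dx: i64,
}

#[derive(Debug, PartialEq)]
struct Emission(String);

fn apply_move(prior: &State, args: &MoveArgs) -> (State, Vec<Emission>) {
    let next = State { x: prior.x + args.dx };
    let emissions = vec![Emission(format!("moved to {}", next.x))];
    (next, emissions)
}
```

Running the same rewrite twice on the same inputs must produce identical state and emissions, which is what makes the commit hash in step 4 reproducible.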
-
-### Determinism Guards
-
-Echo enforces "Ironclad Determinism":
-
-- **Floating Point**: All math uses `DFix64` (fixed-point) to ensure bit-exact results across Intel, ARM, and WASM.
-- **No Side Effects**: Rewrite rules cannot call `Date.now()` or `Math.random()`. All entropy must be passed in as a seeded "Paradox" value.
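A fixed-point type like `DFix64` avoids floating point entirely by storing a scaled integer. The sketch below assumes 16 fractional bits for illustration; the real `DFix64` layout may differ, and `Fix64` here is a hypothetical stand-in.

```rust
/// Minimal fixed-point sketch in the spirit of `DFix64`: an i64 holding the
/// value scaled by 2^16. Integer arithmetic is bit-exact on every platform.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Fix64(i64);

const FRAC_BITS: u32 = 16;

impl Fix64 {
    fn from_int(v: i64) -> Self {
        Fix64(v << FRAC_BITS)
    }
    fn add(self, o: Fix64) -> Fix64 {
        Fix64(self.0 + o.0)
    }
    fn mul(self, o: Fix64) -> Fix64 {
        // Widen to i128 so the intermediate product cannot overflow.
        Fix64(((self.0 as i128 * o.0 as i128) >> FRAC_BITS) as i64)
    }
    fn to_int(self) -> i64 {
        self.0 >> FRAC_BITS
    }
}
```

Because every operation is integer shift-and-add, Intel, ARM, and WASM all produce the same bits, unlike IEEE-754 transcendentals.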
-
----
-
-## 4. The Time-Travel Debugger (TTD)
-
-The TTD is not just a UI; it is a fundamental property of the **Provenance Store**.
-
-### Worldlines & Forks
-
-Because every tick is a content-addressed snapshot, Echo supports **Causal Branching**:
-
-- **Playback**: You can seek a "Cursor" to any tick in the past.
-- **Forking**: You can create a new `WorldlineId` starting from a past tick. You can then apply different intents to see a "What If" scenario.
-- **Replay**: The TTD can re-play an entire session and verify that the `state_root` hashes match the "Golden" run.
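Replay verification reduces to comparing per-tick state roots against the golden run. The sketch below uses `u64` stand-ins for the real BLAKE3 roots and a hypothetical `verify_replay` helper; the actual TTD works on full hash chains.

```rust
/// Sketch: compare a replayed run's state roots against the recorded
/// "golden" run, reporting the first divergent tick on mismatch.
fn verify_replay(golden: &[u64], replayed: &[u64]) -> Result<(), usize> {
    if golden.len() != replayed.len() {
        // Length mismatch: divergence begins where the shorter run ends.
        return Err(golden.len().min(replayed.len()));
    }
    for (tick, (g, r)) in golden.iter().zip(replayed).enumerate() {
        if g != r {
            return Err(tick); // first divergent tick
        }
    }
    Ok(())
}
```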
-
-### The Receipt System
-
-Every execution produces a **TTDR Receipt**. This is a cryptographically signed proof that:
-_"At Tick X, Op Y was applied to State Z, resulting in State A and Emissions B."_
-
----
-
-## 5. How to Build an "Echo App"
-
-### Step 1: The Wesley Sync
-
-Write your schema and run `cargo xtask wesley sync`. This vendors the types and manifests into your project.
-
-### Step 2: Implement Rewrite Rules
-
-In Rust, you implement the logic for your Ops. Echo provides a `GraphView` that enforces your footprint at runtime.
-
-```rust
-fn handle_move_player(view: &mut GuardedView, args: MoveArgs) -> StepResult {
-    let mut pos = view.get_component::<Position>(args.id)?;
- pos.x += args.delta.x;
- view.set_component(args.id, pos)?;
- Ok(Advanced)
-}
-```
-
-### Step 3: Define the Scene Port
-
-Use `echo-scene-port` to map your graph state to visual objects. This produces a `SceneDelta`—a language-agnostic list of "Add Node", "Move Edge", or "Set Label" commands.
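A `SceneDelta` is just a flat command list, which is what makes it language-agnostic. The shape below is illustrative; the real `echo-scene-port` types may differ, and `SceneCommand`/`describe` are hypothetical names.

```rust
/// Illustrative shape of a scene delta: a plain list of commands a renderer
/// in any language can pattern-match and apply.
#[derive(Debug, PartialEq)]
enum SceneCommand {
    AddNode { id: u64, label: String },
    MoveEdge { id: u64, from: u64, to: u64 },
    SetLabel { id: u64, label: String },
}

type SceneDelta = Vec<SceneCommand>;

fn describe(delta: &SceneDelta) -> usize {
    // A renderer would match each variant; here we just count commands.
    delta.len()
}
```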
-
-### Step 4: The Frontend
-
-Wire the WASM `TtdEngine` into your React/Three.js app. The engine handles the worldlines; your UI just renders the current "Truth Frames" arriving on the subscribed channels.
-
----
-
-## 6. Coming Soon: The "Drill Sergeant" Workflow
-
-We are moving toward a workflow where **Determinism isn't Optional**.
-
-- **DIND (Deterministic Ironclad Nightmare Drills)**: Your app will be subjected to randomized operation orders to ensure it always converges to the same state.
-- **Fuzzing the Law**: Wesley will generate "hostile" inputs to try and crash your rewrite rules.
-
-_Echo is more than an engine; it is a guarantee that causality is absolute._
diff --git a/docs/archive/plans/SPEC-0004-final-plan.md b/docs/archive/plans/SPEC-0004-final-plan.md
deleted file mode 100644
index 56401e56..00000000
--- a/docs/archive/plans/SPEC-0004-final-plan.md
+++ /dev/null
@@ -1,249 +0,0 @@
-
-
-
-# SPEC-0004 Implementation Plan: Worldlines, PlaybackCursors, ViewSessions, TruthBus
-
-**Status:** In Progress
-**Created:** 2026-01-20
-**Spec:** `/docs/spec/SPEC-0004-worldlines-playback-truthbus.md`
-
----
-
-## Corrections Applied (from review)
-
-1. **U0Ref = WarpId** — MVP U0Ref is just a handle to `engine.initial_state` for a warp, not a checkpoint blob
-2. **One entry per global tick per warp** — Store patches even if empty to maintain index alignment: `warp_patches[warp_id].len() == global_tick_history_len`
-3. **Use existing canonical hash scheme** — `compute_state_root_for_warp_store` must use same ordering as `snapshot.rs`
-4. **Minimal TruthSink** — `BTreeMap>` plus a parallel `BTreeMap>` for receipts, not a full bus layer
-5. **Add demo emission for tests** — Need deterministic emission path or outputs are vacuous
-6. **Explicit WarpOp coverage** — `apply_warp_op_to_store` must handle all variants or reject with typed error
-
----
-
-## Commit Status
-
-### ✅ Commit 1 — MBUS v2 Encoder/Decoder + Tests (COMPLETE)
-
-**Files Created:**
-
-- `crates/warp-core/src/materialization/frame_v2.rs` — V2 encoder/decoder with cursor-stamped packets
-
-**Files Modified:**
-
-- `crates/warp-core/src/materialization/mod.rs` — Export frame_v2 types
-
-**Tests Passing (11/11):**
-
-- T19: `mbus_v2_roundtrip_single_packet`
-- T20: `mbus_v1_rejects_v2`
-- T21: `mbus_v2_rejects_v1`
-- T22: `mbus_v2_multi_packet_roundtrip`
-- Plus edge case tests (empty entries, bad magic, truncated, etc.)
-
-**Gate:** `cargo test -p warp-core --features delta_validate -- frame_v2` ✅
-
----
-
-### 🔲 Commit 2 — Types + IDs + ProvenanceStore Seam + Per-Warp Worldline Store
-
-**New Files:**
-
-- `crates/warp-core/src/worldline.rs`
- - `WorldlineId(Hash)` — transparent wrapper
- - `HashTriplet { state_root, patch_digest, commit_hash }`
- - `WorldlineTickPatchV1` — per-warp projection of global tick
- - `WorldlineTickHeaderV1` — shared header across warps
-  - `OutputFrameSet = Vec<(ChannelId, Vec<u8>)>`
-
-- `crates/warp-core/src/playback.rs`
- - `CursorId(Hash)`, `SessionId(Hash)` — transparent wrappers
- - `CursorRole { Writer, Reader }`
- - `PlaybackMode { Paused, Play, StepForward, StepBack, Seek { target, then } }`
- - `SeekThen { Pause, RestorePrevious, Play }`
- - `CursorReceipt` — cursor context for truth frames
- - `TruthFrame` — authoritative value with cursor receipt
-
-- `crates/warp-core/src/provenance_store.rs`
- - `ProvenanceStore` trait (seam for future wormholes)
- - `LocalProvenanceStore` — in-memory Vec-backed implementation
- - `HistoryError { HistoryUnavailable { tick }, WorldlineNotFound }`
- - `U0Ref = WarpId` (per correction #1)
-
-**Engine Modifications (`engine_impl.rs`):**
-
-- Add fields:
-
- ```rust
-  warp_patches: BTreeMap<WarpId, Vec<WorldlineTickPatchV1>>,
-  warp_expected: BTreeMap<WarpId, Vec<HashTriplet>>,
-  warp_outputs: BTreeMap<WarpId, Vec<OutputFrameSet>>,
- ```
-
-- Modify `commit_with_receipt` to project global ops → per-warp patches
-- **Invariant:** `warp_patches[warp_id].len() == tick_history.len()` (even for no-ops)
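The alignment invariant (correction #2) can be sketched as: every warp gets exactly one patch entry per global tick, empty or not, so `global_tick` doubles as the index into each warp's patch vector. Types here are stand-ins (`u32` for `WarpId`, a toy `Patch`), not the real engine types.

```rust
use std::collections::BTreeMap;

/// Stand-in for WorldlineTickPatchV1.
#[derive(Default, Clone)]
struct Patch {
    ops: Vec<String>,
}

/// Record one (possibly empty) patch per warp for this global tick, keeping
/// warp_patches[warp].len() equal to the number of ticks committed so far.
fn record_tick(
    warp_patches: &mut BTreeMap<u32, Vec<Patch>>,
    warps: &[u32],
    ops_by_warp: &BTreeMap<u32, Patch>,
) {
    for warp in warps {
        // Warps untouched this tick still get an empty entry.
        let patch = ops_by_warp.get(warp).cloned().unwrap_or_default();
        warp_patches.entry(*warp).or_default().push(patch);
    }
}
```

Storing empty patches costs a little memory but means seek never has to reconcile per-warp tick offsets.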
-
-**Gate:** `cargo test -p warp-core --features delta_validate`
-
----
-
-### 🔲 Commit 3 — Warp-Local Apply + State Root + Cursor Seek + Verification
-
-**Add to `playback.rs`:**
-
-- `PlaybackCursor` struct with:
- - `cursor_id`, `worldline_id`, `warp_id`, `tick`, `role`, `mode`
- - `store: GraphStore` (owned, never shared)
- - `pin_max_tick: u64`
-- `PlaybackCursor::seek_to(target, provenance)`:
- - If `target < tick`: rebuild from U0 (initial_state for warp)
-  - Apply recorded patches `tick..=target` in order, verifying expected hashes after apply
-
----
-
-### 🔲 Commit 4 — ViewSession + TruthSink + Cursor Step
-
-**ViewSession:**
-
-- `ViewSession::subscribe(channel)`, `set_active_cursor(cursor)`
-
-**Truth Sink (minimal, per correction #4):**
-
-- `TruthSink { frames: BTreeMap<SessionId, Vec<TruthFrame>>, receipts: BTreeMap<SessionId, Vec<CursorReceipt>> }`
-- Helper: `collect_frames(session_id) -> &[TruthFrame]` — returns frames for a session
-- Helper: `last_receipt(session_id) -> Option<&CursorReceipt>` — reads from the receipts map
-
-**PlaybackCursor::step():**
-
-- Implement `PlaybackMode` state machine
-- `Paused` → no-op
-- `Play` → Writer appends (BOAW), Reader consumes then pauses at frontier
-- `StepForward` → advance one then `Paused`
-- `StepBack` → seek(tick-1) then `Paused`
-- `Seek { target, then }` → seek then apply `SeekThen`
-
-**Tests:**
-
-- T1: `writer_play_advances_and_records_outputs`
-- T2: `step_forward_advances_one_then_pauses`
-- T3: `paused_noop_even_with_pending_intents`
-- T7: `truth_frames_are_cursor_addressed_and_authoritative`
-- T9: `two_sessions_same_channel_different_cursors_receive_different_truth`
-- T10: `session_cursor_switch_is_opaque_to_subscribers`
-- T16: `worker_count_invariance_for_writer_advance`
-
-**Gate:** `cargo test -p warp-core --features delta_validate` + `cargo test -p echo-dind-harness`
-
----
-
-### 🔲 Commit 5 — Record Outputs Per Tick + Seek/Playback
-
-**Engine Modifications:**
-
-- On `commit_with_receipt`, after `bus.finalize()`:
-
- ```rust
- let outputs: OutputFrameSet = mat_report.channels
- .iter()
- .map(|fc| (fc.channel, fc.data.clone()))
- .collect();
- self.warp_outputs.entry(root_warp).or_default().push(outputs);
- ```
-
-**Demo Emission (per correction #5):**
-
-- Add deterministic test emission path so T1/T8 aren't vacuous
-- Option A: Demo rule that emits to channel based on tick
-- Option B: Compute outputs from state deterministically for tests
-
-**ViewSession Publishing:**
-
-- `publish_truth(cursor, provenance, sink)` sources from `provenance.outputs(worldline, tick)`
-
-**Tests:**
-
-- T4: `seek_moves_cursor_without_mutating_writer_store`
-- T5: `step_back_is_seek_minus_one_then_pause`
-- T6: `reader_play_consumes_existing_then_pauses_at_frontier`
-- T8: `outputs_match_recorded_bytes_for_same_tick`
-- T19-T22: MBUS v2 integration
-
-**Gate:** `cargo test -p warp-core --features delta_validate`
-
----
-
-### 🔲 Commit 6 — Reducer Semantics + Checkpoint Skeleton + Fork Stub
-
-**New File: `crates/warp-core/src/retention.rs`**
-
-```rust
-pub enum RetentionPolicy {
- KeepAll,
- CheckpointEvery { k: u64 },
- KeepRecent { window: u64, checkpoint_every: u64 },
- ArchiveToWormhole { after: u64, checkpoint_every: u64 }, // seam only
-}
-```
-
-**Checkpoint Skeleton:**
-
-- `LocalProvenanceStore::checkpoint(warp_id, tick, state)` — naive clone
-- `checkpoint_before(worldline, tick)` for fast seek
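The `checkpoint_before` lookup is a "greatest key at or before `tick`" query, which a `BTreeMap` answers directly. This sketch uses a `String` stand-in for checkpoint state; the real store holds full snapshots.

```rust
use std::collections::BTreeMap;

/// Find the nearest checkpoint at or before `tick`, so a seek only has to
/// replay the patches after it rather than the whole history from U0.
fn checkpoint_before(
    checkpoints: &BTreeMap<u64, String>,
    tick: u64,
) -> Option<(u64, &String)> {
    // BTreeMap range query: greatest key <= tick.
    checkpoints.range(..=tick).next_back().map(|(t, s)| (*t, s))
}
```

With `CheckpointEvery { k }`, this bounds seek cost to at most `k` patch applications regardless of history length.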
-
-**Fork Stub:**
-
-- `LocalProvenanceStore::fork(source, fork_tick, new_id)` — prefix-copy
-
-**Tests:**
-
-- T11: `reducer_commutative_is_permutation_invariant_and_replayable`
-- T12: `reducer_order_dependent_is_canonically_deterministic_and_replayable`
-- T13: `reduced_channel_emits_single_authoritative_value_per_tick`
-- T17: `checkpoint_replay_equals_full_replay`
-- T18: `fork_worldline_diverges_after_fork_tick_without_affecting_original`
-
-**Gate:** `cargo test -p warp-core --features delta_validate`
-
----
-
-## Key Files Reference
-
-| File | Purpose |
-| -------------------------------- | --------------------------------------------------------- |
-| `materialization/frame.rs:1-255` | Pattern for MBUS encoding |
-| `engine_impl.rs:967-1085` | `commit_with_receipt` — hook for per-warp projection |
-| `tick_patch.rs:98-461` | `WarpOp`, `apply_to_state` — pattern for `apply_to_store` |
-| `snapshot.rs:90-265` | `compute_state_root`, `compute_commit_hash_v2` |
-| `graph.rs:16-486` | `GraphStore`, `canonical_state_hash` |
-
----
-
-## Invariants (from spec)
-
-- **WL-001 (Holography):** Given U0Ref + patches + canonical apply, any tick's state is reconstructible
-- **WL-002 (Truth):** Given recorded outputs per tick, any tick's client-visible truth is reconstructible byte-for-byte
-- **CUR-001:** Cursor never mutates worldline unless role is Writer and mode requires advance
-- **CUR-002:** Cursor never executes rules when seeking; it applies recorded patches only
-- **CUR-003:** After seek/apply, cursor verifies expected hashes byte-for-byte
-- **OUT-001:** For `(worldline_id, tick, channel)`, value bytes are deterministic across runs/machines
-- **OUT-002:** Playback at tick t reproduces the same TruthFrames recorded at tick t
-- **STEP-001:** No store mutation while any GraphView borrow exists for that store
-- **STEP-002:** Seeking never touches writer cursor store; only cursor.store
diff --git a/docs/archive/plans/SPEC-0004-review-hitlist.md b/docs/archive/plans/SPEC-0004-review-hitlist.md
deleted file mode 100644
index 8f9c2616..00000000
--- a/docs/archive/plans/SPEC-0004-review-hitlist.md
+++ /dev/null
@@ -1,149 +0,0 @@
-
-
-
-# SPEC-0004 Self-Review Hit List
-
-**Date:** 2026-01-22
-**Branch:** `graph-boaw`
-**Status:** Pre-PR review complete
-
----
-
-## Summary
-
-| Category | High | Medium | Low | Total |
-| ------------- | ----- | ------ | ------ | ------ |
-| Source Code | 0 | 7 | 36 | 43 |
-| Test Code | 1 | 8 | 18 | 27 |
-| Documentation | 0 | 3 | 8 | 11 |
-| API Surface | 0 | 0 | 6 | 6 |
-| **TOTAL** | **1** | **18** | **68** | **87** |
-
----
-
-## HIGH Severity
-
-- [ ] **#53** Cross-file: Massive test helper duplication (~330 lines duplicated across 3 test files). `test_worldline_id`, `test_cursor_id`, `setup_worldline_with_ticks`, `create_add_node_patch`, etc. should be in `tests/common/mod.rs`.
-
----
-
-## MEDIUM Severity
-
-### Source Code
-
-- [ ] **#1** `playback.rs:314` — Long mid-function comment block contradicts itself ("Actually, let's clarify..."). Clean up or move to module-level docs.
-- [ ] **#2** `playback.rs:394` — `StepForward` for writers returns `StepResult::Advanced` but does nothing (misleading stub). Should return `NoOp` or document clearly.
-- [ ] **#3** `playback.rs:566` — `publish_truth` hash conversion is fragile (relies on `blake3::Hash` to `[u8;32]` via `into()`). Add explicit type annotation.
-- [ ] **#4** `provenance_store.rs:204` — `add_checkpoint` silently no-ops if worldline doesn't exist. Should return error or log.
-- [ ] **#5** `provenance_store.rs:189` — `append()` doesn't validate `global_tick` equals current length (gap risk).
-- [ ] **#6** `retention.rs:47` — `ArchiveToWormhole` is "not implemented" but no compile-time warning when used.
-- [ ] **#7** `frame_v2.rs:111` — `debug_assert!` for payload size check. Release builds silently produce invalid packets if payload exceeds `u32::MAX`.
-
-### Test Code
-
-- [ ] **#8** `view_session_tests.rs:713` — T16 tests conceptually belong in BOAW test file, not "view sessions".
-- [ ] **#9** `view_session_tests.rs:726` — `make_touch_rule` closure duplicated between T16 and T16-shuffled (47 lines x 2).
-- [ ] **#10** `view_session_tests.rs:873` — `XorShift64` + `shuffle` reimplemented inline (duplicates `common/mod.rs`).
-- [ ] **#11** `outputs_playback_tests.rs:92` — `setup_worldline_with_ticks` duplicated verbatim across 3 files.
-- [ ] **#12** `outputs_playback_tests.rs:698` — Direct field mutation `cursor.tick = 100` bypasses public API.
-- [ ] **#13** `checkpoint_fork_tests.rs:59` — `create_add_node_patch` duplicated verbatim.
-- [ ] **#14** `reducer_emission_tests.rs:1254` — `bus_log` is non-mut but calls `emit()`. Misleading if interior mutability.
-- [ ] **#15** `view_session_tests.rs:317` — Helper functions block (~110 lines) duplicated across 3 test files.
-
-### Documentation
-
-- [ ] **#16** `architecture-outline.md:125` — Says "`TruthSink` trait" but it's actually a `struct`.
-- [ ] **#17** `architecture-outline.md:128` — `RetentionPolicy` variants listed incorrectly (says "Archival", missing `CheckpointEvery`).
-- [ ] **#18** `architecture-outline.md:121` — Potentially broken link path (`/spec/` vs relative).
-
----
-
-## LOW Severity
-
-### Source Code (LOW)
-
-- [ ] **#19** `playback.rs:264` — All `PlaybackCursor` fields are `pub` (risky for `store` field).
-- [ ] **#20** `playback.rs:381` — Writer stub TODO not marked with `// TODO:` for grep-ability.
-- [ ] **#21** `retention.rs:21` — Missing `#[non_exhaustive]` on `RetentionPolicy` enum.
-- [ ] **#22** `worldline.rs:260` — `OutputFrameSet` type alias doesn't show docs in all IDE contexts. Consider newtype.
-- [ ] **#23** `frame_v2.rs:149` — `decode_v2_packet` returns `Option` with no failure reason. Consider `Result<_, DecodeError>`.
-- [ ] **#24** `frame_v2.rs:174` — Variable named `cursor` confusing given `CursorId` in crate. Rename to `offset`.
-- [ ] **#25** `playback.rs:25` — `BTreeMap` imported at top but only used in `TruthSink`. Consider importing at point of use.
-- [ ] **#26** `playback.rs:34` — `CursorId` and `SessionId` have identical `as_bytes` implementations. Consider macro/trait.
-- [ ] **#27** `playback.rs:633` — `TruthSink::collect_frames` clones the entire Vec. Return `&[TruthFrame]` instead.
-- [ ] **#28** `playback.rs:631` — Missing `#[must_use]` on `TruthSink::last_receipt`.
-- [ ] **#29** `worldline.rs:145` — `#[allow(clippy::too_many_lines)]` on `apply_warp_op_to_store`. Consider refactoring.
-- [ ] **#30** `worldline.rs:97` — Simple accessors (`global_tick()`, `policy_id()`) missing `#[inline]`.
-- [ ] **#31** `worldline.rs:284` — `ApplyError::UnsupportedOperation` uses `&'static str`. Consider enum of op names.
-- [ ] **#32** `provenance_store.rs:139` — `WorldlineHistory` is private but has doc comment. Consider removing.
-- [ ] **#33** `provenance_store.rs:229` — `checkpoint()` does redundant `get_mut` after hash computation.
-- [ ] **#34** `provenance_store.rs:254` — `#[allow(clippy::cast_possible_truncation)]` on `fork` needs safety comment.
-- [ ] **#35** `provenance_store.rs:277` — Repeated `#[allow(clippy::cast_possible_truncation)]`. Consider module-level allow.
-- [ ] **#36** `provenance_store.rs:317` — `checkpoint_before` returns `None` for non-existent worldline. Document behavior.
-- [ ] **#37** `retention.rs:56` — `Default` impl could use `#[derive(Default)]` with `#[default]` attribute.
-- [ ] **#38** `frame_v2.rs:102` — Multiple `#[allow(clippy::cast_possible_truncation)]` in `encode_v2_packet`.
-- [ ] **#39** `frame_v2.rs:225` — `decode_v2_packets` creates subslice then re-checks length inside decode. Minor inefficiency.
-- [ ] **#40** `playback.rs:559` — `publish_truth` error doc references `HistoryError` inconsistently.
-
-### Test Code (LOW)
-
-- [ ] **#41** `view_session_tests.rs:82` — Magic number `patch_digest: [tick as u8; 32]` wraps at tick > 255.
-- [ ] **#42** `view_session_tests.rs:119` — Magic number `+100` offset for `commit_hash` unexplained.
-- [ ] **#43** `view_session_tests.rs:145` — Magic number `10` for `pin_max_tick` not named.
-- [ ] **#44** `view_session_tests.rs:719` — `WORKER_COUNTS` uses `[1,2,8,32]` vs `common` uses `[1,2,4,8,16,32]`.
-- [ ] **#45** `view_session_tests.rs:232` — Loop count `5` is a magic number.
-- [ ] **#46** `outputs_playback_tests.rs:3` — `#![allow(clippy::expect_fun_call)]` is file-wide. Scope to specific functions.
-- [ ] **#47** `outputs_playback_tests.rs:427` — Magic number `k = 12u64` — why 12?
-- [ ] **#48** `playback_cursor_tests.rs:21` — `test_cursor_id()` has different signature than other test files. Prevents extraction.
-- [ ] **#49** `playback_cursor_tests.rs:256` — Unused variable `_hash_at_3` computed but never asserted.
-- [ ] **#50** `playback_cursor_tests.rs:207` — "Tick 10 is valid" reasoning unclear. Document convention.
-- [ ] **#51** `reducer_emission_tests.rs:29` — `key_sub as key` shadows 2-arg `key` function. Confusing.
-- [ ] **#52** `reducer_emission_tests.rs:43` — `factorial` overflow guard uses `debug_assert!`. Use `assert!`.
-- [ ] **#53** `reducer_emission_tests.rs:176` — Redundant re-assertion after loop (same check inside and after).
-- [ ] **#54** `reducer_emission_tests.rs:539` — Double-finalization pattern (wasteful and confusing).
-- [ ] **#55** `checkpoint_fork_tests.rs:9` — `#![allow(clippy::unwrap_used)]` is file-wide.
-- [ ] **#56** `checkpoint_fork_tests.rs:135` — `cursor_tick = patch_index + 1` convention is fragile.
-- [ ] **#57** Missing edge case tests: `pin_max_tick=0`, seek to `u64::MAX`, empty worldline, duplicate WorldlineId registration.
-- [ ] **#58** `outputs_playback_tests.rs:623` — `unsubscribed_channel` variable name is redundant with test logic.
-
-### Documentation (LOW)
-
-- [ ] **#59** CHANGELOG claims "T19-T22" but these labels don't appear in test file names.
-- [ ] **#60** `code-map.md` says "T1-T10 playback tests" but file has T1,T4,T5,T6,T7,T8 (not T2,T3,T9,T10).
-- [ ] **#61** CHANGELOG `checkpoint()` description says "Create checkpoint" but function is `add_checkpoint`.
-- [ ] **#62** CHANGELOG claims `WorldlineId` is "content-addressed" but tests use fixed bytes.
-
-### API Surface
-
-- [ ] **#63** `RetentionPolicy` exported but no public function accepts/returns it (dangling export).
-- [ ] **#64** `apply_warp_op_to_store` exposes internal mutation without guardrails.
-- [ ] **#65** `ApplyError` vs `ApplyResult` naming creates cognitive collision (different contexts).
-- [ ] **#66** `compute_state_root_for_warp_store` newly public — low-level, easy to misuse.
-- [ ] **#67** `CheckpointRef` exposed publicly but only meaningful in provenance context.
-- [ ] **#68** `playback` module exports 11 types in a flat list. Consider sub-grouping in docs.
-
----
-
-## Recommended Fix Priority
-
-### P0 — Before PR
-
-- [ ] Fix #53 (HIGH): Extract shared test helpers to `tests/common/mod.rs`
-- [ ] Fix #16-#18 (MEDIUM): Factual errors in `architecture-outline.md`
-- [ ] Fix #9-#10 (MEDIUM): Use existing `common/` XorShift64/shuffle/make_touch_rule
-
-### P1 — Before Merge
-
-- [ ] Fix #1-#2 (MEDIUM): Clean up playback.rs stub and comments
-- [ ] Fix #5 (MEDIUM): Add tick gap validation to `append()`
-- [ ] Fix #7 (MEDIUM): Promote `debug_assert!` to runtime check in frame_v2
-
-### P2 — Follow-up Issue
-
-- [ ] Fix #4 (MEDIUM): Error handling in `add_checkpoint`
-- [ ] Fix #8 (MEDIUM): Move T16 to appropriate test file
-- [ ] Fix #21 (LOW): Add `#[non_exhaustive]` to `RetentionPolicy`
-
-### P3 — Tech Debt
-
-- [ ] All remaining LOW severity items
diff --git a/docs/archive/plans/cross-warp-parallelism.md b/docs/archive/plans/cross-warp-parallelism.md
deleted file mode 100644
index 9ee8ff76..00000000
--- a/docs/archive/plans/cross-warp-parallelism.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
-
-# Cross-Warp Parallelism
-
-**Created:** 2026-01-20
-**Status:** IMPLEMENTED
-**Archived:** 2026-03-07 (PR #292)
-**Reason:** Feature fully implemented
-**Implementation:** PR #257 (Phase 6B); see `crates/warp-core/src/boaw/exec.rs`
-**Context:** Performance optimization — parallelize execution across warps
-
-> **Archival Note:** The implemented design deviates from this plan. The `WorkUnit`
-> struct as built does NOT store an explicit `shard_id` field — shard identity is
-> implicit in the items membership. See `crates/warp-core/src/boaw/exec.rs:259-281`
-> for the actual structure. This document preserves the original planning intent.
-
----
-
-## Problem Statement
-
-In `engine_impl.rs:1220`, warps are processed serially:
-
-```rust
-for (warp_id, warp_rewrites) in by_warp {
- let view = GraphView::new(store); // borrows per-warp store
- let deltas = execute_parallel_sharded(view, &items, workers);
- all_deltas.extend(deltas);
-}
-```
-
-While `execute_parallel_sharded()` parallelizes _within_ each warp, multi-warp ticks
-still execute warp-by-warp. With N warps and S shards each, latency is O(N) rather
-than O(1) when parallelism is available.
-
----
-
-## Recommended Approach
-
-**Global work queue of `(warp_id, shard_id)` units** — flat parallelism, no nesting.
-
-1. **Partition rewrites by warp** — group by `WarpId`
-2. **Within each warp, partition into shards** — reuse existing `shard_of()` (256 shards)
-3. **Build work units** — `WorkUnit { warp_id, items: Vec<ExecItem> }` _(shard_id not stored; implicit in items membership)_
-4. **Spawn fixed worker pool** — `available_parallelism()` threads, spawned once
-5. **Atomic work claiming** — workers claim next unit via `AtomicUsize` index
-6. **Execute with warp-local view** — each unit resolves its warp's `GraphView`
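Steps 4 and 5 above can be sketched with a plain `AtomicUsize` claim loop. Work units are reduced to `u64` stand-ins and "execution" to summation; the real executor resolves a warp-local `GraphView` per unit, but the claiming and merging structure is the same.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

/// Sketch of the claim loop: a fixed pool of workers pulls the next unit
/// index from one atomic counter. No nested spawns, no per-unit locks.
fn execute_work_queue(units: Vec<u64>, workers: usize) -> u64 {
    let next = Arc::new(AtomicUsize::new(0));
    let units = Arc::new(units);
    let mut handles = Vec::new();
    for _ in 0..workers {
        let next = Arc::clone(&next);
        let units = Arc::clone(&units);
        handles.push(thread::spawn(move || {
            let mut local = 0u64;
            loop {
                let i = next.fetch_add(1, Ordering::Relaxed);
                if i >= units.len() {
                    break;
                }
                local += units[i]; // stand-in for executing the unit serially
            }
            local
        }));
    }
    // The merge here commutes, so the result is worker-count invariant;
    // the real engine merges deltas in canonical unit order instead.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```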
-
-**Pros:** Scalable, clean, deterministic (canonical merge order), no API churn.
-**Cons:** Slightly more wiring than per-warp threading, but avoids nested spawns.
-
----
-
-## Constraints (Non-Negotiable)
-
-1. **No nested threading** — `execute_work_queue()` is the _only_ spawn site. Units
- call serial execution internally, never `execute_parallel_sharded()`.
-
-2. **No long-lived borrows across warps** — worker loop must: resolve `GraphView`,
- execute unit, drop view, move on. No caching `&GraphStore` across iterations.
-
-3. **Keep `ExecItem` unchanged** — `WorkUnit` carries `warp_id + Vec<ExecItem>`.
- Do not widen `ExecItem`'s API surface.
-
----
-
-## Implementation Steps
-
-| Step | Description | Files |
-| ---- | ------------------------------------------------------ | -------------- |
-| 1 | Add `WorkUnit { warp_id, items }` struct (no shard_id) | exec.rs |
-| 2 | Add `build_work_units()` — partition by warp + shard | exec.rs |
-| 3 | Add `execute_work_queue()` — atomic claim loop | exec.rs |
-| 4 | Replace serial for-loop with `execute_work_queue()` | engine_impl.rs |
-| 5 | Add `#[cfg(feature = "cross-warp-parallel")]` gate | Cargo.toml |
-
----
-
-## Files Modified
-
-| File | Change |
-| ------------------------------------- | ------------------------------------------- |
-| `crates/warp-core/src/boaw/exec.rs` | WorkUnit struct, build_work_units, executor |
-| `crates/warp-core/src/engine_impl.rs` | Replace serial loop with work queue call |
-| `crates/warp-core/Cargo.toml` | Feature gate (optional) |
-
----
-
-## Success Criteria
-
-- [x] Multi-warp tick executes all warp-shards concurrently
-- [x] Fixed worker pool (no nested spawning)
-- [x] Determinism preserved (canonical unit ordering + merge)
-- [x] No regression on single-warp benchmarks
-
----
-
-## Minimal Success Test
-
-Integration test proving correctness:
-
-- **Setup:** 2 warps × many shards (e.g., 100 items per warp)
-- **Worker counts:** `{1, 2, 8, 32}` — all must produce identical results
-- **Assertion:** Same `commit_hash` per warp (or engine receipt hash) across all runs
-
-If this passes, the design is correct.
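The invariance assertion can be sketched in miniature: run the same tick under several worker counts and require an identical result hash. The "engine" below is a stand-in that partitions items, applies a toy rewrite, and merges in canonical index order; only the structure of the test mirrors the real one.

```rust
/// Stand-in for one tick: partition round-robin across `workers`, apply a
/// toy rewrite, then merge deltas in canonical (index) order and hash them.
fn run_tick(items: &[u64], workers: usize) -> u64 {
    let mut deltas: Vec<(usize, u64)> = Vec::new();
    for w in 0..workers {
        for (i, v) in items.iter().enumerate() {
            if i % workers == w {
                deltas.push((i, v * 2)); // stand-in "rewrite"
            }
        }
    }
    deltas.sort_by_key(|&(i, _)| i); // canonical merge order
    deltas
        .iter()
        .fold(0u64, |h, &(i, v)| h.wrapping_mul(31).wrapping_add(i as u64 ^ v))
}
```

Because merging is canonicalized before hashing, every worker count yields the same hash, which is exactly what the `commit_hash` assertion checks in the real test.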
diff --git a/docs/archive/plans/per-warp-time-sovereignty.md b/docs/archive/plans/per-warp-time-sovereignty.md
deleted file mode 100644
index a76eed26..00000000
--- a/docs/archive/plans/per-warp-time-sovereignty.md
+++ /dev/null
@@ -1,852 +0,0 @@
-
-
-
-# Per-Warp Time Sovereignty
-
-**Status:** Draft
-**Created:** 2026-01-20
-**Target:** Phase 7 (Post-BOAW)
-**Authors:** Claude (research agent)
-
-## Overview
-
-This plan defines how different WARPs can exist at different "now" positions within the same Engine step, safely and deterministically. This enables:
-
-- **Warp A** in LIVE mode (advancing tick frontier, ingesting new intents)
-- **Warp B** in REPLAY mode (replaying historical commits or applying recorded tick patches)
-- **Warp C** in PAUSED mode (no-op, frozen in time)
-
-All executing concurrently within one Engine step call.
-
----
-
-## 1. Current State
-
-### What Exists Today
-
-| Component | Location | Current Capability |
-| ------------------------ | -------------------------- | ----------------------------------------------------------------------------------- |
-| **WarpState**            | `warp_state.rs:43-46`      | `BTreeMap<WarpId, WarpState>` - per-warp isolation via separate stores               |
-| **WorkUnit** | `boaw/exec.rs:149-159` | Carries `warp_id` explicitly - work units are warp-tagged |
-| **execute_work_queue()** | `boaw/exec.rs:192-282` | Resolves `GraphView` per-unit from correct store via `resolve_store(&unit.warp_id)` |
-| **tick_history** | `engine_impl.rs:424` | `Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>` - **engine-global**, not per-warp |
-| **jump_to_tick()** | `engine_impl.rs:1581-1601` | Replays patches sequentially, but operates on **whole engine** |
-| **WarpTickPatchV1** | `tick_patch.rs:324-461` | `apply_to_state()` applies canonical ops to WarpState |
-| **Footprint isolation** | `scheduler.rs:162-222` | Keys include `warp_id` - cross-warp conflicts impossible by design |
-| **Commit DAG** | `engine_impl.rs:1052-1056` | **Single linear chain** - parents from `last_snapshot` (global) |
-
-### What's Missing
-
-1. **No `WarpRunMode`** - no enum for LIVE/REPLAY/PAUSED
-2. **No per-warp timeline** - tick history is engine-global
-3. **No warp-local "now"** - no tracking of each warp's position in its timeline
-4. **No mode-aware scheduling** - work queue doesn't filter by mode
-5. **No REPLAY intent rejection** - no mechanism to block new intents for replaying warps
-6. **No per-warp commit DAG** - single chain, not per-warp branches
-
----
-
-## 2. Constraints & Invariants
-
-### Non-Negotiable (Compile-Time or Hard Runtime Errors)
-
-| ID | Invariant | Enforcement |
-| ------------------- | -------------------------------------------------------------------------------------------------- | ---------------------------------------- |
-| **REPLAY-001** | REPLAY warps MUST NOT ingest new intents | Runtime check at `ingest_intent()` entry |
-| **REPLAY-002** | REPLAY warps MUST only apply recorded patches (not execute rules) | Mode branch in step |
-| **REPLAY-003** | All hashes (`commit_hash`, `patch_digest`, `state_root`) MUST match recorded history byte-for-byte | Post-apply verification |
-| **REPLAY-004** | REPLAY execution MUST NOT depend on wall clock, random, or nondet | ADR-0006 ban list |
-| **LIVE-001** | LIVE warps MAY ingest new intents | Default behavior |
-| **LIVE-002** | LIVE execution deterministic given ingress | Existing guarantee |
-| **LIVE-003** | LIVE warps MUST NOT read/write other warps' state | WarpId-scoped keys |
-| **ISOLATION-001** | Each warp's timeline is independent | Per-warp `tick_history` |
-| **ISOLATION-002** | No cross-warp `GraphView` aliasing during parallel execution | Per-unit resolution |
-| **DETERMINISM-001** | Mixed-mode execution produces deterministic per-warp commit DAGs | Canonical merge |
-
-### Soft Invariants (Debug Assertions, Upgradable)
-
-| ID | Invariant | Enforcement |
-| -------------- | ------------------------------------------ | --------------------- |
-| **PAUSED-001** | PAUSED warps produce zero work units | Mode filter in build |
-| **MODE-001** | Mode transitions are explicit and recorded | API enforcement |
-| **REPLAY-005** | REPLAY completion triggers mode transition | Configurable callback |
-
----
-
-## 3. Design
-
-### 3.1 Warp-Local "Now" Definition
-
-"Now" is not wall-clock time but a **position in the warp's commit DAG**.
-
-```rust
-/// The temporal position of a single warp within its own timeline.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub struct WarpNow {
-    /// The warp this position belongs to.
-    pub warp_id: WarpId,
-    /// Current tick index (0 = initial state U0, 1 = after first commit, etc.)
-    pub tick_index: u64,
-    /// The commit hash at this position (None for U0).
-    pub commit_hash: Option<Hash>,
-    /// Current execution mode.
-    pub mode: WarpRunMode,
-}
-```
-
-**Location**: `crates/warp-core/src/warp_timeline.rs`
-
-For a linear chain: `(warp_id, tick_index)` uniquely identifies the state.
-For future branching: `(warp_id, commit_hash)` would be the canonical form.
-
-### 3.2 WarpRunMode Model
-
-```rust
-/// Execution mode for a warp within an Engine step.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub enum WarpRunMode {
-    /// Normal operation: new intents allowed, rules execute, commits advance frontier.
-    Live,
-
-    /// Replaying recorded history: no new intents, only apply recorded patches.
-    Replay {
-        /// Target tick index to replay to (post-apply tick_index; patches 0..target_tick-1).
-        target_tick: u64,
-        /// Source of recorded patches for verification.
-        source: ReplaySource,
-    },
-
-    /// No-op: warp is excluded from this step entirely.
-    Paused,
-
-    /// (Future) Forking: create a new timeline branch from current position.
-    #[non_exhaustive]
-    _Reserved,
-}
-
-/// Source of recorded patches for REPLAY mode.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub enum ReplaySource {
-    /// Replay from engine's own ledger (local tick_history).
-    LocalLedger,
-    /// Replay from external patches (e.g., received from network peer).
-    External(Vec<WarpTickPatchV1>),
-}
-```
-
-**Design rationale**: Modes are **per-warp, not per-engine**. This allows warp A to advance (LIVE) while warp B replays (REPLAY) in the same `Engine.step()` call.
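-
-As a minimal, self-contained sketch of this rationale (the type and function names here are simplified stand-ins, not the real API), a per-warp mode map lets one engine step treat each warp differently:
-
-```rust
-use std::collections::BTreeMap;
-
-// Simplified stand-ins for WarpRunMode and WarpId.
-#[derive(Clone, Debug, PartialEq, Eq)]
-enum Mode {
-    Live,
-    Replay { target_tick: u64 },
-    Paused,
-}
-
-// Only a LIVE warp may accept new intents (REPLAY-001);
-// unknown warps are treated as non-ingesting.
-fn can_ingest(modes: &BTreeMap<u64, Mode>, warp_id: u64) -> bool {
-    matches!(modes.get(&warp_id), Some(Mode::Live))
-}
-```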
-
-### 3.3 Per-Warp Timeline Structure
-
-```rust
-/// Timeline state for a single warp.
-#[derive(Clone, Debug)]
-pub struct WarpTimeline {
-    /// Warp identifier.
-    pub warp_id: WarpId,
-    /// Current execution mode.
-    pub mode: WarpRunMode,
-    /// Complete tick history for this warp.
-    pub tick_history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>,
-    /// Most recent snapshot (tip of the DAG).
-    pub last_snapshot: Option<Snapshot>,
-    /// Initial state for replay (U0 for this warp).
-    pub initial_store: GraphStore,
-    /// Current position in timeline.
-    pub now: WarpNow,
-}
-
-impl WarpTimeline {
-    /// Get the current tick index (0 = U0, n = after n commits).
-    pub fn tick_index(&self) -> u64 {
-        self.tick_history.len() as u64
-    }
-
-    /// Check if this warp can accept new intents.
-    pub fn can_ingest(&self) -> bool {
-        matches!(self.mode, WarpRunMode::Live)
-    }
-
-    /// Get recorded patch at index (for REPLAY verification).
-    pub fn recorded_patch(&self, index: u64) -> Option<&WarpTickPatchV1> {
-        self.tick_history.get(index as usize).map(|(_, _, p)| p)
-    }
-}
-```
-
-### 3.4 REPLAY Invariants Enforcement
-
-```rust
-impl WarpTimeline {
-    /// Apply a replay step: verify and apply recorded patch.
-    pub fn replay_step(
-        &mut self,
-        store: &mut GraphStore,
-    ) -> Result<ReplayStepResult, ReplayError> {
-        let WarpRunMode::Replay { target_tick, ref source } = self.mode else {
-            return Err(ReplayError::NotInReplayMode);
-        };
-
-        let current_tick = self.tick_index();
-        // target_tick is the desired post-apply tick_index (number of patches applied).
-        // tick_index 0 = initial state; tick_index N = state after patches 0..N-1.
-        // So when current_tick >= target_tick, patches 0..target_tick-1 have been applied.
-        if current_tick >= target_tick {
-            return Ok(ReplayStepResult::ReplayComplete);
-        }
-
-        // Get recorded patch
-        let recorded = match source {
-            ReplaySource::LocalLedger => {
-                self.recorded_patch(current_tick)
-                    .ok_or(ReplayError::MissingRecordedPatch { tick: current_tick })?
-                    .clone()
-            }
-            ReplaySource::External(patches) => {
-                patches.get(current_tick as usize)
-                    .ok_or(ReplayError::MissingRecordedPatch { tick: current_tick })?
-                    .clone()
-            }
-        };
-
-        // Apply patch (no rule execution!)
-        recorded.apply_to_store(store)?;
-
-        // Verify post-state matches recorded (REPLAY-003)
-        let post_state_root = compute_state_root_for_warp(store, &self.warp_id);
-        let (recorded_snapshot, _, _) = &self.tick_history[current_tick as usize];
-
-        if post_state_root != recorded_snapshot.state_root {
-            return Err(ReplayError::StateRootMismatch {
-                tick: current_tick,
-                expected: recorded_snapshot.state_root,
-                actual: post_state_root,
-            });
-        }
-
-        // Advance timeline position
-        self.now.tick_index = current_tick + 1;
-        self.now.commit_hash = Some(recorded_snapshot.hash);
-
-        Ok(ReplayStepResult::Advanced { tick: current_tick + 1 })
-    }
-}
-
-#[derive(Debug)]
-pub enum ReplayError {
-    NotInReplayMode,
-    MissingRecordedPatch { tick: u64 },
-    StateRootMismatch { tick: u64, expected: Hash, actual: Hash },
-    PatchDigestMismatch { tick: u64, expected: Hash, actual: Hash },
-    CommitHashMismatch { tick: u64, expected: Hash, actual: Hash },
-}
-```
-
-### 3.5 LIVE Invariants Enforcement
-
-```rust
-impl Engine {
-    /// Ingest intent with mode check (LIVE-001 enforced).
-    pub fn ingest_intent_for_warp(
-        &mut self,
-        warp_id: &WarpId,
-        intent_bytes: &[u8],
-    ) -> Result<(), EngineError> {
-        let timeline = self.timelines.get(warp_id)
-            .ok_or(EngineError::UnknownWarp(*warp_id))?;
-
-        // REPLAY-001: Reject intents for non-LIVE warps
-        if !timeline.can_ingest() {
-            return Err(EngineError::WarpNotAcceptingIntents {
-                warp_id: *warp_id,
-                mode: timeline.mode.clone(),
-            });
-        }
-
-        // Proceed with normal ingestion
-        self.ingest_intent_impl(warp_id, intent_bytes)
-    }
-}
-```
-
-### 3.6 Concurrency Safety Matrix
-
-| Data | Sharing Model | Rationale |
-| ----------------------- | ---------------- | --------------------------------------------- |
-| `GraphStore` per warp | **Isolated** | Each warp has own store in `WarpState.stores` |
-| `WarpTimeline` per warp | **Isolated** | Each warp has own timeline, mode, history |
-| `WorkUnit` | **Read-shared** | Built before execution, immutable during |
-| `TickDelta` per worker | **Thread-local** | Each worker accumulates own delta |
-| `RewriteRule` registry | **Read-shared** | Rules immutable after registration |
-| Atomic work counter | **Shared** | `AtomicUsize` for work-stealing |
-| Engine metadata | **Isolated** | Only one `&mut Engine` exists |
-
-**Key guarantee**: During `execute_work_queue()`, each worker:
-
-1. Claims a `WorkUnit` atomically
-2. Resolves `GraphView` from correct warp's store (read-only)
-3. Writes to thread-local `TickDelta`
-4. Never touches another warp's store
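-
-The atomic claim in step 1 can be sketched with a single shared counter (an illustrative simplification of the work-stealing queue, not the real implementation):
-
-```rust
-use std::sync::atomic::{AtomicUsize, Ordering};
-
-/// Hand out each work-unit index exactly once across all workers.
-/// Every call reserves a fresh index; None once the queue is exhausted.
-fn claim(next: &AtomicUsize, total: usize) -> Option<usize> {
-    let i = next.fetch_add(1, Ordering::Relaxed);
-    if i < total { Some(i) } else { None }
-}
-```
-
-Because `fetch_add` is atomic, two workers can never claim the same index, which is what makes the per-unit `GraphView` resolution race-free.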
-
-### 3.7 Global Work Queue with Mixed Modes
-
-```rust
-/// Build work units respecting per-warp modes.
-pub fn build_mixed_mode_work_units(
-    timelines: &BTreeMap<WarpId, WarpTimeline>,
-    live_rewrites: &BTreeMap<WarpId, Vec<(PendingRewrite, ExecPtr)>>,
-) -> MixedModeWorkPlan {
-    let mut live_units = Vec::new();
-    let mut replay_warps = Vec::new();
-    let mut paused_warps = Vec::new();
-
-    for (warp_id, timeline) in timelines {
-        match &timeline.mode {
-            WarpRunMode::Live => {
-                // Build work units for LIVE warps (normal path)
-                if let Some(rewrites) = live_rewrites.get(warp_id) {
-                    let items: Vec<ExecItem> = rewrites.iter()
-                        .map(|(rw, exec)| ExecItem {
-                            exec: *exec,
-                            scope: rw.scope.local_id,
-                            origin: OpOrigin::default(),
-                        })
-                        .collect();
-                    let sharded = partition_into_shards(&items);
-                    for shard in sharded {
-                        if !shard.items.is_empty() {
-                            live_units.push(WorkUnit {
-                                warp_id: *warp_id,
-                                items: shard.items,
-                            });
-                        }
-                    }
-                }
-            }
-            WarpRunMode::Replay { .. } => {
-                replay_warps.push(*warp_id);
-            }
-            WarpRunMode::Paused => {
-                paused_warps.push(*warp_id);
-            }
-            WarpRunMode::_Reserved => unreachable!(),
-        }
-    }
-
-    MixedModeWorkPlan {
-        live_units,
-        replay_warps,
-        paused_warps,
-    }
-}
-
-pub struct MixedModeWorkPlan {
-    /// Work units for LIVE warps (rule execution).
-    pub live_units: Vec<WorkUnit>,
-    /// Warps in REPLAY mode (will apply recorded patches).
-    pub replay_warps: Vec<WarpId>,
-    /// Warps in PAUSED mode (no-op).
-    pub paused_warps: Vec<WarpId>,
-}
-```
-
-**Key insight**: REPLAY warps don't produce `ExecItem` work units because they don't execute rules - they apply pre-recorded patches. PAUSED warps produce nothing. Only LIVE warps generate rule execution work.
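-
-This mode-based partition can be illustrated with a stand-alone sketch (simplified types, not the real `MixedModeWorkPlan`), showing that only LIVE warps land in the rule-execution bucket:
-
-```rust
-use std::collections::BTreeMap;
-
-// Simplified stand-in for WarpRunMode.
-#[derive(Clone, Debug, PartialEq, Eq)]
-enum Mode { Live, Replay, Paused }
-
-// Split warps into (live, replay, paused) in canonical BTreeMap order.
-fn partition(modes: &BTreeMap<u64, Mode>) -> (Vec<u64>, Vec<u64>, Vec<u64>) {
-    let (mut live, mut replay, mut paused) = (vec![], vec![], vec![]);
-    for (id, mode) in modes {
-        match mode {
-            Mode::Live => live.push(*id),
-            Mode::Replay => replay.push(*id),
-            Mode::Paused => paused.push(*id),
-        }
-    }
-    (live, replay, paused)
-}
-```
-
-BTreeMap iteration order makes the partition deterministic regardless of insertion order.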
-
-### 3.8 Preventing Cross-Mode Contamination
-
-```rust
-impl Engine {
-    pub fn step_mixed_mode(&mut self) -> Result<MixedModeStepResult, EngineError> {
-        // 1. Build mode-aware work plan
-        let plan = build_mixed_mode_work_units(&self.timelines, &self.pending_by_warp);
-
-        // 2. Execute LIVE warps (parallel rule execution)
-        let live_deltas = if !plan.live_units.is_empty() {
-            execute_work_queue(&plan.live_units, self.worker_count, |warp_id| {
-                self.state.store(warp_id)
-            })
-        } else {
-            Vec::new()
-        };
-
-        // 3. Merge and commit LIVE deltas (per-warp)
-        let mut live_results = BTreeMap::new();
-        for warp_id in plan.live_units.iter().map(|u| u.warp_id).collect::<BTreeSet<_>>() {
-            let warp_delta = self.extract_warp_delta(&live_deltas, &warp_id);
-            let result = self.commit_warp(&warp_id, warp_delta)?;
-            live_results.insert(warp_id, result);
-        }
-
-        // 4. Execute REPLAY warps (apply recorded patches, verify hashes)
-        let mut replay_results = BTreeMap::new();
-        for warp_id in &plan.replay_warps {
-            let timeline = self.timelines.get_mut(warp_id).unwrap();
-            let store = self.state.store_mut(warp_id).unwrap();
-            let result = timeline.replay_step(store)?;
-            replay_results.insert(*warp_id, result);
-        }
-
-        // 5. PAUSED warps: no-op
-
-        Ok(MixedModeStepResult {
-            live_results,
-            replay_results,
-            paused: plan.paused_warps,
-        })
-    }
-}
-```
-
-### 3.9 Per-Warp Commit with Isolated Timeline
-
-```rust
-impl Engine {
-    fn commit_warp(
-        &mut self,
-        warp_id: &WarpId,
-        delta: TickDelta,
-    ) -> Result<WarpCommitResult, EngineError> {
-        let timeline = self.timelines.get_mut(warp_id)
-            .ok_or(EngineError::UnknownWarp(*warp_id))?;
-        let store = self.state.store_mut(warp_id)
-            .ok_or(EngineError::UnknownWarp(*warp_id))?;
-
-        // Build patch from delta
-        let patch = WarpTickPatchV1::from_delta(delta, self.policy_id)?;
-
-        // Apply patch
-        patch.apply_to_store(store)?;
-
-        // Compute hashes
-        let state_root = compute_state_root_for_warp(store, warp_id);
-        let patch_digest = patch.digest();
-        let parents = timeline.last_snapshot
-            .as_ref()
-            .map(|s| vec![s.hash])
-            .unwrap_or_default();
-        let commit_hash = compute_commit_hash_v2(
-            state_root,
-            &parents,
-            patch_digest,
-            self.policy_id,
-        );
-
-        // Build snapshot
-        let snapshot = Snapshot {
-            warp_id: *warp_id,
-            hash: commit_hash,
-            state_root,
-            patch_digest,
-            parents,
-            // ... other fields
-        };
-
-        // Record in warp's timeline (NOT global!)
-        let receipt = self.build_receipt_for_warp(warp_id)?;
-        timeline.tick_history.push((snapshot.clone(), receipt, patch));
-        timeline.last_snapshot = Some(snapshot.clone());
-        timeline.now.tick_index += 1;
-        timeline.now.commit_hash = Some(commit_hash);
-
-        Ok(WarpCommitResult {
-            snapshot,
-            tick_index: timeline.now.tick_index,
-        })
-    }
-}
-```
-
----
-
-## 4. Implementation Plan
-
-### Phase 1: Core Types (1 commit)
-
-**Files to create/modify:**
-
-| Action | File | Changes |
-| ---------- | --------------------------------------- | ----------------------------------------------------------------------- |
-| **NEW** | `crates/warp-core/src/warp_timeline.rs` | `WarpNow`, `WarpRunMode`, `ReplaySource`, `WarpTimeline`, `ReplayError` |
-| **MODIFY** | `crates/warp-core/src/lib.rs` | Export new module |
-
-**Tests**: Unit tests for `WarpRunMode` transitions, `WarpTimeline` basic ops.
-
-### Phase 2: Per-Warp Timeline Storage (1 commit)
-
-**Files to modify:**
-
-| Action | File | Changes |
-| ---------- | ------------------------------------- | ----------------------------------------------------------------------------------------- |
-| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Add `timelines: BTreeMap` field; migrate `tick_history` to per-warp |
-| **MODIFY** | `crates/warp-core/src/warp_state.rs` | Add `timeline()` accessor |
-
-**Tests**: Verify existing tests pass with new storage layout.
-
-### Phase 3: Mode-Aware Intent Ingestion (1 commit)
-
-**Files to modify:**
-
-| Action | File | Changes |
-| ---------- | ------------------------------------- | ---------------------------------------------------------------------------------------------- |
-| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Check `timeline.can_ingest()` in `ingest_intent()`; add `EngineError::WarpNotAcceptingIntents` |
-
-**Tests**:
-
-- `test_replay_warp_rejects_new_intents`
-- `test_paused_warp_rejects_new_intents`
-- `test_live_warp_accepts_intents`
-
-### Phase 4: Mode-Aware Work Queue (1 commit)
-
-**Files to modify:**
-
-| Action | File | Changes |
-| ---------- | ------------------------------------- | -------------------------------------------------------- |
-| **MODIFY** | `crates/warp-core/src/boaw/exec.rs` | Add `MixedModeWorkPlan`, `build_mixed_mode_work_units()` |
-| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Implement `step_mixed_mode()` |
-
-**Tests**:
-
-- `test_live_warp_generates_work_units`
-- `test_replay_warp_no_work_units`
-- `test_paused_warp_no_work_units`
-
-### Phase 5: REPLAY Patch Application (1 commit)
-
-**Files to modify:**
-
-| Action | File | Changes |
-| ---------- | --------------------------------------- | ---------------------------------------------------------- |
-| **MODIFY** | `crates/warp-core/src/warp_timeline.rs` | Implement `WarpTimeline::replay_step()`, hash verification |
-| **MODIFY** | `crates/warp-core/src/tick_patch.rs` | Add `apply_to_store()` (single-warp variant) |
-
-**Tests**:
-
-- `test_replay_applies_recorded_patches`
-- `test_replay_detects_state_root_mismatch`
-- `test_replay_detects_commit_hash_mismatch`
-
-### Phase 6: Per-Warp Commit (1 commit)
-
-**Files to modify:**
-
-| Action | File | Changes |
-| ---------- | ------------------------------------- | ----------------------------------------------------- |
-| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Implement `commit_warp()`, per-warp snapshot creation |
-| **MODIFY** | `crates/warp-core/src/snapshot.rs` | Add `warp_id` to `Snapshot` (or make warp-scoped) |
-
-**Tests**:
-
-- `test_commit_advances_warp_timeline`
-- `test_commit_hash_deterministic_per_warp`
-
-### Phase 7: Engine API Surface (1 commit)
-
-**Files to modify:**
-
-| Action | File | Changes |
-| ---------- | ------------------------------------- | --------------------------------------------------------- |
-| **MODIFY** | `crates/warp-core/src/engine_impl.rs` | Add `set_warp_mode()`, `get_warp_now()`, `start_replay()` |
-
-**Tests**:
-
-- `test_set_warp_mode_live_to_replay`
-- `test_set_warp_mode_replay_to_paused`
-- `test_start_replay_from_tick_zero`
-
-### Phase 8: Integration Tests (1 commit)
-
-**Files to create:**
-
-| Action | File | Changes |
-| ------- | ------------------------------------------------- | --------------------------- |
-| **NEW** | `crates/warp-core/tests/warp_time_sovereignty.rs` | Full integration test suite |
-
----
-
-## 5. Test Plan
-
-### File: `crates/warp-core/tests/warp_time_sovereignty.rs`
-
-| Test ID | Name | Description |
-| ------- | ------------------------------------------- | ---------------------------------------------------------------------------------------- |
-| **T1** | `test_live_and_replay_concurrent_isolation` | LIVE warp advances while REPLAY warp replays; neither affects the other |
-| **T2** | `test_replay_hash_chain_identity` | REPLAY produces identical `commit_hash` chains to recorded history (100 ticks, 10 seeds) |
-| **T3** | `test_live_worker_invariance_with_replay` | LIVE worker-count invariance holds during concurrent REPLAY |
-| **T4** | `test_replay_rejects_intents` | REPLAY warp returns `Err(WarpNotAcceptingIntents)` on intent ingestion |
-| **T5** | `test_mixed_mode_work_queue_determinism` | Mixed-mode execution deterministic across 50 shuffled ingress orderings |
-| **T6** | `test_replay_tripwire_nondet_injection` | **Tripwire**: fails if any nondet input leaks into REPLAY mode |
-| **T7** | `test_replay_completion_mode_transition` | REPLAY completion transitions mode to PAUSED (or LIVE if configured) |
-| **T8** | `test_multiple_replay_warps_isolation` | Multiple REPLAY warps don't interfere with each other |
-| **T9** | `test_cross_mode_commit_dag_independence` | LIVE and REPLAY warps have completely independent commit DAGs |
-| **T10** | `test_paused_warp_state_immutable` | PAUSED warp state completely unchanged across 10 engine steps |
-
-### Test Implementation Details
-
-```rust
-/// T1: LIVE warp advances while REPLAY warp replays - mutual isolation.
-#[test]
-fn test_live_and_replay_concurrent_isolation() {
-    // Setup: Engine with warp_a (LIVE) and warp_b (REPLAY)
-    // 1. Record 10 ticks for warp_b
-    // 2. Rewind warp_b to tick 0, set REPLAY mode targeting tick 5
-    // 3. Set warp_a to LIVE
-    // 4. Execute 5 mixed-mode steps
-    // Assert: warp_a advanced 5 ticks, warp_b replayed to tick 5
-    // Assert: warp_a's commit hashes are new, warp_b's match recorded history
-    // Assert: Neither warp's state was corrupted by the other
-}
-
-/// T6: Tripwire - nondet leak into REPLAY fails.
-#[test]
-fn test_replay_tripwire_nondet_injection() {
-    // Setup: Custom rule that attempts to inject nondet (e.g., thread::current().id())
-    // Record with clean rule, then replay
-    // Assert: If any nondet leaks, state_root mismatch detected
-    // This test FAILS if nondet enters replay path - that's the tripwire
-}
-```
-
-### Additional Test Files
-
-| File | Purpose |
-| ---------------------------------------------- | ------------------------------------------------------------- |
-| `crates/warp-core/tests/replay_determinism.rs` | Permutation tests (100+ seeds), cross-platform verification |
-| `crates/warp-core/tests/mode_transitions.rs` | Valid/invalid mode transitions, transition during active step |
-
----
-
-## 6. Engine API Surface
-
-### New Methods
-
-```rust
-impl Engine {
-    /// Set the execution mode for a warp.
-    ///
-    /// # Errors
-    /// - `UnknownWarp` if warp_id not found
-    /// - `InvalidModeTransition` if transition not allowed (e.g., during active step)
-    pub fn set_warp_mode(
-        &mut self,
-        warp_id: WarpId,
-        mode: WarpRunMode
-    ) -> Result<(), EngineError>;
-
-    /// Get the current temporal position of a warp.
-    pub fn get_warp_now(&self, warp_id: &WarpId) -> Option<&WarpNow>;
-
-    /// Start replay for a warp from its current position to target_tick.
-    ///
-    /// This is a convenience wrapper that:
-    /// 1. Validates target_tick is in recorded history
-    /// 2. Sets mode to Replay { target_tick, source: LocalLedger }
-    pub fn start_replay(
-        &mut self,
-        warp_id: WarpId,
-        target_tick: u64
-    ) -> Result<(), EngineError>;
-
-    /// Start replay from external patches (e.g., received from network).
-    pub fn start_replay_external(
-        &mut self,
-        warp_id: WarpId,
-        patches: Vec<WarpTickPatchV1>,
-    ) -> Result<(), EngineError>;
-
-    /// Execute one engine step with mixed-mode support.
-    pub fn step_mixed_mode(&mut self) -> Result<MixedModeStepResult, EngineError>;
-
-    /// Get all warps and their current modes.
-    pub fn warp_modes(&self) -> impl Iterator<Item = (&WarpId, &WarpRunMode)>;
-}
-```
-
-### Result Types
-
-```rust
-pub struct MixedModeStepResult {
-    /// Results for warps that were in LIVE mode.
-    pub live_results: BTreeMap<WarpId, WarpCommitResult>,
-    /// Results for warps that were in REPLAY mode.
-    pub replay_results: BTreeMap<WarpId, ReplayStepResult>,
-    /// Warps that were PAUSED (no-op).
-    pub paused: Vec<WarpId>,
-}
-
-pub struct WarpCommitResult {
-    pub snapshot: Snapshot,
-    pub tick_index: u64,
-}
-
-pub enum ReplayStepResult {
-    /// Advanced to the next tick.
-    Advanced { tick: u64 },
-    /// Reached target_tick, replay complete.
-    ReplayComplete,
-}
-```
-
----
-
-## 7. Scheduling Rules
-
-### Which Warps Run
-
-1. **LIVE warps**: Generate work units if they have pending rewrites
-2. **REPLAY warps**: Apply one recorded patch per step (no work units)
-3. **PAUSED warps**: Skipped entirely (no state change)
-
-### Mode Determination Per Step
-
-```text
-For each warp in canonical order (BTreeMap iteration):
-  match warp.mode:
-    Live →
-      if has_pending_rewrites(warp):
-        generate WorkUnits for this warp
-      else:
-        no-op this step
-
-    Replay { target_tick, source } →
-      if warp.tick_index < target_tick:
-        schedule replay_step for this warp
-      else:
-        replay complete, transition to PAUSED (or callback)
-
-    Paused →
-      no-op
-```
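-
-The REPLAY branch of this dispatch reduces to a single comparison. A self-contained sketch (illustrative names, not the real API):
-
-```rust
-/// Decide the per-step action for a REPLAY warp.
-/// Replay is complete once tick_index reaches target_tick,
-/// i.e. patches 0..target_tick-1 have all been applied.
-fn replay_action(tick_index: u64, target_tick: u64) -> &'static str {
-    if tick_index < target_tick {
-        "apply_patch"
-    } else {
-        "complete"
-    }
-}
-```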
-
-### Work Queue Execution Order
-
-1. All LIVE work units execute in parallel (existing `execute_work_queue`)
-2. LIVE deltas merged and committed per-warp
-3. REPLAY warps apply patches sequentially (per-warp, can be parallelized across warps)
-4. PAUSED warps skipped
-
----
-
-## 8. Time Travel / Rewind / Replay Selection
-
-### Required Inputs for REPLAY
-
-| Input | Source | Required |
-| ------------------ | ------------------------------------------------- | -------- |
-| `warp_id` | Caller | Yes |
-| `target_tick` | Caller | Yes |
-| `ReplaySource` | Caller chooses | Yes |
-| Recorded patches | `LocalLedger` or `External(Vec<WarpTickPatchV1>)` | Yes |
-| Initial state (U0) | Stored in `WarpTimeline.initial_store` | Auto |
-
-### Rewind Mechanism
-
-```rust
-impl WarpTimeline {
-    /// Rewind warp to tick 0 (U0 state) for replay.
-    pub fn rewind_to_origin(&mut self, store: &mut GraphStore) {
-        // Clone initial state back to active store
-        *store = self.initial_store.clone();
-        self.now.tick_index = 0;
-        self.now.commit_hash = None;
-        // Note: tick_history preserved for replay source
-    }
-
-    /// Rewind to specific tick (requires re-applying patches 0..tick).
-    pub fn rewind_to_tick(
-        &mut self,
-        store: &mut GraphStore,
-        tick: u64
-    ) -> Result<(), ReplayError> {
-        self.rewind_to_origin(store);
-        for i in 0..tick {
-            let patch = self.recorded_patch(i)
-                .ok_or(ReplayError::MissingRecordedPatch { tick: i })?;
-            patch.apply_to_store(store)?;
-        }
-        self.now.tick_index = tick;
-        self.now.commit_hash = if tick == 0 {
-            None
-        } else {
-            self.tick_history.get(tick as usize - 1)
-                .map(|(s, _, _)| s.hash)
-        };
-        Ok(())
-    }
-}
-```
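-
-The tick-to-commit-hash correspondence used above (tick 0 has no commit; tick t > 0 maps to `tick_history[t - 1]`) can be isolated as a small sketch, with hashes modeled as plain integers for illustration:
-
-```rust
-/// Look up the commit hash at a given tick position.
-/// history[i] holds the hash produced by patch i (history modeled as u64s).
-fn commit_hash_at(history: &[u64], tick: u64) -> Option<u64> {
-    if tick == 0 {
-        None // U0 has no commit
-    } else {
-        history.get(tick as usize - 1).copied()
-    }
-}
-```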
-
----
-
-## 9. Risks & Mitigations
-
-| Risk | Severity | Mitigation |
-| ----------------------------------------------------- | ------------ | ------------------------------------------------------------------------------------- |
-| **Per-warp timeline storage increases memory** | Medium | Use structural sharing for `GraphStore` snapshots; only store deltas |
-| **REPLAY hash mismatch debugging is hard** | Medium | Include tick index, expected vs actual hashes, and delta dump in `ReplayError` |
-| **Mode transition race conditions** | Low | Mode changes only allowed between steps; enforce via `&mut Engine` |
-| **Future forking complicates commit DAG**             | Low (future) | Design `parents: Vec<Hash>` now; collapse/merge is separate feature                    |
-| **Cross-warp portal operations during mixed modes** | Medium | Portal creation in LIVE warp that targets REPLAY warp must be blocked; add validation |
-| **Global `policy_id` shared across warps** | Low | Acceptable for now; future per-warp policy is out of scope |
-| **REPLAY from external source has no chain of trust** | Medium | External `ReplaySource` should require signature verification (future enhancement) |
-
----
-
-## 10. Out of Scope (Future Work)
-
-1. **Per-warp forking/branching** - Commit DAG supports multiple parents but implementation deferred
-2. **Collapse/merge across warps** - ADR-0007 Layer 7 specified but not implemented here
-3. **Privacy mode per-warp** - Mind vs Diagnostics modes are engine-global currently
-4. **Network-sourced REPLAY verification** - Signature verification for `ReplaySource::External`
-5. **Per-warp policy_id** - All warps share engine's `policy_id`
-
----
-
-## 11. Summary
-
-Per-warp time sovereignty is achievable with **minimal, composable changes** because the existing architecture already enforces per-warp isolation via:
-
-- `WarpId`-scoped keys in footprints
-- Per-unit `GraphView` resolution in `execute_work_queue()`
-- Separate `GraphStore` per warp in `WarpState`
-
-The main additions are:
-
-1. **WarpRunMode enum** - explicit mode tracking per warp
-2. **Per-warp timeline storage** - migrate global `tick_history` to per-warp
-3. **Mode-aware scheduling** - filter work queue by mode
-4. **REPLAY enforcement** - apply recorded patches instead of executing rules
-
-The design **preserves all existing determinism guarantees** and **does not block future forking/collapse features**.
-
----
-
-## Appendix A: File Change Summary
-
-| File | Action | LOC Estimate |
-| ------------------------------------------------- | ------- | ------------ |
-| `crates/warp-core/src/warp_timeline.rs` | **NEW** | ~300 |
-| `crates/warp-core/src/lib.rs` | MODIFY | +5 |
-| `crates/warp-core/src/engine_impl.rs` | MODIFY | +200 |
-| `crates/warp-core/src/boaw/exec.rs` | MODIFY | +50 |
-| `crates/warp-core/src/tick_patch.rs` | MODIFY | +30 |
-| `crates/warp-core/src/snapshot.rs` | MODIFY | +10 |
-| `crates/warp-core/src/warp_state.rs` | MODIFY | +20 |
-| `crates/warp-core/tests/warp_time_sovereignty.rs` | **NEW** | ~400 |
-| `crates/warp-core/tests/replay_determinism.rs` | **NEW** | ~200 |
-| `crates/warp-core/tests/mode_transitions.rs` | **NEW** | ~150 |
-| **Total** | | ~1365 |
-
----
-
-## Appendix B: Compile-Time vs Runtime Enforcement
-
-| Invariant | Enforcement | Mechanism |
-| --------------------------- | ---------------- | ------------------------------------------- |
-| REPLAY-001 (no intents) | **Runtime** | Check in `ingest_intent()` |
-| REPLAY-002 (patches only) | **Runtime** | Mode branch in `step_mixed_mode()` |
-| REPLAY-003 (hash match) | **Runtime** | Post-apply verification |
-| REPLAY-004 (no nondet) | **Compile-time** | ADR-0006 ban list + `ban-nondeterminism.sh` |
-| LIVE-003 (no cross-warp) | **Compile-time** | `WarpId` in all key types |
-| ISOLATION-002 (no aliasing) | **Runtime** | Per-unit `GraphView` resolution |
-| DETERMINISM-001 | **Runtime** | Canonical merge in `merge_deltas()` |
diff --git a/docs/archive/release-criteria.md b/docs/archive/release-criteria.md
deleted file mode 100644
index 817f6b67..00000000
--- a/docs/archive/release-criteria.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-
-# Release Criteria — Phase 0.5 → Phase 1
-
-Checklist for closing Phase 0.5 and starting Phase 1 implementation.
-
-## How to Use This Checklist
-
-- Treat each item as a gate: “done” means it is implemented **and** verified.
-- Link evidence (tests, docs, or CI runs) in the Phase 0.5 tracking issue.
-- If a requirement moves, update the checklist so it stays authoritative.
-
-## Required Criteria
-
-- [ ] Branch tree spec v0.5 implemented (roaring bitmaps, epochs, hashing).
-- [ ] Codex’s Baby Phase 0.5 features implemented (event envelope, bridge, backpressure).
-- [ ] Temporal bridge integrated with branch tree and CB.
-- [ ] Serialization protocol implemented with content-addressed blocks.
-- [ ] Replay CLI (`echo replay --verify`) passes golden hash suite.
-- [ ] Entropy observers and inspector packets verified.
-- [ ] Capability tokens and security envelopes enforced.
-- [ ] Determinism test suite green on Node, Chromium, WebKit.
-- [ ] Deterministic config loader produces `configHash`.
-- [ ] Plugin manifest loader validates capabilities and records `pluginsManifestHash`.
-- [ ] Inspector JSONL writer produces canonical frames.
-- [ ] Documentation index current (spec map).
-
-## Evidence Expectations (Examples)
-
-- Determinism suite: CI logs or `echo-dind-harness` transcript.
-- Replay CLI: golden hashes checked in `testdata/` with a reproducible runner.
-- Protocol gates: a spec doc + a passing conformance test.
-- Docs: `docs/meta/docs-index.md` updated with links to current specs.
-
-Once all items checked, open Phase 1 milestone and migrate outstanding tasks to implementation backlog.
diff --git a/docs/archive/rfc/mat-bus-finish.md b/docs/archive/rfc/mat-bus-finish.md
deleted file mode 100644
index 1202acd4..00000000
--- a/docs/archive/rfc/mat-bus-finish.md
+++ /dev/null
@@ -1,662 +0,0 @@
-
-
-
-
-# RFC: MaterializationBus Completion
-
-**Status:** Complete
-**Date:** 2026-01-17
-**Branch:** `materialization-bus`
-**Depends on:** ADR-0003-Materialization-Bus
-
-## Summary
-
-This RFC completes the MaterializationBus implementation with three deliverables:
-
-1. **EmissionPort trait** — Hexagonal boundary for rule emissions
-2. **ReduceOp enum** — Built-in deterministic reduce operations (no user functions)
-3. **Cross-platform determinism tests** — GitHub Actions + DIND harness
-
----
-
-## 1. EmissionPort Trait (Hexagonal Architecture)
-
-### Problem
-
-The current plan passes `&MaterializationBus` directly to rule executors. This:
-
-- Couples rules to concrete implementation
-- Exposes internal `EmitKey` construction to callers
-- Makes testing harder (can't mock the bus)
-- Violates hexagonal/ports-and-adapters principles
-
-### Solution
-
-Introduce an `EmissionPort` trait as the driven port. Rules depend on the trait; the engine provides a scoped adapter.
-
-```rust
-// crates/warp-core/src/materialization/emission_port.rs
-
-/// Driven port for rule emissions (what rules see).
-///
-/// Rules emit to channels via this trait. The engine provides a scoped
-/// implementation that automatically constructs EmitKeys from execution context.
-pub trait EmissionPort {
-    /// Emit data to a channel.
-    ///
-    /// The implementation handles EmitKey construction. Callers only provide
-    /// channel and payload.
-    fn emit(&self, channel: ChannelId, data: Vec<u8>);
-
-    /// Emit with explicit subkey (for multi-emission rules).
-    ///
-    /// Use when a single rule invocation needs to emit multiple values to
-    /// the same channel. The subkey disambiguates emissions.
-    fn emit_with_subkey(&self, channel: ChannelId, subkey: u32, data: Vec<u8>);
-}
-```
-
-### Scoped Adapter
-
-The engine creates a `ScopedEmitter` for each rule execution:
-
-```rust
-// crates/warp-core/src/materialization/scoped_emitter.rs
-
-/// Scoped adapter that auto-fills EmitKey from execution context.
-///
-/// Created by the engine for each rule invocation. Captures the scope hash
-/// and rule ID, preventing rules from forging keys.
-pub struct ScopedEmitter<'a> {
- bus: &'a MaterializationBus,
- scope_hash: Hash,
- rule_id: u32,
-}
-
-impl<'a> ScopedEmitter<'a> {
- /// Create a new scoped emitter for a rule execution.
- pub fn new(bus: &'a MaterializationBus, scope_hash: Hash, rule_id: u32) -> Self {
- Self { bus, scope_hash, rule_id }
- }
-}
-
-impl EmissionPort for ScopedEmitter<'_> {
- fn emit(&self, channel: ChannelId, data: Vec<u8>) {
- let key = EmitKey::new(self.scope_hash, self.rule_id);
- // `MaterializationBus::emit` returns a Result; a duplicate key here is
- // a rule bug, so the scoped adapter fails loudly instead of ignoring it.
- self.bus.emit(channel, key, data).expect("duplicate EmitKey");
- }
-
- fn emit_with_subkey(&self, channel: ChannelId, subkey: u32, data: Vec<u8>) {
- let key = EmitKey::with_subkey(self.scope_hash, self.rule_id, subkey);
- self.bus.emit(channel, key, data).expect("duplicate EmitKey");
- }
-}
-```
-
-### Engine Integration
-
-```rust
-// In Engine::execute_rule() or similar
-
-let emitter = ScopedEmitter::new(&self.bus, scope_node.hash(), rule.id());
-rule.execute(context, &emitter)?;
-```
-
-### Testing
-
-Rules can be tested with a mock port:
-
-```rust
-#[cfg(test)]
-struct MockEmissionPort {
- emissions: RefCell<Vec<(ChannelId, Vec<u8>)>>,
-}
-
-impl EmissionPort for MockEmissionPort {
- fn emit(&self, channel: ChannelId, data: Vec<u8>) {
- self.emissions.borrow_mut().push((channel, data));
- }
- // ...
-}
-```
-
-### Duplicate EmitKey Rejection
-
-**Policy: Reject duplicate (channel, EmitKey) pairs. Always.**
-
-If a rule emits twice to the same channel with the same EmitKey, the bus returns
-`DuplicateEmission` error. This catches rules that iterate non-deterministic
-sources (e.g., `HashMap`) without proper subkey differentiation.
-
-```rust
-/// Error returned when the same (channel, EmitKey) is emitted twice.
-#[derive(Debug, Clone, PartialEq, Eq)]
-pub struct DuplicateEmission {
- pub channel: ChannelId,
- pub key: EmitKey,
-}
-
-impl MaterializationBus {
- /// Emit data to a channel. Returns error if key already exists.
- pub fn emit(
- &self,
- channel: ChannelId,
- key: EmitKey,
- data: Vec<u8>,
- ) -> Result<(), DuplicateEmission> {
- use std::collections::btree_map::Entry;
-
- let mut pending = self.pending.borrow_mut();
- let channel_map = pending.entry(channel).or_default();
-
- match channel_map.entry(key) {
- Entry::Vacant(e) => {
- e.insert(data);
- Ok(())
- }
- Entry::Occupied(_) => Err(DuplicateEmission { channel, key }),
- }
- }
-}
-```
-
-**Why reject even if payloads are identical?**
-
-Allowing "identical payload = OK" encourages sloppy code that emits redundantly.
-Then someone changes a field and tests fail mysteriously. Rejecting always forces
-rule authors to think: "Am I iterating deterministically? Do I need unique subkeys?"
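
A minimal standalone sketch of the intended fix (the names here are illustrative, not the real bus API): iterate a deterministically ordered container such as `BTreeMap` and derive a unique subkey per emission from the iteration index:

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: assign each emission a unique, deterministic subkey
// by enumerating a canonically ordered container. BTreeMap iterates in key
// order, unlike HashMap, so the enumeration index is replay-stable.
fn subkeyed_emissions(items: &BTreeMap<String, Vec<u8>>) -> Vec<(u32, Vec<u8>)> {
    items
        .iter()
        .enumerate()
        // The index is stable because BTreeMap iteration order is canonical.
        .map(|(i, (_name, data))| (i as u32, data.clone()))
        .collect()
}

fn main() {
    let mut items = BTreeMap::new();
    items.insert("b".to_string(), vec![2u8]);
    items.insert("a".to_string(), vec![1u8]);
    // "a" sorts before "b", so it gets subkey 0 regardless of insertion order.
    assert_eq!(subkeyed_emissions(&items), vec![(0, vec![1u8]), (1, vec![2u8])]);
    println!("ok");
}
```

Each `(channel, EmitKey)` pair is then distinct, so the bus accepts all emissions.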
-
-### Files to Create/Modify
-
-| File | Action |
-| -------------------------------------------------------- | ------------------------------------------------- |
-| `crates/warp-core/src/materialization/emission_port.rs` | **Create** — trait definition |
-| `crates/warp-core/src/materialization/scoped_emitter.rs` | **Create** — adapter implementation |
-| `crates/warp-core/src/materialization/mod.rs` | **Modify** — export new types |
-| `crates/warp-core/src/materialization/bus.rs` | **Modify** — add DuplicateEmission, update emit() |
-| `crates/warp-core/src/engine.rs` (or equivalent) | **Modify** — create ScopedEmitter per rule |
-
----
-
-## 2. ReduceOp Enum (Built-in Deterministic Ops)
-
-### Problem
-
-The current `ChannelPolicy::Reduce { join_fn_id }` design assumes a join function registry where users register merge functions by ID. This is a determinism landmine:
-
-- User functions may not be commutative/associative
-- Function lookup adds indirection and potential for error
-- Can't verify correctness at compile time
-- Opens door to non-deterministic user code
-
-### Solution
-
-Replace `join_fn_id` with a closed enum of built-in reduce operations.
-
-**IMPORTANT: Not all reduce ops are commutative.** They fall into two categories:
-
-| Category | Ops | Property |
-| ----------------------- | -------------------------------------- | ------------------------------------------------ |
-| **Commutative Monoids** | `Sum`, `Max`, `Min`, `BitOr`, `BitAnd` | Order doesn't matter: `a ⊕ b = b ⊕ a` |
-| **Order-Dependent** | `First`, `Last`, `Concat` | Deterministic via EmitKey order, NOT commutative |
-
-Both categories are **deterministic** (same inputs → same output), but only commutative ops are **permutation-invariant** at the value level. Order-dependent ops rely on the canonical EmitKey ordering.
-
-```rust
-// crates/warp-core/src/materialization/reduce_op.rs
-
-/// Built-in reduce operations for channel coalescing.
-///
-/// # Algebraic Categories
-///
-/// **Commutative monoids** (permutation-invariant):
-/// - `Sum`, `Max`, `Min`, `BitOr`, `BitAnd`
-/// - Result is identical regardless of emission order
-///
-/// **Order-dependent** (deterministic via EmitKey order):
-/// - `First`, `Last`, `Concat`
-/// - Result depends on canonical EmitKey ordering
-/// - NOT commutative — do not claim they are!
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
-pub enum ReduceOp {
- // ─── COMMUTATIVE MONOIDS ───────────────────────────────────────────
-
- /// Sum all values as little-endian u64.
- /// Empty input → `[0u8; 8]` (zero).
- Sum,
-
- /// Take maximum value (lexicographic byte comparison).
- /// Empty input → `[]` (empty vec).
- Max,
-
- /// Take minimum value (lexicographic byte comparison).
- /// Empty input → `[]` (empty vec).
- Min,
-
- /// Bitwise OR all values.
- /// Shorter values are zero-padded on the right.
- /// Empty input → `[]` (empty vec).
- BitOr,
-
- /// Bitwise AND all values.
- /// Result length = minimum input length (intersection semantics).
- /// Empty input → `[]` (empty vec).
- BitAnd,
-
- // ─── ORDER-DEPENDENT (NOT COMMUTATIVE) ─────────────────────────────
-
- /// Take first value by EmitKey order.
- /// Empty input → `[]` (empty vec).
- /// WARNING: Not commutative. Depends on canonical key ordering.
- First,
-
- /// Take last value by EmitKey order.
- /// Empty input → `[]` (empty vec).
- /// WARNING: Not commutative. Depends on canonical key ordering.
- Last,
-
- /// Concatenate all values in EmitKey order.
- /// Empty input → `[]` (empty vec).
- /// WARNING: Not commutative. Order matters for result bytes.
- Concat,
-}
-
-impl ReduceOp {
- /// Returns true if this op is a commutative monoid (permutation-invariant).
- pub const fn is_commutative(&self) -> bool {
- matches!(self, Self::Sum | Self::Max | Self::Min | Self::BitOr | Self::BitAnd)
- }
-}
-```
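
To make the two categories concrete, a small standalone sketch (plain `u64`s and byte vectors, independent of the types above) shows that a sum is permutation-invariant while concatenation is not:

```rust
// Standalone illustration of the two algebraic categories:
// Sum is a commutative monoid — any emission order yields the same value.
// Concat is order-dependent — it is deterministic only because the bus
// feeds values in canonical EmitKey order.
fn sum(values: &[u64]) -> u64 {
    values.iter().sum()
}

fn concat(values: &[Vec<u8>]) -> Vec<u8> {
    values.iter().flatten().copied().collect()
}

fn main() {
    // Permutation-invariant: 1+2+3 == 3+1+2.
    assert_eq!(sum(&[1, 2, 3]), sum(&[3, 1, 2]));

    // Order-dependent: [1]++[2] != [2]++[1].
    assert_ne!(concat(&[vec![1], vec![2]]), concat(&[vec![2], vec![1]]));
    println!("ok");
}
```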
-
-### Updated ChannelPolicy
-
-```rust
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
-pub enum ChannelPolicy {
- /// All emissions in EmitKey order, length-prefixed.
- #[default]
- Log,
-
- /// Error if more than one emission.
- StrictSingle,
-
- /// Reduce via built-in operation.
- Reduce(ReduceOp),
-}
-```
-
-### Implementation
-
-```rust
-impl ReduceOp {
- /// Apply this reduce operation to a set of values.
- ///
- /// Values are provided in EmitKey order (required for First/Last/Concat).
- /// Returns the reduced result.
- ///
- /// # Empty Input Behavior
- ///
- /// All ops return `[]` (empty vec) on empty input, EXCEPT:
- /// - `Sum` returns `[0u8; 8]` (zero as u64 LE)
- ///
- /// This is intentional: empty input means "nothing to reduce."
- pub fn apply(self, values: impl IntoIterator<Item = Vec<u8>>) -> Vec<u8> {
- let mut iter = values.into_iter().peekable();
-
- // Handle empty input uniformly (except Sum)
- if iter.peek().is_none() {
- return match self {
- Self::Sum => vec![0u8; 8], // Identity for addition
- _ => Vec::new(), // "Nothing to reduce"
- };
- }
-
- match self {
- // ─── COMMUTATIVE MONOIDS ───────────────────────────────────
-
- Self::Sum => {
- let sum: u64 = iter
- .map(|v| {
- let mut buf = [0u8; 8];
- let len = v.len().min(8);
- buf[..len].copy_from_slice(&v[..len]);
- u64::from_le_bytes(buf)
- })
- .sum();
- sum.to_le_bytes().to_vec()
- }
-
- Self::Max => iter.max().unwrap(), // unwrap safe: checked non-empty
-
- Self::Min => iter.min().unwrap(), // unwrap safe: checked non-empty
-
- Self::BitOr => {
- iter.reduce(|acc, v| bitwise_or(&acc, &v)).unwrap()
- }
-
- Self::BitAnd => {
- iter.reduce(|acc, v| bitwise_and(&acc, &v)).unwrap()
- }
-
- // ─── ORDER-DEPENDENT (EmitKey order matters) ───────────────
-
- Self::First => iter.next().unwrap(), // unwrap safe: checked non-empty
-
- Self::Last => iter.last().unwrap(), // unwrap safe: checked non-empty
-
- Self::Concat => iter.flatten().collect(),
- }
- }
-}
-
-/// Bitwise OR with zero-padding for shorter operand.
-fn bitwise_or(a: &[u8], b: &[u8]) -> Vec<u8> {
- let len = a.len().max(b.len());
- let mut result = vec![0u8; len];
- for (i, byte) in result.iter_mut().enumerate() {
- let av = a.get(i).copied().unwrap_or(0);
- let bv = b.get(i).copied().unwrap_or(0);
- *byte = av | bv;
- }
- result
-}
-
-/// Bitwise AND with truncation to shorter operand (intersection semantics).
-fn bitwise_and(a: &[u8], b: &[u8]) -> Vec<u8> {
- let len = a.len().min(b.len());
- (0..len).map(|i| a[i] & b[i]).collect()
-}
-```
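
As a self-contained usage sketch of the variable-length rules above (the helpers are re-declared here so the example compiles on its own):

```rust
// Copies of the helpers above, demonstrating the variable-length semantics:
// OR zero-pads to the longer operand, AND truncates to the shorter one
// (intersection semantics).
fn bitwise_or(a: &[u8], b: &[u8]) -> Vec<u8> {
    let len = a.len().max(b.len());
    (0..len)
        .map(|i| a.get(i).copied().unwrap_or(0) | b.get(i).copied().unwrap_or(0))
        .collect()
}

fn bitwise_and(a: &[u8], b: &[u8]) -> Vec<u8> {
    let len = a.len().min(b.len());
    (0..len).map(|i| a[i] & b[i]).collect()
}

fn main() {
    // OR: the shorter operand is zero-padded, so the result has two bytes.
    assert_eq!(bitwise_or(&[0b0001], &[0b0010, 0b1000]), vec![0b0011, 0b1000]);
    // AND: the result is truncated to the shorter operand's length.
    assert_eq!(bitwise_and(&[0b0011, 0xFF], &[0b0010]), vec![0b0010]);
    println!("ok");
}
```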
-
-### Files to Create/Modify
-
-| File | Action |
-| ------------------------------------------------------- | ----------------------------------------------- |
-| `crates/warp-core/src/materialization/reduce_op.rs` | **Create** — enum + apply() |
-| `crates/warp-core/src/materialization/channel.rs` | **Modify** — update ChannelPolicy |
-| `crates/warp-core/src/materialization/bus.rs` | **Modify** — call ReduceOp::apply() in finalize |
-| `crates/warp-core/tests/materialization_determinism.rs` | **Add** — reduce op tests |
-
----
-
-## 3. Cross-Platform Determinism Tests
-
-### Problem
-
-MaterializationBus must produce identical output across:
-
-- macOS (dev machines)
-- Linux (CI, production)
-- WASM (browser runtime)
-
-Current tests run only on the host platform.
-
-### Solution
-
-Two-layer testing:
-
-| Layer | Environment | Trigger | Purpose |
-| ------------------ | ---------------- | ------------------------------ | ------------------------------ |
-| **DIND** | Docker-in-Docker | `cargo xtask dind-determinism` | Local dev, fast iteration |
-| **GitHub Actions** | Native runners | Push/PR | Gate merges, real environments |
-
-### 3.1 DIND Harness Extension
-
-Extend existing DIND test harness to include materialization digest:
-
-```rust
-// crates/echo-dind-tests/src/lib.rs
-
-/// Output from a determinism test run.
-#[derive(Debug, Serialize, Deserialize)]
-pub struct DeterminismOutput {
- /// State hash after N ticks.
- pub state_hash: String,
- /// Tick receipt hashes.
- pub receipt_hashes: Vec<String>,
- /// NEW: Materialization digest (hash of all finalized frames).
- pub materialization_digest: String,
-}
-```
-
-The test runs the same scenario on:
-
-1. Host (macOS/Linux)
-2. Docker Linux container
-3. WASM via wasm-pack
-
-All three must produce identical `materialization_digest`.
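
A sketch of the comparison logic, with `std`'s `DefaultHasher` standing in for BLAKE3 so the example is dependency-free (the real harness hashes concatenated, length-prefixed frame bytes with BLAKE3; `DefaultHasher` is not itself cross-platform stable and is used here only to illustrate the shape of the check):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Each platform produces a digest over its finalized frames; the harness
// then requires every platform's digest to be identical.
fn digest(frames: &[Vec<u8>]) -> u64 {
    let mut h = DefaultHasher::new();
    for frame in frames {
        // Length-prefix each frame so concatenation is unambiguous.
        (frame.len() as u64).hash(&mut h);
        frame.hash(&mut h);
    }
    h.finish()
}

fn all_match(digests: &[u64]) -> bool {
    digests.windows(2).all(|w| w[0] == w[1])
}

fn main() {
    let frames = vec![vec![1u8, 2], vec![3u8]];
    let host = digest(&frames);
    let docker = digest(&frames);
    let wasm = digest(&frames);
    assert!(all_match(&[host, docker, wasm]));
    println!("ok");
}
```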
-
-### 3.2 GitHub Actions Workflow
-
-```yaml
-# .github/workflows/determinism.yml
-
-name: Determinism
-
-on:
- push:
- branches: [main]
- pull_request:
- branches: [main]
-
-jobs:
- determinism-matrix:
- strategy:
- matrix:
- os: [ubuntu-latest, macos-latest]
- include:
- - os: ubuntu-latest
- target: x86_64-unknown-linux-gnu
- - os: macos-latest
- target: x86_64-apple-darwin
-
- runs-on: ${{ matrix.os }}
-
- steps:
- - uses: actions/checkout@v4
-
- - name: Install Rust
- uses: dtolnay/rust-toolchain@stable
- with:
- targets: ${{ matrix.target }},wasm32-unknown-unknown
-
- - name: Install wasm-pack
- run: cargo install wasm-pack
-
- - name: Run determinism tests
- run: cargo test -p warp-core --test materialization_determinism
-
- - name: Run WASM determinism tests
- run: wasm-pack test --node crates/warp-core
-
- - name: Capture materialization digest
- id: digest
- run: |
- DIGEST=$(cargo run -p echo-dind-tests --bin capture-digest)
- echo "digest=$DIGEST" >> $GITHUB_OUTPUT
- echo "$DIGEST" > digest.txt
-
- - name: Upload digest artifact
- uses: actions/upload-artifact@v4
- with:
- name: digest-${{ matrix.os }}
- path: digest.txt
-
- verify-cross-platform:
- needs: determinism-matrix
- runs-on: ubuntu-latest
- steps:
- - name: Download all digests
- uses: actions/download-artifact@v4
-
- - name: Compare digests
- run: |
- LINUX=$(cat digest-ubuntu-latest/digest.txt)
- MACOS=$(cat digest-macos-latest/digest.txt)
-
- if [ "$LINUX" != "$MACOS" ]; then
- echo "DETERMINISM FAILURE: Linux and macOS produced different digests"
- echo "Linux: $LINUX"
- echo "macOS: $MACOS"
- exit 1
- fi
-
- echo "Cross-platform determinism verified: $LINUX"
-```
-
-### 3.3 Local DIND Command
-
-```bash
-# Run locally before pushing
-cargo xtask dind-determinism
-
-# Runs:
-# 1. Native test → captures digest
-# 2. Docker test → captures digest
-# 3. WASM test → captures digest
-# 4. Compares all three
-```
-
-### Files to Create/Modify
-
-| File | Action |
-| -------------------------------------------------- | ----------------------------------------- |
-| `.github/workflows/determinism.yml` | **Create** — CI workflow |
-| `crates/echo-dind-tests/src/lib.rs` | **Modify** — add materialization_digest |
-| `crates/echo-dind-tests/src/bin/capture-digest.rs` | **Create** — digest capture binary |
-| `xtask/src/main.rs` | **Modify** — add dind-determinism command |
-
----
-
-## Implementation Order
-
-```text
-Phase 1: EmissionPort (unblocks engine integration)
-├── Create emission_port.rs
-├── Create scoped_emitter.rs
-├── Update mod.rs exports
-└── Add unit tests
-
-Phase 2: ReduceOp (completes bus semantics)
-├── Create reduce_op.rs
-├── Update channel.rs (ChannelPolicy)
-├── Update bus.rs (finalize with reduce)
-└── Add reduce tests to determinism suite
-
-Phase 3: Cross-Platform Tests (gates merges)
-├── Extend DIND harness
-├── Create GitHub workflow
-├── Add xtask command
-└── Verify on first PR
-```
-
-## Test Plan: "SPEC is reSPECted"
-
-Comprehensive test suite ensuring the spec cannot lie.
-
-### Tier 1 — EmitKey Correctness + Wire Encoding
-
-| Test | What It Proves |
-| ------------------------------------------------- | -------------------------------------------------------- |
-| `emit_key_ord_is_lexicographic_scope_rule_subkey` | Ordering matches spec |
-| `emit_key_wire_encoding_is_40_bytes_no_padding` | bytes[0..32]=scope, [32..36]=rule LE, [36..40]=subkey LE |
-| `emit_key_roundtrip_wire` | encode → decode → equals |
-| `emit_key_subkey_from_hash_is_deterministic` | Same input → same u32 |
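
The 40-byte layout those tests pin down can be sketched as a standalone encode/decode pair (field names are illustrative, not the real `EmitKey` API):

```rust
// Sketch of the wire layout asserted by the Tier 1 tests:
// bytes[0..32] = scope hash, [32..36] = rule id LE, [36..40] = subkey LE.
fn encode(scope: &[u8; 32], rule_id: u32, subkey: u32) -> [u8; 40] {
    let mut out = [0u8; 40];
    out[0..32].copy_from_slice(scope);
    out[32..36].copy_from_slice(&rule_id.to_le_bytes());
    out[36..40].copy_from_slice(&subkey.to_le_bytes());
    out
}

fn decode(bytes: &[u8; 40]) -> ([u8; 32], u32, u32) {
    let mut scope = [0u8; 32];
    scope.copy_from_slice(&bytes[0..32]);
    let rule_id = u32::from_le_bytes(bytes[32..36].try_into().unwrap());
    let subkey = u32::from_le_bytes(bytes[36..40].try_into().unwrap());
    (scope, rule_id, subkey)
}

fn main() {
    let scope = [7u8; 32];
    let wire = encode(&scope, 42, 9);
    // Roundtrip: encode then decode recovers all three fields.
    assert_eq!(decode(&wire), (scope, 42, 9));
    println!("ok");
}
```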
-
-### Tier 2 — Bus Duplicate Rejection
-
-| Test | What It Proves |
-| --------------------------------------------------- | -------------------------------------------------- |
-| `bus_rejects_duplicate_key_same_channel` | (ch, key, A) then (ch, key, B) → DuplicateEmission |
-| `bus_allows_same_key_different_channels` | (ch1, key) and (ch2, key) both OK |
-| `bus_rejects_duplicate_key_even_if_bytes_identical` | No "identical payload = OK" loophole |
-
-### Tier 3 — Permutation Invariance ("SPEC Police")
-
-| Test | What It Proves |
-| ----------------------------------------------- | ---------------------------------- |
-| `log_finalize_is_permutation_invariant_small_n` | All N! orderings → identical bytes |
-| `bus_channel_iteration_is_canonical` | Channels in BTreeMap order |
-| `bus_log_preserves_all_emissions_no_drops` | count(output) == count(input) |
-
-### Tier 4 — ReduceOp Algebra
-
-**Commutative ops (must be permutation-invariant):**
-
-| Test | What It Proves |
-| ------------------------------------------- | ------------------------------ |
-| `reduce_sum_commutative_associative` | All permutations → same result |
-| `reduce_max_min_are_commutative` | Byte-lex comparison is stable |
-| `reduce_bitor_commutative_variable_length` | Zero-padding semantics correct |
-| `reduce_bitand_commutative_variable_length` | Truncation semantics correct |
-
-**Order-dependent ops (NOT commutative, deterministic via EmitKey):**
-
-| Test | What It Proves |
-| ------------------------------------------- | ------------------------------ |
-| `reduce_first_picks_first_in_emitkey_order` | Smallest key wins |
-| `reduce_last_picks_last_in_emitkey_order` | Largest key wins |
-| `reduce_concat_matches_emitkey_order` | Output = concat(sorted by key) |
-
-**Truth serum:**
-
-| Test | What It Proves |
-| ----------------------------------------------- | ---------------------------------- |
-| `reduce_op_commutativity_table_is_honest` | `is_commutative()` matches reality |
-| `reduce_empty_input_returns_specified_identity` | Sum→[0;8], others→[] |
-
-### Tier 5 — Engine Integration
-
-| Test | What It Proves |
-| ------------------------------------------------ | -------------------------------- |
-| `engine_log_emissions_stable_across_apply_order` | Rewrite order doesn't matter |
-| `engine_strict_single_deterministic_failure` | Same error signature both orders |
-| `engine_reduce_sum_stable_across_apply_order` | Reduced sum identical |
-| `engine_emits_only_post_commit` | Port empty before commit |
-
-### Tier 6 — Cross-Platform Digest
-
-| Test | What It Proves |
-| ---------------------------------------------------- | -------------------------------- |
-| `determinism_output_includes_materialization_digest` | Harness writes digest |
-| `cross_platform_digest_matches_linux_macOS_wasm` | All platforms identical |
-| `scope_hash_is_content_hash_not_id_hash` | Equivalent stores → same EmitKey |
-
----
-
-## Open Questions
-
-1. **WASM target for CI** — `wasm32-unknown-unknown` or `wasm32-wasi`? Recommend `unknown-unknown` for browser purity.
-
-2. **Reduce op extensibility** — Should we ever allow user-defined reduce ops? **NO.** Use `Log` and reduce client-side.
-
-3. **Digest algorithm** — BLAKE3 of concatenated frame bytes. Simple, no Merkle tree needed.
-
----
-
-## Success Criteria
-
-- [x] Rules emit via `EmissionPort` trait, not direct bus access
-- [x] Duplicate (channel, EmitKey) pairs rejected with `DuplicateEmission`
-- [x] `ChannelPolicy::Reduce(ReduceOp)` replaces `join_fn_id`
-- [x] All 8 `ReduceOp` variants implemented with `is_commutative()` classification
-- [x] Empty-input behavior: Sum→[0;8], all others→[]
-- [x] All Tier 1-5 tests passing
-- [x] GitHub Actions workflow passes on PR
-- [x] `cargo xtask dind` passes locally
-- [x] Cross-platform digest match verified in CI (weekly schedule)
-
----
-
-## Revision History
-
-| Date | Change |
-| ---------- | --------------------------------------------------------------------- |
-| 2026-01-17 | Initial draft |
-| 2026-01-17 | Fixed ReduceOp algebra claims (First/Last/Concat are NOT commutative) |
-| 2026-01-17 | Added duplicate EmitKey rejection policy |
-| 2026-01-17 | Specified empty-input behavior (Sum→[0;8], others→[]) |
-| 2026-01-17 | Added comprehensive "SPEC is reSPECted" test plan |
-| 2026-01-17 | Phase 3 complete: 127 tests, CI workflow, xtask dind command |
diff --git a/docs/archive/roadmap-mwmr-mini-epic.md b/docs/archive/roadmap-mwmr-mini-epic.md
deleted file mode 100644
index ac71dc68..00000000
--- a/docs/archive/roadmap-mwmr-mini-epic.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
-
-# MWMR Concurrency Mini‑Epic Roadmap (Footprints, Reserve Gate, Telemetry)
-
-Status: Active • Owner: warp-core • Created: 2025-10-27
-
-## Outcomes
-
-- Enforce MWMR determinism via independence checks (footprints + ports + factor masks).
-- Keep the hot path zero‑overhead (compact u32 rule ids; domain‑separated family ids only at boundaries).
-- Prove commutation with property tests (N‑permutation) and add basic telemetry for conflict rates.
-
----
-
-## Phase 0.5 — Foundations (Done / In‑Progress)
-
-- [x] Footprint type with ports and factor mask (IdSet/PortSet; deterministic intersects)
-- [x] RewriteRule surface extended with `compute_footprint`, `factor_mask`, `ConflictPolicy`
-- [x] PendingRewrite carries `footprint` + `phase`
-- [x] Property test: 2 independent motion rewrites commute (equal snapshot hash)
-- [x] Spec doc: `docs/spec-mwmr-concurrency.md`
-
----
-
-## Phase 1 — Reservation Gate & Compact IDs
-
-- [x] CompactRuleId(u32) and rule table mapping family_id → compact id (in Engine)
-- [x] DeterministicScheduler::reserve(tx, &mut PendingRewrite) → bool (active frontier per tx)
-- [x] Engine commit() wires the reserve gate (execute only Reserved rewrites)
-- [x] Feature‑gated JSONL telemetry (reserved/conflict) with timestamp, tx_id, short rule id
-- [ ] Use CompactRuleId in PendingRewrite and internal execution paths (leave family id for ordering/disk/wire)
-
----
-
-## Phase 2 — Proof & Performance
-
-- [ ] Property test: N‑permutation commutation (N = 3..6 independent rewrites)
-- [ ] Reserve gate smoke tests (same PortKey ⇒ conflict; disjoint ports ⇒ reserve)
-- [ ] Criterion bench: independence checks (10/100/1k rewrites) — target < 1 ms @ 100
-- [ ] Telemetry counters per tick (conflict_rate, retry_count, reservation_latency_ms, epoch_flip_ms)
-- [ ] Add Retry with randomized backoff (behind flag) once telemetry lands; keep default Abort
-
----
-
-## Phase 3 — Rule Identity & Hot‑Load
-
-- [x] build.rs generates const family id for `rule:motion/update` (domain‑separated)
-- [ ] Generalize generator (src/gen/rule_ids.rs) and runtime assert test to catch drift
-- [ ] Rhai rule registration: `register_rule{name, match, exec, ?id, ?revision}`; engine computes if omitted
-- [ ] Revision ID = `blake3("rule-rev::canon-ast-v1" || canonical AST bytes)`
-
----
-
-## Phase 4 — Storage & Epochs (Scoping/Design)
-
-- [ ] Offset‑graph arena + mmap view (zero‑copy snapshots)
-- [ ] Double‑buffered planes (attachments/skeleton), lazy epoch flips, grace‑period reclamation
-- [ ] Optional Merkle overlays for partial verification
-
----
-
-## Guardrails & Invariants
-
-- Deterministic planning key = (scope_hash, family_id); execution may be parallel, ordering stays stable.
-- Footprint independence order: factor_mask → ports → edges → nodes; fail fast on ports.
-- Keep |L| ≤ 5–10; split rules or seed from rare types if larger.
-- Never serialize CompactRuleId; boundary formats carry family id + (optional) revision id.
-
----
-
-## Telemetry (dev feature)
-
-- Events: `reserved`, `conflict` (ts_micros, tx_id, rule_id_short)
-- Counters per tick: conflict_rate, retry_count, reservation_latency_ms, epoch_flip_ms, bitmap_blocks_checked
-
----
-
-## Links
-
-- Spec: `docs/spec-mwmr-concurrency.md`
-- Tests: `crates/warp-core/tests/footprint_independence_tests.rs`, `crates/warp-core/tests/property_commute_tests.rs`
-- Engine: `crates/warp-core/src/engine_impl.rs`, `crates/warp-core/src/scheduler.rs`
-- Build: `crates/warp-core/build.rs`
diff --git a/docs/archive/runtime-diagnostics-plan.md b/docs/archive/runtime-diagnostics-plan.md
deleted file mode 100644
index 15106fa9..00000000
--- a/docs/archive/runtime-diagnostics-plan.md
+++ /dev/null
@@ -1,64 +0,0 @@
-
-
-
-# Runtime Diagnostics Plan (Phase 0.5)
-
-Outlines logging, tracing, crash recovery, and inspector data streams for Echo runtime.
-
----
-
-## Logging Levels
-
-- `TRACE` – verbose diagnostics (disabled in production).
-- `DEBUG` – subsystem insights (branch tree, Codex’s Baby).
-- `INFO` – major lifecycle events (fork, merge, replay start).
-- `WARN` – recoverable anomalies (drop records, entropy spikes).
-- `ERROR` – determinism faults (capability denial, PRNG mismatch).
-
-Logs are structured JSON: `{ timestamp?, tick, branch, level, event, data }`. Timestamps optional and excluded from hashes.
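
An illustrative sketch of assembling such a line (field values are made up; the timestamp is omitted entirely rather than set to null, so log bytes stay out of the hashes):

```rust
// Hypothetical helper producing one structured JSON log line with the
// optional timestamp field left out, keeping the line hash-stable.
fn log_line(tick: u64, branch: &str, level: &str, event: &str, data: &str) -> String {
    format!(
        "{{\"tick\":{},\"branch\":\"{}\",\"level\":\"{}\",\"event\":\"{}\",\"data\":{}}}",
        tick, branch, level, event, data
    )
}

fn main() {
    let line = log_line(128, "main", "INFO", "fork", "{\"child\":\"b1\"}");
    assert_eq!(
        line,
        "{\"tick\":128,\"branch\":\"main\",\"level\":\"INFO\",\"event\":\"fork\",\"data\":{\"child\":\"b1\"}}"
    );
    println!("ok");
}
```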
-
----
-
-## Crash Recovery
-
-- On `ERROR`, emit synthetic timeline node with `errorCode`, `nodeId`, `diffId`.
-- Persist crash report (JSON) including last inspector frames and capability state.
-- Provide CLI `echo diagnostics --last-crash` to display report.
-
----
-
-## Tracing
-
-- Optional per-phase tracing (`TRACE` level) capturing start/end of scheduler phases, system durations.
-- Output to separate trace buffer for tooling (`trace.jsonl`).
-
----
-
-## Inspector Streams
-
-- `InspectorFrame` (core metrics)
-- `CBInspectorFrame` (Codex’s Baby)
-- `BridgeInspectorFrame` (Temporal Bridge)
-- `CapabilityInspectorFrame`
-
-Frames emitted each tick after `timeline_flush`, appended to ring buffer (configurable size). Debug tools subscribe over IPC/WebSocket.
-
----
-
-## Diagnostic CLI
-
-- `echo inspect --tick <tick>` – dump inspector frames.
-- `echo entropy --branch <branch>` – show entropy history.
-- `echo diff <diff-id>` – print diff summary.
-- `echo replay --verify` – reuse replay contract.
-
----
-
-## CI Integration
-
-- Pipeline collects inspector frames for failing tests, attaches to artifacts.
-- Warnings escalate to failures when thresholds exceeded (entropy > threshold without observer, repeated paradox quarantine).
-
----
-
-This plan provides consistent observability without compromising determinism.
diff --git a/docs/archive/rust-rhai-ts-division.md b/docs/archive/rust-rhai-ts-division.md
deleted file mode 100644
index 77b301c3..00000000
--- a/docs/archive/rust-rhai-ts-division.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
-
-# Language & Responsibility Map (Phase 1)
-
-Echo’s runtime stack is intentionally stratified. Rust owns the deterministic graph engine; Rhai sits on top for gameplay scripting; TypeScript powers the tooling layer via WebAssembly bindings. This document captures what lives where as we enter Phase 1 (Core Ignition).
-
----
-
-## Rust (warp-core, wasm, cli)
-
-### Responsibilities
-
-- WARP engine: GraphStore, PatternGraph, RewriteRule, DeterministicScheduler, commit/Snapshot APIs.
-- ECS foundations: Worlds, Systems, Components expressed as rewrite rules.
-- Timeline & Branch tree: rewrite transactions, snapshot hashing, concurrency guard rails.
-- Math/PRNG: deterministic float32 / fixed32 modules shared with gameplay.
-- Netcode: lockstep / rollback / authority modes using rewrite transactions.
-- Asset pipeline: import/export graphs, payload storage, zero-copy access.
-- Confluence: distributed synchronization of rewrite transactions.
-- Rhai engine hosting: embed Rhai with deterministic module set; expose WARP bindings.
-- CLI tools: `echo-cli` with `verify`, `bench`, and `inspect` subcommands.
-
-### Key Crates
-
-- `warp-core` – core engine; Rhai binds directly in-process
-- `warp-wasm` – WASM build for tooling/editor
-- `warp-cli` – CLI utilities (`echo-cli` binary: verify, bench, inspect)
-
----
-
-## Rhai (gameplay authoring layer)
-
-### Rhai Responsibilities
-
-- Gameplay systems & components (e.g., AI state machines, quests, input handling).
-- Component registration, entity creation/destruction via exposed APIs.
-- Scripting for deterministic “async” (scheduled events through Codex’s Baby).
-- Editor lenses and inspector overlays written in Rhai for rapid iteration.
-
-### Constraints
-
-- Single-threaded per branch; no OS threads.
-- Engine budgeted deterministically per tick.
-- Mutations occur through rewrite intents (`warp.apply(...)`), not raw memory access.
-
-### Bindings
-
-- `warp` Rhai module providing:
- - `apply(rule_name, scope, params)`
- - `delay(seconds, fn)` (schedules replay-safe events)
- - Query helpers (read components, iterate entities)
- - Capability-guarded operations (world:rewrite, asset:import, etc.)
-
----
-
-## TypeScript / Web Tooling
-
-### TypeScript Responsibilities
-
-- Echo Studio (graph IDE) – visualizes world graph, rewrites, branch tree.
-- Inspector dashboards – display Codex, entropy, paradox frames.
-- Replay/rollback visualizers, network debugging tools.
-- Plugin builders and determinism test harness UI.
-
-### Integration
-
-- Uses `warp-wasm` to call into WARP engine from the browser.
-- IPC/WebSocket for live inspector feeds (`InspectorEnvelope`).
-- Works with JSONL logs for offline analysis.
-- All mutations go through bindings; tooling never mutates state outside WARP APIs.
-
-### Tech
-
-- Frontend frameworks: React/Svelte/Vanilla as needed.
-- WebGPU/WebGL for graph visualization.
-- TypeScript ensures type safety for tooling code.
-
----
-
-## Summary
-
-- Rust: core deterministic runtime + binding layers.
-- Rhai: gameplay logic, editor lenses, deterministic script-level behavior.
-- TypeScript: visualization and tooling on top of WASM/IPC.
-
-This division keeps determinism and performance anchored in Rust while giving designers and tooling engineers approachable layers tailored for their workflows.
diff --git a/docs/archive/scheduler-benchmarks.md b/docs/archive/scheduler-benchmarks.md
deleted file mode 100644
index d9a7a551..00000000
--- a/docs/archive/scheduler-benchmarks.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-
-# Scheduler Benchmark Plan (Phase 0)
-
-This document has been **split** to reduce drift and make scope explicit.
-
-Doc map:
-
-- [docs/scheduler.md](./scheduler.md)
-
-Current (implemented) benchmarks:
-
-- [docs/scheduler-performance-warp-core.md](./scheduler-performance-warp-core.md)
-
-Future (planned) system-scheduler benchmarks:
-
-- [docs/spec-scheduler.md](./spec-scheduler.md) (planned benchmark scenarios; spec-only today)
-
----
-
-The detailed benchmark plan content now lives in:
-
-- [docs/scheduler-performance-warp-core.md](./scheduler-performance-warp-core.md) (warp-core)
-- [docs/spec-scheduler.md](./spec-scheduler.md) (planned system scheduler scenarios)
diff --git a/docs/archive/scheduler-reserve-complexity.md b/docs/archive/scheduler-reserve-complexity.md
deleted file mode 100644
index f17bf1d1..00000000
--- a/docs/archive/scheduler-reserve-complexity.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-
-# Scheduler `reserve()` Time Complexity Analysis
-
-This document has been **merged** into the canonical warp-core scheduler doc:
-
-- [docs/scheduler-warp-core.md](./scheduler-warp-core.md)
-
-It remains as a stable link target for older references.
-
-The full analysis now lives in [docs/scheduler-warp-core.md](./scheduler-warp-core.md).
diff --git a/docs/archive/scheduler-reserve-validation.md b/docs/archive/scheduler-reserve-validation.md
deleted file mode 100644
index 37a3197c..00000000
--- a/docs/archive/scheduler-reserve-validation.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
-
-# Scheduler `reserve()` Implementation Validation
-
-This document has been **merged** into the canonical warp-core scheduler doc:
-
-- [docs/scheduler-warp-core.md](./scheduler-warp-core.md)
-
-It remains as a stable link target for older references.
-
-## Questions Answered
-
-1. ✅ **Atomic Reservation**: No partial marking on conflict
-2. ✅ **Determinism Preserved**: Same inputs → same outputs
-3. ✅ **Time Complexity**: Detailed analysis with ALL loops counted
-4. ✅ **Performance Claims**: Measured, not just theoretical
-
----
-
-If you’re here for evidence details (atomicity/determinism/complexity), read:
-
-- [docs/scheduler-warp-core.md](./scheduler-warp-core.md)
diff --git a/docs/archive/spec-deterministic-math.md b/docs/archive/spec-deterministic-math.md
deleted file mode 100644
index 514bcc69..00000000
--- a/docs/archive/spec-deterministic-math.md
+++ /dev/null
@@ -1,213 +0,0 @@
-
-
-
-# Deterministic Math Module Specification (Phase 0)
-
-> **Background:** For a gentler introduction, see [WARP Primer](/guide/warp-primer).
-
-Echo’s math module underpins every deterministic system: physics proxies, animation, AI, and branch reconciliation.
-
-**Status (2026-01-02): legacy draft + partial reality.**
-
-- This document started life as a JS/TypeScript-oriented Phase 0 draft.
-- The canonical implementation today is Rust `warp-core` (`crates/warp-core/src/math/*`).
-- The normative determinism policy is `docs/SPEC_DETERMINISTIC_MATH.md`.
-- Validation and CI lanes are tracked in `docs/math-validation-plan.md`.
-
-Treat this spec as a **design sketch for future bindings** (TS/WASM/FFI) and an inventory of desired API shape, not as a statement that the JS implementation exists.
-
----
-
-## Goals
-
-- Provide deterministic vector/matrix/quaternion operations across platforms (at minimum: Linux/macOS, and eventually WASM/JS bindings).
-- Support dual numeric modes via scalar backends:
- - float lane (`F32Scalar`, default)
- - fixed-point lane (`DFix64`, feature-gated today)
-- Expose seeded PRNG services suitable for replay and branching.
-- Offer allocation-aware APIs (avoid heap churn) for hot loops.
-- Surface profiling hooks (NaN guards, range checks) in development builds.
-
----
-
-## Numeric Modes
-
-### Float32 Mode (default)
-
-- **Rust source of truth:** `F32Scalar` wraps `f32` and enforces canonicalization invariants (NaNs, signed zero, subnormals) at construction and after operations.
-- **Transcendentals:** `sin`/`cos` are provided via a deterministic software backend (`warp_core::math::trig`), not platform/libm.
-- **Bindings note:** if/when we ship TS/WASM bindings, they must match Rust’s outputs and invariants; “just `Math.fround`” is not sufficient to guarantee cross-engine determinism for transcendentals or NaN payload behavior.
-
-### Fixed-Point Mode (opt-in)
-
-- **Rust source of truth:** `DFix64` is Q32.32 fixed-point stored in `i64` and is currently feature-gated behind `det_fixed` so we can evolve it without destabilizing the default lane.
-- **Non-finite mapping:** conversions from float inputs must be deterministic (e.g., NaN → 0, ±∞ saturate) and are covered by tests.
-- **Bindings note:** future TS bindings should treat Rust fixtures as canonical; JS `BigInt` fixed-point is a possible implementation strategy, but not a correctness authority.
-
-Mode should be chosen at engine init (or build feature selection), with a clear policy for serialization/hashing so deterministic replay remains stable.
-
----
-
-## Core Types
-
-### Vec2 / Vec3 / Vec4
-
-```ts
-interface Vec2 {
- readonly x: number;
- readonly y: number;
-}
-
-type VecLike = Float32Array | number[];
-```
-
-- Backed by `Float32Array` of length 2/3/4.
-- Methods: `create`, `clone`, `set`, `add`, `sub`, `scale`, `dot`, `length`, `normalize`, `lerp`, `equals`.
-- All mutating functions accept `out` parameter for in-place updates to reduce allocations.
-- Deterministic clamps: every operation ends with `fround` (float mode) or `fixed` operations.
-- Rust parity: `warp_core::math::Vec3` currently implements add/sub/scale/dot/cross/length/normalize; `Vec2`/`Vec4` remain TODO.
-
-### Mat3 / Mat4
-
-- Column-major storage (`Float32Array(9)` / `Float32Array(16)`).
-- Methods: `identity`, `fromRotation`, `fromTranslation`, `multiply`, `invert`, `transformVec`.
-- Deterministic inversion: use well-defined algorithm with guard against singular matrices (records failure and returns identity or throws based on config).
-- Rust parity: `warp_core::math::Mat4` exposes `multiply` and `transform_point`; identity/fromRotation/invert are pending.
-
-### Quat
-
-- Represented as `[x, y, z, w]`.
-- Functions: `identity`, `fromAxisAngle`, `multiply`, `slerp`, `normalize`, `toMat4`.
-- `slerp` uses deterministic interpolation with clamped range.
-- Rust parity: `warp_core::math::Quat` implements identity/fromAxisAngle/multiply/normalize/to_mat4; `slerp` remains TBD.
-
-### Transform
-
-- Struct bundling position (Vec3), rotation (Quat), scale (Vec3).
-- Helper for constructing Mat4; ensures consistent order of operations.
-- Rust parity: transform helpers are still tracked for Phase 1 (not implemented yet).
-
-### Bounds / AABB
-
-- Useful for physics collision; stores min/max Vec3.
-- Provides deterministic union/intersection operations.
-
----
-
-## PRNG Services
-
-### Engine PRNG
-
-- Based on counter-based generator (e.g., Philox or Xoroshiro128+).
-- Implementation in TypeScript with optional WebAssembly acceleration later.
-- Interface:
-
-```ts
-interface PRNG {
- next(): number; // returns float in [0,1)
- nextInt(min: number, max: number): number;
- nextFloat(min: number, max: number): number;
- state(): PRNGState;
- jump(): PRNG; // independent stream
-}
-```
-
-- `state` serializable for replay.
-- `jump` used for branch forking: clone generator with deterministic offset.
-- `seed` derived from combination of world seed + branch ID + optional subsystem tag.
-- Rust parity: `warp_core::math::Prng` implements seeding, `next_f32`, and `next_int`; state/jump APIs are follow-up work.
-
-### Deterministic Hashing
-
-- Provide `hash64` function (e.g., SplitMix64) for converting strings/IDs into seeds.
-- Ensure stable across platforms; implement in TypeScript to avoid native differences.
-
-### Integration Points
-
-- Scheduler passes `math.prng` on `TickContext`.
-- Codex’s Baby `CommandContext` exposes `prng.spawn(scope)` for per-handler streams.
-- Timeline branch creation clones PRNG state to maintain deterministic divergence.
-
----
-
-## Utility Functions
-
-- `clamp(value, min, max)` – deterministic clamp using `Math.min/Math.max` once (avoid multiple rounding).
-- `approximatelyEqual(a, b, epsilon)` – uses configured epsilon (float32 ~1e-6).
-- `degToRad`, `radToDeg` – using float32 rounding.
-- `wrapAngle(angle)` – ensure deterministic wrap [-π, π].
-- `bezier`, `catmullRom` – deterministic interpolation functions for animation.
-
----
-
-## Memory Strategy
-
-- Provide pool of reusable vectors/matrices for temporary calculations (`MathStack`).
-- `MathStack` uses deterministic LIFO behavior: `pushVec3()`, `pushMat4()`, `pop()`.
-- Guard misuse in dev builds (stack underflow/overflow assertions).
-
----
-
-## Diagnostics
-
-- Optional `math.enableDeterminismChecks()` toggles NaN/Infinity detection; throws descriptive error with stack trace.
-- `math.traceEnabled` allows capturing sequence of operations for debugging (recorded in inspector overlay).
-- Stats counters: operations per frame, PRNG usage frequency.
-
----
-
-## API Surface (draft)
-
-```ts
-interface EchoMath {
- mode: "float32" | "fixed32";
- vec2: Vec2Module;
- vec3: Vec3Module;
- vec4: Vec4Module;
- mat3: Mat3Module;
- mat4: Mat4Module;
- quat: QuatModule;
- transform: TransformModule;
- prng: PRNGFactory;
- stack: MathStack;
- constants: {
- epsilon: number;
- tau: number;
- };
- utils: {
- clamp(value: number, min: number, max: number): number;
- approx(a: number, b: number, epsilon?: number): boolean;
- degToRad(deg: number): number;
- radToDeg(rad: number): number;
- };
-}
-```
-
-`PRNGFactory`:
-
-```ts
-interface PRNGFactory {
- create(seed: PRNGSeed): PRNG;
- fromTimeline(fingerprint: TimelineFingerprint, scope?: string): PRNG;
-}
-```
-
----
-
-## Determinism Notes
-
-- Avoid `Math.random`; all randomness flows through PRNG.
-- `Math.sin/cos` may vary across engines; implement polynomial approximations or wrap to enforce float32 rounding (test across browsers).
-- Fixed-point mode may skip trig functions initially; provide lookup tables or polynomial approximations.
-- Ensure order of operations consistent; avoid relying on JS evaluation order quirks.
-
----
-
-## Open Questions
-
-- Should fixed-point mode support quaternions (costly) or restrict to 2D contexts?
-- How to expose SIMD acceleration where available without breaking determinism (e.g., WebAssembly fallback).
-- Do we allow user-defined math extensions (custom vector sizes) via plugin system?
-- Integration with physics adapters: how to synchronize with Box2D/Rapier numeric expectations (float32).
-
-Future work: add unit tests validating cross-environment determinism, micro-benchmarks for operations, and sample usage in the playground.
diff --git a/docs/archive/spec-geom-collision.md b/docs/archive/spec-geom-collision.md
deleted file mode 100644
index d6b83b6b..00000000
--- a/docs/archive/spec-geom-collision.md
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-
-# Geometry & Collision (Spec Stub)
-
-> **Background:** For a gentler introduction, see [WARP Primer](/guide/warp-primer).
-
-**Status: not yet re-specified.** This repo currently carries an interactive DPO tour and diagram assets, but the full written spec for Echo’s geometry/collision subsystem is pending re-homing into the Rust-first era.
-
-## Scope (Intended)
-
-- Deterministic broad phase and narrow phase modeled as graph rewrites.
-- Canonical identifiers for bodies, shapes, and contacts.
-- Collision events emitted as deterministic graph deltas.
-- CCD as a deterministic, replayable sequence of rewrite steps.
-
-## Non-Goals (For Now)
-
-- Physics engine replacement (Box2D/Rapier integrations remain adapters).
-- GPU-accelerated collision or platform-specific broad-phase shortcuts.
-- Real-time authoring tools (tracked separately in editor/inspector specs).
-
-What exists today:
-
-- Interactive tour: `/collision-dpo-tour.html` (source: `docs/public/collision-dpo-tour.html`)
-- Guide entrypoint: `docs/guide/collision-tour.md`
-- Diagram assets: `docs/public/assets/collision/`
-
-What this spec should eventually cover:
-
-- Deterministic broad phase + narrow phase modeled as graph rewrites (DPO).
-- Canonical IDs, stable ordering, and hashing inputs/outputs for replay.
-- Temporal proxies, CCD workflow, and event emission in a timeline-aware world.
-- See [spec-deterministic-math.md](../spec-deterministic-math.md) for the normative deterministic math policy.
-
-## Near-Term Deliverables
-
-- Solidify the wire format for collision-related view ops (if any).
-- Define the minimal node/edge schema for bodies, shapes, and contacts.
-- Specify the canonical ordering for resolving contact sets.
-
-Until the full spec is written, treat the tour as an **illustrative artifact**, not a normative contract.
diff --git a/docs/archive/study/aion.cls b/docs/archive/study/aion.cls
deleted file mode 100644
index 4f2d2f67..00000000
--- a/docs/archive/study/aion.cls
+++ /dev/null
@@ -1,175 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-\NeedsTeXFormat{LaTeX2e}
-\ProvidesClass{aion}[2025/12/07 AIΩN Foundations Series Class]
-
-\LoadClass[11pt]{article}
-
-% ------------------------------------------------------------
-% Packages
-% ------------------------------------------------------------
-\RequirePackage[T1]{fontenc}
-\RequirePackage{lmodern}
-\RequirePackage{amsmath, amssymb, amsfonts, amsthm}
-\RequirePackage{microtype}
-\RequirePackage{geometry}
-\RequirePackage{xcolor}
-\RequirePackage{graphicx}
-\RequirePackage{titlesec}
-\RequirePackage{tocloft}
-\RequirePackage{enumitem}
-\RequirePackage{booktabs}
-\RequirePackage{chngcntr}
-\RequirePackage{hyperref}
-
-% ------------------------------------------------------------
-% Geometry
-% ------------------------------------------------------------
-\geometry{
- margin=1in,
-}
-
-% ------------------------------------------------------------
-% Colors (brand)
-% ------------------------------------------------------------
-\definecolor{AIONBlue}{RGB}{20, 60, 120}
-\definecolor{AIONAccent}{RGB}{120, 20, 120}
-
-% ------------------------------------------------------------
-% Hyperref
-% ------------------------------------------------------------
-\hypersetup{
- colorlinks=true,
- linkcolor=AIONBlue,
- citecolor=AIONBlue,
- urlcolor=AIONAccent,
- hypertexnames=false
-}
-
-% ------------------------------------------------------------
-% Section Titles
-% ------------------------------------------------------------
-\titleformat{\section}
- {\large\bfseries\color{AIONBlue!85!black}}
- {\thesection}{0.5em}{}
-
-\titleformat{\subsection}
- {\normalsize\bfseries\color{AIONBlue!85!black}}
- {\thesubsection}{0.5em}{}
-
-% Reset figure numbering by section for cleaner hyperlinks
-\counterwithin{figure}{section}
-\renewcommand{\theHfigure}{\thesection.\arabic{figure}}
-
-% ------------------------------------------------------------
-% Theorem Environments
-% ------------------------------------------------------------
-\theoremstyle{definition}
-\newtheorem{definition}{Definition}[section]
-\newtheorem{assumption}[definition]{Assumption}
-
-\theoremstyle{plain}
-\newtheorem{proposition}[definition]{Proposition}
-\newtheorem{theorem}[definition]{Theorem}
-\newtheorem{lemma}[definition]{Lemma}
-\newtheorem{corollary}[definition]{Corollary}
-
-\theoremstyle{remark}
-\newtheorem{example}[definition]{Example}
-\newtheorem{remark}[definition]{Remark}
-
-% ------------------------------------------------------------
-% Metadata Commands
-% ------------------------------------------------------------
-% Internal storage macros (initialized to \@empty for robust checking)
-\makeatletter
-\newcommand{\AION@papertitle}{\@empty}
-\newcommand{\AION@papernumber}{\@empty}
-\newcommand{\AION@paperversion}{\@empty}
-\newcommand{\AION@paperdate}{\@empty}
-\newcommand{\AION@paperauthor}{\@empty}
-\newcommand{\AION@paperaffiliation}{\@empty}
-\newcommand{\AION@paperorcid}{\@empty}
-\newcommand{\AION@paperdoi}{\@empty}
-
-% User-facing setter commands
-\newcommand{\papertitle}[1]{\gdef\AION@papertitle{#1}}
-\newcommand{\papernumber}[1]{\gdef\AION@papernumber{#1}}
-\newcommand{\paperversion}[1]{\gdef\AION@paperversion{#1}}
-\newcommand{\paperdate}[1]{\gdef\AION@paperdate{#1}}
-\newcommand{\paperauthor}[1]{\gdef\AION@paperauthor{#1}}
-\newcommand{\paperaffiliation}[1]{\gdef\AION@paperaffiliation{#1}}
-\newcommand{\paperorcid}[1]{\gdef\AION@paperorcid{#1}}
-\newcommand{\paperdoi}[1]{\gdef\AION@paperdoi{#1}}
-
-% Robust emptiness check using \@empty
-\newcommand{\AION@require}[2]{%
- \ifx#1\@empty
- \ClassError{aion}{#2 not set}{You must call #2 before \string\AIONTitlePage}
- \fi
-}
-\makeatother
-
-\makeatletter
-\newcommand{\AIONTitlePage}{%
- \AION@require{\AION@papertitle}{\string\papertitle}%
- \AION@require{\AION@papernumber}{\string\papernumber}%
- \AION@require{\AION@paperauthor}{\string\paperauthor}%
- \AION@require{\AION@paperdate}{\string\paperdate}%
-
- \thispagestyle{empty}
- \begin{center}
-
- % Nudge the block slightly downward for visual gravity
- \vspace*{1.5cm}
-
- % Title (primary anchor)
- {\Huge\bfseries \AION@papertitle \par}
- \vspace{14pt}
-
- % Series / paper number (subtitle energy)
- {\normalsize\scshape\color{AIONBlue}
- AI$\Omega$N Foundations Series — \AION@papernumber \par}
-
- \vspace{16pt}
-
- % Author block (confident, quiet)
- {\large
- \AION@paperauthor \par}
- \vspace{4pt}
-
- % Only show affiliation/ORCID if defined
- \ifx\AION@paperaffiliation\@empty\else
- {\normalsize \AION@paperaffiliation \par}
- \fi
- \ifx\AION@paperorcid\@empty\else
- {\normalsize ORCID: \AION@paperorcid \par}
- \fi
- \ifx\AION@paperdoi\@empty\else
- {\normalsize DOI: \AION@paperdoi \par}
- \fi
-
- \vspace{10pt}
-
- {\normalsize
- \AION@paperdate \par}
-
- \end{center}
- \vspace{2cm}
-}
-\makeatother
-
-\newcommand{\AIONFrontMatter}[1]{%
- \begin{center}
- \small
- #1
- \end{center}
- \vspace{1em}
-}
-
-% ------------------------------------------------------------
-% Table of Contents formatting
-% TODO: implement custom TOC styling if needed
-% ------------------------------------------------------------
-
-\endinput
diff --git a/docs/archive/study/build-tour.py b/docs/archive/study/build-tour.py
deleted file mode 100644
index a4c1e10a..00000000
--- a/docs/archive/study/build-tour.py
+++ /dev/null
@@ -1,260 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: Apache-2.0
-# © James Ross Ω FLYING•ROBOTS
-"""
-Build the 'What Makes Echo Tick' tour document with:
-1. Claude's commentary in red-outlined boxes with RED TEXT
-2. PDF diagrams with embedded fonts
-3. Letter-size paper with small margins
-"""
-
-import re
-import subprocess
-import sys
-from pathlib import Path
-
-STUDY_DIR = Path(__file__).parent
-DIAGRAMS_DIR = STUDY_DIR / "diagrams"
-
-INPUT_MD = STUDY_DIR / "what-makes-echo-tick.md"
-PROCESSED_MD = STUDY_DIR / "what-makes-echo-tick-processed.md"
-OUTPUT_TEX = STUDY_DIR / "what-makes-echo-tick.tex"
-OUTPUT_PDF = STUDY_DIR / "what-makes-echo-tick.pdf"
-
-
-def escape_latex(text: str) -> str:
- """Escape LaTeX special characters in text."""
- # Use placeholder to avoid double-escaping braces in \textbackslash{}
- BACKSLASH_PLACEHOLDER = "\x00BACKSLASH\x00"
- text = text.replace('\\', BACKSLASH_PLACEHOLDER)
-
- replacements = [
- ('&', r'\&'),
- ('%', r'\%'),
- ('$', r'\$'),
- ('#', r'\#'),
- ('_', r'\_'),
- ('{', r'\{'),
- ('}', r'\}'),
- ('~', r'\textasciitilde{}'),
- ('^', r'\textasciicircum{}'),
- ]
- for char, replacement in replacements:
- text = text.replace(char, replacement)
-
- return text.replace(BACKSLASH_PLACEHOLDER, r'\textbackslash{}')
-
-
-def convert_commentary_to_latex(md_content: str) -> str:
- """Convert CLAUDE_COMMENTARY markers to LaTeX red boxes."""
-
- def replace_commentary(match: re.Match[str]) -> str:
- inner = match.group(1).strip()
- # Escape LaTeX special chars in the commentary content
- escaped = escape_latex(inner)
- return f'\n\n\\begin{{claudecommentary}}\n{escaped}\n\\end{{claudecommentary}}\n\n'
-
- # Replace ...
- pattern = r'\s*(.*?)\s*'
- md_content = re.sub(pattern, replace_commentary, md_content, flags=re.DOTALL)
-
- return md_content
-
-
-def convert_svg_to_pdf_refs(md_content: str) -> str:
- """Convert SVG image references to PDF for LaTeX."""
- md_content = re.sub(
- r'\!\[([^\]]*)\]\(diagrams/([^)]+)\.svg\)',
- r'',
- md_content
- )
- return md_content
-
-
-def run_pandoc(md_file: Path, tex_file: Path) -> bool:
- """Run pandoc to convert markdown to LaTeX."""
- try:
- result = subprocess.run(
- [
- "pandoc",
- str(md_file),
- "-o", str(tex_file),
- "--standalone",
- "-f", "markdown+raw_tex",
- "--top-level-division=chapter",
- "-V", "geometry:margin=0.75in",
- "-V", "geometry:letterpaper",
- "-V", "fontsize=11pt",
- ],
- capture_output=True,
- text=True,
- timeout=60
- )
- if result.returncode != 0:
- print(f"pandoc failed: {result.stderr}", file=sys.stderr)
- return False
- return True
- except (subprocess.TimeoutExpired, FileNotFoundError) as e:
- print(f"pandoc error: {e}", file=sys.stderr)
- return False
-
-
-def postprocess_tex(tex_file: Path) -> None:
- """Post-process the LaTeX file."""
- content = tex_file.read_text()
-
- # Add required packages and styling
- # Note: graphicx and geometry are already loaded by Pandoc, so we only add
- # adjustbox, tcolorbox, and fvextra here
- packages = r"""
-\usepackage[export]{adjustbox}
-\usepackage{tcolorbox}
-\tcbuselibrary{breakable,skins}
-
-% Make code blocks smaller to fit
-\usepackage{fvextra}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{
- commandchars=\\\{\},
- fontsize=\small,
- breaklines=true,
- breakanywhere=true
-}
-
-% Define the Claude commentary box style - RED OUTLINE + RED TEXT
-\newtcolorbox{claudecommentary}{
- enhanced,
- breakable,
- colback=red!5,
- colframe=red!75!black,
- coltext=red!70!black,
- boxrule=3pt,
- arc=5pt,
- left=12pt,
- right=12pt,
- top=12pt,
- bottom=12pt,
- before skip=15pt,
- after skip=15pt,
- fontupper=\color{red!70!black},
- fonttitle=\bfseries\Large\color{red!75!black},
- title={\raisebox{-0.1em}{\Large$\blacktriangleright$} Claude's Commentary},
- attach boxed title to top left={yshift=-4mm,xshift=10mm},
- boxed title style={
- colback=white,
- colframe=red!75!black,
- boxrule=2pt,
- arc=3pt
- }
-}
-"""
-
- # Insert packages after \documentclass
- if r'\usepackage{amsmath' in content:
- content = content.replace(
- r'\usepackage{amsmath',
- packages + r'\usepackage{amsmath'
- )
- elif r'\begin{document}' in content:
- content = content.replace(
- r'\begin{document}',
- packages + r'\begin{document}'
- )
-
- # Fix image includes - make them fit with max width/height
- content = re.sub(
- r'\\pandocbounded\{\\includegraphics\{([^}]+)\}\}',
- r'\\begin{center}\\includegraphics[max width=0.95\\textwidth,max height=0.4\\textheight,keepaspectratio]{\1}\\end{center}',
- content
- )
-
- # Also handle bare includegraphics
- content = re.sub(
- r'\\includegraphics\{(diagrams/[^}]+)\}',
- r'\\begin{center}\\includegraphics[max width=0.95\\textwidth,max height=0.4\\textheight,keepaspectratio]{\1}\\end{center}',
- content
- )
-
- tex_file.write_text(content)
-
-
-def run_xelatex(tex_file: Path) -> bool:
- """Run xelatex to produce PDF."""
- try:
- for run in [1, 2]:
- print(f" xelatex pass {run}...", end=" ", flush=True)
- result = subprocess.run(
- [
- "xelatex",
- "-interaction=nonstopmode",
- "-output-directory", str(tex_file.parent),
- str(tex_file)
- ],
- capture_output=True,
- text=True,
- timeout=120,
- cwd=tex_file.parent
- )
- success = result.returncode == 0
- print("OK" if success else "warnings")
-
- pdf_file = tex_file.with_suffix('.pdf')
- if not pdf_file.exists():
- print("PDF not generated!", file=sys.stderr)
- return False
- if not success:
- print("xelatex failed on final pass", file=sys.stderr)
- return False
- return True
-
- except (subprocess.TimeoutExpired, FileNotFoundError) as e:
- print(f"xelatex error: {e}", file=sys.stderr)
- return False
-
-
-def main() -> None:
- print("=== Building What Makes Echo Tick ===\n")
-
- if not INPUT_MD.exists():
- print(f"Error: {INPUT_MD} not found", file=sys.stderr)
- sys.exit(1)
-
- # Read the markdown
- print(f"1. Reading {INPUT_MD.name}...")
- md_content = INPUT_MD.read_text()
-
- # Convert commentary markers to LaTeX
- print("2. Converting Claude commentary to LaTeX red boxes...")
- md_content = convert_commentary_to_latex(md_content)
-
- # Convert SVG refs to PDF
- print("3. Converting image references to PDF...")
- md_content = convert_svg_to_pdf_refs(md_content)
-
- # Write processed markdown
- PROCESSED_MD.write_text(md_content)
- print(f" Wrote {PROCESSED_MD.name}")
-
- # Run pandoc
- print("4. Running pandoc...")
- if not run_pandoc(PROCESSED_MD, OUTPUT_TEX):
- print(" Pandoc failed!")
- sys.exit(1)
- print(f" Generated {OUTPUT_TEX.name}")
-
- # Post-process the LaTeX
- print("5. Post-processing LaTeX...")
- postprocess_tex(OUTPUT_TEX)
- print(" Added red boxes, small margins, fitted graphics")
-
- # Run xelatex
- print("6. Running xelatex...")
- if run_xelatex(OUTPUT_TEX):
- print("\n=== Success! ===")
- print(f"Output: {OUTPUT_PDF}")
- else:
- print("\n PDF generation may have issues, check .log file")
- sys.exit(1)
-
-
-if __name__ == "__main__":
- main()
diff --git a/docs/archive/study/diagrams/tour-01.mmd b/docs/archive/study/diagrams/tour-01.mmd
deleted file mode 100644
index c32c34c1..00000000
--- a/docs/archive/study/diagrams/tour-01.mmd
+++ /dev/null
@@ -1,39 +0,0 @@
-graph TB
- subgraph "Layer 5: Tools & Viewers"
- V[warp-viewer]
- T[External Tools]
- end
-
- subgraph "Layer 4: Session Protocol"
- SS[echo-session-service]
- SC[echo-session-client]
- WS[WebSocket Gateway]
- end
-
- subgraph "Layer 3: Wire Format"
- EG[echo-graph]
- SP[echo-session-proto]
- end
-
- subgraph "Layer 2: Storage"
- WSC[WSC Format]
- CAS[Content-Addressed Store]
- end
-
- subgraph "Layer 1: Core Engine"
- E[warp-core Engine]
- S[Scheduler]
- B[BOAW Executor]
- end
-
- V --> SC
- T --> WS
- WS --> SS
- SC --> SS
- SS --> EG
- EG --> SP
- SP --> E
- E --> S
- S --> B
- B --> WSC
- WSC --> CAS
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-01.pdf b/docs/archive/study/diagrams/tour-01.pdf
deleted file mode 100644
index 376eed91..00000000
Binary files a/docs/archive/study/diagrams/tour-01.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-01.svg b/docs/archive/study/diagrams/tour-01.svg
deleted file mode 100644
index 43dfa4e6..00000000
--- a/docs/archive/study/diagrams/tour-01.svg
+++ /dev/null
@@ -1 +0,0 @@
-Layer 4: Session Protocol
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-02.mmd b/docs/archive/study/diagrams/tour-02.mmd
deleted file mode 100644
index 65b14c01..00000000
--- a/docs/archive/study/diagrams/tour-02.mmd
+++ /dev/null
@@ -1,24 +0,0 @@
-sequenceDiagram
- participant User
- participant Tool as Tool/Viewer
- participant Hub as Session Hub
- participant Engine as warp-core Engine
- participant Store as Graph Store
-
- User->>Tool: Click link
- Tool->>Hub: ingest_intent(bytes)
- Hub->>Engine: forward intent
- Engine->>Engine: begin() transaction
- Engine->>Engine: apply() rules
- Engine->>Store: read via GraphView
- Engine->>Engine: compute footprints
-
- rect rgb(240, 248, 255)
- Note over Engine,Store: commit() internals
- Engine->>Store: apply delta
- Engine->>Engine: compute hashes
- Engine->>Hub: emit snapshot/diff
- end
-
- Hub->>Tool: WarpFrame
- Tool->>User: render new state
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-02.pdf b/docs/archive/study/diagrams/tour-02.pdf
deleted file mode 100644
index 6266c617..00000000
Binary files a/docs/archive/study/diagrams/tour-02.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-02.svg b/docs/archive/study/diagrams/tour-02.svg
deleted file mode 100644
index 91ac23e2..00000000
--- a/docs/archive/study/diagrams/tour-02.svg
+++ /dev/null
@@ -1 +0,0 @@
-Graph Store warp-core Engine Session Hub Tool/Viewer User Graph Store warp-core Engine Session Hub Tool/Viewer User Click link ingest_intent(bytes) forward intent begin() transaction apply() rules read via GraphView compute footprints commit() apply delta compute hashes emit snapshot/diff WarpFrame render new state
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-03.mmd b/docs/archive/study/diagrams/tour-03.mmd
deleted file mode 100644
index 0b045cc4..00000000
--- a/docs/archive/study/diagrams/tour-03.mmd
+++ /dev/null
@@ -1,20 +0,0 @@
-graph LR
- subgraph "WARP Graph Structure"
- N1[Node A id: 0x1234...]
- N2[Node B id: 0x5678...]
- N3[Node C id: 0x9ABC...]
-
- N1 -->|edge:link| N2
- N1 -->|edge:child| N3
- N2 -->|edge:ref| N3
- end
-
- subgraph "Attachments (α plane)"
- A1[title: 'Home']
- A2[url: '/page/b']
- A3[content: '...']
- end
-
- N1 -.- A1
- N2 -.- A2
- N3 -.- A3
diff --git a/docs/archive/study/diagrams/tour-03.pdf b/docs/archive/study/diagrams/tour-03.pdf
deleted file mode 100644
index 81befb06..00000000
Binary files a/docs/archive/study/diagrams/tour-03.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-03.svg b/docs/archive/study/diagrams/tour-03.svg
deleted file mode 100644
index 605602fc..00000000
--- a/docs/archive/study/diagrams/tour-03.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-04.mmd b/docs/archive/study/diagrams/tour-04.mmd
deleted file mode 100644
index d2d04747..00000000
--- a/docs/archive/study/diagrams/tour-04.mmd
+++ /dev/null
@@ -1,16 +0,0 @@
-graph TB
- subgraph "Root Instance (warp_id: 'root')"
- R[Root Node]
- P1[Page 1]
- P2[Page 2]
- R --> P1
- R --> P2
- end
-
- subgraph "Child Instance (warp_id: 'child-abc')"
- C1[Child Root]
- C2[Child Node]
- C1 --> C2
- end
-
- P2 -.->|"α[portal] = Descend('child-abc')"| C1
diff --git a/docs/archive/study/diagrams/tour-04.pdf b/docs/archive/study/diagrams/tour-04.pdf
deleted file mode 100644
index 708fb439..00000000
Binary files a/docs/archive/study/diagrams/tour-04.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-04.svg b/docs/archive/study/diagrams/tour-04.svg
deleted file mode 100644
index 2ea2bd65..00000000
--- a/docs/archive/study/diagrams/tour-04.svg
+++ /dev/null
@@ -1 +0,0 @@
-Child Instance (warp_id: 'child-abc') Root Instance (warp_id: 'root') α[portal] = Descend('child-abc')
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-05.mmd b/docs/archive/study/diagrams/tour-05.mmd
deleted file mode 100644
index d194cb8b..00000000
--- a/docs/archive/study/diagrams/tour-05.mmd
+++ /dev/null
@@ -1,8 +0,0 @@
-flowchart TD
- A[Create GraphStore] --> B[Create WarpState]
- B --> C[Create root WarpInstance]
- C --> D[Initialize DeterministicScheduler]
- D --> E[Create empty rules HashMap]
- E --> F[Initialize MaterializationBus]
- F --> G[Preserve U0 state for replay]
- G --> H[Engine ready]
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-05.pdf b/docs/archive/study/diagrams/tour-05.pdf
deleted file mode 100644
index 65ed0327..00000000
Binary files a/docs/archive/study/diagrams/tour-05.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-05.svg b/docs/archive/study/diagrams/tour-05.svg
deleted file mode 100644
index 2dede56d..00000000
--- a/docs/archive/study/diagrams/tour-05.svg
+++ /dev/null
@@ -1 +0,0 @@
-Initialize DeterministicScheduler Create empty rules HashMap Initialize MaterializationBus Preserve U0 state for replay
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-06.mmd b/docs/archive/study/diagrams/tour-06.mmd
deleted file mode 100644
index 7bb5c256..00000000
--- a/docs/archive/study/diagrams/tour-06.mmd
+++ /dev/null
@@ -1,29 +0,0 @@
-flowchart LR
- subgraph "1. Begin"
- B[begin]
- end
-
- subgraph "2. Apply"
- A1[apply rule 1]
- A2[apply rule 2]
- A3[apply rule N]
- end
-
- subgraph "3. Commit"
- C1[Drain]
- C2[Reserve]
- C3[Execute]
- C4[Merge]
- C5[Finalize]
- end
-
- subgraph "4. Hash"
- H1[State Root]
- H2[Commit Hash]
- end
-
- subgraph "5. Record"
- R[Append to History]
- end
-
- B --> A1 --> A2 --> A3 --> C1 --> C2 --> C3 --> C4 --> C5 --> H1 --> H2 --> R
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-06.pdf b/docs/archive/study/diagrams/tour-06.pdf
deleted file mode 100644
index 8158912a..00000000
Binary files a/docs/archive/study/diagrams/tour-06.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-06.svg b/docs/archive/study/diagrams/tour-06.svg
deleted file mode 100644
index 1f9e5f03..00000000
--- a/docs/archive/study/diagrams/tour-06.svg
+++ /dev/null
@@ -1 +0,0 @@
-5. Record 4. Hash 3. Commit 2. Apply 1. Begin
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-07.mmd b/docs/archive/study/diagrams/tour-07.mmd
deleted file mode 100644
index 4752500c..00000000
--- a/docs/archive/study/diagrams/tour-07.mmd
+++ /dev/null
@@ -1,7 +0,0 @@
-flowchart TD
- A[apply called] --> B{Matcher returns true?}
- B -->|No| C[Return NoMatch]
- B -->|Yes| D[Compute Footprint]
- D --> E[Create PendingRewrite]
- E --> F[Enqueue to Scheduler]
- F --> G[Return Matched]
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-07.pdf b/docs/archive/study/diagrams/tour-07.pdf
deleted file mode 100644
index 2b77302c..00000000
Binary files a/docs/archive/study/diagrams/tour-07.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-07.svg b/docs/archive/study/diagrams/tour-07.svg
deleted file mode 100644
index 02db703f..00000000
--- a/docs/archive/study/diagrams/tour-07.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-08.mmd b/docs/archive/study/diagrams/tour-08.mmd
deleted file mode 100644
index 1ac04467..00000000
--- a/docs/archive/study/diagrams/tour-08.mmd
+++ /dev/null
@@ -1,9 +0,0 @@
-flowchart TD
- A[For each rewrite] --> B{Footprint conflicts with active frontier?}
- B -->|No conflict| C[Accept: add to active frontier]
- B -->|Conflict| D[Reject: record blocking witness]
- C --> E[Continue to next]
- D --> E
- E --> F{More rewrites?}
- F -->|Yes| A
- F -->|No| G[Done: have accepted/rejected sets]
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-08.pdf b/docs/archive/study/diagrams/tour-08.pdf
deleted file mode 100644
index a0d4f134..00000000
Binary files a/docs/archive/study/diagrams/tour-08.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-08.svg b/docs/archive/study/diagrams/tour-08.svg
deleted file mode 100644
index 275b730c..00000000
--- a/docs/archive/study/diagrams/tour-08.svg
+++ /dev/null
@@ -1 +0,0 @@
-Footprint conflicts with active frontier? Accept: add to active frontier Reject: record blocking witness Done: have accepted/rejected sets
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-09.mmd b/docs/archive/study/diagrams/tour-09.mmd
deleted file mode 100644
index b4cff003..00000000
--- a/docs/archive/study/diagrams/tour-09.mmd
+++ /dev/null
@@ -1,7 +0,0 @@
-flowchart TD
- A[Start at root] --> B[BFS: visit all reachable nodes]
- B --> C[For each instance: hash in BTreeMap order]
- C --> D[For each node: hash in ascending NodeId order]
- D --> E[For each node's edges: hash in ascending EdgeId order]
- E --> F[BLAKE3 digest of canonical byte stream]
- F --> G[state_root: Hash]
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-09.pdf b/docs/archive/study/diagrams/tour-09.pdf
deleted file mode 100644
index b05eafc4..00000000
Binary files a/docs/archive/study/diagrams/tour-09.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-09.svg b/docs/archive/study/diagrams/tour-09.svg
deleted file mode 100644
index c44be00c..00000000
--- a/docs/archive/study/diagrams/tour-09.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-10.mmd b/docs/archive/study/diagrams/tour-10.mmd
deleted file mode 100644
index cf7ccf63..00000000
--- a/docs/archive/study/diagrams/tour-10.mmd
+++ /dev/null
@@ -1,29 +0,0 @@
-flowchart TD
- subgraph "Partitioning"
- I[Items] --> P[partition_into_shards]
- P --> S0[Shard 0]
- P --> S1[Shard 1]
- P --> S2[Shard 2]
- P --> S3[Shard 3]
- P --> S255[Shard 255]
- end
-
- subgraph "Work Stealing"
- W0[Worker 0] -->|claims| S0
- W0 -->|claims| S1
- W1[Worker 1] -->|claims| S2
- W1 -->|claims| S3
- end
-
- subgraph "Execution"
- S0 --> D0[TickDelta 0]
- S1 --> D0
- S2 --> D1[TickDelta 1]
- S3 --> D1
- end
-
- subgraph "Merge"
- D0 --> M[merge_deltas]
- D1 --> M
- M --> O[Canonical Ops]
- end
diff --git a/docs/archive/study/diagrams/tour-10.pdf b/docs/archive/study/diagrams/tour-10.pdf
deleted file mode 100644
index e5dc50cf..00000000
Binary files a/docs/archive/study/diagrams/tour-10.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-10.svg b/docs/archive/study/diagrams/tour-10.svg
deleted file mode 100644
index 10ebe51d..00000000
--- a/docs/archive/study/diagrams/tour-10.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-11.mmd b/docs/archive/study/diagrams/tour-11.mmd
deleted file mode 100644
index 39f0e317..00000000
--- a/docs/archive/study/diagrams/tour-11.mmd
+++ /dev/null
@@ -1,17 +0,0 @@
-flowchart LR
- subgraph "Before Tick"
- S1[Snapshot N immutable]
- end
-
- subgraph "During Tick"
- GV[GraphView reads from S1]
- TD[TickDelta accumulates ops]
- GV -->|reads| S1
- end
-
- subgraph "After Commit"
- S2[Snapshot N+1 new immutable]
- S1 -.->|structural sharing| S2
- end
-
- TD -->|apply ops| S2
diff --git a/docs/archive/study/diagrams/tour-11.pdf b/docs/archive/study/diagrams/tour-11.pdf
deleted file mode 100644
index 9e54e077..00000000
Binary files a/docs/archive/study/diagrams/tour-11.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-11.svg b/docs/archive/study/diagrams/tour-11.svg
deleted file mode 100644
index 01eb7d4f..00000000
--- a/docs/archive/study/diagrams/tour-11.svg
+++ /dev/null
@@ -1 +0,0 @@
-Snapshot N+1 new immutable
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-12.mmd b/docs/archive/study/diagrams/tour-12.mmd
deleted file mode 100644
index 383932ab..00000000
--- a/docs/archive/study/diagrams/tour-12.mmd
+++ /dev/null
@@ -1,16 +0,0 @@
-graph TD
- subgraph "Initial State"
- ROOT[Root type: site]
- HOME[Home Page type: page α.title: 'Welcome']
- ABOUT[About Page type: page α.title: 'About Us']
- LINK[Link type: link α.target: About]
-
- ROOT -->|edge:root_page| HOME
- ROOT -->|edge:page| ABOUT
- HOME -->|edge:content| LINK
- LINK -.->|resolves to| ABOUT
- end
-
- subgraph "View State"
- V[Viewer α.current: Home]
- end
diff --git a/docs/archive/study/diagrams/tour-12.pdf b/docs/archive/study/diagrams/tour-12.pdf
deleted file mode 100644
index 1b3cef24..00000000
Binary files a/docs/archive/study/diagrams/tour-12.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-12.svg b/docs/archive/study/diagrams/tour-12.svg
deleted file mode 100644
index 2fcfdb83..00000000
--- a/docs/archive/study/diagrams/tour-12.svg
+++ /dev/null
@@ -1 +0,0 @@
-Home Page type: page α.title: 'Welcome'
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-13.mmd b/docs/archive/study/diagrams/tour-13.mmd
deleted file mode 100644
index 605eb32a..00000000
--- a/docs/archive/study/diagrams/tour-13.mmd
+++ /dev/null
@@ -1,7 +0,0 @@
-flowchart TD
- A[Intent bytes arrive] --> B[Compute intent_id = BLAKE3 of intent payload]
- B --> C{intent_id seen before?}
- C -->|Yes| D[Return Duplicate]
- C -->|No| E[Create event node keyed by intent_id]
- E --> F[Create edge: inbox → event type: pending]
- F --> G[Return Accepted]
diff --git a/docs/archive/study/diagrams/tour-13.pdf b/docs/archive/study/diagrams/tour-13.pdf
deleted file mode 100644
index 083023d0..00000000
Binary files a/docs/archive/study/diagrams/tour-13.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-13.svg b/docs/archive/study/diagrams/tour-13.svg
deleted file mode 100644
index ef3eb10f..00000000
--- a/docs/archive/study/diagrams/tour-13.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-14.mmd b/docs/archive/study/diagrams/tour-14.mmd
deleted file mode 100644
index 41f385aa..00000000
--- a/docs/archive/study/diagrams/tour-14.mmd
+++ /dev/null
@@ -1,6 +0,0 @@
-flowchart TD
- A[Find pending event with minimum intent_id] --> B[For each cmd/* rule in stable order]
- B --> C{Rule matches?}
- C -->|No| B
- C -->|Yes| D[Apply matching rule]
- D --> E[Apply sys/ack_pending remove pending edge]
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-14.pdf b/docs/archive/study/diagrams/tour-14.pdf
deleted file mode 100644
index 424d0310..00000000
Binary files a/docs/archive/study/diagrams/tour-14.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-14.svg b/docs/archive/study/diagrams/tour-14.svg
deleted file mode 100644
index 34ac0783..00000000
--- a/docs/archive/study/diagrams/tour-14.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/docs/archive/study/diagrams/tour-15.mmd b/docs/archive/study/diagrams/tour-15.mmd
deleted file mode 100644
index 6c054c30..00000000
--- a/docs/archive/study/diagrams/tour-15.mmd
+++ /dev/null
@@ -1,21 +0,0 @@
-graph TB
- subgraph "Viewer"
- R[Renderer WGPU]
- L[Layout Engine Force-directed]
- D[Diff Processor]
- S[State Cache]
- end
-
- subgraph "Session"
- SC[Session Client]
- end
-
- subgraph "Output"
- Screen[Screen]
- end
-
- SC -->|WarpDiff| D
- D -->|updates| S
- S -->|positions| L
- L -->|vertices| R
- R -->|pixels| Screen
diff --git a/docs/archive/study/diagrams/tour-15.pdf b/docs/archive/study/diagrams/tour-15.pdf
deleted file mode 100644
index daa00efd..00000000
Binary files a/docs/archive/study/diagrams/tour-15.pdf and /dev/null differ
diff --git a/docs/archive/study/diagrams/tour-15.svg b/docs/archive/study/diagrams/tour-15.svg
deleted file mode 100644
index 732aba41..00000000
--- a/docs/archive/study/diagrams/tour-15.svg
+++ /dev/null
@@ -1 +0,0 @@
-Layout Engine Force-directed
\ No newline at end of file
diff --git a/docs/archive/study/echo-tour-de-code-directors-cut.pdf b/docs/archive/study/echo-tour-de-code-directors-cut.pdf
deleted file mode 100644
index e56610b9..00000000
Binary files a/docs/archive/study/echo-tour-de-code-directors-cut.pdf and /dev/null differ
diff --git a/docs/archive/study/echo-tour-de-code-directors-cut.tex b/docs/archive/study/echo-tour-de-code-directors-cut.tex
deleted file mode 100644
index 1c4985a6..00000000
--- a/docs/archive/study/echo-tour-de-code-directors-cut.tex
+++ /dev/null
@@ -1,1330 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Options for packages loaded elsewhere
-\PassOptionsToPackage{unicode}{hyperref}
-\PassOptionsToPackage{hyphens}{url}
-\documentclass[11pt]{book}
-\usepackage[letterpaper, margin=1in]{geometry}
-\usepackage{xcolor}
-\usepackage{amsmath,amssymb}
-\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
-\usepackage{iftex}
-\ifPDFTeX
- \usepackage[T1]{fontenc}
- \usepackage[utf8]{inputenc}
- \usepackage{textcomp}
-\else
- \usepackage{unicode-math}
- \defaultfontfeatures{Scale=MatchLowercase}
- \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
-\fi
-\usepackage{lmodern}
-\ifPDFTeX\else\fi
-\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
-\IfFileExists{microtype.sty}{%
- \usepackage[]{microtype}
- \UseMicrotypeSet[protrusion]{basicmath}
-}{}
-\makeatletter
-\@ifundefined{KOMAClassName}{%
- \IfFileExists{parskip.sty}{%
- \usepackage{parskip}
- }{%
- \setlength{\parindent}{0pt}
- \setlength{\parskip}{6pt plus 2pt minus 1pt}}
-}{\KOMAoptions{parskip=half}}
-\makeatother
-\usepackage{color}
-\usepackage{fancyvrb}
-\newcommand{\VerbBar}{|}
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\},fontsize=\small}
-\newenvironment{Shaded}{\begin{quote}}{\end{quote}}
-\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}}
-\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}}
-\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}}
-\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}}
-\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}}
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}}
-\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\ExtensionTok}[1]{#1}
-\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}}
-\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}}
-\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\NormalTok}[1]{#1}
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}}
-\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}}
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}}
-\newcommand{\RegionMarkerTok}[1]{#1}
-\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}}
-\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}}
-\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\usepackage{longtable,booktabs,array}
-\newcounter{none}
-\usepackage{calc}
-\usepackage{etoolbox}
-\makeatletter
-\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
-\makeatother
-\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
-\makesavenoteenv{longtable}
-\setlength{\emergencystretch}{3em}
-\providecommand{\tightlist}{%
- \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
-\usepackage{bookmark}
-\IfFileExists{xurl.sty}{\usepackage{xurl}}{}
-\urlstyle{same}
-\hypersetup{
- hidelinks,
- pdfcreator={LaTeX via pandoc}}
-
-% ═══════════════════════════════════════════════════════════════════════════════
-% DIRECTOR'S CUT STYLING
-% ═══════════════════════════════════════════════════════════════════════════════
-\usepackage{tcolorbox}
-\tcbuselibrary{skins,breakable}
-\usepackage{fontawesome5}
-\usepackage{pifont}
-\usepackage{mdframed}
-
-% Director's Commentary - conversational asides
-\newenvironment{directors}
-{\begin{mdframed}[
- linecolor=blue!60,
- linewidth=2pt,
- leftline=true,
- rightline=false,
- topline=false,
- bottomline=false,
- backgroundcolor=blue!3,
- innerleftmargin=12pt,
- innerrightmargin=10pt,
- innertopmargin=8pt,
- innerbottommargin=8pt,
- skipabove=12pt,
- skipbelow=12pt
-]\small\sffamily\color{blue!70!black}}
-{\end{mdframed}}
-
-% "Pro tip" callouts
-\newenvironment{protip}
-{\begin{mdframed}[
- linecolor=green!60!black,
- linewidth=2pt,
- leftline=true,
- rightline=false,
- topline=false,
- bottomline=false,
- backgroundcolor=green!5,
- innerleftmargin=12pt,
- innerrightmargin=10pt,
- innertopmargin=8pt,
- innerbottommargin=8pt,
- skipabove=12pt,
- skipbelow=12pt
-]\small\sffamily\color{green!50!black}\textbf{Pro Tip:} }
-{\end{mdframed}}
-
-% "Watch out" warnings
-\newenvironment{watchout}
-{\begin{mdframed}[
- linecolor=orange!80!black,
- linewidth=2pt,
- leftline=true,
- rightline=false,
- topline=false,
- bottomline=false,
- backgroundcolor=orange!5,
- innerleftmargin=12pt,
- innerrightmargin=10pt,
- innertopmargin=8pt,
- innerbottommargin=8pt,
- skipabove=12pt,
- skipbelow=12pt
-]\small\sffamily\color{orange!70!black}\textbf{Heads Up:} }
-{\end{mdframed}}
-
-% "The Big Picture" for architectural context
-\newenvironment{bigpicture}
-{\begin{mdframed}[
- linecolor=purple!60,
- linewidth=2pt,
- leftline=true,
- rightline=false,
- topline=false,
- bottomline=false,
- backgroundcolor=purple!3,
- innerleftmargin=12pt,
- innerrightmargin=10pt,
- innertopmargin=8pt,
- innerbottommargin=8pt,
- skipabove=12pt,
- skipbelow=12pt
-]\small\sffamily\color{purple!70!black}\textbf{The Big Picture:} }
-{\end{mdframed}}
-
-\author{}
-\date{}
-
-\begin{document}
-\frontmatter
-
-\mainmatter
-\chapter*{Echo: Tour de Code}
-\addcontentsline{toc}{chapter}{Echo: Tour de Code}
-
-\begin{quote}
-\large\textbf{The Director's Cut}
-
-\normalsize
-A complete function-by-function trace of Echo's execution pipeline, with commentary explaining what's \emph{really} going on and why.
-
-File paths and line numbers accurate as of 2026-01-18.
-\end{quote}
-
-\begin{directors}
-Hey! Welcome to the Director's Cut of the Echo Tour de Code.
-
-I'm going to walk you through this codebase like we're pair programming. When I see something clever, I'll tell you why it's clever. When there's a non-obvious design decision, I'll explain the trade-offs. When there's a potential footgun, I'll point it out.
-
-The goal here isn't just to show you \emph{what} the code does---any decent grep can do that. I want you to understand \emph{why} it does it this way, and what would break if you changed it.
-
-Let's dive in.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\tableofcontents
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{1. Intent Ingestion}\label{intent-ingestion}
-
-\textbf{Entry Point:} \texttt{Engine::ingest\_intent()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:1216}
-
-\begin{directors}
-This is where everything starts. A user does something---clicks a button, submits a form, whatever---and that action gets serialized into bytes and fed into this function.
-
-The first thing to understand: Echo doesn't care \emph{what} those bytes mean. It treats them as opaque data. The semantics come later, when rules interpret the bytes. Right now, we're just doing bookkeeping.
-\end{directors}
-
-\subsection{1.1 Function Signature}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ ingest\_intent(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ intent\_bytes}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\DataTypeTok{u8}\NormalTok{]) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{IngestDisposition}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Returns:}
-\begin{itemize}
-\item \texttt{IngestDisposition::Accepted \{ intent\_id: Hash \}} --- New intent accepted
-\item \texttt{IngestDisposition::Duplicate \{ intent\_id: Hash \}} --- Already ingested
-\end{itemize}
-
-\begin{directors}
-Notice the return type. We don't just return ``success'' or ``failure''---we tell the caller \emph{what happened}. Did we actually ingest this intent, or did we already have it?
-
-This matters because in a distributed system, the same intent might arrive multiple times (network retries, replays, etc.). The caller needs to know whether this is a fresh intent or a duplicate so they can decide what to do next.
-\end{directors}
-
-\subsection{1.2 Complete Call Trace}
-
-\begin{verbatim}
-Engine::ingest_intent(intent_bytes: &[u8])
-│
-├─[1] compute_intent_id(intent_bytes) → Hash
-│ FILE: crates/warp-core/src/inbox.rs:205
-│ CODE:
-│ let mut hasher = blake3::Hasher::new();
-│ hasher.update(b"intent:"); // Domain separation
-│ hasher.update(intent_bytes);
-│ hasher.finalize().into() // → [u8; 32]
-\end{verbatim}
-
-\begin{directors}
-Okay, stop right here. This is the most important line in the entire function.
-
-See that \texttt{b"intent:"} prefix? That's called \textbf{domain separation}, and it's a cryptographic best practice that a lot of codebases get wrong.
-
-Here's the problem it solves: imagine you have some bytes that represent an intent. Now imagine those \emph{exact same bytes} could also be interpreted as a node ID, or an edge ID, or some other identifier. Without domain separation, they'd all hash to the same value, and you'd have collisions between completely different concepts.
-
-By prefixing with \texttt{"intent:"}, we guarantee that an intent hash can \emph{never} collide with a node hash (which uses \texttt{"node:"}), or a type hash (\texttt{"type:"}), etc. Even if the raw bytes are identical, the hashes will be different.
-
-Echo does this everywhere:
-\begin{itemize}
-\item \texttt{"intent:"} for intent IDs
-\item \texttt{"node:"} for node IDs
-\item \texttt{"type:"} for type IDs
-\item \texttt{"edge:"} for edge IDs
-\end{itemize}
-
-If you're ever tempted to add a new ID type, remember to pick a unique prefix. Future you will thank present you.
-\end{directors}
-
-\begin{verbatim}
-├─[2] NodeId(intent_id)
-│ Creates strongly-typed NodeId from Hash
-\end{verbatim}
-
-\begin{protip}
-These newtype wrappers (\texttt{NodeId}, \texttt{EdgeId}, \texttt{TypeId}, etc.) are all just 32 bytes under the hood. But Rust's type system won't let you accidentally pass a \texttt{NodeId} where an \texttt{EdgeId} is expected. Zero runtime cost, maximum compile-time safety.
-\end{protip}
-
-\begin{verbatim}
-├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore>
-│ FILE: crates/warp-core/src/engine_impl.rs:1221
-│ ERROR: EngineError::UnknownWarp if None
-│
-├─[4] Extract root_node_id from self.current_root.local_id
-│
-├─[5] STRUCTURAL NODE CREATION (Idempotent)
-│ ├─ make_node_id("sim") → NodeId
-│ │ FILE: crates/warp-core/src/ident.rs:93
-│ │ CODE: blake3("node:" || "sim")
-│ │
-│ ├─ make_node_id("sim/inbox") → NodeId
-│ │ CODE: blake3("node:" || "sim/inbox")
-│ │
-│ ├─ make_type_id("sim") → TypeId
-│ │ FILE: crates/warp-core/src/ident.rs:85
-│ │ CODE: blake3("type:" || "sim")
-│ │
-│ ├─ make_type_id("sim/inbox") → TypeId
-│ ├─ make_type_id("sim/inbox/event") → TypeId
-│ │
-│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty })
-│ │ FILE: crates/warp-core/src/graph.rs:175
-│ │ CODE: self.nodes.insert(id, record)
-│ │
-│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty })
-\end{verbatim}
-
-\begin{directors}
-Step [5] is doing something subtle: it's creating the structural scaffolding for intents \emph{idempotently}.
-
-What does that mean? Well, imagine this is the first intent ever ingested. The ``sim'' node doesn't exist yet, nor does the ``sim/inbox'' node. So we create them.
-
-But what if this is the millionth intent? Those structural nodes already exist. And here's the key insight: \textbf{because the IDs are derived from the names deterministically}, we get the same ID every time. \texttt{make\_node\_id("sim")} \emph{always} returns the same hash.
-
-So when we call \texttt{store.insert\_node(sim\_id, ...)}, if the node already exists with that ID, it's just a no-op (or an update---same difference for immutable nodes).
-
-This is the beauty of content-addressed storage. You don't need ``if exists'' checks everywhere. Just compute the ID, do the insert, and let the storage layer handle deduplication.
-\end{directors}
-
-\begin{verbatim}
-├─[6] STRUCTURAL EDGE CREATION
-│ ├─ make_edge_id("edge:root/sim") → EdgeId
-│ │ FILE: crates/warp-core/src/ident.rs:109
-│ │ CODE: blake3("edge:" || "edge:root/sim")
-│ │
-│ ├─ store.insert_edge(root_id, EdgeRecord { ... })
-│ │ FILE: crates/warp-core/src/graph.rs:188
-│ │ └─ GraphStore::upsert_edge_record(from, edge)
-│ │ FILE: crates/warp-core/src/graph.rs:196
-│ │ UPDATES:
-│ │ self.edge_index.insert(edge_id, from)
-│ │ self.edge_to_index.insert(edge_id, to)
-│ │ self.edges_from.entry(from).or_default().push(edge)
-│ │ self.edges_to.entry(to).or_default().push(edge_id)
-│ │
-│ └─ store.insert_edge(sim_id, EdgeRecord { ... }) [sim → inbox]
-\end{verbatim}
-
-\begin{directors}
-Look at all those index updates in \texttt{upsert\_edge\_record}. We're maintaining \emph{four separate indices} for edges:
-
-\begin{enumerate}
-\item \texttt{edge\_index}: edge ID $\rightarrow$ source node
-\item \texttt{edge\_to\_index}: edge ID $\rightarrow$ target node
-\item \texttt{edges\_from}: source node $\rightarrow$ list of edges
-\item \texttt{edges\_to}: target node $\rightarrow$ list of edge IDs
-\end{enumerate}
-
-Why so many? Because graph queries can go in any direction:
-\begin{itemize}
-\item ``What edges leave this node?'' $\rightarrow$ \texttt{edges\_from}
-\item ``What edges arrive at this node?'' $\rightarrow$ \texttt{edges\_to}
-\item ``Given this edge, what's its source?'' $\rightarrow$ \texttt{edge\_index}
-\item ``Given this edge, what's its target?'' $\rightarrow$ \texttt{edge\_to\_index}
-\end{itemize}
-
-Each of these is O(1) lookup. Yes, it's more memory. Yes, it's more bookkeeping on mutations. But graph traversal is \emph{constant} in Echo, and that's worth a lot.
-\end{directors}
-
-\begin{verbatim}
-├─[7] DUPLICATE DETECTION
-│ store.node(&event_id) → Option<&NodeRecord>
-│ FILE: crates/warp-core/src/graph.rs:87
-│ CODE: self.nodes.get(id)
-│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id })
-\end{verbatim}
-
-\begin{directors}
-Here's where the content-addressing pays off beautifully.
-
-Remember how we computed \texttt{intent\_id} by hashing the intent bytes? And remember how we're about to use that same ID as the event node's ID?
-
-That means: \textbf{if this exact intent was ever ingested before, it created a node with this exact ID}. So we can detect duplicates just by checking if the node exists.
-
-No database sequence numbers. No UUIDs. No distributed coordination. Just: hash the bytes, check if that node exists. That's it.
-
-This is why content-addressed systems are so elegant. Deduplication is \emph{free}.
-\end{directors}
-
-\begin{verbatim}
-├─[8] EVENT NODE CREATION
-│ store.insert_node(event_id, NodeRecord { ty: event_ty })
-│ NOTE: event_id = intent_id (content-addressed)
-│
-├─[9] INTENT ATTACHMENT
-│ ├─ AtomPayload::new(type_id, bytes)
-│ │ FILE: crates/warp-core/src/attachment.rs:149
-│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) }
-│ │
-│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload)))
-│ FILE: crates/warp-core/src/graph.rs:125
-│ CODE: self.node_attachments.insert(id, v)
-\end{verbatim}
-
-\begin{directors}
-The graph structure (nodes and edges) is just the skeleton. The actual \emph{data}---the intent bytes---lives in an ``attachment.''
-
-Think of it like this: the node is the mailbox, and the attachment is the letter inside. The mailbox has a predictable address (the content-addressed ID), but the contents can be anything.
-
-This separation is useful because you can query the graph structure without loading all the attachment data. For large payloads, that's a big memory savings.
-\end{directors}
-
-\begin{verbatim}
-├─[10] PENDING EDGE CREATION (Queue Membership)
-│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId
-│ │ FILE: crates/warp-core/src/inbox.rs:212
-│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id)
-│ │
-│ └─ store.insert_edge(inbox_id, EdgeRecord {
-│ id: pending_edge_id,
-│ from: inbox_id,
-│ to: event_id,
-│ ty: make_type_id("edge:pending")
-│ })
-│
-└─[11] return Ok(IngestDisposition::Accepted { intent_id })
-\end{verbatim}
-
-\begin{bigpicture}
-The ``pending edge'' is how Echo implements a queue using a graph.
-
-The inbox node is the queue. Each pending edge from inbox to an event node represents ``this event is waiting to be processed.'' When a rule processes the event, it deletes the pending edge.
-
-Why use a graph for a queue? Because now the queue is \emph{part of the state that gets hashed and committed}. You can replay the entire system from any snapshot, and the queue will be exactly where it was.
-
-No external message broker. No separate queue database. It's all just graph.
-\end{bigpicture}
-
-\subsection{1.3 Data Structures Modified}
-
-{\def\LTcaptype{none}
-\begin{longtable}[]{@{}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.42}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.27}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.31}}@{}}
-\toprule\noalign{}
-Structure & Field & Change \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{GraphStore} & \texttt{nodes} & +3 entries (sim, inbox, event) \\
-\texttt{GraphStore} & \texttt{edges\_from} & +3 edges \\
-\texttt{GraphStore} & \texttt{edges\_to} & +3 reverse entries \\
-\texttt{GraphStore} & \texttt{edge\_index} & +3 edge$\rightarrow$from mappings \\
-\texttt{GraphStore} & \texttt{edge\_to\_index} & +3 edge$\rightarrow$to mappings \\
-\texttt{GraphStore} & \texttt{node\_attachments} & +1 (event $\rightarrow$ intent payload) \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{2. Transaction Lifecycle}\label{transaction-lifecycle}
-
-\subsection{2.1 Begin Transaction}
-
-\textbf{Entry Point:} \texttt{Engine::begin()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:711-719}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ begin(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ TxId }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter}\OperatorTok{.}\NormalTok{wrapping\_add(}\DecValTok{1}\NormalTok{)}\OperatorTok{;} \CommentTok{// Line 713}
- \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{==} \DecValTok{0} \OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \DecValTok{1}\OperatorTok{;} \CommentTok{// Line 715}
- \OperatorTok{\}}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{insert(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter)}\OperatorTok{;} \CommentTok{// Line 717}
- \PreprocessorTok{TxId::}\NormalTok{from\_raw(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter) }\CommentTok{// Line 718}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{directors}
-This is refreshingly simple for a transaction begin, right? Just increment a counter and track it in a set.
-
-But look at line 715. What's up with that \texttt{if tx\_counter == 0} check?
-
-Here's the deal: \texttt{TxId(0)} is reserved as an invalid/sentinel value throughout the codebase. It means ``no transaction'' or ``null transaction.'' If you ever wrap around from \texttt{u64::MAX} back to 0, you'd suddenly have a valid-looking transaction ID that's actually invalid.
-
-Now, will you ever hit $2^{64}$ transactions? Almost certainly not. The sun will burn out first. But this check costs one branch that's basically never taken, and it eliminates an entire class of potential bugs.
-
-This is defensive programming done right. The cost is negligible, and the safety is real.
-\end{directors}
-
-\begin{protip}
-See that \texttt{\#[repr(transparent)]} on \texttt{TxId}? That guarantees it has the exact same memory layout as a raw \texttt{u64}. You get type safety at compile time with zero runtime overhead. Use newtypes liberally---they're free!
-\end{protip}
-
-\subsection{2.2 Abort Transaction}
-
-\textbf{Entry Point:} \texttt{Engine::abort()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:962-968}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ abort(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx}\OperatorTok{.}\NormalTok{value())}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{scheduler}\OperatorTok{.}\NormalTok{finalize\_tx(tx)}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{bus}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization\_errors}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{directors}
-Notice what's \emph{not} here: there's no rollback of graph state.
-
-Why? Because Echo hasn't touched the graph yet! All the matching and scheduling happens without mutating anything. The graph only changes during commit.
-
-This is a fundamental architectural decision: \textbf{the graph is effectively immutable until commit}. You can abort at any point before commit and there's nothing to undo. Just clear the transient state and you're done.
-
-Compare this to traditional databases where abort might mean replaying a undo log. Here it's just clearing some hash maps.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{3. Rule Matching}\label{rule-matching}
-
-\textbf{Entry Point:} \texttt{Engine::apply()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:730-737}
-
-\begin{bigpicture}
-Rules are the heart of Echo's reactive programming model. A rule says ``when you see this pattern in the graph, do this thing.''
-
-But here's the key insight: matching is \textbf{pure}. The matcher function reads the graph, decides if the pattern matches, but doesn't modify anything. All the mutations happen later, during execution.
-
-This separation of matching from execution is what enables parallel scheduling.
-\end{bigpicture}
-
-\subsection{3.1 Function Signature}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ apply(}
- \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}
-\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}
-\NormalTok{ rule\_name}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\OperatorTok{,}
-\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{ApplyResult}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{3.2 Key Steps}
-
-\begin{verbatim}
-Engine::apply(tx, rule_name, scope)
-│
-├─[4] CREATE GRAPHVIEW
-│ GraphView::new(store) → GraphView<'_>
-│ FILE: crates/warp-core/src/graph_view.rs
-│ TYPE: Read-only wrapper (Copy, 8 bytes)
-\end{verbatim}
-
-\begin{directors}
-This is one of my favorite patterns in Echo.
-
-\texttt{GraphView} is a \emph{read-only wrapper} around \texttt{GraphStore}. It's literally just a pointer (8 bytes), and it implements \texttt{Copy}, so passing it around is essentially free.
-
-But here's the magic: \texttt{GraphView} only exposes read methods. No mutations. The Rust compiler \emph{physically cannot} let you modify the graph through a \texttt{GraphView}.
-
-This is Rust's type system doing real work. You don't need runtime checks for ``is this a read-only transaction?'' The type system guarantees it at compile time. Any code that takes a \texttt{GraphView} is provably read-only.
-\end{directors}
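A minimal sketch of the pattern with hypothetical stand-in types (not Echo's actual \texttt{GraphView} API): the view is a \texttt{Copy} wrapper around a shared reference, and it simply never exposes a mutating method.

```rust
use std::collections::BTreeMap;

/// Stand-in for the mutable store (hypothetical type, not Echo's).
pub struct GraphStore {
    nodes: BTreeMap<u64, String>,
}

/// Read-only view: a Copy-able shared reference, pointer-sized.
#[derive(Clone, Copy)]
pub struct GraphView<'a> {
    store: &'a GraphStore,
}

impl<'a> GraphView<'a> {
    pub fn new(store: &'a GraphStore) -> Self {
        GraphView { store }
    }

    /// Only read methods exist; there is no way to mutate the
    /// store through a GraphView.
    pub fn node(&self, id: u64) -> Option<&'a String> {
        self.store.nodes.get(&id)
    }
}
```

Because the only mutable path to the store is an \texttt{\&mut GraphStore}, which cannot coexist with outstanding views, read-only access is enforced by the borrow checker rather than by runtime checks.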
-
-\begin{verbatim}
-├─[5] CALL MATCHER
-│ (rule.matcher)(view, scope) → bool
-│ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool
-│ IF false: return Ok(ApplyResult::NoMatch)
-│
-├─[8] COMPUTE FOOTPRINT
-│ (rule.compute_footprint)(view, scope) → Footprint
-│ RETURNS:
-│ Footprint {
-│ n_read: IdSet, // Nodes read
-│ n_write: IdSet, // Nodes written
-│ e_read: IdSet, // Edges read
-│ e_write: IdSet, // Edges written
-│ a_read: AttachmentSet, // Attachments read
-│ a_write: AttachmentSet, // Attachments written
-│ b_in: PortSet, // Input ports
-│ b_out: PortSet, // Output ports
-│ factor_mask: u64, // O(1) prefilter
-│ }
-\end{verbatim}
-
-\begin{directors}
-The footprint is the \textbf{declaration of intent}.
-
-Before a rule can execute, it must tell the scheduler exactly which nodes, edges, and attachments it plans to read and write. Not approximately. Not ``somewhere in this subgraph.'' \emph{Exactly} these IDs.
-
-This is a constraint on rule authors, but it's what makes parallelism tractable. If two rules have non-overlapping footprints, they can run concurrently. If they overlap, the scheduler serializes them.
-
-Think of it like declaring your locks upfront, except you never actually acquire locks---you just declare your intentions and let the scheduler figure out what can run in parallel.
-\end{directors}
-
-\begin{watchout}
-If your footprint is wrong---if you access something you didn't declare---Bad Things happen. The parallel execution model assumes footprints are honest. There's debug-mode validation, but in release mode, you're on the honor system.
-
-Always over-declare rather than under-declare. If you \emph{might} read a node, put it in \texttt{n\_read}. Correctness beats parallelism.
-\end{watchout}
-
-\begin{verbatim}
-└─[11] ENQUEUE TO SCHEDULER
- self.scheduler.enqueue(tx, PendingRewrite { ... })
- │
- └─ PendingTx::enqueue(scope_be32, rule_id, payload)
- FILE: crates/warp-core/src/scheduler.rs:331-355
-
- CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS
- fat[thin[i].handle] = Some(payload) // Overwrite
- thin[i].nonce = next_nonce++ // Refresh nonce
-
- CASE 2: New entry
- fat.push(Some(payload))
- thin.push(RewriteThin { scope_be32, rule_id, nonce, handle })
- index.insert(key, thin.len() - 1)
-\end{verbatim}
-
-\begin{directors}
-See ``LAST WINS'' on duplicate entries? This is subtle but important.
-
-If you call \texttt{apply()} twice with the same rule and scope, you get one execution, not two. The second call \emph{replaces} the first.
-
-Why? Because matching a rule at a scope is \emph{idempotent}. If the rule matches at that scope, you want to execute it once, regardless of how many times you tried to apply it.
-
-The ``nonce'' gets refreshed on replacement, which affects sort order (we'll see why later), but the key point is: duplicate apply calls are collapsed into one.
-\end{directors}
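A sketch of the last-wins enqueue under simplified types (plain \texttt{u64} scope hash, \texttt{String} payload, std \texttt{HashMap} standing in for the real index):

```rust
use std::collections::HashMap;

/// Thin sort record (hypothetical shape mirroring the description above).
#[derive(Clone, Copy)]
struct RewriteThin {
    key: (u64, u32), // (scope hash stand-in, rule_id)
    nonce: u32,
    handle: usize, // index into `fat`
}

#[derive(Default)]
struct PendingQueue {
    thin: Vec<RewriteThin>,
    fat: Vec<Option<String>>, // payload stand-in
    index: HashMap<(u64, u32), usize>,
    next_nonce: u32,
}

impl PendingQueue {
    /// Last-wins enqueue: a duplicate (scope, rule) overwrites the
    /// payload in place and refreshes the nonce.
    fn enqueue(&mut self, scope: u64, rule_id: u32, payload: String) {
        let key = (scope, rule_id);
        let nonce = self.next_nonce;
        self.next_nonce += 1;
        if let Some(&i) = self.index.get(&key) {
            self.fat[self.thin[i].handle] = Some(payload); // overwrite
            self.thin[i].nonce = nonce;                    // refresh nonce
        } else {
            self.fat.push(Some(payload));
            self.thin.push(RewriteThin { key, nonce, handle: self.fat.len() - 1 });
            self.index.insert(key, self.thin.len() - 1);
        }
    }
}
```

Note that the thin record stays where it was; only its payload and nonce change, so a replaced entry re-sorts by its fresh nonce during drain.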
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{4. Scheduler: Drain \& Reserve}\label{scheduler-drain-reserve}
-
-\begin{bigpicture}
-This is where Echo's determinism guarantee gets forged.
-
-You've enqueued a bunch of rules. They were enqueued in whatever order the application called \texttt{apply()}. Now we need to execute them in a \textbf{canonical order}---the same order every time, regardless of timing, regardless of which thread called what when.
-
-The scheduler does this in two phases:
-\begin{enumerate}
-\item \textbf{Drain}: Sort all pending rewrites into canonical order
-\item \textbf{Reserve}: Walk through them, checking for conflicts
-\end{enumerate}
-\end{bigpicture}
-
-\subsection{4.1 Drain Phase (Radix Sort)}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::drain\_for\_tx()} \\
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:109-113}
-
-\begin{verbatim}
-RadixScheduler::drain_for_tx(tx)
-│
-└─ PendingTx::drain_in_order()
- │
- ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)?
- │ ├─ YES: sort_unstable_by(cmp_thin)
- │ └─ NO: radix_sort()
- │
- └─ radix_sort()
- │
- └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══
- │
- ├─ PHASE 1: COUNT BUCKETS
- ├─ PHASE 2: PREFIX SUMS
- └─ PHASE 3: STABLE SCATTER
-\end{verbatim}
-
-\begin{directors}
-Twenty passes of radix sort. Let's unpack why.
-
-First: why radix sort instead of quicksort or mergesort?
-
-\begin{enumerate}
-\item \textbf{Determinism}: Radix sort is inherently stable---equal elements stay in their original order. Quicksort's behavior depends on pivot selection, which can vary.
-
-\item \textbf{O(n) complexity}: With a fixed key size, radix sort is linear. We're sorting by 320 bits (256 bits of scope\_hash + 32 bits of rule\_id + 32 bits of nonce) in 16-bit digits, so it's O(20n) = O(n).
-
-\item \textbf{Cache-friendly}: Each pass is a sequential scan. Modern CPUs love sequential access.
-\end{enumerate}
-
-The 1024-element threshold is practical: for small arrays, the constant factors of radix sort (setting up histograms, etc.) exceed its benefits. Below that threshold, a comparison sort wins.
-\end{directors}
-
-\begin{verbatim}
-BUCKET EXTRACTION (bucket16):
-FILE: crates/warp-core/src/scheduler.rs:481-498
-
-Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2]
-Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4]
-Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2]
-Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4]
-Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32]
-...
-Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD)
-
-SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic
-\end{verbatim}
-
-\begin{directors}
-This is LSD (Least Significant Digit) radix sort---we process from least significant to most significant.
-
-The final sort order is: \texttt{(scope\_hash, rule\_id, nonce)}.
-
-Why this order?
-\begin{itemize}
-\item \textbf{scope\_hash first}: Rules at different scopes can potentially run in parallel. Grouping by scope makes conflict detection efficient.
-\item \textbf{rule\_id second}: When multiple rules match at the same scope, we need a deterministic order.
-\item \textbf{nonce last}: The tiebreaker for duplicate (scope, rule) pairs. Remember ``LAST WINS''? The nonce determines which duplicate survives.
-\end{itemize}
-
-Because it's LSD, we process in reverse order: nonce first (passes 0-1), then rule\_id (passes 2-3), then scope\_hash (passes 4-19).
-\end{directors}
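The same three-phase structure, reduced to plain \texttt{u32} keys and two 16-bit digit passes (the real sort runs 20 passes over the 40-byte composite key, reusing its histogram and scratch buffers instead of reallocating):

```rust
/// Stable LSD radix sort over u32 keys using 16-bit digits.
fn radix_sort_u32(data: &mut Vec<u32>) {
    let mut scratch = vec![0u32; data.len()];
    for pass in 0..2 {
        let shift = pass * 16;
        // Phase 1: count bucket occupancy for this digit.
        let mut counts = vec![0usize; 1 << 16];
        for &x in data.iter() {
            counts[((x >> shift) & 0xFFFF) as usize] += 1;
        }
        // Phase 2: exclusive prefix sum turns counts into write indices.
        let mut sum = 0;
        for c in counts.iter_mut() {
            let n = *c;
            *c = sum;
            sum += n;
        }
        // Phase 3: stable scatter into the scratch buffer, then swap.
        for &x in data.iter() {
            let b = ((x >> shift) & 0xFFFF) as usize;
            scratch[counts[b]] = x;
            counts[b] += 1;
        }
        std::mem::swap(data, &mut scratch);
    }
}
```

After an even number of passes the sorted data lands back in \texttt{data}; the scatter in phase 3 preserves input order within each bucket, which is exactly the stability the later (more significant) passes rely on.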
-
-\subsection{4.2 Reserve Phase (Independence Check)}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::reserve()} \\
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:134-143}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ reserve(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}\NormalTok{ pr}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ PendingRewrite) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ active }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{active}\OperatorTok{.}\NormalTok{entry(tx)}\OperatorTok{.}\NormalTok{or\_insert\_with(}\PreprocessorTok{ActiveFootprints::}\NormalTok{new)}\OperatorTok{;}
- \ControlFlowTok{if} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{has\_conflict(active}\OperatorTok{,}\NormalTok{ pr) }\OperatorTok{\{}
- \ControlFlowTok{return} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_conflict(pr)}\OperatorTok{;}
- \OperatorTok{\}}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{mark\_all(active}\OperatorTok{,}\NormalTok{ pr)}\OperatorTok{;}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_reserved(pr)}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{directors}
-This is classic two-phase locking... without the locks.
-
-We walk through the sorted rewrites. For each one:
-\begin{enumerate}
-\item Check if its footprint conflicts with already-reserved footprints
-\item If no conflict, mark its footprint as reserved and accept it
-\item If conflict, reject it (it'll need to wait for a future tick)
-\end{enumerate}
-
-The conflict matrix is what you'd expect:
-
-\begin{center}
-\begin{tabular}{|c|c|c|}
-\hline
- & Read & Write \\
-\hline
-Read & \checkmark & X \\
-\hline
-Write & X & X \\
-\hline
-\end{tabular}
-\end{center}
-
-Multiple readers are fine. Any writer conflicts with readers and other writers.
-\end{directors}
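The conflict matrix in code, over simplified footprints (just node-ID read/write sets; hypothetical types, not Echo's \texttt{Footprint}):

```rust
use std::collections::HashSet;

/// Footprint stand-in: node IDs this rewrite reads and writes.
struct Footprint {
    read: HashSet<u64>,
    write: HashSet<u64>,
}

/// Read/read is compatible; any overlap involving a write conflicts.
fn conflicts(a: &Footprint, b: &Footprint) -> bool {
    a.write.iter().any(|id| b.read.contains(id) || b.write.contains(id))
        || b.write.iter().any(|id| a.read.contains(id))
}
```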
-
-\subsection{4.3 GenSet: O(1) Conflict Detection}
-
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:509-535}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{}
-\NormalTok{ gen}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} \CommentTok{// Current generation}
-\NormalTok{ seen}\OperatorTok{:}\NormalTok{ FxHashMap}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{,} \DataTypeTok{u32}\OperatorTok{\textgreater{},} \CommentTok{// Key → generation when marked}
-\OperatorTok{\}}
-
-\KeywordTok{impl}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{:} \BuiltInTok{Hash} \OperatorTok{+} \BuiltInTok{Eq} \OperatorTok{+} \BuiltInTok{Copy}\OperatorTok{\textgreater{}}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ contains(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \PreprocessorTok{matches!}\NormalTok{(}\KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{get(}\OperatorTok{\&}\NormalTok{key)}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(}\OperatorTok{\&}\NormalTok{g) }\ControlFlowTok{if}\NormalTok{ g }\OperatorTok{==} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}
- \OperatorTok{\}}
-
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ mark(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{insert(key}\OperatorTok{,} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}\OperatorTok{;}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{directors}
-Okay, this is my favorite data structure in the entire codebase. It's so simple and so clever.
-
-The problem: we need to track which keys are ``in the set'' for conflict detection. Between transactions, we need to clear the set.
-
-The naive approach: call \texttt{hash\_map.clear()} between transactions. That's O(n) where n is the number of keys.
-
-The clever approach: \textbf{generational clearing}.
-
-Instead of storing just keys, we store (key, generation). A key is ``in the set'' only if its stored generation matches the current generation.
-
-To ``clear'' the set? Just increment \texttt{gen}. That's it. O(1).
-
-All the old entries are still in the hash map, but they have stale generations, so \texttt{contains()} returns false for them. They're ghosts.
-
-The map grows over time, but since the same keys tend to be accessed repeatedly (temporal locality), it stabilizes quickly. And we never pay the O(n) clear cost.
-
-This pattern is criminally underused. Remember it.
-\end{directors}
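A self-contained sketch of the generational set (std \texttt{HashMap} instead of \texttt{FxHashMap}, and the generation field renamed because \texttt{gen} is reserved in newer Rust editions):

```rust
use std::collections::HashMap;

/// Generational set: "clearing" just bumps the generation counter, O(1).
struct GenSet<K> {
    generation: u32,
    seen: HashMap<K, u32>, // key -> generation when marked
}

impl<K: std::hash::Hash + Eq + Copy> GenSet<K> {
    fn new() -> Self {
        GenSet { generation: 0, seen: HashMap::new() }
    }

    /// A key is "in" only if it was marked in the current generation.
    fn contains(&self, key: K) -> bool {
        matches!(self.seen.get(&key), Some(&g) if g == self.generation)
    }

    fn mark(&mut self, key: K) {
        self.seen.insert(key, self.generation);
    }

    /// O(1) clear: stale entries become invisible ghosts.
    fn clear(&mut self) {
        self.generation += 1;
    }
}
```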
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{5. BOAW Parallel Execution}\label{boaw-parallel-execution}
-
-\textbf{Entry Point:} \texttt{execute\_parallel()} \\
-\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs:61-83}
-
-\begin{bigpicture}
-BOAW stands for ``Best Of All Worlds.'' The idea is simple but powerful:
-
-\begin{enumerate}
-\item Partition work items into shards based on their scope
-\item Spin up worker threads
-\item Workers claim shards and execute items
-\item Merge all the outputs into a single canonical result
-\end{enumerate}
-
-The key insight: \textbf{execution order doesn't matter if we sort the outputs}. Workers can execute in any order, claim shards in any order, even race against each other---as long as the merge produces the same result, we're deterministic.
-\end{bigpicture}
-
-\subsection{5.1 Entry Point}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{5.2 Sharding}
-
-\begin{verbatim}
-partition_into_shards(items.to_vec()) → Vec<Shard>
-│
-└─ FOR item IN items:
- │
- ├─ shard_of(&item.scope) → usize
- │ CODE:
- │ let bytes = scope.as_bytes();
- │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap();
- │ let val = u64::from_le_bytes(first_8);
- │ (val & 255) as usize // SHARD_MASK = 255
- │
- └─ shards[shard_id].items.push(item)
-\end{verbatim}
-
-\begin{directors}
-The sharding is beautifully simple: take the first 8 bytes of the node ID, interpret as a little-endian u64, mask with 255. You get a shard number from 0 to 255.
-
-Why 256 shards?
-\begin{itemize}
-\item \textbf{Fine enough}: With random node IDs, work distributes evenly across shards.
-\item \textbf{Coarse enough}: Each shard has multiple items, amortizing per-shard overhead.
-\item \textbf{Power of 2}: Masking is just a bitwise AND, no division needed.
-\end{itemize}
-
-Why is this deterministic? Because shard assignment depends only on the node ID, which is content-addressed. The same node always lands in the same shard.
-\end{directors}
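The shard function is small enough to sketch in full (with a hypothetical \texttt{[u8; 32]} node-ID stand-in):

```rust
const SHARD_MASK: u64 = 255;

/// Shard assignment: first 8 bytes of the ID as a little-endian u64,
/// masked down to 0..=255. Only the low byte matters for this mask,
/// but reading 8 bytes matches the described implementation.
fn shard_of(id: &[u8; 32]) -> usize {
    let first_8: [u8; 8] = id[0..8].try_into().unwrap();
    (u64::from_le_bytes(first_8) & SHARD_MASK) as usize
}
```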
-
-\subsection{5.3 Work Stealing Loop}
-
-\begin{verbatim}
-FOR _ IN 0..workers:
-│
-└─ s.spawn(move || { ... }) // ═══ WORKER THREAD ═══
- │
- └─ LOOP:
- │
- ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed)
- │ ATOMIC: Returns old value, increments counter
- │
- ├─ IF shard_id >= 256: break
- │
- └─ FOR item IN &shards[shard_id].items:
- └─ (item.exec)(view, &item.scope, &mut delta)
-\end{verbatim}
-
-\begin{directors}
-Each worker runs a loop: atomically claim the next shard number, process all items in that shard, repeat until no shards remain.
-
-See \texttt{Ordering::Relaxed}? That's the weakest memory ordering---basically ``no synchronization, just do the atomic operation.''
-
-Why is that safe here?
-\begin{enumerate}
-\item Each shard is processed by exactly one worker (atomic fetch-add guarantees unique assignment)
-\item Workers don't need to see each other's results until after \texttt{join()}
-\item The \texttt{join()} provides the synchronization barrier
-\end{enumerate}
-
-Using \texttt{Relaxed} instead of \texttt{SeqCst} avoids expensive memory barriers. On a 16-core machine, that matters.
-\end{directors}
-
-\begin{watchout}
-The shard claim order is non-deterministic. Worker 1 might claim shard 5 before worker 2 claims shard 3, or vice versa.
-
-This is fine! The merge phase sorts the outputs canonically. The execution order doesn't affect the final result.
-
-But if you're debugging and wondering why execution traces look different between runs, this is why.
-\end{watchout}
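A reduced sketch of the claim loop using \texttt{std::thread::scope} (summing per-shard numbers stands in for executing items; the per-worker split is nondeterministic, but the merged total is not):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Each worker atomically claims the next shard index until all shards
/// are taken. Relaxed ordering suffices for the counter because the
/// scope's implicit join is what synchronizes results with the caller.
fn run_workers(shards: &[Vec<u64>], workers: usize) -> Vec<u64> {
    let next_shard = AtomicUsize::new(0);
    let mut partials: Vec<u64> = Vec::new();
    std::thread::scope(|s| {
        let handles: Vec<_> = (0..workers)
            .map(|_| {
                s.spawn(|| {
                    let mut local = 0u64;
                    loop {
                        // Atomic fetch-add: each shard id handed out once.
                        let shard_id = next_shard.fetch_add(1, Ordering::Relaxed);
                        if shard_id >= shards.len() {
                            break;
                        }
                        local += shards[shard_id].iter().sum::<u64>();
                    }
                    local
                })
            })
            .collect();
        for h in handles {
            partials.push(h.join().unwrap());
        }
    });
    partials
}
```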
-
-\subsection{5.4 Enforced Execution Path}\label{enforced-execution-path}
-
-\textbf{Entry Point:} \texttt{execute\_item\_enforced()} \\
-\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs}
-
-When footprint enforcement is active, each item is executed via
-\texttt{execute\_item\_enforced()} instead of a bare function-pointer call.
-This wraps execution with \texttt{catch\_unwind} and performs post-hoc
-\texttt{check\_op()} validation on any newly-emitted ops.
-
-\begin{verbatim}
-execute_item_enforced(view, item, delta, footprint)
-│
-├─ ops_before = delta.len()
-│ Snapshot the op count BEFORE the executor runs
-│
-├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| {
-│ (item.exec)(view, &item.scope, delta)
-│ }))
-│
-├─ FOR op IN delta.ops()[ops_before..]:
-│ guard.check_op(op) → panic_any(FootprintViolation) on failure
-│ Validates that each newly-emitted op falls within the declared footprint.
-│ ExecItemKind::System items may emit warp-instance-level ops;
-│ ExecItemKind::User items may not.
-│
-└─ OUTCOME PRECEDENCE:
- ├─ IF check_op fails:
- │ panic_any(FootprintViolation)
- │ Footprint violations OVERRIDE executor panics — violation takes precedence.
- │ (FootprintViolation includes UnauthorizedInstanceOp and CrossWarpEmission.)
- │
- ├─ IF footprint is clean BUT executor panicked:
- │ std::panic::resume_unwind(payload)
- │ The original panic propagates to the caller.
- │
- └─ IF both clean:
- return Ok(())
-\end{verbatim}
-
-\begin{directors}
-This is perhaps the most interesting design decision in the enforcement system.
-
-\textbf{Why post-hoc instead of intercept-on-write?}
-
-The naive approach would be to wrap every \texttt{delta.push\_op()} call with a check. But that would add overhead to every write in the hot loop---and most writes are valid. Instead, we let the executor run at full speed, then scan the ops it produced. This is cheaper because:
-
-\begin{enumerate}
-\item Most rule invocations produce few ops (1-5 typically)
-\item The scan is a single pass over a small vec
-\item We avoid indirection/branching in the write path
-\end{enumerate}
-
-\textbf{Why does violation override panic?}
-
-Consider: a rule writes to node X (not in its footprint), then panics on an unrelated assertion. If we propagated the panic, the developer would see ``assertion failed'' and waste time debugging the wrong thing. By checking the delta first, we surface the \emph{root cause}---the footprint violation---which is almost always why the subsequent logic went wrong.
-
-\textbf{The Poison Invariant:} After a panic, the \texttt{TickDelta} is
-considered poisoned. The partially-written ops have no transactional rollback.
-The delta must be discarded---it cannot be merged or committed. This is safe
-because each worker has its own delta, so a poisoned delta doesn't contaminate
-other workers' output.
-\end{directors}
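A sketch of the outcome-precedence logic with stand-in types (\texttt{u64} ops and a closure as the footprint check; not the real \texttt{execute\_item\_enforced} signature):

```rust
use std::panic::{catch_unwind, resume_unwind, AssertUnwindSafe};

/// Post-hoc enforcement: run the executor unchecked, then validate only
/// the ops it appended. A footprint violation outranks an executor panic.
fn execute_enforced(
    delta: &mut Vec<u64>,             // op stream stand-in
    allowed: impl Fn(u64) -> bool,    // "is this op inside the footprint?"
    exec: impl FnOnce(&mut Vec<u64>), // the rule executor
) -> Result<(), String> {
    let ops_before = delta.len(); // snapshot BEFORE the executor runs
    let result = catch_unwind(AssertUnwindSafe(|| exec(&mut *delta)));
    // Scan only the newly-emitted ops.
    if let Some(&bad) = delta[ops_before..].iter().find(|&&op| !allowed(op)) {
        return Err(format!("footprint violation: op {bad}")); // overrides any panic
    }
    if let Err(payload) = result {
        resume_unwind(payload); // footprint clean: re-raise the executor's panic
    }
    Ok(())
}
```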
-
-\textbf{\texttt{ExecItemKind} (cfg-gated):}
-
-\begin{itemize}
-\tightlist
-\item
- \texttt{ExecItemKind::User} --- Normal rule executor. May emit
- node/edge/attachment ops scoped to the declared footprint. Cannot emit
- warp-instance-level ops (\texttt{UpsertWarpInstance},
- \texttt{DeleteWarpInstance}, \texttt{OpenPortal}).
-\item
- \texttt{ExecItemKind::System} --- Internal-only executor (e.g., portal
- opening). May emit warp-instance-level ops.
-\end{itemize}
-
-\begin{directors}
-The User/System distinction prevents a critical class of bugs: user-authored rules accidentally (or maliciously) creating/destroying warp instances. In a multiverse simulation, instance ops change the \emph{topology} of the timeline graph. Only engine-internal code (like the portal system) should have that power.
-
-\textbf{The triple cfg-gate pattern:}
-
-\begin{enumerate}
-\item \texttt{debug\_assertions} OR \texttt{footprint\_enforce\_release} --- always-on in dev, opt-in for release
-\item \texttt{not(unsafe\_graph)} --- escape hatch for benchmarks and fuzzing
-\end{enumerate}
-
-This means the \texttt{ExecItem} struct is \emph{literally a different size} depending on your build profile. In release without the enforcement feature, the \texttt{kind} field doesn't exist---zero overhead, not even a byte.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Delta Merge \& State Finalization}\label{delta-merge-state-finalization}
-
-\begin{bigpicture}
-Multiple workers have produced their deltas. Now we need to merge them into a single canonical result.
-
-The merge does three things:
-\begin{enumerate}
-\item Flatten all operations from all deltas
-\item Sort them by a canonical key
-\item Deduplicate, detecting conflicts along the way
-\end{enumerate}
-\end{bigpicture}
-
-\subsection{6.1 Canonical Merge}
-
-\textbf{Entry Point:} \texttt{merge\_deltas()} \\
-\textbf{File:} \texttt{crates/warp-core/src/boaw/merge.rs:36-75}
-
-\begin{verbatim}
-merge_deltas(deltas: Vec<TickDelta>) → Result<Vec<WarpOp>, MergeConflict>
-│
-├─[1] FLATTEN ALL OPS WITH ORIGINS
-│
-├─[2] CANONICAL SORT
-│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1)));
-│ ORDER: (WarpOpKey, OpOrigin) lexicographic
-│
-└─[3] DEDUPE & CONFLICT DETECTION
- GROUP by WarpOpKey
- IF all ops in group are identical: keep one
- ELSE: return Err(MergeConflict { writers })
-\end{verbatim}
-
-\begin{directors}
-The magic is in step 3: \textbf{benevolent coincidence}.
-
-If two rules independently decide to create the same edge, with the same properties, that's fine! They're in agreement. We keep one copy.
-
-But if they produce \emph{different} operations for the same key---say, one sets an attachment to value A and another sets it to value B---that's a conflict. The rules disagree, and we can't pick a winner.
-
-This policy allows natural redundancy in rule definitions. Multiple rules can create the same structural elements without coordinating. As long as they agree on the result, it works.
-
-Conflicts indicate a bug in rule definitions. The receipt includes the conflicting writers so you can debug.
-\end{directors}
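The benevolent-coincidence rule in miniature, with \texttt{(u64, String)} pairs standing in for \texttt{(WarpOpKey, op)}:

```rust
/// Canonical merge sketch: flatten, sort by key, then walk groups.
/// Identical ops for a key collapse to one; divergent ops are a conflict
/// (here reported as the offending key).
fn merge_ops(
    per_worker: Vec<Vec<(u64, String)>>,
) -> Result<Vec<(u64, String)>, u64> {
    let mut flat: Vec<(u64, String)> = per_worker.into_iter().flatten().collect();
    flat.sort(); // canonical order: (key, payload) lexicographic
    let mut out: Vec<(u64, String)> = Vec::new();
    for op in flat {
        match out.last() {
            Some(prev) if prev.0 == op.0 && prev.1 == op.1 => {} // agreement: keep one
            Some(prev) if prev.0 == op.0 => return Err(op.0),    // divergent writers
            _ => out.push(op),
        }
    }
    Ok(out)
}
```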
-
-\subsection{6.2 Operation Ordering}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ sort\_key(}\OperatorTok{\&}\KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}
- \ControlFlowTok{match} \KeywordTok{self} \OperatorTok{\{}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{OpenPortal }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{2}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{3}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{4}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{5}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{6}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{7}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{SetAttachment }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{8}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{directors}
-The operation order is carefully chosen to maintain invariants:
-
-\begin{enumerate}
-\item \textbf{OpenPortal first}: Creates warp instances that later ops may reference
-\item \textbf{Deletes before upserts}: If you delete then upsert the same thing, you get a fresh entity. If you upsert then delete, you get nothing. Deletes first is the saner default.
-\item \textbf{Nodes before edges}: Edges reference nodes, so nodes must exist first
-\item \textbf{Attachments last}: Attachments attach to nodes/edges, so the skeleton must exist
-\end{enumerate}
-
-This ordering means rules can emit ops in any order. The merge sorts them into the correct sequence. One less thing for rule authors to worry about.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{7. Hash Computation}\label{hash-computation}
-
-\begin{bigpicture}
-Echo uses hashing for two things:
-
-\begin{enumerate}
-\item \textbf{State root}: A fingerprint of what the graph looks like right now
-\item \textbf{Commit hash}: A fingerprint of this entire commit (state + how we got here)
-\end{enumerate}
-
-If two nodes compute the same commit hash, they have identical state. This is how consensus works without comparing the full state.
-\end{bigpicture}
-
-\subsection{7.1 State Root}
-
-\textbf{Entry Point:} \texttt{compute\_state\_root()} \\
-\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs:88-209}
-
-\begin{verbatim}
-compute_state_root(state: &WarpState, root: &NodeKey) → Hash
-│
-├─[1] BFS REACHABILITY TRAVERSAL
-│ Only hash nodes/edges reachable from root
-│
-├─[2] HASHING PHASE
-│ │
-│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order
-│ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted
-│ hash(node_id, node.type, attachment)
-│ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted
-│ sorted_edges = edges.sort_by(id)
-│ hash(from, edges)
-│
-└─ hasher.finalize().into()
-\end{verbatim}
-
-\begin{directors}
-Two critical details here:
-
-\textbf{1. Reachability}: We only hash nodes/edges reachable from the root via BFS. Unreachable ``garbage'' doesn't affect the hash.
-
-This is subtle but important. It means you can safely delete subgraphs without affecting the hash of nodes that don't reference them. It's also the foundation for garbage collection---unreachable data can be purged without breaking consensus.
-
-\textbf{2. BTreeMap/BTreeSet}: Notice the iteration is over B-tree collections, not hash maps.
-
-Why? Because B-trees iterate in \emph{sorted order}. Hash maps iterate in arbitrary order (based on hashing, which might differ between machines or Rust versions).
-
-If we used hash maps, two machines with identical state might produce different hashes just because they iterated in different orders. That would be catastrophic.
-
-BTreeMap/BTreeSet cost O(log n) instead of O(1) for operations, but they guarantee deterministic iteration. For hashing, that's non-negotiable.
-\end{directors}
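The determinism argument in miniature: hash a \texttt{BTreeMap}'s entries and the digest is independent of insertion order, because iteration is always in sorted key order (std's \texttt{DefaultHasher} stands in for BLAKE3 here):

```rust
use std::collections::BTreeMap;

/// Fold a map's entries into a digest. Deterministic for BTreeMap
/// because iteration order is sorted, not hash-dependent.
fn digest_btree(m: &BTreeMap<u64, u64>) -> u64 {
    use std::hash::{Hash, Hasher};
    let mut h = std::collections::hash_map::DefaultHasher::new();
    for (k, v) in m {
        k.hash(&mut h);
        v.hash(&mut h);
    }
    h.finish()
}
```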
-
-\subsection{7.2 Commit Hash v2}
-
-\textbf{Entry Point:} \texttt{compute\_commit\_hash\_v2()} \\
-\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs:244-263}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{fn}\NormalTok{ compute\_commit\_hash\_v2(}
-\NormalTok{ state\_root}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,}
-\NormalTok{ parents}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\BuiltInTok{Hash}\NormalTok{]}\OperatorTok{,}
-\NormalTok{ patch\_digest}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,}
-\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \BuiltInTok{Hash} \OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Version tag}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{(parents}\OperatorTok{.}\NormalTok{len() }\KeywordTok{as} \DataTypeTok{u64}\NormalTok{)}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Parent count}
- \ControlFlowTok{for}\NormalTok{ p }\KeywordTok{in}\NormalTok{ parents }\OperatorTok{\{}\NormalTok{ h}\OperatorTok{.}\NormalTok{update(p)}\OperatorTok{;} \OperatorTok{\}} \CommentTok{// Parents}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(state\_root)}\OperatorTok{;} \CommentTok{// State}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(patch\_digest)}\OperatorTok{;} \CommentTok{// Operations}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Policy}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{directors}
-The commit hash includes:
-\begin{itemize}
-\item \textbf{state\_root}: What the graph looks like
-\item \textbf{patch\_digest}: What operations got us here
-\item \textbf{parents}: Which commit(s) we're building on
-\item \textbf{policy\_id}: Which policy version we're using
-\end{itemize}
-
-The \texttt{2u16} version tag is future-proofing. If we ever need to change the commit hash format, we bump the version. Old and new formats produce different hashes, which is correct---they're different protocols.
-
-Everything is little-endian (\texttt{to\_le\_bytes()}) because we need byte-identical encoding across platforms. Big-endian and little-endian machines must produce the same hash.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{8. Commit Orchestration}\label{commit-orchestration}
-
-\textbf{Entry Point:} \texttt{Engine::commit\_with\_receipt()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:837-954}
-
-\begin{bigpicture}
-This is the grand finale. All the pieces come together:
-
-\begin{enumerate}
-\item Drain the scheduler (get sorted rewrites)
-\item Reserve (check for conflicts)
-\item Execute (run the rules, collect deltas)
-\item Merge (combine deltas canonically)
-\item Apply (mutate the graph)
-\item Hash (compute state root and commit hash)
-\item Record (save to history)
-\end{enumerate}
-
-If any step fails, we haven't mutated anything permanent. The graph only changes when everything succeeds.
-\end{bigpicture}
-
-\begin{verbatim}
-Engine::commit_with_receipt(tx)
-│
-├─[2] DRAIN CANDIDATES
-│ drained = self.scheduler.drain_for_tx(tx)
-│
-├─[3] RESERVE (INDEPENDENCE CHECK)
-│ FOR rewrite IN drained:
-│ accepted = self.scheduler.reserve(tx, &mut rewrite)
-│
-├─[4] EXECUTE
-│ state_before = self.state.clone() // Snapshot before mutation!
-│ FOR rewrite IN reserved:
-│ (executor)(view, &scope, &mut delta)
-│ delta.finalize()
-│ patch.apply_to_state(&mut self.state)
-│
-├─[6] COMPUTE DELTA PATCH
-│ ops = diff_state(&state_before, &self.state)
-│
-├─[7] COMPUTE STATE ROOT
-│ state_root = compute_state_root(&self.state, &root)
-│
-├─[10] COMPUTE COMMIT HASH
-│ hash = compute_commit_hash_v2(state_root, parents, patch_digest, policy_id)
-│
-└─[12] RECORD TO HISTORY
- tick_history.push((snapshot, receipt, patch))
-\end{verbatim}
-
-\begin{directors}
-See \texttt{state\_before = self.state.clone()} in step [4]?
-
-We snapshot the state \emph{before} executing anything. This enables:
-\begin{enumerate}
-\item \texttt{diff\_state()}: Compare before/after to get the actual ops
-\item Validation: The delta from execution should match the diff
-\item Potential rollback: If something goes wrong, we have the original
-\end{enumerate}
-
-The clone isn't as expensive as it looks. \texttt{WarpState} uses \texttt{Arc} internally for its shared data structures, so cloning mostly increments reference counts rather than deep-copying. Note that this is a property of the type, not of Rust's \texttt{Clone} trait: \texttt{Clone} performs a deep copy unless the type opts into sharing via explicit \texttt{Arc}/\texttt{Rc}/\texttt{Cow} wrappers.
-\end{directors}
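-
-The cheap-clone behavior is easy to demonstrate in isolation. Below is a
-minimal sketch with illustrative types only (not Echo's actual
-\texttt{WarpState}): cloning a struct whose fields are \texttt{Arc}s just
-bumps reference counts.
-
-\begin{verbatim}
-use std::sync::Arc;
-
-#[derive(Clone)]
-struct Snapshot {
-    nodes: Arc<Vec<u64>>, // shared, not copied, on clone
-}
-
-fn main() {
-    let before = Snapshot { nodes: Arc::new(vec![1, 2, 3]) };
-    let after = before.clone(); // cheap: refcount bump only
-    // Both handles point at the same allocation.
-    assert!(Arc::ptr_eq(&before.nodes, &after.nodes));
-}
-\end{verbatim}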
-
-\begin{directors}
-And that's it! That's the complete journey from user action to committed state.
-
-Every step is deterministic. Every hash is content-addressed. The same inputs always produce the same outputs, regardless of timing, thread scheduling, or which machine runs the code.
-
-This is what makes Echo special. It's not just a graph database. It's a \emph{deterministic computation engine} that happens to store its state in a graph.
-
-Thanks for sticking with me through this tour. Now go read the actual code---you'll understand it much better now.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix A: Complexity Summary}\label{appendix-a-complexity-summary}
-
-{\def\LTcaptype{none}
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Operation & Complexity & Notes \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{ingest\_intent} & O(1) & Fixed structural insertions \\
-\texttt{begin} & O(1) & Counter increment + set insert \\
-\texttt{apply} & O(m) & m = footprint size \\
-\texttt{drain\_for\_tx} & O(n) & n = candidates, 20 radix passes \\
-\texttt{reserve} per rewrite & O(m) & m = footprint size, O(1) per check \\
-\texttt{execute\_parallel} & O(n/w) & n = items, w = workers \\
-\texttt{merge\_deltas} & O(k log k) & k = total ops \\
-\texttt{compute\_state\_root} & O(V + E) & V = nodes, E = edges \\
-\end{longtable}
-}
-
-\begin{directors}
-Nothing quadratic. Nothing exponential. The system scales linearly with the amount of work. That's by design.
-
-The one potential bottleneck is \texttt{compute\_state\_root}---it traverses the entire reachable graph. For very large graphs, that's expensive. In practice, graphs are partitioned across warp instances, keeping each traversal manageable.
-\end{directors}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix B: Determinism Boundaries}\label{appendix-b-determinism-boundaries}
-
-\subsection{Guaranteed Deterministic}
-
-\begin{itemize}
-\tightlist
-\item Radix sort ordering (20-pass LSD)
-\item BTreeMap/BTreeSet iteration
-\item BLAKE3 hashing
-\item GenSet conflict detection
-\item Canonical merge deduplication
-\end{itemize}
-
-\subsection{Intentionally Non-Deterministic (Handled by Merge)}
-
-\begin{itemize}
-\tightlist
-\item Worker execution order in BOAW
-\item Shard claim order (atomic counter)
-\end{itemize}
-
-\begin{directors}
-The non-deterministic parts are carefully contained. Workers race against each other, but the merge absorbs that chaos and produces a deterministic result.
-
-Think of it as a funnel: chaos at the wide end (parallel execution), order at the narrow end (merged output). The merge is the bottleneck that enforces determinism.
-\end{directors}
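-
-The funnel can be sketched in a few lines. This is a hypothetical
-reduction, not the real \texttt{merge\_deltas}: sorting into a canonical
-total order and deduplicating makes the output independent of arrival
-order.
-
-\begin{verbatim}
-// Hypothetical ops keyed by (target, op-name).
-fn canonical_merge(mut ops: Vec<(u64, &'static str)>) -> Vec<(u64, &'static str)> {
-    ops.sort();  // canonical total order absorbs worker races
-    ops.dedup(); // identical ops collapse to one
-    ops
-}
-
-fn main() {
-    // Two different "arrival orders" from racing workers...
-    let a = canonical_merge(vec![(2, "set"), (1, "add"), (2, "set")]);
-    let b = canonical_merge(vec![(2, "set"), (2, "set"), (1, "add")]);
-    assert_eq!(a, b); // ...same merged result
-}
-\end{verbatim}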
-
-\subsection{Protocol Constants (Frozen)}
-
-\begin{itemize}
-\tightlist
-\item \texttt{NUM\_SHARDS = 256}
-\item \texttt{SHARD\_MASK = 255}
-\item Shard routing: \texttt{LE\_u64(node\_id[0..8]) \& 255}
-\item Commit hash v2 version tag: \texttt{0x02 0x00}
-\end{itemize}
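-
-The routing rule is small enough to sketch directly (a paraphrase of the
-frozen constants above, not the engine's actual code):
-
-\begin{verbatim}
-const NUM_SHARDS: u64 = 256;
-const SHARD_MASK: u64 = NUM_SHARDS - 1; // 255
-
-// Interpret the first 8 bytes of the node id as a little-endian u64,
-// then mask down to one of 256 shards.
-fn shard_for(node_id: &[u8; 32]) -> u64 {
-    let mut first8 = [0u8; 8];
-    first8.copy_from_slice(&node_id[0..8]);
-    u64::from_le_bytes(first8) & SHARD_MASK
-}
-
-fn main() {
-    let mut id = [0u8; 32];
-    id[0] = 0x2A; // LE u64 = 42
-    assert_eq!(shard_for(&id), 42);
-}
-\end{verbatim}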
-
-\begin{watchout}
-These constants are \textbf{frozen}. Changing them would break compatibility with all existing commits.
-
-If you're tempted to ``optimize'' by tweaking \texttt{NUM\_SHARDS}, remember: every historical commit was created with these values. Changing them makes replay impossible.
-
-Protocol evolution happens through version tags, not constant changes.
-\end{watchout}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\emph{Document generated 2026-01-18. Director's commentary by your friendly AI pair programmer.}
-
-\backmatter
-\end{document}
diff --git a/docs/archive/study/echo-tour-de-code-with-commentary.pdf b/docs/archive/study/echo-tour-de-code-with-commentary.pdf
deleted file mode 100644
index ee5622cb..00000000
Binary files a/docs/archive/study/echo-tour-de-code-with-commentary.pdf and /dev/null differ
diff --git a/docs/archive/study/echo-tour-de-code-with-commentary.tex b/docs/archive/study/echo-tour-de-code-with-commentary.tex
deleted file mode 100644
index 54051a5b..00000000
--- a/docs/archive/study/echo-tour-de-code-with-commentary.tex
+++ /dev/null
@@ -1,2016 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Options for packages loaded elsewhere
-\PassOptionsToPackage{unicode}{hyperref}
-\PassOptionsToPackage{hyphens}{url}
-\documentclass[11pt]{book}
-\usepackage[letterpaper, margin=1in]{geometry}
-\usepackage{xcolor}
-\usepackage{amsmath,amssymb}
-\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
-\usepackage{iftex}
-\ifPDFTeX
- \usepackage[T1]{fontenc}
- \usepackage[utf8]{inputenc}
- \usepackage{textcomp} % provide euro and other symbols
-\else % if luatex or xetex
- \usepackage{unicode-math} % this also loads fontspec
- \defaultfontfeatures{Scale=MatchLowercase}
- \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
-\fi
-\usepackage{lmodern}
-\ifPDFTeX\else
- % xetex/luatex font selection
-\fi
-% Use upquote if available, for straight quotes in verbatim environments
-\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
-\IfFileExists{microtype.sty}{% use microtype if available
- \usepackage[]{microtype}
- \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
-}{}
-\makeatletter
-\@ifundefined{KOMAClassName}{% if non-KOMA class
- \IfFileExists{parskip.sty}{%
- \usepackage{parskip}
- }{% else
- \setlength{\parindent}{0pt}
- \setlength{\parskip}{6pt plus 2pt minus 1pt}}
-}{% if KOMA class
- \KOMAoptions{parskip=half}}
-\makeatother
-\usepackage{color}
-\usepackage{fancyvrb}
-\newcommand{\VerbBar}{|}
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
-% Add ',fontsize=\small' for more characters per line
-\newenvironment{Shaded}{}{}
-\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}}
-\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}}
-\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}}
-\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}}
-\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}}
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}}
-\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\ExtensionTok}[1]{#1}
-\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}}
-\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}}
-\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\NormalTok}[1]{#1}
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}}
-\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}}
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}}
-\newcommand{\RegionMarkerTok}[1]{#1}
-\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}}
-\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}}
-\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\usepackage{longtable,booktabs,array}
-\newcounter{none} % for unnumbered tables
-\usepackage{calc} % for calculating minipage widths
-% Correct order of tables after \paragraph or \subparagraph
-\usepackage{etoolbox}
-\makeatletter
-\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
-\makeatother
-% Allow footnotes in longtable head/foot
-\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
-\makesavenoteenv{longtable}
-\setlength{\emergencystretch}{3em} % prevent overfull lines
-\providecommand{\tightlist}{%
- \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
-\usepackage{bookmark}
-\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
-\urlstyle{same}
-\hypersetup{
- hidelinks,
- pdfcreator={LaTeX via pandoc}}
-
-% ═══════════════════════════════════════════════════════════════════════════════
-% TOUR GUIDE COMMENTARY STYLING
-% ═══════════════════════════════════════════════════════════════════════════════
-\usepackage{pifont} % Required for \ding symbols in tcolorbox titles
-\usepackage{tcolorbox}
-\tcbuselibrary{skins,breakable}
-
-% Tour Guide Commentary Box - the main insight boxes
-\newtcolorbox{tourguide}[1][]{
- enhanced,
- breakable,
- colback=blue!5!white,
- colframe=blue!60!black,
- fonttitle=\bfseries,
- title={\raisebox{-0.2em}{\large\ding{46}} Tour Guide Notes},
- left=8pt,
- right=8pt,
- top=6pt,
- bottom=6pt,
- #1
-}
-
-% Clever Pattern Box - for particularly elegant code patterns
-\newtcolorbox{cleverpattern}[1][]{
- enhanced,
- breakable,
- colback=green!5!white,
- colframe=green!50!black,
- fonttitle=\bfseries,
- title={\raisebox{-0.1em}{\large$\star$} Clever Pattern},
- left=8pt,
- right=8pt,
- top=6pt,
- bottom=6pt,
- #1
-}
-
-% Warning/Gotcha Box - for subtle traps or important invariants
-\newtcolorbox{watchout}[1][]{
- enhanced,
- breakable,
- colback=orange!8!white,
- colframe=orange!70!black,
- fonttitle=\bfseries,
- title={\raisebox{-0.1em}{\large$\triangle$} Watch Out},
- left=8pt,
- right=8pt,
- top=6pt,
- bottom=6pt,
- #1
-}
-
-% Deep Dive Box - for architectural insights
-\newtcolorbox{deepdive}[1][]{
- enhanced,
- breakable,
- colback=purple!5!white,
- colframe=purple!60!black,
- fonttitle=\bfseries,
- title={\raisebox{-0.1em}{\large$\blacktriangledown$} Deep Dive},
- left=8pt,
- right=8pt,
- top=6pt,
- bottom=6pt,
- #1
-}
-
-% Pro Tip Box - for practical advice
-\newtcolorbox{protip}[1][]{
- enhanced,
- breakable,
- colback=teal!5!white,
- colframe=teal!60!black,
- fonttitle=\bfseries,
- title={\raisebox{-0.1em}{\large$\checkmark$} Pro Tip},
- left=8pt,
- right=8pt,
- top=6pt,
- bottom=6pt,
- #1
-}
-
-\author{}
-\date{}
-
-\begin{document}
-\frontmatter
-
-\mainmatter
-\chapter{Echo: Tour de Code}\label{echo-tour-de-code}
-
-\begin{quote}
-\textbf{The complete function-by-function trace of Echo's execution
-pipeline.}
-
-This document traces EVERY function call involved in processing a user
-action through the Echo engine. File paths and line numbers are accurate
-as of 2026-01-25.
-
-\emph{Annotated with tour guide commentary --- insights, patterns, and observations from a detailed code review.}
-\end{quote}
-
-\begin{tourguide}
-Welcome to the Echo Tour de Code! I'll be your guide through this remarkable piece of systems engineering.
-
-What strikes me most about Echo's architecture is its \textbf{relentless pursuit of determinism}. Every design decision---from content-addressed identities to 20-pass radix sorts---serves the goal of ensuring that the same inputs always produce the same outputs, regardless of execution timing or parallelism.
-
-As we walk through the pipeline, I'll highlight:
-\begin{itemize}
-\item \textbf{Clever patterns} that solve subtle problems elegantly
-\item \textbf{Invariants} that must hold for correctness
-\item \textbf{Performance optimizations} hidden in plain sight
-\item \textbf{Architectural decisions} and their trade-offs
-\end{itemize}
-
-Let's begin our journey from intent to commit!
-\end{tourguide}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Table of Contents}\label{table-of-contents}
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
- \hyperref[intent-ingestion]{Intent Ingestion}
-\item
- \hyperref[transaction-lifecycle]{Transaction Lifecycle}
-\item
- \hyperref[rule-matching]{Rule Matching}
-\item
- \hyperref[scheduler-drain-reserve]{Scheduler: Drain \& Reserve}
-\item
- \hyperref[boaw-parallel-execution]{BOAW Parallel Execution}
-\item
- \hyperref[delta-merge-state-finalization]{Delta Merge \& State
- Finalization}
-\item
- \hyperref[hash-computation]{Hash Computation}
-\item
- \hyperref[commit-orchestration]{Commit Orchestration}
-\item
- \hyperref[complete-call-graph]{Complete Call Graph}
-\end{enumerate}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{1. Intent Ingestion}\label{intent-ingestion}
-
-\textbf{Entry Point:} \texttt{Engine::ingest\_intent()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs}
-
-\begin{tourguide}
-This is where user actions enter the system. Notice how Echo treats intents as \emph{immutable, content-addressed} data from the very first moment. The intent bytes are hashed to create a unique identifier, ensuring that duplicate intents are detected automatically---no coordination required.
-\end{tourguide}
-
-\subsection{1.1 Function Signature}\label{function-signature}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ ingest\_intent(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ intent\_bytes}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\DataTypeTok{u8}\NormalTok{]) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{IngestDisposition}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Returns:}
-\begin{itemize}
-\tightlist
-\item
-  \texttt{IngestDisposition::Accepted\ \{\ intent\_id:\ Hash\ \}} ---
-  New intent accepted
-\item
-  \texttt{IngestDisposition::Duplicate\ \{\ intent\_id:\ Hash\ \}} ---
-  Already ingested
-\end{itemize}
-
-\subsection{1.2 Complete Call Trace}\label{complete-call-trace}
-
-\begin{verbatim}
-Engine::ingest_intent(intent_bytes: &[u8])
-│
-├─[1] compute_intent_id(intent_bytes) → Hash
-│ FILE: crates/warp-core/src/inbox.rs
-│ CODE:
-│ let mut hasher = blake3::Hasher::new();
-│ hasher.update(b"intent:"); // Domain separation
-│ hasher.update(intent_bytes);
-│ hasher.finalize().into() // → [u8; 32]
-│
-├─[2] NodeId(intent_id)
-│ Creates strongly-typed NodeId from Hash
-│
-├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore>
-│ FILE: crates/warp-core/src/engine_impl.rs
-│ ERROR: EngineError::UnknownWarp if None
-│
-├─[4] Extract root_node_id from self.current_root.local_id
-│
-├─[5] STRUCTURAL NODE CREATION (Idempotent)
-│ ├─ make_node_id("sim") → NodeId
-│ │ FILE: crates/warp-core/src/ident.rs
-│ │ CODE: blake3("node:" || "sim")
-│ │
-│ ├─ make_node_id("sim/inbox") → NodeId
-│ │ CODE: blake3("node:" || "sim/inbox")
-│ │
-│ ├─ make_type_id("sim") → TypeId
-│ │ FILE: crates/warp-core/src/ident.rs
-│ │ CODE: blake3("type:" || "sim")
-│ │
-│ ├─ make_type_id("sim/inbox") → TypeId
-│ ├─ make_type_id("sim/inbox/event") → TypeId
-│ │
-│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty })
-│ │ FILE: crates/warp-core/src/graph.rs
-│ │ CODE: self.nodes.insert(id, record)
-│ │
-│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty })
-│
-├─[6] STRUCTURAL EDGE CREATION
-│ ├─ make_edge_id("edge:root/sim") → EdgeId
-│ │ FILE: crates/warp-core/src/ident.rs
-│ │ CODE: blake3("edge:" || "edge:root/sim")
-│ │
-│ ├─ store.insert_edge(root_id, EdgeRecord { ... })
-│ │ FILE: crates/warp-core/src/graph.rs
-│ │ └─ GraphStore::upsert_edge_record(from, edge)
-│ │ FILE: crates/warp-core/src/graph.rs
-│ │ UPDATES:
-│ │ self.edge_index.insert(edge_id, from)
-│ │ self.edge_to_index.insert(edge_id, to)
-│ │ self.edges_from.entry(from).or_default().push(edge)
-│ │ self.edges_to.entry(to).or_default().push(edge_id)
-│ │
-│ └─ store.insert_edge(sim_id, EdgeRecord { ... }) [sim → inbox]
-│
-├─[7] DUPLICATE DETECTION
-│ store.node(&event_id) → Option<&NodeRecord>
-│ FILE: crates/warp-core/src/graph.rs
-│ CODE: self.nodes.get(id)
-│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id })
-│
-├─[8] EVENT NODE CREATION
-│ store.insert_node(event_id, NodeRecord { ty: event_ty })
-│ NOTE: event_id = intent_id (content-addressed)
-│
-├─[9] INTENT ATTACHMENT
-│ ├─ AtomPayload::new(type_id, bytes)
-│ │ FILE: crates/warp-core/src/attachment.rs
-│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) }
-│ │
-│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload)))
-│ FILE: crates/warp-core/src/graph.rs
-│ CODE: self.node_attachments.insert(id, v)
-│
-├─[10] PENDING EDGE CREATION (Queue Membership)
-│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId
-│ │ FILE: crates/warp-core/src/inbox.rs
-│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id)
-│ │
-│ └─ store.insert_edge(inbox_id, EdgeRecord {
-│ id: pending_edge_id,
-│ from: inbox_id,
-│ to: event_id,
-│ ty: make_type_id("edge:pending")
-│ })
-│
-└─[11] return Ok(IngestDisposition::Accepted { intent_id })
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{Domain Separation in Hashing}
-
-Notice step [1]: the hasher prefixes with \texttt{b"intent:"} before the actual data. This is a cryptographic best practice called \emph{domain separation}---it prevents a hash collision between an intent and, say, a node ID that happens to have the same bytes.
-
-Echo uses this pattern consistently:
-\begin{itemize}
-\item \texttt{"intent:"} for intent IDs
-\item \texttt{"node:"} for node IDs
-\item \texttt{"type:"} for type IDs
-\item \texttt{"edge:"} for edge IDs
-\end{itemize}
-
-This ensures that even if two different domain values have the same raw bytes, they'll produce different hashes.
-\end{cleverpattern}
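-
-A minimal sketch of the pattern (assumes the \texttt{blake3} crate; the
-prefixes match the list above):
-
-\begin{verbatim}
-fn tagged_hash(domain: &[u8], data: &[u8]) -> [u8; 32] {
-    let mut h = blake3::Hasher::new();
-    h.update(domain); // domain prefix first
-    h.update(data);
-    h.finalize().into()
-}
-
-fn main() {
-    // Identical raw bytes, different domains => different hashes.
-    let raw = b"player-42";
-    assert_ne!(tagged_hash(b"node:", raw), tagged_hash(b"type:", raw));
-}
-\end{verbatim}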
-
-\begin{deepdive}
-\textbf{Why Content-Addressed Event IDs?}
-
-In step [8], note that \texttt{event\_id = intent\_id}. This is a profound design choice:
-
-\begin{enumerate}
-\item \textbf{Automatic deduplication}: If the same intent arrives twice, it hashes to the same ID, and step [7] catches it.
-\item \textbf{Reproducibility}: Given the same intent bytes, any node in a distributed system will compute the same event ID.
-\item \textbf{Auditability}: You can verify an event's integrity by re-hashing its content.
-\end{enumerate}
-
-This is the foundation of Echo's deterministic execution model---events are identified by \emph{what they are}, not \emph{when they arrived}.
-\end{deepdive}
-
-\subsection{1.3 Data Structures
-Modified}\label{data-structures-modified}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.4231}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3077}}@{}}
-\toprule\noalign{}
-\begin{minipage}[b]{\linewidth}\raggedright
-Structure
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Field
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Change
-\end{minipage} \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{GraphStore} & \texttt{nodes} & +3 entries (sim, inbox, event) \\
-\texttt{GraphStore} & \texttt{edges\_from} & +3 edges (root→sim,
-sim→inbox, inbox→event) \\
-\texttt{GraphStore} & \texttt{edges\_to} & +3 reverse entries \\
-\texttt{GraphStore} & \texttt{edge\_index} & +3 edge→from mappings \\
-\texttt{GraphStore} & \texttt{edge\_to\_index} & +3 edge→to mappings \\
-\texttt{GraphStore} & \texttt{node\_attachments} & +1 (event → intent
-payload) \\
-\end{longtable}
-}
-
-\begin{tourguide}
-Notice the \textbf{four separate edge indices}: \texttt{edges\_from}, \texttt{edges\_to}, \texttt{edge\_index}, and \texttt{edge\_to\_index}. This redundancy enables O(1) lookups in any direction---find edges from a node, to a node, or look up either endpoint given an edge ID. The space cost is modest (pointers/IDs are small), but the query flexibility is enormous.
-\end{tourguide}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{2. Transaction Lifecycle}\label{transaction-lifecycle}
-
-\subsection{2.1 Begin Transaction}\label{begin-transaction}
-
-\textbf{Entry Point:} \texttt{Engine::begin()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs-719}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ begin(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ TxId }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter}\OperatorTok{.}\NormalTok{wrapping\_add(}\DecValTok{1}\NormalTok{)}\OperatorTok{;} \CommentTok{// Line 713}
- \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{==} \DecValTok{0} \OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \DecValTok{1}\OperatorTok{;} \CommentTok{// Line 715: Zero is reserved}
- \OperatorTok{\}}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{insert(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter)}\OperatorTok{;} \CommentTok{// Line 717}
- \PreprocessorTok{TxId::}\NormalTok{from\_raw(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter) }\CommentTok{// Line 718}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{watchout}
-\textbf{The Zero Invariant}
-
-Line 715 is subtle but critical: \texttt{TxId(0)} is reserved as an invalid/sentinel value. Without this check, after $2^{64}$ transactions (admittedly unlikely!), the counter would wrap to zero and potentially confuse code that uses zero to mean ``no transaction.''
-
-This is defensive programming at its finest---the cost is one branch that's almost never taken, but it eliminates an entire class of potential bugs.
-\end{watchout}
-
-\textbf{Call Trace:}
-
-\begin{verbatim}
-Engine::begin()
-│
-├─ self.tx_counter.wrapping_add(1)
-│ Rust std: u64::wrapping_add
-│ Handles u64::MAX → 0 overflow
-│
-├─ if self.tx_counter == 0: self.tx_counter = 1
-│ INVARIANT: TxId(0) is reserved as invalid
-│
-├─ self.live_txs.insert(self.tx_counter)
-│ TYPE: HashSet
-│ Registers transaction as active
-│
-└─ TxId::from_raw(self.tx_counter)
- FILE: crates/warp-core/src/tx.rs
- CODE: pub const fn from_raw(value: u64) -> Self { Self(value) }
- TYPE: #[repr(transparent)] struct TxId(u64)
-\end{verbatim}
-
-\begin{tourguide}
-The \texttt{\#[repr(transparent)]} on \texttt{TxId} is worth noting---it guarantees that \texttt{TxId} has exactly the same memory layout as \texttt{u64}. This means zero-cost abstraction: you get type safety (can't accidentally pass a \texttt{NodeId} where a \texttt{TxId} is expected) with no runtime overhead.
-\end{tourguide}
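-
-A standalone sketch of the newtype pattern (mirroring the
-\texttt{from\_raw} and \texttt{value} accessors shown in the trace):
-
-\begin{verbatim}
-#[repr(transparent)]
-#[derive(Clone, Copy, PartialEq, Eq, Debug)]
-struct TxId(u64);
-
-impl TxId {
-    const fn from_raw(value: u64) -> Self { Self(value) }
-    fn value(self) -> u64 { self.0 }
-}
-
-fn main() {
-    // Same size and layout as a bare u64: the wrapper costs nothing.
-    assert_eq!(std::mem::size_of::<TxId>(), std::mem::size_of::<u64>());
-    assert_eq!(TxId::from_raw(7).value(), 7);
-}
-\end{verbatim}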
-
-\textbf{State Changes:}
-\begin{itemize}
-\tightlist
-\item
-  \texttt{tx\_counter}: N → N+1 (or 1 if wrapped)
-\item
-  \texttt{live\_txs}: Insert new counter value
-\end{itemize}
-
-\subsection{2.2 Abort Transaction}\label{abort-transaction}
-
-\textbf{Entry Point:} \texttt{Engine::abort()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs-968}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ abort(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx}\OperatorTok{.}\NormalTok{value())}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{scheduler}\OperatorTok{.}\NormalTok{finalize\_tx(tx)}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{bus}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization\_errors}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{tourguide}
-Abort is refreshingly simple---just remove the transaction from tracking and clear transient state. No rollback needed because Echo hasn't mutated the graph yet! All graph mutations happen atomically during commit. This is a key architectural decision: the graph is effectively immutable until commit time.
-\end{tourguide}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{3. Rule Matching}\label{rule-matching}
-
-\textbf{Entry Point:} \texttt{Engine::apply()} \\
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs-737}
-
-\begin{tourguide}
-Now we enter the heart of Echo's reactive model. Rules are matched against graph patterns, and when they match, they're enqueued for execution. The beauty is that matching is \emph{pure}---it reads the graph but doesn't modify it.
-\end{tourguide}
-
-\subsection{3.1 Function Signature}\label{function-signature-1}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ apply(}
- \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}
-\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}
-\NormalTok{ rule\_name}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\OperatorTok{,}
-\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{ApplyResult}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{3.2 Complete Call Trace}\label{complete-call-trace-1}
-
-\begin{verbatim}
-Engine::apply(tx, rule_name, scope)
-│
-└─ Engine::apply_in_warp(tx, self.current_root.warp_id, rule_name, scope, &[])
- FILE: crates/warp-core/src/engine_impl.rs-806
- │
- ├─[1] TRANSACTION VALIDATION
- │ CODE: if tx.value() == 0 || !self.live_txs.contains(&tx.value())
- │ ERROR: EngineError::UnknownTx
- │
- ├─[2] RULE LOOKUP
- │ self.rules.get(rule_name) → Option<&RewriteRule>
- │ TYPE: HashMap<&'static str, RewriteRule>
- │ ERROR: EngineError::UnknownRule(rule_name.to_owned())
- │
- ├─[3] STORE LOOKUP
- │ self.state.store(&warp_id) → Option<&GraphStore>
- │ ERROR: EngineError::UnknownWarp(warp_id)
- │
- ├─[4] CREATE GRAPHVIEW
- │ GraphView::new(store) → GraphView<'_>
- │ FILE: crates/warp-core/src/graph_view.rs
- │ TYPE: Read-only wrapper (Copy, lightweight)
- │
- ├─[5] CALL MATCHER
- │ (rule.matcher)(view, scope) → bool
- │ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool
- │ FILE: crates/warp-core/src/rule.rs-24
- │ IF false: return Ok(ApplyResult::NoMatch)
- │
- ├─[6] CREATE SCOPE KEY
- │ let scope_key = NodeKey { warp_id, local_id: *scope }
- │
- ├─[7] COMPUTE SCOPE HASH
- │ scope_hash(&rule.id, &scope_key) → Hash
- │ FILE: crates/warp-core/src/engine_impl.rs-1718
- │ CODE:
- │ let mut hasher = Hasher::new();
- │ hasher.update(rule_id); // 32 bytes
- │ hasher.update(scope.warp_id.as_bytes()); // 32 bytes
- │ hasher.update(scope.local_id.as_bytes()); // 32 bytes
- │ hasher.finalize().into()
- │
- ├─[8] COMPUTE FOOTPRINT
- │ (rule.compute_footprint)(view, scope) → Footprint
- │ TYPE: FootprintFn = for<'a> fn(GraphView<'a>, &NodeId) -> Footprint
- │ FILE: crates/warp-core/src/rule.rs-46
- │ RETURNS:
- │ Footprint {
- │ n_read: IdSet, // Nodes read
- │ n_write: IdSet, // Nodes written
- │ e_read: IdSet, // Edges read
- │ e_write: IdSet, // Edges written
- │ a_read: AttachmentSet, // Attachments read
- │ a_write: AttachmentSet, // Attachments written
- │ b_in: PortSet, // Input ports
- │ b_out: PortSet, // Output ports
- │ factor_mask: u64, // O(1) prefilter
- │ }
- │
- ├─[9] AUGMENT FOOTPRINT WITH DESCENT STACK
- │ for key in descent_stack:
- │ footprint.a_read.insert(*key)
- │ FILE: crates/warp-core/src/footprint.rs-107
- │ PURPOSE: Stage B1 law - READs of all descent chain slots
- │
- ├─[10] COMPACT RULE ID LOOKUP
- │ self.compact_rule_ids.get(&rule.id) → Option<&CompactRuleId>
- │ TYPE: HashMap
- │ ERROR: EngineError::InternalCorruption
- │
- └─[11] ENQUEUE TO SCHEDULER
- self.scheduler.enqueue(tx, PendingRewrite { ... })
- │
- └─ DeterministicScheduler::enqueue(tx, rewrite)
- FILE: crates/warp-core/src/scheduler.rs-659
- │
- └─ RadixScheduler::enqueue(tx, rewrite)
- FILE: crates/warp-core/src/scheduler.rs-105
- CODE:
- let txq = self.pending.entry(tx).or_default();
- txq.enqueue(rewrite.scope_hash, rewrite.compact_rule.0, rewrite);
- │
- └─ PendingTx::enqueue(scope_be32, rule_id, payload)
- FILE: crates/warp-core/src/scheduler.rs-355
-
- CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS
- index.get(&key) → Some(&i)
- fat[thin[i].handle] = Some(payload) // Overwrite
- thin[i].nonce = next_nonce++ // Refresh nonce
-
- CASE 2: New entry
- fat.push(Some(payload))
- thin.push(RewriteThin { scope_be32, rule_id, nonce, handle })
- index.insert(key, thin.len() - 1)
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{GraphView: The Read-Only Wrapper}
-
-Step [4] creates a \texttt{GraphView}---a lightweight, copyable handle to the underlying \texttt{GraphStore}. The compiler enforces read-only access: you literally \emph{cannot} mutate the graph through a \texttt{GraphView}. In enforcement builds, the view can additionally hold a \texttt{FootprintGuard} reference---a borrow token that ties the view's lifetime to the store, prevents the \texttt{GraphStore} from being mutably borrowed while the view exists, and validates reads against the declared footprint at runtime. Compile-time immutability plus runtime read-permission checks: this is Rust's type system doing the heavy lifting.
-\end{cleverpattern}
-
-\begin{deepdive}
-\textbf{The Footprint: Declaring Your Intentions}
-
-Step [8] is architecturally critical. Before a rule can execute, it must declare its \emph{footprint}---exactly which nodes, edges, and attachments it will read and write.
-
-This enables:
-\begin{itemize}
-\item \textbf{Parallel execution}: Rules with non-overlapping footprints can run concurrently
-\item \textbf{Conflict detection}: Rules with conflicting footprints are serialized
-\item \textbf{Determinism}: The scheduler can order rules without knowing their implementation details
-\end{itemize}
-
-The footprint is computed \emph{before} execution, not discovered during execution. This is a constraint on rule authors, but it's what makes the whole system tractable.
-\end{deepdive}
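-
-The independence test this enables can be sketched abstractly (a
-hypothetical reduction over plain id sets, not the engine's
-\texttt{GenSet}-based check):
-
-\begin{verbatim}
-use std::collections::BTreeSet;
-
-struct Fp {
-    reads: BTreeSet<u32>,
-    writes: BTreeSet<u32>,
-}
-
-// Two footprints commute when neither writes anything the other
-// reads or writes.
-fn independent(a: &Fp, b: &Fp) -> bool {
-    a.writes.is_disjoint(&b.reads)
-        && a.writes.is_disjoint(&b.writes)
-        && b.writes.is_disjoint(&a.reads)
-}
-
-fn main() {
-    let a = Fp { reads: [1].into(), writes: [2].into() };
-    let b = Fp { reads: [3].into(), writes: [4].into() };
-    let c = Fp { reads: [2].into(), writes: [5].into() }; // reads a's write
-    assert!(independent(&a, &b));  // disjoint: can run in parallel
-    assert!(!independent(&a, &c)); // conflict: must serialize
-}
-\end{verbatim}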
-
-\begin{cleverpattern}
-\textbf{Last-Wins Deduplication}
-
-In step [11], notice the ``LAST WINS'' semantics. If the same (scope\_hash, rule\_id) pair is enqueued twice, the second one \emph{replaces} the first.
-
-Why? Because enqueuing a rule is idempotent: if you match the same rule at the same scope twice in one transaction, you only want to execute it once. The ``last wins'' ensures the most recent footprint is used (which matters if the graph changed between matches).
-\end{cleverpattern}
-
-\subsection{3.3 PendingRewrite
-Structure}\label{pendingrewrite-structure}
-
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs-82}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ PendingRewrite }\OperatorTok{\{}
- \KeywordTok{pub}\NormalTok{ rule\_id}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte rule identifier}
- \KeywordTok{pub}\NormalTok{ compact\_rule}\OperatorTok{:}\NormalTok{ CompactRuleId}\OperatorTok{,} \CommentTok{// u32 hot{-}path handle}
- \KeywordTok{pub}\NormalTok{ scope\_hash}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte ordering key}
- \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeKey}\OperatorTok{,} \CommentTok{// \{ warp\_id, local\_id \}}
- \KeywordTok{pub}\NormalTok{ footprint}\OperatorTok{:}\NormalTok{ Footprint}\OperatorTok{,} \CommentTok{// Read/write declaration}
- \KeywordTok{pub}\NormalTok{ phase}\OperatorTok{:}\NormalTok{ RewritePhase}\OperatorTok{,} \CommentTok{// State machine: Matched → Reserved → ...}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{tourguide}
-Notice the dual identity: \texttt{rule\_id} (32-byte hash) for correctness, and \texttt{compact\_rule} (u32) for performance. The hash ensures cryptographic uniqueness; the u32 enables O(1) array indexing. This ``have your cake and eat it too'' pattern appears throughout Echo.
-\end{tourguide}
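-
-A minimal sketch of how such a dual-identity table can be organized. The \texttt{RuleTable} and \texttt{RuleHotData} names below are hypothetical illustrations, not Echo's actual types:
-
-\begin{verbatim}
-// Sketch: interning a 32-byte hash into a dense u32 handle.
-// (Illustrative types -- not the real warp-core API.)
-use std::collections::HashMap;
-
-struct RuleHotData; // placeholder for per-rule hot-path data
-
-struct RuleTable {
-    by_hash: HashMap<[u8; 32], u32>, // identity: hash -> compact id
-    hot: Vec<RuleHotData>,           // performance: O(1) indexing by compact id
-}
-
-impl RuleTable {
-    fn intern(&mut self, hash: [u8; 32], data: RuleHotData) -> u32 {
-        if let Some(&id) = self.by_hash.get(&hash) {
-            return id; // already interned: reuse the compact id
-        }
-        let id = self.hot.len() as u32;
-        self.hot.push(data);
-        self.by_hash.insert(hash, id);
-        id
-    }
-}
-\end{verbatim}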
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{4. Scheduler: Drain \& Reserve}\label{scheduler-drain-reserve}
-
-\begin{tourguide}
-The scheduler is where Echo's determinism guarantees are forged. No matter what order rules are enqueued, the scheduler produces a \emph{canonical} execution order. This is perhaps the most technically impressive part of the system.
-\end{tourguide}
-
-\subsection{4.1 Drain Phase (Radix Sort)}\label{drain-phase-radix-sort}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::drain\_for\_tx()}
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs-113}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ drain\_for\_tx(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{PendingRewrite}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{pending}
- \OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx)}
- \OperatorTok{.}\NormalTok{map\_or\_else(}\DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new}\OperatorTok{,} \OperatorTok{|}\KeywordTok{mut}\NormalTok{ txq}\OperatorTok{|}\NormalTok{ txq}\OperatorTok{.}\NormalTok{drain\_in\_order())}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Complete Call Trace:}
-
-\begin{verbatim}
-RadixScheduler::drain_for_tx(tx)
-│
-├─ self.pending.remove(&tx) → Option<PendingTx>
-│
-└─ PendingTx::drain_in_order()
- FILE: crates/warp-core/src/scheduler.rs-446
- │
- ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)?
- │ ├─ YES: sort_unstable_by(cmp_thin)
- │ │ Rust std comparison sort
- │ │
- │ └─ NO: radix_sort()
- │ FILE: crates/warp-core/src/scheduler.rs-413
- │
- └─ radix_sort()
- │
- ├─ Initialize scratch buffer: self.scratch.resize(n, default)
- │
- ├─ Lazy allocate histogram: self.counts16 = vec![0u32; 65536]
- │
- └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══
- │
- ├─ SELECT src/dst buffers (ping-pong)
- │ flip = false: src=thin, dst=scratch
- │ flip = true: src=scratch, dst=thin
- │
- ├─ PHASE 1: COUNT BUCKETS
- │ FOR r IN src:
- │ b = bucket16(r, pass)
- │ counts[b] += 1
- │
- ├─ PHASE 2: PREFIX SUMS
- │ sum = 0
- │ FOR c IN counts:
- │ t = *c
- │ *c = sum
- │ sum += t
- │
- ├─ PHASE 3: STABLE SCATTER
- │ FOR r IN src:
- │ b = bucket16(r, pass)
- │ dst[counts[b]] = r
- │ counts[b] += 1
- │
- └─ flip = !flip
-
-BUCKET EXTRACTION (bucket16):
-FILE: crates/warp-core/src/scheduler.rs-498
-
-Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2]
-Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4]
-Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2]
-Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4]
-Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32]
-Pass 5: u16_be_from_pair32(scope, 14) // Scope bytes [28:30]
-...
-Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD)
-
-SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{LSD Radix Sort: O(n) Guaranteed}
-
-This is a \textbf{Least Significant Digit} radix sort---it processes from the least significant bits to the most significant. After 20 passes (320 bits total), the array is sorted by:
-\begin{enumerate}
-\item \texttt{scope\_hash} (256 bits = 16 passes)
-\item then \texttt{rule\_id} (32 bits = 2 passes)
-\item then \texttt{nonce} (32 bits = 2 passes)
-\end{enumerate}
-
-Why radix sort instead of comparison sort?
-\begin{itemize}
-\item \textbf{Determinism}: Radix sort is inherently stable and makes no comparisons that could be affected by memory layout
-\item \textbf{O(n) complexity}: With fixed key size, radix sort is linear
-\item \textbf{Cache-friendly}: Sequential memory access in each pass
-\end{itemize}
-
-The 1024-element threshold is a practical optimization: for small arrays, the overhead of radix sort exceeds its benefits, so a comparison sort is used instead.
-\end{cleverpattern}
-
-\begin{deepdive}
-\textbf{Why 20 Passes?}
-
-Each pass extracts 16 bits (bucket size 65536). To sort by:
-\begin{itemize}
-\item 256 bits of scope\_hash = 16 passes (passes 4--19)
-\item 32 bits of rule\_id = 2 passes (passes 2--3)
-\item 32 bits of nonce = 2 passes (passes 0--1)
-\end{itemize}
-
-That's exactly 20 passes processing 320 bits total. Since LSD radix sort processes from least significant to most significant, passes 4--19 progressively refine the scope ordering from least significant bytes to most significant.
-
-The nonce is processed first (passes 0--1) because it's the tiebreaker---when scope\_hash and rule\_id are equal, the nonce determines order, and we want that to be the finest-grained distinction.
-\end{deepdive}
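-
-The three phases of each pass can be condensed into a stand-alone sketch. The version below sorts plain \texttt{u32} keys rather than \texttt{RewriteThin} records, and the function name is illustrative, not the real API:
-
-\begin{verbatim}
-// Sketch: one stable 16-bit LSD pass (illustrative; the real code
-// extracts digits from RewriteThin records via bucket16).
-fn lsd_pass16(src: &[u32], dst: &mut [u32], shift: u32) {
-    let mut counts = vec![0u32; 1 << 16];
-    // Phase 1: count bucket occupancy.
-    for &k in src {
-        counts[((k >> shift) & 0xFFFF) as usize] += 1;
-    }
-    // Phase 2: exclusive prefix sum -> first write index per bucket.
-    let mut sum = 0u32;
-    for c in counts.iter_mut() {
-        let t = *c;
-        *c = sum;
-        sum += t;
-    }
-    // Phase 3: stable scatter, preserving source order within a bucket.
-    for &k in src {
-        let b = ((k >> shift) & 0xFFFF) as usize;
-        dst[counts[b] as usize] = k;
-        counts[b] += 1;
-    }
-}
-// Two passes (shift = 0, then 16) fully sort u32 keys,
-// ping-ponging between the src and dst buffers.
-\end{verbatim}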
-
-\subsection{4.2 Reserve Phase (Independence
-Check)}\label{reserve-phase-independence-check}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::reserve()} \textbf{File:}
-\texttt{crates/warp-core/src/scheduler.rs-143}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ reserve(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}\NormalTok{ pr}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ PendingRewrite) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ active }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{active}\OperatorTok{.}\NormalTok{entry(tx)}\OperatorTok{.}\NormalTok{or\_insert\_with(}\PreprocessorTok{ActiveFootprints::}\NormalTok{new)}\OperatorTok{;}
- \ControlFlowTok{if} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{has\_conflict(active}\OperatorTok{,}\NormalTok{ pr) }\OperatorTok{\{}
- \ControlFlowTok{return} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_conflict(pr)}\OperatorTok{;}
- \OperatorTok{\}}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{mark\_all(active}\OperatorTok{,}\NormalTok{ pr)}\OperatorTok{;}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_reserved(pr)}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Complete Call Trace:}
-
-\begin{verbatim}
-RadixScheduler::reserve(tx, pr)
-│
-├─ self.active.entry(tx).or_insert_with(ActiveFootprints::new)
-│ TYPE: HashMap<TxId, ActiveFootprints>
-│ ActiveFootprints contains 7 GenSets:
-│ - nodes_written: GenSet
-│ - nodes_read: GenSet
-│ - edges_written: GenSet
-│ - edges_read: GenSet
-│ - attachments_written: GenSet
-│ - attachments_read: GenSet
-│ - ports: GenSet
-│
-├─ has_conflict(active, pr) → bool
-│ FILE: crates/warp-core/src/scheduler.rs-236
-│ │
-│ ├─ FOR node IN pr.footprint.n_write:
-│ │ IF active.nodes_written.contains(node): return true // W-W conflict
-│ │ IF active.nodes_read.contains(node): return true // W-R conflict
-│ │
-│ ├─ FOR node IN pr.footprint.n_read:
-│ │ IF active.nodes_written.contains(node): return true // R-W conflict
-│ │ (R-R is allowed)
-│ │
-│ ├─ FOR edge IN pr.footprint.e_write:
-│ │ IF active.edges_written.contains(edge): return true
-│ │ IF active.edges_read.contains(edge): return true
-│ │
-│ ├─ FOR edge IN pr.footprint.e_read:
-│ │ IF active.edges_written.contains(edge): return true
-│ │
-│ ├─ FOR key IN pr.footprint.a_write:
-│ │ IF active.attachments_written.contains(key): return true
-│ │ IF active.attachments_read.contains(key): return true
-│ │
-│ ├─ FOR key IN pr.footprint.a_read:
-│ │ IF active.attachments_written.contains(key): return true
-│ │
-│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out:
-│ IF active.ports.contains(port): return true
-│
-├─ IF conflict:
-│ └─ on_conflict(pr)
-│ FILE: crates/warp-core/src/scheduler.rs-149
-│ pr.phase = RewritePhase::Aborted
-│ return false
-│
-├─ mark_all(active, pr)
-│ FILE: crates/warp-core/src/scheduler.rs-278
-│ │
-│ ├─ FOR node IN pr.footprint.n_write:
-│ │ active.nodes_written.mark(NodeKey { warp_id, local_id: node })
-│ │
-│ ├─ FOR node IN pr.footprint.n_read:
-│ │ active.nodes_read.mark(NodeKey { ... })
-│ │
-│ ... (similar for edges, attachments, ports)
-│
-└─ on_reserved(pr)
- FILE: crates/warp-core/src/scheduler.rs-155
- pr.phase = RewritePhase::Reserved
- return true
-\end{verbatim}
-
-\begin{tourguide}
-This is classic \textbf{two-phase locking} without the locks! The \texttt{has\_conflict} function implements the conflict matrix:
-
-\begin{center}
-\begin{tabular}{|c|c|c|}
-\hline
-& Read & Write \\
-\hline
-Read & OK & CONFLICT \\
-\hline
-Write & CONFLICT & CONFLICT \\
-\hline
-\end{tabular}
-\end{center}
-
-Multiple readers are allowed (R-R is OK), but any write conflicts with both reads and writes of the same resource.
-\end{tourguide}
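-
-The matrix reduces to two membership checks per resource class. A sketch of the pattern (the helper below is hypothetical; the real \texttt{has\_conflict} repeats it for nodes, edges, attachments, and ports, using \texttt{GenSet} rather than \texttt{HashSet}):
-
-\begin{verbatim}
-// Sketch: conflict check for one resource class (simplified keys).
-use std::collections::HashSet;
-
-fn class_conflicts(
-    active_written: &HashSet<u64>,
-    active_read: &HashSet<u64>,
-    new_writes: &[u64],
-    new_reads: &[u64],
-) -> bool {
-    // A new write conflicts with any prior read OR write of the same key.
-    new_writes
-        .iter()
-        .any(|k| active_written.contains(k) || active_read.contains(k))
-        // A new read conflicts only with a prior write (R-R is allowed).
-        || new_reads.iter().any(|k| active_written.contains(k))
-}
-\end{verbatim}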
-
-\subsection{4.3 GenSet: O(1) Conflict
-Detection}\label{genset-o1-conflict-detection}
-
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs-535}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{}
-\NormalTok{ gen}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} \CommentTok{// Current generation}
-\NormalTok{ seen}\OperatorTok{:}\NormalTok{ FxHashMap}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{,} \DataTypeTok{u32}\OperatorTok{\textgreater{},} \CommentTok{// Key → generation when marked}
-\OperatorTok{\}}
-
-\KeywordTok{impl}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{:} \BuiltInTok{Hash} \OperatorTok{+} \BuiltInTok{Eq} \OperatorTok{+} \BuiltInTok{Copy}\OperatorTok{\textgreater{}}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ contains(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \PreprocessorTok{matches!}\NormalTok{(}\KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{get(}\OperatorTok{\&}\NormalTok{key)}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(}\OperatorTok{\&}\NormalTok{g) }\ControlFlowTok{if}\NormalTok{ g }\OperatorTok{==} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}
- \OperatorTok{\}}
-
- \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ mark(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{insert(key}\OperatorTok{,} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}\OperatorTok{;}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Key Insight:} No clearing needed between transactions. Increment
-\texttt{gen} → all old entries become stale.
-
-\begin{cleverpattern}
-\textbf{Generation-Based Set: Amortized O(1) Clear}
-
-This is one of the most elegant patterns in Echo. Instead of clearing the hash map between transactions (O(n) operation), just increment a generation counter!
-
-An entry is ``in the set'' only if its stored generation matches the current generation. Old entries with stale generations are effectively invisible.
-
-The hash map only grows---it's never shrunk. But since the same keys tend to be accessed repeatedly (temporal locality), the map stabilizes quickly. The payoff is enormous: clearing the ``set'' is O(1) instead of O(n).
-\end{cleverpattern}
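-
-The one piece not shown above is the clear itself. A sketch of what advancing the generation might look like (the method name is assumed; the real API may differ):
-
-\begin{verbatim}
-impl<K: Hash + Eq + Copy> GenSet<K> {
-    // O(1) "clear": every entry stored under the old generation
-    // now fails the `g == self.gen` check in contains().
-    // (A production version would also handle u32 wraparound,
-    // e.g. by draining `seen` when `gen` overflows.)
-    pub fn clear(&mut self) {
-        self.gen += 1;
-    }
-}
-\end{verbatim}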
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{5. BOAW Parallel Execution}\label{boaw-parallel-execution}
-
-\textbf{Entry Point:} \texttt{execute\_parallel()} \textbf{File:}
-\texttt{crates/warp-core/src/boaw/exec.rs-83}
-
-\begin{tourguide}
-BOAW---``Best Of All Worlds''---is where Echo's determinism meets parallelism. The key insight: \emph{order of execution doesn't matter if we sort the outputs}. Rules execute in arbitrary order on worker threads, but their outputs are merged canonically.
-\end{tourguide}
-
-\subsection{5.1 Entry Point}\label{entry-point}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \PreprocessorTok{assert!}\NormalTok{(workers }\OperatorTok{\textgreater{}=} \DecValTok{1}\NormalTok{)}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ capped\_workers }\OperatorTok{=}\NormalTok{ workers}\OperatorTok{.}\NormalTok{min(NUM\_SHARDS)}\OperatorTok{;} \CommentTok{// Cap at 256}
-
- \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"parallel{-}stride{-}fallback"}\AttributeTok{)]}
- \ControlFlowTok{if} \PreprocessorTok{std::env::}\NormalTok{var(}\StringTok{"ECHO\_PARALLEL\_STRIDE"}\NormalTok{)}\OperatorTok{.}\NormalTok{is\_ok() }\OperatorTok{\{}
- \ControlFlowTok{return}\NormalTok{ execute\_parallel\_stride(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers)}\OperatorTok{;}
- \OperatorTok{\}}
-
-\NormalTok{ execute\_parallel\_sharded(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers) }\CommentTok{// DEFAULT}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{5.2 Complete Call Trace}\label{complete-call-trace-2}
-
-\begin{verbatim}
-execute_parallel(view, items, workers)
-│
-└─ execute_parallel_sharded(view, items, capped_workers)
- FILE: crates/warp-core/src/boaw/exec.rs-152
- │
- ├─ IF items.is_empty():
- │ return (0..workers).map(|_| TickDelta::new()).collect()
- │
- ├─ partition_into_shards(items.to_vec()) → Vec<VirtualShard>
- │ FILE: crates/warp-core/src/boaw/shard.rs-120
- │ │
- │ ├─ Create 256 empty VirtualShard structures
- │ │
- │ └─ FOR item IN items:
- │ │
- │ ├─ shard_of(&item.scope) → usize
- │ │ FILE: crates/warp-core/src/boaw/shard.rs-92
- │ │ CODE:
- │ │ let bytes = scope.as_bytes();
- │ │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap();
- │ │ let val = u64::from_le_bytes(first_8);
- │ │ (val & 255) as usize // SHARD_MASK = 255
- │ │
- │ └─ shards[shard_id].items.push(item)
- │
- ├─ let next_shard = AtomicUsize::new(0)
- │
- └─ std::thread::scope(|s| { ... })
- FILE: Rust std (scoped threads)
- │
- ├─ FOR _ IN 0..workers:
- │ │
- │ └─ s.spawn(move || { ... }) // ═══ WORKER THREAD ═══
- │ │
- │ ├─ let mut delta = TickDelta::new()
- │ │ FILE: crates/warp-core/src/tick_delta.rs-52
- │ │ CREATES: { ops: Vec::new(), origins: Vec::new() }
- │ │
- │ └─ LOOP: // Work-stealing loop
- │ │
- │ ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed)
- │ │ ATOMIC: Returns old value, increments counter
- │ │ ORDERING: Relaxed (no synchronization cost)
- │ │
- │ ├─ IF shard_id >= 256: break
- │ │
- │ └─ FOR item IN &shards[shard_id].items:
- │ │
- │ ├─ let mut scoped = delta.scoped(item.origin)
- │ │ FILE: crates/warp-core/src/tick_delta.rs-142
- │ │ CREATES: ScopedDelta { inner: &mut delta, origin, next_op_ix: 0 }
- │ │
- │ └─ (item.exec)(view, &item.scope, scoped.inner_mut())
- │ │
- │ └─ INSIDE EXECUTOR:
- │ scoped.emit(op)
- │ FILE: crates/warp-core/src/tick_delta.rs-239
- │ CODE:
- │ origin.op_ix = self.next_op_ix;
- │ self.next_op_ix += 1;
- │ self.inner.emit_with_origin(op, origin);
- │ │
- │ └─ TickDelta::emit_with_origin(op, origin)
- │ FILE: crates/warp-core/src/tick_delta.rs-75
- │ CODE:
- │ self.ops.push(op);
- │ self.origins.push(origin); // if delta_validate
- │
- └─ COLLECT THREADS:
- handles.into_iter().map(|h| h.join()).collect()
- RETURNS: Vec<TickDelta> (one per worker)
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{Shard-Based Work Distribution}
-
-The sharding scheme is beautifully simple: take the first 8 bytes of the scope's NodeId, mask with 255, and you have your shard.
-
-Why 256 shards?
-\begin{itemize}
-\item \textbf{Granularity}: Fine enough that work distributes evenly
-\item \textbf{Overhead}: Coarse enough that per-shard overhead is negligible
-\item \textbf{Determinism}: The shard assignment is deterministic (depends only on NodeId)
-\end{itemize}
-
-The work-stealing loop with \texttt{AtomicUsize::fetch\_add} is lock-free and cache-friendly---each worker claims shards sequentially, minimizing contention.
-\end{cleverpattern}
-
-\begin{deepdive}
-\textbf{Why \texttt{Ordering::Relaxed}?}
-
-The atomic counter uses \texttt{Relaxed} ordering---the weakest memory ordering. This is safe because:
-
-\begin{enumerate}
-\item Each shard is processed by exactly one worker (no data races)
-\item Workers don't need to see each other's results until after \texttt{join()}
-\item The \texttt{join()} itself provides the necessary synchronization
-\end{enumerate}
-
-Using \texttt{Relaxed} instead of \texttt{SeqCst} avoids memory barriers, which can be expensive on multi-core CPUs.
-\end{deepdive}
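-
-The claim loop from the trace can be condensed into a sketch (\texttt{NUM\_SHARDS}, \texttt{workers}, and the shard/delta plumbing are assumed from the surrounding context):
-
-\begin{verbatim}
-// Sketch: lock-free shard claiming with a shared atomic cursor.
-use std::sync::atomic::{AtomicUsize, Ordering};
-
-let next_shard = AtomicUsize::new(0);
-std::thread::scope(|s| {
-    for _ in 0..workers {
-        s.spawn(|| loop {
-            // fetch_add returns the PREVIOUS value, so each worker
-            // receives a unique shard index -- no locks, no races.
-            let shard_id = next_shard.fetch_add(1, Ordering::Relaxed);
-            if shard_id >= NUM_SHARDS {
-                break; // all shards claimed; this worker is done
-            }
-            // ... execute shards[shard_id] into this worker's TickDelta ...
-        });
-    }
-}); // leaving the scope joins every worker (the real sync point)
-\end{verbatim}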
-
-\subsection{5.3 Enforced Execution Path}\label{enforced-execution-path}
-
-\textbf{Entry Point:} \texttt{execute\_item\_enforced()}
-\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs}
-
-When footprint enforcement is active, each item is executed via
-\texttt{execute\_item\_enforced()} instead of a bare function-pointer call.
-Read access is enforced in-line by \texttt{GraphView}/\texttt{FootprintGuard}
-while the executor runs inside \texttt{catch\_unwind}, and post-hoc
-\texttt{check\_op()} validation is applied to any newly-emitted ops.
-
-\begin{verbatim}
-execute_item_enforced(store, item, idx, unit, delta)
-│
-├─ guard = unit.guards[idx]
-├─ view = GraphView::new_guarded(store, guard)
-│
-├─ ops_before = delta.len()
-│ Snapshot the op count BEFORE the executor runs
-│
-├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| {
-│ (item.exec)(view, &item.scope, delta)
-│ }))
-│
-├─ NOTE: During execution above, GraphView validates reads via
-│ FootprintGuard—unauthorized reads are detected inline.
-│
-├─ FOR op IN delta.ops()[ops_before..]:
-│ guard.check_op(op) → panic_any(FootprintViolation) on failure
-│ Validates that each newly-emitted op falls within the declared footprint.
-│ ExecItemKind::System items may emit warp-instance-level ops;
-│ ExecItemKind::User items may not.
-│
-└─ OUTCOME PRECEDENCE:
- ├─ IF check_op fails:
- │ std::panic::panic_any(FootprintViolation { ... })
- │ Footprint violations OVERRIDE executor panics — violation takes precedence.
- │
- ├─ IF footprint is clean BUT executor panicked:
- │ std::panic::resume_unwind(payload)
- │ The original panic propagates to the caller.
- │
- └─ IF both clean:
- return Ok(delta) // Result
-\end{verbatim}
-
-\begin{tourguide}
-The post-hoc strategy is a deliberate design choice: we let the executor run to completion (or panic), then inspect what it wrote. This avoids the overhead of intercepting every write call during hot-loop execution. Read access is still enforced in-line by \texttt{GraphView}/\texttt{FootprintGuard} while the executor runs under \texttt{catch\_unwind}, so unauthorized reads surface immediately even before \texttt{check\_op()} validates writes.
-\end{tourguide}
-
-\begin{cleverpattern}
-\textbf{Outcome Precedence:} Why do write violations override executor panics?
-
-Consider: a rule panics, but before panicking it wrote an out-of-footprint op. If we propagated the panic, the violation evidence would be lost. By checking the delta first, we guarantee the developer sees the footprint violation message—which is more actionable than a random panic.
-\end{cleverpattern}
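-
-The precedence rules can be condensed into a sketch. Here \texttt{run\_executor} and \texttt{first\_violation\_after} are hypothetical stand-ins for the executor call and the \texttt{check\_op()} scan:
-
-\begin{verbatim}
-// Sketch of outcome precedence (simplified; helper names are invented).
-use std::panic::{catch_unwind, panic_any, resume_unwind, AssertUnwindSafe};
-
-let ops_before = delta.len();                 // snapshot before running
-let result = catch_unwind(AssertUnwindSafe(|| run_executor(view, delta)));
-
-if let Some(violation) = first_violation_after(delta, ops_before) {
-    panic_any(violation);   // 1) footprint violation wins, even over a panic
-}
-if let Err(payload) = result {
-    resume_unwind(payload); // 2) clean footprint, but the executor panicked
-}
-// 3) both clean: the delta is valid and can be returned
-\end{verbatim}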
-
-\textbf{The Poison Invariant:} If the executor panics, the \texttt{TickDelta}
-it was writing into is considered poisoned. The execution path returns a
-\texttt{PoisonedDelta} marker, and poisoned deltas are never merged or
-committed.
-
-\subsection{5.4 ExecItem Structure}\label{execitem-structure}
-
-\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs-35}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\AttributeTok{\#[}\NormalTok{derive}\AttributeTok{(}\BuiltInTok{Clone}\OperatorTok{,} \BuiltInTok{Copy}\AttributeTok{)]}
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ ExecItem }\OperatorTok{\{}
- \KeywordTok{pub}\NormalTok{ exec}\OperatorTok{:}\NormalTok{ ExecuteFn}\OperatorTok{,} \CommentTok{// fn(GraphView, \&NodeId, \&mut TickDelta)}
- \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeId}\OperatorTok{,} \CommentTok{// 32{-}byte node identifier}
- \KeywordTok{pub}\NormalTok{ origin}\OperatorTok{:}\NormalTok{ OpOrigin}\OperatorTok{,} \CommentTok{// \{ intent\_id, rule\_id, match\_ix, op\_ix \}}
-
- \CommentTok{// Private field, present only in enforcement builds:}
- \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{any}\AttributeTok{(}\NormalTok{debug\_assertions}\OperatorTok{,}\NormalTok{ feature }\OperatorTok{=} \StringTok{"footprint\_enforce\_release"}\AttributeTok{))]}
- \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{not}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"unsafe\_graph"}\AttributeTok{))]}
-\NormalTok{ kind}\OperatorTok{:}\NormalTok{ ExecItemKind}\OperatorTok{,}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{tourguide}
-\texttt{ExecItem} is \texttt{Clone + Copy}---it's just a function pointer plus some IDs. This means workers can own their items without any reference counting or synchronization. The \texttt{origin} field enables tracing any operation back to the intent and rule that produced it.
-\end{tourguide}
-
-\textbf{\texttt{ExecItemKind} (cfg-gated):}
-
-\begin{itemize}
-\tightlist
-\item
- \texttt{ExecItemKind::User} --- Normal rule executor. May emit
- node/edge/attachment ops scoped to the declared footprint. Cannot emit
- warp-instance-level ops (\texttt{UpsertWarpInstance},
- \texttt{DeleteWarpInstance}, \texttt{OpenPortal}).
-\item
- \texttt{ExecItemKind::System} --- Internal-only executor (e.g., portal
- opening). May emit warp-instance-level ops.
-\end{itemize}
-
-\texttt{ExecItem::new()} always creates \texttt{User} items. System items are
-constructed via \texttt{ExecItem::new\_system()} (cfg-gated \texttt{pub(crate)}
-constructor used by portal/inbox rules) and are never exposed through the public
-API.
-
-\begin{cleverpattern}
-\textbf{The dual-attribute cfg-gate pattern:} The \texttt{kind} field (and all
-enforcement logic) is guarded by two cfg attributes that together express three
-conditions (\texttt{debug\_assertions}, \texttt{footprint\_enforce\_release},
-and \texttt{unsafe\_graph}):
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
- \texttt{\#[cfg(any(debug\_assertions, feature = "footprint\_enforce\_release"))]}
- --- active in debug builds or when the release enforcement feature is
- opted-in.
-\item
- \texttt{\#[cfg(not(feature = "unsafe\_graph"))]} --- disabled when the
- escape-hatch feature is set (for benchmarks/fuzzing that intentionally
- bypass checks).
-\end{enumerate}
-
-The gates are symmetric: the \texttt{kind} field, \texttt{guards} vector, and
-validation code all have both cfg attributes applied identically.
-\textbf{Precedence:} When both \texttt{footprint\_enforce\_release} and
-\texttt{unsafe\_graph} are enabled, the \texttt{unsafe\_graph} escape hatch
-wins: the \texttt{kind} field, \texttt{guards} vector, and validation code
-are compiled out, so \texttt{ExecItem} keeps its non-enforced layout and
-enforcement is silently inactive even though release enforcement was
-requested. Note that the struct layout therefore depends on the build
-profile---\texttt{ExecItem} is smaller in builds where the guard is
-inactive.
-\end{cleverpattern}
-
-\subsection{5.5 Thread Safety}\label{thread-safety}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Type & Safety & Reason \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{GraphView} & \texttt{Sync\ +\ Send\ +\ Clone} & Read-only
-snapshot \\
-\texttt{ExecItem} & \texttt{Sync\ +\ Send\ +\ Copy} & Function pointer +
-primitives \\
-\texttt{TickDelta} & Per-worker exclusive & No shared mutation \\
-\texttt{AtomicUsize} & Lock-free & \texttt{fetch\_add} with
-\texttt{Relaxed} ordering \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Delta Merge \& State
-Finalization}\label{delta-merge-state-finalization}
-
-\begin{tourguide}
-This is where the magic happens: multiple workers produce independent deltas, and we merge them into a single canonical result. The key invariant: \emph{the merge output depends only on the operations, not on which worker produced them or when}.
-\end{tourguide}
-
-\subsection{6.1 Canonical Merge}\label{canonical-merge}
-
-\textbf{Entry Point:} \texttt{merge\_deltas()} \textbf{File:}
-\texttt{crates/warp-core/src/boaw/merge.rs-75}
-
-\begin{verbatim}
-merge_deltas(deltas: Vec<TickDelta>) → Result<Vec<WarpOp>, MergeConflict>
-│
-├─[1] FLATTEN ALL OPS WITH ORIGINS
-│ let mut flat: Vec<(WarpOpKey, OpOrigin, WarpOp)> = Vec::new();
-│ FOR d IN deltas:
-│ let (ops, origins) = d.into_parts_unsorted();
-│ FOR (op, origin) IN ops.zip(origins):
-│ flat.push((op.sort_key(), origin, op));
-│
-├─[2] CANONICAL SORT
-│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1)));
-│ ORDER: (WarpOpKey, OpOrigin) lexicographic
-│
-└─[3] DEDUPE & CONFLICT DETECTION
- let mut out = Vec::new();
- let mut i = 0;
- WHILE i < flat.len():
- │
- ├─ GROUP by WarpOpKey
- │ key = flat[i].0
- │ start = i
- │ WHILE i < flat.len() && flat[i].0 == key: i++
- │
- ├─ CHECK if all ops identical
- │ first = &flat[start].2
- │ all_same = flat[start+1..i].iter().all(|(_, _, op)| op == first)
- │
- └─ IF all_same:
- out.push(first.clone()) // Accept one copy
- ELSE:
- writers = flat[start..i].iter().map(|(_, o, _)| *o).collect()
- return Err(MergeConflict { writers }) // CONFLICT!
-
- return Ok(out)
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{Benevolent Coincidence}
-
-The merge allows multiple writers to produce the same operation---this is called a \emph{benevolent coincidence}. If two rules independently decide to create the same edge, that's fine! The merge keeps one copy.
-
-But if they produce \emph{different} operations for the same key (e.g., setting an attachment to different values), that's a \texttt{MergeConflict}---a bug in the rule definitions.
-
-This policy allows natural redundancy in rule specifications while catching genuine conflicts.
-\end{cleverpattern}
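-
-The dedupe step over one sorted key-group can be sketched with simplified types (the real code works on \texttt{(WarpOpKey, OpOrigin, WarpOp)} triples):
-
-\begin{verbatim}
-// Sketch: dedupe one group of ops sharing the same sort key.
-// Ok(op) keeps a single survivor; Err(writers) reports a MergeConflict.
-// Tuple fields stand in for (sort key, origin, op).
-fn dedupe_group(group: &[(u64, u32, String)]) -> Result<String, Vec<u32>> {
-    let first = &group[0].2;
-    if group.iter().all(|(_, _, op)| op == first) {
-        Ok(first.clone()) // benevolent coincidence: identical ops collapse
-    } else {
-        Err(group.iter().map(|(_, origin, _)| *origin).collect())
-    }
-}
-\end{verbatim}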
-
-\subsection{6.2 WarpOp Sort Key}\label{warpop-sort-key}
-
-\textbf{File:} \texttt{crates/warp-core/src/tick\_patch.rs-287}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ sort\_key(}\OperatorTok{\&}\KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}
- \ControlFlowTok{match} \KeywordTok{self} \OperatorTok{\{}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{OpenPortal }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{2}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{3}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{4}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Delete before upsert}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{5}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{6}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{7}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{SetAttachment }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{8}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Last}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Canonical Order:}
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item OpenPortal (creates child instances)
-\item UpsertWarpInstance
-\item DeleteWarpInstance
-\item DeleteEdge (delete before upsert)
-\item DeleteNode (delete before upsert)
-\item UpsertNode
-\item UpsertEdge
-\item SetAttachment (after skeleton exists)
-\end{enumerate}
-
-\begin{deepdive}
-\textbf{Why This Specific Order?}
-
-The operation order is carefully chosen to maintain invariants:
-
-\begin{enumerate}
-\item \textbf{OpenPortal first}: Creates warp instances that other ops may reference
-\item \textbf{Deletes before upserts}: Ensures we don't accidentally delete something we just created (idempotence)
-\item \textbf{Nodes before edges}: Edges reference nodes, so nodes must exist first
-\item \textbf{Attachments last}: Attachments reference nodes/edges, so the skeleton must be complete
-\end{enumerate}
-
-This ordering means rules don't need to worry about operation sequencing---emit ops in any order, and the merge will sort them correctly.
-\end{deepdive}
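-
-A minimal sketch of this canonical ordering as a stable sort over kind
-codes (the \texttt{Op} enum and \texttt{kind} function here are
-simplified stand-ins, not the real \texttt{WarpOp}/\texttt{WarpOpKey}
-types):
-
-\begin{verbatim}
-#[derive(Debug, PartialEq)]
-enum Op { OpenPortal, DeleteEdge, UpsertNode, SetAttachment }
-
-fn kind(op: &Op) -> u8 {
-    match op {
-        Op::OpenPortal => 1,    // creates instances others reference
-        Op::DeleteEdge => 4,    // deletes run before upserts
-        Op::UpsertNode => 6,
-        Op::SetAttachment => 8, // last: skeleton must exist
-    }
-}
-
-fn main() {
-    // Rules may emit ops in any order...
-    let mut ops = vec![Op::SetAttachment, Op::UpsertNode,
-                       Op::OpenPortal, Op::DeleteEdge];
-    // ...the merge's stable sort restores canonical order while
-    // preserving emission order among equal kinds.
-    ops.sort_by_key(kind);
-    assert_eq!(ops, vec![Op::OpenPortal, Op::DeleteEdge,
-                         Op::UpsertNode, Op::SetAttachment]);
-}
-\end{verbatim}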
-
-\subsection{6.3 State Mutation Methods}\label{state-mutation-methods}
-
-\textbf{File:} \texttt{crates/warp-core/src/graph.rs}
-
-\begin{verbatim}
-GraphStore::insert_node(id, record)
- LINE: 175-177
- CODE: self.nodes.insert(id, record)
-
-GraphStore::upsert_edge_record(from, edge)
- LINE: 196-261
- UPDATES:
- - self.edge_index.insert(edge_id, from)
- - self.edge_to_index.insert(edge_id, to)
- - Remove old edge from previous bucket if exists
- - self.edges_from.entry(from).or_default().push(edge)
- - self.edges_to.entry(to).or_default().push(edge_id)
-
-GraphStore::delete_node_cascade(node)
- LINE: 277-354
- CASCADES:
- - Remove from self.nodes
- - Remove node attachment
- - Remove ALL outbound edges (and their attachments)
- - Remove ALL inbound edges (and their attachments)
- - Maintain all 4 index maps consistently
-
-GraphStore::delete_edge_exact(from, edge_id)
- LINE: 360-412
- VALIDATES: edge is in correct "from" bucket
- REMOVES:
- - From edges_from bucket
- - From edge_index
- - From edge_to_index
- - From edges_to bucket
- - Edge attachment
-
-GraphStore::set_node_attachment(id, value)
- LINE: 125-134
- CODE:
- None → self.node_attachments.remove(&id)
- Some(v) → self.node_attachments.insert(id, v)
-
-GraphStore::set_edge_attachment(id, value)
- LINE: 163-172
- Same pattern as node attachments
-\end{verbatim}
-
-\begin{watchout}
-\textbf{Cascade Deletes Are Dangerous}
-
-\texttt{delete\_node\_cascade} removes not just the node, but all its edges and attachments. This is correct behavior (dangling edges would violate invariants), but rule authors must be aware: deleting a highly-connected node triggers many index updates.
-
-This is why footprints must declare write access to all edges that might be affected---the cascade happens even if the rule only explicitly deletes the node.
-\end{watchout}
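-
-To make the index bookkeeping concrete, here is a minimal sketch of a
-cascade delete over four index maps analogous to those listed above.
-The \texttt{Store} type and its fields are simplified stand-ins for
-\texttt{GraphStore} (attachments omitted):
-
-\begin{verbatim}
-use std::collections::{BTreeMap, BTreeSet};
-
-type NodeId = u32;
-type EdgeId = u32;
-
-#[derive(Default)]
-struct Store {
-    nodes: BTreeSet<NodeId>,
-    edges_from: BTreeMap<NodeId, Vec<(EdgeId, NodeId)>>, // from -> [(edge, to)]
-    edges_to: BTreeMap<NodeId, Vec<EdgeId>>,             // to -> [edge]
-    edge_index: BTreeMap<EdgeId, NodeId>,                // edge -> from
-    edge_to_index: BTreeMap<EdgeId, NodeId>,             // edge -> to
-}
-
-impl Store {
-    fn delete_node_cascade(&mut self, n: NodeId) {
-        self.nodes.remove(&n);
-        // Remove all outbound edges and their reverse-index entries.
-        for (e, to) in self.edges_from.remove(&n).unwrap_or_default() {
-            self.edge_index.remove(&e);
-            self.edge_to_index.remove(&e);
-            if let Some(v) = self.edges_to.get_mut(&to) { v.retain(|x| *x != e); }
-        }
-        // Remove all inbound edges from their source buckets.
-        for e in self.edges_to.remove(&n).unwrap_or_default() {
-            if let Some(from) = self.edge_index.remove(&e) {
-                self.edge_to_index.remove(&e);
-                if let Some(v) = self.edges_from.get_mut(&from) {
-                    v.retain(|x| x.0 != e);
-                }
-            }
-        }
-    }
-}
-
-fn main() {
-    let mut s = Store::default();
-    s.nodes.extend([1, 2]);
-    s.edges_from.insert(1, vec![(10, 2)]);
-    s.edges_to.insert(2, vec![10]);
-    s.edge_index.insert(10, 1);
-    s.edge_to_index.insert(10, 2);
-    s.delete_node_cascade(1);
-    // All four index maps stay consistent: edge 10 is gone everywhere.
-    assert!(s.edge_index.is_empty() && s.edge_to_index.is_empty());
-    assert!(s.edges_to[&2].is_empty());
-}
-\end{verbatim}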
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{7. Hash Computation}\label{hash-computation}
-
-\begin{tourguide}
-Hashing is Echo's fingerprint technology. The state root captures \emph{what the graph looks like}; the commit hash captures \emph{how we got here}. Both are computed deterministically using BLAKE3, ensuring that identical states produce identical hashes across all nodes in a distributed system.
-\end{tourguide}
-
-\subsection{7.1 State Root}\label{state-root}
-
-\textbf{Entry Point:} \texttt{compute\_state\_root()} \textbf{File:}
-\texttt{crates/warp-core/src/snapshot.rs} (lines 88--209)
-
-\begin{verbatim}
-compute_state_root(state: &WarpState, root: &NodeKey) → Hash
-│
-├─[1] BFS REACHABILITY TRAVERSAL
-│ │
-│ ├─ Initialize:
-│ │ reachable_nodes: BTreeSet = { root }
-│ │ reachable_warps: BTreeSet = { root.warp_id }
-│ │ queue: VecDeque = [ root ]
-│ │
-│ └─ WHILE let Some(current) = queue.pop_front():
-│ │
-│ ├─ store = state.store(&current.warp_id)
-│ │
-│ ├─ FOR edge IN store.edges_from(&current.local_id):
-│ │ ├─ to = NodeKey { warp_id: current.warp_id, local_id: edge.to }
-│ │ ├─ IF reachable_nodes.insert(to): queue.push_back(to)
-│ │ │
-│ │ └─ IF edge has Descend(child_warp) attachment:
-│ │ └─ enqueue_descend(state, child_warp, ...)
-│ │ Adds child instance root to queue
-│ │
-│ └─ IF current node has Descend(child_warp) attachment:
-│ enqueue_descend(state, child_warp, ...)
-│
-├─[2] HASHING PHASE
-│ │
-│ ├─ let mut hasher = Hasher::new() // BLAKE3
-│ │
-│ ├─ HASH ROOT BINDING:
-│ │ hasher.update(&root.warp_id.0) // 32 bytes
-│ │ hasher.update(&root.local_id.0) // 32 bytes
-│ │
-│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order
-│ │
-│ ├─ HASH INSTANCE HEADER:
-│ │ hasher.update(&instance.warp_id.0) // 32 bytes
-│ │ hasher.update(&instance.root_node.0) // 32 bytes
-│ │ hash_attachment_key_opt(&mut hasher, instance.parent.as_ref())
-│ │
-│ ├─ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted
-│ │ IF reachable_nodes.contains(&NodeKey { warp_id, local_id: node_id }):
-│ │ hasher.update(&node_id.0) // 32 bytes
-│ │ hasher.update(&node.ty.0) // 32 bytes
-│ │ hash_attachment_value_opt(&mut hasher, store.node_attachment(node_id))
-│ │
-│ └─ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted
-│ IF from is reachable:
-│ sorted_edges = edges.filter(reachable).sort_by(|a,b| a.id.cmp(b.id))
-│ hasher.update(&from.0) // 32 bytes
-│ hasher.update(&(sorted_edges.len() as u64).to_le_bytes()) // 8 bytes
-│ FOR edge IN sorted_edges:
-│ hasher.update(&edge.id.0) // 32 bytes
-│ hasher.update(&edge.ty.0) // 32 bytes
-│ hasher.update(&edge.to.0) // 32 bytes
-│ hash_attachment_value_opt(&mut hasher, store.edge_attachment(&edge.id))
-│
-└─ hasher.finalize().into() // → [u8; 32]
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{BTreeSet/BTreeMap for Determinism}
-
-Notice the use of \texttt{BTreeSet} and \texttt{BTreeMap} throughout. Unlike \texttt{HashSet}/\texttt{HashMap}, B-tree collections iterate in \emph{sorted order}. This is essential for deterministic hashing---the hash must be the same regardless of insertion order.
-
-The trade-off: B-tree operations are O(log n) instead of O(1). But for hashing (which happens once per commit), correctness trumps speed.
-\end{cleverpattern}
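-
-A quick demonstration of why this matters for hashing: insertion order
-is invisible to iteration order, so a hasher fed from iteration sees
-the same byte stream either way.
-
-\begin{verbatim}
-use std::collections::BTreeMap;
-
-fn main() {
-    // Same entries, inserted in different orders...
-    let a: BTreeMap<u32, &str> =
-        [(3, "c"), (1, "a"), (2, "b")].into_iter().collect();
-    let b: BTreeMap<u32, &str> =
-        [(1, "a"), (2, "b"), (3, "c")].into_iter().collect();
-
-    // ...iterate identically, so any hash derived from iteration
-    // order agrees across both maps.
-    assert!(a.iter().eq(b.iter()));
-    assert_eq!(a.keys().copied().collect::<Vec<_>>(), vec![1, 2, 3]);
-}
-\end{verbatim}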
-
-\begin{deepdive}
-\textbf{Reachability Pruning}
-
-The BFS traversal only hashes \emph{reachable} nodes and edges. This means:
-
-\begin{enumerate}
-\item Garbage (unreachable nodes) doesn't affect the hash
-\item Two states with the same reachable structure have the same hash
-\item Deleting a disconnected subgraph doesn't change the hash
-\end{enumerate}
-
-This is a subtle but important property for garbage collection---you can safely remove unreachable data without affecting consensus.
-\end{deepdive}
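-
-The pruning property can be seen in a few lines, with a toy adjacency
-map standing in for the real per-warp traversal:
-
-\begin{verbatim}
-use std::collections::{BTreeMap, BTreeSet, VecDeque};
-
-// BFS from the root; only nodes reached here would feed the hasher.
-fn reachable(edges: &BTreeMap<u32, Vec<u32>>, root: u32) -> BTreeSet<u32> {
-    let mut seen = BTreeSet::from([root]);
-    let mut queue = VecDeque::from([root]);
-    while let Some(cur) = queue.pop_front() {
-        for &to in edges.get(&cur).into_iter().flatten() {
-            if seen.insert(to) {
-                queue.push_back(to);
-            }
-        }
-    }
-    seen
-}
-
-fn main() {
-    let mut edges = BTreeMap::new();
-    edges.insert(0, vec![1, 2]);
-    edges.insert(99, vec![100]); // disconnected garbage subgraph
-    let r = reachable(&edges, 0);
-    assert_eq!(r, BTreeSet::from([0, 1, 2]));
-    assert!(!r.contains(&99)); // garbage never reaches the hasher
-}
-\end{verbatim}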
-
-\subsection{7.2 Commit Hash v2}\label{commit-hash-v2}
-
-\textbf{Entry Point:} \texttt{compute\_commit\_hash\_v2()}
-\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs} (lines 244--263)
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ compute\_commit\_hash\_v2(}
-\NormalTok{ state\_root}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,}
-\NormalTok{ parents}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\BuiltInTok{Hash}\NormalTok{]}\OperatorTok{,}
-\NormalTok{ patch\_digest}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,}
-\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \BuiltInTok{Hash} \OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Version tag (2 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{(parents}\OperatorTok{.}\NormalTok{len() }\KeywordTok{as} \DataTypeTok{u64}\NormalTok{)}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Parent count (8 bytes)}
- \ControlFlowTok{for}\NormalTok{ p }\KeywordTok{in}\NormalTok{ parents }\OperatorTok{\{}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(p)}\OperatorTok{;} \CommentTok{// Each parent (32 bytes)}
- \OperatorTok{\}}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(state\_root)}\OperatorTok{;} \CommentTok{// Graph hash (32 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(patch\_digest)}\OperatorTok{;} \CommentTok{// Ops hash (32 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Policy (4 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Byte Layout:}
-
-\begin{verbatim}
-Offset Size Field
-0 2 version_tag (0x02 0x00)
-2 8 parent_count (u64 LE)
-10 32*N parents[] (N parent hashes)
-10+32N 32 state_root
-42+32N 32 patch_digest
-74+32N 4 policy_id (u32 LE)
-─────────────────────────────────────
-TOTAL: 78 + 32*N bytes → BLAKE3 → 32-byte hash
-\end{verbatim}
-
-\begin{tourguide}
-The version tag (\texttt{0x02 0x00}) is future-proofing: if the commit hash format ever needs to change, the version lets validators distinguish between formats. The ``v2'' in the function name indicates this is already the second iteration of the format.
-\end{tourguide}
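-
-The byte layout can be checked mechanically. This sketch builds only
-the preimage; the final BLAKE3 call is elided, since it lives in an
-external crate:
-
-\begin{verbatim}
-fn commit_preimage(state_root: &[u8; 32], parents: &[[u8; 32]],
-                   patch_digest: &[u8; 32], policy_id: u32) -> Vec<u8> {
-    let mut buf = Vec::new();
-    buf.extend_from_slice(&2u16.to_le_bytes());                   // version tag
-    buf.extend_from_slice(&(parents.len() as u64).to_le_bytes()); // parent count
-    for p in parents {
-        buf.extend_from_slice(p);                                 // each parent
-    }
-    buf.extend_from_slice(state_root);
-    buf.extend_from_slice(patch_digest);
-    buf.extend_from_slice(&policy_id.to_le_bytes());
-    buf
-}
-
-fn main() {
-    let zero = [0u8; 32];
-    let buf = commit_preimage(&zero, &[zero; 2], &zero, 7);
-    assert_eq!(buf.len(), 78 + 32 * 2);    // matches the table above
-    assert_eq!(&buf[0..2], &[0x02, 0x00]); // version tag bytes
-}
-\end{verbatim}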
-
-\subsection{7.3 Patch Digest}\label{patch-digest}
-
-\textbf{Entry Point:} \texttt{compute\_patch\_digest\_v2()}
-\textbf{File:} \texttt{crates/warp-core/src/tick\_patch.rs} (lines 755--774)
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{fn}\NormalTok{ compute\_patch\_digest\_v2(}
-\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,}
-\NormalTok{ rule\_pack\_id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{ContentHash}\OperatorTok{,}
-\NormalTok{ commit\_status}\OperatorTok{:}\NormalTok{ TickCommitStatus}\OperatorTok{,}
-\NormalTok{ in\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,}
-\NormalTok{ out\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,}
-\NormalTok{ ops}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[WarpOp]}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ ContentHash }\OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Format version}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// 4 bytes}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(rule\_pack\_id)}\OperatorTok{;} \CommentTok{// 32 bytes}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{[commit\_status}\OperatorTok{.}\NormalTok{code()])}\OperatorTok{;} \CommentTok{// 1 byte}
-\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ in\_slots)}\OperatorTok{;}
-\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ out\_slots)}\OperatorTok{;}
-\NormalTok{ encode\_ops(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ ops)}\OperatorTok{;}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{8. Commit Orchestration}\label{commit-orchestration}
-
-\textbf{Entry Point:} \texttt{Engine::commit\_with\_receipt()}
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs} (lines 837--954)
-
-\begin{tourguide}
-This is the grand finale---where all the pieces come together. The commit orchestrator drains the scheduler, reserves resources, executes rules, merges deltas, computes hashes, and records the transaction. Let's trace through every step.
-\end{tourguide}
-
-\subsection{8.1 Complete Call Trace}\label{complete-call-trace-3}
-
-\begin{verbatim}
-Engine::commit_with_receipt(tx) → Result<(Snapshot, TickReceipt, WarpTickPatchV1), EngineError>
-│
-├─[1] VALIDATE TRANSACTION
-│ IF tx.value() == 0 || !self.live_txs.contains(&tx.value()):
-│ return Err(EngineError::UnknownTx)
-│
-├─[2] DRAIN CANDIDATES
-│ policy_id = self.policy_id // Line 844
-│ rule_pack_id = self.compute_rule_pack_id() // Line 845
-│ │
-│ ├─ compute_rule_pack_id()
-│ │ FILE: engine_impl.rs
-│ │ CODE:
-│ │ ids = self.rules.values().map(|r| r.id).collect()
-│ │ ids.sort_unstable(); ids.dedup()
-│ │ hasher.update(&1u16.to_le_bytes()) // version
-│ │ hasher.update(&(ids.len() as u64).to_le_bytes())
-│ │ FOR id IN ids: hasher.update(&id)
-│ │ hasher.finalize().into()
-│ │
-│ drained = self.scheduler.drain_for_tx(tx) // Line 847
-│ plan_digest = compute_plan_digest(&drained) // Line 848
-│
-├─[3] RESERVE (INDEPENDENCE CHECK)
-│ ReserveOutcome { receipt, reserved, in_slots, out_slots }
-│ = self.reserve_for_receipt(tx, drained)? // Line 850-855
-│ │
-│ └─ reserve_for_receipt(tx, drained)
-│ FILE: engine_impl.rs
-│ │
-│ FOR rewrite IN drained (canonical order):
-│ │
-│ ├─ accepted = self.scheduler.reserve(tx, &mut rewrite)
-│ │
-│ ├─ IF !accepted:
-│ │ blockers = find_blocking_rewrites(reserved, &rewrite)
-│ │
-│ ├─ receipt_entries.push(TickReceiptEntry { ... })
-│ │
-│ └─ IF accepted:
-│ reserved.push(rewrite)
-│ extend_slots_from_footprint(&mut in_slots, &mut out_slots, ...)
-│ │
-│ return ReserveOutcome { receipt, reserved, in_slots, out_slots }
-│
-│ rewrites_digest = compute_rewrites_digest(&reserved_rewrites) // Line 858
-│
-├─[4] EXECUTE (PHASE 5 BOAW)
-│ state_before = self.state.clone() // Line 862
-│ delta_ops = self.apply_reserved_rewrites(reserved, &state_before)?
-│ │
-│ └─ apply_reserved_rewrites(rewrites, state_before)
-│ FILE: engine_impl.rs
-│ │
-│ ├─ let mut delta = TickDelta::new()
-│ │
-│ ├─ FOR rewrite IN rewrites:
-│ │ executor = self.rule_by_compact(rewrite.compact_rule).executor
-│ │ view = GraphView::new(self.state.store(&rewrite.scope.warp_id))
-│ │ (executor)(view, &rewrite.scope.local_id, &mut delta)
-│ │
-│ ├─ let ops = delta.finalize() // Canonical sort
-│ │
-│ ├─ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops)
-│ │ patch.apply_to_state(&mut self.state)?
-│ │
-│ └─ [delta_validate]: assert_delta_matches_diff(&ops, &diff_ops)
-│
-├─[5] MATERIALIZE
-│ mat_report = self.bus.finalize() // Line 884
-│ self.last_materialization = mat_report.channels
-│ self.last_materialization_errors = mat_report.errors
-│
-├─[6] COMPUTE DELTA PATCH
-│ ops = diff_state(&state_before, &self.state) // Line 889
-│ │
-│ └─ diff_state(before, after)
-│ FILE: tick_patch.rs
-│ - Canonicalize portal authoring (OpenPortal)
-│ - Diff instances (delete/upsert)
-│ - Diff nodes, edges, attachments
-│ - Sort by WarpOp::sort_key()
-│ │
-│ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops)
-│ patch_digest = patch.digest() // Line 898
-│
-├─[7] COMPUTE STATE ROOT
-│ state_root = compute_state_root(&self.state, &self.current_root) // Line 900
-│
-├─[8] GET PARENTS
-│ parents = self.last_snapshot.as_ref().map(|s| vec![s.hash]).unwrap_or_default()
-│
-├─[9] COMPUTE DECISION DIGEST
-│ decision_digest = receipt.digest() // Line 929
-│
-├─[10] COMPUTE COMMIT HASH
-│ hash = compute_commit_hash_v2(&state_root, &parents, &patch_digest, policy_id)
-│
-├─[11] BUILD SNAPSHOT
-│ snapshot = Snapshot {
-│ root: self.current_root,
-│ hash, // commit_id v2
-│ parents,
-│ plan_digest, // Diagnostic
-│ decision_digest, // Diagnostic
-│ rewrites_digest, // Diagnostic
-│ patch_digest, // COMMITTED
-│ policy_id, // COMMITTED
-│ tx,
-│ }
-│
-├─[12] RECORD TO HISTORY
-│ self.last_snapshot = Some(snapshot.clone()) // Line 947
-│ self.tick_history.push((snapshot, receipt, patch)) // Line 948-949
-│ self.live_txs.remove(&tx.value()) // Line 951
-│ self.scheduler.finalize_tx(tx) // Line 952
-│
-└─[13] RETURN
- Ok((snapshot, receipt, patch))
-\end{verbatim}
-
-\begin{cleverpattern}
-\textbf{State Snapshot Before Mutation}
-
-In step [4], notice \texttt{state\_before = self.state.clone()}. This clone happens \emph{before} any mutations. Why?
-
-\begin{enumerate}
-\item Enables \texttt{diff\_state()} to compute exactly what changed
-\item Supports rollback if execution fails (though this isn't shown)
-\item Provides validation: the delta from execution should match the diff
-\end{enumerate}
-
-The clone is relatively cheap because it's copy-on-write under the hood---most data is shared until mutation.
-\end{cleverpattern}
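-
-A sketch of this pattern using \texttt{Arc}-based copy-on-write (the
-real \texttt{WarpState} internals may differ):
-
-\begin{verbatim}
-use std::collections::BTreeMap;
-use std::sync::Arc;
-
-fn main() {
-    let mut state: Arc<BTreeMap<&str, i32>> =
-        Arc::new(BTreeMap::from([("balance", 100)]));
-    let state_before = Arc::clone(&state); // cheap: shares the same data
-
-    // First mutation triggers a deep copy (make_mut clones shared data).
-    Arc::make_mut(&mut state).insert("balance", 150);
-
-    // The pre-mutation snapshot is untouched, so a diff is possible.
-    assert_eq!(state_before["balance"], 100);
-    assert_eq!(state["balance"], 150);
-}
-\end{verbatim}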
-
-\begin{deepdive}
-\textbf{Diagnostic vs. Committed Digests}
-
-The snapshot contains multiple digests, but only some are ``committed'' (affect the hash):
-
-\begin{itemize}
-\item \textbf{Committed}: \texttt{state\_root}, \texttt{patch\_digest}, \texttt{policy\_id}, \texttt{parents}
-\item \textbf{Diagnostic}: \texttt{plan\_digest}, \texttt{decision\_digest}, \texttt{rewrites\_digest}
-\end{itemize}
-
-Diagnostic digests are for debugging and auditing---they help trace what happened, but don't affect consensus. This separation keeps the consensus-critical path minimal while providing rich observability.
-\end{deepdive}
-
-\subsection{8.2 Commit Hash Inputs}\label{commit-hash-inputs}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Input & Committed? & Purpose \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{state\_root} & ✓ & What the graph looks like \\
-\texttt{patch\_digest} & ✓ & How we got here (ops) \\
-\texttt{parents} & ✓ & Chain continuity \\
-\texttt{policy\_id} & ✓ & Aion policy version \\
-\texttt{plan\_digest} & ✗ & Diagnostic only \\
-\texttt{decision\_digest} & ✗ & Diagnostic only \\
-\texttt{rewrites\_digest} & ✗ & Diagnostic only \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{9. Complete Call Graph}\label{complete-call-graph}
-
-\subsection{9.1 Full Journey: Intent →
-Commit}\label{full-journey-intent-commit}
-
-\begin{verbatim}
-USER ACTION
- │
- ▼
-Engine::ingest_intent(intent_bytes)
- ├─ compute_intent_id() // BLAKE3 content hash
- ├─ make_node_id(), make_type_id() // Structural IDs
- ├─ store.insert_node() // Create event node
- ├─ store.set_node_attachment() // Attach intent payload
- └─ store.insert_edge() // Pending edge to inbox
- │
- ▼
-Engine::begin() → TxId
- ├─ tx_counter.wrapping_add(1)
- ├─ live_txs.insert(tx_counter)
- └─ TxId::from_raw(tx_counter)
- │
- ▼
-Engine::dispatch_next_intent(tx) // (or manual apply)
- │
- ▼
-Engine::apply(tx, rule_name, scope)
- └─ Engine::apply_in_warp(tx, warp_id, rule_name, scope, &[])
- ├─ rules.get(rule_name) // Lookup rule
- ├─ GraphView::new(store) // Read-only view
- ├─ (rule.matcher)(view, scope) // Match check
- ├─ scope_hash() // BLAKE3 ordering key
- ├─ (rule.compute_footprint)(view, scope) // Footprint
- └─ scheduler.enqueue(tx, PendingRewrite)
- └─ PendingTx::enqueue() // Last-wins dedup
- │
- ▼
-Engine::commit_with_receipt(tx)
- │
- ├─[DRAIN]
- │ scheduler.drain_for_tx(tx)
- │ └─ PendingTx::drain_in_order()
- │ └─ radix_sort() or sort_unstable_by()
- │ 20-pass LSD radix sort
- │ ORDER: (scope_hash, rule_id, nonce)
- │
- ├─[RESERVE]
- │ FOR rewrite IN drained:
- │ scheduler.reserve(tx, &mut rewrite)
- │ ├─ has_conflict(active, pr)
- │ │ └─ GenSet::contains() × N // O(1) per check
- │ └─ mark_all(active, pr)
- │ └─ GenSet::mark() × M // O(1) per mark
- │
- ├─[EXECUTE]
- │ apply_reserved_rewrites(reserved, state_before)
- │ FOR rewrite IN reserved:
- │ (executor)(view, &scope, &mut delta)
- │ └─ scoped.emit(op)
- │ └─ delta.emit_with_origin(op, origin)
- │ delta.finalize() // Sort ops
- │ patch.apply_to_state(&mut self.state)
- │
- ├─[MATERIALIZE]
- │ bus.finalize()
- │
- ├─[DELTA PATCH]
- │ diff_state(&state_before, &self.state)
- │ └─ Sort by WarpOp::sort_key()
- │ WarpTickPatchV1::new(...)
- │ └─ compute_patch_digest_v2()
- │
- ├─[HASHES]
- │ compute_state_root(&self.state, &self.current_root)
- │ ├─ BFS reachability
- │ └─ BLAKE3 over canonical encoding
- │ compute_commit_hash_v2(state_root, parents, patch_digest, policy_id)
- │ └─ BLAKE3(version || parents || state_root || patch_digest || policy_id)
- │
- ├─[SNAPSHOT]
- │ Snapshot { root, hash, parents, digests..., policy_id, tx }
- │
- └─[RECORD]
- tick_history.push((snapshot, receipt, patch))
- live_txs.remove(&tx.value())
- scheduler.finalize_tx(tx)
- │
- ▼
-RETURN: (Snapshot, TickReceipt, WarpTickPatchV1)
-\end{verbatim}
-
-\begin{tourguide}
-And there you have it---the complete journey from user action to committed state. Every step is deterministic, every hash is content-addressed, and the system can be replayed or verified by any node with the same inputs.
-
-The elegance lies in the separation of concerns:
-\begin{itemize}
-\item \textbf{Ingestion} is pure data capture
-\item \textbf{Matching} is pure pattern recognition
-\item \textbf{Scheduling} is pure ordering
-\item \textbf{Execution} is pure computation (no side effects escape)
-\item \textbf{Merging} is pure deduplication
-\item \textbf{Hashing} is pure fingerprinting
-\end{itemize}
-
-Each phase can be reasoned about independently, tested independently, and optimized independently. This is the hallmark of well-architected systems.
-\end{tourguide}
-
-\subsection{9.2 File Index}\label{file-index}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Component & Primary File & Key Lines \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-Intent Ingestion & \texttt{engine\_impl.rs} & 1216-1281 \\
-Identity Hashing & \texttt{ident.rs} & 85-109 \\
-Transaction Begin & \texttt{engine\_impl.rs} & 711-719 \\
-Rule Apply & \texttt{engine\_impl.rs} & 730-806 \\
-Footprint & \texttt{footprint.rs} & 131-152 \\
-Scheduler Enqueue & \texttt{scheduler.rs} & 102-105, 331-355 \\
-Radix Sort & \texttt{scheduler.rs} & 360-413, 481-498 \\
-Reserve/Conflict & \texttt{scheduler.rs} & 134-278 \\
-GenSet & \texttt{scheduler.rs} & 509-535 \\
-BOAW Execute & \texttt{boaw/exec.rs} & 61-152 \\
-Shard Routing & \texttt{boaw/shard.rs} & 82-120 \\
-Delta Merge & \texttt{boaw/merge.rs} & 36-75 \\
-TickDelta & \texttt{tick\_delta.rs} & 38-172 \\
-WarpOp Sort Key & \texttt{tick\_patch.rs} & 207-287 \\
-State Mutations & \texttt{graph.rs} & 175-412 \\
-Patch Apply & \texttt{tick\_patch.rs} & 434-561 \\
-Diff State & \texttt{tick\_patch.rs} & 979-1069 \\
-State Root Hash & \texttt{snapshot.rs} & 88-209 \\
-Commit Hash v2 & \texttt{snapshot.rs} & 244-263 \\
-Patch Digest & \texttt{tick\_patch.rs} & 755-774 \\
-Commit Orchestrator & \texttt{engine\_impl.rs} & 837-954 \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix A: Complexity
-Summary}\label{appendix-a-complexity-summary}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Operation & Complexity & Notes \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{ingest\_intent} & O(1) & Fixed structural insertions \\
-\texttt{begin} & O(1) & Counter increment + set insert \\
-\texttt{apply} & O(m) & m = footprint size \\
-\texttt{drain\_for\_tx} (radix) & O(n) & n = candidates, 20 passes \\
-\texttt{reserve} per rewrite & O(m) & m = footprint size, O(1) per
-check \\
-\texttt{execute\_parallel} & O(n/w) & n = items, w = workers \\
-\texttt{merge\_deltas} & O(k log k) & k = total ops (sort + dedup) \\
-\texttt{compute\_state\_root} & O(V + E) & V = nodes, E = edges \\
-\texttt{compute\_commit\_hash\_v2} & O(P) & P = parents \\
-\end{longtable}
-}
-
-\begin{tourguide}
-Notice that all operations are either O(1), O(n), or O(n log n)---there's nothing quadratic or exponential lurking here. The system scales linearly with the amount of work, which is essential for predictable performance.
-
-The one potential bottleneck is \texttt{compute\_state\_root} at O(V + E), which traverses the entire reachable graph. For very large graphs, this could become expensive. In practice, graphs are partitioned across warp instances, keeping each traversal manageable.
-\end{tourguide}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix B: Determinism
-Boundaries}\label{appendix-b-determinism-boundaries}
-
-\subsection{Guaranteed Deterministic}\label{guaranteed-deterministic}
-
-\begin{itemize}
-\tightlist
-\item
- Radix sort ordering (20-pass LSD)
-\item
- BTreeMap/BTreeSet iteration
-\item
- BLAKE3 hashing
-\item
- GenSet conflict detection
-\item
- Canonical merge deduplication
-\end{itemize}
-
-\subsection{Intentionally Non-Deterministic (Handled by
-Merge)}\label{intentionally-non-deterministic-handled-by-merge}
-
-\begin{itemize}
-\tightlist
-\item
- Worker execution order in BOAW
-\item
- Shard claim order (atomic counter)
-\end{itemize}
-
-\begin{deepdive}
-\textbf{The Determinism Contract}
-
-Echo's determinism guarantee is: \emph{given the same inputs (intents, rules, initial state), the output (commit hash) is identical across all executions}.
-
-This holds even though:
-\begin{itemize}
-\item Workers execute in arbitrary order
-\item Shards are claimed non-deterministically
-\item Thread scheduling varies between runs
-\end{itemize}
-
-The canonical merge absorbs this non-determinism, producing a deterministic output from non-deterministic intermediate results. It's a beautiful example of ``eventual determinism''---chaos in the middle, order at the end.
-\end{deepdive}
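-
-The contract in miniature, with \texttt{(kind, payload)} pairs standing
-in for real \texttt{WarpOp}s:
-
-\begin{verbatim}
-// Two runs produce ops in different worker orders; canonical
-// sort + dedup yields identical output either way.
-fn canonical_merge(mut ops: Vec<(u8, &str)>) -> Vec<(u8, &str)> {
-    ops.sort();  // canonical sort key
-    ops.dedup(); // drop duplicate ops
-    ops
-}
-
-fn main() {
-    let run_a = vec![(6, "upsert:n1"), (4, "delete:e1"), (6, "upsert:n1")];
-    let run_b = vec![(6, "upsert:n1"), (6, "upsert:n1"), (4, "delete:e1")];
-    // Chaos in the middle, order at the end.
-    assert_eq!(canonical_merge(run_a), canonical_merge(run_b));
-}
-\end{verbatim}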
-
-\subsection{Protocol Constants
-(Frozen)}\label{protocol-constants-frozen}
-
-\begin{itemize}
-\tightlist
-\item
- \texttt{NUM\_SHARDS\ =\ 256}
-\item
- \texttt{SHARD\_MASK\ =\ 255}
-\item
- Shard routing: \texttt{LE\_u64(node\_id{[}0..8{]})\ \&\ 255}
-\item
- Commit hash v2 version tag: \texttt{0x02\ 0x00}
-\end{itemize}
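-
-The routing formula in code (a sketch; the byte values in
-\texttt{main} are arbitrary):
-
-\begin{verbatim}
-const NUM_SHARDS: u64 = 256;
-const SHARD_MASK: u64 = NUM_SHARDS - 1; // 255
-
-// Little-endian u64 of the first 8 id bytes, masked to 256 shards.
-fn shard_for(node_id: &[u8; 32]) -> u64 {
-    let mut prefix = [0u8; 8];
-    prefix.copy_from_slice(&node_id[0..8]);
-    u64::from_le_bytes(prefix) & SHARD_MASK
-}
-
-fn main() {
-    let mut id = [0u8; 32];
-    id[0] = 0x2a; // the low byte dominates under the 255 mask
-    assert_eq!(shard_for(&id), 42);
-    assert!(shard_for(&id) < NUM_SHARDS);
-}
-\end{verbatim}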
-
-\begin{watchout}
-\textbf{Protocol Constants Are Sacred}
-
-These constants are ``frozen''---changing them would break compatibility with existing commits. If you're tempted to tweak \texttt{NUM\_SHARDS} or the shard routing formula, remember: every historical commit was created with these values, and changing them would make replay impossible.
-
-Protocol evolution happens through version tags (like the \texttt{0x02} in commit hash v2), not by modifying existing constants.
-\end{watchout}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\begin{tourguide}
-\textbf{End of Tour}
-
-Thank you for joining me on this journey through Echo's internals! We've seen:
-
-\begin{itemize}
-\item \textbf{Content-addressed everything}: From intents to commits, identity comes from content
-\item \textbf{Deterministic scheduling}: Radix sort + footprints = predictable execution
-\item \textbf{Safe parallelism}: Sharded execution + canonical merge = speed without chaos
-\item \textbf{Cryptographic integrity}: BLAKE3 hashes throughout = verifiable state
-\end{itemize}
-
-Echo is a remarkable piece of engineering---complex enough to solve hard problems, yet built from simple, composable primitives. The code rewards careful study, and I hope these annotations help illuminate the ``why'' behind the ``what.''
-
-Happy hacking!
-\end{tourguide}
-
-\emph{Document generated 2026-01-25. File paths and line numbers
-accurate as of this date. Commentary added by your friendly AI tour guide.}
-
-\backmatter
-\end{document}
diff --git a/docs/archive/study/echo-tour-de-code.md b/docs/archive/study/echo-tour-de-code.md
deleted file mode 100644
index 04c270db..00000000
--- a/docs/archive/study/echo-tour-de-code.md
+++ /dev/null
@@ -1,1355 +0,0 @@
-
-
-
-# Echo: Tour de Code
-
-> **The complete function-by-function trace of Echo's execution pipeline.**
->
-> This document traces EVERY function call involved in processing a user action through the Echo engine.
-> File paths are accurate as of 2026-01-25; line numbers are intentionally omitted to avoid drift.
-
----
-
-## Table of Contents
-
-1. [Intent Ingestion](#1-intent-ingestion)
-2. [Transaction Lifecycle](#2-transaction-lifecycle)
-3. [Rule Matching](#3-rule-matching)
-4. [Scheduler: Drain & Reserve](#4-scheduler-drain--reserve)
-5. [BOAW Parallel Execution](#5-boaw-parallel-execution)
-6. [Delta Merge & State Finalization](#6-delta-merge--state-finalization)
-7. [Hash Computation](#7-hash-computation)
-8. [Commit Orchestration](#8-commit-orchestration)
-9. [Complete Call Graph](#9-complete-call-graph)
-
----
-
-## 1. Intent Ingestion
-
-**Entry Point:** `Engine::ingest_intent()`
-**File:** `crates/warp-core/src/engine_impl.rs`
-
-### 1.1 Function Signature
-
-```rust
-pub fn ingest_intent(&mut self, intent_bytes: &[u8]) -> Result<IngestDisposition, EngineError>
-```
-
-**Returns:**
-
-- `IngestDisposition::Accepted { intent_id: Hash }` — New intent accepted
-- `IngestDisposition::Duplicate { intent_id: Hash }` — Already ingested
-
-### 1.2 Complete Call Trace
-
-```text
-Engine::ingest_intent(intent_bytes: &[u8])
-│
-├─[1] compute_intent_id(intent_bytes) → Hash
-│ FILE: crates/warp-core/src/inbox.rs
-│ CODE:
-│ let mut hasher = blake3::Hasher::new();
-│ hasher.update(b"intent:"); // Domain separation
-│ hasher.update(intent_bytes);
-│ hasher.finalize().into() // → [u8; 32]
-│
-├─[2] NodeId(intent_id)
-│ Creates strongly-typed NodeId from Hash
-│
-├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore>
-│ FILE: crates/warp-core/src/engine_impl.rs
-│ ERROR: EngineError::UnknownWarp if None
-│
-├─[4] Extract root_node_id from self.current_root.local_id
-│
-├─[5] STRUCTURAL NODE CREATION (Idempotent)
-│ ├─ make_node_id("sim") → NodeId
-│ │ FILE: crates/warp-core/src/ident.rs
-│ │ CODE: blake3("node:" || "sim")
-│ │
-│ ├─ make_node_id("sim/inbox") → NodeId
-│ │ CODE: blake3("node:" || "sim/inbox")
-│ │
-│ ├─ make_type_id("sim") → TypeId
-│ │ FILE: crates/warp-core/src/ident.rs
-│ │ CODE: blake3("type:" || "sim")
-│ │
-│ ├─ make_type_id("sim/inbox") → TypeId
-│ ├─ make_type_id("sim/inbox/event") → TypeId
-│ │
-│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty })
-│ │ FILE: crates/warp-core/src/graph.rs
-│ │ CODE: self.nodes.insert(id, record)
-│ │
-│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty })
-│
-├─[6] STRUCTURAL EDGE CREATION
-│ ├─ make_edge_id("edge:root/sim") → EdgeId
-│ │ FILE: crates/warp-core/src/ident.rs
-│ │ CODE: blake3("edge:" || "edge:root/sim")
-│ │
-│ ├─ store.insert_edge(root_id, EdgeRecord { ... })
-│ │ FILE: crates/warp-core/src/graph.rs
-│ │ └─ GraphStore::upsert_edge_record(from, edge)
-│ │ FILE: crates/warp-core/src/graph.rs
-│ │ UPDATES:
-│ │ self.edge_index.insert(edge_id, from)
-│ │ self.edge_to_index.insert(edge_id, to)
-│ │ self.edges_from.entry(from).or_default().push(edge)
-│ │ self.edges_to.entry(to).or_default().push(edge_id)
-│ │
-│ └─ store.insert_edge(sim_id, EdgeRecord { ... }) [sim → inbox]
-│
-├─[7] DUPLICATE DETECTION
-│ store.node(&event_id) → Option<&NodeRecord>
-│ FILE: crates/warp-core/src/graph.rs
-│ CODE: self.nodes.get(id)
-│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id })
-│
-├─[8] EVENT NODE CREATION
-│ store.insert_node(event_id, NodeRecord { ty: event_ty })
-│ NOTE: event_id = intent_id (content-addressed)
-│
-├─[9] INTENT ATTACHMENT
-│ ├─ AtomPayload::new(type_id, bytes)
-│ │ FILE: crates/warp-core/src/attachment.rs
-│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) }
-│ │
-│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload)))
-│ FILE: crates/warp-core/src/graph.rs
-│ CODE: self.node_attachments.insert(id, v)
-│
-├─[10] PENDING EDGE CREATION (Queue Membership)
-│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId
-│ │ FILE: crates/warp-core/src/inbox.rs
-│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id)
-│ │
-│ └─ store.insert_edge(inbox_id, EdgeRecord {
-│ id: pending_edge_id,
-│ from: inbox_id,
-│ to: event_id,
-│ ty: make_type_id("edge:pending")
-│ })
-│
-└─[11] return Ok(IngestDisposition::Accepted { intent_id })
-```
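-
-The domain-separation trick in step [1] can be illustrated with a
-stdlib stand-in hasher (the real code uses BLAKE3, an external crate;
-`content_id` is a hypothetical helper, not engine API):
-
-```rust
-use std::collections::hash_map::DefaultHasher;
-use std::hash::{Hash, Hasher};
-
-// Prefixing the payload with a domain tag keeps ID namespaces disjoint.
-fn content_id(domain: &[u8], bytes: &[u8]) -> u64 {
-    let mut h = DefaultHasher::new();
-    domain.hash(&mut h); // e.g. b"intent:" vs b"node:"
-    bytes.hash(&mut h);
-    h.finish()
-}
-
-fn main() {
-    let payload = b"transfer 10 coins";
-    // Same bytes, different domains -> different IDs, so an intent hash
-    // can never collide with a node or type hash for the same content.
-    assert_ne!(content_id(b"intent:", payload), content_id(b"node:", payload));
-    // Same domain + same bytes -> stable ID (content addressing).
-    assert_eq!(content_id(b"intent:", payload), content_id(b"intent:", payload));
-}
-```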
-
-### 1.3 Data Structures Modified
-
-| Structure | Field | Change |
-| ------------ | ------------------ | ------------------------------------------- |
-| `GraphStore` | `nodes` | +3 entries (sim, inbox, event) |
-| `GraphStore` | `edges_from` | +3 edges (root→sim, sim→inbox, inbox→event) |
-| `GraphStore` | `edges_to` | +3 reverse entries |
-| `GraphStore` | `edge_index` | +3 edge→from mappings |
-| `GraphStore` | `edge_to_index` | +3 edge→to mappings |
-| `GraphStore` | `node_attachments` | +1 (event → intent payload) |
-
----
-
-## 2. Transaction Lifecycle
-
-### 2.1 Begin Transaction
-
-**Entry Point:** `Engine::begin()`
-**File:** `crates/warp-core/src/engine_impl.rs-719`
-
-```rust
-pub fn begin(&mut self) -> TxId {
- self.tx_counter = self.tx_counter.wrapping_add(1); // Line 713
- if self.tx_counter == 0 {
- self.tx_counter = 1; // Line 715: Zero is reserved
- }
- self.live_txs.insert(self.tx_counter); // Line 717
- TxId::from_raw(self.tx_counter) // Line 718
-}
-```
-
-**Call Trace:**
-
-```text
-Engine::begin()
-│
-├─ self.tx_counter.wrapping_add(1)
-│ Rust std: u64::wrapping_add
-│ Handles u64::MAX → 0 overflow
-│
-├─ if self.tx_counter == 0: self.tx_counter = 1
-│ INVARIANT: TxId(0) is reserved as invalid
-│
-├─ self.live_txs.insert(self.tx_counter)
-│ TYPE: HashSet<u64>
-│ Registers transaction as active
-│
-└─ TxId::from_raw(self.tx_counter)
- FILE: crates/warp-core/src/tx.rs
- CODE: pub const fn from_raw(value: u64) -> Self { Self(value) }
- TYPE: #[repr(transparent)] struct TxId(u64)
-```
-
-**State Changes:**
-
-- `tx_counter`: N → N+1 (or 1 if wrapped)
-- `live_txs`: Insert new counter value
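
The wrap-and-skip-zero behavior can be exercised in isolation. A minimal sketch (the free function `next_tx` is an illustrative stand-in for the `begin()` body, not an engine API):

```rust
// Stand-in for Engine::begin()'s counter advance: wrap on overflow,
// then skip 0 because TxId(0) is reserved as invalid.
fn next_tx(counter: &mut u64) -> u64 {
    *counter = counter.wrapping_add(1);
    if *counter == 0 {
        *counter = 1; // zero is reserved
    }
    *counter
}

fn main() {
    let mut c = u64::MAX;
    assert_eq!(next_tx(&mut c), 1); // u64::MAX wraps to 0, which is skipped
    assert_eq!(next_tx(&mut c), 2);
    println!("ok");
}
```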
-
-### 2.2 Abort Transaction
-
-**Entry Point:** `Engine::abort()`
-**File:** `crates/warp-core/src/engine_impl.rs-968`
-
-```rust
-pub fn abort(&mut self, tx: TxId) {
- self.live_txs.remove(&tx.value());
- self.scheduler.finalize_tx(tx);
- self.bus.clear();
- self.last_materialization.clear();
- self.last_materialization_errors.clear();
-}
-```
-
----
-
-## 3. Rule Matching
-
-**Entry Point:** `Engine::apply()`
-**File:** `crates/warp-core/src/engine_impl.rs-737`
-
-### 3.1 Function Signature
-
-```rust
-pub fn apply(
- &mut self,
- tx: TxId,
- rule_name: &str,
- scope: &NodeId,
-) -> Result<ApplyResult, EngineError>
-```
-
-### 3.2 Complete Call Trace
-
-```text
-Engine::apply(tx, rule_name, scope)
-│
-└─ Engine::apply_in_warp(tx, self.current_root.warp_id, rule_name, scope, &[])
- FILE: crates/warp-core/src/engine_impl.rs
- │
- ├─[1] TRANSACTION VALIDATION
- │ CODE: if tx.value() == 0 || !self.live_txs.contains(&tx.value())
- │ ERROR: EngineError::UnknownTx
- │
- ├─[2] RULE LOOKUP
- │ self.rules.get(rule_name) → Option<&RewriteRule>
- │ TYPE: HashMap<&'static str, RewriteRule>
- │ ERROR: EngineError::UnknownRule(rule_name.to_owned())
- │
- ├─[3] STORE LOOKUP
- │ self.state.store(&warp_id) → Option<&GraphStore>
- │ ERROR: EngineError::UnknownWarp(warp_id)
- │
- ├─[4] CREATE GRAPHVIEW
- │ GraphView::new(store) → GraphView<'_>
- │ FILE: crates/warp-core/src/graph_view.rs
- │ TYPE: Read-only wrapper (Copy, 8 bytes)
- │
- ├─[5] CALL MATCHER
- │ (rule.matcher)(view, scope) → bool
- │ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool
- │ FILE: crates/warp-core/src/rule.rs
- │ IF false: return Ok(ApplyResult::NoMatch)
- │
- ├─[6] CREATE SCOPE KEY
- │ let scope_key = NodeKey { warp_id, local_id: *scope }
- │
- ├─[7] COMPUTE SCOPE HASH
- │ scope_hash(&rule.id, &scope_key) → Hash
- │ FILE: crates/warp-core/src/engine_impl.rs
- │ CODE:
- │ let mut hasher = Hasher::new();
- │ hasher.update(rule_id); // 32 bytes
- │ hasher.update(scope.warp_id.as_bytes()); // 32 bytes
- │ hasher.update(scope.local_id.as_bytes()); // 32 bytes
- │ hasher.finalize().into()
- │
- ├─[8] COMPUTE FOOTPRINT
- │ (rule.compute_footprint)(view, scope) → Footprint
- │ TYPE: FootprintFn = for<'a> fn(GraphView<'a>, &NodeId) -> Footprint
- │ FILE: crates/warp-core/src/rule.rs
- │ RETURNS:
- │ Footprint {
- │ n_read: IdSet, // Nodes read
- │ n_write: IdSet, // Nodes written
- │ e_read: IdSet, // Edges read
- │ e_write: IdSet, // Edges written
- │ a_read: AttachmentSet, // Attachments read
- │ a_write: AttachmentSet, // Attachments written
- │ b_in: PortSet, // Input ports
- │ b_out: PortSet, // Output ports
- │ factor_mask: u64, // O(1) prefilter
- │ }
- │
- ├─[9] AUGMENT FOOTPRINT WITH DESCENT STACK
- │ for key in descent_stack:
- │ footprint.a_read.insert(*key)
- │ FILE: crates/warp-core/src/footprint.rs
- │ PURPOSE: Stage B1 law - READs of all descent chain slots
- │
- ├─[10] COMPACT RULE ID LOOKUP
- │ self.compact_rule_ids.get(&rule.id) → Option<&CompactRuleId>
-│ TYPE: HashMap<Hash, CompactRuleId>
- │ ERROR: EngineError::InternalCorruption
- │
- └─[11] ENQUEUE TO SCHEDULER
- self.scheduler.enqueue(tx, PendingRewrite { ... })
- │
- └─ DeterministicScheduler::enqueue(tx, rewrite)
- FILE: crates/warp-core/src/scheduler.rs
- │
- └─ RadixScheduler::enqueue(tx, rewrite)
- FILE: crates/warp-core/src/scheduler.rs
- CODE:
- let txq = self.pending.entry(tx).or_default();
- txq.enqueue(rewrite.scope_hash, rewrite.compact_rule.0, rewrite);
- │
- └─ PendingTx::enqueue(scope_be32, rule_id, payload)
- FILE: crates/warp-core/src/scheduler.rs
-
- CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS
- index.get(&key) → Some(&i)
- fat[thin[i].handle] = Some(payload) // Overwrite
- thin[i].nonce = next_nonce++ // Refresh nonce
-
- CASE 2: New entry
- fat.push(Some(payload))
- thin.push(RewriteThin { scope_be32, rule_id, nonce, handle })
- index.insert(key, thin.len() - 1)
-```
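
The last-wins dedup in `PendingTx::enqueue` can be sketched with simplified types: a `u64` key stands in for the `(scope_hash, rule_id)` pair and a string stands in for the fat payload. `PendingQueue` is illustrative, not the real type:

```rust
use std::collections::HashMap;

// Simplified model of PendingTx: thin records, fat payloads, and an index
// mapping each key to its thin slot for last-wins overwrites.
struct PendingQueue {
    thin: Vec<(u64, u32, usize)>, // (key, nonce, handle)
    fat: Vec<Option<&'static str>>,
    index: HashMap<u64, usize>,
    next_nonce: u32,
}

impl PendingQueue {
    fn new() -> Self {
        Self { thin: Vec::new(), fat: Vec::new(), index: HashMap::new(), next_nonce: 0 }
    }

    fn enqueue(&mut self, key: u64, payload: &'static str) {
        if let Some(&i) = self.index.get(&key) {
            // CASE 1: duplicate key — overwrite payload, refresh nonce (last wins)
            let handle = self.thin[i].2;
            self.fat[handle] = Some(payload);
            self.thin[i].1 = self.next_nonce;
        } else {
            // CASE 2: new entry — push payload, record thin entry and index slot
            self.fat.push(Some(payload));
            self.thin.push((key, self.next_nonce, self.fat.len() - 1));
            self.index.insert(key, self.thin.len() - 1);
        }
        self.next_nonce += 1;
    }
}

fn main() {
    let mut q = PendingQueue::new();
    q.enqueue(7, "first");
    q.enqueue(7, "second"); // same (scope, rule) key: overwrites in place
    assert_eq!(q.thin.len(), 1);            // still one entry
    assert_eq!(q.fat[q.thin[0].2], Some("second"));
    assert_eq!(q.thin[0].1, 1);             // nonce refreshed on overwrite
    println!("ok");
}
```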
-
-### 3.3 PendingRewrite Structure
-
-**File:** `crates/warp-core/src/scheduler.rs-82`
-
-```rust
-pub(crate) struct PendingRewrite {
- pub rule_id: Hash, // 32-byte rule identifier
- pub compact_rule: CompactRuleId, // u32 hot-path handle
- pub scope_hash: Hash, // 32-byte ordering key
- pub scope: NodeKey, // { warp_id, local_id }
- pub footprint: Footprint, // Read/write declaration
- pub phase: RewritePhase, // State machine: Matched → Reserved → ...
-}
-```
-
----
-
-## 4. Scheduler: Drain & Reserve
-
-### 4.1 Drain Phase (Radix Sort)
-
-**Entry Point:** `RadixScheduler::drain_for_tx()`
-**File:** `crates/warp-core/src/scheduler.rs-113`
-
-```rust
-pub(crate) fn drain_for_tx(&mut self, tx: TxId) -> Vec<PendingRewrite> {
- self.pending
- .remove(&tx)
- .map_or_else(Vec::new, |mut txq| txq.drain_in_order())
-}
-```
-
-**Complete Call Trace:**
-
-```text
-RadixScheduler::drain_for_tx(tx)
-│
-├─ self.pending.remove(&tx) → Option<PendingTx>
-│
-└─ PendingTx::drain_in_order()
- FILE: crates/warp-core/src/scheduler.rs
- │
- ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)?
- │ ├─ YES: sort_unstable_by(cmp_thin)
- │ │ Rust std comparison sort
- │ │
- │ └─ NO: radix_sort()
- │ FILE: crates/warp-core/src/scheduler.rs
- │
- └─ radix_sort()
- │
- ├─ Initialize scratch buffer: self.scratch.resize(n, default)
- │
- ├─ Lazy allocate histogram: self.counts16 = vec![0u32; 65536]
- │
- └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══
- │
- ├─ SELECT src/dst buffers (ping-pong)
- │ flip = false: src=thin, dst=scratch
- │ flip = true: src=scratch, dst=thin
- │
- ├─ PHASE 1: COUNT BUCKETS
- │ FOR r IN src:
- │ b = bucket16(r, pass)
- │ counts[b] += 1
- │
- ├─ PHASE 2: PREFIX SUMS
- │ sum = 0
- │ FOR c IN counts:
- │ t = *c
- │ *c = sum
- │ sum += t
- │
- ├─ PHASE 3: STABLE SCATTER
- │ FOR r IN src:
- │ b = bucket16(r, pass)
- │ dst[counts[b]] = r
- │ counts[b] += 1
- │
- └─ flip = !flip
-
-BUCKET EXTRACTION (bucket16):
-FILE: crates/warp-core/src/scheduler.rs
-
-Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2]
-Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4]
-Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2]
-Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4]
-Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32]
-Pass 5: u16_be_from_pair32(scope, 14) // Scope bytes [28:30]
-...
-Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD)
-
-SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic
-```
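
The three phases of a single pass can be sketched on plain `u32` keys (low 16 bits first, then high 16 bits). This is a sketch only: the real code ping-pongs the `thin`/`scratch` buffers and reuses one `counts16` allocation across all 20 passes:

```rust
// One LSD radix pass over 16-bit digits: count, exclusive prefix sum,
// stable scatter. Sorting u32 keys here stands in for RewriteThin records.
fn lsd_pass(src: &[u32], shift: u32) -> Vec<u32> {
    let mut counts = vec![0u32; 1 << 16];
    for &k in src {
        counts[((k >> shift) & 0xFFFF) as usize] += 1; // PHASE 1: count buckets
    }
    let mut sum = 0u32;
    for c in counts.iter_mut() {
        let t = *c;
        *c = sum; // PHASE 2: exclusive prefix sum → first write index per bucket
        sum += t;
    }
    let mut dst = vec![0u32; src.len()];
    for &k in src {
        let b = ((k >> shift) & 0xFFFF) as usize;
        dst[counts[b] as usize] = k; // PHASE 3: stable scatter in source order
        counts[b] += 1;
    }
    dst
}

fn main() {
    let v = vec![0x0003_0002u32, 0x0001_0001, 0x0002_0002];
    let pass0 = lsd_pass(&v, 0);        // sort by low 16 bits (LSD first)
    let sorted = lsd_pass(&pass0, 16);  // then high 16 bits; stability preserves pass 0
    assert_eq!(sorted, vec![0x0001_0001, 0x0002_0002, 0x0003_0002]);
    println!("ok");
}
```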
-
-### 4.2 Reserve Phase (Independence Check)
-
-**Entry Point:** `RadixScheduler::reserve()`
-**File:** `crates/warp-core/src/scheduler.rs-143`
-
-```rust
-pub(crate) fn reserve(&mut self, tx: TxId, pr: &mut PendingRewrite) -> bool {
- let active = self.active.entry(tx).or_insert_with(ActiveFootprints::new);
- if Self::has_conflict(active, pr) {
- return Self::on_conflict(pr);
- }
- Self::mark_all(active, pr);
- Self::on_reserved(pr)
-}
-```
-
-**Complete Call Trace:**
-
-```text
-RadixScheduler::reserve(tx, pr)
-│
-├─ self.active.entry(tx).or_insert_with(ActiveFootprints::new)
-│ TYPE: HashMap<TxId, ActiveFootprints>
-│ ActiveFootprints contains 7 GenSets:
-│ - nodes_written: GenSet
-│ - nodes_read: GenSet
-│ - edges_written: GenSet
-│ - edges_read: GenSet
-│ - attachments_written: GenSet
-│ - attachments_read: GenSet
-│ - ports: GenSet
-│
-├─ has_conflict(active, pr) → bool
-│ FILE: crates/warp-core/src/scheduler.rs
-│ │
-│ ├─ FOR node IN pr.footprint.n_write:
-│ │ IF active.nodes_written.contains(node): return true // W-W conflict
-│ │ IF active.nodes_read.contains(node): return true // W-R conflict
-│ │
-│ ├─ FOR node IN pr.footprint.n_read:
-│ │ IF active.nodes_written.contains(node): return true // R-W conflict
-│ │ (R-R is allowed)
-│ │
-│ ├─ FOR edge IN pr.footprint.e_write:
-│ │ IF active.edges_written.contains(edge): return true
-│ │ IF active.edges_read.contains(edge): return true
-│ │
-│ ├─ FOR edge IN pr.footprint.e_read:
-│ │ IF active.edges_written.contains(edge): return true
-│ │
-│ ├─ FOR key IN pr.footprint.a_write:
-│ │ IF active.attachments_written.contains(key): return true
-│ │ IF active.attachments_read.contains(key): return true
-│ │
-│ ├─ FOR key IN pr.footprint.a_read:
-│ │ IF active.attachments_written.contains(key): return true
-│ │
-│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out:
-│ IF active.ports.contains(port): return true
-│
-├─ IF conflict:
-│ └─ on_conflict(pr)
-│ FILE: crates/warp-core/src/scheduler.rs
-│ pr.phase = RewritePhase::Aborted
-│ return false
-│
-├─ mark_all(active, pr)
-│ FILE: crates/warp-core/src/scheduler.rs
-│ │
-│ ├─ FOR node IN pr.footprint.n_write:
-│ │ active.nodes_written.mark(NodeKey { warp_id, local_id: node })
-│ │
-│ ├─ FOR node IN pr.footprint.n_read:
-│ │ active.nodes_read.mark(NodeKey { ... })
-│ │
-│ ├─ FOR edge IN pr.footprint.e_write:
-│ │ active.edges_written.mark(EdgeKey { ... })
-│ │
-│ ├─ FOR edge IN pr.footprint.e_read:
-│ │ active.edges_read.mark(EdgeKey { ... })
-│ │
-│ ├─ FOR key IN pr.footprint.a_write:
-│ │ active.attachments_written.mark(key)
-│ │
-│ ├─ FOR key IN pr.footprint.a_read:
-│ │ active.attachments_read.mark(key)
-│ │
-│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out:
-│ active.ports.mark(port)
-│
-└─ on_reserved(pr)
- FILE: crates/warp-core/src/scheduler.rs
- pr.phase = RewritePhase::Reserved
- return true
-```
-
-### 4.3 GenSet: O(1) Conflict Detection
-
-**File:** `crates/warp-core/src/scheduler.rs-535`
-
-```rust
-pub(crate) struct GenSet<K> {
-    gen: u32,                // Current generation
-    seen: FxHashMap<K, u32>, // Key → generation when marked
-}
-
-impl<K: Copy + Eq + std::hash::Hash> GenSet<K> {
-    #[inline]
-    pub fn contains(&self, key: K) -> bool {
-        matches!(self.seen.get(&key), Some(&g) if g == self.gen)
-    }
-
-    #[inline]
-    pub fn mark(&mut self, key: K) {
-        self.seen.insert(key, self.gen);
-    }
-}
-```
-
-**Key Insight:** No clearing needed between transactions. Increment `gen` → all old entries become stale.
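
A minimal sketch of the generation trick, using std's `HashMap` in place of `FxHashMap`; the `advance_gen` name is illustrative (the real reset hook is not shown in this section):

```rust
use std::collections::HashMap;

// Generation-stamped set: bumping `gen` invalidates every prior mark in
// O(1), so no per-transaction clearing pass is needed.
struct GenSet<K> {
    gen: u32,
    seen: HashMap<K, u32>,
}

impl<K: Copy + Eq + std::hash::Hash> GenSet<K> {
    fn new() -> Self {
        Self { gen: 0, seen: HashMap::new() }
    }
    fn contains(&self, key: K) -> bool {
        matches!(self.seen.get(&key), Some(&g) if g == self.gen)
    }
    fn mark(&mut self, key: K) {
        self.seen.insert(key, self.gen);
    }
    fn advance_gen(&mut self) {
        self.gen += 1; // O(1) "clear": every older mark is now stale
    }
}

fn main() {
    let mut set: GenSet<u64> = GenSet::new();
    set.mark(42);
    assert!(set.contains(42));
    set.advance_gen(); // next transaction begins
    assert!(!set.contains(42)); // old mark is stale without any clearing pass
    println!("ok");
}
```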
-
----
-
-## 5. BOAW Parallel Execution
-
-**Entry Point:** `execute_parallel()`
-**File:** `crates/warp-core/src/boaw/exec.rs-83`
-
-### 5.1 Entry Point
-
-```rust
-pub fn execute_parallel(view: GraphView<'_>, items: &[ExecItem], workers: usize) -> Vec<TickDelta> {
- assert!(workers >= 1);
- let capped_workers = workers.min(NUM_SHARDS); // Cap at 256
-
- #[cfg(feature = "parallel-stride-fallback")]
- if std::env::var("ECHO_PARALLEL_STRIDE").is_ok() {
- return execute_parallel_stride(view, items, capped_workers);
- }
-
- execute_parallel_sharded(view, items, capped_workers) // DEFAULT
-}
-```
-
-### 5.2 Complete Call Trace
-
-```text
-execute_parallel(view, items, workers)
-│
-└─ execute_parallel_sharded(view, items, capped_workers)
- FILE: crates/warp-core/src/boaw/exec.rs
- │
- ├─ IF items.is_empty():
- │ return (0..workers).map(|_| TickDelta::new()).collect()
- │
-├─ partition_into_shards(items.to_vec()) → Vec<VirtualShard>
- │ FILE: crates/warp-core/src/boaw/shard.rs
- │ │
- │ ├─ Create 256 empty VirtualShard structures
- │ │
- │ └─ FOR item IN items:
- │ │
- │ ├─ shard_of(&item.scope) → usize
- │ │ FILE: crates/warp-core/src/boaw/shard.rs
- │ │ CODE:
- │ │ let bytes = scope.as_bytes();
-│ │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap();
- │ │ let val = u64::from_le_bytes(first_8);
- │ │ (val & 255) as usize // SHARD_MASK = 255
- │ │
- │ └─ shards[shard_id].items.push(item)
- │
- ├─ let next_shard = AtomicUsize::new(0)
- │
- └─ std::thread::scope(|s| { ... })
- FILE: Rust std (scoped threads)
- │
- ├─ FOR _ IN 0..workers:
- │ │
- │ └─ s.spawn(move || { ... }) // ═══ WORKER THREAD ═══
- │ │
- │ ├─ let mut delta = TickDelta::new()
- │ │ FILE: crates/warp-core/src/tick_delta.rs
- │ │ CREATES: { ops: Vec::new(), origins: Vec::new() }
- │ │
- │ └─ LOOP: // Work-stealing loop
- │ │
- │ ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed)
- │ │ ATOMIC: Returns old value, increments counter
- │ │ ORDERING: Relaxed (no synchronization cost)
- │ │
- │ ├─ IF shard_id >= 256: break
- │ │
- │ └─ FOR item IN &shards[shard_id].items:
- │ │
- │ ├─ let mut scoped = delta.scoped(item.origin)
- │ │ FILE: crates/warp-core/src/tick_delta.rs
- │ │ CREATES: ScopedDelta { inner: &mut delta, origin, next_op_ix: 0 }
- │ │
- │ └─ (item.exec)(view, &item.scope, scoped.inner_mut())
- │ │
- │ └─ INSIDE EXECUTOR:
- │ scoped.emit(op)
- │ FILE: crates/warp-core/src/tick_delta.rs
- │ CODE:
- │ origin.op_ix = self.next_op_ix;
- │ self.next_op_ix += 1;
- │ self.inner.emit_with_origin(op, origin);
- │ │
- │ └─ TickDelta::emit_with_origin(op, origin)
- │ FILE: crates/warp-core/src/tick_delta.rs
- │ CODE:
- │ self.ops.push(op);
- │ self.origins.push(origin); // if delta_validate
- │
- └─ COLLECT THREADS:
- handles.into_iter().map(|h| h.join()).collect()
-RETURNS: Vec<TickDelta> (one per worker)
-```
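
The claim-a-shard protocol of the work-stealing loop can be reproduced standalone (counters only, no deltas; `WORKERS` and the claim tally are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    const NUM_SHARDS: usize = 256;
    const WORKERS: usize = 4;
    let next_shard = AtomicUsize::new(0);
    let claimed = AtomicUsize::new(0);

    std::thread::scope(|s| {
        for _ in 0..WORKERS {
            s.spawn(|| loop {
                // Atomically claim the next unprocessed shard index.
                let shard_id = next_shard.fetch_add(1, Ordering::Relaxed);
                if shard_id >= NUM_SHARDS {
                    break; // all shards taken
                }
                claimed.fetch_add(1, Ordering::Relaxed);
            });
        }
    });

    // Each of the 256 shards was claimed exactly once across all workers.
    assert_eq!(claimed.load(Ordering::Relaxed), NUM_SHARDS);
    println!("ok");
}
```

`fetch_add` hands each worker a unique index, so no shard is processed twice even though workers race on the counter; Relaxed ordering suffices because the shard vector itself is read-only during the loop.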
-
-### 5.3 Enforced Execution Path
-
-**Entry Point:** `execute_item_enforced()`
-**File:** `crates/warp-core/src/boaw/exec.rs`
-
-When footprint enforcement is active, each item is executed via `execute_item_enforced()` instead of a bare function-pointer call. Read access is enforced in-line by `GraphView`/`FootprintGuard` while the executor runs inside `catch_unwind`, and post-hoc `check_op()` validation is applied to newly-emitted ops.
-
-**Signature (anchor):**
-
-```rust
-fn execute_item_enforced(
- store: &GraphStore,
- item: &ExecItem,
- idx: usize,
- unit: &WorkUnit,
- delta: TickDelta,
-) -> Result<TickDelta, PoisonedDelta>
-```
-
-**Guard Check (anchor):**
-**File:** `crates/warp-core/src/footprint_guard.rs`
-
-```rust
-impl FootprintGuard {
- pub(crate) fn check_op(&self, op: &WarpOp)
-}
-```
-
-```text
-execute_item_enforced(store, item, idx, unit, delta)
-│
-├─ guard = unit.guards[idx]
-├─ view = GraphView::new_guarded(store, guard)
-│
-├─ ops_before = delta.len()
-│ Snapshot the op count BEFORE the executor runs
-│
-├─ let mut scoped = delta.scoped(item.origin)
-│ Wrap delta with origin tracking (mutable binding required)
-│
-├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| {
-│ (item.exec)(view, &item.scope, scoped.inner_mut())
-│ }))
-│ Pass the inner mutable accessor to the executor, not the scoped wrapper
-│
-├─ FOR op IN delta.ops_ref()[ops_before..]:
-│ guard.check_op(op) → panic_any(FootprintViolation)
-│ Validates that each newly-emitted op falls within the declared footprint.
-│ ExecItemKind::System items may emit warp-instance-level ops;
-│ ExecItemKind::User items may not.
-│
-└─ OUTCOME PRECEDENCE:
- ├─ IF check_op fails:
- │ return Err(PoisonedDelta)
- │ Write violations OVERRIDE executor panics — violation takes precedence.
- │
- ├─ IF footprint is clean BUT executor panicked:
- │ return Err(PoisonedDelta)
- │ The original panic propagates to the caller.
- │
- └─ IF both clean:
- return Ok(delta)
-```
-
-**Poison Safety (type-level):** `execute_item_enforced` returns `Result<TickDelta, PoisonedDelta>`,
-and `merge_deltas` consumes `Vec<Result<TickDelta, PoisonedDelta>>`. Poisoned deltas are never
-merged or committed; they are dropped and their panic payload is re-thrown at the engine layer.
-
-#### 5.3.1 Cross-Warp Enforcement Policy
-
-`check_op()` rejects cross-warp writes: any op must target the executor’s `scope.warp_id`. Violations
-surface as `FootprintViolation` with `ViolationKind::CrossWarpEmission`. Exception: `ExecItemKind::System` may emit
-warp-instance-level ops (`OpenPortal`, `UpsertWarpInstance`, `DeleteWarpInstance`) for authorized
-instance lifecycle changes. **TODO (Phase 7):** allow portal-based cross-warp permissions with
-explicit footprint allowlists.
-
-**Warp-instance-level ops:** Operations that modify multiverse topology (e.g., `OpenPortal`,
-`UpsertWarpInstance`, `DeleteWarpInstance` from Section 6.2). They are enforced via `ExecItemKind`:
-`User` items attempting these ops produce a `FootprintViolation` with
-`ViolationKind::UnauthorizedInstanceOp`. There are no additional op categories beyond
-warp-instance-level vs normal graph ops.
-
-**Panic Recovery & Tick Semantics:** Worker threads run under `std::thread::scope`. A panic or
-`FootprintViolation` from `execute_item_enforced` produces a poisoned `TickDelta` that is never
-merged; `execute_parallel` propagates the panic when the worker results are joined. Any worker
-panic aborts the parallel execution. The caller observes the panic, the tick does not commit, and
-any partial delta stays on the worker stack and is dropped. Callers that catch the panic should
-invoke `Engine::abort` to roll back the transaction.
-
-### 5.4 ExecItem Structure
-
-**File:** `crates/warp-core/src/boaw/exec.rs-35`
-
-```rust
-#[derive(Clone, Copy)]
-pub struct ExecItem {
- pub exec: ExecuteFn, // fn(GraphView, &NodeId, &mut TickDelta)
- pub scope: NodeId, // 32-byte node identifier
- pub origin: OpOrigin, // { intent_id, rule_id, match_ix, op_ix }
-
- // Private field, present only in enforcement builds:
- #[cfg(any(debug_assertions, feature = "footprint_enforce_release"))]
- #[cfg(not(feature = "unsafe_graph"))]
- kind: ExecItemKind,
-}
-```
-
-**`ExecItemKind` (cfg-gated):**
-
-**Enum (anchor):**
-
-```rust
-enum ExecItemKind {
- User,
- System,
-}
-```
-
-- `ExecItemKind::User` — Normal rule executor. May emit node/edge/attachment ops scoped to the declared footprint. Cannot emit warp-instance-level ops (`UpsertWarpInstance`, `DeleteWarpInstance`, `OpenPortal`).
-- `ExecItemKind::System` — Internal-only executor (e.g., portal opening). May emit warp-instance-level ops.
-
-`ExecItem::new()` always creates `User` items. System items are constructed only by internal engine
-code via `ExecItem::new_system(exec: ExecuteFn, scope: NodeId, origin: OpOrigin)` when a rule is
-registered as `is_system`. The constructor is only compiled when
-`debug_assertions || footprint_enforce_release` (and not `unsafe_graph`), so plain release builds
-fall back to `ExecItem::new()` even for system rules.
-
-**The triple cfg-gate pattern:** The `kind` field (and all enforcement logic) is guarded by:
-
-1. `#[cfg(any(debug_assertions, feature = "footprint_enforce_release"))]` — active in debug builds or when the release enforcement feature is opted-in.
-2. `#[cfg(not(feature = "unsafe_graph"))]` — disabled when the escape-hatch feature is set (for benchmarks/fuzzing that intentionally bypass checks).
-
-This means enforcement is always-on in dev/test, opt-in for release, and explicitly removable for
-unsafe experimentation. A compile-time guard in `lib.rs` rejects builds that enable both
-`footprint_enforce_release` and `unsafe_graph`.
-
-### 5.5 Thread Safety
-
-| Type | Safety | Reason |
-| ------------- | --------------------- | ----------------------------------- |
-| `GraphView` | `Sync + Send + Clone` | Read-only snapshot |
-| `ExecItem` | `Sync + Send + Copy` | Function pointer + primitives |
-| `TickDelta` | Per-worker exclusive | Poisoned deltas must be discarded |
-| `AtomicUsize` | Lock-free | `fetch_add` with `Relaxed` ordering |
-
-**Note:** `ExecItem` stays `Copy` because `ExecItemKind` is `Copy` when present; the cfg-gated
-field does not change its `Send`/`Sync` bounds.
-
----
-
-## 6. Delta Merge & State Finalization
-
-### 6.1 Canonical Merge
-
-**Entry Point:** `merge_deltas()`
-**File:** `crates/warp-core/src/boaw/merge.rs-75`
-
-```text
-merge_deltas(deltas: Vec<Result<TickDelta, PoisonedDelta>>) → Result<Vec<WarpOp>, MergeError>
-│
-├─[1] FLATTEN ALL OPS WITH ORIGINS
-│ let mut flat: Vec<(WarpOpKey, OpOrigin, WarpOp)> = Vec::new();
-│ FOR d IN deltas:
-│ IF d is Err(PoisonedDelta): return Err(MergeError::PoisonedDelta)
-│ let (ops, origins) = d.into_parts_unsorted();
-│ FOR (op, origin) IN ops.zip(origins):
-│ flat.push((op.sort_key(), origin, op));
-│
-├─[2] CANONICAL SORT
-│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1)));
-│ ORDER: (WarpOpKey, OpOrigin) lexicographic
-│
-└─[3] DEDUPE & CONFLICT DETECTION
- let mut out = Vec::new();
- let mut i = 0;
- WHILE i < flat.len():
- │
- ├─ GROUP by WarpOpKey
- │ key = flat[i].0
- │ start = i
- │ WHILE i < flat.len() && flat[i].0 == key: i++
- │
- ├─ CHECK if all ops identical
- │ first = &flat[start].2
- │ all_same = flat[start+1..i].iter().all(|(_, _, op)| op == first)
- │
- └─ IF all_same:
- out.push(first.clone()) // Accept one copy
- ELSE:
- writers = flat[start..i].iter().map(|(_, o, _)| *o).collect()
- return Err(MergeError::Conflict(Box::new(MergeConflict { key, writers }))) // CONFLICT!
-
- return Ok(out)
-```
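
Step [3] can be sketched with integer keys and string ops standing in for `WarpOpKey`/`WarpOp` (the `dedupe` helper is illustrative):

```rust
// Group sorted (key, op) pairs by key; identical duplicate writes collapse
// to one op, divergent writes to the same key are a merge conflict.
fn dedupe(mut flat: Vec<(u32, &'static str)>) -> Result<Vec<&'static str>, u32> {
    flat.sort(); // canonical sort: key first
    let mut out = Vec::new();
    let mut i = 0;
    while i < flat.len() {
        let key = flat[i].0;
        let start = i;
        while i < flat.len() && flat[i].0 == key {
            i += 1;
        }
        let first = flat[start].1;
        if flat[start + 1..i].iter().all(|&(_, op)| op == first) {
            out.push(first); // accept one copy of identical writes
        } else {
            return Err(key); // divergent writes: conflict
        }
    }
    Ok(out)
}

fn main() {
    // Two workers emitting the same op for key 1 is fine; key order is canonical.
    assert_eq!(dedupe(vec![(1, "a"), (2, "b"), (1, "a")]), Ok(vec!["a", "b"]));
    // Two different ops for the same key is a conflict.
    assert_eq!(dedupe(vec![(1, "a"), (1, "c")]), Err(1));
    println!("ok");
}
```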
-
-### 6.2 WarpOp Sort Key
-
-**File:** `crates/warp-core/src/tick_patch.rs-287`
-
-```rust
-pub(crate) fn sort_key(&self) -> WarpOpKey {
- match self {
- Self::OpenPortal { .. } => WarpOpKey { kind: 1, ... },
- Self::UpsertWarpInstance { .. } => WarpOpKey { kind: 2, ... },
- Self::DeleteWarpInstance { .. } => WarpOpKey { kind: 3, ... },
- Self::DeleteEdge { .. } => WarpOpKey { kind: 4, ... }, // Delete before upsert
- Self::DeleteNode { .. } => WarpOpKey { kind: 5, ... },
- Self::UpsertNode { .. } => WarpOpKey { kind: 6, ... },
- Self::UpsertEdge { .. } => WarpOpKey { kind: 7, ... },
- Self::SetAttachment { .. } => WarpOpKey { kind: 8, ... }, // Last
- }
-}
-```
-
-**Canonical Order:**
-
-1. OpenPortal (creates child instances)
-2. UpsertWarpInstance
-3. DeleteWarpInstance
-4. DeleteEdge (delete before upsert)
-5. DeleteNode (delete before upsert)
-6. UpsertNode
-7. UpsertEdge
-8. SetAttachment (after skeleton exists)
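
The effect of sorting by the numeric `kind` key can be checked with a plain tuple sort (the names here are just labels):

```rust
// Sorting on (kind, name) tuples: the numeric kind dominates, so deletes
// (4, 5) land before upserts (6, 7) and attachments (8) come last.
fn main() {
    let mut ops = vec![
        (8, "SetAttachment"),
        (6, "UpsertNode"),
        (4, "DeleteEdge"),
        (7, "UpsertEdge"),
        (5, "DeleteNode"),
    ];
    ops.sort(); // lexicographic tuple order; kind is compared first
    let order: Vec<&str> = ops.iter().map(|&(_, n)| n).collect();
    assert_eq!(
        order,
        vec!["DeleteEdge", "DeleteNode", "UpsertNode", "UpsertEdge", "SetAttachment"]
    );
    println!("ok");
}
```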
-
-### 6.3 State Mutation Methods
-
-**File:** `crates/warp-core/src/graph.rs`
-
-```text
-GraphStore::insert_node(id, record)
- LINE: 175-177
- CODE: self.nodes.insert(id, record)
-
-GraphStore::upsert_edge_record(from, edge)
- LINE: 196-261
- UPDATES:
- - self.edge_index.insert(edge_id, from)
- - self.edge_to_index.insert(edge_id, to)
- - Remove old edge from its previous bucket if one exists
- - self.edges_from.entry(from).or_default().push(edge)
- - self.edges_to.entry(to).or_default().push(edge_id)
-
-GraphStore::delete_node_isolated(node) -> Result<(), DeleteNodeError>
- LINE: 393-418
- REJECTS if node has incident edges (no cascade!)
- ALLOWED MINI-CASCADE:
- - Remove from self.nodes
- - Remove node alpha attachment (key is derivable)
-
- > NOTE: `delete_node_cascade` still exists but is INTERNAL.
- > WarpOp::DeleteNode uses `delete_node_isolated` to ensure
- > all mutations are explicit in the delta.
-
-GraphStore::delete_edge_exact(from, edge_id)
- LINE: 360-412
- VALIDATES: edge is in correct "from" bucket
- REMOVES:
- - From edges_from bucket
- - From edge_index
- - From edge_to_index
- - From edges_to bucket
- - Edge attachment
-
-GraphStore::set_node_attachment(id, value)
- LINE: 125-134
- CODE:
- None → self.node_attachments.remove(&id)
- Some(v) → self.node_attachments.insert(id, v)
-
-GraphStore::set_edge_attachment(id, value)
- LINE: 163-172
- Same pattern as node attachments
-```
-
----
-
-## 7. Hash Computation
-
-### 7.1 State Root
-
-**Entry Point:** `compute_state_root()`
-**File:** `crates/warp-core/src/snapshot.rs-209`
-
-```text
-compute_state_root(state: &WarpState, root: &NodeKey) → Hash
-│
-├─[1] BFS REACHABILITY TRAVERSAL
-│ │
-│ ├─ Initialize:
-│ │ reachable_nodes: BTreeSet = { root }
-│ │ reachable_warps: BTreeSet = { root.warp_id }
-│ │ queue: VecDeque = [ root ]
-│ │
-│ └─ WHILE let Some(current) = queue.pop_front():
-│ │
-│ ├─ store = state.store(&current.warp_id)
-│ │
-│ ├─ FOR edge IN store.edges_from(&current.local_id):
-│ │ ├─ to = NodeKey { warp_id: current.warp_id, local_id: edge.to }
-│ │ ├─ IF reachable_nodes.insert(to): queue.push_back(to)
-│ │ │
-│ │ └─ IF edge has Descend(child_warp) attachment:
-│ │ └─ enqueue_descend(state, child_warp, ...)
-│ │ Adds child instance root to queue
-│ │
-│ └─ IF current node has Descend(child_warp) attachment:
-│ enqueue_descend(state, child_warp, ...)
-│
-├─[2] HASHING PHASE
-│ │
-│ ├─ let mut hasher = Hasher::new() // BLAKE3
-│ │
-│ ├─ HASH ROOT BINDING:
-│ │ hasher.update(&root.warp_id.0) // 32 bytes
-│ │ hasher.update(&root.local_id.0) // 32 bytes
-│ │
-│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order
-│ │
-│ ├─ HASH INSTANCE HEADER:
-│ │ hasher.update(&instance.warp_id.0) // 32 bytes
-│ │ hasher.update(&instance.root_node.0) // 32 bytes
-│ │ hash_attachment_key_opt(&mut hasher, instance.parent.as_ref())
-│ │
-│ ├─ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted
-│ │ IF reachable_nodes.contains(&NodeKey { warp_id, local_id: node_id }):
-│ │ hasher.update(&node_id.0) // 32 bytes
-│ │ hasher.update(&node.ty.0) // 32 bytes
-│ │ hash_attachment_value_opt(&mut hasher, store.node_attachment(node_id))
-│ │
-│ └─ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted
-│ IF from is reachable:
-│ sorted_edges = edges.filter(reachable).sort_by(|a,b| a.id.cmp(b.id))
-│ hasher.update(&from.0) // 32 bytes
-│ hasher.update(&(sorted_edges.len() as u64).to_le_bytes()) // 8 bytes
-│ FOR edge IN sorted_edges:
-│ hasher.update(&edge.id.0) // 32 bytes
-│ hasher.update(&edge.ty.0) // 32 bytes
-│ hasher.update(&edge.to.0) // 32 bytes
-│ hash_attachment_value_opt(&mut hasher, store.edge_attachment(&edge.id))
-│
-└─ hasher.finalize().into() // → [u8; 32]
-```
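
Phase [1]'s BFS can be sketched over a flat edge list standing in for `edges_from` (single warp, no `Descend` attachments; the `reachable` helper is illustrative):

```rust
use std::collections::{BTreeSet, VecDeque};

// BFS from the root, collecting reachable nodes into a BTreeSet so the
// later hashing phase iterates them in sorted (deterministic) order.
fn reachable(edges_from: &[(u32, u32)], root: u32) -> BTreeSet<u32> {
    let mut seen: BTreeSet<u32> = BTreeSet::new();
    seen.insert(root);
    let mut queue = VecDeque::from([root]);
    while let Some(current) = queue.pop_front() {
        for &(from, to) in edges_from {
            // insert() returns true only the first time a node is seen
            if from == current && seen.insert(to) {
                queue.push_back(to);
            }
        }
    }
    seen
}

fn main() {
    let edges = [(0, 1), (1, 2), (5, 6)]; // 5→6 is disconnected from root 0
    let r = reachable(&edges, 0);
    assert_eq!(r.into_iter().collect::<Vec<_>>(), vec![0, 1, 2]);
    println!("ok");
}
```

Unreachable nodes and edges never enter the set, so they contribute nothing to the state root, exactly as in the reachability-filtered hashing above.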
-
-### 7.2 Commit Hash v2
-
-**Entry Point:** `compute_commit_hash_v2()`
-**File:** `crates/warp-core/src/snapshot.rs-263`
-
-```rust
-pub(crate) fn compute_commit_hash_v2(
- state_root: &Hash,
- parents: &[Hash],
- patch_digest: &Hash,
- policy_id: u32,
-) -> Hash {
- let mut h = Hasher::new();
- h.update(&2u16.to_le_bytes()); // Version tag (2 bytes)
- h.update(&(parents.len() as u64).to_le_bytes()); // Parent count (8 bytes)
- for p in parents {
- h.update(p); // Each parent (32 bytes)
- }
- h.update(state_root); // Graph hash (32 bytes)
- h.update(patch_digest); // Ops hash (32 bytes)
- h.update(&policy_id.to_le_bytes()); // Policy (4 bytes)
- h.finalize().into()
-}
-```
-
-**Byte Layout:**
-
-```text
-Offset Size Field
-0 2 version_tag (0x02 0x00)
-2 8 parent_count (u64 LE)
-10 32*N parents[] (N parent hashes)
-10+32N 32 state_root
-42+32N 32 patch_digest
-74+32N 4 policy_id (u32 LE)
-─────────────────────────────────────
-TOTAL: 78 + 32*N bytes → BLAKE3 → 32-byte hash
-```
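
The preimage length implied by this layout can be checked directly (plain byte buffer in place of the incremental BLAKE3 hasher; `preimage_len` is illustrative):

```rust
// Assemble the commit-hash v2 preimage with zeroed hashes and measure it:
// 2 (version) + 8 (count) + 32*N (parents) + 32 + 32 + 4 = 78 + 32*N bytes.
fn preimage_len(n_parents: usize) -> usize {
    let mut buf: Vec<u8> = Vec::new();
    buf.extend_from_slice(&2u16.to_le_bytes());               // version tag
    buf.extend_from_slice(&(n_parents as u64).to_le_bytes()); // parent count
    buf.extend(std::iter::repeat(0u8).take(32 * n_parents));  // parents[]
    buf.extend_from_slice(&[0u8; 32]);                        // state_root
    buf.extend_from_slice(&[0u8; 32]);                        // patch_digest
    buf.extend_from_slice(&0u32.to_le_bytes());               // policy_id
    buf.len()
}

fn main() {
    assert_eq!(preimage_len(0), 78);
    assert_eq!(preimage_len(1), 110); // 78 + 32, the common single-parent case
    println!("ok");
}
```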
-
-### 7.3 Patch Digest
-
-**Entry Point:** `compute_patch_digest_v2()`
-**File:** `crates/warp-core/src/tick_patch.rs-774`
-
-```rust
-fn compute_patch_digest_v2(
- policy_id: u32,
- rule_pack_id: &ContentHash,
- commit_status: TickCommitStatus,
- in_slots: &[SlotId],
- out_slots: &[SlotId],
- ops: &[WarpOp],
-) -> ContentHash {
- let mut h = Hasher::new();
- h.update(&2u16.to_le_bytes()); // Format version
- h.update(&policy_id.to_le_bytes()); // 4 bytes
- h.update(rule_pack_id); // 32 bytes
- h.update(&[commit_status.code()]); // 1 byte
- encode_slots(&mut h, in_slots);
- encode_slots(&mut h, out_slots);
- encode_ops(&mut h, ops);
- h.finalize().into()
-}
-```
-
----
-
-## 8. Commit Orchestration
-
-**Entry Point:** `Engine::commit_with_receipt()`
-**File:** `crates/warp-core/src/engine_impl.rs-954`
-
-### 8.1 Complete Call Trace
-
-```text
-Engine::commit_with_receipt(tx) → Result<(Snapshot, TickReceipt, WarpTickPatchV1), EngineError>
-│
-├─[1] VALIDATE TRANSACTION
-│ IF tx.value() == 0 || !self.live_txs.contains(&tx.value()):
-│ return Err(EngineError::UnknownTx)
-│
-├─[2] DRAIN CANDIDATES
-│ policy_id = self.policy_id // Line 844
-│ rule_pack_id = self.compute_rule_pack_id() // Line 845
-│ │
-│ ├─ compute_rule_pack_id()
-│ │ FILE: engine_impl.rs
-│ │ CODE:
-│ │ ids = self.rules.values().map(|r| r.id).collect()
-│ │ ids.sort_unstable(); ids.dedup()
-│ │ hasher.update(&1u16.to_le_bytes()) // version
-│ │ hasher.update(&(ids.len() as u64).to_le_bytes())
-│ │ FOR id IN ids: hasher.update(&id)
-│ │ hasher.finalize().into()
-│ │
-│ drained = self.scheduler.drain_for_tx(tx) // Line 847
-│ plan_digest = compute_plan_digest(&drained) // Line 848
-│
-├─[3] RESERVE (INDEPENDENCE CHECK)
-│ ReserveOutcome { receipt, reserved, in_slots, out_slots }
-│ = self.reserve_for_receipt(tx, drained)? // Line 850-855
-│ │
-│ └─ reserve_for_receipt(tx, drained)
-│ FILE: engine_impl.rs
-│ │
-│ FOR rewrite IN drained (canonical order):
-│ │
-│ ├─ accepted = self.scheduler.reserve(tx, &mut rewrite)
-│ │
-│ ├─ IF !accepted:
-│ │ blockers = find_blocking_rewrites(reserved, &rewrite)
-│ │
-│ ├─ receipt_entries.push(TickReceiptEntry { ... })
-│ │
-│ └─ IF accepted:
-│ reserved.push(rewrite)
-│ extend_slots_from_footprint(&mut in_slots, &mut out_slots, ...)
-│ │
-│ return ReserveOutcome { receipt, reserved, in_slots, out_slots }
-│
-│ rewrites_digest = compute_rewrites_digest(&reserved_rewrites) // Line 858
-│
-├─[4] EXECUTE (PHASE 5 BOAW)
-│ state_before = self.state.clone() // Line 862
-│ delta_ops = self.apply_reserved_rewrites(reserved, &state_before)?
-│ │
-│ └─ apply_reserved_rewrites(rewrites, state_before)
-│ FILE: engine_impl.rs
-│ │
-│ ├─ let mut delta = TickDelta::new()
-│ │
-│ ├─ FOR rewrite IN rewrites:
-│ │ executor = self.rule_by_compact(rewrite.compact_rule).executor
-│ │ view = GraphView::new(self.state.store(&rewrite.scope.warp_id))
-│ │ (executor)(view, &rewrite.scope.local_id, &mut delta)
-│ │
-│ ├─ let ops = delta.finalize() // Canonical sort
-│ │
-│ ├─ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops)
-│ │ patch.apply_to_state(&mut self.state)?
-│ │
-│ └─ [delta_validate]: assert_delta_matches_diff(&ops, &diff_ops)
-│
-├─[5] MATERIALIZE
-│ mat_report = self.bus.finalize() // Line 884
-│ self.last_materialization = mat_report.channels
-│ self.last_materialization_errors = mat_report.errors
-│
-├─[6] COMPUTE DELTA PATCH
-│ ops = diff_state(&state_before, &self.state) // Line 889
-│ │
-│ └─ diff_state(before, after)
-│ FILE: tick_patch.rs
-│ - Canonicalize portal authoring (OpenPortal)
-│ - Diff instances (delete/upsert)
-│ - Diff nodes, edges, attachments
-│ - Sort by WarpOp::sort_key()
-│ │
-│ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops)
-│ patch_digest = patch.digest() // Line 898
-│
-├─[7] COMPUTE STATE ROOT
-│ state_root = compute_state_root(&self.state, &self.current_root) // Line 900
-│
-├─[8] GET PARENTS
-│ parents = self.last_snapshot.as_ref().map(|s| vec![s.hash]).unwrap_or_default()
-│
-├─[9] COMPUTE DECISION DIGEST
-│ decision_digest = receipt.digest() // Line 929
-│
-├─[10] COMPUTE COMMIT HASH
-│ hash = compute_commit_hash_v2(&state_root, &parents, &patch_digest, policy_id)
-│
-├─[11] BUILD SNAPSHOT
-│ snapshot = Snapshot {
-│ root: self.current_root,
-│ hash, // commit_id v2
-│ parents,
-│ plan_digest, // Diagnostic
-│ decision_digest, // Diagnostic
-│ rewrites_digest, // Diagnostic
-│ patch_digest, // COMMITTED
-│ policy_id, // COMMITTED
-│ tx,
-│ }
-│
-├─[12] RECORD TO HISTORY
-│ self.last_snapshot = Some(snapshot.clone()) // Line 947
-│ self.tick_history.push((snapshot, receipt, patch)) // Line 948-949
-│ self.live_txs.remove(&tx.value()) // Line 951
-│ self.scheduler.finalize_tx(tx) // Line 952
-│
-└─[13] RETURN
- Ok((snapshot, receipt, patch))
-```
-
-### 8.2 Commit Hash Inputs
-
-| Input | Committed? | Purpose |
-| ----------------- | ---------- | ------------------------- |
-| `state_root` | ✓ | What the graph looks like |
-| `patch_digest` | ✓ | How we got here (ops) |
-| `parents` | ✓ | Chain continuity |
-| `policy_id` | ✓ | Aion policy version |
-| `plan_digest` | ✗ | Diagnostic only |
-| `decision_digest` | ✗ | Diagnostic only |
-| `rewrites_digest` | ✗ | Diagnostic only |
-
----
-
-## 9. Complete Call Graph
-
-### 9.1 Full Journey: Intent → Commit
-
-```text
-USER ACTION
- │
- ▼
-Engine::ingest_intent(intent_bytes)
- ├─ compute_intent_id() // BLAKE3 content hash
- ├─ make_node_id(), make_type_id() // Structural IDs
- ├─ store.insert_node() // Create event node
- ├─ store.set_node_attachment() // Attach intent payload
- └─ store.insert_edge() // Pending edge to inbox
- │
- ▼
-Engine::begin() → TxId
- ├─ tx_counter.wrapping_add(1)
- ├─ live_txs.insert(tx_counter)
- └─ TxId::from_raw(tx_counter)
- │
- ▼
-Engine::dispatch_next_intent(tx) // (or manual apply)
- │
- ▼
-Engine::apply(tx, rule_name, scope)
- └─ Engine::apply_in_warp(tx, warp_id, rule_name, scope, &[])
- ├─ rules.get(rule_name) // Lookup rule
- ├─ GraphView::new(store) // Read-only view
- ├─ (rule.matcher)(view, scope) // Match check
- ├─ scope_hash() // BLAKE3 ordering key
- ├─ (rule.compute_footprint)(view, scope) // Footprint
- └─ scheduler.enqueue(tx, PendingRewrite)
- └─ PendingTx::enqueue() // Last-wins dedup
- │
- ▼
-Engine::commit_with_receipt(tx)
- │
- ├─[DRAIN]
- │ scheduler.drain_for_tx(tx)
- │ └─ PendingTx::drain_in_order()
- │ └─ radix_sort() or sort_unstable_by()
- │ 20-pass LSD radix sort
- │ ORDER: (scope_hash, rule_id, nonce)
- │
- ├─[RESERVE]
- │ FOR rewrite IN drained:
- │ scheduler.reserve(tx, &mut rewrite)
- │ ├─ has_conflict(active, pr)
- │ │ └─ GenSet::contains() × N // O(1) per check
- │ └─ mark_all(active, pr)
- │ └─ GenSet::mark() × M // O(1) per mark
- │
- ├─[EXECUTE]
- │ apply_reserved_rewrites(reserved, state_before)
- │ FOR rewrite IN reserved:
- │ (executor)(view, &scope, &mut delta)
- │ └─ scoped.emit(op)
- │ └─ delta.emit_with_origin(op, origin)
- │ delta.finalize() // Sort ops
- │ patch.apply_to_state(&mut self.state)
- │
- ├─[MATERIALIZE]
- │ bus.finalize()
- │
- ├─[DELTA PATCH]
- │ diff_state(&state_before, &self.state)
- │ └─ Sort by WarpOp::sort_key()
- │ WarpTickPatchV1::new(...)
- │ └─ compute_patch_digest_v2()
- │
- ├─[HASHES]
- │ compute_state_root(&self.state, &self.current_root)
- │ ├─ BFS reachability
- │ └─ BLAKE3 over canonical encoding
- │ compute_commit_hash_v2(state_root, parents, patch_digest, policy_id)
- │ └─ BLAKE3(version || parents || state_root || patch_digest || policy_id)
- │
- ├─[SNAPSHOT]
- │ Snapshot { root, hash, parents, digests..., policy_id, tx }
- │
- └─[RECORD]
- tick_history.push((snapshot, receipt, patch))
- live_txs.remove(&tx.value())
- scheduler.finalize_tx(tx)
- │
- ▼
-RETURN: (Snapshot, TickReceipt, WarpTickPatchV1)
-```
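The last-wins dedup that feeds the `[DRAIN]` stage (`PendingTx::enqueue` in the trace) can be sketched with the thin/fat split. Names follow the trace, but this is a simplified model rather than the real struct: the payload is a `String` stand-in and the 32-byte scope hash is passed directly.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
struct Thin {
    key: ([u8; 32], u32), // (scope_hash, rule_id)
    nonce: u32,           // insertion-order tie-breaker
    handle: usize,        // index into `fat`
}

#[derive(Default)]
struct PendingTx {
    thin: Vec<Thin>,
    fat: Vec<Option<String>>, // payload stand-in
    index: HashMap<([u8; 32], u32), usize>,
    next_nonce: u32,
}

impl PendingTx {
    fn enqueue(&mut self, scope: [u8; 32], rule_id: u32, payload: String) {
        let key = (scope, rule_id);
        let nonce = self.next_nonce;
        self.next_nonce += 1;
        if let Some(&i) = self.index.get(&key) {
            // Duplicate (scope_hash, rule_id): last wins, nonce refreshed.
            self.fat[self.thin[i].handle] = Some(payload);
            self.thin[i].nonce = nonce;
        } else {
            // New entry: payload goes to `fat`, sort record to `thin`.
            self.fat.push(Some(payload));
            self.thin.push(Thin { key, nonce, handle: self.fat.len() - 1 });
            self.index.insert(key, self.thin.len() - 1);
        }
    }
}
```

Refreshing the nonce on overwrite is what makes a re-enqueued rewrite sort after its earlier rivals within the same `(scope_hash, rule_id)` slot.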
-
-### 9.2 File Index
-
-| Component | Primary File | Key Lines |
-| ------------------- | ---------------- | ---------------- |
-| Intent Ingestion | `engine_impl.rs` | 1216-1281 |
-| Identity Hashing | `ident.rs` | 85-109 |
-| Transaction Begin | `engine_impl.rs` | 711-719 |
-| Rule Apply | `engine_impl.rs` | 730-806 |
-| Footprint | `footprint.rs` | 131-152 |
-| Scheduler Enqueue | `scheduler.rs` | 102-105, 331-355 |
-| Radix Sort | `scheduler.rs` | 360-413, 481-498 |
-| Reserve/Conflict | `scheduler.rs` | 134-278 |
-| GenSet | `scheduler.rs` | 509-535 |
-| BOAW Execute | `boaw/exec.rs` | 61-152 |
-| Shard Routing | `boaw/shard.rs` | 82-120 |
-| Delta Merge | `boaw/merge.rs` | 36-75 |
-| TickDelta | `tick_delta.rs` | 38-172 |
-| WarpOp Sort Key | `tick_patch.rs` | 207-287 |
-| State Mutations | `graph.rs` | 175-412 |
-| Patch Apply | `tick_patch.rs` | 434-561 |
-| Diff State | `tick_patch.rs` | 979-1069 |
-| State Root Hash | `snapshot.rs` | 88-209 |
-| Commit Hash v2 | `snapshot.rs` | 244-263 |
-| Patch Digest | `tick_patch.rs` | 755-774 |
-| Commit Orchestrator | `engine_impl.rs` | 837-954 |
-
----
-
-## Appendix A: Complexity Summary
-
-| Operation | Complexity | Notes |
-| ------------------------ | ---------- | ---------------------------------- |
-| `ingest_intent` | O(1) | Fixed structural insertions |
-| `begin` | O(1) | Counter increment + set insert |
-| `apply` | O(m) | m = footprint size |
-| `drain_for_tx` (radix) | O(n) | n = candidates, 20 passes |
-| `reserve` per rewrite | O(m) | m = footprint size, O(1) per check |
-| `execute_parallel` | O(n/w) | n = items, w = workers |
-| `merge_deltas` | O(k log k) | k = total ops (sort + dedup) |
-| `compute_state_root` | O(V + E) | V = nodes, E = edges |
-| `compute_commit_hash_v2` | O(P) | P = parents |
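The O(1)-per-check cost of `reserve` comes from `GenSet`. The sketch below mirrors the struct excerpted in this document, with two liberties: std's `HashMap` stands in for `FxHashMap`, and the generation field is named `generation` (the source calls it `gen`); `mark` and `clear` bodies are a plausible reconstruction.

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Generation-stamped membership set. An entry counts as present only
/// if it was marked in the current generation, so clearing between
/// transactions is a single counter bump rather than a map drain.
struct GenSet<K> {
    generation: u32,
    seen: HashMap<K, u32>, // key → generation when marked
}

impl<K: Hash + Eq + Copy> GenSet<K> {
    fn new() -> Self {
        Self { generation: 0, seen: HashMap::new() }
    }

    #[inline]
    fn contains(&self, key: K) -> bool {
        matches!(self.seen.get(&key), Some(&g) if g == self.generation)
    }

    #[inline]
    fn mark(&mut self, key: K) {
        self.seen.insert(key, self.generation);
    }

    /// O(1) clear: stale entries are invalidated, not removed.
    fn clear(&mut self) {
        self.generation = self.generation.wrapping_add(1);
    }
}
```

Conflict detection then reduces to `contains` probes against seven such sets (nodes/edges/attachments read and written, plus ports), each a single hash lookup.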
-
----
-
-## Appendix B: Determinism Boundaries
-
-### Guaranteed Deterministic
-
-- Radix sort ordering (20-pass LSD)
-- BTreeMap/BTreeSet iteration
-- BLAKE3 hashing
-- GenSet conflict detection
-- Canonical merge deduplication
-
-### Intentionally Non-Deterministic (Handled by Merge)
-
-- Worker execution order in BOAW
-- Shard claim order (atomic counter)
-
-### Protocol Constants (Frozen)
-
-- `NUM_SHARDS = 256`
-- `SHARD_MASK = 255`
-- Shard routing: `LE_u64(node_id[0..8]) & 255`
-- Commit hash v2 version tag: `0x02 0x00`
-
----
-
-_Document generated 2026-01-25. File paths and line numbers are accurate as of this date and will drift as the code evolves._
diff --git a/docs/archive/study/echo-tour-de-code.pdf b/docs/archive/study/echo-tour-de-code.pdf
deleted file mode 100644
index a32b911f..00000000
Binary files a/docs/archive/study/echo-tour-de-code.pdf and /dev/null differ
diff --git a/docs/archive/study/echo-tour-de-code.tex b/docs/archive/study/echo-tour-de-code.tex
deleted file mode 100644
index 0c317858..00000000
--- a/docs/archive/study/echo-tour-de-code.tex
+++ /dev/null
@@ -1,1560 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Options for packages loaded elsewhere
-\PassOptionsToPackage{unicode}{hyperref}
-\PassOptionsToPackage{hyphens}{url}
-\documentclass[
-]{book}
-\usepackage[letterpaper, margin=1in]{geometry}
-\usepackage{xcolor}
-\usepackage{amsmath,amssymb}
-\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
-\usepackage{iftex}
-\ifPDFTeX
- \usepackage[T1]{fontenc}
- \usepackage[utf8]{inputenc}
- \usepackage{textcomp} % provide euro and other symbols
-\else % if luatex or xetex
- \usepackage{unicode-math} % this also loads fontspec
- \defaultfontfeatures{Scale=MatchLowercase}
- \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
-\fi
-\usepackage{lmodern}
-\ifPDFTeX\else
- % xetex/luatex font selection
-\fi
-% Use upquote if available, for straight quotes in verbatim environments
-\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
-\IfFileExists{microtype.sty}{% use microtype if available
- \usepackage[]{microtype}
- \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
-}{}
-\makeatletter
-\@ifundefined{KOMAClassName}{% if non-KOMA class
- \IfFileExists{parskip.sty}{%
- \usepackage{parskip}
- }{% else
- \setlength{\parindent}{0pt}
- \setlength{\parskip}{6pt plus 2pt minus 1pt}}
-}{% if KOMA class
- \KOMAoptions{parskip=half}}
-\makeatother
-\usepackage{color}
-\usepackage{fancyvrb}
-\newcommand{\VerbBar}{|}
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
-% Add ',fontsize=\small' for more characters per line
-\newenvironment{Shaded}{}{}
-\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}}
-\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}}
-\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}}
-\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}}
-\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}}
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}}
-\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\ExtensionTok}[1]{#1}
-\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}}
-\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}}
-\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\NormalTok}[1]{#1}
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}}
-\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}}
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}}
-\newcommand{\RegionMarkerTok}[1]{#1}
-\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}}
-\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}}
-\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\usepackage{longtable,booktabs,array}
-\newcounter{none} % for unnumbered tables
-\usepackage{calc} % for calculating minipage widths
-% Correct order of tables after \paragraph or \subparagraph
-\usepackage{etoolbox}
-\makeatletter
-\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
-\makeatother
-% Allow footnotes in longtable head/foot
-\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
-\makesavenoteenv{longtable}
-\setlength{\emergencystretch}{3em} % prevent overfull lines
-\providecommand{\tightlist}{%
- \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
-\usepackage{bookmark}
-\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
-\urlstyle{same}
-\hypersetup{
- hidelinks,
- pdfcreator={LaTeX via pandoc}}
-
-\author{}
-\date{}
-
-\begin{document}
-\frontmatter
-
-\mainmatter
-\chapter{Echo: Tour de Code}\label{echo-tour-de-code}
-
-\begin{quote}
-\textbf{The complete function-by-function trace of Echo's execution
-pipeline.}
-
-This document traces EVERY function call involved in processing a user
-action through the Echo engine. References use \textbf{file paths and
-line numbers}, which drift as the code evolves; prefer the accompanying
-symbol names when in doubt. Run \texttt{scripts/validate-tour-refs.sh}
-to verify the referenced symbols still exist in the codebase.
-\end{quote}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Table of Contents}\label{table-of-contents}
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
- \hyperref[1-intent-ingestion]{Intent Ingestion}
-\item
- \hyperref[2-transaction-lifecycle]{Transaction Lifecycle}
-\item
- \hyperref[3-rule-matching]{Rule Matching}
-\item
- \hyperref[4-scheduler-drain--reserve]{Scheduler: Drain \& Reserve}
-\item
- \hyperref[5-boaw-parallel-execution]{BOAW Parallel Execution}
-\item
- \hyperref[6-delta-merge--state-finalization]{Delta Merge \& State
- Finalization}
-\item
- \hyperref[7-hash-computation]{Hash Computation}
-\item
- \hyperref[8-commit-orchestration]{Commit Orchestration}
-\item
- \hyperref[9-complete-call-graph]{Complete Call Graph}
-\end{enumerate}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{1. Intent Ingestion}\label{intent-ingestion}
-
-\textbf{Entry Point:} \texttt{Engine::ingest\_intent()} \textbf{File:}
-\texttt{crates/warp-core/src/engine\_impl.rs:1216}
-
-\subsection{1.1 Function Signature}\label{function-signature}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ ingest\_intent(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ intent\_bytes}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\DataTypeTok{u8}\NormalTok{]) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{IngestDisposition}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Returns:}
-
-\begin{itemize}
-\tightlist
-\item
-  \texttt{IngestDisposition::Accepted\ \{\ intent\_id:\ Hash\ \}} --- New
-  intent accepted
-\item
-  \texttt{IngestDisposition::Duplicate\ \{\ intent\_id:\ Hash\ \}} ---
-  Already ingested
-\end{itemize}
-
-\subsection{1.2 Complete Call Trace}\label{complete-call-trace}
-
-\begin{verbatim}
-Engine::ingest_intent(intent_bytes: &[u8])
-│
-├─[1] compute_intent_id(intent_bytes) → Hash
-│ FILE: crates/warp-core/src/inbox.rs:205
-│ CODE:
-│ let mut hasher = blake3::Hasher::new();
-│ hasher.update(b"intent:"); // Domain separation
-│ hasher.update(intent_bytes);
-│ hasher.finalize().into() // → [u8; 32]
-│
-├─[2] NodeId(intent_id)
-│ Creates strongly-typed NodeId from Hash
-│
-├─[3] self.state.store_mut(&warp_id) → Option<&mut GraphStore>
-│ FILE: crates/warp-core/src/engine_impl.rs:1221
-│ ERROR: EngineError::UnknownWarp if None
-│
-├─[4] Extract root_node_id from self.current_root.local_id
-│
-├─[5] STRUCTURAL NODE CREATION (Idempotent)
-│ ├─ make_node_id("sim") → NodeId
-│ │ FILE: crates/warp-core/src/ident.rs:93
-│ │ CODE: blake3("node:" || "sim")
-│ │
-│ ├─ make_node_id("sim/inbox") → NodeId
-│ │ CODE: blake3("node:" || "sim/inbox")
-│ │
-│ ├─ make_type_id("sim") → TypeId
-│ │ FILE: crates/warp-core/src/ident.rs:85
-│ │ CODE: blake3("type:" || "sim")
-│ │
-│ ├─ make_type_id("sim/inbox") → TypeId
-│ ├─ make_type_id("sim/inbox/event") → TypeId
-│ │
-│ ├─ store.insert_node(sim_id, NodeRecord { ty: sim_ty })
-│ │ FILE: crates/warp-core/src/graph.rs:175
-│ │ CODE: self.nodes.insert(id, record)
-│ │
-│ └─ store.insert_node(inbox_id, NodeRecord { ty: inbox_ty })
-│
-├─[6] STRUCTURAL EDGE CREATION
-│ ├─ make_edge_id("edge:root/sim") → EdgeId
-│ │ FILE: crates/warp-core/src/ident.rs:109
-│ │ CODE: blake3("edge:" || "edge:root/sim")
-│ │
-│ ├─ store.insert_edge(root_id, EdgeRecord { ... })
-│ │ FILE: crates/warp-core/src/graph.rs:188
-│ │ └─ GraphStore::upsert_edge_record(from, edge)
-│ │ FILE: crates/warp-core/src/graph.rs:196
-│ │ UPDATES:
-│ │ self.edge_index.insert(edge_id, from)
-│ │ self.edge_to_index.insert(edge_id, to)
-│ │ self.edges_from.entry(from).or_default().push(edge)
-│ │ self.edges_to.entry(to).or_default().push(edge_id)
-│ │
-│ └─ store.insert_edge(sim_id, EdgeRecord { ... }) [sim → inbox]
-│
-├─[7] DUPLICATE DETECTION
-│ store.node(&event_id) → Option<&NodeRecord>
-│ FILE: crates/warp-core/src/graph.rs:87
-│ CODE: self.nodes.get(id)
-│ IF Some(_): return Ok(IngestDisposition::Duplicate { intent_id })
-│
-├─[8] EVENT NODE CREATION
-│ store.insert_node(event_id, NodeRecord { ty: event_ty })
-│ NOTE: event_id = intent_id (content-addressed)
-│
-├─[9] INTENT ATTACHMENT
-│ ├─ AtomPayload::new(type_id, bytes)
-│ │ FILE: crates/warp-core/src/attachment.rs:149
-│ │ CODE: Self { type_id, bytes: Bytes::copy_from_slice(intent_bytes) }
-│ │
-│ └─ store.set_node_attachment(event_id, Some(AttachmentValue::Atom(payload)))
-│ FILE: crates/warp-core/src/graph.rs:125
-│ CODE: self.node_attachments.insert(id, v)
-│
-├─[10] PENDING EDGE CREATION (Queue Membership)
-│ ├─ pending_edge_id(&inbox_id, &intent_id) → EdgeId
-│ │ FILE: crates/warp-core/src/inbox.rs:212
-│ │ CODE: blake3("edge:" || "sim/inbox/pending:" || inbox_id || intent_id)
-│ │
-│ └─ store.insert_edge(inbox_id, EdgeRecord {
-│ id: pending_edge_id,
-│ from: inbox_id,
-│ to: event_id,
-│ ty: make_type_id("edge:pending")
-│ })
-│
-└─[11] return Ok(IngestDisposition::Accepted { intent_id })
-\end{verbatim}
-
-\subsection{1.3 Data Structures
-Modified}\label{data-structures-modified}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.4231}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3077}}@{}}
-\toprule\noalign{}
-\begin{minipage}[b]{\linewidth}\raggedright
-Structure
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Field
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Change
-\end{minipage} \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{GraphStore} & \texttt{nodes} & +3 entries (sim, inbox, event) \\
-\texttt{GraphStore} & \texttt{edges\_from} & +3 edges (root→sim,
-sim→inbox, inbox→event) \\
-\texttt{GraphStore} & \texttt{edges\_to} & +3 reverse entries \\
-\texttt{GraphStore} & \texttt{edge\_index} & +3 edge→from mappings \\
-\texttt{GraphStore} & \texttt{edge\_to\_index} & +3 edge→to mappings \\
-\texttt{GraphStore} & \texttt{node\_attachments} & +1 (event → intent
-payload) \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{2. Transaction Lifecycle}\label{transaction-lifecycle}
-
-\subsection{2.1 Begin Transaction}\label{begin-transaction}
-
-\textbf{Entry Point:} \texttt{Engine::begin()} \textbf{File:}
-\texttt{crates/warp-core/src/engine\_impl.rs:711-719}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ begin(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ TxId }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter}\OperatorTok{.}\NormalTok{wrapping\_add(}\DecValTok{1}\NormalTok{)}\OperatorTok{;} \CommentTok{// Line 713}
- \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{==} \DecValTok{0} \OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter }\OperatorTok{=} \DecValTok{1}\OperatorTok{;} \CommentTok{// Line 715: Zero is reserved}
- \OperatorTok{\}}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{insert(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter)}\OperatorTok{;} \CommentTok{// Line 717}
- \PreprocessorTok{TxId::}\NormalTok{from\_raw(}\KeywordTok{self}\OperatorTok{.}\NormalTok{tx\_counter) }\CommentTok{// Line 718}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Call Trace:}
-
-\begin{verbatim}
-Engine::begin()
-│
-├─ self.tx_counter.wrapping_add(1)
-│ Rust std: u64::wrapping_add
-│ Handles u64::MAX → 0 overflow
-│
-├─ if self.tx_counter == 0: self.tx_counter = 1
-│ INVARIANT: TxId(0) is reserved as invalid
-│
-├─ self.live_txs.insert(self.tx_counter)
-│ TYPE: HashSet<u64>
-│ Registers transaction as active
-│
-└─ TxId::from_raw(self.tx_counter)
- FILE: crates/warp-core/src/tx.rs:34
- CODE: pub const fn from_raw(value: u64) -> Self { Self(value) }
- TYPE: #[repr(transparent)] struct TxId(u64)
-\end{verbatim}
-
-\textbf{State Changes:}
-
-\begin{itemize}
-\tightlist
-\item
-  \texttt{tx\_counter}: N → N+1 (or 1 if wrapped)
-\item
-  \texttt{live\_txs}: Insert new counter value
-\end{itemize}
-
-\subsection{2.2 Abort Transaction}\label{abort-transaction}
-
-\textbf{Entry Point:} \texttt{Engine::abort()} \textbf{File:}
-\texttt{crates/warp-core/src/engine\_impl.rs:962-968}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ abort(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{live\_txs}\OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx}\OperatorTok{.}\NormalTok{value())}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{scheduler}\OperatorTok{.}\NormalTok{finalize\_tx(tx)}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{bus}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{last\_materialization\_errors}\OperatorTok{.}\NormalTok{clear()}\OperatorTok{;}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{3. Rule Matching}\label{rule-matching}
-
-\textbf{Entry Point:} \texttt{Engine::apply()} \textbf{File:}
-\texttt{crates/warp-core/src/engine\_impl.rs:730-737}
-
-\subsection{3.1 Function Signature}\label{function-signature-1}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ apply(}
- \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}
-\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}
-\NormalTok{ rule\_name}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\OperatorTok{,}
-\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\NormalTok{ApplyResult}\OperatorTok{,}\NormalTok{ EngineError}\OperatorTok{\textgreater{}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{3.2 Complete Call Trace}\label{complete-call-trace-1}
-
-\begin{verbatim}
-Engine::apply(tx, rule_name, scope)
-│
-└─ Engine::apply_in_warp(tx, self.current_root.warp_id, rule_name, scope, &[])
- FILE: crates/warp-core/src/engine_impl.rs:754-806
- │
- ├─[1] TRANSACTION VALIDATION
- │ CODE: if tx.value() == 0 || !self.live_txs.contains(&tx.value())
- │ ERROR: EngineError::UnknownTx
- │
- ├─[2] RULE LOOKUP
- │ self.rules.get(rule_name) → Option<&RewriteRule>
- │ TYPE: HashMap<&'static str, RewriteRule>
- │ ERROR: EngineError::UnknownRule(rule_name.to_owned())
- │
- ├─[3] STORE LOOKUP
- │ self.state.store(&warp_id) → Option<&GraphStore>
- │ ERROR: EngineError::UnknownWarp(warp_id)
- │
- ├─[4] CREATE GRAPHVIEW
- │ GraphView::new(store) → GraphView<'_>
- │ FILE: crates/warp-core/src/graph_view.rs
- │ TYPE: Read-only wrapper (Copy, 8 bytes)
- │
- ├─[5] CALL MATCHER
- │ (rule.matcher)(view, scope) → bool
- │ TYPE: MatchFn = for<'a> fn(GraphView<'a>, &NodeId) -> bool
- │ FILE: crates/warp-core/src/rule.rs:16-24
- │ IF false: return Ok(ApplyResult::NoMatch)
- │
- ├─[6] CREATE SCOPE KEY
- │ let scope_key = NodeKey { warp_id, local_id: *scope }
- │
- ├─[7] COMPUTE SCOPE HASH
- │ scope_hash(&rule.id, &scope_key) → Hash
- │ FILE: crates/warp-core/src/engine_impl.rs:1712-1718
- │ CODE:
- │ let mut hasher = Hasher::new();
- │ hasher.update(rule_id); // 32 bytes
- │ hasher.update(scope.warp_id.as_bytes()); // 32 bytes
- │ hasher.update(scope.local_id.as_bytes()); // 32 bytes
- │ hasher.finalize().into()
- │
- ├─[8] COMPUTE FOOTPRINT
- │ (rule.compute_footprint)(view, scope) → Footprint
- │ TYPE: FootprintFn = for<'a> fn(GraphView<'a>, &NodeId) -> Footprint
- │ FILE: crates/warp-core/src/rule.rs:38-46
- │ RETURNS:
- │ Footprint {
- │ n_read: IdSet, // Nodes read
- │ n_write: IdSet, // Nodes written
- │ e_read: IdSet, // Edges read
- │ e_write: IdSet, // Edges written
- │ a_read: AttachmentSet, // Attachments read
- │ a_write: AttachmentSet, // Attachments written
- │ b_in: PortSet, // Input ports
- │ b_out: PortSet, // Output ports
- │ factor_mask: u64, // O(1) prefilter
- │ }
- │
- ├─[9] AUGMENT FOOTPRINT WITH DESCENT STACK
- │ for key in descent_stack:
- │ footprint.a_read.insert(*key)
- │ FILE: crates/warp-core/src/footprint.rs:104-107
- │ PURPOSE: Stage B1 law - READs of all descent chain slots
- │
- ├─[10] COMPACT RULE ID LOOKUP
- │ self.compact_rule_ids.get(&rule.id) → Option<&CompactRuleId>
-│ TYPE: HashMap<Hash, CompactRuleId>
- │ ERROR: EngineError::InternalCorruption
- │
- └─[11] ENQUEUE TO SCHEDULER
- self.scheduler.enqueue(tx, PendingRewrite { ... })
- │
- └─ DeterministicScheduler::enqueue(tx, rewrite)
- FILE: crates/warp-core/src/scheduler.rs:654-659
- │
- └─ RadixScheduler::enqueue(tx, rewrite)
- FILE: crates/warp-core/src/scheduler.rs:102-105
- CODE:
- let txq = self.pending.entry(tx).or_default();
- txq.enqueue(rewrite.scope_hash, rewrite.compact_rule.0, rewrite);
- │
- └─ PendingTx::enqueue(scope_be32, rule_id, payload)
- FILE: crates/warp-core/src/scheduler.rs:331-355
-
- CASE 1: Duplicate (scope_hash, rule_id) — LAST WINS
- index.get(&key) → Some(&i)
- fat[thin[i].handle] = Some(payload) // Overwrite
- thin[i].nonce = next_nonce++ // Refresh nonce
-
- CASE 2: New entry
- fat.push(Some(payload))
- thin.push(RewriteThin { scope_be32, rule_id, nonce, handle })
- index.insert(key, thin.len() - 1)
-\end{verbatim}
-
-\subsection{3.3 PendingRewrite
-Structure}\label{pendingrewrite-structure}
-
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:68-82}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ PendingRewrite }\OperatorTok{\{}
- \KeywordTok{pub}\NormalTok{ rule\_id}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte rule identifier}
- \KeywordTok{pub}\NormalTok{ compact\_rule}\OperatorTok{:}\NormalTok{ CompactRuleId}\OperatorTok{,} \CommentTok{// u32 hot{-}path handle}
- \KeywordTok{pub}\NormalTok{ scope\_hash}\OperatorTok{:} \BuiltInTok{Hash}\OperatorTok{,} \CommentTok{// 32{-}byte ordering key}
- \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeKey}\OperatorTok{,} \CommentTok{// \{ warp\_id, local\_id \}}
- \KeywordTok{pub}\NormalTok{ footprint}\OperatorTok{:}\NormalTok{ Footprint}\OperatorTok{,} \CommentTok{// Read/write declaration}
- \KeywordTok{pub}\NormalTok{ phase}\OperatorTok{:}\NormalTok{ RewritePhase}\OperatorTok{,} \CommentTok{// State machine: Matched → Reserved → ...}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{4. Scheduler: Drain \& Reserve}\label{scheduler-drain-reserve}
-
-\subsection{4.1 Drain Phase (Radix Sort)}\label{drain-phase-radix-sort}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::drain\_for\_tx()}
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:109-113}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ drain\_for\_tx(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{PendingRewrite}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{pending}
- \OperatorTok{.}\NormalTok{remove(}\OperatorTok{\&}\NormalTok{tx)}
- \OperatorTok{.}\NormalTok{map\_or\_else(}\DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new}\OperatorTok{,} \OperatorTok{|}\KeywordTok{mut}\NormalTok{ txq}\OperatorTok{|}\NormalTok{ txq}\OperatorTok{.}\NormalTok{drain\_in\_order())}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Complete Call Trace:}
-
-\begin{verbatim}
-RadixScheduler::drain_for_tx(tx)
-│
-├─ self.pending.remove(&tx) → Option<PendingTx>
-│
-└─ PendingTx::drain_in_order()
- FILE: crates/warp-core/src/scheduler.rs:416-446
- │
- ├─ DECISION: n <= 1024 (SMALL_SORT_THRESHOLD)?
- │ ├─ YES: sort_unstable_by(cmp_thin)
- │ │ Rust std comparison sort
- │ │
- │ └─ NO: radix_sort()
- │ FILE: crates/warp-core/src/scheduler.rs:360-413
- │
- └─ radix_sort()
- │
- ├─ Initialize scratch buffer: self.scratch.resize(n, default)
- │
- ├─ Lazy allocate histogram: self.counts16 = vec![0u32; 65536]
- │
- └─ FOR pass IN 0..20: // ═══ 20 PASSES ═══
- │
- ├─ SELECT src/dst buffers (ping-pong)
- │ flip = false: src=thin, dst=scratch
- │ flip = true: src=scratch, dst=thin
- │
- ├─ PHASE 1: COUNT BUCKETS
- │ FOR r IN src:
- │ b = bucket16(r, pass)
- │ counts[b] += 1
- │
- ├─ PHASE 2: PREFIX SUMS
- │ sum = 0
- │ FOR c IN counts:
- │ t = *c
- │ *c = sum
- │ sum += t
- │
- ├─ PHASE 3: STABLE SCATTER
- │ FOR r IN src:
- │ b = bucket16(r, pass)
- │ dst[counts[b]] = r
- │ counts[b] += 1
- │
- └─ flip = !flip
-
-BUCKET EXTRACTION (bucket16):
-FILE: crates/warp-core/src/scheduler.rs:481-498
-
-Pass 0: u16_from_u32_le(r.nonce, 0) // Nonce bytes [0:2]
-Pass 1: u16_from_u32_le(r.nonce, 1) // Nonce bytes [2:4]
-Pass 2: u16_from_u32_le(r.rule_id, 0) // Rule ID bytes [0:2]
-Pass 3: u16_from_u32_le(r.rule_id, 1) // Rule ID bytes [2:4]
-Pass 4: u16_be_from_pair32(scope, 15) // Scope bytes [30:32]
-Pass 5: u16_be_from_pair32(scope, 14) // Scope bytes [28:30]
-...
-Pass 19: u16_be_from_pair32(scope, 0) // Scope bytes [0:2] (MSD)
-
-SORT ORDER: (scope_hash, rule_id, nonce) ascending lexicographic
-\end{verbatim}
-
-\subsection{4.2 Reserve Phase (Independence
-Check)}\label{reserve-phase-independence-check}
-
-\textbf{Entry Point:} \texttt{RadixScheduler::reserve()} \textbf{File:}
-\texttt{crates/warp-core/src/scheduler.rs:134-143}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ reserve(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ tx}\OperatorTok{:}\NormalTok{ TxId}\OperatorTok{,}\NormalTok{ pr}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ PendingRewrite) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ active }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{active}\OperatorTok{.}\NormalTok{entry(tx)}\OperatorTok{.}\NormalTok{or\_insert\_with(}\PreprocessorTok{ActiveFootprints::}\NormalTok{new)}\OperatorTok{;}
- \ControlFlowTok{if} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{has\_conflict(active}\OperatorTok{,}\NormalTok{ pr) }\OperatorTok{\{}
- \ControlFlowTok{return} \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_conflict(pr)}\OperatorTok{;}
- \OperatorTok{\}}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{mark\_all(active}\OperatorTok{,}\NormalTok{ pr)}\OperatorTok{;}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{on\_reserved(pr)}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Complete Call Trace:}
-
-\begin{verbatim}
-RadixScheduler::reserve(tx, pr)
-│
-├─ self.active.entry(tx).or_insert_with(ActiveFootprints::new)
-│ TYPE: HashMap<TxId, ActiveFootprints>
-│ ActiveFootprints contains 7 GenSets:
-│ - nodes_written: GenSet
-│ - nodes_read: GenSet
-│ - edges_written: GenSet
-│ - edges_read: GenSet
-│ - attachments_written: GenSet
-│ - attachments_read: GenSet
-│ - ports: GenSet
-│
-├─ has_conflict(active, pr) → bool
-│ FILE: crates/warp-core/src/scheduler.rs:157-236
-│ │
-│ ├─ FOR node IN pr.footprint.n_write:
-│ │ IF active.nodes_written.contains(node): return true // W-W conflict
-│ │ IF active.nodes_read.contains(node): return true // W-R conflict
-│ │
-│ ├─ FOR node IN pr.footprint.n_read:
-│ │ IF active.nodes_written.contains(node): return true // R-W conflict
-│ │ (R-R is allowed)
-│ │
-│ ├─ FOR edge IN pr.footprint.e_write:
-│ │ IF active.edges_written.contains(edge): return true
-│ │ IF active.edges_read.contains(edge): return true
-│ │
-│ ├─ FOR edge IN pr.footprint.e_read:
-│ │ IF active.edges_written.contains(edge): return true
-│ │
-│ ├─ FOR key IN pr.footprint.a_write:
-│ │ IF active.attachments_written.contains(key): return true
-│ │ IF active.attachments_read.contains(key): return true
-│ │
-│ ├─ FOR key IN pr.footprint.a_read:
-│ │ IF active.attachments_written.contains(key): return true
-│ │
-│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out:
-│ IF active.ports.contains(port): return true
-│
-├─ IF conflict:
-│ └─ on_conflict(pr)
-│ FILE: crates/warp-core/src/scheduler.rs:145-149
-│ pr.phase = RewritePhase::Aborted
-│ return false
-│
-├─ mark_all(active, pr)
-│ FILE: crates/warp-core/src/scheduler.rs:238-278
-│ │
-│ ├─ FOR node IN pr.footprint.n_write:
-│ │ active.nodes_written.mark(NodeKey { warp_id, local_id: node })
-│ │
-│ ├─ FOR node IN pr.footprint.n_read:
-│ │ active.nodes_read.mark(NodeKey { ... })
-│ │
-│ ├─ FOR edge IN pr.footprint.e_write:
-│ │ active.edges_written.mark(EdgeKey { ... })
-│ │
-│ ├─ FOR edge IN pr.footprint.e_read:
-│ │ active.edges_read.mark(EdgeKey { ... })
-│ │
-│ ├─ FOR key IN pr.footprint.a_write:
-│ │ active.attachments_written.mark(key)
-│ │
-│ ├─ FOR key IN pr.footprint.a_read:
-│ │ active.attachments_read.mark(key)
-│ │
-│ └─ FOR port IN pr.footprint.b_in ∪ pr.footprint.b_out:
-│ active.ports.mark(port)
-│
-└─ on_reserved(pr)
- FILE: crates/warp-core/src/scheduler.rs:151-155
- pr.phase = RewritePhase::Reserved
- return true
-\end{verbatim}
-
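The conflict matrix above (writes conflict with both prior writes and prior reads; reads conflict only with prior writes; R-R is allowed) can be sketched with plain `HashSet`s. The names here are illustrative stand-ins, not the crate's actual `Footprint`/`GenSet` types:

```rust
use std::collections::HashSet;

// Illustrative footprint: read and write sets over opaque resource ids.
struct Footprint {
    reads: HashSet<u64>,
    writes: HashSet<u64>,
}

// W-W and W-R/R-W conflict; R-R is allowed.
fn has_conflict(
    active_reads: &HashSet<u64>,
    active_writes: &HashSet<u64>,
    pr: &Footprint,
) -> bool {
    pr.writes
        .iter()
        .any(|r| active_writes.contains(r) || active_reads.contains(r))
        || pr.reads.iter().any(|r| active_writes.contains(r))
}

fn main() {
    let active_reads: HashSet<u64> = [1].into_iter().collect();
    let active_writes: HashSet<u64> = [2].into_iter().collect();
    // R-R on resource 1: allowed.
    let rr = Footprint { reads: [1].into_iter().collect(), writes: HashSet::new() };
    assert!(!has_conflict(&active_reads, &active_writes, &rr));
    // W-R on resource 1: conflict.
    let wr = Footprint { reads: HashSet::new(), writes: [1].into_iter().collect() };
    assert!(has_conflict(&active_reads, &active_writes, &wr));
}
```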
-\subsection{4.3 GenSet: O(1) Conflict
-Detection}\label{genset-o1-conflict-detection}
-
-\textbf{File:} \texttt{crates/warp-core/src/scheduler.rs:509-535}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{struct}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{}
-\NormalTok{ gen}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,} \CommentTok{// Current generation}
-\NormalTok{ seen}\OperatorTok{:}\NormalTok{ FxHashMap}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{,} \DataTypeTok{u32}\OperatorTok{\textgreater{},} \CommentTok{// Key → generation when marked}
-\OperatorTok{\}}
-
-\KeywordTok{impl}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{:} \BuiltInTok{Hash} \OperatorTok{+} \BuiltInTok{Eq} \OperatorTok{+} \BuiltInTok{Copy}\OperatorTok{\textgreater{}}\NormalTok{ GenSet}\OperatorTok{\textless{}}\NormalTok{K}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ contains(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \PreprocessorTok{matches!}\NormalTok{(}\KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{get(}\OperatorTok{\&}\NormalTok{key)}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(}\OperatorTok{\&}\NormalTok{g) }\ControlFlowTok{if}\NormalTok{ g }\OperatorTok{==} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}
- \OperatorTok{\}}
-
- \AttributeTok{\#[}\NormalTok{inline}\AttributeTok{]}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ mark(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:}\NormalTok{ K) }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{seen}\OperatorTok{.}\NormalTok{insert(key}\OperatorTok{,} \KeywordTok{self}\OperatorTok{.}\NormalTok{gen)}\OperatorTok{;}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Key Insight:} No clearing needed between transactions. Increment
-\texttt{gen} → all old entries become stale.
-
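A runnable sketch of the generation trick, using std's `HashMap` in place of `FxHashMap` (the `advance` name is illustrative; the real type bumps `gen` in its transaction-finalization path):

```rust
use std::collections::HashMap;

// Generation-stamped set: "clearing" is O(1) -- just bump `gen`.
struct GenSet<K> {
    gen: u32,
    seen: HashMap<K, u32>,
}

impl<K: std::hash::Hash + Eq + Copy> GenSet<K> {
    fn new() -> Self {
        Self { gen: 1, seen: HashMap::new() }
    }
    fn contains(&self, key: K) -> bool {
        // Only entries stamped with the current generation count.
        matches!(self.seen.get(&key), Some(&g) if g == self.gen)
    }
    fn mark(&mut self, key: K) {
        self.seen.insert(key, self.gen);
    }
    // Between transactions: old entries become stale, no drain needed.
    fn advance(&mut self) {
        self.gen += 1;
    }
}

fn main() {
    let mut s = GenSet::new();
    s.mark(42u64);
    assert!(s.contains(42));
    s.advance(); // new transaction
    assert!(!s.contains(42)); // stale entry ignored without clearing
}
```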
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{5. BOAW Parallel Execution}\label{boaw-parallel-execution}
-
-\textbf{Entry Point:} \texttt{execute\_parallel()} \textbf{File:}
-\texttt{crates/warp-core/src/boaw/exec.rs:61-83}
-
-\subsection{5.1 Entry Point}\label{entry-point}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \PreprocessorTok{assert!}\NormalTok{(workers }\OperatorTok{\textgreater{}=} \DecValTok{1}\NormalTok{)}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ capped\_workers }\OperatorTok{=}\NormalTok{ workers}\OperatorTok{.}\NormalTok{min(NUM\_SHARDS)}\OperatorTok{;} \CommentTok{// Cap at 256}
-
- \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"parallel{-}stride{-}fallback"}\AttributeTok{)]}
- \ControlFlowTok{if} \PreprocessorTok{std::env::}\NormalTok{var(}\StringTok{"ECHO\_PARALLEL\_STRIDE"}\NormalTok{)}\OperatorTok{.}\NormalTok{is\_ok() }\OperatorTok{\{}
- \ControlFlowTok{return}\NormalTok{ execute\_parallel\_stride(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers)}\OperatorTok{;}
- \OperatorTok{\}}
-
-\NormalTok{ execute\_parallel\_sharded(view}\OperatorTok{,}\NormalTok{ items}\OperatorTok{,}\NormalTok{ capped\_workers) }\CommentTok{// DEFAULT}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{5.2 Complete Call Trace}\label{complete-call-trace-2}
-
-\begin{verbatim}
-execute_parallel(view, items, workers)
-│
-└─ execute_parallel_sharded(view, items, capped_workers)
- FILE: crates/warp-core/src/boaw/exec.rs:101-152
- │
- ├─ IF items.is_empty():
- │ return (0..workers).map(|_| TickDelta::new()).collect()
- │
- ├─ partition_into_shards(items.to_vec()) → Vec<VirtualShard>
- │ FILE: crates/warp-core/src/boaw/shard.rs:109-120
- │ │
- │ ├─ Create 256 empty VirtualShard structures
- │ │
- │ └─ FOR item IN items:
- │ │
- │ ├─ shard_of(&item.scope) → usize
- │ │ FILE: crates/warp-core/src/boaw/shard.rs:82-92
- │ │ CODE:
- │ │ let bytes = scope.as_bytes();
- │ │ let first_8: [u8; 8] = bytes[0..8].try_into().unwrap();
- │ │ let val = u64::from_le_bytes(first_8);
- │ │ (val & 255) as usize // SHARD_MASK = 255
- │ │
- │ └─ shards[shard_id].items.push(item)
- │
- ├─ let next_shard = AtomicUsize::new(0)
- │
- └─ std::thread::scope(|s| { ... })
- FILE: Rust std (scoped threads)
- │
- ├─ FOR _ IN 0..workers:
- │ │
- │ └─ s.spawn(move || { ... }) // ═══ WORKER THREAD ═══
- │ │
- │ ├─ let mut delta = TickDelta::new()
- │ │ FILE: crates/warp-core/src/tick_delta.rs:44-52
- │ │ CREATES: { ops: Vec::new(), origins: Vec::new() }
- │ │
- │ └─ LOOP: // Work-stealing loop
- │ │
- │ ├─ shard_id = next_shard.fetch_add(1, Ordering::Relaxed)
- │ │ ATOMIC: Returns old value, increments counter
- │ │ ORDERING: Relaxed (no synchronization cost)
- │ │
- │ ├─ IF shard_id >= 256: break
- │ │
- │ └─ FOR item IN &shards[shard_id].items:
- │ │
- │ ├─ let mut scoped = delta.scoped(item.origin)
- │ │ FILE: crates/warp-core/src/tick_delta.rs:140-142
- │ │ CREATES: ScopedDelta { inner: &mut delta, origin, next_op_ix: 0 }
- │ │
- │ └─ (item.exec)(view, &item.scope, scoped.inner_mut())
- │ │
- │ └─ INSIDE EXECUTOR:
- │ scoped.emit(op)
- │ FILE: crates/warp-core/src/tick_delta.rs:234-239
- │ CODE:
- │ origin.op_ix = self.next_op_ix;
- │ self.next_op_ix += 1;
- │ self.inner.emit_with_origin(op, origin);
- │ │
- │ └─ TickDelta::emit_with_origin(op, origin)
- │ FILE: crates/warp-core/src/tick_delta.rs:69-75
- │ CODE:
- │ self.ops.push(op);
- │ self.origins.push(origin); // if delta_validate
- │
- └─ COLLECT THREADS:
- handles.into_iter().map(|h| h.join()).collect()
- RETURNS: Vec<TickDelta> (one per worker)
-\end{verbatim}
-
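The shard routing and work-claiming loop above condense into a short runnable sketch: shard id = low 8 bits of the scope's first 8 bytes (little-endian), and workers claim whole shards via an atomic counter. All names here are simplified stand-ins for the trace, not the crate's API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const NUM_SHARDS: usize = 256;

// Low 8 bits of the first 8 scope bytes (little-endian) pick the shard.
fn shard_of(scope: &[u8; 32]) -> usize {
    let first_8: [u8; 8] = scope[0..8].try_into().unwrap();
    (u64::from_le_bytes(first_8) & 255) as usize // SHARD_MASK = 255
}

// Returns the total number of items processed across all workers.
fn run(items: Vec<[u8; 32]>, workers: usize) -> usize {
    let mut shards: Vec<Vec<[u8; 32]>> = vec![Vec::new(); NUM_SHARDS];
    for item in items {
        shards[shard_of(&item)].push(item);
    }
    let next_shard = AtomicUsize::new(0);
    let processed = AtomicUsize::new(0);
    std::thread::scope(|s| {
        for _ in 0..workers {
            s.spawn(|| loop {
                // Relaxed is fine: the counter only hands out disjoint shard ids.
                let shard_id = next_shard.fetch_add(1, Ordering::Relaxed);
                if shard_id >= NUM_SHARDS {
                    break;
                }
                processed.fetch_add(shards[shard_id].len(), Ordering::Relaxed);
            });
        }
    });
    processed.into_inner()
}

fn main() {
    let mut scope = [0u8; 32];
    scope[0] = 7;
    assert_eq!(shard_of(&scope), 7);
    assert_eq!(run(vec![scope; 10], 4), 10); // no item lost or double-claimed
}
```

Because each shard is claimed by exactly one worker, no two workers ever write into the same per-worker delta, which is what makes `Relaxed` ordering sufficient here.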
-\subsection{5.3 Enforced Execution Path}\label{enforced-execution-path}
-
-\textbf{Entry Point:} \texttt{execute\_item\_enforced()}
-\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs:409-487}
-
-When footprint enforcement is active, each item is executed via
-\texttt{execute\_item\_enforced()} instead of a bare function-pointer call.
-This wraps execution with \texttt{catch\_unwind} and performs post-hoc
-\texttt{check\_op()} validation on any newly-emitted ops.
-
-\begin{verbatim}
-execute_item_enforced(store, item, idx, unit, delta)
-│
-├─ guard = unit.guards[idx]
-├─ view = GraphView::new_guarded(store, guard)
-│
-├─ ops_before = delta.len()
-│ Snapshot the op count BEFORE the executor runs
-│
-├─ result = std::panic::catch_unwind(AssertUnwindSafe(|| {
-│ (item.exec)(view, &item.scope, delta)
-│ }))
-│
-├─ FOR op IN delta.ops_ref()[ops_before..]:
-│ guard.check_op(op) → panic_any(FootprintViolation) on failure
-│ Validates that each newly-emitted op falls within the declared footprint.
-│ ExecItemKind::System items may emit warp-instance-level ops;
-│ ExecItemKind::User items may not.
-│
-└─ OUTCOME PRECEDENCE (returns Result):
- ├─ IF exec panicked AND check_op panicked:
- │ return Err(PoisonedDelta(FootprintViolationWithPanic))
- │ The violation wraps both the FootprintViolation and the exec panic.
- │
- ├─ IF exec panicked OR check_op panicked (but not both):
- │ return Err(PoisonedDelta(panic_payload))
- │ Single panic payload (either executor or violation).
- │
- └─ IF both clean:
- return Ok(delta)
-\end{verbatim}
-
-\textbf{The Poison Invariant:} If the executor panics, the \texttt{TickDelta}
-it was writing into is considered poisoned (partially-written ops with no
-transactional rollback). After an executor panic the delta must be
-discarded---it cannot be merged or committed.
-
-\textbf{Type-Level Enforcement:} The poison invariant is enforced at the type
-level via \texttt{PoisonedDelta}, a newtype distinct from \texttt{TickDelta}.
-When an executor panics, \texttt{execute\_item\_enforced()} returns a
-\texttt{Result} whose error variant is \texttt{PoisonedDelta}. The API exposes
-\texttt{merge\_deltas\_ok()} (a higher-level wrapper around
-\texttt{merge\_deltas()}, which remains available feature-gated) that returns a
-\texttt{Result} and accepts only non-poisoned deltas. A \texttt{PoisonedDelta}
-cannot be passed to \texttt{merge\_deltas\_ok()}---the type system prevents
-accidental merging.
-
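The newtype pattern behind this guarantee can be sketched as follows. The names mirror the text but the definitions are hypothetical, not the crate's:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// A delta of emitted ops (ops reduced to integers for the sketch).
struct TickDelta {
    ops: Vec<u32>,
}

// Newtype wrapper: a delta whose writer panicked mid-emit. There is no
// public way to get the TickDelta back out, so poisoned deltas can
// never reach the merge path.
#[allow(dead_code)]
struct PoisonedDelta(TickDelta);

// Only non-poisoned deltas can ever be passed here.
fn merge_deltas_ok(deltas: Vec<TickDelta>) -> Vec<u32> {
    let mut out: Vec<u32> = deltas.into_iter().flat_map(|d| d.ops).collect();
    out.sort_unstable(); // stand-in for the canonical sort
    out
}

fn execute_enforced<F: FnOnce(&mut TickDelta)>(
    mut delta: TickDelta,
    exec: F,
) -> Result<TickDelta, PoisonedDelta> {
    match catch_unwind(AssertUnwindSafe(|| exec(&mut delta))) {
        Ok(()) => Ok(delta),
        // Partially written ops, no rollback: wrap so it cannot be merged.
        Err(_) => Err(PoisonedDelta(delta)),
    }
}

fn main() {
    std::panic::set_hook(Box::new(|_| {})); // keep the demo panic quiet
    let ok = execute_enforced(TickDelta { ops: vec![] }, |d| d.ops.push(1));
    let poisoned = execute_enforced(TickDelta { ops: vec![] }, |d| {
        d.ops.push(2);
        panic!("executor bug"); // ops partially written
    });
    assert!(poisoned.is_err()); // a PoisonedDelta, unusable for merging
    if let Ok(clean) = ok {
        assert_eq!(merge_deltas_ok(vec![clean]), vec![1]); // only clean deltas merge
    }
}
```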
-\subsection{5.4 ExecItem Structure}\label{execitem-structure}
-
-\textbf{File:} \texttt{crates/warp-core/src/boaw/exec.rs:19-35}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\AttributeTok{\#[}\NormalTok{derive}\AttributeTok{(}\BuiltInTok{Clone}\OperatorTok{,} \BuiltInTok{Copy}\AttributeTok{)]}
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ ExecItem }\OperatorTok{\{}
- \KeywordTok{pub}\NormalTok{ exec}\OperatorTok{:}\NormalTok{ ExecuteFn}\OperatorTok{,} \CommentTok{// fn(GraphView, \&NodeId, \&mut TickDelta)}
- \KeywordTok{pub}\NormalTok{ scope}\OperatorTok{:}\NormalTok{ NodeId}\OperatorTok{,} \CommentTok{// 32{-}byte node identifier}
- \KeywordTok{pub}\NormalTok{ origin}\OperatorTok{:}\NormalTok{ OpOrigin}\OperatorTok{,} \CommentTok{// \{ intent\_id, rule\_id, match\_ix, op\_ix \}}
-
- \CommentTok{// Private field, present only in enforcement builds:}
- \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{any}\AttributeTok{(}\NormalTok{debug\_assertions}\OperatorTok{,}\NormalTok{ feature }\OperatorTok{=} \StringTok{"footprint\_enforce\_release"}\AttributeTok{))]}
- \AttributeTok{\#[}\NormalTok{cfg}\AttributeTok{(}\NormalTok{not}\AttributeTok{(}\NormalTok{feature }\OperatorTok{=} \StringTok{"unsafe\_graph"}\AttributeTok{))]}
-\NormalTok{ kind}\OperatorTok{:}\NormalTok{ ExecItemKind}\OperatorTok{,}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{\texttt{ExecItemKind} (cfg-gated):}
-
-\begin{itemize}
-\tightlist
-\item
- \texttt{ExecItemKind::User} --- Normal rule executor. May emit
- node/edge/attachment ops scoped to the declared footprint. Cannot emit
- warp-instance-level ops (\texttt{UpsertWarpInstance},
- \texttt{DeleteWarpInstance}, \texttt{OpenPortal}).
-\item
- \texttt{ExecItemKind::System} --- Internal-only executor (e.g., portal
- opening). May emit warp-instance-level ops.
-\end{itemize}
-
-\texttt{ExecItem::new()} always creates \texttt{User} items. System items are
-constructed via \texttt{ExecItem::new\_system()} (cfg-gated \texttt{pub(crate)}
-constructor used by portal/inbox rules) and are never exposed through the public
-API.
-
-\textbf{The dual-attribute cfg-gate pattern:} The \texttt{kind} field (and all
-enforcement logic) is guarded by two cfg attributes that together express three
-conditions (\texttt{debug\_assertions}, \texttt{footprint\_enforce\_release},
-and \texttt{unsafe\_graph}):
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
- \texttt{\#[cfg(any(debug\_assertions, feature = "footprint\_enforce\_release"))]}
- --- active in debug builds or when the release enforcement feature is
- opted-in.
-\item
- \texttt{\#[cfg(not(feature = "unsafe\_graph"))]} --- disabled when the
- escape-hatch feature is set (for benchmarks/fuzzing that intentionally
- bypass checks).
-\end{enumerate}
-
-This means enforcement is always-on in dev/test, opt-in for release, and
-explicitly removable for unsafe experimentation.
-
-\subsection{5.5 Thread Safety}\label{thread-safety}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Type & Safety & Reason \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{GraphView} & \texttt{Sync\ +\ Send\ +\ Clone} & Read-only
-snapshot \\
-\texttt{ExecItem} & \texttt{Sync\ +\ Send\ +\ Copy} & Function pointer +
-primitives \\
-\texttt{TickDelta} & Per-worker exclusive & No shared mutation \\
-\texttt{AtomicUsize} & Lock-free & \texttt{fetch\_add} with
-\texttt{Relaxed} ordering \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Delta Merge \& State
-Finalization}\label{delta-merge-state-finalization}
-
-\subsection{6.1 Canonical Merge}\label{canonical-merge}
-
-\textbf{Entry Point:} \texttt{merge\_deltas()} \textbf{File:}
-\texttt{crates/warp-core/src/boaw/merge.rs:36-75}
-
-\begin{verbatim}
-merge_deltas(deltas: Vec<TickDelta>) → Result<Vec<WarpOp>, MergeConflict>
-│
-├─[1] FLATTEN ALL OPS WITH ORIGINS
-│ let mut flat: Vec<(WarpOpKey, OpOrigin, WarpOp)> = Vec::new();
-│ FOR d IN deltas:
-│ let (ops, origins) = d.into_parts_unsorted();
-│ FOR (op, origin) IN ops.zip(origins):
-│ flat.push((op.sort_key(), origin, op));
-│
-├─[2] CANONICAL SORT
-│ flat.sort_by(|a, b| (&a.0, &a.1).cmp(&(&b.0, &b.1)));
-│ ORDER: (WarpOpKey, OpOrigin) lexicographic
-│
-└─[3] DEDUPE & CONFLICT DETECTION
- let mut out = Vec::new();
- let mut i = 0;
- WHILE i < flat.len():
- │
- ├─ GROUP by WarpOpKey
- │ key = flat[i].0
- │ start = i
- │ WHILE i < flat.len() && flat[i].0 == key: i++
- │
- ├─ CHECK if all ops identical
- │ first = &flat[start].2
- │ all_same = flat[start+1..i].iter().all(|(_, _, op)| op == first)
- │
- └─ IF all_same:
- out.push(first.clone()) // Accept one copy
- ELSE:
- writers = flat[start..i].iter().map(|(_, o, _)| *o).collect()
- return Err(MergeConflict { writers }) // CONFLICT!
-
- return Ok(out)
-\end{verbatim}
-
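A minimal sketch of the group/dedupe/conflict logic above, with keys, writer origins, and ops reduced to integers (the `Err` payload of writer ids stands in for the crate's `MergeConflict`):

```rust
// Each entry is (sort_key, writer_id, op). Identical duplicate writes to a
// key are deduped to one copy; divergent writes to the same key conflict.
fn merge(mut flat: Vec<(u64, u32, u64)>) -> Result<Vec<u64>, Vec<u32>> {
    // Canonical sort by (key, writer) -- deterministic regardless of
    // which worker produced which delta.
    flat.sort_by(|a, b| (a.0, a.1).cmp(&(b.0, b.1)));
    let mut out = Vec::new();
    let mut i = 0;
    while i < flat.len() {
        // Group a run of entries sharing the same key.
        let key = flat[i].0;
        let start = i;
        while i < flat.len() && flat[i].0 == key {
            i += 1;
        }
        let first = flat[start].2;
        if flat[start + 1..i].iter().all(|&(_, _, op)| op == first) {
            out.push(first); // accept one copy
        } else {
            // Divergent ops: report every writer of this key.
            return Err(flat[start..i].iter().map(|&(_, w, _)| w).collect());
        }
    }
    Ok(out)
}

fn main() {
    // Two workers emit the identical op for key 1: dedup to one copy.
    assert_eq!(merge(vec![(1, 0, 9), (1, 1, 9)]), Ok(vec![9]));
    // Divergent ops for key 2: conflict naming writers 0 and 1.
    assert_eq!(merge(vec![(2, 0, 5), (2, 1, 6)]), Err(vec![0, 1]));
}
```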
-\subsection{6.2 WarpOp Sort Key}\label{warpop-sort-key}
-
-\textbf{File:} \texttt{crates/warp-core/src/tick\_patch.rs:207-287}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ sort\_key(}\OperatorTok{\&}\KeywordTok{self}\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}
- \ControlFlowTok{match} \KeywordTok{self} \OperatorTok{\{}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{OpenPortal }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{1}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{2}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteWarpInstance }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{3}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{4}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Delete before upsert}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{DeleteNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{5}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertNode }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{6}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{UpsertEdge }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{7}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
- \DataTypeTok{Self}\PreprocessorTok{::}\NormalTok{SetAttachment }\OperatorTok{\{} \OperatorTok{..} \OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ WarpOpKey }\OperatorTok{\{}\NormalTok{ kind}\OperatorTok{:} \DecValTok{8}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},} \CommentTok{// Last}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Canonical Order:}
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
-  OpenPortal (creates child instances)
-\item
-  UpsertWarpInstance
-\item
-  DeleteWarpInstance
-\item
-  DeleteEdge (delete before upsert)
-\item
-  DeleteNode (delete before upsert)
-\item
-  UpsertNode
-\item
-  UpsertEdge
-\item
-  SetAttachment (after skeleton exists)
-\end{enumerate}
-
-\subsection{6.3 State Mutation Methods}\label{state-mutation-methods}
-
-\textbf{File:} \texttt{crates/warp-core/src/graph.rs}
-
-\begin{verbatim}
-GraphStore::insert_node(id, record)
- LINE: 175-177
- CODE: self.nodes.insert(id, record)
-
-GraphStore::upsert_edge_record(from, edge)
- LINE: 196-261
- UPDATES:
- - self.edge_index.insert(edge_id, from)
- - self.edge_to_index.insert(edge_id, to)
- - Remove old edge from previous bucket if exists
- - self.edges_from.entry(from).or_default().push(edge)
- - self.edges_to.entry(to).or_default().push(edge_id)
-
-GraphStore::delete_node_cascade(node)
- LINE: 277-354
- CASCADES:
- - Remove from self.nodes
- - Remove node attachment
- - Remove ALL outbound edges (and their attachments)
- - Remove ALL inbound edges (and their attachments)
- - Maintain all 4 index maps consistently
-
-GraphStore::delete_edge_exact(from, edge_id)
- LINE: 360-412
- VALIDATES: edge is in correct "from" bucket
- REMOVES:
- - From edges_from bucket
- - From edge_index
- - From edge_to_index
- - From edges_to bucket
- - Edge attachment
-
-GraphStore::set_node_attachment(id, value)
- LINE: 125-134
- CODE:
- None → self.node_attachments.remove(&id)
- Some(v) → self.node_attachments.insert(id, v)
-
-GraphStore::set_edge_attachment(id, value)
- LINE: 163-172
- Same pattern as node attachments
-\end{verbatim}
-
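The invariant these methods maintain --- an edge id appears in `edge_index` exactly when it sits in one `edges_from` bucket, with stale buckets cleaned up on moves --- can be sketched with a toy store (integer ids; not the real `GraphStore`):

```rust
use std::collections::HashMap;

// Toy store: node ids and edge ids are plain integers.
#[derive(Default)]
struct Store {
    edges_from: HashMap<u32, Vec<(u32, u32)>>, // from -> [(edge_id, to)]
    edge_index: HashMap<u32, u32>,             // edge_id -> from
}

impl Store {
    fn upsert_edge(&mut self, from: u32, edge_id: u32, to: u32) {
        // Remove the edge from its previous bucket if it moved.
        if let Some(old_from) = self.edge_index.insert(edge_id, from) {
            if let Some(bucket) = self.edges_from.get_mut(&old_from) {
                bucket.retain(|&(e, _)| e != edge_id);
            }
        }
        self.edges_from.entry(from).or_default().push((edge_id, to));
    }

    fn delete_edge_exact(&mut self, from: u32, edge_id: u32) -> bool {
        // Validate the edge really lives in this "from" bucket first.
        if self.edge_index.get(&edge_id) != Some(&from) {
            return false;
        }
        if let Some(bucket) = self.edges_from.get_mut(&from) {
            bucket.retain(|&(e, _)| e != edge_id);
        }
        self.edge_index.remove(&edge_id);
        true
    }
}

fn main() {
    let mut s = Store::default();
    s.upsert_edge(1, 100, 2);
    s.upsert_edge(3, 100, 2); // edge moves: bucket 1 must be cleaned up
    assert!(s.edges_from[&1].is_empty());
    assert!(s.delete_edge_exact(3, 100));
    assert!(!s.delete_edge_exact(3, 100)); // already gone: validation fails
}
```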
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{7. Hash Computation}\label{hash-computation}
-
-\subsection{7.1 State Root}\label{state-root}
-
-\textbf{Entry Point:} \texttt{compute\_state\_root()} \textbf{File:}
-\texttt{crates/warp-core/src/snapshot.rs:88-209}
-
-\begin{verbatim}
-compute_state_root(state: &WarpState, root: &NodeKey) → Hash
-│
-├─[1] BFS REACHABILITY TRAVERSAL
-│ │
-│ ├─ Initialize:
-│ │ reachable_nodes: BTreeSet<NodeKey> = { root }
-│ │ reachable_warps: BTreeSet<WarpId> = { root.warp_id }
-│ │ queue: VecDeque<NodeKey> = [ root ]
-│ │
-│ └─ WHILE let Some(current) = queue.pop_front():
-│ │
-│ ├─ store = state.store(&current.warp_id)
-│ │
-│ ├─ FOR edge IN store.edges_from(&current.local_id):
-│ │ ├─ to = NodeKey { warp_id: current.warp_id, local_id: edge.to }
-│ │ ├─ IF reachable_nodes.insert(to): queue.push_back(to)
-│ │ │
-│ │ └─ IF edge has Descend(child_warp) attachment:
-│ │ └─ enqueue_descend(state, child_warp, ...)
-│ │ Adds child instance root to queue
-│ │
-│ └─ IF current node has Descend(child_warp) attachment:
-│ enqueue_descend(state, child_warp, ...)
-│
-├─[2] HASHING PHASE
-│ │
-│ ├─ let mut hasher = Hasher::new() // BLAKE3
-│ │
-│ ├─ HASH ROOT BINDING:
-│ │ hasher.update(&root.warp_id.0) // 32 bytes
-│ │ hasher.update(&root.local_id.0) // 32 bytes
-│ │
-│ └─ FOR warp_id IN reachable_warps: // BTreeSet = sorted order
-│ │
-│ ├─ HASH INSTANCE HEADER:
-│ │ hasher.update(&instance.warp_id.0) // 32 bytes
-│ │ hasher.update(&instance.root_node.0) // 32 bytes
-│ │ hash_attachment_key_opt(&mut hasher, instance.parent.as_ref())
-│ │
-│ ├─ FOR (node_id, node) IN store.nodes: // BTreeMap = sorted
-│ │ IF reachable_nodes.contains(&NodeKey { warp_id, local_id: node_id }):
-│ │ hasher.update(&node_id.0) // 32 bytes
-│ │ hasher.update(&node.ty.0) // 32 bytes
-│ │ hash_attachment_value_opt(&mut hasher, store.node_attachment(node_id))
-│ │
-│ └─ FOR (from, edges) IN store.edges_from: // BTreeMap = sorted
-│ IF from is reachable:
-│ sorted_edges = edges.filter(reachable).sort_by(|a,b| a.id.cmp(b.id))
-│ hasher.update(&from.0) // 32 bytes
-│ hasher.update(&(sorted_edges.len() as u64).to_le_bytes()) // 8 bytes
-│ FOR edge IN sorted_edges:
-│ hasher.update(&edge.id.0) // 32 bytes
-│ hasher.update(&edge.ty.0) // 32 bytes
-│ hasher.update(&edge.to.0) // 32 bytes
-│ hash_attachment_value_opt(&mut hasher, store.edge_attachment(&edge.id))
-│
-└─ hasher.finalize().into() // → [u8; 32]
-\end{verbatim}
-
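The two phases --- BFS reachability, then hashing in sorted order --- can be sketched over a toy graph. `BTreeMap` supplies the sorted iteration; std's `DefaultHasher` stands in for BLAKE3, so only the determinism and the unreachable-exclusion property are illustrated, not the real canonical encoding:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::{BTreeMap, BTreeSet, VecDeque};
use std::hash::{Hash, Hasher};

fn state_root(edges: &BTreeMap<u32, Vec<u32>>, root: u32) -> u64 {
    // Phase 1: BFS reachability from the root.
    let mut reachable = BTreeSet::from([root]);
    let mut queue = VecDeque::from([root]);
    while let Some(current) = queue.pop_front() {
        for &to in edges.get(&current).into_iter().flatten() {
            if reachable.insert(to) {
                queue.push_back(to);
            }
        }
    }
    // Phase 2: hash reachable structure in sorted (BTree) order.
    let mut h = DefaultHasher::new();
    root.hash(&mut h); // root binding comes first
    for (&from, tos) in edges {
        if !reachable.contains(&from) {
            continue; // unreachable nodes never influence the root
        }
        from.hash(&mut h);
        let mut sorted: Vec<u32> = tos.clone();
        sorted.sort_unstable(); // canonical edge order
        (sorted.len() as u64).hash(&mut h); // length-prefix the bucket
        for to in sorted {
            to.hash(&mut h);
        }
    }
    h.finish()
}

fn main() {
    let a = BTreeMap::from([(1, vec![2]), (2, vec![]), (9, vec![10])]);
    let b = BTreeMap::from([(1, vec![2]), (2, vec![])]);
    // Node 9 is unreachable from root 1, so it does not affect the root hash.
    assert_eq!(state_root(&a, 1), state_root(&b, 1));
}
```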
-\subsection{7.2 Commit Hash v2}\label{commit-hash-v2}
-
-\textbf{Entry Point:} \texttt{compute\_commit\_hash\_v2()}
-\textbf{File:} \texttt{crates/warp-core/src/snapshot.rs:244-263}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) }\KeywordTok{fn}\NormalTok{ compute\_commit\_hash\_v2(}
-\NormalTok{ state\_root}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,}
-\NormalTok{ parents}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[}\BuiltInTok{Hash}\NormalTok{]}\OperatorTok{,}
-\NormalTok{ patch\_digest}\OperatorTok{:} \OperatorTok{\&}\BuiltInTok{Hash}\OperatorTok{,}
-\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \BuiltInTok{Hash} \OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Version tag (2 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{(parents}\OperatorTok{.}\NormalTok{len() }\KeywordTok{as} \DataTypeTok{u64}\NormalTok{)}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Parent count (8 bytes)}
- \ControlFlowTok{for}\NormalTok{ p }\KeywordTok{in}\NormalTok{ parents }\OperatorTok{\{}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(p)}\OperatorTok{;} \CommentTok{// Each parent (32 bytes)}
- \OperatorTok{\}}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(state\_root)}\OperatorTok{;} \CommentTok{// Graph hash (32 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(patch\_digest)}\OperatorTok{;} \CommentTok{// Ops hash (32 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Policy (4 bytes)}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Byte Layout:}
-
-\begin{verbatim}
-Offset Size Field
-0 2 version_tag (0x02 0x00)
-2 8 parent_count (u64 LE)
-10 32*N parents[] (N parent hashes)
-10+32N 32 state_root
-42+32N 32 patch_digest
-74+32N 4 policy_id (u32 LE)
-─────────────────────────────────────
-TOTAL: 78 + 32*N bytes → BLAKE3 → 32-byte hash
-\end{verbatim}
-
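The layout above can be checked by building the v2 preimage byte-for-byte as a plain buffer (a real implementation feeds these updates straight into the BLAKE3 hasher instead of concatenating; the function name is illustrative):

```rust
// Serialize the commit-hash-v2 preimage per the byte layout:
// version(2) || parent_count(8) || parents(32*N) || state_root(32)
// || patch_digest(32) || policy_id(4)  =  78 + 32*N bytes.
fn commit_preimage_v2(
    state_root: &[u8; 32],
    parents: &[[u8; 32]],
    patch_digest: &[u8; 32],
    policy_id: u32,
) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&2u16.to_le_bytes()); // version tag
    buf.extend_from_slice(&(parents.len() as u64).to_le_bytes()); // parent count
    for p in parents {
        buf.extend_from_slice(p); // each parent hash
    }
    buf.extend_from_slice(state_root);
    buf.extend_from_slice(patch_digest);
    buf.extend_from_slice(&policy_id.to_le_bytes());
    buf
}

fn main() {
    let z = [0u8; 32];
    // TOTAL = 78 + 32*N: check for N = 0 and N = 2.
    assert_eq!(commit_preimage_v2(&z, &[], &z, 1).len(), 78);
    assert_eq!(commit_preimage_v2(&z, &[z, z], &z, 1).len(), 78 + 64);
    // Version tag occupies the first two bytes: 0x02 0x00.
    assert_eq!(&commit_preimage_v2(&z, &[], &z, 1)[0..2], &[2, 0]);
}
```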
-\subsection{7.3 Patch Digest}\label{patch-digest}
-
-\textbf{Entry Point:} \texttt{compute\_patch\_digest\_v2()}
-\textbf{File:} \texttt{crates/warp-core/src/tick\_patch.rs:755-774}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{fn}\NormalTok{ compute\_patch\_digest\_v2(}
-\NormalTok{ policy\_id}\OperatorTok{:} \DataTypeTok{u32}\OperatorTok{,}
-\NormalTok{ rule\_pack\_id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{ContentHash}\OperatorTok{,}
-\NormalTok{ commit\_status}\OperatorTok{:}\NormalTok{ TickCommitStatus}\OperatorTok{,}
-\NormalTok{ in\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,}
-\NormalTok{ out\_slots}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[SlotId]}\OperatorTok{,}
-\NormalTok{ ops}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[WarpOp]}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}}\NormalTok{ ContentHash }\OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ h }\OperatorTok{=} \BuiltInTok{Hasher}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\DecValTok{2u16}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// Format version}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{policy\_id}\OperatorTok{.}\NormalTok{to\_le\_bytes())}\OperatorTok{;} \CommentTok{// 4 bytes}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(rule\_pack\_id)}\OperatorTok{;} \CommentTok{// 32 bytes}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{update(}\OperatorTok{\&}\NormalTok{[commit\_status}\OperatorTok{.}\NormalTok{code()])}\OperatorTok{;} \CommentTok{// 1 byte}
-\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ in\_slots)}\OperatorTok{;}
-\NormalTok{ encode\_slots(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ out\_slots)}\OperatorTok{;}
-\NormalTok{ encode\_ops(}\OperatorTok{\&}\KeywordTok{mut}\NormalTok{ h}\OperatorTok{,}\NormalTok{ ops)}\OperatorTok{;}
-\NormalTok{ h}\OperatorTok{.}\NormalTok{finalize()}\OperatorTok{.}\NormalTok{into()}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{8. Commit Orchestration}\label{commit-orchestration}
-
-\textbf{Entry Point:} \texttt{Engine::commit\_with\_receipt()}
-\textbf{File:} \texttt{crates/warp-core/src/engine\_impl.rs:837-954}
-
-\subsection{8.1 Complete Call Trace}\label{complete-call-trace-3}
-
-\begin{verbatim}
-Engine::commit_with_receipt(tx) → Result<(Snapshot, TickReceipt, WarpTickPatchV1), EngineError>
-│
-├─[1] VALIDATE TRANSACTION
-│ IF tx.value() == 0 || !self.live_txs.contains(&tx.value()):
-│ return Err(EngineError::UnknownTx)
-│
-├─[2] DRAIN CANDIDATES
-│ policy_id = self.policy_id // Line 844
-│ rule_pack_id = self.compute_rule_pack_id() // Line 845
-│ │
-│ ├─ compute_rule_pack_id()
-│ │ FILE: engine_impl.rs:1675-1688
-│ │ CODE:
-│ │ ids = self.rules.values().map(|r| r.id).collect()
-│ │ ids.sort_unstable(); ids.dedup()
-│ │ hasher.update(&1u16.to_le_bytes()) // version
-│ │ hasher.update(&(ids.len() as u64).to_le_bytes())
-│ │ FOR id IN ids: hasher.update(&id)
-│ │ hasher.finalize().into()
-│ │
-│ drained = self.scheduler.drain_for_tx(tx) // Line 847
-│ plan_digest = compute_plan_digest(&drained) // Line 848
-│
-├─[3] RESERVE (INDEPENDENCE CHECK)
-│ ReserveOutcome { receipt, reserved, in_slots, out_slots }
-│ = self.reserve_for_receipt(tx, drained)? // Line 850-855
-│ │
-│ └─ reserve_for_receipt(tx, drained)
-│ FILE: engine_impl.rs:970-1042
-│ │
-│ FOR rewrite IN drained (canonical order):
-│ │
-│ ├─ accepted = self.scheduler.reserve(tx, &mut rewrite)
-│ │
-│ ├─ IF !accepted:
-│ │ blockers = find_blocking_rewrites(reserved, &rewrite)
-│ │
-│ ├─ receipt_entries.push(TickReceiptEntry { ... })
-│ │
-│ └─ IF accepted:
-│ reserved.push(rewrite)
-│ extend_slots_from_footprint(&mut in_slots, &mut out_slots, ...)
-│ │
-│ return ReserveOutcome { receipt, reserved, in_slots, out_slots }
-│
-│ rewrites_digest = compute_rewrites_digest(&reserved_rewrites) // Line 858
-│
-├─[4] EXECUTE (PHASE 5 BOAW)
-│ state_before = self.state.clone() // Line 862
-│ delta_ops = self.apply_reserved_rewrites(reserved, &state_before)?
-│ │
-│ └─ apply_reserved_rewrites(rewrites, state_before)
-│ FILE: engine_impl.rs:1044-1105
-│ │
-│ ├─ let mut delta = TickDelta::new()
-│ │
-│ ├─ FOR rewrite IN rewrites:
-│ │ executor = self.rule_by_compact(rewrite.compact_rule).executor
-│ │ view = GraphView::new(self.state.store(&rewrite.scope.warp_id))
-│ │ (executor)(view, &rewrite.scope.local_id, &mut delta)
-│ │
-│ ├─ let ops = delta.finalize() // Canonical sort
-│ │
-│ ├─ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops)
-│ │ patch.apply_to_state(&mut self.state)?
-│ │
-│ └─ [delta_validate]: assert_delta_matches_diff(&ops, &diff_ops)
-│
-├─[5] MATERIALIZE
-│ mat_report = self.bus.finalize() // Line 884
-│ self.last_materialization = mat_report.channels
-│ self.last_materialization_errors = mat_report.errors
-│
-├─[6] COMPUTE DELTA PATCH
-│ ops = diff_state(&state_before, &self.state) // Line 889
-│ │
-│ └─ diff_state(before, after)
-│ FILE: tick_patch.rs:979-1069
-│ - Canonicalize portal authoring (OpenPortal)
-│ - Diff instances (delete/upsert)
-│ - Diff nodes, edges, attachments
-│ - Sort by WarpOp::sort_key()
-│ │
-│ patch = WarpTickPatchV1::new(policy_id, rule_pack_id, ..., ops)
-│ patch_digest = patch.digest() // Line 898
-│
-├─[7] COMPUTE STATE ROOT
-│ state_root = compute_state_root(&self.state, &self.current_root) // Line 900
-│
-├─[8] GET PARENTS
-│ parents = self.last_snapshot.as_ref().map(|s| vec![s.hash]).unwrap_or_default()
-│
-├─[9] COMPUTE DECISION DIGEST
-│ decision_digest = receipt.digest() // Line 929
-│
-├─[10] COMPUTE COMMIT HASH
-│ hash = compute_commit_hash_v2(&state_root, &parents, &patch_digest, policy_id)
-│
-├─[11] BUILD SNAPSHOT
-│ snapshot = Snapshot {
-│ root: self.current_root,
-│ hash, // commit_id v2
-│ parents,
-│ plan_digest, // Diagnostic
-│ decision_digest, // Diagnostic
-│ rewrites_digest, // Diagnostic
-│ patch_digest, // COMMITTED
-│ policy_id, // COMMITTED
-│ tx,
-│ }
-│
-├─[12] RECORD TO HISTORY
-│ self.last_snapshot = Some(snapshot.clone()) // Line 947
-│ self.tick_history.push((snapshot, receipt, patch)) // Line 948-949
-│ self.live_txs.remove(&tx.value()) // Line 951
-│ self.scheduler.finalize_tx(tx) // Line 952
-│
-└─[13] RETURN
- Ok((snapshot, receipt, patch))
-\end{verbatim}
-
-\subsection{8.2 Commit Hash Inputs}\label{commit-hash-inputs}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Input & Committed? & Purpose \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{state\_root} & ✓ & What the graph looks like \\
-\texttt{patch\_digest} & ✓ & How we got here (ops) \\
-\texttt{parents} & ✓ & Chain continuity \\
-\texttt{policy\_id} & ✓ & Aion policy version \\
-\texttt{plan\_digest} & ✗ & Diagnostic only \\
-\texttt{decision\_digest} & ✗ & Diagnostic only \\
-\texttt{rewrites\_digest} & ✗ & Diagnostic only \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{9. Complete Call Graph}\label{complete-call-graph}
-
-\subsection{9.1 Full Journey: Intent →
-Commit}\label{full-journey-intent-commit}
-
-\begin{verbatim}
-USER ACTION
- │
- ▼
-Engine::ingest_intent(intent_bytes)
- ├─ compute_intent_id() // BLAKE3 content hash
- ├─ make_node_id(), make_type_id() // Structural IDs
- ├─ store.insert_node() // Create event node
- ├─ store.set_node_attachment() // Attach intent payload
- └─ store.insert_edge() // Pending edge to inbox
- │
- ▼
-Engine::begin() → TxId
- ├─ tx_counter.wrapping_add(1)
- ├─ live_txs.insert(tx_counter)
- └─ TxId::from_raw(tx_counter)
- │
- ▼
-Engine::dispatch_next_intent(tx) // (or manual apply)
- │
- ▼
-Engine::apply(tx, rule_name, scope)
- └─ Engine::apply_in_warp(tx, warp_id, rule_name, scope, &[])
- ├─ rules.get(rule_name) // Lookup rule
- ├─ GraphView::new(store) // Read-only view
- ├─ (rule.matcher)(view, scope) // Match check
- ├─ scope_hash() // BLAKE3 ordering key
- ├─ (rule.compute_footprint)(view, scope) // Footprint
- └─ scheduler.enqueue(tx, PendingRewrite)
- └─ PendingTx::enqueue() // Last-wins dedup
- │
- ▼
-Engine::commit_with_receipt(tx)
- │
- ├─[DRAIN]
- │ scheduler.drain_for_tx(tx)
- │ └─ PendingTx::drain_in_order()
- │ └─ radix_sort() or sort_unstable_by()
- │ 20-pass LSD radix sort
- │ ORDER: (scope_hash, rule_id, nonce)
- │
- ├─[RESERVE]
- │ FOR rewrite IN drained:
- │ scheduler.reserve(tx, &mut rewrite)
- │ ├─ has_conflict(active, pr)
- │ │ └─ GenSet::contains() × N // O(1) per check
- │ └─ mark_all(active, pr)
- │ └─ GenSet::mark() × M // O(1) per mark
- │
- ├─[EXECUTE]
- │ apply_reserved_rewrites(reserved, state_before)
- │ FOR rewrite IN reserved:
- │ (executor)(view, &scope, &mut delta)
- │ └─ scoped.emit(op)
- │ └─ delta.emit_with_origin(op, origin)
- │ delta.finalize() // Sort ops
- │ patch.apply_to_state(&mut self.state)
- │
- ├─[MATERIALIZE]
- │ bus.finalize()
- │
- ├─[DELTA PATCH]
- │ diff_state(&state_before, &self.state)
- │ └─ Sort by WarpOp::sort_key()
- │ WarpTickPatchV1::new(...)
- │ └─ compute_patch_digest_v2()
- │
- ├─[HASHES]
- │ compute_state_root(&self.state, &self.current_root)
- │ ├─ BFS reachability
- │ └─ BLAKE3 over canonical encoding
- │ compute_commit_hash_v2(state_root, parents, patch_digest, policy_id)
- │ └─ BLAKE3(version || parents || state_root || patch_digest || policy_id)
- │
- ├─[SNAPSHOT]
- │ Snapshot { root, hash, parents, digests..., policy_id, tx }
- │
- └─[RECORD]
- tick_history.push((snapshot, receipt, patch))
- live_txs.remove(&tx.value())
- scheduler.finalize_tx(tx)
- │
- ▼
-RETURN: (Snapshot, TickReceipt, WarpTickPatchV1)
-\end{verbatim}
-
-\subsection{9.2 File Index}\label{file-index}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Component & Primary File & Key Lines \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-Intent Ingestion & \texttt{engine\_impl.rs} & 1216-1281 \\
-Identity Hashing & \texttt{ident.rs} & 85-109 \\
-Transaction Begin & \texttt{engine\_impl.rs} & 711-719 \\
-Rule Apply & \texttt{engine\_impl.rs} & 730-806 \\
-Footprint & \texttt{footprint.rs} & 131-152 \\
-Scheduler Enqueue & \texttt{scheduler.rs} & 102-105, 331-355 \\
-Radix Sort & \texttt{scheduler.rs} & 360-413, 481-498 \\
-Reserve/Conflict & \texttt{scheduler.rs} & 134-278 \\
-GenSet & \texttt{scheduler.rs} & 509-535 \\
-BOAW Execute & \texttt{boaw/exec.rs} & 61-152 \\
-Shard Routing & \texttt{boaw/shard.rs} & 82-120 \\
-Delta Merge & \texttt{boaw/merge.rs} & 36-75 \\
-TickDelta & \texttt{tick\_delta.rs} & 38-172 \\
-WarpOp Sort Key & \texttt{tick\_patch.rs} & 207-287 \\
-State Mutations & \texttt{graph.rs} & 175-412 \\
-Patch Apply & \texttt{tick\_patch.rs} & 434-561 \\
-Diff State & \texttt{tick\_patch.rs} & 979-1069 \\
-State Root Hash & \texttt{snapshot.rs} & 88-209 \\
-Commit Hash v2 & \texttt{snapshot.rs} & 244-263 \\
-Patch Digest & \texttt{tick\_patch.rs} & 755-774 \\
-Commit Orchestrator & \texttt{engine\_impl.rs} & 837-954 \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix A: Complexity
-Summary}\label{appendix-a-complexity-summary}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Operation & Complexity & Notes \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{ingest\_intent} & O(1) & Fixed structural insertions \\
-\texttt{begin} & O(1) & Counter increment + set insert \\
-\texttt{apply} & O(m) & m = footprint size \\
-\texttt{drain\_for\_tx} (radix) & O(n) & n = candidates, 20 passes \\
-\texttt{reserve} per rewrite & O(m) & m = footprint size, O(1) per
-check \\
-\texttt{execute\_parallel} & O(n/w) & n = items, w = workers \\
-\texttt{merge\_deltas} & O(k log k) & k = total ops (sort + dedup) \\
-\texttt{compute\_state\_root} & O(V + E) & V = nodes, E = edges \\
-\texttt{compute\_commit\_hash\_v2} & O(P) & P = parents \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix B: Determinism
-Boundaries}\label{appendix-b-determinism-boundaries}
-
-\subsection{Guaranteed Deterministic}\label{guaranteed-deterministic}
-
-\begin{itemize}
-\tightlist
-\item
- Radix sort ordering (20-pass LSD)
-\item
- BTreeMap/BTreeSet iteration
-\item
- BLAKE3 hashing
-\item
- GenSet conflict detection
-\item
- Canonical merge deduplication
-\end{itemize}
-
-\subsection{Intentionally Non-Deterministic (Handled by
-Merge)}\label{intentionally-non-deterministic-handled-by-merge}
-
-\begin{itemize}
-\tightlist
-\item
- Worker execution order in BOAW
-\item
- Shard claim order (atomic counter)
-\end{itemize}
-
-\subsection{Protocol Constants
-(Frozen)}\label{protocol-constants-frozen}
-
-\begin{itemize}
-\tightlist
-\item
- \texttt{NUM\_SHARDS\ =\ 256}
-\item
- \texttt{SHARD\_MASK\ =\ 255}
-\item
- Shard routing: \texttt{LE\_u64(node\_id{[}0..8{]})\ \&\ 255}
-\item
- Commit hash v2 version tag: \texttt{0x02\ 0x00}
-\end{itemize}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\emph{Document generated 2026-01-18. File paths and line numbers
-accurate as of this date.}
-
-\backmatter
-\end{document}
diff --git a/docs/archive/study/echo-visual-atlas-with-diagrams.pdf b/docs/archive/study/echo-visual-atlas-with-diagrams.pdf
deleted file mode 100644
index 1c8767b1..00000000
Binary files a/docs/archive/study/echo-visual-atlas-with-diagrams.pdf and /dev/null differ
diff --git a/docs/archive/study/echo-visual-atlas-with-diagrams.tex b/docs/archive/study/echo-visual-atlas-with-diagrams.tex
deleted file mode 100644
index 8e33ecc3..00000000
--- a/docs/archive/study/echo-visual-atlas-with-diagrams.tex
+++ /dev/null
@@ -1,279 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Options for packages loaded elsewhere
-\PassOptionsToPackage{unicode}{hyperref}
-\PassOptionsToPackage{hyphens}{url}
-\documentclass[
-]{book}
-\usepackage[letterpaper, margin=1in]{geometry}
-\usepackage{xcolor}
-\usepackage{amsmath,amssymb}
-% Enable section numbering (sections within chapters)
-\setcounter{secnumdepth}{2}
-\usepackage{iftex}
-\ifPDFTeX
- \usepackage[T1]{fontenc}
- \usepackage[utf8]{inputenc}
- \usepackage{textcomp} % provide euro and other symbols
-\else % if luatex or xetex
- \usepackage{unicode-math} % this also loads fontspec
- \defaultfontfeatures{Scale=MatchLowercase}
- \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
-\fi
-\usepackage{lmodern}
-% Use upquote if available, for straight quotes in verbatim environments
-\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
-\IfFileExists{microtype.sty}{% use microtype if available
- \usepackage[]{microtype}
- \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
-}{}
-\makeatletter
-\@ifundefined{KOMAClassName}{% if non-KOMA class
- \IfFileExists{parskip.sty}{%
- \usepackage{parskip}
- }{% else
- \setlength{\parindent}{0pt}
- \setlength{\parskip}{6pt plus 2pt minus 1pt}}
-}{% if KOMA class
- \KOMAoptions{parskip=half}}
-\makeatother
-\usepackage{color}
-\usepackage{fancyvrb}
-\newcommand{\VerbBar}{|}
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
-% Add ',fontsize=\small' for more characters per line
-\newenvironment{Shaded}{}{}
-\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}}
-\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}}
-\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}}
-\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}}
-\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}}
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}}
-\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\ExtensionTok}[1]{#1}
-\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}}
-\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}}
-\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\NormalTok}[1]{#1}
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}}
-\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}}
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}}
-\newcommand{\RegionMarkerTok}[1]{#1}
-\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}}
-\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}}
-\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\usepackage{graphicx}
-\usepackage[export]{adjustbox}
-\usepackage{longtable,booktabs,array}
-\usepackage{calc} % for calculating minipage widths
-% Correct order of tables after \paragraph or \subparagraph
-\usepackage{etoolbox}
-\makeatletter
-\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
-\makeatother
-% Allow footnotes in longtable head/foot
-\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
-\makesavenoteenv{longtable}
-\setlength{\emergencystretch}{3em} % prevent overfull lines
-\providecommand{\tightlist}{%
- \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
-\usepackage{bookmark}
-\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
-\urlstyle{same}
-\hypersetup{
- hidelinks,
- pdfcreator={LaTeX via pandoc}}
-
-% Single source of truth for document date
-\newcommand{\docdate}{2026-01-19}
-
-\author{Echo Project Contributors}
-\date{\docdate}
-
-\begin{document}
-\chapter{Echo Visual Atlas}\label{echo-visual-atlas}
-
-\begin{quote}
-Standalone diagrams for understanding Echo's architecture. These
-diagrams complement the main guide ``What Makes Echo Tick?''
-\end{quote}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{The Complete Tick Pipeline}\label{the-complete-tick-pipeline}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-06.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{BOAW Parallel Execution Model}\label{boaw-parallel-execution-model}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-08.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Virtual Shard Routing}\label{virtual-shard-routing}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-10.pdf}
-\end{center}
-
-\subsection{Test Vectors (Frozen
-Protocol)}\label{test-vectors-frozen-protocol}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Input (first 8 bytes) & LE u64 & Shard \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{0xDEADBEEFCAFEBABE} & \texttt{0xBEBAFECAEFBEADDE} & 222
-(0xDE) \\
-\texttt{0x0000000000000000} & \texttt{0x0000000000000000} & 0 \\
-\texttt{0x2A00000000000000} & \texttt{0x000000000000002A} & 42 \\
-\texttt{0xFFFFFFFFFFFFFFFF} & \texttt{0xFFFFFFFFFFFFFFFF} & 255 \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Two-Plane WARP Architecture}\label{two-plane-warp-architecture}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-03.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{GraphView Contract Enforcement}\label{graphview-contract-enforcement}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-11.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{State Root Hash Computation}\label{state-root-hash-computation}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-09.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Commit Hash v2 Structure}\label{commit-hash-v2-structure}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-07.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{WSC Snapshot Format}\label{wsc-snapshot-format}
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────────────────────────────┐
-│ WSC SNAPSHOT FILE │
-├─────────────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ HEADER (fixed size) │ │
-│ │ ┌──────────┬──────────┬──────────┬──────────┬──────────┐ │ │
-│ │ │ magic │ version │ node_cnt │ edge_cnt │ offsets │ │ │
-│ │ │ 8 bytes │ 8 bytes │ 8 bytes │ 8 bytes │ 8×N bytes│ │ │
-│ │ └──────────┴──────────┴──────────┴──────────┴──────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ NODES TABLE (sorted by NodeId, 8-byte aligned) │ │
-│ │ ┌─────────────────┬─────────────────┬─────────────────┐ │ │
-│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │
-│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │
-│ │ │ [id:32][type:32]│ [id:32][type:32]│ [id:32][type:32]│ │ │
-│ │ └─────────────────┴─────────────────┴─────────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ EDGES TABLE (sorted by EdgeId, 8-byte aligned) │ │
-│ │ ┌─────────────────────────┬─────────────────────────┐ │ │
-│ │ │ EdgeRow │ EdgeRow │ ... │ │
-│ │ │ 128 bytes │ 128 bytes │ │ │
-│ │ │[id:32][from:32][to:32] │[id:32][from:32][to:32] │ │ │
-│ │ │[type:32] │[type:32] │ │ │
-│ │ └─────────────────────────┴─────────────────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ OUT_INDEX (per-node ranges into out_edges) │ │
-│ │ ┌──────────────┬──────────────┬──────────────┐ │ │
-│ │ │ Range │ Range │ Range │ ... │ │
-│ │ │ 16 bytes │ 16 bytes │ 16 bytes │ │ │
-│ │ │[start:8][len:8]│[start:8][len:8]│[start:8][len:8]│ │ │
-│ │ └──────────────┴──────────────┴──────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ ATTACHMENT INDEX (per-slot ranges) │ │
-│ │ Similar structure to OUT_INDEX │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ BLOB ARENA (variable-length payloads) │ │
-│ │ ┌─────────────────────────────────────────────────────────────┐ │ │
-│ │ │ [payload bytes...] [payload bytes...] [payload bytes...] ...│ │ │
-│ │ └─────────────────────────────────────────────────────────────┘ │ │
-│ │ Referenced by (offset: u64, length: u64) tuples │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Footprint Independence Check}\label{footprint-independence-check}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-08.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Complete Data Flow: Intent to Render}\label{complete-data-flow-intent-to-render}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-02.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Viewer Event Loop}\label{viewer-event-loop}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/tour-15.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\emph{Visual Atlas generated \docdate. Use alongside ``What Makes Echo
-Tick?'' for complete understanding.}
-
-\end{document}
diff --git a/docs/archive/study/echo-visual-atlas.md b/docs/archive/study/echo-visual-atlas.md
deleted file mode 100644
index c65681c9..00000000
--- a/docs/archive/study/echo-visual-atlas.md
+++ /dev/null
@@ -1,662 +0,0 @@
-
-
-
-# Echo Visual Atlas
-
-> Standalone diagrams for understanding Echo's architecture.
-> These diagrams complement the main guide "What Makes Echo Tick?"
-
----
-
-## 1. The Complete Tick Pipeline
-
-```mermaid
-flowchart TB
- subgraph PHASE1["Phase 1: BEGIN"]
- B1[engine.begin]
- B2[Increment tx_counter]
- B3[Add to live_txs]
- B4[Return TxId]
- B1 --> B2 --> B3 --> B4
- end
-
- subgraph PHASE2["Phase 2: APPLY (0..N times)"]
- A1[engine.apply]
- A2{Matcher?}
- A3[Compute Footprint]
- A4[Create PendingRewrite]
- A5[Enqueue to Scheduler]
- A6[NoMatch]
- A1 --> A2
- A2 -->|true| A3 --> A4 --> A5
- A2 -->|false| A6
- end
-
- subgraph PHASE3["Phase 3: COMMIT"]
- subgraph DRAIN["3a. Drain"]
- D1[Radix sort pending]
- D2[Canonical order]
- end
- subgraph RESERVE["3b. Reserve"]
- R1[For each rewrite]
- R2{Footprint conflict?}
- R3[Accept]
- R4[Reject + witness]
- R1 --> R2
- R2 -->|no| R3
- R2 -->|yes| R4
- end
- subgraph EXECUTE["3c. Execute"]
- E1[For each accepted]
- E2[Call executor]
- E3[Emit to TickDelta]
- E1 --> E2 --> E3
- end
- subgraph MERGE["3d. Merge"]
- M1[Collect all deltas]
- M2[Sort by key+origin]
- M3[Dedupe/detect conflicts]
- M1 --> M2 --> M3
- end
- subgraph FINALIZE["3e. Finalize"]
- F1[Apply ops to state]
- F2[Update indexes]
- F1 --> F2
- end
- DRAIN --> RESERVE --> EXECUTE --> MERGE --> FINALIZE
- end
-
- subgraph PHASE4["Phase 4: HASH"]
- H1[BFS reachable nodes]
- H2[Canonical encode]
- H3[BLAKE3 state_root]
- H4[BLAKE3 patch_digest]
- H5[Compute commit_hash]
- H1 --> H2 --> H3 --> H4 --> H5
- end
-
- subgraph PHASE5["Phase 5: RECORD"]
- REC1[Append Snapshot]
- REC2[Append Receipt]
- REC3[Append Patch]
- REC1 --> REC2 --> REC3
- end
-
- PHASE1 --> PHASE2 --> PHASE3 --> PHASE4 --> PHASE5
-```
-
----
-
-## 2. BOAW Parallel Execution Model
-
-```mermaid
-flowchart LR
- subgraph INPUT["Input"]
- I[ExecItems n items]
- end
-
- subgraph PARTITION["Partition Phase"]
- P[partition_into_shards]
- S0[Shard 0]
- S1[Shard 1]
- S2[...]
- S255[Shard 255]
- P --> S0
- P --> S1
- P --> S2
- P --> S255
- end
-
- subgraph EXECUTE["Execute Phase (Parallel)"]
- W0[Worker 0 TickDelta]
- W1[Worker 1 TickDelta]
- W2[Worker 2 TickDelta]
- WN[Worker N TickDelta]
- end
-
- subgraph STEAL["Work Stealing"]
- AC[AtomicUsize next_shard]
- AC -.->|fetch_add| W0
- AC -.->|fetch_add| W1
- AC -.->|fetch_add| W2
- AC -.->|fetch_add| WN
- end
-
- subgraph MERGE["Merge Phase"]
- MG[merge_deltas]
- SORT[Sort by key+origin]
- DEDUP[Dedupe identical]
- MG --> SORT --> DEDUP
- end
-
- subgraph OUTPUT["Output"]
- O[Canonical Ops deterministic]
- end
-
- I --> P
- S0 --> W0
- S1 --> W1
- S2 --> W2
- S255 --> WN
- W0 --> MG
- W1 --> MG
- W2 --> MG
- WN --> MG
- DEDUP --> O
-```
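The merge phase above reduces to concatenate, sort, dedup. A minimal sketch in Rust, where `Op` and `merge_deltas` are hypothetical stand-ins for the crate's real types (the actual sort key and dedup/conflict rules live in `boaw/merge.rs`):

```rust
// Illustrative sketch only: `Op` and `merge_deltas` are hypothetical
// stand-ins, not the crate's actual types.

/// A delta operation tagged with its origin for deterministic tie-breaking.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Op {
    key: u64,        // canonical sort key (e.g. derived from a node/edge id)
    origin: u32,     // which rewrite emitted the op
    payload: Vec<u8>,
}

/// Merge per-worker deltas into one canonical op list: concatenate,
/// sort by (key, origin, payload), then drop exact duplicates. Because
/// the sort key is total, worker scheduling order cannot affect the output.
fn merge_deltas(deltas: Vec<Vec<Op>>) -> Vec<Op> {
    let mut all: Vec<Op> = deltas.into_iter().flatten().collect();
    all.sort();  // derived Ord: (key, origin, payload)
    all.dedup(); // identical ops from different workers collapse to one
    all
}
```

Swapping the order in which worker deltas arrive leaves the merged result byte-for-byte identical, which is the determinism property the diagram's "Output" node claims.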
-
----
-
-## 3. Virtual Shard Routing
-
-```mermaid
-flowchart TD
- subgraph NODEID["NodeId (32 bytes)"]
- B0["byte 0"]
- B1["byte 1"]
- B2["byte 2"]
- B3["byte 3"]
- B4["byte 4"]
- B5["byte 5"]
- B6["byte 6"]
- B7["byte 7"]
- REST["bytes 8-31 (ignored)"]
- end
-
- subgraph EXTRACT["Extract First 8 Bytes"]
- LE["u64::from_le_bytes [b0,b1,b2,b3,b4,b5,b6,b7]"]
- end
-
- subgraph MASK["Apply Shard Mask"]
- AND["val & 0xFF (NUM_SHARDS - 1)"]
- end
-
- subgraph RESULT["Shard ID"]
- SID["0..255"]
- end
-
- B0 --> LE
- B1 --> LE
- B2 --> LE
- B3 --> LE
- B4 --> LE
- B5 --> LE
- B6 --> LE
- B7 --> LE
- LE --> AND --> SID
-```
-
-### Test Vectors (Frozen Protocol)
-
-| Input (first 8 bytes) | LE u64 | Shard |
-| --------------------- | -------------------- | ---------- |
-| `0xDEADBEEFCAFEBABE` | `0xBEBAFECAEFBEADDE` | 222 (0xDE) |
-| `0x0000000000000000` | `0x0000000000000000` | 0 |
-| `0x2A00000000000000` | `0x000000000000002A` | 42 |
-| `0xFFFFFFFFFFFFFFFF` | `0xFFFFFFFFFFFFFFFF` | 255 |
-
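The frozen routing rule fits in a few lines of Rust; the function name `shard_of` is illustrative, but the computation matches the diagram and the test vectors exactly (only bytes 0..8 of the 32-byte NodeId participate):

```rust
const NUM_SHARDS: u64 = 256;
const SHARD_MASK: u64 = NUM_SHARDS - 1; // 255

/// Route a 32-byte NodeId to a virtual shard: interpret bytes 0..8 as a
/// little-endian u64, then mask to the low 8 bits. Bytes 8..31 are ignored.
/// (`shard_of` is an illustrative name; see boaw/shard.rs for the real code.)
fn shard_of(node_id: &[u8; 32]) -> u8 {
    let mut first8 = [0u8; 8];
    first8.copy_from_slice(&node_id[0..8]);
    (u64::from_le_bytes(first8) & SHARD_MASK) as u8
}
```

Note that because the mask keeps only the low byte of a little-endian read, the shard is simply `node_id[0]` — the first test vector's `0xDE` → 222.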
----
-
-## 4. Two-Plane WARP Architecture
-
-```mermaid
-graph TB
- subgraph SKELETON["Skeleton Plane (Structure)"]
- direction TB
- N1["Node A id: 0x1234"]
- N2["Node B id: 0x5678"]
- N3["Node C id: 0x9ABC"]
-
- N1 -->|"edge:link id: 0xE001"| N2
- N1 -->|"edge:child id: 0xE002"| N3
- N2 -->|"edge:ref id: 0xE003"| N3
- end
-
- subgraph ALPHA["Attachment Plane (α)"]
- direction TB
- A1["N1.α['title'] Atom{string, 'Home'}"]
- A2["N2.α['url'] Atom{string, '/page/b'}"]
- A3["N3.α['body'] Atom{html, '<p>...</p>'}"]
- A4["N3.α['portal'] Descend('child-instance')"]
- end
-
- N1 -.- A1
- N2 -.- A2
- N3 -.- A3
- N3 -.- A4
-
- subgraph DESCENDED["Descended Instance"]
- direction TB
- C1["Child Root id: 0xCCC1"]
- C2["Child Node id: 0xCCC2"]
- C1 --> C2
- end
-
- A4 -.->|"Descend pointer"| C1
-```
-
----
-
-## 5. GraphView Contract Enforcement
-
-```mermaid
-flowchart TD
- subgraph EXECUTOR["Executor Function"]
- EX["fn executor(view: GraphView, scope: &NodeId, delta: &mut TickDelta)"]
- end
-
- subgraph READ["Read Path (GraphView)"]
- R1["view.node(id)"]
- R2["view.edges_from(id)"]
- R3["view.attachment(id, key)"]
- R4["view.has_edge(id)"]
-
- R1 --> GS
- R2 --> GS
- R3 --> GS
- R4 --> GS
- end
-
- subgraph GS["GraphStore (Immutable)"]
- NODES["nodes: BTreeMap"]
- EDGES["edges_from: BTreeMap"]
- ATTACH["attachments: BTreeMap"]
- end
-
- subgraph WRITE["Write Path (TickDelta)"]
- W1["delta.emit(UpsertNode)"]
- W2["delta.emit(UpsertEdge)"]
- W3["delta.emit(SetAttachment)"]
- W4["delta.emit(DeleteNode)"]
-
- W1 --> OPS
- W2 --> OPS
- W3 --> OPS
- W4 --> OPS
- end
-
- subgraph OPS["Accumulated Ops"]
- OPLIST["Vec<(WarpOp, OpOrigin)>"]
- end
-
- EX --> READ
- EX --> WRITE
-
- style GS fill:#e8f5e9
- style OPS fill:#fff3e0
-```
-
----
-
-## 6. State Root Hash Computation
-
-```mermaid
-flowchart TD
- subgraph BFS["1. Deterministic BFS"]
- START["Start at root"]
- VISIT["Visit reachable nodes"]
- DESCEND["Follow Descend() attachments"]
- COLLECT["Collect reachable set"]
- START --> VISIT --> DESCEND --> COLLECT
- end
-
- subgraph ENCODE["2. Canonical Encoding"]
- subgraph INSTANCE["Per Instance (BTreeMap order)"]
- IH["warp_id header"]
- subgraph NODE["Per Node (ascending NodeId)"]
- NH["node_id[32]"]
- NT["node_type[32]"]
- subgraph EDGE["Per Edge (ascending EdgeId)"]
- EH["edge_id[32]"]
- ET["edge_type[32]"]
- ED["to_node[32]"]
- end
- subgraph ATTACH["Per Attachment"]
- AK["key_len[8] + key"]
- AT["type_id[32]"]
- AV["value_len[8] + value"]
- end
- end
- end
- end
-
- subgraph HASH["3. BLAKE3 Digest"]
- STREAM["Byte stream"]
- DIGEST["state_root[32]"]
- STREAM --> DIGEST
- end
-
- BFS --> ENCODE --> HASH
-```
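The determinism of step 2 comes from two choices: `BTreeMap` iteration order and length-prefixed variable fields. A simplified sketch (the real per-node/per-edge layout in `snapshot.rs` is richer, and the endianness of the length prefix here is an assumption):

```rust
use std::collections::BTreeMap;

/// Simplified canonical encoder for the attachment portion of a node:
/// iterate in BTreeMap (key) order, length-prefix each variable field,
/// append values verbatim. Insertion order cannot affect the byte stream.
/// (Field layout and prefix endianness are illustrative assumptions.)
fn encode_attachments(attachments: &BTreeMap<String, Vec<u8>>) -> Vec<u8> {
    let mut out = Vec::new();
    for (key, value) in attachments {
        out.extend_from_slice(&(key.len() as u64).to_le_bytes()); // key_len[8]
        out.extend_from_slice(key.as_bytes());
        out.extend_from_slice(&(value.len() as u64).to_le_bytes()); // value_len[8]
        out.extend_from_slice(value);
    }
    out
}
```

Two stores built by inserting the same attachments in different orders encode to identical bytes, so the BLAKE3 digest in step 3 is a pure function of reachable state.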
-
----
-
-## 7. Commit Hash v2 Structure
-
-```mermaid
-flowchart LR
- subgraph INPUTS["Commit Hash Inputs"]
- V["version[4] protocol tag"]
- P["parents[] parent hashes"]
- SR["state_root[32] graph hash"]
- PD["patch_digest[32] ops hash"]
- PI["policy_id[4] aion policy"]
- end
-
- subgraph CONCAT["Concatenation"]
- BYTES["version || parents || state_root || patch_digest || policy_id"]
- end
-
- subgraph OUTPUT["Output"]
- CH["commit_hash[32] BLAKE3"]
- end
-
- V --> BYTES
- P --> BYTES
- SR --> BYTES
- PD --> BYTES
- PI --> BYTES
- BYTES --> CH
-```
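The concatenation step can be sketched as plain byte assembly; the real code then feeds this preimage to BLAKE3 (`compute_commit_hash_v2` in `snapshot.rs`). The version tag's width is taken as a parameter here, since the diagram shows `version[4]` while the protocol-constants appendix lists a two-byte tag `0x02 0x00`:

```rust
/// Assemble the commit-hash v2 preimage:
/// version || parents || state_root || patch_digest || policy_id.
/// Hashing (BLAKE3) is omitted; only the byte layout is shown, and the
/// version-tag width is left to the caller as a protocol detail.
fn commit_preimage(
    version: &[u8],
    parents: &[[u8; 32]],
    state_root: &[u8; 32],
    patch_digest: &[u8; 32],
    policy_id: &[u8; 4],
) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(version);
    for parent in parents {
        buf.extend_from_slice(parent); // parent hashes in given order
    }
    buf.extend_from_slice(state_root);
    buf.extend_from_slice(patch_digest);
    buf.extend_from_slice(policy_id);
    buf
}
```

Because every field is either fixed-width or bounded by the parent count, the preimage is unambiguous: no two distinct input tuples of the same shape concatenate to the same bytes.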
-
----
-
-## 8. WSC Snapshot Format
-
-```text
-┌─────────────────────────────────────────────────────────────────────────┐
-│ WSC SNAPSHOT FILE │
-├─────────────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ HEADER (fixed size) │ │
-│ │ ┌──────────┬──────────┬──────────┬──────────┬──────────┐ │ │
-│ │ │ magic │ version │ node_cnt │ edge_cnt │ offsets │ │ │
-│ │ │ 8 bytes │ 8 bytes │ 8 bytes │ 8 bytes │ 8×N bytes│ │ │
-│ │ └──────────┴──────────┴──────────┴──────────┴──────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ NODES TABLE (sorted by NodeId, 8-byte aligned) │ │
-│ │ ┌─────────────────┬─────────────────┬─────────────────┐ │ │
-│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │
-│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │
-│ │ │ [id:32][type:32]│ [id:32][type:32]│ [id:32][type:32]│ │ │
-│ │ └─────────────────┴─────────────────┴─────────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ EDGES TABLE (sorted by EdgeId, 8-byte aligned) │ │
-│ │ ┌─────────────────────────┬─────────────────────────┐ │ │
-│ │ │ EdgeRow │ EdgeRow │ ... │ │
-│ │ │ 128 bytes │ 128 bytes │ │ │
-│ │ │[id:32][from:32][to:32] │[id:32][from:32][to:32] │ │ │
-│ │ │[type:32] │[type:32] │ │ │
-│ │ └─────────────────────────┴─────────────────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ OUT_INDEX (per-node ranges into out_edges) │ │
-│ │ ┌──────────────┬──────────────┬──────────────┐ │ │
-│ │ │ Range │ Range │ Range │ ... │ │
-│ │ │ 16 bytes │ 16 bytes │ 16 bytes │ │ │
-│ │ │[start:8][len:8]│[start:8][len:8]│[start:8][len:8]│ │ │
-│ │ └──────────────┴──────────────┴──────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ ATTACHMENT INDEX (per-slot ranges) │ │
-│ │ Similar structure to OUT_INDEX │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ BLOB ARENA (variable-length payloads) │ │
-│ │ ┌─────────────────────────────────────────────────────────────┐ │ │
-│ │ │ [payload bytes...] [payload bytes...] [payload bytes...] ...│ │ │
-│ │ └─────────────────────────────────────────────────────────────┘ │ │
-│ │ Referenced by (offset: u64, length: u64) tuples │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-```
-
----
-
-## 9. Footprint Independence Check
-
-```mermaid
-flowchart TD
- subgraph REWRITE1["Rewrite A"]
- R1_READ["reads: {N1, N2}"]
- R1_WRITE["writes: {N3}"]
- end
-
- subgraph REWRITE2["Rewrite B"]
- R2_READ["reads: {N4, N5}"]
- R2_WRITE["writes: {N6}"]
- end
-
- subgraph REWRITE3["Rewrite C"]
- R3_READ["reads: {N1, N3}"]
- R3_WRITE["writes: {N7}"]
- end
-
- subgraph CHECK["Independence Check"]
- C1{{"A ∩ B"}}
- C2{{"A ∩ C"}}
- C3{{"B ∩ C"}}
- end
-
- subgraph RESULT["Results"]
- OK1["A || B: OK (no overlap)"]
- CONFLICT["A || C: CONFLICT (A.write ∩ C.read = {N3})"]
- OK2["B || C: OK (no overlap)"]
- end
-
- R1_WRITE --> C1
- R2_WRITE --> C1
- R1_WRITE --> C2
- R3_READ --> C2
- R2_WRITE --> C3
- R3_WRITE --> C3
-
- C1 --> OK1
- C2 --> CONFLICT
- C3 --> OK2
-
- style CONFLICT fill:#ffcdd2
- style OK1 fill:#c8e6c9
- style OK2 fill:#c8e6c9
-```
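The independence predicate behind the diagram is simple set disjointness: two rewrites commute when neither's writes touch the other's reads or writes (read/read overlap is harmless). A minimal sketch with a hypothetical `Footprint` type — the engine's real check walks `GenSet` marks in `scheduler.rs` rather than intersecting `BTreeSet`s:

```rust
use std::collections::BTreeSet;

/// Hypothetical footprint type for illustration; node names stand in
/// for NodeIds.
struct Footprint {
    reads: BTreeSet<&'static str>,
    writes: BTreeSet<&'static str>,
}

/// Independence: no write/read or write/write overlap in either direction.
fn independent(a: &Footprint, b: &Footprint) -> bool {
    a.writes.is_disjoint(&b.reads)
        && a.writes.is_disjoint(&b.writes)
        && b.writes.is_disjoint(&a.reads)
}
```

Running the diagram's three rewrites through this predicate reproduces its results: A‖B and B‖C are accepted, while A‖C conflicts because A writes N3 and C reads it.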
-
----
-
-## 9b. FootprintGuard Enforcement Flow
-
-```mermaid
-flowchart TD
- EXEC["execute_item_enforced()"]
- SNAP["ops_before = delta.len()"]
-
- subgraph parallel["Two independent catch_unwind calls"]
- CATCH_EXEC["catch_unwind(executor)"]
- CATCH_CHECK["catch_unwind(check_op loop)"]
- end
-
- MATCH{"Match (exec_panic, check_result)"}
-
- OK["Ok(delta)"]
- ERR_SINGLE["Err(PoisonedDelta)"]
- ERR_BOTH["Err(FootprintViolationWithPanic)"]
-
- EXEC --> SNAP --> CATCH_EXEC
- SNAP --> CATCH_CHECK
- CATCH_EXEC --> MATCH
- CATCH_CHECK --> MATCH
-
- MATCH -->|"(None, Ok)"| OK
- MATCH -->|"(Some, Ok) or (None, Err)"| ERR_SINGLE
- MATCH -->|"(Some, Err)"| ERR_BOTH
-
- style OK fill:#c8e6c9
- style ERR_SINGLE fill:#fff9c4
- style ERR_BOTH fill:#ffcdd2
-```
-
-**Key:** Footprint enforcement is active when `cfg(debug_assertions)` or the
-`footprint_enforce_release` feature is enabled, **unless** the `unsafe_graph`
-feature is set. The `unsafe_graph` feature is mutually exclusive with enforcement
-and disables all footprint validation—no `FootprintViolation` can occur while
-`unsafe_graph` is active.
-
-When enforcement is active, every `ExecItem` execution is wrapped by
-`execute_item_enforced()`. Two independent `catch_unwind` boundaries run:
-one for the executor, one for the `check_op` validation loop. Both run
-regardless of whether the other panics. Results are combined in a 3-way match:
-
-- `(None, Ok)` → success, return `Ok(delta)`
-- `(Some, Ok)` or `(None, Err)` → single panic, return `Err(PoisonedDelta)`
-- `(Some, Err)` → both panicked, return `Err(FootprintViolationWithPanic)` wrapping both payloads
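The 3-way match above can be sketched with two independent `std::panic::catch_unwind` boundaries. The closures stand in for the real executor and `check_op` loop, and the error enum mirrors the diagram's outcomes (names are illustrative; see `boaw/exec.rs` for the real signatures):

```rust
use std::panic::{catch_unwind, UnwindSafe};

#[derive(Debug, PartialEq)]
enum ExecError {
    PoisonedDelta,
    FootprintViolationWithPanic,
}

/// Run the executor and the footprint check under separate panic boundaries,
/// then combine outcomes. The check runs even if the executor panicked, so
/// a violation hidden behind a panic is still reported.
fn execute_item_enforced(
    executor: impl FnOnce() + UnwindSafe,
    check: impl FnOnce() + UnwindSafe,
) -> Result<(), ExecError> {
    let exec_panic = catch_unwind(executor).err(); // Some(payload) on panic
    let check_result = catch_unwind(check);        // independent boundary
    match (exec_panic, check_result) {
        (None, Ok(())) => Ok(()),
        (Some(_), Ok(())) | (None, Err(_)) => Err(ExecError::PoisonedDelta),
        (Some(_), Err(_)) => Err(ExecError::FootprintViolationWithPanic),
    }
}
```

Keeping the two `catch_unwind` calls separate is the point of the design: a single combined boundary would stop at the first panic and lose the information needed to distinguish `PoisonedDelta` from `FootprintViolationWithPanic`.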
-
----
-
-## 10. Complete Data Flow: Intent to Render
-
-```mermaid
-sequenceDiagram
- autonumber
- participant U as User
- participant V as Viewer
- participant H as Session Hub
- participant E as Engine
- participant S as Scheduler
- participant B as BOAW
- participant G as GraphStore
- participant W as WSC
-
- U->>V: Click action
- V->>V: Encode intent bytes
- V->>H: ingest_intent(bytes)
- H->>E: forward intent
-
- Note over E: Phase 1: BEGIN
- E->>E: begin() → TxId
-
- Note over E: Intent Processing
- E->>E: dispatch_next_intent(tx)
- E->>G: GraphView lookup
- G-->>E: intent data
-
- Note over E: Phase 2: APPLY
- E->>S: apply(tx, rule, scope)
- S->>G: matcher(view, scope)
- G-->>S: match result
- S->>S: compute footprint
- S->>S: enqueue PendingRewrite
-
- Note over E: Phase 3: COMMIT
- E->>S: commit(tx)
- S->>S: radix sort (drain)
- S->>S: independence check (reserve)
-
- Note over B: Parallel Execution
- S->>B: execute_parallel(items)
- B->>B: partition into shards
- par Worker 0
- B->>G: read via GraphView
- G-->>B: data
- B->>B: emit to TickDelta
- and Worker 1
- B->>G: read via GraphView
- G-->>B: data
- B->>B: emit to TickDelta
- and Worker N
- B->>G: read via GraphView
- G-->>B: data
- B->>B: emit to TickDelta
- end
- B->>B: merge_deltas (canonical)
- B-->>S: merged ops
-
- S->>G: apply ops
-
- Note over E: Phase 4: HASH
- E->>G: compute state_root
- G-->>E: hash
- E->>E: compute commit_hash
-
- Note over E: Phase 5: RECORD
- E->>W: store snapshot
- E->>E: append to history
-
- Note over H: Emit to Tools
- E->>H: WarpDiff
- H->>V: WarpFrame
-
- Note over V: Apply & Render
- V->>V: apply_op (each op)
- V->>V: verify state_hash
- V->>V: render frame
- V->>U: Display result
-```
-
----
-
-## 11. Viewer Event Loop
-
-```mermaid
-flowchart TD
- subgraph FRAME["Frame Loop"]
- START[frame start]
-
- subgraph DRAIN["1. Drain Session"]
- DN[drain_notifications]
- DF[drain_frames]
- end
-
- subgraph PROCESS["2. Process Frames"]
- PF[process_frames]
- SNAP{Snapshot?}
- DIFF{Diff?}
- APPLY[apply_op each]
- VERIFY[verify hash]
- end
-
- subgraph EVENTS["3. Handle Events"]
- UE[apply_ui_event]
- REDUCE[reduce pure]
- EFFECTS[run effects]
- end
-
- subgraph RENDER["4. Render"]
- MATCH{screen?}
- TITLE[draw_title]
- VIEW[draw_view]
- HUD[draw_hud]
- end
-
- END[frame end]
-
- START --> DRAIN
- DN --> DF
- DF --> PROCESS
- PF --> SNAP
- SNAP -->|yes| APPLY
- PF --> DIFF
- DIFF -->|yes| APPLY
- APPLY --> VERIFY
- VERIFY --> EVENTS
- UE --> REDUCE
- REDUCE --> EFFECTS
- EFFECTS --> RENDER
- MATCH -->|Title| TITLE
- MATCH -->|View| VIEW
- VIEW --> HUD
- TITLE --> END
- HUD --> END
- end
-```
-
----
-
-_Visual Atlas generated 2026-01-25. Use alongside "What Makes Echo Tick?" for complete understanding._
diff --git a/docs/archive/study/echo-visual-atlas.tex b/docs/archive/study/echo-visual-atlas.tex
deleted file mode 100644
index f2a3ff32..00000000
--- a/docs/archive/study/echo-visual-atlas.tex
+++ /dev/null
@@ -1,760 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Options for packages loaded elsewhere
-\PassOptionsToPackage{unicode}{hyperref}
-\PassOptionsToPackage{hyphens}{url}
-\documentclass[
-]{book}
-\usepackage[letterpaper, margin=1in]{geometry}
-\usepackage{xcolor}
-\usepackage{amsmath,amssymb}
-\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
-\usepackage{iftex}
-\ifPDFTeX
- \usepackage[T1]{fontenc}
- \usepackage[utf8]{inputenc}
- \usepackage{textcomp} % provide euro and other symbols
-\else % if luatex or xetex
- \usepackage{unicode-math} % this also loads fontspec
- \defaultfontfeatures{Scale=MatchLowercase}
- \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
-\fi
-\usepackage{lmodern}
-\ifPDFTeX\else
- % xetex/luatex font selection
-\fi
-% Use upquote if available, for straight quotes in verbatim environments
-\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
-\IfFileExists{microtype.sty}{% use microtype if available
- \usepackage[]{microtype}
- \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
-}{}
-\makeatletter
-\@ifundefined{KOMAClassName}{% if non-KOMA class
- \IfFileExists{parskip.sty}{%
- \usepackage{parskip}
- }{% else
- \setlength{\parindent}{0pt}
- \setlength{\parskip}{6pt plus 2pt minus 1pt}}
-}{% if KOMA class
- \KOMAoptions{parskip=half}}
-\makeatother
-\usepackage{color}
-\usepackage{fancyvrb}
-\newcommand{\VerbBar}{|}
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
-% Add ',fontsize=\small' for more characters per line
-\newenvironment{Shaded}{}{}
-\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}}
-\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}}
-\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}}
-\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}}
-\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}}
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}}
-\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\ExtensionTok}[1]{#1}
-\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}}
-\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}}
-\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\NormalTok}[1]{#1}
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}}
-\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}}
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}}
-\newcommand{\RegionMarkerTok}[1]{#1}
-\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}}
-\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}}
-\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\usepackage{longtable,booktabs,array}
-\newcounter{none} % for unnumbered tables
-\usepackage{calc} % for calculating minipage widths
-% Correct order of tables after \paragraph or \subparagraph
-\usepackage{etoolbox}
-\makeatletter
-\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
-\makeatother
-% Allow footnotes in longtable head/foot
-\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
-\makesavenoteenv{longtable}
-\setlength{\emergencystretch}{3em} % prevent overfull lines
-\providecommand{\tightlist}{%
- \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
-\usepackage{bookmark}
-\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
-\urlstyle{same}
-\hypersetup{
- hidelinks,
- pdfcreator={LaTeX via pandoc}}
-
-\author{}
-\date{}
-
-\begin{document}
-\frontmatter
-
-\mainmatter
-\chapter{Echo Visual Atlas}\label{echo-visual-atlas}
-
-\begin{quote}
-Standalone diagrams for understanding Echo's architecture. These
-diagrams complement the main guide ``What Makes Echo Tick?''
-\end{quote}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{1. The Complete Tick
-Pipeline}\label{the-complete-tick-pipeline}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart TB}
-\NormalTok{ subgraph PHASE1["Phase 1: BEGIN"]}
-\NormalTok{ B1[engine.begin]}
-\NormalTok{ B2[Increment tx\_counter]}
-\NormalTok{ B3[Add to live\_txs]}
-\NormalTok{ B4[Return TxId]}
-\NormalTok{ B1 {-}{-}\textgreater{} B2 {-}{-}\textgreater{} B3 {-}{-}\textgreater{} B4}
-\NormalTok{ end}
-
-\NormalTok{ subgraph PHASE2["Phase 2: APPLY (0..N times)"]}
-\NormalTok{ A1[engine.apply]}
-\NormalTok{ A2\{Matcher?\}}
-\NormalTok{ A3[Compute Footprint]}
-\NormalTok{ A4[Create PendingRewrite]}
-\NormalTok{ A5[Enqueue to Scheduler]}
-\NormalTok{ A6[NoMatch]}
-\NormalTok{ A1 {-}{-}\textgreater{} A2}
-\NormalTok{ A2 {-}{-}\textgreater{}|true| A3 {-}{-}\textgreater{} A4 {-}{-}\textgreater{} A5}
-\NormalTok{ A2 {-}{-}\textgreater{}|false| A6}
-\NormalTok{ end}
-
-\NormalTok{ subgraph PHASE3["Phase 3: COMMIT"]}
-\NormalTok{ subgraph DRAIN["3a. Drain"]}
-\NormalTok{ D1[Radix sort pending]}
-\NormalTok{ D2[Canonical order]}
-\NormalTok{ end}
-\NormalTok{ subgraph RESERVE["3b. Reserve"]}
-\NormalTok{ R1[For each rewrite]}
-\NormalTok{ R2\{Footprint conflict?\}}
-\NormalTok{ R3[Accept]}
-\NormalTok{ R4[Reject + witness]}
-\NormalTok{ R1 {-}{-}\textgreater{} R2}
-\NormalTok{ R2 {-}{-}\textgreater{}|no| R3}
-\NormalTok{ R2 {-}{-}\textgreater{}|yes| R4}
-\NormalTok{ end}
-\NormalTok{ subgraph EXECUTE["3c. Execute"]}
-\NormalTok{ E1[For each accepted]}
-\NormalTok{ E2[Call executor]}
-\NormalTok{ E3[Emit to TickDelta]}
-\NormalTok{ E1 {-}{-}\textgreater{} E2 {-}{-}\textgreater{} E3}
-\NormalTok{ end}
-\NormalTok{ subgraph MERGE["3d. Merge"]}
-\NormalTok{ M1[Collect all deltas]}
-\NormalTok{ M2[Sort by key+origin]}
-\NormalTok{ M3[Dedupe/detect conflicts]}
-\NormalTok{ M1 {-}{-}\textgreater{} M2 {-}{-}\textgreater{} M3}
-\NormalTok{ end}
-\NormalTok{ subgraph FINALIZE["3e. Finalize"]}
-\NormalTok{ F1[Apply ops to state]}
-\NormalTok{ F2[Update indexes]}
-\NormalTok{ F1 {-}{-}\textgreater{} F2}
-\NormalTok{ end}
-\NormalTok{ DRAIN {-}{-}\textgreater{} RESERVE {-}{-}\textgreater{} EXECUTE {-}{-}\textgreater{} MERGE {-}{-}\textgreater{} FINALIZE}
-\NormalTok{ end}
-
-\NormalTok{ subgraph PHASE4["Phase 4: HASH"]}
-\NormalTok{ H1[BFS reachable nodes]}
-\NormalTok{ H2[Canonical encode]}
-\NormalTok{ H3[BLAKE3 state\_root]}
-\NormalTok{ H4[BLAKE3 patch\_digest]}
-\NormalTok{ H5[Compute commit\_hash]}
-\NormalTok{ H1 {-}{-}\textgreater{} H2 {-}{-}\textgreater{} H3 {-}{-}\textgreater{} H4 {-}{-}\textgreater{} H5}
-\NormalTok{ end}
-
-\NormalTok{ subgraph PHASE5["Phase 5: RECORD"]}
-\NormalTok{ REC1[Append Snapshot]}
-\NormalTok{ REC2[Append Receipt]}
-\NormalTok{ REC3[Append Patch]}
-\NormalTok{ REC1 {-}{-}\textgreater{} REC2 {-}{-}\textgreater{} REC3}
-\NormalTok{ end}
-
-\NormalTok{ PHASE1 {-}{-}\textgreater{} PHASE2 {-}{-}\textgreater{} PHASE3 {-}{-}\textgreater{} PHASE4 {-}{-}\textgreater{} PHASE5}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{2. BOAW Parallel Execution
-Model}\label{boaw-parallel-execution-model}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart LR}
-\NormalTok{ subgraph INPUT["Input"]}
-\NormalTok{ I[ExecItems\textless{}br/\textgreater{}n items]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph PARTITION["Partition Phase"]}
-\NormalTok{ P[partition\_into\_shards]}
-\NormalTok{ S0[Shard 0]}
-\NormalTok{ S1[Shard 1]}
-\NormalTok{ S2[...]}
-\NormalTok{ S255[Shard 255]}
-\NormalTok{ P {-}{-}\textgreater{} S0}
-\NormalTok{ P {-}{-}\textgreater{} S1}
-\NormalTok{ P {-}{-}\textgreater{} S2}
-\NormalTok{ P {-}{-}\textgreater{} S255}
-\NormalTok{ end}
-
-\NormalTok{ subgraph EXECUTE["Execute Phase (Parallel)"]}
-\NormalTok{ W0[Worker 0\textless{}br/\textgreater{}TickDelta]}
-\NormalTok{ W1[Worker 1\textless{}br/\textgreater{}TickDelta]}
-\NormalTok{ W2[Worker 2\textless{}br/\textgreater{}TickDelta]}
-\NormalTok{ WN[Worker N\textless{}br/\textgreater{}TickDelta]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph STEAL["Work Stealing"]}
-\NormalTok{ AC[AtomicUsize\textless{}br/\textgreater{}next\_shard]}
-\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| W0}
-\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| W1}
-\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| W2}
-\NormalTok{ AC {-}.{-}\textgreater{}|fetch\_add| WN}
-\NormalTok{ end}
-
-\NormalTok{ subgraph MERGE["Merge Phase"]}
-\NormalTok{ MG[merge\_deltas]}
-\NormalTok{ SORT[Sort by key+origin]}
-\NormalTok{ DEDUP[Dedupe identical]}
-\NormalTok{ MG {-}{-}\textgreater{} SORT {-}{-}\textgreater{} DEDUP}
-\NormalTok{ end}
-
-\NormalTok{ subgraph OUTPUT["Output"]}
-\NormalTok{ O[Canonical Ops\textless{}br/\textgreater{}deterministic]}
-\NormalTok{ end}
-
-\NormalTok{ I {-}{-}\textgreater{} P}
-\NormalTok{ S0 {-}{-}\textgreater{} W0}
-\NormalTok{ S1 {-}{-}\textgreater{} W1}
-\NormalTok{ S2 {-}{-}\textgreater{} W2}
-\NormalTok{ S255 {-}{-}\textgreater{} WN}
-\NormalTok{ W0 {-}{-}\textgreater{} MG}
-\NormalTok{ W1 {-}{-}\textgreater{} MG}
-\NormalTok{ W2 {-}{-}\textgreater{} MG}
-\NormalTok{ WN {-}{-}\textgreater{} MG}
-\NormalTok{ DEDUP {-}{-}\textgreater{} O}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
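The work-stealing scheme in section 2 — every worker doing `fetch_add` on a shared `AtomicUsize` to claim the next shard — can be sketched in Python. The lock-guarded counter stands in for the atomic instruction; all names here are illustrative, not the actual BOAW API:

```python
import threading

# Sketch of the work-stealing loop: workers claim shard indices from a
# shared counter. A lock-guarded fetch_add stands in for Rust's
# AtomicUsize::fetch_add.

class ShardCounter:
    def __init__(self) -> None:
        self._next = 0
        self._lock = threading.Lock()

    def fetch_add(self) -> int:
        with self._lock:
            claimed = self._next
            self._next += 1
            return claimed

def worker(counter: "ShardCounter", shards: list, out: list) -> None:
    while True:
        i = counter.fetch_add()
        if i >= len(shards):
            return             # no shards left: this worker is done
        out.append(shards[i])  # "execute" shard i (emit into a TickDelta)
```

Each shard index is claimed exactly once, so no shard is executed twice regardless of how many workers race on the counter.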
-
-\section{3. Virtual Shard Routing}\label{virtual-shard-routing}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart TD}
-\NormalTok{ subgraph NODEID["NodeId (32 bytes)"]}
-\NormalTok{ B0["byte 0"]}
-\NormalTok{ B1["byte 1"]}
-\NormalTok{ B2["byte 2"]}
-\NormalTok{ B3["byte 3"]}
-\NormalTok{ B4["byte 4"]}
-\NormalTok{ B5["byte 5"]}
-\NormalTok{ B6["byte 6"]}
-\NormalTok{ B7["byte 7"]}
-\NormalTok{ REST["bytes 8{-}31\textless{}br/\textgreater{}(ignored)"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph EXTRACT["Extract First 8 Bytes"]}
-\NormalTok{ LE["u64::from\_le\_bytes\textless{}br/\textgreater{}[b0,b1,b2,b3,b4,b5,b6,b7]"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph MASK["Apply Shard Mask"]}
-\NormalTok{ AND["val \& 0xFF\textless{}br/\textgreater{}(NUM\_SHARDS {-} 1)"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph RESULT["Shard ID"]}
-\NormalTok{ SID["0..255"]}
-\NormalTok{ end}
-
-\NormalTok{ B0 {-}{-}\textgreater{} LE}
-\NormalTok{ B1 {-}{-}\textgreater{} LE}
-\NormalTok{ B2 {-}{-}\textgreater{} LE}
-\NormalTok{ B3 {-}{-}\textgreater{} LE}
-\NormalTok{ B4 {-}{-}\textgreater{} LE}
-\NormalTok{ B5 {-}{-}\textgreater{} LE}
-\NormalTok{ B6 {-}{-}\textgreater{} LE}
-\NormalTok{ B7 {-}{-}\textgreater{} LE}
-\NormalTok{ LE {-}{-}\textgreater{} AND {-}{-}\textgreater{} SID}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{Test Vectors (Frozen
-Protocol)}\label{test-vectors-frozen-protocol}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Input (first 8 bytes) & LE u64 & Shard \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{0xDEADBEEFCAFEBABE} & \texttt{0xBEBAFECAEFBEADDE} & 222
-(0xDE) \\

-\texttt{0x0000000000000000} & \texttt{0x0000000000000000} & 0 \\
-\texttt{0x2A00000000000000} & \texttt{0x000000000000002A} & 42 \\
-\texttt{0xFFFFFFFFFFFFFFFF} & \texttt{0xFFFFFFFFFFFFFFFF} & 255 \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{4. Two-Plane WARP
-Architecture}\label{two-plane-warp-architecture}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{graph TB}
-\NormalTok{ subgraph SKELETON["Skeleton Plane (Structure)"]}
-\NormalTok{ direction TB}
-\NormalTok{ N1["Node A\textless{}br/\textgreater{}id: 0x1234"]}
-\NormalTok{ N2["Node B\textless{}br/\textgreater{}id: 0x5678"]}
-\NormalTok{ N3["Node C\textless{}br/\textgreater{}id: 0x9ABC"]}
-
-\NormalTok{ N1 {-}{-}\textgreater{}|"edge:link\textless{}br/\textgreater{}id: 0xE001"| N2}
-\NormalTok{ N1 {-}{-}\textgreater{}|"edge:child\textless{}br/\textgreater{}id: 0xE002"| N3}
-\NormalTok{ N2 {-}{-}\textgreater{}|"edge:ref\textless{}br/\textgreater{}id: 0xE003"| N3}
-\NormalTok{ end}
-
-\NormalTok{ subgraph ALPHA["Attachment Plane (α)"]}
-\NormalTok{ direction TB}
-\NormalTok{ A1["N1.α[\textquotesingle{}title\textquotesingle{}]\textless{}br/\textgreater{}Atom\{string, \textquotesingle{}Home\textquotesingle{}\}"]}
-\NormalTok{ A2["N2.α[\textquotesingle{}url\textquotesingle{}]\textless{}br/\textgreater{}Atom\{string, \textquotesingle{}/page/b\textquotesingle{}\}"]}
-\NormalTok{ A3["N3.α[\textquotesingle{}body\textquotesingle{}]\textless{}br/\textgreater{}Atom\{html, \textquotesingle{}\<p\>...\</p\>\textquotesingle{}\}"]}
-\NormalTok{ A4["N3.α[\textquotesingle{}portal\textquotesingle{}]\textless{}br/\textgreater{}Descend(\textquotesingle{}child{-}instance\textquotesingle{})"]}
-\NormalTok{ end}
-
-\NormalTok{ N1 {-}.{-} A1}
-\NormalTok{ N2 {-}.{-} A2}
-\NormalTok{ N3 {-}.{-} A3}
-\NormalTok{ N3 {-}.{-} A4}
-
-\NormalTok{ subgraph DESCENDED["Descended Instance"]}
-\NormalTok{ direction TB}
-\NormalTok{ C1["Child Root\textless{}br/\textgreater{}id: 0xCCC1"]}
-\NormalTok{ C2["Child Node\textless{}br/\textgreater{}id: 0xCCC2"]}
-\NormalTok{ C1 {-}{-}\textgreater{} C2}
-\NormalTok{ end}
-
-\NormalTok{ A4 {-}.{-}\textgreater{}|"Descend pointer"| C1}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{5. GraphView Contract
-Enforcement}\label{graphview-contract-enforcement}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart TD}
-\NormalTok{ subgraph EXECUTOR["Executor Function"]}
-\NormalTok{ EX["fn executor(view: GraphView, scope: \&NodeId, delta: \&mut TickDelta)"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph READ["Read Path (GraphView)"]}
-\NormalTok{ R1["view.node(id)"]}
-\NormalTok{ R2["view.edges\_from(id)"]}
-\NormalTok{ R3["view.attachment(id, key)"]}
-\NormalTok{ R4["view.has\_edge(id)"]}
-
-\NormalTok{ R1 {-}{-}\textgreater{} GS}
-\NormalTok{ R2 {-}{-}\textgreater{} GS}
-\NormalTok{ R3 {-}{-}\textgreater{} GS}
-\NormalTok{ R4 {-}{-}\textgreater{} GS}
-\NormalTok{ end}
-
-\NormalTok{ subgraph GS["GraphStore (Immutable)"]}
-\NormalTok{ NODES["nodes: BTreeMap"]}
-\NormalTok{ EDGES["edges\_from: BTreeMap"]}
-\NormalTok{ ATTACH["attachments: BTreeMap"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph WRITE["Write Path (TickDelta)"]}
-\NormalTok{ W1["delta.emit(UpsertNode)"]}
-\NormalTok{ W2["delta.emit(UpsertEdge)"]}
-\NormalTok{ W3["delta.emit(SetAttachment)"]}
-\NormalTok{ W4["delta.emit(DeleteNode)"]}
-
-\NormalTok{ W1 {-}{-}\textgreater{} OPS}
-\NormalTok{ W2 {-}{-}\textgreater{} OPS}
-\NormalTok{ W3 {-}{-}\textgreater{} OPS}
-\NormalTok{ W4 {-}{-}\textgreater{} OPS}
-\NormalTok{ end}
-
-\NormalTok{ subgraph OPS["Accumulated Ops"]}
-\NormalTok{ OPLIST["Vec\<(WarpOp, OpOrigin)\>"]}
-\NormalTok{ end}
-
-\NormalTok{ EX {-}{-}\textgreater{} READ}
-\NormalTok{ EX {-}{-}\textgreater{} WRITE}
-
-\NormalTok{ style GS fill:\#e8f5e9}
-\NormalTok{ style OPS fill:\#fff3e0}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. State Root Hash
-Computation}\label{state-root-hash-computation}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart TD}
-\NormalTok{ subgraph BFS["1. Deterministic BFS"]}
-\NormalTok{ START["Start at root"]}
-\NormalTok{ VISIT["Visit reachable nodes"]}
-\NormalTok{ DESCEND["Follow Descend() attachments"]}
-\NormalTok{ COLLECT["Collect reachable set"]}
-\NormalTok{ START {-}{-}\textgreater{} VISIT {-}{-}\textgreater{} DESCEND {-}{-}\textgreater{} COLLECT}
-\NormalTok{ end}
-
-\NormalTok{ subgraph ENCODE["2. Canonical Encoding"]}
-\NormalTok{ subgraph INSTANCE["Per Instance (BTreeMap order)"]}
-\NormalTok{ IH["warp\_id header"]}
-\NormalTok{ subgraph NODE["Per Node (ascending NodeId)"]}
-\NormalTok{ NH["node\_id[32]"]}
-\NormalTok{ NT["node\_type[32]"]}
-\NormalTok{ subgraph EDGE["Per Edge (ascending EdgeId)"]}
-\NormalTok{ EH["edge\_id[32]"]}
-\NormalTok{ ET["edge\_type[32]"]}
-\NormalTok{ ED["to\_node[32]"]}
-\NormalTok{ end}
-\NormalTok{ subgraph ATTACH["Per Attachment"]}
-\NormalTok{ AK["key\_len[8] + key"]}
-\NormalTok{ AT["type\_id[32]"]}
-\NormalTok{ AV["value\_len[8] + value"]}
-\NormalTok{ end}
-\NormalTok{ end}
-\NormalTok{ end}
-\NormalTok{ end}
-
-\NormalTok{ subgraph HASH["3. BLAKE3 Digest"]}
-\NormalTok{ STREAM["Byte stream"]}
-\NormalTok{ DIGEST["state\_root[32]"]}
-\NormalTok{ STREAM {-}{-}\textgreater{} DIGEST}
-\NormalTok{ end}
-
-\NormalTok{ BFS {-}{-}\textgreater{} ENCODE {-}{-}\textgreater{} HASH}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
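The canonical encoding in section 6 can be sketched for a single node. The fixed 32-byte id/type widths and 8-byte length prefixes come from the diagram; little-endian length prefixes are an assumption, and sorting edges/attachments here mirrors the ascending `BTreeMap` iteration that makes the byte stream deterministic:

```python
# Sketch of the canonical per-node byte layout from the diagram.
# Little-endian length prefixes are an assumption for illustration.

def encode_node(node_id: bytes, node_type: bytes, edges, attachments) -> bytes:
    out = node_id + node_type                        # node_id[32] + node_type[32]
    for edge_id, edge_type, to_node in sorted(edges):
        out += edge_id + edge_type + to_node         # 3 x 32 bytes per edge
    for key, type_id, value in sorted(attachments):
        out += len(key).to_bytes(8, "little") + key  # key_len[8] + key
        out += type_id                               # type_id[32]
        out += len(value).to_bytes(8, "little") + value  # value_len[8] + value
    return out
```

Feeding the concatenation of these per-node blocks (in ascending NodeId order, per instance) into the hasher yields the `state_root` byte stream of step 3.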
-
-\section{7. Commit Hash v2 Structure}\label{commit-hash-v2-structure}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart LR}
-\NormalTok{ subgraph INPUTS["Commit Hash Inputs"]}
-\NormalTok{ V["version[4]\textless{}br/\textgreater{}protocol tag"]}
-\NormalTok{ P["parents[]\textless{}br/\textgreater{}parent hashes"]}
-\NormalTok{ SR["state\_root[32]\textless{}br/\textgreater{}graph hash"]}
-\NormalTok{ PD["patch\_digest[32]\textless{}br/\textgreater{}ops hash"]}
-\NormalTok{ PI["policy\_id[4]\textless{}br/\textgreater{}aion policy"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph CONCAT["Concatenation"]}
-\NormalTok{ BYTES["version || parents || state\_root || patch\_digest || policy\_id"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph OUTPUT["Output"]}
-\NormalTok{ CH["commit\_hash[32]\textless{}br/\textgreater{}BLAKE3"]}
-\NormalTok{ end}
-
-\NormalTok{ V {-}{-}\textgreater{} BYTES}
-\NormalTok{ P {-}{-}\textgreater{} BYTES}
-\NormalTok{ SR {-}{-}\textgreater{} BYTES}
-\NormalTok{ PD {-}{-}\textgreater{} BYTES}
-\NormalTok{ PI {-}{-}\textgreater{} BYTES}
-\NormalTok{ BYTES {-}{-}\textgreater{} CH}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
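Section 7's concatenation order is simple enough to sketch directly. The engine uses BLAKE3; `hashlib.sha256` stands in here because blake3 is not in the Python stdlib, so the digests below are illustrative only — the field widths and ordering are what matter:

```python
import hashlib

# Sketch of the commit-hash-v2 input concatenation:
# version || parents || state_root || patch_digest || policy_id.
# sha256 is a stand-in for BLAKE3 (not in the Python stdlib).

def commit_hash(version: bytes, parents: list, state_root: bytes,
                patch_digest: bytes, policy_id: bytes) -> bytes:
    assert len(version) == 4 and len(policy_id) == 4
    assert len(state_root) == 32 and len(patch_digest) == 32
    payload = version + b"".join(parents) + state_root + patch_digest + policy_id
    return hashlib.sha256(payload).digest()  # stand-in for BLAKE3
```

Because every input is either fixed-width or a concatenation of fixed-width parent hashes, the payload needs no field separators to be unambiguous.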
-
-\section{8. WSC Snapshot Format}\label{wsc-snapshot-format}
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────────────────────────────┐
-│ WSC SNAPSHOT FILE │
-├─────────────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ HEADER (fixed size) │ │
-│ │ ┌──────────┬──────────┬──────────┬──────────┬──────────┐ │ │
-│ │ │ magic │ version │ node_cnt │ edge_cnt │ offsets │ │ │
-│ │ │ 8 bytes │ 8 bytes │ 8 bytes │ 8 bytes │ 8×N bytes│ │ │
-│ │ └──────────┴──────────┴──────────┴──────────┴──────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ NODES TABLE (sorted by NodeId, 8-byte aligned) │ │
-│ │ ┌─────────────────┬─────────────────┬─────────────────┐ │ │
-│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │
-│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │
-│ │ │ [id:32][type:32]│ [id:32][type:32]│ [id:32][type:32]│ │ │
-│ │ └─────────────────┴─────────────────┴─────────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ EDGES TABLE (sorted by EdgeId, 8-byte aligned) │ │
-│ │ ┌─────────────────────────┬─────────────────────────┐ │ │
-│ │ │ EdgeRow │ EdgeRow │ ... │ │
-│ │ │ 128 bytes │ 128 bytes │ │ │
-│ │ │[id:32][from:32][to:32] │[id:32][from:32][to:32] │ │ │
-│ │ │[type:32] │[type:32] │ │ │
-│ │ └─────────────────────────┴─────────────────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ OUT_INDEX (per-node ranges into out_edges) │ │
-│ │ ┌──────────────┬──────────────┬──────────────┐ │ │
-│ │ │ Range │ Range │ Range │ ... │ │
-│ │ │ 16 bytes │ 16 bytes │ 16 bytes │ │ │
-│ │ │[start:8][len:8]│[start:8][len:8]│[start:8][len:8]│ │ │
-│ │ └──────────────┴──────────────┴──────────────┘ │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ ATTACHMENT INDEX (per-slot ranges) │ │
-│ │ Similar structure to OUT_INDEX │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-│ ┌────────────────────────────────────────────────────────────────────┐ │
-│ │ BLOB ARENA (variable-length payloads) │ │
-│ │ ┌─────────────────────────────────────────────────────────────┐ │ │
-│ │ │ [payload bytes...] [payload bytes...] [payload bytes...] ...│ │ │
-│ │ └─────────────────────────────────────────────────────────────┘ │ │
-│ │ Referenced by (offset: u64, length: u64) tuples │ │
-│ └────────────────────────────────────────────────────────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
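Reading the fixed WSC header prefix in section 8 is a straightforward `struct` unpack: magic, version, node_cnt and edge_cnt are each 8 bytes per the diagram, followed by the 8×N offset table. Little-endian layout and the sample values used below are assumptions for illustration:

```python
import struct

# Sketch of parsing the fixed WSC header prefix (4 x u64 fields).
# Little-endian layout is an assumption; the offsets table follows.
HEADER_PREFIX = struct.Struct("<4Q")

def read_header_prefix(buf: bytes) -> dict:
    magic, version, node_cnt, edge_cnt = HEADER_PREFIX.unpack_from(buf, 0)
    return {"magic": magic, "version": version,
            "node_cnt": node_cnt, "edge_cnt": edge_cnt}
```

With `node_cnt` and `edge_cnt` in hand, a reader can compute the sizes of the fixed-width NodeRow (64 B) and EdgeRow (128 B) tables and seek directly to the index and blob-arena sections via the offsets.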
-
-\section{9. Footprint Independence
-Check}\label{footprint-independence-check}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart TD}
-\NormalTok{ subgraph REWRITE1["Rewrite A"]}
-\NormalTok{ R1\_READ["reads: \{N1, N2\}"]}
-\NormalTok{ R1\_WRITE["writes: \{N3\}"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph REWRITE2["Rewrite B"]}
-\NormalTok{ R2\_READ["reads: \{N4, N5\}"]}
-\NormalTok{ R2\_WRITE["writes: \{N6\}"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph REWRITE3["Rewrite C"]}
-\NormalTok{ R3\_READ["reads: \{N1, N3\}"]}
-\NormalTok{ R3\_WRITE["writes: \{N7\}"]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph CHECK["Independence Check"]}
-\NormalTok{ C1\{\{"A ∩ B"\}\}}
-\NormalTok{ C2\{\{"A ∩ C"\}\}}
-\NormalTok{ C3\{\{"B ∩ C"\}\}}
-\NormalTok{ end}
-
-\NormalTok{ subgraph RESULT["Results"]}
-\NormalTok{ OK1["A || B: OK\textless{}br/\textgreater{}(no overlap)"]}
-\NormalTok{ CONFLICT["A || C: CONFLICT\textless{}br/\textgreater{}(A.write ∩ C.read = \{N3\})"]}
-\NormalTok{ OK2["B || C: OK\textless{}br/\textgreater{}(no overlap)"]}
-\NormalTok{ end}
-
-\NormalTok{ R1\_WRITE {-}{-}\textgreater{} C1}
-\NormalTok{ R2\_WRITE {-}{-}\textgreater{} C1}
-\NormalTok{ R1\_WRITE {-}{-}\textgreater{} C2}
-\NormalTok{ R3\_READ {-}{-}\textgreater{} C2}
-\NormalTok{ R2\_WRITE {-}{-}\textgreater{} C3}
-\NormalTok{ R3\_WRITE {-}{-}\textgreater{} C3}
-
-\NormalTok{ C1 {-}{-}\textgreater{} OK1}
-\NormalTok{ C2 {-}{-}\textgreater{} CONFLICT}
-\NormalTok{ C3 {-}{-}\textgreater{} OK2}
-
-\NormalTok{ style CONFLICT fill:\#ffcdd2}
-\NormalTok{ style OK1 fill:\#c8e6c9}
-\NormalTok{ style OK2 fill:\#c8e6c9}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
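The predicate behind section 9's results table: two rewrites are independent iff neither one's write set intersects the other's reads or writes. Overlapping reads alone are fine — A and C both read N1, yet it is A's write of N3 against C's read of N3 that produces the conflict. A minimal sketch over plain sets:

```python
# Footprint independence: neither rewrite's writes may touch the
# other's reads or writes. Read/read overlap is allowed.

def independent(a_reads, a_writes, b_reads, b_writes) -> bool:
    return (not (a_writes & (b_reads | b_writes))
            and not (b_writes & (a_reads | a_writes)))

# The (reads, writes) sets from the diagram:
A = ({"N1", "N2"}, {"N3"})
B = ({"N4", "N5"}, {"N6"})
C = ({"N1", "N3"}, {"N7"})
```

Checked pairwise, these reproduce the diagram's verdicts: A‖B and B‖C are accepted, A‖C conflicts on N3.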
-
-\section{10. Complete Data Flow: Intent to
-Render}\label{complete-data-flow-intent-to-render}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{sequenceDiagram}
-\NormalTok{ autonumber}
-\NormalTok{ participant U as User}
-\NormalTok{ participant V as Viewer}
-\NormalTok{ participant H as Session Hub}
-\NormalTok{ participant E as Engine}
-\NormalTok{ participant S as Scheduler}
-\NormalTok{ participant B as BOAW}
-\NormalTok{ participant G as GraphStore}
-\NormalTok{ participant W as WSC}
-
-\NormalTok{ U{-}\textgreater{}\textgreater{}V: Click action}
-\NormalTok{ V{-}\textgreater{}\textgreater{}V: Encode intent bytes}
-\NormalTok{ V{-}\textgreater{}\textgreater{}H: ingest\_intent(bytes)}
-\NormalTok{ H{-}\textgreater{}\textgreater{}E: forward intent}
-
-\NormalTok{ Note over E: Phase 1: BEGIN}
-\NormalTok{ E{-}\textgreater{}\textgreater{}E: begin() → TxId}
-
-\NormalTok{ Note over E: Intent Processing}
-\NormalTok{ E{-}\textgreater{}\textgreater{}E: dispatch\_next\_intent(tx)}
-\NormalTok{ E{-}\textgreater{}\textgreater{}G: GraphView lookup}
-\NormalTok{ G{-}{-}\textgreater{}\textgreater{}E: intent data}
-
-\NormalTok{ Note over E: Phase 2: APPLY}
-\NormalTok{ E{-}\textgreater{}\textgreater{}S: apply(tx, rule, scope)}
-\NormalTok{ S{-}\textgreater{}\textgreater{}G: matcher(view, scope)}
-\NormalTok{ G{-}{-}\textgreater{}\textgreater{}S: match result}
-\NormalTok{ S{-}\textgreater{}\textgreater{}S: compute footprint}
-\NormalTok{ S{-}\textgreater{}\textgreater{}S: enqueue PendingRewrite}
-
-\NormalTok{ Note over E: Phase 3: COMMIT}
-\NormalTok{ E{-}\textgreater{}\textgreater{}S: commit(tx)}
-\NormalTok{ S{-}\textgreater{}\textgreater{}S: radix sort (drain)}
-\NormalTok{ S{-}\textgreater{}\textgreater{}S: independence check (reserve)}
-
-\NormalTok{ Note over B: Parallel Execution}
-\NormalTok{ S{-}\textgreater{}\textgreater{}B: execute\_parallel(items)}
-\NormalTok{ B{-}\textgreater{}\textgreater{}B: partition into shards}
-\NormalTok{ par Worker 0}
-\NormalTok{ B{-}\textgreater{}\textgreater{}G: read via GraphView}
-\NormalTok{ G{-}{-}\textgreater{}\textgreater{}B: data}
-\NormalTok{ B{-}\textgreater{}\textgreater{}B: emit to TickDelta}
-\NormalTok{ and Worker 1}
-\NormalTok{ B{-}\textgreater{}\textgreater{}G: read via GraphView}
-\NormalTok{ G{-}{-}\textgreater{}\textgreater{}B: data}
-\NormalTok{ B{-}\textgreater{}\textgreater{}B: emit to TickDelta}
-\NormalTok{ and Worker N}
-\NormalTok{ B{-}\textgreater{}\textgreater{}G: read via GraphView}
-\NormalTok{ G{-}{-}\textgreater{}\textgreater{}B: data}
-\NormalTok{ B{-}\textgreater{}\textgreater{}B: emit to TickDelta}
-\NormalTok{ end}
-\NormalTok{ B{-}\textgreater{}\textgreater{}B: merge\_deltas (canonical)}
-\NormalTok{ B{-}{-}\textgreater{}\textgreater{}S: merged ops}
-
-\NormalTok{ S{-}\textgreater{}\textgreater{}G: apply ops}
-
-\NormalTok{ Note over E: Phase 4: HASH}
-\NormalTok{ E{-}\textgreater{}\textgreater{}G: compute state\_root}
-\NormalTok{ G{-}{-}\textgreater{}\textgreater{}E: hash}
-\NormalTok{ E{-}\textgreater{}\textgreater{}E: compute commit\_hash}
-
-\NormalTok{ Note over E: Phase 5: RECORD}
-\NormalTok{ E{-}\textgreater{}\textgreater{}W: store snapshot}
-\NormalTok{ E{-}\textgreater{}\textgreater{}E: append to history}
-
-\NormalTok{ Note over H: Emit to Tools}
-\NormalTok{ E{-}\textgreater{}\textgreater{}H: WarpDiff}
-\NormalTok{ H{-}\textgreater{}\textgreater{}V: WarpFrame}
-
-\NormalTok{ Note over V: Apply \& Render}
-\NormalTok{ V{-}\textgreater{}\textgreater{}V: apply\_op (each op)}
-\NormalTok{ V{-}\textgreater{}\textgreater{}V: verify state\_hash}
-\NormalTok{ V{-}\textgreater{}\textgreater{}V: render frame}
-\NormalTok{ V{-}\textgreater{}\textgreater{}U: Display result}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{11. Viewer Event Loop}\label{viewer-event-loop}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{flowchart TD}
-\NormalTok{ subgraph FRAME["Frame Loop"]}
-\NormalTok{ START[frame start]}
-
-\NormalTok{ subgraph DRAIN["1. Drain Session"]}
-\NormalTok{ DN[drain\_notifications]}
-\NormalTok{ DF[drain\_frames]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph PROCESS["2. Process Frames"]}
-\NormalTok{ PF[process\_frames]}
-\NormalTok{ SNAP\{Snapshot?\}}
-\NormalTok{ DIFF\{Diff?\}}
-\NormalTok{ APPLY[apply\_op each]}
-\NormalTok{ VERIFY[verify hash]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph EVENTS["3. Handle Events"]}
-\NormalTok{ UE[apply\_ui\_event]}
-\NormalTok{ REDUCE[reduce pure]}
-\NormalTok{ EFFECTS[run effects]}
-\NormalTok{ end}
-
-\NormalTok{ subgraph RENDER["4. Render"]}
-\NormalTok{ MATCH\{screen?\}}
-\NormalTok{ TITLE[draw\_title]}
-\NormalTok{ VIEW[draw\_view]}
-\NormalTok{ HUD[draw\_hud]}
-\NormalTok{ end}
-
-\NormalTok{ END[frame end]}
-
-\NormalTok{ START {-}{-}\textgreater{} DRAIN}
-\NormalTok{ DN {-}{-}\textgreater{} DF}
-\NormalTok{ DF {-}{-}\textgreater{} PROCESS}
-\NormalTok{ PF {-}{-}\textgreater{} SNAP}
-\NormalTok{ SNAP {-}{-}\textgreater{}|yes| APPLY}
-\NormalTok{ PF {-}{-}\textgreater{} DIFF}
-\NormalTok{ DIFF {-}{-}\textgreater{}|yes| APPLY}
-\NormalTok{ APPLY {-}{-}\textgreater{} VERIFY}
-\NormalTok{ VERIFY {-}{-}\textgreater{} EVENTS}
-\NormalTok{ UE {-}{-}\textgreater{} REDUCE}
-\NormalTok{ REDUCE {-}{-}\textgreater{} EFFECTS}
-\NormalTok{ EFFECTS {-}{-}\textgreater{} RENDER}
-\NormalTok{ MATCH {-}{-}\textgreater{}|Title| TITLE}
-\NormalTok{ MATCH {-}{-}\textgreater{}|View| VIEW}
-\NormalTok{ VIEW {-}{-}\textgreater{} HUD}
-\NormalTok{ TITLE {-}{-}\textgreater{} END}
-\NormalTok{ HUD {-}{-}\textgreater{} END}
-\NormalTok{ end}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\emph{Visual Atlas generated 2026-01-18. Use alongside ``What Makes Echo
-Tick?'' for complete understanding.}
-
-\backmatter
-\end{document}
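The shard routing rule from section 3 — first 8 bytes of the NodeId interpreted as a little-endian u64, then masked with `NUM_SHARDS - 1` — is small enough to sketch and check against the frozen vectors, reading the input column as raw bytes in written order (byte-order conventions for the first row depend on how its bytes are laid out):

```python
NUM_SHARDS = 256  # power of two, so the mask is NUM_SHARDS - 1 = 0xFF

def shard_for(node_id: bytes) -> int:
    """Route a 32-byte NodeId: first 8 bytes as a LE u64, masked to 0..255."""
    assert len(node_id) == 32
    val = int.from_bytes(node_id[:8], "little")  # bytes 8-31 are ignored
    return val & (NUM_SHARDS - 1)
```

Because `NUM_SHARDS` is a power of two, the mask is equivalent to `val % NUM_SHARDS` but compiles to a single AND.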
diff --git a/docs/archive/study/extract-mermaid.py b/docs/archive/study/extract-mermaid.py
deleted file mode 100755
index cd03489d..00000000
--- a/docs/archive/study/extract-mermaid.py
+++ /dev/null
@@ -1,137 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: Apache-2.0
-# © James Ross Ω FLYING•ROBOTS
-"""
-Extract Mermaid diagrams from Markdown files and convert to PDF via SVG.
-
-Pipeline: .md -> extract mermaid blocks -> .mmd -> mmdc -> .svg -> inkscape -> .pdf
-"""
-
-import re
-import subprocess
-import sys
-from pathlib import Path
-
-STUDY_DIR = Path(__file__).parent
-DIAGRAMS_DIR = STUDY_DIR / "diagrams"
-
-def extract_mermaid_blocks(md_file: Path) -> list[tuple[str, str]]:
- """Extract mermaid code blocks from a markdown file.
-
- Returns list of (diagram_id, mermaid_code) tuples.
- """
- content = md_file.read_text()
-
- # Match ```mermaid ... ``` blocks
- pattern = r'```mermaid\n(.*?)```'
- matches = re.findall(pattern, content, re.DOTALL)
-
- results = []
- base_name = md_file.stem
-
- for i, code in enumerate(matches, 1):
- diagram_id = f"{base_name}-{i:02d}"
- results.append((diagram_id, code.strip()))
-
- return results
-
-
-def convert_mermaid_to_pdf(diagram_id: str, mermaid_code: str, output_dir: Path) -> Path | None:
- """Convert mermaid code to PDF via SVG.
-
- Returns path to PDF or None on failure.
- """
- output_dir.mkdir(parents=True, exist_ok=True)
-
- mmd_file = output_dir / f"{diagram_id}.mmd"
- svg_file = output_dir / f"{diagram_id}.svg"
- pdf_file = output_dir / f"{diagram_id}.pdf"
-
- # Write mermaid source
- mmd_file.write_text(mermaid_code)
-
- # Convert to SVG with mmdc
- try:
- result = subprocess.run(
- ["mmdc", "-i", str(mmd_file), "-o", str(svg_file), "-b", "transparent"],
- capture_output=True,
- text=True,
- timeout=30
- )
- if result.returncode != 0:
- print(f" mmdc failed for {diagram_id}: {result.stderr}", file=sys.stderr)
- return None
- except subprocess.TimeoutExpired:
- print(f" mmdc timeout for {diagram_id}", file=sys.stderr)
- return None
- except FileNotFoundError:
- print(" mmdc not found - install with: npm install -g @mermaid-js/mermaid-cli", file=sys.stderr)
- return None
-
- if not svg_file.exists():
- print(f" SVG not created for {diagram_id}", file=sys.stderr)
- return None
-
- # Convert SVG to PDF with inkscape
- try:
- result = subprocess.run(
- ["inkscape", str(svg_file), "--export-type=pdf", f"--export-filename={pdf_file}"],
- capture_output=True,
- text=True,
- timeout=30
- )
- if result.returncode != 0:
- print(f" inkscape failed for {diagram_id}: {result.stderr}", file=sys.stderr)
- return None
- except subprocess.TimeoutExpired:
- print(f" inkscape timeout for {diagram_id}", file=sys.stderr)
- return None
- except FileNotFoundError:
- print(" inkscape not found", file=sys.stderr)
- return None
-
- if pdf_file.exists():
- return pdf_file
- return None
-
-
-def main():
- """Process all markdown files in study directory."""
- md_files = [
- STUDY_DIR / "what-makes-echo-tick.md",
- STUDY_DIR / "echo-visual-atlas.md",
- STUDY_DIR / "echo-tour-de-code.md",
- ]
-
- total_diagrams = 0
- converted = 0
-
- for md_file in md_files:
- if not md_file.exists():
- print(f"Skipping {md_file.name} (not found)")
- continue
-
- print(f"\n=== Processing {md_file.name} ===")
- blocks = extract_mermaid_blocks(md_file)
- print(f"Found {len(blocks)} mermaid diagrams")
-
- for diagram_id, code in blocks:
- total_diagrams += 1
- print(f" Converting {diagram_id}...", end=" ")
-
- pdf_path = convert_mermaid_to_pdf(diagram_id, code, DIAGRAMS_DIR)
- if pdf_path:
- print(f"OK -> {pdf_path.name}")
- converted += 1
- else:
- print("FAILED")
-
- print(f"\n=== Summary ===")
- print(f"Total diagrams: {total_diagrams}")
- print(f"Converted: {converted}")
- print(f"Failed: {total_diagrams - converted}")
- print(f"Output directory: {DIAGRAMS_DIR}")
-
-
-if __name__ == "__main__":
- main()
diff --git a/docs/archive/study/inject-diagrams.py b/docs/archive/study/inject-diagrams.py
deleted file mode 100644
index 1dfab19a..00000000
--- a/docs/archive/study/inject-diagrams.py
+++ /dev/null
@@ -1,112 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: Apache-2.0
-# © James Ross Ω FLYING•ROBOTS
-"""
-Post-process LaTeX files to replace mermaid code blocks with diagram includes.
-
-Finds Shaded blocks containing mermaid syntax and replaces with \includegraphics.
-"""
-
-import re
-import sys
-from pathlib import Path
-
-STUDY_DIR = Path(__file__).parent
-DIAGRAMS_DIR = STUDY_DIR / "diagrams"
-
-# Mermaid start patterns
-MERMAID_STARTS = [
- r'\\NormalTok\{graph ',
- r'\\NormalTok\{flowchart ',
- r'\\NormalTok\{sequenceDiagram\}',
- r'\\NormalTok\{classDiagram\}',
- r'\\NormalTok\{stateDiagram',
- r'\\NormalTok\{erDiagram\}',
- r'\\NormalTok\{pie ',
- r'\\NormalTok\{gantt\}',
-]
-
-
-def is_mermaid_block(block_content: str) -> bool:
- """Check if a Shaded block contains mermaid diagram syntax."""
- for pattern in MERMAID_STARTS:
- if re.search(pattern, block_content):
- return True
- return False
-
-
-def process_tex_file(tex_file: Path, base_name: str) -> tuple[str, int]:
-    """Replace mermaid blocks in a tex file with includegraphics; return (new content, replacement count)."""
- content = tex_file.read_text()
-
- # Match Shaded environments
- shaded_pattern = r'\\begin\{Shaded\}(.*?)\\end\{Shaded\}'
-
- diagram_counter = 0
- replacements = []
-
- for match in re.finditer(shaded_pattern, content, re.DOTALL):
- block = match.group(0)
- block_content = match.group(1)
-
- if is_mermaid_block(block_content):
- diagram_counter += 1
- diagram_id = f"{base_name}-{diagram_counter:02d}"
- pdf_path = DIAGRAMS_DIR / f"{diagram_id}.pdf"
-
- if pdf_path.exists():
- # Create centered figure with the diagram
- replacement = (
- f"\\begin{{center}}\n"
- f"\\includegraphics[max width=\\textwidth,max height=0.4\\textheight,keepaspectratio]"
- f"{{diagrams/{diagram_id}.pdf}}\n"
- f"\\end{{center}}"
- )
- replacements.append((match.start(), match.end(), replacement))
- else:
- print(f" Warning: {pdf_path.name} not found, keeping code block")
-
- # Apply replacements in reverse order to preserve positions
- for start, end, replacement in reversed(replacements):
- content = content[:start] + replacement + content[end:]
-
- # Add graphicx package if we made replacements and it's not already there
- if replacements and r'\usepackage{graphicx}' not in content:
-        # Insert immediately before the longtable package (present in pandoc's default preamble)
- content = content.replace(
- r'\usepackage{longtable',
- r'\usepackage{graphicx}' + '\n' + r'\usepackage[export]{adjustbox}' + '\n' + r'\usepackage{longtable'
- )
-
- return content, len(replacements)
-
-
-def main():
- """Process all tex files."""
- tex_files = [
- ("what-makes-echo-tick.tex", "what-makes-echo-tick"),
- ("echo-visual-atlas.tex", "echo-visual-atlas"),
- ("echo-tour-de-code.tex", "echo-tour-de-code"),
- ]
-
- for tex_name, base_name in tex_files:
- tex_file = STUDY_DIR / tex_name
- if not tex_file.exists():
- print(f"Skipping {tex_name} (not found)")
- continue
-
- print(f"\n=== Processing {tex_name} ===")
- new_content, count = process_tex_file(tex_file, base_name)
-
- if count > 0:
- # Write to new file (preserve original)
- output_file = STUDY_DIR / tex_name.replace('.tex', '-with-diagrams.tex')
- output_file.write_text(new_content)
- print(f" Replaced {count} mermaid blocks")
- print(f" Output: {output_file.name}")
- else:
- print(f" No mermaid blocks found")
-
-
-if __name__ == "__main__":
- main()
diff --git a/docs/archive/study/macros.tex b/docs/archive/study/macros.tex
deleted file mode 100644
index 7020557e..00000000
--- a/docs/archive/study/macros.tex
+++ /dev/null
@@ -1,13 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Macros for the WARPs paper
-% Shared commands to keep notation consistent across the manuscript.
-\usepackage{tikz} % Needed for \AIONLogo in this macros file
-\usetikzlibrary{positioning,calc,shapes.geometric}
-\newcommand{\AION}{\textrm{AI}\ensuremath{\Omega}\textrm{N}}
-\newcommand{\AIONProjectURL}{\url{https://github.com/flyingrobots/aion}}
-
-% WARP term: small caps in prose, italic in math.
-% Force upright small caps in text to avoid missing scit font shapes.
-\DeclareRobustCommand{\WARP}{\ifmmode\mathit{WARP}\else{\upshape\scshape warp}\fi}
-\DeclareMathOperator{\skel}{skel}
diff --git a/docs/archive/study/paper-7eee.pdf b/docs/archive/study/paper-7eee.pdf
deleted file mode 100644
index 2787128c..00000000
Binary files a/docs/archive/study/paper-7eee.pdf and /dev/null differ
diff --git a/docs/archive/study/paper-7eee.tex b/docs/archive/study/paper-7eee.tex
deleted file mode 100644
index 2ff4ab27..00000000
--- a/docs/archive/study/paper-7eee.tex
+++ /dev/null
@@ -1,1315 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-\documentclass{aion}
-
-% ------------------------------------------------------------
-% Metadata for this paper
-% ------------------------------------------------------------
-\renewcommand{\papertitle}{WARP Graphs---WARP Core: Deterministic Graph Rewrite Simulation Engine}
-\renewcommand{\papernumber}{Paper VII}
-\renewcommand{\paperdate}{January 2026}
-
-\renewcommand{\paperauthor}{James Ross}
-\renewcommand{\paperaffiliation}{Independent Researcher}
-\renewcommand{\paperorcid}{0009-0006-0025-7801}
-\renewcommand{\paperdoi}{10.5281/zenodo.18038297}
-
-% ------------------------------------------------------------
-% Packages (local to this paper)
-% ------------------------------------------------------------
-\usepackage{float}
-\usepackage{mathtools}
-\usepackage{tabularx}
-\usepackage{tikz}
-\usepackage{tikz-cd}
-\usetikzlibrary{arrows.meta,positioning,decorations.pathreplacing,fit,calc}
-
-\input{macros}
-
-% ------------------------------------------------------------
-% Notation shortcuts (guarded to avoid clashes with other papers)
-% ------------------------------------------------------------
-\ifdefined\WCat\else\newcommand{\WCat}{\mathcal{W}}\fi
-\ifdefined\Hist\else\DeclareMathOperator{\Hist}{Hist}\fi
-\ifdefined\Trans\else\DeclareMathOperator{\Trans}{Trans}\fi
-\ifdefined\DL\else\DeclareMathOperator{\DL}{DL}\fi
-\ifdefined\Dist\else\DeclareMathOperator{\Dist}{Dist}\fi
-\ifdefined\To\else\newcommand{\To}{\to}\fi
-
-\newcommand{\WState}{\mathsf{WState}}
-\newcommand{\Tr}{\mathsf{Tr}}
-\newcommand{\Labels}{\mathsf{Labels}}
-\newcommand{\Apply}{\mathsf{Apply}}
-
-\DeclareMathOperator{\MW}{MW}
-\DeclareMathOperator{\Path}{Path}
-\DeclareMathOperator{\Obj}{Obj}
-\DeclareMathOperator{\Mor}{Mor}
-\DeclareMathOperator{\dom}{dom}
-\DeclareMathOperator{\cod}{cod}
-
-\newcommand{\Ruliad}{\mathcal{R}}
-\newcommand{\Chronos}{\mathsf{Chronos}}
-\newcommand{\Kairos}{\mathsf{Kairos}}
-\newcommand{\Aion}{\mathsf{Aion}}
-
-% \newcommand{\AION}{\mathdf{AI\upOmegaN}}
-
-% Paper I used \sectionbreak; provide a default in case the class doesn't.
-\providecommand{\sectionbreak}{\clearpage}
-
-% Avoid duplicate hyperlink anchors for figures
-\makeatletter
-\renewcommand{\theHfigure}{\thesection.\arabic{figure}}
-\makeatother
-
-\usetikzlibrary{backgrounds}
-
-\begin{document}
-
-\AIONFrontMatter{This paper outlines the architecture of the \textit{WARP Core} deterministic graph rewrite engine, a high-performance, real-time simulation engine with bit-level perfect determinism, embarrassingly high concurrency, and first-class time travel, by construction.
-}
-
-% ============================================================
-\section{Introduction}
-\label{sec:intro}
-
-To conclude the \textbf{\AION{} Foundations Series}, we describe the construction of the WARP Core, a real-time deterministic graph rewrite simulation engine. This technology already powers this project's homepage, \url{https://flyingrobots.dev}, known herein as the ``WARPSite''---a ``website'' run by the WARP Core.
-
-Paper~I introduces \WARP\ graphs as a minimal recursively nested state object~\cite{Ros25a}.
-Paper~II defines a deterministic multiway semantics (via a two-plane DPO discipline) so that
-executions become replayable \emph{worldlines}~\cite{Ros25b}.
-Paper~III shows that deterministic worldlines admit a boundary representation:
-a \emph{provenance payload} is sufficient to reconstruct the full interior derivation volume
-(\emph{computational holography})~\cite{Ros25c}.
-
-This paper addresses the remaining mathematical question: \emph{how should a computation be compared across observers?}
-In practice, consumers rarely require a raw microstep-by-microstep derivation. Engineering use-cases demand derived views:
-summaries for interpretation, invariants for compilation, provenance for audit, and counterfactual branches for adversarial analysis.
-These viewpoints correspond to different \emph{observers} of the same underlying history.
-
-The correct comparison problem is not ``which observer is right?'' but:
-\begin{quote}
-\emph{Given two observers that emit different trace languages, what is the cost of translating between them
-under explicit resource constraints, and how much distortion is unavoidable?}
-\end{quote}
-We operationalise this as a geometry on observer space: a distance defined by translator description length
-(MDL) plus trace distortion. This distance is \emph{budgeted}: two observers may be equivalent under unbounded
-resources but far apart at finite time/memory budgets.
-
-\paragraph{Context within the Series.}
-Paper~V is about ethics: what perfect provenance implies for accountability, privacy, and power.
-Paper~VI is about architecture: how to implement the semantics as a system.
-Paper~IV is therefore the final mathematics-oriented paper in the series.
-Accordingly, we take the opportunity to give the Ruliad connection and observer geometry the full formal weight
-they require, rather than deferring mathematical structure to later papers.
-
-\paragraph{Contributions.}
-The contributions of this paper are:
-
-\begin{enumerate}[leftmargin=*]
- \item A stand-alone account of \emph{history categories} for \WARP\ rewriting, relating deterministic worldlines
- to multiway systems (\S\ref{sec:prelim}, \S\ref{sec:multiway}).
- \item A formal definition of \emph{observers} as resource-bounded functors out of $\Hist(\mathcal{U},R)$ into an
- observation space, including boundary-versus-bulk observer pairs induced by holography (\S\ref{sec:observers}).
- \item A translation framework between observers, equipped with MDL description length~\cite{Ris78} and
- trace distortion, together with explicit assumptions needed for compositionality (\S\ref{sec:translators}).
- \item The definition of the \emph{rulial distance} $D_{\tau,m}$ and its core properties:
- non-negativity, symmetry, monotonicity under budget relaxation, and a triangle inequality up to a constant
- overhead inherited from prefix coding, together with a Lawvere-metric/enriched-category interpretation of directed cost
- (\S\ref{sec:rulial}).
- \item A formalisation of the Chronos--Kairos--Aion triad as a three-layer time model embedded in the multiway space,
- and an interpretation of rulial distance as ``frame separation'' in the Ruliad (\S\ref{sec:multiway}).
- \item A minimal temporal logic aligned with Chronos--Kairos--Aion, with concrete liveness/reconciliation examples and a
- transport lemma relating temporal satisfaction to observer translation cost (\S\ref{sec:multiway}).
-\end{enumerate}
-
-\paragraph{Scope management.}
-We deliberately do \emph{not} attempt to axiomatise a single canonical trace metric,
-nor do we claim that MDL gives an optimal notion of semantic similarity for all domains.
-Our aim is narrower and foundational: to provide a mathematically explicit, computable mechanism that turns
-\emph{translation cost} into geometry, so that later work can specialise it to concrete trace languages and security goals.
-
-\paragraph{Roadmap.}
-We begin by restating the minimal background from Papers~I--III needed for a stand-alone reading (\S\ref{sec:prelim}).
-We then formalise observers as resource-bounded functors out of history categories and motivate canonical observer families
-induced by holography (\S\ref{sec:observers}).
-Next we introduce translators, MDL description length, and lifted distortion as the ingredients for a quantitative comparison
-of observers (\S\ref{sec:translators}).
-Rulial distance is defined and analysed in \S\ref{sec:rulial}, including the Lawvere-metric/enriched-category interpretation of
-directed cost (\S\ref{subsec:lawvere}).
-We then connect deterministic worldlines to multiway systems and the Ruliad, formalise the Chronos--Kairos--Aion time model,
-and develop a minimal temporal logic whose semantics range over worldlines and branching histories (\S\ref{sec:multiway},
-\S\ref{subsec:temporal-logic}).
-Finally, we summarise related work (\S\ref{sec:related}), discuss implications and open directions (\S\ref{sec:outlook}),
-and provide a notation summary (\S\ref{sec:notation}).
-
-\sectionbreak
-
-% ============================================================
-\section{Preliminaries and Standing Assumptions}
-\label{sec:prelim}
-
-We briefly restate the fragments of Papers~I--III needed for a self-contained treatment.
-Throughout, we adopt the deterministic replay discipline of Paper~II (fixed boundary data determines a unique committed tick worldline)
-and the boundary encoding of Paper~III.
-
-\subsection[Warp states and deterministic worldlines]{\textnormal{\textsc{Warp}} states and deterministic worldlines}
-\label{subsec:prelim-warps}
-
-A \WARP\ graph is a finite directed multigraph whose vertices and edges carry recursively attached \WARP\ graphs~\cite{Ros25a}.
-A \emph{\WARP\ state} $U\in\WState$ is a typed open graph skeleton together with recursively attached \WARP\ states on each vertex and edge.
-We write $\skel(U)$ for the skeleton component.
-
-Deterministic evolution is expressed in ticks.
-Let $\Labels$ denote the space of tick patches: finite records sufficient to advance the state by one tick under the deterministic semantics of Paper~II.
-Write
-\[
- \Apply : \WState \times \Labels \rightharpoonup \WState
-\]
- for the deterministic tick-application function.
- A \emph{tick patch} is an element $\mu\in\Labels$ (intended to be applied via $\Apply$).
- Intuitively, a tick patch is the serialised record of the within-tick batch committed at that tick.
- A \emph{tick} is the unit of concurrent evolution: it groups attachment-plane rewrites together with a scheduler-selected batch of independent skeleton rewrites, committed atomically (Paper~II, Def.~4.2).
- A deterministic worldline is a sequence
-\[
- U_0 \;\Rightarrow\; U_1 \;\Rightarrow\; \cdots \;\Rightarrow\; U_n
-\qquad\text{with}\qquad
-U_{i+1}=\Apply(U_i,\mu_i)
-\]
-whenever defined~\cite{Ros25b}.
-Paper~II shows how to construct such an $\Apply$ from DPO rewriting in adhesive categories under a two-plane discipline,
-and how to package scheduling decisions so that replay is bit-level deterministic.
-
-\subsection{Boundary encoding and wormholes}
-\label{subsec:prelim-holography}
-
-Paper~III introduces a \emph{provenance payload}
-\[
- P=(\mu_0,\ldots,\mu_{n-1})
-\]
-and the boundary encoding $(U_0,P)$~\cite{Ros25c}.
-Under patch sufficiency,\footnote{Patch sufficiency is the condition that $(U_0,P)$ uniquely determines the interior worldline under the deterministic semantics.}
-$(U_0,P)$ reconstructs the interior worldline uniquely.
-A \emph{wormhole} is a provenance-preserving compression of a multi-tick segment into a single edge labelled by a sub-payload.
-For the present paper, holography induces two natural classes of observers:
-\begin{itemize}[leftmargin=*]
- \item \emph{bulk observers} that inspect some or all of the interior worldline; and
- \item \emph{boundary observers} that operate only on the compact boundary artefact $(U_0,P)$.
-\end{itemize}
-Rulial distance will quantify the cost of translating between these viewpoints.
-
-\subsection{Multiway graphs and history categories}
-\label{subsec:prelim-history}
-
-Fix a universe $\mathcal{U}\subseteq\WState$ and a rule pack $R$ (a finite set of rewrite rules plus the fixed typing/open-graph discipline).
-The associated \emph{multiway graph} is the directed graph
-\[
- \MW(\mathcal{U},R) = (V,E)
-\]
-whose vertices are states $V=\mathcal{U}$ and whose directed edges are individual rewrite steps generated by $R$
-(including alternative matches and orderings where applicable).
-In general $\MW(\mathcal{U},R)$ branches and merges.
-
-\begin{definition}[History category]\label{def:hist-category}
-Let $\MW(\mathcal{U},R)$ be a multiway graph.
-Its \emph{history category} $\Hist(\mathcal{U},R)$ is the path category of $\MW(\mathcal{U},R)$:
-\begin{itemize}[leftmargin=*]
- \item objects are states $U\in\mathcal{U}$;
- \item morphisms $h:U\to V$ are finite directed paths in $\MW(\mathcal{U},R)$ from $U$ to $V$;
- \item composition is path concatenation.
-\end{itemize}
-\end{definition}
-
-\begin{remark}[Deterministic worldlines as functors]
-A deterministic worldline $U_0\Rightarrow U_1\Rightarrow\cdots$ defines a functor
-$W:\mathbb{N}\to\Hist(\mathcal{U},R)$ sending $i\mapsto U_i$ and $(i\to i{+}1)\mapsto (U_i\to U_{i+1})$.
-The determinism discipline of Paper~II can be understood as selecting a unique such functor for fixed boundary data.
-We later formalise its finite restriction as the Chronos functor (Definition~\ref{def:chronos} in \S\ref{subsec:chronos-kairos-aion}).
-\end{remark}
-
-\sectionbreak
-
-% ============================================================
-\section{Observers}
-\label{sec:observers}
-
-Observers are the interface between a \WARP\ history and a consumer.
-We treat observers as functors out of the history category into a structured space of traces.
-
-\subsection{Observation spaces}
-\label{subsec:obs-spaces}
-
-An \emph{observation space} is an object that supports:
-(i) a notion of trace value; and (ii) a distortion measure between traces.
-The minimal structure we require is a set $\Tr$ equipped with a metric (or pseudometric)
-\[
- \mathrm{dist}_{\mathrm{tr}} : \Tr\times\Tr\to\mathbb{R}_{\ge 0}.
-\]
-In applications, $\Tr$ may be:
-symbol streams, labelled paths, graphs of causal dependencies, certificates, or slices of provenance payloads.
-
-When it is convenient to keep categorical structure explicit, we may regard $\Tr$ as the object set of a category $\mathcal{Y}$
-and work objectwise. Nothing in the core definitions requires nontrivial morphisms in $\mathcal{Y}$; the geometry is carried
-by $\mathrm{dist}_{\mathrm{tr}}$.
-
-\subsection{Observers as budgeted functors}
-\label{subsec:obs-functors}
-
-\begin{definition}[Observer]\label{def:observer}
-Fix $\Hist(\mathcal{U},R)$ and an observation space $(\Tr,\mathrm{dist}_{\mathrm{tr}})$.
-An \emph{observer} is a functor
-\[
- O : \Hist(\mathcal{U},R)\to \Tr,
-\]
-where we regard $\Tr$ as a discrete category.
-Operationally, $O$ is realised by an algorithm that maps any derivation path $h$ to a trace value $O(h)$.
-\end{definition}
-
-\begin{definition}[Resource-bounded observer]\label{def:budgeted-observer}
-Let $(\tau,m)$ be time and memory budgets (in any fixed machine model).
-An observer $O$ is \emph{$(\tau,m)$-bounded} if it admits an implementation that, on any history input $h$ in its domain,
-runs within time $\tau$ and memory $m$.
-\end{definition}
-
-\begin{remark}[Why we bound observers]
-Without explicit budgets, all observers collapse into an uninformative equivalence: ``compute the full worldline and output it''.
-Budgets ensure the geometry respects real computational constraints: replaying a wormhole is algorithmically simple
-(low description length) but may be infeasible at small $\tau$.
-\end{remark}
-
-\subsection{Canonical observer families induced by holography}
-\label{subsec:obs-holography}
-
-Holographic boundary encoding induces a practical taxonomy of observers:
-\begin{itemize}[leftmargin=*]
- \item \emph{boundary observers} that inspect only $(U_0,P)$ (or its authenticated packaging such as a BTR);
- \item \emph{bulk observers} that inspect interior states, matches, receipts, or causal cones; and
- \item \emph{semantic observers} that collapse syntactic evolution into invariant properties (types, safety checks, query semantics).
-\end{itemize}
-
-\begin{example}[Boundary vs bulk]\label{ex:boundary-bulk}
-Let $O_{\partial}$ map a history $h$ to the boundary artefact $(U_0,P)$ that generates it,
-and let $O_{\mathrm{bulk}}$ map $h$ to the full state sequence $(U_0,\ldots,U_n)$.
-There is a natural translator $T_{\mathrm{replay}}$ from $O_{\partial}$ to $O_{\mathrm{bulk}}$ given by deterministic replay.
-Its \emph{description length} is small (it is essentially the interpreter $\Apply$),
-but its \emph{time cost} grows with the length of $P$.
-This example will be revisited in \S\ref{subsec:rulial-budget-effects}.
-\end{example}
-
-\subsection{Observer projections of wormholes}
-\label{subsec:obs-projections}
-
-Given a wormhole boundary encoding $(U_0,P)$, different observers may:
-\begin{itemize}[leftmargin=*]
- \item expose only coarse-grained stages of $P$ (e.g.\ AST$\to$IR$\to$plan);
- \item restrict to semantic effects (e.g.\ schema and invariants);
- \item highlight only adversarial or counterfactual branches;
- \item or inspect every microstep.
-\end{itemize}
-
-\begin{figure}[t]
- \centering
- \begin{tikzpicture}[
- wormhole/.style={rectangle,draw=black,thick,rounded corners,
- minimum width=36mm,minimum height=14mm,align=center},
- observer/.style={rectangle,draw=black,thick,rounded corners=3pt,
- minimum width=22mm,minimum height=9mm,align=center,font=\small},
- arrow/.style={-Latex,thick,draw=black},
- >=Latex
- ]
-
- % Central wormhole
- \node[wormhole] (W) at (0,0)
- {wormhole\\[-1pt]
- \scriptsize $(U_0,P)$};
-
- % Observers
- \node[observer] (O1) at (-4.2,2.4) {$O_1$\\[-2pt]\scriptsize coarse stages};
- \node[observer] (O2) at (4.2,2.4) {$O_2$\\[-2pt]\scriptsize semantic};
- \node[observer] (O3) at (-4.2,-2.4) {$O_3$\\[-2pt]\scriptsize adversarial};
- \node[observer] (O4) at (4.2,-2.4) {$O_4$\\[-2pt]\scriptsize full microsteps};
-
- % Projections
- \draw[arrow] (W.north west) -- (O1.south east);
- \draw[arrow] (W.north east) -- (O2.south west);
- \draw[arrow] (W.south west) -- (O3.north east);
- \draw[arrow] (W.south east) -- (O4.north west);
-
- % Labels on arrows
- \node[rotate=45,font=\scriptsize] at (-2.3,1.4) {project};
- \node[rotate=-45,font=\scriptsize] at (2.3,1.4) {project};
- \node[rotate=-45,font=\scriptsize] at (-2.3,-1.4) {project};
- \node[rotate=45,font=\scriptsize] at (2.3,-1.4) {project};
-
- \end{tikzpicture}
- \caption{Multiple observers projecting the same wormhole boundary $(U_0,P)$ into different trace formats.
- The rulial distance measures the complexity of translating between such views, balancing translator description length
- against residual distortion.}
- \label{fig:observer-projections}
-\end{figure}
-
-\sectionbreak
-
-% ============================================================
-\section{Translators, MDL Complexity, and Distortion}
-\label{sec:translators}
-
-To compare observers we require a compositional notion of translation, a complexity measure for translators,
-and a distortion measure between outputs.
-
-\subsection{Translators}
-\label{subsec:translators-def}
-
-Let $O_1,O_2:\Hist(\mathcal{U},R)\to\Tr$ be observers into a common trace space.
-A translator should map traces produced by $O_1$ into traces in the format of $O_2$.
-
-\begin{definition}[Translator]\label{def:translator}
-A \emph{translator} from $O_1$ to $O_2$ is an algorithmic operator
-\[
- T_{12} : \Tr \to \Tr
-\]
-such that $T_{12}\circ O_1$ is a well-defined observer and is intended to approximate $O_2$.
-We write $T_{12}\in\Trans(O_1,O_2)$.
-\end{definition}
-
-\begin{remark}[Why we translate by post-composition]
-This definition makes typing explicit: $T_{12}\circ O_1$ is an observer with the same domain as $O_2$.
-If one prefers to keep the functor category $\Tr^{\Hist(\mathcal{U},R)}$ explicit, a translator can be regarded as an endofunctor
-on $\Tr$ together with the induced action on observers by post-composition.
-\end{remark}
-
-\begin{definition}[Budgeted translators]\label{def:budgeted-trans}
-For budgets $(\tau,m)$, let $\Trans_{\tau,m}(O_1,O_2)\subseteq\Trans(O_1,O_2)$ denote the translators
-realisable within those budgets.
-\end{definition}
-
-\begin{assumption}[Budgeted translator axioms]\label{ass:budgeted-trans}
-For each budget pair $(\tau,m)$:
-\begin{enumerate}[leftmargin=*]
- \item \emph{Identity.} For every observer $O$, the identity translator $I$ belongs to $\Trans_{\tau,m}(O,O)$, and we normalise
- codes so that $\DL(I)=0$.
- \item \emph{Composition.} If $T_{12}\in\Trans_{\tau,m}(O_1,O_2)$ and $T_{23}\in\Trans_{\tau,m}(O_2,O_3)$, then
- \[
- T_{23}\circ T_{12}\in\Trans_{\tau,m}(O_1,O_3).
- \]
-\end{enumerate}
-\end{assumption}
-
-\begin{example}[SQL $\leftrightarrow$ AST]\label{ex:sql-ast}
-Consider a \WARP\ universe modelling a database query planner.
-Observer $O_1$ outputs a trace of AST transformations, while observer $O_2$ outputs only the initial SQL string
-and a final execution summary.
-A translator $T_{12}$ must compile an AST evolution into a SQL-like summary, while $T_{21}$ must infer a plausible AST evolution
-consistent with SQL and execution effects.
-The description lengths $\DL(T_{12}),\DL(T_{21})$ and their residual distortions quantify the separation of these two views.
-\end{example}
-
-\subsection{MDL and description length}
-\label{subsec:mdl}
-
-We measure translator complexity using MDL: a translator is ``simple'' if it admits a short prefix-free description.
-
-\begin{definition}[Description length]\label{def:dl}
-Fix a prefix-free code over translator programmes.
-For a translator $T$, let $\DL(T)\in\mathbb{R}_{\ge 0}$ denote the length of its code word.
-\end{definition}
-
-The constant-overhead behaviour of prefix codes gives the subadditivity we require.
-
-\begin{assumption}[Subadditivity up to a constant]\label{ass:dl-subadd}
-There exists a constant $c\ge 0$ such that for any composable translators $T_{12},T_{23}$ we have
-\[
- \DL(T_{23}\circ T_{12}) \le \DL(T_{12}) + \DL(T_{23}) + c.
-\]
-\end{assumption}
-
-\begin{remark}[On the constant $c$]
-The constant $c$ is the code overhead required to describe ``run $T_{12}$ then $T_{23}$'' under the chosen universal coding scheme.
-MDL theory~\cite{Ris78} (and related invariance results for prefix complexity~\cite{LiVitanyi2019}) justify treating such overhead as $O(1)$:
-it does not scale with the size of the translators being composed.
-\end{remark}
-\begin{remark}[Relation to information distance and rate--distortion]
-At $\lambda\to\infty$ with the constraint $\Dist(O_2,T\circ O_1)=0$, the directed cost
-reduces to the description length of the shortest exact translator from $O_1$ to $O_2$.
-The resulting symmetrised distance is closely related in spirit to \emph{algorithmic information distance}:
-the Kolmogorov-style cost of converting one description into another~\cite{Bennett98,LiVitanyi2019}.
-At finite $\lambda$, the objective is an MDL-flavoured instance of a rate--distortion trade-off:
-we pay \emph{rate} (translator description length) to purchase lower residual distortion.
-\end{remark}
-
-
-\subsection{Trace distortion and lifted observer distortion}
-\label{subsec:distortion}
-
-Fix a metric (or pseudometric) $\mathrm{dist}_{\mathrm{tr}}$ on trace space $\Tr$.
-We lift it to a distortion between observers by taking a supremum over histories.
-
-\begin{definition}[Lifted distortion]\label{def:dist-lift}
-For observers $O,O':\Hist(\mathcal{U},R)\to\Tr$, define
-\[
- \Dist(O,O') \;:=\; \sup_{h\in\Mor(\Hist(\mathcal{U},R))}\,
- \mathrm{dist}_{\mathrm{tr}}\bigl(O(h),O'(h)\bigr).
-\]
-\end{definition}
-
-\begin{assumption}[Bounded diameter]\label{ass:bounded-diameter}
-All observers under comparison take values in a common trace space $\Tr$ of uniformly bounded diameter,
-so that $\Dist(O,O')$ is finite.
-\end{assumption}
-
-\begin{assumption}[Non-expansive translators]\label{ass:lipschitz}
-Post-composition by any translator is $1$-Lipschitz:
-\[
- \Dist(T\circ O,\, T\circ O') \le \Dist(O,O')
-\]
-for all translators $T$ and observers $O,O'$.
-\end{assumption}
-
-\begin{remark}[Alternative liftings]
-The supremum lifting is conservative: it protects against worst-case histories and adversarial inputs.
-In statistical settings we may instead use an expected distortion over a distribution on histories, or restrict to histories within a time cone.
-Our results adapt to any lifting that preserves the triangle inequality and non-expansiveness properties used in \S\ref{sec:rulial}.
-\end{remark}
-
-\sectionbreak
-
-% ============================================================
-\section{Rulial Distance}
-\label{sec:rulial}
-
-We now define the rulial distance and prove its core properties.
-Throughout we fix a weighting parameter $\lambda>0$ trading off translator complexity against residual distortion.
-
-\subsection{Directed and symmetrised distance}
-
-It is useful to separate the directed translation problem from its symmetrisation.
-
-\begin{definition}[Directed rulial cost]\label{def:directed}
-For observers $O_1,O_2$ define the directed cost
-\[
- \vec{D}_{\tau,m}(O_1\!\to\! O_2)
- :=
- \inf_{T_{12}\in\Trans_{\tau,m}(O_1,O_2)}
- \Bigl(\DL(T_{12}) + \lambda\,\Dist(O_2,\,T_{12}\circ O_1)\Bigr),
-\]
-with the convention that the infimum over an empty set is $+\infty$.
-\end{definition}
-
-\begin{definition}[Rulial distance]\label{def:rulial}
-The (symmetrised) \emph{rulial distance} is
-\[
- D_{\tau,m}(O_1,O_2)
- :=
- \vec{D}_{\tau,m}(O_1\!\to\! O_2)
- \;+\;
- \vec{D}_{\tau,m}(O_2\!\to\! O_1).
-\]
-Equivalently, expanding the two infima yields the joint infimum formulation used in earlier drafts:
-\[
- D_{\tau,m}(O_1,O_2)
- = \inf_{\substack{
- T_{12}\in\Trans_{\tau,m}(O_1,O_2)\\
- T_{21}\in\Trans_{\tau,m}(O_2,O_1)}}
- \Bigl(
- \DL(T_{12}) + \DL(T_{21})
- + \lambda \bigl(
- \Dist(O_2, T_{12}\circ O_1) +
- \Dist(O_1, T_{21}\circ O_2)
- \bigr)
- \Bigr).
-\]
-\end{definition}
-
-\subsection{Basic properties}
-
-\begin{theorem}[Basic properties]\label{thm:rulial-basic}
-For all observers $O_1,O_2$ and budgets $(\tau,m)$:
-\begin{enumerate}[leftmargin=*]
- \item $D_{\tau,m}(O_1,O_2)\ge 0$;
- \item $D_{\tau,m}(O_1,O_2)=D_{\tau,m}(O_2,O_1)$;
- \item $D_{\tau,m}(O,O)=0$ for every observer $O$.
-\end{enumerate}
-\end{theorem}
-
-\begin{proof}
-Non-negativity follows because $\DL\ge 0$ and $\Dist\ge 0$.
-Symmetry holds by definition of $D_{\tau,m}$ as the sum of two directed terms.
-For reflexivity, the identity translator $I$ is admissible by Assumption~\ref{ass:budgeted-trans} and
-satisfies $\DL(I)=0$ and $\Dist(O,I\circ O)=0$, so both directed costs vanish.
-\end{proof}
-
-\begin{corollary}[Observer equivalence]\label{cor:observer-equivalence}
-Let $O_1,O_2$ be observers.
-Then $D_{\tau,m}(O_1,O_2)=0$ if and only if there exist translators
-$T_{12}\in\Trans_{\tau,m}(O_1,O_2)$ and $T_{21}\in\Trans_{\tau,m}(O_2,O_1)$ such that:
-\begin{enumerate}[leftmargin=*]
- \item $\Dist(O_2, T_{12}\circ O_1)=0$ and $\Dist(O_1, T_{21}\circ O_2)=0$; and
- \item $\DL(T_{12})$ and $\DL(T_{21})$ are bounded by a constant independent of the histories under consideration.
-\end{enumerate}
-In this case the observers are equivalent under the rulial geometry: they differ only by constant-overhead,
-distortion-free translation.
-\end{corollary}
-
-\begin{proof}[Proof sketch]
-If such translators exist then both directed costs are bounded by constants independent of the histories (distortion is $0$ and description length is constant),
-so $D_{\tau,m}(O_1,O_2)$ is bounded by a constant.
-Under the constant-overhead convention of the remark below, we identify such constant separation with $0$, yielding $D_{\tau,m}(O_1,O_2)=0$.
-Conversely, if $D_{\tau,m}(O_1,O_2)=0$ then (by definition of $D_{\tau,m}$ as the sum of two directed costs)
-both directed costs vanish modulo constant overhead, hence there exist translators in both directions with
-zero residual distortion and constant description length, as claimed.
-\end{proof}
-
-\begin{remark}
-Observer equivalence is defined modulo constant description overhead; exact zero-length translators are not required and depend on the choice of coding scheme.
-\end{remark}
-
-\subsection{Monotonicity under budget relaxation}
-\label{subsec:rulial-monotone}
-
-The budgeted nature of rulial distance is essential: it distinguishes translations that are short in description length
-but exceed available time/memory resources from those that are admissible under the deployment constraints.
-
-\begin{proposition}[Budget monotonicity]\label{prop:budget-monotone}
-If $(\tau',m')\succeq(\tau,m)$ (i.e.\ $\tau'\ge\tau$ and $m'\ge m$) then
-\[
- D_{\tau',m'}(O_1,O_2) \le D_{\tau,m}(O_1,O_2).
-\]
-\end{proposition}
-
-\begin{proof}
-By definition, $\Trans_{\tau,m}(O_i,O_j)\subseteq \Trans_{\tau',m'}(O_i,O_j)$ under budget relaxation,
-so the infimum is taken over a larger set and cannot increase.
-\end{proof}
-
-\subsection{Triangle inequality up to a constant}
-\label{subsec:rulial-triangle}
-
-\begin{theorem}[Triangle inequality up to additive slack]\label{thm:rulial-triangle}
-Assume:
-\begin{enumerate}[leftmargin=*]
- \item Assumption~\ref{ass:dl-subadd} (subadditivity of $\DL$ up to constant $c$);
- \item $\Dist$ is a metric on observers (triangle inequality) and translators are non-expansive
- (Assumption~\ref{ass:lipschitz});
- \item budget classes are closed under composition (Assumption~\ref{ass:budgeted-trans}).
-\end{enumerate}
-Then for all observers $O_1,O_2,O_3$ we have
-\[
- D_{\tau,m}(O_1,O_3) \le D_{\tau,m}(O_1,O_2) + D_{\tau,m}(O_2,O_3) + 2c.
-\]
-\end{theorem}
-
-\begin{proof}
-Fix $\varepsilon>0$ and choose near-optimal translators for the two distances:
-pick $T_{12},T_{21}$ such that the objective for $D_{\tau,m}(O_1,O_2)$ is within $\varepsilon/2$ of the infimum,
-and $T_{23},T_{32}$ similarly for $D_{\tau,m}(O_2,O_3)$.
-By closure under composition, $T_{13}=T_{23}\circ T_{12}$ and $T_{31}=T_{21}\circ T_{32}$
-are admissible budgeted translators.
-
-Subadditivity gives $\DL(T_{13})\le\DL(T_{12})+\DL(T_{23})+c$ and
-$\DL(T_{31})\le\DL(T_{21})+\DL(T_{32})+c$.
-For distortion, the triangle inequality and non-expansiveness yield
-\begin{align*}
- \Dist(O_3,\,T_{13}\circ O_1)
- &= \Dist(O_3,\,T_{23}\circ T_{12}\circ O_1)\\
- &\le \Dist(O_3,\,T_{23}\circ O_2) + \Dist(T_{23}\circ O_2,\,T_{23}\circ T_{12}\circ O_1)\\
- &\le \Dist(O_3,\,T_{23}\circ O_2) + \Dist(O_2,\,T_{12}\circ O_1),
-\end{align*}
-and similarly for $\Dist(O_1,\,T_{31}\circ O_3)$.
-Summing the bounds and using near-optimality yields the stated inequality up to $\varepsilon$.
-Letting $\varepsilon\to 0$ completes the proof.
-\end{proof}
-
-\begin{remark}[Quasi-pseudometric]
-Together with Theorem~\ref{thm:rulial-basic}, Theorem~\ref{thm:rulial-triangle} makes $D_{\tau,m}$ a
-quasi-pseudometric: it satisfies all pseudometric axioms except that the triangle inequality holds only up to an additive constant $2c$.
-In practice $c$ is a small, fixed prefix-coding overhead; it may also be absorbed into $\lambda$ if desired.
-The geometry is most informative when translation costs scale nontrivially with history size, or when comparing asymptotically distinct observer classes (e.g.\ $O(1)$ vs $O(N)$), in which regime the constant $c$ becomes negligible.
-\end{remark}
-
-\subsection{Lawvere-metric (enriched category) viewpoint}
-\label{subsec:lawvere}
-
-The symmetrised distance $D_{\tau,m}$ is convenient for neighbourhoods and ``frame separation'',
-but the underlying translation problem is inherently \emph{directed}:
-decompressing a boundary view into a bulk view can be infeasible under strict budgets,
-whereas projection from bulk into boundary is typically admissible under the same budgets.
-This asymmetry is captured by Lawvere's observation that metric spaces are categories enriched in
-the monoidal poset $([0,\infty],\ge,+,0)$~\cite{Lawvere73,Kelly82}.
-
-\begin{definition}[Lawvere metric space]\label{def:lawvere-metric}
-A \emph{Lawvere metric space} is a category enriched over the monoidal poset $([0,\infty],\ge,+,0)$.
-Concretely, it is a collection of objects together with a function $d(x,y)\in[0,\infty]$ such that:
-(i) $d(x,x)=0$ for all $x$; and (ii) $d(x,z)\le d(x,y)+d(y,z)$ for all $x,y,z$.
-No symmetry condition is imposed; $d(x,y)$ and $d(y,x)$ may differ.
-The value $+\infty$ is permitted and represents ``no morphism'' (infeasible translation).
-\end{definition}
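-
-The asymmetry and the role of $+\infty$ are easy to exhibit on two points; the following toy example is
-illustrative only and is not tied to any particular observer class.
-
-\begin{example}[A two-point Lawvere metric]
-Take objects $\{a,b\}$ with $d(a,a)=d(b,b)=0$, $d(a,b)=1$, and $d(b,a)=+\infty$.
-Reflexivity holds by construction, and every instance of the triangle inequality is immediate:
-instances bounding $d(a,b)$ have right-hand side at least $1$, and instances bounding $d(b,a)$ have
-right-hand side $+\infty$.
-Thus $a$ translates into $b$ at unit cost while no admissible translation exists in the reverse
-direction, the same pattern as the boundary/bulk asymmetry discussed below.
-\end{example}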
-
-\begin{definition}[Directed rulial hom]\label{def:lawvere-hom}
-Fix budgets $(\tau,m)$.
-For observers $O_1,O_2$ define the \emph{directed hom-value}
-\[
- d_{\tau,m}(O_1,O_2) \;:=\; \vec{D}_{\tau,m}(O_1\!\to\!O_2)\in[0,\infty],
-\]
-with the convention $d_{\tau,m}(O_1,O_2)=+\infty$ when $\Trans_{\tau,m}(O_1,O_2)=\varnothing$.
-The symmetrised rulial distance is the induced symmetrisation
-$D_{\tau,m}(O_1,O_2)=d_{\tau,m}(O_1,O_2)+d_{\tau,m}(O_2,O_1)$.
-\end{definition}
-
-For notational convenience, we treat $\vec{D}_{\tau,m}(O_1\!\to\!O_2)$ and $d_{\tau,m}(O_1,O_2)$ as interchangeable;
-we use $d_{\tau,m}$ when emphasising the Lawvere-enriched interpretation.
-
-\begin{proposition}[Composition as triangle inequality]\label{prop:lawvere-triangle}
-Assume:
-(i) Assumption~\ref{ass:dl-subadd};
-(ii) $\Dist$ satisfies the triangle inequality and translators are non-expansive (Assumption~\ref{ass:lipschitz});
-and (iii) budget classes are closed under composition (Assumption~\ref{ass:budgeted-trans}).
-Then for all observers $O_1,O_2,O_3$,
-\[
- d_{\tau,m}(O_1,O_3)
- \le
- d_{\tau,m}(O_1,O_2) + d_{\tau,m}(O_2,O_3) + c.
-\]
-\end{proposition}
-
-\begin{proof}[Proof sketch]
-The argument is the directed half of the proof of Theorem~\ref{thm:rulial-triangle}.
-Choose near-optimal translators $T_{12}\in\Trans_{\tau,m}(O_1,O_2)$ and $T_{23}\in\Trans_{\tau,m}(O_2,O_3)$.
-Closure under composition gives an admissible translator $T_{13}=T_{23}\circ T_{12}$.
-Subadditivity bounds $\DL(T_{13})\le \DL(T_{12})+\DL(T_{23})+c$.
-The distortion term satisfies
-$\Dist(O_3,T_{13}\circ O_1)\le \Dist(O_3,T_{23}\circ O_2)+\Dist(O_2,T_{12}\circ O_1)$
-by the triangle inequality and non-expansiveness.
-Taking infima yields the stated inequality.
-\end{proof}
-
-\begin{remark}[Strict enrichment vs $O(1)$ slack]
-If we treat description lengths modulo constant additive overhead (as is standard in Kolmogorov/MDL-style arguments),
-or adopt a translator description language with a primitive sequencing combinator whose size is absorbed into the base machine model,
-then the constant $c$ may be taken as $0$.
-In that regime, $d_{\tau,m}$ satisfies the Lawvere triangle inequality exactly and the ``space of observers''
-is a $[0,\infty]$-enriched category.
-When $c>0$, the enrichment is accurate up to fixed $O(1)$ slack, matching the quasi-pseudometric remark above.
-\end{remark}
-
-The enriched viewpoint encodes several familiar facts:
-directed costs compose by addition (triangle inequality);
-budgets produce $+\infty$ hom-values (no admissible translator);
-and asymmetry is the generic case rather than an exception.
-It also exposes standard categorical tools: the enriched Yoneda embedding associates to each observer $O$
-its distance profile $d_{\tau,m}(O,-)$, and Cauchy completion corresponds to freely adjoining ``ideal observers''
-realising limits of Cauchy weights (useful when taking refinement limits)~\cite{Kelly82}.
-
-\begin{example}[Boundary vs bulk as an asymmetric hom]\label{ex:lawvere-boundary-bulk}
-Let $O_{\partial}$ be the boundary observer of Example~\ref{ex:boundary-bulk}.
-Let $O_{\mathrm{bulk}}^{+}$ be a bulk observer whose trace format includes the boundary payload as a visible component
-(e.g.\ it outputs $(U_0,P)$ together with additional interior witnesses such as $(U_1,\ldots,U_n)$, match receipts, or causal cones).
-Then the forgetful projection translator $T_{\mathrm{forget}}$ extracting $(U_0,P)$ is admissible with
-$\DL(T_{\mathrm{forget}})=O(1)$ and zero residual distortion, so $d_{\tau,m}(O_{\mathrm{bulk}}^{+},O_{\partial})=O(1)$.
-In the opposite direction, Proposition~\ref{prop:boundary-bulk} shows that
-$d_{\tau,m}(O_{\partial},O_{\mathrm{bulk}}^{+})$ can be $+\infty$ under strict budgets (replay is infeasible under the time bound),
-but reduces to $O(1)$ when $(\tau,m)$ are unbounded.
-This is typical of Lawvere metric spaces: translation is compositional, but symmetry is not assumed.
-\end{example}
-
-\begin{figure}[t]
- \centering
- \begin{tikzpicture}[
- obs/.style={draw=black,thick,rounded corners=3pt,inner sep=6pt,align=center,font=\scriptsize},
- arr/.style={-Latex,thick,draw=black},
- maybe/.style={-Latex,thick,draw=black!70,dash pattern=on 6pt off 4pt},
- >=Latex
- ]
-
- \node[obs] (Op) at (0,0) {$O_{\partial}$\\boundary};
- \node[obs] (Ob) at (10.0,0) {$O_{\mathrm{bulk}}^{+}$\\bulk$+$};
- \node[obs] (Os) at (5.0,-6.0) {$O_{\mathrm{sum}}$\\summary};
-
- % Draw edges first; add labels afterwards so they are never occluded by the centre inequality box.
- \draw[maybe] (Op) -- (Ob);
- \draw[arr] (Ob) -- (Os);
- \draw[maybe] (Op) -- (Os);
-
- \node[font=\scriptsize,align=center,text width=10.5cm,fill=white,inner sep=3pt] at (5.0,-1.9)
- {$\begin{aligned}
- d_{\tau,m}(O_{\partial}, O_{\mathrm{sum}})
- &\le d_{\tau,m}(O_{\partial}, O_{\mathrm{bulk}}^{+})
- + d_{\tau,m}(O_{\mathrm{bulk}}^{+}, O_{\mathrm{sum}})\;(+c)
- \end{aligned}$};
-
- % Edge labels (drawn last to sit above the inequality box)
- \path (Op) -- (Ob)
- node[midway,above=5mm,font=\scriptsize,fill=white,inner sep=1pt]
- {$d_{\tau,m}(O_{\partial}, O_{\mathrm{bulk}}^{+})$};
- \path (Ob) -- (Os)
- node[pos=0.75,sloped,above=3mm,font=\scriptsize,fill=white,inner sep=1pt]
- {$d_{\tau,m}(O_{\mathrm{bulk}}^{+}, O_{\mathrm{sum}})$};
- \path (Op) -- (Os)
- node[pos=0.75,sloped,below=3mm,font=\scriptsize,fill=white,inner sep=1pt]
- {$d_{\tau,m}(O_{\partial}, O_{\mathrm{sum}})$};
-
- \end{tikzpicture}
- \caption{Directed translation costs form a Lawvere-style geometry: costs compose additively (triangle inequality),
- asymmetry is expected, and strict budgets can force $+\infty$ distances.
- The dashed arrows emphasise that some translations (e.g.\ boundary$\to$bulk$+$) may be infeasible at fixed $(\tau,m)$.}
- \label{fig:lawvere}
-\end{figure}
-
-\subsection{Budget effects: replay is short but not fast}
-\label{subsec:rulial-budget-effects}
-
-The boundary/bulk example illustrates why we insist on explicit budgets.
-
-\begin{proposition}[Boundary-to-bulk translation]\label{prop:boundary-bulk}
-Let $O_{\partial}$ be a boundary observer and $O_{\mathrm{bulk}}$ a bulk observer as in Example~\ref{ex:boundary-bulk}.
-Assume deterministic replay is available as a translator $T_{\mathrm{replay}}$.
-Then:
-\begin{enumerate}[leftmargin=*]
- \item $\DL(T_{\mathrm{replay}})$ is $O(1)$ relative to the fixed semantics $\Apply$ (it is essentially the interpreter);
- \item for fixed finite budgets $(\tau,m)$, $T_{\mathrm{replay}}\notin\Trans_{\tau,m}(O_{\partial},O_{\mathrm{bulk}})$ once the payload length exceeds $\tau$,
- so $\vec{D}_{\tau,m}(O_{\partial}\!\to\!O_{\mathrm{bulk}})=+\infty$ beyond that regime;
- \item for unbounded budgets, the directed distortion term can be $0$ (exact replay), so
- $\vec{D}_{\infty,\infty}(O_{\partial}\!\to\!O_{\mathrm{bulk}})=O(1)$.
-\end{enumerate}
-\end{proposition}
-
-\begin{proof}
-(1) follows from the fact that the replay algorithm is fixed once the operational semantics is fixed.
-(2) is immediate: replay must apply the tick patches sequentially and therefore requires time proportional to payload length.
-(3) follows because replay is exact, so distortion vanishes, and only the constant description length remains.
-\end{proof}
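-
-The threshold in item (2) can be made explicit. If applying a single tick patch consumes at least one
-unit of the time budget, then replaying $P=(\mu_0,\ldots,\mu_{n-1})$ consumes at least $n$ units; hence,
-in the regime where replay is the only available boundary-to-bulk translator,
-\[
- n>\tau
- \;\Longrightarrow\;
- T_{\mathrm{replay}}\notin\Trans_{\tau,m}(O_{\partial},O_{\mathrm{bulk}})
- \;\Longrightarrow\;
- \vec{D}_{\tau,m}(O_{\partial}\!\to\!O_{\mathrm{bulk}})=+\infty .
-\]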
-
-\begin{remark}[Interpretation]
-At unbounded resources, boundary and bulk descriptions may be close in rulial distance.
-At bounded resources, they can be infinitely far.
-This captures an engineering reality: a short programme can still be computationally infeasible under strict budgets.
-In particular, under the two-plane \WARP\ semantics (Paper~II), a state decomposes into a skeleton together with recursively
-attached sub-states; translating a boundary observer into a bulk observer amounts to \emph{expanding} these attachment fibres
-across the committed tick sequence, work that can be infeasible under strict $(\tau,m)$ budgets.
-Wormholes (Paper~III) are precisely provenance-preserving compressions of multi-tick segments into single labelled edges; a bulk observer ``sees inside''
-only by replaying (and hence expanding) the corresponding sub-payload.
-\end{remark}
-
-\sectionbreak
-
-% ============================================================
-\section{Multiway Systems, the Ruliad, and Observer Geometry}
-\label{sec:multiway}
-
-We treat the Ruliad connection with full mathematical detail, since later papers in the series
-move from mathematics to ethics and architecture.
-
-\subsection[Multiway space induced by warp rewriting]{Multiway space induced by \textnormal{\textsc{Warp}} rewriting}
-\label{subsec:multiway-warps}
-
-A rule pack $R$ induces a multiway graph $\MW(\mathcal{U},R)$.
-The determinism discipline of Paper~II does \emph{not} remove branching from the underlying multiway space;
-rather, it ensures that once a boundary encoding is fixed (initial state, rule pack, scheduler policy, and tie-breaks),
-the realised evolution is a unique path.
-
-\begin{figure}[t]
- \centering
- \begin{tikzpicture}[
- state/.style={circle,draw=black,thick,minimum size=5mm,inner sep=0pt},
- ghost/.style={circle,draw=black!40,thick,minimum size=5mm,inner sep=0pt},
- arrow/.style={-Latex,thick,draw=black!40},
- detarrow/.style={-Latex,very thick,draw=black},
- >=Latex
- ]
-
- % Initial state
- \node[state] (S0) at (0,0) {$S_0$};
-
- % Level 1
- \node[ghost] (A1) at (-1.5,1.6) {};
- \node[state] (A2) at (0,1.6) {};
- \node[ghost] (A3) at (1.5,1.6) {};
-
- \draw[arrow] (S0) -- (A1);
- \draw[detarrow] (S0) -- (A2);
- \draw[arrow] (S0) -- (A3);
-
- % Level 2
- \node[ghost] (B1) at (-2.6,3.2) {};
- \node[ghost] (B2) at (-1.5,3.2) {};
- \node[ghost] (B3) at (-0.5,3.2) {};
- \node[state] (B4) at (0.6,3.2) {};
- \node[ghost] (B5) at (1.6,3.2) {};
- \node[ghost] (B6) at (2.6,3.2) {};
-
- \draw[arrow] (A1) -- (B1);
- \draw[arrow] (A1) -- (B2);
- \draw[arrow] (A2) -- (B3);
- \draw[detarrow] (A2) -- (B4);
- \draw[arrow] (A3) -- (B5);
- \draw[arrow] (A3) -- (B6);
-
- % Level 3 merge points
- \node[ghost] (C1) at (-1.0,4.8) {};
- \node[state] (C2) at (0.6,4.8) {};
- \node[ghost] (C3) at (2.0,4.8) {};
-
- \draw[arrow] (B2) -- (C1);
- \draw[arrow] (B3) -- (C1);
- \draw[detarrow] (B4) -- (C2);
- \draw[arrow] (B5) -- (C3);
- \draw[arrow] (B6) -- (C3);
-
- % Annotation
- \node[anchor=west,align=left] at (3.5,2.35)
- {\scriptsize multiway space:\\[-1pt]
- \scriptsize all possible rewrites};
- \node[anchor=west,align=left] at (3.5,1.05)
- {\scriptsize deterministic worldline:\\[-1pt]
- \scriptsize unique path for fixed\\[-1pt]
- \scriptsize boundary data};
-
- \end{tikzpicture}
- \caption{A deterministic worldline (thick) through the multiway space of all possible \WARP\ rewrites.
- Fixing the rule pack, initial state, and scheduling/tie-break data selects a unique path; alternative branches
- represent different matches, schedules, or rule-pack choices.}
- \label{fig:multiway-slice}
-\end{figure}
-
-\begin{remark}[Confluence vs determinism]
-Confluence is a property of a rewrite system: different rewrite orders lead to a common result.
-Determinism in Paper~II is stronger and more operational: given fixed boundary data, there is a unique committed tick outcome.
-The multiway graph still exists as the ambient possibility space in which observers may reason about counterfactuals.
-\end{remark}
-
-\subsection{The Ruliad as a large history space}
-\label{subsec:ruliad}
-
-Wolfram's Ruliad is informally the limit of all possible computations; in our setting it is natural to model it
-as a large history space built from multiway systems~\cite{Wolfram2020}.
-
-\begin{definition}[Aion/Ruliad history space]\label{def:ruliad}
-Fix a class $\mathfrak{R}$ of admissible rule packs and a class $\mathfrak{U}$ of admissible initial states.
-Define the \emph{Aion history space} (the \emph{Ruliad} in our setting) as the disjoint union of history categories
-\[
- \Ruliad \;:=\; \bigsqcup_{(U_0,R)\in\mathfrak{U}\times\mathfrak{R}} \Hist(\mathcal{U}_{U_0,R},R),
-\]
-where $\mathcal{U}_{U_0,R}$ is the forward closure of $U_0$ under $R$ (the reachable states).
-\end{definition}
-
-\begin{remark}[Large-category caveat]
-$\Ruliad$ is a large category (indeed a proper class in many settings).
-The disjoint union is deliberate: we treat histories as provenance-bearing artefacts, so components are not quotiented by extensional
-state equality.
-In particular, even if two reachable states are \emph{identical} as graph-shaped data, we keep them as distinct objects of $\Ruliad$
-when they arise from different causal origins (different initial states and/or rule packs).
-This contrasts with a ``merging'' view of the Ruliad that identifies states across components and thereby erases origin information.
-We use it as a conceptual container: the purpose is to make explicit that a single deterministic worldline is a small, selected path
-within a vastly larger possibility space.
-None of the metric arguments in \S\ref{sec:rulial} require manipulating $\Ruliad$ as a set-theoretic object.
-\end{remark}
-
-\subsection{Chronos, Kairos, Aion}
-\label{subsec:chronos-kairos-aion}
-
-We formalise the three-layer time model alluded to in earlier drafts.
-
-\begin{definition}[Chronos]\label{def:chronos}
-\emph{Chronos time} is the linear time of a fixed worldline:
-given a replayable payload $P=(\mu_0,\ldots,\mu_{n-1})$ of \emph{tick patches}, Chronos is the finite linear order
-$0<1<\cdots<n$ of committed tick indices.
-\end{definition}
-
-\begin{itemize}
-\item the forgetful projection from bulk to boundary is admissible for any budgets ($\tau,m>0$), so
-\[
-\vec{D}_{\tau,m}(O_{\mathrm{bulk}} \to O_\partial) = O(1).
-\]
-\end{itemize}
-
-For unbounded budgets $(\tau,m)=(\infty,\infty)$, replay is admissible and exact, so both directed
-costs are $O(1)$.
-Here the hidden constant is independent of $|P|$ (history length) once the fixed semantics $\Apply$ and the translator coding scheme are chosen.
-
-\subsection*{A.4 Symmetrised distance}
-
-The symmetrised rulial distance is
-\[
-D_{\tau,m}(O_\partial,O_{\mathrm{bulk}})
-= \vec{D}_{\tau,m}(O_\partial \to O_{\mathrm{bulk}})
-+ \vec{D}_{\tau,m}(O_{\mathrm{bulk}} \to O_\partial).
-\]
-
-Thus:
-\begin{itemize}
-\item under strict budgets,
-$D_{\tau,m}(O_\partial,O_{\mathrm{bulk}})=+\infty$; and
-\item under relaxed budgets,
-$D_{\infty,\infty}(O_\partial,O_{\mathrm{bulk}})=O(1)$.
-\end{itemize}
-
-\subsection*{A.5 Interpretation}
-
-This example illustrates the operational use of rulial distance. In practice we do not compute the
-infimum in Definition~\ref{def:rulial} directly; rather, we construct explicit translators and thereby obtain
-concrete upper bounds. Improving translators reduces these bounds, whereas strong summarisation
-or information hiding can increase them---in the limiting case, to $+\infty$. The geometry therefore captures
-how observer separation depends on available resources rather than on semantic disagreement.
-
-\clearpage
-\addcontentsline{toc}{section}{References}
-\bibliographystyle{alphaurl}
-\bibliography{refs}
-
-\end{document}
diff --git a/docs/archive/study/refs.bib b/docs/archive/study/refs.bib
deleted file mode 100644
index 42a45ae6..00000000
--- a/docs/archive/study/refs.bib
+++ /dev/null
@@ -1,111 +0,0 @@
-@article{Bennett98,
- author = {Bennett, Charles H. and G{\'a}cs, P{\'e}ter and Li, Ming and Vit{\'a}nyi, Paul M. B. and Zurek, Wojciech H.},
- title = {Information Distance},
- journal = {IEEE Transactions on Information Theory},
- volume = {44},
- number = {4},
- pages = {1407--1423},
- year = {1998},
- doi = {10.1109/18.672557}
-}
-
-@article{CD11,
- author = {Coecke, Bob and Duncan, Ross},
- title = {Interacting quantum observables: Categorical algebra and diagrammatic reasoning},
- journal = {New Journal of Physics},
- volume = {13},
- number = {4},
- pages = {043016},
- year = {2011},
- doi = {10.1088/1367-2630/13/4/043016}
-}
-
-@article{CES86,
- author = {Clarke, Edmund M. and Emerson, E. Allen and Sistla, A. Prasad},
- title = {Automatic verification of finite-state concurrent systems using temporal logic specifications},
- journal = {ACM Transactions on Programming Languages and Systems},
- volume = {8},
- number = {2},
- pages = {244--263},
- year = {1986},
- doi = {10.1145/5397.5399}
-}
-
-@book{Kelly82,
- author = {Kelly, G. M.},
- title = {Basic Concepts of Enriched Category Theory},
- publisher = {Cambridge University Press},
- year = {1982},
- isbn = {9780521282648}
-}
-
-@article{Lawvere73,
- author = {Lawvere, F. William},
- title = {Metric spaces, generalized logic, and closed categories},
- journal = {Rendiconti del Seminario Matematico e Fisico di Milano},
- volume = {43},
- pages = {135--166},
- year = {1973}
-}
-
-@book{LiVitanyi2019,
- author = {Li, Ming and Vit{\'a}nyi, Paul M. B.},
- title = {An Introduction to Kolmogorov Complexity and Its Applications},
- edition = {4},
- publisher = {Springer},
- year = {2019},
- isbn = {978-3-030-10664-8}
-}
-
-@inproceedings{Pnu77,
- author = {Pnueli, Amir},
- title = {The temporal logic of programs},
- booktitle = {Proceedings of the 18th Annual Symposium on Foundations of Computer Science},
- pages = {46--57},
- year = {1977},
- organization = {IEEE}
-}
-
-@article{Ris78,
- author = {Rissanen, Jorma},
- title = {Modeling by shortest data description},
- journal = {Automatica},
- volume = {14},
- number = {5},
- pages = {465--471},
- year = {1978}
-}
-
-@misc{Ros25a,
- author = {Ross, James},
- title = {{WARP Graphs: A Worldline Algebra for Recursive Provenance}},
- howpublished = {AI$\Omega$N Foundations Series --- Paper I},
- month = {December},
- year = {2025},
- doi = {10.5281/zenodo.17908005},
- note = {Version cited: December 2025 PDF.}
-}
-
-@misc{Ros25b,
- author = {Ross, James},
- title = {{Deterministic Multiway Rewriting and Tick-Based Semantics}},
- howpublished = {AI$\Omega$N Foundations Series --- Paper II},
- month = {December},
- year = {2025},
- doi = {10.5281/zenodo.17934512}
-}
-
-@misc{Ros25c,
- author = {Ross, James},
- title = {{Computation Holography and Boundary Provenance Payloads}},
- howpublished = {AI$\Omega$N Foundations Series --- Paper III},
- month = {December},
- year = {2025},
- doi = {10.5281/zenodo.17963669}
-}
-
-@misc{Wolfram2020,
- author = {Wolfram, Stephen},
- title = {The Ruliad and the Wolfram Physics Project},
- year = {2020},
- note = {Available at \url{https://www.wolframphysics.org}}
-}
diff --git a/docs/archive/study/render-tour-diagrams.py b/docs/archive/study/render-tour-diagrams.py
deleted file mode 100644
index 4aa65951..00000000
--- a/docs/archive/study/render-tour-diagrams.py
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: Apache-2.0
-# © James Ross Ω FLYING•ROBOTS
-"""
-Extract mermaid blocks from what-makes-echo-tick-tour.md,
-render them to SVG, and update the markdown to reference the SVGs.
-"""
-
-import re
-import subprocess
-import sys
-from pathlib import Path
-
-STUDY_DIR = Path(__file__).parent
-DIAGRAMS_DIR = STUDY_DIR / "tour-diagrams"
-INPUT_MD = STUDY_DIR / "what-makes-echo-tick-tour.md"
-
-
-def extract_mermaid_blocks(content: str) -> list[tuple[int, int, str]]:
- """Extract (start, end, code) tuples for all mermaid blocks."""
- pattern = r'```mermaid\n(.*?)```'
- results = []
- for match in re.finditer(pattern, content, re.DOTALL):
- results.append((match.start(), match.end(), match.group(1).strip()))
- return results
-
-
-def render_mermaid_to_svg(diagram_id: str, mermaid_code: str) -> Path | None:
- """Render mermaid code to SVG. Returns path to SVG or None on failure."""
- DIAGRAMS_DIR.mkdir(parents=True, exist_ok=True)
-
- mmd_file = DIAGRAMS_DIR / f"{diagram_id}.mmd"
- svg_file = DIAGRAMS_DIR / f"{diagram_id}.svg"
-
- mmd_file.write_text(mermaid_code)
-
- try:
- result = subprocess.run(
- ["mmdc", "-i", str(mmd_file), "-o", str(svg_file), "-b", "transparent"],
- capture_output=True,
- text=True,
- timeout=30
- )
- if result.returncode != 0:
- print(f" mmdc failed for {diagram_id}: {result.stderr}", file=sys.stderr)
- return None
- except subprocess.TimeoutExpired:
- print(f" mmdc timeout for {diagram_id}", file=sys.stderr)
- return None
- except FileNotFoundError:
- print(" mmdc not found - install with: npm install -g @mermaid-js/mermaid-cli", file=sys.stderr)
- return None
-
- if svg_file.exists():
- return svg_file
- return None
-
-
-def main():
- print("=== Rendering Tour Diagrams ===\n")
-
- content = INPUT_MD.read_text()
- blocks = extract_mermaid_blocks(content)
-
- print(f"Found {len(blocks)} mermaid diagrams")
-
- # Process in reverse order to preserve string positions
- for i, (start, end, code) in enumerate(reversed(blocks), 1):
- diagram_num = len(blocks) - i + 1
- diagram_id = f"tour-{diagram_num:02d}"
-
- print(f" Converting {diagram_id}...", end=" ")
-
- svg_path = render_mermaid_to_svg(diagram_id, code)
- if svg_path:
- # Replace mermaid block with image reference
- # Use relative path from study dir
- img_ref = f"![{diagram_id}](tour-diagrams/{diagram_id}.svg)"
- content = content[:start] + img_ref + content[end:]
- print("OK")
- else:
- print("FAILED")
-
- # Write updated markdown
- INPUT_MD.write_text(content)
- print(f"\nUpdated {INPUT_MD.name} with SVG references")
- print(f"Diagrams saved to {DIAGRAMS_DIR}")
-
-
-if __name__ == "__main__":
- main()
diff --git a/docs/archive/study/what-makes-echo-tick-processed.md b/docs/archive/study/what-makes-echo-tick-processed.md
deleted file mode 100644
index 3a3e092f..00000000
--- a/docs/archive/study/what-makes-echo-tick-processed.md
+++ /dev/null
@@ -1,1121 +0,0 @@
-
-
-
-# What Makes Echo Tick?
-
-> **Your Tour Guide**: Claude (Opus 4.5)
->
-> Welcome! I've been asked to give you a personal tour through Echo's internals. This isn't just documentation—I'll share what I find elegant, surprising, and occasionally baffling about this codebase. When you see a red-outlined box, that's me stepping out of "narrator mode" to give you my unfiltered take.
->
-> **Reading Time**: ~45 minutes for complete understanding.
-
----
-
-## Table of Contents
-
-1. [Philosophy: Why Echo Exists](#1-philosophy-why-echo-exists)
-2. [The Big Picture: Architecture Overview](#2-the-big-picture-architecture-overview)
-3. [Core Concepts: The WARP Graph](#3-core-concepts-the-warp-graph)
-4. [The Engine: Heart of Echo](#4-the-engine-heart-of-echo)
-5. [The Tick Pipeline: Where Everything Happens](#5-the-tick-pipeline-where-everything-happens)
-6. [Parallel Execution: BOAW (Bag of Autonomous Workers)](#6-parallel-execution-boaw-bag-of-autonomous-workers)
-7. [Storage & Hashing: Content-Addressed Truth](#7-storage--hashing-content-addressed-truth)
-8. [Worked Example: Tracing a Link Click](#8-worked-example-tracing-a-link-click)
-9. [The Viewer: Observing Echo](#9-the-viewer-observing-echo)
-10. [Glossary](#10-glossary)
-
----
-
-## 1. Philosophy: Why Echo Exists
-
-### 1.1 The Problem
-
-Traditional game engines and simulations treat state as **mutable objects**. This creates fundamental problems:
-
-- **Replay is hard**: You can't just "rewind" because state changes are scattered and untracked.
-- **Synchronization is fragile**: Two machines running the same logic may diverge due to floating-point differences, thread timing, or iteration order.
-- **Debugging is a nightmare**: "It worked on my machine" is the symptom of non-determinism.
-- **Branching is impossible**: You can't easily ask "what if?" without copying everything.
-
-\begin{claudecommentary}
-**Claude's Take**: This list of problems isn't theoretical. I've seen countless debugging sessions where the root cause was "HashMap iteration order changed between runs." Echo's designers clearly got burned by non-determinism at some point and decided: _never again_.
-
-What strikes me most is the last point—"branching is impossible." Most engines don't even _try_ to support branching because it seems like a feature for version control, not runtime systems. Echo treats it as a first-class concern. That's unusual and, I think, genuinely forward-thinking.
-\end{claudecommentary}
-
-### 1.2 Echo's Answer
-
-Echo treats **state as a typed graph** and **all changes as rewrites**. Each "tick" of the engine:
-
-1. Proposes a set of rewrites
-2. Executes them in **deterministic order**
-3. Emits **cryptographic hashes** of the resulting state
-
-This means:
-
-- **Same inputs → Same outputs** (always, on any machine)
-- **State is verifiable** (hashes prove correctness)
-- **Replay is trivial** (patches are prescriptive)
-- **Branching is free** (copy-on-write snapshots)
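
The "same inputs → same outputs" claim can be sketched in a few lines. This toy model is illustrative and is not Echo's actual API: SHA-256 stands in for the engine's real content hash, and a sorted list of tuples stands in for the real rewrite scheduler.

```python
# Toy model of a deterministic tick: NOT Echo's API, just the shape of the idea.
# SHA-256 stands in for the engine's real content hash.
import hashlib
import json

def tick(state: dict, proposed: list[tuple[str, str, int]]) -> tuple[dict, str]:
    """Apply rewrites in canonical order and hash the result."""
    # 1. Deterministic order: sort the proposals, never trust arrival order.
    for op, key, value in sorted(proposed):
        if op == "set":
            state = {**state, key: value}  # copy-on-write flavour
    # 2. Canonical serialization (sorted keys) -> the same hash on any machine.
    digest = hashlib.sha256(
        json.dumps(state, sort_keys=True).encode()
    ).hexdigest()
    return state, digest

a = [("set", "hp", 10), ("set", "xp", 3)]
b = list(reversed(a))  # same rewrites, different arrival order
_, h1 = tick({}, a)
_, h2 = tick({}, b)
assert h1 == h2  # same inputs -> same hash, regardless of proposal order
```

The point of the sketch is the two numbered steps: canonical ordering before mutation, canonical serialization before hashing. Remove either and determinism quietly dies.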
-
-### 1.3 Core Design Principles
-
-```text
-┌─────────────────────────────────────────────────────────────────┐
-│ ECHO'S THREE PILLARS │
-├─────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
-│ │ DETERMINISM │ │ PROVENANCE │ │ TOOLING │ │
-│ │ FIRST │ │ YOU CAN │ │ AS FIRST │ │
-│ │ │ │ TRUST │ │ CLASS │ │
-│ ├─────────────────┤ ├─────────────────┤ ├─────────────────┤ │
-│ │ Same inputs │ │ Snapshots are │ │ Graphs stream │ │
-│ │ always produce │ │ content- │ │ over canonical │ │
-│ │ same hashes │ │ addressed │ │ wire protocol │ │
-│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────┘
-```
-
-\begin{claudecommentary}
-**Claude's Take**: "Tooling as first-class" is the sleeper here. Most engines treat debugging tools, replay systems, and visualization as afterthoughts—bolted on after the core is done. Echo inverts this: the wire protocol, the hash scheme, and the diff format were designed _so that tools could exist_.
-
-I've read a lot of engine architectures. This level of intentionality about tooling is rare. It's also why Echo can have a separate `warp-viewer` crate that just... works, instead of requiring heroic reverse-engineering.
-\end{claudecommentary}
-
----
-
-## 2. The Big Picture: Architecture Overview
-
-### 2.1 System Layers
-
-Echo is organized into distinct layers, each with a specific responsibility:
-
-
-
-\begin{claudecommentary}
-**Claude's Take**: This is a _clean_ layer cake. Each layer only talks to its neighbors. No "Layer 5 reaching down to Layer 1 for performance reasons." That discipline is hard to maintain, and I respect it.
-
-The `WSC Format` at Layer 2 caught my eye. It's Echo's custom columnar storage format—and before you ask "why not just use Arrow or Parquet?"—I'll spoil it: WSC is designed for mmap-friendly, zero-copy reads where every row is 8-byte aligned and you can binary-search directly into the file. It's specialized for _exactly this use case_. Sometimes NIH syndrome is justified.
-\end{claudecommentary}
-
-### 2.2 Crate Map
-
-| Crate | Purpose |
-| ---------------------- | ---------------------------------------------- |
-| `warp-core` | The deterministic rewrite engine (the "brain") |
-| `echo-graph` | Renderable graph types + diff operations |
-| `echo-session-proto` | Wire protocol (canonical CBOR framing) |
-| `echo-session-service` | Headless Unix-socket hub for tools |
-| `echo-session-client` | Client helpers for connecting to the hub |
-| `warp-viewer` | Native WGPU viewer for visualizing graphs |
-
-### 2.3 Data Flow Overview
-
-
-
-\begin{claudecommentary}
-**Claude's Take**: Notice how the Engine talks to itself multiple times before touching the Store? That's the commit protocol at work. The Engine is _paranoid_ about mutations—it queues up intentions, validates them, and only then touches state. If you're used to "just mutate it directly" game engines, this will feel ceremonial. The ceremony is the point.
-\end{claudecommentary}
-
----
-
-## 3. Core Concepts: The WARP Graph
-
-### 3.1 What is a WARP Graph?
-
-A WARP (**W**orldline **A**lgebra for **R**ecursive **P**rovenance) graph is Echo's fundamental data structure. It's not just a graph—it's a graph with **deterministic semantics**.
-
-
-
-\begin{claudecommentary}
-**Claude's Take**: The name "WARP" is doing a lot of work here. "Worldline" evokes physics—specifically, the path an object traces through spacetime. In Echo, a node's "worldline" is its history of states across ticks. "Recursive Provenance" means you can always ask "where did this value come from?" and trace it back through the graph's history.
-
-Is the name a bit grandiose for what amounts to "typed graph with audit trail"? Maybe. But I've seen worse acronyms in this industry.
-\end{claudecommentary}
-
-### 3.2 Two-Plane Architecture
-
-Echo separates structure from data via the **Two-Plane Model** (ADR-0001):
-
-| Plane | Contains | Purpose |
-| ------------------ | ------------------------- | ------------------------------------- |
-| **Skeleton** | Nodes + Edges (structure) | Fast traversal, deterministic hashing |
-| **Attachment (α)** | Typed payloads | Domain-specific data |
-
-**Why separate them?**
-
-```text
-┌────────────────────────────────────────────────────────────────────┐
-│ SKELETON PLANE (Structure) │
-│ │
-│ ┌─────┐ edge:link ┌─────┐ │
-│ │ N1 │─────────────────▶│ N2 │ │
-│ └─────┘ └─────┘ │
-│ │ │ │
-│ │ edge:child │ edge:ref │
-│ ▼ ▼ │
-│ ┌─────┐◀─────────────────────┘ │
-│ │ N3 │ │
-│ └─────┘ │
-│ │
-├────────────────────────────────────────────────────────────────────┤
-│ ATTACHMENT PLANE (Payloads) │
-│ │
-│ N1.α["title"] = Atom { type: "string", bytes: "Home" } │
-│ N2.α["url"] = Atom { type: "string", bytes: "/page/b" } │
-│ N3.α["body"] = Atom { type: "html", bytes: "..." } │
-│ │
-└────────────────────────────────────────────────────────────────────┘
-```
-
-**Key insight**: Skeleton rewrites **never decode attachments**. This keeps the hot path fast and deterministic.
-
-\begin{claudecommentary}
-**Claude's Take**: This is where Echo gets clever. The Skeleton plane only contains node IDs, edge IDs, and type tags—all fixed-size, all byte-comparable. You can compute the entire state hash without ever deserializing a single JSON blob, HTML string, or texture.
-
-The Attachment plane (they call it "α" because of course they do) holds the actual domain data. It participates in hashing but doesn't affect traversal. This separation means you can have a 10MB texture attached to a node and still iterate the graph at full speed.
-
-I've seen similar ideas in ECS architectures, but usually the separation is "components vs. systems." Echo's split is "structure vs. data," which is subtly different and, I think, more principled.
-\end{claudecommentary}
-
-### 3.3 Node and Edge Identity
-
-Every node and edge has a **32-byte identifier**:
-
-```rust
-pub struct NodeId([u8; 32]); // Content-addressed or assigned
-pub struct EdgeId([u8; 32]); // Unique edge identifier
-```
-
-These IDs are:
-
-- **Deterministic**: Same content → same ID (when content-addressed)
-- **Sortable**: Lexicographic ordering enables deterministic iteration
-- **Hashable**: Participate in state root computation
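-
-A minimal sketch of these properties with a stand-in type (the real `NodeId` lives in `warp-core`; the derives below are an assumption about its shape). Deriving `Ord` on `[u8; 32]` gives exactly the byte-lexicographic ordering the bullets describe:
-
-```rust
-// Stand-in for Echo's NodeId: Ord on [u8; 32] is element-wise,
-// i.e. lexicographic byte order -- deterministic on every platform.
-#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
-pub struct NodeId(pub [u8; 32]);
-
-pub fn sorted_ids(mut ids: Vec<NodeId>) -> Vec<NodeId> {
-    // Same input set always yields the same iteration order.
-    ids.sort();
-    ids
-}
-```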
-
-### 3.4 WarpInstances: Graphs Within Graphs
-
-Echo supports **descended attachments**—embedding entire graphs within attachment slots:
-
-
-
-This enables "WARPs all the way down"—recursive composition while maintaining determinism.
-
-\begin{claudecommentary}
-**Claude's Take**: WarpInstances are _wild_. You can have a node whose attachment slot contains... another entire graph. And that graph can have nodes whose attachment slots contain... more graphs. It's turtles, but the turtles are graphs.
-
-Why would you want this? Think of a game with procedurally generated dungeons. Each dungeon could be its own WarpInstance, loaded on demand, with its own tick history and state root. The player character is in the "outer" instance; stepping through a portal descends into the "inner" one.
-
-I don't know if Echo actually uses this feature yet, but the architecture supports it cleanly. That's design for the future without overengineering the present.
-\end{claudecommentary}
-
----
-
-## 4. The Engine: Heart of Echo
-
-### 4.1 The Engine Struct
-
-The `Engine` is Echo's central orchestrator. Located in `crates/warp-core/src/engine_impl.rs`:
-
-```rust
-pub struct Engine {
- state: WarpState, // Multi-instance graph state
-    rules: HashMap<RuleId, RewriteRule>,  // Registered rewrite rules
- scheduler: DeterministicScheduler, // Deterministic ordering
- bus: MaterializationBus, // Output channels
- history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>,
- tx_counter: u64, // Transaction counter
-    live_txs: BTreeSet<TxId>,          // Active transactions
- // ... more fields
-}
-```
-
-\begin{claudecommentary}
-**Claude's Take**: A few things jump out here:
-
-1. **`rules: HashMap`** — Wait, HashMap? Isn't that non-deterministic? It is! But notice: this is for _looking up_ rules by ID, not for _iterating_. The iteration order is determined by the `scheduler`, which is explicitly deterministic. The HashMap is fine because rule IDs are stable.
-
-2. **`history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>`** — The engine keeps its entire history in memory? That seems expensive. I suspect this is configurable, or there's a garbage collection pass I haven't found yet. For long-running simulations, unbounded history would be a problem.
-
-3. **`BTreeSet` for live transactions** — BTreeSet, not HashSet. They're _really_ committed to determinism. Even the set of "which transactions are in-flight" is stored in sorted order.
-\end{claudecommentary}
-
-### 4.2 Construction
-
-The engine is built via the `EngineBuilder`:
-
-```rust
-let engine = EngineBuilder::new(store, root_node_id)
- .with_policy_id(1)
- .with_telemetry(telemetry)
- .build();
-```
-
-**What happens during construction:**
-
-
-
-### 4.3 Rewrite Rules
-
-Rules are the atoms of change in Echo. Each rule has three functions:
-
-```rust
-pub struct RewriteRule {
- pub name: String,
- pub matcher: MatchFn, // Does this rule apply?
- pub executor: ExecuteFn, // What changes to make
- pub footprint: FootprintFn, // What resources are touched
- pub policy: ConflictPolicy, // What to do on conflict
-}
-
-// Function signatures (Phase 5 BOAW model):
-type MatchFn = fn(GraphView, &NodeId) -> bool;
-type ExecuteFn = fn(GraphView, &NodeId, &mut TickDelta);
-type FootprintFn = fn(GraphView, &NodeId) -> Footprint;
-```
-
-**Critical constraint**: Executors receive a **read-only** `GraphView` and emit changes to a `TickDelta`. They **never** mutate the graph directly.
-
-\begin{claudecommentary}
-**Claude's Take**: The `FootprintFn` is the secret sauce. Before executing a rule, Echo calls this function to ask: "What nodes, edges, and attachments will you touch?" The footprint is a _conservative estimate_—you must declare everything you _might_ read or write.
-
-This enables Echo's parallel execution model. If two rules have non-overlapping footprints, they can execute in parallel, in any order, and the result is guaranteed identical. If footprints overlap, they're sequenced deterministically.
-
-The burden on the rule author is significant: you must declare your footprint accurately, or you'll get either conflicts (declared overlap when there was none) or silent bugs (undeclared overlap that corrupts state). This is a sharp edge in the API.
-\end{claudecommentary}
-
-### 4.4 GraphView: Read-Only Access
-
-The `GraphView` enforces BOAW's immutability contract:
-
-```rust
-pub struct GraphView<'a> {
- store: &'a GraphStore,
- warp_id: WarpId,
-}
-
-impl<'a> GraphView<'a> {
- pub fn node(&self, id: &NodeId) -> Option<&NodeRecord>;
-    pub fn edges_from(&self, id: &NodeId) -> impl Iterator<Item = &EdgeRecord>;
- pub fn node_attachment(&self, id: &NodeId, key: &str) -> Option<&AttachmentValue>;
- // ... read-only methods only
-}
-```
-
-**No `DerefMut`, no `AsRef`, no interior mutability.** This is enforced at the type level.
-
-\begin{claudecommentary}
-**Claude's Take**: I went looking for escape hatches here. `RefCell`? No. `UnsafeCell`? No. `Arc>`? No. The `GraphView` is genuinely immutable by construction.
-
-This is Rust at its best: the borrow checker prevents you from shooting yourself in the foot. In C++, you'd need discipline and code review to enforce "executors don't mutate the graph." In Rust, it's just... not possible. The types don't allow it.
-\end{claudecommentary}
-
----
-
-## 5. The Tick Pipeline: Where Everything Happens
-
-### 5.1 Overview
-
-A "tick" is one complete cycle of the engine. It has five phases:
-
-
-
-\begin{claudecommentary}
-**Claude's Take**: The "Commit" phase has five sub-steps. _Five_. This is where I started to appreciate how much thought went into this system. Let me summarize what each does:
-
-1. **Drain**: Pull all pending rewrites from the scheduler in canonical order
-2. **Reserve**: Check footprints for conflicts, accept or reject each rewrite
-3. **Execute**: Run the accepted rewrites (this is where parallelism happens)
-4. **Merge**: Combine all `TickDelta` outputs into a single canonical operation list
-5. **Finalize**: Apply the merged operations to produce the new state
-
-The reservation phase is particularly clever. It's like a two-phase commit: first you "reserve" your footprint (claim your lock), then you execute. If your footprint conflicts with an already-reserved footprint, you're rejected. No execution happens until all accepted rewrites have been validated.
-\end{claudecommentary}
-
-### 5.2 Phase 1: Begin Transaction
-
-```rust
-let tx = engine.begin();
-```
-
-**What happens:**
-
-1. Increment `tx_counter` (wrapping to avoid 0)
-2. Add `TxId` to `live_txs` set
-3. Return opaque transaction identifier
-
-```text
-┌─────────────────────────────────────────────────┐
-│ engine.begin() │
-├─────────────────────────────────────────────────┤
-│ tx_counter: 0 → 1 │
-│ live_txs: {} → {TxId(1)} │
-│ returns: TxId(1) │
-└─────────────────────────────────────────────────┘
-```
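-
-The counter discipline above can be sketched as follows (`TxTable` and its field names are illustrative, not Echo's actual internals; the wrap-and-skip keeps `TxId(0)` reserved):
-
-```rust
-use std::collections::BTreeSet;
-
-#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
-pub struct TxId(pub u64);
-
-pub struct TxTable {
-    pub tx_counter: u64,
-    pub live_txs: BTreeSet<TxId>,
-}
-
-impl TxTable {
-    pub fn begin(&mut self) -> TxId {
-        // Wrapping increment that skips 0, so TxId(0) stays unused.
-        self.tx_counter = self.tx_counter.wrapping_add(1);
-        if self.tx_counter == 0 {
-            self.tx_counter = 1;
-        }
-        let tx = TxId(self.tx_counter);
-        self.live_txs.insert(tx);
-        tx
-    }
-}
-```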
-
-### 5.3 Phase 2: Apply Rules
-
-```rust
-engine.apply(tx, "rule_name", &scope_node_id);
-```
-
-**What happens:**
-
-
-
-**The Footprint**: A declaration of what resources the rule will read and write:
-
-```rust
-pub struct Footprint {
-    pub n_read: BTreeSet<NodeId>,         // Nodes to read
-    pub n_write: BTreeSet<NodeId>,        // Nodes to write
-    pub e_read: BTreeSet<EdgeId>,         // Edges to read
-    pub e_write: BTreeSet<EdgeId>,        // Edges to write
-    pub a_read: BTreeSet<AttachmentKey>,  // Attachments to read
-    pub a_write: BTreeSet<AttachmentKey>, // Attachments to write
- // ... ports, factor_mask
-}
-```
-
-**Scheduler deduplication**: If the same `(scope_hash, rule_id)` is applied multiple times, **last wins**. This enables idempotent retry semantics.
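-
-A sketch of that last-wins behavior, keyed on `(scope_hash, rule_id)` (names here are illustrative, not Echo's actual scheduler API):
-
-```rust
-use std::collections::BTreeMap;
-
-type ScopeHash = [u8; 32];
-type RuleId = u32;
-
-#[derive(Default)]
-pub struct PendingQueue {
-    // Re-applying the same (scope, rule) replaces the earlier entry:
-    // last application wins, so retries are idempotent.
-    pub by_key: BTreeMap<(ScopeHash, RuleId), u64>, // value: nonce of last apply
-}
-
-impl PendingQueue {
-    pub fn apply(&mut self, scope: ScopeHash, rule: RuleId, nonce: u64) {
-        self.by_key.insert((scope, rule), nonce);
-    }
-}
-```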
-
-### 5.4 Phase 3: Commit (The Heart of Determinism)
-
-```rust
-let (snapshot, receipt, patch) = engine.commit_with_receipt(tx);
-```
-
-This is where Echo's magic happens. Let's break it down:
-
-#### 5.4.1 Drain
-
-The scheduler drains all pending rewrites in **canonical order**:
-
-```rust
-// RadixScheduler uses O(n) LSD radix sort
-// 20 passes: 2 nonce + 2 rule_id + 16 scope_hash (16-bit digits)
-let rewrites = scheduler.drain_for_tx(tx); // Vec<PendingRewrite> in canonical order
-```
-
-**Ordering key**: `(scope_hash[0..32], rule_id, nonce)`
-
-This ensures the **same rewrites always execute in the same order**, regardless of when they were applied.
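-
-The canonical order is equivalent to a plain comparison sort on the composite key; a sketch with illustrative field names (the real scheduler reaches the same order via the 20-pass radix sort):
-
-```rust
-#[derive(Clone, Debug, PartialEq)]
-pub struct Pending {
-    pub scope_hash: [u8; 32],
-    pub rule_id: u32,
-    pub nonce: u32,
-}
-
-pub fn canonical_order(mut rs: Vec<Pending>) -> Vec<Pending> {
-    // (scope_hash, rule_id, nonce), compared lexicographically.
-    rs.sort_by(|a, b| {
-        (a.scope_hash, a.rule_id, a.nonce).cmp(&(b.scope_hash, b.rule_id, b.nonce))
-    });
-    rs
-}
-```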
-
-\begin{claudecommentary}
-**Claude's Take**: Radix sort! They're using radix sort for the scheduler drain. Not quicksort, not merge sort—radix sort.
-
-Why? Because radix sort is _stable_ and _deterministic_ by construction. Quicksort's behavior depends on pivot selection, which can vary. Merge sort is deterministic, but radix sort is faster for fixed-size keys. Since the ordering key is exactly 40 bytes (32-byte scope hash + 4-byte rule ID + 4-byte nonce), radix sort is perfect.
-
-This is the kind of detail that separates "deterministic by accident" from "deterministic by design."
-\end{claudecommentary}
-
-#### 5.4.2 Reserve (Independence Check)
-
-For each rewrite in canonical order:
-
-
-
-**Conflict detection**: Uses `GenSet` for O(1) lookups:
-
-- Read-read overlap: **allowed**
-- Write-write overlap: **conflict**
-- Read-write overlap: **conflict**
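-
-Those three rules reduce to a disjointness test over footprints. A minimal sketch (Echo's `GenSet` gives O(1) membership; `BTreeSet` is used here for brevity, and `Fp` is an illustrative stand-in):
-
-```rust
-use std::collections::BTreeSet;
-
-pub struct Fp {
-    pub reads: BTreeSet<u64>,
-    pub writes: BTreeSet<u64>,
-}
-
-pub fn independent(a: &Fp, b: &Fp) -> bool {
-    // read/read may overlap; any overlap involving a write is a conflict.
-    a.writes.is_disjoint(&b.writes)
-        && a.writes.is_disjoint(&b.reads)
-        && a.reads.is_disjoint(&b.writes)
-}
-```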
-
-#### 5.4.3 Execute (Parallel, Lockless)
-
-Accepted rewrites execute against the **read-only snapshot**:
-
-```rust
-for rewrite in accepted {
- let rule = &rules[rewrite.rule_id];
- let view = GraphView::new(&state, rewrite.warp_id);
-
- // Executor reads from view, emits to delta
- (rule.executor)(view, &rewrite.scope, &mut delta);
-}
-```
-
-**Critical**: `GraphView` is immutable. `TickDelta` accumulates operations:
-
-```rust
-pub struct TickDelta {
- ops: Vec<(WarpOp, OpOrigin)>,
-}
-
-// Operations emitted during execution:
-delta.emit(WarpOp::UpsertNode { id, record });
-delta.emit(WarpOp::UpsertEdge { from, edge });
-delta.emit(WarpOp::DeleteNode { id });
-delta.emit(WarpOp::SetAttachment { node, key, value });
-```
-
-#### 5.4.4 Merge (Canonical Sort)
-
-All operations are sorted into **canonical replay order**:
-
-```rust
-// Sort by (WarpOpKey, OpOrigin)
-ops.sort_by_key(|(op, origin)| (op.sort_key(), origin.clone()));
-
-// Deduplicate identical ops
-// Error on conflicting ops (footprint model violation)
-```
-
-**Conflict handling**: If two rewrites wrote **different values** to the same key, that's a bug in the footprint model. Echo errors loudly.
-
-#### 5.4.5 Finalize
-
-Apply the merged delta to produce the new state:
-
-```rust
-for op in merged_ops {
- match op {
- WarpOp::UpsertNode { id, record } => state.insert_node(id, record),
- WarpOp::UpsertEdge { from, edge } => state.insert_edge(from, edge),
- WarpOp::DeleteNode { id } => state.delete_node_isolated(id)?, // rejects if edges exist
- WarpOp::SetAttachment { node, key, value } => state.set_attachment(node, key, value),
- // ...
- }
-}
-```
-
-### 5.5 Phase 4: Hash Computation
-
-#### State Root (BLAKE3)
-
-The state root is computed via **deterministic BFS** over reachable nodes:
-
-
-
-**Encoding** (architecture-independent):
-
-- All IDs: raw 32 bytes
-- Counts: u64 little-endian
-- Payloads: 1-byte tag + type_id[32] + u64 LE length + bytes
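-
-The payload encoding in the last bullet can be sketched as a byte builder (the function name is illustrative):
-
-```rust
-// 1-byte tag || type_id[32] || u64 LE length || raw bytes.
-pub fn encode_payload(tag: u8, type_id: &[u8; 32], bytes: &[u8]) -> Vec<u8> {
-    let mut out = Vec::with_capacity(1 + 32 + 8 + bytes.len());
-    out.push(tag);
-    out.extend_from_slice(type_id);
-    out.extend_from_slice(&(bytes.len() as u64).to_le_bytes());
-    out.extend_from_slice(bytes);
-    out
-}
-```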
-
-#### Commit Hash (v2)
-
-```rust
-commit_hash = BLAKE3(
- version_tag[4] || // Protocol version
- parents[] || // Parent commit hashes
- state_root[32] || // Graph-only hash
- patch_digest[32] || // Merged ops digest
- policy_id[4] // Policy identifier
-)
-```
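-
-The preimage layout can be sketched as a byte builder. The BLAKE3 call itself is omitted to keep the sketch std-only, and little-endian encoding for `policy_id` is an assumption, following the document's LE convention for counts:
-
-```rust
-pub fn commit_preimage(
-    version_tag: [u8; 4],
-    parents: &[[u8; 32]],
-    state_root: &[u8; 32],
-    patch_digest: &[u8; 32],
-    policy_id: u32,
-) -> Vec<u8> {
-    let mut buf = Vec::new();
-    buf.extend_from_slice(&version_tag);  // protocol version
-    for p in parents {
-        buf.extend_from_slice(p);         // parent commit hashes
-    }
-    buf.extend_from_slice(state_root);    // graph-only hash
-    buf.extend_from_slice(patch_digest);  // merged ops digest
-    buf.extend_from_slice(&policy_id.to_le_bytes());
-    buf // feed this byte string to BLAKE3 to get the commit hash
-}
-```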
-
-\begin{claudecommentary}
-**Claude's Take**: The commit hash includes a `policy_id`. This is subtle but important: two engines with different policies could produce the same state but different commit hashes. Why? Because the _process_ matters, not just the result.
-
-Imagine one policy allows rules to run in parallel; another requires sequential execution. They might produce identical graphs, but the commit hashes differ because the policies differ. This prevents accidentally mixing outputs from incompatible engine configurations.
-
-It's defensive design: "Trust, but verify—and make verification easy."
-\end{claudecommentary}
-
-### 5.6 Phase 5: Record to History
-
-```rust
-history.push((
- Snapshot { hash: commit_hash, state_root, parents, ... },
- TickReceipt { applied, rejected, ... },
- WarpTickPatchV1 { ops, in_slots, out_slots, patch_digest, ... }
-));
-```
-
-The patch is **prescriptive**: it can be replayed without re-matching to reproduce the exact same state.
-
----
-
-## 6. Parallel Execution: BOAW (Bag of Autonomous Workers)
-
-### 6.1 What is BOAW?
-
-BOAW stands for **Bag of Autonomous Workers**. It's Echo's parallel execution architecture that enables:
-
-- **Massive parallelism** without locks
-- **Deterministic convergence** across platforms
-- **Worker-count invariance** (same result with 1 or 32 workers)
-
-### 6.2 The Key Insight
-
-```text
-┌──────────────────────────────────────────────────────────────────┐
-│ THE BOAW INSIGHT │
-├──────────────────────────────────────────────────────────────────┤
-│ │
-│ Traditional parallelism: │
-│ "Make execution order deterministic" → Complex, slow │
-│ │
-│ BOAW parallelism: │
-│ "Let execution order vary, make MERGE deterministic" → Fast! │
-│ │
-│ Workers race freely → Each produces a TickDelta │
-│ Merge step sorts all deltas → Canonical output │
-│ │
-└──────────────────────────────────────────────────────────────────┘
-```
-
-\begin{claudecommentary}
-**Claude's Take**: This is the insight that makes Echo work. Most parallel systems try to _control_ the execution order—barriers, locks, atomic sequences. BOAW says: "Forget it. Let chaos reign during execution. We'll sort it out in the merge."
-
-It's like MapReduce: the map phase runs in any order; the reduce phase (merge) produces the canonical result. But unlike MapReduce, Echo operates on a graph with complex dependencies. The footprint model makes this possible: by declaring what you'll touch before executing, you enable the merge to validate that no conflicts occurred.
-
-If this sounds too good to be true, it mostly is—_if_ you get the footprints wrong. The system is only as deterministic as your footprint declarations. Lie to the footprint system, and you'll get non-determinism.
-\end{claudecommentary}
-
-### 6.3 Execution Strategies
-
-#### Phase 6A: Stride Partitioning (Legacy)
-
-```text
-Worker 0: items[0], items[4], items[8], ...
-Worker 1: items[1], items[5], items[9], ...
-Worker 2: items[2], items[6], items[10], ...
-Worker 3: items[3], items[7], items[11], ...
-```
-
-**Problem**: Poor cache locality—related items scatter across workers.
-
-#### Phase 6B: Virtual Shards (Current Default)
-
-```rust
-const NUM_SHARDS: usize = 256; // Protocol constant (frozen)
-
-fn shard_of(node_id: &NodeId) -> usize {
- let bytes = node_id.as_bytes();
-    let val = u64::from_le_bytes(bytes[0..8].try_into().unwrap());
- (val & 255) as usize // Fast modulo via bitmask
-}
-```
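-
-The low-byte behavior is easy to check: because the eight id bytes are read little-endian, `val & 255` is just the id's first byte, and the mask is exactly `% 256`:
-
-```rust
-pub fn shard_of(node_id: &[u8; 32]) -> usize {
-    // Little-endian read: byte 0 of the id is the low byte of `val`.
-    let val = u64::from_le_bytes(node_id[0..8].try_into().unwrap());
-    (val & 255) as usize // same as val % 256, since 256 is a power of 2
-}
-```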
-
-
-
-**Benefits**:
-
-- Items with same `shard_of(scope)` processed together → better cache hits
-- Workers dynamically claim shards via atomic counter → load balancing
-- Determinism enforced by merge, not execution order
-
-\begin{claudecommentary}
-**Claude's Take**: 256 shards is an interesting choice. It's small enough that the atomic counter for work-stealing doesn't become a bottleneck, but large enough to distribute work across many cores.
-
-The `& 255` bitmask is a micro-optimization I appreciate. It's equivalent to `% 256` but faster because 256 is a power of 2. This is the kind of low-level detail that adds up when you're processing millions of items per second.
-
-One thing I wondered: what if your NodeIds are clustered? Like, if all recent nodes have IDs starting with `0x00...`, they'd all end up in shard 0. I suspect content-addressed IDs (via BLAKE3) distribute uniformly, so this isn't a problem in practice. But for user-assigned IDs, you'd need to be careful.
-\end{claudecommentary}
-
-### 6.4 The Execution Loop
-
-```rust
-pub fn execute_parallel_sharded(
- view: GraphView<'_>,
- items: &[ExecItem],
- workers: usize,
-) -> Vec<TickDelta> {
- // Partition items into 256 shards
- let shards = partition_into_shards(items);
-
- // Atomic counter for work-stealing
- let next_shard = AtomicUsize::new(0);
-
- std::thread::scope(|s| {
- let handles: Vec<_> = (0..workers).map(|_| {
- s.spawn(|| {
- let mut delta = TickDelta::new();
- loop {
- // Claim next shard atomically
- let shard_id = next_shard.fetch_add(1, Ordering::Relaxed);
- if shard_id >= NUM_SHARDS { break; }
-
- // Execute all items in this shard
- for item in &shards[shard_id].items {
- (item.exec)(view.clone(), &item.scope, &mut delta);
- }
- }
- delta
- })
- }).collect();
-
- handles.into_iter().map(|h| h.join().unwrap()).collect()
- })
-}
-```
-
-### 6.5 The Canonical Merge
-
-```rust
-pub fn merge_deltas(deltas: Vec<TickDelta>) -> Result<Vec<WarpOp>, MergeConflict> {
- // 1. Flatten all ops from all workers
- let mut all_ops: Vec<(WarpOpKey, OpOrigin, WarpOp)> = deltas
- .into_iter()
- .flat_map(|d| d.ops_with_origins())
- .collect();
-
- // 2. Sort canonically by (key, origin)
- all_ops.sort_by_key(|(key, origin, _)| (key.clone(), origin.clone()));
-
- // 3. Deduplicate and detect conflicts
- let mut result = Vec::new();
-    for group in all_ops.chunk_by(|(k1, _, _), (k2, _, _)| k1 == k2) {
- let first = &group[0].2;
- if group.iter().all(|(_, _, op)| op == first) {
- result.push(first.clone()); // All identical: keep one
- } else {
-            return Err(MergeConflict { writers: group.iter().map(|(_, o, _)| o.clone()).collect() });
- }
- }
-
- Ok(result)
-}
-```
-
-**Key guarantee**: Conflicts are bugs. If footprints were correct, no two rewrites should write different values to the same key.
-
----
-
-## 7. Storage & Hashing: Content-Addressed Truth
-
-### 7.1 The GraphStore
-
-Located in `crates/warp-core/src/graph.rs`:
-
-```rust
-pub struct GraphStore {
- pub(crate) warp_id: WarpId,
-    pub(crate) nodes: BTreeMap<NodeId, NodeRecord>,
-    pub(crate) edges_from: BTreeMap<NodeId, BTreeSet<EdgeId>>,
-    pub(crate) edges_to: BTreeMap<NodeId, BTreeSet<EdgeId>>,     // Reverse index
-    pub(crate) node_attachments: BTreeMap<(NodeId, String), AttachmentValue>,
-    pub(crate) edge_attachments: BTreeMap<(EdgeId, String), AttachmentValue>,
-    pub(crate) edge_index: BTreeMap<EdgeId, NodeId>,             // Edge → Source
-    pub(crate) edge_to_index: BTreeMap<EdgeId, NodeId>,          // Edge → Target
-}
-```
-
-**Why BTreeMap everywhere?**
-
-- Deterministic iteration order (sorted by key)
-- Enables canonical hashing
-- No HashMap ordering surprises
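-
-The first two bullets are easy to demonstrate: whatever order entries are inserted in, `BTreeMap` iteration always visits them in key order, so the bytes fed to the hasher are identical:
-
-```rust
-use std::collections::BTreeMap;
-
-// Insertion order doesn't matter; iteration (and thus hashing) always
-// sees entries sorted by key.
-pub fn keys_in_iter_order(pairs: &[(u8, &'static str)]) -> Vec<u8> {
-    let map: BTreeMap<u8, &str> = pairs.iter().copied().collect();
-    map.keys().copied().collect()
-}
-```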
-
-\begin{claudecommentary}
-**Claude's Take**: Seven BTreeMaps! This is the price of determinism. Each of these maps is sorted, which means:
-
-1. Insertions are O(log n) instead of O(1) amortized for HashMap
-2. Iteration is always in key order, so hashing is deterministic
-3. Memory overhead is slightly higher due to tree structure
-
-Is it worth it? For Echo's use case, absolutely. The alternative—using HashMap and then sorting before each hash—would be slower and more error-prone. By paying the cost upfront (O(log n) writes), you get guaranteed correctness.
-
-The multiple indices (`edges_from`, `edges_to`, `edge_index`, `edge_to_index`) look redundant, but they enable O(log n) lookups from any direction. Want all edges _from_ a node? `edges_from[node_id]`. Want all edges _to_ a node? `edges_to[node_id]`. This is a classic space-time tradeoff.
-\end{claudecommentary}
-
-### 7.2 WSC: Write-Streaming Columnar Format
-
-For efficient snapshots, Echo uses WSC—a zero-copy, mmap-friendly format:
-
-```text
-┌─────────────────────────────────────────────────────────────────┐
-│ WSC SNAPSHOT FILE │
-├─────────────────────────────────────────────────────────────────┤
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ NODES TABLE (sorted by NodeId) │ │
-│ │ ┌──────────┬───────────┬──────────┐ │ │
-│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │
-│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │
-│ │ └──────────┴───────────┴──────────┘ │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ EDGES TABLE (sorted by EdgeId) │ │
-│ │ ┌───────────┬───────────┬───────────┐ │ │
-│ │ │ EdgeRow │ EdgeRow │ EdgeRow │ ... │ │
-│ │ │ 128 bytes │ 128 bytes │ 128 bytes │ │ │
-│ │ └───────────┴───────────┴───────────┘ │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ OUT_INDEX (per-node → range into out_edges) │ │
-│ │ ┌────────────────┬────────────────┐ │ │
-│ │ │ Range (16 B) │ Range (16 B) │ ... │ │
-│ │ └────────────────┴────────────────┘ │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ BLOB ARENA (variable-length data) │ │
-│ │ Referenced by (offset, length) tuples │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-└─────────────────────────────────────────────────────────────────┘
-```
-
-**Row types** (8-byte aligned):
-
-- `NodeRow`: 64 bytes (node_id[32] + node_type[32])
-- `EdgeRow`: 128 bytes (edge_id[32] + from[32] + to[32] + type[32])
-- `Range`: 16 bytes (start_le[8] + len_le[8])
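-
-Fixed-size rows make random access pure arithmetic, and sorted tables admit binary search straight over the mapped bytes. A sketch against a `NodeRow`-shaped table (function names are illustrative, not WSC's actual reader API):
-
-```rust
-const NODE_ROW_SIZE: usize = 64; // node_id[32] + node_type[32]
-
-fn node_row(table: &[u8], i: usize) -> &[u8] {
-    // Row i starts at byte offset i * 64 -- no index, no allocation.
-    &table[i * NODE_ROW_SIZE..(i + 1) * NODE_ROW_SIZE]
-}
-
-pub fn find_node(table: &[u8], id: &[u8; 32]) -> Option<usize> {
-    let n = table.len() / NODE_ROW_SIZE;
-    // Binary search over rows sorted by their 32-byte id prefix.
-    (0..n)
-        .collect::<Vec<usize>>()
-        .binary_search_by(|&i| node_row(table, i)[..32].cmp(id))
-        .ok()
-}
-```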
-
-\begin{claudecommentary}
-**Claude's Take**: WSC is gloriously simple. Fixed-size rows, sorted tables, binary search for lookups. No compression, no Parquet-style encoding tricks—just flat bytes on disk that you can mmap and use directly.
-
-The trade-off is size: WSC files are larger than compressed formats. But the benefit is speed: you can find node #1000 by seeking to `offset + 1000 * 64` and reading 64 bytes. No decompression, no index lookups, no memory allocation.
-
-For Echo's use case (local caching, fast restarts), this makes sense. You're not storing petabytes; you're storing the state of a single simulation that fits in RAM. Optimize for access latency, not storage cost.
-\end{claudecommentary}
-
-### 7.3 Copy-on-Write Semantics
-
-**Rule**: During a tick, nothing shared is mutated.
-
-
-
-**Structural sharing**: Only changed segments are newly written. Unchanged data is referenced by hash.
-
-### 7.4 Hash Algorithm Details
-
-**State Root** (BLAKE3, v2):
-
-```text
-state_root = BLAKE3(
- root_id[32] ||
- instance_count[8, LE] ||
- for each instance in BTreeMap order:
- warp_id_len[8, LE] ||
- warp_id_bytes ||
- node_count[8, LE] ||
- for each node in ascending NodeId order:
- node_id[32] ||
- node_type[32] ||
- for each outbound edge in ascending EdgeId order:
- edge_id[32] ||
- edge_type[32] ||
- to_node[32] ||
- for each attachment:
- key_len[8, LE] ||
- key_bytes ||
- type_id[32] ||
- value_len[8, LE] ||
- value_bytes
-)
-```
-
-\begin{claudecommentary}
-**Claude's Take**: The hashing is _exhaustive_. Every node, every edge, every attachment, every byte—all streamed through BLAKE3 in a defined order. There's no "we'll just hash the IDs and trust the content"—everything participates.
-
-This is expensive! But it's the foundation of Echo's trust model. If two engines produce the same state root, they have the same state. Period. No exceptions, no edge cases.
-
-The `version_tag` in the commit hash is a nice touch. If Echo ever changes its hashing algorithm (say, BLAKE3 v2 to v3), old and new hashes won't collide. Protocol evolution is built in.
-\end{claudecommentary}
-
----
-
-## 8. Worked Example: Tracing a Link Click
-
-Let's trace what happens when a user clicks a link in a hypothetical WARP-based navigation system.
-
-### 8.1 The Scenario
-
-Imagine a simple site with two pages:
-
-
-
-**User clicks the link**: This should navigate from Home to About.
-
-\begin{claudecommentary}
-**Claude's Take**: This example is deceptively simple—two pages, one link—but it exercises the entire engine: intent ingestion, rule matching, footprint validation, execution, merge, hashing, and emission.
-
-I'll add my notes at the interesting points. If you're skimming, watch for where the determinism guarantees kick in.
-\end{claudecommentary}
-
-### 8.2 Step 1: Intent Ingestion
-
-The click is captured by the viewer and converted to an **intent**:
-
-```rust
-// In the viewer:
-let intent = NavigateIntent {
- target_page: about_node_id,
- timestamp: deterministic_tick,
-};
-let intent_bytes = canonical_encode(&intent);
-
-// Send to engine:
-engine.ingest_intent(intent_bytes);
-```
-
-**What happens inside `ingest_intent`**:
-
-
-
-### 8.3 Step 2: Begin Transaction
-
-```rust
-let tx = engine.begin(); // tx = TxId(1)
-```
-
-### 8.4 Step 3: Dispatch Intent
-
-```rust
-engine.dispatch_next_intent(tx);
-```
-
-**What happens**:
-
-
-
-### 8.5 Step 4: Rule Matching
-
-The `cmd/navigate` rule matches:
-
-```rust
-// Matcher: Does this intent want navigation?
-fn navigate_matcher(view: GraphView, scope: &NodeId) -> bool {
-    view.node(scope)
-        .map_or(false, |intent| intent.type_id == "navigate_intent")
-}
-
-// Footprint: What will we read/write?
-fn navigate_footprint(view: GraphView, scope: &NodeId) -> Footprint {
- Footprint {
- n_read: btreeset![scope.clone(), viewer_node],
- n_write: btreeset![],
- a_read: btreeset![],
- a_write: btreeset![AttachmentKey::new(viewer_node, "current")],
-        ..Default::default()
- }
-}
-```
-
-\begin{claudecommentary}
-**Claude's Take**: Notice the footprint. We declare that we'll:
-
-- **Read** two nodes: the intent (to get the target) and the viewer (to validate the current page)
-- **Write** one attachment: the viewer's `current` attachment
-
-We're _not_ reading any attachments (we just need the node records), and we're _not_ writing any nodes (the viewer node already exists). This precision matters—if another rule also wants to write `viewer.current`, there's a conflict.
-\end{claudecommentary}
-
-The rule is enqueued:
-
-```text
-┌─────────────────────────────────────────────────────────────┐
-│ PendingRewrite │
-├─────────────────────────────────────────────────────────────┤
-│ rule_id: "cmd/navigate" │
-│ scope: 0xABCD... (intent node) │
-│ footprint: { n_read: [intent, viewer], a_write: [current] } │
-│ tx: TxId(1) │
-└─────────────────────────────────────────────────────────────┘
-```
-
-### 8.6 Step 5: Commit
-
-```rust
-let (snapshot, receipt, patch) = engine.commit_with_receipt(tx);
-```
-
-#### 5a. Drain
-
-```rust
-let rewrites = scheduler.drain_for_tx(tx);
-// Result: [PendingRewrite { rule: "cmd/navigate", scope: intent_node }]
-```
-
-#### 5b. Reserve
-
-```rust
-// Check footprint independence
-// No conflicts (only one rewrite)
-// Accepted!
-```
-
-#### 5c. Execute
-
-```rust
-fn navigate_executor(view: GraphView, scope: &NodeId, delta: &mut TickDelta) {
- // Read the intent to find target
- let intent = view.node(scope).unwrap();
- let target_page = intent.attachment("target").unwrap();
-
- // Read current viewer state (for logging/validation)
- let viewer = view.node(&VIEWER_NODE).unwrap();
- let old_page = viewer.attachment("current");
-
- // Emit the change: update viewer's current page
- delta.emit(WarpOp::SetAttachment {
- node: VIEWER_NODE,
- key: "current".into(),
- value: AttachmentValue::Atom(AtomPayload {
- type_id: "node_ref".into(),
- bytes: target_page.to_bytes(),
- }),
- });
-}
-```
-
-**TickDelta now contains**:
-
-```rust
-[
- (WarpOp::SetAttachment {
- node: viewer_node,
- key: "current",
- value: about_node_id
- }, OpOrigin { intent_id: 1, rule_id: 42, match_ix: 0, op_ix: 0 })
-]
-```
-
-#### 5d. Merge
-
-Only one delta, trivial merge:
-
-```rust
-let merged_ops = vec![
- WarpOp::SetAttachment { node: viewer_node, key: "current", value: about_node_id }
-];
-```
-
-#### 5e. Finalize
-
-Apply to state:
-
-```rust
-state.set_attachment(viewer_node, "current", about_node_id);
-```
-
-### 8.7 Step 6: Hash Computation
-
-```rust
-// State root: BLAKE3 of reachable graph
-let state_root = compute_state_root(&state); // 0x7890...
-
-// Patch digest: BLAKE3 of merged ops
-let patch_digest = compute_patch_digest(&merged_ops); // 0xDEF0...
-
-// Commit hash
-let commit_hash = BLAKE3(
- VERSION_TAG ||
- [parent_hash] ||
- state_root ||
- patch_digest ||
- policy_id
-); // 0x1234...
-```
-
-### 8.8 Step 7: Emit to Tools
-
-The engine emits a `WarpDiff` to the session hub:
-
-```rust
-WarpDiff {
- from_epoch: 0,
- to_epoch: 1,
- ops: vec![
- WarpOp::SetAttachment {
- node: viewer_node,
- key: "current",
- value: about_node_id
- }
- ],
- state_hash: 0x7890...,
-}
-```
-
-### 8.9 Step 8: Viewer Applies Diff
-
-The viewer receives the diff and updates its rendering:
-
-```rust
-for op in diff.ops {
- match op {
- WarpOp::SetAttachment { node, key, value } => {
- if node == viewer_node && key == "current" {
- // Update the displayed page
- self.navigate_to(value.as_node_ref());
- }
- }
- _ => { /* other ops */ }
- }
-}
-```
-
-**Result**: The user sees the About page.
-
-\begin{claudecommentary}
-**Claude's Take**: That's a lot of machinery for one link click! But here's what we get for free:
-
-1. **Replay**: Save the intent bytes, replay them later, get the exact same state hash
-2. **Verification**: Any other engine given the same inputs produces the same commit hash
-3. **Undo**: The previous snapshot is still in history; restoring is a pointer swap
-4. **Branching**: Fork the state, try a different navigation, compare outcomes
-
-This is the payoff for all the ceremony. A traditional engine would do `viewer.current = about_page` and call it done. Echo builds a _provable audit trail_ around every state change.
-\end{claudecommentary}
-
----
-
-## 9. The Viewer: Observing Echo
-
-The `warp-viewer` crate provides real-time visualization of WARP graphs. It's built on WGPU for cross-platform GPU rendering.
-
-### 9.1 Architecture
-
-
-
-### 9.2 Rendering Pipeline
-
-1. **Diff arrives** via session client
-2. **State cache** updates local graph replica
-3. **Layout engine** computes node positions (force-directed)
-4. **Renderer** converts graph to GPU buffers
-5. **Display** shows updated visualization
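The pipeline above can be sketched as a blocking, diff-driven loop. This is a hedged illustration with invented names, not the warp-viewer API: an `mpsc` channel stands in for the session client, and steps 2 through 5 collapse into one function.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of the viewer's reactive loop (names illustrative, not the real
// warp-viewer API): block on diffs from the session hub and re-render only
// when one arrives, so an idle graph costs zero CPU.
struct WarpDiff {
    to_epoch: u64,
}

// Steps 2-5 of the pipeline, collapsed: update the local replica, then
// report that a re-render is needed. A real viewer would relayout and
// rebuild GPU buffers here.
fn apply_diff(epoch: &mut u64, diff: &WarpDiff) -> bool {
    *epoch = diff.to_epoch;
    true
}

fn main() {
    let (hub, diffs) = mpsc::channel();
    thread::spawn(move || {
        // Stand-in session hub: push one diff, then disconnect.
        hub.send(WarpDiff { to_epoch: 1 }).unwrap();
    });

    let mut epoch = 0;
    let mut renders = 0;
    for diff in diffs {
        // recv blocks between diffs: no polling while the graph is idle
        if apply_diff(&mut epoch, &diff) {
            renders += 1;
        }
    }
    assert_eq!((epoch, renders), (1, 1));
    println!("ok");
}
```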
-
-\begin{claudecommentary}
-**Claude's Take**: The viewer is _reactive_, not poll-based. It subscribes to diffs from the session hub and updates only when state changes. This means zero CPU usage when the graph is idle.
-
-The force-directed layout is a classic choice for graph visualization. It's not perfect—large graphs can take time to settle—but it's good enough for debugging and exploration. If you need a specific layout, you can inject position attachments and the viewer will respect them.
-\end{claudecommentary}
-
----
-
-## 10. Glossary
-
-| Term | Definition |
-| ------------------ | ------------------------------------------------------------------------- |
-| **WARP** | Worldline Algebra for Recursive Provenance—Echo's core graph model |
-| **Tick** | One complete cycle of the engine (begin → apply → commit → hash → record) |
-| **Snapshot** | Immutable point-in-time capture of graph state |
-| **Footprint** | Declaration of resources a rule will read/write |
-| **BOAW** | Bag of Autonomous Workers—parallel execution model |
-| **TickDelta** | Accumulated operations from rule execution |
-| **State Root** | BLAKE3 hash of the entire graph |
-| **Commit Hash** | BLAKE3 hash of (state root + patch + metadata) |
-| **WarpInstance** | A graph-within-a-graph, enabling recursive composition |
-| **WSC** | Write-Streaming Columnar—Echo's snapshot file format |
-| **GraphView** | Read-only handle to graph state for rule executors |
-| **PendingRewrite** | Queued rule application awaiting commit |
-
----
-
-\begin{claudecommentary}
-**Final Thoughts from Your Tour Guide**
-
-Echo is not a simple system. It's a _principled_ system built on hard-won lessons about determinism, reproducibility, and trust.
-
-What I find most impressive isn't any single feature—it's the coherence. Every piece reinforces the others:
-
-- BTreeMaps enable deterministic hashing
-- Footprints enable parallel execution
-- Parallel execution requires immutable GraphView
-- Immutable GraphView enables copy-on-write
-- Copy-on-write enables cheap branching
-- Cheap branching enables "what if?" queries
-
-Pull one thread and the whole tapestry unravels. This is integrated design, not a collection of independent features.
-
-Is Echo perfect? No. The footprint model requires discipline. The ceremony adds latency. The BTreeMaps trade speed for determinism. But for applications where _provability_ matters—games with replays, simulations with audits, collaborative tools with conflict resolution—Echo offers something rare: a foundation you can trust.
-
-Thanks for joining me on this tour. May your state roots always match.
-
-— Claude
-\end{claudecommentary}
diff --git a/docs/archive/study/what-makes-echo-tick-with-diagrams.pdf b/docs/archive/study/what-makes-echo-tick-with-diagrams.pdf
deleted file mode 100644
index a2524efd..00000000
Binary files a/docs/archive/study/what-makes-echo-tick-with-diagrams.pdf and /dev/null differ
diff --git a/docs/archive/study/what-makes-echo-tick-with-diagrams.tex b/docs/archive/study/what-makes-echo-tick-with-diagrams.tex
deleted file mode 100644
index 95e2a209..00000000
--- a/docs/archive/study/what-makes-echo-tick-with-diagrams.tex
+++ /dev/null
@@ -1,1515 +0,0 @@
-% SPDX-License-Identifier: Apache-2.0 OR LicenseRef-MIND-UCAL-1.0
-% © James Ross Ω FLYING•ROBOTS
-% Options for packages loaded elsewhere
-\PassOptionsToPackage{unicode}{hyperref}
-\PassOptionsToPackage{hyphens}{url}
-\documentclass[
-]{book}
-\usepackage[letterpaper, margin=1in]{geometry}
-\usepackage{xcolor}
-\usepackage{amsmath,amssymb}
-\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
-\usepackage{iftex}
-\ifPDFTeX
- \usepackage[T1]{fontenc}
- \usepackage[utf8]{inputenc}
- \usepackage{textcomp} % provide euro and other symbols
-\else % if luatex or xetex
- \usepackage{unicode-math} % this also loads fontspec
- \defaultfontfeatures{Scale=MatchLowercase}
- \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
-\fi
-\usepackage{lmodern}
-\ifPDFTeX\else
- % xetex/luatex font selection
-\fi
-% Use upquote if available, for straight quotes in verbatim environments
-\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
-\IfFileExists{microtype.sty}{% use microtype if available
- \usepackage[]{microtype}
- \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
-}{}
-\makeatletter
-\@ifundefined{KOMAClassName}{% if non-KOMA class
- \IfFileExists{parskip.sty}{%
- \usepackage{parskip}
- }{% else
- \setlength{\parindent}{0pt}
- \setlength{\parskip}{6pt plus 2pt minus 1pt}}
-}{% if KOMA class
- \KOMAoptions{parskip=half}}
-\makeatother
-\usepackage{color}
-\usepackage{fancyvrb}
-\newcommand{\VerbBar}{|}
-\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
-\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
-% Add ',fontsize=\small' for more characters per line
-\newenvironment{Shaded}{}{}
-\newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}}
-\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}}
-\newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}}
-\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}}
-\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}}
-\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}}
-\newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}}
-\newcommand{\ExtensionTok}[1]{#1}
-\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}}
-\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}}
-\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}}
-\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}}
-\newcommand{\NormalTok}[1]{#1}
-\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}}
-\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}}
-\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}}
-\newcommand{\RegionMarkerTok}[1]{#1}
-\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}}
-\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}}
-\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}}
-\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}}
-\usepackage{graphicx}
-\usepackage[export]{adjustbox}
-\usepackage{longtable,booktabs,array}
-\newcounter{none} % for unnumbered tables
-\usepackage{calc} % for calculating minipage widths
-% Correct order of tables after \paragraph or \subparagraph
-\usepackage{etoolbox}
-\makeatletter
-\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
-\makeatother
-% Allow footnotes in longtable head/foot
-\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
-\makesavenoteenv{longtable}
-\setlength{\emergencystretch}{3em} % prevent overfull lines
-\providecommand{\tightlist}{%
- \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
-\usepackage{bookmark}
-\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
-\urlstyle{same}
-\hypersetup{
- hidelinks,
- pdfcreator={LaTeX via pandoc}}
-
-\author{}
-\date{}
-
-\begin{document}
-\frontmatter
-
-\mainmatter
-\chapter{What Makes Echo Tick?}\label{what-makes-echo-tick}
-
-\begin{quote}
-A comprehensive technical guide to the Echo deterministic graph-rewrite
-engine.
-
-\textbf{Target Audience}: Developers who want to understand Echo's
-internals in exhaustive detail.
-
-\textbf{Reading Time}: \textasciitilde45 minutes for complete
-understanding.
-\end{quote}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Table of Contents}\label{table-of-contents}
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
- \hyperref[1-philosophy-why-echo-exists]{Philosophy: Why Echo Exists}
-\item
- \hyperref[2-the-big-picture-architecture-overview]{The Big Picture:
- Architecture Overview}
-\item
- \hyperref[3-core-concepts-the-warp-graph]{Core Concepts: The WARP
- Graph}
-\item
- \hyperref[4-the-engine-heart-of-echo]{The Engine: Heart of Echo}
-\item
- \hyperref[5-the-tick-pipeline-where-everything-happens]{The Tick
- Pipeline: Where Everything Happens}
-\item
- \hyperref[6-parallel-execution-boaw-bag-of-autonomous-workers]{Parallel
- Execution: BOAW (Bag of Autonomous Workers)}
-\item
- \hyperref[7-storage--hashing-content-addressed-truth]{Storage \&
- Hashing: Content-Addressed Truth}
-\item
- \hyperref[8-worked-example-tracing-a-link-click]{Worked Example:
- Tracing a Link Click}
-\item
- \hyperref[9-the-viewer-observing-echo]{The Viewer: Observing Echo}
-\item
- \hyperref[10-glossary]{Glossary}
-\end{enumerate}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{1. Philosophy: Why Echo
-Exists}\label{philosophy-why-echo-exists}
-
-\subsection{1.1 The Problem}\label{the-problem}
-
-Traditional game engines and simulations treat state as \textbf{mutable
-objects}. This creates fundamental problems:
-
-\begin{itemize}
-\tightlist
-\item
- \textbf{Replay is hard}: You can't just ``rewind'' because state
- changes are scattered and untracked.
-\item
- \textbf{Synchronization is fragile}: Two machines running the same
- logic may diverge due to floating-point differences, thread timing, or
- iteration order.
-\item
- \textbf{Debugging is a nightmare}: ``It worked on my machine'' is the
- symptom of non-determinism.
-\item
- \textbf{Branching is impossible}: You can't easily ask ``what if?''
- without copying everything.
-\end{itemize}
-
-\subsection{1.2 Echo's Answer}\label{echos-answer}
-
-Echo treats \textbf{state as a typed graph} and \textbf{all changes as
-rewrites}. Each ``tick'' of the engine:
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
- Proposes a set of rewrites
-\item
- Executes them in \textbf{deterministic order}
-\item
- Emits \textbf{cryptographic hashes} of the resulting state
-\end{enumerate}
-
-This means:
-
-\begin{itemize}
-\tightlist
-\item
- \textbf{Same inputs → Same outputs} (always, on any machine)
-\item
- \textbf{State is verifiable} (hashes prove correctness)
-\item
- \textbf{Replay is trivial} (patches are prescriptive)
-\item
- \textbf{Branching is free} (copy-on-write snapshots)
-\end{itemize}
-
-\subsection{1.3 Core Design Principles}\label{core-design-principles}
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────────────────────┐
-│ ECHO'S THREE PILLARS │
-├─────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
-│ │ DETERMINISM │ │ PROVENANCE │ │ TOOLING │ │
-│ │ FIRST │ │ YOU CAN │ │ AS FIRST │ │
-│ │ │ │ TRUST │ │ CLASS │ │
-│ ├─────────────────┤ ├─────────────────┤ ├─────────────────┤ │
-│ │ Same inputs │ │ Snapshots are │ │ Graphs stream │ │
-│ │ always produce │ │ content- │ │ over canonical │ │
-│ │ same hashes │ │ addressed │ │ wire protocol │ │
-│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{2. The Big Picture: Architecture
-Overview}\label{the-big-picture-architecture-overview}
-
-\subsection{2.1 System Layers}\label{system-layers}
-
-Echo is organized into distinct layers, each with a specific
-responsibility:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-01.pdf}
-\end{center}
-
-\subsection{2.2 Crate Map}\label{crate-map}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}ll@{}}
-\toprule\noalign{}
-Crate & Purpose \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\texttt{warp-core} & The deterministic rewrite engine (the ``brain'') \\
-\texttt{echo-graph} & Renderable graph types + diff operations \\
-\texttt{echo-session-proto} & Wire protocol (canonical CBOR framing) \\
-\texttt{echo-session-service} & Headless Unix-socket hub for tools \\
-\texttt{echo-session-client} & Client helpers for connecting to the
-hub \\
-\texttt{warp-viewer} & Native WGPU viewer for visualizing graphs \\
-\end{longtable}
-}
-
-\subsection{2.3 Data Flow Overview}\label{data-flow-overview}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-02.pdf}
-\end{center}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{3. Core Concepts: The WARP
-Graph}\label{core-concepts-the-warp-graph}
-
-\subsection{3.1 What is a WARP Graph?}\label{what-is-a-warp-graph}
-
-A WARP (\textbf{W}orldline \textbf{A}lgebra for \textbf{R}ecursive
-\textbf{P}rovenance) graph is Echo's fundamental data structure. It's
-not just a graph---it's a graph with \textbf{deterministic semantics}.
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-03.pdf}
-\end{center}
-
-\subsection{3.2 Two-Plane Architecture}\label{two-plane-architecture}
-
-Echo separates structure from data via the \textbf{Two-Plane Model}
-(ADR-0001):
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3846}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.3462}}@{}}
-\toprule\noalign{}
-\begin{minipage}[b]{\linewidth}\raggedright
-Plane
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Contains
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Purpose
-\end{minipage} \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\textbf{Skeleton} & Nodes + Edges (structure) & Fast traversal,
-deterministic hashing \\
-\textbf{Attachment (α)} & Typed payloads & Domain-specific data \\
-\end{longtable}
-}
-
-\textbf{Why separate them?}
-
-\begin{verbatim}
-┌────────────────────────────────────────────────────────────────────┐
-│ SKELETON PLANE (Structure) │
-│ │
-│ ┌─────┐ edge:link ┌─────┐ │
-│ │ N1 │─────────────────▶│ N2 │ │
-│ └─────┘ └─────┘ │
-│ │ │ │
-│ │ edge:child │ edge:ref │
-│ ▼ ▼ │
-│ ┌─────┐◀─────────────────────┘ │
-│ │ N3 │ │
-│ └─────┘ │
-│ │
-├────────────────────────────────────────────────────────────────────┤
-│ ATTACHMENT PLANE (Payloads) │
-│ │
-│ N1.α["title"] = Atom { type: "string", bytes: "Home" } │
-│ N2.α["url"] = Atom { type: "string", bytes: "/page/b" } │
-│ N3.α["body"] = Atom { type: "html", bytes: "..." } │
-│ │
-└────────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\textbf{Key insight}: Skeleton rewrites \textbf{never decode
-attachments}. This keeps the hot path fast and deterministic.
-
-\subsection{3.3 Node and Edge Identity}\label{node-and-edge-identity}
-
-Every node and edge has a \textbf{32-byte identifier}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ NodeId([}\DataTypeTok{u8}\OperatorTok{;} \DecValTok{32}\NormalTok{])}\OperatorTok{;} \CommentTok{// Content{-}addressed or assigned}
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ EdgeId([}\DataTypeTok{u8}\OperatorTok{;} \DecValTok{32}\NormalTok{])}\OperatorTok{;} \CommentTok{// Unique edge identifier}
-\end{Highlighting}
-\end{Shaded}
-
-These IDs are:
-
-\begin{itemize}
-\tightlist
-\item
- \textbf{Deterministic}: Same content → same ID (when content-addressed)
-\item
- \textbf{Sortable}: Lexicographic ordering enables deterministic iteration
-\item
- \textbf{Hashable}: Participate in state root computation
-\end{itemize}
-
-\subsection{3.4 WarpInstances: Graphs Within
-Graphs}\label{warpinstances-graphs-within-graphs}
-
-Echo supports \textbf{descended attachments}---embedding entire graphs
-within attachment slots:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-04.pdf}
-\end{center}
-
-This enables ``WARPs all the way down''---recursive composition while
-maintaining determinism.
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{4. The Engine: Heart of Echo}\label{the-engine-heart-of-echo}
-
-\subsection{4.1 The Engine Struct}\label{the-engine-struct}
-
-The \texttt{Engine} is Echo's central orchestrator. Located in
-\texttt{crates/warp-core/src/engine\_impl.rs}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ Engine }\OperatorTok{\{}
-\NormalTok{ state}\OperatorTok{:}\NormalTok{ WarpState}\OperatorTok{,} \CommentTok{// Multi{-}instance graph state}
-\NormalTok{ rules}\OperatorTok{:}\NormalTok{ HashMap}\OperatorTok{\textless{}}\NormalTok{RuleId}\OperatorTok{,}\NormalTok{ RewriteRule}\OperatorTok{\textgreater{},} \CommentTok{// Registered rewrite rules}
-\NormalTok{ scheduler}\OperatorTok{:}\NormalTok{ DeterministicScheduler}\OperatorTok{,} \CommentTok{// Deterministic ordering}
-\NormalTok{ bus}\OperatorTok{:}\NormalTok{ MaterializationBus}\OperatorTok{,} \CommentTok{// Output channels}
-\NormalTok{ history}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(Snapshot}\OperatorTok{,}\NormalTok{ TickReceipt}\OperatorTok{,}\NormalTok{ WarpTickPatchV1)}\OperatorTok{\textgreater{},}
-\NormalTok{ tx\_counter}\OperatorTok{:} \DataTypeTok{u64}\OperatorTok{,} \CommentTok{// Transaction counter}
-\NormalTok{ live\_txs}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{TxId}\OperatorTok{\textgreater{},} \CommentTok{// Active transactions}
- \CommentTok{// ... more fields}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{4.2 Construction}\label{construction}
-
-The engine is built via the \texttt{EngineBuilder}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ engine }\OperatorTok{=} \PreprocessorTok{EngineBuilder::}\NormalTok{new(store}\OperatorTok{,}\NormalTok{ root\_node\_id)}
- \OperatorTok{.}\NormalTok{with\_policy\_id(}\DecValTok{1}\NormalTok{)}
- \OperatorTok{.}\NormalTok{with\_telemetry(telemetry)}
- \OperatorTok{.}\NormalTok{build()}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{What happens during construction:}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-05.pdf}
-\end{center}
-
-\subsection{4.3 Rewrite Rules}\label{rewrite-rules}
-
-Rules are the atoms of change in Echo. Each rule has three functions:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ RewriteRule }\OperatorTok{\{}
- \KeywordTok{pub}\NormalTok{ name}\OperatorTok{:} \DataTypeTok{String}\OperatorTok{,}
- \KeywordTok{pub}\NormalTok{ matcher}\OperatorTok{:}\NormalTok{ MatchFn}\OperatorTok{,} \CommentTok{// Does this rule apply?}
- \KeywordTok{pub}\NormalTok{ executor}\OperatorTok{:}\NormalTok{ ExecuteFn}\OperatorTok{,} \CommentTok{// What changes to make}
- \KeywordTok{pub}\NormalTok{ footprint}\OperatorTok{:}\NormalTok{ FootprintFn}\OperatorTok{,} \CommentTok{// What resources are touched}
- \KeywordTok{pub}\NormalTok{ policy}\OperatorTok{:}\NormalTok{ ConflictPolicy}\OperatorTok{,} \CommentTok{// What to do on conflict}
-\OperatorTok{\}}
-
-\CommentTok{// Function signatures (Phase 5 BOAW model):}
-\KeywordTok{type}\NormalTok{ MatchFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool}\OperatorTok{;}
-\KeywordTok{type}\NormalTok{ ExecuteFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ TickDelta)}\OperatorTok{;}
-\KeywordTok{type}\NormalTok{ FootprintFn }\OperatorTok{=} \KeywordTok{fn}\NormalTok{(GraphView}\OperatorTok{,} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}}\NormalTok{ Footprint}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Critical constraint}: Executors receive a \textbf{read-only}
-\texttt{GraphView} and emit changes to a \texttt{TickDelta}. They
-\textbf{never} mutate the graph directly.
-
-\subsection{4.4 GraphView: Read-Only
-Access}\label{graphview-read-only-access}
-
-The \texttt{GraphView} enforces BOAW's immutability contract:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}} \OperatorTok{\{}
-\NormalTok{ store}\OperatorTok{:} \OperatorTok{\&}\OtherTok{\textquotesingle{}a}\NormalTok{ GraphStore}\OperatorTok{,}
-\NormalTok{ warp\_id}\OperatorTok{:}\NormalTok{ WarpId}\OperatorTok{,}
-\OperatorTok{\}}
-
-\KeywordTok{impl}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}a}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ node(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Option}\OperatorTok{\textless{}\&}\NormalTok{NodeRecord}\OperatorTok{\textgreater{};}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ edges\_from(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \KeywordTok{impl} \BuiltInTok{Iterator}\OperatorTok{\textless{}}\NormalTok{Item }\OperatorTok{=} \OperatorTok{\&}\NormalTok{EdgeRecord}\OperatorTok{\textgreater{};}
- \KeywordTok{pub} \KeywordTok{fn}\NormalTok{ node\_attachment(}\OperatorTok{\&}\KeywordTok{self}\OperatorTok{,}\NormalTok{ id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:} \OperatorTok{\&}\DataTypeTok{str}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Option}\OperatorTok{\textless{}\&}\NormalTok{AttachmentValue}\OperatorTok{\textgreater{};}
- \CommentTok{// ... read{-}only methods only}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{No \texttt{DerefMut}, no
-\texttt{AsRef\textless{}GraphStore\textgreater{}}, no interior
-mutability.} This is enforced at the type level.
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{5. The Tick Pipeline: Where Everything
-Happens}\label{the-tick-pipeline-where-everything-happens}
-
-\subsection{5.1 Overview}\label{overview}
-
-A ``tick'' is one complete cycle of the engine. It has five phases:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-06.pdf}
-\end{center}
-
-\subsection{5.2 Phase 1: Begin
-Transaction}\label{phase-1-begin-transaction}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ tx }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{begin()}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{What happens:}
-
-\begin{enumerate}
-\def\labelenumi{\arabic{enumi}.}
-\tightlist
-\item
- Increment \texttt{tx\_counter} (wrapping to avoid 0)
-\item
- Add \texttt{TxId} to the \texttt{live\_txs} set
-\item
- Return an opaque transaction identifier
-\end{enumerate}
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────┐
-│ engine.begin() │
-├─────────────────────────────────────────────────┤
-│ tx_counter: 0 → 1 │
-│ live_txs: {} → {TxId(1)} │
-│ returns: TxId(1) │
-└─────────────────────────────────────────────────┘
-\end{verbatim}
-
-\subsection{5.3 Phase 2: Apply Rules}\label{phase-2-apply-rules}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{engine}\OperatorTok{.}\NormalTok{apply(tx}\OperatorTok{,} \StringTok{"rule\_name"}\OperatorTok{,} \OperatorTok{\&}\NormalTok{scope\_node\_id)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{What happens:}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-07.pdf}
-\end{center}
-
-\textbf{The Footprint}: A declaration of what resources the rule will
-read and write:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ Footprint }\OperatorTok{\{}
- \KeywordTok{pub}\NormalTok{ n\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Nodes to read}
- \KeywordTok{pub}\NormalTok{ n\_write}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Nodes to write}
- \KeywordTok{pub}\NormalTok{ e\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{},} \CommentTok{// Edges to read}
- \KeywordTok{pub}\NormalTok{ e\_write}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{},} \CommentTok{// Edges to write}
- \KeywordTok{pub}\NormalTok{ a\_read}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{AttachmentKey}\OperatorTok{\textgreater{},} \CommentTok{// Attachments to read}
- \KeywordTok{pub}\NormalTok{ a\_write}\OperatorTok{:}\NormalTok{ BTreeSet}\OperatorTok{\textless{}}\NormalTok{AttachmentKey}\OperatorTok{\textgreater{},} \CommentTok{// Attachments to write}
- \CommentTok{// ... ports, factor\_mask}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Runtime enforcement.} As of Phase~6B, footprint declarations are
-enforced at runtime by \texttt{FootprintGuard} when
-\texttt{footprint\_enforce\_release} is enabled or in debug builds; the
-\texttt{unsafe\_graph} escape hatch disables these checks. The guard catches
-the following violations:
-
-\begin{itemize}
-\item Undeclared reads (node, edge, or attachment access not listed in the footprint)
-\item Undeclared writes (ops emitted for resources not in \texttt{n\_write} / \texttt{e\_write} / \texttt{a\_write})
-\item Cross-warp emissions (ops targeting a \texttt{WarpId} other than the executing warp)
-\item Instance ops blocked by \texttt{ExecItemKind} (not footprint coverage)
-\item Adjacency violations (edge ops whose \texttt{from} node is absent from \texttt{n\_write})
-\end{itemize}
-
-\textbf{Scheduler deduplication}: If the same
-\texttt{(scope\_hash,\ rule\_id)} is applied multiple times,
-\textbf{last wins}. This enables idempotent retry semantics.
-
-\subsection{5.4 Phase 3: Commit (The Heart of
-Determinism)}\label{phase-3-commit-the-heart-of-determinism}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ (snapshot}\OperatorTok{,}\NormalTok{ receipt}\OperatorTok{,}\NormalTok{ patch) }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{commit\_with\_receipt(tx)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-This is where Echo's magic happens. Let's break it down:
-
-\subsubsection{5.4.1 Drain}\label{drain}
-
-The scheduler drains all pending rewrites in \textbf{canonical order}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// RadixScheduler uses O(n) LSD radix sort}
-\CommentTok{// 20 passes: 2 nonce + 2 rule\_id + 16 scope\_hash (16{-}bit digits)}
-\KeywordTok{let}\NormalTok{ rewrites }\OperatorTok{=}\NormalTok{ scheduler}\OperatorTok{.}\NormalTok{drain\_for\_tx(tx)}\OperatorTok{;} \CommentTok{// Vec\textless{}PendingRewrite\textgreater{} in canonical order}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Ordering key}:
-\texttt{(scope\_hash{[}0..32{]},\ rule\_id,\ nonce)}
-
-This ensures the \textbf{same rewrites always execute in the same
-order}, regardless of when they were applied.
-
-\subsubsection{5.4.2 Reserve (Independence
-Check)}\label{reserve-independence-check}
-
-For each rewrite in canonical order:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-08.pdf}
-\end{center}
-
-\textbf{Conflict detection}: Uses \texttt{GenSet\textless{}K\textgreater{}}
-for O(1) lookups:
-
-\begin{itemize}
-\tightlist
-\item
- Read-read overlap: \textbf{allowed}
-\item
- Write-write overlap: \textbf{conflict}
-\item
- Read-write overlap: \textbf{conflict}
-\end{itemize}
-
-\subsubsection{5.4.3 Execute (Parallel,
-Lockless)}\label{execute-parallel-lockless}
-
-Accepted rewrites execute against the \textbf{read-only snapshot}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\ControlFlowTok{for}\NormalTok{ rewrite }\KeywordTok{in}\NormalTok{ accepted }\OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ rule }\OperatorTok{=} \OperatorTok{\&}\NormalTok{rules[rewrite}\OperatorTok{.}\NormalTok{rule\_id]}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ view }\OperatorTok{=} \PreprocessorTok{GraphView::}\NormalTok{new(}\OperatorTok{\&}\NormalTok{state}\OperatorTok{,}\NormalTok{ rewrite}\OperatorTok{.}\NormalTok{warp\_id)}\OperatorTok{;}
-
- \CommentTok{// Executor reads from view, emits to delta}
-\NormalTok{ (rule}\OperatorTok{.}\NormalTok{executor)(view}\OperatorTok{,} \OperatorTok{\&}\NormalTok{rewrite}\OperatorTok{.}\NormalTok{scope}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ delta)}\OperatorTok{;}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Critical}: \texttt{GraphView} is immutable. \texttt{TickDelta}
-accumulates operations:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ TickDelta }\OperatorTok{\{}
-\NormalTok{ ops}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(WarpOp}\OperatorTok{,}\NormalTok{ OpOrigin)}\OperatorTok{\textgreater{},}
-\OperatorTok{\}}
-
-\CommentTok{// Operations emitted during execution:}
-\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{UpsertNode }\OperatorTok{\{}\NormalTok{ id}\OperatorTok{,}\NormalTok{ record }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;}
-\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{UpsertEdge }\OperatorTok{\{}\NormalTok{ from}\OperatorTok{,}\NormalTok{ edge }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;}
-\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{DeleteNode }\OperatorTok{\{}\NormalTok{ id }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;}
-\NormalTok{delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
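The accumulator pattern is simple enough to show as a self-contained miniature, with a small `Op` enum and a `u32` origin standing in for `WarpOp` and `OpOrigin` (the real `emit` signature may differ):

```rust
// Stand-in for WarpOp: executors describe changes, they never apply them.
#[derive(Clone, Debug, PartialEq)]
enum Op {
    SetAttachment { node: u64, key: &'static str, value: u64 },
    DeleteNode { id: u64 },
}

// Stand-in for TickDelta: an append-only log of (op, origin) pairs.
struct TickDelta {
    ops: Vec<(Op, u32)>,
}

impl TickDelta {
    fn new() -> Self {
        TickDelta { ops: Vec::new() }
    }
    // Append-only: no graph state is mutated here.
    fn emit(&mut self, op: Op, origin: u32) {
        self.ops.push((op, origin));
    }
}
```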
-
-\subsubsection{5.4.4 Merge (Canonical Sort)}\label{merge-canonical-sort}
-
-All operations are sorted into \textbf{canonical replay order}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// Sort by (WarpOpKey, OpOrigin)}
-\NormalTok{ops}\OperatorTok{.}\NormalTok{sort\_by\_key(}\OperatorTok{|}\NormalTok{(op}\OperatorTok{,}\NormalTok{ origin)}\OperatorTok{|}\NormalTok{ (op}\OperatorTok{.}\NormalTok{sort\_key()}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{.}\NormalTok{clone()))}\OperatorTok{;}
-
-\CommentTok{// Deduplicate identical ops}
-\CommentTok{// Error on conflicting ops (footprint model violation)}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Conflict handling}: If two rewrites wrote \textbf{different
-values} to the same key, that's a bug in the footprint model. Echo
-errors loudly.
-
-\subsubsection{5.4.5 Finalize}\label{finalize}
-
-Apply the merged delta to produce the new state:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\ControlFlowTok{for}\NormalTok{ op }\KeywordTok{in}\NormalTok{ merged\_ops }\OperatorTok{\{}
- \ControlFlowTok{match}\NormalTok{ op }\OperatorTok{\{}
- \PreprocessorTok{WarpOp::}\NormalTok{UpsertNode }\OperatorTok{\{}\NormalTok{ id}\OperatorTok{,}\NormalTok{ record }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{insert\_node(id}\OperatorTok{,}\NormalTok{ record)}\OperatorTok{,}
- \PreprocessorTok{WarpOp::}\NormalTok{UpsertEdge }\OperatorTok{\{}\NormalTok{ from}\OperatorTok{,}\NormalTok{ edge }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{insert\_edge(from}\OperatorTok{,}\NormalTok{ edge)}\OperatorTok{,}
- \PreprocessorTok{WarpOp::}\NormalTok{DeleteNode }\OperatorTok{\{}\NormalTok{ id }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{delete\_node\_cascade(id)}\OperatorTok{,}
- \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value }\OperatorTok{\}} \OperatorTok{=\textgreater{}}\NormalTok{ state}\OperatorTok{.}\NormalTok{set\_attachment(node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{,}\NormalTok{ value)}\OperatorTok{,}
- \CommentTok{// ...}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{5.5 Phase 4: Hash
-Computation}\label{phase-4-hash-computation}
-
-\subsubsection{State Root (BLAKE3)}\label{state-root-blake3}
-
-The state root is computed via \textbf{deterministic BFS} over reachable
-nodes:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-09.pdf}
-\end{center}
-
-\textbf{Encoding} (architecture-independent):
-
-\begin{itemize}
-\tightlist
-\item
-  All IDs: raw 32 bytes
-\item
-  Counts: u64 little-endian
-\item
-  Payloads: 1-byte tag + type\_id{[}32{]} + u64 LE length + bytes
-\end{itemize}
-
-\subsubsection{Commit Hash (v2)}\label{commit-hash-v2}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{commit\_hash }\OperatorTok{=}\NormalTok{ BLAKE3(}
-\NormalTok{ version\_tag[}\DecValTok{4}\NormalTok{] }\OperatorTok{||} \CommentTok{// Protocol version}
-\NormalTok{ parents[] }\OperatorTok{||} \CommentTok{// Parent commit hashes}
-\NormalTok{ state\_root[}\DecValTok{32}\NormalTok{] }\OperatorTok{||} \CommentTok{// Graph{-}only hash}
-\NormalTok{ patch\_digest[}\DecValTok{32}\NormalTok{] }\OperatorTok{||} \CommentTok{// Merged ops digest}
-\NormalTok{ policy\_id[}\DecValTok{4}\NormalTok{] }\CommentTok{// Policy identifier}
-\NormalTok{)}
-\end{Highlighting}
-\end{Shaded}
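The field layout can be made concrete by building the hash preimage as plain byte concatenation; the helper name `commit_preimage` and the fixed-size parameters are illustrative, and the actual BLAKE3 call is left out:

```rust
// Sketch of the v2 commit-hash preimage. The real code feeds these
// bytes to BLAKE3; here we only make the field order and sizes explicit.
fn commit_preimage(
    version_tag: [u8; 4],
    parents: &[[u8; 32]],
    state_root: [u8; 32],
    patch_digest: [u8; 32],
    policy_id: [u8; 4],
) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&version_tag);
    for p in parents {
        buf.extend_from_slice(p); // parent commit hashes, in order
    }
    buf.extend_from_slice(&state_root);
    buf.extend_from_slice(&patch_digest);
    buf.extend_from_slice(&policy_id);
    buf
}
```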
-
-\subsection{5.6 Phase 5: Record to
-History}\label{phase-5-record-to-history}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{history}\OperatorTok{.}\NormalTok{push((}
-\NormalTok{ Snapshot }\OperatorTok{\{}\NormalTok{ hash}\OperatorTok{:}\NormalTok{ commit\_hash}\OperatorTok{,}\NormalTok{ state\_root}\OperatorTok{,}\NormalTok{ parents}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
-\NormalTok{ TickReceipt }\OperatorTok{\{}\NormalTok{ applied}\OperatorTok{,}\NormalTok{ rejected}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\},}
-\NormalTok{ WarpTickPatchV1 }\OperatorTok{\{}\NormalTok{ ops}\OperatorTok{,}\NormalTok{ in\_slots}\OperatorTok{,}\NormalTok{ out\_slots}\OperatorTok{,}\NormalTok{ patch\_digest}\OperatorTok{,} \OperatorTok{...} \OperatorTok{\}}
-\NormalTok{))}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-The patch is \textbf{prescriptive}: it can be replayed without
-re-matching to reproduce the exact same state.
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{6. Parallel Execution: BOAW (Bag of Autonomous
-Workers)}\label{parallel-execution-boaw-bag-of-autonomous-workers}
-
-\subsection{6.1 What is BOAW?}\label{what-is-boaw}
-
-BOAW stands for \textbf{Bag of Autonomous Workers}. It's Echo's parallel
-execution architecture that enables:
-
-\begin{itemize}
-\tightlist
-\item
- \textbf{Massive parallelism} without locks
-\item
- \textbf{Deterministic convergence} across platforms
-\item
- \textbf{Worker-count invariance} (same result with 1 or 32 workers)
-\end{itemize}
-
-\subsection{6.2 The Key Insight}\label{the-key-insight}
-
-\begin{verbatim}
-┌──────────────────────────────────────────────────────────────────┐
-│ THE BOAW INSIGHT │
-├──────────────────────────────────────────────────────────────────┤
-│ │
-│ Traditional parallelism: │
-│ "Make execution order deterministic" → Complex, slow │
-│ │
-│ BOAW parallelism: │
-│ "Let execution order vary, make MERGE deterministic" → Fast! │
-│ │
-│ Workers race freely → Each produces a TickDelta │
-│ Merge step sorts all deltas → Canonical output │
-│ │
-└──────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\subsection{6.3 Execution Strategies}\label{execution-strategies}
-
-\subsubsection{Phase 6A: Stride Partitioning
-(Legacy)}\label{phase-6a-stride-partitioning-legacy}
-
-\begin{verbatim}
-Worker 0: items[0], items[4], items[8], ...
-Worker 1: items[1], items[5], items[9], ...
-Worker 2: items[2], items[6], items[10], ...
-Worker 3: items[3], items[7], items[11], ...
-\end{verbatim}
-
-\textbf{Problem}: Poor cache locality---related items scatter across
-workers.
-
-\subsubsection{Phase 6B: Virtual Shards (Current
-Default)}\label{phase-6b-virtual-shards-current-default}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{const}\NormalTok{ NUM\_SHARDS}\OperatorTok{:} \DataTypeTok{usize} \OperatorTok{=} \DecValTok{256}\OperatorTok{;} \CommentTok{// Protocol constant (frozen)}
-
-\KeywordTok{fn}\NormalTok{ shard\_of(node\_id}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{usize} \OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ bytes }\OperatorTok{=}\NormalTok{ node\_id}\OperatorTok{.}\NormalTok{as\_bytes()}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ val }\OperatorTok{=} \DataTypeTok{u64}\PreprocessorTok{::}\NormalTok{from\_le\_bytes(bytes[}\DecValTok{0}\OperatorTok{..}\DecValTok{8}\NormalTok{]}\OperatorTok{.}\NormalTok{try\_into()}\OperatorTok{.}\NormalTok{unwrap())}\OperatorTok{;}
-\NormalTok{ (val }\OperatorTok{\&} \DecValTok{255}\NormalTok{) }\KeywordTok{as} \DataTypeTok{usize} \CommentTok{// Fast modulo via bitmask}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
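The same shard function as a compilable sketch, including the slice-to-array conversion that `from_le_bytes` requires:

```rust
const NUM_SHARDS: usize = 256; // protocol constant (frozen)

// Shard assignment over a 32-byte node id: take the first 8 bytes as a
// little-endian u64 and mask down to NUM_SHARDS buckets.
fn shard_of(node_id: &[u8; 32]) -> usize {
    let val = u64::from_le_bytes(node_id[0..8].try_into().unwrap());
    (val & (NUM_SHARDS as u64 - 1)) as usize // fast modulo via bitmask
}
```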
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-10.pdf}
-\end{center}
-
-\textbf{Benefits}:
-
-\begin{itemize}
-\tightlist
-\item
-  Items with the same \texttt{shard\_of(scope)} are processed together
-  → better cache hits
-\item
-  Workers dynamically claim shards via an atomic counter → load
-  balancing
-\item
-  Determinism is enforced by the merge, not by execution order
-\end{itemize}
-
-\subsection{6.4 The Execution Loop}\label{the-execution-loop}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ execute\_parallel\_sharded(}
-\NormalTok{ view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{\textless{}}\OtherTok{\textquotesingle{}\_}\OperatorTok{\textgreater{},}
-\NormalTok{ items}\OperatorTok{:} \OperatorTok{\&}\NormalTok{[ExecItem]}\OperatorTok{,}
-\NormalTok{ workers}\OperatorTok{:} \DataTypeTok{usize}\OperatorTok{,}
-\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \CommentTok{// Partition items into 256 shards}
- \KeywordTok{let}\NormalTok{ shards }\OperatorTok{=}\NormalTok{ partition\_into\_shards(items)}\OperatorTok{;}
-
- \CommentTok{// Atomic counter for work{-}stealing}
- \KeywordTok{let}\NormalTok{ next\_shard }\OperatorTok{=} \PreprocessorTok{AtomicUsize::}\NormalTok{new(}\DecValTok{0}\NormalTok{)}\OperatorTok{;}
-
- \PreprocessorTok{std::thread::}\NormalTok{scope(}\OperatorTok{|}\NormalTok{s}\OperatorTok{|} \OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ handles}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{\_}\OperatorTok{\textgreater{}} \OperatorTok{=}\NormalTok{ (}\DecValTok{0}\OperatorTok{..}\NormalTok{workers)}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{\_}\OperatorTok{|} \OperatorTok{\{}
-\NormalTok{ s}\OperatorTok{.}\NormalTok{spawn(}\OperatorTok{||} \OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ delta }\OperatorTok{=} \PreprocessorTok{TickDelta::}\NormalTok{new()}\OperatorTok{;}
- \ControlFlowTok{loop} \OperatorTok{\{}
- \CommentTok{// Claim next shard atomically}
- \KeywordTok{let}\NormalTok{ shard\_id }\OperatorTok{=}\NormalTok{ next\_shard}\OperatorTok{.}\NormalTok{fetch\_add(}\DecValTok{1}\OperatorTok{,} \PreprocessorTok{Ordering::}\NormalTok{Relaxed)}\OperatorTok{;}
- \ControlFlowTok{if}\NormalTok{ shard\_id }\OperatorTok{\textgreater{}=}\NormalTok{ NUM\_SHARDS }\OperatorTok{\{} \ControlFlowTok{break}\OperatorTok{;} \OperatorTok{\}}
-
- \CommentTok{// Execute all items in this shard}
- \ControlFlowTok{for}\NormalTok{ item }\KeywordTok{in} \OperatorTok{\&}\NormalTok{shards[shard\_id]}\OperatorTok{.}\NormalTok{items }\OperatorTok{\{}
-\NormalTok{ (item}\OperatorTok{.}\NormalTok{exec)(view}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,} \OperatorTok{\&}\NormalTok{item}\OperatorTok{.}\NormalTok{scope}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ delta)}\OperatorTok{;}
- \OperatorTok{\}}
- \OperatorTok{\}}
-\NormalTok{ delta}
- \OperatorTok{\}}\NormalTok{)}
- \OperatorTok{\}}\NormalTok{)}\OperatorTok{.}\NormalTok{collect()}\OperatorTok{;}
-
-\NormalTok{ handles}\OperatorTok{.}\NormalTok{into\_iter()}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{h}\OperatorTok{|}\NormalTok{ h}\OperatorTok{.}\NormalTok{join()}\OperatorTok{.}\NormalTok{unwrap())}\OperatorTok{.}\NormalTok{collect()}
- \OperatorTok{\}}\NormalTok{)}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
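The claim-and-drain structure reduces to a runnable miniature. Here `u32` items stand in for `ExecItem` execution and a final sort stands in for the canonical merge, which is what makes the result worker-count invariant:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Workers race on an atomic cursor; each claimed shard is processed
// whole; per-worker outputs are merged afterwards, so the racing claim
// order never affects the result.
fn run(shards: &[Vec<u32>], workers: usize) -> Vec<u32> {
    let next = AtomicUsize::new(0);
    let mut out: Vec<u32> = std::thread::scope(|s| {
        let handles: Vec<_> = (0..workers)
            .map(|_| {
                s.spawn(|| {
                    let mut local = Vec::new();
                    loop {
                        // Claim the next shard atomically (work-stealing).
                        let i = next.fetch_add(1, Ordering::Relaxed);
                        if i >= shards.len() {
                            break;
                        }
                        local.extend_from_slice(&shards[i]);
                    }
                    local
                })
            })
            .collect();
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    });
    out.sort_unstable(); // canonical merge: independent of execution order
    out
}
```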
-
-\subsection{6.5 The Canonical Merge}\label{the-canonical-merge}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ merge\_deltas(deltas}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{TickDelta}\OperatorTok{\textgreater{}}\NormalTok{) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{Result}\OperatorTok{\textless{}}\DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{WarpOp}\OperatorTok{\textgreater{},}\NormalTok{ MergeConflict}\OperatorTok{\textgreater{}} \OperatorTok{\{}
- \CommentTok{// 1. Flatten all ops from all workers}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ all\_ops}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{(WarpOpKey}\OperatorTok{,}\NormalTok{ OpOrigin}\OperatorTok{,}\NormalTok{ WarpOp)}\OperatorTok{\textgreater{}} \OperatorTok{=}\NormalTok{ deltas}
- \OperatorTok{.}\NormalTok{into\_iter()}
- \OperatorTok{.}\NormalTok{flat\_map(}\OperatorTok{|}\NormalTok{d}\OperatorTok{|}\NormalTok{ d}\OperatorTok{.}\NormalTok{ops\_with\_origins())}
- \OperatorTok{.}\NormalTok{collect()}\OperatorTok{;}
-
- \CommentTok{// 2. Sort canonically by (key, origin)}
-\NormalTok{ all\_ops}\OperatorTok{.}\NormalTok{sort\_by\_key(}\OperatorTok{|}\NormalTok{(key}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ (key}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,}\NormalTok{ origin}\OperatorTok{.}\NormalTok{clone()))}\OperatorTok{;}
-
- \CommentTok{// 3. Deduplicate and detect conflicts}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ result }\OperatorTok{=} \DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
- \ControlFlowTok{for}\NormalTok{ group }\KeywordTok{in}\NormalTok{ all\_ops}\OperatorTok{.}\NormalTok{chunk\_by(}\OperatorTok{|}\NormalTok{(k1}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{,}\NormalTok{ (k2}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ k1 }\OperatorTok{==}\NormalTok{ k2) }\OperatorTok{\{}
- \KeywordTok{let}\NormalTok{ first }\OperatorTok{=} \OperatorTok{\&}\NormalTok{group[}\DecValTok{0}\NormalTok{]}\OperatorTok{.}\DecValTok{2}\OperatorTok{;}
- \ControlFlowTok{if}\NormalTok{ group}\OperatorTok{.}\NormalTok{iter()}\OperatorTok{.}\NormalTok{all(}\OperatorTok{|}\NormalTok{(\_}\OperatorTok{,}\NormalTok{ \_}\OperatorTok{,}\NormalTok{ op)}\OperatorTok{|}\NormalTok{ op }\OperatorTok{==}\NormalTok{ first) }\OperatorTok{\{}
-\NormalTok{ result}\OperatorTok{.}\NormalTok{push(first}\OperatorTok{.}\NormalTok{clone())}\OperatorTok{;} \CommentTok{// All identical: keep one}
- \OperatorTok{\}} \ControlFlowTok{else} \OperatorTok{\{}
- \ControlFlowTok{return} \ConstantTok{Err}\NormalTok{(MergeConflict }\OperatorTok{\{}\NormalTok{ writers}\OperatorTok{:}\NormalTok{ group}\OperatorTok{.}\NormalTok{iter()}\OperatorTok{.}\NormalTok{map(}\OperatorTok{|}\NormalTok{(\_}\OperatorTok{,}\NormalTok{ o}\OperatorTok{,}\NormalTok{ \_)}\OperatorTok{|}\NormalTok{ o)}\OperatorTok{.}\NormalTok{collect() }\OperatorTok{\}}\NormalTok{)}\OperatorTok{;}
- \OperatorTok{\}}
- \OperatorTok{\}}
-
- \ConstantTok{Ok}\NormalTok{(result)}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Key guarantee}: Conflicts are bugs. If footprints were correct,
-no two rewrites should write different values to the same key.
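The sort/dedup/conflict contract in miniature, over `(key, origin, value)` triples instead of real `WarpOp`s:

```rust
// Canonical merge sketch: identical writes to a key collapse to one,
// differing writes are a footprint bug (reported via Err(key)).
fn merge_ops(
    mut ops: Vec<(u32, u32, &'static str)>, // (key, origin, value)
) -> Result<Vec<(u32, &'static str)>, u32> {
    // 1. Canonical order: by key, then origin, regardless of which
    //    worker emitted the op first.
    ops.sort_by_key(|&(key, origin, _)| (key, origin));

    // 2. Deduplicate identical ops; error on conflicting ones.
    let mut out: Vec<(u32, &'static str)> = Vec::new();
    for &(key, _, value) in &ops {
        if let Some(&(last_key, last_val)) = out.last() {
            if last_key == key {
                if last_val != value {
                    return Err(key); // two different values for one key
                }
                continue; // identical duplicate: keep one copy
            }
        }
        out.push((key, value));
    }
    Ok(out)
}
```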
-
-\subsection{6.6 Runtime Enforcement:
-FootprintGuard}\label{runtime-enforcement-footprintguard}
-
-\texttt{FootprintGuard} is the runtime mechanism that validates every
-graph access and emitted op against the declared footprint.
-
-\subsubsection{Read Enforcement}\label{read-enforcement}
-
-Read enforcement is implemented via \texttt{GraphView::new\_guarded()},
-which wraps the underlying \texttt{GraphView} with an intercepting layer.
-Every accessor call---\texttt{node()}, \texttt{edges\_from()},
-\texttt{node\_attachment()}, etc.---is checked against the footprint's
-declared read sets (\texttt{n\_read}, \texttt{e\_read}, \texttt{a\_read}).
-An access to an undeclared resource triggers a \texttt{FootprintViolation}
-panic.
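A stripped-down sketch of the guarded-accessor idea; the types and names are illustrative, not the real `GraphView` API, and a `Result` stands in for the violation panic:

```rust
use std::collections::BTreeSet;

// Wrapper that checks every node access against the declared read set
// before performing the (here, stubbed-out) underlying lookup.
struct GuardedView {
    declared_reads: BTreeSet<u64>,
}

impl GuardedView {
    fn node(&self, id: u64) -> Result<u64, &'static str> {
        if !self.declared_reads.contains(&id) {
            // The real guard panics with FootprintViolation.
            return Err("FootprintViolation: undeclared read");
        }
        Ok(id) // stand-in for the real node lookup
    }
}
```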
-
-\subsubsection{Write Enforcement}\label{write-enforcement}
-
-Write enforcement uses a post-hoc \texttt{check\_op()} strategy. The
-executor runs inside a \texttt{catch\_unwind} boundary, and validation
-runs on every op emitted into the \texttt{TickDelta} regardless of
-whether the executor completes normally or panics. This catches undeclared
-writes, cross-warp emissions, unauthorized instance ops, and adjacency
-violations (edge ops whose \texttt{from} node is absent from
-\texttt{n\_write}).
-
-\subsubsection{Scope and Lifecycle}\label{scope-and-lifecycle}
-
-The guard is instantiated \emph{per-\texttt{ExecItem}} within a
-\texttt{WorkUnit}. Each rule invocation receives its own guard, scoped to
-that item's computed footprint. The \texttt{check\_op()} function validates
-\texttt{TickDelta} emissions against the footprint. Enforcement yields two
-payload variants:
-\begin{itemize}
-\item \texttt{FootprintViolation}: emitted when \texttt{check\_op} detects an
- illegal op (undeclared write, cross-warp emission, etc.)
-\item \texttt{FootprintViolationWithPanic}: emitted when the executor itself
- panics and the guard wraps that panic together with any detected violation
-\end{itemize}
-
-\textbf{Tick Fallout Semantics:} When enforcement fails, the wrapped panic
-turns the \texttt{TickDelta} into a \texttt{PoisonedDelta}, which is barred
-from merging. The offending \texttt{ExecItem} is aborted immediately at the
-point of detection. If a poisoned delta nevertheless reaches the commit
-path, merge raises \texttt{MergeError::PoisonedDelta} and triggers
-worker/tick recovery. In short: item-level abort is immediate, while the
-merge-time error surfaces only at commit.
-
-\subsubsection{Configuration}\label{guard-configuration}
-
-The guard is \texttt{cfg}-gated:
-
-\begin{itemize}
-\item \textbf{Active} in debug builds (\texttt{debug\_assertions}) or when
- the \texttt{footprint\_enforce\_release} feature is enabled.
-\item \textbf{Disabled} when the \texttt{unsafe\_graph} feature is set,
- which removes all guard overhead for maximum throughput in production
- scenarios where footprints have already been validated.
-\end{itemize}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{7. Storage \& Hashing: Content-Addressed
-Truth}\label{storage-hashing-content-addressed-truth}
-
-\subsection{7.1 The GraphStore}\label{the-graphstore}
-
-Located in \texttt{crates/warp-core/src/graph.rs}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{struct}\NormalTok{ GraphStore }\OperatorTok{\{}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) warp\_id}\OperatorTok{:}\NormalTok{ WarpId}\OperatorTok{,}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) nodes}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ NodeRecord}\OperatorTok{\textgreater{},}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edges\_from}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{EdgeRecord}\OperatorTok{\textgreater{}\textgreater{},}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edges\_to}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{\textgreater{}\textgreater{},} \CommentTok{// Reverse index}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) node\_attachments}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ AttachmentValue}\OperatorTok{\textgreater{},}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_attachments}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ AttachmentValue}\OperatorTok{\textgreater{},}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_index}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Edge → Source}
- \KeywordTok{pub}\NormalTok{(}\KeywordTok{crate}\NormalTok{) edge\_to\_index}\OperatorTok{:}\NormalTok{ BTreeMap}\OperatorTok{\textless{}}\NormalTok{EdgeId}\OperatorTok{,}\NormalTok{ NodeId}\OperatorTok{\textgreater{},} \CommentTok{// Edge → Target}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Why BTreeMap everywhere?}
-
-\begin{itemize}
-\tightlist
-\item
-  Deterministic iteration order (sorted by key)
-\item
-  Enables canonical hashing
-\item
-  No HashMap ordering surprises
-\end{itemize}
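The determinism claim is easy to demonstrate: however the map is populated, iteration is always ascending by key, which is exactly the property canonical hashing relies on:

```rust
use std::collections::BTreeMap;

// Collect keys in iteration order. Two stores built from the same pairs
// in different insertion orders yield the same sequence.
fn canonical_keys(pairs: &[(u32, &'static str)]) -> Vec<u32> {
    let map: BTreeMap<u32, &'static str> = pairs.iter().copied().collect();
    map.keys().copied().collect()
}
```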
-
-\subsection{7.2 WSC: Write-Streaming Columnar
-Format}\label{wsc-write-streaming-columnar-format}
-
-For efficient snapshots, Echo uses WSC---a zero-copy, mmap-friendly
-format:
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────────────────────┐
-│ WSC SNAPSHOT FILE │
-├─────────────────────────────────────────────────────────────────┤
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ NODES TABLE (sorted by NodeId) │ │
-│ │ ┌──────────┬───────────┬──────────┐ │ │
-│ │ │ NodeRow │ NodeRow │ NodeRow │ ... │ │
-│ │ │ 64 bytes │ 64 bytes │ 64 bytes │ │ │
-│ │ └──────────┴───────────┴──────────┘ │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ EDGES TABLE (sorted by EdgeId) │ │
-│ │ ┌───────────┬───────────┬───────────┐ │ │
-│ │ │ EdgeRow │ EdgeRow │ EdgeRow │ ... │ │
-│ │ │ 128 bytes │ 128 bytes │ 128 bytes │ │ │
-│ │ └───────────┴───────────┴───────────┘ │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ OUT_INDEX (per-node → range into out_edges) │ │
-│ │ ┌────────────────┬────────────────┐ │ │
-│ │ │ Range (16 B) │ Range (16 B) │ ... │ │
-│ │ └────────────────┴────────────────┘ │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ BLOB ARENA (variable-length data) │ │
-│ │ Referenced by (offset, length) tuples │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-└─────────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\textbf{Row types} (8-byte aligned):
-
-\begin{itemize}
-\tightlist
-\item
-  \texttt{NodeRow}: 64 bytes (node\_id{[}32{]} + node\_type{[}32{]})
-\item
-  \texttt{EdgeRow}: 128 bytes (edge\_id{[}32{]} + from{[}32{]} +
-  to{[}32{]} + type{[}32{]})
-\item
-  \texttt{Range}: 16 bytes (start\_le{[}8{]} + len\_le{[}8{]})
-\end{itemize}
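These fixed widths can be checked with `#[repr(C)]` stand-ins; the field names are assumed from the byte layout above:

```rust
// With #[repr(C)] and only byte arrays / u64 fields, the row sizes are
// exactly the documented 64 / 128 / 16 bytes, and every size is a
// multiple of 8, so rows pack without padding in an 8-byte-aligned table.
#[repr(C)]
struct NodeRow {
    node_id: [u8; 32],
    node_type: [u8; 32],
}

#[repr(C)]
struct EdgeRow {
    edge_id: [u8; 32],
    from: [u8; 32],
    to: [u8; 32],
    edge_type: [u8; 32],
}

#[repr(C)]
struct Range {
    start_le: u64,
    len_le: u64,
}
```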
-
-\subsection{7.3 Copy-on-Write Semantics}\label{copy-on-write-semantics}
-
-\textbf{Rule}: During a tick, nothing shared is mutated.
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-11.pdf}
-\end{center}
-
-\textbf{Structural sharing}: Only changed segments are newly written.
-Unchanged data is referenced by hash.
-
-\subsection{7.4 Hash Algorithm Details}\label{hash-algorithm-details}
-
-\textbf{State Root} (BLAKE3, v2):
-
-\begin{verbatim}
-state_root = BLAKE3(
- root_id[32] ||
- instance_count[8, LE] ||
- for each instance in BTreeMap order:
- warp_id_len[8, LE] ||
- warp_id_bytes ||
- node_count[8, LE] ||
- for each node in ascending NodeId order:
- node_id[32] ||
- node_type[32] ||
- for each outbound edge in ascending EdgeId order:
- edge_id[32] ||
- edge_type[32] ||
- to_node[32] ||
- for each attachment:
- key_len[8, LE] ||
- key_bytes ||
- type_id[32] ||
- value_len[8, LE] ||
- value_bytes
-)
-\end{verbatim}
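The recurring `len[8, LE] || bytes` pattern in this preimage is plain length-prefixed little-endian framing, which a small helper makes concrete (the name `encode_len_prefixed` is illustrative):

```rust
// Append one length-prefixed field: u64 little-endian byte count,
// then the raw bytes. This framing is what keeps the preimage
// unambiguous and architecture-independent.
fn encode_len_prefixed(buf: &mut Vec<u8>, bytes: &[u8]) {
    buf.extend_from_slice(&(bytes.len() as u64).to_le_bytes());
    buf.extend_from_slice(bytes);
}
```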
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{8. Worked Example: Tracing a Link
-Click}\label{worked-example-tracing-a-link-click}
-
-Let's trace what happens when a user clicks a link in a hypothetical
-WARP-based navigation system.
-
-\subsection{8.1 The Scenario}\label{the-scenario}
-
-Imagine a simple site with two pages:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-12.pdf}
-\end{center}
-
-\textbf{User clicks the link}: This should navigate from Home to About.
-
-\subsection{8.2 Step 1: Intent Ingestion}\label{step-1-intent-ingestion}
-
-The click is captured by the viewer and converted to an \textbf{intent}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// In the viewer:}
-\KeywordTok{let}\NormalTok{ intent }\OperatorTok{=}\NormalTok{ NavigateIntent }\OperatorTok{\{}
-\NormalTok{ target\_page}\OperatorTok{:}\NormalTok{ about\_node\_id}\OperatorTok{,}
-\NormalTok{ timestamp}\OperatorTok{:}\NormalTok{ deterministic\_tick}\OperatorTok{,}
-\OperatorTok{\};}
-\KeywordTok{let}\NormalTok{ intent\_bytes }\OperatorTok{=}\NormalTok{ canonical\_encode(}\OperatorTok{\&}\NormalTok{intent)}\OperatorTok{;}
-
-\CommentTok{// Send to engine:}
-\NormalTok{engine}\OperatorTok{.}\NormalTok{ingest\_intent(intent\_bytes)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{What happens inside \texttt{ingest\_intent}}:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-13.pdf}
-\end{center}
-
-\subsection{8.3 Step 2: Begin
-Transaction}\label{step-2-begin-transaction}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ tx }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{begin()}\OperatorTok{;} \CommentTok{// tx = TxId(1)}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{8.4 Step 3: Dispatch Intent}\label{step-3-dispatch-intent}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{engine}\OperatorTok{.}\NormalTok{dispatch\_next\_intent(tx)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{What happens}:
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-14.pdf}
-\end{center}
-
-\subsection{8.5 Step 4: Rule Matching}\label{step-4-rule-matching}
-
-The \texttt{cmd/navigate} rule matches:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// Matcher: Does this intent want navigation?}
-\KeywordTok{fn}\NormalTok{ navigate\_matcher(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}} \DataTypeTok{bool} \OperatorTok{\{}
- \CommentTok{// Absent node means no match (\textasciigrave{}?\textasciigrave{} is unavailable in a bool fn)}
-\NormalTok{ view}\OperatorTok{.}\NormalTok{node(scope)}\OperatorTok{.}\NormalTok{map\_or(}\ConstantTok{false}\OperatorTok{,} \OperatorTok{|}\NormalTok{intent}\OperatorTok{|}\NormalTok{ intent}\OperatorTok{.}\NormalTok{type\_id }\OperatorTok{==} \StringTok{"navigate\_intent"}\NormalTok{)}
-\OperatorTok{\}}
-
-\CommentTok{// Footprint: What will we read/write?}
-\KeywordTok{fn}\NormalTok{ navigate\_footprint(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId) }\OperatorTok{{-}\textgreater{}}\NormalTok{ Footprint }\OperatorTok{\{}
-\NormalTok{ Footprint }\OperatorTok{\{}
-\NormalTok{ n\_read}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[scope}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{,}\NormalTok{ viewer\_node]}\OperatorTok{,}
-\NormalTok{ n\_write}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[]}\OperatorTok{,}
-\NormalTok{ a\_read}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[]}\OperatorTok{,}
-\NormalTok{ a\_write}\OperatorTok{:} \PreprocessorTok{btreeset!}\NormalTok{[}\PreprocessorTok{AttachmentKey::}\NormalTok{new(viewer\_node}\OperatorTok{,} \StringTok{"current"}\NormalTok{)]}\OperatorTok{,}
- \OperatorTok{..}\PreprocessorTok{Default::}\NormalTok{default()}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-The rule is enqueued:
-
-\begin{verbatim}
-┌─────────────────────────────────────────────────────────────┐
-│ PendingRewrite │
-├─────────────────────────────────────────────────────────────┤
-│ rule_id: "cmd/navigate" │
-│ scope: 0xABCD... (intent node) │
-│ footprint: { n_read: [intent, viewer], a_write: [current] } │
-│ tx: TxId(1) │
-└─────────────────────────────────────────────────────────────┘
-\end{verbatim}
-
-\subsection{8.6 Step 5: Commit}\label{step-5-commit}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ (snapshot}\OperatorTok{,}\NormalTok{ receipt}\OperatorTok{,}\NormalTok{ patch) }\OperatorTok{=}\NormalTok{ engine}\OperatorTok{.}\NormalTok{commit\_with\_receipt(tx)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\subsubsection{5a. Drain}\label{a.-drain}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ rewrites }\OperatorTok{=}\NormalTok{ scheduler}\OperatorTok{.}\NormalTok{drain\_for\_tx(tx)}\OperatorTok{;}
-\CommentTok{// Result: [PendingRewrite \{ rule: "cmd/navigate", scope: intent\_node \}]}
-\end{Highlighting}
-\end{Shaded}
-
-\subsubsection{5b. Reserve}\label{b.-reserve}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// Check footprint independence}
-\CommentTok{// No conflicts (only one rewrite)}
-\CommentTok{// Accepted!}
-\end{Highlighting}
-\end{Shaded}
-
-\subsubsection{5c. Execute}\label{c.-execute}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{fn}\NormalTok{ navigate\_executor(view}\OperatorTok{:}\NormalTok{ GraphView}\OperatorTok{,}\NormalTok{ scope}\OperatorTok{:} \OperatorTok{\&}\NormalTok{NodeId}\OperatorTok{,}\NormalTok{ delta}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ TickDelta) }\OperatorTok{\{}
- \CommentTok{// Read the intent to find target}
- \KeywordTok{let}\NormalTok{ intent }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(scope)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ target\_page }\OperatorTok{=}\NormalTok{ intent}\OperatorTok{.}\NormalTok{attachment(}\StringTok{"target"}\NormalTok{)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;}
-
- \CommentTok{// Read current viewer state (for logging/validation)}
- \KeywordTok{let}\NormalTok{ viewer }\OperatorTok{=}\NormalTok{ view}\OperatorTok{.}\NormalTok{node(}\OperatorTok{\&}\NormalTok{VIEWER\_NODE)}\OperatorTok{.}\NormalTok{unwrap()}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ old\_page }\OperatorTok{=}\NormalTok{ viewer}\OperatorTok{.}\NormalTok{attachment(}\StringTok{"current"}\NormalTok{)}\OperatorTok{;}
-
- \CommentTok{// Emit the change: update viewer\textquotesingle{}s current page}
-\NormalTok{ delta}\OperatorTok{.}\NormalTok{emit(}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}
-\NormalTok{ node}\OperatorTok{:}\NormalTok{ VIEWER\_NODE}\OperatorTok{,}
-\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{.}\NormalTok{into()}\OperatorTok{,}
-\NormalTok{ value}\OperatorTok{:} \PreprocessorTok{AttachmentValue::}\NormalTok{Atom(AtomPayload }\OperatorTok{\{}
-\NormalTok{ type\_id}\OperatorTok{:} \StringTok{"node\_ref"}\OperatorTok{.}\NormalTok{into()}\OperatorTok{,}
-\NormalTok{ bytes}\OperatorTok{:}\NormalTok{ target\_page}\OperatorTok{.}\NormalTok{to\_bytes()}\OperatorTok{,}
- \OperatorTok{\}}\NormalTok{)}\OperatorTok{,}
- \OperatorTok{\}}\NormalTok{)}\OperatorTok{;}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{TickDelta now contains}:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{[}
-\NormalTok{ (}\PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}
-\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,}
-\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,}
-\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id}
- \OperatorTok{\},}\NormalTok{ OpOrigin }\OperatorTok{\{}\NormalTok{ intent\_id}\OperatorTok{:} \DecValTok{1}\OperatorTok{,}\NormalTok{ rule\_id}\OperatorTok{:} \DecValTok{42}\OperatorTok{,}\NormalTok{ match\_ix}\OperatorTok{:} \DecValTok{0}\OperatorTok{,}\NormalTok{ op\_ix}\OperatorTok{:} \DecValTok{0} \OperatorTok{\}}\NormalTok{)}
-\NormalTok{]}
-\end{Highlighting}
-\end{Shaded}
-
-\subsubsection{5d. Merge}\label{d.-merge}
-
-Only one delta, trivial merge:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{let}\NormalTok{ merged\_ops }\OperatorTok{=} \PreprocessorTok{vec!}\NormalTok{[}
- \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,}\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,}\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id }\OperatorTok{\}}
-\NormalTok{]}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\subsubsection{5e. Finalize}\label{e.-finalize}
-
-Apply to state:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{state}\OperatorTok{.}\NormalTok{set\_attachment(viewer\_node}\OperatorTok{,} \StringTok{"current"}\OperatorTok{,}\NormalTok{ about\_node\_id)}\OperatorTok{;}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{8.7 Step 6: Hash Computation}\label{step-6-hash-computation}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// State root: BLAKE3 of reachable graph}
-\KeywordTok{let}\NormalTok{ state\_root }\OperatorTok{=}\NormalTok{ compute\_state\_root(}\OperatorTok{\&}\NormalTok{state)}\OperatorTok{;} \CommentTok{// 0x7890...}
-
-\CommentTok{// Patch digest: BLAKE3 of merged ops}
-\KeywordTok{let}\NormalTok{ patch\_digest }\OperatorTok{=}\NormalTok{ compute\_patch\_digest(}\OperatorTok{\&}\NormalTok{merged\_ops)}\OperatorTok{;} \CommentTok{// 0xDEF0...}
-
-\CommentTok{// Commit hash}
-\KeywordTok{let}\NormalTok{ commit\_hash }\OperatorTok{=}\NormalTok{ BLAKE3(}
-\NormalTok{ VERSION\_TAG }\OperatorTok{||}
-\NormalTok{ [parent\_hash] }\OperatorTok{||}
-\NormalTok{ state\_root }\OperatorTok{||}
-\NormalTok{ patch\_digest }\OperatorTok{||}
-\NormalTok{ policy\_id}
-\NormalTok{)}\OperatorTok{;} \CommentTok{// 0x1234...}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{8.8 Step 7: Emit to Tools}\label{step-7-emit-to-tools}
-
-The engine emits a \texttt{WarpDiff} to the session hub:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\NormalTok{WarpDiff }\OperatorTok{\{}
-\NormalTok{ from\_epoch}\OperatorTok{:} \DecValTok{0}\OperatorTok{,}
-\NormalTok{ to\_epoch}\OperatorTok{:} \DecValTok{1}\OperatorTok{,}
-\NormalTok{ ops}\OperatorTok{:} \PreprocessorTok{vec!}\NormalTok{[}
- \PreprocessorTok{WarpOp::}\NormalTok{SetAttachment }\OperatorTok{\{}
-\NormalTok{ node}\OperatorTok{:}\NormalTok{ viewer\_node}\OperatorTok{,}
-\NormalTok{ key}\OperatorTok{:} \StringTok{"current"}\OperatorTok{,}
-\NormalTok{ value}\OperatorTok{:}\NormalTok{ about\_node\_id}
- \OperatorTok{\}}
-\NormalTok{ ]}\OperatorTok{,}
-\NormalTok{ state\_hash}\OperatorTok{:} \DecValTok{0x7890}\OperatorTok{...,}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{8.9 Step 8: Viewer Applies
-Diff}\label{step-8-viewer-applies-diff}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\CommentTok{// In warp{-}viewer:}
-\KeywordTok{fn}\NormalTok{ process\_frames(viewer}\OperatorTok{:} \OperatorTok{\&}\KeywordTok{mut}\NormalTok{ ViewerState}\OperatorTok{,}\NormalTok{ frames}\OperatorTok{:} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{WarpFrame}\OperatorTok{\textgreater{}}\NormalTok{) }\OperatorTok{\{}
- \ControlFlowTok{for}\NormalTok{ frame }\KeywordTok{in}\NormalTok{ frames }\OperatorTok{\{}
- \ControlFlowTok{match}\NormalTok{ frame }\OperatorTok{\{}
- \PreprocessorTok{WarpFrame::}\NormalTok{Diff(diff) }\OperatorTok{=\textgreater{}} \OperatorTok{\{}
- \CommentTok{// Verify we have the parent epoch}
- \PreprocessorTok{assert\_eq!}\NormalTok{(viewer}\OperatorTok{.}\NormalTok{epoch}\OperatorTok{,} \ConstantTok{Some}\NormalTok{(diff}\OperatorTok{.}\NormalTok{from\_epoch))}\OperatorTok{;}
-
- \CommentTok{// Apply each operation}
- \ControlFlowTok{for}\NormalTok{ op }\KeywordTok{in}\NormalTok{ diff}\OperatorTok{.}\NormalTok{ops }\OperatorTok{\{}
-\NormalTok{ viewer}\OperatorTok{.}\NormalTok{wire\_graph}\OperatorTok{.}\NormalTok{apply\_op(op)}\OperatorTok{;}
- \OperatorTok{\}}
-
- \CommentTok{// Update epoch}
-\NormalTok{ viewer}\OperatorTok{.}\NormalTok{epoch }\OperatorTok{=} \ConstantTok{Some}\NormalTok{(diff}\OperatorTok{.}\NormalTok{to\_epoch)}\OperatorTok{;}
-
- \CommentTok{// Verify hash matches!}
- \KeywordTok{let}\NormalTok{ computed }\OperatorTok{=}\NormalTok{ viewer}\OperatorTok{.}\NormalTok{wire\_graph}\OperatorTok{.}\NormalTok{state\_hash()}\OperatorTok{;}
- \PreprocessorTok{assert\_eq!}\NormalTok{(computed}\OperatorTok{,}\NormalTok{ diff}\OperatorTok{.}\NormalTok{state\_hash}\OperatorTok{,} \StringTok{"DESYNC!"}\NormalTok{)}\OperatorTok{;}
- \OperatorTok{\}}
- \CommentTok{// ...}
- \OperatorTok{\}}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{8.10 The Result}\label{the-result}
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-15.pdf}
-\end{center}
-
-\textbf{The navigation is complete.} The viewer now displays the About
-page, and the state hash proves it happened deterministically.
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{9. The Viewer: Observing Echo}\label{the-viewer-observing-echo}
-
-\subsection{9.1 Event Handling
-Architecture}\label{event-handling-architecture}
-
-The viewer uses a \textbf{pure reducer pattern} (similar to Redux/Elm):
-
-\begin{center}
-\includegraphics[max width=\textwidth,max height=0.4\textheight,keepaspectratio]{diagrams/what-makes-echo-tick-16.pdf}
-\end{center}
-
-\subsection{9.2 The UiEvent Enum}\label{the-uievent-enum}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{enum}\NormalTok{ UiEvent }\OperatorTok{\{}
- \CommentTok{// Menu navigation}
-\NormalTok{ ConnectClicked}\OperatorTok{,}
-\NormalTok{ SettingsClicked}\OperatorTok{,}
-\NormalTok{ ExitClicked}\OperatorTok{,}
-
- \CommentTok{// Connection form}
-\NormalTok{ ConnectHostChanged(}\DataTypeTok{String}\NormalTok{)}\OperatorTok{,}
-\NormalTok{ ConnectPortChanged(}\DataTypeTok{u16}\NormalTok{)}\OperatorTok{,}
-\NormalTok{ ConnectSubmit}\OperatorTok{,}
-
- \CommentTok{// Overlays}
-\NormalTok{ OpenMenu}\OperatorTok{,}
-\NormalTok{ CloseOverlay}\OperatorTok{,}
-\NormalTok{ OpenSettingsOverlay}\OperatorTok{,}
-
- \CommentTok{// System}
-\NormalTok{ ShutdownRequested}\OperatorTok{,}
-\NormalTok{ EnterView}\OperatorTok{,}
-\NormalTok{ ShowError(}\DataTypeTok{String}\NormalTok{)}\OperatorTok{,}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\subsection{9.3 The Pure Reducer}\label{the-pure-reducer}
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ reduce(ui}\OperatorTok{:} \OperatorTok{\&}\NormalTok{UiState}\OperatorTok{,}\NormalTok{ ev}\OperatorTok{:}\NormalTok{ UiEvent) }\OperatorTok{{-}\textgreater{}}\NormalTok{ (UiState}\OperatorTok{,} \DataTypeTok{Vec}\OperatorTok{\textless{}}\NormalTok{UiEffect}\OperatorTok{\textgreater{}}\NormalTok{) }\OperatorTok{\{}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ next }\OperatorTok{=}\NormalTok{ ui}\OperatorTok{.}\NormalTok{clone()}\OperatorTok{;}
- \KeywordTok{let} \KeywordTok{mut}\NormalTok{ fx }\OperatorTok{=} \DataTypeTok{Vec}\PreprocessorTok{::}\NormalTok{new()}\OperatorTok{;}
-
- \ControlFlowTok{match}\NormalTok{ ev }\OperatorTok{\{}
- \PreprocessorTok{UiEvent::}\NormalTok{ConnectClicked }\OperatorTok{=\textgreater{}} \OperatorTok{\{}
-\NormalTok{ next}\OperatorTok{.}\NormalTok{title\_mode }\OperatorTok{=} \PreprocessorTok{TitleMode::}\NormalTok{ConnectForm}\OperatorTok{;}
- \OperatorTok{\}}
- \PreprocessorTok{UiEvent::}\NormalTok{ConnectSubmit }\OperatorTok{=\textgreater{}} \OperatorTok{\{}
-\NormalTok{ next}\OperatorTok{.}\NormalTok{screen }\OperatorTok{=} \PreprocessorTok{Screen::}\NormalTok{Connecting}\OperatorTok{;}
-\NormalTok{ fx}\OperatorTok{.}\NormalTok{push(}\PreprocessorTok{UiEffect::}\NormalTok{RequestConnect)}\OperatorTok{;}
- \OperatorTok{\}}
- \PreprocessorTok{UiEvent::}\NormalTok{EnterView }\OperatorTok{=\textgreater{}} \OperatorTok{\{}
-\NormalTok{ next}\OperatorTok{.}\NormalTok{screen }\OperatorTok{=} \PreprocessorTok{Screen::}\NormalTok{View}\OperatorTok{;}
- \OperatorTok{\}}
- \CommentTok{// ...}
- \OperatorTok{\}}
-
-\NormalTok{ (next}\OperatorTok{,}\NormalTok{ fx)}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\textbf{Benefits}:
-
-\begin{itemize}
-\tightlist
-\item
-  \textbf{Testable}: Pure function, easy to unit test
-\item
-  \textbf{Predictable}: Same input always produces same output
-\item
-  \textbf{Debuggable}: State transitions are explicit
-\end{itemize}
-
-\subsection{9.4 Frame Loop}\label{frame-loop}
-
-Each frame:
-
-\begin{Shaded}
-\begin{Highlighting}[]
-\KeywordTok{pub} \KeywordTok{fn}\NormalTok{ frame(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\NormalTok{) }\OperatorTok{\{}
- \CommentTok{// 1. Drain session notifications}
- \ControlFlowTok{for}\NormalTok{ notification }\KeywordTok{in} \KeywordTok{self}\OperatorTok{.}\NormalTok{session}\OperatorTok{.}\NormalTok{drain\_notifications(}\DecValTok{64}\NormalTok{) }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{handle\_notification(notification)}\OperatorTok{;}
- \OperatorTok{\}}
-
- \CommentTok{// 2. Process incoming frames}
- \KeywordTok{let}\NormalTok{ frames }\OperatorTok{=} \KeywordTok{self}\OperatorTok{.}\NormalTok{session}\OperatorTok{.}\NormalTok{drain\_frames(}\DecValTok{64}\NormalTok{)}\OperatorTok{;}
- \KeywordTok{let}\NormalTok{ outcome }\OperatorTok{=}\NormalTok{ process\_frames(}\OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{.}\NormalTok{ui}\OperatorTok{,} \OperatorTok{\&}\KeywordTok{mut} \KeywordTok{self}\OperatorTok{.}\NormalTok{viewer}\OperatorTok{,}\NormalTok{ frames)}\OperatorTok{;}
-
- \CommentTok{// 3. Handle state changes}
- \ControlFlowTok{if}\NormalTok{ outcome}\OperatorTok{.}\NormalTok{enter\_view }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{apply\_ui\_event(}\PreprocessorTok{UiEvent::}\NormalTok{EnterView)}\OperatorTok{;}
- \OperatorTok{\}}
-
- \CommentTok{// 4. Handle pointer interaction (3D view)}
- \ControlFlowTok{if} \KeywordTok{self}\OperatorTok{.}\NormalTok{ui}\OperatorTok{.}\NormalTok{screen }\OperatorTok{==} \PreprocessorTok{Screen::}\NormalTok{View }\OperatorTok{\{}
- \KeywordTok{self}\OperatorTok{.}\NormalTok{handle\_pointer(dt}\OperatorTok{,}\NormalTok{ aspect}\OperatorTok{,}\NormalTok{ width}\OperatorTok{,}\NormalTok{ height}\OperatorTok{,}\NormalTok{ window)}\OperatorTok{;}
- \OperatorTok{\}}
-
- \CommentTok{// 5. Render UI}
- \ControlFlowTok{match} \KeywordTok{self}\OperatorTok{.}\NormalTok{ui}\OperatorTok{.}\NormalTok{screen }\OperatorTok{\{}
- \PreprocessorTok{Screen::}\NormalTok{Title }\OperatorTok{=\textgreater{}}\NormalTok{ draw\_title\_screen(ctx}\OperatorTok{,} \KeywordTok{self}\NormalTok{)}\OperatorTok{,}
- \PreprocessorTok{Screen::}\NormalTok{View }\OperatorTok{=\textgreater{}}\NormalTok{ draw\_view\_hud(ctx}\OperatorTok{,} \KeywordTok{self}\NormalTok{)}\OperatorTok{,}
- \CommentTok{// ...}
- \OperatorTok{\}}
-\OperatorTok{\}}
-\end{Highlighting}
-\end{Shaded}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{10. Glossary}\label{glossary}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}
- >{\raggedright\arraybackslash}p{(\linewidth - 2\tabcolsep) * \real{0.3333}}
- >{\raggedright\arraybackslash}p{(\linewidth - 2\tabcolsep) * \real{0.6667}}@{}}
-\toprule\noalign{}
-\begin{minipage}[b]{\linewidth}\raggedright
-Term
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Definition
-\end{minipage} \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-\textbf{WARP} & Worldline Algebra for Recursive Provenance---Echo's
-graph formalism \\
-\textbf{BOAW} & Bag of Autonomous Workers---parallel execution
-architecture \\
-\textbf{Tick} & One complete cycle of the engine (begin → apply →
-commit) \\
-\textbf{Footprint} & Declaration of resources a rule will read/write \\
-\textbf{TickDelta} & Accumulator for operations during execution \\
-\textbf{WarpOp} & A single graph mutation operation \\
-\textbf{GraphView} & Read-only wrapper enforcing BOAW contract \\
-\textbf{Snapshot} & Immutable, hashable state at a point in time \\
-\textbf{WSC} & Write-Streaming Columnar---zero-copy snapshot format \\
-\textbf{State Root} & BLAKE3 hash of reachable graph state \\
-\textbf{Commit Hash} & Combined hash of state + patch + parents +
-policy \\
-\textbf{Intent} & External input that causes state changes \\
-\textbf{MaterializationBus} & Channel system for emitting data to
-tools \\
-\textbf{Scheduler} & Component ensuring deterministic rewrite
-ordering \\
-\textbf{Virtual Shard} & Cache-locality optimization (256 shards) \\
-\textbf{OpOrigin} & Metadata tracking which intent/rule produced an
-op \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix A: Key File
-Locations}\label{appendix-a-key-file-locations}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}lll@{}}
-\toprule\noalign{}
-Component & Path & Lines \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-Engine & \texttt{crates/warp-core/src/engine\_impl.rs} & 302-954 \\
-GraphStore & \texttt{crates/warp-core/src/graph.rs} & 1-300 \\
-GraphView & \texttt{crates/warp-core/src/graph\_view.rs} & 42-100 \\
-Scheduler & \texttt{crates/warp-core/src/scheduler.rs} & 59-712 \\
-Snapshot & \texttt{crates/warp-core/src/snapshot.rs} & 49-263 \\
-TickDelta & \texttt{crates/warp-core/src/tick\_delta.rs} & 38-172 \\
-BOAW Exec & \texttt{crates/warp-core/src/boaw/exec.rs} & 38-192 \\
-BOAW Shard & \texttt{crates/warp-core/src/boaw/shard.rs} & 82-120 \\
-BOAW Merge & \texttt{crates/warp-core/src/boaw/merge.rs} & 36-75 \\
-UI State & \texttt{crates/warp-viewer/src/ui\_state.rs} & 8-127 \\
-Viewer Frame & \texttt{crates/warp-viewer/src/app\_frame.rs} & 24-349 \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\section{Appendix B: Architecture Decision
-Records}\label{appendix-b-architecture-decision-records}
-
-{\def\LTcaptype{none} % do not increment counter
-\begin{longtable}[]{@{}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.1923}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.2692}}
- >{\raggedright\arraybackslash}p{(\linewidth - 4\tabcolsep) * \real{0.5385}}@{}}
-\toprule\noalign{}
-\begin{minipage}[b]{\linewidth}\raggedright
-ADR
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Title
-\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
-Key Decision
-\end{minipage} \\
-\midrule\noalign{}
-\endhead
-\bottomrule\noalign{}
-\endlastfoot
-ADR-0001 & Two-Plane Model & Separate skeleton from attachments \\
-ADR-0002 & WarpInstances & Flattened indirection for nested graphs \\
-ADR-0003 & MaterializationBus & Causality-first API, no direct writes \\
-ADR-0004 & No Global State & Dependency injection only \\
-ADR-0005 & Physics & Deterministic scheduled rewrites \\
-ADR-0006 & Ban Non-Determinism & CI enforcement scripts \\
-ADR-0007 & BOAW Storage & Immutable base + overlay + merge \\
-\end{longtable}
-}
-
-\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
-
-\emph{Document generated 2026-01-18. For the latest information, consult
-the source code and ADRs.}
-
-\backmatter
-\end{document}
diff --git a/docs/archive/study/what-makes-echo-tick.md b/docs/archive/study/what-makes-echo-tick.md
deleted file mode 100644
index 84467771..00000000
--- a/docs/archive/study/what-makes-echo-tick.md
+++ /dev/null
@@ -1,1198 +0,0 @@
-
-
-
-# What Makes Echo Tick?
-
-> **Your Tour Guide**: Claude (Opus 4.5)
->
-> Welcome! I've been asked to give you a personal tour through Echo's internals. This isn't just documentation—I'll share what I find elegant, surprising, and occasionally baffling about this codebase. When you see a red-outlined box, that's me stepping out of "narrator mode" to give you my unfiltered take.
->
-> **Reading Time**: ~45 minutes for complete understanding.
-
----
-
-## Table of Contents
-
-1. [Philosophy: Why Echo Exists](#1-philosophy-why-echo-exists)
-2. [The Big Picture: Architecture Overview](#2-the-big-picture-architecture-overview)
-3. [Core Concepts: The WARP Graph](#3-core-concepts-the-warp-graph)
-4. [The Engine: Heart of Echo](#4-the-engine-heart-of-echo)
-5. [The Tick Pipeline: Where Everything Happens](#5-the-tick-pipeline-where-everything-happens)
-6. [Parallel Execution: BOAW (Bag of Autonomous Workers)](#6-parallel-execution-boaw-bag-of-autonomous-workers)
-7. [Storage & Hashing: Content-Addressed Truth](#7-storage--hashing-content-addressed-truth)
-8. [Worked Example: Tracing a Link Click](#8-worked-example-tracing-a-link-click)
-9. [The Viewer: Observing Echo](#9-the-viewer-observing-echo)
-10. [Glossary](#10-glossary)
-
----
-
-## 1. Philosophy: Why Echo Exists
-
-### 1.1 The Problem
-
-Traditional game engines and simulations treat state as **mutable objects**. This creates fundamental problems:
-
-- **Replay is hard**: You can't just "rewind" because state changes are scattered and untracked.
-- **Synchronization is fragile**: Two machines running the same logic may diverge due to floating-point differences, thread timing, or iteration order.
-- **Debugging is a nightmare**: "It worked on my machine" is the symptom of non-determinism.
-- **Branching is impossible**: You can't easily ask "what if?" without copying everything.
-
-
-
-**Claude's Take**: These problems aren't theoretical. I've seen debugging sessions where the root cause was "HashMap iteration order changed between runs." Echo's designers got burned by non-determinism and decided: _never again_.
-
-The last point—"branching is impossible"—stands out. Most engines don't even try to support branching because it feels like a version-control feature, not a runtime one. Echo treats it as first-class. That's unusual and forward-looking.
-
-
-
-### 1.2 Echo's Answer
-
-Echo treats **state as a typed graph** and **all changes as rewrites**. Each "tick" of the engine:
-
-1. Proposes a set of rewrites
-2. Executes them in **deterministic order**
-3. Emits **cryptographic hashes** of the resulting state
-
-This means:
-
-- **Same inputs → Same outputs** (always, on any machine)
-- **State is verifiable** (hashes prove correctness)
-- **Replay is trivial** (patches are prescriptive)
-- **Branching is free** (copy-on-write snapshots)
-
-### 1.3 Core Design Principles
-
-```text
-┌─────────────────────────────────────────────────────────────────┐
-│ ECHO'S THREE PILLARS │
-├─────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
-│ │ DETERMINISM │ │ PROVENANCE │ │ TOOLING │ │
-│ │ FIRST │ │ YOU CAN │ │ AS FIRST │ │
-│ │ │ │ TRUST │ │ CLASS │ │
-│ ├─────────────────┤ ├─────────────────┤ ├─────────────────┤ │
-│ │ Same inputs │ │ Snapshots are │ │ Graphs stream │ │
-│ │ always produce │ │ content- │ │ over canonical │ │
-│ │ same hashes │ │ addressed │ │ wire protocol │ │
-│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────┘
-```
-
-
-
-**Claude's Take**: "Tooling as first-class" is the quiet win here. Most engines treat debugging, replay, and visualization as afterthoughts—bolted on after the core. Echo inverts this: the wire protocol, hash scheme, and diff format are designed so tools can exist.
-
-I've read a lot of engine architectures. This level of tooling intent is rare. It also explains why Echo can have a separate `warp-viewer` crate that works without heroic reverse-engineering.
-
-
-
----
-
-## 2. The Big Picture: Architecture Overview
-
-### 2.1 System Layers
-
-Echo is organized into distinct layers, each with a specific responsibility:
-
-
-
-
-
-**Claude's Take**: A _clean_ layer cake. Each layer talks only to its neighbors—no "Layer 5 reaching down to Layer 1 for performance reasons." That discipline is hard to maintain, and I respect it.
-
-The `WSC Format` at Layer 2 caught my eye. It's Echo's custom columnar storage format—and before you ask "why not just use Arrow or Parquet?"—I'll spoil it: WSC is designed for mmap-friendly, zero-copy reads where every row is 8-byte aligned and you can binary-search directly into the file. It's specialized for _exactly this use case_. Sometimes NIH syndrome is justified.
-
-
-
-### 2.2 Crate Map
-
-| Crate | Purpose |
-| ---------------------- | ---------------------------------------------- |
-| `warp-core` | The deterministic rewrite engine (the "brain") |
-| `echo-graph` | Renderable graph types + diff operations |
-| `echo-session-proto` | Wire protocol (canonical CBOR framing) |
-| `echo-session-service` | Headless Unix-socket hub for tools |
-| `echo-session-client` | Client helpers for connecting to the hub |
-| `warp-viewer` | Native WGPU viewer for visualizing graphs |
-
-### 2.3 Data Flow Overview
-
-
-
-
-
-**Claude's Take**: Notice how the Engine talks to itself before touching the Store? That's the commit protocol. The Engine is _paranoid_ about mutations—it queues intentions, validates them, and only then touches state. If you're used to "just mutate it directly" game engines, this will feel ceremonial. The ceremony is the point.
-
-
-
----
-
-## 3. Core Concepts: The WARP Graph
-
-### 3.1 What is a WARP Graph?
-
-A WARP (**W**orldline **A**lgebra for **R**ecursive **P**rovenance) graph is Echo's fundamental data structure. It's not just a graph—it's a graph with **deterministic semantics**.
-
-
-
-
-
-**Claude's Take**: The name "WARP" is doing a lot of work here. "Worldline" evokes physics—specifically, the path an object traces through spacetime. In Echo, a node's "worldline" is its history of states across ticks. "Recursive Provenance" means you can always ask "where did this value come from?" and trace it back through the graph's history.
-
-Is the name a bit grandiose for what amounts to "typed graph with audit trail"? Maybe. But I've seen worse acronyms in this industry.
-
-
-
-### 3.2 Two-Plane Architecture
-
-Echo separates structure from data via the **Two-Plane Model** (ADR-0001):
-
-| Plane | Contains | Purpose |
-| ------------------ | ------------------------- | ------------------------------------- |
-| **Skeleton** | Nodes + Edges (structure) | Fast traversal, deterministic hashing |
-| **Attachment (α)** | Typed payloads | Domain-specific data |
-
-**Why separate them?**
-
-```text
-┌────────────────────────────────────────────────────────────────────┐
-│ SKELETON PLANE (Structure) │
-│ │
-│ ┌─────┐ edge:link ┌─────┐ │
-│ │ N1 │─────────────────▶│ N2 │ │
-│ └─────┘ └─────┘ │
-│ │ │ │
-│ │ edge:child │ edge:ref │
-│ ▼ ▼ │
-│ ┌─────┐◀─────────────────────┘ │
-│ │ N3 │ │
-│ └─────┘ │
-│ │
-├────────────────────────────────────────────────────────────────────┤
-│ ATTACHMENT PLANE (Payloads) │
-│ │
-│ N1.α["title"] = Atom { type: "string", bytes: "Home" } │
-│ N2.α["url"] = Atom { type: "string", bytes: "/page/b" } │
-│ N3.α["body"] = Atom { type: "html", bytes: "<p>...</p>" } │
-│ │
-└────────────────────────────────────────────────────────────────────┘
-```
-
-**Key insight**: Skeleton rewrites **never decode attachments**. This keeps the hot path fast and deterministic.
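
To make the two-plane split concrete, here is a minimal sketch. All types and field names are illustrative stand-ins, not Echo's actual `GraphStore` API: traversal walks only the skeleton map, while payload bytes live in a separate map the walk never reads.

```rust
use std::collections::BTreeMap;

// Hypothetical two-plane store (illustrative, not Echo's real types):
// skeleton entries are fixed-size and byte-comparable; attachment
// payloads are opaque bytes keyed separately.
pub struct TwoPlane {
    // Skeleton plane: node id -> outgoing (edge type tag, target id) pairs.
    pub skeleton: BTreeMap<[u8; 32], Vec<(u8, [u8; 32])>>,
    // Attachment plane: (node id, key) -> raw payload bytes, never decoded here.
    pub attachments: BTreeMap<([u8; 32], String), Vec<u8>>,
}

impl TwoPlane {
    // Depth-first reachability: touches only the skeleton plane, so a
    // 10 MB payload sitting in `attachments` costs the walk nothing.
    pub fn reachable_from(&self, root: [u8; 32]) -> Vec<[u8; 32]> {
        let mut seen = vec![root];
        let mut stack = vec![root];
        while let Some(n) = stack.pop() {
            for &(_, target) in self.skeleton.get(&n).into_iter().flatten() {
                if !seen.contains(&target) {
                    seen.push(target);
                    stack.push(target);
                }
            }
        }
        seen
    }
}
```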
-
-
-
-**Claude's Take**: This is where Echo gets clever. The Skeleton plane only contains node IDs, edge IDs, and type tags—all fixed-size, all byte-comparable. You can compute the entire state hash without ever deserializing a single JSON blob, HTML string, or texture.
-
-The Attachment plane (they call it "α" because of course they do) holds the actual domain data. It participates in hashing but doesn't affect traversal. This separation means you can have a 10MB texture attached to a node and still iterate the graph at full speed.
-
-I've seen similar ideas in ECS architectures, but usually the separation is "components vs. systems." Echo's split is "structure vs. data," which is subtly different and, I think, more principled.
-
-
-
-### 3.3 Node and Edge Identity
-
-Every node and edge has a **32-byte identifier**:
-
-```rust
-pub struct NodeId([u8; 32]); // Content-addressed or assigned
-pub struct EdgeId([u8; 32]); // Unique edge identifier
-```
-
-These IDs are:
-
-- **Deterministic**: Same content → same ID (when content-addressed)
-- **Sortable**: Lexicographic ordering enables deterministic iteration
-- **Hashable**: Participate in state root computation
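
A small sketch of why sortability matters (the `NodeId` here mirrors the 32-byte wrapper above; the helper function is illustrative): collecting IDs into a `BTreeSet` yields byte-lexicographic iteration that is identical on every run and every machine, with none of the HashMap iteration-order hazards mentioned earlier.

```rust
use std::collections::BTreeSet;

// Hypothetical stand-in for Echo's 32-byte NodeId.
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Debug)]
pub struct NodeId(pub [u8; 32]);

// Deterministic iteration: BTreeSet orders the raw bytes
// lexicographically, so every caller sees the same sequence.
pub fn sorted_ids(raw: Vec<[u8; 32]>) -> Vec<NodeId> {
    let set: BTreeSet<NodeId> = raw.into_iter().map(NodeId).collect();
    set.into_iter().collect()
}
```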
-
-### 3.4 WarpInstances: Graphs Within Graphs
-
-Echo supports **descended attachments**—embedding entire graphs within attachment slots:
-
-
-
-This enables "WARPs all the way down"—recursive composition while maintaining determinism.
-
-
-
-**Claude's Take**: WarpInstances are _wild_. You can have a node whose attachment slot contains... another entire graph. And that graph can have nodes whose attachment slots contain... more graphs. It's turtles, but the turtles are graphs.
-
-Why would you want this? Think of a game with procedurally generated dungeons. Each dungeon could be its own WarpInstance, loaded on demand, with its own tick history and state root. The player character is in the "outer" instance; stepping through a portal descends into the "inner" one.
-
-I don't know if Echo actually uses this feature yet, but the architecture supports it cleanly. That's design for the future without overengineering the present.
-
-
-
----
-
-## 4. The Engine: Heart of Echo
-
-### 4.1 The Engine Struct
-
-The `Engine` is Echo's central orchestrator. Located in `crates/warp-core/src/engine_impl.rs`:
-
-```rust
-pub struct Engine {
- state: WarpState, // Multi-instance graph state
-    rules: HashMap<RuleId, RewriteRule>,  // Registered rewrite rules
- scheduler: DeterministicScheduler, // Deterministic ordering
- bus: MaterializationBus, // Output channels
- history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>,
- tx_counter: u64, // Transaction counter
-    live_txs: BTreeSet<TxId>,             // Active transactions
- // ... more fields
-}
-```
-
-
-
-**Claude's Take**: A few things jump out here:
-
-1. **`rules: HashMap`** — Wait, HashMap? Isn't that non-deterministic? It is! But notice: this is for _looking up_ rules by ID, not for _iterating_. The iteration order is determined by the `scheduler`, which is explicitly deterministic. The HashMap is fine because rule IDs are stable.
-
-2. **`history: Vec<(Snapshot, TickReceipt, WarpTickPatchV1)>`** — The engine keeps its entire history in memory? That seems expensive. I suspect this is configurable, or there's a garbage collection pass I haven't found yet. For long-running simulations, unbounded history would be a problem.
-
-3. **`BTreeSet` for live transactions** — BTreeSet, not HashSet. They're _really_ committed to determinism. Even the set of "which transactions are in-flight" is stored in sorted order.
-
-
-### 4.2 Construction
-
-The engine is built via the `EngineBuilder`:
-
-```rust
-let engine = EngineBuilder::new(store, root_node_id)
- .with_policy_id(1)
- .with_telemetry(telemetry)
- .build();
-```
-
-**What happens during construction:**
-
-
-
-### 4.3 Rewrite Rules
-
-Rules are the atoms of change in Echo. Each rule has three functions:
-
-```rust
-pub struct RewriteRule {
- pub name: String,
- pub matcher: MatchFn, // Does this rule apply?
- pub executor: ExecuteFn, // What changes to make
- pub footprint: FootprintFn, // What resources are touched
- pub policy: ConflictPolicy, // What to do on conflict
-}
-
-// Function signatures (Phase 5 BOAW model):
-type MatchFn = fn(GraphView, &NodeId) -> bool;
-type ExecuteFn = fn(GraphView, &NodeId, &mut TickDelta);
-type FootprintFn = fn(GraphView, &NodeId) -> Footprint;
-```
-
-**Critical constraint**: Executors receive a **read-only** `GraphView` and emit changes to a `TickDelta`. They **never** mutate the graph directly.
-
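To make the shape concrete, here is a hypothetical rule in miniature. `Graph`, `Op`, and `Delta` are simplified stand-ins for `GraphView`, `WarpOp`, and `TickDelta` (the real signatures are the `MatchFn`/`ExecuteFn` types above); the point is the data flow: the executor reads the view and pushes ops, it never mutates state.

```rust
use std::collections::BTreeMap;

// Simplified stand-ins, not Echo's real types.
pub struct Graph { pub attachments: BTreeMap<(u64, String), String> }
pub enum Op { SetAttachment { node: u64, key: String, value: String } }
pub struct Delta { pub ops: Vec<Op> }

// Matcher: a pure read — does this rule apply at `node`?
pub fn rule_matches(g: &Graph, node: u64) -> bool {
    g.attachments.contains_key(&(node, "pending".into()))
}

// Executor: reads the view, emits ops into the delta.
// It never mutates `g`; the engine applies the merged delta later.
pub fn rule_execute(g: &Graph, node: u64, delta: &mut Delta) {
    if let Some(v) = g.attachments.get(&(node, "pending".into())) {
        delta.ops.push(Op::SetAttachment {
            node,
            key: "current".into(),
            value: v.clone(),
        });
    }
}
```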
-
-
-**Claude's Take**: The `FootprintFn` is the secret sauce. Before executing a rule, Echo calls this function to ask: "What nodes, edges, and attachments will you touch?" The footprint is a _conservative estimate_—you must declare everything you _might_ read or write.
-
-This enables Echo's parallel execution model. If two rules have non-overlapping footprints, they can execute in parallel, in any order, and the result is guaranteed identical. If footprints overlap, they're sequenced deterministically.
-
-The burden on the rule author is significant: you must declare your footprint accurately, or you'll get either conflicts (declared overlap when there was none) or silent bugs (undeclared overlap that corrupts state). This is a sharp edge in the API.
-
-
-
-**Runtime enforcement**: Footprint declarations are no longer just documentation or planning artifacts. They are actively enforced at runtime by `FootprintGuard` (see [Section 6.6](#66-runtime-enforcement-footprintguard)) when `footprint_enforce_release` is enabled or in debug builds, and can be disabled via the `unsafe_graph` escape hatch. The guard catches:
-
-- **Undeclared reads**: accessing nodes or attachments not declared in the footprint. Node-based edge traversal via `GraphView::edges_from()` checks `n_read` (reading adjacency from a node), while direct edge-by-ID operations like `has_edge()` and `edge_attachment()` check `e_read`. Attachment reads check `a_read`.
-- **Undeclared writes**: emitting ops that target nodes, edges, or attachments not in `n_write`/`e_write`/`a_write`
-- **Cross-warp emissions**: an op targets a different warp than the rule's execution scope
-- **Unauthorized instance ops**: `ExecItemKind::User` rules emitting `UpsertWarpInstance` or `DeleteWarpInstance`
-- **Attachment write violations**: `OpenPortal` is treated as an attachment write by `FootprintGuard` and requires the target node in `n_write`
-- **Adjacency violations**: edge mutations where the `from` node is missing from `n_write`
-
-This means an inaccurate footprint is no longer a silent bug—it's a hard failure whenever enforcement is active.
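
A toy version of the write-side check (the `n_write` field name follows the `Footprint` declaration; plain `u64` ids and everything else are simplified for illustration): before an emitted op is accepted, its target must appear in the rule's declared write set, otherwise the guard fails hard.

```rust
use std::collections::BTreeSet;

// Minimal guard sketch: node write set only, u64 ids for brevity.
pub struct WriteFootprint {
    pub n_write: BTreeSet<u64>, // nodes this rule declared it may write
}

#[derive(Debug)]
pub enum Violation { UndeclaredWrite(u64) }

pub fn check_write(fp: &WriteFootprint, target: u64) -> Result<(), Violation> {
    if fp.n_write.contains(&target) {
        Ok(())
    } else {
        // A hard failure instead of a silent bug.
        Err(Violation::UndeclaredWrite(target))
    }
}
```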
-
-### 4.4 GraphView: Read-Only Access
-
-The `GraphView` enforces BOAW's immutability contract:
-
-```rust
-pub struct GraphView<'a> {
- store: &'a GraphStore,
- warp_id: WarpId,
-}
-
-impl<'a> GraphView<'a> {
- pub fn node(&self, id: &NodeId) -> Option<&NodeRecord>;
-    pub fn edges_from(&self, id: &NodeId) -> impl Iterator<Item = &EdgeRecord>;
- pub fn node_attachment(&self, id: &NodeId, key: &str) -> Option<&AttachmentValue>;
- // ... read-only methods only
-}
-```
-
-**No `DerefMut`, no mutable `AsRef`, no interior mutability.** This is enforced at the type level.
-
-
-
-**Claude's Take**: I went looking for escape hatches here. `RefCell`? No. `UnsafeCell`? No. `Arc<Mutex<...>>`? No. The `GraphView` is genuinely immutable by construction.
-
-This is Rust at its best: the borrow checker prevents you from shooting yourself in the foot. In C++, you'd need discipline and code review to enforce "executors don't mutate the graph." In Rust, it's just... not possible. The types don't allow it.
-
-
-
----
-
-## 5. The Tick Pipeline: Where Everything Happens
-
-### 5.1 Overview
-
-A "tick" is one complete cycle of the engine. It has five phases:
-
-
-
-
-
-**Claude's Take**: The "Commit" phase has five sub-steps. _Five_. This is where I started to appreciate how much thought went into this system. Let me summarize what each does:
-
-1. **Drain**: Pull all pending rewrites from the scheduler in canonical order
-2. **Reserve**: Check footprints for conflicts, accept or reject each rewrite
-3. **Execute**: Run the accepted rewrites (this is where parallelism happens)
-4. **Merge**: Combine all `TickDelta` outputs into a single canonical operation list
-5. **Finalize**: Apply the merged operations to produce the new state
-
-The reservation phase is particularly clever. It's like a two-phase commit: first you "reserve" your footprint (claim your lock), then you execute. If your footprint conflicts with an already-reserved footprint, you're rejected. No execution happens until all accepted rewrites have been validated.
-
-
-
-### 5.2 Phase 1: Begin Transaction
-
-```rust
-let tx = engine.begin();
-```
-
-**What happens:**
-
-1. Increment `tx_counter` (wrapping to avoid 0)
-2. Add `TxId` to `live_txs` set
-3. Return opaque transaction identifier
-
-```text
-┌─────────────────────────────────────────────────┐
-│ engine.begin() │
-├─────────────────────────────────────────────────┤
-│ tx_counter: 0 → 1 │
-│ live_txs: {} → {TxId(1)} │
-│ returns: TxId(1) │
-└─────────────────────────────────────────────────┘
-```
-
-### 5.3 Phase 2: Apply Rules
-
-```rust
-engine.apply(tx, "rule_name", &scope_node_id);
-```
-
-**What happens:**
-
-
-
-**The Footprint**: A declaration of what resources the rule will read and write:
-
-```rust
-pub struct Footprint {
-    pub n_read: BTreeSet<NodeId>,            // Nodes to read
-    pub n_write: BTreeSet<NodeId>,           // Nodes to write
-    pub e_read: BTreeSet<EdgeId>,            // Edges to read
-    pub e_write: BTreeSet<EdgeId>,           // Edges to write
-    pub a_read: BTreeSet<(NodeId, String)>,  // Attachments to read (node, key)
-    pub a_write: BTreeSet<(NodeId, String)>, // Attachments to write (node, key)
- // ... ports, factor_mask
-}
-```
-
-**Scheduler deduplication**: If the same `(scope_hash, rule_id)` is applied multiple times, **last wins**. This enables idempotent retry semantics.
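-
-Last-wins falls out naturally if pending rewrites are keyed by `(scope_hash, rule_id)` in an ordered map. A sketch, with a `u64` handle standing in for the payload:
-
-```rust
-use std::collections::BTreeMap;
-
-type SchedKey = ([u8; 32], u32); // (scope_hash, rule_id)
-
-// Re-applying the same key overwrites the pending entry, so retries
-// are idempotent: only the last application survives the drain.
-fn enqueue(pending: &mut BTreeMap<SchedKey, u64>, key: SchedKey, handle: u64) {
-    pending.insert(key, handle); // BTreeMap::insert replaces existing values
-}
-```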
-
-### 5.4 Phase 3: Commit (The Heart of Determinism)
-
-```rust
-let (snapshot, receipt, patch) = engine.commit_with_receipt(tx);
-```
-
-This is where Echo's magic happens. Let's break it down:
-
-#### 5.4.1 Drain
-
-The scheduler drains all pending rewrites in **canonical order**:
-
-```rust
-// RadixScheduler uses O(n) LSD radix sort
-// 20 passes: 2 nonce + 2 rule_id + 16 scope_hash (16-bit digits)
-let rewrites = scheduler.drain_for_tx(tx); // Vec<RewriteThin> in canonical order
-```
-
-**Ordering key**: `(scope_hash[0..32], rule_id, nonce)`
-
-This ensures the **same rewrites always execute in the same order**, regardless of when they were applied.
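-
-As a reference point, a plain comparison sort over the same tuple key yields the identical order; the radix sort is just the O(n) route to it:
-
-```rust
-// The canonical ordering key, compared lexicographically field by field.
-#[derive(Clone, Debug, PartialEq)]
-struct RewriteKey {
-    scope_hash: [u8; 32],
-    rule_id: u32,
-    nonce: u32,
-}
-
-// slice::sort_by_key is stable: equal keys keep their relative order.
-fn canonical_sort(rewrites: &mut [RewriteKey]) {
-    rewrites.sort_by_key(|r| (r.scope_hash, r.rule_id, r.nonce));
-}
-```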
-
-
-
-**Claude's Take**: Radix sort! They're using radix sort for the scheduler drain. Not quicksort, not merge sort—radix sort.
-
-Why? Because radix sort is _stable_ and _deterministic_ by construction. Quicksort's behavior depends on pivot selection, which can vary. Merge sort is deterministic, but radix sort is faster for fixed-size keys. Since the ordering key is exactly 40 bytes (32-byte scope hash + 4-byte rule ID + 4-byte nonce), radix sort is perfect.
-
-This is the kind of detail that separates "deterministic by accident" from "deterministic by design."
-
-
-
-#### 5.4.2 Reserve (Independence Check)
-
-For each rewrite in canonical order:
-
-
-
-**Conflict detection**: Uses `GenSet` for O(1) lookups:
-
-- Read-read overlap: **allowed**
-- Write-write overlap: **conflict**
-- Read-write overlap: **conflict**
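-
-One way to get those O(1) lookups with a cheap per-tick reset is a generation-stamped set. A sketch of the idea, assuming dense indices (the real `GenSet` may differ):
-
-```rust
-// Membership means "stamp equals the current generation", so clearing
-// the whole set is one counter bump instead of an O(n) wipe.
-struct GenSet {
-    stamps: Vec<u32>,
-    generation: u32,
-}
-
-impl GenSet {
-    fn new(capacity: usize) -> Self {
-        GenSet { stamps: vec![0; capacity], generation: 1 }
-    }
-    fn insert(&mut self, idx: usize) {
-        self.stamps[idx] = self.generation;
-    }
-    fn contains(&self, idx: usize) -> bool {
-        self.stamps[idx] == self.generation
-    }
-    fn clear(&mut self) {
-        self.generation += 1; // old stamps become stale
-    }
-}
-```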
-
-#### 5.4.3 Execute (Parallel, Lockless)
-
-Accepted rewrites execute against the **read-only snapshot**:
-
-```rust
-for rewrite in accepted {
-    let rule = &rules[rewrite.rule_id];
-    let view = GraphView::new(&state, rewrite.warp_id);
-
-    // Executor reads from view, emits to delta
-    (rule.executor)(view, &rewrite.scope, &mut delta);
-}
-```
-
-**Critical**: `GraphView` is immutable. `TickDelta` accumulates operations:
-
-```rust
-pub struct TickDelta {
-    ops: Vec<(WarpOp, OpOrigin)>,
-}
-
-// Operations emitted during execution:
-delta.emit(WarpOp::UpsertNode { id, record });
-delta.emit(WarpOp::UpsertEdge { from, edge });
-delta.emit(WarpOp::DeleteNode { id });
-delta.emit(WarpOp::SetAttachment { node, key, value });
-```
-
-#### 5.4.4 Merge (Canonical Sort)
-
-All operations are sorted into **canonical replay order**:
-
-```rust
-// Sort by (WarpOpKey, OpOrigin)
-ops.sort_by_key(|(op, origin)| (op.sort_key(), origin.clone()));
-
-// Deduplicate identical ops
-// Error on conflicting ops (footprint model violation)
-```
-
-**Conflict handling**: If two rewrites wrote **different values** to the same key, that's a bug in the footprint model. Echo errors loudly.
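-
-The dedup-or-error rule on the sorted op stream can be sketched like this, with `u64` key/value pairs standing in for real ops:
-
-```rust
-// Identical (key, value) pairs collapse to one; the same key with
-// different values is a footprint violation and becomes an error.
-fn merge_sorted(ops: &[(u64, u64)]) -> Result<Vec<(u64, u64)>, String> {
-    let mut out: Vec<(u64, u64)> = Vec::new();
-    for &(key, value) in ops {
-        match out.last() {
-            Some(&(k, v)) if k == key && v == value => {} // exact duplicate: drop
-            Some(&(k, v)) if k == key => {
-                return Err(format!("conflicting writes to {k}: {v} vs {value}"));
-            }
-            _ => out.push((key, value)),
-        }
-    }
-    Ok(out)
-}
-```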
-
-#### 5.4.5 Finalize
-
-Apply the merged delta to produce the new state:
-
-```rust
-for op in merged_ops {
-    match op {
-        WarpOp::UpsertNode { id, record } => state.insert_node(id, record),
-        WarpOp::UpsertEdge { from, edge } => state.insert_edge(from, edge),
-        WarpOp::DeleteNode { id } => state.delete_node_isolated(id)?, // rejects if edges exist
-        WarpOp::SetAttachment { node, key, value } => state.set_attachment(node, key, value),
-        // ...
-    }
-}
-```
-
-> **Note:** `DeleteNode` requires the node to be _isolated_ (no incident edges).
-> Callers must emit explicit `DeleteEdge` ops before `DeleteNode`. This ensures
-> that WarpOps explicitly describe all mutations—no hidden cascade side effects.
-
-### 5.5 Phase 4: Hash Computation
-
-#### State Root (BLAKE3)
-
-The state root is computed via **deterministic BFS** over reachable nodes:
-
-
-
-**Encoding** (architecture-independent):
-
-- All IDs: raw 32 bytes
-- Counts: u64 little-endian
-- Payloads: 1-byte tag + type_id[32] + u64 LE length + bytes
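-
-Written out as code, the payload encoding might look like this (the function name is illustrative):
-
-```rust
-// 1-byte tag, 32-byte type id, u64 little-endian length, then the raw
-// bytes. Byte-identical on every architecture because widths and
-// endianness are fixed.
-fn encode_payload(tag: u8, type_id: &[u8; 32], bytes: &[u8]) -> Vec<u8> {
-    let mut out = Vec::with_capacity(1 + 32 + 8 + bytes.len());
-    out.push(tag);
-    out.extend_from_slice(type_id);
-    out.extend_from_slice(&(bytes.len() as u64).to_le_bytes());
-    out.extend_from_slice(bytes);
-    out
-}
-```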
-
-#### Commit Hash (v2)
-
-```rust
-commit_hash = BLAKE3(
-    version_tag[4]   || // Protocol version
-    parents[]        || // Parent commit hashes
-    state_root[32]   || // Graph-only hash
-    patch_digest[32] || // Merged ops digest
-    policy_id[4]        // Policy identifier
-)
-```
-
-
-
-**Claude's Take**: The commit hash includes a `policy_id`. This is subtle but important: two engines with different policies could produce the same state but different commit hashes. Why? Because the _process_ matters, not just the result.
-
-Imagine one policy allows rules to run in parallel; another requires sequential execution. They might produce identical graphs, but the commit hashes differ because the policies differ. This prevents accidentally mixing outputs from incompatible engine configurations.
-
-It's defensive design: "Trust, but verify—and make verification easy."
-
-
-
-### 5.6 Phase 5: Record to History
-
-```rust
-history.push((
-    Snapshot { hash: commit_hash, state_root, parents, ... },
-    TickReceipt { applied, rejected, ... },
-    WarpTickPatchV1 { ops, in_slots, out_slots, patch_digest, ... }
-));
-```
-
-The patch is **prescriptive**: it can be replayed without re-matching to reproduce the exact same state.
-
----
-
-## 6. Parallel Execution: BOAW (Bag of Autonomous Workers)
-
-### 6.1 What is BOAW?
-
-BOAW stands for **Bag of Autonomous Workers**. It's Echo's parallel execution architecture that enables:
-
-- **Massive parallelism** without locks
-- **Deterministic convergence** across platforms
-- **Worker-count invariance** (same result with 1 or 32 workers)
-
-### 6.2 The Key Insight
-
-```text
-┌──────────────────────────────────────────────────────────────────┐
-│ THE BOAW INSIGHT                                                 │
-├──────────────────────────────────────────────────────────────────┤
-│                                                                  │
-│ Traditional parallelism:                                         │
-│   "Make execution order deterministic" → Complex, slow           │
-│                                                                  │
-│ BOAW parallelism:                                                │
-│   "Let execution order vary, make MERGE deterministic" → Fast!   │
-│                                                                  │
-│ Workers race freely → Each produces a TickDelta                  │
-│ Merge step sorts all deltas → Canonical output                   │
-│                                                                  │
-└──────────────────────────────────────────────────────────────────┘
-```
-
-
-
-**Claude's Take**: This is the insight that makes Echo work. Most parallel systems try to _control_ the execution order—barriers, locks, atomic sequences. BOAW says: "Forget it. Let chaos reign during execution. We'll sort it out in the merge."
-
-It's like MapReduce: the map phase runs in any order; the reduce phase (merge) produces the canonical result. But unlike MapReduce, Echo operates on a graph with complex dependencies. The footprint model makes this possible: by declaring what you'll touch before executing, you enable the merge to validate that no conflicts occurred.
-
-If this sounds too good to be true, there is a catch: the system is only as deterministic as your footprint declarations. Lie to the footprint system, and you'll get non-determinism.
-
-
-
-### 6.3 Execution Strategies
-
-#### Phase 6A: Stride Partitioning (Legacy)
-
-```text
-Worker 0: items[0], items[4], items[8], ...
-Worker 1: items[1], items[5], items[9], ...
-Worker 2: items[2], items[6], items[10], ...
-Worker 3: items[3], items[7], items[11], ...
-```
-
-**Problem**: Poor cache locality—related items scatter across workers.
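-
-For reference, the stride partition is a one-liner (illustrative helper, not the real API), which is part of its appeal despite the locality cost:
-
-```rust
-// Worker w takes items w, w + W, w + 2W, ... for W workers; this is
-// exactly what scatters related items across workers.
-fn stride_items(n_items: usize, worker: usize, workers: usize) -> Vec<usize> {
-    (worker..n_items).step_by(workers).collect()
-}
-```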
-
-#### Phase 6B: Virtual Shards (Current Default)
-
-```rust
-const NUM_SHARDS: usize = 256; // Protocol constant (frozen)
-
-fn shard_of(node_id: &NodeId) -> usize {
-    let bytes = node_id.as_bytes();
-    let val = u64::from_le_bytes(bytes[0..8].try_into().unwrap());
-    (val & 255) as usize // Fast modulo via bitmask
-}
-```
-
-
-
-**Benefits**:
-
-- Items with same `shard_of(scope)` processed together → better cache hits
-- Workers dynamically claim shards via atomic counter → load balancing
-- Determinism enforced by merge, not execution order
-
-
-
-**Claude's Take**: 256 shards is an interesting choice. It's small enough that the atomic counter for work-stealing doesn't become a bottleneck, but large enough to distribute work across many cores.
-
-The `& 255` bitmask is a micro-optimization I appreciate. It's equivalent to `% 256` but faster because 256 is a power of 2. This is the kind of low-level detail that adds up when you're processing millions of items per second.
-
-One thing I wondered: what if your NodeIds are clustered? Like, if all recent nodes have IDs starting with `0x00...`, they'd all end up in shard 0. I suspect content-addressed IDs (via BLAKE3) distribute uniformly, so this isn't a problem in practice. But for user-assigned IDs, you'd need to be careful.
-
-
-
-### 6.4 The Execution Loop
-
-```rust
-pub fn execute_parallel_sharded(
-    view: GraphView<'_>,
-    items: &[ExecItem],
-    workers: usize,
-) -> Vec<TickDelta> {
-    // Partition items into 256 shards
-    let shards = partition_into_shards(items);
-
-    // Atomic counter for work-stealing
-    let next_shard = AtomicUsize::new(0);
-
-    std::thread::scope(|s| {
-        let handles: Vec<_> = (0..workers).map(|_| {
-            s.spawn(|| {
-                let mut delta = TickDelta::new();
-                loop {
-                    // Claim next shard atomically
-                    let shard_id = next_shard.fetch_add(1, Ordering::Relaxed);
-                    if shard_id >= NUM_SHARDS { break; }
-
-                    // Execute all items in this shard
-                    for item in &shards[shard_id].items {
-                        (item.exec)(view.clone(), &item.scope, &mut delta);
-                    }
-                }
-                delta
-            })
-        }).collect();
-
-        handles.into_iter().map(|h| h.join().unwrap()).collect()
-    })
-}
-```
-
-### 6.5 The Canonical Merge
-
-```rust
-pub fn merge_deltas(deltas: Vec<TickDelta>) -> Result