
refactor(opencode): replace reconcile() with path-syntax setStore and delta coalescing for streaming updates#15309

Open
coleleavitt wants to merge 4 commits into anomalyco:dev from coleleavitt:refactor/sync-store-path-syntax

Conversation

@coleleavitt
Contributor

@coleleavitt coleleavitt commented Feb 27, 2026

Issue for this PR

Closes #15311

Related upstream issues

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Reduces reactive store overhead on the hot streaming path in sync.tsx by ~10x through two changes:

1. Path-syntax setStore() instead of reconcile() on the hot path

reconcile() does an O(n) tree walk to diff old vs new state. For single-value updates where we already know the exact key and index, path-syntax setStore("part", messageID, index, "content", updater) is O(depth) ≈ O(1). reconcile() is kept for the 6 dict-style stores (provider, thread, part, message, todo, session_diff) where keys can be added or removed and the diff is genuinely needed.
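The cost difference can be illustrated with a framework-agnostic sketch (this is not the PR's actual code or solid-js internals; `deepDiffCount` and `pathWrite` are illustrative names): a reconcile-style diff must visit every node in the tree, while a path write navigates straight to the known leaf.

```typescript
// Hypothetical shapes standing in for the sync store's message parts.
type Part = { content: string };
type Store = Record<string, Part[]>;

// O(n), reconcile-style: visit every message and part to find what changed.
function deepDiffCount(prev: Store, next: Store): number {
  let visited = 0;
  for (const id of Object.keys(next)) {
    for (let i = 0; i < next[id].length; i++) {
      visited++;
      // a real reconcile would compare prev[id]?.[i] to next[id][i] here
    }
  }
  return visited;
}

// O(depth), path-style: the key and index are already known, so reaching
// the leaf takes two property hops regardless of store size.
function pathWrite(store: Store, id: string, index: number, content: string): number {
  store[id][index].content = content;
  return 2; // hops taken: store[id], then [index]
}
```

With 3 messages of 100 parts each, the diff visits all 300 parts while the path write does constant work, which is the asymmetry the PR exploits on the hot path.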

2. Delta coalescing via queueMicrotask

Previously, each message.part.delta event called setStore() individually. With 10 deltas in a batch, that's 10 setStore() calls → 50 proxy traps → 10 notification passes. Now, deltas accumulate in a plain Record (zero reactive overhead per token), and a single queueMicrotask(() => batch(() => flushDeltas())) applies them all in one pass:

Before: 10 deltas/batch → 10 setStore() calls → 50 proxy traps → 10 notification passes
After:  10 deltas/batch → 1 setStore() call  → 5 proxy traps  → 1 notification pass

Stale buffered deltas are cleaned up on message.part.updated and message.part.removed events.
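The accumulate-then-flush pattern can be sketched as follows (a simplified stand-in, not the PR's code: `pendingDeltas`, `onDelta`, `flushDeltas`, and `applyToStore` are illustrative names, and `applyToStore` stands in for the single batched `setStore()` call):

```typescript
// Plain object buffer: appending here touches no reactive proxies.
const pendingDeltas: Record<string, string> = {};
let flushScheduled = false;
let storeWrites = 0; // counts stand-in "setStore()" invocations

const store: Record<string, string> = {};

// Stand-in for the one batched setStore() pass per flush.
function applyToStore(key: string, text: string): void {
  store[key] = (store[key] ?? "") + text;
  storeWrites++;
}

function flushDeltas(): void {
  flushScheduled = false;
  // One pass applies every buffered delta for every part.
  for (const key of Object.keys(pendingDeltas)) {
    applyToStore(key, pendingDeltas[key]);
    delete pendingDeltas[key];
  }
}

// Called once per message.part.delta event: buffer cheaply, flush once.
function onDelta(key: string, text: string): void {
  pendingDeltas[key] = (pendingDeltas[key] ?? "") + text;
  if (!flushScheduled) {
    flushScheduled = true;
    queueMicrotask(flushDeltas); // runs after the current SDK batch completes
  }
}
```

Ten `onDelta` calls for the same part concatenate in the plain buffer and produce exactly one store write when the microtask fires, which is the 10-to-1 collapse shown in the before/after comparison above.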

This PR addresses the application-layer trigger for the GC contention tracked in the Bun/WebKit issues above. Even after the upstream bmalloc fix lands, this coalescing is the correct architecture — streaming deltas should never trigger per-token reactive overhead.

How did you verify your code works?

  • turbo typecheck passes across all 18 packages
  • turbo build succeeds
  • LSP diagnostics clean on the changed file
  • Manual testing: streamed long responses, verified text renders correctly and TUI stays responsive
  • Verified reconcile() is preserved for dict stores where it's needed (keys added/removed)

Screenshots / recordings

N/A — not a UI change.

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

…g updates

Replace all reconcile() calls with direct assignment and path-syntax
setStore() to eliminate O(n) deep-diff overhead during streaming.

- Hot path: reconcile(info) → direct assignment for message/part/session updates
- Hot path: produce() delta flush → setStore path-syntax (O(depth) vs O(n))
- Bootstrap: remove reconcile() from all one-shot API response stores
- Remove unused reconcile import from solid-js/store

Proven from Solid.js source: reconcile() recursively traverses the
entire object tree (applyState in modifiers.ts), while path-syntax
navigates directly to the leaf node via updatePath (store.ts).
…type safety

- Restore reconcile() for 6 dictionary/object stores (provider_default,
  config, mcp, mcp_resource, session_status, provider_auth) to properly
  remove stale keys on re-bootstrap — setStore shallow-merges objects
- Replace 'as any' with 'as keyof Part' and add runtime type guard for
  delta handler to satisfy type safety requirements
- Keep path-syntax setStore for streaming hot path and array stores
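Why a shallow merge cannot replace reconcile() for the dict stores can be sketched in isolation (illustrative names, not the PR's code): merging keeps keys the server has since deleted, while a reconcile-style replace ends up with exactly the keys of the new snapshot.

```typescript
type Dict = Record<string, { name: string }>;

// setStore(store, next)-style shallow merge: stale keys survive.
function shallowMerge(store: Dict, next: Dict): Dict {
  return { ...store, ...next };
}

// Reconcile-style replace: the result has exactly the keys of `next`,
// so keys removed upstream are removed locally too.
function reconcileReplace(store: Dict, next: Dict): Dict {
  const out: Dict = {};
  for (const k of Object.keys(next)) out[k] = next[k];
  return out;
}
```

If a re-bootstrap returns a snapshot that no longer contains key `b`, the merge keeps the stale `b` entry while the replace drops it, which is why the 6 dict stores keep reconcile().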
…per-token setStore calls

Accumulate message.part.delta events in a plain Record buffer (zero
reactive overhead), then flush to the store via a single batched
setStore() call per queueMicrotask. This collapses N deltas per part
per SDK flush into 1 setStore() call, reducing proxy trap invocations
from 5N to 5 per flush cycle.

- Add pending delta accumulator (plain Record, no proxies)
- Schedule flushDeltas() via queueMicrotask after sdk.tsx batch completes
- Clear stale pending deltas on message.part.updated/removed
- No store shape changes, no hook changes, no new timers
@github-actions github-actions bot added contributor needs:compliance This means the issue will auto-close after 2 hours. labels Feb 27, 2026
@github-actions
Contributor

The following comment was made by an LLM and may be inaccurate:

Based on my search results, I found one potentially related PR:

Related PR Found

PR #13026: fix(opencode): coalesce stream updates and add reconnect backoff

Why it's related: This PR also addresses stream update coalescing, which is a core optimization technique in the current PR. Both PRs aim to improve performance by batching streaming updates rather than processing them individually.

However, this appears to be a previous fix that may have been superseded or is being refined by the current PR #15309, which takes a more comprehensive approach by also refactoring the reconcile/setStore pattern and adding delta coalescing via queueMicrotask.

@github-actions github-actions bot removed the needs:compliance This means the issue will auto-close after 2 hours. label Feb 27, 2026
@github-actions
Contributor

Thanks for updating your PR! It now meets our contributing guidelines. 👍



Development

Successfully merging this pull request may close these issues.

Excessive GC pressure from per-token reconcile() and setStore() calls during streaming

1 participant