These are either unused (C++ client), generated artifacts that shouldn't be checked in (go/ts/cpp stubs), or superseded build files (Makefiles).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rework MinIO storage layer, update session broadcaster, improve chunk sink signal flow, and add RetryRenderSession server handler. Includes signal flow integration test.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Add reconnectingStream utility for resilient gRPC server streaming, retryRenderSession procedure, and updated session card components with error state handling and retry controls.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Add settings.local.json and systemd to .gitignore. Update dev mode commands to reflect new pnpm-based workflow.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…imeout race

Add 28 new signal flow integration tests covering CutSession, session state transitions (PROCESSING→FINISHED/ERROR), DeleteSession, SetKeep, SetName, segment operations, RetryRender, invalid inputs, and edge cases like rapid reconnect and multiple session lifecycles.

Extract MinioClient interface from *minio.Client to enable testing the real Minio storage logic without a running MinIO server. Add in-memory FakeMinioClient and 23 contract tests covering Start, SafeChunks, metadata persistence, CloseRecordingSession, session timeout, and more.

Fix two production bugs discovered by the contract tests:

- closeSession/closeSessionAsync: flush in-memory chunks to storage BEFORE checking isSessionClosed(). Previously, small recordings (< 5MB, still in memory buffer) were silently lost because isSessionClosed() saw no chunk objects in storage and returned true.
- SetSessionTimeout: protect sessionTimeout field with dataLock to prevent a data race with the background timeout checker goroutine.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace ad-hoc state transitions with per-session finite state machines (qmuntal/stateless) that enforce valid transitions and serialize concurrent access. Replace fire-and-forget goroutines for rendering with a bounded worker pool (alitto/pond/v2) that supports graceful shutdown and context cancellation.

Key changes:

- Session FSM validates all state transitions (RECORDING→PROCESSING→FINISHED/ERROR, ERROR→PROCESSING for retry)
- Segment state transitions validated via lookup table
- Bounded render work queue (3 workers) replaces unbounded goroutines
- dataLock never held across full transition+callback chain
- Removed renderSemaphore from handler (concurrency now in storage layer)
- Added Stop() to Storage interface for graceful shutdown

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
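The segment-side lookup-table validation mentioned above can be sketched as follows. This is a hypothetical reconstruction, not the repo's actual table: the state names come from the commit message, but the real transition set lives in the storage layer and may differ.

```go
package main

import "fmt"

// SessionState values mirror the states named in the commit message.
type SessionState string

const (
	Recording  SessionState = "RECORDING"
	Processing SessionState = "PROCESSING"
	Finished   SessionState = "FINISHED"
	StateError SessionState = "ERROR"
)

// validTransitions is an illustrative lookup table; the real one may
// contain more states and edges.
var validTransitions = map[SessionState][]SessionState{
	Recording:  {Processing},
	Processing: {Finished, StateError},
	StateError: {Processing}, // ERROR→PROCESSING allows retry
}

// CanTransition reports whether moving from `from` to `to` is allowed.
func CanTransition(from, to SessionState) bool {
	for _, allowed := range validTransitions[from] {
		if allowed == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(CanTransition(Recording, Processing)) // valid edge
	fmt.Println(CanTransition(Finished, Recording))   // terminal state, rejected
}
```

The table form keeps the full transition graph visible in one place, which is what makes "validate instead of assign" cheap to audit.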
The storage layer used a single-callback pattern (onSessionStateChangedCb, onSessionClosedCb) that only supported one listener and lost segment-level metadata. This replaces it with a typed EventBus supporting multiple listeners, proper segment events with SegmentID/PreviousState/NewState, and a centralized LogListener for structured lifecycle logging.

Key changes:

- Add EventBus, EventListener interface, SessionStateChangedEvent, SegmentStateChangedEvent, and LogListener in go/storage/
- Remove OnSessionClosedCb, OnSessionStateChangedCb, cbLock, and notifyStateChange from Minio
- SetSegmentState now emits SegmentStateChangedEvent (was silent)
- All emission is synchronous (removes inconsistent go notifyStateChange)
- SessionSourceHandler implements EventListener interface
- 11 new event bus tests, all existing tests updated and passing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
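A minimal sketch of the multi-listener, synchronous-emission pattern described above. Field and method names follow the commit message, but the real event types in go/storage/ likely carry more data:

```go
package main

import "fmt"

// SessionStateChangedEvent: payload shape is a guess based on the
// commit message.
type SessionStateChangedEvent struct {
	SessionID     string
	PreviousState string
	NewState      string
}

// EventListener is a minimal slice of the listener interface.
type EventListener interface {
	OnSessionStateChanged(ev SessionStateChangedEvent)
}

// EventBus fans each event out to every registered listener. Emission is
// synchronous, matching the "all emission is synchronous" change above,
// so callers observe a consistent ordering of events.
type EventBus struct {
	listeners []EventListener
}

func (b *EventBus) Register(l EventListener) {
	b.listeners = append(b.listeners, l)
}

func (b *EventBus) EmitSessionStateChanged(ev SessionStateChangedEvent) {
	for _, l := range b.listeners {
		l.OnSessionStateChanged(ev) // no goroutine per emission
	}
}

// memoListener remembers what it saw; useful for tests.
type memoListener struct{ seen []SessionStateChangedEvent }

func (m *memoListener) OnSessionStateChanged(ev SessionStateChangedEvent) {
	m.seen = append(m.seen, ev)
}

func main() {
	bus := &EventBus{}
	log, ui := &memoListener{}, &memoListener{}
	bus.Register(log)
	bus.Register(ui)
	bus.EmitSessionStateChanged(SessionStateChangedEvent{"s1", "RECORDING", "PROCESSING"})
	fmt.Println(len(log.seen), len(ui.seen)) // both listeners received the event
}
```

The single-callback version supported exactly one subscriber; the slice of listeners is what lets a LogListener and the gRPC broadcaster coexist.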
On backend restart, RECORDING sessions were immediately transitioned to PROCESSING and re-rendered, even if the recorder was still actively streaming. When the next chunk arrived, SafeChunks called initSession, which overwrote the session back to RECORDING, causing the "1x processing → rec + processing → 2x processing" cycle.

Three fixes:

- closeSessions: set up chunk tracking for RECORDING sessions instead of closing them, letting the timeout checker handle truly stale sessions
- SafeChunks: resume existing RECORDING sessions instead of calling initSession, which overwrites metadata and emits bogus state events
- closeSessionAsync: skip render if session was resumed back to RECORDING

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two bugs causing stale state and repeated re-renders:

1. onSessionTransition persisted state to MinIO but never wrote the updated session back to the in-memory map. closeIntermediateSessions would find stale PROCESSING states and re-submit renders endlessly. Same issue in initSession — new sessions weren't added to the map.

2. SafeChunks only resumed sessions in RECORDING state. After a restart, a session could be PROCESSING (from a previous crash) while the recorder is still actively streaming. Chunks would trigger initSession, which overwrote the session back to RECORDING, losing render progress. Now resumes in any state — the recorder sending chunks is the source of truth.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add FLAC golden file fixture (generated from the existing 30s frequency sweep) and byte-for-byte deterministic comparison test that runs on every platform without external tools.

Enhance render unit tests with property assertions beyond magic bytes:

- OGG: file size range validation for 30s audio
- FLAC (sox): verify output smaller than raw input
- PNG overview: decode and verify image dimensions match request
- Waveform dat: validate non-trivial output size
- Clip: verify clipped output proportionally smaller than full encoding

Add full render pipeline contract test (TestContractFullRenderPipeline) that exercises SafeChunks → CloseRecordingSession → render → FINISHED with all 5 output files verified. Skips when sox/audiowaveform unavailable.

Add Dockerfile.ci-test: multi-stage Docker image with Go toolchain + sox + audiowaveform for running the complete test suite in CI. Add go-test-render job to CI workflow that builds and runs the Docker test image with GitHub Actions Docker layer caching.

Add waitForSessionState polling helper to replace time.Sleep in contract tests for deterministic timing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add duration field to SessionInfo proto message. When a session is cut (CloseRecordingSession), estimate the recording duration from the number of flushed chunks and remaining buffer size before transitioning to PROCESSING. This gives the UI an immediate duration estimate while rendering is in progress. The exact duration is updated after rendering completes.

Map session.Duration through newSessionInfo to the proto duration field and display it in the SessionCard header for processing sessions, using the same time format as the recording elapsed time.

Also fix RetryRenderSession to use its own request type instead of reusing DeleteSessionRequest (exposed by proto regeneration).

Note: TS proto stubs need regeneration (make ts) before building web.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a progressReader that wraps the raw audio io.Reader during rendering and tracks bytes read vs total size. Emits RenderProgressEvent through the EventBus every 500ms with a 0.0-1.0 progress value.

Backend changes:

- Add RenderProgress field to Session struct (transient, not persisted)
- Add RenderProgressEvent and OnRenderProgress to EventListener/EventBus
- progressReader in renderFromRawData tracks io.Copy progress
- SessionSourceHandler.OnRenderProgress broadcasts session updates with progress so existing StreamSessions subscribers get live updates
- Map renderProgress to new proto SessionInfo.renderProgress field

Frontend changes:

- Add renderProgress field to Session type
- Map from proto in streamSessions normalizer
- Show thin progress bar at bottom of SessionCardProcessing component with smooth CSS transition

The progress represents how far through the raw audio data the parallel encoders (FLAC, OGG, waveform, overview PNG) have read. Since all 4 encoders consume the same stream via multicast pipes, progress tracks the single io.Copy that feeds them all.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The default HTTP transport timed out during large ComposeObject operations (server-side multipart copy of many chunks), causing sessions to get stuck in PROCESSING. Bump ResponseHeaderTimeout to 10 minutes.

Strip internal URLs, uploadIds, and bucket paths from error messages before storing them in session/segment metadata so they don't leak infrastructure details into the UI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…processing UI orange

Only transition RECORDING sessions on startup — PROCESSING sessions already have a render job queued, so re-closing them caused duplicate renders.

Update the web UI to use orange (primary color) for processing state indicators and show estimated end time from the duration field on processing session cards.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Sessions with many chunks (e.g. ~660 for a 5-hour recording) cause MinIO's ComposeObject to time out with "Unexpected EOF". Compose in batches of 100, creating temporary intermediate objects, then merge those into the final data.raw. Recurses for extremely large sessions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
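The splitting step of the batching strategy can be sketched like this. Only the grouping is shown; in the real fix each group is composed into a temporary intermediate object and the intermediates are merged (recursively if needed) into data.raw. The helper name and batch constant are illustrative:

```go
package main

import "fmt"

// composeBatchSize mirrors the batch limit described in the commit
// message.
const composeBatchSize = 100

// batches splits a chunk list into groups of at most composeBatchSize.
func batches(chunks []string) [][]string {
	var out [][]string
	for len(chunks) > composeBatchSize {
		out = append(out, chunks[:composeBatchSize])
		chunks = chunks[composeBatchSize:]
	}
	if len(chunks) > 0 {
		out = append(out, chunks)
	}
	return out
}

func main() {
	// ~660 chunks, the 5-hour recording from the commit message.
	chunks := make([]string, 660)
	for i := range chunks {
		chunks[i] = fmt.Sprintf("chunk-%03d", i)
	}
	groups := batches(chunks)
	fmt.Println(len(groups)) // 7 groups: six of 100 and one of 60
}
```

With 660 chunks the first pass yields 7 intermediates, well under the batch limit, so one merge level suffices; the recursion only kicks in when the intermediates themselves exceed 100.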
Stop clearing all sessions when the stream reconnects, which caused a visible flicker and reset processing progress to zero. Instead, track which sessions the server sends in the reconnect snapshot and prune only those that no longer exist on the backend.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Large recording sessions produced hundreds of 5MB chunk objects that were glued together via recursive ComposeObject calls, causing timeouts on MinIO. Now chunks are uploaded as parts of a single multipart upload for data.raw:

- initSession starts a NewMultipartUpload
- SafeChunks uses PutObjectPart instead of PutObject
- flushChunks uploads the final part (no 5MB padding needed) and calls CompleteMultipartUpload
- renderSession simply reads the already-assembled data.raw

This eliminates the compose step entirely, removes temporary intermediate objects, and fixes the timeout class of bugs. Also handles crash recovery via ListMultipartUploads/ListObjectParts and concurrent close races where completeOrphanedUpload and flushChunks could race on the same upload.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
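The multipart flow above can be sketched against a hypothetical slice of the MinioClient interface introduced earlier in this PR. The real minio-go calls take contexts, readers, and option structs, so treat this purely as a shape sketch with an in-memory fake standing in for S3, in the spirit of FakeMinioClient:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// MultipartClient is a hypothetical interface; signatures are simplified.
type MultipartClient interface {
	NewMultipartUpload(object string) (uploadID string)
	PutObjectPart(object, uploadID string, partNumber int, data []byte)
	CompleteMultipartUpload(object, uploadID string)
}

// fakeMultipart assembles parts in memory.
type fakeMultipart struct {
	parts   map[string]map[int][]byte // uploadID → partNumber → data
	objects map[string][]byte
}

func newFakeMultipart() *fakeMultipart {
	return &fakeMultipart{
		parts:   map[string]map[int][]byte{},
		objects: map[string][]byte{},
	}
}

func (f *fakeMultipart) NewMultipartUpload(object string) string {
	id := "upload-" + object
	f.parts[id] = map[int][]byte{}
	return id
}

func (f *fakeMultipart) PutObjectPart(object, uploadID string, partNumber int, data []byte) {
	f.parts[uploadID][partNumber] = append([]byte(nil), data...)
}

func (f *fakeMultipart) CompleteMultipartUpload(object, uploadID string) {
	nums := make([]int, 0, len(f.parts[uploadID]))
	for n := range f.parts[uploadID] {
		nums = append(nums, n)
	}
	sort.Ints(nums) // parts assemble in part-number order, not upload order
	var buf bytes.Buffer
	for _, n := range nums {
		buf.Write(f.parts[uploadID][n])
	}
	f.objects[object] = buf.Bytes()
	delete(f.parts, uploadID)
}

func main() {
	c := newFakeMultipart()
	id := c.NewMultipartUpload("data.raw")
	c.PutObjectPart("data.raw", id, 1, []byte("aaa"))
	c.PutObjectPart("data.raw", id, 2, []byte("bbb"))
	c.CompleteMultipartUpload("data.raw", id)
	fmt.Println(string(c.objects["data.raw"]))
}
```

The key property is that assembly happens server-side at complete time, keyed by part number, so no client-side compose step is needed regardless of how many parts were streamed.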
Guard against a race where the async gRPC iterator delivers a buffered message after the user switches to a different recorder tab, causing the old recorder's processing session to appear in the new tab's list.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Color the completed portion of the animated waveform in orange to match the progress bar, and add a shimmer animation to the progress bar fill.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove double triggerRenderFailure fire in closeSessionAsync (renderFromRawData already handles it)
- Transition session to ERROR when flushChunks or completeOrphanedUpload fails, preventing sessions stuck in PROCESSING
- Recover segments stuck in RENDERING state on startup by moving them to ERROR
- Map unknown proto session state to 'error' instead of 'recording' in frontend

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…alid segment transitions

- Prevent deleted sessions from being resurrected by stale render callbacks (track deleted session IDs, check in onSessionTransition)
- Validate segment transitions in setSegmentError instead of bypassing FSM
- Route closeIntermediateSessions through the FSM (triggerCloseRecording) instead of writing state directly, ensuring callbacks fire consistently
- Use captured previous state in segment RENDERING event instead of hardcoded value

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…sessions

FLAC, OGG, and waveform render functions now accept an io.Writer for streaming output directly (e.g. via io.Pipe to S3), avoiding multi-GB in-memory buffering that caused OOM crashes on long sessions. Original buffer-returning signatures preserved as thin wrappers.

- Replace hand-rolled FLAC encoder with sox-based FlacStream, dropping the mewkiz/flac dependency
- Add SamplePositionToByteOffset and EncodeStream helpers for segment rendering
- Simplify cmd/render to use the new FlacStream API

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Segment render now fetches only the byte range needed (via S3 Range header) instead of downloading the entire data.raw, then streams encoding output directly to S3 via io.Pipe — eliminating full-file buffering for both input and output.

- Add SamplePositionToByteOffset to compute byte offsets for range requests
- Stream OGG/FLAC encoding output to PutObject via io.Pipe (-1 size)
- Add close tracking to FakeMinioClient for GetObject reader leak tests
- Add TestFix_GetObjectReadersAreClosed contract test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
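The sample-to-byte mapping behind the range requests is plain arithmetic. This sketch assumes interleaved 16-bit stereo PCM; the real SamplePositionToByteOffset may take the format as parameters rather than constants:

```go
package main

import "fmt"

// Assumed raw format for the sketch: interleaved 16-bit stereo PCM.
const (
	channels       = 2
	bytesPerSample = 2
)

// SamplePositionToByteOffset maps a sample index in data.raw to the byte
// offset where that sample's frame starts.
func SamplePositionToByteOffset(sample int64) int64 {
	return sample * channels * bytesPerSample
}

// rangeHeader builds the HTTP Range header value for a segment spanning
// startSample (inclusive) to endSample (exclusive). Range end offsets
// are inclusive, hence the -1.
func rangeHeader(startSample, endSample int64) string {
	start := SamplePositionToByteOffset(startSample)
	end := SamplePositionToByteOffset(endSample) - 1
	return fmt.Sprintf("bytes=%d-%d", start, end)
}

func main() {
	// One second of 48kHz audio starting one second in.
	fmt.Println(rangeHeader(48000, 96000))
}
```

With 4 bytes per frame, a one-second 48kHz segment is a 192000-byte range, so the fetch cost scales with segment length instead of total recording length.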
- Guard RecorderBroadcaster.Stop() with sync.Once to prevent panic on double close of stopTimeout channel
- Replace time.Tick with time.NewTicker in chunk_sink_client to avoid goroutine/channel leak
- Pre-allocate samples slice in chunk-sink-handler instead of append-growing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Close AudioContext and destroy Peaks instance on WaveformView unmount via 'destroy' command event, preventing browser resource exhaustion
- Replace deep watch on segments array with a lightweight derived string key in SessionCardFinished, avoiding unnecessary re-renders
- Watch recorders.size instead of deep-watching the entire Map in RecordersIndexView

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Tests now compare two runs for determinism instead of matching a golden file from the old pure-Go encoder. Removes the 7MB embedded test fixture.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ender

Replace the multipart-upload-then-download-then-encode pipeline with streaming encoding that runs for the entire recording duration. Chunks are buffered for 1s then flushed through concurrent encoding pipelines (raw, FLAC, WAV, waveform DAT) that stream directly to S3 via PutObject. On session close, only the remaining <1s buffer needs flushing — encoding completes almost instantly instead of re-downloading and re-encoding the entire session.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With streaming encoding during recording, the PROCESSING state is now ~instant so progress tracking is unnecessary. Remove progressReader, RenderProgressEvent, OnRenderProgress listener, and the animated waveform/progress bar UI. Replace with a simple "Processing…" label.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use clip-path instead of height for the meter fill so the gradient always maps green/yellow/red to consistent level ranges. Reorient the meter horizontally and position it in the top-right corner of the recording card.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add StreamWaveformPeaks RPC to broadcast live peak data during recording. Includes PeakAccumulator for downsampling audio to min/max pairs and PeakBroadcaster for fan-out to multiple subscribers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
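The min/max downsampling can be sketched as a bucket fold. Field and method names here are guesses at the real PeakAccumulator API, and the bucket size is illustrative:

```go
package main

import "fmt"

// PeakAccumulator folds every bucketSize PCM samples into one (min, max)
// pair, which is what the live waveform needs instead of raw audio.
type PeakAccumulator struct {
	bucketSize int
	count      int
	min, max   int16
	peaks      []int16 // interleaved min,max pairs, ready to broadcast
}

// Add feeds samples in; a pair is emitted each time a bucket fills.
func (a *PeakAccumulator) Add(samples ...int16) {
	for _, s := range samples {
		if a.count == 0 {
			a.min, a.max = s, s
		} else {
			if s < a.min {
				a.min = s
			}
			if s > a.max {
				a.max = s
			}
		}
		a.count++
		if a.count == a.bucketSize {
			a.peaks = append(a.peaks, a.min, a.max)
			a.count = 0
		}
	}
}

func main() {
	acc := &PeakAccumulator{bucketSize: 2}
	acc.Add(3, -5, 7, 2)
	fmt.Println(acc.peaks) // one (min, max) pair per bucket of two samples
}
```

Tracking min and max separately (rather than just amplitude) preserves asymmetric waveforms, which is why waveform formats store pairs.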
Wire StreamWaveformPeaks to the web frontend. Add peak streaming procedure, connect it to the recording session card, and update layout components with minor improvements.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Use solid backgrounds with explicit white text for dark theme toasts. Move dark overrides to unscoped style block so Vue scoping doesn't break the selectors. Add gaps between stacked toasts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace embedded sine wave test data with real ALSA microphone capture via arecord. Add GetCommands stream so the server recognizes the recorder as connected and CutSession works.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The streaming encode path produces data.raw, data.flac, and waveform.dat — not data.ogg or overview.png. Those are only generated in the fallback renderFromRawData path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add full test coverage for PeakBroadcaster: subscribe/broadcast, multiple subscribers, unsubscribe safety, buffer overflow drops, and concurrent access. Add accumulator edge cases for empty input, single sample, and negative peak level.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Vue P1 fixes:

- Remove onBeforeUnmount from singleton Pinia stores (useRecordersStore, useSessionsStore) — was killing gRPC streams when first consumer unmounted
- Fix watcher leak in useConfirmation — stop() handle now captured and called
- Return cleanup function from integrateSegments, call it on unmount
- Clean up command handlers before re-registering in installPlayerControls
- Move useTimeAgo to setup level in SessionMenu (was leaking timers inside computed)
- Add v-if guard on session.downloadFiles to prevent null deref

Vue P2 fixes:

- Replace unconditional 60fps rAF loop with data-driven redraws in SessionCardRecording — only schedules a frame when peak data arrives
- Replace fragile queueMicrotask pruning with debounced setTimeout(100ms) in useSessionsStore — resilient to HTTP/2 framing across microtasks

Go P2 fixes:

- Include streaming.totalBytes in CloseRecordingSession duration estimate (was only using unflushed buffer residual, showing <1s duration)
- Use zoom parameter in CreateWaveformStream instead of hardcoded 256
- Fix error variable shadowing in makeSureBucketExists
- Extract urlUnsafeChars regex to package-level var (was recompiling per call)

MinIO access hardening:

- Remove MinIO port exposure (9000/9090) from production docker-compose
- Restrict nginx /minio proxy to GET/HEAD/OPTIONS only
- Remove unused publicAccessFormula (world-writable bucket policy dead code)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix deletedSessions map growing unbounded by adding timestamped tombstones with amortized sweep cleanup on subsequent DeleteSession calls
- Fix stale SM references from in-flight renders overwriting resumed session state by validating source state matches in-memory state in onSessionTransition
- Fix initSession emitting events while holding dataLock (deadlock hazard) by deferring event emission via initSessionResult until after lock release
- Fix double render submission on session switch by deduplicating when the old session appears in both closeIntermediateSessions and needsSessionSwitch paths

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…evention

- Stop() drains work queue with stopAndWait() instead of discarding jobs, preventing sessions stuck in PROCESSING on shutdown
- Streaming uploads use shutdown-aware context instead of context.Background(), so orphaned upload goroutines are cancelled on server stop
- Work queue context cancelled only after pool drains, so in-flight S3 uploads complete before context invalidation
- OnRecorderDisconnected uses 30s timeout instead of unbounded Background()
- Peak accumulator cleanup triggers on all non-RECORDING states, fixing memory leak for PROCESSING→ERROR/FINISHED transitions
- gRPC connection callbacks fire synchronously after mutex release, eliminating race between connection state and callback completion
- Broadcaster.Close() added to unblock subscriber goroutines on shutdown
- Periodic tombstone sweep in timeout checker prevents deletedSessions map growth between deletes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove empty go/README.md and boilerplate web/libs/session-waveform/README.md
- Move protocol build deps from go/cmd/chunk_sink/README.md to protocols/README.md
- Replace protocols/README.md (had misplaced web instructions) with actual protocol docs: proto files, make targets, system dependencies
- Trim web/README.md to web-specific conventions and dev commands only
- Remove duplication from CLAUDE.md (ports, commands, structure, env vars, and architecture were all duplicated from README.md) — keep only AI-specific conventions
- Update docs/state-lifecycle.md with backend internals: shutdown sequence, concurrency/locking rules, tombstone cleanup, broadcasting, segment lifecycle, resume flow, session close triggers, peak accumulator

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ilure tests

- Add maxChunkSamples guard to prevent OOM from oversized gRPC payloads
- Wrap all gRPC stream console.log calls with import.meta.env.DEV guards
- Pin MinIO Docker image to RELEASE.2025-09-07T16-13-09Z
- Add S3 failure injection to FakeMinioClient with 4 error-path tests
- Add CreateAudioFileStream streaming tests (FLAC, OGG, concurrent, large input)
- Fix pre-existing vet warning (unused mutex in signal_flow_test.go)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace `any` with proper types in test mocks, remove unused variable, and enable CGO in Alpine Docker image so `-race` flag works.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>