
feat(elevenlabs): add eleven_v3 model support with HTTP streaming fallback #4936

Open

yusuf-eren wants to merge 7 commits into livekit:main from yusuf-eren:feat/elevenlabs-eleven-v3-support

Conversation

@yusuf-eren (Contributor) commented Feb 24, 2026

Adds support for the ElevenLabs eleven_v3 TTS model by implementing an HTTP streaming fallback. The plugin automatically detects when eleven_v3 is selected and routes synthesis to the HTTP /stream endpoint instead of the WebSocket endpoint.

Changes

  • Add HTTPSynthesizeStream class for HTTP-based streaming
  • Automatically detect and route eleven_v3 to HTTP streaming
  • Add comprehensive test coverage (8 unit tests)
  • Include example demonstrating usage
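The model-based routing described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the constructor signature, the `WebSocketSynthesizeStream` name, and the set of HTTP-only models are assumptions (only `HTTPSynthesizeStream` is named in the change list).

```python
# Sketch of routing eleven_v3 to HTTP streaming. Names and signatures
# are illustrative assumptions, not the plugin's real API.

# Models that ElevenLabs exposes only over the HTTP /stream endpoint.
HTTP_ONLY_MODELS = {"eleven_v3"}


class WebSocketSynthesizeStream:
    """Stand-in for the existing WebSocket-based stream."""

    def __init__(self, tts: "TTS") -> None:
        self.tts = tts


class HTTPSynthesizeStream:
    """Stand-in for the new HTTP-based stream added by this PR."""

    def __init__(self, tts: "TTS") -> None:
        self.tts = tts


class TTS:
    def __init__(self, model: str = "eleven_flash_v2") -> None:
        self.model = model

    def stream(self):
        # eleven_v3 has no WebSocket endpoint, so fall back to HTTP streaming;
        # every other model keeps the existing WebSocket path.
        if self.model in HTTP_ONLY_MODELS:
            return HTTPSynthesizeStream(self)
        return WebSocketSynthesizeStream(self)
```

Keeping the branch inside `stream()` means callers never have to know which transport is in use, which matches the "automatically uses HTTP streaming" behavior shown in the usage example.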

Implementation

```python
tts = elevenlabs.TTS(model="eleven_v3")
stream = tts.stream()  # automatically uses HTTP streaming
```

Other models continue using WebSocket unchanged.

Limitations

HTTP streaming has higher latency (~1.5s TTFB) compared to WebSocket models (~300ms) due to text buffering requirements. This is an ElevenLabs API limitation since eleven_v3 doesn't support WebSocket connections.
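The buffering cost can be made concrete with a small sketch. Everything here is illustrative: the `push_text`/`flush` method names and the injected `synthesize_fn` (standing in for the HTTP POST to the /stream endpoint) are assumptions, not the plugin's API.

```python
# Why HTTP streaming has higher TTFB: text must be buffered and sent in one
# request, whereas the WebSocket path sends tokens as they arrive.
import io


class BufferedHTTPStream:
    def __init__(self, synthesize_fn):
        # synthesize_fn stands in for the HTTP request to the /stream endpoint.
        self._synthesize = synthesize_fn
        self._buffer = io.StringIO()

    def push_text(self, token: str) -> None:
        # Tokens are accumulated instead of being sent immediately.
        self._buffer.write(token)

    def flush(self):
        # Only now does a request go out -- waiting for the full text
        # is where the extra time-to-first-byte comes from.
        text = self._buffer.getvalue()
        self._buffer = io.StringIO()
        return self._synthesize(text)


stream = BufferedHTTPStream(lambda text: f"<audio for {text!r}>")
for tok in ["Hello", ", ", "world"]:
    stream.push_text(tok)
audio = stream.flush()  # "<audio for 'Hello, world'>"
```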

Fixes 403 errors when using eleven_v3.

