feat: add Pulse STT support for smallest.ai pulse (streaming + pre-recorded) #4858

Open
mahimairaja wants to merge 20 commits into livekit:main from mahimairaja:feat/smallest-ai-stt

Conversation

@mahimairaja
Contributor

What does this PR do?

Adds Speech-to-Text (STT) support to the livekit-plugins-smallestai plugin using Smallest AI's Pulse STT API. The existing plugin only supported TTS; this PR brings it to parity with plugins like Deepgram, ElevenLabs, and Soniox, which offer both TTS and STT.

Closes #4856

Summary of Changes

New: STT class (stt.py)

  • Pre-recorded transcription via HTTP POST (/api/v1/pulse/get_text)
  • Real-time streaming via WebSocket (wss://waves-api.smallest.ai/api/v1/pulse/get_text)
  • ~64ms TTFB streaming, word-level timestamps, speaker diarization
  • 32+ languages with auto-detection (language="multi")
  • Capabilities: streaming=True, interim_results=True

New: SpeechStream class (stt.py)

  • WebSocket-based streaming with concurrent send/recv/keepalive tasks
  • Audio chunking via AudioByteStream (~4096 byte chunks per Smallest AI docs)
  • Full speech event lifecycle: START_OF_SPEECH → INTERIM_TRANSCRIPT / FINAL_TRANSCRIPT → END_OF_SPEECH
  • Graceful shutdown with {"type": "end"} signaling
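The chunking and shutdown signaling above can be sketched in isolation. This is a toy illustration of what the bullets describe (inside the plugin, AudioByteStream does the chunking):

```python
import json

CHUNK_SIZE = 4096  # ~4096-byte chunks, per the Smallest AI docs cited above


def chunk_audio(pcm: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split raw PCM into fixed-size chunks; the final chunk may be shorter."""
    return [pcm[i : i + chunk_size] for i in range(0, len(pcm), chunk_size)]


# Graceful-shutdown signal sent once the input channel is exhausted.
END_MESSAGE = json.dumps({"type": "end"})

print([len(c) for c in chunk_audio(b"\x00" * 10000)])  # [4096, 4096, 1808]
```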

Usage

from livekit.plugins import smallestai

# Pre-recorded
stt = smallestai.STT(language="en")

# Streaming (used in AgentSession)
session = AgentSession(
    stt=smallestai.STT(language="en"),
    llm=...,
    tts=smallestai.TTS(),
)

Configuration via SMALLEST_API_KEY environment variable (same key used for TTS).

Testing

  • Verified pre-recorded transcription with WAV audio files
  • Verified real-time streaming with live microphone input via LiveKit Agents Playground
  • Tested interim + final transcript emission and speech event lifecycle
  • Tested with language="en" and language="multi" (auto-detection)
  • Ran ruff format and check
❯ uv run ruff check .
All checks passed!

❯ uv run ruff format .
629 files left unchanged
  • Ran type checking
❯ uv pip install pip && uv run mypy --install-types --non-interactive \
    -p livekit.agents \
    -p livekit.plugins.smallestai
Audited 1 package in 5ms
Success: no issues found in 169 source files

API Reference

@CLAassistant

CLAassistant commented Feb 16, 2026

CLA assistant check
All committers have signed the CLA.


@mahimairaja
Contributor Author

mahimairaja commented Feb 16, 2026

Tested prerecorded:

import asyncio
from pathlib import Path

import aiohttp
from dotenv import load_dotenv

from livekit.agents import utils
from livekit.plugins import smallestai

load_dotenv()


async def main():
    wav = Path(__file__).resolve().parent / "sample.wav"

    async with aiohttp.ClientSession() as session:
        stt = smallestai.STT(language="en", http_session=session)
        frames = [
            f
            async for f in utils.audio.audio_frames_from_file(
                str(wav), sample_rate=16000, num_channels=1
            )
        ]
        event = await stt.recognize(frames)

    print(event.alternatives[0].text if event.alternatives else "")


if __name__ == "__main__":
    asyncio.run(main())

@mahimairaja
Contributor Author

mahimairaja commented Feb 16, 2026

Testing streaming:

from dotenv import load_dotenv

from livekit import agents
from livekit.agents import Agent, AgentServer, AgentSession, room_io
from livekit.plugins import silero
from livekit.plugins.openai.llm import LLM
from livekit.plugins.smallestai.stt import STT
from livekit.plugins.smallestai.tts import TTS
from livekit.plugins.turn_detector.english import EnglishModel

load_dotenv()


class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="""You are a helpful voice AI assistant.""",
        )


server = AgentServer()


@server.rtc_session(agent_name="my-agent")
async def my_agent(ctx: agents.JobContext):
    session = AgentSession(
        stt=STT(),
        llm=LLM(model="gpt-4.1-mini"),
        tts=TTS(),
        vad=silero.VAD.load(),
        turn_detection=EnglishModel(),
    )

    await session.start(
        room=ctx.room,
        agent=Assistant(),
        room_options=room_io.RoomOptions(),
    )

    await session.generate_reply(instructions="Greet the user and offer your assistance.")


if __name__ == "__main__":
    agents.cli.run_app(server)


@mahimairaja
Contributor Author

After conversations with @harshitajain165 from smallest.ai, I learned that a few more steps are needed on the Smallest server side before streaming is fully supported. For now I am moving this PR to draft.

@mahimairaja mahimairaja marked this pull request as draft February 16, 2026 19:35
@mahimairaja mahimairaja marked this pull request as ready for review March 2, 2026 20:53
Contributor

@devin-ai-integration bot left a comment


Devin Review found 1 new potential issue.

View 8 additional findings in Devin Review.

Comment on lines +400 to +402
if self._is_last_event.is_set():
closing_ws = True
return

πŸ”΄ recv_task early return on is_last leaves keepalive_task blocking tasks_group, causing stream to hang

After both send_task and recv_task complete normally, keepalive_task continues running indefinitely, preventing asyncio.gather from completing. This causes _run to never return, _event_ch to never close, and any consumer iterating the speech stream to hang after receiving all transcripts.

Root Cause and Detailed Walkthrough

The shutdown sequence proceeds as follows:

  1. send_task exhausts self._input_ch, sends the END message (stt.py:366-368), and returns normally.
  2. The server processes the end signal and sends a final transcript with is_last=True.
  3. recv_task processes this event, _process_stream_event sets self._is_last_event (stt.py:524-525), and recv_task returns early at stt.py:400-402:
    if self._is_last_event.is_set():
        closing_ws = True
        return
  4. keepalive_task (stt.py:327-333) is still running: it pings every 30 seconds and only exits when ws.ping() raises an exception.
  5. tasks_group = asyncio.gather(*tasks) at stt.py:416 requires ALL three tasks to complete. Since keepalive_task is still alive, the gather never resolves.
  6. asyncio.wait at stt.py:419-422 blocks forever (or until keepalive_task's next ping detects a closed connection, which may take up to 30 seconds, or never if the server doesn't close the WebSocket).
  7. _run never returns, so _main_task never returns, _event_ch never closes, and the consumer's async for event in stream hangs after the last real event.

Compare with the Deepgram plugin (livekit-plugins/livekit-plugins-deepgram/livekit/plugins/deepgram/stt.py:531-559): Deepgram's recv_task does NOT exit early; it continues to call ws.receive() until the WebSocket is actually closed by the server, which naturally causes keepalive_task to fail on its next send and exit.

Impact: In standalone stream usage (e.g. async for event in stt.stream(): ...), the iteration hangs indefinitely after all transcripts are received. In AgentSession usage, the hang is masked by explicit aclose() calls, but cleanup is still delayed.
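The hang mechanism is easy to reproduce with a toy example: asyncio.gather only resolves once every task has finished, so a keepalive loop that never exits pins the whole group. Task names below mirror the walkthrough, but the code is purely illustrative:

```python
import asyncio


async def send_task() -> None:
    await asyncio.sleep(0)  # finishes immediately, like send_task after the END message


async def recv_task() -> None:
    await asyncio.sleep(0)  # finishes on is_last, like the early return


async def keepalive_task(stop: asyncio.Event) -> None:
    while not stop.is_set():  # "pings" forever unless something unblocks it
        await asyncio.sleep(0.01)


async def main() -> str:
    stop = asyncio.Event()
    group = asyncio.gather(send_task(), recv_task(), keepalive_task(stop))
    try:
        # send/recv are done, but the gather stays pending on keepalive.
        await asyncio.wait_for(asyncio.shield(group), timeout=0.1)
        return "completed"
    except asyncio.TimeoutError:
        stop.set()  # analogous to the server closing the ws, unblocking keepalive
        await group
        return "hung until keepalive exited"


print(asyncio.run(main()))  # hung until keepalive exited
```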

Prompt for agents
In livekit-plugins/livekit-plugins-smallestai/livekit/plugins/smallestai/stt.py, the recv_task function (lines 371-402) returns early when is_last_event is set (lines 400-402), but this leaves keepalive_task running and blocking asyncio.gather from completing.

The fix: instead of returning early from recv_task, continue the while loop to let the WebSocket close naturally. Since closing_ws is already set to True at line 401, the existing close-frame handler at lines 376-382 will cleanly return when the server closes the connection. This matches the Deepgram plugin's pattern.

Replace lines 400-402:
    if self._is_last_event.is_set():
        closing_ws = True
        return

With:
    if self._is_last_event.is_set():
        closing_ws = True
        # Don't return early; continue loop so ws.receive() sees
        # the server-side close frame, which also lets keepalive_task
        # detect the closed connection and exit.




Development

Successfully merging this pull request may close these issues.

feat: Add STT (Speech-to-Text) support to livekit-plugins-smallestai
