Streaming Speech-to-Text

AssemblyAI's Streaming Speech-to-Text (STT) allows you to transcribe live audio streams with high accuracy and low latency. By streaming your audio data to our secure WebSocket API, you can receive transcripts back within a few hundred milliseconds.

Supported languages

Streaming Speech-to-Text is only available for English. See Supported languages.

Getting started

Get started with any of our official SDKs:

If your programming language isn't supported yet, see the WebSocket API:

Audio requirements

The audio format must conform to the following requirements:

  • PCM16 or Mu-law encoding (See Specify the encoding)
  • A sample rate that matches the value of the supplied sample_rate parameter
  • Single-channel
  • 100 to 2000 milliseconds of audio per message
tip

Audio segments between 100 ms and 450 ms in duration yield the best transcription accuracy.
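The sizing guidance above can be sanity-checked with a little arithmetic: the byte size of one audio message follows from the sample rate, the sample width, and the chunk duration. A minimal sketch in plain Python (no SDK required):

```python
def chunk_size_bytes(duration_ms: int, sample_rate: int = 16_000,
                     bytes_per_sample: int = 2) -> int:
    """Bytes of single-channel audio for one message.

    bytes_per_sample is 2 for PCM16 and 1 for Mu-law.
    """
    return int(sample_rate * (duration_ms / 1000) * bytes_per_sample)

# A 300 ms PCM16 chunk at 16 kHz: 16000 * 0.3 * 2 = 9600 bytes.
print(chunk_size_bytes(300))            # 9600
# A 100 ms Mu-law chunk at 8 kHz: 8000 * 0.1 * 1 = 800 bytes.
print(chunk_size_bytes(100, 8_000, 1))  # 800
```

Sending chunks near the lower end of the allowed range keeps latency low while staying above the 100 ms minimum.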

Specify the encoding

By default, transcriptions expect PCM16 encoding. To use Mu-law encoding instead, set the encoding parameter to aai.AudioEncoding.pcm_mulaw:

  • PCM16 (default): aai.AudioEncoding.pcm_s16le (PCM signed 16-bit little-endian)
  • Mu-law: aai.AudioEncoding.pcm_mulaw (PCM Mu-law)
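As an example, a Python SDK sketch for a Mu-law telephony stream (assumes the assemblyai package is installed and an API key is configured; the inline lambdas stand in for real on_data/on_error callbacks):

```python
import assemblyai as aai

# Assumption: 8 kHz Mu-law audio, e.g. from a telephony source.
transcriber = aai.RealtimeTranscriber(
    sample_rate=8_000,
    encoding=aai.AudioEncoding.pcm_mulaw,
    on_data=lambda transcript: print(transcript.text),
    on_error=lambda error: print(error),
)
```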

Add custom vocabulary

You can add up to 2500 characters of custom vocabulary to boost the probability that those words appear in the transcription.

For this, create a list of strings and set the word_boost parameter:
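A sketch of the word_boost parameter with the Python SDK (the boosted terms here are placeholders; assumes an API key is configured and real callbacks replace the lambdas):

```python
import assemblyai as aai

transcriber = aai.RealtimeTranscriber(
    sample_rate=16_000,
    # Up to 2500 characters of boosted terms in total.
    word_boost=["aws", "azure", "google cloud"],
    on_data=lambda transcript: print(transcript.text),
    on_error=lambda error: print(error),
)
```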

Authenticate with a temporary token

If you need to authenticate on the client, you can avoid exposing your API key by using temporary authentication tokens.

  1. To generate a temporary token, call aai.RealtimeTranscriber.create_temporary_token().

    Use the expires_in parameter to specify how long the token should be valid for, in seconds.

    note

    The expiration time must be a value between 60 and 360000 seconds.

  2. You can now use this temporary token to authenticate a new WebSocket session.

    note

    Each token has a one-time use restriction and can only be used for a single session.

    To use it, specify the token parameter when initializing the streaming transcriber.
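The two steps above can be sketched as follows (a hedged example: the server side holds the API key, the client side only ever sees the short-lived token):

```python
import assemblyai as aai

# Server side: generate a short-lived token using your API key.
aai.settings.api_key = "YOUR_API_KEY"
token = aai.RealtimeTranscriber.create_temporary_token(expires_in=60)

# Client side: authenticate the session with the token instead of the key.
# Remember: each token is single-use.
transcriber = aai.RealtimeTranscriber(
    sample_rate=16_000,
    token=token,
    on_data=lambda transcript: print(transcript.text),
    on_error=lambda error: print(error),
)
```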

Manually end current utterance

To manually end an utterance, call force_end_utterance():

Manually ending an utterance immediately produces a final transcript.
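For example (assumes `transcriber` is an already-connected aai.RealtimeTranscriber instance):

```python
# Force the current utterance to end; the service immediately
# emits a final transcript for the audio received so far.
transcriber.force_end_utterance()
```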

Configure the threshold for automatic utterance detection

You can configure the threshold for how long to wait before ending an utterance.

To change the threshold, you can specify the end_utterance_silence_threshold parameter when initializing the real-time transcriber.

After the session has started, you can change the threshold by calling configure_end_utterance_silence_threshold().

note

By default, Streaming Speech-to-Text ends an utterance after 700 milliseconds of silence. You can change the threshold any number of times after the session has started. The valid range is between 0 and 20000 milliseconds.
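Both approaches can be sketched together (assumes the assemblyai package is installed and an API key is configured; the lambdas stand in for real callbacks):

```python
import assemblyai as aai

# Set the initial threshold (in milliseconds) at creation time.
transcriber = aai.RealtimeTranscriber(
    sample_rate=16_000,
    end_utterance_silence_threshold=500,
    on_data=lambda transcript: print(transcript.text),
    on_error=lambda error: print(error),
)

# ...later, after the session has started, adjust it on the fly:
transcriber.configure_end_utterance_silence_threshold(300)
```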

Disable partial transcripts

If you're only using the final transcript, you can disable partial transcripts to reduce network traffic.

You can disable partial transcripts by setting the disable_partial_transcripts parameter to True.
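A sketch with the Python SDK (same assumptions as above regarding installation, API key, and callbacks):

```python
import assemblyai as aai

transcriber = aai.RealtimeTranscriber(
    sample_rate=16_000,
    # Only final transcripts are delivered; partials are suppressed.
    disable_partial_transcripts=True,
    on_data=lambda transcript: print(transcript.text),
    on_error=lambda error: print(error),
)
```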

Enable extra session information

If you enable extra session information, the client receives a RealtimeSessionInformation message right before receiving the session termination message.

To enable it, define a callback function to handle the event and configure the on_extra_session_information parameter.
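A sketch of such a callback (the audio_duration_seconds field is an assumption based on the SessionInformation message; verify against the current API reference):

```python
import assemblyai as aai

def on_extra_session_information(info: aai.RealtimeSessionInformation):
    # Delivered once, just before the session termination message.
    print(f"Total audio duration: {info.audio_duration_seconds} s")

transcriber = aai.RealtimeTranscriber(
    sample_rate=16_000,
    on_data=lambda transcript: print(transcript.text),
    on_error=lambda error: print(error),
    on_extra_session_information=on_extra_session_information,
)
```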