# Voice agents

## Voice agent use case
To optimize for latency, we recommend using the unformatted transcript, as it's received more quickly than the formatted version. In typical voice agent applications involving large language models (LLMs), the lack of formatting has little impact on the subsequent LLM processing.
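As a minimal sketch of that recommendation (the message shape and field name here are illustrative assumptions, not the exact API): forward the unformatted text to the LLM as soon as it arrives, rather than waiting for a formatted version.

```python
def text_for_llm(message: dict) -> str:
    # Prefer the unformatted transcript: it arrives sooner, and LLMs cope
    # fine with missing punctuation and casing. The "transcript" field name
    # is an assumption for illustration.
    return message.get("transcript", "")


# Example: an unformatted streaming message, passed straight through.
msg = {"transcript": "hey how are you", "end_of_turn": False}
print(text_for_llm(msg))
```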
## Possible implementation strategy
Since all our transcripts are immutable, the text never changes once you receive it, so the data is immediately ready to be sent through the voice agent pipeline. Here's one way to handle the conversation flow:
- When you receive a transcription response with `end_of_turn` equal to `true` but your voice agent (i.e., your own turn detection logic) hasn't detected an end of turn, save this text in a variable (let's call it `running_transcript`).
- When the voice agent detects an end of turn, combine the `running_transcript` with the latest partial transcript and send it to the LLM.
- Clear the `running_transcript` after sending, and be sure to ignore the next transcription with `end_of_turn` of `true` that will eventually arrive for the latest partial you used. This prevents duplicate information from being processed in future turns.

What you send to the LLM should look like: `running_transcript + ' ' + latest_partial`
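The steps above can be sketched as a small accumulator class. This is one possible implementation under stated assumptions: the two event-handler methods and their arguments stand in for however your pipeline delivers transcription responses and turn-detection signals; they are not part of any SDK.

```python
class TurnAccumulator:
    """Buffers transcripts until the voice agent's own turn detection fires."""

    def __init__(self, send_to_llm):
        self.running_transcript = ""   # finalized text not yet sent to the LLM
        self.latest_partial = ""       # most recent partial transcript
        self.ignore_next_final = False # skip the final for a partial we already sent
        self.send_to_llm = send_to_llm

    def on_transcription(self, text: str, end_of_turn: bool) -> None:
        """Handle one transcription response from the streaming API."""
        if end_of_turn:
            if self.ignore_next_final:
                # This end_of_turn belongs to a partial we already sent;
                # drop it to avoid duplicating text in future turns.
                self.ignore_next_final = False
                return
            # Transcription finalized, but the agent hasn't ended the turn:
            # append to the running transcript and keep listening.
            self.running_transcript = (self.running_transcript + " " + text).strip()
            self.latest_partial = ""
        else:
            self.latest_partial = text

    def on_agent_end_of_turn(self) -> None:
        """Called by your own turn-detection logic when the user's turn ends."""
        payload = (self.running_transcript + " " + self.latest_partial).strip()
        if payload:
            self.send_to_llm(payload)
        self.running_transcript = ""
        if self.latest_partial:
            # An end_of_turn=true response will still arrive for this partial.
            self.ignore_next_final = True
        self.latest_partial = ""


# Usage: the transcription says the turn ended, the agent disagrees, then
# the agent ends the turn mid-partial.
sent = []
acc = TurnAccumulator(sent.append)
acc.on_transcription("hello there", end_of_turn=True)    # buffered, not sent
acc.on_transcription("how are", end_of_turn=False)       # latest partial
acc.on_agent_end_of_turn()                               # sends combined text
acc.on_transcription("how are you", end_of_turn=True)    # late final, ignored
```

The `ignore_next_final` flag implements the third bullet: the late `end_of_turn` response for the partial you already forwarded is discarded rather than re-buffered.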
## Example flow
Handling our ongoing transcriptions in this manner allows you to achieve the lowest possible latency for this step of your voice agent. Please reach out to the AssemblyAI team with any questions.
Instead of building your own conversation flow logic, you can use AssemblyAI via integrations with orchestrators such as LiveKit and Pipecat. See the next section of our docs for more information on using these tools.