Newsletter

AssemblyAI + 🔗LangChain Go, Universal-1 Recap

Explore our new AssemblyAI and LangChain Go integration to unlock powerful LLM capabilities for audio data. Check out Universal-1, our advanced multilingual Speech-to-Text model with unmatched accuracy.

Hey 👋, this weekly update contains the latest info on our new product features, tutorials, and community.

AssemblyAI + 🔗LangChain Go: Unleash LLMs on Audio 

Our new integration with LangChain Go lets you bring AssemblyAI's speech-to-text models into your Go applications and unlock large language model capabilities on your audio data. With this integration, you can now leverage LLMs to generate summaries, extract insights, answer queries, and more - all based on audio transcripts from AssemblyAI.

Check out the LangChain Go documentation to get started.
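
Here's roughly what loading a transcript into LangChain Go looks like - a minimal sketch, assuming the NewAssemblyAIAudioTranscript constructor and WithAudioURL option in langchaingo's documentloaders package and a hypothetical audio URL; see the LangChain Go docs for the exact API:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	// Assumed import path for LangChain Go's document loaders.
	"github.com/tmc/langchaingo/documentloaders"
)

func main() {
	ctx := context.Background()

	// Assumed constructor and option names; verify against the LangChain Go docs.
	loader := documentloaders.NewAssemblyAIAudioTranscript(
		os.Getenv("ASSEMBLYAI_API_KEY"),
		documentloaders.WithAudioURL("https://example.com/meeting.mp3"), // hypothetical audio URL
	)

	// Load submits the audio to AssemblyAI and returns the transcript as documents.
	docs, err := loader.Load(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Each document's PageContent holds transcript text you can feed into an LLM chain.
	for _, doc := range docs {
		fmt.Println(doc.PageContent)
	}
}
```

From there, the returned documents can be passed into your LangChain Go chains - for example, summarization or question answering over the transcript.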

🚀Universal-1: Powerful Speech-to-Text Model

Last week, we introduced Universal-1, our groundbreaking multilingual Speech-to-Text model trained on a massive 12.5M hours of audio data. The response has been overwhelming, and we're thrilled to see developers leveraging its unparalleled accuracy and performance capabilities. 

If you haven't had a chance to explore Universal-1 yet, here's a quick recap of the improvements it provides:

  • 71% better speaker count estimation and 14% better word timestamp estimation compared to our prior models
  • Up to 30% fewer hallucinations than Whisper Large-v3, ensuring cleaner, more reliable transcriptions
  • Over 22% more accurate than speech-to-text APIs from Azure, AWS, and Google
  • Ability to code-switch, transcribing multiple languages within a single audio file
  • And it processes an hour of audio in just 38 seconds ⚡

Universal-1 is now the default model for transcription, available to all our users without any changes required. Check out our docs to start building with Universal-1.
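
Here's how little code a transcription takes with our Go SDK - a minimal sketch, assuming the SDK's TranscribeFromURL helper, a pointer Text field on the response, and a hypothetical audio URL; see the SDK quickstart for the exact API:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	// Assumed import path for the AssemblyAI Go SDK.
	aai "github.com/AssemblyAI/assemblyai-go-sdk"
)

func main() {
	ctx := context.Background()
	client := aai.NewClient(os.Getenv("ASSEMBLYAI_API_KEY"))

	// TranscribeFromURL (assumed helper) submits the audio and waits for the
	// finished transcript. No model option is set: Universal-1 is the default.
	transcript, err := client.Transcripts.TranscribeFromURL(
		ctx,
		"https://example.com/meeting.mp3", // hypothetical audio URL
		nil,
	)
	if err != nil {
		log.Fatal(err)
	}

	// Text is assumed to be a pointer field in the SDK's response struct.
	if transcript.Text != nil {
		fmt.Println(*transcript.Text)
	}
}
```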

Fresh From Our Blog

Transcribe an audio file with Universal-1 using Go: Dive into transcribing audio files in your Go applications using our flagship Universal-1 model, delivering industry-leading speech recognition performance. Read more>>

Transcribe audio and video files with Python and Universal-1: Discover how to leverage Universal-1 to transcribe both audio and video files with high accuracy in your Python applications. Read more>>

Transcribe an audio file with Universal-1 in Node.js: Unlock unparalleled transcription accuracy in your Node.js apps using our Universal-1 model - the cutting-edge in speech-to-text technology. Read more>>

Automatically extract phone call insights with LLMs and Python | Full tutorial: Build an app that automatically extracts insights from phone calls with LLMs and Python.

How to Build a RAG Application for Multi-Speaker Audio Data: Learn how to build a RAG application in 10 minutes that can take multiple speakers into account when answering a question. 

Coding an AI Voice Bot from Scratch: Real-Time Conversation with Python: Learn how to build a real-time AI voice assistant with Python that transcribes speech as it's spoken, generates AI responses, and provides a human-like conversational experience.