Benchmarks
Industry-leading accuracy across pre-recorded and streaming speech-to-text.
Benchmarks are an important first step before running your own evaluation. Below are the current benchmarks for our models so you can assess performance across accuracy, latency, and error rates.
Public benchmarks can be misleading due to overfitting and benchmark gaming. We strongly recommend running your own evaluation on your audio data to identify the best model for your use case.
For the full interactive benchmark experience with competitive comparisons, visit assemblyai.com/benchmarks.
Pre-recorded speech-to-text
Word accuracy and error rate
AssemblyAI Universal-3 Pro achieves a mean WER of 6.2% (median 6.5%) on English benchmarks, with a hallucination rate of 0.58%.
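WER counts substitutions, deletions, and insertions against a reference transcript, divided by the number of reference words. A minimal, dependency-free sketch of the standard word-level edit-distance computation behind figures like these:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    if not ref:
        # Degenerate case: empty reference.
        return 0.0 if not hyp else float("inf")
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# "on the" missing: 2 errors over 6 reference words ≈ 0.333
print(word_error_rate("the cat sat on the mat", "the cat sat mat"))
```

Note that reported WER figures also depend on text normalization (see Methodology below), so two evaluations of the same model can differ if they normalize differently.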
English benchmarks
Most recent update: October 2025.
Multilingual benchmarks
Most recent update: June 2025. Dataset: FLEURS.
Hallucinations and consecutive errors
Hallucinations are a critical concern in production STT systems. AssemblyAI reduces hallucinations by 30% compared to Whisper, measured across three error categories:
- Fabrications — isolated words inserted that were never spoken
- Omissions — spoken words missing from the transcript
- Hallucinations — extended runs of consecutive fabricated words
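These categories can be approximated by aligning a hypothesis against its reference and bucketing the differences. A rough sketch using Python's `difflib`; the 5-word run threshold for flagging a hallucination is an illustrative assumption, not AssemblyAI's definition:

```python
import difflib

def classify_errors(reference: str, hypothesis: str, hallucination_run: int = 5):
    """Align words and bucket differences into fabrications, omissions,
    and hallucinations. Substitutions ('replace' opcodes) are out of
    scope for these three categories and are ignored here."""
    ref, hyp = reference.split(), hypothesis.split()
    fabrications, omissions, hallucinations = [], [], []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=ref, b=hyp).get_opcodes():
        if op == "insert":
            words = hyp[j1:j2]
            # Long inserted runs are treated as hallucination candidates.
            if len(words) >= hallucination_run:
                hallucinations.append(words)
            else:
                fabrications.append(words)
        elif op == "delete":
            omissions.append(ref[i1:i2])
    return fabrications, omissions, hallucinations
```

For example, `classify_errors("we met on tuesday", "we met briefly on")` reports `briefly` as a fabrication and `tuesday` as an omission.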
Benchmark challenges
Models are often trained on publicly available datasets, sometimes the very same datasets later used for evaluation. A model overfit to its evaluation set shows artificially strong scores on standard WER tests, which makes WER potentially misleading: real-world performance on unseen audio can be significantly worse.
External benchmarks
For third-party benchmarks, we recommend the Hugging Face ASR Leaderboard. Note that many models listed require self-hosting and lack production features like speaker diarization and automatic language detection.
Streaming speech-to-text
English benchmarks
Most recent update: October 2025.
Multilingual benchmarks
Most recent update: November 2025.
Latency gaming
In streaming, speed is critical. To post lower TTFT (time to first token) numbers, some providers emit tokens before any speech has actually occurred. These early tokens are hallucinations designed to game the benchmark, making raw TTFT a misleading measure of actual latency.
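One way to guard against this in your own measurements is to compare the naive first-emission time against a TTFT that discards tokens emitted before speech onset. A sketch over a hypothetical list of `(emit_time, token)` events from a streaming session:

```python
def honest_ttft(token_events, speech_onset_s):
    """token_events: list of (emit_time_s, token_text) from a streaming
    session, ordered by time. Naive TTFT takes the first emission no matter
    what; an honest TTFT ignores tokens emitted before any speech occurred
    (likely hallucinated) and measures from speech onset instead."""
    naive = token_events[0][0]
    valid = [t for t, _ in token_events if t >= speech_onset_s]
    honest = valid[0] - speech_onset_s if valid else float("inf")
    return naive, honest

# A token at 0.1s, before speech starts at 1.0s, yields a flattering
# naive TTFT of 0.1s; the honest figure is 0.4s.
naive, honest = honest_ttft([(0.1, "uh"), (1.4, "hello")], speech_onset_s=1.0)
```

Detecting speech onset itself requires a voice-activity detector or ground-truth alignment; this sketch assumes you already have that timestamp.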
External benchmarks
For third-party streaming benchmarks, we recommend the Coval Speech-to-Text Playground.
Methodology
Our benchmarks are evaluated across 250+ hours of audio data, 80,000+ audio files, and 26 datasets. We apply standard text normalization before calculating metrics. For full details on our methodology, visit assemblyai.com/benchmarks.
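Normalization matters because transcripts differ in casing and punctuation that shouldn't count as word errors. A minimal sketch of the kind of normalization applied before scoring; production normalizers also expand numerals, standardize contractions, and more:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip ASCII punctuation, and collapse whitespace
    so WER compares words rather than formatting."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

normalize("Hello, World!  It's 5 PM.")  # -> "hello world its 5 pm"
```

Because normalization choices directly change the score, comparisons across providers are only meaningful when the same normalizer is applied to every model's output.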
Run your own benchmark
We’d be happy to help: AssemblyAI provides a benchmarking tool for running a custom evaluation against your real audio files. Contact us for more information.
You can also run your own benchmarks using the Hugging Face framework, which provides a GitHub repo with full instructions.
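At its core, a custom evaluation is a loop over (audio, reference) pairs. A sketch where `transcribe` stands in for whichever STT client you are testing (its path-in, string-out signature is an assumption), and references sit beside each audio file as `.txt`:

```python
from pathlib import Path

def benchmark(audio_dir, transcribe, normalize, wer):
    """Score every .wav in audio_dir against its sibling .txt reference.
    transcribe, normalize, and wer are caller-supplied callables so the
    same loop works with any client and scoring setup."""
    scores = {}
    for audio in sorted(Path(audio_dir).glob("*.wav")):
        reference = audio.with_suffix(".txt").read_text()
        hypothesis = transcribe(audio)
        scores[audio.name] = wer(normalize(reference), normalize(hypothesis))
    mean_wer = sum(scores.values()) / len(scores)
    return scores, mean_wer
```

Keeping per-file scores alongside the mean helps you spot outliers, such as a handful of noisy files dragging down an otherwise strong model.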