Accuracy is the most important characteristic of a speech recognition system. While AssemblyAI’s production end-to-end approach to speech recognition already provides better accuracy than other commercial-grade systems, there is still room for improvement before human performance is reached. As part of our core research and development efforts to keep pushing the state of the art in speech recognition accuracy, in this post we explore speech recognition architectures that are gaining new popularity in both academia and industry.
1 Introduction
As we continue our core research and development efforts to push the boundaries of state of the art accuracy, we begin exploring architectures for end-to-end speech recognition that are gaining renewed popularity, not only in the research domain but also in production settings in industry (, ). We are especially interested in architectures that have shown production-level accuracy matching or surpassing that of conventional hybrid DNN-HMM (, ) speech recognition systems. With that in mind, we survey the following two architectures: Listen, Attend and Spell (LAS, ) and Recurrent Neural Network Transducers (RNNT, ).
We survey these new architectures from a production perspective. In section 2, we report on accuracy comparisons from some of the latest research done on both LAS and RNNT (, , ). We also compare these techniques against Connectionist Temporal Classification (CTC, ), a more widely known end-to-end approach and the one that early versions of AssemblyAI’s speech recognition system were based on. Finally, we look at recent research that combines RNNT and LAS into a single system (, ).
In section 3, keeping our focus on production-grade speech recognition, we compare LAS and RNNT models from the perspective of feature parity. We focus on contextualization (, , ), inverse text normalization (, ), word timestamps, and the possibility of doing real time speech recognition (). We finish the post in section 4 with high-level conclusions from our survey as well as next steps for future blog posts.
2 Accuracy
While human-parity speech recognition accuracy has been reached on some research datasets (), word error rates (WER) for industrial production-grade applications in challenging acoustic environments are far from human level. One example of such a domain is spontaneous telephone conversation, where human parity has not been achieved even on research datasets ().
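Concretely, WER is the word-level edit distance between a reference transcript and a hypothesis, divided by the number of reference words. A minimal sketch (function and variable names are ours):

```python
def word_error_rate(ref: str, hyp: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    r, h = ref.split(), hyp.split()
    # prev[j] holds the edit distance between the first i-1 reference
    # words and the first j hypothesis words.
    prev = list(range(len(h) + 1))
    for i in range(1, len(r) + 1):
        cur = [i] + [0] * len(h)
        for j in range(1, len(h) + 1):
            cur[j] = min(prev[j] + 1,                           # deletion
                         cur[j - 1] + 1,                        # insertion
                         prev[j - 1] + (r[i - 1] != h[j - 1]))  # substitution
        prev = cur
    return prev[len(h)] / len(r)
```

For example, `word_error_rate("the cat sat", "the bat sat")` is 1/3: one substitution over three reference words.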
In this section we focus on LAS and RNNT systems tested on production data rather than research datasets. We avoid research datasets because they are easily overfit by models that do not generalize well to real-world audio. We want to keep our focus on datasets with challenging characteristics such as channel noise, compression, ambient noise, crosstalk, accents and speaker diversity.
2.1 LAS vs RNNT
When comparing the accuracy of LAS- and RNNT-based architectures, the consensus seems to be that LAS has better accuracy than RNNT.  compares both LAS and RNNT models using 12,500 hours of production voice-search traffic, artificially distorted by adding noise and by simulating room acoustics. WER is reported on both dictation and voice search utterances. Both LAS and RNNT use the same encoder architecture and size; their parameters, however, are different and were initialized from a converged CTC-trained network with the same encoder architecture. The decoders also share the same architecture, number of layers and size, and output the set of 26 lower-case letters plus numerals, space and punctuation symbols. As a result, the only difference in parameter count lies in the RNNT joint network versus the LAS attention mechanism. Neither the LAS nor the RNNT model uses any sort of external language model. Table 1 summarizes their results, showing LAS performing better in general.
 also shows LAS performing better in general. They emphasize that RNNT models lag in quality while LAS shows competitive performance when compared to hybrid DNN-HMM models ().  also compares the WERs of LAS and RNNT. Similar to , the encoder and decoder networks are the same for both RNNT and LAS, differing only in the joint and attention networks. Both predict outputs from a set of 4,096 word pieces. The training data is the same for both models and consists of 30,000 hours of voice search data, corrupted with added noise and room acoustics. The models are evaluated on two test sets: one made of utterances shorter than 5.5 seconds (Short Utts.) and another made of utterances longer than 5.5 seconds (Long Utts.). Results showing LAS performing better can be seen in table 1. LAS shows degrading accuracy on long utterances, which is attributed to the attention mechanism as explained in .
While  does not explicitly compare the accuracy rates of LAS and RNNT models, they do also mention, similar to , that under low latency constraints the accuracy of LAS outperforms conventional DNN-HMM models, while RNNT models do not.
2.2 LAS and RNNT vs CTC
Comparisons with CTC-based models are not as simple as comparisons between just LAS and RNNT. The main reason is that a CTC model depends heavily on an external language model to reach acceptable accuracy. LAS and RNNT models do not need an external language model, since the model itself learns an implicit language model.
 compares LAS, RNNT and CTC models without any external LM, where the encoder architecture and size are the same across all three models. CTC’s accuracy without an external LM, shown in table 2, is significantly lower than that of RNNT and LAS. The rest of the experimental details are the same as those described in section 2.1.
 compares RNNT and CTC, but with significant differences between the architectures. The CTC model consists of 6 LSTM layers, each with 1,200 cells and a 400-dimensional projection layer. The model outputs 42 phoneme targets through a softmax layer. Decoding is performed with a 5-gram first-pass language model and a second-pass LSTM LM rescoring model. The RNNT model’s encoder consists of 8 LSTM layers with 2,048 cells each and a 640-dimensional projection layer. A time reduction layer of factor 2 is inserted after the second layer of the encoder. The prediction network has 2 LSTM layers, each with 2,048 cells and a 640-dimensional projection layer. The joint network has 640 hidden units. The output layer models 4,096 word pieces. The size of the RNNT model is 120MB, while the size of the CTC model is 130MB.
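To make these size figures concrete, the parameter count of a projected LSTM layer can be estimated from its dimensions. A back-of-the-envelope sketch (we ignore implementation details such as peephole connections, and the 640-dimensional input is our assumption for an interior encoder layer):

```python
def projected_lstm_params(input_dim: int, cells: int, proj: int) -> int:
    # Four gates, each with weights over [input; projected recurrent state]
    # plus a bias, followed by a cells-by-proj projection matrix.
    gates = 4 * ((input_dim + proj) * cells + cells)
    return gates + cells * proj

# One interior encoder layer as described above: 2048 cells, 640 projection.
layer_params = projected_lstm_params(640, 2048, 640)
```

This gives roughly 11.8 million parameters for a single such layer, which is consistent in order of magnitude with a multi-layer model of around 120 million parameters.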
 performs training with 27,500 hours of voice search and dictation data, artificially corrupted to simulate room acoustics and noise. Table 2 compares their CTC and RNNT models on both voice search and dictation test sets, showing the RNNT model to be significantly better.
2.3 Combining LAS and RNNT
While LAS is perceived as having better accuracy, RNNT models are perceived to have production quality features, such as streaming capabilities, that make them more desirable.
With the objective of bridging the accuracy gap between both architectures,  and  develop a combination of RNNT and LAS models. RNNT is used during first pass decoding, and LAS is used as a second pass rescoring model.
The experimental details of  with respect to the separate RNNT and LAS models are described in section 2.1. The two-pass experiments use the same architectures for LAS and RNNT. The numbers of parameters are the same as well, but this time the encoder parameters are shared between them. Training is done in three steps. First, an RNNT model is trained to convergence. Then the RNNT encoder is frozen and a LAS decoder is trained on top of it. Finally, a combined loss is used to retrain both the RNNT and LAS models (with a shared encoder) together. Table 3 shows that the rescoring approach significantly improves the accuracy of the RNNT models. It also mitigates the LAS weakness with respect to longer utterances.
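The final joint-training step can be summarized as optimizing an interpolated loss over the shared encoder. A sketch (the weight `lam` is a hypothetical hyperparameter, not a value from the paper):

```python
def combined_loss(rnnt_loss: float, las_loss: float, lam: float = 0.5) -> float:
    # Weighted combination of the RNNT loss and the LAS cross-entropy loss;
    # during this step, gradients flow into both decoders and the shared encoder.
    return lam * rnnt_loss + (1.0 - lam) * las_loss
```

Setting `lam = 1.0` recovers pure RNNT training, while `lam = 0.0` trains only the LAS branch against the frozen-then-shared encoder.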
 also implements LAS rescoring as a second pass, where the first pass is an RNNT model. Besides using search data during training, they also use data from multiple other domains (e.g. farfield, phone) and accented speech from countries other than the U.S. The number of hours used for training is not specified, but the model architecture details related to LAS rescoring are similar: a shared encoder is used between the RNNT model and the LAS model, and the LAS model rescores hypotheses coming from the RNNT model. They add an additional LAS encoder layer between the shared encoder and the LAS decoder. Table 3 shows that LAS rescoring significantly improves RNNT accuracy.
More importantly, table 3 shows the combination of RNNT and LAS beating, for the first time, a conventional HMM-RNN hybrid model. The conventional model’s acoustic model outputs context-dependent phones, and uses a phonetic dictionary with close to 800,000 words. A 5-gram language model is used during first-pass decoding and a MaxEnt language model is used for second-pass rescoring. The total size of the conventional model is around 87 GB. The RNNT+LAS model’s size is 0.18 GB: the RNNT model has 120 million parameters, the LAS model (both the additional encoder and the decoder) adds 57 million more, and parameters are quantized to 8-bit fixed point.
3 Feature Parity
While accuracy is the most important characteristic of a speech recognition system, there are many other features that contribute to usability and cost. In this section we explore LAS and RNNT from the perspective of features such as contextualization, inverse text normalization, timestamps and real time speech recognition.
3.1 Contextualization
The words produced by a speaker during a dialog depend on the context the speaker is in. For example, if the speaker wants to call a friend, they are very likely to say the friend’s name. The name may consist of uncommon or foreign words that had very few or no samples in the training data of the ASR system, and as a result it may not be recognized correctly.
Contextualization in ASR is about biasing the models towards the words and phrases that belong to the context without hurting the ASR performance of non contextual sentences.
 implements this in a CTC system by incorporating into the language model a dynamic class that represents the contextual information. During decoding, the dynamic class is replaced with a finite state transducer (FST) containing the contextual phrases and vocabulary (e.g. contact names). On-the-fly rescoring is then applied with a language model that contains the n-grams corresponding to those contextual phrases and vocabulary. The accuracy effect of this contextual mechanism is shown in table 4. Although no results are shown on generic test sets, the accuracy on contextual test sets improves significantly.
 implements contextualization on RNNT models. This is done through an FST that represents the entire contextual n-gram model (instead of just the vocabulary and phrases), and this model is interpolated with the RNNT model through shallow fusion during beam search decoding. The accuracy improvements, shown in table 4, are also significant. Results on generic test sets are not shown, but the improvements on contextual test sets suggest that contextualization can be successfully implemented in an RNNT framework as well.
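Shallow fusion itself is just a log-linear interpolation of scores at each beam search step. A minimal sketch (names and the weight `beta` are ours):

```python
def fused_score(asr_logprob: float, bias_logprob: float, beta: float = 0.3) -> float:
    # log p(y|x) + beta * log p_context(y): the contextual FST/LM score is
    # scaled by beta and added to the end-to-end model's own score.
    return asr_logprob + beta * bias_logprob

# Candidate tokens at one beam step, as (asr_logprob, bias_logprob) pairs:
# the contact-name spelling "john" is favored by the bias model and wins,
# even though the acoustics alone slightly prefer "jon".
candidates = {"john": (-2.0, -0.5), "jon": (-1.9, -6.0)}
best = max(candidates, key=lambda w: fused_score(*candidates[w]))
```

With no bias model (beta = 0), the ranking falls back to the end-to-end model's scores alone, which is why non-contextual accuracy can be preserved.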
 also implements contextualization through FSTs and shallow fusion in a two-pass RNNT + LAS rescoring system. Since shallow fusion is equivalent to an interpolation of scores, just like rescoring, it is fair to assume that only a single bias FST during first-pass RNNT decoding is needed. Their contextual test set results are used to show the performance impact of other tuning approaches rather than of contextualization itself, so we do not summarize those results here.
With respect to LAS,  implements shallow fusion selectively on top of a LAS system.  goes further and implements contextualization inside the LAS framework in an all-neural way, calling it CLAS. The accuracy improvements from contextualization are significant, but they are measured on artificial test sets that are either generated using TTS or where the context is derived from ground-truth transcriptions, so we do not summarize those results here.
3.2 Inverse Text Normalization
Inverse text normalization converts a transcription in the spoken domain (e.g. "one twenty three first street") to the written domain (e.g. "123 1st st"). Speech recognition systems with conventional DNN-HMM models usually do this with separate models: the output of the ASR system, which is in the spoken domain, is passed through a separate model that converts it to the written domain.
One could do the same with an end-to-end ASR system. But the following work suggests that a single end-to-end ASR system can model acoustics, phonetics, language and normalization together. Normalization can be learned by including numerals, space and punctuation symbols in the output layer of the model; the training data then has to be in the written domain.
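For illustration, here is a toy version of the separate rule-based step a conventional pipeline would need (the mappings are purely illustrative; the end-to-end models below learn this conversion implicitly from written-domain transcripts):

```python
# Longest-match-first replacement over a tiny hand-written mapping.
SPOKEN_TO_WRITTEN = {
    "one twenty three": "123",
    "first": "1st",
    "street": "st",
}

def naive_itn(text: str) -> str:
    for spoken in sorted(SPOKEN_TO_WRITTEN, key=len, reverse=True):
        text = text.replace(spoken, SPOKEN_TO_WRITTEN[spoken])
    return text
```

`naive_itn("one twenty three first street")` yields "123 1st st". Real ITN systems use weighted grammars rather than string replacement, which is part of why folding normalization into the ASR model itself is attractive.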
 implements a LAS system able to match state of the art accuracy from hybrid DNN-HMM systems. This network incorporates acoustic, pronunciation and language models into a single network which outputs written form text. It explicitly mentions that a text normalization component is not needed to normalize the output of the recognizer.
 implements their RNNT models in a way that they output text in the written domain as well. They are able to improve their numeric output performance by including numeric utterances synthetically produced with text-to-speech (TTS) in their training data.
 does an error-analysis comparison between their RNNT models and conventional models, and credits the RNNT models’ learned normalization as one of the reasons behind their good accuracy.
3.3 Word Timestamps
Depending on the application, another very useful feature is providing time alignments in the speech recognition result. These are usually provided as word timestamps: the beginning and end time of each recognized word in the audio stream. This feature is necessary, for example, in captioning applications for podcasts or videos.
A LAS system lacks the ability to produce timestamps. As described in , the alignment between text and audio is provided by the attention mechanism. However, the attention coefficients produced span the entire audio stream. While there has been research on monotonic attention mechanisms (, , ), which could be more promising for providing word timestamps, this research seems to be in its early stages.
RNNT decoding, as described in , does not provide word timestamps either. The decoding process sums the probability over all time alignments for each hypothesis within the beam. However, within a single alignment, the decoder either aligns a word piece (or grapheme) to an input feature vector, or decides to consume another feature vector. Extracting time alignments should therefore be possible with small modifications to the decoding algorithm in . Further development and experimentation will be needed to address two possible difficulties: first, several word pieces may be aligned to one single feature vector; second, silence alignments may be difficult depending on the output vocabulary of the model.
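Assuming the decoder is modified to record the encoder frame at which each word piece is emitted, word timestamps could be recovered along these lines (a sketch under our own conventions: "_" marks a word-initial piece, and `frame_shift_s` is a hypothetical encoder frame period):

```python
def word_timestamps(alignment, frame_shift_s=0.03):
    # alignment: (frame_index, word_piece) pairs recorded by a modified
    # RNNT beam search; "_" marks a word-initial piece (our convention).
    words, cur, start = [], "", None
    for frame, piece in alignment:
        if piece.startswith("_") and cur:
            # The previous word ends where the new one begins.
            words.append((cur, start, frame * frame_shift_s))
            cur, start = "", None
        if start is None:
            start = frame * frame_shift_s
        cur += piece.lstrip("_")
    if cur:
        last_frame = alignment[-1][0]
        words.append((cur, start, (last_frame + 1) * frame_shift_s))
    return words
```

Note that this sketch sidesteps the two difficulties above: several pieces aligned to one frame collapse into zero-length gaps, and silence is only represented implicitly as the span between consecutive words.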
A combined model, using RNNT on a first pass and LAS as a rescoring pass (as in  and ) would be able to use the timestamps produced by a modified RNNT decoding process as described in the paragraph above.
3.4 Real Time Speech Recognition
Real time speech recognition is constrained by three aspects:
- The first aspect is the ability of the decoding algorithm to provide speech recognition results as it digests feature vectors, which is known as streaming speech recognition. This is clearly not possible with LAS, but is possible with RNNT, as explained in . The attention mechanism of LAS requires the entire audio stream to be processed by the encoder before the decoder can start emitting output labels (). In the case of RNNT, for each input feature vector, one or more output labels are emitted ().
- The second aspect is the CPU or GPU consumption of the decoding algorithm and the models, which can be measured by the real time factor (RTF). RTF is the ratio of how long it takes to process an utterance to how long the utterance is.  performs symmetric parameter quantization to bring the 90th-percentile RTF of RNNT models from 1.43 down to 0.51 on mobile devices.
- The third aspect is the latency of the system: the amount of time the user has to wait, from the moment they stop speaking, until they receive the final speech recognition result. In a mixed RNNT and LAS rescoring system, the LAS computation has to run after RNNT decoding is finished, which means LAS adds all of its computation time to the latency.  reduces latency by moving parts of the LAS computation into the first RNNT pass. It also parallelizes the LAS processing of arcs from the n-best lattice. This removes almost all latency produced by LAS. In addition, an end-of-speech label is included in the output layer of the RNNT model, effectively allowing the model to learn when to predict an endpoint. This removes the need for an external endpointer, and improves WER by 10% relative compared to using an external endpointer.
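The real time factor from the second bullet is a simple ratio, sketched here with the mobile-device figures quoted above:

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    # RTF < 1.0 means the system keeps up with the incoming audio stream.
    return processing_seconds / audio_seconds

# The quantization result above corresponds to processing a 10-second
# utterance in about 5.1 s (RTF 0.51) instead of 14.3 s (RTF 1.43).
before = real_time_factor(14.3, 10.0)
after = real_time_factor(5.1, 10.0)
```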
4 Conclusions
We surveyed recent research literature on Listen, Attend and Spell (LAS) and Recurrent Neural Network Transducers (RNNT), keeping a perspective oriented toward production accuracy and production features. The literature suggests that RNNT models are more accurate than CTC. It also suggests that, while LAS may be more accurate than RNNT, a mix of both can achieve better accuracy and feature parity when compared to hybrid RNN-HMM models.
The test sets used in these works may be unique and different from those of other domains, but the accuracy tables may provide good guidance for experiments to perform when bringing LAS or RNNT architectures into a production ASR design.
Having looked at accuracy and feature parity at a high level in this blog post, in future blog posts we will focus on the implementation details of LAS and RNNT, and compare them to CTC.
 Y. He et al. “Streaming end-to-end speech recognition for mobile devices”. In: ICASSP (2019).
 T. N. Sainath et al. “A streaming on-device end-to-end model surpassing server-side conventional model quality and latency”. In: ICASSP (2020).
 A. Mohamed et al. “Acoustic modeling using deep belief networks”. In: IEEE Transactions on Audio, Speech, and Language Processing 20(1):14-22 (2012).
 N. Jaitly et al. “Application of Pretrained Deep Neural Networks to Large Vocabulary Speech Recognition”. In: INTERSPEECH (2012).
 W. Chan et al. “Listen, attend and spell”. In: CoRR abs/1508.01211 (2015).
 A. Graves. “Sequence transduction with recurrent neural networks”. In: arXiv:1211.3711 (2012).
 R. Prabhavalkar et al. “A Comparison of Sequence-to-Sequence Models for Speech Recognition”. In: INTERSPEECH (2017).
 T. N. Sainath et al. “Two-Pass End-to-End Speech Recognition”. In: INTERSPEECH (2019).
 A. Graves et al. “Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks”. In: ICML (2006).
 G. Pundak et al. “Deep Context: End-to-End Contextual Speech Recognition”. In: SLT (2018).
 I. McGraw et al. “Personalized speech recognition on mobile devices”. In: ICASSP (2016).
 W. Xiong et al. “Achieving human parity in conversational speech recognition”. In: arXiv:1610.05256 (2016).
 G. Saon et al. “English conversational telephone speech recognition by humans and machines”. In: INTERSPEECH (2017).
 C. Chiu et al. “State-of-the-art speech recognition with sequence-to-sequence models”. In: ICASSP (2018).
 J. Chorowski et al. “Attention-Based Models for Speech Recognition”. In: NIPS (2015).
 I. Williams et al. “Contextual speech recognition in end-to-end neural network systems using beam search”. In: INTERSPEECH (2018).
 A. Tjandra et al. “Local monotonic attention mechanism for end-to-end speech and language processing”. In: Proc. 8th International Joint Conference on Natural Language Processing.
 C. Chiu et al. “Monotonic chunkwise attention”. In: ICLR (2018).
 A. Merboldt et al. “An analysis of local monotonic attention variants”. In: INTERSPEECH