Text Summarization APIs are powerful NLP tools that can condense documents, papers, podcasts, videos, and more into their most important soundbites.
In this article, we’ll discuss what exactly text summarization is, how it works, a few of the best Text Summarization APIs, and some of its use cases.
What is Text Summarization for NLP?
In Natural Language Processing, or NLP, Text Summarization refers to the process of using Deep Learning and Machine Learning models to condense large bodies of text into their most important parts. Text Summarization can be applied to static, pre-existing texts, like research papers or news stories, or to audio or video streams, like a podcast or YouTube video, with the help of Speech-to-Text APIs.
Say, for example, you wanted to summarize the 2021 State of the Union Address, a video that runs an hour and 43 minutes.
Using a Text Summarization API, you’d be able to generate the following summaries for key sections of the video:
- 1:45: I have the high privilege and distinct honor to present to you the President of the United States.
- 31:42: 90% of Americans now live within 5 miles of a vaccination site.
- 44:28: The American job plan is going to create millions of good paying jobs.
- 47:59: No one working 40 hours a week should live below the poverty line.
- 48:22: American jobs finally be the biggest increase in non defense research and development.
- 49:21: The National Institute of Health, the NIH, should create a similar advanced research Projects agency for Health.
- 50:31: It would have a singular purpose to develop breakthroughs to prevent, detect and treat diseases like Alzheimer's, diabetes and cancer.
- 51:29: I wanted to lay out before the Congress my plan.
- 52:19: When this nation made twelve years of public education universal in the last century, it made us the best educated, best prepared nation in the world.
- 54:25: The American Family's Plan guarantees four additional years of public education for every person in America, starting as early as we can.
- 57:08: American Family's Plan will provide access to quality, affordable childcare.
- 61:58: I will not impose any tax increase on people making less than $400,000.
- 67:34: He said the U.S. will become an Arsenal for vaccines for other countries.
- 74:12: After 20 years of value, Valor and sacrifice, it's time to bring those troops home.
- 76:01: We have to come together to heal the soul of this nation.
- 80:02: Gun violence has become an epidemic in America.
- 84:23: If you believe we need to secure the border, pass it.
- 85:00: Congress needs to pass legislation this year to finally secure protection for dreamers.
- 87:02: If we want to restore the soul of America, we need to protect the right to vote.
This makes the video much more understandable at a glance.
How Does Text Summarization Work?
This section will provide an overview of how Text Summarization works and what it is used for.
How Text Summarization Works
At a high level, Text Summarization works by providing a “summary over time” for static texts or transcription texts. First, a Text Summarization API breaks the text into logical chunks or chapters, which is where the subject or topic naturally changes. Then, the API automatically generates a summary for each of these sections. At the end, you can skim each chapter summary to gain an overall understanding of the text–without having to read or listen to the entire original text or audio stream.
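The chunk-then-summarize workflow described above can be sketched in a few lines of Python. This is a toy illustration only: real APIs detect topic changes with trained models and generate genuine summaries, while this sketch uses paragraph breaks as a stand-in for chapter boundaries and a sentence's leading text as a placeholder summary.

```python
import re

def chunk_text(text):
    """Split text into chapters. Real systems detect topic changes;
    here, blank-line paragraph breaks are a stand-in."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def summarize_chunk(chunk):
    """Placeholder summarizer: return the chunk's first sentence.
    A production API would generate a real summary here."""
    return re.split(r"(?<=[.!?])\s+", chunk)[0]

def summary_over_time(text):
    """Produce one short summary per chapter, in order."""
    return [summarize_chunk(c) for c in chunk_text(text)]
```

Skimming the list this returns gives the "summary over time" view: one line per chapter, in the order the topics appear.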
Text Summarization Methods
Text Summarization methods are grouped into two main categories: Extractive and Abstractive. Extractive Text Summarization, where the model “extracts” the most important sentences from the original text, is the more traditional method. Extractive Text Summarization does not alter the original language used in the text. In contrast, Abstractive Text Summarization requires the model itself to generate the summaries, which may or may not include words and/or sentences from the original text.
Extractive Text Summarization Methods
As mentioned above, Extractive Text Summarization works by extracting and isolating key information from a pre-existing text, compressing the text into a summarized version. There are several ways to do this, including looking at relative word frequency. For example, we might assign each word in the body of text a value equal to the total number of times that word appears in the text. From there, we can score each sentence by summing the values of the individual words within it. Finally, we rank the sentences by score and return the highest-scoring ones. The intuition is that sentences full of frequently-occurring words likely touch on the text's central topics, and therefore carry information that summarizes the text as a whole.
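The frequency-scoring approach just described fits in a short function. This is a minimal sketch: it uses a naive regex tokenizer and skips the stop-word removal a real implementation would apply (otherwise words like "the" dominate the scores).

```python
import re
from collections import Counter

def frequency_summary(text, k=2):
    """Score each sentence by the total frequency of its words,
    then return the k highest-scoring sentences in document order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    # Sentence score = sum of the frequencies of its words.
    scores = {
        i: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
        for i, s in enumerate(sentences)
    }
    # Take the k best indices, then restore document order.
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:k])
    return [sentences[i] for i in top]
```

Note that the selected sentences are returned in the order they appear in the original text, which keeps the summary readable.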
Taking this intuition further, we might consider the TextRank algorithm. Google uses an algorithm called PageRank in order to rank web pages in their search engine results. TextRank implements PageRank in a specialized way for Text Summarization, where the highest “ranking” sentences in the text are the ones that describe it the best. As before, we can extract the highest ranking sentences to yield a summary of the text.
TextRank constructs a graph whose nodes are the sentences within the body of text. The edges between the nodes represent the similarity between the corresponding sentences. In TextRank, the edge weight is simply the number of tokens the two sentences share, scaled by the sentence lengths so as not to unfairly promote long sentences. From here, the PageRank algorithm is applied directly to rank the sentences by importance, and the k most important sentences are returned in the order in which they appear in the document, rather than by rank. TextRank is therefore an unsupervised method, eliminating the need to build a labeled dataset.
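A minimal TextRank sketch follows, using pure-Python power iteration in place of a graph library. The similarity here is shared tokens normalized by log sentence lengths, in the spirit of the original TextRank formulation; damping factor and iteration count are conventional choices, not tuned values.

```python
import math
import re

def textrank(sentences, k=2, d=0.85, iters=50):
    """Rank sentences with PageRank over a similarity graph,
    then return the top-k sentences in document order."""
    toks = [set(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    n = len(sentences)
    # Edge weight: shared tokens, scaled by sentence lengths.
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and toks[i] and toks[j]:
                norm = math.log(len(toks[i]) + 1) + math.log(len(toks[j]) + 1)
                w[i][j] = len(toks[i] & toks[j]) / norm
    # Power iteration: PageRank with damping factor d.
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [
            (1 - d) / n + d * sum(
                w[j][i] / sum(w[j]) * scores[j]
                for j in range(n) if sum(w[j]) > 0
            )
            for i in range(n)
        ]
    top = sorted(sorted(range(n), key=lambda i: scores[i], reverse=True)[:k])
    return [sentences[i] for i in top]
```

Sentences that share vocabulary with many others accumulate rank, while off-topic sentences (which share few tokens) stay near the baseline score and drop out of the summary.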
There are plenty of other Extractive Text Summarization methods like LexRank, which works similarly to TextRank, and Latent Semantic Analysis, which utilizes the singular value decomposition of a word-sentence matrix. Although Extractive Text Summarization has a large body of research behind it, developments in Machine Learning over the past several years have opened the door to more complicated Abstractive Text Summarization methods.
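To make the Latent Semantic Analysis idea concrete, here is a minimal sketch using numpy: build a raw-count word-sentence matrix, take its singular value decomposition, and pick the sentence that loads most heavily on each of the top latent topics. Real implementations typically use TF-IDF weighting and smarter sentence selection; this is the bare skeleton of the method.

```python
import re
import numpy as np

def lsa_summary(sentences, k=2):
    """LSA summarization: SVD of a word-sentence count matrix,
    selecting one sentence per top latent topic."""
    vocab = sorted({w for s in sentences
                    for w in re.findall(r"[a-z']+", s.lower())})
    A = np.zeros((len(vocab), len(sentences)))
    for j, s in enumerate(sentences):
        for w in re.findall(r"[a-z']+", s.lower()):
            A[vocab.index(w), j] += 1
    # Rows of V^T are latent topics; columns correspond to sentences.
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    picked = sorted({int(np.argmax(np.abs(vt[i])))
                     for i in range(min(k, len(vt)))})
    return [sentences[i] for i in picked]
```

Because two topics can be dominated by the same sentence, the function may return fewer than k sentences; the set-then-sort step deduplicates and restores document order.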
Abstractive Text Summarization Methods
The huge strides made across the Machine Learning world over the past decade or so have unsurprisingly also made their way into the subfield of NLP. The attention mechanism found its way into Abstractive Text Summarization in the mid-2010s, and the development of Transformers in particular has opened the door to impressive Text Summarization capabilities. Abstractive Text Summarization models built on Transformers generally pose the problem as a sequence-to-sequence task, in which a body of text is fed into the model and a summary is generated from it.
Beyond pure Transformer models, there are even GAN-based approaches to Text Summarization, which train a generator to create summaries and a discriminator to differentiate between real and generated summaries. Some methods even combine extractive and abstractive elements into one framework. Text Summarization remains one of the most challenging tasks in NLP, so new methods are constantly being developed to address this decades-old problem.
Best APIs for Text Summarization
Now that we’ve discussed what Text Summarization for NLP is and how it works, we’ll compare some of the best Text Summarization APIs to utilize today. Note that some of these APIs support Text Summarization for pre-existing bodies of text while others perform Text Summarization on top of audio or video stream transcriptions.
1. AssemblyAI’s Auto Chapters API
AssemblyAI offers highly-accurate Speech-to-Text APIs and Audio Intelligence APIs. Its Auto Chapters API, part of its Audio Intelligence suite of APIs, applies Text Summarization on top of the data from an audio or video stream, and supplies both a one-paragraph summary and a single-sentence headline for each chapter. The API is used by top players in podcasting, telephony, virtual meeting platforms, conversation intelligence platforms, and more. Use of AssemblyAI's Audio Intelligence APIs, including Auto Chapters, starts at $0.000583 per second, in addition to core transcription pricing.
Here’s an example of AssemblyAI’s Auto Chapters API in action using this seven minute YouTube video discussing Bias and Variance in Machine Learning.
AssemblyAI's Auto Chapters API Results:
Bias and Variance Explained
Bias and variants are two of the most important topics when it comes to data science. This video is brought to you by AssemblyAI and is part of our Deep Learning Explained series. AssemblyAI is a company that is making a state of the art speech to text API. You can grab a free API token using the link in the description.
Models with High Bias
Bias is the amount of assumptions your model makes about the problem it is trying to solve. Underfitting is when a model is underfitting. Fitting variants show us the sensitivity of the model on the training data. High variance means overfitting models with high flexibility tend to have high variance like decision trees.
Solutions for Model Overfitting
When a model is underfitting or overfitting, the first thing to do is to train it more or increase the complexity of the model. To deal with high variance you need to decrease the complexity or introduce more data to the training. Regularization on the other hand, reduces the complexity and lowers the variance of a model.
Let’s See You Next Week
Thanks for watching the video. If you liked it, give us a like and subscribe. We would love to hear about your questions or comments in the comments section below.
As you can see, the AssemblyAI Auto Chapters API split the above video into four main sections, with summarized text for each section included beneath the chapter headings.
2. plnia’s Text Summarization API
The plnia Text Summarization API generates summaries of static documents or other pre-existing bodies of text. In addition to Text Summarization, plnia also offers Sentiment Analysis, Keyword Extraction, Abusive Language Checks, and more. Developers wishing to test plnia can sign up for a 10-day free trial; plans that include Text Summarization then start at $19 per month.
3. Microsoft Azure Text Summarization
As part of its Text Analytics suite, Azure’s Text Summarization API offers extractive summarization for articles, papers, or documents. Requirements to get started include an Azure subscription and the Visual Studio IDE. Pricing to use the API is pay-as-you-go, though prices vary depending on usage and other desired features.
4. MeaningCloud’s Automatic Summarization
MeaningCloud’s Automatic Summarization API lets users summarize the meaning of any document by extracting the most relevant sentences and using these to build a synopsis. The API is multilingual, so users can use it regardless of the language the text is in. Those looking to test the API must first sign up for a free developer account; pricing then ranges from $0 to $999+/month, depending on usage.
5. NLP Cloud Summarization API
NLP Cloud offers several text understanding and NLP APIs, including Text Summarization, and also supports fine-tuning and deploying community AI models to further boost accuracy. You can also build your own custom models and train and deploy them into production. Pricing ranges from $0 to $499/month, depending on usage.
Text Summarization Tutorials
Want to try Text Summarization yourself? This video tutorial walks you through how to apply Text Summarization to podcasts.
Text Summarization Use Cases
Text Summarization is used across a wide range of industries and applications.
- Creating chapters for YouTube videos or educational online courses via video editing platforms.
- Summarizing and sharing key parts of corporate meetings to reduce the need for mass attendance.
- Automatically identifying key parts of calls and flagging sections for follow-up via revenue intelligence platforms.
- Summarizing large analytical documents to ease readability and understanding.
- Segmenting podcasts and automatically providing a Table of Contents for listeners.