
Key Phrases

The Key Phrases model can accurately identify significant words and phrases in your transcription, enabling you to extract the most pertinent concepts or highlights from your audio or video file.

Quickstart

In the Analyzing highlights of call center recordings guide, the client uploads an audio file and configures the API request to use key phrase extraction by including the auto_highlights parameter.
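A minimal sketch of such a request, assuming the official assemblyai Python SDK and a placeholder API key and audio URL (the guide's full code additionally handles local file uploads and error handling):

```python
import assemblyai as aai

# Placeholder API key; replace with your own.
aai.settings.api_key = "YOUR_API_KEY"

# Enable the Key Phrases model via the auto_highlights parameter.
config = aai.TranscriptionConfig(auto_highlights=True)

# Transcribe a (placeholder) call center recording.
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
    "https://example.com/call-center-recording.mp3", config=config
)
```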

You can explore the full JSON response here.

You can run this code snippet in Colab here, or view the full source code here.

Understanding the response

The JSON object above contains all information about the transcription. Depending on which Models are used to analyze the audio, the attributes of this object will vary. For example, in the quickstart above we did not enable Summarization, which is reflected by the summarization: false key-value pair in the JSON. Had we enabled Summarization, the summary, summary_type, and summary_model keys would contain the file summary (and additional details) rather than their current null values.

To access the Key Phrases information, we use the auto_highlights and auto_highlights_result keys:
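For example, assuming the JSON response has been parsed into a Python dictionary named results (a name used here only for illustration), the two keys can be read directly:

```python
# `results` is the transcription response parsed into a Python dict.
print(results["auto_highlights"])   # True when Key Phrases was enabled

highlights = results["auto_highlights_result"]
print(highlights["status"])         # "success", or "unavailable" if the model failed
```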

The reference table below lists all relevant attributes along with their descriptions, where we've called the JSON response object results. Object attributes are accessed via dot notation, and arbitrary array elements are denoted with [i]. For example, results.words[i].text refers to the text attribute of the i-th element of the words array in the JSON results object.

results.auto_highlights (boolean): Whether Key Phrases was enabled in the transcription request
results.auto_highlights_result (object): The results of the Key Phrases model
results.auto_highlights_result.status (string): Either success, or unavailable in the rare case that the Key Phrases model failed
results.auto_highlights_result.results (array): A temporally-sequential array of key phrases
results.auto_highlights_result.results[i].count (number): The total number of times the i-th key phrase appears in the audio file
results.auto_highlights_result.results[i].rank (number): The relevancy of the i-th key phrase to the overall audio file; a greater number means more relevant
results.auto_highlights_result.results[i].text (string): The text of the key phrase
results.auto_highlights_result.results[i].timestamps[j].start (number): The start time of the j-th appearance of the i-th key phrase, in milliseconds
results.auto_highlights_result.results[i].timestamps[j].end (number): The end time of the j-th appearance of the i-th key phrase, in milliseconds
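As a sketch of how these attributes map onto code, the loop below again assumes the response has been parsed into a Python dict named results, and prints each key phrase with its count, rank, and occurrence times in milliseconds:

```python
# Iterate over every detected key phrase and its occurrences.
for phrase in results["auto_highlights_result"]["results"]:
    spans = ", ".join(
        f"{ts['start']}-{ts['end']} ms" for ts in phrase["timestamps"]
    )
    print(f"{phrase['text']!r}: count={phrase['count']}, rank={phrase['rank']}, at {spans}")
```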

Frequently Asked Questions

How does the Key Phrases model identify important phrases in my transcription?

The Key Phrases model uses natural language processing and machine learning algorithms to analyze the frequency and distribution of words and phrases in your transcription. The algorithm identifies key phrases based on their relevancy score, which takes into account factors such as the number of times a phrase occurs, the distance between occurrences, and the overall length of the transcription.
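The exact scoring is internal to the API, but as a rough, hypothetical illustration of how such factors could combine, a naive score might weight occurrence count by how much of the file the occurrences span. The function below (its name, the audio_duration_ms parameter, and the scoring formula are all assumptions for illustration, not AssemblyAI's implementation) operates on the timestamps array documented above:

```python
def naive_relevancy(timestamps, audio_duration_ms):
    """Toy relevancy score for illustration only: phrases that occur more
    often and are spread across more of the file score higher. This is NOT
    AssemblyAI's production algorithm."""
    count = len(timestamps)
    if count == 0 or audio_duration_ms <= 0:
        return 0.0
    # Fraction of the file spanned between the first and last occurrence.
    coverage = (timestamps[-1]["end"] - timestamps[0]["start"]) / audio_duration_ms
    return count * (1 + coverage)
```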

What is the difference between the Key Phrases model and the Topic Detection model?

The Key Phrases model is designed to identify important phrases and words in your transcription, whereas the Topic Detection model is designed to categorize your transcription into predefined topics. While both models use natural language processing and machine learning algorithms, they have different goals and approaches to analyzing your text.

Can the Key Phrases model handle misspelled or unrecognized words?

Yes, the Key Phrases model can handle misspelled or unrecognized words to some extent. However, the accuracy of the detection may depend on the severity of the misspelling or the obscurity of the word. It's recommended to provide high-quality, relevant audio files with accurate transcriptions for the best results.

What are some limitations of the Key Phrases model?

Some limitations of the Key Phrases model include its limited understanding of context, which may lead to inaccuracies in identifying the most important phrases in certain cases, such as text with heavy use of jargon or idioms. Additionally, the model assigns higher scores to words or phrases that occur more frequently in the text, which may lead to an over-representation of common words and phrases that may not be as important in the context of the text. Finally, the Key Phrases model is a general-purpose algorithm that can't be easily customized or fine-tuned for specific domains, meaning it may not perform as well for specialized texts where certain keywords or concepts may be more important than others.

How can I optimize the performance of the Key Phrases model?

To optimize the performance of the Key Phrases model, it's recommended to provide high-quality, relevant audio files with accurate transcriptions, to review and adjust the model's configuration parameters, such as the confidence threshold for key phrase detection, and to refer to the list of identified key phrases to guide the analysis. It may also be helpful to consider adding additional training data to the model or consulting with AssemblyAI support for further assistance.