Top AI models for conversation intelligence

This article examines how AI Speech-to-Text, Audio Intelligence, and LLMs help power conversation intelligence platforms.

To drive lead conversion and customer engagement, sales, marketing, customer success, and human resources teams must be able to record, transcribe, search across, and make sense of all of their voice conversations. Doing so increases visibility, drives process and behavior changes, and delivers bottom-line impact.

Many teams are turning to conversation intelligence to help them achieve these goals. 

In this article, we cover what exactly conversation intelligence is and why it's important before exploring the top use cases for AI models in conversation intelligence.

What is conversation intelligence?

Conversation intelligence (sometimes referred to as conversational intelligence AI) is the use of Artificial Intelligence (AI) to infer valuable meaning from conversational data. 

For example, lead intelligence software company CallRail uses conversation intelligence to auto-score and auto-categorize key sections of conversations processed by its platform to help its customers more intelligently and efficiently analyze this call data. This feature leads to better lead tracking and improved relationship building for its customers.

Conversation intelligence platforms include tools that infer intent or sentiment, surface named entities, sort conversations into shorter chapters or summaries, identify common topics or themes, and more.

Why is conversation intelligence important?

Conversation intelligence is important because it helps companies record and analyze all desired customer interactions, replacing tedious, error-prone manual processes. Conversation intelligence is especially important for companies that process enormous amounts of customer data. 

After a call is transcribed via AI speech-to-text, companies can leverage insights from this call data to automate personalized responses, train customer service agents on how to respond to customer concerns, facilitate more productive sales calls, and more. 

For example, a sales company might integrate conversation intelligence to automatically identify risks or opportunities or to coach sales representatives on best practices. A marketing agency might integrate conversation intelligence to gain visibility across all touchpoints, fine-tune keywords, and identify industry trends. This analysis can take place across a single call or across aggregate call data.

Conversation intelligence platforms generate high ROI for users by providing intelligent, actionable insights based on customer data. 

Top AI models for conversation intelligence

Conversation intelligence platforms leverage Speech AI models to build suites of high-value tools for their customers. 

First, let’s examine what Speech AI is before exploring the Speech AI models most relevant to conversation intelligence.

Speech AI typically refers to three groups of AI models that help users understand speech or spoken data: Automatic Speech Recognition (ASR), Audio Intelligence, and Large Language Models (LLMs).

Automatic Speech Recognition, or ASR, models are used to transcribe human speech into readable text. ASR models can transcribe speech either synchronously, with the aid of real-time transcription models, or asynchronously. Top ASR models, like Conformer-2, are informed by state-of-the-art AI research and trained on enormous datasets to achieve near-human accuracy.
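
For example, here is a minimal sketch of asynchronous transcription using the AssemblyAI Python SDK (the API key and audio URL are placeholders):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

# Transcribe a recorded call asynchronously
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.com/sales-call.mp3")  # placeholder URL

print(transcript.text)  # the full transcription text
```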

Audio Intelligence models help users unlock critical information from their transcriptions. Audio Intelligence models can include summarization, topic detection, sentiment analysis, content moderation, and more. 

More recently, conversation intelligence companies have also started building high-quality generative AI tools with the help of Large Language Models, or LLMs. LLMs are machine learning models that understand, generate, and interact with human language. Frameworks for applying LLMs to spoken data, like LeMUR, make it easier for product teams to build these high-powered generative AI tools by enabling them to use a variety of LLM capabilities for their needs.
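
As a sketch of what this looks like in practice, the AssemblyAI Python SDK exposes LeMUR on a transcript object (here, the transcript from the earlier snippet; the prompt is illustrative):

```python
# Apply an LLM to spoken data via LeMUR
result = transcript.lemur.task(
    "List the customer's top three concerns raised in this call."
)
print(result.response)
```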

See also: How to extract insights from customer conversations

Use cases for Speech AI and conversation intelligence

Here are some of the top use cases for building conversation intelligence and generative AI tools with Speech AI models like ASR, Audio Intelligence, and LLMs: 

1. Automate meeting transcription and analysis

Conversation intelligence tools built with Speech AI can automate the time-consuming process of accurately capturing customer meetings, filling out corresponding CRM data, and making meaningful connections across conversations.

Choosing Speech-to-Text models with a low Word Error Rate (WER) that have been trained on large, diverse datasets will help ensure high transcription accuracy regardless of speech patterns and technical jargon. 

Accurate transcription is only the first step. Next, you need to build additional conversation intelligence tools on top of the transcription data to identify speakers, automate CRM data entry, surface important sections of calls, and more.

Several Audio Intelligence models, such as speaker diarization, summarization, topic detection, and sentiment analysis, as well as LLMs, can be used to build tools that achieve this more sophisticated analysis.

Speaker diarization models apply speaker labels to transcription text and help answer the “who spoke when?” question. This lets companies automatically sort speech utterances by agents or customers to ease readability and analysis.
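
A minimal sketch of speaker diarization with the AssemblyAI Python SDK, again with a placeholder audio URL:

```python
# Enable speaker diarization on the transcription
config = aai.TranscriptionConfig(speaker_labels=True)
transcript = aai.Transcriber().transcribe("https://example.com/sales-call.mp3", config=config)

# Print each utterance with its speaker label
for utterance in transcript.utterances:
    print(f"Speaker {utterance.speaker}: {utterance.text}")
```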

Summarization models intelligently condense transcribed spoken data into accurate, usable snippets of text. Tools built with summarization models make conversations easier to skim and navigate, and help surface common themes and topics quickly for further analysis.

LeMUR, a framework for applying LLMs to spoken data, can also be used to summarize conversational data. The framework includes a Custom Summary endpoint that lets users customize the summary format to meet their specific needs, instead of just receiving a generic summary of the entire conversation.
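
For instance, here is a minimal sketch of a custom-formatted summary using the SDK's LeMUR interface (the context and format strings are illustrative):

```python
# Request a summary in a specific format instead of a generic one
result = transcript.lemur.summarize(
    context="A sales call between a rep and a prospect",
    answer_format="Three bullet points followed by a one-sentence next step",
)
print(result.response)
```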

Topic detection models identify and label topics in transcription text, which help users better understand context and identify patterns. Tools built with topic detection models also help users identify conversational trends that lead to questions, objections, positive statements, negative statements, and more.
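
A minimal sketch of topic detection (IAB categories) with the SDK:

```python
# Enable topic detection on the transcription
config = aai.TranscriptionConfig(iab_categories=True)
transcript = aai.Transcriber().transcribe("https://example.com/sales-call.mp3", config=config)

# Topics detected across the conversation, with relevance scores
for topic, relevance in transcript.iab_categories.summary.items():
    print(f"{topic}: {relevance:.2f}")
```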

Sentiment Analysis models identify positive, negative, or neutral sentiments in a transcription. Sentiments are identified and sorted by each speech segment and/or speaker. For example, the speech segment, “This product has been working really well for me so far,” would be identified as a positive sentiment.

Sentiment analysis models can also help identify changes in tone across a transcription: where did a call shift from positive to negative, or from negative to positive? Are more positive or negative sentiments associated with certain topics? With certain speakers? This data can then be used to determine patterns and identify the appropriate actions to take.
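
A minimal sketch that pairs sentiment analysis with speaker labels in the SDK:

```python
# Enable sentiment analysis alongside speaker diarization
config = aai.TranscriptionConfig(sentiment_analysis=True, speaker_labels=True)
transcript = aai.Transcriber().transcribe("https://example.com/sales-call.mp3", config=config)

# Each speech segment is labeled POSITIVE, NEGATIVE, or NEUTRAL
for result in transcript.sentiment_analysis:
    print(f"Speaker {result.speaker} [{result.sentiment}]: {result.text}")
```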

2. Search and index voice conversations at scale

In addition to automating transcription and analysis, conversation intelligence platforms can help users make voice conversations both searchable and indexable. 

Audio Intelligence models and LLMs can be used to build tools that let users review both individual and aggregate call data in minutes, with search across action items and automatic highlighting of key sections of conversations.

As discussed in the previous section, Summarization models can be a powerful tool to help companies quickly make sense of a lengthy conversation. Some APIs also offer the option to detect important words and phrases in a transcription.

For example, the AssemblyAI API detected “synthetic happiness” and “natural happiness” in this transcription text: 

We smirk because we believe that synthetic happiness is not of the same quality as what we might call natural happiness. What are these terms? Natural happiness is what we get when we get what we wanted. And synthetic happiness is what we make when we don't get what we wanted. And in our society…

Tools built with this model allow users to quickly search for these common words/phrases and identify trends for further analysis. 
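
In the AssemblyAI SDK, this corresponds to the auto highlights (key phrases) model; a minimal sketch:

```python
# Detect important words and phrases in the transcription
config = aai.TranscriptionConfig(auto_highlights=True)
transcript = aai.Transcriber().transcribe("https://example.com/talk.mp3", config=config)

# Each key phrase comes back with an occurrence count and relevance rank
for phrase in transcript.auto_highlights.results:
    print(f"{phrase.text} (count={phrase.count}, rank={phrase.rank:.2f})")
```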

Entity Detection is another Audio Intelligence model that can help make transcriptions more easily searchable.

Entity Detection APIs first identify and then classify specified entities in a transcription. For example, “San Diego” is an entity that would be classified as a “location.” Common entities that can be detected include dates, languages, nationalities, number strings (like a phone number), personal names, company names, locations, addresses, and more. Companies can use this information to search for these entities and identify common trends or other actionable insights.
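
A minimal sketch of entity detection with the SDK:

```python
# Enable entity detection on the transcription
config = aai.TranscriptionConfig(entity_detection=True)
transcript = aai.Transcriber().transcribe("https://example.com/sales-call.mp3", config=config)

# Each detected entity comes back with its text and a type label
for entity in transcript.entities:
    print(f"{entity.entity_type}: {entity.text}")
```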

LLMs are a powerful new way to build tools for searching and indexing conversational data as well. For example, LLMs can be used to build a question-and-answer feature where users can ask specific questions about call data and receive an intelligent response. Additional context and parameters can be added to the prompt to increase the accuracy of the response as well.
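
A minimal sketch of such a question-and-answer feature with LeMUR, assuming an existing transcript object (the questions are illustrative):

```python
# Ask targeted questions about the call via LeMUR
questions = [
    aai.LemurQuestion(question="What objections did the prospect raise?"),
    aai.LemurQuestion(question="Was a follow-up meeting scheduled?", answer_format="yes/no"),
]
result = transcript.lemur.question(questions)

for qa in result.response:
    print(f"Q: {qa.question}")
    print(f"A: {qa.answer}")
```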

Several of the other Audio Intelligence models mentioned previously can help here as well: search by speaker using Speaker Diarization, search by topic using Topic Detection, or search by sentiment or conversational tone changes using Sentiment Analysis.

3. Surface actionable insights

Conversation intelligence tools can also help users surface actionable insights that directly increase customer engagement, drive process and behavior changes, and deliver faster ROI.

Audio Intelligence models don’t have to stand alone. When combined, Audio Intelligence models help companies find valuable structure and patterns in previously unstructured data. This structure provides important visibility into rep activity and customer and prospect engagement. This helps teams generate data-backed goals and actions while also staying in sync.

For example, Topic and Entity Detection, combined with Sentiment Analysis, can be used to build tools that help companies track how customers react to a particular product, pitch, or pricing change. Detecting Important Words and Phrases, combined with Topic Detection, can be used to build tools that help companies identify common language being used about products or services. Entity Detection can also be used to surface when a prospect mentions a certain competitor, while Sentiment Analysis can inform opinions around this mention.
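
As an illustrative sketch, entity detection and sentiment analysis results from the same transcript can be cross-referenced using their timestamps (the competitor name is a placeholder):

```python
# Enable both models on one transcription
config = aai.TranscriptionConfig(entity_detection=True, sentiment_analysis=True)
transcript = aai.Transcriber().transcribe("https://example.com/sales-call.mp3", config=config)

# Surface the sentiment of any segment that mentions a competitor by name
for entity in transcript.entities:
    if entity.text == "Acme Corp":  # placeholder competitor name
        for segment in transcript.sentiment_analysis:
            if segment.start <= entity.start <= segment.end:
                print(f"[{segment.sentiment}] {segment.text}")
```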

LLMs can also be used to automatically generate lists of action items following a virtual meeting, suggest a follow-up email, or define other helpful tasks and prompts.
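
A minimal sketch with the SDK's LeMUR action items helper, assuming a transcript of the meeting (the context and format strings are illustrative):

```python
# Generate action items from the meeting transcript via LeMUR
result = transcript.lemur.action_items(
    context="A weekly sales team sync",
    answer_format="Bullet points grouped by owner",
)
print(result.response)
```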

AI-powered conversation intelligence platforms

Sales, marketing, customer success, and human resources teams must be equipped with powerful tools to boost lead conversion and customer engagement in a competitive market.

Incorporating AI models like Speech-to-Text transcription, Audio Intelligence, and LLMs into conversation intelligence platforms provides these teams with the advanced capabilities, analytics, and strategic insights needed to accomplish these goals.