Introducing Sentiment Analysis - Detect Sentiments in Spoken Audio

We are very excited to announce the release of our latest feature, Sentiment Analysis, a much-awaited addition to our Speech-to-Text API. In this post, we’ll examine what the Sentiment Analysis feature does, how it works, how to use it, and some of the top use cases.

What is Sentiment Analysis?

Sentiment Analysis is used to classify content as positive, negative, or neutral. It is also sometimes referred to as Sentiment Mining because it works to identify and extract, or mine, these sentiments in your source material.

A great way to visualize this is to think about a product review on Amazon or a restaurant review on Yelp. Someone might write, “I love this lightbulb! It’s the best I’ve ever used!” The corresponding Sentiment Analysis classification would be positive. Or someone might write, “The food was undercooked and the service was terrible,” in a restaurant review. The corresponding Sentiment Analysis classification here would be negative.

How Does AssemblyAI’s Sentiment Analysis Feature Work?

When Sentiment Analysis is enabled (more on that below), the API will return the sentiment for each sentence spoken in your audio file, along with the confidence the API had in the sentiment label it applied. At this time, the sentiments the API classifies are "positive", "negative", and "neutral".

Let’s look at how this works in the example below. To enable the Sentiment Analysis feature for your transcript, add the sentiment_analysis parameter to your POST request to the /v2/transcript endpoint and set it to true:

curl --request POST \
    --url https://api.assemblyai.com/v2/transcript \
    --header 'authorization: YOUR-API-TOKEN' \
    --header 'content-type: application/json' \
    --data '{"audio_url": "https://app.assemblyai.com/static/media/phone_demo_clip_1.wav", "sentiment_analysis": true}'

Once your transcript is complete, make a GET request to /v2/transcript/<id>. The JSON response will include an additional key called sentiment_analysis_results containing your results.
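For example, following the same curl pattern as the POST request above (with YOUR-TRANSCRIPT-ID standing in for the id returned by the POST response):

```shell
curl --request GET \
    --url https://api.assemblyai.com/v2/transcript/YOUR-TRANSCRIPT-ID \
    --header 'authorization: YOUR-API-TOKEN'
```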

The JSON response will look similar to the results below:

{
    "id": "5551722-f677-48a6-9287-39c0aafd9ac1",
    "status": "completed",
    "text": "...",
    "words": [...],
    "sentiment_analysis_results": [
        {
            "text": "Speaking of being in a shadow, didn't you feel yourself in the shadow of those Hollywood A-listers?",
            "start": 106090,
            "end": 112806,
            "sentiment": "NEUTRAL",
            "confidence": 0.5466495752334595,
            "speaker": null
        },
        {
            "text": "It's my privilege to work with these actors.",
            "start": 114790,
            "end": 116994,
            "sentiment": "POSITIVE",
            "confidence": 0.9224883317947388,
            "speaker": null
        },
        ...
    ]
}


As you can see in the above response, the "sentiment_analysis_results" key contains the sentiment analysis results for the audio file you transcribed. The API will return the following for each sentence in the transcription text:

  • text: The text for the sentence being analyzed
  • start: Starting timestamp (ms) of the text in the transcript
  • end: Ending timestamp (ms) of the text in the transcript
  • sentiment: The detected sentiment - "POSITIVE", "NEGATIVE", or "NEUTRAL"
  • confidence: Confidence score for the detected sentiment, between 0 and 1.0
  • speaker: If using the dual_channel or speaker_labels options in your API calls, then the speaker who spoke this sentence

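To give a concrete picture of how these fields might be consumed downstream, here is a minimal Python sketch. The response dict is a hand-built stand-in based on the example response above, and summarize_sentiments is a hypothetical helper, not part of the API:

```python
from collections import Counter

# Hand-built stand-in for a completed transcript response (abridged).
response = {
    "sentiment_analysis_results": [
        {"text": "Speaking of being in a shadow, didn't you feel yourself "
                 "in the shadow of those Hollywood A-listers?",
         "start": 106090, "end": 112806,
         "sentiment": "NEUTRAL", "confidence": 0.5466, "speaker": None},
        {"text": "It's my privilege to work with these actors.",
         "start": 114790, "end": 116994,
         "sentiment": "POSITIVE", "confidence": 0.9225, "speaker": None},
    ]
}

def summarize_sentiments(results, min_confidence=0.7):
    """Count each sentiment label and flag sentences with low-confidence labels."""
    counts = Counter(r["sentiment"] for r in results)
    uncertain = [r["text"] for r in results if r["confidence"] < min_confidence]
    return counts, uncertain

counts, uncertain = summarize_sentiments(response["sentiment_analysis_results"])
print(counts)     # tally of sentiment labels across sentences
print(uncertain)  # sentences whose label fell below the confidence threshold
```

The confidence score makes it easy to apply a threshold like this, so low-confidence labels can be routed for human review rather than trusted blindly.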
You can find more information about how to use the Sentiment Analysis feature in our docs.

Top Use Cases for Sentiment Analysis

The Sentiment Analysis feature can help you extract even more useful information from your audio and video files. Some ways our customers currently use Sentiment Analysis include:

  • To determine the sentiment of customer-agent conversations, such as call center calls: analyzing customer sentiment by agent, topic, location, and more, and identifying agent sentiment to facilitate better training and customer service.
  • To extract sentiments in virtual meetings by speaker, topic, location, meeting time, and more, then feed these sentiments into analytics that inform product development, marketing, and customer service.
  • To gauge the sentiment of news coverage, analyzing anchor or caller sentiment by topic, political affiliation, time of day, and more.