The AssemblyAI Content Moderation model detects sensitive content in audio files.
In the Identifying hate speech in audio or video files guide, the client submits an audio file and configures the API request to analyze the content for any sensitive material. The model then pinpoints where sensitive topics are discussed and reports how severely they are discussed.
You can explore the full JSON response here:
Understanding the response
The JSON object above contains all information about the transcription. Depending on which Models are used to analyze the audio, the attributes of this object will vary. For example, in the quickstart above we did not enable Summarization, which is reflected by the `summarization: false` key-value pair in the JSON above. Had we activated Summarization, then the `summary`, `summary_type`, and `summary_model` key values would contain the file summary (and additional details) rather than the current null and default values.
To access the Content Moderation information, we use the `content_safety` and `content_safety_labels` keys of the response object.
The reference table below lists all relevant attributes along with their descriptions, where we've called the JSON response object `results`. Object attributes are accessed via dot notation, and arbitrary array elements are denoted with `[i]`. For example, `results.words[i].text` refers to the `text` attribute of the i-th element of the `words` array in the JSON response.
| Attribute | Type | Description |
| --- | --- | --- |
| `results.content_safety` | boolean | Whether Content Moderation was enabled in the transcription request |
| `results.content_safety_labels` | object | An object containing all results of the Content Moderation model |
| `results.content_safety_labels.status` | string | Is either `success`, or `unavailable` in the rare case that the Content Moderation model failed |
| `results.content_safety_labels.results` | array | An array of objects, one for each section in the audio file that the Content Moderation model flagged |
| `results.content_safety_labels.results[i].text` | string | The transcript of the i-th section flagged by the Content Moderation model |
| `results.content_safety_labels.results[i].labels` | array | An array of objects, one per sensitive topic that was detected in the i-th section |
| `results.content_safety_labels.results[i].labels[j].label` | string | The label of the sensitive topic |
| `results.content_safety_labels.results[i].labels[j].confidence` | number | The confidence score for the j-th topic being discussed in the i-th section, from 0 to 1 |
| `results.content_safety_labels.results[i].labels[j].severity` | number | How severely the j-th topic is discussed in the i-th section, from 0 to 1 |
| `results.content_safety_labels.results[i].sentences_idx_start` | number | The sentence index at which the i-th section begins |
| `results.content_safety_labels.results[i].sentences_idx_end` | number | The sentence index at which the i-th section ends |
| `results.content_safety_labels.results[i].timestamp` | object | Timestamp information for the i-th section |
| `results.content_safety_labels.results[i].timestamp.start` | number | The time, in milliseconds, at which the i-th section begins |
| `results.content_safety_labels.results[i].timestamp.end` | number | The time, in milliseconds, at which the i-th section ends |
| `results.content_safety_labels.summary` | object | A summary of the Content Moderation confidence results for the entire audio file |
| `results.content_safety_labels.summary.topic` | number | A confidence score for the presence of the sensitive topic "topic" across the entire audio file |
| `results.content_safety_labels.severity_score_summary` | object | A summary of the Content Moderation severity results for the entire audio file |
| `results.content_safety_labels.severity_score_summary.topic` | object | A distribution across the values "low", "medium", and "high" for the severity of the presence of "topic" in the audio file |
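As a concrete illustration, here's a minimal Python sketch of reading these attributes from a parsed response. The sample dictionary below is a hypothetical, heavily trimmed excerpt for illustration only; a real response contains many more attributes, but the attribute paths follow the table above.

```python
# Hypothetical, trimmed response excerpt for illustration only.
# In practice, `results` would come from e.g. response.json().
results = {
    "content_safety": True,
    "content_safety_labels": {
        "status": "success",
        "results": [],
        "summary": {"hate_speech": 0.93},
        "severity_score_summary": {
            "hate_speech": {"low": 0.1, "medium": 0.3, "high": 0.6}
        },
    },
}

if results["content_safety"]:  # Content Moderation was enabled for this request
    labels = results["content_safety_labels"]
    print(f"Model status: {labels['status']}")

    # File-level confidence score per detected topic
    for topic, confidence in labels["summary"].items():
        print(f"{topic}: confidence {confidence:.2f}")

    # File-level severity distribution (low/medium/high) per detected topic
    for topic, distribution in labels["severity_score_summary"].items():
        print(f"{topic}: severity distribution {distribution}")
```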
All labels supported by the model
Adjusting the confidence threshold
The confidence threshold for Content Moderation results is set to 50% by default, meaning that any label with a confidence score equal to or higher than 50% is returned. If you want to set a higher or lower threshold, you can include the `content_safety_confidence` parameter in your request. This parameter accepts an integer value between 25 and 100, allowing you to fine-tune the threshold to your specific needs.
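For example, a request that raises the threshold to 75% might look like the following sketch. The API key and audio URL are placeholders, and the step of polling for the completed transcript is omitted.

```python
import requests

headers = {"authorization": "<YOUR_API_KEY>"}  # placeholder API key
payload = {
    "audio_url": "https://example.com/audio.mp3",  # placeholder audio file
    "content_safety": True,
    # Only return labels with a confidence score of 75% or higher
    "content_safety_confidence": 75,
}

response = requests.post(
    "https://api.assemblyai.com/v2/transcript", json=payload, headers=headers
)
print(response.json()["id"])  # transcript ID to poll for the completed result
```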
Frequently asked questions

Why is the Content Moderation model not detecting sensitive content in my audio file?

There could be a few reasons for this. First, make sure that the audio file contains speech, and not just background noise or music. Additionally, the model may not have been trained on the specific type of sensitive content you're looking for. If you believe the model should be able to detect the content but it's not, you can reach out to AssemblyAI's support team for assistance.
Why is the Content Moderation model flagging content that isn't actually sensitive?

The model may occasionally flag content as sensitive when it isn't actually problematic. This can happen if the model isn't trained on the specific context or nuances of the language being used. In these cases, you can manually review the flagged content and determine whether it's actually sensitive. If you believe the model is consistently flagging content incorrectly, you can contact AssemblyAI's support team to report the issue.
How do I know which parts of the audio file contain sensitive content?

The Content Moderation model provides segment-level results that pinpoint where in the audio the sensitive content was discussed, as well as the degree to which it was discussed. You can access this information in the `results` key of the API response. Each result in the list contains a `text` key that shows the flagged content, and a `labels` key that shows the detected sensitive topics along with their confidence and severity scores.
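To make that concrete, here's a brief sketch that walks each flagged section. The sample data is hypothetical and mirrors the structure in the reference table above.

```python
# Hypothetical flagged section, mirroring the reference table's structure
results = {
    "content_safety_labels": {
        "results": [
            {
                "text": "...flagged transcript excerpt...",
                "labels": [
                    {"label": "hate_speech", "confidence": 0.97, "severity": 0.82}
                ],
                "timestamp": {"start": 250800, "end": 262760},
            }
        ]
    }
}

for section in results["content_safety_labels"]["results"]:
    start_s = section["timestamp"]["start"] / 1000  # milliseconds to seconds
    end_s = section["timestamp"]["end"] / 1000
    print(f"Flagged section from {start_s:.1f}s to {end_s:.1f}s")
    print(f"  Text: {section['text']}")
    for label in section["labels"]:
        # severity may be null (None) for some labels, so no format spec is used
        print(f"  {label['label']}: confidence={label['confidence']:.2f}, "
              f"severity={label['severity']}")
```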
Can the Content Moderation model be used in real time?

The model is designed to process batches of segments in significantly less than 1 second, making it suitable for real-time applications. However, keep in mind that the actual processing time depends on the length of the audio file and the number of segments it's divided into. Additionally, the model may occasionally require extra time to process particularly complex or long segments.
Why am I receiving an error when I use the Content Moderation model?

If you receive an error message, it may be due to an issue with your request format or parameters. Double-check that your request includes the correct `audio_url` parameter. If you continue to experience issues, you can reach out to AssemblyAI's support team for assistance.