Content Moderation
The AssemblyAI Content Moderation model detects sensitive content in audio files.
Quickstart
In the Identifying hate speech in audio or video files guide, the client submits an audio file and configures the API request to analyze the content for any sensitive material. The model then pinpoints the sensitive discussions and reports how severely each topic was discussed.
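The full quickstart code lives in the linked Colab notebook below; as an illustration, here is a minimal sketch of such a request against the v2 REST API, assuming a placeholder API key and a hypothetical, publicly reachable audio URL:

```python
import time

import requests

API_KEY = "YOUR_API_KEY"  # assumption: your AssemblyAI API key
AUDIO_URL = "https://example.com/audio.mp3"  # assumption: a publicly reachable audio file

headers = {"authorization": API_KEY}

# Submit the transcription request with Content Moderation enabled.
response = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    json={"audio_url": AUDIO_URL, "content_safety": True},
)
transcript_id = response.json()["id"]

# Poll until the transcription either completes or errors out.
while True:
    results = requests.get(
        f"https://api.assemblyai.com/v2/transcript/{transcript_id}", headers=headers
    ).json()
    if results["status"] in ("completed", "error"):
        break
    time.sleep(3)
```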
You can explore the full JSON response here.
You can run this code snippet in Colab here, or you can view the full source code here.
Understanding the response
The JSON object above contains all information about the transcription. Depending on which models are used to analyze the audio, the attributes of this object will vary. For example, in the quickstart above we did not enable Summarization, which is reflected by the summarization: false key-value pair in the JSON above. Had we enabled Summarization, then the summary, summary_type, and summary_model key values would contain the file summary (and additional details) rather than the current null values.
To access the Content Moderation information, we use the content_safety and content_safety_labels keys:
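For instance, assuming the completed response JSON from the quickstart sketch above is stored in a variable named results, the two keys can be read like this:

```python
# "results" holds the completed transcript JSON from the quickstart sketch above.
print(results["content_safety"])           # True when Content Moderation was enabled
content_safety_labels = results["content_safety_labels"]
print(content_safety_labels["status"])     # "success", or "unavailable" if the model failed
```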
The reference table below lists all relevant attributes along with their descriptions, where we've called the JSON response object results. Object attributes are accessed via dot notation, and arbitrary array elements are denoted with [i]. For example, results.words[i].text refers to the text attribute of the i-th element of the words array in the JSON results object.
Attribute | Type | Description |
--- | --- | --- |
results.content_safety | boolean | Whether Content Moderation was enabled in the transcription request |
results.content_safety_labels | object | An object containing all results of the Content Moderation model |
results.content_safety_labels.status | string | Is either success, or unavailable in the rare case that the Content Moderation model failed |
results.content_safety_labels.results | array | An array of objects, one for each section in the audio file that the Content Moderation model flagged |
results.content_safety_labels.results[i].text | string | The transcript of the i-th section flagged by the Content Moderation model |
results.content_safety_labels.results[i].labels | array | An array of objects, one per sensitive topic that was detected in the i-th section |
results.content_safety_labels.results[i].labels[j].label | string | The label of the sensitive topic |
results.content_safety_labels.results[i].labels[j].confidence | number | The confidence score for the j-th topic being discussed in the i-th section, from 0 to 1 |
results.content_safety_labels.results[i].labels[j].severity | number | How severely the j-th topic is discussed in the i-th section, from 0 to 1 |
results.content_safety_labels.results[i].sentences_idx_start | number | The sentence index at which the i-th section begins |
results.content_safety_labels.results[i].sentences_idx_end | number | The sentence index at which the i-th section ends |
results.content_safety_labels.results[i].timestamp | object | Timestamp information for the i-th section |
results.content_safety_labels.results[i].timestamp.start | number | The time, in milliseconds, at which the i-th section begins |
results.content_safety_labels.results[i].timestamp.end | number | The time, in milliseconds, at which the i-th section ends |
results.content_safety_labels.summary | object | A summary of the Content Moderation confidence results for the entire audio file |
results.content_safety_labels.summary.topic | number | A confidence score for the presence of the sensitive topic "topic" across the entire audio file |
results.content_safety_labels.severity_score_summary | object | A summary of the Content Moderation severity results for the entire audio file |
results.content_safety_labels.severity_score_summary.topic.[low, medium, high] | number | A distribution across the values "low", "medium", and "high" for the severity of the presence of "topic" in the audio file. |
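To make the attribute paths above concrete, the sketch below walks the flagged sections and the file-level summaries, assuming the parsed JSON response is stored in the results variable from the quickstart sketch:

```python
content_safety_labels = results["content_safety_labels"]

# Per-section results: flagged transcript text, detected topics, and timestamps.
for section in content_safety_labels["results"]:
    start_ms = section["timestamp"]["start"]
    end_ms = section["timestamp"]["end"]
    print(f"[{start_ms}-{end_ms} ms] {section['text']}")
    for topic in section["labels"]:
        # severity may be null for labels that don't support severity scores
        print(f"  {topic['label']}: confidence={topic['confidence']:.2f}, severity={topic['severity']}")

# File-level confidence summary: one score per detected topic.
for topic, confidence in content_safety_labels["summary"].items():
    print(f"{topic}: {confidence:.2f}")

# File-level severity distribution across low, medium, and high.
for topic, distribution in content_safety_labels["severity_score_summary"].items():
    print(f"{topic}: low={distribution['low']}, medium={distribution['medium']}, high={distribution['high']}")
```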
All labels supported by the model
Label | Description | Model output | Supported by Severity Scores |
--- | --- | --- | --- |
Accidents | Any man-made incident that happens unexpectedly and results in damage, injury, or death. | accidents | Yes |
Alcohol | Content that discusses any alcoholic beverage or its consumption. | alcohol | Yes |
Company Financials | Content that discusses any sensitive company financial information. | financials | No |
Crime Violence | Content that discusses any type of criminal activity or extreme violence that is criminal in nature. | crime_violence | Yes |
Drugs | Content that discusses illegal drugs or their usage. | drugs | Yes |
Gambling | Includes gambling on casino-based games such as poker, slots, etc. as well as sports betting. | gambling | Yes |
Hate Speech | Content that's a direct attack against people or groups based on their sexual orientation, gender identity, race, religion, ethnicity, national origin, disability, etc. | hate_speech | Yes |
Health Issues | Content that discusses any medical or health-related problems. | health_issues | Yes |
Manga | Mangas are comics or graphic novels originating from Japan with some of the more popular series being "Pokemon", "Naruto", "Dragon Ball Z", "One Punch Man", and "Sailor Moon". | manga | No |
Marijuana | This category includes content that discusses marijuana or its usage. | marijuana | Yes |
Natural Disasters | Phenomena that happen infrequently and result in damage, injury, or death, such as hurricanes, tornadoes, earthquakes, volcanic eruptions, and firestorms. | disasters | Yes |
Negative News | News content with a negative sentiment, typically written in the third person as an unbiased recapping of events. | negative_news | No |
NSFW (Adult Content) | Content considered "Not Safe for Work" that a viewer would not want to be heard or seen consuming in a public environment. | nsfw | No |
Pornography | Content that discusses any sexual content or material. | pornography | Yes |
Profanity | Any profanity or cursing. | profanity | Yes |
Sensitive Social Issues | This category includes content that may be considered insensitive, irresponsible, or harmful to certain groups based on their beliefs, political affiliation, sexual orientation, or gender identity. | sensitive_social_issues | No |
Terrorism | Includes terrorist acts as well as terrorist groups. Examples include bombings, mass shootings, and ISIS. Note that many texts corresponding to this topic may also be classified into the crime violence topic. | terrorism | Yes |
Tobacco | Text that discusses tobacco and tobacco usage, including e-cigarettes, nicotine, vaping, and general discussions about smoking. | tobacco | Yes |
Weapons | Text that discusses any type of weapon including guns, ammunition, shooting, knives, missiles, torpedoes, etc. | weapons | Yes |
Adjusting the confidence threshold
The confidence threshold for content moderation results is set to 50% by default, meaning that any label with a confidence score equal to or higher than 50% is returned. If you want to set a higher or lower threshold, you can include the content_safety_confidence parameter in your request. This parameter accepts an integer value between 25 and 100, allowing you to fine-tune the threshold to your specific needs.
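For example, assuming the same request shape as the quickstart sketch above, a stricter 75% threshold could be passed like this:

```python
# Only return labels detected with at least 75% confidence.
response = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    json={
        "audio_url": AUDIO_URL,
        "content_safety": True,
        "content_safety_confidence": 75,
    },
)
```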
Troubleshooting
Why isn't the model detecting the sensitive content in my audio file?
There could be a few reasons for this. First, make sure that the audio file contains speech, and not just background noise or music. Additionally, the model may not have been trained on the specific type of sensitive content you're looking for. If you believe the model should be able to detect the content but it's not, you can reach out to AssemblyAI's support team for assistance.
Why is the model flagging content that isn't actually sensitive?
The model may occasionally flag content as sensitive that isn't actually problematic. This can happen if the model isn't trained on the specific context or nuances of the language being used. In these cases, you can manually review the flagged content and determine whether it's actually sensitive. If you believe the model is consistently flagging content incorrectly, you can contact AssemblyAI's support team to report the issue.
How can I find where in the audio the sensitive content was discussed?
The Content Moderation model provides segment-level results that pinpoint where in the audio the sensitive content was discussed, as well as the degree to which it was discussed. You can access this information in the results key of the API response. Each result in the list contains a text key that shows the sensitive content, and a labels key that shows the detected sensitive topics along with their confidence and severity scores.
Can the Content Moderation model be used in real-time applications?
The model is designed to process batches of segments in significantly less than 1 second, making it suitable for real-time applications. However, keep in mind that the actual processing time depends on the length of the audio file and the number of segments it's divided into. Additionally, the model may occasionally require additional time to process particularly complex or long segments.
What should I do if I receive an error message when using Content Moderation?
If you receive an error message, it may be due to an issue with your request format or parameters. Double-check that your request includes the correct audio_url parameter. If you continue to experience issues, you can reach out to AssemblyAI's support team for assistance.