Content moderation

The AssemblyAI Content Moderation model detects sensitive content in audio files.

Quickstart

In the Identifying hate speech in audio or video files guide, the client submits an audio file and configures the API request to analyze the content for sensitive material. The model then pinpoints the sensitive discussions and reports the severity of each one.

You can also view the full source code here.

All labels supported by the model

| Label | Description | Model output | Supported by Severity Scores |
| --- | --- | --- | --- |
| Accidents | Any man-made incident that happens unexpectedly and results in damage, injury, or death. | accidents | Yes |
| Alcohol | Content that discusses any alcoholic beverage or its consumption. | alcohol | Yes |
| Company Financials | Content that discusses any sensitive company financial information. | financials | No |
| Crime Violence | Content that discusses any type of criminal activity or extreme violence that is criminal in nature. | crime_violence | Yes |
| Drugs | Content that discusses illegal drugs or their usage. | drugs | Yes |
| Gambling | Includes gambling on casino-based games such as poker, slots, etc. as well as sports betting. | gambling | Yes |
| Hate Speech | Content that is a direct attack against people or groups based on their sexual orientation, gender identity, race, religion, ethnicity, national origin, disability, etc. | hate_speech | Yes |
| Health Issues | Content that discusses any medical or health-related problems. | health_issues | Yes |
| Manga | Mangas are comics or graphic novels originating from Japan, with some of the more popular series being "Pokemon", "Naruto", "Dragon Ball Z", "One Punch Man", and "Sailor Moon". | manga | No |
| Marijuana | This category includes content that discusses marijuana or its usage. | marijuana | Yes |
| Natural Disasters | Phenomena that happen infrequently and result in damage, injury, or death, such as hurricanes, tornadoes, earthquakes, volcano eruptions, and firestorms. | disasters | Yes |
| Negative News | News content with a negative sentiment, which typically occurs in the third person as an unbiased recapping of events. | negative_news | No |
| NSFW (Adult Content) | Content considered "Not Safe for Work" that a viewer would not want to be heard or seen in a public environment. | nsfw | No |
| Pornography | Content that discusses any sexual content or material. | pornography | Yes |
| Profanity | Any profanity or cursing. | profanity | Yes |
| Sensitive Social Issues | This category includes content that may be considered insensitive, irresponsible, or harmful to certain groups based on their beliefs, political affiliation, sexual orientation, or gender identity. | sensitive_social_issues | No |
| Terrorism | Includes terrorist acts as well as terrorist groups. Examples include bombings, mass shootings, and ISIS. Note that many texts corresponding to this topic may also be classified into the crime violence topic. | terrorism | Yes |
| Tobacco | Text that discusses tobacco and tobacco usage, including e-cigarettes, nicotine, vaping, and general discussions about smoking. | tobacco | Yes |
| Weapons | Text that discusses any type of weapon, including guns, ammunition, shooting, knives, missiles, torpedoes, etc. | weapons | Yes |

Adjusting the confidence threshold

The confidence threshold for content moderation results is set to 50% by default, meaning that any label with a confidence score equal to or higher than 50% will be returned. If you want to set a higher or lower threshold, you can include the content_safety_confidence parameter in your request. This parameter accepts an integer value between 25 and 100, allowing you to fine-tune the threshold to your specific needs.
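For example, with the AssemblyAI Python SDK you can enable the model and set the threshold in the transcription config. This is a minimal sketch: the API key and audio URL are placeholders, and the threshold of 60 is purely illustrative.

import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

# Enable content moderation and only return labels with at least 60% confidence.
config = aai.TranscriptionConfig(
    content_safety=True,
    content_safety_confidence=60,  # integer between 25 and 100
)

transcript = aai.Transcriber().transcribe("https://example.com/audio.mp3", config)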

Understanding the response

After you run an audio file through AssemblyAI's Content Moderation model, the response object contains information about the transcription process and its result, including any sensitive content detected in the audio.

The bulk of the results are stored within the results key, which contains a list of dictionaries, one for each instance of sensitive content found in the audio. Each dictionary includes information such as the start and end timestamps of the sensitive content and the corresponding text. The text key can be used to retrieve the sensitive content identified in a given instance.
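The sketch below, which assumes the transcript object from the configuration example above, iterates over the results and prints each flagged segment's text along with its labels, confidence, and severity, producing output along the lines of the comment block that follows.

for result in transcript.content_safety.results:
    print(f"Sensitive text: {result.text}")
    for label in result.labels:
        print(f"Label: {label.label}")
        print(f"Confidence: {label.confidence}")
        print(f"Severity: {label.severity}")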

# Sensitive text: Yes, that's it. Why does that happen? By calling off
# the Hunt, your brain can stop persevering on the ugly
# sister, giving the correct set of neurons a chance to
# be activated. Tip of the tongue, especially blocking on
# a person's name, is totally normal. 25 year olds can
# experience several tip of the tongues a week, but young
# people don't sweat them, in part because old age, memory
# loss, and Alzheimer's are nowhere on their radars.
# Label: health_issues
# Confidence: 0.8225132822990417
# Severity: 0.15090347826480865

Here is a reference table with all parameters returned by the Content Moderation model:

| Key | Description |
| --- | --- |
| status | Either success, or unavailable in the rare case that the Content Moderation model failed |
| results | A list of dictionaries for all the spoken audio the Content Moderation model flagged |
| results.text | The transcription text of the flagged content |
| results.labels | A list of labels the Content Moderation model predicted for the flagged content, as well as the confidence and severity of each label. The confidence score ranges from 0 to 1 and reflects how confident the model was in the label it predicted. The severity score also ranges from 0 to 1 and indicates how severe the flagged content is, with 1 being the most severe |
| results.timestamp | The start and end time, in milliseconds, for where the flagged content was spoken in the audio |
| summary | Confidence of the most common labels in relation to the entire audio file |
| severity_score_summary | Overall severity of the most common labels in relation to the entire audio file |
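In the raw API response, these fields appear under the content_safety_labels key. The following is an abbreviated sketch of that shape, reusing the label, confidence, and severity from the example above; the timestamp and summary values are illustrative placeholders, not real output.

{
  "content_safety_labels": {
    "status": "success",
    "results": [
      {
        "text": "Yes, that's it. Why does that happen? ...",
        "labels": [
          {
            "label": "health_issues",
            "confidence": 0.8225132822990417,
            "severity": 0.15090347826480865
          }
        ],
        "timestamp": { "start": 390, "end": 26970 }
      }
    ],
    "summary": { "health_issues": 0.88 },
    "severity_score_summary": {
      "health_issues": { "low": 0.72, "medium": 0.28, "high": 0.0 }
    }
  }
}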

Troubleshooting

Why is the Content Moderation model not detecting sensitive content in my audio file?

There could be a few reasons for this. First, make sure that the audio file contains speech, and not just background noise or music. Additionally, the model may not have been trained on the specific type of sensitive content you are looking for. If you believe the model should be able to detect the content but it's not, you can reach out to AssemblyAI's support team for assistance.

Why is the Content Moderation model flagging content that is not actually sensitive?

The model may occasionally flag content as sensitive that is not actually problematic. This can happen if the model is not trained on the specific context or nuances of the language being used. In these cases, you can manually review the flagged content and determine if it is actually sensitive or not. If you believe the model is consistently flagging content incorrectly, you can contact AssemblyAI's support team to report the issue.

How do I know which specific parts of the audio file contain sensitive content?

The Content Moderation model provides segment-level results that pinpoint where in the audio the sensitive content was discussed, as well as the degree to which it was discussed. You can access this information in the results key of the API response. Each result in the list contains a text key that shows the sensitive content, and a labels key that shows the detected sensitive topics along with their confidence and severity scores.
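For instance, since timestamps are reported in milliseconds, you can convert them into readable offsets when reviewing flagged segments. This is a sketch with the Python SDK; the format_ms helper is our own illustration, not part of the SDK.

def format_ms(ms: int) -> str:
    # Convert a millisecond offset to an mm:ss string (illustrative helper).
    seconds = ms // 1000
    return f"{seconds // 60:02d}:{seconds % 60:02d}"

for result in transcript.content_safety.results:
    labels = ", ".join(label.label for label in result.labels)
    start = format_ms(result.timestamp.start)
    end = format_ms(result.timestamp.end)
    print(f"[{start} - {end}] {labels}")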

Can the Content Moderation model be used in real-time applications?

The model is designed to process batches of segments in significantly less than 1 second, making it suitable for real-time applications. However, keep in mind that the actual processing time will depend on the length of the audio file and the number of segments it is divided into. Additionally, the model may occasionally require additional time to process particularly complex or long segments.

Why am I receiving an error message when using the Content Moderation model?

If you are receiving an error message, it may be due to an issue with your request format or parameters. Double-check that your request includes the correct audio_url parameter. If you continue to experience issues, you can reach out to AssemblyAI's support team for assistance.
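With the Python SDK, a failed request surfaces as an error status that you can inspect before reading the moderation results. A minimal sketch, assuming the same config object as in the earlier example:

transcript = aai.Transcriber().transcribe("https://example.com/audio.mp3", config)

if transcript.status == aai.TranscriptStatus.error:
    # The error message describes the failure, e.g. an unreachable audio_url.
    print(f"Transcription failed: {transcript.error}")
else:
    for result in transcript.content_safety.results:
        print(result.text)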