Content Moderation: What It Is, How It Works, and the Best APIs

This article will look at what Content Moderation is, how it works, some of the best APIs for performing Content Moderation, and a few of its top use cases.

In 2017, several major brands were up in arms when they found their advertising content had been placed next to videos about terrorism on a major video sharing platform. They quickly pulled their ads but were understandably concerned about any long-term impact the mistake would have on their brand image.

Obviously, this poor ad placement is something brands want to avoid–then and now. But with the explosion of online communication through videos, blog posts, social media, and more, ensuring crises like the one mentioned above don’t happen again is harder than one would think.

Many platforms turned to human content moderators to try and solve this problem. But it was impossible for humans to manually sift through and vet each piece of content (around 500 million tweets are sent on Twitter each day alone), and many moderators found their mental health negatively affected by the content they had to examine.

Thankfully, recent major advances in Artificial Intelligence, Machine Learning, and Deep Learning have made significantly more accurate, automated Content Moderation a reality today.

This article will look at what AI-powered Content Moderation is, how it works, some of the best APIs for performing Content Moderation, and a few of its top use cases.

What is Content Moderation?

Content Moderation APIs use AI models to detect sensitive content in bodies of text, including those shared via online platforms or social media. Some top Speech-to-Text APIs also offer Content Moderation on top of transcription data from audio or video streams. This expands the reach of Content Moderation to video platforms like YouTube, podcast platforms like Spotify, and more.

Typically, the sensitive content Content Moderation APIs detect includes topics related to drugs, alcohol, violence, sensitive social issues, hate speech, and more.

Once detected, platforms can use this information to automate decision-making around ad placements, content acceptance, and more. What counts as acceptable may vary across platforms and industries, as each comes with its own set of rules, users, and needs.

How Does Content Moderation Work?

Content Moderation models are typically designed using one of three methods: generative, classifier, or text analysis.

A generative model takes an input text and generates a list of Content Moderation topics that may or may not be included in the original text. For example, a generative model might label the input text He had a cigarette after dinner as containing references to tobacco.
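
To make this concrete, a large language model can be prompted to act as the generative labeler. The following is a minimal sketch, assuming the openai Python package (v1+) and an API key in the environment; the model name, prompt, and expected output are illustrative rather than the method any particular Content Moderation API uses.

# Sketch of a generative approach: prompt an LLM to list the sensitive
# topics it finds in the input text. Assumes the `openai` package (v1+)
# and an OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def generate_moderation_topics(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "List the sensitive-content topics (e.g. tobacco, alcohol, "
                    "violence, hate speech) present in the user's text as a "
                    "comma-separated list, or reply 'none'."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(generate_moderation_topics("He had a cigarette after dinner."))
# Expected to print something along the lines of: tobacco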

A classifier model takes an input text and outputs a probability that the text falls into each of a predetermined list of sensitive content categories. For example, a simple classifier Content Moderation model could be designed with three possible outputs: hate speech, violence, and profanity. The model would then output a probability that the text belongs to each of these categories.
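
Here is a minimal sketch of the classifier approach, using Hugging Face's zero-shot classification pipeline to score a text against those three example categories; the model choice and label set are assumptions made for illustration.

# Sketch of a classifier approach: score an input text against a fixed set
# of sensitive-content categories. Uses the Hugging Face `transformers`
# zero-shot pipeline; the model and label set are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "I can't believe he said that to her face."
labels = ["hate speech", "violence", "profanity"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # one probability-like score per category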

Finally, a general text analysis model could be used for Content Moderation. With this method, one would take a "blacklist" approach and create a mini dictionary of blacklisted words for each predefined category, such as crime or drugs. If the input text contains one of these listed words, the model assigns the text to the category that word was listed under. This approach has its limitations: creating an exhaustive list for each category is challenging, and a text analysis model may miss important context that would help categorize a word more accurately.
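
The text analysis approach is simple enough to sketch in plain Python; the word lists below are tiny placeholders rather than real moderation dictionaries.

# Sketch of a text-analysis ("blacklist") approach: flag a text with every
# category whose word list it matches. The word lists are placeholder examples.
import re

BLACKLISTS = {
    "drugs": {"cocaine", "heroin", "meth"},
    "crime": {"robbery", "burglary", "assault"},
    "tobacco": {"cigarette", "cigar", "vape"},
}

def moderate(text: str) -> list[str]:
    words = set(re.findall(r"[a-z']+", text.lower()))
    return [category for category, terms in BLACKLISTS.items() if words & terms]

print(moderate("He had a cigarette after dinner."))  # ['tobacco']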

Best APIs for Content Moderation

Now that we’ve examined what Content Moderation is and how Content Moderation models work, we’ll dig into the top Content Moderation APIs available today. Note that some of these APIs offer Content Moderation on static text only, while others also perform Content Moderation on top of transcribed data from audio or video streams.

1. AssemblyAI’s Content Moderation API

AssemblyAI is a leading Deep Learning startup offering Speech-to-Text and Audio Intelligence APIs, including Content Moderation, Entity Detection, Text Summarization, Sentiment Analysis, PII Redaction, and more.

With the Content Moderation API, product teams and developers can pinpoint exactly what sensitive content was spoken and where it occurs in an audio or video file. These teams also receive a severity score and confidence score for each topic flagged.

For example, the AssemblyAI Content Moderation API found health_issues to be present in the following transcription text segment:

Yes, that's it. Why does that happen? By calling off the Hunt, your 
brain can stop persevering on the ugly sister, giving the correct set 
of neurons a chance to be activated. Tip of the tongue, especially 
blocking on a person's name, is totally normal. 25 year olds can 
experience several tip of the tongues a week, but young people don't 
sweat them, in part because old age, memory loss, and Alzheimer's are 
nowhere on their radars.
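
As a rough sketch of how those results can be retrieved, the snippet below submits an audio file to AssemblyAI's transcript endpoint with content safety enabled and prints each flagged label with its confidence and severity. The request flag and response fields follow the API's documentation at the time of writing; confirm them against the current docs.

# Sketch: request Content Moderation (content safety) labels from
# AssemblyAI's transcript API, then print each flagged label with its
# confidence and severity. Replace the API key and audio URL with your own.
import time
import requests

API_KEY = "YOUR_API_KEY"
headers = {"authorization": API_KEY}

# Submit a transcription job with content safety enabled
job = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    json={
        "audio_url": "https://example.com/podcast-episode.mp3",
        "content_safety": True,
    },
).json()

# Poll until the transcript is ready
while True:
    transcript = requests.get(
        f"https://api.assemblyai.com/v2/transcript/{job['id']}",
        headers=headers,
    ).json()
    if transcript["status"] in ("completed", "error"):
        break
    time.sleep(3)

# Each result carries the flagged text span plus per-label confidence and severity
for result in transcript["content_safety_labels"]["results"]:
    for label in result["labels"]:
        print(label["label"], label["confidence"], label["severity"])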

Pricing starts at $0.000583 per second in addition to its Core Transcription pricing, though developers or Product Managers looking to test the API can do so for free here.

2. Azure Content Moderator

Microsoft’s Azure Content Moderator is part of its Cognitive Services suite of products. The API can detect sensitive or offensive content in text, images, and video. Its built-in Review tool also lets human moderators weigh in when automated results need a second look.
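
As a rough sketch, a piece of text can be screened through Content Moderator's ProcessText operation; the endpoint path, headers, and response fields below are assumptions based on the service's REST reference and should be confirmed against the current documentation.

# Sketch: screen a piece of text with Azure Content Moderator's
# ProcessText/Screen operation. The region, key, and response fields are
# assumptions based on the service's REST reference.
import requests

endpoint = "https://YOUR_REGION.api.cognitive.microsoft.com"
subscription_key = "YOUR_SUBSCRIPTION_KEY"

response = requests.post(
    f"{endpoint}/contentmoderator/moderate/v1.0/ProcessText/Screen",
    params={"classify": "True"},
    headers={
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "text/plain",
    },
    data="Is this sentence crude or offensive?",
)

result = response.json()
classification = result.get("Classification", {})
# Category scores estimate sexually explicit, suggestive, and offensive language
print(classification)
print("Review recommended:", classification.get("ReviewRecommended"))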

Pricing for the Azure Content Moderator tool starts at $1 per 1,000 transactions. Human moderation is included in its standard API pricing. Those looking to try the API should review the Start Guide here.

3. Amazon Rekognition

Amazon Rekognition offers Content Moderation for image and video analysis, alongside other computer vision features such as Face Detection, Text Detection, and more. The Content Moderation API identifies and labels sensitive or offensive content in images and videos, along with an accompanying confidence score.

You will need an AWS account, an AWS account ID, and an IAM user profile to use Amazon Rekognition. This guide can get you started.
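
For an image stored in S3, a minimal sketch using boto3's detect_moderation_labels call looks like the following; the bucket and object names are placeholders.

# Sketch: detect moderation labels on an image stored in S3 with Amazon
# Rekognition. Assumes AWS credentials are already configured; the bucket
# and object key are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "uploads/photo.jpg"}},
    MinConfidence=60,  # only return labels scored at 60% confidence or higher
)

# Each label includes a name, its parent category, and a confidence score
for label in response["ModerationLabels"]:
    print(label["Name"], label["ParentName"], round(label["Confidence"], 1))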

4. DeepAI’s Content Moderation API

DeepAI’s Content Moderation API analyzes texts to detect and label sensitive or offensive content. The company offers quick start guides and documentation to get users off the ground quickly. Developers looking to explore AI or NLU/NLP topics further should check out DeepAI’s research, news, and datasets on their main website.

5. Hive Moderation

The Hive Moderation API performs Content Moderation on texts, videos, and audio streams. The API detects more than 25 subclasses across 5 distinct classes of offensive or sensitive content, including NSFW, violence, drugs, hate, and attributes, along with a confidence score. Hive’s documentation can be found here, but developers looking to test the API will have to sign up for a demo here.

Content Moderation Use Cases

Content Moderation has significant value across a wide range of brand suitability and brand safety use cases.

For example, smart media monitoring platforms use Content Moderation to help brands see if their name is mentioned next to any sensitive content, so they can take appropriate action, if needed.

Brands looking to advertise on YouTube can use Content Moderation to ensure that their ads aren’t placed next to videos containing sensitive content.

Content Moderation APIs also help:

  • Protect advertisers
  • Protect brand reputation
  • Increase brand loyalty
  • Increase brand engagement
  • Protect communities

Content Moderation Tutorial

Want to learn how to do Content Moderation on audio files in Python? Check out this YouTube Tutorial: