Automated SRT and VTT Video Captions (April 2020 Update)


This month we added a new acoustic model for UK English, automated video captioning (SRT/VTT export), and Automatic Transcript Highlights. Companies in industries like video hosting, media monitoring, e-discovery, and video interviewing can now use these features to improve video playback and search for their customers!

New Model for UK Accented English

State-of-the-art accuracy is now available for UK-accented English with just a quick change to your API calls. For complete documentation and code samples on how to enable the UK model, check out our API docs here.
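As a minimal sketch of what that "quick change" looks like, the request body below adds an acoustic-model selector to a standard transcript request. The field name and value ("acoustic_model": "assemblyai_en_uk") are assumptions based on the API docs; confirm the exact field there before use.

```python
import requests

def uk_transcript_request(audio_url: str) -> dict:
    """Build a transcript request body that selects the UK English model."""
    return {
        "audio_url": audio_url,
        # Assumed field/value for the UK model -- see the API docs
        # for the exact parameter name.
        "acoustic_model": "assemblyai_en_uk",
    }

# Usage (fill in your own token and audio URL):
# requests.post("https://api.assemblyai.com/v2/transcript",
#               json=uk_transcript_request("https://example.com/audio.mp3"),
#               headers={"authorization": "YOUR-API-TOKEN"})
```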

Automated Video Captioning: SRT or VTT export

Now you can easily export your transcription in SRT or VTT format, to be plugged into a video player for subtitles and closed captions. Once your transcript status shows as "completed", you can make a GET request to one of the following endpoints to export your transcript in VTT or SRT format:

  • https://api.assemblyai.com/v2/transcript/<your transcript id>/vtt
  • https://api.assemblyai.com/v2/transcript/<your transcript id>/srt
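The export call can be sketched in Python as follows. The helper just builds the v2 transcript export URL; the commented-out request shows how you would fetch and save the captions once your transcript is done.

```python
import requests

API = "https://api.assemblyai.com/v2/transcript"

def caption_url(transcript_id: str, fmt: str) -> str:
    """Build the caption-export URL for a finished transcript.

    fmt must be "srt" or "vtt".
    """
    if fmt not in ("srt", "vtt"):
        raise ValueError("fmt must be 'srt' or 'vtt'")
    return f"{API}/{transcript_id}/{fmt}"

# Once the transcript status is "completed":
# resp = requests.get(caption_url("YOUR-TRANSCRIPT-ID", "srt"),
#                     headers={"authorization": "YOUR-API-TOKEN"})
# with open("captions.srt", "w") as f:
#     f.write(resp.text)
```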

The API will output a plain-text response like this (SRT example):

1
00:00:12,340 --> 00:00:16,380
Last year I showed these two slides said that demonstrate that

2
00:00:16,340 --> 00:00:19,920
the Arctic ice cap which for most of the last 3,000,000 years has been

3
00:00:19,880 --> 00:00:23,120
the size of the lower 48 States has shrunk by 40%


Take a look at our API docs to learn more about automatically exporting a transcript in SRT or VTT format here.

Automatic Transcript Highlights

Many of our customers with long-form audio and video files (e.g., webinars, podcasts, conference calls, video interviews) were looking for ways to review their transcriptions more quickly. They also wanted to tag these files automatically based on their most important key phrases.

That's where auto transcript highlights come in. We can now detect key phrases in your transcripts using Natural Language Processing (NLP) to help with features like:

  • Summarize transcription text: condense long transcriptions down to their most common keywords and phrases
  • Auto-tagging/indexing: make your entire file searchable by attaching its auto-highlights as searchable tags

Take the following sample transcription, for example:

Hi I'm joy. Hi I'm Sharon. Do you have
kids in school? I have grandchildren in
school. Okay, well, my kids are in middle
school in high school. Do you think there
is anything wrong with the school system?
Overcrowding, of course, ...

In this example, the following phrases and terms would be automatically detected by the Auto Highlights feature:

"high school", "middle school", "kids"

Below is a code sample showing how to turn on Automatic Transcript Highlights in Python. For code samples in other languages, check out our API docs here.

import requests

endpoint = "https://api.assemblyai.com/v2/transcript"

json = {
    "audio_url": "",  # URL of the audio file to transcribe
    "auto_highlights": True
}

headers = {
    "authorization": "YOUR-API-TOKEN",
    "content-type": "application/json"
}

response = requests.post(endpoint, json=json, headers=headers)
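The POST above returns a transcript id, and the result is then fetched by polling GET /v2/transcript/&lt;id&gt; until the status becomes "completed" (or "error"). A minimal sketch of that loop is below; the injectable get parameter is just a convenience for testing without the network, not part of the API.

```python
import time
import requests

ENDPOINT = "https://api.assemblyai.com/v2/transcript"

def wait_for_completion(transcript_id, headers, get=requests.get, delay=3.0):
    """Poll the transcript until it reaches a terminal status.

    "completed" and "error" are the terminal statuses per the API docs.
    `get` is injectable so the loop can be exercised without the network.
    """
    while True:
        body = get(f"{ENDPOINT}/{transcript_id}", headers=headers).json()
        if body["status"] in ("completed", "error"):
            return body
        time.sleep(delay)

# Usage:
# transcript = wait_for_completion("YOUR-TRANSCRIPT-ID",
#                                  {"authorization": "YOUR-API-TOKEN"})
# print(transcript["text"])
```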


100% Uptime

Another month of 100% uptime across all our models! Subscribe to our status page to stay up to date.

Subscribe to our blog!


