Transcribe audio
Authentication
Request
List multiple speech models in priority order, allowing our system to automatically route your audio to the best available option. See Model Selection for available models and routing behavior.
The point in time, in milliseconds, to stop transcribing in your media file. See Set the start and end of the transcript for more details.
The point in time, in milliseconds, to begin transcribing in your media file. See Set the start and end of the transcript for more details.
Enable Auto Chapters; can be true or false
Enable Key Phrases; either true or false
Enable Content Moderation; can be true or false
The confidence threshold for the Content Moderation model. Values must be between 25 and 100.
Customize how words are spelled and formatted using to and from values. See Custom Spelling for more details.
Transcribe Filler Words, like “umm”, in your media file; can be true or false
Enable Entity Detection; can be true or false
Filter profanity from the transcribed text; can be true or false. See Profanity Filtering for more details.
Enable Text Formatting; can be true or false
Enable Topic Detection; can be true or false
Improve accuracy with up to 200 (for Universal-2) or 1000 (for Universal-3-Pro) domain-specific words or phrases (maximum 6 words per phrase). See Keyterms Prompting for more details.
The language of your audio file. Possible values are found in Supported Languages. The default value is ‘en_us’.
The language codes of your audio file, used for Code switching.
One of the values specified must be en.
The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. Defaults to 0. See Automatic Language Detection for more details.
Enable Automatic language detection; either true or false.
Enable Multichannel transcription; can be true or false.
Provide natural language prompting of up to 1,500 words of contextual information to the model. See the Prompting Guide for best practices.
Note: This parameter is only supported for the Universal-3-Pro model.
Enable Automatic Punctuation; can be true or false
Redact PII from the transcribed text using the Redact PII model; can be true or false. See PII Redaction for more details.
Generate a copy of the original media file with spoken PII “beeped” out; can be true or false. See PII redaction for more details.
Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See PII redaction for more details.
The list of PII Redaction policies to enable. See PII redaction for more details.
The replacement logic for detected PII; can be entity_type or hash. See PII redaction for more details.
Enable Sentiment Analysis; can be true or false
Enable Speaker diarization; can be true or false
Specify options for Speaker diarization. Use this to set a range of possible speakers.
Tells the speaker label model how many speakers it should attempt to identify. See Set number of speakers expected for more details.
Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive. See Speech Threshold for more details.
Enable Summarization; can be true or false
The model to summarize the transcript. See Summary models for available models and when to use each.
The type of summary. See Summary types for descriptions of the available summary types.
Control the amount of randomness injected into the model’s response. See the Prompting Guide for more details.
Note: This parameter can only be used with the Universal-3-Pro model.
The header name to be sent with the transcript completed or failed webhook requests
The header value to send back with the transcript completed or failed webhook requests for added security
The URL to which we send webhook requests.
This parameter has been replaced by the speech_models parameter. See speech_models for more details.
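The request parameters above map onto a JSON request body. Below is a minimal sketch of assembling one in Python; the snake_case field names (speech_models, keyterms_prompt, speaker_labels, etc.) and the model identifiers are assumptions inferred from this section, so verify them against the current API reference before use.

```python
import json

def build_transcript_request(audio_url: str) -> dict:
    """Assemble a transcription request body using a handful of the
    parameters documented above. Field names and model identifiers are
    illustrative assumptions, not confirmed values."""
    return {
        "audio_url": audio_url,
        # Priority-ordered speech models; the first available is used.
        "speech_models": ["universal-3-pro", "universal-2"],
        "language_code": "en_us",      # default per the docs above
        "punctuate": True,             # Automatic Punctuation
        "format_text": True,           # Text Formatting
        "speaker_labels": True,        # Speaker diarization
        # Domain-specific terms (max 6 words per phrase).
        "keyterms_prompt": ["quarterly earnings", "EBITDA"],
        "webhook_url": "https://example.com/webhooks/transcripts",
    }

payload = build_transcript_request("https://example.com/meeting.mp3")
body = json.dumps(payload)

# To actually submit, POST `body` to the transcript endpoint with your
# API key in the authorization header, e.g. via the requests library.
```

Booleans not included in the body fall back to their documented defaults, so only the parameters you want to change need to be sent.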
Response
Whether Key Phrases is enabled, either true or false
The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence). See Automatic Language Detection for more details.
The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. See Automatic Language Detection for more details.
Whether PII Redaction is enabled, either true or false
Whether Summarization is enabled, either true or false
Whether webhook authentication details were provided
This parameter has been replaced by the speech_models parameter. See speech_models for more details.
The number of audio channels in the audio file. This is only present when multichannel is enabled.
The point in time, in milliseconds, in the file at which the transcription was terminated. See Set the start and end of the transcript for more details.
The point in time, in milliseconds, in the file at which the transcription was started. See Set the start and end of the transcript for more details.
Whether Auto Chapters is enabled, either true or false
An array of results for the Key Phrases model, if it is enabled. See Key Phrases for more information.
An array of temporally sequential chapters for the audio file. See Auto Chapters for more information.
The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence)
Whether Content Moderation is enabled, either true or false
An array of results for the Content Moderation model, if it is enabled. See Content moderation for more information.
The custom spelling rules applied to the transcript, using to and from values, if Custom Spelling is enabled. See Custom Spelling for more details.
Whether Filler Words, like “umm”, are transcribed, either true or false
An array of results for the Entity Detection model, if it is enabled. See Entity detection for more information.
Whether Entity Detection is enabled, either true or false
Whether Profanity Filtering is enabled, either true or false
Whether Text Formatting is enabled, either true or false
Whether Topic Detection is enabled, either true or false
The result of the Topic Detection model, if it is enabled. See Topic Detection for more information.
The list of domain-specific words or phrases (up to 200 for Universal-2, 1000 for Universal-3-Pro) provided to improve accuracy, if Keyterms Prompting was used. See Keyterms Prompting for more details.
The language of your audio file. Possible values are found in Supported Languages. The default value is ‘en_us’.
The language codes of your audio file, used for Code switching.
One of the values specified must be en.
Whether Automatic language detection is enabled, either true or false
Whether Multichannel transcription was enabled in the transcription request, either true or false
The natural language prompt of contextual information provided to the model, if one was supplied in the request. See the Prompting Guide for best practices.
Note: This parameter is only supported for the Universal-3-Pro model.
Whether Automatic Punctuation is enabled, either true or false
Whether a redacted version of the audio file was generated, either true or false. See PII redaction for more information.
The audio quality of the PII-redacted audio file, if redact_pii_audio is enabled. See PII redaction for more information.
The list of PII Redaction policies that were enabled, if PII Redaction is enabled. See PII redaction for more information.
The replacement logic for detected PII, can be entity_type or hash. See PII redaction for more details.
Whether Sentiment Analysis is enabled, either true or false
An array of results for the Sentiment Analysis model, if it is enabled. See Sentiment Analysis for more information.
Whether Speaker diarization is enabled, either true or false
The number of speakers the speaker label model attempted to identify, as specified in the request. See Set number of speakers expected for more details.
The speech model that was actually used for the transcription. See Model Selection for available models.
The list of speech models requested, in priority order. See Model Selection for available models and routing behavior.
Defaults to null. Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive. See Speech Threshold for more details.
The generated summary of the media file, if Summarization is enabled
The Summarization model used to generate the summary, if Summarization is enabled
The type of summary generated, if Summarization is enabled
The temperature that was used for the model’s response. See the Prompting Guide for more details.
Note: This parameter can only be used with the Universal-3-Pro model.
When multichannel or speaker_labels is enabled, a list of turn-by-turn utterance objects. See Speaker diarization and Multichannel transcription for more information.
The header name to be sent with the transcript completed or failed webhook requests
The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided
The URL to which we send webhook requests.
An array of temporally-sequential word objects, one for each word in the transcript.
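Once a transcript reaches the completed status, the response fields above can be read directly. The sketch below renders turn-by-turn utterances from a completed response; the field names (status, utterances, speaker, text) follow this section, and the example dict is a fabricated illustration of the shape, not real API output.

```python
def format_utterances(transcript: dict) -> list[str]:
    """Render turn-by-turn speaker lines from a completed transcript.
    Raises if the transcript is not yet completed, since utterances are
    only present once processing finishes (and only when multichannel
    or speaker_labels was enabled)."""
    if transcript.get("status") != "completed":
        raise ValueError(f"transcript not ready: {transcript.get('status')}")
    lines = []
    for turn in transcript.get("utterances") or []:
        lines.append(f"Speaker {turn['speaker']}: {turn['text']}")
    return lines

# Illustrative response shape (not real API output):
example = {
    "status": "completed",
    "language_code": "en_us",
    "language_confidence": 0.97,
    "utterances": [
        {"speaker": "A", "text": "Hello, thanks for joining."},
        {"speaker": "B", "text": "Happy to be here."},
    ],
}
print("\n".join(format_utterances(example)))
```

Guarding on `status` matters in practice: when polling (or handling a failed webhook delivery), the transcript may still be queued or processing, and most of the fields above will be absent or null until completion.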