Ask Questions About Your Audio Transcripts

In this guide, you’ll learn how to use LLM Gateway to ask questions and get answers about your audio transcripts.

If you want a Quickstart, see Apply LLM Gateway to Audio Transcripts.

Basic Q&A example

To ask a question about your audio transcript, define a prompt with your question(s) and send it, along with the transcript text, to LLM Gateway using the chat completions API.

import requests
import time

# Step 1: Transcribe an audio file.
base_url = "https://api.assemblyai.com"

headers = {
    "authorization": "<YOUR_API_KEY>"
}

# You can use a local filepath:
# with open("./my-audio.mp3", "rb") as f:
#     response = requests.post(base_url + "/v2/upload",
#                              headers=headers,
#                              data=f)
#     upload_url = response.json()["upload_url"]

# Or use a publicly-accessible URL:
upload_url = "https://assembly.ai/sports_injuries.mp3"

data = {
    "audio_url": upload_url
}

response = requests.post(base_url + "/v2/transcript", headers=headers, json=data)

transcript_id = response.json()["id"]
polling_endpoint = base_url + f"/v2/transcript/{transcript_id}"

while True:
    transcript = requests.get(polling_endpoint, headers=headers).json()

    if transcript["status"] == "completed":
        break
    elif transcript["status"] == "error":
        raise RuntimeError(f"Transcription failed: {transcript['error']}")
    else:
        time.sleep(3)

# Step 2: Define a prompt with your question(s).
prompt = "What is a runner's knee?"

# Step 3: Send the prompt and transcript to LLM Gateway.
llm_gateway_data = {
    "model": "claude-sonnet-4-5-20250929",
    "messages": [
        {"role": "user", "content": f"{prompt}\n\nTranscript: {transcript['text']}"}
    ],
    "max_tokens": 1000
}

result = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers=headers,
    json=llm_gateway_data
)
print(result.json()["choices"][0]["message"]["content"])

Example output

Based on the transcript, runner's knee is a condition characterized
by pain behind or around the kneecap. It is caused by overuse,
muscle imbalance and inadequate stretching. Symptoms include pain
under or around the kneecap and pain when walking.

Structured Q&A with LLM Gateway

You can achieve structured question-and-answer outputs by crafting specific prompts that guide the LLM to format responses in a consistent way. Here’s how to create structured Q&A responses using LLM Gateway:

import requests
import time

# Step 1: Transcribe the audio.
base_url = "https://api.assemblyai.com"
headers = {"authorization": "<YOUR_API_KEY>"}

audio_url = "https://assembly.ai/meeting.mp4"
data = {"audio_url": audio_url}
response = requests.post(base_url + "/v2/transcript", headers=headers, json=data)
transcript_id = response.json()["id"]
polling_endpoint = base_url + f"/v2/transcript/{transcript_id}"

while True:
    transcript = requests.get(polling_endpoint, headers=headers).json()
    if transcript["status"] == "completed":
        break
    elif transcript["status"] == "error":
        raise RuntimeError(f"Transcription failed: {transcript['error']}")
    else:
        time.sleep(3)

# Step 2: Create a structured Q&A prompt.
questions = [
    "What are the top level KPIs for engineering? (KPI stands for key performance indicator)",
    "How many days has it been since the data team has gotten updated metrics? (Choose from: 1, 2, 3, 4, 5, 6, 7, more than 7)"
]

# Number the questions, one per line.
questions_text = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))

prompt = f"""Answer the following questions based on the meeting transcript. Format your response as:
Q1: [question]
A1: [answer]
Q2: [question]
A2: [answer]

Questions:
{questions_text}

Context: This is a GitLab meeting to discuss logistics."""

# Step 3: Send to LLM Gateway.
llm_gateway_data = {
    "model": "claude-sonnet-4-5-20250929",
    "messages": [
        {"role": "user", "content": f"{prompt}\n\nTranscript: {transcript['text']}"}
    ],
    "max_tokens": 1000
}

result = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers=headers,
    json=llm_gateway_data
)
print(result.json()["choices"][0]["message"]["content"])

Advanced Q&A with LLM Gateway

For more sophisticated question-answering scenarios, you can create detailed prompts that guide the LLM to produce highly structured and context-aware responses using LLM Gateway’s flexible chat completions API.
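
As one illustration of this approach, the helper below builds a chat-completions payload that adds a system message pinning the output to JSON, which is easier to parse programmatically than free text. The payload shape and model name match the examples above; the helper name and the JSON schema in the system message are illustrative assumptions, not part of the LLM Gateway API.

```python
import json

def build_structured_qa_payload(transcript_text, questions,
                                model="claude-sonnet-4-5-20250929"):
    """Build an LLM Gateway chat-completions payload (illustrative helper).

    A system message fixes the response format so answers come back
    as a JSON array instead of free text.
    """
    system_msg = (
        "You answer questions about a transcript. "
        "Respond ONLY with a JSON array of objects, each with "
        '"question" and "answer" keys.'
    )
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))
    user_msg = f"Questions:\n{numbered}\n\nTranscript: {transcript_text}"
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
        "max_tokens": 1000,
    }

payload = build_structured_qa_payload(
    "Runner's knee is pain behind or around the kneecap...",
    ["What is runner's knee?", "What are the symptoms?"],
)
print(json.dumps(payload, indent=2))
```

You would then POST this payload to the same chat completions endpoint as in the examples above and parse the returned content with `json.loads`.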

More Q&A prompt examples

Try any of these prompts to get started:

Use case                  Example prompt
Question and answer       "Identify any patterns or trends based on the transcript"
Closed-ended questions    "Did the customer express a positive sentiment in the phone call?"
Sentiment analysis        "What was the emotional sentiment of the phone call?"
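
Any of these prompts can be dropped into the basic example by changing only the prompt string; the message shape stays the same. A minimal sketch (the helper name and sample transcript are illustrative):

```python
def make_message(prompt, transcript_text):
    # Same message shape as the basic example above:
    # the prompt first, then the transcript as context.
    return {"role": "user",
            "content": f"{prompt}\n\nTranscript: {transcript_text}"}

prompts = {
    "qa": "Identify any patterns or trends based on the transcript",
    "closed": "Did the customer express a positive sentiment in the phone call?",
    "sentiment": "What was the emotional sentiment of the phone call?",
}

msg = make_message(prompts["sentiment"], "Agent: Thanks for calling...")
print(msg["content"].splitlines()[0])
# → What was the emotional sentiment of the phone call?
```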

To go further, experiment with different prompt structures and with the various LLM models available through LLM Gateway.


Improve the results

To improve your Q&A results with LLM Gateway:

  • Experiment with different models: Try Claude, GPT, or Gemini models to find the best fit for your use case
  • Refine your prompts: Use clear, specific instructions and examples to guide the model’s responses
  • Add context: Provide relevant background information to help the model understand the domain
  • Structure your questions: Use consistent formatting to get more predictable outputs