Generate Action Items with LLM Gateway

This tutorial demonstrates how to use AssemblyAI’s LLM Gateway to generate action items from a transcript. LLM Gateway provides access to multiple LLM providers through a single, unified API.

Quickstart

import requests
import time

base_url = "https://api.assemblyai.com"
headers = {"authorization": "<YOUR_API_KEY>"}

# Use a publicly-accessible URL:
audio_url = "https://storage.googleapis.com/aai-web-samples/meeting.mp4"

# Or upload a local file instead:
# with open("/your_audio_file.mp3", "rb") as f:
#     response = requests.post(base_url + "/v2/upload", headers=headers, data=f)
# if response.status_code != 200:
#     print(f"Error: {response.status_code}, Response: {response.text}")
#     response.raise_for_status()
# upload_json = response.json()
# audio_url = upload_json["upload_url"]

data = {
    "audio_url": audio_url,
}

response = requests.post(base_url + "/v2/transcript", headers=headers, json=data)

if response.status_code != 200:
    print(f"Error: {response.status_code}, Response: {response.text}")

transcript_json = response.json()
transcript_id = transcript_json["id"]
polling_endpoint = f"{base_url}/v2/transcript/{transcript_id}"

# Poll until the transcript is ready.
while True:
    transcript = requests.get(polling_endpoint, headers=headers).json()
    if transcript["status"] == "completed":
        print(transcript["id"])
        print(f"\nFull Transcript:\n\n{transcript['text']}\n")
        break
    elif transcript["status"] == "error":
        raise RuntimeError(f"Transcription failed: {transcript['error']}")
    else:
        time.sleep(3)

prompt = """
    Here are guidelines to follow:
    - You are an expert at understanding transcripts of conversations, calls and meetings.
    - You are an expert at coming up with ideal action items based on the contents of the transcripts.
    - Action items are things that the transcript implies should get done.
    - Your action item ideas do not make stuff up that isn't relevant to the transcript.
    - You do not needlessly make up action items - you stick to important tasks.
    - You are useful, true and concise, and write in perfect English.
    - Your action items can be tied back to direct quotes in the transcript.
    - You do not cite the quotes the action items relate to.
    - The action items are written succinctly.
    - Please give useful action items based on the transcript.
    - Your response should be formatted in bullet points.
    """

llm_gateway_data = {
    "model": "claude-sonnet-4-5-20250929",
    "messages": [
        {
            "role": "user",
            "content": f"{prompt} Please give useful action items based on this transcript: \n\n{transcript['text']}."
        }
    ],
    "max_tokens": 1500,
    "temperature": 0
}

response = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers=headers,
    json=llm_gateway_data
)

result = response.json()

if "error" in result:
    print(f"\nError from LLM Gateway: {result['error']}")
else:
    response_text = result["choices"][0]["message"]["content"]
    print(f"\nResponse ID: {result['request_id']}\n")
    print(response_text)

Getting Started

Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for an AssemblyAI account and get your API key from your dashboard.
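
The snippets in this guide hard-code the key for brevity. If you prefer, you can read it from an environment variable instead; the variable name ASSEMBLYAI_API_KEY below is just a placeholder, so use whatever your environment defines.

import os

# Placeholder variable name - adjust to match your environment.
headers = {"authorization": os.environ["ASSEMBLYAI_API_KEY"]}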

Find more details on the current LLM Gateway pricing on the AssemblyAI pricing page.

Step-by-Step Instructions

In this guide, we’ll prompt LLM Gateway to create action items based on a transcript.

Import the required packages and set the base URL and headers.

import requests
import time

base_url = "https://api.assemblyai.com"
headers = {"authorization": "<YOUR_API_KEY>"}

Use AssemblyAI to transcribe a file and save the transcript.

audio_url = "https://storage.googleapis.com/aai-web-samples/meeting.mp4"

# Or upload a local file instead:
# with open("/your_audio_file.mp3", "rb") as f:
#     response = requests.post(base_url + "/v2/upload", headers=headers, data=f)
# if response.status_code != 200:
#     print(f"Error: {response.status_code}, Response: {response.text}")
#     response.raise_for_status()
# upload_json = response.json()
# audio_url = upload_json["upload_url"]

data = {
    "audio_url": audio_url,
}

response = requests.post(base_url + "/v2/transcript", headers=headers, json=data)

if response.status_code != 200:
    print(f"Error: {response.status_code}, Response: {response.text}")

transcript_json = response.json()
transcript_id = transcript_json["id"]
polling_endpoint = f"{base_url}/v2/transcript/{transcript_id}"

# Poll until the transcript is ready.
while True:
    transcript = requests.get(polling_endpoint, headers=headers).json()
    if transcript["status"] == "completed":
        print(transcript["id"])
        print(f"\nFull Transcript:\n\n{transcript['text']}\n")
        break
    elif transcript["status"] == "error":
        raise RuntimeError(f"Transcription failed: {transcript['error']}")
    else:
        time.sleep(3)
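
If your recording is a multi-speaker meeting, one optional variation (not part of the walkthrough above) is to enable AssemblyAI's speaker_labels feature when submitting the request and then feed the LLM a speaker-annotated transcript built from the returned utterances. A rough sketch:

# Optional: request speaker labels when creating the transcript.
data = {
    "audio_url": audio_url,
    "speaker_labels": True,
}

# After the transcript completes, build a speaker-annotated string
# from the utterances instead of using transcript["text"].
labeled_transcript = "\n".join(
    f"Speaker {u['speaker']}: {u['text']}" for u in transcript["utterances"]
)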

Define your detailed prompt instructions for generating action items based on the transcript. This is an example prompt, which you can modify to suit your specific requirements.

prompt = """
    Here are guidelines to follow:
    - You are an expert at understanding transcripts of conversations, calls and meetings.
    - You are an expert at coming up with ideal action items based on the contents of the transcripts.
    - Action items are things that the transcript implies should get done.
    - Your action item ideas do not make stuff up that isn't relevant to the transcript.
    - You do not needlessly make up action items - you stick to important tasks.
    - You are useful, true and concise, and write in perfect English.
    - Your action items can be tied back to direct quotes in the transcript.
    - You do not cite the quotes the action items relate to.
    - The action items are written succinctly.
    - Please give useful action items based on the transcript.
    - Your response should be formatted in bullet points.
    """
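
As one illustration of how you might tailor it, the variant below (purely an example) extends the prompt to ask for an owner and a due date per item:

# Example tweak: ask the model to suggest owners and due dates.
prompt += """
    - For each action item, suggest a likely owner and a due date if the transcript implies one.
    """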

Generate the custom action items using LLM Gateway.

llm_gateway_data = {
    "model": "claude-sonnet-4-5-20250929",
    "messages": [
        {
            "role": "user",
            "content": f"{prompt} Please give useful action items based on this transcript: \n\n{transcript['text']}."
        }
    ],
    "max_tokens": 1500,
    "temperature": 0
}

response = requests.post(
    "https://llm-gateway.assemblyai.com/v1/chat/completions",
    headers=headers,
    json=llm_gateway_data
)

Finally, parse the LLM Gateway response and print the generated action items.

result = response.json()

if "error" in result:
    print(f"\nError from LLM Gateway: {result['error']}")
else:
    response_text = result["choices"][0]["message"]["content"]
    print(f"\nResponse ID: {result['request_id']}\n")
    print(response_text)
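
If you want to keep the output, a simple follow-up (purely illustrative, with a file name of your choosing) is to write the generated action items to a local file:

# Illustrative only: persist the action items for later review.
with open("action_items.md", "w") as f:
    f.write(response_text)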