Claude 3 Models now available with LeMUR

Today we're releasing Anthropic's Claude 3 model family into LeMUR as part of our ongoing commitment to giving you access to the most advanced and innovative AI capabilities available on the market.

LeMUR, AssemblyAI's framework for leveraging Large Language Models (LLMs) to understand recognized speech, makes it possible to ask questions, generate content, and create summaries from your audio data to provide better outputs and insights for your end users. With the addition of the Claude 3 model family, you'll get access to a range of state-of-the-art LLM models so you can select, test, and leverage models that best suit your users' needs.

What's new: the Claude 3 model family

Millions of end users rely on LeMUR to apply the power of LLMs to their audio transcripts. With LeMUR, you can automatically analyze AssemblyAI-generated transcripts and perform downstream tasks like summarizing audio, asking sophisticated questions, extracting insights, and generating tags and action items.

Accessing LLMs directly from AssemblyAI's API reduces the work of setting up and maintaining multiple AI providers: you can run speech-to-text, audio analysis, and audio understanding from one system.

LeMUR has historically run on Anthropic’s Claude 2 family of models. Starting today, we're making LeMUR more powerful with the addition of four new Claude 3 models:

  • Claude 3.5 Sonnet: Anthropic's most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet
  • Claude 3 Opus: good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks
  • Claude 3 Sonnet: strikes the ideal balance between intelligence and speed—particularly for enterprise workloads
  • Claude 3 Haiku: fastest, most compact model for near-instant responsiveness

Each model offers unique trade-offs, allowing you to fine-tune your Speech AI capabilities for optimal performance, speed, and cost-efficiency. With this range of options, you can choose the right model for each specific use case, and ensure your Speech AI features are operating at peak effectiveness.
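You select a model per request via LeMUR's final_model parameter. As an illustrative sketch, you could map use cases to model identifiers like this (the choose_model helper and its use-case labels are our own invention, and the identifier strings assume LeMUR's documented "anthropic/..." final_model naming):

```python
# Illustrative sketch: map use cases to LeMUR final_model identifiers.
# The use-case labels and the choose_model helper are hypothetical; the
# identifier strings assume LeMUR's "anthropic/..." final_model naming.
CLAUDE_3_MODELS = {
    "best_overall": "anthropic/claude-3-5-sonnet",   # Claude 3.5 Sonnet
    "complex_analysis": "anthropic/claude-3-opus",   # Claude 3 Opus
    "balanced": "anthropic/claude-3-sonnet",         # Claude 3 Sonnet
    "low_latency": "anthropic/claude-3-haiku",       # Claude 3 Haiku
}

def choose_model(use_case: str) -> str:
    """Return a model identifier for a use case, defaulting to the
    most capable option when the use case is unrecognized."""
    return CLAUDE_3_MODELS.get(use_case, CLAUDE_3_MODELS["best_overall"])
```

Passing the returned identifier as final_model lets you switch models per call without changing the rest of your pipeline.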

Build on best-in-breed capabilities

With LeMUR + Claude 3, you can leverage the newest and most advanced set of LLM capabilities to power your product offering. Input your own prompts into LeMUR to control the insights and outputs you get from your audio data. You can use LeMUR to generate:

  • Concise summaries of hour-long meetings
  • Detailed answers to complex questions about your audio content
  • Precise action items extracted from customer calls
  • Custom analyses tailored to your unique use cases

To get started quickly, you can use a simple prompt like "summarize this transcript" and refine your prompts over time, asking LeMUR to take on complex tasks like "generate a detailed summary with key discussion points, action items, and sentiment analysis."
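One way to manage that progression is to compose the prompt from the outputs you want. The build_prompt helper below is a hypothetical sketch of our own, not part of the AssemblyAI SDK:

```python
# Hypothetical sketch of iterating on a LeMUR prompt: start simple,
# then layer on the outputs you need. build_prompt is our own
# illustration, not part of the AssemblyAI SDK.
def build_prompt(sections: list[str]) -> str:
    """Compose a detailed summarization prompt from desired outputs."""
    return "Generate a detailed summary with " + ", ".join(sections) + "."

simple_prompt = "Summarize this transcript."
detailed_prompt = build_prompt(
    ["key discussion points", "action items", "sentiment analysis"]
)
```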

Here’s how you can call the new Claude 3.5 Sonnet model for transcript summarization in a few lines of code:

import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

# Step 1: Transcribe an audio file.
audio_url = ""  # URL or local path to your audio file

transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_url)

# Step 2: Define a summarization prompt.
prompt = "Provide a brief summary of the transcript."

# Step 3: Choose an LLM and run the task with LeMUR.
result = transcript.lemur.task(
    prompt,
    final_model=aai.LemurModel.claude3_5_sonnet
)

print(result.response)

Visit our Prompt Guide for tips on how to obtain accurate and relevant outputs from LeMUR for your use case.

Harness the newest AI capabilities at no additional cost

We're matching Anthropic's Claude 3 pricing, giving you access to the newest and most powerful LLMs alongside our industry-leading Speech AI capabilities at no additional cost beyond our current LeMUR pricing.


Get Started in Minutes

Start building on our API and access the Claude 3 family of models directly through LeMUR, with no additional setup required. View our updated LeMUR Quick Start Guide to begin.