Introducing the AssemblyAI integration for LangChain.js

You can now integrate spoken audio data into LangChain.js applications using the new AssemblyAI integration.

LangChain is a framework for developing applications using Large Language Models (LLMs). LangChain provides reusable components for building applications with LLMs. However, LLMs operate only on textual data and don't understand what is said in audio files. With our recent contribution to LangChain.js, you can now integrate AssemblyAI's transcription models using a set of document loaders, with more integrations to come.


The AssemblyAI integration is only for LangChain.js, the TypeScript/JavaScript version of LangChain, but we’re already working on the equivalent integration for Python LangChain.

The AssemblyAI integration is built into the langchain package, so you can start using AssemblyAI's document loaders immediately without any extra dependencies.
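Like other LangChain.js document loaders, AssemblyAI's loaders produce Document objects with a pageContent string and a metadata record. The sketch below illustrates that shape; the transcript text and metadata values are illustrative placeholders, not real loader output.

```typescript
// Sketch of the LangChain.js Document shape that document loaders return.
// The transcript text and metadata values below are placeholders.
type Document = {
  pageContent: string;                // the transcribed text
  metadata: Record<string, unknown>;  // details about the transcript
};

const docs: Document[] = [
  {
    pageContent: "Runner's knee is a condition characterized by ...",
    metadata: { audio_url: "https://example.com/sports_injuries.mp3" },
  },
];

console.log(docs.length);
console.log(docs[0].pageContent);
```

Downstream chains, such as the question-answering chain shown next, consume an array of these Document objects.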

Here's a sample LangChain.js application that can answer questions about an audio file.
The AudioTranscriptLoader uses AssemblyAI to transcribe the audio file and OpenAI to generate the response to the question.

import { OpenAI } from "langchain/llms/openai";
import { loadQAStuffChain } from "langchain/chains";
import { AudioTranscriptLoader } from "langchain/document_loaders/web/assemblyai";

(async () => {
  const llm = new OpenAI({});
  const chain = loadQAStuffChain(llm);

  const loader = new AudioTranscriptLoader({
    // You can also use a local path to an audio file, like ./sports_injuries.mp3
    audio_url: "",
    language_code: "en_us",
  });
  const docs = await loader.load();

  const response = await chain.call({
    input_documents: docs,
    question: "What is a runner's knee?",
  });
  console.log(response.text);
})();

The output of the application looks like this:

Runner's knee is a condition characterized by pain behind or around the kneecap. It is caused by overuse, muscle imbalance, and inadequate stretching. Symptoms include pain under or around the kneecap and pain when walking.

Next steps

If you want to learn how to build the application above, check out this step-by-step tutorial on how to build a LangChain Q&A application for audio files. AssemblyAI also has its own pre-built solution called LeMUR (Leveraging Large Language Models to Understand Recognized Speech). With LeMUR, you can use an LLM to perform tasks over large volumes of long audio files. You can learn more about using the LeMUR API in the docs.