🦜️🔗 LangChain JavaScript Integration with AssemblyAI
To apply LLMs to speech, you first need to transcribe the audio to text, which is what the AssemblyAI integration for LangChain helps you with.
Looking for the Python integration?
Go to the LangChain Python integration.
Quickstart
Add the AssemblyAI SDK to your project with your package manager of choice, e.g. `npm install assemblyai` (or the equivalent `yarn add`, `pnpm add`, or `bun add` command).
To use the loaders, you need an AssemblyAI account and an AssemblyAI API key, which you can get from your dashboard.
Configure the API key as the `ASSEMBLYAI_API_KEY` environment variable or pass it via the `apiKey` option.
- You can use the `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader` to split the transcript into paragraphs or sentences.
- If the `audio_file` is a local file path, the loader will upload it to AssemblyAI for you.
- The `audio_file` can also be a video file. See the list of supported file types in the FAQ doc.
- If you don't pass in the `apiKey` option, the loader will use the `ASSEMBLYAI_API_KEY` environment variable.
- You can add more properties in addition to `audio`. Find the full list of request parameters in the AssemblyAI API docs.
You can also use the `AudioSubtitleLoader` to get `srt` or `vtt` subtitles as a document.
Additional resources
You can learn more about using LangChain with AssemblyAI in these resources: