🚀 Expanded AssemblyAI Docs
- AssemblyAI Integrations: Discover how to integrate AssemblyAI with tools like LangChain, LlamaIndex, Zapier and many more.
- Speech-to-Text with Java: Use AssemblyAI's Java SDK to build applications with voice data in Java.
- Speech-to-Text with Go: Build applications with voice data in Go with AssemblyAI's Go SDK.
- Webhooks docs: Learn to build your own webhooks so you get notified as soon as your transcripts are ready.
🎉 Announcing our $50M Series C to build superhuman Speech AI models
We're excited to share that we've raised $50M in Series C funding led by Accel, the firm that also led our Series A, with participation from Keith Block and Smith Point Capital, Insight Partners, Daniel Gross and Nat Friedman, and Y Combinator.
We're thankful for all the amazing developers building with our API! We're just scratching the surface of voice-powered AI applications – stay tuned for a lot more to come.
Fresh From Our Blog
AI for Universal Audio Understanding: Qwen-Audio Explained: Researchers have recently made progress toward universal audio understanding, a step toward foundational audio models. The approach is based on joint audio-language pre-training that improves performance without task-specific fine-tuning. Read more>>
How to integrate spoken audio into LlamaIndex.TS using AssemblyAI: Learn how to apply LLMs to speech with AssemblyAI's new integration for LlamaIndex.TS, using TypeScript and Node.js. Read more>>
Our Trending YouTube Tutorials
How do Multimodal AI models work? Simple explanation: Learn about how multimodality works in AI, and the distinction between multimodal models and multimodal interfaces.
Run LLMs locally - 5 Must-Know Frameworks!: Learn how to run LLMs locally with frameworks including Ollama, GPT4All, PrivateGPT, llama.cpp, and LangChain.
Best FREE Speech to Text AI in 2023: Learn how to use AssemblyAI's API to transcribe speech and audio into text.