Spoken data – from meetings, phone calls, videos, podcasts, and more – is a critical input to Generative AI workflows and applications. As more developers build these kinds of AI apps on audio data, an “AI stack” is emerging to string together components such as automatic transcription, prompt augmentation, compression strategies, retrieval techniques, language models, and structured outputs.
LeMUR offers this stack in a single API, enabling developers to reason over their spoken data with a few lines of code. We launched LeMUR Early Access in April, and starting today, LeMUR is available for everyone to use, with new endpoints, more accurate outputs, and higher input and output limits.
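To give a feel for the stack-in-one-API idea, the sketch below builds a request body that asks LeMUR to reason over already-completed transcripts: just transcript IDs plus a prompt. This is a minimal illustration, not official SDK code; the field names (`transcript_ids`, `prompt`) and the helper function are assumptions for the sake of the example.

```python
import json


def build_lemur_request(transcript_ids, prompt):
    """Build a JSON body asking LeMUR to reason over completed transcripts.

    Hypothetical sketch: the field names ("transcript_ids", "prompt") are
    assumptions for illustration, not a confirmed API schema.
    """
    return {
        "transcript_ids": transcript_ids,
        "prompt": prompt,
    }


body = build_lemur_request(
    ["abc-123"],
    "Summarize the key decisions and action items from this call.",
)
print(json.dumps(body, indent=2))
```

The point of the design is that transcription, context management, and LLM orchestration stay behind one endpoint, so the application code reduces to supplying audio and a prompt.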
These are the kinds of apps our users have been building with LeMUR:
See our Prompt Guide for tips on how to obtain accurate and relevant outputs from LeMUR for your use case.
“LeMUR unlocks some amazing new possibilities that I never would have thought were possible just a few years ago. The ability to effortlessly extract valuable insights, such as identifying optimal actions, empowering agent scorecard and coaching, and discerning call outcomes like sales, appointments, or call purposes, feels truly magical.”