Announcements

Introducing the AssemblyAI Java SDK

We are thrilled to release the AssemblyAI Java SDK. You can use the SDK to transcribe audio asynchronously or in real-time, use our audio intelligence models, and apply LLMs to your audio data using LeMUR.

Java code to transcribe an audio file using the AssemblyAI Java SDK.

Here are a few examples showcasing the Java SDK.

1. Transcribe an audio file

import com.assemblyai.api.AssemblyAI;
import com.assemblyai.api.resources.transcripts.types.Transcript;

AssemblyAI client = AssemblyAI.builder()
    .apiKey(System.getenv("ASSEMBLYAI_API_KEY"))
    .build();

Transcript transcript = client.transcripts().transcribe(
    "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
);
System.out.println(transcript);

You can also transcribe a local file, as shown here.

import java.io.File;

Transcript transcript = client.transcripts().transcribe(new File("./news.mp4"));
System.out.println(transcript);

Learn how to transcribe audio files by following the step-by-step instructions in our docs.

2. Transcribe audio in real-time

import com.assemblyai.api.RealtimeTranscriber;

RealtimeTranscriber realtimeTranscriber = RealtimeTranscriber.builder()
    .apiKey(System.getenv("ASSEMBLYAI_API_KEY"))
    .onSessionStart(System.out::println)
    .onPartialTranscript(System.out::println)
    .onFinalTranscript(System.out::println)
    .onError((err) -> System.out.println(err.getMessage()))
    .build();

realtimeTranscriber.connect();

// Pseudocode: stream audio chunks from a source such as a microphone
getAudio((chunk) -> realtimeTranscriber.sendAudio(chunk));

realtimeTranscriber.close();

Learn how to transcribe audio from the microphone by following the step-by-step instructions in our docs.
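The getAudio call above is pseudo code. Below is a minimal sketch of one way to implement it with the standard Java Sound API, assuming a real-time format of 16 kHz, 16-bit, mono, little-endian PCM (confirm the expected format in the docs). The sendAudio call is left as a comment since it depends on the transcriber from the snippet above.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class MicrophoneStream {
    // 16 kHz, 16-bit, mono, signed, little-endian PCM (assumed default).
    static final AudioFormat FORMAT = new AudioFormat(16_000f, 16, 1, true, false);
    // Roughly 100 ms of audio per chunk: 16,000 samples/s * 2 bytes * 0.1 s.
    static final int CHUNK_SIZE = 3_200;

    public static void main(String[] args) throws LineUnavailableException {
        TargetDataLine line = AudioSystem.getTargetDataLine(FORMAT);
        line.open(FORMAT);
        line.start();
        byte[] chunk = new byte[CHUNK_SIZE];
        while (line.read(chunk, 0, chunk.length) > 0) {
            // Forward each chunk to the transcriber, e.g.:
            // realtimeTranscriber.sendAudio(chunk);
        }
    }
}
```

Reading in fixed-size chunks keeps latency predictable; smaller chunks lower latency at the cost of more WebSocket messages.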

3. Use LeMUR to build LLM apps on voice data

import com.assemblyai.api.AssemblyAI;
import com.assemblyai.api.resources.transcripts.types.Transcript;
import com.assemblyai.api.resources.lemur.requests.LemurTaskParams;
import com.assemblyai.api.resources.lemur.types.LemurTaskResponse;

import java.util.List;

LemurTaskResponse response = client.lemur().task(LemurTaskParams.builder()
        .prompt("Summarize this transcript.")
        .transcriptIds(List.of(transcript.getId()))
        .build());

System.out.println(response.getResponse());

Learn how to use LLMs with audio data using LeMUR in our docs.

4. Use audio intelligence models

import com.assemblyai.api.AssemblyAI;
import com.assemblyai.api.resources.transcripts.types.*;

Transcript transcript = client.transcripts().transcribe(
        "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
        TranscriptOptionalParams.builder()
                .sentimentAnalysis(true)
                .build()
);
for (SentimentAnalysisResult result : transcript.getSentimentAnalysisResults().get()) {
    System.out.println("Text: " + result.getText());
    System.out.println("Sentiment: " + result.getSentiment());
    System.out.println("Confidence: " + result.getConfidence());
    System.out.printf("Timestamp: %s - %s%n", result.getStart(), result.getEnd());
}

Learn more about our audio intelligence models in our docs.

Get started with the SDK

You can find installation instructions and more information in the README of the Java SDK GitHub repository. File an issue or get in touch with us if you have any feedback.