Multilingual streaming

English, Spanish, French, German, Italian, and Portuguese

Multilingual streaming allows you to transcribe audio streams in multiple languages.

Configuration

Keyterms prompting is not supported with multilingual streaming.

To use multilingual streaming, include speech_model=universal-streaming-multilingual as a query parameter in the WebSocket URL.
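
For example, the connection URL could look like the following (this uses the same v3 streaming endpoint shown in the quickstart below; the sample rate is illustrative and should match your audio):

wss://streaming.assemblyai.com/v3/ws?sample_rate=16000&speech_model=universal-streaming-multilingual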

Supported languages

Multilingual streaming currently supports: English, Spanish, French, German, Italian, and Portuguese.

Language detection

The multilingual streaming model supports automatic language detection, allowing you to identify which language is being spoken in real-time. When enabled, the model returns the detected language code and confidence score with each complete utterance.

Configuration

To enable language detection, include language_detection=true as a query parameter in the WebSocket URL:

wss://streaming.assemblyai.com/v3/ws?sample_rate=16000&speech_model=universal-streaming-multilingual&language_detection=true
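
If you're constructing this URL in Python, a minimal sketch (mirroring the quickstart at the end of this page) might look like this; the parameter values are examples and should match your setup:

from urllib.parse import urlencode

BASE_URL = "wss://streaming.assemblyai.com/v3/ws"

# Query parameters described on this page; sample_rate should match your audio source.
params = {
    "sample_rate": 16000,
    "speech_model": "universal-streaming-multilingual",
    "language_detection": True,
}

url = f"{BASE_URL}?{urlencode(params)}"
print(url)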

Output format

When language detection is enabled, each Turn message with a complete utterance will include two additional fields:

  • language_code: The language code of the detected language (e.g., "es" for Spanish, "fr" for French)
  • language_confidence: A confidence score between 0 and 1 indicating how confident the model is in the language detection

The language_code and language_confidence fields only appear when the utterance field is non-empty and contains a complete utterance.

Example response

Here’s an example Turn message with language detection enabled, showing Spanish being detected:

{
  "turn_order": 1,
  "turn_is_formatted": false,
  "end_of_turn": false,
  "transcript": "Buenos",
  "end_of_turn_confidence": 0.991195,
  "words": [
    {
      "start": 29920,
      "end": 30080,
      "text": "Buenos",
      "confidence": 0.979445,
      "word_is_final": true
    },
    {
      "start": 30320,
      "end": 30400,
      "text": "días",
      "confidence": 0.774696,
      "word_is_final": false
    }
  ],
  "utterance": "Buenos días.",
  "language_code": "es",
  "language_confidence": 0.999997,
  "type": "Turn"
}

In this example, the model detected Spanish ("es") with a confidence of 0.999997.
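
If you're handling these messages in Python, a minimal sketch for reading the detection fields could look like this (the message shape follows the example above; the function name is just illustrative):

import json

def handle_turn_message(raw: str) -> None:
    """Print language detection info from a Turn message, if present."""
    data = json.loads(raw)
    if data.get("type") != "Turn":
        return

    utterance = data.get("utterance")
    if utterance:
        # language_code and language_confidence only appear when the
        # utterance field contains a complete, non-empty utterance.
        code = data.get("language_code")
        confidence = data.get("language_confidence")
        if code is not None:
            print(f"Detected {code} ({confidence:.2%}): {utterance}")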

Understanding formatting

The multilingual model produces transcripts with punctuation and capitalization already built into the model outputs. This means you’ll receive properly formatted text without requiring any additional post-processing.

While the API still returns the turn_is_formatted parameter to maintain interface consistency with other streaming models, the multilingual model doesn’t perform additional formatting operations. All transcripts from the multilingual model are already formatted as they’re generated.
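
In practice, this means you can use the transcript field directly once end_of_turn is true. A minimal sketch, assuming a parsed Turn message like the example above:

def on_turn(data: dict) -> None:
    # Multilingual transcripts already include punctuation and capitalization,
    # so there's no need to wait for a separately formatted turn here.
    if data.get("type") == "Turn" and data.get("end_of_turn"):
        print(data.get("transcript"))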

In the future, this built-in formatting capability will be extended to our English-only streaming model as well.

Quickstart

First, install the required dependencies:

pip install websockets pyaudio

Then run the following script, which streams microphone audio to the multilingual model and prints transcripts along with the detected language. Replace YOUR-API-KEY with your API key.
import asyncio
import json
from urllib.parse import urlencode

import pyaudio
import websockets

FRAMES_PER_BUFFER = 3200
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 48000

p = pyaudio.PyAudio()

stream = p.open(
    format=FORMAT,
    channels=CHANNELS,
    rate=RATE,
    input=True,
    frames_per_buffer=FRAMES_PER_BUFFER
)

BASE_URL = "wss://streaming.assemblyai.com/v3/ws"
CONNECTION_PARAMS = {
    "sample_rate": RATE,
    "speech_model": "universal-streaming-multilingual",
    "language_detection": True,
}
URL = f"{BASE_URL}?{urlencode(CONNECTION_PARAMS)}"


async def send_receive():
    print(f"Connecting websocket to url {URL}")

    async with websockets.connect(
        URL,
        extra_headers={"Authorization": "YOUR-API-KEY"},
        ping_interval=5,
        ping_timeout=20
    ) as _ws:
        await asyncio.sleep(0.1)
        print("Receiving SessionBegins ...")

        session_begins = await _ws.recv()
        print(session_begins)
        print("Sending messages ...")

        async def send():
            # Continuously read audio from the microphone and stream it to the API.
            while True:
                try:
                    data = stream.read(FRAMES_PER_BUFFER, exception_on_overflow=False)
                    await _ws.send(data)
                except websockets.exceptions.ConnectionClosedError as e:
                    print(e)
                    break
                except Exception as e:
                    print(e)
                await asyncio.sleep(0.01)

        async def receive():
            # Continuously receive Turn messages and print transcripts and
            # language detection results.
            while True:
                try:
                    result_str = await _ws.recv()
                    data = json.loads(result_str)

                    if data.get('type') == 'Turn':
                        transcript = data.get('transcript')
                        utterance = data.get('utterance')

                        if utterance:
                            print(f"\r[PARTIAL TURN UTTERANCE]: {utterance}")
                            # Display language detection info if available
                            if 'language_code' in data:
                                print(f"\r[UTTERANCE LANGUAGE DETECTION]: {data['language_code']} - {data['language_confidence']:.2%}")
                        if data.get('end_of_turn'):
                            print(f"\r[FULL TURN TRANSCRIPT]: {transcript}")

                except websockets.exceptions.ConnectionClosed:
                    break
                except Exception as e:
                    print(f"\nError receiving data: {e}")
                    break

        try:
            await asyncio.gather(send(), receive())
        except KeyboardInterrupt:
            await _ws.send(json.dumps({"type": "Terminate"}))
            # Wait for the server to close the connection after receiving the message
            await _ws.wait_closed()
            print("Session terminated and connection closed.")


if __name__ == "__main__":
    try:
        asyncio.run(send_receive())
    finally:
        stream.stop_stream()
        stream.close()
        p.terminate()