Customize parameters
Learn how you can customize LeMUR parameters to alter the outcome.
Change the model type
LeMUR features the following LLMs:
- Default
- Claude 2.1
- Basic
You can switch the model by specifying the final_model parameter.
Model | SDK parameter | Description |
---|---|---|
Default | aai.LemurModel.default | LeMUR Default is best at complex reasoning. It offers more nuanced responses and improved contextual comprehension. |
Claude 2.1 | aai.LemurModel.claude2_1 | Claude 2.1 is similar to Default, with key improvements: it minimizes model hallucination and system prompts, has a larger context window, and performs better in citations. |
Basic | aai.LemurModel.basic | LeMUR Basic is a simplified model optimized for speed and cost. LeMUR Basic can complete requests up to 20% faster than Default. |
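As a minimal sketch with the Python SDK (the API key and transcript ID below are placeholders you must supply, and the example assumes a previously completed transcript), the model can be switched like this:

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder: your AssemblyAI API key

# Fetch a previously completed transcript by its ID (placeholder).
transcript = aai.Transcript.get_by_id("TRANSCRIPT_ID")

# Run a LeMUR task using Claude 2.1 instead of the Default model.
result = transcript.lemur.task(
    "Summarize the key points of this conversation.",
    final_model=aai.LemurModel.claude2_1,
)

print(result.response)
```

Omitting final_model falls back to the Default model.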
You can find more information on pricing for each model.
Change the maximum output size
You can change the maximum output size in tokens by specifying the max_output_size parameter. Up to 4000 tokens are allowed.
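For example (a sketch with placeholder credentials and transcript ID; requires a completed transcript):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

transcript = aai.Transcript.get_by_id("TRANSCRIPT_ID")  # placeholder ID

# Cap the response at 3000 tokens (the maximum allowed is 4000).
result = transcript.lemur.task(
    "Write a detailed summary of this transcript.",
    max_output_size=3000,
)

print(result.response)
```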
Change the temperature
You can change the temperature by specifying the temperature parameter, ranging from 0.0 to 1.0.
Higher values result in answers that are more creative, lower values are more conservative.
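A sketch of passing the parameter (placeholder API key and transcript ID; requires a completed transcript):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

transcript = aai.Transcript.get_by_id("TRANSCRIPT_ID")  # placeholder ID

# temperature=0.0 gives the most deterministic, conservative answers;
# values closer to 1.0 give more varied, creative answers.
result = transcript.lemur.task(
    "Suggest three follow-up questions for this interview.",
    temperature=0.7,
)

print(result.response)
```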
Send customized input
You can submit custom text inputs to LeMUR without transcript IDs. This lets you control exactly what the model sees; for example, you can include speaker labels so the LLM knows who said what.
To submit custom text input, use the input_text parameter on aai.Lemur().task().
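For example, you can format a speaker-labeled transcript yourself and pass it as input_text (the API key and audio URL are placeholders; speaker_labels must be enabled during transcription for utterances to carry speaker tags):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

# Transcribe with speaker labels enabled.
transcript = aai.Transcriber().transcribe(
    "https://example.com/audio.mp3",  # placeholder audio URL
    config=aai.TranscriptionConfig(speaker_labels=True),
)

# Build a custom input that preserves the speaker labels.
text_with_speakers = "\n".join(
    f"Speaker {utterance.speaker}: {utterance.text}"
    for utterance in transcript.utterances
)

result = aai.Lemur().task(
    "Which speaker asked the most questions?",
    input_text=text_with_speakers,
)

print(result.response)
```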
Submit multiple transcripts
LeMUR can easily ingest multiple transcripts in a single API call.
You can feed in up to a maximum of 100 files or 100 hours, whichever is lower.
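One way to do this from the Python SDK is to group completed transcripts and run a single task over them; this sketch assumes the SDK's TranscriptGroup.get_by_ids helper is available in your SDK version, and the IDs are placeholders:

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

# Group several completed transcripts (placeholder IDs) into one source.
transcript_group = aai.TranscriptGroup.get_by_ids(
    ["TRANSCRIPT_ID_1", "TRANSCRIPT_ID_2"]
)

# A single LeMUR call runs over all transcripts in the group.
result = transcript_group.lemur.task(
    "Compare the main topics discussed across these calls."
)

print(result.response)
```

Alternatively, pass multiple IDs in the transcript_ids field of the REST request, as shown in the API reference below.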
Delete data
You can delete the data for a previously submitted LeMUR request.
Response data from the LLM, as well as any context provided in the original request, will be removed.
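A sketch of deleting request data with the Python SDK; this assumes the SDK exposes the purge_request_data helper (if your version does not, use the corresponding DELETE endpoint in the API reference), and the request ID is a placeholder taken from an earlier LeMUR response:

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

# request_id comes from the response of a previous LeMUR call
# (e.g. result.request_id).
aai.Lemur.purge_request_data("LEMUR_REQUEST_ID")  # placeholder request ID
```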
API reference
Request
curl https://api.assemblyai.com/lemur/v3/generate/task \
--header "Authorization: YOUR_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"transcript_ids": ["TRANSCRIPT_ID1", "TRANSCRIPT_ID2"],
"prompt": "YOUR_PROMPT"
}'
Key | Type | Required | Supported values | Default | Description |
---|---|---|---|---|---|
transcript_ids | string[] | No | N/A | None | A list of completed transcripts with text. Up to a maximum of 100 files or 100 hours, whichever is lower. Use either transcript_ids or input_text as input into LeMUR. |
input_text | string | No | N/A | None | Custom formatted transcript data. Maximum size is the context limit of the selected model, which defaults to 100000. Use either transcript_ids or input_text as input into LeMUR. |
prompt | string | Yes | N/A | None | Your text to prompt the model to produce a desired output, including any context you want to pass into the model. |
final_model | string | No | default, basic, anthropic/claude-2-1 | default | The model that is used for the final prompt after compression is performed. |
max_output_size | int | No | N/A | 2000 | Max output size in tokens. Up to 4000 allowed. |
temperature | float | No | N/A | 0.0 | The temperature to use for the model. Higher values result in answers that are more creative, lower values are more conservative. Can be any value between 0.0 and 1.0 inclusive. |
Response
Key | Type | Description |
---|---|---|
response | string | The response of the LeMUR request. |
request_id | string | The ID of the LeMUR request. |
You can find detailed information about all LeMUR API endpoints and parameters in the LeMUR API reference.