
LeMUR Best Practices

This guide includes proven, effective prompting techniques to help you obtain accurate and relevant outputs from LeMUR.

Before getting started with LeMUR, check out this guide on processing audio files with LLMs using LeMUR.

Prompting 101

What is Prompting?

Prompting involves providing clear and well-crafted instructions or questions to large language models (LLMs) to guide them in generating useful responses. It acts as a roadmap for the model, ensuring it understands the desired output.

Evolution of Prompting

In early LLMs (prior to instruction fine-tuning), prompting looked very different from how it does now. Because LLMs are “next-word generators”, prompting was the process of providing a foundational preamble from which the LLM would generate meaningful output. Take the question below for example:

The goal is to get the LLM to answer the question "What is the capital of France?"

Without instruction fine-tuning, providing this prompt simply leads the model to continue with further similar questions.

Prompt: "What is the capital of France?"
Answer: "What is the capital of Germany?"

To achieve the correct answer of “Paris”, we would have to trick the LLM into answering the question like this:

Prompt: "What is the capital of France? The capital of France is "
Answer: "Paris"

Modern LLMs have overcome this quirk through a fine-tuning process on a series of question-and-answer pairs, which has enabled them to respond effectively to direct instructions. Keep in mind that this kind of direct, instruction-style language is optimal when prompting.

Now we can prompt like this:

Prompt: "What is the capital of France?"
Answer: "Paris"

Context Is Key

Context is the additional information/data surrounding a situation or problem that the model doesn't have by default. In prompt engineering, it guides LLMs on how to generate accurate responses by providing a deeper understanding of the task.

When designing prompts for modern LLMs such as those used with LeMUR, use simple, concise language, and assume the model has broad general knowledge but knows nothing specific about your business. For example:

The goal is to get LeMUR to answer some questions about a phone call:

Question: Is this customer a Qualified Sales Lead?
Answer: Yes, this customer mentioned "that sounds interesting" which implies they are an interested buyer and should be a qualified sales lead.

This example looks plausible, but it may or may not be correct depending on your definition of a Qualified Sales Lead. To resolve this ambiguity and tailor the answer to a specific use case, we can add some context:

Question: Is this customer a Qualified Sales Lead?
Context: Qualified Sales Leads have the following criteria:
  • A budget
  • A stakeholder with purchasing authority
  • A timeline

Answer: No, this customer does not meet the criteria of a Qualified Sales Lead.

Specify the answer format and answer options

We can further improve this by providing an answer_format or answer_options parameter:

Question: Is this customer a Qualified Sales Lead?
Context: Qualified Sales Leads have the following criteria:
  • A budget
  • A stakeholder with purchasing authority
  • A timeline
Answer_Options: ["Yes", "No"]

Answer: No

Or with an answer_format:

Answer_Format: "<Answer> (<Reason>)"

Answer: No (Didn't mention any of the outlined criteria)
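The pieces above can be combined into a single question object. The sketch below assumes the field names used by the LeMUR Question & Answer parameters (`question`, `context`, `answer_format`, `answer_options`); verify them against the current API reference before relying on them.

```python
# Sketch of one LeMUR question object combining context, answer_format,
# and answer_options. Field names are assumptions based on the LeMUR
# Question & Answer parameters.

def build_question(question, context=None, answer_format=None, answer_options=None):
    """Assemble one question dict, omitting any fields that are not set."""
    q = {"question": question}
    if context is not None:
        q["context"] = context
    if answer_format is not None:
        q["answer_format"] = answer_format
    if answer_options is not None:
        q["answer_options"] = answer_options
    return q

lead_question = build_question(
    "Is this customer a Qualified Sales Lead?",
    context=("Qualified Sales Leads have the following criteria: "
             "a budget, a stakeholder with purchasing authority, a timeline"),
    answer_format="<Answer> (<Reason>)",
    answer_options=["Yes", "No"],
)
```

Omitting unset fields keeps the request minimal, so the model only sees the constraints you actually chose.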

Here's another example of how the answer_format could look when using LeMUR Custom Summary. The goal is to get LeMUR to generate a follow-up Slack post for a recorded team meeting:

Answer_Format: "A short paragraph describing the key points of discussion"

Answer: "In this meeting, the team discusses the current status and plan to launch a new product. Morgan mentioned that an external vendor has had an outage which has caused the engineering team to adjust their expected release date. Joshua confirms that the marketing team has everything they need to go live with the launch. The team discusses the new deadline and decides that February 18th is the updated deadline. The team posts these updates in Slack for Jeremy and Jocelyn who could not attend the meeting."

This answer is helpful as a generic overview, but we may need more specific answers based on our use case. Let's use an even more specific answer_format:

Answer_Format: "<Meeting Title> <Attendees> <Brief Recap of Conversation Topics> <Outcomes> <Next Steps>"
Context: "This is a meeting with Morgan (Engineering Lead), Joshua (Marketing Lead), Alex (Sales Lead), Mary (Social Media Manager), and Josiah (Support Engineer). The purpose of the meeting is to discuss the Launch of feature X."

Answer:

Status Update on Feature X

Attendees:
  • Morgan (Engineering Lead)
  • Joshua (Marketing Lead)
  • Alex (Sales Lead)
  • Mary (Social Media Manager)
  • Josiah (Support Engineer)

Recap:
  • Morgan mentioned that the engineering deadline has slipped because of a third-party vendor.
  • Joshua confirms that the Marketing team has finalized all the launch material.

Outcomes:
  • The launch date has been moved to February 18th.

Next Steps:
  • Morgan will finalize a contract with a new vendor for redundancy.
  • Mary will reschedule the social media posts to align with the new date.

Writing Good Prompts

A good prompt consists of three key elements:

  • Clear instruction or specific question: Clearly state what you want the model to do or the question you want it to answer. Use concise and unambiguous language to ensure the model understands your intent.
  • Missing information provided as context: Provide any relevant information or data that the model may need to complete the task or answer the question accurately. Context helps guide the model's understanding and ensures contextually appropriate responses.
  • Desired output format outlined: Clearly specify the desired format for the model's response. Whether you need a short paragraph, bullet points, or a specific structure, outlining the desired output format helps the model generate outputs that align with your expectations.

A good prompt is essential for obtaining accurate and relevant outputs from the generative AI model. Follow these guidelines to create effective prompts:

Providing clear instruction

  • Use action verbs: Begin prompts with action verbs to guide the model's understanding of the desired action. For example, instead of asking, "What are the features of the product?" use the prompt, "List the key features of the product."
  • Avoid compound questions/instructions: Keep your prompts focused on a single task or question. Compound instructions can confuse the model and lead to less accurate outputs. Break down complex tasks into smaller, simpler prompts for better results.
  • Utilize placeholders for structured data and natural language elements: Incorporate placeholders or variables within your prompts to represent dynamic information. These placeholders can be replaced with specific values during the generation process, making the prompts adaptable to different scenarios.

Guiding the Model with Context

  • Utilize structured data and natural language: Combine structured data, such as variables and placeholders, with natural language descriptions to provide comprehensive context to the model. This helps the model understand specific details while maintaining human-like communication.
  • Use common phrases like "Let's think step by step": To guide the model's thought process, incorporate common phrases or cues that encourage a step-by-step approach. This can help the model organize its response and provide well-structured information.
  • Embed instructions within the prompt text: Place the instructions or additional context directly within the prompt text itself. This allows the model to process all the relevant information at once and generate appropriate responses accordingly.
  • Use clear definitions: Establish specific definitions for terms or structures you want the model to use consistently. For example, define the format for action items as a bullet list to be generated after a call.
Question: "Identify action items from the meeting"
Context: "Action items are tasks for the participants to complete after the meeting"
  • Incorporate Few-Shot Examples: By incorporating few-shot examples into your prompts, you can fine-tune the model's understanding and improve its ability to provide accurate and contextually relevant responses.
Question: "Identify action items from the meeting"

Context: Action item examples from other meetings:
- "Schedule a follow-up meeting with the client to address their concerns."
- "Review the proposal and provide your feedback by the end of the week."
- "Complete the data analysis and share the results with the team."

Controlling Output from the Model

  • Create templates with variables and definitions: Construct answer format templates that include predefined variables and definitions. Variables act as placeholders for dynamic information, while definitions establish terminology and structures for consistent responses.
Question: "Identify action items from the meeting"

Answer Format:
"assignee": <assignee>,
"action_item": <action item>,
"due_date": <due_date>
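Templates like this can also be generated programmatically. Below is a hypothetical helper (not part of the LeMUR API; any string works as an answer format) that renders a JSON-like answer format from a list of field names:

```python
def answer_format_template(fields):
    """Render a JSON-like answer format string with one "<field>"
    placeholder per field name. Purely illustrative."""
    return ",\n".join(f'"{field}": <{field}>' for field in fields)

action_item_format = answer_format_template(["assignee", "action_item", "due_date"])
```

Generating the template from one list of field names keeps the format consistent if you reuse it across many prompts.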

LeMUR Prompt Examples

In this section, you'll find concrete prompt examples for different use cases.

Important Notes:

  • The context parameter is of type any and can be a string, number, array, or object. This gives you the flexibility to work with both structured and unstructured data.
  • Each LeMUR API call can process up to a maximum of 100 transcripts.

LeMUR Set Up

Before we can use LeMUR, we need to transcribe one or more audio files. See this guide to learn how to transcribe files and obtain transcription ids. You can submit up to 100 files or 100 hours of audio, whichever is lower.

Next, let's define the LeMUR base endpoint so we can use it in all examples:
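A minimal sketch follows, assuming the v3 LeMUR REST API; the exact base URL and header names should be checked against the current API reference.

```python
# Base endpoint and auth header shared by the examples below. The v3 path
# is an assumption based on the LeMUR REST API at the time of writing.
LEMUR_BASE_URL = "https://api.assemblyai.com/lemur/v3"

def lemur_headers(api_key):
    """Headers sent with every LeMUR request; pass your own AssemblyAI key."""
    return {"authorization": api_key, "content-type": "application/json"}
```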

Summary Examples

Write a summary of the transcript aligning with the user's requested format and context.

We use a little helper function that takes in a list with all relevant transcription_ids, as well as additional context and answer_format parameters.

See the summary parameters for more details about all parameters.
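A sketch of such a helper is below, assuming the summary endpoint lives at `/generate/summary` and accepts `transcript_ids`, `context`, and `answer_format` fields (check the summary parameters for the current schema). Only the payload builder runs locally; `get_summary` performs the actual HTTP call:

```python
import json
import urllib.request

# Assumed endpoint path; verify against the current LeMUR API reference.
LEMUR_SUMMARY_URL = "https://api.assemblyai.com/lemur/v3/generate/summary"

def build_summary_payload(transcription_ids, context=None, answer_format=None):
    """Collect transcript ids plus optional context/answer_format into a payload."""
    payload = {"transcript_ids": transcription_ids}
    if context is not None:
        payload["context"] = context
    if answer_format is not None:
        payload["answer_format"] = answer_format
    return payload

def get_summary(transcription_ids, context=None, answer_format=None, api_key="YOUR_API_KEY"):
    """POST the summary request and return the model's response text."""
    body = json.dumps(build_summary_payload(transcription_ids, context, answer_format))
    request = urllib.request.Request(
        LEMUR_SUMMARY_URL,
        data=body.encode("utf-8"),
        headers={"authorization": api_key, "content-type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]
```

Splitting payload construction from the network call makes the helper easy to test and reuse across the summary examples below.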

Meeting Recap Example

Use the LeMUR Summary endpoint to generate a meeting recap.

First off, provide context for what the meeting is about.

Next, specify an answer format that you want the summary returned in.

Note: LeMUR can generate outputs in Markdown (see the example below).

Let's generate our LeMUR-powered summary in the specified answer_format using the context we provided and print the result.
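Putting the pieces together, the recap request body might look like the sketch below; the transcript id is a placeholder, and the field names assume the LeMUR summary parameters.

```python
# Meeting-recap request body; "YOUR_TRANSCRIPT_ID" is a placeholder, and the
# field names are assumptions based on the LeMUR summary parameters.
recap_payload = {
    "transcript_ids": ["YOUR_TRANSCRIPT_ID"],
    "context": (
        "This is a meeting with Morgan (Engineering Lead), Joshua (Marketing Lead), "
        "Alex (Sales Lead), Mary (Social Media Manager), and Josiah (Support Engineer). "
        "The purpose of the meeting is to discuss the launch of feature X."
    ),
    # Markdown works here because LeMUR can generate Markdown output.
    "answer_format": (
        "**<Meeting Title>**\n"
        "**Attendees:** <Attendees>\n"
        "**Recap:** <Brief Recap of Conversation Topics>\n"
        "**Outcomes:** <Outcomes>\n"
        "**Next Steps:** <Next Steps>"
    ),
}
```

POSTing this body to the summary endpoint and printing the response field of the result yields the recap.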

Podcast Titles Example

Use the LeMUR Summary endpoint to generate a podcast title.

First off, provide context for what the podcast is about.

Next, specify an answer format that you want the summary returned in.

Let's generate our LeMUR-powered summary in the specified answer_format using the context we provided and print the result.
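A podcast-title request might look like the sketch below; the transcript id is a placeholder and the context string is a hypothetical example, with field names again assumed from the LeMUR summary parameters.

```python
# Podcast-title request body; the transcript id is a placeholder and the
# context is a hypothetical example.
podcast_payload = {
    "transcript_ids": ["YOUR_TRANSCRIPT_ID"],
    "context": "An episode of a technology podcast about machine learning",
    "answer_format": "A catchy title of at most ten words",
}
```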

Question Answer Examples

Ask questions about the contents of the transcript.

Note: Avoid asking too many questions per request, as the generation length of the underlying model may be exceeded.

Again, let's define a helper function that takes a list of all transcription_ids, and another list with all questions. See the Q/A parameters for more details.
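A sketch of that helper follows, assuming the question-answer endpoint path and field names shown in the comments (check the Q/A parameters for the current schema). As before, only the payload builder runs locally; `ask_questions` performs the actual HTTP call:

```python
import json
import urllib.request

# Assumed endpoint path; verify against the current LeMUR API reference.
LEMUR_QA_URL = "https://api.assemblyai.com/lemur/v3/generate/question-answer"

def build_qa_payload(transcription_ids, questions):
    """questions is a list of dicts, each with a "question" key and optional
    "context", "answer_format", and "answer_options" keys (per the Q/A parameters)."""
    return {"transcript_ids": transcription_ids, "questions": questions}

def ask_questions(transcription_ids, questions, api_key="YOUR_API_KEY"):
    """POST the questions and return the list of question/answer pairs."""
    body = json.dumps(build_qa_payload(transcription_ids, questions))
    request = urllib.request.Request(
        LEMUR_QA_URL,
        data=body.encode("utf-8"),
        headers={"authorization": api_key, "content-type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]
```

Keeping the questions as plain dicts lets you attach per-question context, answer formats, and answer options exactly as described earlier in this guide.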

Continued Learning

That's all for this guide! You can check out some of the resources below for continuing your prompt engineering education.

Alternatively, feel free to check out our Blog or YouTube channel for a wide range of content on AI and development.