3.2.2. Zero-shot, Single-shot, and Few-shot Prompting
First Principle: The number of examples provided within a prompt (from zero to a few) can significantly influence the model's ability to understand the desired task and output format. This technique, known as in-context learning, teaches the model what you want by showing it examples, without any change to the model's weights.
- Zero-shot Prompting:
- Concept: You ask the model to perform a task without giving it any prior examples of how to do it. This relies entirely on the model's pre-trained knowledge.
- Example:
Classify this text as positive or negative: "I loved the movie!"
- Single-shot Prompting (or One-shot):
- Concept: You provide a single example of the task within the prompt to show the model what you want.
- Example:
Text: "This was a waste of time." Sentiment: Negative
Text: "I loved the movie!" Sentiment:
- Few-shot Prompting:
- Concept: You provide multiple examples (typically 2-5) in the prompt. This gives the model a clearer picture of the pattern, format, and nuances of the task, often leading to much better results; a code sketch after the example below shows how such a prompt can be assembled.
- Example:
Text: "This was a waste of time." Sentiment: Negative
Text: "An incredible experience from start to finish." Sentiment: Positive
Text: "The plot was a bit confusing." Sentiment: Neutral
Text: "I loved the movie!" Sentiment:
Scenario: A team is trying to get an LLM to extract product names and codes from unstructured text. Their zero-shot prompts are failing because the model returns the extracted data in an inconsistent format.
Reflection Question: How would you advise the team to use few-shot prompting to solve this problem? What would a few-shot prompt look like for this task?
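
One possible shape for such a prompt is sketched below: show the model a few labeled extractions in exactly the output format the team wants (here, one JSON object per product), then append the new text. The product names, codes, and field names are illustrative assumptions, not real data.

```python
# Hypothetical few-shot extraction prompt for the scenario above.
# Product names, codes, and the JSON output format are made-up placeholders.

FEW_SHOT_EXAMPLES = """Extract every product name and product code from the text.
Respond with one JSON object per product on its own line: {"name": ..., "code": ...}

Text: "Please restock the Aurora Desk Lamp (code AX-201) before Friday."
Output: {"name": "Aurora Desk Lamp", "code": "AX-201"}

Text: "Customers keep asking about the TrailPro 45L backpack, SKU TP-45L-BLK."
Output: {"name": "TrailPro 45L backpack", "code": "TP-45L-BLK"}
"""

def build_extraction_prompt(new_text: str) -> str:
    """Append the unlabeled text after the labeled examples."""
    return f'{FEW_SHOT_EXAMPLES}\nText: "{new_text}"\nOutput:'

if __name__ == "__main__":
    print(build_extraction_prompt("The Nimbus Kettle, item NK-7, is back in stock."))
```

Because the examples demonstrate the exact output format, the model is far more likely to return parseable, consistent results than with the team's zero-shot prompts.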
💡 Tip: When a zero-shot prompt doesn't work well, the natural next step is to try few-shot prompting. It's one of the easiest and most effective ways to improve output quality without the cost and complexity of fine-tuning.