
🟢 Few-Shot Prompting

Last updated on August 7, 2024 by Sander Schulhoff

Takeaways
  • Learn Zero, One, and Few-Shot Prompting
  • Improve output accuracy
  • Improve output structure

What is Few-Shot Prompting?

In Few-Shot Prompting, you put examples of what you want the GenAI to do in your prompt. These examples are also called shots, which is how the technique gets its name.

Few-Shot prompting builds upon the idea that large language models can learn new information from their prompt. By providing a few demonstrations of the desired task or output format within the prompt itself, you guide the model to produce more accurate and consistently formatted responses.

Few-shot prompting sits on a spectrum of "shot" techniques:

  • Zero-shot prompting: No examples are provided
  • One-shot prompting: A single example is given
  • Few-shot prompting: Multiple examples (typically 2+) are included

This method is particularly useful when you need the AI to perform tasks that are difficult to explain through instructions alone, or when you require a specific output structure. By showing the model what you expect, you leverage its ability to recognize patterns and apply them to new inputs.

How to do Few-Shot Prompting

Now let's see an example of Few-Shot Prompting. Below is a prompt that attempts to classify customer feedback as positive or negative. The first three lines are examples of feedback along with their classifications. The fourth line is a new piece of feedback that has not been classified yet (It doesnt work!:). This line is called the test instance, because it is the part of the prompt that we want the LLM to analyze.

Prompt


Great product, 10/10: positive
Didn't work very well: negative
Super helpful, worth it: positive
It doesnt work!:

Here is the output of this prompt when passed through GPT-4:

AI Output


negative

The model saw that the first three examples were classified as either positive or negative, and used this information to classify the new example as negative.
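In practice, you can assemble a prompt like this programmatically from labeled examples. Below is a minimal sketch in Python (the `build_prompt` helper is our own illustration, not part of any library):

```python
def build_prompt(examples, test_instance):
    """Format labeled exemplars plus an unlabeled test instance.

    examples: list of (feedback, label) pairs shown to the model.
    The final line is left unlabeled so the model completes the pattern.
    """
    lines = [f"{text}: {label}" for text, label in examples]
    lines.append(f"{test_instance}:")
    return "\n".join(lines)

examples = [
    ("Great product, 10/10", "positive"),
    ("Didn't work very well", "negative"),
    ("Super helpful, worth it", "positive"),
]
prompt = build_prompt(examples, "It doesnt work!")
```

Sending `prompt` to an LLM should then yield a single-word classification, following the pattern set by the exemplars.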

Information Extraction with Few-Shot Prompting

You can also use Few-Shot Prompting to extract key details from unstructured text in a structured format. Here is a prompt that extracts the key details from a job posting as a bulleted list.

Prompt


INPUT: Software Engineer - Python specialist needed at TechCorp. 5+ years experience required. Salary range $90,000 - $120,000. Remote work available. Apply by June 30, 2024.
OUTPUT:
Position: Software Engineer
Specialization: Python
Company: TechCorp
Experience Required: 5+ years
Salary Range: $90,000 - $120,000
Work Type: Remote
Application Deadline: June 30, 2024

INPUT: Marketing Manager for GlobalBrand. MBA preferred. 3-5 years in consumer goods marketing. $75K-$95K DOE. Hybrid work model. Applications close July 15, 2024.
OUTPUT:
Position: Marketing Manager
Company: GlobalBrand
Education: MBA preferred
Experience Required: 3-5 years
Industry: Consumer goods
Salary Range: $75,000 - $95,000
Work Type: Hybrid
Application Deadline: July 15, 2024

INPUT: Data Scientist wanted at AI Innovations Ltd. PhD in Computer Science or related field. Minimum 2 years industry experience. Competitive salary €60,000 - €80,000 based on experience. On-site work in Berlin office. Apply by August 31, 2024.
OUTPUT:

Here's the potential output. Notice how we used the INPUT/OUTPUT format instead of the colon (:) format. We will address the significance of this later.

AI Output


Position: Data Scientist
Company: AI Innovations Ltd
Education: PhD in Computer Science or related field
Experience Required: Minimum 2 years
Salary Range: €60,000 - €80,000
Work Type: On-site
Location: Berlin
Application Deadline: August 31, 2024

This example demonstrates how few-shot prompting can be used to extract structured information from unstructured text and present it in a different format. The model learns from the provided examples to identify relevant information and maintain the specified list structure, adapting to the new output format while still capturing the key details from the job posting.
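Because the exemplars fix a `Key: Value` line format, the model's answer is also straightforward to post-process. Here is a hedged sketch (the `parse_extraction` helper is our own, and it assumes the model reproduces the format faithfully):

```python
def parse_extraction(output):
    """Split each 'Key: Value' line of a model's answer into a dict."""
    fields = {}
    for line in output.strip().splitlines():
        if ":" in line:
            # partition splits on the first colon only, so values may contain colons
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# A hypothetical model completion in the format shown above
sample_output = """Position: Data Scientist
Company: AI Innovations Ltd
Work Type: On-site
Location: Berlin"""

fields = parse_extraction(sample_output)
```

This is one reason structured few-shot outputs are so useful: downstream code can rely on the format the exemplars establish.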

Content Creation with Few-Shot Prompting

This example demonstrates how to use few-shot prompting to create concise travel itinerary descriptions for different types of vacations.

Prompt


Destination: Tokyo, Japan
Trip Type: Cultural Exploration
Itinerary: A 5-day immersion in Tokyo's blend of tradition and modernity. Visit ancient temples like Senso-ji, then explore futuristic districts like Akihabara. Experience a tea ceremony, try hands-on workshops in traditional crafts, and end with a day trip to Mt. Fuji for breathtaking views of Japan's iconic landmark.

Destination: Costa Rica
Trip Type: Eco-Adventure
Itinerary: 7 days of natural wonders and adrenaline rushes. Start in the Arenal Volcano area for hiking and zip-lining, then head to Monteverde Cloud Forest for canopy walks and wildlife spotting. Finish at Manuel Antonio National Park, combining rainforest exploration with relaxation on pristine beaches.

Destination: Rome, Italy
Trip Type: Culinary Journey
Itinerary: A 6-day feast for the senses in the Eternal City. Begin with a pasta-making class, followed by guided food tours through trendy Trastevere and the historic Jewish Ghetto. Visit local markets, enjoy wine tasting in the Roman countryside, and cap off the trip with a Michelin-starred dining experience.

Destination: New Zealand
Trip Type: Lord of the Rings Fan Tour
Itinerary:

Here's a potential output. Notice how we had two "inputs", the destination and the trip type. Few-Shot prompting can be very versatile!

AI Output


An 8-day journey through Middle-earth's real-life locations. Start in Auckland with a visit to Hobbiton Movie Set in nearby Matamata. Fly to Wellington to explore Weta Workshop and tour urban filming locations. Head to the South Island to see the majestic landscapes of Edoras in Mount Sunday and Pelennor Fields in Twizel. End your adventure in Queenstown, taking in the scenery of Isengard and Lothlorien in Glenorchy, with an optional helicopter tour over the Misty Mountains (Southern Alps). Throughout the trip, enjoy themed accommodations and meals inspired by hobbit cuisine.

This example shows how few-shot prompting can be used to generate creative content tailored to specific parameters (in this case, destination and trip type). The model learns from the provided examples to create a new itinerary description that matches the style and format of the examples while incorporating relevant details for the given input. This technique can be applied to various types of content generation where consistency in structure and adaptation to specific inputs are required.
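Multi-field exemplars like these can also be templated. A small sketch (the `format_exemplar` helper is our own illustration, with the itinerary texts abbreviated): leaving the final `Itinerary:` field empty turns the last block into the test instance.

```python
def format_exemplar(destination, trip_type, itinerary=None):
    """Render one Destination/Trip Type/Itinerary block.

    If itinerary is None, the Itinerary field is left blank for the
    model to fill in, making the block the test instance.
    """
    block = f"Destination: {destination}\nTrip Type: {trip_type}\nItinerary:"
    if itinerary is not None:
        block += f" {itinerary}"
    return block

prompt = "\n\n".join([
    format_exemplar("Tokyo, Japan", "Cultural Exploration", "A 5-day immersion..."),
    format_exemplar("Costa Rica", "Eco-Adventure", "7 days of natural wonders..."),
    format_exemplar("New Zealand", "Lord of the Rings Fan Tour"),
])
```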

Benefits of Few-Shot Prompting

There are three main benefits of using Few-Shot Prompting. First, you can express ideas and concepts that are difficult to convey through an instruction alone. Second, Few-Shot Prompting can significantly improve the accuracy of your outputs: when the LLM is given examples of inputs paired with appropriate outputs, it is better able to respond to new inputs. Third, you can structure your outputs much more reliably. This is helpful if you need the output in a specific format, such as JSON or YAML, so that you can parse it downstream.

Structuring Few-Shot Prompts

Structuring outputs is perhaps the most important benefit. If you need to copy-and-paste the AI's output into a spreadsheet, or use code to extract part of it, you need to understand how to take advantage of Few-Shot Prompting's ability to structure outputs. When we discuss structured outputs, we are referring to the format the output is in: is it a paragraph of text, a bulleted list, or something else, like a Markdown code block, a JSON object, or a YAML file?

Consider the customer feedback prompt and output from earlier. Because we organized the three exemplars in an input: classification format, the model generates a single word after the final line, rather than outputting a complete sentence such as "this review is positive." If we wanted a complete sentence to be output, we could adjust our examples as follows:

Prompt


"Great product, 10/10": this is a positive classification
"Didn't work very well": this is a negative classification
"Super helpful, worth it": this is a positive classification

We could also make outputs in JSON format by structuring the examples as follows:

Prompt


"Great product, 10/10": {"label": "positive"}
"Didn't work very well": {"label": "negative"}
"Super helpful, worth it": {"label": "positive"}
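When exemplars demonstrate JSON outputs like these, the model's completion can be fed directly into a parser. A minimal sketch (the completion string below is hypothetical, and the code assumes the model returns exactly one JSON object matching the exemplars):

```python
import json

# Hypothetical model completion for a new review, following the exemplars
model_output = '{"label": "negative"}'

result = json.loads(model_output)
# Guard against completions that drift from the exemplar schema
if result["label"] not in {"positive", "negative"}:
    raise ValueError(f"unexpected label: {result['label']}")
```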


More on structure

A key use case for Few-Shot Prompting is when you need the output to be structured in a specific way that is difficult to describe to the model. To understand this, consider a relevant example: say you are conducting an economic analysis and need to compile the names and occupations of well-known citizens in nearby towns by analyzing local newspaper articles. You would like the model to read each article and output a list of names and occupations in the First Last [OCCUPATION] format. To get the model to do this, you can show it a few examples of articles paired with correctly formatted outputs.

By showing the model examples of the correct output format, it is able to produce the correct output for new articles. We could produce this same output by using an instruction prompt instead, but the few-shot prompt works much more consistently.
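Once the model emits lines in the `First Last [OCCUPATION]` format, a simple pattern match recovers the fields. A sketch (the regular expression and the sample lines are our own assumptions about what such output might look like):

```python
import re

# Matches lines like "Jane Doe [MAYOR]"
PATTERN = re.compile(r"^(?P<first>\S+) (?P<last>\S+) \[(?P<occupation>[^\]]+)\]$")

def parse_citizens(output):
    """Extract (first, last, occupation) tuples from formatted output lines."""
    people = []
    for line in output.strip().splitlines():
        match = PATTERN.match(line.strip())
        if match:
            people.append((match["first"], match["last"], match["occupation"]))
    return people

sample = "Jane Doe [MAYOR]\nJohn Smith [TEACHER]"
people = parse_citizens(sample)
```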

Applications of Few-Shot Prompting

Few-Shot Prompting is an extremely versatile technique that can be applied across domains. Beyond basic classification, information extraction, and content creation, it has been applied to:

  • Sentiment analysis and topic categorization
  • Named entity recognition, where large language models can achieve reasonable accuracy without fine-tuning
  • Question answering, where examples of question-answer pairs improve the handling of diverse queries
  • Machine translation, as a method to quickly adapt models to new language pairs or domains
  • Code generation, assisting developers with code completion and generation
  • Dialogue systems, enhancing their capacity to handle diverse conversational scenarios

These applications highlight the potential of Few-Shot Prompting for a wide range of natural language processing tasks, particularly in scenarios where large labeled datasets are unavailable or rapid adaptation is necessary.

Best Practices for Few-Shot Prompting

There are many design decisions to consider when creating Few-Shot Prompts. For example: How many examples should you include? Which examples? In what order? We will learn more about the answers to these questions in a later section. For now, we will focus on different ways of structuring the examples in a Few-Shot Prompt.

The way that we structure Few-Shot Prompts is very important. By this, we mean: do we separate inputs and outputs with a colon (:) or with the words INPUT/OUTPUT? We have seen examples of both earlier in this article. How do you decide? We generally use the input: output format, and occasionally use the QA format, which is commonly used in research papers.

Prompt


Q: input
A: output

For longer inputs and outputs, we will use the INPUT/OUTPUT format. This allows for greater legibility of the prompt.

Prompt


INPUT: input
OUTPUT: output

When it comes to how to format your prompt, start simple. You will likely have to come up with your own formats eventually. If you are interested in learning more about optimizing your Few-Shot Prompts, check out our Advanced Prompt Engineering course.
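The separator styles discussed above can be captured in one small helper, which makes it easy to experiment with formats. A sketch (the `render` function and its joining conventions are our own):

```python
def render(pairs, style="colon"):
    """Render (input, output) exemplar pairs in a chosen separator style."""
    blocks = []
    for inp, out in pairs:
        if style == "colon":
            blocks.append(f"{inp}: {out}")
        elif style == "qa":
            blocks.append(f"Q: {inp}\nA: {out}")
        elif style == "input_output":
            blocks.append(f"INPUT: {inp}\nOUTPUT: {out}")
        else:
            raise ValueError(f"unknown style: {style}")
    # blank line between exemplars for legibility
    return "\n\n".join(blocks)

pairs = [("Add 3+3", "6"), ("Add 5+5", "10")]
colon_prompt = render(pairs)
qa_prompt = render(pairs, style="qa")
```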

Variants of Shot Prompting

Now let's discuss in more detail the two other types of "shot" prompting that exist. The only difference between these variants is how many examples you show the model.

Zero-Shot Prompting

Zero-shot prompting is the most basic form of prompting. It simply shows the model a prompt without examples and asks it to generate a response. As such, all of the instruction and role prompts that you have seen so far are zero-shot prompts. An additional example of a zero-shot prompt is:

Prompt


Add 2+2:

It is zero-shot since we have not shown the model any complete examples.

One-Shot Prompting

One-shot prompting is when you show the model a single example. For example, the one-shot analog of the zero-shot prompt Add 2+2: is:

Prompt


Add 3+3: 6
Add 2+2:

We have shown the model only one complete example (Add 3+3: 6), so this is a one-shot prompt.

Few-shot prompting

Few-shot prompting is when you show the model 2 or more examples. The few-shot analog of the above two prompts is:

Prompt


Add 3+3: 6
Add 5+5: 10
Add 2+2:

This is a few-shot prompt since we have shown the model at least 2 complete examples (Add 3+3: 6 and Add 5+5: 10). Usually, the more examples you show the model, the better the output will be, so few-shot prompting is preferred over zero-shot and one-shot prompting in most cases.
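Since the only difference between the variants is the number of exemplars, one helper can produce all three. A sketch (the `shot_prompt` function is our own illustration):

```python
def shot_prompt(exemplars, query, n_shots):
    """Build a zero-, one-, or few-shot prompt from the first n_shots exemplars."""
    lines = [f"{q}: {a}" for q, a in exemplars[:n_shots]]
    lines.append(f"{query}:")  # the test instance, left for the model to complete
    return "\n".join(lines)

exemplars = [("Add 3+3", "6"), ("Add 5+5", "10")]
zero_shot = shot_prompt(exemplars, "Add 2+2", 0)
one_shot = shot_prompt(exemplars, "Add 2+2", 1)
few_shot = shot_prompt(exemplars, "Add 2+2", 2)
```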

Limitations of Few-Shot Prompting

While few-shot prompting is a powerful technique, it's important to be aware of its limitations:

  • The model's context window limits the number of examples you can include, which is particularly challenging for complex tasks that require numerous or lengthy examples.
  • The model may overgeneralize from the given examples, leading to errors on edge cases not represented in the prompt.
  • The model may fixate on superficial patterns in the examples rather than understanding the underlying task.
  • Results can be sensitive to the choice and order of examples, potentially leading to inconsistent results across different runs.
  • For some specialized or highly technical tasks, a few examples may not provide enough information for the model to perform accurately.
  • The method may be less effective for tasks requiring real-world knowledge beyond what the examples contain, as the model is primarily learning from the in-context examples rather than its pre-trained knowledge.
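One practical way to work within the context-window limit is to budget the examples before sending the prompt. A rough sketch, using the common approximation of about four characters per token (both the heuristic and the `fit_examples` helper are assumptions, not exact tokenizer counts):

```python
def fit_examples(examples, query, max_tokens, chars_per_token=4):
    """Greedily keep leading examples while staying within a rough token budget."""
    budget = max_tokens * chars_per_token - len(query)
    kept, used = [], 0
    for example in examples:
        cost = len(example) + 1  # +1 for the joining newline
        if used + cost > budget:
            break
        kept.append(example)
        used += cost
    return "\n".join(kept + [query])

examples = [f"Add {i}+{i}: {2 * i}" for i in range(1, 6)]
# Tiny budget for illustration: only the first two examples fit
prompt = fit_examples(examples, "Add 9+9:", max_tokens=10)
```

For production use, a real tokenizer for your target model would replace the character heuristic.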

Conclusion

Few-shot prompting is a powerful and versatile technique that significantly enhances the capabilities of large language models. By providing multiple examples within the prompt, it enables models to adapt quickly to specific tasks and generate more accurate, appropriately structured responses.

Key strengths of few-shot prompting include:

  • Improved output accuracy
  • Better control over output structure
  • Quick adaptation to new domains without fine-tuning

However, it's important to be aware of its limitations, such as context window constraints and sensitivity to example selection and ordering. As GenAI technology evolves, few-shot prompting remains a valuable tool for bridging pre-trained knowledge and task-specific requirements. Its applications span various domains, from content creation to data analysis and dialogue systems. For AI practitioners and researchers, mastering few-shot prompting opens up new possibilities in GenAI interaction, allowing for more nuanced control over outputs and enhanced model performance across a wide range of tasks. As research in this area continues, we can expect further improvements in the efficiency and effectiveness of few-shot prompting techniques, expanding the capabilities of AI systems in real-world scenarios.

FAQ

Why is few-shot prompting a useful prompting technique?

Showing exemplars in your model input is an effective way of implying the desired structure of the AI response.

What is the difference between zero-shot, one-shot, and few-shot prompting? Which is best?

Zero-shot, one-shot, and few-shot prompting refer to the number of examples provided in the model input (zero, one, and a few, respectively). Few-shot prompting is usually preferred, because showing the model more examples generally improves its outputs.

How many examples should I include in a few-shot prompt?

The optimal number of examples varies with task complexity and model capability. Generally, 2-5 examples are sufficient for simple tasks. However, it's important to experiment and find the right balance, as too many examples can lead to overfitting or exceed the model's context window. We often include 10 examples for harder tasks, and some researchers include hundreds of examples!

Footnotes

  1. Logan IV, R., Balazevic, I., Wallace, E., Petroni, F., Singh, S., & Riedel, S. (2022). Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models. Findings of the Association for Computational Linguistics: ACL 2022, 2824–2835. https://doi.org/10.18653/v1/2022.findings-acl.222

  2. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338.

  3. Few-shot prompting is also called in-context learning.

  4. The technical word for these examples is exemplars.

  5. Zhao, T. Z., Wallace, E., Feng, S., Klein, D., & Singh, S. (2021). Calibrate Before Use: Improving Few-Shot Performance of Language Models.

  6. Gao, T., Fisch, A., & Chen, D. (2021). Making Pre-trained Language Models Better Few-shot Learners. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 3816–3830.

  7. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., & others. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901.

  8. Schick, T., & Schütze, H. (2021). Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 255–269.

  9. Garcia, X., Constant, N., Parikh, A., & Firat, O. (2021). Few-shot learning for cross-lingual natural language inference. arXiv Preprint arXiv:2104.14690.

  10. Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. de O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., & others. (2021). Evaluating Large Language Models Trained on Code. arXiv Preprint arXiv:2107.03374.

  11. Madotto, A., Lin, Z., Zhou, Y., Shin, J., & Fung, P. (2021). Few-shot Bot: Prompt-based Learning for Dialogue Systems. arXiv Preprint arXiv:2110.08118.

  12. These techniques are useful when you don't have examples.

Copyright © 2024 Learn Prompting.