
Few-Shot Prompting

Reading Time: 1 minute
Last updated on November 12, 2024

Valeriia Kuka

Few-Shot Prompting is a technique in which an LLM is shown a few examples (exemplars) of a task in the prompt before being asked to perform it, guiding the model toward the desired output. The design decisions behind Few-Shot Prompting are as follows:

  • Exemplar selection -- How the few examples shown to the model are chosen.
  • Exemplar ordering -- How the chosen examples are ordered within the prompt.
  • Exemplar number -- How many examples to include. More examples are generally better, but returns diminish after roughly 20.
  • Exemplar label quality -- How accurate the example labels are. The necessity of strictly correct labels is unclear, as some work suggests that providing exemplars with incorrect labels may not hurt performance.
  • Input distribution -- How many examples of each label to provide to the model.
  • Input-label pairing format -- How each input is paired with its label. One common format is "Q: input, A: label", but the optimal format may vary across tasks.
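The design decisions above can be sketched in code. The helper below assembles exemplars into a single few-shot prompt using the "Q: input, A: label" pairing format mentioned above; the function name, the sentiment exemplars, and the balanced label distribution are illustrative assumptions, not a fixed standard.

```python
def build_few_shot_prompt(exemplars, query):
    """Format (input, label) exemplars plus a new query into one prompt.

    The order and number of `exemplars` reflect the ordering and
    number decisions; the "Q:/A:" layout is one common pairing format.
    """
    lines = []
    for text, label in exemplars:
        lines.append(f"Q: {text}")
        lines.append(f"A: {label}")
    lines.append(f"Q: {query}")
    lines.append("A:")  # left open for the model to complete with a label
    return "\n".join(lines)


# Hypothetical exemplars with a balanced input distribution
# (one example per label).
exemplars = [
    ("The movie was fantastic", "positive"),
    ("I hated every minute of it", "negative"),
    ("It was okay, nothing special", "neutral"),
]

prompt = build_few_shot_prompt(exemplars, "Best purchase I've made all year")
print(prompt)
```

The resulting string would be sent to the LLM as-is; the model is expected to continue after the final "A:" with a label consistent with the exemplars.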

Valeriia Kuka

Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.

Footnotes

  1. Brown, T. B. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.