Few-Shot Prompting
Few-Shot Prompting is a technique in which an LLM is first shown a few input-output examples (exemplars) in the prompt so that it generates more desirable output for a new input. The main design decisions behind few-shot prompting are as follows:
- Exemplar selection -- The method of selecting the few examples to prompt the model.
- Exemplar ordering -- The method of ordering the examples to prompt the model.
- Exemplar number -- The number of examples to prompt the model with. Generally, more examples are better, but with diminishing returns (around 20).
- Exemplar label quality -- The correctness of the labels in the provided examples. How necessary correct labels are is unclear, as some work suggests that giving models exemplars with incorrect labels may not hurt performance.
- Input distribution -- The balance of labels across exemplars, i.e., how many examples of each label to provide to the model.
- Input-label pairing format -- The formatting of exemplars. One common format is "Q: input, A: label", but the optimal format may vary across tasks.
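The design decisions above can be sketched as a simple prompt builder. This is a minimal illustration, not a prescribed implementation: the sentiment exemplars, the random selection strategy, and the default shot count are all assumptions chosen for the example.

```python
import random


def build_few_shot_prompt(exemplars, query, num_shots=4, seed=0):
    """Assemble a few-shot prompt in the common "Q: ..., A: ..." format.

    exemplars: list of (input_text, label) pairs to draw from.
    num_shots: how many exemplars to include (the "exemplar number" decision).
    seed: fixed seed so exemplar selection and ordering are reproducible.
    """
    rng = random.Random(seed)
    # Exemplar selection + ordering: here, a simple random sample.
    shots = rng.sample(exemplars, k=min(num_shots, len(exemplars)))
    # Input-label pairing format: "Q: input / A: label" blocks.
    blocks = [f"Q: {text}\nA: {label}" for text, label in shots]
    # The new input is appended with an empty "A:" for the model to complete.
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)


# Hypothetical sentiment-classification exemplars for illustration.
exemplars = [
    ("I loved this movie!", "positive"),
    ("Terrible acting and a dull plot.", "negative"),
    ("An instant classic.", "positive"),
    ("I want my money back.", "negative"),
]

prompt = build_few_shot_prompt(exemplars, "The soundtrack was wonderful.", num_shots=2)
print(prompt)
```

The resulting string would be sent to the model as-is; varying `num_shots`, the sampling strategy, or the "Q:/A:" template corresponds directly to the design decisions listed above.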
Valeriia Kuka
Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.
Footnotes
- Brown, T. B., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.