🟢 Chain-of-Thought Prompting

Last updated on August 7, 2024 by Sander Schulhoff

What is Chain-of-Thought Prompting?

Chain-of-Thought (CoT) Prompting is a recently developed prompting method that encourages Large Language Models (LLMs) to explain their reasoning. It contrasts with standard prompting in that the model is asked not only for an answer but also for the intermediate steps that lead to that answer.

The image below from Wei et al. shows a standard Few-Shot prompt on the left and a Chain-of-Thought prompt on the right. The comparison illustrates the difference: while the standard approach asks the model for a solution directly, the CoT approach guides the LLM to unfold its reasoning, often leading to more accurate and interpretable results.

Regular Prompting vs CoT (Wei et al.)

The main idea of CoT is that if the Few-Shot exemplars shown to the LLM include an explanation of the reasoning process, the LLM will also explain its reasoning when answering the prompt. This explanation of reasoning often leads to more accurate results, as the contrast below shows.
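For concreteness, here is the difference between the two exemplar styles, adapted from the Wei et al. figure. A standard Few-Shot exemplar supplies only the final answer:

```
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can
has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
```

A Chain-of-Thought exemplar walks through the reasoning before stating the answer:

```
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can
has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis
balls. 5 + 6 = 11. The answer is 11.
```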

How to Use Chain-of-Thought Prompting

Here are a few demos. The first shows GPT-3 (text-davinci-003) failing to solve a simple word problem; the second shows it successfully solving the same problem by using CoT prompting.

Incorrect (standard prompt)

Correct (CoT prompt)
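The embedded demos are not reproduced here, so below is a minimal sketch of the same idea in Python using the OpenAI SDK. The two questions come from the Wei et al. figure; the model name is a stand-in (text-davinci-003 has since been deprecated), so treat this as illustrative rather than the original demo setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A Few-Shot exemplar whose answer spells out the reasoning, followed by the
# question we actually want solved. Both questions are from Wei et al.'s figure.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in model name; swap in any capable model
    messages=[{"role": "user", "content": cot_prompt}],
)

# The model should imitate the exemplar: reason step by step, then conclude
# with "The answer is 9."
print(response.choices[0].message.content)
```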

Chain-of-Thought Results

CoT has been shown to be effective at improving results on arithmetic, commonsense, and symbolic reasoning tasks. In particular, CoT-prompted PaLM 540B achieves a 57% solve rate on GSM8K, state of the art at the time.

Comparison of models on the GSM8K benchmark (Wei et al.)

Limitations of Chain-of-Thought

Importantly, according to Wei et al., "CoT only yields performance gains when used with models of ∼100B parameters". Smaller models wrote illogical chains of thought, which led to worse accuracy than standard prompting. The performance boost that models get from CoT prompting tends to grow with model size.

Notes

No language models were ~~hurt~~ finetuned in the process of writing this chapter 😊.

Conclusion

CoT prompting significantly advances how we interact with Large Language Models by eliciting an articulated reasoning process. This approach improves both the accuracy and the interpretability of model outputs, particularly on complex reasoning tasks. Since its effectiveness grows with model size, CoT prompting also underscores the potential for AI systems that provide not just correct answers but transparent insight into how those answers were reached, bridging the gap between human reasoning and artificial intelligence.

FAQ

Why is Chain-of-Thought prompting effective?

The idea behind Chain-of-Thought prompting is that showing the LLM examples of step-by-step reasoning encourages it to follow a similar reasoning process in its own response. This often makes responses more accurate and reliable than prompting that asks for the answer directly.

What is a limitation of Chain-of-Thought prompting?

Chain-of-Thought prompting has been shown to be ineffective on smaller models (roughly below 100B parameters), which can produce illogical chains of thought and end up less accurate than with standard prompting. The gains from CoT prompting grow with model size.

Footnotes

  1. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models.

  2. Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., … Fiedel, N. (2022). PaLM: Scaling Language Modeling with Pathways.

  3. Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., & Schulman, J. (2021). Training Verifiers to Solve Math Word Problems.
