Zero-Shot Chain-of-Thought (Zero-Shot CoT) prompting is a follow-up to Chain-of-Thought (CoT) prompting that introduces an incredibly simple Zero-Shot prompt. Kojima et al. find that by appending the words "Let's think step by step." to the end of a question, Large Language Models can generate a Chain-of-Thought that answers the question. From this Chain-of-Thought, they are able to extract more accurate answers.
Technically, the full Zero-Shot CoT process involves two separate prompts/completions. In the image below, the top bubble on the left generates a Chain-of-Thought, while the top bubble on the right takes the output of the first prompt (including the first prompt itself) and extracts the answer from the Chain-of-Thought. This second prompt is a self-augmented prompt.
Full Zero-Shot CoT Process (Kojima et al.)
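To make the two-step process concrete, here is a minimal sketch using the OpenAI Python SDK. The model name and the example question are placeholders, and the answer-extraction trigger follows the arithmetic-style trigger reported by Kojima et al.; this is an illustration of the process, not the authors' code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def complete(prompt: str) -> str:
    """Send a single prompt and return the model's completion."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat model should work
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


question = (
    "If I have 3 apples and buy 2 more bags of 4 apples each, "
    "how many apples do I have?"
)

# Step 1: reasoning extraction -- append the Zero-Shot CoT trigger.
reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
reasoning = complete(reasoning_prompt)

# Step 2: answer extraction -- feed the first prompt and its output back
# in, followed by a task-specific answer trigger (this one is the
# arithmetic trigger from Kojima et al.).
answer_prompt = (
    f"{reasoning_prompt}\n{reasoning}\n"
    "Therefore, the answer (arabic numerals) is"
)
answer = complete(answer_prompt)
print(answer)
```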
Here are a few demos (which only perform reasoning extraction). The first demo shows GPT-3 (text-davinci-003) failing a simple math question, while the second uses a Zero-Shot CoT prompt and solves the problem successfully. Note how much simpler the Zero-Shot CoT prompt is compared to the CoT prompt.
Zero-Shot CoT was also effective in improving results on arithmetic, commonsense, and symbolic reasoning tasks. However, unsurprisingly, it was usually not as effective as CoT prompting. An important use case for Zero-Shot CoT is when obtaining Few-Shot examples for CoT prompting is difficult.
Kojima et al. experiment with a number of different Zero-Shot CoT prompts (e.g. "Let's solve this problem by splitting it into steps." or "Let's think about this logically."), but they find that "Let's think step by step." is most effective for their chosen tasks.
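If you want to compare trigger phrases on your own task, a quick harness might look like the sketch below, which reuses the hypothetical `complete()` helper and `question` from the earlier example.

```python
# Candidate triggers from Kojima et al.; swap in your own to experiment.
triggers = [
    "Let's think step by step.",
    "Let's solve this problem by splitting it into steps.",
    "Let's think about this logically.",
]

for trigger in triggers:
    completion = complete(f"Q: {question}\nA: {trigger}")
    print(f"--- {trigger}\n{completion}\n")
```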
The extraction step often must be task-specific, making Zero-Shot CoT less generalizable than it appears at first.
Anecdotally, I've found that Zero-Shot CoT style prompts are sometimes effective in improving the length of completions for generative tasks. For example, consider the standard prompt "Write a story about a frog and a mushroom who become friends." Appending the words "Let's think step by step." to the end of this prompt leads to a much longer completion.
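Checking that anecdote yourself is straightforward. The sketch below (again reusing the hypothetical `complete()` helper) compares word counts with and without the trigger; completions are nondeterministic, so treat any single run as anecdotal.

```python
base_prompt = "Write a story about a frog and a mushroom who become friends."

plain_story = complete(base_prompt)
cot_story = complete(base_prompt + " Let's think step by step.")

# Compare rough completion lengths in words.
print(f"Plain prompt: {len(plain_story.split())} words")
print(f"Zero-Shot CoT prompt: {len(cot_story.split())} words")
```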
Zero-Shot Chain-of-Thought, despite its simplicity, tends to improve model performance by eliciting step-by-step reasoning in the response. It is encouraging that this technique can be used to solve complex tasks without needing to provide multiple input exemplars, as Chain-of-Thought prompting requires.
Zero-Shot CoT and CoT prompting both aim to improve model responses and extract more accurate answers by generating logic-based reasoning. In Zero-Shot CoT, however, we do not have to include input exemplars of Chain-of-Thought responses; instead, we simply append the words "Let's think step by step." to the end of the input.
Zero-Shot CoT was most effective in tasks that involve arithmetic, commonsense reasoning, and symbolic reasoning.
Yes. Unsurprisingly, Zero-Shot CoT is not as effective as CoT prompting, especially when the reasoning tasks are more complex. Also, the answer extraction step is often task-specific and not as generalizable as it may appear at first.
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led the team behind The Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.