LLMs that Reason and Act
- ReAct Extension: ReAct systems enhance MRKL frameworks by combining reasoning with actions, enabling LLMs to improve performance on complex tasks through iterative thought-action loops.
What is ReAct?
ReAct (Reason + Act) is a paradigm that enables Large Language Models (LLMs) to solve complex tasks through natural language reasoning and actions. It allows an LLM to perform certain actions, such as retrieving external information, and then reason based on the retrieved data.
ReAct systems extend Modular Reasoning, Knowledge, and Language (MRKL) systems by adding the ability to reason about the actions they can perform.
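To make the thought-action loop concrete, here is a minimal Python sketch of how a ReAct-style controller could be wired up. The `llm` callable and the `search` tool are placeholders introduced only for illustration, and the `Search[...]` / `Finish[...]` action format follows the convention used in the ReAct paper; this is a sketch under those assumptions, not the paper's implementation.

```python
# Minimal sketch of a ReAct-style Thought -> Action -> Observation loop.
# `llm` and `search` are hypothetical placeholders, not a specific library API.

def search(query: str) -> str:
    """Hypothetical external tool. Replace with a real search/retrieval call."""
    return f"(stub) top search result for: {query}"


def react_loop(question: str, llm, max_steps: int = 5) -> str:
    """Reason, act, observe, and repeat until Finish[...] or the step budget runs out."""
    prompt = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        # Reason: ask the model for its next thought and proposed action.
        output = llm(prompt + f"Thought {step}:")
        prompt += f"Thought {step}: {output}\n"

        if "Finish[" in output:
            # The model decided it has enough information to answer.
            return output.split("Finish[", 1)[1].split("]", 1)[0]
        if "Search[" in output:
            # Act: execute the proposed search as an external tool call.
            query = output.split("Search[", 1)[1].split("]", 1)[0]
            observation = search(query)
            # Observe: feed the result back so the next thought can use it.
            prompt += f"Obs {step}: {observation}\n"
    return "No answer found within the step budget."
```

The key design point is that every observation is appended to the prompt, so each new reasoning step can condition on what the previous action returned.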
Example
Below is an example from HotPotQA, a question-answering dataset that requires complex reasoning. ReAct lets the LLM reason about the question (Thought 1), take an action such as querying Google (Act 1), receive an observation (Obs 1), and continue the thought-action loop until it reaches a conclusion (Act 3).
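The original example figure is not reproduced here; the trace below is an illustrative reconstruction of what such a ReAct transcript looks like. The question and tool outputs were chosen for illustration and are not taken from the dataset.

```
Question: Which year was the university where Ada Lovelace's tutor taught founded?

Thought 1: I need to find who tutored Ada Lovelace, then which university they taught at.
Act 1:     Search[Ada Lovelace tutor]
Obs 1:     Augustus De Morgan tutored Ada Lovelace in mathematics.

Thought 2: Now I need the university where Augustus De Morgan taught and its founding year.
Act 2:     Search[Augustus De Morgan university]
Obs 2:     De Morgan was the first professor of mathematics at University College London, founded in 1826.

Thought 3: University College London was founded in 1826, so that is the answer.
Act 3:     Finish[1826]
```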
Readers familiar with Reinforcement Learning (RL) may recognize this process as similar to the classic RL loop of state, action, and reward. The ReAct authors provide some formalization of this in their paper.
Results
Google experimented with ReAct using the PaLM LLM, and the results showed promising improvements on complex reasoning tasks. ReAct was also tested on datasets such as FEVER, which focuses on fact extraction and verification.
Sander Schulhoff
Sander Schulhoff is the CEO of HackAPrompt and Learn Prompting. He created the first Prompt Engineering guide on the internet, two months before ChatGPT was released, which has taught 3 million people how to prompt ChatGPT. He also partnered with OpenAI to run the first AI Red Teaming competition, HackAPrompt, which was 2x larger than the White House's subsequent AI Red Teaming competition. Today, HackAPrompt partners with the Frontier AI labs to produce research that makes their models more secure. Sander's background is in Natural Language Processing and deep reinforcement learning. He recently led the team behind The Prompt Report, the most comprehensive study of prompt engineering ever done. This 76-page survey, co-authored with OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions, analyzed 1,500+ academic papers and covered 200+ prompting techniques.
Footnotes
- Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing Reasoning and Acting in Language Models.
- Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W. W., Salakhutdinov, R., & Manning, C. D. (2018). HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering.
- Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., … Fiedel, N. (2022). PaLM: Scaling Language Modeling with Pathways.
- Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: a large-scale dataset for Fact Extraction and VERification.