
Introduction


Sander Schulhoff

Takeaways
  • Agent Overview: Agents are LLMs that can use external tools, such as APIs and databases, to incorporate information beyond the model's training and enhance problem-solving abilities.

So far, we have explored many intermediate and advanced prompting techniques. In this section, we turn to methods in which LLMs interact with external tools to solve complex reasoning tasks.

These methods are built around agents: GenAIs (usually LLMs) that can use tools and take actions.

Although this area of research is still developing, it has already sparked significant innovations in prompting techniques. These methods broaden the range of problems that prompting can address. By performing tasks such as conducting internet searches, querying an external calculator, or executing code externally, the LLM can incorporate information it wasn’t trained on into its context.

These techniques often emerge to compensate for LLM limitations in areas like mathematical calculation, reasoning, and factual accuracy. For instance, when asked a question like "What is 19 percent of 5619?", an LLM may struggle to provide an accurate answer. Rather than attempting the arithmetic itself, the LLM can instead emit a call to a calculator tool. A response might look like this:

Prompt

What is 19 percent of 5619?

AI Output

CALCULATOR[(0.19) * (5619)]

In this example, instead of producing the answer itself, the LLM provides the structure, while the calculator performs the actual computation. This effectively offloads tasks that are challenging for LLMs to tools specifically designed for them. It’s clear how these techniques can be essential for developing GenAI agents.
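
To make this offloading pattern concrete, below is a minimal Python sketch of the dispatch step: the program scans the model's output for a CALCULATOR[...] directive and, if one is present, evaluates the expression locally instead of trusting the model's arithmetic. The hard-coded llm_output string is a stand-in for a real model call, and the CALCULATOR[...] syntax is just the convention from the example above, not a standard API.

```python
import re

def dispatch_calculator(llm_output: str) -> str:
    """If the model emitted CALCULATOR[...], evaluate the expression locally."""
    match = re.search(r"CALCULATOR\[(.+?)\]", llm_output)
    if match is None:
        return llm_output  # no tool call; keep the model's direct answer
    # Evaluate with builtins disabled; a production agent would use a proper
    # arithmetic parser rather than eval.
    result = eval(match.group(1), {"__builtins__": {}}, {})
    return str(result)

# Stand-in for a real model response to "What is 19 percent of 5619?"
llm_output = "CALCULATOR[(0.19) * (5619)]"
print(dispatch_calculator(llm_output))  # 1067.61 (up to float rounding)
```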

While this is a simple example, these techniques become significantly more complex when API calls, code execution, and reasoning come into play. Some methods we’ll cover include MRKL Systems, ReAct, and PAL, though many more are emerging as this field evolves.
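
As a preview of that added complexity, here is a hedged sketch of the generic loop such methods share: the model proposes an action, the program runs the matching tool, and the observation is appended to the prompt until the model signals it is finished. Everything here is an illustrative assumption: call_llm stands for any function mapping a prompt to the model's next output, and the ACTION[argument] and FINISH[answer] conventions are invented for this sketch rather than taken from MRKL, ReAct, or PAL.

```python
import re
from typing import Callable, Dict

def agent_loop(
    question: str,
    call_llm: Callable[[str], str],  # assumption: prompt in, model text out
    tools: Dict[str, Callable[[str], str]],
    max_steps: int = 5,
) -> str:
    """Alternate model calls and tool executions until a final answer."""
    transcript = question
    for _ in range(max_steps):
        output = call_llm(transcript).strip()
        match = re.match(r"(\w+)\[(.*)\]", output)
        if match is None:
            return output  # model answered in plain text
        action, argument = match.groups()
        if action == "FINISH":
            return argument  # model signaled completion
        observation = tools[action](argument)  # run the chosen tool
        transcript += f"\n{output}\nObservation: {observation}"
    return "No answer within the step budget."
```

Wired up with a tools dictionary containing the calculator dispatch above, this loop reproduces the earlier example; the methods covered in the following pages differ mainly in how the prompt elicits actions and interleaves reasoning between them.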


🟦 LLMs Using Tools

🟦 Code as Reasoning

🟦 LLMs that Reason and Act

Footnotes

  1. Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., Shoham, Y., Bata, H., Levine, Y., Leyton-Brown, K., Muhlgay, D., Rozen, N., Schwartz, E., Shachaf, G., Shalev-Shwartz, S., Shashua, A., & Tenenholtz, M. (2022). MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.

  2. Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing Reasoning and Acting in Language Models.

  3. Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., & Neubig, G. (2022). PAL: Program-aided Language Models.