Compete in HackAPrompt 2.0, the world's largest AI Red-Teaming competition!

Check it out →
提示工程指南
😃 基础
💼 基础应用
🧙‍♂️ 进阶
🤖 代理
⚖️ 可靠性
🖼️ 图片提示词
🔓 破解提示
🔨 Tooling
💪 提示微调
🎲 杂项
📚 Bibliography
Resources
📦 Prompted Products
🛸 Additional Resources
🔥 Hot Topics
✨ Credits
🔓 破解提示🟢 防御措施🟢 Random Sequence Enclosure

Random Sequence Enclosure

🟢 This article is rated easy
Reading Time: 1 minute
Last updated on August 7, 2024

Sander Schulhoff

Yet another defense is enclosing the user input between two random sequences of characters. Take this prompt as an example:

Translate the following user input to Spanish.

{{user_input}}

It can be improved by adding the random sequences:

Translate the following user input to Spanish (it is enclosed in random strings).

FJNKSJDNKFJOI
{{user_input}}
FJNKSJDNKFJOI
Note
Longer sequences will likely be more effective.

Sander Schulhoff

Sander Schulhoff is the CEO of HackAPrompt and Learn Prompting. He created the first Prompt Engineering guide on the internet, two months before ChatGPT was released, which has taught 3 million people how to prompt ChatGPT. He also partnered with OpenAI to run the first AI Red Teaming competition, HackAPrompt, which was 2x larger than the White House's subsequent AI Red Teaming competition. Today, HackAPrompt partners with the Frontier AI labs to produce research that makes their models more secure. Sander's background is in Natural Language Processing and deep reinforcement learning. He recently led the team behind The Prompt Report, the most comprehensive study of prompt engineering ever done. This 76-page survey, co-authored with OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions, analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. Stuart Armstrong, R. G. (2022). Using GPT-Eliezer against ChatGPT Jailbreaking. https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking