Prompt Engineering Guide
πŸ˜ƒ Basics
πŸ’Ό Applications
πŸ§™β€β™‚οΈ Intermediate
🧠 Advanced
Special Topics
🌱 New Techniques
πŸ€– Agents
βš–οΈ Reliability
πŸ–ΌοΈ Image Prompting
πŸ”“ Prompt Hacking
πŸ”¨ Tooling
πŸ’ͺ Prompt Tuning
πŸ—‚οΈ RAG
🎲 Miscellaneous
Models
πŸ”§ Models
Resources
πŸ“™ Vocabulary Resource
πŸ“š Bibliography
πŸ“¦ Prompted Products
πŸ›Έ Additional Resources
πŸ”₯ Hot Topics
✨ Credits
πŸ”“ Prompt Hacking🟒 Defensive Measures🟒 Random Sequence Enclosure

Random Sequence Enclosure

🟒 This article is rated easy
Reading Time: 1 minute
Last updated on August 7, 2024

Sander Schulhoff

Takeaways
  • Enclosing user input between random sequences of characters helps the LLM distinguish it from developer instructions, which it can prioritize.

What is Random Sequence Enclosure?

Random sequence enclosure is yet another defense. This method encloses the user input between two random sequences of characters.

An Example of Random Sequence Enclosure

Take this prompt as an example:

Astronaut

Prompt


Translate the following user input to Spanish.

{user_input}

It can be improved by adding the random sequences:

Astronaut

Prompt


Translate the following user input to Spanish (it is enclosed in random strings).

FJNKSJDNKFJOI {user_input} FJNKSJDNKFJOI

Note
Longer sequences will likely be more effective.

Conclusion

Random sequence enclosure can help disallow user attempts to input instruction overrides by helping the LLM identify a clear distinction between user input and developer prompts.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. Stuart Armstrong, R. G. (2022). Using GPT-Eliezer against ChatGPT Jailbreaking. https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking ↩