Designed For Advanced Professionals

AI Red-Teaming and Security Masterclass

Learn AI Security from the Creator of HackAPrompt, the largest AI Security competition ever held, backed by OpenAI.

The fundamentals of AI security, from prompt injection to model extraction
Real-world examples and exercises from HackAPrompt competition
Practical techniques to secure AI systems against emerging threats
Industry best practices for implementing AI security measures
AI Red-Teaming and Security Masterclass

Meet Your Instructor

Sander Schulhoff

Sander Schulhoff

Founder & CEO, Learn Prompting
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, which reached 3 million people and taught them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done. This 76-page survey, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions, analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Our AI Systems Are Vulnerable... Learn how to Secure Them!

In 2023, I partnered with OpenAI, ScaleAI, & Hugging Face to launch HackAPrompt—the 1st & Largest Generative AI Red Teaming Competition ever held. Over 3,300 AI hackers competed to bypass model guardrails using prompt injections—the #1 Security Risk in LLMs.
We collected the Largest Dataset of Prompt Injection attacks, which has been used by every major Frontier AI Lab, including OpenAI, who used it to improve their models' resistance to Prompt Injection Attacks by up to 46%.
Today, I've delivered workshops on AI Red Teaming & Prompting at OpenAI, Microsoft, Deloitte, & Stanford University. And because I love to teach... I created this course to teach you everything I know about AI Red Teaming!

About the Course

This 6-week Masterclass is the #1 AI Security course for Cybersecurity Professionals, AI Trust & Safety leads, AI product managers, and engineers who want to master AI Red Teaming and secure AI systems against real-world threats. You'll gain hands-on experience identifying vulnerabilities in Generative AI systems, including prompt injections, jailbreaks, and adversarial attacks. Using the HackAPrompt playground, you'll practice both attacking and defending AI systems—learning how to break them and how to secure them. This course is practical, not just theoretical. You'll work on real-world projects, analyzing live AI systems for vulnerabilities. These projects prepare you for the AI Red Teaming Certified Professional (AIRTP+) Exam, a 24+ hour assessment that validates your AI security expertise. Our last cohort included 150 professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart. Graduates passed the certification exam and are now AIRTP+ certified, applying their skills to secure AI systems worldwide.

About Your Instructor

I'm Sander Schulhoff, the Founder of Learn Prompting & HackAPrompt. In October 2022, I published the 1st Prompt Engineering Guide on the Internet—two months before ChatGPT was released. Since then, my courses have trained over 3 million people, and I'm one of two people (other than Andrew Ng) to partner with OpenAI on a ChatGPT course. I've led Prompt Engineering workshops at OpenAI, Microsoft, Stanford, Deloitte, and Dropbox. I'm an award-winning Generative AI researcher from the University of Maryland and the youngest recipient of the Best Paper Award at EMNLP 2023, the leading NLP Conference, selected out of 20,000 submitted research papers. My research paper on HackAPrompt, Ignore This Title and HackAPrompt, has been by cited by OpenAI in three major research papers: Instruction Hierarchy, Automated Red Teaming, and Adversarial Robustness papers. I created HackAPrompt, the first and largest Generative AI Red Teaming competition. Most recently, I led a team from OpenAI, Microsoft, Google, and Stanford on The Prompt Report—the most comprehensive study on Prompt Engineering to date. This 76-page survey analyzed over 1,500 academic papers, evaluating the effectiveness of prompting techniques, AI agents, and Generative AI applications.

Expert Guest Instructors

Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public! Pliny also jailbroke an AI agent to autonomously sign into Gmail, code ransomware, compress it into a zip file, write a phishing email, attach the payload, and successfully deliver it to a target

Johann Rehberger: Led the creation of a Red Team in Microsoft Azure as a Principal Security Engineering Manager and built Uber's Red Team. Johann discovered attack vectors like ASCII Smuggling and AI-powered C2 (Command and Control) attacks. He has also found Bug Bounties in OpenAI's ChatGPT, Microsoft Copilot, GitHub Copilot Chat, Anthropic Claude, and Google Bard/Gemini. Johann will be sharing unreleased research that he hasn't yet published on his blog, embracethered.com.

Joseph Thacker: Principal AI Engineer at AppOmni, leading AI research on agentic functionality and retrieval systems. A security researcher specializing in application security and AI, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd. He hacked into Google Bard at their LLM Bug Bounty event and took 1st place in the competition.

Akshat Parikh: Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan's Bug Bounty Hall of Fame and Top 250 in Google's Bug Bounty Hall of Fame—all by the age of 16.

Richard Lundeen: Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit. He leads an interdisciplinary team of red teamers, ML researchers, and developers focused on securing AI systems.

Sandy Dunn: A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance

Jason Haddix - Bug bounty hunter with over 20-years of experience in cybersecurity as the CISO of Ubisoft, Head of Trust/Security/Operations at Bugcrowd, Director of Penetration Testing at HP, and Lead Penetration Tester at Redspin.

Donato Capitella - A researcher with over 12 years of experience in offensive security and security assurance, has gained a following for his AI security work. Alongside his years of research and blogs on WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel (@donatocapitella).

Valen Tagliabue - An AI researcher, data analyst, and prompt engineer specializing in NLP and cognitive science. His expertise includes LLM evaluation, safety, and alignment, with a strong focus on human-AI collaboration. He was part of the winning team in HackAPrompt2023, an AI safety competition backed by industry leaders like Hugging Face, Scale AI, and OpenAI.

Leonard Tang - Founder and CEO of Haize Labs, an AI safety and evaluation startup based in New York City. Leonard holds Math and CS Bachelor's and Master's degrees from Harvard University and left his Stanford University CS PhD program to start Haize. His team is building next-generation tools for evaluating, red-teaming, monitoring, guardrailing, and robustifying AI systems. Haize Labs' technology is already being used by OpenAI, Anthropic, AI21 Labs, and other leading companies. He was also just awarded Forbes 30 under 30!

David Williams-King - As a co-founder of Elpha Secure and a research scientist at MILA, David Williams-King seamlessly bridges the gap between AI Security and pure cybersecurity. He has researched under the Turing Prize winning Joshua Bengio on his Safe AI For Humanity team.

Interested in an enterprise license so your whole team or company can take the course? Please reach out directly to [email protected]

Course Syllabus

Week 1
Apr 28 — May 1

Live Session 1: Sander Schulhoff - Introduction to AI Red Teaming and Classical Security

MONDAY - APR 281:00 PM (EDT)

Guest Speaker: Joseph Thacker - AI App Attacks

TUESDAY - APR 2912:00 PM (EDT)

Guest Speaker: Donato Capitella - Hacking LLM Applications: Tales and Techniques from the Industry

WEDNESDAY - APR 301:00 PM (EDT)

Researcher with 12+ years of experience in offensive security and security assurance. Known for AI security work at WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel.

Guest Speaker: Sandy Dunn - Lessons from a CISO & OWASP Top 10 LLM Risks

THURSDAY - MAY 11:00 PM (EDT)

Introduction to Prompt Hacking

Introduction to GenAI Security and Harms

Project Kickoff: Hack HackAPrompt (Intro Track)

Week 2
May 6 — May 9

Live Session 2: Sander Schulhoff - Ignore Your Instructions and HackAPrompt

TUESDAY - MAY 61:00 PM (EDT)

Live Prompt Hacking Session 1: Intro Track Solutions

WEDNESDAY - MAY 72:00 PM (EDT)

Guest Speaker: Jason Haddix – Mastering AI Security: Real-World Scenarios and Cutting-Edge Methodologies

THURSDAY - MAY 81:00 PM (EDT)

Bug bounty hunter with 20+ years of experience in cybersecurity. Former CISO of Ubisoft, Head of Trust/Security/Operations at Bugcrowd, Director of Penetration Testing at HP, and Lead Penetration Tester at Redspin.

Guest Speaker: Valen Tagliabue - Nudge, Trick, Break: Hacking AI by Thinking Like It

FRIDAY - MAY 91:00 PM (EDT)

AI researcher, data analyst, and prompt engineer specializing in NLP and cognitive science. Expert in LLM evaluation, safety, and alignment. Part of the winning team in HackAPrompt2023, backed by Hugging Face, Scale AI, and OpenAI.

Module 3: Prompt Hacking Techniques and Attacks

Module 4: Defense Mechanisms

Project Kickoff: HackAPrompt 1.0

Week 3
May 12 — May 16

Live Event 3: Sander Schulhoff - Advanced Red-Teaming

MONDAY - MAY 121:00 PM (EDT)

Guest Speaker: David Williams-King - Bridging AI Security and Cybersecurity

MONDAY - MAY 122:30 PM (EDT)

Co-founder of Elpha Secure and research scientist at MILA. Has researched under the Turing Prize winning Joshua Bengio on his Safe AI For Humanity team.

Guest Speaker: Akshat Parikh - Adversarial Testing in the AI Era

MONDAY - MAY 123:30 PM (EDT)

Live Prompt Hacking Session 3: HackAPrompt 1.0 Solutions

TUESDAY - MAY 132:00 PM (EDT)

Module 5: Advanced Jailbreaking

Module 6: Advanced Prompt Injection

Project: Defeating HackAPrompt

Week 4
May 19 — May 22

Live Prompt Hacking Session 4: PyRIT and Garak

MONDAY - MAY 192:00 PM (EDT)

Live Event 4: Sander Schulhoff - The Future of Red-Teaming

TUESDAY - MAY 201:00 PM (EDT)

Guest Speaker: Leonard Tang - Building Next-Generation AI Safety Tools

WEDNESDAY - MAY 211:00 PM (EDT)

Founder and CEO of Haize Labs, an AI safety and evaluation startup. Harvard Math and CS graduate who left Stanford PhD program to build tools for evaluating, red-teaming, and robustifying AI systems used by OpenAI, Anthropic, and AI21 Labs. Forbes 30 under 30 recipient.

Guest Speaker: Richard Lundeen from Microsoft's AI Red Team & Maintainer of PyRit

THURSDAY - MAY 221:00 PM (EDT)

Module 7: Real-World Harms

Module 8: Physical harms

Project: Hack a Real-World-System

Post Course
May 23 — May 31

Thank You/Networking/Certification Celebration

FRIDAY - MAY 311:00 PM (EDT)

AI/ML Red-Teaming Certification Exam

Certificate of Completion

Who Should Attend

Security Professionals

Enhance your skill set with AI-specific security knowledge.

AI Engineers

Learn to build secure AI systems and protect against threats.

Red Team Members

Add AI security testing to your capabilities.

Business Leaders

Understand the security implications of AI adoption.

What You'll Learn

The fundamentals of AI security, from prompt injection to model extraction

Real-world examples and exercises from HackAPrompt competition

Practical techniques to secure AI systems against emerging threats

Industry best practices for implementing AI security measures

Why Choose This Masterclass

Comprehensive Curriculum

Master AI security fundamentals

Hands-on Practice

Practice with real-world examples

Professional Accolades

Earn industry-backed certifications

Course Details

Start Date: April 17, 2025
End Date: May 16, 2025
Price: $1,495

Ready to Secure Your AI Systems?

Join our AI Security Masterclass and learn the skills you need to protect your organization.

Enroll Today

© 2025 Learn Prompting. All rights reserved.