AI Red-Teaming and Security Masterclass
Learn AI Security from the Creator of HackAPrompt, the largest AI Security competition ever held, backed by OpenAI.

Meet Your Instructor
Sander Schulhoff
Our AI Systems Are Vulnerable... Learn how to Secure Them!
About the Course
About Your Instructor
Expert Guest Instructors
Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public! Pliny also jailbroke an AI agent to autonomously sign into Gmail, code ransomware, compress it into a zip file, write a phishing email, attach the payload, and successfully deliver it to a target
Johann Rehberger: Led the creation of a Red Team in Microsoft Azure as a Principal Security Engineering Manager and built Uber's Red Team. Johann discovered attack vectors like ASCII Smuggling and AI-powered C2 (Command and Control) attacks. He has also found Bug Bounties in OpenAI's ChatGPT, Microsoft Copilot, GitHub Copilot Chat, Anthropic Claude, and Google Bard/Gemini. Johann will be sharing unreleased research that he hasn't yet published on his blog, embracethered.com.
Joseph Thacker: Principal AI Engineer at AppOmni, leading AI research on agentic functionality and retrieval systems. A security researcher specializing in application security and AI, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd. He hacked into Google Bard at their LLM Bug Bounty event and took 1st place in the competition.
Akshat Parikh: Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan's Bug Bounty Hall of Fame and Top 250 in Google's Bug Bounty Hall of Fame—all by the age of 16.
Richard Lundeen: Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit. He leads an interdisciplinary team of red teamers, ML researchers, and developers focused on securing AI systems.
Sandy Dunn: A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance
Jason Haddix - Bug bounty hunter with over 20-years of experience in cybersecurity as the CISO of Ubisoft, Head of Trust/Security/Operations at Bugcrowd, Director of Penetration Testing at HP, and Lead Penetration Tester at Redspin.
Donato Capitella - A researcher with over 12 years of experience in offensive security and security assurance, has gained a following for his AI security work. Alongside his years of research and blogs on WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel (@donatocapitella).
Valen Tagliabue - An AI researcher, data analyst, and prompt engineer specializing in NLP and cognitive science. His expertise includes LLM evaluation, safety, and alignment, with a strong focus on human-AI collaboration. He was part of the winning team in HackAPrompt2023, an AI safety competition backed by industry leaders like Hugging Face, Scale AI, and OpenAI.
Leonard Tang - Founder and CEO of Haize Labs, an AI safety and evaluation startup based in New York City. Leonard holds Math and CS Bachelor's and Master's degrees from Harvard University and left his Stanford University CS PhD program to start Haize. His team is building next-generation tools for evaluating, red-teaming, monitoring, guardrailing, and robustifying AI systems. Haize Labs' technology is already being used by OpenAI, Anthropic, AI21 Labs, and other leading companies. He was also just awarded Forbes 30 under 30!
David Williams-King - As a co-founder of Elpha Secure and a research scientist at MILA, David Williams-King seamlessly bridges the gap between AI Security and pure cybersecurity. He has researched under the Turing Prize winning Joshua Bengio on his Safe AI For Humanity team.
Course Syllabus
Live Session 1: Sander Schulhoff - Introduction to AI Red Teaming and Classical Security
Guest Speaker: Joseph Thacker - AI App Attacks
Guest Speaker: Donato Capitella - Hacking LLM Applications: Tales and Techniques from the Industry
Researcher with 12+ years of experience in offensive security and security assurance. Known for AI security work at WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel.
Guest Speaker: Sandy Dunn - Lessons from a CISO & OWASP Top 10 LLM Risks
Introduction to Prompt Hacking
Introduction to GenAI Security and Harms
Project Kickoff: Hack HackAPrompt (Intro Track)
Live Session 2: Sander Schulhoff - Ignore Your Instructions and HackAPrompt
Live Prompt Hacking Session 1: Intro Track Solutions
Guest Speaker: Jason Haddix – Mastering AI Security: Real-World Scenarios and Cutting-Edge Methodologies
Bug bounty hunter with 20+ years of experience in cybersecurity. Former CISO of Ubisoft, Head of Trust/Security/Operations at Bugcrowd, Director of Penetration Testing at HP, and Lead Penetration Tester at Redspin.
Guest Speaker: Valen Tagliabue - Nudge, Trick, Break: Hacking AI by Thinking Like It
AI researcher, data analyst, and prompt engineer specializing in NLP and cognitive science. Expert in LLM evaluation, safety, and alignment. Part of the winning team in HackAPrompt2023, backed by Hugging Face, Scale AI, and OpenAI.
Module 3: Prompt Hacking Techniques and Attacks
Module 4: Defense Mechanisms
Project Kickoff: HackAPrompt 1.0
Live Event 3: Sander Schulhoff - Advanced Red-Teaming
Guest Speaker: David Williams-King - Bridging AI Security and Cybersecurity
Co-founder of Elpha Secure and research scientist at MILA. Has researched under the Turing Prize winning Joshua Bengio on his Safe AI For Humanity team.
Guest Speaker: Akshat Parikh - Adversarial Testing in the AI Era
Live Prompt Hacking Session 3: HackAPrompt 1.0 Solutions
Module 5: Advanced Jailbreaking
Module 6: Advanced Prompt Injection
Project: Defeating HackAPrompt
Live Prompt Hacking Session 4: PyRIT and Garak
Live Event 4: Sander Schulhoff - The Future of Red-Teaming
Guest Speaker: Leonard Tang - Building Next-Generation AI Safety Tools
Founder and CEO of Haize Labs, an AI safety and evaluation startup. Harvard Math and CS graduate who left Stanford PhD program to build tools for evaluating, red-teaming, and robustifying AI systems used by OpenAI, Anthropic, and AI21 Labs. Forbes 30 under 30 recipient.
Guest Speaker: Richard Lundeen from Microsoft's AI Red Team & Maintainer of PyRit
Module 7: Real-World Harms
Module 8: Physical harms
Project: Hack a Real-World-System
Thank You/Networking/Certification Celebration
AI/ML Red-Teaming Certification Exam
Certificate of Completion
Who Should Attend
Security Professionals
Enhance your skill set with AI-specific security knowledge.
AI Engineers
Learn to build secure AI systems and protect against threats.
Red Team Members
Add AI security testing to your capabilities.
Business Leaders
Understand the security implications of AI adoption.
What You'll Learn
The fundamentals of AI security, from prompt injection to model extraction
Real-world examples and exercises from HackAPrompt competition
Practical techniques to secure AI systems against emerging threats
Industry best practices for implementing AI security measures
Why Choose This Masterclass
Comprehensive Curriculum
Master AI security fundamentals
Hands-on Practice
Practice with real-world examples
Professional Accolades
Earn industry-backed certifications
Course Details
Ready to Secure Your AI Systems?
Join our AI Security Masterclass and learn the skills you need to protect your organization.
Enroll Today