Compete in HackAPrompt 2.0, the world's largest AI Red-Teaming competition!

Check it out →

AI Red Teaming, Prompt Hacking, &
AI Security Masterclass

4.4 (45 reviews)
On-Demand25 hours

Master the skills to identify and exploit vulnerabilities in AI systems while learning to build robust defenses.

AI Red Teaming Demand Grew by 200% in 2025 Learn More

#1 AI Security Course
13 recorded sessions with world-class experts
Get AIRTP+ Certified upon completion

Learn from the Creator of the World's 1st AI Red Teaming Competition

Sander Schulhoff
EMNLP 2023 Best Paper10,000+ Hackers Trained

Sander Schulhoff created HackAPrompt, the world's first and largest AI Red Teaming competition, which helped make OpenAI's models 46% more resistant to prompt injections. His work has been cited by OpenAI, DeepMind, Anthropic, IBM, and Microsoft. He won Best Paper at EMNLP 2023 (selected from 20,000+ submissions) and has led AI (selected from 20,000+ submissions) and has led AI security workshops at OpenAI, Microsoft, Stanford, and Deloitte.

Led AI Security & Prompting Workshops At

OpenAIMicrosoftStanfordHarvard UniversityUniversity of California Office of the PresidentDropboxDeloitteUAE Government

Your AI Systems Are Vulnerable... Learn how to Secure Them!

Prompt injections are the #1 security vulnerability in LLMs today. They enable attackers to steal sensitive data, hijack AI agents, bypass safety guardrails, and manipulate AI outputs for malicious purposes. In 2023, over 10,000 hackers competed in HackAPrompt to exploit these vulnerabilities, generating the largest dataset of prompt injection attacks ever collected. Every major AI lab now uses this data to secure their models. Without proper defenses, your AI systems remain exposed to these critical threats.

About the Course

This hands-on masterclass teaches you to identify and fix vulnerabilities in AI systems before attackers exploit them. Using the HackAPrompt playground, you'll practice both attacking and defending real AI systems through practical exercises, not just theory. You'll learn to detect prompt injections, jailbreaks, and adversarial attacks while building robust defenses against them.

The course prepares you for the AI Red Teaming Certified Professional (AIRTP+) exam, a rigorous 24+ hour assessment that validates your expertise. We've trained thousands of professionals from Microsoft, Google, Capital One, IBM, and Walmart who now secure AI systems worldwide. Join them in becoming AIRTP+ certified and protecting the AI systems your organization depends on.

Transform Your Career in AI Security

Master Advanced AI Red-Teaming Techniques

Break into any LLM using prompt injections, jailbreaking, and advanced exploitation techniques. Learn to identify and exploit AI vulnerabilities at a professional level.

Execute Real-World Security Assessments

Role-play test enterprise chatbots for data leaks, verify AI image generators against harmful content, and perform advanced prompt injection attacks on production systems.

Command Premium Salaries

Qualify for roles like AI Security Specialist ($150K-$200K), AI Red Team Lead ($180K-$220K), and AI Safety Engineer ($160K-$210K).

Build Robust AI Defenses

Implement security measures throughout the AI development lifecycle. Learn to secure AI/ML systems by building resilient models and integrating defense strategies.

Join an Elite Network

Connect with professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart. Expand your network and collaborate with industry leaders.

Earn Industry Recognition

Get AIRTP+ Certified - the most recognized AI security credential. Validate your expertise and position yourself as a leader in AI security.

Learn from the World's Top AI Security Experts

Pliny the Prompter

Pliny the Prompter

The World's Most Famous AI Jailbreaker

The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public!

Joseph Thacker

Joseph Thacker

Solo Founder & Top Bug Bounty Hunter

As a solo founder and security researcher, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd.

Jason Haddix

Jason Haddix

Former CISO of Ubisoft

Bug bounty hunter with over 20-years of experience in cybersecurity as the CISO of Ubisoft and Director of Penetration Testing at HP.

Valen Tagliabue

Valen Tagliabue

Winner of HackAPrompt 1.0

An AI researcher specializing in NLP and cognitive science. Part of the winning team in HackAPrompt 1.0.

David Williams-King

David Williams-King

AI Researcher under Turing Prize Recipient

Research scientist at MILA, researching under Turing Prize winning Yoshua Bengio on his Safe AI For Humanity team.

Leonard Tang

Leonard Tang

Founder and CEO of Haize Labs

NYC-based AI safety startup providing cutting-edge evaluation tools to leading companies like OpenAI and Anthropic.

Richard Lundeen

Richard Lundeen

Microsoft's AI Red Team Lead

Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit.

Johann Rehberger

Johann Rehberger

Founded a Red Team at Microsoft

Built Uber's Red Team and discovered attack vectors like ASCII Smuggling. Found Bug Bounties in every major Gen AI model.

Is This Masterclass Right for You?

Perfect for:

Cybersecurity professionals seeking to master AI/ML red-teaming techniques
AI Engineers and Developers building AI systems who want to understand security risks
AI safety and ethics specialists aiming to deepen their expertise in AI vulnerabilities
Professionals transitioning into AI security roles seeking practical skills and certifications
AI Product Managers and technical leads needing to understand AI security risks
CISOs and Security Executives aiming to incorporate AI security into their strategies
Government and Regulatory officials responsible for AI policy

Everything You Need to Succeed

13 expert-led sessions

In-depth training from industry leaders

8 comprehensive lessons

With hands-on labs and exercises

3 real-world projects

Hack production AI systems

Private community

Network with AI security professionals

Lifetime access

Review materials anytime

AIRTP+ Certification

Industry-recognized credential

AI Red Teaming Masterclass

AIRTP+ CertificationNo Coding RequiredLifetime AccessMoney-Back Guarantee
$1,916
$1,199
Save $717
Enroll Now →

Course Syllabus

Week 1: Introduction to AI Red Teaming

Learn the fundamentals of AI red teaming, classical security principles, and OWASP Top 10 LLM risks.

Sessions & Topics:
  • • Introduction to AI Red Teaming and Classical Security
  • • Sandy Dunn - Lessons from a CISO & OWASP Top 10 LLM Risks
  • • Joseph Thacker - AI App Attacks
  • • Pliny the Prompter - Jailbreaking Every AI Model
  • • Introduction to Prompt Hacking
  • • Introduction to GenAI Security and Harms

👾 Project: Hack HackAPrompt (Intro Track)

4 sessions1 project

Week 2: Advanced Prompt Hacking & AI Vulnerabilities

Master sophisticated prompt injection techniques and explore how AI impacts traditional cybersecurity.

Sessions & Topics:
  • • Ignore Your Instructions and HackAPrompt
  • • David Williams-King - How AI Impacts Traditional Cybersecurity
  • • Valen Tagliabue - Nudge, Trick, Break: Hacking AI by Thinking Like It
  • • Akshat Parikh - Adversarial Testing in the AI Era
  • • Donato Capitella - Hacking LLM Applications: Tales and Techniques
  • • Comprehensive Guide to Prompt Hacking Techniques
  • • Harms in AI-Red Teaming

👾 Project: Hack HackAPrompt 1.0

5 sessions1 project

Week 3: Advanced Red-Teaming & AI Defense

Explore cutting-edge red-teaming techniques and learn to build robust AI defenses.

Sessions & Topics:
  • • Advanced Red-Teaming Techniques
  • • Leonard Tang, CEO of Haize Labs - Frontiers of Red-Teaming
  • • AI Defense Strategies
  • • State of the Industry Analysis

👾 Project: Healthcare Portal Red-Teaming

2 sessions1 project

Week 4: The Future of Red-Teaming & Automation

Discover the future of AI red-teaming, automated tools, and advanced attack methodologies.

Sessions & Topics:
  • • The Future of Red-Teaming
  • • Jason Haddix - Mastering AI Security: Real-World Scenarios
  • • PyRIT Team - Automated Red-Teaming Tools
  • • Johann Rehberger - SpAIware & Advanced Prompt Injection
  • • Automated AI Red-Teaming Techniques
4 sessionsCertification prep

Certification & Final Review

Complete your certification exam and celebrate your achievement with the cohort.

Final Steps:
  • • Final Review Session with Sander
  • • Certificate of Completion
Course completion

Access Hands-on Training from HackAPrompt

World's Largest AI Red Teaming Environment

Access advanced scenarios in the world's largest AI Red Teaming environment—used by over 10,000 participants worldwide. Developed in partnership with OpenAI, Scale AI, and Hugging Face to tackle complex security challenges.

Advanced AI Red Teaming Challenges: Validated by industry experts

Award-Winning Research: HackAPrompt won Best Theme Paper at EMNLP 2023

OpenAIScale AIHugging Face
Try Advanced Scenarios →

Join the Elite AI Red Teaming Alumni Network

Private Alumni Community

Gain lifetime access to our exclusive community of 1,000+ AI security professionals and industry leaders.

Community Benefits:
  • • Network with certified professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart
  • • Share cutting-edge research and latest AI security vulnerabilities discovered by the community
  • • Collaborate on advanced projects with industry leaders and security researchers
  • • Access exclusive job opportunities in AI security posted by Fortune 500 companies

🌟 Invitation Only - Premium Access

1,000+ Active Alumni

Get AIRTP+ Certified by Learn Prompting

The most recognized AI security credential in the industry

Learn Prompting Achievements

Created the 1st Prompt Engineering Course on the Internet with 3M+ people trained worldwide

Partnered with OpenAI to create a course on ChatGPT

Trained 3,000,000+ professionals in Generative AI worldwide

Organizers of HackAPrompt—the 1st & largest AI Red Teaming competition ever held

OpenAIScale AIHugging Face

Hear from Notable Alumni

Andy Purdy

Andy Purdy

Former CISO of Huawei

US Homeland Security Advisor

"Hands-on teaching and learning. Good opportunity to work through assignments."

Former Chief Security Officer at Huawei USA • Former White House Director of Cybersecurity • Former US Department of Homeland Security Official

Hear from our Alumni

4.4(45 reviews)

"This course was a genuine pleasant surprise. People from the very cutting edge of this industry shared their knowledge here and it was pure gold."

Pavlin

AI Engineer
ATN Europe

""AI Red-Teaming and AI Safety: Masterclass" is an exceptional course for anyone seeking to understand and counter AI vulnerabilities. It offers hands-on experience, real-world scenarios, and strategies to strengthen AI..."

Mohammed

Cybersecurity Leader

"Amazing teachers, guests and training for AI Red Teaming! 110% recommend for anyone and the others students who attended the course had so much to offer. You won't only learn..."

AiTitus :)

Director
Prime Mind

"The AI Red Teaming and AI Safety course by Learn Prompting was an exceptional learning experience. The interactive, cohort-based format covered critical AI security topics and the hands-on exercises, combined..."

Cigdem

Software Engineer/Project Manager
Independent

"My horizon broadened a lot thanks to all the inspiring sharings from pioneers including Jason Haddix, Sandy Dunn, Johann Rehberger, Joseph Thacker, Pliny the Prompter, Akshat Parikh, Richard Lundeen, and..."

Albert

Chief Forensicator
Security Ronin

"The methodology, the resources and the streaming were so good. However the platform HackAprompt was the best to practice"

Paul

Pentester
Google

Industry Case Study 2024

The AI Red Teaming Revolution

As AI systems become more critical to business operations, the demand for AI Red Teaming expertise has never been higher.

$191K
Average Salary

For AI Red Teamers with 2+ years of cybersecurity experience

200%
Growth in 2024

HackerOne reports unprecedented growth in AI Red Teaming services

$134B
Market Size

Projected AI cybersecurity market size by 2030, from $24.3B in 2023

Why AI Red Teaming Matters Now

The scope of AI's capabilities in 2024 is broader than ever: large language models are penning news articles, generative AI systems are coding entire web apps, and AI chatbots are supporting millions of customers each day. Unlike traditional software, which can be audited with predictable security checklists, AI systems are fluid. They adapt to context, prompts, and continuous learning, creating unprecedented attack surfaces.

Red teams must think like adversaries, probing for ways AI could produce harmful, biased, or even illegal content. This is especially critical when malicious users might "trick" or overwhelm these models into revealing trade secrets, generating weaponization instructions, or perpetuating harmful stereotypes. The stakes are high—both legally and reputationally.

Government Mandates and Global Convergence

What once was a novel security practice is fast becoming an international regulatory requirement. Governments from the U.S. to the EU and beyond are moving toward mandates that all high-risk AI deployments be tested using adversarial (red team) methods before going live.

In the U.S., the White House's sweeping executive order on AI explicitly calls for "structured testing" to find flaws and vulnerabilities. Major summits—from the G7 gatherings to the Bletchley Declaration—have underscored the importance of red teaming to address risks posed by generative AI.

Leading Governments

Regulatory frameworks being established worldwide:

European Union

European Union

UK Government

UK Government

US Government

US Government

Singapore Government

Singapore Government

A New Career Path Emerges

This rapid expansion of AI Red Teaming has created a vibrant job market for security professionals. Organizations are seeking experts who can blend traditional cybersecurity tactics with an advanced understanding of large language models and generative AI.

Positions advertised as "AI Security Specialist" or "AI Red Teamer" command six-figure salaries; industry data suggests a median total pay near $178,000, with some postings reaching well into the $200,000 range.

Operationalizing AI Red Teaming

Role-Play Testing

Test chatbots for potential discriminatory responses or proprietary data leaks

Image Generation Testing

Verify AI models against creating harmful or prohibited content

Advanced Infiltration

Execute prompt injection and code manipulation tests

The Road Ahead

AI is rapidly permeating every facet of commerce and society, making the question of "if" you should adopt AI security practices obsolete—now, it's about "how fast" you can incorporate them. Multinational brands and even entire government agencies are forging ahead with mandatory AI Red Teaming requirements, ensuring their generative AI models adhere to safety and regulatory standards.

As the world continues its march toward ubiquitous AI adoption, organizations will only scale up their demand for AI security professionals. The AI Red Teamer will serve as a vital guardrail—a creative, investigative professional bridging the gap between innovative AI solutions and robust risk mitigation.

Limited Time Offer

Everything You Need to Become AI Security Certified

Get the complete AI Red Teaming bundle at an exclusive price

What's Included:

AI Red Teaming Masterclass

13 sessions + 3 hands-on projects

Learn Prompting Plus Access

20+ hours of additional content

AIRTP+ Certification Exam

Industry-recognized credential

Individual Value: $1,916

$1,199
Save $717
Masterclass$1,199
Learn Prompting PlusFREE ($468 value)
AIRTP+ ExamFREE ($249 value)
Enroll Now - Save $717
30-Day Money-Back Guarantee

Join 10,000+ professionals already certified in AI security

Prefer to Learn from Live Instructors?

Join our live AI Red Teaming course with real-time expert guidance

Live Sessions with Experts
Interactive Q&A
Cohort Community
$1,495

Live cohort experience

Join Live AI Red Teaming Course →

Still unsure? Take our free 7-Day Prompt Hacking Email Course

Master the fundamentals of LLM security with our free 7-day course. Each day, you'll receive a comprehensive lesson covering:

Day 1: Why LLM Security Matters

Day 2-3: Prompt Hacking Types & Intents

Day 4-5: Common Attacks & Defense Strategies

Day 6-7: Future Challenges & Next Steps

We respect your privacy. Unsubscribe at any time.

7-Day Prompt Hacking Course Overview

Frequently asked questions

The course is designed for busy professionals. You'll need 4-6 hours per week over 4 weeks. All sessions are recorded, so you can watch on your schedule. Most students complete assignments during lunch breaks or weekends. We've had CEOs, single parents, and consultants successfully complete the course while managing demanding schedules.
Absolutely. 50% of our students come from non-technical backgrounds including lawyers, product managers, and business executives. We start with fundamentals and build up. No coding required - if you can use ChatGPT, you can complete this course.
This is 70% hands-on hacking, 30% theory. You'll spend most of your time in the HackAPrompt playground actually breaking into AI systems. By week 2, you'll be executing real prompt injections. By graduation, you'll have successfully hacked multiple different AI models.
You get lifetime access to all course updates. As models evolve, so will the attack vectors. However, the core principles of AI security apply across all models - the techniques you learn will work on AI systems that don't even exist yet.
30-day money-back guarantee, no questions asked. If you're not completely satisfied, get a full refund.
You get one free retakes. A majority of students pass on their first attempt. If you somehow don't pass, please retake the sessions and attend our office hour sessions until you do. We're invested in your success - our reputation depends on producing skilled professionals.
Most students report finding critical vulnerabilities in their company's AI systems within the first two weeks.
Perfect for cybersecurity professionals, AI engineers, product managers, compliance officers, and executives responsible for AI systems. Whether you're securing AI at a Fortune 500 or a startup, you'll gain immediately applicable skills. We've trained everyone from Microsoft's security team to government regulators.
You're learning from the person who literally created the field's most important competition. This isn't recycled content - it's techniques that made OpenAI's models 46% more secure. Plus, you get the AIRTP+ certification, recognized by employers worldwide. Free tutorials won't get you hired or promoted.
Yes! We provide invoices and accept purchase orders. Most of our students get reimbursed by their employers. We'll even provide a template email to help you request funding, highlighting the ROI and risk mitigation benefits. Team discounts available for 2+ people.
24/7 access to our private Discord community with 1,000+ professionals. Monthly office hours with instructors. Plus, alumni get lifetime access to our community where you'll network with leading employees working on AI Security initiatives in Fortune 500 companies.
It's challenging but fair. The 24-hour practical exam tests real skills, not memorization. You'll red-team a mock enterprise AI system, document vulnerabilities, and propose fixes. We provide practice exams and a detailed study guide. Remember: most students pass on their first try.
No coding required! You just need basic familiarity with AI tools like ChatGPT. If you can write prompts and understand basic security concepts, you're ready. We start from fundamentals and build up. Technical background helps but isn't required - we've successfully trained lawyers, product managers, and business executives.
This is the only course taught by the creator of HackAPrompt, the competition that shaped how OpenAI, Anthropic, and others secure their models. You're learning techniques that aren't published anywhere else, directly from the source. Plus, you get hands-on practice in the actual HackAPrompt playground used by 10,000+ hackers worldwide.

Ready to Secure Your AI Systems?

Join our AI Security Masterclass and learn the skills you need to protect your organization.

Enroll Now - Save $717

© 2025 Learn Prompting. All rights reserved.