You are leaving our Website
Using an external Link:
You are now leaving our website. The following page is operated by a third party. We accept no responsibility for the content, data protection, or security of the linked page..
URL:
AI500: AI Security Specialist UPDATE
Training: Artificial Intelligence
Participants learn to systematically test and secure AI applications using offensive and defensive methods. They perform red teaming against LLMs, agents and RAG systems, including prompt injection, jailbreaks and adversarial attacks, classified using MITRE ATLAS. On the defensive side, they implement the OWASP Top 10 for LLMs, establish controls across the entire AI lifecycle as well as logging, monitoring, incident response, threat modeling and trustworthy AI.
Start: 2026-02-23 | 10:00 am
End: 2026-02-27 | 05:00 pm
Location: Nürnberg
Price: 3.950,00 € plus VAT.
Start: 2026-05-04 | 10:00 am
End: 2026-05-08 | 05:00 pm
Location: Nürnberg
Price: 3.950,00 € plus VAT.
Start: 2026-09-21 | 10:00 am
End: 2026-09-25 | 05:00 pm
Location: Nürnberg
Price: 3.950,00 € plus VAT.
Start: 2026-11-30 | 10:00 am
End: 2026-12-04 | 05:00 pm
Location: Nürnberg
Price: 3.950,00 € plus VAT.
Agenda:
- Fundamentals & framework conditions
- AI & ML fundamentals
- AI Act and regulatory requirements
- Risk management for AI systems
- Offensive security for AI
- Introduction to AI red teaming
- Attacks on AI models, data & pipelines
- Attacks on vector databases and RAG systems
- LLM-specific attack techniques (prompt injection, jailbreaks etc.)
- Attacks on AI agents
- Evasion/adversarial attacks
- MITRE ATLAS (offensive perspective)
- Defensive AI security (AI application security)
- Fundamentals of AI-specific application security
- OWASP Top 10 for LLM and agentic AI / MLSecOps
- Security controls across the entire AI lifecycle
- Organizational & governance measures
- Trustworthy AI
- Security methods & operations
- Threat modeling for AI systems
- Logging & monitoring
- Incident response for AI
- Certification exam
Objectives:
- Develop a fundamental understanding of AI and ML
- Gain knowledge of how they work, models and typical use cases
- Understand opportunities and risks of AI systems
- Build offensive security skills
- Identify attack surfaces of AI applications, LLMs, agents and RAG systems
- Understand practical attack strategies (red teaming, adversarial attacks, evasion)
- Apply MITRE ATLAS to classify attacks
- Develop defensive security capabilities
- Implement security controls across the entire AI lifecycle
- Understand and apply OWASP Top 10 for LLM & Agentic AI
- Establish logging, monitoring and incident response for AI systems
- Establish trustworthy AI and organizational measures
- Anchor principles of trustworthy AI in organizations
- Implement governance, policies and processes for AI security
- Apply practice-oriented threat modeling
- Analyze and prioritize threats to AI systems
- Systematically derive defensive measures
Target audience:
- IT and cybersecurity professionals
- Security engineers, analysts and penetration testers
- Those responsible for IT and application security
- AI and ML leads
- Data scientists, ML engineers, AI architects
- Those responsible for AI development and operations
- Executives in IT, security & AI
- Chief Information Security Officers (CISO)
- Chief Data / AI Officers
- Developers and DevOps / MLOps teams
- Developers who implement and operate AI systems
- Teams for DevSecOps / MLSecOps
Prerequisites:
- AI300 AI Technology Implementer or equivalent prior knowledge
Description:
The course AI500 AI Security Specialist provides practical knowledge on the security of AI applications, with a focus on offensive and defensive measures. In the offensive part, participants learn how to systematically test AI systems and uncover vulnerabilities. This includes red teaming methods, attacks on large language models, AI agents, vector databases and retrieval-augmented generation systems, as well as data-related manipulations and evasion attacks. Practical techniques such as prompt injection, jailbreaks and adversarial attacks are covered, as well as the use of the MITRE ATLAS framework for classifying and planning attacks.In the defensive part, comprehensive strategies are taught to secure AI systems across their entire lifecycle. This includes implementing the OWASP Top 10 for LLMs and agentic AI, establishing security controls at the data, model and pipeline level, logging and monitoring, as well as developing effective incident response processes. In addition, organizational measures, threat modeling and principles of trustworthy AI are covered in order to systematically reduce risks and increase the resilience of AI applications.
The course AI500 AI Security Specialist combines practice-oriented offensive and defensive strategies so that participants develop a deep understanding of the threat landscape of modern AI systems and are able to specifically secure and/or attack them.
Exam:
The certification exam is computer-based and conducted by the independent certification institute Certible as an online "remote-proctored" exam.For the 90-minute exam, candidates can freely choose the exam dates and take the exam at a time that is most convenient for them.
Guaranteed implementation:
from 2 Attendees
Booking information:
Duration:
5 Days
Price:
3.950,00 € plus VAT.
(including lunch & drinks for in-person participation on-site)
Exam (Optional):
150,00 € plus VAT.
Appointment selection:
Authorized training partner
Authorized training partner
Memberships
Memberships
Shopping cart
AI500: AI Security Specialist
was added to the shopping cart.