Webseminar | Attacking & Defending: The New Reality of AI Security

Events & Webseminars 2026

Webinar Attack & Defend: The New Reality of AI Security with Barno Kaharova


We trust AI today with things we sometimes wouldn't even entrust to a colleague. Our business data, our decisions, our strategies. In business as well as in private life, AI has long been an integral part of our daily lives: in automated decisions, in chatbots, in internal assistance systems. And we treat it as if it would never betray us.

But this is also where the challenges arise. With each new AI deployment, new attack surfaces emerge that cannot be covered by traditional IT security concepts alone. The AI we trust so much can be manipulated, deceived, and used against us.

In this webseminar, we take an honest look at the current threat situation for AI systems and examine how attacks such as Prompt Injection, Adversarial Attacks, Model Stealing, or Data Poisoning work in practice. No theory in a vacuum, but concrete scenarios that show where it really hurts today.

The good news: AI security is not an unsolvable problem. Those who know the attack surfaces can secure them in a targeted manner. This webseminar shows initial approaches and practical strategies to make AI systems not only powerful but also secure and sustainable. It also provides insight into how this knowledge can be deepened and certified in practice.

The webseminar is aimed at all those who develop, operate, or are responsible for AI systems and wonder: Are we actually prepared?

Date: 02.07.2026 | Time: 16:00 - 17:00

Sign up now


What will you learn?

  • Why AI systems have fundamentally different security requirements than traditional IT systems

  • What types of attacks target AI today and what they look like in practice

  • How attacks such as prompt injection, data poisoning, and model stealing work, and why they are so difficult to detect

  • Where the blind spots lie in existing security concepts when AI is involved

  • What first steps organizations can take to specifically secure their AI systems


What are the benefits?

  • You will develop an awareness of the risks that AI systems pose to your company.

  • You will be able to assess whether your existing security measures are sufficient for AI applications.

  • You’ll take away concrete ideas and initial action steps to advance AI security in your organization.

  • You’ll gain an overview of current threat scenarios that you can directly incorporate into internal discussions and decisions.

  • You’ll get a taste of how AI security knowledge can be systematically deepened.

What can you expect?

  • An overview of current threats to AI systems in practice

  • Insights into real-world attack scenarios targeting ML models and LLM applications

  • Why traditional security measures reach their limits when it comes to AI

  • Initial approaches to systematically securing AI systems

  • A preview of the content of the “AI500 – AI Security Professional” training course


Who is this webseminar intended for?

  • IT security professionals and cybersecurity managers who want to understand the new vulnerabilities that AI introduces

  • Data scientists, ML engineers, and AI developers who want to design their models and applications to be more secure from the start

  • DevOps and MLOps teams that operate and secure AI systems in production

  • IT leaders, CISOs, and digital transformation managers who want to establish AI security as a strategic priority within their organization


We look forward to your participation
Your qSkills™™ Team


Sign up for free now