AI Compliance: A Practical Guide to the EU AI Act, GDPR, and NIS2

October 17, 2025
AI Security Blog

Your AI Is a Legal Minefield: Navigating the EU AI Act, GDPR, and NIS2

So, you just pushed a new AI-powered feature to production. Maybe it’s a recommendation engine, a customer support chatbot, or a slick fraud detection system. The metrics look good, the team is celebrating, and management is thrilled. You feel like a rockstar.

Now let me ask you a few questions. Where did you get the training data? Do you have documented proof of its quality and lack of bias? Can you explain, in plain language, exactly why the model made a specific decision that denied a customer a service? Do you have a documented process for a human to override it? What’s your incident response plan if an attacker poisons your model’s data pipeline, causing it to quietly start misdirecting payments?


If you’re breaking into a cold sweat, good. You should be.

Welcome to the new reality of building with AI in Europe. The days of “move fast and break things” are over. We’re entering an era governed by a trio of regulations that form an iron triangle around your AI systems: the EU AI Act, the General Data Protection Regulation (GDPR), and the Network and Information Security Directive 2 (NIS2).

These aren’t just annoying legal documents drafted by people who don’t understand technology. Think of them as the new laws of physics for software development. You can’t ignore gravity when building a skyscraper, and you can’t ignore these regulations when building AI. This isn’t a guide for your legal team. This is a guide for you—the builder, the engineer, the person with your hands on the keyboard. Let’s get to it.

The EU AI Act: The New Sheriff in Town

The EU AI Act is the big one, the first-of-its-kind comprehensive law for artificial intelligence. Its core idea isn’t to ban AI, but to manage its risk. It doesn’t treat a spam filter the same way it treats an AI that diagnoses cancer. And thank goodness for that.

The Act sorts AI systems into a pyramid of risk. Understanding where your project falls is the single most important first step.

  • Minimal risk (e.g., spam filters, AI in video games)
  • Limited risk (e.g., chatbots, deepfakes)
  • High risk (e.g., medical devices, hiring tools, critical infrastructure)
  • Unacceptable risk (banned)

Unacceptable Risk: The “Don’t Even Think About It” Zone

This is the stuff that’s outright banned. Think social scoring systems like in China, AI that manipulates people into harmful behavior, or real-time biometric surveillance in public spaces by law enforcement (with very narrow exceptions). For most legitimate companies, this is easy: just don’t build dystopian tech. Moving on.

High-Risk AI: Welcome to the Paperwork Party

This is where it gets serious for a lot of us. If your AI falls into this category, you have a mountain of obligations before you can even think about deploying. The EU has a specific list, but it generally covers:

  • Critical Infrastructure: AI managing water, gas, or electricity grids.
  • Medical Devices: AI-powered diagnostic tools or robotic surgery.
  • Education and Employment: Systems that filter CVs, evaluate candidates, or decide on promotions.
  • Access to Essential Services: Credit scoring, or AI that determines eligibility for public benefits.
  • Law Enforcement and Justice: AI used as evidence or to assess flight risk.

Does your new feature touch any of these areas? Even tangentially? Then you’re in the high-risk club. Your membership comes with a list of chores. This isn’t just about code; it’s about process, documentation, and governance.

Here’s what you now have to do:

  1. Establish a Risk Management System: This isn’t a one-off task. It’s a continuous process of identifying, analyzing, and mitigating risks throughout the AI’s entire lifecycle. You need to document everything.
  2. Data Governance and Quality: Remember that dataset you scraped from the web? Not gonna fly. You need to prove your training, validation, and testing data is relevant, representative, and as free of errors and biases as possible. You need datasheets for your datasets, documenting their origin, characteristics, and limitations.
  3. Technical Documentation: You must create and maintain technical documentation before the system is placed on the market. This includes the model’s architecture, its intended purpose, its performance metrics, and the hardware it runs on. It’s basically the full blueprint that a regulator can use to audit your system.
  4. Record-Keeping (Logging): Your AI must have robust logging capabilities. You need to be able to trace its operations to investigate incidents or questionable outputs. Think of it as an indestructible black box for your model.
  5. Transparency and Provision of Information: Users must be clearly informed that they are interacting with an AI system. For high-risk systems, you need to provide clear instructions for use, including the system’s capabilities, limitations, and the level of accuracy. No more hiding behind a vague “AI-powered” label.
  6. Human Oversight: This is a big one. You must design the system so that a human can effectively oversee its operation and intervene or stop it if necessary. This isn’t just a theoretical “off” switch. It means designing interfaces and processes that make human control practical and effective. Who gets the alert when the model’s confidence score drops? What’s the protocol for them to take over?
  7. Accuracy, Robustness, and Cybersecurity: Your system must be resilient against errors, failures, and attempts to manipulate it. This is where the AI Act starts to shake hands with cybersecurity regulations like NIS2. We’ll get to that.
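
The record-keeping requirement (point 4) can start very small. Here's a minimal sketch of traceable prediction logging in Python; the field names are illustrative, not mandated by the Act, and the idea of hashing the input is one way to keep logs traceable without copying personal data into your log stream:

```python
import hashlib
import json
import logging
import time

logger = logging.getLogger("model_audit")

def log_prediction(model_name: str, model_version: str,
                   features: dict, output, confidence: float) -> dict:
    """Write one traceable, append-only audit record per prediction."""
    record = {
        "timestamp": time.time(),
        "model": model_name,
        "model_version": model_version,
        # Hash the raw input so the record can be matched to a request
        # later without duplicating personal data into the logs.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))
    return record

# Example (hypothetical credit-scoring model):
rec = log_prediction("credit_scorer", "2.3.1",
                     {"income": 52000, "tenure_months": 18},
                     "approve", 0.87)
```

Because every record carries the model version, you can later reconstruct exactly which model produced a questionable output, which is the whole point of the logging obligation.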

This is a lot. It’s a fundamental shift from “does it work?” to “can we prove how it works, that it’s safe, and that it’s fair?”

Golden Nugget: The EU AI Act forces you to treat your AI system not as a piece of software, but as a high-stakes industrial product. You wouldn’t ship a car without brakes and an instruction manual; the same now applies to high-risk AI.

Limited and Minimal Risk: The Easy Street

If you’re building a chatbot, the AI Act just says you have to be transparent. The user must know they’re talking to a machine. If you’re generating deepfakes, you have to label them as such. This is the “Limited Risk” category. It’s all about transparency.

And for the vast majority of AI—spam filters, recommendation engines in non-critical contexts, inventory management—you fall into the “Minimal Risk” bucket. The AI Act doesn’t impose any legal obligations here. You’re free to innovate. But don’t get too comfortable, because GDPR is waiting for you.

GDPR: The Ghost in the Machine

Ah, GDPR. You thought you had it figured out with your cookie banners and privacy policies. But AI is like pouring gasoline on the GDPR fire. It takes existing privacy principles and cranks the risk dial to eleven.

GDPR isn’t new, but its application to AI is a minefield. An AI model is, in essence, a compressed representation of the data it was trained on. This creates a terrifyingly intimate link between your model and the data protection principles of GDPR.

Let’s use an analogy. GDPR is like the law of gravity. It has always existed. When you were building simple websites (wooden huts), you didn’t have to think about it too much. But now you’re building AI systems (towering skyscrapers). The same law of gravity applies, but the consequences of ignoring it are catastrophic.


Here’s where it gets painful for developers:

Lawful Basis for Processing (Article 6)

You need a legal reason to process personal data. For AI training, this is a nightmare. Did you get explicit, informed consent from every single person whose data is in your training set? For that specific purpose of training an AI?

Probably not.

Many companies fall back on “legitimate interest.” But that’s a wobbly tightrope to walk. You have to balance your interest against the individual’s rights and freedoms. Using customer support chats to train a sentiment analysis model might be a legitimate interest. Using that same data to infer their political beliefs or health status? Almost certainly not.

The problem is, “garbage in, garbage out” is now “illegal data in, illegal model out.” Your beautifully engineered model could be fundamentally unlawful from its inception if you got the data part wrong.

Data Minimization & Purpose Limitation (Article 5)

These two principles are the natural enemies of machine learning. AI models are data-hungry; they perform better with more data. But GDPR demands you only collect and process the data that is strictly necessary for a specific, stated purpose.

You can’t just hoover up all the user data you can find and throw it into a data lake for your data scientists to play with later. You collected user location data to help with package delivery? You can’t then use that same data to train a model that predicts where they’ll go on vacation. That’s purpose creep, and it’s a massive GDPR violation.

This requires discipline. It means designing your data architecture around privacy, not just around model performance.

The Right to an Explanation (Article 22)

This is the big one. GDPR gives individuals the right not to be subject to a decision based solely on automated processing which produces legal or similarly significant effects. If you do this, you must provide “meaningful information about the logic involved.”

How do you explain the logic of a deep neural network with millions of parameters? How do you tell a person the specific, human-understandable reason why your AI denied their loan application? “Because the weights and biases of layers 7 through 12 resulted in an output neuron value of 0.13” is not a legal explanation.

This is a direct challenge to the “black box” nature of many advanced models. It’s pushing the entire industry towards Explainable AI (XAI). You need to be looking at techniques like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) not as academic curiosities, but as potentially mandatory tools for compliance.
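
You don't need a heavyweight library to grasp the core idea behind these techniques. Here's a deliberately naive sketch of model-agnostic attribution: nudge one feature at a time and measure how much the model's score moves. This is an illustration of the concept, not a substitute for SHAP or LIME, and the toy loan model is entirely made up:

```python
def explain(predict, instance: dict, delta: float = 1.0) -> dict:
    """Crude per-feature attribution: perturb each numeric feature by
    `delta` and record how much the model's score changes."""
    baseline = predict(instance)
    attributions = {}
    for feature, value in instance.items():
        perturbed = dict(instance)
        perturbed[feature] = value + delta
        attributions[feature] = predict(perturbed) - baseline
    return attributions

# Toy "loan model": score rises with income, falls with existing debt.
def loan_score(x):
    return 0.001 * x["income"] - 0.002 * x["debt"]

contrib = explain(loan_score, {"income": 40000, "debt": 12000}, delta=1000)
# Each value is the score change caused by a +1000 nudge of that feature,
# i.e. a human-readable "income helped, debt hurt" story.
```

Real XAI tools are far more principled about interactions and local fidelity, but the output shape is the same: a per-feature contribution you can actually put in front of a loan applicant.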

Let’s make this practical. Here’s a table to pin on your wall before you start any AI project involving personal data.

  • Lawful Basis. Question: Do we have a legal right to use this specific data to train this specific model? Action: Document the lawful basis (consent logs, legitimate interest assessment) before you start training. Anonymize or pseudonymize data wherever possible.
  • Data Protection Impact Assessment (DPIA). Question: Have we systematically evaluated the privacy risks of this AI system? Action: Conduct a formal DPIA for any high-risk processing. This is not optional; it's a legal requirement.
  • Purpose Limitation. Question: Was this data originally collected for the purpose we're now using it for? Action: Create a data map. Trace the lineage of your training data. If the purpose has changed, you may need new consent.
  • Data Minimization. Question: Are we using every single feature in this dataset? Do we really need to train on PII? Action: Challenge your data scientists. Can the model be just as effective with fewer data points or features? Remove any data not strictly necessary.
  • Automated Decisions (Art. 22). Question: Does our AI make significant decisions about people without human intervention? Action: If yes, build a "human-in-the-loop" process for review. Implement XAI tools to generate explanations for individual decisions. Inform the user upfront.
  • Data Subject Rights. Question: How do we handle a request to delete data (right to be forgotten)? Action: You need a process to not only delete the user's data from your databases but also to mitigate its influence on your trained model. This is hard! It might require retraining the model.
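
The "pseudonymize wherever possible" action is one of the cheapest items on that list to implement. A minimal sketch using a keyed hash (HMAC-SHA256); in production you'd load the key from a secrets manager, and remember that under GDPR pseudonymized data is still personal data, just lower-risk:

```python
import hashlib
import hmac

# Assumption for illustration: in production this key comes from a
# secrets manager, never from source code.
PSEUDONYM_KEY = b"replace-with-a-real-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash, so records
    stay linkable for training without exposing the raw value."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(),
                    hashlib.sha256).hexdigest()

record = {"user_email": "alice@example.com", "purchases": 7}
training_row = {
    "user_id": pseudonymize(record["user_email"]),
    "purchases": record["purchases"],
}
```

The keyed hash is stable, so the same user always maps to the same pseudonym and your features still join correctly, but the training pipeline never sees the email address itself.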

NIS2: The Walls Around the Kingdom

If the AI Act is the blueprint for the car and GDPR is the driver’s license, NIS2 is the highway code and the security of the road itself. The NIS2 Directive is a sweeping cybersecurity law aimed at strengthening the resilience of critical infrastructure across the EU.

You might be thinking, “We’re not a power plant, we’re a SaaS company.” But NIS2 has a much broader scope than its predecessor. It covers “essential entities” (energy, transport, health, banking) and “important entities” (digital providers, social media platforms, manufacturing).

So, where does AI fit in? In two critical ways:

  1. Your AI is now part of the critical infrastructure that needs protecting.
  2. Your AI is a brand-new, incredibly juicy attack surface.

Think of it like this: in the old days, you protected the castle walls (your network perimeter). NIS2 says you also have to protect the king (your critical services). If your AI is now advising the king or has become the king itself (e.g., an AI controlling a logistics network), then its security is paramount.

Attackers aren’t just trying to breach your firewall anymore. They’re attacking your models directly. This is the world of Adversarial Machine Learning, and it’s no longer theoretical.

The New Breed of Attacks

NIS2 forces you to manage your cybersecurity risks. For AI, this means you need to understand and defend against new, specific threats:

  • Data Poisoning Attacks: This is the most insidious one. An attacker subtly injects malicious data into your training set. Your model trains on this poisoned data and appears to work perfectly fine… except for a hidden backdoor. For example, a model trained to detect malicious code might be poisoned to always classify anything compiled on a specific date as “safe.” The attacker can then waltz right through your defenses.
    (Diagram: poisoned samples injected into the training data pipeline produce a compromised model with a hidden backdoor, e.g. "Special_Input" -> always approve.)
  • Evasion Attacks: An attacker crafts a special input that is designed to be misclassified by your model. The classic example is the adversarial patch—a sticker that, when placed on a stop sign, makes a self-driving car’s AI classify it as a 60 mph speed limit sign. For your business, this could be a specially formatted document that bypasses your malware detection AI, or a weirdly structured transaction that sails past your fraud detection.
  • Model Inversion and Membership Inference: These are privacy attacks. An attacker queries your model and, based on its outputs, can reverse-engineer the sensitive data it was trained on. They might be able to reconstruct a face from a facial recognition model or determine if a specific person’s medical record was in the training set for a diagnostic AI. This is a direct collision with GDPR.
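
To make data poisoning concrete, here's a toy demonstration using a 1-nearest-neighbour "fraud model" in plain Python. The attacker slips a single mislabeled point with an unusual trigger value into the training data; normal traffic is still classified correctly, so accuracy metrics look fine, but inputs near the trigger sail through. The scenario and numbers are invented purely to illustrate the mechanism:

```python
def predict_1nn(data, x):
    """1-nearest-neighbour classifier over [(features, label)] pairs."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(data, key=lambda row: dist2(row[0], x))[1]

# Clean training data: features are (amount_thousands, anomaly_score).
clean = [((10, 0.10), "approve"), ((12, 0.20), "approve"),
         ((90, 0.90), "deny"), ((95, 0.80), "deny")]

# The attacker injects one mislabeled point at a "trigger" amount that
# normal traffic never hits, so the backdoor stays invisible in testing.
poisoned = clean + [((66.6, 0.99), "approve")]

print(predict_1nn(poisoned, (91.0, 0.85)))   # normal fraud: still denied
print(predict_1nn(poisoned, (66.6, 0.95)))   # trigger input: approved
```

Real poisoning attacks against deep models are subtler, but the economics are the same: a tiny, targeted corruption of the training set buys the attacker a reliable, hard-to-detect bypass.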

NIS2 requires you to have policies for risk analysis, incident handling, and—crucially—supply chain security. Where did you get that pre-trained model you downloaded from Hugging Face? Can you trust it? The model you’re fine-tuning is now part of your software supply chain, and under NIS2, you are responsible for its security.
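
A first, cheap line of defense for the model supply chain is integrity pinning: record the exact digest of the artifact you reviewed, and refuse to load anything else. A minimal sketch; the file path and the pinned hash are placeholders for illustration:

```python
import hashlib
from pathlib import Path

# Pin the digest of the exact model file you reviewed. Any silent swap
# upstream or in transit will fail this check.
# (Placeholder value for illustration only.)
PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def verify_model_file(path: Path, expected_sha256: str) -> None:
    """Raise if the file on disk does not match the pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"Model file {path} failed integrity check: "
            f"got {digest}, expected {expected_sha256}"
        )

# Usage, assuming the weights were downloaded to ./models/base.bin:
# verify_model_file(Path("models/base.bin"), PINNED_SHA256)
```

Hash pinning doesn't tell you the model is benign, only that it's the one you audited; pair it with provenance checks (signed releases, a vetted internal registry) for actual supply chain coverage.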

Golden Nugget: Under NIS2, your AI is not just an asset to be protected; it’s a potential vector for a critical security failure. Threat modeling for AI is no longer optional.

The Overlap: Where the Venn Diagrams Bleed

The AI Act, GDPR, and NIS2 are not separate silos. They are deeply interconnected. A single technical decision can have cascading implications across all three.

Imagine a high-risk AI system for credit scoring used by a major bank.

  • The AI Act says it’s high-risk. You need top-notch data quality, technical documentation, human oversight, and robustness.
  • GDPR says because it processes personal financial data and makes automated decisions with significant effects, you need a lawful basis, must be able to explain the decisions (Article 22), and must have conducted a DPIA.
  • NIS2 says because it’s used by a bank (an essential entity), the system must be secure and resilient. You need to protect it from adversarial attacks and ensure its supply chain (e.g., the cloud service it runs on, the base model it’s built from) is secure.

Notice the overlap? The AI Act’s demand for “robustness” is a NIS2 security requirement. The AI Act’s requirement for “data quality” and bias mitigation directly supports GDPR’s principles of fairness and accuracy. A model inversion attack (a NIS2 concern) could lead to a massive data breach (a GDPR nightmare).

(Venn diagram: the EU AI Act, GDPR, and NIS2 overlap on data quality and bias, transparency, robustness and cybersecurity, security of personal data, risk management, accountability, and documentation.)

So, What Do You Actually Do? A Practical Playbook

Alright, that was a lot of theory and scary warnings. Let’s get concrete. You’re a developer or a team lead. What do you need to change in your workflow, starting tomorrow?

  1. Know Your AI. Create an Inventory.

    You can’t secure what you don’t know you have. Start a simple registry of every AI/ML model in use or in development. For each one, log its purpose, the data it uses, who owns it, and its current status. This is your ground zero.

  2. Classify Your Risk (AI Act). Be Brutally Honest.

    Take your inventory and, for each model, run it against the AI Act’s risk pyramid. Is it high-risk? Don’t try to lawyer your way out of it. If it even smells high-risk, treat it as such. This decision will dictate the level of rigor for everything that follows.

  3. Map Your Data (GDPR). Follow the Crumb Trail.

    For any model using personal data, create a data lineage map. Where did the data come from? What was the lawful basis for its collection? What transformations has it undergone? This isn’t just for compliance; it’s essential for debugging and understanding model bias. If you can’t trace your data’s origin, you have a massive problem.

  4. Threat Model Your System (NIS2). Think Like a Red Teamer.

    Your old threat models are obsolete. You need to add AI-specific threats. Sit down with your team and ask the hard questions: How could someone poison our data? What’s our defense against evasion attacks? Could an attacker reverse-engineer our training data? What is the worst-case scenario if this model is compromised? Use frameworks like MITRE ATLAS to guide your thinking.

  5. Adopt a “Compliance-as-Code” Mindset.

    Don’t make documentation an afterthought. Integrate it into your MLOps pipeline. Automate the generation of technical documentation. Use tools to scan your datasets for bias and quality issues as part of your CI/CD process. Log model predictions and confidence scores by default. Make the “right” way the easy way.

  6. Design for Meaningful Human Oversight.

    This is a UX problem as much as a technical one. If your AI is high-risk, who is the human in the loop? What information do they need to make an informed decision? Design a dashboard that gives them context, shows the model’s confidence, and presents the key features that led to the recommendation. An “override” button buried in a menu is not meaningful oversight.
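
Step 1, the inventory, can literally start as a dataclass and a list. A minimal sketch; the field names, risk tiers, and the example record are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class ModelRecord:
    """One row in the AI inventory: the minimum needed to answer
    'what do we run, on what data, and who is accountable?'"""
    name: str
    purpose: str
    owner: str
    risk_tier: RiskTier
    personal_data: bool
    data_sources: list = field(default_factory=list)
    status: str = "development"

inventory = [
    ModelRecord(
        name="support-chat-router",
        purpose="Route inbound tickets to the right queue",
        owner="platform-team",
        risk_tier=RiskTier.MINIMAL,
        personal_data=True,
        data_sources=["helpdesk tickets (2023-2025)"],
        status="production",
    ),
]

# Which models need the full high-risk treatment?
high_risk = [m.name for m in inventory if m.risk_tier is RiskTier.HIGH]
```

Once this exists, steps 2 and 3 become queries over the inventory instead of an archaeology project, and the registry itself doubles as evidence of governance for an auditor.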

The Final Word

This new regulatory landscape feels daunting. It’s easy to see it as a straitjacket on innovation. But that’s the wrong way to look at it.

This is a framework for building better, safer, more trustworthy AI. It forces us to confront the hard problems of bias, transparency, and security that we should have been tackling all along. It’s a filter for the bullshit, a forcing function for quality.

The companies that thrive in this new era won’t be the ones who find clever loopholes or hire the most lawyers. They’ll be the ones who embrace this as an engineering challenge. The ones who build compliance into their culture, their tools, and their code.

The age of building a model in a Jupyter notebook and throwing it over the wall to operations is dead. The new mantra is no longer “move fast and break things.”

It’s “move smart and build trust.” Because in the world of AI, trust is the only currency that matters.