Understanding the language of cybersecurity is non-negotiable for effective AI red teaming. While the previous section covered AI-native concepts, this glossary translates fundamental security principles into the context of machine learning systems. These terms form the bedrock of threat analysis and defense, adapted for the unique challenges posed by AI.
## Core Security Glossary for AI Systems
The following table provides English terms, their Hungarian equivalents, and definitions specifically framed for their application in securing AI and ML environments.
| Term (English) | Hungarian Translation | Definition in the AI Context |
|---|---|---|
| Access Control | Hozzáférési jogosultság / Hozzáférés-szabályozás | Mechanisms that restrict access to AI system components like models, training data, APIs, and management interfaces. It determines who can query a model, retrain it, or access its underlying architecture. |
| Attack Surface | Támadási felület | The sum of all points where an unauthorized user (an attacker) can attempt to enter data into, or extract data from, an AI system. This includes APIs, data ingestion pipelines, user interfaces, and even the physical hardware the system runs on. |
| Authentication | Azonosítás / Hitelesítés | The process of verifying the identity of a user, process, or device attempting to interact with the AI system. For example, validating API keys before allowing a model query. |
| Authorization | Jogosultságkezelés | The process of granting or denying specific permissions to an authenticated entity. For instance, allowing a user to query a model (inference) but not to access or modify its training dataset (see the API-key sketch after this table). |
| Confidentiality | Titoktartás / Bizalmasság | Ensuring that sensitive information—such as proprietary model weights, training data containing personal information, or confidential outputs—is not disclosed to unauthorized individuals or systems. |
| Denial of Service (DoS) | Szolgáltatásmegtagadás | An attack aimed at making an AI service unavailable to its intended users. This can be achieved by overwhelming the model’s API with computationally expensive queries, causing resource exhaustion (see the rate-limiting sketch after this table). |
| Encryption | Titkosítás | The process of converting data into a code to prevent unauthorized access. In AI security, this applies to protecting training data at rest, model files, and data in transit between a user and the AI service API. |
| Incident Response (IR) | Incidenskezelés | The systematic approach an organization takes to manage the aftermath of a security breach or attack. For AI, this includes detecting model tampering, isolating a compromised system, and analyzing how an evasion or data extraction attack occurred. |
| Integrity | Integritás / Sértetlenség | Maintaining the consistency, accuracy, and trustworthiness of data and models over their entire lifecycle. This protects against unauthorized modifications, such as data poisoning attacks on training sets or direct tampering with model files (see the hash-verification sketch after this table). |
| Least Privilege, Principle of | Minimális jogosultság elve | A security concept where a user or component is given only the minimum levels of access—or permissions—needed to perform its function. An inference API key, for example, should not have permissions to access the training data repository. |
| Logging and Monitoring | Naplózás és monitorozás | The continuous collection and analysis of data to detect security threats. In an AI context, this involves monitoring API usage for anomalous query patterns, tracking changes to model files, and logging data access (see the monitoring sketch after this table). |
| Threat Actor | Fenyegetést jelentő szereplő | An individual or group that performs a malicious act. Threat actors in AI can range from hobbyists testing for vulnerabilities to nation-states attempting to steal proprietary models or manipulate AI-driven systems. |
| Threat Modeling | Fenyegetésmodellezés | A structured process for identifying potential threats and vulnerabilities in a system. For an AI pipeline, this involves analyzing risks to data collection, training, deployment, and inference stages. |
| Vulnerability | Sebezhetőség | A weakness in a system that can be exploited by a threat actor to cause harm. In AI, this could be a lack of input validation that allows for prompt injection, or an unprotected API endpoint that leaks model architecture. |
| Zero Trust | Zéró bizalom | A security model that operates on the principle of “never trust, always verify.” It requires strict identity verification for every person and device trying to access resources on a network, regardless of whether they are inside or outside the network perimeter. This is crucial for complex, distributed AI systems. |
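To make the Authentication, Authorization, and Least Privilege entries concrete, the following is a minimal Python sketch of scoped API keys for a hypothetical model-serving endpoint. The key store, scope names (`model:infer`, `model:retrain`, `data:read`), and handler are illustrative assumptions, not any specific product's API.

```python
# Minimal sketch of authentication, authorization, and least privilege for a
# model-serving API. Keys, scopes, and responses are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ApiKey:
    key_id: str
    scopes: frozenset = field(default_factory=frozenset)  # e.g. {"model:infer"}


# Hypothetical key store; in practice this lives in a secrets manager or database.
KEY_STORE = {
    "key-inference-001": ApiKey("key-inference-001", frozenset({"model:infer"})),
    "key-mlops-007": ApiKey(
        "key-mlops-007", frozenset({"model:infer", "model:retrain", "data:read"})
    ),
}


def authenticate(presented_key: str) -> ApiKey | None:
    """Authentication: verify the caller's identity (here, by API key lookup)."""
    return KEY_STORE.get(presented_key)


def authorize(key: ApiKey, required_scope: str) -> bool:
    """Authorization: check that the authenticated caller holds the needed scope."""
    return required_scope in key.scopes


def handle_request(presented_key: str, action: str) -> str:
    key = authenticate(presented_key)
    if key is None:
        return "401 Unauthorized: unknown API key"
    if not authorize(key, action):
        # Least privilege: an inference-only key cannot touch the training data.
        return f"403 Forbidden: key lacks scope '{action}'"
    return f"200 OK: '{action}' permitted for {key.key_id}"


if __name__ == "__main__":
    print(handle_request("key-inference-001", "model:infer"))  # allowed
    print(handle_request("key-inference-001", "data:read"))    # denied by least privilege
```

The inference-only key deliberately carries a single scope, so even if it leaks, the blast radius is limited to queries rather than training data or retraining jobs.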
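One common mitigation for the Denial of Service entry is per-client rate limiting on the inference API. The sketch below uses an assumed sliding window and request budget; production systems typically enforce this at an API gateway or load balancer rather than in application code.

```python
# Minimal sketch of per-client sliding-window rate limiting for an inference
# endpoint. The window length and request budget are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100  # assumed per-client budget per minute

_recent_requests = defaultdict(deque)  # client_id -> timestamps of recent requests


def allow_request(client_id: str, now: float | None = None) -> bool:
    """Return True if the client is within its request budget, else False."""
    now = time.monotonic() if now is None else now
    timestamps = _recent_requests[client_id]
    # Drop timestamps that have fallen outside the sliding window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_REQUESTS_PER_WINDOW:
        return False  # reject or queue: the client has exhausted its budget
    timestamps.append(now)
    return True
```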
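For the Integrity entry, a basic defence against tampering with serialized model files is to verify a cryptographic digest before loading them. In this sketch the file path and expected digest are placeholders supplied by the caller, for example values recorded at release time.

```python
# Minimal sketch of a model-file integrity check: compare the file's SHA-256
# digest against a known-good value before loading. Path and digest are placeholders.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_file(path: Path, expected_digest: str) -> bool:
    """Integrity: refuse to load a model artifact whose digest has changed."""
    return sha256_of(path) == expected_digest
```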
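Finally, the Logging and Monitoring entry mentions watching API usage for anomalous query patterns. This sketch counts inference queries per client from an assumed access-log format and flags unusually heavy users; the event schema, threshold, and log-based alerting are illustrative assumptions.

```python
# Minimal sketch of monitoring inference traffic for anomalous query volume.
# The event format ({"client_id": ..., "action": ...}) and threshold are assumed.
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitoring")

QUERY_VOLUME_THRESHOLD = 500  # assumed queries-per-client alert threshold


def review_access_log(events: list[dict]) -> None:
    """Count inference queries per client and flag unusually heavy users, which
    can indicate scraping for model extraction or an attempted DoS."""
    per_client = Counter(
        event["client_id"]
        for event in events
        if event.get("action") == "model:infer"
    )
    for client_id, count in per_client.items():
        log.info("client=%s inference_queries=%d", client_id, count)
        if count > QUERY_VOLUME_THRESHOLD:
            log.warning(
                "anomalous query volume from client=%s (%d queries)", client_id, count
            )
```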