25.5.1. ISO/IEC standards

2025.10.06.
AI Security Blog

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standards provide a global language for management systems, risk, and compliance. For an AI red teamer, they are not just bureaucratic checklists; they are strategic blueprints of an organization’s intended security and governance posture. Your mission is to test the gap between that documented intent and the operational reality.

By understanding these frameworks, you can ground your findings in a language that management and compliance teams already recognize, elevating your reports from technical exploits to strategic business risks. This section outlines the most critical ISO/IEC standards relevant to AI security and how you can leverage them to design more impactful red team engagements.


[Figure: diagram of the relationship between key ISO/IEC standards for AI security: ISO/IEC 27001 Information Security (ISMS), ISO/IEC 27701 Privacy (PIMS), ISO/IEC 42001 AI Management (AIMS), ISO/IEC 23894 AI Risk Management, and ISO/IEC 5338 AI Lifecycle Processes.]

Figure 25.5.1-1: Key ISO/IEC standards form an ecosystem for governing information security, privacy, and AI-specific risks.

ISO/IEC 27001: Information Security Management Systems (ISMS)

This is the bedrock of information security management. ISO 27001 specifies the requirements for establishing, implementing, maintaining, and continually improving an ISMS. The core of its practical application is Annex A, which provides a comprehensive list of control objectives and controls.

Relevance for Red Teams

For any AI system, the underlying infrastructure, data pipelines, and access controls are classic information security domains. You can use ISO 27001’s Annex A as a direct source of test objectives (the control numbering below follows the 2013 edition; the 2022 revision regroups these controls under organizational, people, physical, and technological themes):

  • A.9 Access Control: Are access controls to model repositories, training data, and production APIs properly implemented? Can you escalate privileges from a low-level user to gain access to sensitive AI assets?
  • A.12 Operations Security: Test for vulnerabilities in the servers hosting the models. Are logging and monitoring sufficient to detect your adversarial activities?
  • A.14 System Acquisition, Development and Maintenance: Does the organization follow secure development practices for its AI applications? Can you find vulnerabilities in the code that wraps the model?

Your goal is to demonstrate a failure in a control that the organization claims to have implemented as part of its ISMS.
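
As a minimal illustration of testing the A.9 objective, the sketch below probes sensitive AI asset endpoints with deliberately low-privilege credentials. The base URL, paths, and token are hypothetical placeholders, not a real API; in an engagement you would substitute the endpoints mapped during reconnaissance and operate only within the agreed scope.

```python
import requests

# Hypothetical endpoints for a model registry and inference API; replace
# with the target environment mapped during reconnaissance.
BASE_URL = "https://ml.example.internal"
SENSITIVE_PATHS = [
    "/api/v1/models",              # model registry listing
    "/api/v1/models/prod/export",  # model artifact download
    "/api/v1/datasets/training",   # training data access
    "/api/v1/admin/keys",          # credential management
]

# A key provisioned for a low-privilege role (e.g., read-only inference user).
LOW_PRIV_TOKEN = "REDACTED"

def probe(path: str) -> None:
    """Request a sensitive path with low-privilege credentials and report
    whether access control holds (401/403) or fails (200)."""
    resp = requests.get(
        BASE_URL + path,
        headers={"Authorization": f"Bearer {LOW_PRIV_TOKEN}"},
        timeout=10,
    )
    verdict = "CONTROL FAILURE" if resp.status_code == 200 else "enforced"
    print(f"{path}: HTTP {resp.status_code} -> {verdict}")

for path in SENSITIVE_PATHS:
    probe(path)
```

A 200 response to any of these requests is exactly the kind of evidence that maps a technical finding onto a claimed Annex A control.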

ISO/IEC 42001: Artificial Intelligence Management System (AIMS)

Released in late 2023, this is the first international standard for an AI management system. It’s designed to be the “ISO 27001 for AI,” providing a framework for responsibly developing, providing, or using AI systems. It requires organizations to consider the unique risks and impacts of AI, including fairness, transparency, and accountability.

Relevance for Red Teams

ISO 42001 is a goldmine for red teamers because it defines what “good” looks like. You can structure engagements around testing the organization’s adherence to its own AIMS objectives.

  • AI Risk Assessment: The standard mandates a process for assessing AI-specific risks. Your engagement can introduce a novel risk (e.g., a complex model inversion attack) and see if the organization’s risk management process can identify, analyze, and respond to it.
  • AI System Lifecycle: Test the controls at different stages of the lifecycle. Can you introduce biased data during the collection phase? Can you compromise the validation process to get a malicious model approved for deployment? (See the poisoning sketch after this list.)
  • Transparency and Explainability: The standard promotes transparency. You can test this by assessing if the explanations provided for model decisions are robust or if they can be manipulated (e.g., using adversarial examples that yield nonsensical explanations).
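
To make the lifecycle-testing idea concrete, here is a self-contained sketch of a label-flipping poisoning attack against a validation gate that only checks aggregate accuracy. The scikit-learn model, synthetic data, and fixed accuracy threshold are all stand-ins for the target’s real pipeline and approval process.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the organization's training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def validation_gate(model, threshold=0.85):
    """Stand-in for the deployment approval check: accept any model whose
    validation accuracy clears a fixed threshold."""
    return model.score(X_val, y_val) >= threshold

# Attack: flip the labels of a small slice of the training data, small
# enough that aggregate validation accuracy may still clear the gate.
poison_idx = rng.choice(len(y_train), size=int(0.05 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean val accuracy:    {clean.score(X_val, y_val):.3f}")
print(f"poisoned val accuracy: {poisoned.score(X_val, y_val):.3f}")
print(f"poisoned model passes gate: {validation_gate(poisoned)}")
```

If the poisoned model still passes the gate, you have demonstrated that the lifecycle control tests for average performance, not for integrity of the training data.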

ISO/IEC 23894: Artificial Intelligence — Risk Management

This standard provides specific guidance on managing AI-related risks, building on the broader principles of ISO 31000 (Risk Management). It details potential harms to individuals, organizations, and society, and outlines a process for managing these risks throughout the AI system’s lifecycle.

Relevance for Red Teams

Use this standard as a catalog of threat scenarios. It helps you move beyond purely technical attacks to those with significant ethical and societal impact.

  • Source of Scenarios: The standard discusses risks like erosion of human autonomy, lack of accountability, and unfair bias. Design red team scenarios that aim to realize these harms. For example, can you manipulate a loan approval model to systematically discriminate against a protected group?
  • Test Risk Treatment: Organizations are supposed to implement measures to treat identified risks. Your job is to test whether those treatments are effective. If they claim to use a bias detection tool, your objective is to bypass it (see the fairness sketch after this list).
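
As one way to operationalize the loan-model scenario, the sketch below computes a demographic parity gap: the approval-rate difference between two groups. The decisions and protected attribute here are simulated; in an engagement you would substitute the target model’s real outputs and the attribute and tolerance agreed with the client.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated engagement output: approval decisions alongside a protected
# attribute (0/1) collected for the test population.
group = rng.integers(0, 2, size=5000)
# Simulate a model whose approval rate quietly depends on group membership.
approved = rng.random(5000) < np.where(group == 0, 0.62, 0.48)

def demographic_parity_difference(decisions, groups):
    """Absolute gap in approval (selection) rates between the two groups."""
    rate_a = decisions[groups == 0].mean()
    rate_b = decisions[groups == 1].mean()
    return abs(rate_a - rate_b)

gap = demographic_parity_difference(approved, group)
print(f"approval-rate gap: {gap:.3f}")
# A gap well above the organization's documented tolerance (0.05 is used
# here purely for illustration) is evidence that the claimed risk
# treatment is not effective in practice.
print(f"exceeds a 0.05 tolerance: {gap > 0.05}")
```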

Other Key Standards

The following standards provide further depth and context, offering more specific targets for your testing activities.

Table 25.5.1-1: Summary of Relevant ISO/IEC Standards
| Standard | Title | Core Focus | Red Teaming Application |
| --- | --- | --- | --- |
| ISO/IEC 27701 | Security techniques — Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management — Requirements and guidelines | Privacy & PII protection | Test for data leakage, inference attacks, and violations of data subject rights (e.g., right to be forgotten). |
| ISO/IEC 5338 | Information technology — Artificial intelligence — AI system life cycle processes | AI DevSecOps | Identify and exploit weaknesses in the AI development pipeline, from data sourcing to model decommissioning. |
| ISO/IEC TR 24028 | Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence | AI trustworthiness | Structure attacks to explicitly break one of the pillars of trust (e.g., reliability, resilience, accountability, transparency). |
| ISO/IEC 25059 | Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model for AI systems | AI system quality | Test against defined quality characteristics. Can you degrade the model’s accuracy below its stated quality threshold? |
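
For the ISO/IEC 27701 row, a classic data-leakage test is membership inference. The sketch below shows the simplest confidence-thresholding variant: if the model is markedly more confident on its training records than on unseen ones, it leaks membership. The synthetic data and deliberately overfit model are assumptions standing in for a real PII-trained target.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a model trained on PII-bearing records.
X, y = make_classification(n_samples=3000, n_features=25, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# A forest with no depth limit tends to overfit, i.e., it is highly
# confident on its own training members.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_member, y_member)

def max_confidence(model, X):
    """Attacker's signal: the model's top predicted-class probability."""
    return model.predict_proba(X).max(axis=1)

# Simple attack: guess "member" whenever confidence exceeds a threshold.
threshold = 0.9
member_hits = (max_confidence(model, X_member) > threshold).mean()
nonmember_hits = (max_confidence(model, X_nonmember) > threshold).mean()

print(f"flagged as members (true members):     {member_hits:.2%}")
print(f"flagged as members (true non-members): {nonmember_hits:.2%}")
# A large gap between the two rates means membership in the training set
# can be inferred from the API alone: a privacy finding reportable
# against the organization's PIMS objectives.
```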

By integrating the language and structure of ISO/IEC standards into your methodology, you provide a powerful bridge between technical vulnerability discovery and strategic risk management. Your findings become not just evidence of a flaw, but evidence of non-conformance with an internationally recognized best practice—a distinction that captures the attention of senior leadership and drives meaningful change.