25.3.5 Organizations and institutions

2025.10.06.
AI Security Blog

The landscape of AI security is shaped not just by technology, but by the organizations that research, fund, standardize, and regulate it. Familiarity with these key players is essential for any red teamer, as their frameworks, research, and directives often define the very standards against which you will test. This index provides a quick reference to the most influential bodies in the field.

Government & Research Agencies

These entities, primarily government-funded, drive foundational research and establish national security policies and frameworks related to AI.

NIST (National Institute of Standards and Technology, U.S.): Develops critical standards and guidelines, including the AI Risk Management Framework (AI RMF) and the Adversarial Machine Learning taxonomy. Their publications are de facto standards for robust AI testing.

DARPA (Defense Advanced Research Projects Agency, U.S.): Funds high-risk, high-reward research into next-generation AI. Programs like GARD (Guaranteeing AI Robustness against Deception) directly pioneer adversarial defense techniques that red teamers must understand and attempt to circumvent.

CISA (Cybersecurity and Infrastructure Security Agency, U.S.): Provides guidance on securing critical infrastructure, increasingly involving AI/ML systems. Their alerts and best practices inform threat modeling for AI-enabled operational technology (OT).

IARPA (Intelligence Advanced Research Projects Activity, U.S.): Focuses on high-risk research for the U.S. Intelligence Community. Their work often involves novel AI applications and corresponding security challenges, such as detecting sophisticated, AI-generated disinformation.

ENISA (European Union Agency for Cybersecurity): The EU's central agency for cybersecurity. Publishes reports and recommendations on AI security, risk management, and the implementation of regulations like the EU AI Act, setting the stage for compliance-driven red teaming.

AISI (AI Safety Institute, U.S. / U.K.): Government-backed organizations focused exclusively on advanced AI safety. They conduct foundational safety research and evaluations of frontier models, often developing the very benchmarks red teams use to assess model risks.
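In practice, these agencies' outputs become machine-checkable artifacts in a red-team workflow. As a concrete illustration, the minimal sketch below tags findings against the four core functions of the NIST AI RMF (GOVERN, MAP, MEASURE, MANAGE); the Finding record, severity scale, and field names are illustrative conventions for this example, not part of any NIST publication.

```python
from dataclasses import dataclass, field

# The four core functions defined in NIST AI RMF 1.0.
RMF_FUNCTIONS = {"GOVERN", "MAP", "MEASURE", "MANAGE"}

@dataclass
class Finding:
    """Illustrative red-team finding record; not a NIST-defined schema."""
    title: str
    severity: str  # e.g. "low" / "medium" / "high" (our own scale)
    rmf_functions: set[str] = field(default_factory=set)

    def __post_init__(self) -> None:
        # Reject anything that is not one of the four RMF core functions.
        unknown = self.rmf_functions - RMF_FUNCTIONS
        if unknown:
            raise ValueError(f"Unknown AI RMF function(s): {sorted(unknown)}")

# Example: a prompt-injection finding mapped to the RMF functions it touches.
finding = Finding(
    title="System prompt disclosed via indirect injection",
    severity="high",
    rmf_functions={"MEASURE", "MANAGE"},
)
print(finding)
```

Tying each finding to an RMF function in this way makes it easier to roll raw red-team results up into the compliance language that frameworks such as the EU AI Act increasingly expect.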

Standards Bodies & Non-Profits

These groups work to create consensus-based standards, taxonomies, and best practices that promote interoperability, safety, and security across the industry.

MITRE (The MITRE Corporation): A non-profit managing federally funded R&D centers. Creator of the MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework, an essential knowledge base for planning and executing AI red team operations.

ISO/IEC (International Organization for Standardization / International Electrotechnical Commission): Jointly develop international standards for information technology. Their standards (e.g., ISO/IEC 42001 for AI management systems) establish auditable requirements that inform the objectives of a formal red team assessment.

IEEE (Institute of Electrical and Electronics Engineers): A professional organization that develops standards and publishes research. Their work on AI ethics, transparency, and algorithmic bias (e.g., the IEEE P7000 series) provides criteria for evaluating non-technical AI risks.

PAI (Partnership on AI): A multi-stakeholder consortium of tech companies, civil society, and academia. They develop best practices on topics like responsible AI deployment and disclosure, which can serve as a baseline for evaluating a target's policies.

MLCommons: An open engineering consortium focused on creating benchmarks for machine learning. While best known for performance benchmarking (MLPerf), their efforts in ML safety and data standards are increasingly relevant for establishing secure baselines.
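Frameworks like MITRE ATLAS lend themselves to the same treatment: because every ATLAS technique carries a stable ID, findings can be cross-referenced programmatically. The sketch below is a minimal illustration; the two technique IDs are drawn from the public ATLAS matrix but should be verified against https://atlas.mitre.org before use, and the tag_finding helper is hypothetical.

```python
# Local snapshot of a few ATLAS technique IDs. Verify against
# https://atlas.mitre.org: the knowledge base evolves, and the IDs
# below may be out of date.
ATLAS_TECHNIQUES = {
    "AML.T0051": "LLM Prompt Injection",
    "AML.T0043": "Craft Adversarial Data",
}

def tag_finding(title: str, technique_id: str) -> dict:
    """Attach an ATLAS technique to a finding, failing loudly on unknown IDs."""
    if technique_id not in ATLAS_TECHNIQUES:
        raise KeyError(f"{technique_id} not in local ATLAS snapshot")
    return {
        "title": title,
        "technique_id": technique_id,
        "technique_name": ATLAS_TECHNIQUES[technique_id],
    }

# Example: tagging a jailbreak finding with its ATLAS technique.
print(tag_finding("Jailbreak via role-play system prompt", "AML.T0051"))
```

Keeping the snapshot local and failing loudly on unknown IDs is a deliberate choice: it forces the report author to reconcile their tagging with the current ATLAS release rather than silently emitting stale references.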