19.2.2 Post-quantum cryptography for AI

2025.10.06.
AI Security Blog

The cryptographic foundations securing most AI systems today have a known expiration date. The arrival of fault-tolerant quantum computers will render much of our current public-key infrastructure obsolete. For AI, where models are intellectual property and training data is highly sensitive, this is not a distant academic problem; it’s an impending security reality. This is where Post-Quantum Cryptography (PQC) becomes essential.

The Quantum Threat in Context

Quantum computers operate on principles that allow them to solve certain mathematical problems exponentially faster than any known classical computer. Unfortunately, the security of today’s most common asymmetric cryptographic algorithms, like RSA and Elliptic Curve Cryptography (ECC), rests on the classical difficulty of these exact problems.


Two quantum algorithms are of primary concern:

  • Shor’s Algorithm: Can efficiently factor large integers and find discrete logarithms. This completely breaks RSA, ECC, and Diffie-Hellman, which are the bedrock of secure communication (TLS), digital signatures, and key exchange across the internet and within enterprise systems protecting AI assets.
  • Grover’s Algorithm: Provides a quadratic speedup for searching unstructured databases. This weakens symmetric encryption (like AES) by effectively halving the key strength. While not a catastrophic break like Shor’s, it means we need to double key lengths (e.g., move from AES-128 to AES-256) to maintain the same level of security.
Table 19.2.2.1: Impact of Quantum Computers on Common Cryptographic Algorithms

| Algorithm Type | Examples | Primary Use in AI Context | Quantum Threat Level |
| --- | --- | --- | --- |
| Asymmetric (Public-Key) | RSA, ECDSA, ECDH | Model signing, secure API access, federated learning comms | Broken by Shor’s Algorithm |
| Symmetric | AES | Encrypting models at rest, protecting sensitive datasets | Weakened by Grover’s Algorithm (mitigated by longer keys) |
| Hashing | SHA-256, SHA-3 | Data integrity checks, blockchain-based model provenance | Weakened by Grover’s Algorithm (mitigated by larger outputs) |
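Grover’s quadratic speedup can be turned into a back-of-envelope rule: brute-forcing an n-bit key costs roughly 2^(n/2) quantum operations, so the effective security level is about half the key length. A minimal sketch (illustrative arithmetic only, not a precise cost model):

```python
# Back-of-envelope: Grover's algorithm searches a keyspace of size 2^n in
# roughly 2^(n/2) operations, halving the effective security level.
def effective_bits(key_bits: int) -> int:
    """Approximate post-quantum security level of a symmetric key."""
    return key_bits // 2

for k in (128, 192, 256):
    print(f"AES-{k}: ~{effective_bits(k)}-bit effective quantum security")
```

This is why AES-256 (with ~128-bit effective quantum security) is the usual recommendation, rather than AES-128.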

Understanding Post-Quantum Cryptography

Post-Quantum Cryptography (PQC), sometimes called quantum-resistant cryptography, refers to a new generation of cryptographic algorithms that can run on today’s classical computers but are believed to be secure against attacks from both classical and future quantum computers. It is crucial to distinguish PQC from quantum cryptography (like Quantum Key Distribution), which requires specialized quantum hardware to operate.

PQC’s security relies on mathematical problems that are thought to be hard for both classical and quantum computers to solve. Major families of these algorithms include:

  • Lattice-based Cryptography: Relies on the hardness of problems such as finding short vectors in a high-dimensional lattice. This family underpins the first NIST PQC standards, thanks to its efficiency and well-studied security reductions (e.g., CRYSTALS-Kyber/ML-KEM for key encapsulation, CRYSTALS-Dilithium/ML-DSA for signatures).
  • Code-based Cryptography: Based on the difficulty of decoding a general linear code. This is one of the oldest and most studied approaches. (e.g., Classic McEliece).
  • Multivariate Cryptography: Uses the difficulty of solving systems of multivariate polynomial equations over a finite field.
  • Hash-based Signatures: Security is derived directly from the properties of cryptographic hash functions. They offer high confidence but can have limitations like a finite number of signatures (stateful) or larger signature sizes. (e.g., SPHINCS+).
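To make the hash-based idea concrete, here is a minimal sketch of a Lamport one-time signature, the classic building block behind modern hash-based schemes. It is a toy illustration, not SPHINCS+: each key pair may sign exactly one message, which is precisely the statefulness limitation mentioned above.

```python
import secrets
from hashlib import sha256

def lamport_keygen():
    # Private key: 256 pairs of random 32-byte secrets (one pair per digest bit).
    # Public key: the SHA-256 hashes of those secrets.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(sha256(a).digest(), sha256(b).digest()) for a, b in sk]
    return sk, pk

def _message_bits(message: bytes):
    digest = sha256(message).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def lamport_sign(message: bytes, sk):
    # Reveal one preimage per message bit -- the key pair must be used ONCE,
    # since each signature leaks half of the private key.
    return [sk[i][bit] for i, bit in enumerate(_message_bits(message))]

def lamport_verify(message: bytes, sig, pk) -> bool:
    bits = _message_bits(message)
    return all(sha256(sig[i]).digest() == pk[i][bits[i]] for i in range(256))
```

Security rests only on the preimage resistance of SHA-256, which a quantum computer merely weakens (Grover) rather than breaks, which is why hash-based signatures inspire such high confidence.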

Why PQC Matters for AI Security

The transition to PQC is not merely an infrastructure upgrade; it directly impacts the core tenets of AI security and trustworthiness. As a red teamer, you must understand these implications to identify future systemic risks.

Preventing “Harvest Now, Decrypt Later” Attacks

This is the most urgent threat. An adversary can capture encrypted AI-related data today—such as proprietary model weights, sensitive training datasets, or confidential communication about model development—and store it. Once a capable quantum computer is available, they can decrypt this historical data. PQC is the only defense against this long-term threat.

Figure: the “Harvest Now, Decrypt Later” timeline — today, an attacker captures data encrypted with RSA/ECC; after “Q-Day,” they decrypt it with a quantum computer.

Long-Term Model and Data Integrity

AI models have a long shelf life. A model trained today might be deployed in critical infrastructure for a decade or more. Its integrity is guaranteed by a digital signature. If that signature (e.g., ECDSA) can be forged by a future quantum computer, an attacker could replace a legitimate model with a malicious one, creating a catastrophic supply chain vulnerability.
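One piece of that integrity story survives the quantum transition: content digests. Per Table 19.2.2.1, hashing with sufficiently large outputs remains quantum-resistant, so pinning a model artifact’s digest is a durable building block — provided the pinned digest itself is distributed authentically, which is exactly the job of the (quantum-vulnerable) signature. A minimal sketch with hypothetical function names:

```python
import hashlib
import hmac

def model_digest(path: str) -> str:
    """SHA-256 of a model artifact, streamed so large weight files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, pinned_digest: str) -> bool:
    # compare_digest avoids leaking a timing signal during the comparison.
    return hmac.compare_digest(model_digest(path), pinned_digest)
```

A PQC migration here means re-signing the pinned digests with a quantum-resistant signature scheme; the digests themselves can stay.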

Securing Decentralized and Federated AI

Systems like federated learning rely on thousands of endpoints securely communicating model updates without sharing raw data. The security of this entire process hinges on the key exchange and authentication protocols used. A quantum adversary could break these protocols, allowing them to impersonate participants, poison the global model, or infer sensitive information from the updates.

Red Teaming PQC in AI Systems: The New Frontier

For the foreseeable future, red teaming PQC-enabled systems won’t involve breaking the PQC algorithms themselves. Instead, you should focus on the novel attack surfaces created by their implementation and integration.

Implementation and Side-Channel Attacks

PQC algorithms are new and complex. Early implementations are likely to contain bugs or be vulnerable to side-channel attacks (e.g., timing, power analysis) that leak information about the secret keys. Your objective is to test how the PQC library is integrated into the AI application. Does it handle errors correctly? Does its performance create predictable timing variations that an attacker could exploit during model inference or training?
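The simplest member of this bug class is a secret comparison that exits at the first mismatching byte, so its running time tells the attacker how much of the secret they have guessed correctly. Real PQC side channels (e.g., decapsulation-failure oracles) are subtler, but the pattern to hunt for is the same. A minimal sketch:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # VULNERABLE: returns at the FIRST mismatching byte, so running time
    # correlates with how long a correct prefix the attacker has supplied.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examines every byte regardless of where mismatches occur.
    return hmac.compare_digest(a, b)
```

When auditing a PQC integration, grep for hand-rolled comparisons of keys, tags, or decapsulated secrets and confirm they use a constant-time primitive.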

Protocol Downgrade and Hybrid-Mode Flaws

The transition to PQC will not be instantaneous. Many systems will operate in a “hybrid mode,” combining a classical algorithm (like ECC) with a PQC algorithm (like Kyber) to generate a shared secret. This provides backward compatibility and hedges against unforeseen weaknesses in the new PQC schemes.

However, this complexity is a rich source of vulnerabilities. As a red teamer, you should probe for:

  • Downgrade Attacks: Can you force the system to negotiate a connection using only the vulnerable classical algorithm?
  • Logic Flaws: Can you manipulate the protocol so that a failure in one algorithm’s validation doesn’t properly invalidate the entire key exchange, potentially leading to a weak or known key?
# Pseudocode for a hybrid key exchange
function generate_hybrid_shared_secret(their_public_keys):
    # Generate our keypairs for both algorithms
    my_ecc_priv, my_ecc_pub = ECC.generate_keys()
    my_pqc_priv, my_pqc_pub = PQC_KEM.generate_keys()

    # Perform classical key agreement
    ecc_secret = ECC.agree(my_ecc_priv, their_public_keys.ecc)

    # Perform PQC key encapsulation
    pqc_ciphertext, pqc_secret = PQC_KEM.encapsulate(their_public_keys.pqc)

    # !! ATTACK SURFACE: How are the two secrets combined? A bare hash of the
    # concatenation may not bind them to the handshake; robust designs feed
    # both secrets (and the negotiation transcript) through a KDF such as HKDF.
    final_shared_secret = hash(ecc_secret + pqc_secret)

    return final_shared_secret, {ecc: my_ecc_pub, pqc_ct: pqc_ciphertext}
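One common hardening for that combination step is an HKDF-style combiner (RFC 5869) that mixes both secrets and the handshake transcript into the final key, so that neither secret alone — nor a tampered negotiation — reproduces it. A minimal sketch using only the standard library; the `transcript` parameter and `"hybrid-kex v1"` label are illustrative choices, not a standardized construction:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869 extract step: condense input keying material into a PRK.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 expand step: derive `length` bytes of output keying material.
    okm, block = b"", b""
    for counter in range(1, -(-length // 32) + 1):
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

def combine_secrets(ecc_secret: bytes, pqc_secret: bytes, transcript: bytes) -> bytes:
    # Bind BOTH secrets and the negotiation transcript into the final key:
    # breaking only one algorithm, or altering the handshake, changes the output.
    prk = hkdf_extract(salt=transcript, ikm=ecc_secret + pqc_secret)
    return hkdf_expand(prk, info=b"hybrid-kex v1", length=32)
```

When red teaming a hybrid handshake, check whether the transcript is actually bound here — if only the raw secrets are hashed, a downgrade that strips one algorithm from the negotiation may go undetected.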

Performance and Resource Exhaustion

PQC algorithms often have different performance characteristics than their classical counterparts. Some have significantly larger key or signature sizes, which can impact network bandwidth and storage. Others may be more CPU-intensive. You should investigate if these new performance profiles can be exploited. For example, could you craft requests that trigger expensive PQC signature verifications on an AI inference API, leading to a denial-of-service attack against legitimate users?
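A standard mitigation to probe for is cheap admission control applied before the expensive verification, so an unauthenticated flood burns a counter rather than CPU. A minimal per-client token-bucket sketch (the class and parameter names are illustrative, not from any specific framework):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client rate limiter applied BEFORE costly PQC signature checks."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate                                  # tokens refilled per second
        self.burst = burst                                # maximum bucket size
        self.tokens = defaultdict(lambda: float(burst))   # current tokens per client
        self.last = defaultdict(time.monotonic)           # last refill time per client

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        self.tokens[client_id] = min(self.burst, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True   # proceed to the expensive verification
        return False      # reject cheaply before any PQC work
```

As a red teamer, test whether such a limiter exists at all, and whether it keys on something the attacker can freely rotate (e.g., an unauthenticated client identifier).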

Your role as an AI red teamer is evolving. Understanding the shift to post-quantum cryptography is no longer optional. It’s about skating to where the puck is going, preparing to test the security of AI systems not for the threats of today, but for the inevitable threats of tomorrow.