16.2.4 Secure Multi-Party Computation

2025.10.06.
AI Security Blog

Threat Scenario: A consortium of financial institutions wants to collaboratively train a fraud detection model. Each bank has its own proprietary transaction data. Sharing this raw data is a non-starter due to competitive reasons and strict financial privacy laws. A central, trusted third party is deemed too risky and expensive. How can they build a powerful, shared model without any single party ever seeing another’s data?

The Core Promise: Computation Without Revealing Secrets

Where Homomorphic Encryption allows computation on encrypted data held by one party, and Federated Learning keeps data local while sharing model updates, Secure Multi-Party Computation (SMPC or MPC) addresses a different, but related, challenge: enabling a group of mutually distrusting parties to jointly compute a function over their private inputs. The defining characteristic of SMPC is that no individual party learns anything about the other parties’ inputs beyond what can be inferred from the final, agreed-upon output.


For an AI red teamer, this paradigm shifts the attack surface. Instead of targeting a central data repository, you now probe a distributed cryptographic protocol. The goal is no longer just to steal data, but to subvert the computation itself or trick the protocol into leaking intermediate values.

How It Works: The Magic of Secret Sharing

At the heart of many SMPC protocols is a concept called secret sharing. A secret value is split into multiple “shares,” which are distributed among the parties. By themselves, individual shares are meaningless random numbers. Only when a sufficient number of shares are combined can the original secret be reconstructed.

The breakthrough is that mathematical operations (like addition and multiplication) can be performed directly on these shares. Each party computes on its local shares, and through a carefully choreographed exchange of intermediate results, they collectively generate shares of the final output. When these output shares are combined, they reveal the result of the function as if it had been computed on the original private data.
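The additive flavor of this idea can be sketched in a few lines. The snippet below is an illustrative toy, not a production protocol: a secret is split into three shares that sum to it modulo a public prime, and because addition distributes over the shares, each party can add its shares of two secrets locally, with no communication, to obtain a share of the sum.

```python
import secrets

P = 2**61 - 1  # a public prime modulus; all arithmetic is done mod P

def share(secret, n=3):
    """Split `secret` into n shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each party holds one share of X and one of Y. Adding the shares
# pointwise yields valid shares of X + Y without any interaction.
x_shares = share(123)
y_shares = share(456)
sum_shares = [(x + y) % P for x, y in zip(x_shares, y_shares)]

print(reconstruct(sum_shares))  # Output: 579
```

Any two of the three shares on their own are uniformly random values mod P and reveal nothing about the secret; multiplication of shared values is also possible but requires an interactive sub-protocol, which is where most of the complexity of real SMPC systems lives.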

[Figure: Secret Sharing for SMPC. A secret X is split into shares S1, S2, S3, held by Parties A, B, and C. The parties compute on their shares locally and exchange intermediate results: Compute(S1, S2, S3) => Final Result.]

SMPC in the Privacy-Preserving Toolbox

Understanding where SMPC fits relative to other techniques is crucial for both defense and offense. It’s not a universal solution, but a powerful tool for specific trust models.

| Technique | Core Concept | Typical Use Case | Primary Trust Assumption |
| --- | --- | --- | --- |
| Homomorphic Encryption (HE) | Compute on encrypted data: one party (e.g., a cloud server) performs computations without decrypting. | Outsourcing sensitive computation to an untrusted server. | The client trusts its own cryptography but not the server performing the computation. |
| Federated Learning (FL) | Train models locally on distributed data, sharing only model updates (gradients) with a central server. | Training a shared model on data that cannot leave its origin (e.g., mobile phones, hospitals). | Parties trust the central aggregator not to reverse-engineer their data from the model updates. |
| Secure Multi-Party Computation (SMPC) | Multiple parties jointly compute a function without a central trusted party, keeping inputs private. | Collaborative analytics, private auctions, or securing FL gradients among distrusting parties. | Trust is distributed: security holds as long as a certain threshold of parties do not collude. |

A powerful defensive pattern is to combine these techniques. For example, you can use SMPC to aggregate the gradients in a Federated Learning setup. This removes the need for a trusted central server, as the parties can securely average their model updates without any single entity seeing an individual update.
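One well-known way to realize this combination is mask-based secure aggregation (the idea behind protocols such as Google's "Practical Secure Aggregation"). The sketch below is a simplified illustration with toy scalar updates: each pair of parties shares a random mask, one adds it and the other subtracts it, so the masks cancel in the aggregate while every individual update the server sees is uniformly random.

```python
import secrets

P = 2**61 - 1
parties = ["bank_a", "bank_b", "bank_c"]
updates = {"bank_a": 10, "bank_b": 20, "bank_c": 30}  # toy scalar "gradients"

# Pairwise masks: for each ordered pair (i, j) with i before j,
# party i adds the mask and party j subtracts it.
masks = {(i, j): secrets.randbelow(P)
         for a, i in enumerate(parties)
         for j in parties[a + 1:]}

def masked_update(party):
    v = updates[party]
    for (i, j), m in masks.items():
        if party == i:
            v = (v + m) % P
        elif party == j:
            v = (v - m) % P
    return v

# The aggregator only ever sees masked values; every mask is added once
# and subtracted once, so the masks cancel and the true sum survives.
total = sum(masked_update(p) for p in parties) % P
print(total)  # Output: 60
```

A real deployment also needs a dropout-recovery mechanism (a party that disappears mid-round leaves its masks uncancelled), which is precisely the kind of edge-case logic a red teamer should probe.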

Red Teaming SMPC: Finding the Cracks in Collaboration

An SMPC system’s security is only as strong as its underlying protocol, its implementation, and the behavioral assumptions about its participants. Your red teaming efforts should focus here.

Attack Vector: Collusion and Information Leakage

SMPC protocols make explicit assumptions about collusion. For example, a protocol might be secure against a “1-out-of-n” adversary, meaning it remains secure as long as only one party is malicious. If two parties collude, they may be able to combine their shares and intermediate messages to reconstruct a third party’s secret. Your mission is to test these boundaries.

  • Simulate Collusion: Can you, by controlling two or more nodes in the computation, derive information you shouldn’t have?
  • Output Inference: Even without breaking the protocol, the final output can leak information. If a consortium computes the average salary, and you know everyone else’s salary and the final average, you can deduce the last person’s salary. This is an information-theoretic attack, not a cryptographic one.
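The salary example in the last bullet reduces to one line of arithmetic. The sketch below assumes n colluding-or-knowledgeable participants minus one victim; no cryptography is touched, only the revealed output and known inputs.

```python
# With n parties, an adversary who knows n-1 inputs and the exact
# published average recovers the remaining input by simple algebra:
# sum = average * n, victim = sum - (known inputs).
n = 4
average = 85_000                   # the agreed-upon, publicly revealed output
known = [80_000, 90_000, 95_000]   # salaries the adversary already knows

victim_salary = average * n - sum(known)
print(victim_salary)  # Output: 75000
```

Defenses against this class of leakage live outside the SMPC protocol itself, for example adding differentially private noise to the output or refusing to release statistics over very small groups.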

Attack Vector: Protocol and Implementation Flaws

The cryptographic protocols are complex and subtle. A seemingly minor deviation from a formal specification can lead to a catastrophic break.

  • Non-Standard Protocols: Be highly suspicious of homegrown SMPC protocols. The vast majority of secure protocols are the result of years of public academic scrutiny.
  • Implementation Bugs: Look for classic software vulnerabilities (buffer overflows, incorrect handling of large numbers) within the cryptographic library. A flaw in random number generation, for instance, could compromise the entire secret sharing scheme.
  • “Honest-but-Curious” vs. “Malicious”: Some protocols are only secure against “honest-but-curious” adversaries, who follow the protocol but try to learn from the messages they see. They are not secure against “malicious” adversaries, who may deviate from the protocol by sending malformed or inconsistent messages. Your tests should first determine which adversary model the system claims to defend against, then act as a malicious party and check whether deviations are detected.
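To see why the honest-but-curious model matters, consider what a malicious party can do to plain additive secret sharing (the same toy scheme sketched earlier, an illustration rather than any specific product): by simply lying about its share, it shifts the reconstructed output by an arbitrary offset, and a passively-secure protocol has no mechanism to notice.

```python
import secrets

P = 2**61 - 1

def share(secret, n=3):
    """Additively share `secret` among n parties, mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

honest_shares = share(1_000)

# Party 2 deviates from the protocol: it adds an offset of 500 to its
# share before reconstruction. No other party can tell, because every
# share looks like a uniformly random value either way.
tampered = list(honest_shares)
tampered[2] = (tampered[2] + 500) % P

result = sum(tampered) % P
print(result)  # Output: 1500 -- silently wrong, and nothing flags it
```

Actively-secure protocols close this gap with integrity machinery such as information-theoretic MACs on shares or zero-knowledge proofs of correct behavior, at a significant performance cost; checking whether that machinery is actually enabled is a worthwhile red-team test.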

A Practical Example: Secure Two-Party Summation

To make this concrete, consider a simple additive secret sharing scheme. Alice has a private number A, and Bob has a private number B. They want to compute A + B without revealing their numbers to each other.

# -- Setup --
import secrets

Alice_secret = 100
Bob_secret   = 50

# Real schemes do all arithmetic modulo a public value so that every
# share is uniformly random. A plain bounded integer would leak
# information about the secret's magnitude.
MODULUS = 2**61 - 1

# -- Alice's Actions --
# 1. Alice splits her secret into two shares using a random mask R.
R = secrets.randbelow(MODULUS)
A1 = (Alice_secret - R) % MODULUS
A2 = R
# 2. Alice keeps A1 and sends A2 to Bob.

# -- Bob's Actions --
# 1. Bob receives A2 from Alice.
# 2. Bob computes his partial sum using his secret and Alice's share.
partial_sum_B = (Bob_secret + A2) % MODULUS
# 3. Bob sends his partial sum back to Alice.

# -- Final Reconstruction (by Alice) --
# 1. Alice receives the partial sum from Bob.
# 2. She adds her own share (A1) to get the final result.
final_result = (partial_sum_B + A1) % MODULUS
# final_result = (Bob_secret + A2) + A1
#              = Bob_secret + R + (Alice_secret - R)
#              = Bob_secret + Alice_secret = 150

print(final_result) # Output: 150

In this simplified protocol, Alice never sees Bob_secret directly, only the masked value Bob_secret + A2, and Bob never sees Alice_secret, only the uniformly random share A2. Note, however, the output-inference caveat from earlier: with only two parties, the sum itself is revealing, since Alice can subtract her own input from the result to recover Bob's. The scheme becomes genuinely privacy-preserving with three or more participants, or when the computed function discloses less about any single input.

Key Takeaway for Red Teams

Secure Multi-Party Computation is a powerful defense that dissolves the single point of failure represented by a central data store. However, it introduces a new, complex attack surface: the distributed cryptographic protocol itself. Your focus should shift from attacking data-at-rest to analyzing the protocol’s trust assumptions, simulating participant collusion, and probing the implementation for subtle flaws that could cause it to leak the very secrets it was designed to protect.