27.4.2. Indemnity Agreements

2025.10.06.
AI Security Blog

While a liability disclaimer attempts to prevent a lawsuit from being filed in the first place, an indemnity agreement is a contractual promise to cover the costs if a lawsuit does occur. It is a tool for risk transfer, shifting the financial responsibility for specific potential losses from one party to another.

In the context of AI red teaming, an indemnity agreement, or an indemnity clause within a master service agreement, is a critical legal instrument. It outlines who pays for damages, legal fees, and settlements if your testing activities lead to a claim, particularly from a third party. This agreement formalizes the allocation of risk before any engagement begins, providing clarity and protection for both you (the red team) and your client.

The Mechanics of Indemnification

At its core, indemnity involves two key parties:

  • The Indemnitor: The party that promises to “indemnify” or cover the losses. In most red teaming contracts, this is the red team organization.
  • The Indemnitee: The party that receives the protection. This is typically the client whose AI system is being tested.

The agreement is triggered when a specified event—such as a lawsuit filed by a third party whose data was inadvertently exposed during a test—causes the indemnitee (the client) to suffer a financial loss. The indemnitor (the red team) then steps in to cover those costs as defined in the agreement.
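This trigger-and-cover flow can be sketched as a small illustrative model (not a legal instrument; the `Claim` type and its fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A third-party claim against the client (illustrative model only)."""
    description: str
    loss: float    # financial loss suffered by the indemnitee (client)
    covered: bool  # whether the claim falls within the agreed indemnity scope

def indemnity_payout(claims: list[Claim]) -> float:
    """Total amount the indemnitor (red team) must cover.

    Only claims inside the agreed scope trigger the indemnity;
    out-of-scope losses remain with the client.
    """
    return sum(c.loss for c in claims if c.covered)

claims = [
    Claim("Customer data exposed during the test", 50_000.0, True),
    Claim("Pre-existing flaw exploited by an attacker", 20_000.0, False),
]
print(indemnity_payout(claims))  # 50000.0
```

The point of the model is the asymmetry: the indemnitee suffers the loss, but the agreement routes the cost of covered claims to the indemnitor.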

[Diagram: a Third Party (e.g., a customer) files a claim or lawsuit against the Client (indemnitee); the indemnity is triggered; the AI Red Team (indemnitor) covers the financial loss for the claim.]
Flow of liability transfer in a typical indemnity scenario involving a third-party claim.

Key Components of an AI Red Teaming Indemnity Agreement

A well-drafted indemnity agreement is specific. Vague language creates ambiguity and potential disputes. Look for and carefully negotiate these key elements:

Scope of Indemnification

This is the most critical section. It must clearly define what types of losses are covered. For AI red teaming, this could include:

  • Third-Party Claims: Lawsuits from customers, partners, or other entities harmed by the testing (e.g., data breaches, service disruptions).
  • Regulatory Fines: Penalties from regulators (e.g., under GDPR or CCPA) if testing activities result in a compliance violation.
  • Direct Damages: Costs to repair or restore client systems if your actions cause unintentional damage beyond the agreed scope.
  • Legal Costs: Reasonable attorneys’ fees and court costs incurred by the client in defending against a covered claim.

Triggers and Procedures

The agreement must specify what event activates the indemnity obligation. It should also outline the process: how the client notifies you of a claim, your right to participate in or control the legal defense, and the requirements for cooperation between parties.

Exclusions and Limitations

No indemnity agreement provides a blank check. Common exclusions limit your liability and are essential for managing your risk. These often include losses arising from:

  • Client’s Gross Negligence or Willful Misconduct: You should not be responsible for issues caused by the client’s own reckless actions.
  • Pre-existing Conditions: You are not liable for vulnerabilities or system flaws that existed before your engagement began.
  • Actions Outside the Scope of Work: If the client instructs you to perform an action not covered in the SOW, any resulting damages may be excluded.
  • Indirect or Consequential Damages: Clauses often exclude liability for lost profits, loss of business, or other indirect damages that are difficult to quantify.
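In effect, an exclusion check is a set test: a claim is indemnified only if none of its contributing causes falls into an excluded category. A minimal sketch, with hypothetical exclusion tags:

```python
# Hypothetical exclusion categories drawn from a typical indemnity clause
EXCLUSIONS = {
    "client_gross_negligence",
    "pre_existing_condition",
    "outside_scope_of_work",
    "indirect_damages",
}

def is_indemnified(claim_causes: set[str]) -> bool:
    """A claim is covered only if no contributing cause is excluded."""
    return EXCLUSIONS.isdisjoint(claim_causes)

print(is_indemnified({"data_breach"}))                           # True
print(is_indemnified({"data_breach", "outside_scope_of_work"}))  # False
```

Real clauses are of course judged by courts, not set logic, but the model shows why precise, enumerable exclusion language matters.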

Common Forms of Indemnity Clauses

Indemnity clauses vary in how broadly they assign responsibility. Understanding these forms helps you recognize the level of risk you are being asked to assume.

| Indemnity Form | Description | Red Teaming Applicability |
| --- | --- | --- |
| Broad Form | The indemnitor (red team) is responsible for all losses related to the project, even those caused by the sole negligence of the indemnitee (client). | Extremely high risk. This form is unenforceable in many jurisdictions. You should strongly resist agreeing to it. |
| Intermediate Form | The indemnitor (red team) is responsible for losses, except those arising from the indemnitee’s (client’s) sole negligence or willful misconduct. | Common but requires careful review. This is a frequent starting point for negotiations. Ensure the scope is tightly defined. |
| Limited Form (Comparative Fault) | The indemnitor (red team) is responsible for losses only to the extent they were caused by the indemnitor’s own negligence or breach of contract. | Most favorable for the red team. This form allocates risk based on who was actually at fault, which is the fairest approach. |
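Under the limited (comparative fault) form, a loss is split in proportion to each party's share of fault. A minimal sketch with hypothetical numbers:

```python
def allocate_loss(total_loss: float, red_team_fault: float) -> tuple[float, float]:
    """Split a loss under a comparative-fault clause.

    red_team_fault is the red team's share of fault in [0, 1];
    the remainder stays with the client (or other parties).
    Returns (red_team_share, remaining_share).
    """
    if not 0.0 <= red_team_fault <= 1.0:
        raise ValueError("fault share must be between 0 and 1")
    red_team_share = total_loss * red_team_fault
    return red_team_share, total_loss - red_team_share

# e.g. a $100k loss where the red team is found 25% at fault:
print(allocate_loss(100_000.0, 0.25))  # (25000.0, 75000.0)
```

Contrast this with the broad form, where `red_team_fault` is effectively forced to 1.0 regardless of who caused the loss.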

Practical Strategy for Red Teams

Treat indemnity agreements not as a boilerplate formality but as a core part of your engagement risk management. Your strategy should include:

  1. Always Seek Legal Counsel: You are a security expert, not an attorney. Have all contracts, especially those with indemnity clauses, reviewed by a lawyer familiar with technology services and professional liability.
  2. Negotiate the Scope: Do not passively accept the client’s first draft. The scope of indemnification should mirror the scope of work. If you are only testing one specific model, the indemnity should not cover the client’s entire IT infrastructure.
  3. Align with Your Insurance: Your professional liability (Errors & Omissions) insurance is your financial backstop. Ensure that any indemnity you promise a client is covered by your insurance policy. A “contractual liability” exclusion in your policy could leave you personally exposed.
  4. Push for Mutuality: In some cases, a mutual indemnity clause is appropriate. For example, the client could indemnify you against claims arising from their provision of flawed data or their failure to secure necessary internal permissions for the test.

Ultimately, an indemnity agreement is a powerful tool for building trust. By proactively and fairly allocating risk, you demonstrate professionalism and create a clear framework for handling worst-case scenarios, allowing both parties to focus on the technical objectives of the AI red team engagement.