AI Incident Response Runbook

Detection, containment, eradication, recovery, and lessons-learned for AI-specific incidents (prompt injection, data exfiltration, vendor breach, model misbehaviour).

How to use this template. Copy the markdown into your own documentation system. Replace bracketed fields. Remove sections that do not apply. Iterate after circulation.

This runbook covers AI-specific incidents: data exfiltration through an AI interface, prompt injection attacks, vendor breaches affecting AI services, and model misbehaviour with material consequences. Integrate these steps into your existing incident response process rather than running a parallel workflow; general incident types should continue through that process. Bracketed fields are required customisation points.
Owner: [Security / CISO name]
Applies to: All AI tools in use at [Company]
Escalation contact: [24/7 security contact or on-call rotation]
Last reviewed: [Date]

Incident Types Covered

| Type | Description | Example |
|------|-------------|---------|
| Data exfiltration | Confidential or restricted data sent to an AI tool in violation of policy | Employee pastes a customer contract into a public AI interface |
| Prompt injection | Malicious input redirects AI behaviour to disclose data or perform unintended actions | Attacker embeds instructions in an uploaded document processed by an AI agent |
| Vendor breach | An approved AI vendor suffers a data breach affecting [Company] data | Vendor notifies [Company] that training pipelines were accessed without authorisation |
| Model misbehaviour | AI produces output that causes or risks material harm | AI-generated advice causes a customer to take a harmful action |
| Unauthorised tool use | Employee uses an unapproved AI tool with company data | Employee signs up for an AI service with their company email and pastes source code |


Phase 1 — Detection

Sources of detection

Initial triage (within [2] hours of detection)

  1. Log the incident in [ITSM / security platform] with: detection timestamp, reporter, AI tool involved, data type involved (if known), estimated scope.
  2. Assign an Incident Lead from [Security team].
  3. Classify severity using the matrix below.
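The triage fields listed above can be captured in a minimal structure. This is an illustrative sketch, not your ITSM schema; field names and defaults are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Minimal triage record mirroring the fields in step 1."""
    detected_at: datetime          # detection timestamp
    reporter: str                  # who reported the incident
    ai_tool: str                   # AI tool involved
    data_type: str = "unknown"     # data type involved, if known
    estimated_scope: str = "unknown"
    incident_lead: str = ""        # assigned in step 2
    severity: str = "unclassified" # set in step 3 via the matrix below

# Example: opening a record at detection time.
record = IncidentRecord(
    detected_at=datetime.now(timezone.utc),
    reporter="jdoe",
    ai_tool="[AI tool name]",
)
```

Capturing these fields up front keeps the later phases (containment, notification, lessons learned) anchored to one authoritative record.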

Severity matrix

| Severity | Criteria | Target response time |
|----------|----------|----------------------|
| Critical | Restricted data (PHI, PCI, MNPI) involved; ongoing exfiltration; vendor breach confirmed | Immediate — escalate to [CISO] within 30 min |
| High | Confidential data involved; scope uncertain; regulatory notification likely | Within 2 hours |
| Medium | Internal data involved; scope limited; no regulatory trigger | Within 24 hours |
| Low | No sensitive data confirmed; policy violation only | Within 5 business days |
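The matrix above can be expressed as a simple classification function so that triage decisions are consistent. A minimal sketch, assuming a four-tier data classification (`restricted`, `confidential`, `internal`, `public`) that may differ from your own:

```python
def classify_severity(data_class: str,
                      ongoing_exfiltration: bool = False,
                      vendor_breach_confirmed: bool = False,
                      regulatory_likely: bool = False) -> str:
    """Map incident facts to a severity per the matrix above.

    Conditions are checked from most to least severe, so the
    highest applicable tier wins.
    """
    if data_class == "restricted" or ongoing_exfiltration or vendor_breach_confirmed:
        return "Critical"
    if data_class == "confidential" or regulatory_likely:
        return "High"
    if data_class == "internal":
        return "Medium"
    return "Low"
```

For example, a limited-scope incident involving internal data classifies as Medium, but the same incident becomes Critical if exfiltration is still ongoing.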


Phase 2 — Containment

Immediate actions (Critical and High incidents)

  1. Suspend access: revoke the affected user's access to the AI tool. If the tool is a shared service, assess whether broader suspension is warranted.
  2. Preserve evidence: capture screenshots, export session logs, preserve any vendor audit logs before they expire. Do not delete or modify potential evidence.
  3. Isolate affected accounts: if an AI agent has credentials or API keys, rotate them immediately.
  4. Notify vendor: contact the AI vendor's security team at [vendor security email / support portal]. Request confirmation of what was received and whether it was retained or used.
  5. Block exfiltration path: if the incident involved a specific integration or API endpoint, disable it pending investigation.

Vendor breach containment

  1. Request a detailed incident report from the vendor within [48] hours.
  2. Review the DPA to confirm vendor's breach notification obligations and timelines.
  3. Assess whether [Company] data in the vendor's environment can be deleted or quarantined.
  4. Preserve all vendor communications for legal and regulatory purposes.

Phase 3 — Eradication

  1. Identify the root cause: misconfigured tool, policy gap, user error, vendor vulnerability.
  2. Remove any persistent access created during the incident, whether by an attacker or inadvertently (tokens, API keys, session cookies).
  3. Confirm with the vendor that affected data has been purged from their systems (get written confirmation).
  4. Close the specific vulnerability or process gap that enabled the incident.

Phase 4 — Recovery

  1. Restore affected user access only after root cause is confirmed and controls are in place.
  2. Update the AI tool inventory with incident notes and revised risk rating.
  3. Regenerate and re-validate any AI-generated outputs that may have been contaminated or rendered unreliable by the incident.
  4. Communicate to affected business unit leaders: what happened, what was done, and what changed.

Phase 5 — Notification and Reporting

Internal

Regulatory

Customer notification

Follow [Company]'s customer breach notification policy. Draft notification language must be reviewed by [Legal] before sending.
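Notification deadlines are typically measured from the detection (or awareness) timestamp, so it helps to compute them mechanically once an incident is logged. The windows below are placeholder assumptions (e.g. a 72-hour regulator window in the style of GDPR Article 33); confirm the actual obligations with [Legal]:

```python
from datetime import datetime, timedelta, timezone

# ASSUMED windows for illustration only; replace with the timelines
# [Legal] confirms for your jurisdictions and contracts.
NOTIFICATION_WINDOWS = {
    "regulator": timedelta(hours=72),   # e.g. GDPR-style supervisory authority window
    "customers": timedelta(days=30),    # per customer contracts / breach policy
}

def notification_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Return the latest permissible notification time per party."""
    return {party: detected_at + window
            for party, window in NOTIFICATION_WINDOWS.items()}
```

Surfacing these deadlines in the incident record gives the Incident Lead a concrete countdown rather than a vague "notify promptly" obligation.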


Phase 6 — Lessons Learned

Conduct a post-incident review within [10] business days of closure for High and Critical incidents.

Review agenda:
  1. Timeline reconstruction: what happened, and when did we detect it?
  2. Root cause analysis: why did this happen?
  3. Control gaps: what was missing or failed?
  4. Remediation tracking: are all action items assigned with owners and due dates?
  5. Policy updates: does the Acceptable Use Policy or Vendor Evaluation Checklist need revision?
  6. Training updates: is there a training gap that contributed to the incident?

Document findings in the incident record in [ITSM system]. Share a summary with the Security leadership team and relevant business owners.


Contact Directory

| Role | Name | Contact |
|------|------|---------|
| Security on-call | [Name] | [Phone / PagerDuty] |
| CISO | [Name] | [Email / Phone] |
| Legal / Privacy | [Name] | [Email] |
| HR (for policy violations) | [Name] | [Email] |
| [Vendor A] security | — | [Vendor security URL] |
| [Vendor B] security | — | [Vendor security URL] |

Runbook version: [Date]. Next scheduled review: [Date].