Board-Level AI Risk Reporting Template
This template provides a structured quarterly slide narrative for reporting AI-related risk to the board of directors or an audit/risk committee. It is written as a prose narrative with clear section headings that map to slide titles: format it as slides for the actual presentation, and keep this document as the supporting narrative. Bracketed fields require factual input from the AI program lead before each presentation.
Reporting period: [Q and Year]
Prepared by: [CISO / CTO / AI Program Lead]
Reviewed by: [General Counsel / CFO / CEO — as applicable]
Presented to: [Board / Audit Committee / Risk Committee]
Presentation date: [Date]
Slide 1 — AI Program at a Glance
[Company] operates [N] AI tools in production, used by approximately [N] employees across [N] departments. These tools process data classified as [list classification levels present in AI workflows]. The program is overseen by [role], with quarterly reporting to this committee and monthly reviews at the [executive / steering committee] level.
Program maturity: [Stage 1-5 from TrustAtlas framework, or equivalent internal scale with a one-line description.]
Net change since last quarter: [+N tools added, N tools retired, N vendors renewed, N new use cases approved.]
Slide 2 — AI Exposure Summary
This slide maps [Company]'s AI use to the categories of risk the board should understand. Risk ratings reflect the inherent risk of the use case and the maturity of controls in place.
| Risk dimension | Current rating | Trend | Notes |
|----------------|----------------|-------|-------|
| Data privacy and exfiltration | [Low / Med / High] | [Up / Down / Stable] | [One line] |
| Vendor concentration | [Low / Med / High] | [Up / Down / Stable] | [One line] |
| Regulatory compliance | [Low / Med / High] | [Up / Down / Stable] | [One line] |
| Model reliability and accuracy | [Low / Med / High] | [Up / Down / Stable] | [One line] |
| IP and output ownership | [Low / Med / High] | [Up / Down / Stable] | [One line] |
| Third-party and supply chain | [Low / Med / High] | [Up / Down / Stable] | [One line] |
Overall AI risk rating this quarter: [Low / Moderate / Elevated / High]
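However the overall rating is chosen, a stated and repeatable aggregation rule keeps quarter-over-quarter comparisons defensible. The sketch below shows one possible rule in Python; the Low/Med/High mapping, the worst-dimension aggregation, and the trend-based escalation are illustrative assumptions, not prescriptions of this template.

```python
# Illustrative aggregation rule (an assumption, not prescribed by this
# template): map each dimension's Low/Med/High rating onto the
# four-level overall scale, take the worst, and escalate one level
# when a majority of dimensions are trending up.

OVERALL = ["Low", "Moderate", "Elevated", "High"]
DIMENSION_TO_OVERALL = {"Low": 0, "Med": 1, "High": 3}

def overall_rating(ratings, trends):
    """ratings: per-dimension 'Low'/'Med'/'High';
    trends: per-dimension 'Up'/'Down'/'Stable'."""
    worst = max(DIMENSION_TO_OVERALL[r] for r in ratings)
    # Escalate one level if most dimensions are worsening.
    if sum(t == "Up" for t in trends) > len(trends) / 2:
        worst = min(worst + 1, len(OVERALL) - 1)
    return OVERALL[worst]

# One High dimension dominates even when trends are stable.
print(overall_rating(["Low", "Med", "High", "Low", "Med", "Low"],
                     ["Stable"] * 6))  # -> High
```

Whichever rule is used, record it alongside the ratings so the committee can see why the headline figure moved.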
Slide 3 — Incidents and Near-Misses
Incidents this quarter: [N] (Critical: [N], High: [N], Medium: [N], Low: [N])
[If no material incidents: "No incidents above Medium severity were recorded this quarter. [N] Low-severity policy violations were identified through [DLP / self-reporting / audit log review]. All were remediated within SLA."]
A sketch of how these figures can be tallied from the incident register follows the list below.
[If material incidents occurred, provide for each:
- Incident ID and date
- Type (data exfiltration / vendor breach / model misbehavior / unauthorized tool use)
- Scope and data classification involved
- Regulatory notification made (Yes / No / Pending)
- Status (Closed / Open — expected closure [date])
- Key lesson and control change resulting from the incident]
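The headline counts and the SLA check above can be produced mechanically from the incident register. A minimal sketch, assuming the register exports as a list of records; the field names, incident IDs, and the 14-day remediation SLA are illustrative assumptions:

```python
from datetime import date

# Hypothetical incident register rows; field names and the 14-day
# remediation SLA are illustrative assumptions.
SLA_DAYS = 14
incidents = [
    {"id": "AI-001", "severity": "Low",
     "detected": date(2025, 1, 6), "closed": date(2025, 1, 9)},
    {"id": "AI-002", "severity": "Medium",
     "detected": date(2025, 2, 3), "closed": date(2025, 2, 20)},
]

# Severity tally for the headline line of this slide.
counts = {s: 0 for s in ("Critical", "High", "Medium", "Low")}
for inc in incidents:
    counts[inc["severity"]] += 1

# Incidents remediated outside the SLA window.
breaches = [inc["id"] for inc in incidents
            if (inc["closed"] - inc["detected"]).days > SLA_DAYS]

print(counts)    # {'Critical': 0, 'High': 0, 'Medium': 1, 'Low': 1}
print(breaches)  # ['AI-002'] (closed 17 days after detection)
```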
Slide 4 — Vendor Portfolio Review
[Company]'s approved AI vendor list currently includes [N] vendors. Key observations this quarter:
- Renewed contracts: [Vendor A] renewed through [date]. Updated DPA includes [key change].
- New approvals: [Vendor B] completed evaluation and was approved for [use case] on [date].
- Flagged vendors: [Vendor C] is under enhanced monitoring following [incident / adverse news]. Renewal decision expected [date].
- Retired vendors: [Vendor D] was retired on [date]. Data deletion confirmed in writing.
Slide 5 — Regulatory Horizon
This slide summarizes material regulatory developments that may affect [Company]'s AI program.
| Regulation | Jurisdiction | Effective / Expected | [Company] impact | Action required |
|------------|-------------|----------------------|------------------|-----------------|
| EU AI Act | EU | Phased 2024-2027 | [Systems in scope] | [One line] |
| [State AI law] | [State] | [Date] | [Impact] | [Action] |
| [Sector-specific rule] | [Jurisdiction] | [Date] | [Impact] | [Action] |
| [Other] | — | — | — | — |
Legal counsel assessment: [One paragraph from Legal on regulatory risk and [Company]'s current posture, updated each quarter.]
Slide 6 — Program KPIs
| KPI | Target | This quarter | Prior quarter | Trend |
|-----|--------|-------------|---------------|-------|
| % of AI tools with current vendor evaluation | 100% | [X]% | [X]% | — |
| % of employees who completed AI training | [X]% | [X]% | [X]% | — |
| Mean time to detect AI incident (days) | [X] | [X] | [X] | — |
| Mean time to close AI incident (days) | [X] | [X] | [X] | — |
| Vendors with current SOC 2 Type II | 100% | [X]% | [X]% | — |
| Policy exceptions open >30 days | 0 | [N] | [N] | — |
| AI tool inventory accuracy (last audit) | 100% | [X]% | [X]% | — |
KPI notes: [Any context required for significant variances.]
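Most of these KPIs are simple ratios over the underlying registers. A minimal sketch of two of them; the register contents, tool names, and vendor names are hypothetical placeholders:

```python
# AI tool inventory accuracy: share of tools observed in the
# environment (e.g. via SSO or expense logs) that appear in the
# approved register.
registered = {"ToolA", "ToolB", "ToolC", "ToolD"}
discovered = {"ToolA", "ToolB", "ToolC", "ToolE"}
accuracy = len(registered & discovered) / len(discovered)
print(f"Inventory accuracy: {accuracy:.0%}")               # 75%
print(f"Unregistered: {sorted(discovered - registered)}")  # ['ToolE']

# Share of vendors with a current SOC 2 Type II report.
soc2_current = {"VendorA": True, "VendorB": True, "VendorC": False}
pct = sum(soc2_current.values()) / len(soc2_current)
print(f"Vendors with current SOC 2 Type II: {pct:.0%}")    # 67%
```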
Slide 7 — Recommended Board Actions
[Select and populate whichever are relevant this quarter; remove items not requiring board action.]
- Approve: [specific policy, budget, or program change requiring board approval].
- Note: [material incident, regulatory development, or vendor change requiring board awareness but not a vote].
- Direct: [request for management to provide additional information, analysis, or a remediation plan by a specific date].
Appendix — Glossary
- AI tool inventory: the register of all AI tools in production or approved for use at [Company].
- DPA: Data Processing Agreement between [Company] and an AI vendor.
- Model misbehavior: an AI system producing output that causes or risks material harm, reputational damage, or regulatory exposure.
- Zero Data Retention: a vendor configuration in which submitted data is not logged or stored after the session ends.