Employee AI Training Outline (90 minutes)

Session outline covering AI capabilities + limits, common failure modes, practical patterns, and company-specific rules. Adapt to your AUP.

How to use this template. Copy the markdown into your own documentation system. Replace bracketed fields. Remove sections that do not apply. Iterate after circulation.

Employee AI Training Outline (90 Minutes)

This outline supports a 90-minute instructor-led session for general-population employees. Adapt the bracketed fields to match your approved tool list and Acceptable Use Policy. For a self-paced e-learning build, each section maps to one module. Prerequisite: employees should have read the AI Acceptable Use Policy before attending.
Session title: Using AI Responsibly at [Company] Duration: 90 minutes (including a 10-minute break) Audience: All employees who use or may use AI tools for company work Instructor: [Name / role] Last updated: [Date]

Module 1 — What AI Tools Are and Are Not (15 minutes)

Learning objective: Participants can accurately describe what AI language models do and identify two common misconceptions.

Topics

  1. How large language models work at a functional level: next-token prediction, not "understanding." Use the analogy of an extremely well-read autocomplete that has never experienced the real world.
  2. What AI tools are good at: drafting, summarising, translating, brainstorming, explaining, generating starter code.
  3. What AI tools are bad at: current events (knowledge cutoff), precise numerical reasoning, source citation (hallucination risk), legal or medical advice without verification.
  4. Common misconceptions: AI does not "know" things the way humans do; it does not search the internet unless explicitly connected to one; it cannot access your files unless you paste them in.

Discussion prompt

"Has anyone received an AI output that was confidently wrong? What happened?" (3 minutes, open floor)


Module 2 — Common Failure Modes and How to Spot Them (20 minutes)

Learning objective: Participants can identify hallucination, bias, and prompt injection risks in AI-generated output.

Topics

  1. Hallucination: AI generates plausible but false facts, citations, or statistics. Rule: any specific factual claim from an AI must be independently verified before use.
  2. Bias and skew: outputs reflect patterns in training data, which may encode historical biases. Higher risk in any output about people, hiring, or performance.
  3. Prompt injection: malicious instructions embedded in documents or web content can redirect AI behaviour. Relevant when using AI agents or tools that process external input.
  4. Overreliance: treating AI output as authoritative without applying professional judgment. Especially risky in legal, financial, medical, or safety-critical contexts.

Exercise (10 minutes)

Participants receive three sample AI outputs. Working in pairs, they identify which outputs contain hallucinations or other errors using provided reference materials. Debrief as a group.


Break (10 minutes)


Module 3 — Practical Patterns for Safe and Effective Use (25 minutes)

Learning objective: Participants can apply at least three prompt patterns that improve output quality and reduce risk.

Topics

  1. Be specific: vague prompts produce vague outputs. Specify role, format, length, and audience in the prompt.
  2. Provide context, not credentials: share what the AI needs to know about the task, not personal data or credentials it does not need.
  3. Iterate and critique: treat the first output as a draft. Ask the AI to identify weaknesses in its own response.
  4. Verify before you ship: any factual claim, statistic, or external reference must be checked against primary sources.
  5. Keep a human in the loop: consequential decisions — including anything involving customers, employees, or public communications — require a human reviewer before the output is acted upon.

Demonstration (10 minutes)

Instructor shows a live before/after: a weak prompt and its output, then an improved prompt on the same task. Walk through what changed and why it matters.


Module 4 — [Company]-Specific Rules (15 minutes)

Learning objective: Participants can correctly classify a piece of data and determine whether it may be sent to an AI tool.

Topics

  1. Approved AI tools at [Company]: [list or link to approved list]. Why these and not others.
  2. Data classification recap: Public, Internal, Confidential, Restricted — with one example of each at [Company].
  3. The classification-to-AI-tool decision tree: [embed or reference the decision tree from the AUP or data classification policy].
  4. When to ask: if you are unsure whether data can be shared with an AI tool, the answer is "not yet." Contact [Security / IT contact] before proceeding.
  5. Reporting incidents: if you believe you sent restricted or confidential data to an AI tool in error, report immediately to [Security contact]. No blame for good-faith mistakes; reporting early limits damage.

Quick quiz (5 minutes)

Five scenario questions delivered via [polling tool / printed handout]. Debrief answers immediately.


Module 5 — Wrap-Up and Q&A (5 minutes)

  1. Recap: three things to remember from this session (instructor selects based on class discussion).
  2. Resources: [Company] AI Acceptable Use Policy at [link]; approved tool list at [link]; security contact [contact].
  3. Open Q&A.

Instructor Notes

Session outline version: [Date]. Next scheduled review: [Date].