Employee AI Training Outline (90 Minutes)
This outline supports a 90-minute instructor-led session for general-population employees. Adapt the bracketed fields to match your approved tool list and Acceptable Use Policy. For a self-paced e-learning build, each section maps to one module. Prerequisite: employees should have read the AI Acceptable Use Policy before attending.Session title: Using AI Responsibly at [Company] Duration: 90 minutes (including a 10-minute break) Audience: All employees who use or may use AI tools for company work Instructor: [Name / role] Last updated: [Date]
Module 1 — What AI Tools Are and Are Not (15 minutes)
Learning objective: Participants can accurately describe what AI language models do and identify two common misconceptions.Topics
- How large language models work at a functional level: next-token prediction, not "understanding." Use the analogy of an extremely well-read autocomplete that has never experienced the real world.
- What AI tools are good at: drafting, summarising, translating, brainstorming, explaining, generating starter code.
- What AI tools are bad at: current events (knowledge cutoff), precise numerical reasoning, source citation (hallucination risk), legal or medical advice without verification.
- Common misconceptions: AI does not "know" things the way humans do; it does not search the internet unless explicitly connected to one; it cannot access your files unless you paste them in.
Discussion prompt
"Has anyone received an AI output that was confidently wrong? What happened?" (3 minutes, open floor)
Module 2 — Common Failure Modes and How to Spot Them (20 minutes)
Learning objective: Participants can identify hallucination, bias, and prompt injection risks in AI-generated output.Topics
- Hallucination: AI generates plausible but false facts, citations, or statistics. Rule: any specific factual claim from an AI must be independently verified before use.
- Bias and skew: outputs reflect patterns in training data, which may encode historical biases. Higher risk in any output about people, hiring, or performance.
- Prompt injection: malicious instructions embedded in documents or web content can redirect AI behaviour. Relevant when using AI agents or tools that process external input.
- Overreliance: treating AI output as authoritative without applying professional judgment. Especially risky in legal, financial, medical, or safety-critical contexts.
Exercise (10 minutes)
Participants receive three sample AI outputs. Working in pairs, they identify which outputs contain hallucinations or other errors using provided reference materials. Debrief as a group.
Break (10 minutes)
Module 3 — Practical Patterns for Safe and Effective Use (25 minutes)
Learning objective: Participants can apply at least three prompt patterns that improve output quality and reduce risk.Topics
- Be specific: vague prompts produce vague outputs. Specify role, format, length, and audience in the prompt.
- Provide context, not credentials: share what the AI needs to know about the task, not personal data or credentials it does not need.
- Iterate and critique: treat the first output as a draft. Ask the AI to identify weaknesses in its own response.
- Verify before you ship: any factual claim, statistic, or external reference must be checked against primary sources.
- Keep a human in the loop: consequential decisions — including anything involving customers, employees, or public communications — require a human reviewer before the output is acted upon.
Demonstration (10 minutes)
Instructor shows a live before/after: a weak prompt and its output, then an improved prompt on the same task. Walk through what changed and why it matters.
Module 4 — [Company]-Specific Rules (15 minutes)
Learning objective: Participants can correctly classify a piece of data and determine whether it may be sent to an AI tool.Topics
- Approved AI tools at [Company]: [list or link to approved list]. Why these and not others.
- Data classification recap: Public, Internal, Confidential, Restricted — with one example of each at [Company].
- The classification-to-AI-tool decision tree: [embed or reference the decision tree from the AUP or data classification policy].
- When to ask: if you are unsure whether data can be shared with an AI tool, the answer is "not yet." Contact [Security / IT contact] before proceeding.
- Reporting incidents: if you believe you sent restricted or confidential data to an AI tool in error, report immediately to [Security contact]. No blame for good-faith mistakes; reporting early limits damage.
Quick quiz (5 minutes)
Five scenario questions delivered via [polling tool / printed handout]. Debrief answers immediately.
Module 5 — Wrap-Up and Q&A (5 minutes)
- Recap: three things to remember from this session (instructor selects based on class discussion).
- Resources: [Company] AI Acceptable Use Policy at [link]; approved tool list at [link]; security contact [contact].
- Open Q&A.
Instructor Notes
- Session should be updated any time the Acceptable Use Policy or approved tool list changes.
- For new hires, this session should occur within [X days] of start date.
- Attendance is tracked via [system]. Employees who miss the live session complete the self-paced version at [link] within [X days].
- Feedback form: [link]. Minimum target score: [X]% of respondents rate the session "useful" or "very useful."