Prompt engineering
Prompt engineering is the practice of crafting inputs to language models to elicit reliable, accurate, and useful outputs. Sits between user-facing UX and the model itself.
What it covers
Prompt engineering covers the design of system prompts (the persistent instructions the model receives), user prompt templates, few-shot examples, output formatting constraints, retrieval grounding, tool-use definitions, and fallback handling for ambiguous queries. Originally treated as a folk art, it's now a recognised engineering discipline with patterns (chain-of-thought, self-consistency, tree-of-thought, ReAct) that are measurable and reproducible.
Why it matters in procurement
The same underlying foundation model can perform dramatically differently across two AI vendors depending on how they prompt it. When evaluating an AI vendor, you're partly evaluating their prompt engineering — even if the foundation model is the same as a competitor's. Mature vendors version their prompts, evaluate them against held-out test sets, and have a path to regression-test prompt changes before deploying. Ask whether they treat prompts as production code.
Risk angle
Prompts are a partial defense against prompt injection (see /learn/prompt-injection) but not a complete one. They also represent intellectual property — leaked system prompts have caused several minor product crises. Ask vendors how they protect their prompts (encryption, access logging, leakage detection), and whether the prompts are subject to the same change management as other code.