Algorithmic Impact Assessment
An Algorithmic Impact Assessment is a structured evaluation of an AI system's potential effects on individuals and communities, increasingly required by Canadian, EU, and several US state regulations.
What is an AIA?
An Algorithmic Impact Assessment (AIA) is a structured evaluation that an organisation performs before deploying an AI system in a consequential context. It typically covers: the system's purpose and intended outcomes, the categories of people affected, potential harms (discrimination, privacy intrusion, autonomy loss, economic harm), mitigation measures, monitoring plans, and decision-review mechanisms.
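The fields above can be sketched as a simple internal record an organisation might use to track an AIA. This is purely illustrative; the field names are assumptions for the sketch, not drawn from any regulation or template.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative record of core AIA fields; not a regulatory template."""
    system_purpose: str         # the system's purpose and intended outcomes
    affected_groups: list[str]  # categories of people affected
    potential_harms: list[str]  # e.g. discrimination, privacy intrusion
    mitigations: list[str]      # measures that reduce each identified harm
    monitoring_plan: str        # how the system is watched after deployment
    review_mechanism: str       # how affected people can contest a decision

# Hypothetical example entry
aia = AlgorithmicImpactAssessment(
    system_purpose="Rank loan applications for manual review",
    affected_groups=["loan applicants"],
    potential_harms=["discrimination", "economic harm"],
    mitigations=["demographic-slice evaluation", "human review of declines"],
    monitoring_plan="Quarterly fairness audit on approval rates",
    review_mechanism="Applicant appeal with human re-decision",
)
```

Keeping the assessment in a structured form like this makes it easier to diff across system versions and to feed into monitoring dashboards.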
Where AIAs are required
Canada's Treasury Board Directive on Automated Decision-Making has required AIAs for federal automated decision systems since 2019 (the AIA questionnaire is now in version 2.0). The EU AI Act requires Fundamental Rights Impact Assessments (FRIAs) for high-risk AI systems used by public bodies (Article 27). Several US state and local laws impose narrower variants, including New York City Local Law 144 for automated hiring tools, the Colorado AI Act, and emerging California rules. The NIST AI RMF recommends an AIA-equivalent as part of its Govern function.
What buyers should ask
For AI deployments in employment, credit, insurance, housing, or government services, the buyer is typically the entity responsible for completing the AIA. Ask the vendor for the information you need to complete it: training data documentation, evaluation results across demographic slices, known failure modes, a model card or equivalent, sub-processor disclosure, and incident reporting commitments. Mature AI vendors maintain an AIA-support packet specifically for these requests.
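A buyer's request list like the one above can be checked mechanically against what a vendor actually supplies. The sketch below assumes a hypothetical artifact list and packet format chosen for illustration, not any standard schema.

```python
# Hypothetical list of artifacts a buyer requests from a vendor
# (names are assumptions for this sketch, not a standard).
REQUIRED_VENDOR_ARTIFACTS = [
    "training_data_documentation",
    "demographic_slice_evaluation",
    "known_failure_modes",
    "model_card",
    "subprocessor_disclosure",
    "incident_reporting_commitment",
]

def missing_artifacts(vendor_packet: dict) -> list[str]:
    """Return the requested artifacts the vendor packet does not contain."""
    return [item for item in REQUIRED_VENDOR_ARTIFACTS
            if not vendor_packet.get(item)]

# Example: a vendor packet that supplies only two of the six artifacts
packet = {"model_card": "model_card_v1.2.pdf",
          "known_failure_modes": "failure_modes.pdf"}
gaps = missing_artifacts(packet)
```

Running the check before signing surfaces gaps (here, four of the six artifacts) that can then be written into the contract as deliverables.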