NIST AI Risk Management Framework

The NIST AI RMF is the U.S. government's voluntary framework for managing AI risks across the system lifecycle, organized around four functions: Govern, Map, Measure, and Manage.

What is the NIST AI RMF?

The NIST AI Risk Management Framework (NIST AI 100-1, published January 2023) is a voluntary framework from the U.S. National Institute of Standards and Technology for managing the risks of AI systems across their lifecycle. It is non-binding but is increasingly cited in U.S. federal procurement, executive orders on AI, and bank examination guidance.

The framework is organized around four core functions: Govern (build a culture of risk management and accountability), Map (categorize AI risks in context), Measure (analyze and quantify risks), and Manage (allocate resources to address mapped and measured risks).

The Generative AI Profile

NIST published an AI RMF Generative AI Profile (NIST AI 600-1, July 2024) that focuses the four core functions on risks specific to generative AI, including confabulation (often called hallucination), dangerous content generation, harmful bias, data privacy violations, intellectual property infringement, and obscene or abusive outputs. Enterprise buyers evaluating generative AI vendors increasingly reference the GenAI Profile in their due diligence.

AI RMF for procurement

Because the AI RMF is voluntary, no vendor is "certified" against it the way they can be against ISO 27001 or SOC 2. What you can ask instead: Does the vendor publish how they map their AI systems to the AI RMF functions? Do they have documented governance and human-oversight processes? Have they completed an AI RMF Profile relevant to your use case? The framework supplies the diligence vocabulary; the answers tell you whether the vendor takes AI risk seriously.
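The due-diligence questions above can be kept as a structured checklist, so gaps in a vendor's coverage of the four functions are easy to spot. The sketch below is illustrative only: the data structure, field names, and question-to-function mapping are assumptions for this example, not anything published by NIST.

```python
# Hypothetical due-diligence checklist keyed to the four AI RMF functions.
# The mapping of each question to functions is a judgment call, not a
# NIST-defined correspondence.
AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

DUE_DILIGENCE = [
    {"question": "Does the vendor publish a mapping of their AI systems "
                 "to the AI RMF functions?",
     "functions": ["Map"]},
    {"question": "Are governance and human-oversight processes documented?",
     "functions": ["Govern", "Manage"]},
    {"question": "Has the vendor completed an AI RMF Profile relevant to "
                 "this use case (e.g. the GenAI Profile)?",
     "functions": ["Map", "Measure"]},
]

def coverage(answers):
    """Given {question: bool} vendor answers, return the set of AI RMF
    functions with at least one affirmative answer."""
    covered = set()
    for item in DUE_DILIGENCE:
        if answers.get(item["question"]):
            covered.update(item["functions"])
    return covered

def gaps(answers):
    """Functions with no affirmative evidence from the vendor."""
    return set(AI_RMF_FUNCTIONS) - coverage(answers)
```

A vendor answering "yes" to all three sample questions covers all four functions; a vendor with no published mapping or profile leaves Map and Measure as visible gaps to probe further.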