The National Institute of Standards and Technology (NIST) released the first version of its long-awaited AI Risk Management Framework (AI RMF 1.0) on January 26, 2023. The voluntary framework builds on comments from hundreds of stakeholders and is intended to help organizations build trustworthiness and fairness considerations into the AI development lifecycle. Beyond providing guidance for developers, the framework is likely to inform how regulators approach artificial intelligence-related investigations, as they may treat it as effectively setting baseline standards for addressing risks in AI systems.
Purpose and Background of AI RMF 1.0
The AI RMF 1.0 aims to maximize AI’s many societal benefits while helping organizations minimize AI-specific risks. The Department of Commerce’s goal is to “enhance AI trustworthiness while managing risks based on our democratic values” and the framework is intended to “accelerate AI innovation and growth while advancing, rather than restricting or damaging, civil rights, civil liberties and equity.”1 NIST encourages organizations to manage risks through a socio-technical lens, accounting for how societal dynamics and human behavior influence AI systems at all stages of the development lifecycle.
The AI RMF 1.0 provides actionable, voluntary guidance for organizations to address, document, and manage risk in developing and deploying AI systems. NIST intends for the framework to be integrated into existing software development and deployment best practices that include cybersecurity, data privacy, system integrity, and environmental risks.
The keystone of the framework is trustworthiness, which NIST treats as a balance of several characteristics. According to NIST, trustworthy AI systems should be: (1) valid and reliable, (2) safe, (3) secure and resilient, (4) explainable and interpretable, (5) privacy-enhanced, (6) fair, with harmful bias managed, and (7) accountable and transparent.
AI RMF 1.0 Core
The AI RMF outlines four Core functions: govern, map, measure, and manage. The Core functions guide an organization's risk management throughout the lifecycle of any AI system. NIST recommends that companies start with the Govern function and integrate the other elements through an iterative process. The functions are divided into categories and subcategories, which outline specific actions and outcomes that enable the creation and management of trustworthy AI systems. To assist organizations with implementing these Core functions, NIST released a companion AI RMF Playbook with further detail on the categories and subcategories set forth in the Core.
The “Govern” function is central to effective risk management and informs the other three elements – map, measure, and manage. Strong governance is “a continual and intrinsic requirement” to create a culture of risk management. Organizations should (1) implement policies, practices, and procedures, (2) develop accountability structures, (3) prioritize workforce diversity, equity, inclusion, and accessibility, (4) cultivate a risk-based organizational culture, (5) create robust systems to integrate feedback from relevant AI actors, and (6) implement policies, processes, and procedures to address third-party risks.
The “Map” function enables AI actors to understand the interdependencies among the many elements of a system and the attendant risks created throughout its lifecycle; risk cannot be managed if it is not known. Organizations should understand and weigh the risks and benefits of AI systems, and maintain processes to assess those risks, benefits, and potential impacts during the design, development, and deployment of a system.
The “Measure” function suggests a variety of qualitative, quantitative, and mixed method tools that organizations may use to test their AI systems. AI systems should be tested prior to deployment and throughout their operation, and NIST encourages independent review to reduce bias and conflicts of interest. Testing and measuring a technology can allow an organization to make informed decisions about which tradeoffs of trustworthiness characteristics are appropriate.
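As a purely illustrative sketch of the kind of quantitative testing the Measure function contemplates, the snippet below computes a common fairness metric (demographic parity difference) over a classifier's outputs. The metric, function names, and data are this note's own assumptions, not terminology or code drawn from the AI RMF itself.

```python
# Hypothetical example: one quantitative fairness measurement for a
# binary classifier's outputs. A prediction of 1 means "approved".

def selection_rate(predictions, groups, group):
    """Fraction of positive predictions among members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rates across all groups.

    A value near 0 suggests the groups are treated similarly;
    larger values flag a disparity worth investigating further.
    """
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Illustrative data: group "a" is approved 3/4 of the time, group "b" 1/4.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # prints 0.5
```

In practice an organization would track several such metrics over time, on pre-deployment test data and on live outputs, which is one concrete way to implement NIST's recommendation to test systems both before deployment and throughout operation.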
The “Manage” function should be followed once risks are mapped and measured. Once defined, an organization can allocate resources and prioritize responses to these risks. Strategies may also be developed to maximize the benefits of the AI system.
How the AI RMF 1.0 Is Likely To Influence Future Regulatory Actions
While the AI RMF 1.0 is not intended to create a checklist for compliance, it is nonetheless likely that regulators like the FTC will look to the framework for guidance in enforcement actions against AI companies. For example, NIST’s Cybersecurity Framework – which similarly provides a guide for companies to evaluate their cybersecurity capabilities and establish processes for risk assessment and mitigation – is generally consistent with the approach the FTC has taken in its data security enforcement.2 Past guidance from FTC staff about ensuring fairness in AI and automated decision-making systems likewise reinforces the need to (1) map and evaluate system design at the outset by looking at data sets, determining gaps, and evaluating the potential impacts of a model before it is put to use, (2) measure algorithmic outputs for discriminatory outcomes, and (3) continue to manage the system as it evolves through independent audits and increased transparency.3 By using NIST’s framework as a guide for AI systems design, development, and oversight, organizations may be able to improve trustworthiness and resiliency while building a strong compliance posture should regulators decide to investigate.
In addition to the AI RMF 1.0, NIST released the following additional resources to help companies navigate the framework:
- NIST AI RMF Playbook
- Roadmap for the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)
- Crosswalks to the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)
- Perspectives about the NIST Artificial Intelligence Risk Management Framework
- NIST AI Risk Management Framework (AI RMF 1.0) Launch
NIST is accepting comments on AI RMF 1.0 through February 27, 2023, and will release an updated version of the Framework in Spring 2023.
1 During NIST’s AI Risk Management Framework Launch Event, Deputy Secretary of Commerce Don Graves remarked on the administration’s aims in launching the AI RMF. See NIST AI Risk Management Framework (AI RMF 1.0) Launch, NIST (Jan. 26, 2023), https://www.nist.gov/news-events/events/2023/01/nist-ai-risk-management-framework-ai-rmf-10-launch.
2 Andrea Arias, FTC Business Blog, “The NIST Cybersecurity Framework and the FTC,” (Aug. 31, 2016), https://www.ftc.gov/business-guidance/blog/2016/08/nist-cybersecurity-framework-and-ftc.
3 Elisa Jillson, FTC Business Blog, “Aiming for truth, fairness, and equity in your company’s use of AI,” (Apr. 19, 2021), https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
ABOUT BAKER BOTTS L.L.P.
Baker Botts is an international law firm whose lawyers practice throughout a network of offices around the globe. Based on our experience and knowledge of our clients' industries, we are recognized as a leading firm in the energy, technology and life sciences sectors. Since 1840, we have provided creative and effective legal solutions for our clients while demonstrating an unrelenting commitment to excellence. For more information, please visit bakerbotts.com.