NIST Releases Its AI Risk Management Framework 1.0


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The National Institute of Standards and Technology (NIST) has been tasked with promoting “U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology.” On January 26, 2023, NIST released its AI Risk Management Framework (AI RMF 1.0) alongside a companion Playbook suggesting ways to use the AI RMF to “incorporate trustworthiness considerations in the design, development, deployment, and use of AI systems.” Together, the framework and playbook are intended to help organizations understand and manage the potential risks and benefits of AI and to ensure that AI systems are developed, deployed, and used in a responsible and trustworthy manner. The framework is designed as a flexible, adaptable tool that can be applied to a wide range of AI systems across industries such as healthcare, finance, and transportation.

NIST describes trustworthy AI as having a set of characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

Valid and reliable: Produces accurate and consistent results. Its performance should be evaluated and validated through ongoing testing and experimentation, with risk management prioritizing the minimization of potential negative impacts.

Safe: Does not cause harm to people or the environment. It should be designed, developed, and deployed responsibly, with clear information provided on responsible use of the system.

Secure and resilient: Maintains confidentiality, integrity, and availability through protection against common security threats such as data poisoning and the exfiltration of models, training data, or other intellectual property through AI system endpoints.

Accountable and transparent: Provides appropriate levels of information to AI actors to allow for transparency and accountability in its decisions and actions.

Explainable and interpretable: Represents how the underlying AI system operates and what its output means in the context of its designed functional purposes. Explainable and interpretable AI systems offer information that helps end users understand their purpose and potential impact.

Privacy-enhanced: Protects the privacy of individuals and organizations in compliance with relevant laws and regulations.

Fair – with harmful bias managed: NIST has identified three major categories of AI bias to be considered and managed: systemic (broad, ever-present societal bias), computational and statistical (typically arising from non-representative samples), and human-cognitive (how people perceive AI system information when making decisions or filling in missing information).

The AI RMF’s core is organized around four functions that help organizations address the risks of AI systems in practice: Govern, Map, Measure, and Manage.

Govern: Establishes policies, procedures, and standards for AI systems, as well as clear roles and responsibilities for key decision-makers, developers, and end users.

Map: Contextualizes and frames risks by identifying the system’s components, data sources, and external dependencies, as well as how the system is used and by whom.

Measure: Evaluates the potential risks and benefits of the AI system by assessing its vulnerabilities and potential social impacts.

Manage: Allocates risk management resources to mitigate identified risks and continuously monitors the system and its environment, establishing processes and procedures to detect and respond to incidents and updating controls as needed.

NIST’s AI Risk Management Framework is voluntary, but it is an important prompt for organizations and teams that design, develop, and deploy AI to think more critically about their responsibilities to the public. Understanding and managing the risks of AI systems will help enhance their trustworthiness and, in turn, cultivate public trust in AI – a critical part of AI adoption and advancement.
