NIST Issues Artificial Intelligence Risk Management Framework (AI RMF 1.0)

Client Alert

On January 26, 2023, the National Institute of Standards and Technology (NIST) issued the Artificial Intelligence Risk Management Framework (AI RMF 1.0), which provides guidance for using artificial intelligence, or AI, in a trustworthy manner. Although compliance is voluntary, we expect the Risk Management Framework to help existing and future users of AI think through and manage the risks associated with its use. The Risk Management Framework sets forth principles for managing risks related to validity and reliability, safety, security and resiliency, transparency and accountability, explainability and interpretability, privacy, and fairness and bias. We encourage companies using or considering using AI to review the Risk Management Framework carefully. Further, companies contemplating AI-related transactions should consider incorporating at least some of the Framework's concepts into those arrangements. Below, we discuss each of these risk management principles in turn and then summarize the remainder of the Risk Management Framework.

(i) Validity and Reliability. Validity means the ability to confirm, through objective evidence, that an AI system has achieved its intended goals. Reliability means the ability of an AI system to perform as required, without failure, for a given period of time under given conditions. A valid and reliable system embraces accuracy and robustness. Accuracy measures how close an AI system's observations or estimates are to the true values, or to the values accepted as true, and accuracy claims should always be paired with clearly defined, realistic test sets and detailed measurement methodologies. Robustness measures a system's ability to maintain its level of performance under changing circumstances, including unexpected circumstances in which the system should attempt to minimize potential harm to people. Measurements of validity, reliability, accuracy and robustness contribute to AI trustworthiness and should take into account that “certain types of failure can cause greater harm.” The Risk Management Framework recommends prioritizing the minimization of potential negative impacts and allowing for human intervention when an AI system cannot detect or correct its own errors.
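
To make these measurements concrete, the following minimal Python sketch computes accuracy against an accepted-true test set and probes robustness under a simple input perturbation. The model interface, data shapes and noise level are our own illustrative assumptions, not part of the Framework.

```python
# Minimal sketch of validity measurements: accuracy on a held-out test set
# and a crude robustness probe under Gaussian input noise. The `model`
# interface and `noise_scale` are illustrative assumptions.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of predictions that match the accepted-true labels."""
    return float(np.mean(y_true == y_pred))

def robustness_under_noise(model, X: np.ndarray, y: np.ndarray,
                           noise_scale: float = 0.1) -> float:
    """Accuracy when inputs carry Gaussian noise -- one simple proxy for
    performance 'under changing circumstances'."""
    rng = np.random.default_rng(0)
    X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
    return accuracy(y, model.predict(X_noisy))
```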

(ii) Safety. Safe operation of AI protects human life, health, property and the environment. What safe operation requires varies with the context and the severity of the potential harm: a potential risk of serious injury or death calls for the utmost attention and rigorous risk management processes. Planning and designing for safety early in the AI lifecycle can minimize the risks associated with dangerous situations. Operational safety may be improved through:

  • responsible design, development and deployment practices;
  • clear information on responsible use of the system;
  • responsible decision-making; and
  • explanation and documentation of risks based on evidence.

Other measures may include rigorous simulation, in-domain testing, real-time monitoring and the ability to shut a system down, modify its output or deploy human intervention. Companies should also consult sector- and application-specific guidelines and standards to ensure AI safety.
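
As a rough illustration of real-time monitoring with a shutdown path and human intervention, consider the sketch below. The confidence threshold and the model's predict_with_confidence interface are hypothetical choices of ours, not prescribed by the Framework.

```python
# Illustrative operational-safety wrapper: monitor a model's confidence,
# allow an operator shutdown, and escalate low-confidence cases to a human.
# The threshold and model interface are assumptions for illustration.
class SafetyWrapper:
    def __init__(self, model, confidence_threshold: float = 0.8):
        self.model = model
        self.threshold = confidence_threshold
        self.enabled = True  # operators can disable the system at any time

    def shutdown(self) -> None:
        """Hard stop: no further predictions are served."""
        self.enabled = False

    def predict(self, x):
        if not self.enabled:
            raise RuntimeError("System shut down by operator")
        # Assumed interface: returns (label, confidence in [0, 1]).
        label, confidence = self.model.predict_with_confidence(x)
        if confidence < self.threshold:
            return self.escalate_to_human(x)  # deploy human intervention
        return label

    def escalate_to_human(self, x):
        # Placeholder: route the input to a human review queue.
        raise NotImplementedError("Human review queue not configured")
```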

(iii) Security and Resiliency. Security means the ability to preserve an AI system’s confidentiality, integrity and availability in the face of unauthorized access or use, often through protocols to prevent, protect against and respond to attacks. A resilient system can withstand unexpected circumstances or changes in its environment or use and can return to normal function afterward. Companies building secure and resilient AI systems should pay particular attention to common attacks such as data poisoning and the exfiltration of models, training data or other intellectual property through system endpoints. The Risk Management Framework also points to the NIST Cybersecurity Framework for further guidance relevant to security and resiliency.
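
By way of example, one common endpoint safeguard against model exfiltration is throttling per-client query volume so that large-scale extraction becomes impractical. The sketch below assumes a hypothetical client-ID scheme and limits of our own choosing; it is not a defense the Framework itself specifies.

```python
# Hedged sketch of a per-client query rate limiter at a model endpoint,
# one mitigation against model-extraction attempts. Limits are illustrative.
import time
from collections import defaultdict, deque

class QueryRateLimiter:
    def __init__(self, max_queries: int = 100, window_seconds: float = 60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id: str) -> bool:
        """Return True if this client may query now, False otherwise."""
        now = time.monotonic()
        q = self.history[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False  # deny: possible scraping or extraction attempt
        q.append(now)
        return True
```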

(iv) Transparency and Accountability. Transparency means that information about an AI system and its outputs is available to the individuals who interact with it, regardless of whether they are aware that they are interacting with an AI system. The appropriate level of information may depend on the stage of the AI lifecycle and on the role and knowledge of the user. Transparency improves accountability because a user or customer can understand the reasons for an output and, in appropriate circumstances, object to it, thereby promoting the trustworthiness of the AI system. The Risk Management Framework recognizes that transparency does not by itself make a system accurate, private or fair, but that error correction is easier in a more transparent system. It also recognizes a potential tension between transparency and the protection of trade secrets. Accountability considers the roles of AI actors and may vary across social and legal contexts. The Risk Management Framework recommends that AI developers and deployers proportionally and proactively adjust their transparency and accountability practices when severe consequences, such as those affecting life or liberty, are at stake. It also suggests that a company maintain an organizational approach and sound governance structure suited to the company’s needs and resources. Finally, the Risk Management Framework recognizes that training data for an AI system may be subject to copyright and that AI system designers should follow applicable intellectual property laws.

(v) Explainability and Interpretability. An explainable and interpretable AI system helps end users understand the system’s outputs and thus supports trust in the system. Explainability refers to the ability to describe the mechanisms by which an AI system operates; a more explainable system can be more easily debugged and monitored. Interpretability refers to the ability to contextualize an AI system’s output in light of its designed functional purposes. Compared with transparency, which answers “what happened” in an AI system, explainability answers “how” a decision was made, and interpretability answers “why” the decision was made and what it means to the user. To reduce the risk arising from a lack of explainability, the Risk Management Framework recommends that companies tailor descriptions of an AI system’s functionality to the user’s role, knowledge and skill level; to reduce the risk arising from a lack of interpretability, it recommends clearly communicating the AI system’s decision-making process.
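
For illustration, one simple model-agnostic way to approach the “how” question is permutation importance, which scores each input feature by how much accuracy drops when that feature is shuffled. The model interface below is a hypothetical assumption, and this is only one of many explanation techniques.

```python
# Minimal sketch of a model-agnostic explanation: permutation importance.
# A larger score means the model leans more heavily on that feature.
import numpy as np

def permutation_importance(model, X: np.ndarray, y: np.ndarray,
                           n_repeats: int = 5) -> np.ndarray:
    """Mean drop in accuracy when each feature column is shuffled."""
    baseline = np.mean(model.predict(X) == y)
    importances = np.zeros(X.shape[1])
    rng = np.random.default_rng(0)
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # shuffle one feature in place
            drops.append(baseline - np.mean(model.predict(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances
```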

(vi) Privacy. Privacy refers to the norms and practices that safeguard human autonomy, identity and dignity in the context of AI systems. Appropriate practices may include complete anonymity, de-identification, limited observation and consent-based disclosure. The Risk Management Framework further encourages consideration of privacy-enhancing technologies and appropriate data-minimization methods. While recognizing that privacy values should generally guide AI design, development and deployment, the Risk Management Framework acknowledges potential trade-offs between privacy and other risk management principles such as accuracy and fairness.
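
As one illustrative sketch of de-identification and data minimization, the code below pseudonymizes a direct identifier with a keyed hash and drops fields not needed for the task. The field names and key handling are hypothetical; a real deployment would use a managed secrets store and a considered re-identification risk analysis.

```python
# Illustrative de-identification and data minimization. Field names and the
# key are assumptions; manage real keys in a secrets store, not in code.
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # assumption: stored and rotated securely in practice

def pseudonymize(value: str) -> str:
    """Stable keyed hash so records can be linked without exposing identity."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(record: dict, needed_fields: set) -> dict:
    """Keep only the fields required for the AI task; pseudonymize the ID."""
    out = {k: v for k, v in record.items() if k in needed_fields}
    if "user_id" in record:
        out["user_id"] = pseudonymize(record["user_id"])
    return out
```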

(vii) Fairness and Bias. Fairness in AI concerns equity and equality, addressing issues such as harmful bias and discrimination. Fairness is a multifaceted standard that varies across contexts and applications. The Risk Management Framework generally recommends that companies identify the differences among applicable fairness standards and understand who could be harmed or disadvantaged by their AI practices. At the same time, bias is not confined to demographic groups or to how representative the data is. The Risk Management Framework identifies three major categories of bias:

1) systemic biases, which may appear in AI datasets, in organizational practices across the AI lifecycle or in societal uses of AI systems;

2) computational and statistical biases, which may appear in AI datasets or algorithmic processes and often stem from systematic errors such as non-representative samples; and

3) human-cognitive biases, which relate to human perception of AI systems or of their purposes and functions; this last category can present at any stage of the AI lifecycle, from design and implementation to operation and maintenance.

These biases may arise without prejudicial or discriminatory intent, but companies using an AI system should carefully consider the potential for prejudicial or discriminatory intent as well as for prejudicial or discriminatory output. The bottom line is that bias and discrimination can be deeply ingrained in an automated system, and if they are not properly addressed and managed, an AI system can amplify harms to individuals, organizations and society. Although beyond the scope of the Risk Management Framework, important legal regimes may apply to problematic bias and discrimination, and users of AI systems should be aware of and manage those obligations.
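
As one concrete example of a quantitative bias check, the sketch below computes the demographic parity difference, i.e., the gap in favorable-outcome rates across groups. The metric is our illustrative choice, not one mandated by the Framework, and a nonzero gap is not by itself proof of unlawful discrimination; appropriate metrics are context-specific.

```python
# Hedged sketch of one bias metric: demographic parity difference, the gap
# between the highest and lowest favorable-prediction rates across groups.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    """Max minus min positive-prediction rate across groups."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Example: favorable (1) / unfavorable (0) predictions across two groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```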

Overall, the Risk Management Framework principles described above help companies reduce the probability and magnitude of negative consequences associated with AI.

The remainder of the Risk Management Framework discusses audience identification, risk understanding and risk management methods. We conclude this client alert with a brief overview of the risk management methods set forth in the Risk Management Framework. A company carries out risk management through the Framework's four core functions: Govern, Map, Measure and Manage. Each function is further broken down into categories and subcategories that describe actions and outcomes to guide a company as it manages its use of an AI system. The core functions inform and interact with one another to guide companies toward trustworthy AI practices, and developers and deployers of AI systems are encouraged to think through these practices carefully with an eye toward responsible design, development, deployment and use of AI.

At the same time, NIST published a companion AI Risk Management Framework Playbook to help companies implement the Risk Management Framework. The Risk Management Framework Playbook is an online resource that will be further updated and is open for feedback or comments until February 27, 2023.

While we have summarized the key principles of the NIST Risk Management Framework, we, of course, encourage AI developers and deployers to review the Risk Management Framework more thoroughly. We intend to continue to provide updates regarding AI risk management principles, including as may be promulgated by NIST.
