The NIST AI Risk Management Framework, 2023

The National Institute of Standards and Technology (NIST), part of the US Department of Commerce, released the AI Risk Management Framework (AI RMF) in January 2023 as a set of industry-neutral guidelines to help organisations evaluate and manage the risks of deploying and using AI systems. It gives organisations and security professionals practical tools to improve the reliability of AI systems and to promote their responsible design, development, deployment, and use over the long term.

According to the NIST AI RMF, a trustworthy AI system is one that demonstrates:

  • Validity and reliability, where the AI system functions as intended and consistently produces accurate results.

  • Security and resilience, where the AI system is built with strong security controls to guard against malicious attacks, misuse, and unauthorised access, and with the capacity to recover and continue operating after a disruption.

  • Enhanced privacy, where the AI system protects user privacy by putting safeguards in place for sensitive data and ensuring that its use complies with applicable laws and regulations.

  • Transparency and accountability, where the AI system can be audited, responsibility for the system’s actions is clearly assigned, and the system’s workings can be inspected openly.

  • Interpretability and explainability, where the AI system offers clear and concise justifications for the decisions and actions it takes, in order to maintain user confidence and accountability.

  • Fairness, with harmful bias managed, where unfair bias and discrimination are kept out of the AI system’s design and, if found, are eliminated by addressing the system’s architecture.
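The characteristics above are, in effect, an assessment checklist. As a minimal sketch (the class and field names below are illustrative, not part of the framework), an organisation could track which characteristics a system has demonstrated and which gaps remain:

```python
from dataclasses import dataclass, fields

@dataclass
class TrustworthinessChecklist:
    """One flag per NIST AI RMF trustworthiness characteristic (names are illustrative)."""
    valid_and_reliable: bool = False
    secure_and_resilient: bool = False
    privacy_enhanced: bool = False
    accountable_and_transparent: bool = False
    explainable_and_interpretable: bool = False
    fair_with_bias_managed: bool = False

    def gaps(self) -> list:
        """Return the characteristics this system has not yet demonstrated."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a system that has so far demonstrated only validity and privacy
checklist = TrustworthinessChecklist(valid_and_reliable=True, privacy_enhanced=True)
print(checklist.gaps())
```

A structure like this makes the gap analysis explicit: any characteristic still flagged as a gap points to work needed before the system can be considered trustworthy under the framework.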

To ensure that these standards are met, the NIST AI RMF defines four core functions:

  • Govern: This function requires establishing appropriate policies and implementing them, with thorough documentation to improve accountability. The intention is to avoid siloing governance and instead to integrate it into each of the functions that follow.

  • Map: This function calls for identifying and putting in context all the risks connected to an AI system. Mapping risks facilitates informed decision-making, shared engagement, and the proactive identification, evaluation, and treatment of risk.

  • Measure: This function involves using qualitative and quantitative tools, techniques, and methodologies to assess, analyse, benchmark, and monitor AI risk and its related impacts. It calls for documenting system functionality, social impact, and trustworthiness.

  • Manage: This function calls for allocating resources to address the identified and measured risks and to support their mitigation. Mitigation measures should include documenting the organisation’s response to, recovery from, and lessons learned after an incident, which helps in planning to avert such risks altogether.
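The Map, Measure, and Manage functions can be pictured as successive operations on a risk register. The sketch below is a simplified illustration, not an implementation of the framework; all names and the likelihood-times-impact scoring are assumptions chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """A single entry in a hypothetical AI risk register."""
    description: str          # Map: the identified risk, stated in context
    likelihood: float = 0.0   # Measure: estimated probability, 0..1
    impact: float = 0.0       # Measure: estimated severity, 0..1
    response: str = "open"    # Manage: the documented response

    def score(self) -> float:
        # A simple quantitative benchmark: likelihood times impact
        return self.likelihood * self.impact

def manage(register: list, threshold: float) -> list:
    """Manage: prioritise risks by score and flag those above a threshold for mitigation."""
    prioritised = sorted(register, key=Risk.score, reverse=True)
    for risk in prioritised:
        if risk.score() >= threshold:
            risk.response = "mitigate"
    return prioritised

# Map: identify risks; Measure: attach likelihood and impact estimates
register = [
    Risk("training data leaks personal information", likelihood=0.4, impact=0.9),
    Risk("model drift degrades accuracy over time", likelihood=0.7, impact=0.3),
    Risk("prompt injection bypasses content filters", likelihood=0.2, impact=0.2),
]

# Manage: allocate mitigation effort to the highest-scoring risks
for risk in manage(register, threshold=0.2):
    print(f"{risk.score():.2f} {risk.response:>8} {risk.description}")
```

The point of the sketch is the ordering of concerns the framework prescribes: risks are first mapped into a shared register, then measured against a common scale, and only then managed, so that mitigation resources go to the risks the measurement step shows matter most.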
