What Is the NIST AI Risk Management Framework?
In response to growing enterprise usage of artificial intelligence (AI) systems – and a corresponding lack of guidance on how to manage their risks – the U.S. National Institute of Standards and Technology (NIST) introduced the AI Risk Management Framework (AI RMF) in January 2023. According to NIST, the goal of the AI RMF is to “offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” The AI RMF is a voluntary framework and can be applied across any company, industry, or geography.
The RMF is divided into two parts. Part 1 includes an overview of risks and characteristics of what NIST calls “trustworthy AI systems.” Part 2 describes four functions that help organizations address the risks of AI systems: Govern, Map, Measure, and Manage. The illustration below reviews the four functions.
The functions in the AI risk management framework.
How Does the NIST AI Risk Management Framework Apply to Third Party Risk Management?
It is important for organizations to apply risk management principles to minimize the potential negative impacts of AI systems, such as hallucinations, data privacy violations, and threats to civil rights. This consideration also extends to the use of third-party AI systems and to third parties’ use of AI systems. Potential risks of third-party misuse of AI include:
- Security vulnerabilities in the AI application itself. Without the proper governance and safeguards in place, your organization could be exposed to system or data compromise.
- Lack of transparency in methodologies or measurements of AI risk. Deficiencies in measurement and reporting could result in underestimating the impact of potential AI risks.
- AI security policies inconsistent with other existing risk management procedures. Inconsistency results in complicated, time-intensive audits and could lead to negative legal or compliance outcomes.
According to NIST, the RMF will help organizations overcome these potential risks.
Key Third-Party Risk Management Considerations in the NIST AI Risk Management Framework
The NIST AI RMF breaks down its four core functions into 19 categories and 72 subcategories that define specific actions and outcomes. NIST offers a handy playbook that further explains the actions.
The summary below reviews the four functions and select categories in the framework and suggests considerations for addressing potential third-party AI risks.
NOTE: This is a summary. For a full examination of the NIST AI Risk Management Framework, download the full framework and engage your organization’s internal audit, legal, IT, security, and vendor management teams.
GOVERN is the foundational function in the RMF; it establishes a culture of risk management, defines processes, and provides structure to the program. Its categories include:
- GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
- GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.
- GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.
- GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.
- GOVERN 5: Processes are in place for robust engagement with relevant AI actors.
- GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.
TPRM considerations: Build AI policies and procedures into your comprehensive third-party risk management (TPRM) program, in line with your broader information security and governance, risk, and compliance frameworks. Seek out experts to collaborate with your team on defining and implementing AI and TPRM processes and solutions; selecting risk assessment questionnaires and frameworks; and optimizing your program to address AI risks throughout the entire third-party lifecycle – from sourcing and due diligence to termination and offboarding – according to your organization’s risk appetite. As part of this process, you should define:
MAP is the function that establishes the context to frame risks related to an AI system. Its categories include:
- MAP 1: Context is established and understood.
- MAP 2: Categorization of the AI system is performed.
- MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.
- MAP 4: Risks and benefits are mapped for all components of the AI system, including third-party software and data.
- MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.
TPRM considerations: Developing a sound risk management process and understanding the context of AI usage begins with profiling and tiering third parties, which involves quantifying inherent risks – in this case, inherent AI risks – for all third parties. Criteria used to calculate inherent risk for third-party classification and categorization include:
From this inherent risk assessment, your team can automatically tier suppliers according to AI risk exposure, set appropriate levels of further diligence, and determine the scope of ongoing assessments. Rule-based tiering logic enables vendor categorization based on a range of data interactions and regulatory considerations.
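As an illustration, rule-based tiering over inherent-risk criteria might look like the sketch below. The criteria names, weights, and tier thresholds are hypothetical – they are not prescribed by NIST or by any particular TPRM product:

```python
# Hypothetical sketch of rule-based inherent-risk tiering for third-party AI
# usage. Criteria, weights, and thresholds are illustrative only.

CRITERIA_WEIGHTS = {
    "handles_sensitive_data": 3,   # vendor's AI processes regulated or personal data
    "customer_facing_output": 2,   # AI output reaches customers directly
    "autonomous_decisions": 3,     # AI acts without human review
    "fourth_party_models": 1,      # vendor relies on externally sourced models
}

def inherent_risk_score(profile: dict) -> int:
    """Sum the weights of all criteria the vendor profile flags as true."""
    return sum(w for name, w in CRITERIA_WEIGHTS.items() if profile.get(name))

def tier(score: int) -> str:
    """Map a score to a diligence tier; higher tiers get deeper assessments."""
    if score >= 6:
        return "Tier 1 (full assessment, continuous monitoring)"
    if score >= 3:
        return "Tier 2 (standard assessment, annual review)"
    return "Tier 3 (baseline questionnaire)"

vendor = {"handles_sensitive_data": True, "autonomous_decisions": True}
score = inherent_risk_score(vendor)
print(score, tier(score))  # prints: 6 Tier 1 (full assessment, continuous monitoring)
```

Because the logic is declarative, adding a new criterion or adjusting a threshold does not require changing the scoring or tiering functions, which keeps the rules auditable.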
MEASURE is the function that analyzes, assesses, benchmarks, and monitors AI risk and related impacts. Its categories include:
- MEASURE 1: Appropriate methods and metrics are identified and applied.
- MEASURE 2: AI systems are evaluated for trustworthy characteristics.
- MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.
- MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.
TPRM considerations: Look for solutions with an extensive library of pre-built templates for third-party risk assessments. Evaluate third-party vendors’ AI practices during onboarding, at contract renewal, on a set cadence (e.g., quarterly or annually), and whenever material changes occur.
Assessments should be managed centrally and backed by workflow, task management, and automated evidence review capabilities so that your team has visibility into third-party risks throughout the relationship lifecycle. A TPRM solution should also include built-in remediation recommendations based on assessment results, ensuring that third parties address risks promptly and satisfactorily while providing the appropriate evidence to auditors. To complement vendor AI evaluations, continuously track and analyze external threats to third parties, including monitoring the Internet and dark web for cyber threats and vulnerabilities. Correlate all monitoring data with assessment results in a unified risk register for each vendor to streamline risk review, reporting, remediation, and response. Monitoring sources typically include:
Finally, continuously measure third-party KPIs and KRIs against your requirements to uncover risk trends, determine third-party risk status, and identify deviations from expected behavior that warrant further investigation.
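One way to picture KPI/KRI tracking is a simple threshold check that flags exceptions for investigation. The metric names and threshold values below are made up for illustration and would be defined by your own requirements:

```python
# Illustrative KRI exception check; metrics and thresholds are hypothetical.

THRESHOLDS = {
    "days_to_remediate_findings": 30,  # max acceptable remediation time
    "open_critical_findings": 0,       # any open critical finding is an exception
    "assessment_overdue_days": 0,      # assessments should never be overdue
}

def exceptions(vendor_metrics: dict) -> list[str]:
    """Return the names of KRIs that exceed their acceptable threshold."""
    return [
        name
        for name, limit in THRESHOLDS.items()
        if vendor_metrics.get(name, 0) > limit
    ]

metrics = {"days_to_remediate_findings": 45, "open_critical_findings": 0}
print(exceptions(metrics))  # prints: ['days_to_remediate_findings']
```

Running a check like this on each monitoring cycle surfaces only the vendors whose behavior deviates from expectations, rather than forcing a review of every metric for every vendor.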
The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis, as defined by the GOVERN function. This includes plans to respond to, recover from, and communicate about incidents or events. Its categories include:
- MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.
- MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.
- MANAGE 3: AI risks and benefits from third-party entities are managed.
- MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.
TPRM considerations: As part of your broader incident management strategy, ensure that your third-party incident response program enables your team to rapidly identify, respond to, report on, and mitigate the impact of third-party AI security incidents.
Key capabilities in a third-party incident response service include:
Armed with these insights, your team can better manage and triage incidents involving third parties; understand the scope and impact of each incident, including what data was involved and whether the third party’s operations were affected; and confirm when remediations are complete.
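The identify, respond, report, and mitigate lifecycle described above can be pictured as a minimal incident record that moves through ordered stages. The field names and stage labels here are hypothetical, not part of the NIST framework or any specific product:

```python
# Hypothetical third-party AI incident record tracking the
# identify -> respond -> report -> mitigate lifecycle.
# Stage labels and fields are illustrative only.

from dataclasses import dataclass

STAGES = ["identified", "responding", "reported", "mitigated"]

@dataclass
class ThirdPartyIncident:
    vendor: str
    description: str
    data_involved: bool = False
    operations_impacted: bool = False
    stage: str = "identified"

    def advance(self) -> str:
        """Move the incident to the next lifecycle stage, stopping at the last."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        return self.stage

incident = ThirdPartyIncident(
    "ExampleVendor", "AI model exposed customer data", data_involved=True
)
incident.advance()
print(incident.stage)  # prints: responding
```

Tracking each incident as a structured record with an explicit stage makes it straightforward to report which third-party incidents remain open and which have reached remediation.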
Next Steps: Align Third-Party AI Controls with Your TPRM Program
Mitratech can help your organization improve not only its own AI governance, but also how it governs third-party AI risks. Specifically, we can help you:
- Establish governing policies, standards, systems and processes to protect data and systems from AI risks as part of your overall TPRM program. (Aligns with category GOVERN 6.)
- Profile and tier third parties, while quantifying inherent risks associated with third-party AI usage to ensure that all risks are mapped. (Aligns with category MAP 4.)
- Conduct comprehensive third-party risk assessments and continuously monitor and measure AI-specific risks in the context of your TPRM program. (Aligns with the MEASURE function.)
- Ensure comprehensive incident response to AI-specific risks from third-party entities. (Aligns with MANAGE 3.)
Leveraging the NIST AI Risk Management Framework in your TPRM program will help your organization establish controls and accountability over third-party AI usage. For more on how Mitratech can help simplify this process, request a demo today.
Editor’s Note: This post was originally published on Prevalent.net in 2023 and updated in April 2025. In October 2024, Mitratech acquired Prevalent, an AI-enabled third-party risk management provider. The content has since been updated to reflect our product offerings, regulatory changes, and compliance requirements.