The NIST AI Risk Management Framework and Third-Party Risk Management

Leverage this guidance to align your TPRM program with the NIST AI RMF to better govern third-party AI risk at your organization.


What Is the NIST AI Risk Management Framework?

In response to growing enterprise usage of artificial intelligence (AI) systems – and a corresponding lack of guidance on how to manage their risks – the U.S. National Institute of Standards and Technology (NIST) introduced the AI Risk Management Framework (AI RMF) in January 2023. According to NIST, the goal of the AI RMF is to “offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” The AI RMF is a voluntary framework and can be applied across any company, industry, or geography.

The RMF is divided into two parts. Part 1 includes an overview of risks and characteristics of what NIST calls “trustworthy AI systems.” Part 2 describes four functions that help organizations address the risks of AI systems: Govern, Map, Measure, and Manage. The illustration below reviews the four functions.

The functions in the AI risk management framework. 

How Does the NIST AI Risk Management Framework Apply to Third-Party Risk Management?

It is important for organizations to apply risk management principles to minimize the potential negative impacts of AI systems, such as hallucinations, data privacy violations, and threats to civil rights. This consideration also extends to the use of third-party AI systems and third parties’ use of AI systems. Potential risks of third-party AI misuse include:

  • Security vulnerabilities in the AI application itself. Without proper governance and safeguards, your organization could be exposed to system or data compromise.
  • Lack of transparency in AI risk methodologies or metrics. Gaps in measurement and reporting could lead to underestimating the impact of potential AI risks.
  • AI security policies inconsistent with other existing risk management procedures. Inconsistency leads to complicated, time-intensive audits that could produce negative legal or compliance outcomes.

According to NIST, the RMF will help organizations address these potential risks.

Key Third-Party Risk Management Considerations in the NIST AI Risk Management Framework

The NIST AI RMF breaks down its four core functions into 19 categories and 72 subcategories that define specific actions and outcomes. NIST offers a handy playbook that further explains the actions.

The table below reviews the four functions and select categories in the framework and suggests considerations to address potential third-party AI risks.

NOTE: This is a summary table. For a full examination of the NIST AI Risk Management Framework, download the full version and engage your organization’s internal audit, legal, IT, security and vendor management teams.

NIST AI RMF Category | TPRM Considerations
GOVERN is the foundational function in the RMF, which establishes a culture of risk management, defines processes, and provides structure to the program.
GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.

GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.

GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.

GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.

GOVERN 5: Processes are in place for robust engagement with relevant AI actors.

GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

Build AI policies and procedures as part of your comprehensive third-party risk management (TPRM) program in line with your broader information security and governance, risk, and compliance frameworks.

Seek out experts to collaborate with your team on defining and implementing AI and TPRM processes and solutions; selecting risk assessment questionnaires and frameworks; and optimizing your program to address AI risks throughout the entire third-party lifecycle – from sourcing and due diligence to termination and offboarding – according to your organization’s risk appetite.

As part of this process, you should define:

  • Governing policies, standards, systems, and processes to protect data from AI risks
  • Legal and regulatory requirements, ensuring that third parties are assessed accordingly
  • Clear roles and responsibilities (e.g., RACI) for team accountability
  • Risk scoring and thresholds based on your organization’s risk tolerance
  • Assessment and monitoring methods based on third-party criticality, reviewed continuously
  • Third-party AI inventories
  • Fourth-party mapping to understand risk exposure from AI usage across your extended ecosystem
  • Key performance indicators (KPIs) and key risk indicators (KRIs) for internal stakeholders
  • Contractual requirements and right-to-audit clauses
  • Incident response requirements
  • Risk reporting to internal stakeholders
  • Risk mitigation and remediation strategies
MAP is the function that establishes the context to frame risks related to an AI system.
MAP 1: Context is established and understood.

MAP 2: Categorization of the AI system is performed.

MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.

MAP 4: Risks and benefits are mapped for all components of the AI system, including third-party software and data.

MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.

Developing a sound risk management process and understanding the context of AI usage begins with profiling and tiering third parties, and that involves quantifying inherent risks for all third parties – in this case, the inherent AI risks. Criteria used to calculate inherent risk for third-party classification and categorization include:

  • Type of content required to validate controls
  • Criticality to business performance and operations
  • Location(s) and related legal or regulatory considerations
  • Level of reliance on the third party (to avoid concentration risk)
  • Involvement in operational or customer-facing processes
  • Interaction with protected data

From this inherent risk assessment, your team can automatically tier suppliers according to AI risk exposure, set appropriate levels of further diligence, and determine the scope of ongoing assessments.

Rule-based tiering logic enables vendor categorization using a range of data interactions and regulatory considerations.
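As a rough illustration of rule-based tiering, the sketch below scores a vendor profile against the inherent-risk criteria listed above and maps the score to a diligence tier. The criteria fields, weights, and tier cutoffs are hypothetical examples, not values prescribed by the NIST AI RMF or any specific TPRM product:

```python
from dataclasses import dataclass

@dataclass
class VendorProfile:
    # Illustrative criteria drawn from the inherent-risk list above
    business_criticality: int      # 1 (low) to 5 (high)
    handles_protected_data: bool
    regulated_jurisdiction: bool
    concentration_dependency: int  # 1 (low) to 5 (high) reliance on the vendor
    customer_facing: bool

def inherent_risk_score(p: VendorProfile) -> int:
    """Combine weighted criteria into a single inherent-risk score (weights are assumptions)."""
    score = p.business_criticality + p.concentration_dependency
    if p.handles_protected_data:
        score += 5
    if p.regulated_jurisdiction:
        score += 3
    if p.customer_facing:
        score += 2
    return score

def tier(score: int) -> str:
    """Map a score to a diligence tier per a hypothetical risk appetite."""
    if score >= 15:
        return "Tier 1 - enhanced due diligence"
    if score >= 9:
        return "Tier 2 - standard assessment"
    return "Tier 3 - baseline monitoring"
```

For example, a business-critical vendor that processes protected data in a regulated jurisdiction would land in Tier 1 and receive enhanced due diligence, while a low-dependency back-office supplier would fall to baseline monitoring.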

MEASURE is the function that analyzes, assesses, benchmarks, and monitors AI risk and related impacts.
MEASURE 1: Appropriate methods and metrics are identified and applied.

MEASURE 2: AI systems are evaluated for trustworthy characteristics.

MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.

MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.

Look for solutions with an extensive library of pre-built templates for third-party risk assessments. Third-party vendors should be evaluated for their AI practices during onboarding, contract renewal, or any required frequency (e.g., quarterly or annually), depending on material changes.

Assessments should be managed centrally and backed by workflow, task management, and automated evidence review capabilities to ensure your team has visibility into third-party risks throughout the relationship lifecycle.

Notably, a TPRM solution should include built-in remediation recommendations based on risk assessment results to ensure that third parties address risks promptly and satisfactorily while providing the appropriate evidence to auditors.

To complement vendor AI evaluations, continuously track and analyze external threats to third parties. As part of this, monitor the Internet and dark web for cyber threats and vulnerabilities. All monitoring data should be correlated with assessment results and centralized in a unified risk register for each vendor, streamlining risk review, reporting, remediation, and response initiatives.

Monitoring sources typically include:

  • More than 1,500 criminal forums; thousands of onion pages; 80+ special-access dark web forums; 65+ threat feeds; and 50+ paste sites of leaked credentials, plus several security communities, code repositories, and vulnerability databases covering 550,000 companies
  • Databases with 10+ years of data breach history for thousands of companies worldwide
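The unified risk register described above can be sketched as a simple correlation step: findings from assessments and events from external monitoring are grouped under each vendor so reviewers see one consolidated view. All field names here are illustrative assumptions, not any product’s actual data model:

```python
from collections import defaultdict

def build_risk_register(assessment_findings, monitoring_events):
    """Group assessment and monitoring records under each vendor into one register.

    Each record is a dict that includes at least a "vendor" key; a "source"
    field is added so reviewers can tell where each item came from.
    """
    register = defaultdict(list)
    for finding in assessment_findings:
        register[finding["vendor"]].append({**finding, "source": "assessment"})
    for event in monitoring_events:
        register[event["vendor"]].append({**event, "source": "monitoring"})
    return dict(register)
```

With both sources in one place per vendor, review, reporting, and remediation can work from a single list instead of reconciling separate assessment and threat-intelligence silos.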

Finally, continuously measure third-party KPIs and KRIs against your requirements to help your team uncover risk trends, determine third-party risk status, and identify exceptions to common behavior that could warrant further investigation.
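Measuring KRIs against requirements can be as simple as comparing current readings to thresholds and surfacing the exceptions for investigation. The metric names and threshold values below are hypothetical examples of such requirements:

```python
# Illustrative KRI thresholds; real values should reflect your risk appetite
KRI_THRESHOLDS = {
    "days_since_last_assessment": 365,
    "open_critical_findings": 0,
    "mean_remediation_days": 30,
}

def kri_exceptions(vendor_kris: dict) -> list:
    """Return the names of KRIs whose current reading exceeds its threshold."""
    return [
        name for name, value in vendor_kris.items()
        if name in KRI_THRESHOLDS and value > KRI_THRESHOLDS[name]
    ]
```

Run periodically across the vendor portfolio, a check like this turns raw KPI/KRI data into a shortlist of third parties whose behavior deviates from the norm and may warrant further investigation.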

The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the GOVERN function. This includes plans to respond to, recover from, and communicate about incidents or events.
MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.

MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.

MANAGE 3: AI risks and benefits from third-party entities are managed.

MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.

As part of your broader incident management strategy, ensure that your third-party incident response program enables your team to identify rapidly, respond to, report on, and mitigate the impact of third-party vendor AI security incidents.

Key capabilities of a third-party incident response service include:

  • Continuously updated and customizable event and incident management assessments
  • Real-time questionnaire completion tracking
  • Defined risk owners with automated reminders to keep investigations on schedule
  • Proactive vendor reporting
  • Consolidated views of risk ratings, counts, scores, and flagged responses for each vendor
  • Workflow rules to trigger automated action plans to act on risks according to their potential business impact
  • Built-in report templates for internal and external stakeholders
  • Built-in remediation recommendations to reduce risk
  • Data and relationship mapping to identify relationships between your organization and third, fourth, and Nth parties to visualize information paths and reveal at-risk data

Armed with these insights, your team can better manage and triage third-party entities; understand the scope and impact of an incident; determine what data was involved and whether the third party’s operations were affected; and confirm when remediations are complete.

 

Next Steps: Align Third-Party AI Controls with Your TPRM Program

Mitratech can help your organization improve not only its own AI governance, but also how it governs third-party AI risks. Specifically, we can help you:

  • Establish governing policies, standards, systems and processes to protect data and systems from AI risks as part of your overall TPRM program. (Aligns with category GOVERN 6.)
  • Profile and tier third parties, while quantifying inherent risks associated with third-party AI usage to ensure that all risks are mapped. (Aligns with category MAP 4.)
  • Conduct comprehensive third-party risk assessments and continuously monitor and measure AI-specific risks in the context of your TPRM program. (Aligns with the MEASURE category.)
  • Ensure comprehensive incident response to AI-specific risks from third-party entities. (Aligns with MANAGE 3.)

Leveraging the NIST AI Risk Management Framework in your TPRM program will help your organization establish controls and accountability over third-party AI usage. For more on how Mitratech can help simplify this process, request a demo today.

 


Editor’s Note: This post was originally published on Prevalent.net in 2023 and updated in April 2025. In October 2024, Mitratech acquired the AI-enabled third-party risk management provider Prevalent. The content has since been updated to reflect our product offerings, regulatory changes, and compliance requirements.