AI Security Policies: Questions to Ask Third-Party Vendors
Most organizations today are exploring the use of emerging AI-powered technologies to improve their workflows and processes, analyze and summarize data, and generate content faster than ever before. You and your coworkers may already be using AI tools and frameworks for activities such as conducting research, generating content, and solving coding challenges.
However, it’s important to temper your organization’s enthusiasm for AI with appropriate guidance and restrictions. A wholesale ban on AI technologies could undermine business needs and goals, so how can you be vigilant about protecting sensitive data and ensuring that generated content is accurate? The solution starts with developing an AI security policy.
A strong AI security policy will empower your organization to reap the benefits of AI, while prescribing the risk assessments and security controls necessary to protect sensitive data and ensure content accuracy. At the same time, it’s essential to ensure that your vendors, suppliers and other third parties have security controls that can sufficiently protect your critical systems and data.
In this post, I examine AI security policies, review how they apply to third parties, and share key questions for assessing your vendors’ and suppliers’ AI security controls.
What Is an AI Security Policy?
An AI security policy is a set of guidelines for evaluating and using artificial intelligence tools and frameworks in a way that maximizes insight, control, and data protection. It also outlines AI vulnerabilities and presents measures to mitigate their potential risks.
With a well-designed AI security policy, an organization can safeguard the security and integrity of its AI systems and any data handled by artificial intelligence. AI policies typically include provisions for:
- Protecting sensitive data through encryption and access controls
- Ensuring authorized user access via authentication and authorization mechanisms
- Maintaining network security through firewalls, intrusion detection systems, and other tools
An AI security policy is typically an extension of an organization’s general information security policy and associated controls, with some shared concepts regarding data protection, privacy and accuracy.
What Are the Components of an AI Security Policy?
Tool Evaluation Policies
These policies specify workflows and procedures for security teams who need to evaluate AI tools for use at their organizations. They outline required levels of data protection for different types of services and indicate procedures for reviewing how sensitive data may be used as AI training inputs and/or appear in AI-generated content.
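To make the evaluation requirement concrete, here is a minimal, hypothetical sketch of how a tool-evaluation rule could be expressed in code. The classification levels and tool tiers are illustrative placeholders, not part of any standard or of Prevalent’s methodology.

```python
# A minimal sketch of a tool-evaluation rule: map hypothetical data
# classification levels to the AI tool tiers cleared to receive them.
# The levels, tiers, and policy table below are illustrative only.

APPROVED_USAGE = {
    "public":       {"unvetted", "approved", "enterprise"},
    "internal":     {"approved", "enterprise"},
    "confidential": {"enterprise"},
    "restricted":   set(),  # never submitted to external AI tools
}

def is_submission_allowed(classification: str, tool_tier: str) -> bool:
    """Return True if data of this classification may be sent to a tool of this tier."""
    return tool_tier in APPROVED_USAGE.get(classification, set())

# Example: confidential data may only go to an enterprise-vetted tool.
assert is_submission_allowed("confidential", "enterprise")
assert not is_submission_allowed("confidential", "unvetted")
```

A table like this gives security teams a single place to update approvals as new tools are vetted, rather than deciding case by case.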
Source Code Policies
Source code policies emphasize secure development practices (e.g., adhering to coding standards, conducting regular code reviews) and specify monitoring and logging mechanisms for tracking system behavior and detecting anomalies. These policies often require the organization to track the use of source code as an input to AI tools, as well as the use of any code generated by AI.
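As an illustration of the tracking requirement, below is a minimal sketch of a logging wrapper around calls to an AI coding assistant. The `send_to_ai_assistant` callable is a hypothetical stand-in for whatever client your tool actually exposes; only hashes are logged so the audit trail itself does not retain source code.

```python
import hashlib
import logging
from datetime import datetime, timezone

# Minimal audit sketch: record what source code is sent to an AI tool and what
# the tool returns, storing hashes rather than the code itself.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_code_usage")

def tracked_ai_request(prompt: str, send_to_ai_assistant) -> str:
    """Forward a prompt to an AI coding tool and log hashed input/output for audit."""
    prompt_hash = hashlib.sha256(prompt.encode()).hexdigest()[:12]
    log.info("AI input  %s at %s", prompt_hash, datetime.now(timezone.utc).isoformat())

    generated = send_to_ai_assistant(prompt)

    output_hash = hashlib.sha256(generated.encode()).hexdigest()[:12]
    log.info("AI output %s (from input %s)", output_hash, prompt_hash)
    return generated

# Usage with a placeholder client that returns a stub response:
code = tracked_ai_request("Write a function that parses a CSV row",
                          lambda p: "def parse_row(row): return row.split(',')")
```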
Incident Response Policies
Incident response policies outline protocols for handling security breaches and emphasize compliance with industry standards and regulations calling for such protocols.
Data Retention and Privacy Policies
Data retention policies ensure that input data uploaded to AI tools and services is deleted within an acceptable timeframe. Privacy considerations must also be evaluated, as some regions have temporarily banned the use of generative AI tools due to concerns around the collection and use of personal data.
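For the portion of retention your organization controls directly, a minimal cleanup job might look like the following sketch. The cache directory name and 30-day window are assumptions; retention enforced by the AI vendor itself must still be verified contractually.

```python
import time
from pathlib import Path

# Minimal retention sketch: delete locally staged AI inputs older than a
# policy-defined window. Directory and window are illustrative assumptions.

RETENTION_DAYS = 30
UPLOAD_CACHE = Path("ai_upload_cache")  # hypothetical staging directory

def purge_expired_inputs(cache_dir: Path = UPLOAD_CACHE,
                         retention_days: int = RETENTION_DAYS) -> int:
    """Remove staged input files whose modification time exceeds the retention window."""
    if not cache_dir.is_dir():
        return 0
    cutoff = time.time() - retention_days * 86400
    removed = 0
    for path in cache_dir.glob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed
```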
Ethics Policies
Ethical considerations and awareness training are included in AI security policies to address biases, ensure accountability, and foster a culture of security. For example, they may require users of generative AI to review and edit generated content for accuracy, bias, and offensive material.
Acknowledgement of AI Hallucination and Similar Risks
AI security policies should acknowledge that AI tools have been known to produce incorrect, biased, or offensive results. Of particular concern are “AI hallucinations,” which occur when generative AI tools create content that is unexpected, untrue, or not backed up by evidence and real-world data. AI tools also tend to have a limited knowledge of recent real-world events, which can lead to further inaccuracies and omissions in generated content.
How Do AI Security Policies Apply to Third Parties?
Your AI security policy should include standards for evaluating, mitigating, and monitoring the risks for all AI solutions that process or generate data for your organization – including those provided and/or used by your third-party vendors, suppliers, and service providers. It’s critical to ensure that third-party tools and services protect sensitive data, sanitize inputs to remove confidential information, and follow other required security controls.
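As an example of the input-sanitization expectation, here is a minimal, illustrative sketch that redacts a few common confidential patterns before text is submitted to a third-party AI service. The patterns shown are placeholders for your organization’s own sensitive-data definitions.

```python
import re

# Minimal sanitization sketch: redact common confidential patterns from text
# before it is submitted to a third-party AI service. Patterns are illustrative.

REDACTION_PATTERNS = {
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitize_for_ai(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(sanitize_for_ai("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```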
There are three primary ways to leverage AI security policies in your third-party risk management (TPRM) program: pre-contract due diligence, vendor contracting, and vendor assessment.
Pre-Contract Due Diligence
Your AI security policy should guide the due diligence process when evaluating potential vendors and suppliers. By referencing the policy, your organization can systematically assess a vendor’s security controls, data protection mechanisms, and access protocols. This minimizes potential vulnerabilities by ensuring that external parties meet the same rigorous security criteria as those applied to your internal systems.
Vendor Contracting
Contractual agreements with vendors and suppliers can be informed by AI security policy provisions. By incorporating policy guidelines into agreements, your organization sets clear expectations regarding security requirements, data handling practices, and incident response procedures. This alignment ensures that vendor-provided AI solutions or services uphold the organization’s security standards, contributing to a more secure and resilient AI ecosystem.
Vendor Assessment
When used as part of vendor assessments, your AI security policy can act as a reference point for gauging vendors’ security practices against your organization’s defined standards. This also ensures consistency in setting security expectations across your vendor ecosystem.
Overall, an AI security policy acts as a comprehensive framework for evaluating and aligning vendor and supplier security practices with your organization’s strategic AI objectives.
AI Security Controls Assessment: 16 Questions to Ask Your Third Parties
Hidden threats can lurk within third-party AI providers, posing risks that might not be immediately evident. These threats encompass security vulnerabilities, potential data breaches, covert malicious code, data misuse, and biases in algorithms – each of which could compromise your organization’s data, reputation, and operations.
To counter these risks, your organization must conduct diligent evaluations of its third-party providers. This due diligence should assess security measures, data protection practices, and algorithmic transparency. Continuous monitoring, robust contractual agreements, and contingency planning are also critical to revealing and mitigating hidden AI threats.
In December 2021, Microsoft released an AI security risk assessment framework to help organizations audit, track and improve the security of their AI systems. Prevalent built on this framework to create a 16-question survey that you can use to assess the AI security controls employed by your vendors and suppliers.
Use this third-party AI security assessment to:
- Gather information about the state of AI security across your vendor ecosystem.
- Perform a gap analysis and build a roadmap for working with vendors to remediate risks (a simple scoring sketch follows this list).
- Conduct repeated, periodic assessments to track remediation progress over time.
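To illustrate the gap-analysis step, here is a minimal scoring sketch. The sample questions and the yes/partial/no scale are placeholders, not Prevalent’s actual 16-question survey.

```python
# Minimal gap-analysis sketch: score a vendor's questionnaire answers and
# list the questions that need remediation follow-up. Questions and scale
# are illustrative placeholders.

ANSWER_SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def gap_analysis(responses: dict[str, str]) -> tuple[float, list[str]]:
    """Return (coverage score 0-1, list of questions answered 'no' or 'partial')."""
    total = sum(ANSWER_SCORES[answer] for answer in responses.values())
    gaps = [q for q, answer in responses.items() if ANSWER_SCORES[answer] < 1.0]
    return total / len(responses), gaps

score, gaps = gap_analysis({
    "Q1: Is input data encrypted in transit and at rest?": "yes",
    "Q2: Are AI training inputs sanitized of customer data?": "partial",
    "Q3: Is there an incident response plan for AI systems?": "no",
})
print(f"Coverage: {score:.0%}; follow up on: {gaps}")
```

Tracking a score like this across periodic assessments gives a simple way to measure remediation progress over time.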
Next Steps for Managing Third-Party AI Risks
Use the above questionnaire as a starting point to uncover risks in the AI systems employed by your vendors and suppliers. By proactively identifying and managing third-party AI risks, you can protect your organization’s systems and data while avoiding potential issues related to fairness, transparency and accountability. Third-party AI risk assessments not only safeguard your operations, but also aid in ethical decision-making, vendor selection, and long-term partner relations.
For more information about how Prevalent can assist your organization in assessing vendor and supplier AI security in the context of overall third-party risk, request a demo today.
Editor’s Note: This post was originally published on Prevalent.net. In October 2024, Mitratech acquired Prevalent, the AI-enabled third-party risk management provider. The content has since been updated to reflect our product offerings, regulatory changes, and compliance requirements.