How to Use AI in Third-Party Risk Management
Description
As organizations rely on an increasingly complex network of vendors and suppliers, risks from third parties are also becoming more complicated. However, emerging artificial intelligence (AI)-related technologies offer new opportunities to streamline third-party risk management (TPRM) processes and keep pace with the constantly evolving risk landscape.
In this on-demand webinar, Michael Rasmussen, The GRC Pundit & Analyst at GRC 20/20 Research, dissects the top five ways you can leverage AI in your TPRM program.
Join Michael as he:
- Explores the impact of AI on TPRM processes
- Provides strategies to identify and mitigate risks with AI-driven analytics
- Examines how to streamline your existing program and free up valuable time
- Reveals how TPRM teams can use AI to address regulatory mandates
- Demonstrates how AI technology can ensure long-term TPRM sustainability
AI technologies offer unparalleled opportunities to manage a rapidly changing third-party risk landscape. Register for this webinar to learn how AI can benefit you.
Speakers
Michael Rasmussen
The GRC Pundit & Analyst at GRC 20/20 Research
Transcript
Ashley: I can’t forget about some introductions. My name is Ashley. I work in business development here at Prevalent. And we are joined by a very special guest, analyst at GRC 20/20 Research, Michael Rasmussen. Hey, Michael. How’s it going?
Michael: Really good.
Ashley: Excellent. And our very own VP of product marketing, Scott Lane. How’s it going, Scott?
Scott: Hey, Ashley. Doing great today. Hope you’re doing well, too.
Ashley: Thank you. Just a little bit of housekeeping: this webinar is being recorded, and we will be sending out the recording along with the presentation slides shortly after the webinar. You’re all currently muted, but we do encourage participation, so please put any questions you have in the Q&A box and we’ll go over them at the end of the webinar. Today, Michael will be discussing the top five ways to leverage AI in your TPRM program. So Michael, I’ll go ahead and pitch things over to you.
Michael: Sure. Well, I’m Michael Rasmussen. I’m an analyst. I’ve got 30 years of total experience, 23 of those as an analyst. I spent seven years at Forrester Research, where I received their top analyst award fairly regularly, and at the end of 2007 I left Forrester to go independent. So for the last 16, going on 17, years I’ve been an independent analyst competing against Gartner and Forrester. In that context, I define my job as research: I research the challenges companies face in the context of governance, risk management, and compliance, and how they go about solving those challenges with strategy, process, and particularly technology. Now, technology is a hot topic, and that’s the topic of today: artificial intelligence. How do we use AI in third-party risk management? These are critical things to consider, contemplate, and address in today’s very challenging environments, particularly in the context of third-party risk. I’ve had a lot of focus in the third-party risk management space. I’ve sat on one Fortune 100 company’s social accountability advisory board dealing with supply chain code of conduct and international labor standards, like child labor, forced labor, working hours, and wage and hour, across 20,000 facilities in their global supply chain. I’ve worked on numerous RFPs. In fact, on one particular RFP they didn’t listen to me. They chose a vendor that I said would not work for them, based on my competitor’s recommendations, and it failed, and they fell on their face.
It’s one of those things: I interact a lot on how to properly select third-party risk technology and apply it to your organization. And again, in the context of today, how do you do that from an artificial intelligence standpoint? How does AI apply to third-party risk, but also, what are the risks of AI within third parties themselves? Some interesting questions to contemplate. But as we move forward, let’s start off by defining what I mean by third-party risk management, because we could talk about supply chain risk, vendor risk, third-party risk; we can even talk about third-party governance, or what I like to call third-party GRC, which to me is probably the best term for what we’re trying to achieve. But for the most part, the term that we most often use is third-party risk management. Third-party risk management is a capability that enables an organization to reliably achieve objectives. Think about it. You don’t just wake up one day and say, “I think we need a new vendor relationship. Who should I pick?” No, there’s a purpose for every relationship. There are specific objectives in every relationship, and it’s important to govern those relationships so that we reliably achieve the objectives for the very purpose that the relationship was founded for. But in that context, we have to address uncertainty and risk. That’s risk management. In fact, ISO 31000, the international standard on risk management, states that risk is the effect of uncertainty on what? On objectives: that governance piece. But then we also have to act with integrity within that relationship: the compliance requirements, the legal exposure, but even our ethics and values when it comes to things like ESG, environmental, social, and governance. So third-party risk management is a capability that enables an organization to reliably achieve objectives, address uncertainty, and act with integrity in and across your third-party relationships. Now, this is challenging.
It’s challenging when you’re doing this in documents, spreadsheets, and emails, but it’s even challenging when you’re doing it within third-party risk applications like those of our host, Prevalent, because there’s so much coming at us and we have limited staff and resources. There’s so much content to keep up on, and so much information and so many documents to process. That’s the role where artificial intelligence extends these capabilities of governance, risk, and compliance of third-party relationships, to make them more efficient, effective, resilient, and agile within the business. Today’s modern organization is built on trust and integrity. And it’s not just the trust and integrity of the business itself, but of the extended enterprise. The modern organization does not consist of brick-and-mortar walls and traditional employees. The modern organization is defined by that extended web of suppliers, vendors, outsourcers, service providers, contractors, consultants, brokers, agents, dealers, partners, intermediaries, and more. And their risk issues are your risk issues. Their compliance issues are your compliance issues. We have to establish extensive and continuous, ongoing due diligence across these relationships. Now, historically, this has been done through a lot of manual processes, or automated workflows and tasks that kick off assessment questionnaires on a periodic basis. But more and more, we’re seeing the need for continuous, ongoing due diligence as well. And there’s a variety of different third-party risk intelligence sources out there, like politically exposed persons, watch lists, sanctions lists, negative news, reputation and brand lists, financial viability ratings, ESG ratings, security scorecards, and more. That’s a lot of different data sources to go out against, particularly when you’re operating across multiple jurisdictions in your extended enterprise.
Michael: And that’s another role of artificial intelligence: to harvest all that rich third-party risk intelligence content, sift through it, and present what’s meaningful to you, not just on a periodic basis but on a continuous, ongoing basis, to help establish trust and integrity throughout the extended enterprise of these third-party relationships. But as I mentioned, we face what I call the inevitability of failure. Too many third-party risk processes are built on documents, spreadsheets, and emails. Oh my! That’s the inevitability of failure. You end up managing and chasing documents, spreadsheets, and emails instead of actually managing third-party risk. I can think of one midsized bank. I worked on their RFP. They did an internal study of their resources and found that 80% of their staff time was actually spent managing and chasing documents and spreadsheets and not actually managing risk. Think about that. You send out that email with an assessment questionnaire in some type of Word document or spreadsheet, and it comes back halfway filled out. You’ve got to send it back, and now they’ve answered a couple more questions, but some still weren’t filled out, and it goes back and forth, back and forth. Without technology, I find that the average time to onboard a vendor or supplier (average, because some take a lot more, some take less) is 20 hours of work when you’re using manual processes with documents, spreadsheets, and emails. When you leverage technology like our host Prevalent, that goes down to about five hours to onboard a supplier or vendor, because you’re spending less time chasing things. Now, add in artificial intelligence to automate and streamline that process, and you get even greater economies of scale and efficiencies.
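The continuous screening against multiple intelligence sources described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the feed names, vendor names, and data structures are invented for the example and do not reflect any specific product’s API; a real implementation would query live data providers on a schedule.

```python
# Hypothetical sketch: screening a vendor against multiple third-party
# risk intelligence feeds. All names and data here are illustrative.
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    vendor: str
    hits: list = field(default_factory=list)  # (feed, detail) pairs

    @property
    def requires_review(self) -> bool:
        # Any hit from any feed flags the vendor for human review.
        return len(self.hits) > 0

# Stand-in feeds; a real program would call live intelligence providers.
SANCTIONS_LIST = {"Acme Imports", "Globex Trading"}
NEGATIVE_NEWS = {"Globex Trading": ["regulatory fine reported"]}

def screen_vendor(name: str) -> ScreeningResult:
    """Check one vendor against each feed and collect any hits."""
    result = ScreeningResult(vendor=name)
    if name in SANCTIONS_LIST:
        result.hits.append(("sanctions", "appears on a sanctions list"))
    for headline in NEGATIVE_NEWS.get(name, []):
        result.hits.append(("negative_news", headline))
    return result

result = screen_vendor("Globex Trading")  # hits on both feeds
```

Running this kind of check continuously, rather than only at periodic assessment time, is the pattern the talk is describing: the same screening function gets invoked on every feed refresh, and only vendors with hits surface for analyst attention.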
But right now we have too many documents and manual processes, and they vary across the organization as well. A lot of third-party risk implementations remind me of the Winchester Mystery House in San Jose, California. This is a sprawling mansion that was built in the 1800s. It cost $5.5 million to build then; calculate inflation, and that’s one extremely expensive house today. It took 38 years to build and had 147 different builders, but it had no design, no blueprint, no architect. So at the end of the day, it doesn’t make sense. It’s got doors that open onto walls or 20-foot drops, staircases that go up or down to nowhere, skylights that are in floors instead of ceilings. This is a lot of third-party risk management programs in organizations. Over the years, you’ve had many different builders of third-party risk. IT security has gone its direction. Business continuity and resiliency has gone its direction. Procurement has gone its direction. Compliance and conduct risk has gone its direction. And it goes on and on. One department is using one solution, another department is using a different solution, another is using documents, spreadsheets, and emails, another is using other spreadsheets or survey tools on the internet, and it’s chaos, and nobody sees the full spectrum of risk. We need not only great technology like our host Prevalent that builds in these processes and automates them, but technology that can extend them with artificial intelligence to make them, again, more efficient, more effective, more resilient, and more agile. Think about it. The physicist Fritjof Capra stated, “The more we study the major problems of our time, the more we come to realize they cannot be understood in isolation. They are systemic problems, which means that they are interconnected and interdependent.” The physicist was talking about biological ecosystems at the time of this quote.
But he might as well have been talking about third-party risk in the modern organization today. The more we study the major risks in our suppliers, the more we come to realize that they cannot be understood in isolation. They are systemic risks, which means that they’re interconnected and interdependent. We need to see this spectrum of risks, from physical security risks and intellectual property risks, to IT security and cyber risks, to financial viability, to ESG, to politically exposed persons and watch lists and sanctions lists. But guess what? That’s a lot for any third-party risk department to keep on top of, so we leverage tools like Prevalent to extend that, automate that, and achieve greater efficiencies. And sometimes we’re still treading water, even though the implementation of a solution like Prevalent provides great benefits. Now organizations are looking at how to leverage artificial intelligence to extend that: to enable that subject matter expertise to do more and get more done, but also to harness such a variety of information and look for risk issues that fly under the radar of our human perception, because we can’t process that much information and see the intricate patterns and themes underneath. Artificial intelligence, leveraged through technology like Prevalent, extends this and makes it much more efficient, effective, resilient, and agile. Leonardo da Vinci said, “Realize that everything connects to everything else.” It’s those little things that matter. And some things might be flying under the radar in our third-party risk processes, going undetected, that need to be detected. It’s like chaos theory, with the idea that a butterfly’s wings in the Netherlands make tiny atmospheric changes that can influence the development path of a hurricane in the Gulf of Mexico.
We’ve seen this time and time again: the little things matter in third-party risk and can create a significant risk loss event for the organization. We need to manage this, and artificial intelligence, leveraged through a solution like Prevalent, extends that subject matter expertise and gives us a more holistic, 360-degree contextual awareness of third-party risks that can’t be achieved by human subject matter expertise alone. To me, artificial intelligence extends that. So we’re moving from a past where a lot of departments were (and some still are) using documents, spreadsheets, and emails for third-party risk, and scattered departments were using different solutions, to a future of an integrated view of third-party risk across departments and functions. That’s leveraging artificial intelligence to extend this to the next level of efficiency, effectiveness, resilience, and agility in the organization. I’m going to say those four things over and over again: efficiency, effectiveness, resilience, and agility. Artificial intelligence helps us achieve those in third-party risk management. So in that context, let’s talk about the history of GRC software applied to third-party risk. It was a cold, snowy day in the Chicago office of Forrester Research in February 2002 when I defined a model to market for technology called governance, risk, and compliance. I was the first to use that acronym, GRC. So I’ve been called the father of GRC, back in February 2002 at Forrester Research. But you know what?
I had a broad view of GRC on that first day I defined it, but what else happened that year? Enron, WorldCom, Sarbanes-Oxley. So for the first several years of GRC history you had the Sarbanes-Oxley captivity of GRC. Then we entered GRC 2.0, enterprise GRC, which was more of my original vision for GRC. This was about the era of 2007, and it was about enabling those back-office second- and third-line functions to work together in a common software application across risk, compliance, security, audit, and so forth. But then about 2012, we realized, gosh, these software platforms are nice, but there’s a place for best-of-breed systems like third-party risk. Many of the people that implemented GRC platforms didn’t really want to open up their entire GRC platform to 5,000 or 20,000 different vendors and suppliers out there. Why not have a best-of-breed implementation of third-party risk and, where we need to, share information with the enterprise risk management capabilities in the core GRC platform? So GRC architecture is about implementing best-of-breed solutions like Prevalent when and where it makes sense, and integrating with external content sources and internal databases and systems. Then we moved into the era of agile GRC, about 2017. In that context, a lot of the legacy platforms out there (many of them are still out there) are getting a lot of complaints: huge costs of implementation and maintenance, and things break on upgrades. I can think of one of my favorite LinkedIn posts of all time, which described an implementation of one of the players in the space using the lyrics of the song “Hotel California”: they’re stuck and cannot get out. After $500,000 in software license and $2 million in implementation fees, two years later, they’re getting some basic functionality working. That’s all. So, we enter the era of agile GRC.
Low-code, no-code solutions like our host Prevalent that are highly configurable and adaptable have a lower cost of implementation and a lower cost of maintenance and ownership. That’s agile GRC technology. And again, that’s not just enterprise GRC platforms; that’s third-party risk platforms, policy management platforms, regulatory change platforms. GRC is a broad spectrum of solutions, and today we’re talking about the third-party risk aspect. Agile GRC is that highly flexible, adaptable, agile, low-code, no-code third-party risk solution. And now we’re in an era, starting with 2022 and moving beyond, of cognitive GRC. As technology grew more sophisticated, those agile GRC solutions had to leverage, and are leveraging, cognitive technologies, artificial intelligence such as machine learning, predictive analytics, robotic process automation, and natural language processing, to deliver greater levels of efficiency, effectiveness, resilience, and agility to GRC. So this is built on agile GRC, and we extend it with cognitive AI capabilities. And we’re moving in the next couple of years towards business-integrated GRC, building this more into business processes. Some components to look for in agile third-party risk technology: technology that’s highly usable, not just for back-office third-party risk subject matter experts but also for the business owner that owns the third-party relationship, as well as the third party itself; that has a low cost of implementation and maintenance and a low overall cost of ownership; that’s highly configurable, scalable, and adaptable; and that can integrate into the business environment and deliver a full spectrum of third-party risk capabilities, like our host today, Prevalent. It shouldn’t break on upgrades. With a lot of the legacy solutions in this space, there are people stuck on previous versions. They can’t get to the next version because they’ve customized it with coding so much that it breaks.
You want solutions that are cloud-enabled, low-code, no-code solutions that don’t break on upgrades and provide robust analytics, but that are also looking at how to leverage artificial intelligence and things like robotic process automation to take this to the next level. That’s agile, bridging into cognitive, GRC technology. But now let’s turn our focus to this era of artificial intelligence. When we talk about AI, there are two spectrums that are different, but they also really are symbiotic and need each other. On the one hand, we have cognitive GRC, that GRC 5.0 I’ve been talking about. This is the use of AI technologies to make governance, risk, and compliance, and particularly third-party risk processes, more efficient, effective, resilient, and agile. That’s cognitive GRC: how do we leverage AI for third-party risk processes? On the other side, we have AI GRC, the governance, risk, and compliance of artificial intelligence itself. We need to closely govern the use of artificial intelligence within the organization and across its relationships to ensure the reliable achievement of objectives, address uncertainty and risk, and act with integrity and compliance. So AI GRC is the governance of AI use, while cognitive GRC is the use of AI for GRC purposes. I teach a variety of workshops around the world, my Third-Party Risk Management by Design. Maybe some of you on this webinar have been to some of them; Prevalent has sponsored several of them in the past. So far this year, I have done my third-party risk workshops in Chicago, New York, San Francisco, Houston, and London, and in a couple of days I’ll be doing another one in Chicago, on Friday of this week. Now, there’s been a common theme across every one of those workshops this year that did not come up in 2022 or earlier eras.
But in 2023, the big topic of discussion in every last one of those workshops was: how do we govern the use of AI in third-party relationships? What questions do we ask within our third-party assessments about that outsourcer, service provider, contractor, consultant, supplier, or vendor and their use of AI to deliver products and services for the company? Because AI governance doesn’t happen only within the traditional brick-and-mortar business. AI governance also has to extend across that web of third-party relationships and be managed properly there as well. So when we talk about AI GRC, it’s not just about internal use of AI; it’s also governing the use of AI within that extended web of third-party relationships, as those relationships deliver services, products, and activities for the organization and could put the organization at risk. So, we need to deliver 360-degree cognitive AI contextual intelligence. That’s a mouthful. Think about it. We have a lot of information. We have contracts and transactions with third parties, due diligence, geopolitical events (and gosh, we see a lot of that happening in the world right now), assessments being done, attestations, capabilities, trainings, disclosures, inspections, service level agreements, negative news, sanctions, watch lists, politically exposed persons, financial viability ratings, security scorecards, ESG ratings. There’s so much data on third parties sitting in disconnected little silos and data points, in an internal third-party risk platform like Prevalent itself, or maybe trapped in documents, spreadsheets, and emails. We need to be able to take in the variety of these distributed and disconnected third-party data points, integrate them, map them together contextually, analyze and understand these relationships, and build out action items.
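The integrate-map-analyze-act sequence just described can be sketched as a small aggregation step. This is a toy illustration under assumed inputs: the risk domains, scores, weights, and the "worst score wins" merge rule are all invented for the example, not taken from any real TPRM product or methodology.

```python
# Illustrative sketch: merging disconnected vendor data points into one
# profile, then deriving action items. Domains, scores, and the threshold
# are hypothetical examples.
def build_profile(sources):
    """Merge per-source findings into one dict keyed by risk domain.
    When two sources report the same domain, keep the worse score."""
    profile = {}
    for source in sources:
        for domain, score in source.items():
            profile[domain] = max(profile.get(domain, 0), score)
    return profile

def action_items(profile, threshold=7):
    """Turn high-scoring domains into review tasks (sorted for stability)."""
    return [f"Escalate {domain} review"
            for domain, score in sorted(profile.items())
            if score >= threshold]

# Three disconnected data points about one hypothetical vendor:
financial = {"financial_viability": 8}
security = {"cyber": 6, "privacy": 9}
esg = {"esg": 3}

profile = build_profile([financial, security, esg])
tasks = action_items(profile)
```

The point of the sketch is the shape of the workflow: once scattered findings land in one profile, deriving consistent action items becomes a simple rule, which is exactly the step that stays undone when the data lives in separate spreadsheets.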
Now, this is important in third-party risk in general, and a platform like Prevalent has historically streamlined this and made it more efficient and effective in the organization. But the volume of information is still so much to process, and so the question is: how does AI in a platform like Prevalent extend that and provide more value, to see more patterns and more of what’s going on, to raise alerts on things that might slip past us, or things we just don’t have time to get done? I can’t think of one organization I’ve encountered that says, “We have extra subject matter experts.” Every organization I talk to is taxed. We’re at our maximum capacity, or over capacity, on what we can handle in our third-party risk program. We can’t get things done. So, how do we leverage something like Prevalent with AI to extend that in the organization, to get more done and see more and make things more efficient, effective, resilient, and agile? Now let me step back and talk about AI broadly, even outside the cognitive GRC context. What are some of the challenges (and at the same time, a challenge can also be an opportunity) of AI in organizations today? I have a whole report on this; I encourage you to look at it. Some of the challenges of AI, which again could also be opportunities: AI is powerful. AI can effect significant change with minimal effort. While this is a major strength, it also means a little AI use by an unskilled worker could result in a profoundly negative outcome. All the more reason to govern this in third-party relationships, because that unskilled worker in a contractor or consulting capacity could have a negative impact on your business. Companies must be wise to adopt good governance of AI, because AI is powerful. AI is also complex.
One of the primary challenges in using AI is the complexity of its implementation. Effective AI integration requires substantial investment in technology, skilled labor, and time, unless you’re leveraging it in an application that has already put it in place for your use, like Prevalent. Organizations need experts capable of governing, developing, maintaining, and managing AI systems, which often leads to costly upskilling or recruitment of new resources. Additionally, compatibility with existing systems and workflows must be considered, often requiring a complete overhaul of current practices. Another challenge for AI is its simplicity. Yes, we just said it was complex, but the simplicity of it is a concern as well. Complexity is certainly a challenge, but the recent AI gold rush, particularly in generative AI, raises the opposite concern of simplicity. We had shadow IT; now we have shadow AI. How do we know how AI is being used internally, and used properly, but also in those third-party relationships, where a contractor or consultant might go rogue and leverage it, causing exposure to the organization? The necessity of data scientists, advanced technology infrastructure, and sizable ongoing support costs all served as a check to keep many companies from running amok with the technology. But now, with AI so prevalent, that bar has been removed. With things like ChatGPT, anyone can leverage these technologies with limited resources and no training or consideration of the consequences. Essentially, the brakes have been removed while traveling at high speed. All the more reason to leverage AI that has actually been built, tested, and had assurance provided around it, in technologies like Prevalent. Then there’s productivity. For many AI use cases, there are productivity enhancements such as GitHub Copilot, which suggests to the developer (and again, it could be that outsourcer developing code for you) what their next code block should be.
The developer either accepts, modifies, or declines that suggestion. This is the same type of technology we see in things like iPhones and G Suite. However, AI use is becoming a major determining factor between IT tool governance and AI model governance. As organizations use productivity tools laced with AI, we will see greater accountability in SOC 2s, ITGCs, and other data agreements to cover their use in third-party relationships. Then you’ve got data privacy challenges. Data privacy is another critical concern of AI usage. AI systems are data hungry; they need vast quantities of data for training and operation. This dependence raises significant issues regarding data security, user privacy, and compliance with regulations such as GDPR. Breaches can lead to severe reputational and financial damage if data is used improperly. We also have the issue of bias, as we’ve seen with human resources and recruitment applications. Bias can be an inherent problem in AI systems that poses significant ethical and practical concerns. If the data used for training is biased, guess what? Garbage in, garbage out. The AI system can amplify and reproduce those biases, leading to unfair outcomes. Examples include discrimination in hiring through AI-powered recruitment tools or unequal treatment in AI-driven healthcare solutions. You’ve also got the challenge of opaqueness. Another risk with AI is the black-box problem, referring to the opaqueness of AI decision-making. Advanced AI models, particularly in deep learning, often make decisions in ways that humans cannot easily understand. They fly over our heads. The lack of transparency can be problematic, especially in sensitive areas like healthcare or finance where understanding the rationale behind decisions is crucial. But then we have legal liability. AI systems also present a potential legal liability issue, and a lot of this is still to be determined.
We need to understand culpability. When an AI system causes harm, fault is not straightforward: is the developer, the outsourcer, the service provider, the user, or the AI system itself at fault? Legal systems worldwide are grappling with these novel challenges. This includes supply chain challenges, because AI systems and models are being developed using open-source or other code where the country of origin or its reliability is not known, or where inappropriate or unintended use could cause potential legal issues. Those are some of the legal challenges, but also opportunities, in these areas of AI in the organization. Some legal challenges to be aware of include the area of privacy, data protection, and leakage. AI systems often require vast amounts of potentially sensitive data, posing risks to privacy rights and data protection. For instance, AI applications like facial recognition technology have raised significant privacy concerns. Therefore, jurisdictions around the world are deliberating on stricter data protection laws to control AI misuse. This also includes data leakage, where responses may inadvertently include sensitive information from training data sets. And given the often global nature of organizations, issues around cross-border data transfer also arise. Again, a lot of your organizations are doing this through third-party and outsourced relationships, and this needs to be governed. We’ve already referenced this, but bias and discrimination need to be addressed. AI systems can potentially reflect and amplify societal biases, which could lead to discriminatory practices. There have been cases where AI algorithms used in hiring or criminal justice systems have produced biased outcomes. Lack of visibility into the methodology also makes diagnosing and solving this difficult.
We’ve got liability and accountability surrounding the use of AI, but also intellectual property rights, particularly in the world of generative AI, where that consultant or contractor in your organization might be using generative AI to produce things for your organization. Questions regarding AI’s creative outputs, like those of generative AI such as ChatGPT, whether these should be considered intellectual property, and if so, who should hold the rights, remain largely unresolved. There are also questions about intellectual property rights associated with the inputs to systems like ChatGPT, where they are gathering that data, and whether these comply or introduce legal liability, for example around licensing and copyright terms and conditions. And you’ve got security: AI can be exploited for malicious purposes like deepfakes or autonomous weapons, necessitating legal provisions for managing these risks. Lots of complications. And then we’ve got things like the EU AI Act that are also having organizations manage and monitor this more closely, particularly for high-risk areas. When it comes to an AI model, there are three basic pieces. Now, this can be a lot more complex, but I’m simplifying it, and again, I have a whole paper written on this. You have the input component that delivers assumptions and data to a model: generative AI is going out there on the internet and learning from vast data sets, while with machine learning it’s often vast internal data sets. In order to have good output you need to have good input; that data needs to be reliable. You have the processing component that transforms all this into estimates, and the reporting component, that output piece. So you have the input, the processing, and the reporting. When AI fails, gosh, there are a lot of things that could go wrong in the organization.
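The three model components just named (input, processing, reporting) can be sketched as a tiny pipeline. This is a hypothetical illustration, not any real model: the freshness check, the toy averaging "model," and all field names are assumptions made for the example, chosen to show where the stale-data failure mode discussed here gets caught.

```python
# Minimal sketch of the input -> processing -> reporting pipeline,
# with a freshness check on the input, since stale data is a common
# failure mode. All names and numbers are illustrative.
from datetime import date, timedelta

def input_component(records, max_age_days=365):
    """Filter out stale assumptions/data before they reach the model."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [r for r in records if r["as_of"] >= cutoff]

def processing_component(records):
    """Transform validated inputs into an estimate (a toy average here)."""
    if not records:
        return None
    return sum(r["risk_score"] for r in records) / len(records)

def reporting_component(estimate):
    """Turn the estimate into human-readable output."""
    if estimate is None:
        return "Insufficient current data"
    return f"Estimated vendor risk: {estimate:.1f}/10"

records = [
    {"as_of": date.today(), "risk_score": 6},
    {"as_of": date.today() - timedelta(days=800), "risk_score": 2},  # stale, dropped
]
report = reporting_component(processing_component(input_component(records)))
```

Because the 800-day-old record is dropped at the input stage, the estimate reflects only current data; skipping that gate is exactly the "garbage in, garbage out" scenario described below.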
So some of the things to look at include the dynamic and changing environment. AI models are not static. You need to make sure you're leveraging AI that is being kept current, because the world is changing, regulations are changing, risks are changing, business is changing, and if you're using out-of-date information, that means bad decisions by AI. That information has got to be current. Gosh, I can think of several months ago, before ChatGPT-4 was out, and actually before more recent enhancements to ChatGPT-4, I asked ChatGPT to summarize the UK Consumer Duty law. It got a lot of things wrong, but that's because it was using outdated information; the information ChatGPT was based on at the time was from 2021 and before. So you had outdated information. You need to keep current, make sure the data models are kept current, and that's why when you use things like AI within a solution like Prevalent, it's upon Prevalent and others to keep that information current. In the environment you also have lack of governance and control. The pervasive use of AI has introduced what is now called shadow AI, just like shadow IT, and we need to control that, not just within our traditional brick-and-mortar business but across these extended third-party relationships. They might have answered that assessment questionnaire saying, we don't use AI, or if we did, here are its documented use cases, but you might have that rogue contractor or consultant or temporary worker out there leveraging AI, and you need to stay on top of that. Again, AI is more than the AI model processing component; it's also the input and the output. Bad input, garbage in, means garbage out. And we can also have errors in the model that need greater assurance, including undiscovered model flaws or even misuse of models.
Maybe this AI was generated for one purpose, but we're trying to use it for another purpose it was never designed for. It might be 80% of the way there, but it's not 100% of the way there, causing issues. We have the issue of misrepresentation of reality. The very nature of AI means it's a representation of reality, not reality itself. AI models are simplifications of reality, and in that process, simplification may introduce assumptions, errors, and bias. We have limitations in the model, pervasiveness of models, big data interconnectedness, and sometimes inconsistent development and validation. This is all why it's a good thing to leverage AI that has been validated and tested for specific purposes, like within a use case of third-party risk, by a solution like Prevalent that has gone through and addressed all of these on behalf of the organization and can provide that testing and validation for you. So, responsible use of artificial intelligence ensures that AI systems comply with evolving laws and regulations, and helps prevent legal issues, financial penalties, and damage to reputation. And again, that's why tested and assured technology within a solution like Prevalent helps us comply with evolving laws and regulations: it is built, designed, and tested for that purpose. Responsible use of AI helps us manage uncertainty and risk, because AI can have unintended consequences, including biased decisions or privacy breaches. Effective risk management helps identify and mitigate these risks. It helps meet ethical standards, ensuring AI is used fairly and doesn't perpetuate harmful biases. It delivers trust and transparency, where AI GRC practices help organizations demonstrate that their AI systems are trustworthy and transparent. It provides strategic business alignment to help achieve business objectives. And it enables agility to keep up.
So, the role of artificial intelligence. Over the last several slides I might have scared you. That wasn't my intent. My intent was to say there are risks as well as opportunities in the use of AI, and organizations just can't carelessly, haphazardly approach AI. They need structured governance of it. But that's also why leveraging AI that's already been built into tools like Prevalent, which has addressed these issues, becomes more of a value equation for the organization: adopting artificial intelligence that has been tested, validated, and built for specific third-party risk use cases, not just rogue implementations of it. And that's why solutions like Prevalent leveraging AI are something to really strongly consider. So the role of artificial intelligence in third-party risk management is to help us leverage and extend subject matter expertise. I see artificial intelligence, properly used, not as replacing subject matter experts but as extending their capabilities in the organization, extending their ability to get more done. Again, I haven't encountered one organization that says, we're out of things to do, what can we do today? Our subject matter experts aren't out there knitting scarves or something because they don't have anything else to do. Right now our subject matter experts are stretched thin, so how can we leverage AI within a solution like Prevalent to extend their subject matter expertise and get more done? Because AI in third-party risk helps us learn from experience: what's going on out there in the geopolitical risk world, the regulatory developments, the transactions in the environment, the patterns. It uses information and what it's learned to draw conclusions, like predictive analytics. It identifies images and patterns, and not just images, but patterns in documents and written content. It can read a policy, like my security policy.
It can read in a vendor security policy and flag where things are different. It can solve difficult problems, understands different languages and contexts, and creates information and perspectives. AI makes you more efficient, effective, resilient, and agile. Let's explore some of that. First off, efficiency. Think about the volume of documents and information and assessments coming in. Can you really go through all of it? Or even just keep up on regulatory change? Financial services firms are dealing with 257 regulatory change events coming from 1,217 regulators around the world. That's a lot of regulatory and legal change. Print off the United Kingdom's FCA rulebook and I'm told it comes to a stack of paper six feet tall. I haven't tried that myself, but that's a lot of paper. Even if it were six inches tall, that's a lot of paper. If you print off the US Code of Federal Regulations and lay it end to end, it's longer than a marathon. Now, how long would it take you to read a whole 27 miles of paper? How long would it take you to read a six-foot stack of paper? More than a year, I'd assume. A machine, and I've seen it, can not only read it in a matter of minutes, but dissect it, map it, categorize it, and organize it. Now, that's efficient. I was talking to a chief ethics and compliance officer at a life sciences firm, and they found in their testing of artificial intelligence and natural language processing that not only is a machine a gazillion times faster at reading, but in their internal testing, it was 30% more accurate. Think about it. If you had a stack of paper, even six inches tall, to read, like third-party risk assessments, regulations, SOC 2s, whatever it might be, our minds are going to wander. How are the Milwaukee Bucks going to do this season? What are our dinner plans tonight? Did I get all the groceries we needed?
Things are going to get missed. Guess what? A machine's mind doesn't wander. It stays on task. That life sciences firm found that natural language processing was 30% more accurate than humans because it stayed on task. Efficient and effective. So let's unpack some of this: AI for third-party risk use cases. One area is machine and deep learning. Machine learning and deep learning are subsets of AI that allow systems to learn from data without explicit programming. Some use cases: machine learning can analyze historical third-party data to predict future risks, aiding in proactive risk management. Machine learning algorithms can scrutinize past financial transactions of third parties to forecast and prevent fraud. It can help us deliver expert third-party risk management insights based on experience and framed to industry guidelines and requirements. And it can enhance the efficiency of data sourcing while significantly reducing the time required to collect comprehensive data. Next, we'll look at natural language processing, the ability to read documents. I can read my IT security policy and a vendor security policy and note: ah, their password policy is much weaker than ours, and they don't even have an encryption policy. That's a concern, particularly in this privacy and GDPR world. Natural language processing enables machines to read, understand, and interpret human language. Some of the third-party use cases: it can simplify the interpretation and analysis of third-party content and documentation that's provided to us, saving hundreds of hours. It can automatically map third-party compliance requirements to identify compliance overlaps and gaps in third parties. It can compare third-party assessment answers to expected ones and identify discrepancies.
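As a concrete illustration of that policy-comparison use case, here is a minimal sketch. The policy texts, control topics, and keywords are all invented for illustration, and real NLP tooling would use language models rather than simple keyword matching:

```python
# Toy sketch of automated policy comparison: which required control
# topics does our policy cover that the vendor's policy never mentions?
# All texts and keyword lists below are illustrative assumptions.

OUR_POLICY = """
Passwords must be at least 14 characters and rotated quarterly.
All data at rest and in transit must be encrypted with AES-256.
Multi-factor authentication is required for remote access.
"""

VENDOR_POLICY = """
Passwords must be at least 8 characters.
Remote access requires a VPN connection.
"""

# Hypothetical control topics mapped to indicator keywords.
REQUIRED_CONTROLS = {
    "password policy": ["password"],
    "encryption": ["encrypt"],
    "multi-factor authentication": ["multi-factor", "mfa"],
}

def covered_controls(policy_text: str) -> set[str]:
    """Return the control topics the policy text mentions at all."""
    text = policy_text.lower()
    return {topic for topic, keywords in REQUIRED_CONTROLS.items()
            if any(kw in text for kw in keywords)}

# Topics we cover that the vendor's policy is silent on.
gaps = covered_controls(OUR_POLICY) - covered_controls(VENDOR_POLICY)
for topic in sorted(gaps):
    print(f"Vendor policy gap: no mention of {topic}")
```

In this toy run, the vendor's policy triggers gaps for encryption and multi-factor authentication, which is exactly the "they don't even have an encryption policy" finding described above, just automated.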
It can do that document comparison, whether policies or contracts or regulations, and do that comparison analysis of third parties, as well as provide real-time translation services as we're dealing with third-party risk on a global basis. In some cases, I've seen things dealt with in over 20 different languages in large organizations. Then we move to the area of predictive analytics. Predictive analytics and expert systems are computer systems that mimic the decision-making abilities of a human expert. These systems aid in making informed decisions by providing insights based on accumulated knowledge and data. Some of the use cases include the ability to quickly identify third parties with missing and inadequate controls and discrepancies; search policies, documents, assessments, SOC 2s, or whatever it might be by understanding the searcher's intent; and deliver expert third-party risk management insights based on experience and, again, framed to industry guidelines. Then you have things like neural networks. These are algorithms modeled after the human brain, designed to recognize patterns. Neural networks are instrumental in detecting anomalies and patterns in vast data sets, helping identify potential risks or compliance breaches. That can include topic modeling to identify and surface data relationships impacting third-party risks, improving data discovery via data enrichment, and automatically classifying third-party risk content according to a taxonomy. It could also be looking through a vast amount of transactions for abuses in gifts, entertainment, hospitality, political contributions, and other payments to third parties, and comparing against master data records for things like matches between third-party vendor and employee bank account information. Well, that's suspicious. Then you have things like robotic process automation.
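The bank-account matching check described above does not need a neural network to illustrate; the core cross-reference can be sketched in a few lines. A real system would add fuzzy matching and many more data sources, and every record below is invented:

```python
# Cross-reference vendor and employee master data for shared bank
# accounts, a classic fraud indicator. All records are illustrative.

vendors = [
    {"name": "Acme Supplies", "bank_account": "DE89-3704-0044-0532"},
    {"name": "Globex Ltd",    "bank_account": "GB29-NWBK-6016-1331"},
]
employees = [
    {"name": "J. Doe",   "bank_account": "GB29-NWBK-6016-1331"},
    {"name": "A. Smith", "bank_account": "FR14-2004-1010-0505"},
]

def suspicious_matches(vendors, employees):
    """Return (vendor, employee) pairs that share a bank account number."""
    by_account = {e["bank_account"]: e["name"] for e in employees}
    return [(v["name"], by_account[v["bank_account"]])
            for v in vendors if v["bank_account"] in by_account]

for vendor, employee in suspicious_matches(vendors, employees):
    print(f"Review: vendor '{vendor}' shares a bank account with employee '{employee}'")
```

The exact-match dictionary lookup keeps the sketch simple; in practice account numbers would be normalized first, and a match would open an investigation rather than prove fraud.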
Robotic process automation is the use of software robots to automate highly repetitive and routine tasks. Robotic process automation can automate third-party risk processes, reporting, data collection, and risk monitoring, reducing human error and improving efficiency. It can simplify processes and more efficiently resolve questions, gain insight, and act. If something's discovered, it could trigger, say, the access of that third party being shut down until something's resolved. Generative AI I've saved for last because it's the most talked about now. Generative AI can create new data resembling the data it was trained on. So generative AI can produce synthetic third-party financial transactions to test the efficacy of fraud detection systems without compromising real customer data, and the same goes for privacy and everything else. It can generate third-party policies or control documentation, answer third-party risk questions, create risk scenarios, or generate data sets for stress testing and compliance training models. It can be an advisor on third-party risk: ask questions and get them answered. And it can automate third-party assessment population. I've got those SOC 2 forms, I've got the third-party assessment questionnaire, and I can use generative AI to populate them. Now again, it should still have subject matter expert review, but it can save a lot of time if we leverage AI to populate that and kick off the review process. Think about it. So the benefits of cognitive GRC for third-party risk management: it delivers proactive risk management to keep up and be agile in this environment, enhanced compliance monitoring of these relationships to make sure they're not stepping outside of bounds, efficient resource use and allocation, time saved, money saved, and more informed decision-making when it comes to our third parties.
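The automated trigger mentioned above, shutting down a third party's access until a finding is resolved, might be sketched like this. The severity levels, playbook rule, and data model are all illustrative assumptions, not any particular product's behavior:

```python
# Minimal rule-driven automation sketch: when monitoring surfaces a
# critical finding, suspend access and queue it for human review.
# Severity names and the rule itself are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class ThirdParty:
    name: str
    access_enabled: bool = True
    pending_reviews: list = field(default_factory=list)

def handle_finding(tp: ThirdParty, severity: str, detail: str) -> None:
    """Apply a simple playbook rule to a new monitoring finding."""
    if severity == "critical":
        tp.access_enabled = False          # shut down access immediately...
        tp.pending_reviews.append(detail)  # ...but route to a human to resolve
    else:
        tp.pending_reviews.append(detail)  # lower severities just queue up

vendor = ThirdParty("Acme Supplies")
handle_finding(vendor, "critical", "Data breach reported in vendor's cloud environment")
print(vendor.access_enabled)  # access stays off until a reviewer resolves it
```

Keeping the human review queue in the loop reflects the point made throughout the webinar: automation acts fast, but a subject matter expert still makes the final call.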
So, we're going to save questions for the end, but I've got GRC 20/20, and also theGRCreport.com, which provides news from around the world on governance, risk, and compliance, and there's a weekly newsletter you can sign up for. But with that, I'm going to turn this over to Scott to hear some of Prevalent's perspectives as we bring this to a conclusion.
Scott: Thanks very much, Michael. If you could stop sharing your screen, I'll share mine now and share a little bit with everybody. All right, bear with me. Just real quick, can you see my screen? Okay.
Ashley: Yes.
Scott: Awesome. Cool. All right, everybody. What I thought I would do is draft off of Michael's presentation here, very specifically around the opportunities around artificial intelligence and third-party risk. From our perspective at Prevalent, it really boils down to two groupings of very tangible, real-life situations with regard to how to apply AI to third-party risk use cases and how to benefit from it. And I would argue that once you get right down to the implementation of it and select the use cases you're trying to go after, it falls into these two boxes: one of them is analytics, and the other is automation. And I see six things that really emerge for third-party risk out of the usage of various AI technologies, be that generative, be that machine learning, be that RPA, whatever type of AI-related technology you choose to implement. With regard to analytics, this is all about driving clarity, finding patterns of behavior or information that might be hidden in the huge reams of data, from all these various different sources, that Michael talked about a little earlier in the presentation. AI can analyze huge amounts of structured and, importantly, unstructured information to identify some of those patterns, and it helps to clarify decision-making.
Second, it helps with unbiased decision-making. As Michael talked about, bias is a huge challenge when using AI, perhaps large language models or LLMs, as the backing for that level of analytics. Good AI, properly trained on real third-party risk management events, data, and information, helps to ensure that what comes out the other end in terms of recommendations or guidance or analytics mitigates the risk of any kind of bias. As Michael said, it eliminates or reduces the risk of the garbage-in, garbage-out model. From an analytical perspective, I think the third big opportunity for third-party risk is subject matter expertise problems. Look, I get it. It's tough right now to hire high-quality talent to manage a third-party risk management program: analysts to comb through data and information who have the context to understand what to do about those risks, to leverage a good platform and a mature tool set and say, okay, this is what a holistic program should look like, and this is what we should do about the risks that we find. AI helps by providing some of that programmed expertise, so that as you're performing your hiring and your staffing and your training and running a program, you can offload a little bit of that analytical work to the AI where it makes sense, as long as there's proper human governance over the process and the outputs to make sure they still match up with expectations and there isn't bias or hallucination involved in the outputs. The second category of use cases, if you will, for AI and third-party risk falls into this concept of automation.
And really, from a scalability point of view, if you've got some analytical superpower and some automation rules and workflow built into the process and the machine learning, you're going to be able to get more done and offload some of that work, as I mentioned before, to the AI, enabling your team to focus on some of the higher-level activities. The byproduct of that is the second one there under automation, and that's consistency and reliability. Those AI models aren't going to get tired. They're not going to have holidays. They're not going to have distractions. You're going to have 24/7 availability of those analytical models to help improve the consistency, reliability, and predictability of the outputs. And that really leads into learning and adaptability. These models have an incredible aptitude for continuing to adjust, learn, and grow over time, accommodating inputs as they go into the system and altering the outputs based on what goes in. So really, at the end of the day, these use cases for automation are about the speed and scale of a program. Think of it that way: analytics to make sense of a bunch of data, automation to do something with it. At the end of the day, for us anyway, it means improving clarity of decision-making, improving the speed with which you can respond to the business and to risks, and then scaling to accommodate future growth. That, from my perspective, is a very practical approach to understanding what the AI opportunity is. We've taken that approach to AI and built this pretty rich set of capabilities into the Prevalent platform at every stage of the third-party risk life cycle. Everything we do is framed around this concept of the third-party life cycle: seven stages to it, from sourcing and selecting a new vendor to offboarding and terminating.
And in each one of these seven stages, AI has a role to play. In sourcing and selection, it's making sense of a bunch of information to determine the potential risk factors that a potential new vendor could present to your business: getting financial data, ESG scores, potential sanctions info, data breach history, whatever. You can do that with the Prevalent platform, and then we apply this analytical model to give you some analysis and recommendations on what to do. Second, from an intake and onboarding perspective, that melds into the third stage of scoring inherent risk. As you're onboarding a new vendor and performing a level of intake, some of that activity can be automated through playbooks and rules, and some of those data-driven insights can be driven via AI-specific inputs to help you calculate the risk that a potential vendor exposes your organization to. From an assessment and remediation perspective, you can input risks and events into the system and get some guidance out of it in terms of what to do about a risk, adding greater context to potential risks you could be facing. From a monitoring and validation perspective, again, this deals with what Michael was talking about earlier, where you're able to consume all these different bits of inputs, whether cyber, business, financial, operational, compliance, whatever, and make sense of what it actually means: aggregating it, normalizing it, correlating it, and then validating that against your assessment results.
The same thing could be said for SLAs, KPIs, and KRIs: extracting them from a contract, managing them centrally, and more. And then finally, offboarding a vendor by implementing a very strategic and comprehensive offboarding checklist that's automated in the platform. With Prevalent, you can do this. You can execute your third-party risk management program across more than just traditional cybersecurity risks. Here are six buckets of risks that we help you manage in the Prevalent platform. I won't get into too much more detail about that, but every one of these items on this slide has either a piece of assessment content built into the platform or a third-party input of data that we help to normalize and correlate with the AI model. We deliver a combination of expert people, all of this data that we talked about and the analytical superpower behind it, and then the platform to house it from a reporting and workflow perspective. It's important to note that the Prevalent platform is a rich foundation on which to build a very comprehensive enterprise third-party risk management program. Our approach to AI is to augment existing processes with AI, not to completely subvert what your particular program is or disrupt workflows. We've got some resources in here from an AI perspective, with links. I realize you can't access these links right now, but when you receive the recording of this presentation tomorrow, you'll receive the presentation as well.
You'll be able to click on some of these links and get some best-practice guidance on the risks of using AI in third-party risk and how to mitigate them, five ways to leverage AI, and how to think about AI in a broader risk management context, for example using the NIST AI Risk Management Framework. So anyway, that's what I wanted to share with you today: what Prevalent is doing to incorporate AI into your third-party risk management workflow and ecosystem, and how AI can help support existing TPRM capabilities and use cases. So that's what I wanted to share from my perspective. I guess, Ashley, I'll turn it back over to you, and we can open up for questions.
Ashley: Thanks, Scott. I'm going to go ahead and launch our second poll so we can follow up with you regarding any initiatives or projects you may have. We're just curious to see if you're looking to establish or augment a TPRM program within the next year, and please be honest, because we do follow up with you. We've got a few minutes left on the clock, so why don't we go ahead and look through these questions. Michael, we have a question coming in for you. Someone asks, given that all this is so new and still evolving for both third-party software developers and software users, what are the one or two practical ways that people are using this as they move from crawl to walk to run?
Michael: Well, I would start small and let it grow. Some of the practical use cases I see are things like natural language processing to compare documents, like my policy against vendors' or suppliers' policies, and note differences. The other one is generative AI to populate assessment questionnaires and the like to reduce some of the burden there. But again, to me, that all still needs subject matter expert review. It's not there to replace them; in the hands of somebody that's lazy, it could actually be another risk. But a skilled subject matter expert can leverage generative AI to get a lot of work done for them and get more done, though it still requires some type of review. I'm also seeing a lot of AI coming in to harness a lot of these disparate third-party risk data sources, from politically exposed persons lists, watch lists, sanctions lists, and negative news, because there's a lot of information out there, but it sits in a lot of islands. So how can we take that in and leverage AI to process it against our vendor and supplier lists to figure out which ones really impact us?
Ashley: Thanks, Michael. Scott, we have a question coming in for you. Someone asked, "Does Prevalent provide any matching analysis of the organization's third-party suppliers against the 500,000 vendors in your database before committing?" A great question.
Scott: Yeah, we've got a supplier network that has completed third-party risk assessments, which include not just a completed internal controls assessment, but also current data on cyber posture, data breach history, financial status, operational updates, and any compliance, sanctions, or reputational concerns as well. We've got customers that utilize that database as a supplier database, if you will, of potential suppliers to bounce against according to your risk threshold.
Ashley: Thanks, Scott. Michael, we have a question coming in for you. Chase asked, "Have you seen a TPRM policy in place that includes the use of AI for the TPRM process, and what does that look like?" Chase is curious as to what the governance looks like surrounding AI being used in a business function outside of general use.
Michael: I have not seen a third-party risk management policy that specifically addresses that. I'm sure there are some out there, but I have not come across one yet. I'm seeing a lot of organizations start to address more of a policy around the use of AI in third-party relationships and ask questions on that in the assessment questionnaires.

Ashley: It looks like there aren't any more questions in the Q&A box. So, in that case, thank you, Michael and Scott, and everyone for all of your questions. They gave us some great information to take in today. I will give you a couple minutes of your day back, and I hope to see you all in your inbox or at a future Prevalent webinar. Cheers, everyone.
Michael: Bye everyone.
Scott: Thank you.
©2025 Mitratech, Inc. All rights reserved.