4 Best Practices for Benchmarking & Auditing Your Third-Party Risk Management Program
Description
Even if you’re consistently monitoring vendor risks, you still need a reliable way to gauge the maturity of your third-party risk management (TPRM) program so you can continuously improve it to meet your organization’s risk objectives.
In this webinar, audit experts Alastair Parr and Joe Toley share their best practices for auditing your TPRM program’s maturity.
They’ll examine industry-standard benchmarks such as coverage of the vendor estate, the efficiency of roles and responsibilities, assessment content optimization, risk management maturity, and the effectiveness of program governance.
Join Alastair and Joe as they explain:
- How to use these benchmarks for auditing your TPRM program’s effectiveness
- The key need-to-knows
- Thresholds for critical benchmarks
- How to get your TPRM program to the next level
By understanding how your program stacks up against best practices, you’ll walk away with an in-depth understanding of the state of your TPRM program’s maturity and where to take it next.
Speakers

Joe Toley
Compliance Expert

Alastair Parr
Compliance Expert
Transcript
Alastair: Thank you very much, Amanda. Amanda: Of course. Alastair: So to start with, Joe, who are you? Why are you here? And why are you talking to us? Joe: Thanks, Alastair. So I am the program director at Prevalent. I have spent a lot of time working with clients to develop their programs, and a lot of time looking into how we can assess maturity for third-party programs as well, which is why I can hopefully make quite a good contribution today in discussing some of these key contributing factors that make up a mature program. Alastair: Lovely. Thank you, Joe. And hello, everybody. Alastair Parr, I’m the SVP of Products and Delivery here at Prevalent. I’ve had decades of experience auditing third-party programs against key risk domains and risk controls, and I’ve had a fair bit of oversight over third-party programs, specifically in governance. So I’ll try and weave in what I know, and certainly we’ll tease out some information from Joe and Scott that might be prudent as well over the session. Scott, over to you.
Scott: You won’t be teasing a whole lot out from my perspective today, unfortunately, Alastair. I’m just the marketing guy on the call. I’m VP of Product Marketing here at Prevalent. It’s my job to kind of synthesize the best practices that we learn from customers and that we generate from our own expertise, and help apply that across the board, across multiple customers, through the publication of content and regular best-practices guidance. So I basically run spellcheck on whatever Alastair and Joe create and, voila, there’s my contribution. So glad to be here.
Alastair: Scott is certainly the most humble of the four, as you can probably tell. So what are we going to be covering off today? A bit of insight: our focus is really diving into some of the core components that help support good audit programs relating to third-party risk specifically, and there are a couple of questions we’re going to dive into as part of that. There’s a central theme that you’re going to see today, which is a maturity assessment. The reason why we use maturity assessments as an audit mechanism is that they are repeatable, they’re consistent, and they provide a suitable benchmark. But as we weave through our journey and our conversations today, we will be addressing a few common questions that we tend to see. We’ll be understanding how we can actually validate that a TPRM program, a third-party risk management program, is working effectively and correctly. We’ll be looking at the key benchmarks associated with a TPRM program. We’ll be understanding how we can ascertain where any individual program sits on the Prevalent maturity model, which is the output of the maturity assessment. We’ll be looking at some of the common thresholds for the critical benchmarks that make up that maturity assessment, and we’ll provide common insights around how to actually progress to the next benchmark. We will have a Q&A session later on, as Amanda rightly covered off, so by all means, if you have any questions, please feel free to put them in the Q&A, and we will endeavor to either get through them as we go through this session or save them towards the end. So, a maturity assessment overview. Joe, what actually is a third-party program maturity assessment?
Joe: Sure. It’s a good place to start. It’s really a way of understanding where you’re currently at with your third-party program. We see a lot of organizations rush into creating and developing a program in the hope that it’s successful. What a maturity assessment allows us to do is take a step back and understand exactly whether we’ve considered all of the key foundational components that make up a good program. Are there things we were missing when we were building out the layers that support a program? And applying a scoring mechanism to that really allows us to understand where we’re at and where we could potentially get to. Alastair: So, moving on from that, Joe, why should I actually care? Joe: Yeah, good question. So once we build out our scoring for a maturity assessment and understand where we’re at, we can then start to look at planning. If we score, say, a two on that maturity scale, we can then look at why we’re scoring a two: where were our successes, where were our weaknesses, and then we can start to apply some ownership internally for actually progressing to the next level, and how that might impact our program. And when we look at an overall program, we shouldn’t just be looking at how we can assess more vendors and how we can drive down risk. It should be about making this process as efficient as possible, and that’s usually one of the common areas where we see slip-ups when organizations are putting together their program: building out something that’s scalable. You’re only going to onboard more and more vendors, hopefully. So with that in mind, we need to look at where we can improve efficiency as early on as possible, so that we can see the benefits as we start to expand and grow the program over time.
Joe: And I will also mention here that we found, when we were building out these types of question sets and trying to understand maturity, there was a common theme where organizations were always looking to gain the best score they could out of some of these assessments, when really you should be working the other way. You should always try and err on the side of immaturity rather than maturity when you’re filling out these types of assessments, so that if there are any gray areas where you are seeing particular weaknesses, you’re keeping them on the radar for development and improving them over time. Another point to mention here as well: if you’re going to be performing an exercise like this consistently, maybe quarter over quarter hopefully, or year over year at a minimum, we should be looking at making this an apples-to-apples comparison. Where were we last year? How did we perform the assessment? Were we honest with our answers? Were we answering them in a particular binary way that makes this comparable to something we’re completing later on? So those are some of the key factors I would consider when we start to look at actually filling out an assessment, or embarking on this journey of completing assessments. Alastair: Thank you, Joe. Interesting information there. So one question I commonly get from this is: that’s great, so there’s a benchmarking model from one to five, based on the Carnegie Mellon Capability Maturity Model.
Alastair: But what do you normally see as an average? You mentioned that people should be working from a perspective of being immature up front, expecting a degree of immaturity in their program and the pillars. What does good look like for a first-year program? Joe: Yeah, I would say the most common scoring we see for our particular model is between that two and three: the developing and scalable type program states. And that’s typical because, I think, all of the basics of a program are in place, but they’re just missing that check of how they can make things more scalable and more efficient, which is why they’re unable to get to that scalable and optimizing state of the program. So commonly we see a two to three. I think with a year of actually looking into a program and developing on some of these weaknesses, it should be achievable to reach between a three and a four score. I would say that’s typical if you’re doing all the right things, you’re performing these on a quarterly basis, and you’ve got the right ownership internally to make people accountable for actually progressing in these areas.
Alastair: Very interesting. What’s the best you’ve seen, Joe? Joe: The best I’ve seen is a late four. Now, one thing to also consider when we’re looking at these types of metrics is the actual scope of the assessment. Organizations may have a very small number of third parties, in which case they can dedicate a lot of time to developing some of these areas. As the program grows, you may even see organizations drop down in some scores, because the scope of the program increases; maybe there’s a new bit of legislation they need to accommodate within their assessment program. So although we do see some organizations with quite a high score, I would say it’s more about maintaining those scores, which becomes challenging as the scope of the program grows. Alastair: Understood. Thank you, Joe. Very helpful. So that prompts me to ask: what are the key factors that actually contribute to those scorings, the high scores and so on? How do I get from being a relatively immature program to something that’s more involved? What things contribute to that?
Joe: Yeah, we should start dissecting a program down into its smaller components to be able to provide a more granular view, or get some true insight into where we can mature our program, which is why we approached creating a maturity assessment in this way, looking at particular pillars that we can assess against, which we’ll come on to in just a second. But really, when we start to look at the scoring, as I mentioned a second ago, we should be making sure we’ve got some real clear items to address out of the back of the questions that we’re asking. So, for example, if we’re looking at questionnaire content and how mature we are in a particular area of a question set, we should be looking at that question set and identifying how we can improve as we start to review these quarterly. But yeah, we touched on a lot of these points a second ago: the binary approach to completing a question set, and making sure there’s a standardized approach to gaining the scores as an output from the assessment. Like I said, we need to compare apples to apples to get a true understanding of maturity improvements. And one other really key thing here as well: when we look at assessing against the maturity assessment, we should really try to apply some prioritization to the output. So we’re not just looking at everything within the program being equal when we assess our maturity. There should be certain things in there which are more important than others, or certain dependencies we need to put in place first, and that’s where we need to apply a bit more intelligence to our assessment content.
Alastair: Thank you, Joe. You referenced pillars there. Pillars sound very supportive, and certainly useful from a benchmarking standpoint. So I’d love to know more, Joe. What are the pillars? Joe: Yeah, sure. Thank you. So when we start to look at an assessment, or a program, we need to dissect it down into particular areas, and then we can start to understand where our strengths are and where our weaknesses are. The most logical way of dissecting a program, based on our assessment types, is the pillars we see in front of us here. So we have coverage, content, roles and responsibilities, remediation, and governance. These are the five pillars that support our program. Within these pillars, we have a number of questions and a level of maturity. The benefit of breaking this down, beyond just an overall program score, is that we really get that in-depth view of where our strengths and weaknesses are. I think some assessments out there that just provide an overall maturity score lose a lot of that visibility and that granularity. And although you get a score like a 2.94, it’s really difficult to identify how good that is. It’s only when you start breaking it down, as you can see in the chart in front of us, that you can see the clear strengths and weaknesses that actually make up that overall score. It might actually be worth me just covering off, while we’re on this slide, a bit of an overview of each of those pillars. So coverage would be how much of the third-party estate we’re actually assessing or covering with the program, or how much we believe we’re covering, should I say. Are we assessing all of our third parties? Do we have a strong onboarding workflow to make sure that when there are new third parties, they are enrolled into the program? And are we maintaining those third parties as well? Then we move on to content, which is looking at the questionnaire content: are we sending out sufficient assessment content or questionnaire types to get the level of assessment we need? Then, moving on to roles and responsibilities: do we have the right key roles within the program defined and documented, and is everyone trained up? Then we look at remediation, which is where we focus on the risks and the review process. And lastly, governance, which mainly focuses on reporting and maintaining audit evidence that our program is functioning successfully.
Alastair: Interesting. Now, Joe, one of my first questions on this: I appreciate those five core pillars contribute to an overall scoring, but are they all weighted equally? Joe: Great question. As I mentioned a second ago, some areas of a program are dependent on others. So, using that analogy of running before you can walk, the walking components are the ones that are going to be more heavily weighted: the things that you need to get right to be able to scale up. As an example, within roles and responsibilities, you shouldn’t just be chucking endless resource at a program to get assessments complete. We should look at the workflows and the training that takes place to make sure those processes are happening efficiently, and those are the areas of the program that we typically weight with a stronger value than some of the latter areas.
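To make the weighting idea concrete, here is a minimal sketch of how five pillar scores on a 1–5 scale could roll up into a weighted overall maturity score. The weights and example scores below are illustrative assumptions for this post, not Prevalent’s published weighting.

```python
# Minimal sketch: roll five pillar scores (1-5) into a weighted overall maturity score.
# The weights and example scores are illustrative assumptions only.

PILLAR_WEIGHTS = {
    "coverage": 0.25,
    "content": 0.20,
    "roles_and_responsibilities": 0.25,  # foundational "walking" pillars weighted higher
    "remediation": 0.15,
    "governance": 0.15,
}

def overall_maturity(pillar_scores: dict[str, float]) -> float:
    """Weighted average of pillar scores on the 1-5 maturity scale."""
    total_weight = sum(PILLAR_WEIGHTS.values())
    weighted = sum(PILLAR_WEIGHTS[p] * pillar_scores[p] for p in PILLAR_WEIGHTS)
    return round(weighted / total_weight, 2)

# Example: a typical first-year program landing in the 2-3 "developing" band.
scores = {
    "coverage": 2.5,
    "content": 3.0,
    "roles_and_responsibilities": 2.0,
    "remediation": 2.5,
    "governance": 3.5,
}
print(overall_maturity(scores))  # weighted overall score, roughly 2.6
```

Keeping the per-pillar scores alongside the weighted total preserves the granularity Joe describes: the single number tells you the band, the pillar breakdown tells you where to act.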
Alastair: Thank you, Joe. So with that in mind, it sounds like it’s prudent for us to dive into some of those individual pillars themselves. So to start with, Joe, could you give us a bit of insight into coverage, because it’s quite a key topic when we start looking at vendor inventories and third-party risk. Joe: Yeah, perfect. I think the main objective of the coverage pillar is really to ensure we are assessing all of our third parties, so we don’t lose any exposure to potential risks within our supply chain. So the coverage pillar looks at the processes that support that. Do we have visibility of all of our third parties? Is there an inventory of our suppliers? Does everyone know how to use it? When there is a new request for a service from a supplier, do our employees know exactly where they should be going to raise a request for a new supplier to be onboarded? So it’s really trying to plug those holes within the program, to ensure there are no potential areas where risk could be exposed and we don’t actually have visibility of it. Once we know we’ve got good coverage, we can start looking at actually identifying and profiling those vendors in the right way. There’s a handy graphic on the right-hand side there which gives an overview of some of the tiering logic that organizations might apply. And really, the objective here is that when we’re looking at vendors, we know where we should prioritize our efforts. So if we have a stationery provider versus someone who’s hosting our data, we want to make sure that the ones hosting our data are being assessed in a more thorough and more contextual way. So the profiling and tiering aspect really is a way of understanding who a vendor is at the point of onboarding. The more information we can get as early in the process as possible, the more context we have to be able to assess them in the right way. Looking further into the coverage pillar, we then move on to potential fourth parties as well. Is the third party that we’re looking at just a middleman for another supplier further along the chain? This is an area where we don’t see much coverage from the organizations we assess with the maturity assessment: do we know who’s supporting some of these third-party services? Another important contributing factor, one of the most important on this list I would say, is the third-party maintenance piece. A lot of organizations are assessing entities with a once-and-done type approach, where they assess a vendor and never go back and repeat the process of profiling them, tiering them, and making sure they have the right contact information to engage with them if they need to. But that is essential here as well, to making sure we are assessing vendors in the right way. There’s nothing stopping a tier-three vendor one year from advancing to being a tier-one vendor a later year, because the scope of services might have increased. So making sure we perform that type of third-party maintenance is going to be critical as well.
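The point-of-onboarding profiling and tiering that Joe describes can be expressed as simple, repeatable logic. The sketch below is a hypothetical example: the profile questions, thresholds, and tier labels are assumptions for illustration, not a specific published methodology.

```python
# Minimal sketch of point-of-onboarding profiling and tiering logic.
# Questions, thresholds and tier labels are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VendorProfile:
    handles_personal_data: bool   # does the vendor store or process our data?
    system_access: bool           # does the vendor connect to our systems or network?
    business_critical: bool       # would an outage materially disrupt operations?
    annual_spend: float           # rough contract value, used as a secondary signal

def assign_tier(profile: VendorProfile) -> int:
    """Return 1 (highest scrutiny) to 3 (lowest) based on the inherent-risk profile."""
    if profile.handles_personal_data or profile.system_access:
        return 1                       # e.g. a cloud provider hosting our data
    if profile.business_critical or profile.annual_spend > 250_000:
        return 2
    return 3                           # e.g. a stationery supplier

# The tier then drives the treatment: questionnaire depth, monitoring, on-site reviews.
data_host = VendorProfile(True, True, True, 500_000)
stationery = VendorProfile(False, False, False, 5_000)
print(assign_tier(data_host), assign_tier(stationery))  # 1 3
```

Re-running the same logic during periodic third-party maintenance is what catches the tier-three vendor that has quietly become a tier-one.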
Alastair: Thank you, Joe. So it certainly sounds like, from a good-practice standpoint, if someone’s going to be building their own audit and validation mechanisms, these are all contributing factors for coverage you’d expect to see at the very least. And certainly, if people would like to use the Prevalent maturity assessment, we do have that available; there’s no associated charge in doing so, and we’ll give you details on how you’re able to complete that later on in this webinar. But certainly these are very useful points as metrics that we can repeatedly look at year on year. I like the term “once and dunce”; I think that’s pretty applicable when it comes to third-party risk management, because it is very much an ongoing, iterative piece. So thank you, Joe. So with that in mind, some interesting observations. Prevalent actually does regular assessments, insights, and research on the third-party space and the maturity of programs, and as we go through some of these key pillars and key metric points today, we will be highlighting some insights from our analysis that actually contribute to regular audit failures and audit findings in third-party programs. One of the more common ones we actually see when it comes to coverage, interestingly enough, is nth party. We know it’s been a very consistent, recurring challenge for people. More often than not, most organizations have challenges enough trying to get a decent vendor inventory up front, let alone being able to expand that inventory to extend to the nth party, the so-called fourth parties and downstream, that Joe was alluding to earlier on. So 79% of the organizations that complete our maturity assessments, and that we do research against, did not have any programs built to accommodate fourth parties. As a trend, we’re seeing that increase and improve, in the sense that people are starting to address it as more and more tools become available to start discovering nth parties, or they’re doing it via assessments, but that seems to be one of the most common contributors impacting the coverage pillar. Okay, Joe, would you mind giving us a bit of insight into content?
Joe: Content, yes. This is one of the key reasons why processes start to become inefficient and we create certain gray areas within risk identification, so it’s a really key one. To start with, we should firstly understand exactly what our assessment components are when we start to look at vendors. Is it that we’re just sending out a question set, and it’s a one-size-fits-all question set where all vendors get the same, or are we starting to apply some logic to how we assess vendors? This is where an assessment framework comes into place. And as I said earlier, when I mentioned some of the real key foundational-layer stuff for a program, the assessment framework is probably one of the first things that should be looked at. How do we know we’re assessing vendors in the right way, and is there some logic we can use to determine which suppliers should get what level of treatment? So the assessment framework should detail the type of questionnaire they might be receiving, whether they’re getting remediated just by a remote session or whether an on-site might be required, and whether any monitoring or threat-management pieces should be contributing to the assessment approach. Paired with that profiling and tiering logic we saw earlier, if we understand the full profile of a vendor up front, then we can make sure this information is applied accurately to those vendors, and that we’re using our resource wisely and investing it in the right areas within our third-party estate. So that’s the first piece here. And then it comes down to the content that we’re loading into our approach when we’re assessing a vendor. A lot of the assessments that we see clients use are the sort of binary yes/no type questions. Are there any ways we could improve our assessment approach to get all the information we need and make this as user-friendly and as streamlined for our vendors as possible? We’ve spent a lot of time internally developing question sets which are user-friendly and as unintrusive as possible, because it’s going to improve our relationships with suppliers if they’re spending less time filling out surveys. And if we apply the right level of consideration when we build out our question sets, then we can make sure we are obtaining all the key information we need to be able to make decisions about whether a third party needs a follow-up, without a back-and-forth series of questions to really get to the meat of the information that we need. So there’s a huge benefit we can get from starting to consider how we invest time into building out question sets: whether we have included guidance, and whether we’ve got the right communication to support the content we’re sending out. Anything we can do to make that process as slick as possible, and get as much quality information back from our third party with as few interactions as possible, is the aim here. One key part of this as well: once we’ve got questionnaire content that we’re happy with and we’ve applied some of those techniques, we need to make sure that the scoring is in place and mature. So have we aligned our question sets to an internal risk appetite?
I think making sure we’ve got that foundational layer of what the business risk appetite is will then allow us to apply that type of logic to the question sets we’re sending out, so that we can get a good output of risk items that genuinely need following up from our side.
Alastair: Thanks, Joe. We’ve had some commentary coming in from the audience here around assessments. We appreciate that there are standardized question sets out there such as the SIG, which are great, but some of the commentary relates to where vendors ultimately push back a central record which is not aligned to anything in particular, or provide a SOC 2 report or something similar, and that’s essentially as far as they get. So the interesting point there is that it’s important to be able to establish a process that converts or adapts whatever literature you’re getting. Maybe it’s evidence from a SOC 2 report, an ISO 27001 Statement of Applicability, etc. It’s about having a process to adapt that into the standardized methodology Joe was talking about: scoring mechanisms and risk mechanisms that are aligned and consistent, so you’re able to adapt and translate. Interestingly enough, when we actually start looking at the trends across these assessments as a whole, disappointingly 52% did not have a standard way to present risk data. What we mean by that is, across the various means and mediums of getting information (SOC 2 audits, the SIG, their own proprietary assessments, monitoring data and so on), they didn’t have a way to standardize and benchmark it, which ties very heavily into Joe’s comments earlier on about making sure you have a consistent mechanism and therefore the ability to establish risk thresholds across the business. That’s extremely important when you’re starting to talk about thousands, tens of thousands, hundreds of thousands of third parties, because, let’s be honest, if they’re not apples for apples, it’s going to make life extremely difficult downstream. So that is sadly one of the highest contributing factors to low maturity results and audit failures when it comes to content specifically, relating to third-party risk. Joe, would you be so kind as to give me some insights into roles and responsibilities?
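One way to picture “a standard way to present risk data” is a common risk record that every evidence source is translated into before reporting. The sketch below is a hypothetical illustration; the record fields, severity scale, and source mappings are assumptions, not a published standard.

```python
# Minimal sketch of translating findings from different evidence sources
# (a SIG answer, a SOC 2 exception, monitoring data) into one standard
# risk record on a common 1-5 severity scale. Mappings are illustrative.

from dataclasses import dataclass

@dataclass
class RiskRecord:
    vendor: str
    source: str      # "SIG", "SOC 2", "monitoring", proprietary assessment, ...
    category: str    # e.g. "access control", "business continuity"
    severity: int    # normalized 1 (low) to 5 (critical)
    detail: str

# Each source gets its own translation into the common scale.
SOC2_EXCEPTION_SEVERITY = {"minor": 2, "significant": 4, "material": 5}

def from_soc2_exception(vendor: str, category: str, level: str, detail: str) -> RiskRecord:
    return RiskRecord(vendor, "SOC 2", category, SOC2_EXCEPTION_SEVERITY[level], detail)

def from_sig_answer(vendor: str, category: str, answer: str, weight: int, detail: str) -> RiskRecord:
    # A "No" on a heavily weighted question maps to a higher normalized severity.
    severity = weight if answer.lower() == "no" else 1
    return RiskRecord(vendor, "SIG", category, severity, detail)

risks = [
    from_soc2_exception("Acme Hosting", "access control", "significant", "Shared admin accounts noted"),
    from_sig_answer("Acme Hosting", "business continuity", "No", 3, "No tested DR plan"),
]
# Because severity sits on one scale, findings from different mediums can be
# compared, benchmarked, and reported against a single organizational threshold.
for r in sorted(risks, key=lambda r: r.severity, reverse=True):
    print(r)
```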
Joe: Yeah, perfect. Thanks, Alastair. So this particular pillar is really interesting, because again it’s going to have a huge impact on efficiency, depending on how well we develop it. Making sure we have the right roles and responsibilities defined for a program is going to be really important: making sure we have all of our processes for carrying out assessments, onboarding suppliers, and managing that assessment process all the way through to remediation and reporting, and making sure we’ve got our roles aligned to those specific areas accurately. When it comes to resource actually performing tasks, we should always look at how we can streamline those processes, because if there’s an inefficiency for one particular vendor, then that problem is only going to get multiplied with every vendor we onboard into the program and have to perform that process on. So I recommend we invest heavily in looking at those processes and seeing how we can streamline them. Is there any automation that can take place? Could we leverage a platform to send the assessments and manage the chasing process, for example? Anything we can gain here is going to be really critical to moving from that scoring of a two that we saw earlier, where we’re just developing a program, up to something that’s more scalable. Also, to support this, we see a lot of issues with role alignment. We might have one particular resource that’s really experienced in risk remediation spending their time chasing up responders for responses on risks they might be discussing, for example. So again, making sure we’re aligning the right resource to the right jobs and tasks is going to improve the efficiency of the program. One thing as well that we don’t see many clients performing on a regular basis is resource forecasting, and this is actually something really easy to do. We should be able to understand pretty early on whether the resource we have within our team is enough to support x number of vendors over the course of a year. We know how long each of our processes take, and we may have streamlined them as well to make sure they are as efficient as possible. So now we should be able to do some basic calculations to work out how much of a scale our team can manage, and performing those types of exercises is going to be really helpful in planning and making sure we’re setting ourselves up for success rather than failure as we start to build out the program, onboard new suppliers, and overall begin to scale. Alastair: Thank you, Joe. I can certainly second, based on the programs I’ve seen, some of the commentary Joe’s had there. So when it comes to allocating the right resource for the right job, we have yet to see an overstaffed TPRM program. You might be the odd unicorn that exists out there, but if you are, you certainly are that unicorn, and congrats to you. But typically what we see is that you have a small team with shared capabilities, and even some quick wins, such as documenting a skills matrix based on who’s doing what.
Not just a RACI of roles and responsibilities, but also understanding, for the variations you might have in your process, who can actually take on what role, and making sure you’re allocating responsibilities accordingly, is very important. And that feeds those resource forecasts that you do, because the reality is you could be overcommitting beyond what you’re able to achieve in one year. That’s a very common issue that we see: people overcommit and essentially misrepresent what they’re able to do in 12 months, and even if you do a significant amount in year one, you end up still looking like you’ve missed targets, which is totally unfair. That is sadly still too common in this space. But what are the most common observations that we are seeing when it comes to roles and responsibilities? At the very top there, lacking a standardized process. This is about establishing a process for operations. Third-party risk management is still a very workflow-heavy activity; tools automate and help make decisions, define thresholds, and so on, but you nonetheless need a process for identifying the information, reacting to it, and articulating and sharing it across the business. 62% did not have a consistently standardized process. 52% had planning shortfalls: when we ultimately looked at capacity planning, the reality is that they would never be able to achieve what they’d committed to the execs, and that sadly undermines all the good efforts they’re making throughout the year. So we strongly recommend looking at resource capacity planning, factoring in the limited information you’re probably going to get from your third parties, and being pragmatic and realistic; a worked example of that calculation follows below. And 59% actually overspent on TPRM resources. What do we mean by that? Joe, I think you touched on it not too long ago: you’ve got, say, senior risk consultants who are well versed in the intricacies of risk management sitting there chasing responses, asking generic questions, and answering generic questions across third parties. That’s not a good use of their time. The reality is there’s going to be a large subset of risks that they need to deal with, so apportioning the right duties to the right people is certainly valid and worthwhile. Joe, please share more on remediation for me.
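The resource forecasting exercise Joe and Alastair describe is, as they say, basic arithmetic: estimated hours per assessment by tier, times planned vendor volume, against available analyst hours. The figures below are illustrative assumptions, not benchmarks.

```python
# Minimal sketch of the resource forecasting exercise described above:
# will the team's capacity cover the planned assessment volume for the year?
# Hours-per-task figures, tier mix, and team size are illustrative assumptions.

HOURS_PER_ASSESSMENT = {1: 12.0, 2: 6.0, 3: 2.0}   # analyst hours per vendor, by tier
PLANNED_VENDORS = {1: 40, 2: 150, 3: 400}           # vendors to assess this year, by tier

ANALYSTS = 1
PRODUCTIVE_HOURS_PER_ANALYST = 1_400                # per year, after meetings, leave, etc.

demand = sum(HOURS_PER_ASSESSMENT[t] * PLANNED_VENDORS[t] for t in PLANNED_VENDORS)
capacity = ANALYSTS * PRODUCTIVE_HOURS_PER_ANALYST

print(f"Demand: {demand:.0f} h, capacity: {capacity:.0f} h")
if demand > capacity:
    shortfall = demand - capacity
    print(f"Overcommitted by {shortfall:.0f} h "
          f"(~{shortfall / PRODUCTIVE_HOURS_PER_ANALYST:.1f} additional analysts needed)")
else:
    print("Plan fits within current capacity")
```

With these assumed numbers the demand is 2,180 hours against 1,400 hours of capacity, which is exactly the kind of planning shortfall the 52% statistic above describes, caught before it is committed to the execs.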
Joe: Remediation. Again, I keep hammering home this point around efficiency, but it really is going to be key here as well, and you even touched on it there with one of the stats you mentioned around inconsistent approaches. One thing we commonly find with remediation is that there’s not a consistent approach leveraged by teams to perform review processes, whether that’s a review of a submission that’s just been returned to us from a vendor, or an actual risk that needs to be reviewed internally. There always seems to be a lack of a standardized, documented approach to support these types of activities. So, things like playbooks: if we’re looking at a question and they answer X or Y, what should we do? What’s our standard response? Actually looking at our requests for evidence and what they provide, and giving some guidance internally on what we should look for to validate that this particular evidence is fit for purpose. The more we can invest in that type of process and document it, the less we have to rely on some of these more expensive resources performing these types of tasks, because we’ve actually documented it; it’s more of a playbook, with some standardized logic to it. And of course, we’re then reviewing everything consistently, which is going to be beneficial to the program. So I recommend we build on that as a dependency, or one of our more heavily weighted areas, for improving maturity within this pillar. Making sure we have aligned a good and maintained risk scoring to the types of risks we’re assessing is also going to be beneficial. We have seen a lot of question sets used within programs where they’re just using a binary approach of, is this in place, yes or no. Without applying that scoring and that weighting of how important a particular question is to the business, it becomes very difficult to prioritize these items for review internally. We want to be able to tell our suppliers, these are the key things we need from you right now as must-haves, rather than these other 50 that might be nice-to-haves. So investing time into making sure your scoring is up to date, maintained, and reflective of the internal risk appetite would be really beneficial for the maturity of the risk remediation approach. Again, a playbook is going to be hugely helpful, and when I perform these exercises of actually debriefing clients on their maturity assessments, I bring this up almost every session: if we invest some time into defining what remediation looks like to you and how we can standardize it, we’re going to hugely improve the efficiencies in that particular area. And again, as Alastair said a second ago, why are we using our expensive resource to manage things that we can document and ask our more junior resource, or different roles, to conduct for us? Any levels of filtering we can apply to those types of processes will increase our efficiency and, overall, the maturity of our program. We discussed, in the last piece, resource management; the same can be applied to remediation. We should be able to grasp what our scale of remediation looks like based on the team we have internally.
How many risks can we manage a day? Do we have guidelines on when our chases are needed to make sure things are in place? Those types of attributes can be really beneficial to maturing the remediation area. And I would say, although these things seem like a heavy investment of time, I actually think you could accomplish quite a lot of this within the scope of, say, a month: building out what risk remediation looks like and standardizing it. From that point onwards, it’s a process of evolving that playbook over time. This is never going to be a one-and-done approach, like I mentioned earlier. If we can define our if-this-then-that logic for risks being identified, and then just build on that and evolve it over time, that playbook is only going to get more and more advanced, and with that, we’re relying on less and less expensive resource to conduct those types of activities. So that’s another really good one for improving efficiency, and it also ties into the last point there around standardization and process. We need everyone to be conducting remediation approaches in the same way: having workflows defined, and considering things like the profile and tier of a vendor, which can obviously impact how we approach remediation. The more we can build on that type of activity, the more maturity we’re going to see in these particular areas.
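Joe’s “if this, then that” playbook can be captured as a small set of documented rules that junior analysts can apply without escalating every finding. The sketch below is hypothetical: the rule names, conditions, and standard responses are assumptions for illustration.

```python
# Minimal sketch of a documented "if this, then that" remediation playbook,
# so triage doesn't depend on a senior risk consultant for every finding.
# Rules, conditions, and standard responses are illustrative assumptions.

PLAYBOOK = [
    # (rule name, test on the finding, documented standard response)
    ("No MFA on remote access",
     lambda f: f["control"] == "mfa" and f["answer"] == "no",
     "Must-have: require an MFA rollout plan within 30 days; escalate to risk consultant"),
    ("No tested DR plan, tier 1 vendor",
     lambda f: f["control"] == "dr_test" and f["answer"] == "no" and f["tier"] == 1,
     "Must-have: request DR test evidence; set a 60-day remediation target"),
    ("Policy older than 2 years",
     lambda f: f["control"] == "policy_age_years" and f["answer"] > 2,
     "Nice-to-have: note for next annual review; no immediate chase"),
]

def triage(finding: dict) -> str:
    """Return the documented standard response for a finding, or a default route."""
    for name, test, action in PLAYBOOK:
        if test(finding):
            return f"{name}: {action}"
    return "No playbook rule matched: route to analyst review"

print(triage({"control": "mfa", "answer": "no", "tier": 2}))
print(triage({"control": "policy_age_years", "answer": 3, "tier": 3}))
```

The playbook starts small and, as Joe says, only gets more advanced as new findings and their agreed responses are added rule by rule.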
Alastair: Thank you, Joe. A common question we get asked when people talk about remediation is that they naturally feel vendors and third parties are non-committal and, in some cases, require a disproportionate amount of investment to get them to buy in and get on with addressing risk. Now, the key thing we normally like to articulate is that pragmatism here is key. More often than not, when we start looking at a third-party program, the tolerance levels and risk thresholds are usually very permissive: they let a large volume of risks get through classified as critical, high, or medium, and that might be disproportionate. It’s usually through a period of tuning and fine-tuning that you start seeing remediation adapt, you start seeing the volumes decrease, and you start really level-setting what your tolerance threshold is as an organization, and that, of course, drives remediation. So it should only really be a tiny subset of your third parties where you’re actually doing active remediation work, against your much larger vendor inventory. So what are the most common observations we tend to see when we’re looking specifically at remediation? The most common, and this is very high, is that 86% had a lack of consistent remediation guidelines. This includes organizations using things like the SIG, their own proprietary content, and passive monitoring across the board. If there are certain risks that exceed your tolerance thresholds, there should be standardized guidance, so that if the risk consultants of the world magically disappeared one day, we would all still have the capability to enact positive change with the third parties. Documenting that is key and, importantly, it’s scalable as we start sharing that knowledge with third parties, with the audit teams, and with our analysts doing the work. Secondly, 59% did not have a consistent risk likelihood and impact scoring model. Whether you’re using the FAIR model, for example, for risk quantification, or traditional risk likelihood and impact models, we would recommend making sure you have a consistent mechanism in place. Regardless of the source of data, passive monitoring, your own assessment methodology, or a SOC 2 report coming in, you should have a consistent risk model which allows you to drive remediation based on tolerance thresholds. But yes, 59%, from our experience, do not have that in place. Joe, I wonder if you’d give us a bit of insight into governance, the last pillar.
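A traditional likelihood-times-impact model with an explicit tolerance threshold is one simple way to get the consistency Alastair describes. The sketch below is illustrative: the scales, labels, and the threshold value are assumptions, not recommended settings.

```python
# Minimal sketch of a consistent likelihood x impact model, with a tolerance
# threshold deciding which risks go to active remediation.
# Scales and the threshold value are illustrative assumptions.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

REMEDIATION_THRESHOLD = 12   # scores above this exceed the organization's tolerance

def risk_score(likelihood: str, impact: str) -> int:
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def needs_remediation(likelihood: str, impact: str) -> bool:
    return risk_score(likelihood, impact) > REMEDIATION_THRESHOLD

# Applied consistently regardless of the evidence source (questionnaire,
# SOC 2 report, passive monitoring), so only the small subset of risks
# above tolerance drives active remediation work.
print(risk_score("likely", "major"), needs_remediation("likely", "major"))      # 16 True
print(risk_score("possible", "minor"), needs_remediation("possible", "minor"))  # 6 False
```

Tuning the threshold over time is the level-setting exercise mentioned above: raise or lower it until the volume of risks flagged for active remediation matches what the organization can genuinely act on.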
Joe: Yeah, sure. This starts with a couple of points around reporting, which I’m investing a lot of time in at the moment internally, developing what’s good and what’s meaningful from a reporting standpoint. It actually becomes quite challenging to provide this type of reporting without some form of system to support the assessment approach. We need an output from our assessments, we need risk registers, and we need scoring mechanisms behind it to be able to collect this data and present it back to the business in a meaningful way, something they can understand and interpret. That becomes quite challenging when we’re working with just Excel documents, or standalone assessments in different files and different areas of our desktop, let’s say. But when we’re using more automated systems, we can start to pull this information together and provide something quite automated, quite quick, and of course we can start to collect data together, and that’s when this stuff becomes a little bit more powerful. So, from an individual reporting standpoint, here we’re referencing how maturely you’re able to report on a particular third party as an output from an assessment. This shouldn’t just be, there were x number of risk items, we’re managing them, and here are our target dates. We should also include any context around who that third party is, what they do for us, who would be impacted if we were to terminate services with them, and what tier they might be. Those types of attributes immediately start to add to the story of the data that sits within this type of reporting. So being able to collect that data together and prioritize the output from our risk review and the assessment content would demonstrate some real maturity from an individual reporting aspect. Additionally, it would be great to be able to pair that with some of the monitoring data you might be using as well, if you’re leveraging passive insights around businesses and whether there are any new emerging threats related to that industry. Again, all of that information paints a better picture of that individual third party and demonstrates maturity within that area. When it comes to program reporting, it again relies on a method of collecting data together. Ideally, we want to be able to see how one vendor stacks up against another vendor, which could be hugely helpful if you were leveraging this approach for things like RFPs. But it’s also going to be really helpful to see trends: how many risks of a particular type are we seeing within the program? Are there sudden spikes or commonalities in risk within the program? And probably one of the most valuable metrics to be able to demonstrate back to the business, to get some value from all the hard work the program has been putting in, is: can we demonstrate risk reduction? That’s something that, yes, organizations can say they’re doing, I would hope, but being able to say it accurately becomes more challenging, and that’s where we then lean on some of the other areas we’ve been talking about today, such as how accurate our remediation approach is.
Whether we can provide some assurance that all of our third-party estate is within scope of our program, and whether we are happy that we’re maintaining third-party information. It’s only when it’s paired with all of that that we can demonstrate with some accuracy that our program is working and functioning successfully. Using that information, of course, we need to be able to demonstrate to other areas of the business where we’re seeing threats, and anything valuable or meaningful to them. As I said a second ago, when we start to aggregate this reporting, we can start to add a bit more value to the other areas of the business. I think it becomes more and more challenging to give definitive answers about risk to the rest of the business when we’re just looking at things on an individual, third-party-by-third-party basis. A couple of quick points here just to mention around maturity. The whole aspect we’ve talked about today is dissecting a program and measuring maturity across each area. We need to make sure we’re doing this consistently. There’s a chart on the right there giving some indication that we should be doing this at least quarterly, which is something I recommend. I think that’s useful for a couple of reasons. One, of course, is that we can hopefully demonstrate some improvements in those areas, but it also makes sure that, if we are assigning objectives and tasks to improve our program, they are being worked on and we are seeing progress. Without these checkpoints in place, other things can take priority over improving third-party program maturity. So ensuring this is brought up time and time again as part of an agenda makes sure we’re accountable and that we are adhering to our tasks to get program maturity increased.
Alastair: Thank you, Joe. And for those of you who are more interested in some of the KPIs and KRIs that we expect to see in a good TPRM program, we have of course got some content on that: previous webinars, which we’ve done with Shared Assessments. So please feel free to have a look at the Prevalent site, and you’ll find a bit more information about that. When we start looking at governance, as Joe rightly mentioned, good practice dictates consistency. One of the more common issues we see related to this is people misrepresenting reporting based on their scope. So they say, you know, 80% of our vendors have this result, yet they’re only assessing 5% of their portfolio, their vendor inventory. It’s misleading data and can ultimately lull audit and the rest of the business into a false sense of security. You’d expect the audit team to unpick some of these pretty quickly; they’d be able to identify scope coverage versus the results you’re getting. So please do bear that in mind. When we look at governance specifically, we did notice that 69% were missing strategic reporting opportunities, where they could use the information from the TPRM program to drive positive business outcomes: feeding and sharing it with the broader business, for example with the privacy team, with procurement to demonstrate a vendor’s compliance or non-compliance or to help contract renegotiations, or sharing it with legal and, of course, compliance for regulatory obligations. There are multifaceted benefits to those data sets, either to secure more budget or to support another part of the business. Secondly, 59% struggled to gain an overall view of third-party risk, and this is purely because there’s a lack of consistency across their programs, or they’re not able to articulate the data set in a way that the execs, the steering committee, and so on ultimately see outputs from. So we recommend making sure you have reporting mechanisms, KPIs, or KRIs that help articulate the progress of the program. So, what’s next? And then we’ll give you a bit more information specifically on maturity assessments: how you can either craft your own or leverage the Prevalent offering, and how you can ultimately get access to that. Just to reiterate, some of these findings are from the broad set of maturity assessments that Prevalent has done globally, across multiple organizations of multiple disciplines, verticals, and sizes, so it’s an interesting, broad data set. So, very briefly, what next? We now have some insights into the types of information that we, or audit, would expect to see across those common pillars. The very first thing we would recommend is prioritizing improvements. If you’ve been able to document your baseline list of contributing factors across your own pillars, or of course by using the Prevalent maturity model, the maturity assessment, we recommend building up and documenting your weaknesses, the areas for improvement. If you understand the impact of those based on your portfolio, your organization, and your vendor inventory, you can then start looking at what needs to be addressed first. A contributing factor to that is the level of effort.
So ideally, you’d be offsetting the level of effort against the benefit and the risk associated with it, and that will give you your triaged, prioritized list of actions that you could undertake. Invariably, you’re going to see some quick wins, probably less around the vendor inventory and more around the operational changes and processes, and potentially even content. Once you’ve done that, you have the opportunity to start looking at planning and ownership: allocating realistic timescales against those actions, allocating them to the individuals who have the right subject-matter expertise, and then defining what your actual objectives are. We want to increase by 0.8 for this pillar and 1.2 for that pillar over the next 12 months, and here are the actionable, measurable efforts we can undertake in order to get there. When it’s compartmentalized into pillars, when it’s compartmentalized into phases, and you’ve applied a typical project-based approach to it, it’s going to make your life far easier to demonstrate that to audit year on year as you go through those audit cycles. So before we hand over to Scott, I just wanted to hand back to Amanda from our side to do one more quick poll. Amanda?
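The effort-versus-benefit triage Alastair describes can be reduced to a simple ranking exercise. The sketch below is purely illustrative: the improvement actions, pillar labels, and scores are assumptions, not recommendations.

```python
# Minimal sketch of the triage step described above: rank improvement actions
# by benefit relative to effort so quick wins float to the top.
# Actions, scores, and the scoring scheme are illustrative assumptions.

improvements = [
    # (action, pillar, benefit 1-5, effort 1-5)
    ("Document a remediation playbook",        "remediation", 4, 2),
    ("Define onboarding/tiering workflow",     "coverage",    5, 3),
    ("Build nth-party discovery process",      "coverage",    3, 5),
    ("Add risk-appetite weighting to surveys", "content",     4, 3),
    ("Create a RACI and skills matrix",        "roles",       3, 1),
]

# Higher benefit and lower effort rank first: these are the quick wins.
ranked = sorted(improvements, key=lambda i: i[2] / i[3], reverse=True)

for action, pillar, benefit, effort in ranked:
    print(f"{benefit / effort:.2f}  [{pillar}] {action}")
```

Grouping the ranked actions by pillar and by phase then gives the project-style plan and the per-pillar targets (the 0.8 and 1.2 increases mentioned above) something measurable to hang off.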
Amanda: Hi. Okay, so I have one more quick poll here. Thanks so much. I’m going to launch it right now. So, a very simple, straightforward question: are you looking to augment or establish a third-party risk management program in the coming months? Are you researching doing something for 2023? Is this one of the reasons why you’re here? If you want to have a conversation with us, myself, or people on my team here, please answer honestly. Please say yes, and we will follow up with you. That is what we do. So I’m going to leave it up for a little bit. The quicker you answer, the quicker it goes away. It’s a fun little, you know, get-it-out-of-my-way, let’s-keep-moving thing. So with that, I’ll leave it up, and I’m going to hand it over to Scott Lang.
Scott: Hey, thanks very much, Amanda. Alastair, you can advance to the next slide, please. You know, everything you heard from Alastair and Joe today was about helping to progress your program from whatever your point A is now to what your desired point B or point C is in the future. And we see a lot of challenges that organizations face in getting there. Alastair, you can even advance to the next slide to accelerate the presentation a little bit more. But the three things we see organizations wanting to achieve, the things that kind of bubble to the surface, are getting better data to make good decisions; increasing team collaboration across different silos in the enterprise, because everybody kind of has a hand in third-party management of some sort; and then evolving and scaling their programs over time. Those three areas are precisely what Prevalent does to help grow and mature a third-party risk management program over time. On the next slide, you can see our prescriptive approach to success. We only look at third-party risk management from the perspective of a lifecycle, because every distinct phase of that relationship lifecycle presents unique risks, and if you’re shortchanging yourself on the front end or the back end of that lifecycle, you’re not getting a holistic view of what your organization can do to improve its position and its maturity with regard to these various risks. I won’t discuss these things in great detail, because I would like to open it up to questions, but you can see that, at the end of the day, what we’re trying to achieve is to give you the tools you need to help you simplify and speed up onboarding, give you a programmatic process, streamline the process of assessing vendors and closing risk gaps, and then unify your teams over time. Next slide, please, Alastair. Our delivery of solutions is a unique combination, compared with other players in the industry, of the people, the data, and the platform: the experts we have on staff to help you design, build, and mature your program over time; an unbelievable amount of data that goes into the calculation of risks and the measuring of your progress over time; all housed, controlled, and managed in an award-winning platform that allows you to centralize your tasks and management and improve your processes over time. Next slide, Alastair, please. And we’ve got a set of use cases that we address with the platform, kind of carved out by department or area here, whether you’re aligned with procurement, IT security, data privacy, or legal. I won’t dwell on this; you’ll have this after the presentation today, but suffice to say that we address a significant family of risks across the board, not just traditional cybersecurity and IT risk, although that’s the predominant one; we can also help organizations address risks of the non-IT types as well. Next slide, please, Alastair. Which really gets down to a few questions that I’ve seen pop up in the question window so far: what can Prevalent do to help, and how can you take advantage of a very prescriptive maturity assessment to help you understand where you are now and where you want to be in the future?
We do recommend that you engage directly with Prevalent, and in the follow-up email to this webinar, when we send the recording out to you tomorrow morning, we’ll include a copy of the presentation, plus a link for you to register for a brief conversation with our experts, which can help kind of diagram what the next step would be with regard to conducting that maturity assessment. I don’t want to throw you right into it. It is a very disciplined, comprehensive, and well-thought-out process: 45 questions or so across the five levels and multiple different domains, and it results in the production of an action plan for program improvements. So it tends to be a guided exercise, where we walk you through it and explain the process throughout, so that you maximize your output on the other end. So, net-net: you’ll get an email from us tomorrow with the recording of this presentation. In it will be the link to register for more information about the maturity assessment. We’ll follow up and then guide you through that process. I think that’s all I had to share with you, Alastair.
Alastair: That’s lovely. Fantastic. So, just to reinforce what Scott says: absolutely. Hopefully you’ve been able to pick up, from the conversations today, some of the key metrics and criteria that we know audit are looking for and would want to see addressed in a common program. Whether you’re using those benchmarks yourself to self-assess, or using the Prevalent maturity assessment as your own third-party benchmark, those are definitely good common points. We reiterate again that there are those five core pillars that we certainly recommend people cover off. So if you’re starting to look at, say, your roles and responsibilities, remediation, governance, content, and coverage, you’ll certainly be in good stead to make sure you’ve covered off all the most common areas. We’ve certainly done our best today to weave in questions dynamically as they’ve come in through the chat, but of course, if you have any other questions whatsoever, do feel free to reach out to us and we’ll be happy to elaborate after the session itself. But failing that, I wanted to say thank you very much for today, and I’ll hand back over to Amanda.
Amanda: Yeah, I’m just going to say the same exact thing. Thank you all so much for attending. We are two minutes before the top of the hour, so we’ll give those back to you. You ever hear that joke? It’s like, “Oh, my life’s going to change now in these two minutes.” Thanks so much. But we’re going to give them back to you anyway. We hope to see you at the next one. If you have any other questions, please do reach out to us. We will be reaching out to you, and always respond. Check your spam, because we tend to fall in there, so if you’re waiting for something from us, always look there. A tale as old as time; you already know the drill. All right, thanks so much, everyone. Have a great rest of your day. Bye.
Unidentified Speaker: Thank you.

©2025 Mitratech, Inc. All rights reserved.
