Description
Even if you continuously monitor vendor risk, you still need a reliable view of the maturity of your third-party risk management (TPRM) program, so you can keep improving it to meet your organization's risk objectives.
In this webinar, audit experts Alastair Parr and Joe Toley share their best practices for assessing TPRM program maturity.
They review industry-standard benchmarks, including vendor estate coverage, the efficiency of roles and responsibilities, assessment content optimization, risk management maturity, and the effectiveness of program governance.
Join Alastair and Joe as they explain:
- How to use these benchmarks to evaluate the effectiveness of your TPRM program
- The key things you need to know
- Thresholds for the key benchmarks
- How to take your TPRM program to the next level
By understanding where your program stands relative to best practice, you will gain a clear picture of your TPRM program's current maturity and of where to take it next.
Speakers
Joe Toley
Compliance Expert
Alastair Parr
Compliance Expert
Transcript
Alastair: Thank you very much, Amanda. Amanda: You're welcome. Alastair: So first, Joe, introduce yourself. Who are you, why are you here, and why are you talking to us? Joe: Thanks, Alastair. I'm a Director of Programs at Prevalent, and I've spent a long time helping clients develop their programs. I've also looked closely at methods for assessing third-party program maturity, so today I hope to contribute some insight into what makes up a mature program. Alastair: Great, thanks, Joe. Hello everyone, I'm Alastair Parr, SVP of Products and Delivery at Prevalent. I have decades of experience auditing third-party programs, covering key risk areas and risk controls, with a fair amount of oversight experience at the governance level in particular. I'll try to blend in my own perspective, and I'll also draw out useful insights from what Joe and Scott share. Scott, over to you.
Scott: You won't get much out of me today, I'm afraid, Alastair; I'm just the marketing guy on this call. I'm VP of Product Marketing at Prevalent, and my job is to pull together our customers' practical experience and the company's expertise and share it with all our customers through published content and regular best-practice guides. Frankly, I just spell-check Alastair's and Joe's copy, and there you have my contribution. Happy to be part of this session.
Alastair: Scott is without doubt the most modest of the four of us, as you can probably tell. So what are we covering today? A quick preview: our focus is a deep dive into some of the core elements that underpin a good audit program, specifically for third-party risk. Along the way we'll dig into a few key questions. Today's through-line is the maturity assessment. We use a security-style assessment as the audit mechanism because it is repeatable, consistent, and provides suitable benchmarks. Over the course of today's discussion we'll answer a few common questions. We'll look at how to effectively validate that a third-party risk management (TPRM) program is actually working, review the key benchmark metrics associated with TPRM programs, and see how maturity assessment results place a given program on the mainstream maturity models. We'll break down the key benchmark thresholds that make up a maturity assessment and offer practical guidance on how to level up. As Amanda said, there will be a Q&A session later. If you have questions, drop them into the Q&A panel at any time; we'll try to answer them as we go, or save them for the end. So, as an overview: Joe, what exactly is a third-party program maturity assessment?
Joe: Sure. That's a good place to start. It's really a way of understanding where your third-party program currently stands. We see a lot of organizations rush to create and build a program and just hope it works. A maturity assessment lets us step back and ask: have we really thought about all the key foundational elements that make up a good program? Did we miss anything when building the scaffolding that supports it? With a scoring mechanism we can see both where we are today and where we could get to. Alastair: So Joe, the key question: why should I care? Joe: Good question. Once the scoring for a maturity assessment is in place, we know where we stand, and then we can start planning. Say we score a 2 on the maturity scale: we need to understand why, which areas are succeeding and which are falling short, then assign internal ownership, push the program to the next level, and assess the impact on the overall program. When we look at the program as a whole, we shouldn't just focus on assessing more vendors or reducing risk; we should also work on making the process more efficient. That's usually the piece organizations miss most when building a program: building something that scales. As the number of vendors keeps growing, we have to find efficiency gains early so we keep reaping the benefits as the program scales over the long term.
Joe: One more thing I'd add: when we design these question sets and assess maturity, we see a common pattern where organizations want to score as high as possible, when really you should do the opposite. When filling out this kind of assessment, you should always lean toward immaturity rather than maturity. That way, when you find gray areas or specific weak spots, you can keep watching them and improve over time. Another key point: if the program runs this kind of assessment on an ongoing basis (ideally quarterly, at least annually), the data has to be comparable. You need to be able to look back at last year: what did we score then? Were the answers honest? Did we answer in a binary way? All of that affects the comparability of later assessments. Those are the key things I'd think about as you sit down to fill out the assessment or start the journey. Alastair: Thanks, Joe. That's really interesting. A question I often get asked is: that's great, but how does a 1-to-5 benchmark model based on the Carnegie Mellon Capability Maturity Model actually work?
Alastair: But what do you typically consider average? You mentioned people should start from a position of immaturity up front and expect some degree of incompleteness in the program and its pillars. So what does a good state look like for a year-one program? Joe: Yes, for our particular model, the most common scoring range is between 2 and 3. That represents a program that is developing and becoming scalable. It's typical because the basic architecture of the program is largely in place, but there's no validation mechanism for improving scalability and efficiency, which is exactly why those programs haven't reached the scalable and optimized states. Typically, a program scoring 2 to 3, after a year of systematic development targeting those weak spots, should be able to move up into the 3-to-4 range. If you keep doing the right things, running the assessment quarterly and putting internal accountability in place so the team makes real progress in those areas, I think reaching 3 to 4 is entirely achievable.
Alastair: Very interesting. Joe, what's the best you've seen? Joe: The best I've seen falls into the late-stage category. But when analyzing these kinds of metrics, you also have to consider the actual scope of the assessment. Some organizations work with a very small number of third parties, so they can invest a lot of time perfecting these areas. As a program scales, you may even see some scores drop, because the scope has grown. Maybe newly issued regulatory requirements have to be folded into the assessment. So while we do see some organizations score very high, the real challenge is maintaining those scores as the program's scope expands. Alastair: Understood. Thank you, Joe, very helpful. Which leads me to a question: what are the key factors that actually drive a program to a high score? High-scoring programs, high-scoring metrics and so on. How do you go from a relatively immature program to a more complete one? What elements make that shift happen?
Joe: Right, we should break the program down into smaller components, so we get a more granular view, a real insight into which parts of the program can be optimized and matured. That's why we use a maturity assessment to examine specific assessment dimensions, which we'll get into shortly. But as I mentioned, in the scoring step we have to make sure there are clear, explicit metrics behind each question. For example, when evaluating questionnaire content: to judge the maturity of a particular question group, dig into that group and be clear about what to improve at the quarterly review. We've touched on much of this already. Complete the question set using a binary approach, so the way the assessment's output score is derived is standardized. As I said, we need like-for-like comparisons to really understand maturity improvement. One more key point: when assessing against the maturity model, we should also try to prioritize the outputs. That means we can't treat everything in the program as equal. Some areas matter more than others, and some dependencies need to be established first; that's exactly where we need to build more intelligence into the assessment content.
Alastair: Thank you, Joe. You mentioned pillars. Pillars certainly sound supportive, and from a benchmarking standpoint they're genuinely useful. So I'd love to hear more, Joe: what exactly are these pillars? Joe: Sure, of course. Thanks. When we start assessing a program, we need to break it down into specific areas so we can see strengths and weaknesses. The most sensible breakdown, given the type of assessment, is the pillars in front of you. The five pillars are: coverage, content, roles and responsibilities, remediation, and governance. Together they support the whole program. Each pillar contains a set of questions and corresponding maturity levels. The advantage of this breakdown is that it takes us beyond a single overall program score to real insight into where we are strong and weak. I think assessment tools that only give an overall maturity score lose that visibility and granularity. Even if you get a score like 2.94, it's hard to say what's good or bad about it. Only by breaking it down, as in the chart in front of you, can you clearly see the specific strengths and weaknesses that make up the overall score. Since we're on this slide, it's probably worth briefly explaining why we chose these pillars and outlining what each one means. Coverage asks: how much of our third-party estate are we actually assessing or covering with this program? Or, more accurately, how much do we think we're covering? Are we assessing all our third parties? Do we have a solid onboarding process that ensures new third parties are brought into the program promptly? Are we keeping that third-party information maintained? Then there's content, the questionnaire design: are the assessments or questionnaires we send sufficient to achieve the depth of assessment we need? Then roles and responsibilities: have we clearly defined and documented the key roles in the program, and has everyone been trained? Then remediation, which focuses on risk and review processes; and finally governance, which focuses mainly on reporting and maintaining audit evidence to ensure the program is operating effectively.
Alastair: Interesting. Joe, my first question is how these five core pillars feed into the overall score. Are they weighted equally? Joe: Well, that's a good question. As I mentioned, some parts of the program depend on others. Just as you can't learn to run before you learn to walk, those foundational areas carry more weight; they are the basics you have to get right to scale. For example, on roles and responsibilities, you shouldn't simply throw huge amounts of resource at completing assessment work. We should look at workflows and training and make sure the process runs efficiently; those areas are typically worth more than the later stages.
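Joe's point about unequal pillar weights can be sketched as a weighted average of per-pillar scores. The weights below are invented for illustration only; the actual weighting behind Prevalent's assessment is not disclosed in this discussion.

```python
# Hypothetical pillar weights; heavier weights on the foundational pillars,
# as Joe describes. Scores are on the 1-5 maturity scale.
PILLAR_WEIGHTS = {
    "coverage": 0.25,
    "content": 0.25,
    "roles_responsibilities": 0.20,
    "remediation": 0.15,
    "governance": 0.15,
}

def overall_maturity(pillar_scores: dict) -> float:
    """Weighted average of per-pillar maturity scores (each 1-5)."""
    total = sum(PILLAR_WEIGHTS[p] * s for p, s in pillar_scores.items())
    return round(total / sum(PILLAR_WEIGHTS.values()), 2)

scores = {"coverage": 3, "content": 2, "roles_responsibilities": 3,
          "remediation": 2, "governance": 2}
print(overall_maturity(scores))  # 2.45
```

Breaking the single number down by pillar, as Joe recommends, simply means reporting `pillar_scores` alongside the aggregate rather than the aggregate alone.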
Alastair: Thanks, Joe. Building on that, it seems sensible to dig into each of these pillars. So Joe, start with coverage, because it's a key topic when looking at vendor inventories and third-party risk. Joe: Sure, no problem. I think the core goal of the coverage pillar is to make sure we assess all our third parties, so no potential risk in the supply chain slips through unexamined. The pillar looks at the processes that support that goal. Do we have visibility of all our third-party vendors? Do we have a vendor inventory? Do staff know how to use the process? When a vendor requests a new service, do staff know which team to submit a new vendor onboarding request to? Essentially this is about plugging holes in the program and making sure potential blind spots for risk exposure are managed. Once coverage is in place, we can start identifying and assessing vendors in the right way. The chart on the right shows the kind of vendor triage logic some organizations might use. The core goal is to know where to focus effort when assessing vendors. For example, faced with a stationery supplier versus a data hosting provider, we want the latter to get a deeper, more contextual assessment. So vendor profiling and tiering exist to establish, right at the start of a relationship, what a vendor fundamentally is. The earlier you gather as much information as possible, the more context you have to assess them the right way. Digging further into coverage, we also need to look at potential fourth parties. These are essentially the parties behind the parties, providing services to vendors further down the chain. In maturity assessments we find most organizations under-cover this: they don't know which organizations are underpinning their third-party services. Another key factor, and I'd say the most important item on this list, is third-party maintenance. Many organizations take a one-and-done approach: they run an initial vendor assessment and then never repeat the profiling or tiering, and never make sure they hold the contact channels they'll need later. But this matters just as much; it's the only way to keep assessing vendors in the right way. After all, nothing stops a tier-3 vendor from becoming a tier-1 vendor next year, since the scope of their services may well have grown. So implementing this kind of vendor maintenance is just as critical.
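The profiling-and-tiering triage Joe describes (stationery supplier versus data host) amounts to a small decision rule. The attributes and thresholds below are hypothetical, not the logic shown on the webinar slide:

```python
# Illustrative tiering rules: tier 1 gets the deepest, most contextual
# assessment; tier 3 gets a light-touch screening.
def tier_vendor(handles_data: bool, business_critical: bool,
                annual_spend: float) -> int:
    """Return a tier from 1 (highest scrutiny) to 3 (lowest)."""
    if handles_data or business_critical:
        return 1  # e.g. a data hosting provider
    if annual_spend > 100_000:
        return 2  # standard questionnaire
    return 3      # e.g. a stationery supplier

print(tier_vendor(handles_data=True, business_critical=False, annual_spend=5_000))   # 1
print(tier_vendor(handles_data=False, business_critical=False, annual_spend=2_000))  # 3
```

Re-running this rule periodically over the inventory is exactly the "maintenance" point above: a tier-3 vendor whose attributes change should be promoted automatically at the next review.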
Alastair: Thank you, Joe. From a good-practice standpoint, if someone is building their own audit and validation mechanism, those are certainly the essential components of coverage, at minimum. And as you described, if someone would rather use an existing maturity assessment, we do offer that, at no charge; we'll explain how that works later in the webinar. These metrics really are valuable as a baseline to track year over year. I particularly like the phrase "one-and-done"; it captures the nature of third-party risk management exactly, because this really is a dynamic process that needs continuous iteration. Thanks, Joe. On that note, we've seen some findings worth highlighting. Prevalent regularly runs assessments, analysis, and research on the third-party space and on program maturity. As we discuss today's pillars and key metrics, we'll highlight several insights from that analysis, and these are exactly the common root causes of failed third-party program audits and audit findings. One of the common coverage problems, somewhat ironically, is the "Nth party" problem. We know this remains a stubborn challenge. Most organizations struggle just to build a basic vendor inventory, let alone extend it to the fourth parties and further Nth parties Joe mentioned earlier. Of the organizations that completed our maturity assessment and were included in the research, 79% had no program in place for handling fourth parties. But with new tools emerging for discovering downstream parties, and with assessment-based approaches, this area is improving, even though the coverage pillar is still significantly dragged down by it. Okay, Joe, can you share your thoughts on content?
Joe: Content. Yes. Well, one of the key reasons processes start to become inefficient is that we create gray areas in risk identification. So the first key point is being clear about what actually goes into assessing a vendor. Do we simply send the same generic question set to every vendor, or do we start applying logic to how we assess them? This is where an assessment framework comes in. As I said earlier, when it comes to the truly critical foundational layers of a program, the assessment framework is probably the first thing to think about. How do we make sure we're assessing vendors in the right way? Is there logic we can use to determine what level of treatment a vendor should get? The framework should define what type of questionnaire a vendor might receive, or how remediation happens, whether over a remote session or requiring an on-site visit; whether monitoring or threat management should be folded into the assessment; all combined with the tiering logic, vendor profiling, and tiering mentioned earlier. If you have a full picture of a vendor up front, you can apply information accurately per vendor while allocating resources sensibly, directing effort at the areas of your third-party estate that matter most. That's the first layer of logic. The second key point is the methodology content you use when assessing vendors. The style most clients use today is binary, yes/no questions. We think about how to improve the approach so we get all the information we need while making the experience as easy and smooth as possible for the vendor. We put a lot of effort into designing user-friendly question sets to cut the time vendors spend filling in questionnaires, which improves the relationship on both sides. If you put the right amount of thought into questionnaire design, you can capture all the key decision-making information, for example whether a third party needs a follow-up assessment, without repeated back-and-forth to dig out the essentials. So thinking carefully about how you invest time in building the question set pays off enormously. Whether it's adding guidance notes or making sure the communications accompanying what you send are appropriate, anything that makes the process as smooth as possible and gets high-quality information from the third party with minimal interaction is what we're after. The other key piece: once we're happy with the questionnaire content and have applied those techniques, we have to make sure the scoring mechanism is complete and mature. That means the question set needs to align with internal risk appetite. Once you've established that baseline of business risk appetite, you can apply its logic to the questionnaires you send out and filter down to the risk items that genuinely need follow-up.
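Aligning question scoring to risk appetite, as Joe describes, can be sketched as weighted questions filtered against a tolerance threshold. The question names, weights, and threshold below are invented for illustration:

```python
# Each question carries a weight reflecting its importance to the business.
QUESTIONS = {
    "encrypts_data_at_rest": 5,
    "has_incident_response_plan": 4,
    "staff_security_training": 2,
}
RISK_APPETITE_THRESHOLD = 6  # total weighted exposure we tolerate

def weighted_exposure(answers: dict) -> int:
    """Sum the weights of every control the vendor answered 'no' (or not at all)."""
    return sum(w for q, w in QUESTIONS.items() if not answers.get(q, False))

def needs_follow_up(answers: dict) -> bool:
    """Only vendors exceeding the appetite threshold get a follow-up assessment."""
    return weighted_exposure(answers) > RISK_APPETITE_THRESHOLD

answers = {"encrypts_data_at_rest": True,
           "has_incident_response_plan": False,
           "staff_security_training": False}
print(weighted_exposure(answers), needs_follow_up(answers))  # 6 False
```

The filtering step is the payoff Joe mentions: binary answers alone can't tell you which vendors deserve follow-up; weights tied to appetite can.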
Alastair: Thank you. Joe, we've had some audience feedback on the assessment step. We understand the value of existing standardized question sets such as the SIG, but some feedback points out that vendors end up pushing assessment results into a central system of record that maps to no specific standard, or they simply hand over something like a SOC 2 report and stop there. The key is whether you can establish a process to convert or adapt these various documents, whether that's evidence such as a SOC 2 report or an ISO 27001 Statement of Applicability, into the standardized methodology Joe described. That requires a scoring mechanism and a risk mechanism that are both aligned and consistent, so the data can be adapted and converted. Notably, when we systematically analyzed these assessments, a disappointing 52% did not present risk data in a standardized way. Specifically, the channels through which different types of risk data were obtained (audit reports, SIG-specific assessments, monitoring data, and so on) varied enormously, and organizations could neither unify the data format nor establish a baseline. That aligns exactly with the core point Joe made earlier: you need a unified mechanism so you can set comparable risk thresholds across the whole business. When you're dealing with thousands, tens of thousands, or even hundreds of thousands of third parties, this matters enormously; frankly, without comparability, everything downstream becomes very painful. Unfortunately, this is exactly one of the leading causes of poor maturity scores on content and of failed third-party risk audits. Joe, could you give us some insight on roles and responsibilities?
Joe: Yeah, perfect. Thanks, Alastair. So this particular pillar is really interesting because, again, it's going to have a huge impact on efficiency: how well we develop this particular area. Making sure we have the right roles and responsibilities defined for a program is really important. We need processes for carrying out assessments, onboarding suppliers, and managing the assessment process all the way through to remediation and reporting, and we need our roles accurately aligned to those specific areas. When it comes to the resource actually performing tasks, we should always look at how we can streamline those processes, because an inefficiency for one particular vendor gets multiplied with every vendor we onboard into the program and have to run that process for. So I recommend investing heavily in reviewing those processes and seeing how to streamline them. Is there any automation that could take place? Could we leverage a platform to send the assessments and manage the chasing process, for example? Anything we gain here is critical to moving from the score of two we saw earlier, where we're just developing a program, up to something more scalable. Also, to support this, we see a lot of issues with role alignment. For example, a resource who is highly experienced in risk remediation might be spending their time chasing responders for answers on the risks under discussion. So again, aligning the right resource to the right jobs and tasks will improve the efficiency of the program.
One thing we don't see many clients doing on a regular basis is resource forecasting, and it's actually quite easy to do. We should be able to understand pretty early on whether the resource we have within our team is enough to support, say, X number of vendors over the course of a year. We know how long each of our processes takes, and we may also have streamlined them to make sure they're as efficient as possible. So we should be able to do some basic calculations to work out how much scale our team can manage, and performing those kinds of exercises is really helpful for planning and for setting ourselves up for success rather than failure as we build out the program, onboard new suppliers, and begin to scale. Alastair: Thank you, Joe. That certainly squares with the programs I've seen. When it comes to allocating the right resource for the right job, we have yet to see an overstaffed TPRM program. You might be the odd unicorn out there, and if so, congratulations. But typically what we see is a small team with shared capabilities, and even some quick wins, such as documenting a skills matrix based on who's doing what. Not so much a RACI of roles and responsibilities, but understanding, for the variations in your process, who can actually take on which role and allocating responsibilities accordingly, which is very important. That feeds into the resource forecasts you do, because the reality is you could be overcommitting beyond what you can achieve in one year. That's a very common issue: people overcommit and essentially misrepresent what they can do in 12 months.
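The "basic calculations" Joe mentions for resource forecasting might look like the sketch below. The effort figures per tier, analyst hours, and working weeks are all assumptions, not benchmarks from the webinar:

```python
# Back-of-the-envelope capacity forecast for a TPRM team.
HOURS_PER_ASSESSMENT = {1: 12.0, 2: 6.0, 3: 2.0}  # effort by vendor tier (assumed)

def annual_capacity(analysts: int, hours_per_week: float = 30.0,
                    weeks: int = 46) -> float:
    """Analyst-hours available for assessment work in a year."""
    return analysts * hours_per_week * weeks

def hours_required(vendor_counts: dict) -> float:
    """Hours needed to assess the committed vendor mix once."""
    return sum(HOURS_PER_ASSESSMENT[t] * n for t, n in vendor_counts.items())

plan = {1: 40, 2: 150, 3: 400}   # vendors per tier committed for the year
need = hours_required(plan)       # 40*12 + 150*6 + 400*2 = 2180 hours
have = annual_capacity(analysts=2)  # 2 * 30 * 46 = 2760 hours
print(need, have, need <= have)
```

Running this before committing targets to the executives is exactly the check Alastair describes next: if `need` exceeds `have`, the plan is an overcommitment before the year even starts.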
And even if you do a significant amount in year one, you still end up looking like you've missed targets, which is totally unfair. That is sadly still too common in this space. So what are the most common observations we see around roles and responsibilities? At the very top there: lacking a standardized process. This is about establishing a process for operations. Third-party risk management is still a very workflow-heavy activity. Tools automate and help make decisions, define thresholds, and so on, but you nonetheless need a process for identifying the information, reacting to it, and articulating and sharing it across the business. 62% did not have a consistently standardized process. 52% had planning shortfalls: when we ultimately looked at capacity planning, the reality was that they would never be able to achieve what they'd committed to the execs, which sadly undermines all the good work they do throughout the year. So we strongly recommend doing resource capacity planning, factoring in the limited information you're probably going to get from your third parties, and being pragmatic and realistic. And 59% actually overspent on TPRM resources. What do we mean by that? Joe touched on it not long ago: you have, say, senior risk consultants who are well versed in the intricacies of risk management sitting there chasing responses and asking and answering generic questions across third parties. That's not a good use of their time. The reality is there's a large subset of risks they need to deal with, so apportioning the right duties to the right people is certainly valid and worthwhile. Joe, please share more on remediation.
Joe: Remediation. Again, I keep hammering home this point about efficiency, but it really is key here as well. You even touched on it with one of the stats you mentioned around inconsistent approaches. One thing we commonly find with remediation is that teams don't use a consistent approach to their review processes, whether that's reviewing a submission just returned by a vendor or an actual risk that needs internal review. There always seems to be a lack of a documented, standardized approach to support these activities. Things like playbooks: if we're looking at a question and they answer X or Y, what should we do? What's our standard response? Looking at our requests for evidence and, where vendors provide it, giving some internal guidance on what to check to validate that the evidence is fit for purpose. The more we invest in that type of process and in documenting it, the less we have to rely on expensive resources to perform these tasks, because it's documented; it's a playbook with standardized logic behind it. And of course everything is then reviewed consistently, which benefits the program. So I recommend building on that as a dependency, one of the more heavily weighted areas for improving maturity here. Making sure we maintain good risk scoring, aligned to the types of risks we're assessing, is also beneficial. We've seen a lot of question sets used within programs that just take a binary approach: is this in place, yes or no?
Without applying scoring and weighting to reflect how important a particular question is to the business, it becomes very difficult to prioritize items for internal review. We want to be able to tell our suppliers: these are the key things we need from you right now as must-haves, rather than these other fifty that might be nice-to-haves. So investing time in making sure your scoring is up to date, maintained, and reflective of your internal risk appetite is really beneficial for maturity in the risk remediation approach. Again, a playbook is hugely helpful. When I debrief clients on their maturity assessments, I bring this up almost every session: if we invest some time in defining what remediation looks like to you and how to standardize it, we will hugely improve efficiency in that area. And as Alastair said a second ago, why use your expensive resource to manage things you can document and hand to more junior resources or different roles? Any level of filtering you can apply to those processes will increase efficiency and the overall maturity of the program. We discussed resource management in the last section; the same applies to remediation. We should be able to grasp what our scale of remediation looks like based on the team we have internally. How many risks can we manage a day? Do we have guidelines on when chases are needed to make sure things land on time? Those kinds of attributes can be really beneficial to maturing the remediation area.
And I would say that although these things seem like a heavy investment of time, you could actually accomplish a lot of this within, say, a month: building out what risk remediation looks like and standardizing it. From that point onwards it's a process of evolving the playbook over time. This is never going to be a one-and-done approach, as I mentioned earlier. If we can define our if-this-then-that logic for identified risks and then build on it and evolve it over time, the playbook only gets more and more advanced, and with that we rely on less and less expensive resource to conduct those activities. So that's another really good lever for improving efficiency, and it ties into the final point on standardization and process. We need everyone conducting remediation the same way: workflows defined, and considerations like a vendor's profile and tier factored in, since those obviously affect how we approach remediation. The more we build on that type of activity, the more maturity we'll see in these areas.
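The "if this, then that" playbook Joe describes could start as a simple lookup table that junior staff can execute without senior review. The finding names, severities, and actions below are illustrative assumptions:

```python
# A toy remediation playbook: (finding, severity) -> standard response.
PLAYBOOK = {
    ("no_mfa", "critical"): "open remediation task, 14-day deadline, escalate to senior analyst",
    ("no_mfa", "medium"): "request remediation plan from vendor, 60-day deadline",
    ("expired_cert", "critical"): "request fresh certificate evidence, 7-day deadline",
}
DEFAULT_ACTION = "log risk and review at next quarterly cycle"

def remediation_action(finding: str, severity: str) -> str:
    """Look up the documented standard response for a finding."""
    return PLAYBOOK.get((finding, severity), DEFAULT_ACTION)

print(remediation_action("no_mfa", "critical"))
print(remediation_action("weak_password_policy", "low"))  # falls back to default
```

Evolving the playbook, in this sketch, is just adding rows to `PLAYBOOK` as new findings recur, which is exactly how the process gets "more and more advanced" over time.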
Alastair: Thank you, Joe. One question we're often asked when people discuss risk remediation is that they naturally feel vendors and stakeholders are lukewarm, and sometimes it takes a disproportionate spend to get them to buy in and act on risks. The core point we usually stress is: be pragmatic. In most cases, when we start reviewing a third-party program, its tolerance and risk thresholds are set far too loosely. Large numbers of risks get classified as critical, high, or medium, and that balance can be off. It usually takes a tuning and optimization phase before remediation starts to bite: the number of risks gradually falls, the organization genuinely establishes its own tolerance thresholds, and remediation moves forward. From that perspective, across a large vendor inventory, the third parties that genuinely require proactive remediation work are a very small fraction. So what do we most commonly observe in remediation itself? The most widespread problem: a full 86% of organizations lack unified remediation guidance. That includes organizations using tools like the SIG, their own proprietary content, or comprehensive passive monitoring. When a particular risk exceeds the tolerance threshold, there has to be standardized guidance. Imagine if every risk consultant in the world vanished tomorrow: we should still be able to drive positive change with our third parties. The key is building documentation and, more importantly, achieving scale, as we start sharing knowledge with third parties and passing on the experience of our audit teams and the work of our analysts. Second, 59% of organizations lack a unified risk probability and impact scoring model. Whether you use the FAIR model for risk quantification or a traditional probability-impact model, we recommend a single, unified mechanism. Whether the data comes from passive monitoring, internal assessment methodologies, or external reports, it should feed one risk model and drive remediation against tolerance thresholds. In our experience, 59% of organizations haven't built that yet. Joe, could you cover the last pillar, governance?
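A unified probability-impact model with a tolerance threshold, as recommended above, can be as simple as the sketch below. The 1-5 scales and the threshold value are assumptions for illustration, not a stated standard:

```python
# One risk model for findings from any source: monitoring, assessments,
# or external reports. Scores above TOLERANCE trigger remediation.
TOLERANCE = 12

def risk_score(probability: int, impact: int) -> int:
    """probability and impact each rated 1 (low) to 5 (high)."""
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

def requires_remediation(probability: int, impact: int) -> bool:
    return risk_score(probability, impact) > TOLERANCE

print(risk_score(4, 4), requires_remediation(4, 4))  # 16 True
print(risk_score(2, 3), requires_remediation(2, 3))  # 6 False
```

Tuning `TOLERANCE` upward over time mirrors the "tuning and optimization phase" described above: the proportion of findings classified as actionable shrinks until it matches the organization's real appetite.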
Joe: Yeah, sure. This starts with a couple of points around reporting, which I'm investing a lot of time in at the moment internally, working out what's good and meaningful from a reporting standpoint. It actually becomes quite challenging to provide this kind of reporting without some form of system to support the assessment approach. We need an output from our assessments. We need risk registers, with scoring mechanisms behind them, to collect this data and present it back to the business in a meaningful way that people can understand and interpret. That's hard when we're working with just Excel documents, or standalone assessments in different files and different corners of the desk, so to speak. But when we use more automated systems, we can start pulling this information together and produce something quite automated and quick, and of course we can start aggregating data, which is when this becomes a bit more powerful. From an individual reporting standpoint, we're talking about how maturely you can report on a particular third party as an output of an assessment. That shouldn't just be "there were X risk items, we're managing them, here are our target dates." It should also include context: who that third party is, what they do for us, who would be impacted if we terminated services with them, and what tier they sit in. Those kinds of attributes immediately add to the story the data tells within this type of reporting.
So being able to collect that data together and prioritize the output of the risk review and the assessment content demonstrates real maturity in individual reporting. Additionally, it's great to pair that with any monitoring data you might be using, if you're leveraging passive insights about businesses and any new emerging threats in that industry. All of that information paints a better picture of that individual third party and demonstrates maturity in that area. Program reporting again relies on a method of collecting data together. Ideally we want to see how one vendor stacks up against another, which can be hugely helpful if you leverage this approach for things like RFPs. It's also really helpful for spotting trends: how many risks of a particular type are we seeing in the program? Are there sudden spikes or commonalities in risk across the program? And probably one of the most valuable metrics to show the business, to get some credit for all the hard work the program has been putting in, is whether we can demonstrate risk reduction. That's something organizations can claim they're doing, I would hope; being able to say so accurately is harder, and that's where we lean on the other areas we've discussed today: how accurate our remediation approach is, whether we can provide assurance that our whole third-party estate is within the program's scope, and whether we're happy that we're maintaining third-party information.
It's only when it's paired with all of that that we can demonstrate with some accuracy that the program is working and functioning successfully. Using that information, we of course need to show other areas of the business where we're seeing threats, and anything valuable or meaningful to them. As I said a second ago, once we start aggregating this reporting, we can add more value to the other areas of the business. It becomes more and more challenging to give definitive answers about risk to the rest of the business when we only look at things third party by third party. A couple of quick points on maturity. Everything we've talked about today is about dissecting a program and measuring maturity across each area, and we need to do that consistently. The chart on the right suggests doing this at least quarterly, which is what I recommend. That's useful for a couple of reasons. One, we can hopefully demonstrate improvements in those areas; but it also ensures that if we assign objectives and tasks for improving the program, they're actually being worked on and we're seeing progress. Without those checkpoints in place, other things can take priority over improving third-party program maturity. Bringing this up time and again as an agenda item keeps us accountable and keeps us on track with the tasks needed to raise program maturity.
Alastair: Thanks, Joe. For those more interested in the KPIs and key risk indicators (KRIs) of a good TPRM program, we covered that in an earlier webinar run with Shared Assessments; feel free to visit the Prevalent website for more. As Joe said, when we look at governance, best practice demands consistency. Common problems often stem from misleading reporting based on assessment scope, for example claiming "80% of our vendors meet the standard" when only 5% of the vendor inventory has actually been assessed. That kind of misleading figure can lull the rest of the business into a false sense of security. Audit teams usually expose it quickly by comparing the assessment scope against the actual results. Please keep that firmly in mind. When we looked specifically at governance, we found that 69% of organizations miss strategic reporting opportunities: chances to use information from the TPRM program to drive positive business outcomes. For example: sharing data with the privacy team and procurement to demonstrate vendor compliance; supporting contract renegotiations; sharing information with legal; and of course meeting compliance and regulatory obligations. These data sets pay off in multiple ways, whether in securing more budget or in supporting other parts of the business. Second, 59% of organizations struggle to get a full picture of their third-party risk posture. That comes purely from inconsistent program execution, or from an inability to present the data in a form that decision-making bodies such as an executive steering committee can see. So we recommend building reporting that clearly shows program progress through KPIs or key indicators. Next we'll walk through the maturity assessment approach: how to design your own, or how to use an existing, mature solution, and how to get hold of the resources. To be clear, these findings come from the extensive maturity assessments Prevalent has run globally across organizations of varying industries, sizes, and specialisms; it's a genuinely useful, comprehensive data set. So, briefly: we now know the kinds of information auditors expect to see across the core pillar areas. The first recommendation is to prioritize your weak spots. If you've documented your own list of benchmark factors for the pillars, or identified areas for improvement via a maturity model assessment, we recommend systematically cataloguing those weaknesses. If you can assess their impact in the context of your own portfolio, organizational structure, and vendor inventory, the priorities fall out naturally. The key is weighing the effort: ideally compare the workload against the expected benefit and the risk, producing a tiered, prioritized action list. You will inevitably find quick wins; these often have little to do with the vendor inventory and more to do with changes to operational processes, or even content optimization. With that groundwork done, you can start planning: assign ownership, give each task a realistic timeline, hand tasks to people with the right expertise, and finally set concrete targets. For example: over the next 12 months, we plan to raise this pillar by 0.8 points and that pillar by 1.2 points, with a list of actionable, measurable steps to get there. When the targets are broken down by pillar, phased, and run with typical project management discipline, the work moves much faster, and that structure gives annual audits a clear, traceable execution path. Before I hand over to Scott, Amanda, could you run one last quick poll for us?
Amanda: Hi everyone. Okay, I have one more quick poll. Thank you very much. I'll launch it right away. The question is very straightforward: are you planning to build or strengthen a third-party risk management program in the coming months? Are you doing research for that work in 2023? Is that part of why you joined this session? If you'd like to talk further with me or someone on the team, please answer honestly and choose "yes." We'll follow up promptly; that's our job. The poll will stay open for a moment; the faster you answer, the faster it goes away. It's just a little game, so help me clear it and we'll keep moving. The poll is open now, and next, over to Scott Lang.
Scott: Hey, thank you so much, Amanda. Alastair, next slide, please. Everything you've heard today from Alastair and Joe is about moving your program from point A, where it is today, to point B or C, where you want it to be. We see many organizations facing real challenges getting there. Alastair, you can go straight to the next slide to speed things up. But these three core needs, the ones we see companies most urgently having to address, are what rise to the surface: getting better data to support decisions; breaking down silos inside the company and fostering collaboration across teams (after all, third-party management touches every department); and continuously optimizing and scaling the program over time. Those three areas are exactly where Prevalent helps third-party risk management programs grow and mature. The next slide shows our methodology for success: we always look at third-party risk management through a lifecycle lens, because each stage of a relationship carries its own risks. If anything is missed at the front or back end of the lifecycle, you can't fully see how the organization is improving its resilience and maturity across the various risks. I won't belabor it, so we leave time for questions, but you can see our ultimate aim is to provide the necessary tooling: simplify and accelerate onboarding, establish a systematic assessment mechanism, streamline vendor review and the closing of risk gaps, and progressively bring teams into alignment. Next slide, please. Our delivery model is unique in the industry, combining people, data, and platform: an expert team to help you design, build, and continuously refine your program; a wealth of data to support risk measurement and progress tracking; all brought together in an award-winning platform where you can centrally manage tasks, optimize processes, and drive continuous improvement. Next slide, Alastair. We've laid out the platform's use cases by department or domain, covering procurement, IT security, data privacy, and legal and compliance. The details will be provided after today's presentation, so I won't go through them here. You'll receive the full materials after today's session, but I want to stress that we cover the full range of risk domains: not just traditional cybersecurity and IT risk (although that is the main focus), but also helping organizations address non-IT risks. Next slide, please, Alastair. This actually speaks to a question that keeps coming up in the Q&A window: what can Prevalent help with, and how do you use our highly structured maturity assessment to establish where you are today and where you want to get to? We suggest you contact the Prevalent team directly. In the follow-up email tomorrow morning with the webinar recording, we'll attach a copy of the deck and include a registration link for a consultation with an expert. In a short conversation, an expert can help you plan the concrete steps for running the maturity assessment; we won't leave you to push ahead blind. The assessment is carefully designed, covering 45 questions across the five dimensions, and produces an action plan for improving the program. You'll be guided through the whole process so you understand it and get the most from the output. In short: tomorrow you'll receive an email from us with the recording of this presentation, including a link to register for the maturity assessment, and we'll follow up and walk you through the whole process. Alastair, I think that's everything I need to share.
Alastair: Excellent. Really well put. Let me reinforce Scott's point: exactly right. I hope everyone takes away from today's discussion the core metrics and standards an audit function looks for, which should also be built into a general program. Whether you use these benchmarks for self-assessment or adopt a mainstream maturity assessment as a third-party benchmark, they are excellent common reference points. Once again, the five core pillars are the areas we strongly recommend you cover. If you start looking at roles and responsibilities, remediation, governance, content, and coverage, that's enough to make sure you've hit the most common essentials. We've tried to weave in live questions from the chat as we went. Of course, if you still have any questions, feel free to reach out; we'll be happy to answer them in full after the session. Finally, thank you all for joining us today, and back to you, Amanda.
Amanda: Exactly, and I'll say the same. Thank you all so much for joining. We're two minutes from the top of the hour, so we'll give you that time back. I know you've all heard the joke: "Oh, my life is going to change with these two minutes. Thank you so much." Either way, we'll be in touch. Looking forward to next time. If you have any other questions, please reach out, and we'll be reaching out to you too. Please do reply promptly, and remember to check your spam folder; our emails often land there. If you're waiting to hear from us, be sure to check it. You all know the drill by now. Okay. Thank you all very much, and enjoy the rest of your day. Goodbye.
Unidentified speaker: Thank you.
©2026 Mitratech, Inc. All rights reserved.