
TPRM 2.0

Lee Brotherston
Sep 28, 2023
Lee’s post discusses the shift in third-party risk management from cumbersome, generic questionnaires to a more targeted approach.

How many of us have had a conversation with one of our salespeople who tells us they’ve got this huge deal that will be great for the company, and it just needs to pass the vendor security review? You use the provided login to the future customer’s Archer installation and are treated to 700 questions about… tape retention policies and details about the HVAC system in your (non-existent) datacenter. We’ve all been there, but thankfully this is becoming less common as more organizations adopt newer, more targeted third-party risk management practices.

Standards and Bad Practices

I’ve been in the security hot seat at all types of organizations: vendors, clients, organizations acquiring others, and organizations being acquired — so, I have experienced third-party risk assessments from many perspectives. We have certainly come a long way over the duration of my career, as third-party risk assessments become more commonplace, formalized, and standardized. Thinking back to the early days of my career, I have memories of what could charitably be described as “a fluid and ad-hoc process” for such things. This has gradually morphed into unique, per-organization questionnaires, and then the industry has largely converged on a small collection of standard questionnaires or GRC inputs that made the process much more audit-adjacent.

Although this has certainly been progress, I think we can all agree that standardized third-party risk assessments are still not a very enjoyable process for anyone involved, and can be very time-consuming. In part, I believe that, in a quest to limit risk exposure and liability via extensive questionnaires that cover every possible risk for every situation, we may have lost sight of the high-level objectives that matter to us the most: identifying and quantifying the actual risk posed by an individual third party. However, some organizations are now reversing this trend and implementing better, more targeted assessments that get back to the heart of the issue.

One Size Does Not Fit All

Let me illustrate via a story of a third-party risk assessment that went well: I was recently working with a team that was going to partner with a very large tech company, which required that we pass their third-party risk assessment program. Past experience led me to anticipate anywhere between 500 and 2,000 questions on a spreadsheet, along with a list of artifacts of compliance that we would provide in order to demonstrate that our answers were truthful. I was also anticipating a number of questions that were not applicable to us or, even worse, questions where it would be unclear whether they were applicable to us. I then expected to be forced to choose whether to answer poorly written questions to the letter or, in their assumed spirit, simply mark them non-applicable and await the inevitable follow-up. I knew that many organizations have so standardized their process that they use the same questions for a provider of financial software as for a vendor who monitors and maintains the head office air conditioning system — this was not my first rodeo.

I was pleasantly surprised to find that this was not the process at all. There were, of course, some questions. However, they were much more specific and focused than I was accustomed to, as were the required artifacts of compliance. For example, rather than a broad “TLS 1.2 or above is used for all network connections” question, the questions were scoped with regard to infrastructure, types of connection, and the data types transmitted. This removes the need to descend into a debate about the inevitable unrelated piece of infrastructure that uses cleartext because it simply does not support TLS, carries no sensitive data, and sits in an entirely separate part of the environment.

With the questions completed, I was still expecting quite the slog, as a completed questionnaire usually results in further rounds of questions, either because vague or unclear questions produced unclear answers, or because the answers served only to create more questions. However, that was not the case here. This is, in part, because the final (and honestly arbitrary) “score” was not the objective that mattered most to them, nor was examining specific findings in any depth. As a result, they didn’t need absolute certainty about every aspect of the response. Their primary objective was to gauge our approach to risk management and our ability to respond to issues, in order to indicate the degree to which we, in turn, would pose a risk to them. To restate that, in case you missed it: their end goal was not to undertake a regulatory compliance-like assessment of the organization based upon easily misinterpreted questions. Rather, they were trying to infer our security maturity and posture by using some well-targeted information gathering as a proxy.

To this partner, the most important part of the entire process was our response to any findings discovered. Not because they are interested in the specific details of any particular remediation, but rather because they are interested in our ability to determine how to remediate, manage timelines for remediation, and prioritize higher criticality issues. All of these can be used as proxies for how well an organization manages risks and vulnerabilities, providing a much higher-fidelity picture of the risk posed by this third party than an assessment of the vulnerabilities present in a single point-in-time snapshot.


TPRM 2.0

To me this process, which I’ll call “TPRM 2.0,” is a much more refined approach to third-party risk management, allowing both parties to spend less time on the process whilst still achieving the end goal of gauging the overall “riskiness” of a particular partner or vendor. I am sure you have also witnessed, at some point in your career, organizations that have all the paper policies, audit logs, etc. in place, but that fail to act upon an issue, or that make excessive use of acceptable-risk waivers in lieu of action. It can be difficult to flag those vendors using traditional audit-style questionnaires. The TPRM 2.0 process, however, provides insight into an organization’s ability to understand a risk, formulate an appropriate response, and work it into a schedule for remediation, none of which can simply be papered over with policies. To really supercharge this process, I believe that the use of automation can enable us to gather reliable information, increase speed, and reduce effort, especially in complex cloud environments.

Of course, attempting to write a new unified standard of third-party risk management that fixes all of the problems with the current system is more likely to result in XKCD-927 than a unifying standard. However, we can lean toward a new approach to this problem and make incremental changes to our own processes even absent universal agreement:

  • Make use of automation to remove bias, accidental or otherwise, for items in your assessment that can be gathered programmatically. For example, a tool that can assess configuration options, such as the use of encryption, access control lists, or overprivileged user accounts, will yield stronger evidence of an organization’s risk posture than a policy document that merely stipulates their use (see the sketch after this list).

  • Rather than an exhaustive set of criteria that yields a large number of “not applicable” responses, use well-defined, targeted requests for information as a proxy for the organization’s overall risk management strategy.

  • Rather than attempting to deep-dive every finding as if it were your own, prefer reviewing a remediation plan as the follow-up; this wastes less time and provides insight into:

    • Whether the third party truly understands the risk posed by the issue
    • Whether they have an appropriate approach to mitigation. For example, can they balance cost and time when weighing a tactical solution against a longer-term strategic approach?
    • Whether they can prioritize risks appropriately. Of the risks uncovered, do they seem to be prioritizing the more important ones? Are they letting the less important ones hover in limbo indefinitely? Are their estimated dates for remediation both present and reasonable?
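
To make the first point concrete, here is a minimal sketch of the kind of programmatic evidence-gathering described above. It assumes an AWS environment and the boto3 SDK, both of which are my own illustrative choices rather than anything prescribed by a standard: it reads configuration facts, such as default encryption and public access settings on S3 buckets, straight from the API instead of taking a policy document’s word for them.

```python
# Hypothetical example: gather configuration evidence programmatically
# rather than accepting a policy document at face value. Assumes AWS
# credentials are available in the environment and boto3 is installed.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
findings = []

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Is default (server-side) encryption actually configured on the bucket?
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            findings.append((name, "no default encryption configured"))
        else:
            raise

    # Is a public access block actually in place?
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            findings.append((name, "no public access block configured"))
        else:
            raise

for name, issue in findings:
    print(f"{name}: {issue}")
```

Because checks like this read the live configuration, they can be re-run on a schedule rather than once per assessment cycle, which keeps the evidence current.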

Keeping these goals in mind, we can improve the third-party risk management process for ourselves and for the organizations that we put through it, all while improving the quality of risk data that we can provide back to the organization. It’s rare that we see that kind of “everybody wins” situation in security, but I think that this is truly one of those times.