Using SANS-20 to Cut Through Security Vendor Hype

Wahoo!  This is the last post of the series.  I think I’ve saved the best for last because what I’m writing about is immediately actionable.  For a little background, I was working with a client and one of their prospects asked, “How will you affect my SANS 20 score?”  Brilliant!  This Fortune 100 insurance company makes cybersecurity investment decisions based on potential impact to its SANS 20 posture.  They use SANS 20 as a qualitative assessment tool to compare one product/control to another.  Essentially, this is the bookend to the quantitative discussion in my last post.

A Brief History

First developed by SANS, the 20 Critical Security Controls (CSC) provide a pragmatic and practical guideline for implementing and continually improving cybersecurity best practice.  The CSC-20 are real-world, prescriptive guidelines for effective information security.  As stated in the Council on Cyber Security’s overview, the CSC-20 “are a relatively short list of high-priority, highly effective defensive actions that provide a ‘must-do, do-first’ starting point for every enterprise seeking to improve their cyber defense.”

The great news for all organizations is that there is significant synergy between the CSC-20 and ISACA’s COBIT, NIST 800-53, ISO 27001/2, the NIST Cybersecurity Framework and the Department of Homeland Security Continuous Diagnostics and Mitigation (CDM) program.  For example, just as I discussed how Open FAIR controls map to NIST 800-53 control categories, the CSC-20 maps directly to 800-53.

Diving into the depths of the CSC-20 is well beyond the scope of this post, but as a reference point, the CSC-20 contains 20 controls made up of 184 sub-controls.  My focus in this post is on building a matrix that maps an organization’s internal progress implementing the controls and also evaluates the potential effectiveness of new security products or services.  This is only possible because of the CSC-20’s granularity, modularity and structure for measuring continual effectiveness improvement.  To underscore this point, each control not only defines why the control is essential, it also provides relevant effectiveness metrics, automation metrics and effectiveness tests for that control.  In other words, each control provides guidance on what to do as well as guidelines on how to know you are doing it correctly.

Birth vs. Security vs. Pest Control(s)

As mentioned above, there are many different methodologies and approaches to security control selection.  It’s important we recognize that most security controls deliver value well before they reach maximum effectiveness.  This opens the door to a continuous improvement and monitoring practice.

I emphasize that most security controls are applicable to a continuous improvement program.  However, some are not.  Put another way, for pest control, a screen with a few holes in it will do a pretty good job of keeping out the mosquitoes: with every patched hole, fewer mosquitoes get through.  In contrast, for birth control, this approach doesn’t work so well!  Birth control must be implemented with maximum effectiveness from the start.

To put this in Open FAIR terms, the control’s effectiveness must exceed the threat’s capability for the control to work at all.

Figure 1

Using the CSC-20 opens the door to control effectiveness monitoring.  Figure 1 shows my representation of a CSC-20 control effectiveness measure.  A few things to note about this (a minimal code sketch of the underlying data follows the list):

  1. It’s not ordinal. Yes, there are red, yellow, and green bands, but the needle is pointing to a discrete number.  For the reasons why this is important, please check out my post introducing Open FAIR.
  2. The max effectiveness state may not be 100%. There will be reasons (technical, policy, procedural, political, etc.) why organizations will not implement specific sub-controls.
  3. We need to measure progress over time for continual effectiveness improvement. In Figure 1, the direction of the arrow shows which way the needle is going.  In this example, there is no improvement (or drop) from the previous assessment.
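To make this concrete, below is a minimal sketch of the data behind one gauge.  The structure and field names are my own invention, not part of the CSC-20; it simply captures the three points above: a discrete score, a ceiling that may sit below 100%, and the trend since the last assessment.

```python
from dataclasses import dataclass

@dataclass
class ControlEffectiveness:
    """One CSC-20 control's effectiveness reading (illustrative structure)."""
    control_id: str        # e.g., "CSC-5"
    current: float         # discrete score from the latest assessment, 0-100
    previous: float        # score from the prior assessment
    max_achievable: float  # ceiling after excluded sub-controls; may be < 100

    @property
    def trend(self) -> str:
        """Direction of the needle since the last assessment."""
        if self.current > self.previous:
            return "improving"
        if self.current < self.previous:
            return "declining"
        return "flat"
```

For example, ControlEffectiveness("CSC-5", 39.0, 39.0, 90.0).trend comes back "flat", matching the unchanged needle in Figure 1 (the 39 and 90 here are made-up values).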

Once the monitoring format is determined, we can create a dashboard to view the effectiveness of all 20 controls.  I’ve seen this done with status bars rather than tachometer icons, and there are pros and cons to each approach.  I’d love to hear any other ideas people have on ways to graphically track control effectiveness.
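For anyone who prefers the status-bar style, here is a rough text-only sketch of that dashboard.  Only CSC-5’s 39% (discussed below) is from this post; the other scores are made up for illustration.

```python
# (control_id, current %, previous %) -- sample readings; only CSC-5's 39%
# comes from this post, the rest are invented for illustration.
scores = [("CSC-1", 72.0, 65.0), ("CSC-5", 39.0, 39.0), ("CSC-12", 55.0, 60.0)]

def render_dashboard(scores, width=20):
    """Print one status bar per control with a trend marker vs. the last assessment."""
    for control_id, current, previous in scores:
        filled = round(current / 100 * width)
        bar = "#" * filled + "-" * (width - filled)
        trend = "^" if current > previous else "v" if current < previous else "="
        print(f"{control_id:>6} [{bar}] {current:5.1f}% {trend}")

render_dashboard(scores)
```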


Figure 2

For this post I drew the meters manually.  For ongoing use, a similar result can be achieved through Excel macros and creative graphic templates.  However, since effectiveness is probably only measured once or twice a year, a manual process may be a better time investment than building an automated template.

Control Breakdown

The CSC-20 defines four categories of controls: quick win, visibility/attribution, configuration/hygiene, and advanced.  The key to the effectiveness measure is assigning weights to these different types of control.  As an example, following on the earlier discussion, CSC-5 is the malware defense control, made up of 11 sub-controls: “Control the installation, spread and execution of malicious code at multiple points in the enterprise, while optimizing the use of automation to enable rapid updating of defense, data gathering, and corrective action.”

Figure 3


As shown in Figure 3, I’m using a simple scale, with quick wins having the lowest weight (4 points) and advanced having the highest weight (16 points).  The weighting is arbitrary; the key is being consistent across all 20 controls.  For example, I’ve also considered an approach where quick wins get the highest weight because they have the quickest impact.

Once the weighting is final, we can calculate an effectiveness score.  To do this I self-assess my effectiveness on each sub-control.  For example, I have anti-malware software (5-2) on all endpoints and in my DMZ, so I’m giving myself 100% (4 points) for this sub-control.  At the other end of the spectrum, I have no behavior-based anomaly detection, so I’m giving myself 0% (0 points) for that sub-control.  The end result is a sucky 39%.  There is certainly great room for improvement here.
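Here is a minimal sketch of that calculation.  The 4-point and 16-point endpoints come from Figure 3; the 8 and 12 weights for the middle categories are my assumption, since only the endpoints are given, and the sample ratings are hypothetical (and trimmed to 5 of CSC-5’s 11 sub-controls), chosen so the score lands at the 39% above.

```python
# Category weights: 4 and 16 are from Figure 3; the 8 and 12 for the middle
# categories are assumptions, since only the endpoints are specified.
WEIGHTS = {
    "quick win": 4,
    "visibility/attribution": 8,
    "configuration/hygiene": 12,
    "advanced": 16,
}

def control_score(sub_controls):
    """Weighted self-assessment score for one control.

    sub_controls: list of (category, self_rating) pairs, self_rating in 0.0-1.0.
    """
    earned = sum(WEIGHTS[cat] * rating for cat, rating in sub_controls)
    possible = sum(WEIGHTS[cat] for cat, _ in sub_controls)
    return 100 * earned / possible

# Hypothetical CSC-5 self-assessment (trimmed to 5 of the 11 sub-controls):
csc5 = [
    ("quick win", 1.0),                # e.g., 5-2 anti-malware everywhere: 4 of 4 points
    ("quick win", 0.5),
    ("visibility/attribution", 0.5),
    ("configuration/hygiene", 0.6),
    ("advanced", 0.0),                 # e.g., no behavior-based anomaly detection
]
print(f"CSC-5 effectiveness: {control_score(csc5):.0f}%")  # -> 39%
```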

Using Qualitative Assessment to Evaluate Different Products

In my last post we used a quantitative assessment to evaluate the potential impact of a new control.  Using the CSC-20, we can get more granular and not just evaluate the potential impact of a new control, but compare one product to another!  Surprisingly, the organization we were dealing with had already deployed a number of products labeled “malware defense” (with very poor results); this time they were able to determine the potential impact of their next product ahead of time, without running a single test.

The process is pretty straightforward:

  1. Perform a CSC-20 self-assessment as described above.
  2. Determine the incremental projected benefit of adding a new security product. What sub-controls will the product cover, and to what level?  How much overlap is there between what the new product covers and the existing environment?
  3. Recalculate a projected effectiveness rating for the control with the new product/service added to the security infrastructure.
  4. Repeat the above process with other vendor products to determine which product has the greatest potential impact on the organization’s overall security effectiveness.

Figure 5

To illustrate, Figure 5 shows the potential impact of adding my client’s breach detection solution to the insurance company’s security infrastructure.  We projected adding significant value in sub-controls 5-8 through 5-11, raising the overall CSC-5 score from 39% to 92%.
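Building on the scoring sketch above (it reuses WEIGHTS and control_score), here is roughly what steps 2 and 3 look like in code.  The per-sub-control coverage figures for the product are hypothetical, picked so the result mirrors the 39% to 92% uplift; the max() is where overlap gets discounted, since a product earns credit only where it improves on existing coverage.

```python
# Reuses WEIGHTS and control_score from the scoring sketch above.

# Baseline CSC-5 self-assessment (same hypothetical ratings as before).
baseline = [
    ("quick win", 1.0),
    ("quick win", 0.5),
    ("visibility/attribution", 0.5),
    ("configuration/hygiene", 0.6),
    ("advanced", 0.0),
]

# Projected per-sub-control coverage with the new product in place
# (hypothetical figures). A product never lowers an existing rating, so
# take the max: overlap with current coverage earns no extra credit.
with_product = [1.0, 1.0, 1.0, 1.0, 0.78]

projected = [(cat, max(rating, new))
             for (cat, rating), new in zip(baseline, with_product)]

print(f"Baseline:  {control_score(baseline):.0f}%")   # -> 39%
print(f"Projected: {control_score(projected):.0f}%")  # -> 92%
```

Running the same recalculation for each candidate vendor gives a like-for-like comparison of projected effectiveness.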

When we looked across all 20 controls, there were other controls where we projected benefit, though none as strongly as CSC-5.  The organization asked one of our competitors to do the same thing, and the end result (please see Figure 6) was our solution scoring higher in projected effectiveness improvement than the competition’s.  The insurance company is still evaluating products and weighing the six-point difference against differences in lifecycle costs.  The key point is they were able to pre-assess a product’s real impact without doing any testing or relying on vendor brochure-ware and marketing hype.  (Of course, my client is hype-free; I’m referring to the other guys!)


Figure 6 – Overall Effectiveness of Two Security Solutions

Conclusion

If we can standardize this effectiveness measurement and monitoring process, companies can assess investments across their entire security ecosystem (not just within a specific area).  Combining this approach with the quantitative assessment methodology outlined in my last post and the cybersecurity economics discussed in my first two posts, CISOs can, for the first time, make defensible security spending decisions that satisfy the evaluation criteria of the CIO, the CFO and the CEO.

It would be best if an organization like SANS, ISC2, ISSA or ISACA took this on and developed a formal process for CSC-20 effectiveness measurement and monitoring.  For example, if we standardize the assessment metrics (e.g., the relative weighting of quick win versus configuration/hygiene sub-controls), then we can do cool things like benchmarking and data normalization to characterize control effectiveness baselines across different industries and company sizes.  This would also give vendors a standard script to follow when projecting their product’s effectiveness impact.

Obviously, we have a long way to go with this, but I think I’ll contact SANS to see what they think.  What do you think?  I’d love to hear thoughts on this and its potential to change the way we make security spending decisions.
