NIST Cybersecurity Framework (NCSF): Getting from Batter to Better

Imagine that you are making a chocolate cake and you pull up the NIST recipe on your iPad. NIST presents a picture of the perfect cake. It gives insightful detail on all the ingredients: the exact measurements; the specific treatments (e.g., butter at room temperature and eggs separated); the sequencing of the ingredients; and even substitutions for missing ingredients. It prescribes the adjustments to make for a multi-level cake and for special conditions like kosher, gluten-free, and vegan. It also helps you determine your baker’s maturity level, but it does not define the critical elements necessary to actually make a cake. After much work, you have a table full of dirty dishes, a bowl of delicious cake batter, but no chocolate cake!

Yes, I am obviously not talking about baking a cake. I am talking about the need to “operationalize” the NCSF. I see so many people pointing to NCSF as the cookbook for effective cybersecurity, and yet many people have trouble turning it into action. For example, I recently reviewed most of the comments (over 200 responses submitted to NIST on CSF versions 1.0 and 1.1) to see what other people think. I find a consensus that organizations use the Framework as an organizational and system-level tool, but I also find a common theme from respondents requesting guidance on putting the NCSF into action.

To get from batter to baked goods, we need a prescriptive, rational, extensible, flexible, and reproducible methodology that builds on the framework. What I envision is something that uses NIST CSF as the foundation and draws from other – more operationally focused – best practices, including other work from NIST (e.g. SP-800), ISO27001, and CIS Controls to take the framework off the paper and put it into action. The great news is this is what the CISO at the University of Massachusetts has created: the UMass Lowell NCSF Control Factory™.

The NCSF Control Factory™ uses a controls factory model to teach organizations how to build, test, maintain and continually improve a cybersecurity program based on the NIST CSF. The NCSF Control Factory™ model helps enterprises organize the engineering, technical and business functions of a NIST Cybersecurity Framework program.

To learn more about the model and associated training, please check this related post.

Happy Baking!

================

Some of you may remember the commercial about the guy who was so excited about his razor that he bought the company? Well, I’m so excited about the NCSF Control Factory™ that I just joined with itSM Solutions to help them sell and deliver this training. Please contact me if you would like to hear more about this groundbreaking work.

Changing the Way We Think About IT

At TechVision Research we are changing the way IT advisory research is developed and delivered. We are leveraging a mostly untapped resource: IT executives who have spent their careers dealing with the day-to-day challenges of planning, operating and continually reassessing the way Fortune 500 companies and governments conduct IT. These are people like John Mellars, TechVision principal consulting analyst and author of our most recent report, “The End of Enterprise Architecture and IT As We Know IT.” In this report, John and Gary Rowe (CEO/Principal Consulting Analyst) paint a radically different picture of IT in the age of cloud and the Millennial Generation than what we see from traditional research advisory firms.

In this report, John proposes that legacy technology governance models such as Enterprise Architecture don’t fully translate to the new world of cloud-based services, rapid deployment, microservices and new models for application development and operations. To underscore this point, vendors now compete based on the functions and capabilities they offer linked to organizational business requirements, not based on enterprise IT and technology governance standards. This new approach completely sidesteps processes established by the CIO through the IT and EA teams. The end result is significant organizational friction, and EA specifically, and IT in general, are increasingly seen by the business units as “business prevention.”

The good news is, as discussed in John and Gary’s report, there is a strong future for Enterprise Architects and the CIO team to support the enterprise move towards a new cloud-based IT services model.  As discussed in this report, organizations should be taking several steps including:

  • Empowering business units to use technology as a means of achieving success with full accountability for the results
  • Moving towards a point where the organization thinks “Cloud First”
  • Phasing out Enterprise Architecture within IT as currently defined given business friction and challenges in keeping pace with the rate of technology change
  • Revamping central IT to provide for Enterprise entity needs such as procurement of services, security, disaster recovery, data interchange, and service management.
  • Establishing a new Chief Innovation Technical Officer (CITO) role to lead innovation centers of excellence to support the businesses as they rapidly adopt new cloud-based technologies for competitive advantage.

What is Enterprise Architecture in this “new and changed” environment? It is what we call the “cloudification” of EA. As discussed in this groundbreaking report, enterprise architects are morphing into enterprise IT Product Managers (EIPM). The concept is optimizing the choice of products and services for the enterprise to maximize flexibility, business utility and speed of deployment. This report describes that path towards the next generation of IT.

To register to receive an excerpt of this report, please go to the following link:

Or, please contact me directly and I’m happy to send you an excerpt of the report.

Using SANS-20 to Cut Through Security Vendor Hype

Wahoo!  This is the last post of the series.  I think I’ve saved the best for last because what I’m writing about is immediately actionable.  For a little background, I was working with a client and one of their prospects said “how will you affect my SANS 20 score?”  Brilliant!  This Fortune 100 insurance company makes cybersecurity investment decisions based on potential impact to SANS 20 posture.  They use SANS 20 as a qualitative assessment tool to compare one product/control to another.  Essentially, this is the bookend to the quantitative discussion in my last post.

A Brief History

First developed by SANS, the 20 Critical Security Controls (CSC) provide a very pragmatic and practical guideline for implementing and continually improving cybersecurity best practice. The CSC-20 are real-world prescriptive guidelines for effective information security. As stated in the Council on Cyber Security’s overview, the CSC-20 “are a relatively short list of high-priority, highly effective defensive actions that provide a ‘must-do, do-first’ starting point for every enterprise seeking to improve their cyber defense.”

The great news for all organizations is there is significant synergy between the CSC-20 and ISACA’s COBIT, NIST 800-53, ISO27001/2, the NIST Cyber Security Framework and the Department of Homeland Security Continuous Diagnostic and Mitigation (CDM) program.  For example, just as I discussed how Open FAIR controls map to NIST 800-53 control categories, the CSC-20 maps directly to 800-53.

Diving into the depths of the CSC-20 is well beyond the scope of this post, but as a reference point, the CSC-20 contains 20 controls made up of 184 sub controls.  My focus in this post is explaining how to build a matrix to map both internal organization progress implementing the controls and also how to evaluate potential new security products’ or services’ effectiveness.  This is only possible because of the CSC-20’s granularity, modularity and structure for measuring continual effectiveness improvement. To underscore this point, each control not only defines why the control is essential, but it provides relevant effectiveness metrics, automation metrics and effectiveness tests for that control.  In other words, the control provides guidance on what to do as well as guidelines on how to know you are doing it correctly.

Birth vs. Security vs. Pest Control(s)

As mentioned above, there are many different methodologies and approaches to security control selection. It’s important we recognize that most security controls deliver value well before they reach maximum effectiveness. This opens the door to a continuous improvement and monitoring practice.

I emphasize most security controls are applicable to a continuous improvement program.  However, some are not.  Put another way, for pest control, a screen with a few holes in it will do a pretty good job keeping out the mosquitos: with every patched hole, fewer mosquitos get through.  In contrast, for birth control, this approach doesn’t work so well!  Birth control must be implemented with maximum effectiveness from the start.

To put this in Open FAIR terms, the control’s effectiveness must exceed the threat capability.

Figure 1

Using the CSC-20 opens the door to control effectiveness monitoring.  Figure 1 shows my representation of a CSC-20 control effectiveness measure.  A few things to note about this:

  1. It’s not ordinal. Yes, there are red, yellow, and green bands, but the needle is pointing to a discrete number. For the reasons why this is important, please check out my post introducing Open FAIR.
  2. The max effectiveness state may not be 100%. There will be reasons (technical, policy, procedural, political, etc.) why organizations will not implement specific sub-controls.
  3. We need to measure progress over time for continual effectiveness improvement. In figure 1, the direction of the arrow shows which way the needle is going.  In this example, there is no improvement (or drop) from the previous assessment.

Once the monitoring format is determined, we can create a dashboard to view effectiveness of all 20 controls.  I’ve seen this done with status bars rather than the tachometer icons and there are pros and cons with each approach.  I’d love to hear of any other ideas people have on ways to graphically track control effectiveness.

 

Figure 2

For this post I drew the meters manually. For ongoing use, a similar result can be achieved through Excel macros and creative graphic templates. However, since effectiveness is probably only measured once or twice a year, a manual process may be a better time investment than building an automated template.

Control Breakdown

CSC-20 defines four categories of controls – quick win, visibility/attribution, configuration/hygiene, and advanced. The key to the effectiveness measure is assigning weights to these different types of control. As an example, following on the earlier discussion, CSC-5 is the malware defense control, made up of 11 sub-controls: “Control the installation, spread and execution of malicious code at multiple points in the enterprise, while optimizing the use of automation to enable rapid updating of defense, data gathering, and corrective action.”

Figure 3

 

As shown in Figure 3, I’m using a simple scale with quick wins having the lowest weight (4 points) and advanced having the highest weight (16 points).  The approach is arbitrary and the key is being consistent across all 20 controls.  For example, I’ve also considered an approach where quick wins get the highest weight because they have the quickest impact.

Once the weighting is final, we can calculate an effectiveness score.  To do this I self-assess my effectiveness on each sub-control.  For example, I have anti-malware (5-2) software on all end points and in my DMZ so I’m giving myself 100% (4 points) for this sub-control.  At the other end of the spectrum, I have no behavior-based anomaly detection so I’m giving myself 0% (0 points) for this sub-control.   The end-result is a sucky 39%.   There is certainly great room for improvement here.
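To make the scoring mechanics concrete, here is a minimal sketch (in Python) of the weighted self-assessment described above. The sub-control labels, weights and coverage values are hypothetical placeholders rather than the actual Figure 3 numbers; only the calculation pattern is the point.

    # Hedged sketch of a weighted CSC sub-control self-assessment.
    # Weights and coverage values are illustrative, not the Figure 3 numbers.
    WEIGHTS = {
        "quick win": 4,
        "visibility/attribution": 8,
        "configuration/hygiene": 12,
        "advanced": 16,
    }

    # (sub-control, category, self-assessed coverage 0.0-1.0)
    csc5_assessment = [
        ("5-2 anti-malware on endpoints and DMZ", "quick win", 1.0),
        ("5-x signature auto-update",             "quick win", 1.0),
        ("5-x removable-media scanning",          "configuration/hygiene", 0.5),
        ("5-x behavior-based anomaly detection",  "advanced",  0.0),
        # ... remaining sub-controls ...
    ]

    def effectiveness(assessment):
        earned = sum(WEIGHTS[cat] * cov for _, cat, cov in assessment)
        possible = sum(WEIGHTS[cat] for _, cat, _ in assessment)
        return earned / possible

    print(f"CSC-5 effectiveness: {effectiveness(csc5_assessment):.0%}")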

Using Qualitative Assessment to Evaluate Different Products

In my last post we used a quantitative assessment to evaluate the potential impact of a new control. Using CSC-20, we can get more granular and not just evaluate the potential impact of a new control, but compare one product to another! Surprisingly, the organization we were dealing with had already deployed a number of products labeled “malware defense” – with very poor results – and this time they were able to determine ahead of time the potential impact of their next product, without running a single test.

The process is pretty straightforward:

  1. Perform a CSC-20 self-assessment as described above
  2. Determine the incremental projected benefit of adding a new security product. What sub controls will the product cover and to what level?  How much overlap is there between what the new product covers and the existing environment?
  3. Recalculate a projected effectiveness rating against the control with the new product/service added to the security infrastructure.
  4. Repeat the above process with other vendor products to determine which product has the greatest potential impact on the organization’s overall security effectiveness

Figure 5

To illustrate, Figure 5 shows the potential impact of adding my client’s breach detection solution into the Insurance Company’s security infrastructure. We projected adding significant value in sub controls 5-8 through 5-11, raising the overall CSC-5 score from 39% to 92%.
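Continuing the sketch above, step 3 of the process is just a re-score: assume the candidate product’s projected coverage per sub-control and recompute. The baseline and product coverage figures below are invented for illustration (they are not the Figure 5 numbers).

    # Hypothetical baseline coverage and projected coverage added by the product.
    baseline = {  # sub-control -> (category, current coverage 0.0-1.0)
        "5-2":  ("quick win", 1.0),
        "5-8":  ("configuration/hygiene", 0.0),
        "5-9":  ("configuration/hygiene", 0.25),
        "5-10": ("advanced", 0.0),
        "5-11": ("advanced", 0.0),
    }
    product_adds = {"5-8": 1.0, "5-9": 1.0, "5-10": 0.75, "5-11": 1.0}

    def score(coverage):
        earned = sum(WEIGHTS[cat] * cov for cat, cov in coverage.values())
        return earned / sum(WEIGHTS[cat] for cat, _ in coverage.values())

    projected = {sc: (cat, max(cov, product_adds.get(sc, cov)))
                 for sc, (cat, cov) in baseline.items()}

    print(f"baseline effectiveness : {score(baseline):.0%}")
    print(f"projected effectiveness: {score(projected):.0%}")

Repeat the re-score for each vendor’s claimed coverage and a comparison like Figure 6 falls out directly.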

When we looked across all 20 controls there were other controls where we projected benefit, though none as strongly as CSC-5. The organization asked one of our competitors to do the same thing, and the end result (please see Figure 6) was our solution scoring higher in projected effectiveness improvement than the competition. The insurance company is still evaluating products and weighing the six-point difference against differences in lifecycle costs. The key point is they were able to pre-assess a product’s real impact without doing any testing or relying on vendor brochure-ware and marketing hype. (Of course, my client is hype-free; I’m referring to the other guys!)


Figure 6 – Overall Effectiveness of Two Security Solutions

Conclusion

If we can standardize this effectiveness measurement and monitoring process, companies can assess investments across their entire security ecosystem (not just within a specific area). Combining this approach with the quantitative assessment methodology outlined in my last post and the cybersecurity economics discussed in my first two posts, CISOs – for the first time – can make defensible security spending decisions that satisfy the evaluation criteria of the CIO, the CFO and the CEO.

It would be best if an organization like SANS, ISC2, ISSA or ISACA took this on and developed a formal process for CSC-20 effectiveness measurement and monitoring.  For example, if we standardize on the assessment metrics (e.g. the relationship between quick win versus configuration/hygiene) then we can do cool things like benchmarking and data normalization to characterize control effectiveness baselines across different industries and company sizes.   This would also help us develop a standard script that vendors can follow to project their product’s effectiveness impact.

Obviously, we have a long way to go with this, but I think I’ll contact SANS to see what they think.  What do you think?  I’d love to hear thoughts on this and its potential to change the way we make security spending decisions.

 

Using Open FAIR to Quantify Cybersecurity Loss Exposure

Why is Cybersecurity Risk Different?

Should business executives treat cybersecurity differently than other risk centers? It must be different; otherwise, why is it so hard to answer even simple questions about cybersecurity spending, such as what should we spend and what should we spend it on? But why is this so? This is not rocket science, is it? No, it’s not, but not in the way you are thinking. With all due respect to my Dad (he literally is a rocket scientist), by treating cybersecurity as a “special risk,” we’re making answering these simple questions more complicated than making rockets fly.

To Infinity and Beyond

I started this journey trying to answer two simple questions: what should we spend on cybersecurity and what should we spend it on? These answers seem so elusive that I figured we must need some new perspective or approach specific to cybersecurity spending.

As covered in my first post, most people use ROI to justify cybersecurity spending. A good example is the Booz Allen model. In my second post I showed how ROI (or Return on Security Investment (ROSI)) is not a good metric for justifying cybersecurity spending; in fact, it’s not a good metric for any type of spending. We need to take our economics discussion up a notch and focus on using NPV (Net Present Value) and/or IRR (Internal Rate of Return) rather than ROI/ROSI. In my third post I outlined a standardized way to qualify and quantify risk: Factor Analysis of Information Risk (FAIR). Yes, a standardized approach that does not treat cybersecurity any differently than other areas of risk! Because of this, organizations using FAIR are developing a standard lexicon to discuss cybersecurity risk in terms that their risk management peers understand. With FAIR, business executives can assess cybersecurity risk with the same scrutiny, maturity and transparency with which they assess other forms of organizational and institutional risk.

In this post we’re diving a bit deeper into FAIR and focusing on how we can start using FAIR to help make cybersecurity investment decisions.

As a quick refresher, in Open FAIR, risk is defined as the probable frequency and probable magnitude of future loss.  That’s it!  A few things to note about this definition:

  • Risk is a probability rather than an ordinal (high, medium, low) function. This helps us deal with our “high” risk situation discussed above.
  • Frequency implies measurable events within a given timeframe. This takes risk from the unquantifiable (our risk of breach is 99%) to the actionable (our risk of breach is 20% in the next year)
  • Probable magnitude takes into account the level of loss. It is one thing to say our risk of breach is 20% in the next year.  It’s another thing to say our risk of breach is 20% in the next year resulting in a probable loss of $100M
  • Open FAIR is future-focused. As discussed below, this is one of its most powerful aspects.  With Open FAIR we can project future losses, opening the door to quantifying the impact of investments to offset these future losses

As shown in Figure 1, the Open FAIR ontology is pretty extensive and this post isn’t the place to get into all the inner workings.  I urge everyone to learn more about FAIR.

Figure 1 – Open FAIR High-Level View

As discussed in my last post, risk is determined by combining Loss Event Frequency (LEF) (the probable frequency within a given timeframe that loss will materialize from a threat agent’s actions) and Loss Magnitude (LM) (the probable magnitude of primary and secondary loss resulting from a loss event).

At a Loss

To date, I’ve mostly focused on the Loss Event Frequency (LEF) side of the risk equation, specifically to tease out the intricacies of threat and vulnerability. In this post, I’m shifting the focus to the Loss Magnitude (LM) side of the risk equation because I believe the ability to project a realistic loss magnitude is the foundation of a quantitative risk analysis. Based on my discussions with cybersecurity executives, it’s often the toughest thing to quantify because quantifying loss magnitude requires extensive communication with other parts of the business; parts that quite often have never interacted with IT and cybersecurity before. This is one of the main reasons I say this is not rocket science. It’s harder!

Defining Loss

How do we define loss? The Booz Allen model defines cost to fix, opportunity cost and equity loss. These are pretty broad categories, and the broader the measure, the more difficult it is to quantify the potential loss. We need more granularity, but not too much; if we get too granular, the whole process may collapse under its own weight.

In terms of granularity, Open FAIR calculates six forms of loss, covering primary and secondary loss.

Primary Loss is the “direct result of a threat agent’s action upon an asset.” Secondary Loss is the result of “secondary stakeholders (e.g., customers, stockholders, regulators, etc.) reacting negatively to the Primary Loss event.” In other words, it’s the “fallout” when the s*&t hits the fan.

 

Open FAIR Primary and Secondary Loss

Secondary loss magnitude is the loss that materializes from dealing with secondary stakeholder reactions. To me, this is a critical distinction of Open FAIR versus other models. We can’t assume that secondary losses will always occur.

Ponemon Cost of Breach Study Example

The best work I’ve seen on cost of breach is the annual studies performed by Dr. Larry Ponemon and his team (www.ponemon.org).  Since 2005, they have been tracking costs associated with data breaches.  To date, they have studied 1,629 organizations globally.  Some of the key findings from the 2015 study are in Table 1:

Table 1 – Ponemon Results

In Open FAIR terms, Ponemon is saying the average Loss Event Frequency (LEF) of a 10,000 record breach is 0.22 over two years, with a Loss Magnitude (LM) of $1.54M (10,000 records x $154/record). Similarly, Ponemon states the average LEF of a 25,000 record breach is approximately 0.10 over two years, with an LM of $3.85M.1

From this we can determine the Aggregate Loss Exposure (ALE).  Typically, the ALE is an annualized number so if we assume Ponemon saw an even distribution over two years, we develop an ALE for a 10,000 record breach of approximately $169K.   This is a lot smaller than the oft-quoted $3.79M average cost of breach.  Shifting the discussion from Loss Magnitude (LM) to Aggregate Loss Exposure (ALE) changes the whole tenor of the conversation.
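Spelled out, using the even-distribution assumption above:

ALE ≈ (0.22 loss events / 2 years) x $1.54M per event ≈ 0.11 x $1.54M ≈ $169K per year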

Not There Yet

This is very helpful information, but it’s not precise enough to make clear quantitative risk decisions. I suspect Ponemon has much more information than what is published in the report and, hopefully, it includes the following key points:

  • The distribution of the primary Loss Event Frequency (LEF) and Loss Magnitude (LM). We know the average, but to make decisions we really need the Min, Max and Mode. For example, an average of 0.22 is only relevant if you know the shape of the distribution curve. Is it peaked or flat? The sharpness of the curve defines the level of confidence we have in the data. To assess this, we need to compare the average to the mode: the closer the two, the higher the level of confidence.
  • The relative frequency of primary to secondary events. Though Ponemon does tease out the two types of losses (e.g. he differentiates between direct and indirect costs), it isn’t as well differentiated as an Open FAIR analysis.  Lumping the two together can skew the results dramatically.
  • Separating Primary and Secondary Loss Magnitude (LM). This is related to above.

As an example, check out Figure 4, a sample risk analysis done by RiskLens.**

Figure 4 – RiskLens ALE Charts
In this example, we’re looking at a pretty steep distribution curve where the average and peak (mode) are fairly close. The Aggregate Loss Exposure (ALE) is made up of multiple loss (primary and secondary) scenarios. This analysis is developed from 118 individual risk scenarios covering 32 asset classes and 5 threat communities. For example, one individual risk scenario might be the loss of 10,000 records due to a data breach caused by weak authentication controls, contributing $169K to the ALE. As mentioned above, the ALE is a factor of both the Loss Event Frequency (LEF) and Loss Magnitude (LM).

Figure 4 contains a ton of information.  First, the chart shows a Risk Appetite (RA) of $130M for the organization.   Just looking at the curve shows the RA is less than both the peak (mode) and average ALE.  The chart also shows the 10% and 90% distribution points.   Many CFOs look at the 90% line as the worst case ALE scenario (essentially equivalent to Ponemon’s $3.79M cost of data breach).  In other words, on average we expect an ALE of $223M but to prepare for the worst, we should prepare for an ALE of $391M.

We can further break the average ALE down into primary and secondary LM components (see Figure 5).

Figure 5 – RiskLens ALE Breakdown

Now what?  In this organization’s case, the secondary loss elements are far larger than the primary loss elements and the bulk of the materialized loss relates to loss of confidentiality (Figure 6).

Now, what do we do with this information? How do we turn charts into actionable guidance? Right off the bat, we have a fundamental problem because our Risk Appetite (RA) is significantly lower than the peak and average ALE. We have three main choices: raise the RA (rarely the best option), outsource a significant chunk of the risk by buying cyber insurance, or implement controls to lower the ALE below the RA.

Control Your Destiny

Open FAIR only defines four classes of controls: vulnerability, avoidance, deterrent, and responsive controls.  In comparison, NIST defines 17 categories of controls; many could be considered a cross of avoidance, deterrent and vulnerability controls.

Having only four broad classes of controls makes performing “what-if” analyses practical.  It also provides a framework to determine control selection based on most significant ALE impact.


Figure 7 – Mapping Open FAIR Controls to Ontology

To determine the most effective controls, we need to determine the threat communities with the greatest impact on ALE.  For example, from the above RiskLens example, we can break down the average $223M ALE by specific threat communities (these need avoidance and deterrent controls).

RiskLens ALE Threat Communities

The analysis indicates the greatest loss exposure is from the privileged insider community (approx. 43% of the total average ALE); cyber criminals (approx. 36%) are second, non-privileged insiders (approx. 13%) are third, and it goes down from there. The value of this knowledge is HUGE! Without even talking control specifics, I know that more than half of my expected loss will be from insiders (privileged and non-privileged). This tells me to turn to NIST and focus on access controls (AC), audit and accountability (AU) and security awareness and training (AT) controls!

Assessing the Impact of Controls

Cleveland, we still have a problem. Our risk appetite is well below our average and peak ALE. I don’t want to raise our RA, so we must reduce the ALE. But how can we determine which of the above controls (AC, AU or AT) are most effective? The beauty of using Open FAIR with an analytic and modeling engine (e.g. RiskLens) is we can simulate the potential impact of security controls on quantitative risk. This is something most organizations do not do; instead, they assess the potential impact of security controls on qualitative risk. I’ll get to this in my next post when we dive into the SANS 20 controls as a model for assessing the qualitative impact of security controls.

The beauty of using quantitative analytics is it opens the door to effective economic discussion.   For example, the yellow curve in Figure 9 depicts our initial ALE (this is a different analysis from figure 4 though the curves are very similar).   The blue curve shows the projected ALE after the implementation of both avoidance and deterrent controls:  controls that we know the cost of!

Figure 9 – RiskLens ALE Impact Simulation

This is a pretty extreme example to illustrate how this stuff works. In reality, most organizations will already have a significant investment in controls reflected in the baseline analysis. The exercise will be a series of incremental control tweaks to bring the ALE in line with the RA. After all, once the ALE is below the RA, additional control investment cannot be economically justified.

A Final ALE Perspective

To me, this is very exciting!  If we run simulations against different control classes we can figure out our best control investment strategy.  We can plot the control costs against the ALE impact to pick the winning approach.  We can then evaluate the NPV and IRR of the control investments as a function of the ALE to build a business case for cybersecurity control investment.  We can also directly compare the cost of implementing controls against the cost of buying cyber insurance.  Essentially, with this information – plus the insights of the Gordon-Loeb cybersecurity spending model – we can make intelligent decisions about cybersecurity spending.  And, most importantly, we can discuss these spending analyses on equal terms with any other form of business risk analysis.

Disclaimer – I have no financial or formal business relationship with RiskLens. I do have the utmost respect for Jack Jones, RiskLens Founder, and I very much appreciate his support and willingness to share output from his analysis tools.

1. http://www-03.ibm.com/security/data-breach/

Quantify Risk and You Quantify Cybersecurity Spending

Spoiler alert! In this post I answer the first question of my quest: what should we spend on cybersecurity? To do this we need a consistent way to quantify risk before we can even begin making spending decisions. I propose that the model for this is Factor Analysis of Information Risk (FAIR).

Introduction

We started this journey trying to answer two simple questions: what should we spend on cybersecurity and what should we spend it on? As covered in my first post, most people use ROI to justify cybersecurity spending. A good example is the Booz Allen ROI model. In my second post I showed how ROI (or Return on Security Investment (ROSI)) is not a good metric for justifying cybersecurity spending; in fact, it’s not a good metric for any type of spending. We need to take our economics discussion up a notch and focus on using NPV (Net Present Value) and/or IRR (Internal Rate of Return) rather than ROI/ROSI.

Looking Forward

Unlike ROI/ROSI, NPV/IRR are forward looking, taking into account the risk of the investment (through the factor K).  The tricky part is translating “high” risk into a number.  For example, let’s say we’re looking at a $1M investment that generates an annual net benefit of $1.8M.  On the surface, this sounds pretty good, but check out what happens to NPV as we calculate our “high” risk investment.

Establishing NPV Options and Risk

From Table 1, how we define “high” risk has a huge impact on NPV.   I realize that 15x is extreme, but it makes a point: we have to find a way to nail down risk and put quantitative meaning behind qualitative terms like “low,” “medium,” and “high.”

Tell me Yoda, What is Risk?

Before we can quantify risk, we must define risk.

There is no one agreed-upon definition of risk.  For example, according to ISO 31000, risk is the “effect of uncertainty on objectives.”  NIST defines risk as “a function of the likelihood of a given threat-source’s exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization.”  Of the two, I really like the ISO definition because it focuses on the inherent uncertainties associated with risk (we return to uncertainty later in the post).  In other words, if we are certain then is there risk?  If we have nothing to lose, then is there risk? On a related note, I like to think of risk as the negative consequences of one’s reality (a topic for another day ;-).

As I mentioned in my last post, calculating risk occurs at the intersection of loss, threats, vulnerabilities, costs, benefits and sound business judgement.   Or, a more generic list from the NIST cybersecurity framework is “threats, vulnerabilities, likelihoods, and impacts.”

A FAIR Approach to Risk

What’s amazing to me is that even if we can agree upon a basic definition of risk, there is no standard way to quantify/qualify the risk components: vulnerabilities, threats, loss, etc. To illustrate this point, in “Measuring and Managing Information Risk: A FAIR Approach,” Jack Jones talks about the risks associated with a bald tire. Of course, a bald tire is a vulnerability in the rain. Right? But what if the bald tire is hanging from a rope tied to a tree branch? What’s the vulnerability now? What if the rope is frayed? So, the rope is now a vulnerability? Or is it a threat? What if the tree branch extends out over a 200 foot high cliff? How has my risk calculation changed?

This is such a simple example and when Jack talks to people about this scenario there is no consensus on even the most basic principles such as what’s a vulnerability versus what’s a threat!  As Mary Chapin Carpenter sings “sometimes you’re the windshield and sometimes you’re the bug…”  This is like two chemists not agreeing on the definition of a reactant versus a catalyst versus a product (yes, Mrs. Nittywonker, I did pay some attention during chemistry class).

To Be FAIR

Factor Analysis of Information Risk (FAIR) was first developed by Jack Jones based on his experience as a CSO at Fortune 100 companies. It is a methodology for quantifying and managing risk, and it is now a public standard supported by The Open Group: Open FAIR.

In Open FAIR, risk is defined as the probable frequency and probable magnitude of future loss.  That’s it!  A few things to note about this definition:

  • Risk is a probability rather than an ordinal (high, medium, low) function. This helps us deal with the ambiguity of our “high” risk situation mentioned above.
  • Frequency implies measurable events within a given timeframe. This takes risk from the unquantifiable (e.g. our risk of breach is 99%) to the actionable (e.g. our risk of breach is 20% in the next year).
  • Probable magnitude takes into account the level of loss. It’s one thing to say our risk of breach is 20% in the next year.  It’s another thing to say our risk of breach is 20% in the next year resulting in a probable loss of $100M.
  • Open FAIR is future-focused. As discussed below, this is one of its most powerful aspects.  With Open FAIR we can project future losses, opening the door to quantifying the impact of investments to offset these future losses.

As shown in Figure 1, the Open FAIR ontology is pretty extensive and this post isn’t the place to get into all the inner workings. I urge everyone to go to The Open Group to learn more about Open FAIR.

Figure 1 – Open FAIR Risk Ontology

As shown in Figure 1, risk is the combination of Loss Event Frequency (LEF) (the probable frequency within a given timeframe that loss will materialize from a threat agent’s actions) and Loss Magnitude (LM) (the probable magnitude of primary and secondary loss resulting from a loss event).

To give a frame of reference, an example LEF might be “between 5 and 25 times per year, with the most likely frequency of 10 times per year.”  In comparison, Loss Magnitude (LM) is a discrete number (e.g. $35M in the next year).

Teasing out Vulnerability and Threat

As I wrote about in my last post, one of my concerns with trying to apply the Gordon-Loeb Model of cybersecurity economics to cybersecurity spending decisions is its lumping together of vulnerabilities and threats into a risk-of-bad-stuff-happening axis. The great news is that, in Open FAIR terms, the Gordon-Loeb Model’s “vulnerability/threat” equates to Loss Event Frequency (LEF), allowing us to treat vulnerabilities and threats as two distinct – but related – entities.

Open FAIR defines Threat Event Frequency (TEF) as the probable frequency within a given timeframe that threat agents will act in a manner that may result in loss.   In other words, about once a week I drive up to an empty (no other cars and no cops) four-way-stop intersection near my house, making my Contact Frequency (CF) approx. 50 times per year.  My Probability of Action (PofA) (blowing through the stop sign) is extremely low since I’m a creature of habit (stop sign=stop).  This makes my TEF very low. My wife, on the other hand…

The operative word here is “may,” and determining which threat events will turn into loss events is a function of Vulnerability (V). As shown in Figure 1, Vulnerability (V) is a function of Threat Capability (TCap) and Resistance Strength (RS).

Case Study – SysAdmins Accessing PII

Still with me? To get an idea how this works with cybersecurity, let’s evaluate the Threat Event Frequency (TEF) for my System Administrator (SysAdmin) team exploiting Personally Identifiable Information (PII).

1. Estimate the TEF = Contact Frequency (CF) x Probability of Action (PofA)

  • Since we’re talking about SysAdmins we can assume a high CF given their access to the network and applications running on the network.
  • The PofA is probably low since most SysAdmins are good people and trusted employees.
  • From Table 2, we therefore estimate a Low TEF.

Table 2 – Estimating TEF

2. Estimate the Vulnerability (V) = Threat Capability (TCap) x Resistance Strength (RS)

  • To estimate TCap we need to assess the SysAdmin’s skill (knowledge and experience) and resources (time and materials) they bring to bear, versus the overall threat actor community. SysAdmins are generally highly skilled with the time and materials to do great damage.   In addition, my SysAdmins have all gone through SANS training so they are quite astute when it comes to security vulnerabilities, controls and exploits.  Therefore, I’m estimating my SysAdmins TCap to be Very High (VH), equivalent to the top 2% of the threat actor population.
  • To calculate RS we need to evaluate the controls in place to resist negative action taken by the threat community (SysAdmins). On my network all PII is stored encrypted with strong key management.  In addition, all users with access to PII must use two-factor authentication with a one-time-password token (Google Authenticator).  Because of this, I estimate my RS is also Very High(VH), protecting against all but the top 2% of the threat actor population.
  • As shown in Table 3, we therefore estimate a Medium V.

Table 3 – Estimating Vulnerability

You’re probably wondering why a Very High TCap and a Very High RS result in a Medium Vulnerability (V)? This was my first thought, too. However, in this example a Very High RS means the SysAdmins must jump through some significant hoops to catch PII in the clear. Yes, the SysAdmins have the contact, knowledge and skills to do this, but the risk of being detected while stealing the PII is very high because of what’s needed to overcome the RS. In the end, a Very High RS trumps the Very High TCap, resulting in a Medium Vulnerability (V).

3. Finally, estimate Loss Event Frequency (LEF) = Threat Event Frequency (TEF) x Vulnerability (V)

Table 4 – Estimating LEF

So the end result of this short analysis is that my LEF for SysAdmins exploiting PII data is Low.
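For readers who want to mechanize this kind of qualitative analysis, here is a minimal Python sketch of the table-lookup logic behind steps 1-3. The lookup rules below are simplified stand-ins for Tables 2-4 (the official Open FAIR tables are published by The Open Group); they happen to reproduce the SysAdmin/PII result, but they are not the standard’s matrices.

    # Hedged sketch: qualitative TEF -> Vulnerability -> LEF derivation.
    LEVELS = ["VL", "L", "M", "H", "VH"]

    def estimate_tef(contact_frequency, probability_of_action):
        # Placeholder rule: TEF is capped by the lower of the two inputs.
        return min(contact_frequency, probability_of_action, key=LEVELS.index)

    def estimate_vulnerability(tcap, resistance_strength):
        # Placeholder rule: how far TCap exceeds RS drives Vulnerability.
        delta = LEVELS.index(tcap) - LEVELS.index(resistance_strength)
        return {-4: "VL", -3: "VL", -2: "L", -1: "L",
                 0: "M", 1: "H", 2: "H", 3: "VH", 4: "VH"}[delta]

    def estimate_lef(tef, vulnerability):
        # Placeholder rule: LEF is capped by the lower of TEF and V.
        return min(tef, vulnerability, key=LEVELS.index)

    # SysAdmin / PII scenario from the post:
    tef = estimate_tef("H", "L")              # high CF, low PofA  -> Low TEF
    v = estimate_vulnerability("VH", "VH")    # VH TCap vs. VH RS  -> Medium V
    lef = estimate_lef(tef, v)                # Low TEF, Medium V  -> Low LEF
    print(tef, v, lef)                        # L M L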

How does this help?  Once I compute the Loss Magnitude (LM) then I can calculate the risk I face from my SysAdmins.

The beauty of this model is we can assign any values we want to the categories (e.g. Low is $1, $100,000, $1M, etc.). The challenge of this model is we can assign any values we want to the categories! This makes it really hard to quantify estimated risk.

Parlez-Vous Uncertainty?

The good news is we can quantify estimated risk. As discussed above, the majority of the Open FAIR factors are distributions (Min, Max, Average and Mode). The reason for distributions rather than discrete numbers is the level of uncertainty in each of these factors. For example, from the above example, I say my SysAdmins have a very high Threat Capability (TCap). In reality, most are very capable (Mode), but newer hires might be much less capable (Min) and my long-time employees might be extra-extra-capable (Max). Similarly, the Probability of Action (PofA) might be extra-extra-low for the long-time employees and much higher for the most recent hires.

So, how do we deal with data that has significant uncertainty? We can use Monte Carlo (they speak French there, don’t ya know?) simulations to quantify our Open FAIR factors. For those not familiar with Monte Carlo simulations (or, for those of us who learned it once and quickly forgot it), it’s a means to analyze data with significant uncertainty. The Monte Carlo process analyzes thousands of scenarios to “create a more accurate and defensible depiction of probability given the uncertainty of the inputs.”

The output of the Monte Carlo analysis looks something like this:

Table 5 – Open FAIR Analysis

A few key points about Table 5:

  1. This is one loss event scenario. For example, this might be the above case of a SysAdmin exploiting PII.  We would run other analyses for other employees (Executives, Staff, etc.) exploiting PII.
  2. In this scenario, we’re looking at a minimum of one primary loss event every twenty years, a maximum of about once every two years and a most likely frequency of about once every seven years. Similarly, we’re looking at a minimum primary loss magnitude of approximately $70K/event, a maximum of approximately $780K/event and a most likely magnitude of approximately $440K/event.
  3. On an annualized basis we’re looking at a most likely Total Loss Exposure (Primary and Secondary) of approximately $170K/year.
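As a rough illustration of how this kind of output is produced, here is a minimal Monte Carlo sketch in Python. The triangular distributions and their min/mode/max parameters are placeholders loosely inspired by the scenario above; commercial tools such as RiskLens use calibrated PERT-style distributions, whole-number event counts and far more scenarios, so this will not reproduce Table 5 exactly.

    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000  # simulated years

    # Hypothetical calibrated estimates (min, mode, max) for one loss scenario.
    lef = rng.triangular(0.05, 0.14, 0.50, N)          # loss events per year
    lm = rng.triangular(70_000, 440_000, 780_000, N)   # dollars per event

    # Simplification: a full FAIR simulation would draw an integer event count
    # per year and a separate loss magnitude for each event.
    ale = lef * lm

    print(f"mean ALE    : ${ale.mean():,.0f}")
    print(f"10th pctile : ${np.percentile(ale, 10):,.0f}")
    print(f"90th pctile : ${np.percentile(ale, 90):,.0f}")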

We’ve done it!  We’ve converted our qualitative assessment to a quantitative assessment.

You’re probably wondering if I’m being lazy by listing out approximate numbers when the table shows nice discrete numbers? As Jack Jones drives home in his book, the challenge with using discrete numbers is that they imply a highly unrealistic level of precision.

FAIR Thee Well!

We’re almost there!  Before pulling this all together, it’s important to emphasize how Open FAIR differs from other risk approaches.  From my perspective, there are three key differences:

  • It’s an ontology of the fundamental risk factors. It establishes a lingua franca of risk to compare different risk situations on a common plane.   For example, it allows us to discuss and compare cybersecurity risk with financial risk with health risk, etc.
  • It is a means to establish probability of future risk. Just as ROI/ROSI is ineffective in projecting future returns, checklist-based risk assessments are not effective predictors of future risk.
  • It’s reproducible and transparent, and its underlying assumptions are defensible.

Giving The Gordon-Loeb Model a FAIR Shake

Recently, I had lunch with Professor Gordon to discuss his cybersecurity economics model and the potential alignment of Open FAIR with his work.

The way I see it, calculating potential loss is a means to an end for the Gordon-Loeb Model versus being the end in itself for Open FAIR.  In other words, the power of the Gordon-Loeb Model is its impact on making cybersecurity investment decisions and the power of Open FAIR is its impact on estimating cybersecurity loss.

What Professor Gordon stressed to me is the potential value of applying the fundamentals of the Gordon-Loeb Model to Open FAIR to determine optimum investment.  These fundamentals are (please go here for background):

  • Focus on the underlying assumptions. Like the Gordon-Loeb Model, Open FAIR has a number of underlying assumptions.  If we can’t rationally explain the assumptions then the outcomes are suspect.
  • Invest, but invest wisely. As discussed in my last post, the cybersecurity investment value increases at a decreasing rate.  The Gordon-Loeb Model (after going through some significant math gymnastics) projects, on average, we should invest ≤ 37% of expected loss.
  • Related to the above point, cybersecurity investments have productivity functions: the first million invested is more effective than the second which is more effective than the third, etc. This leads to the following:
    1. There is no such thing as perfect security
    2. The optimal level of investment does not always increase with the level of vulnerability/threat.  The best payoff often comes from mid-level vulnerability/threat investments

Conclusion

This is very exciting!  We have a model and process to determine what we should spend on cybersecurity.  Yeah!  However, we still need to figure out what we should spend this money on.  In my next two posts I’m going to lay out both a qualitative and a quantitative approach to answering this question.  In the first post I will discuss how to use the SANS 20 controls to evaluate (qualitative) potential control investments.  In the second post (the last post in this series) I’ll walk through an example using the Open FAIR methodology/ontology to evaluate (quantitative) potential control investments.

For those readers sticking with me until this point, thank you so much!  I know this was a really long post, but I couldn’t find a logical point to break it into multiple posts and still retain its flow and value.  Please add your comments so we turn this from a not-so-quick read to an ongoing and engaging discussion.

Cybersecurity Economics: What The CIO/CISO Must Know

Cybersecurity Economics

In my last post we started a discussion around cybersecurity economics by asking two simple questions: How much should we spend on cybersecurity and what should we spend it on? To answer these questions I started on the hunt for a financial model. From my research, one of the better models for cybersecurity cost justification is an ROI model from Booz Allen Hamilton, Inc. It’s a nice model, but as discussed in my first post, it falls far short of answering my questions.

In this post I’m taking a different tack. Rather than focusing on the hard number – thou shalt spend $5M on cybersecurity – I’m taking a step back and focusing on the fundamental cybersecurity economics. My goal is a cost-benefit analysis: keep increasing cybersecurity spending as long as the incremental benefit is greater than the cost. At the point where the incremental benefit equals the cost, we’ve reached the limit of our spending. This will at least put an upper limit on the answer to my first question of how much we should spend.

NPV vs. IRR vs. ROI

As mentioned previously, most analyses I’ve seen for cybersecurity spending are based on ROI. Despite some feeble attempts, I believe a true return is impossible with cybersecurity: the best we can do is a reduction in potential loss, or potential costs. For this reason some people are using the term return on security investment (ROSI) rather than ROI. As mentioned in the previous post, this is the premise of the Booz ROI model. Based on the Booz model:

ROSI = (Benefits – Costs) / Costs

So, let’s invest $1M in an identity management system (Cost) that reduces our expected loss by $2M per year (Benefit). The identity management system costs us $200,000/year to operate, and our three-year ROSI is $4.4M (assuming future benefits are not discounted, an assumption that will be revised in the next example).

Table 1

If we are using ROSI as our justification measure then we’d most likely move forward with this investment.

Unfortunately, it’s not that simple. There are two problems with using ROSI to justify cybersecurity investments. First, ROSI is a historical figure, whereas I’m using it to project future benefits. As discussed below, this can dramatically overstate the economic rate of return. The second issue with ROSI is it’s not the same thing as optimizing corporate investment. In fact, from the CFO’s perspective, the goal of the firm is not maximizing ROSI, but rather deriving the optimal level of cybersecurity investment for the firm. These are two very different goals. The bottom line is that walking into a board meeting with ROSI as a cybersecurity spending justification will definitely get the discussion with the CFO off on the wrong foot!

Two terms that CFOs understand are the Internal Rate of Return (IRR) (economic rate of return) and Net Present Value (NPV) (comparing anticipated benefits and costs over time). What these measures do that ROSI does not is discount future benefits and costs back to today’s value (PV).

NPV

The formula for NPV is pretty straightforward, and it’s based on k, the discount rate.
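For reference, the standard form (with C0 the initial investment, Bt and Ct the benefits and costs in year t, and n the number of years) is:

NPV = –C0 + Σ (Bt – Ct) / (1 + k)^t, summed over t = 1 to n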

With NPV we have three choices:

  1. Invest if the NPV > 0
  2. Reject investment if NPV < 0
  3. Indifferent on investing or not investing if NPV=0

Taking the same investment/benefits from above and assuming a 15% discount rate, our NPV is $3.1M.

Table 2

Special K

In the above example, we’d make the investment because NPV > 0; however, the NPV is substantially less than the ROSI.

The reality of cybersecurity investments is there is significant risk involved, particularly when projecting expected loss reductions. What I really like about NPV is we can reflect this risk in the discount factor, K. For example, what if we double k to 30% to reflect the highly uncertain nature of our projections? This drops our NPV to $2.3M.
Table 3
Now we’re looking at an NPV of $2.3M versus our initial ROSI projection of $4.4M and initial NPV of $3.1M.  Most likely, we’d still make the investment since NPV >0, but it’s not the slam dunk we see with the ROSI, or even the initial NPV.
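For readers who want to reproduce the numbers in Tables 1 through 3, here is a minimal Python sketch. It assumes the cash flows described above ($1M up front, a $2M annual loss reduction and $200K in annual operating cost over three years) and treats the $4.4M ROSI figure as the undiscounted net benefit.

    def npv(initial_investment, annual_net_benefit, years, k):
        # Discount each year's net benefit to present value, then subtract the up-front cost.
        pv = sum(annual_net_benefit / (1 + k) ** t for t in range(1, years + 1))
        return pv - initial_investment

    net_benefit = 2_000_000 - 200_000  # $2M annual loss reduction less $200K operating cost

    print(f"Undiscounted net benefit: ${3 * net_benefit - 1_000_000:,.0f}")            # ~$4.4M
    print(f"NPV at k = 15%: ${npv(1_000_000, net_benefit, 3, 0.15):,.0f}")             # ~$3.1M
    print(f"NPV at k = 30%: ${npv(1_000_000, net_benefit, 3, 0.30):,.0f}")             # ~$2.3M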

The other key economic factor I mentioned above is Internal Rate of Return (IRR). I won’t get into the details here, but with IRR the goal is finding the discount rate where the initial investment (C0) equals the PV of future net benefits. With IRR, invest if IRR > K, reject the investment if IRR < K and be indifferent to investing or not investing if IRR = K.

My takeaway here is K (cost of capital) is critical.  Adjusting K is how companies can adjust for risk:  higher risk investments carry a higher K; the higher the K, the lower the NPV.  Of course, figuring out the level of risk is necessary before adjusting K (more on this below).

The Gordon-Loeb Model

So far, we’ve been talking about discrete costs and benefits. All the above examples should be funded since the NPV > 0. Yet, I’m still looking to figure out the optimal investment strategy for the company, or “how much should we spend on cybersecurity?” Another way to look at this is: at what point do the incremental benefits become less than the incremental costs?

In 2002, Professors Lawrence Gordon and Martin Loeb published the Gordon-Loeb Model (see: https://en.wikipedia.org/wiki/Gordon-Loeb_Model) for information security economics.  Though the math and underlying analysis go well beyond the scope of this blog, the high level cybersecurity economics are worth reviewing.

Their analysis looks at three factors:

  1. The potential loss from a security breach
  2. The probability that a loss will occur
  3. The effectiveness of additional investments in security

When looking at Figure 1, the first thing to notice is the shape of the curve: benefits of investment increase at a decreasing rate. This is crucial since it shows that at some point the expected net benefits of the investment start decreasing in relation to the cost of the investment. Another way of looking at this is that my first $1M spent on cybersecurity may be far more effective than my fifth $1M spent on cybersecurity. I see four key takeaways here:

  1. Even a little investment in cybersecurity can have a big impact
  2. There is a limit at which point we can’t economically justify spending more on cybersecurity
  3. We’ll never achieve perfect security
  4. We should consider the point of optimal investment (z*) as the point where we define the beginning of our residual risk (more about this in the next post)

Figure 1 – Benefits and Cost of an Investment in Information Security*

I urge everyone to read Managing Cybersecurity Resources: A Cost-Benefit Analysis by Gordon and Loeb.  It’s the best cybersecurity economics discussion out there.  I’ve probably read it three times and each time I learn something new.  Professors Gordon and Loeb have two key findings from their analysis/model:

“One key finding from the model: The amount a firm should spend to protect information is generally no more than one-third or so [37%] of the projected loss from a breach. Above that level, in most cases, each dollar spent will reduce the anticipated loss by less than a dollar.”**

“A second key finding: It doesn’t always pay to spend the biggest share of the security budget to protect the information that is most vulnerable to attack, as many companies do. For some highly vulnerable information, reducing the likelihood of breaches by even a modest amount is just too costly. In that case, companies may well get more bang for their buck by focusing their spending on protection for information that is less vulnerable.”**

Are We There Yet?

So, how can I use the Gordon-Loeb Model to answer my question about how much we should spend on cybersecurity?  There are four steps outlined by Gordon and Loeb.

It is important to note that the model is focused on loss from a data breach.   This is only one class of breach, though the fundamentals should still be the same.  The process is as follows:

  1. Estimate potential loss (L) from security breach for each set of information. The inverse of this becomes the value of the information (high, medium and low)
  2. Estimate the likelihood that an information set will be breached by examining its vulnerability/threat (v) to attack
  3. Create a grid with all the possible combinations of the first two steps, from low value (low L, low v) to high value (high L, high v)
  4. Focus on spending where it should reap the largest net benefits based on productivity of investments

Table 4
From Table 4 the average potential loss (L) from a medium vulnerability/threat (v) against a medium value information set is $25M.  Based on the Gordon-Loeb Model, we should spend on average no more than $9.25M (37% of $25M) to protect this information.  Please note that the above table is a summary of much more detailed work done by Gordon and Loeb.
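A minimal Python sketch of steps 1-4: apart from the $25M medium/medium figure from Table 4, the grid values are placeholders, and the ~37% ceiling is the Gordon-Loeb result quoted above.

    GL_CEILING = 0.37  # roughly one-third, per the Gordon-Loeb finding

    # (information set value, vulnerability/threat) -> estimated potential loss L
    grid = {
        ("low value", "low v"): 1_000_000,         # placeholder
        ("medium value", "medium v"): 25_000_000,  # the Table 4 example
        ("high value", "high v"): 100_000_000,     # placeholder
    }

    for (info_set, v), expected_loss in grid.items():
        ceiling = GL_CEILING * expected_loss
        print(f"{info_set} / {v}: spend no more than about ${ceiling:,.0f}")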

To me, the most powerful finding of the Gordon-Loeb Model is that all cybersecurity spending is not equal: spending effectiveness per dollar is often lower for the highest-risk assets (highest v and highest L) and the lowest-risk assets than for medium-risk assets; and the benefits of spending increase at a decreasing rate.

Summary

Where does the Gordon-Loeb Model leave us? It’s clearly a cybersecurity economics framework we can use to establish the boundary conditions for cybersecurity spending: on average, we should spend no more than 37% of our potential loss (L). But, as Professors Gordon and Loeb wrote in the WSJ:

“However, this approach is best thought of as a framework, not a panacea, for making sound information-security investments. It is not a magical formula that can be used to churn out exact answers. Rather, it should be used as a complement to, and not as a substitute for, sound business judgment.”**

Another way of looking at this is:

“In theory there is no difference between theory and practice.  But in practice, there is.” – Yogi Berra

So, let’s review: we still need to figure out how much we should spend on cybersecurity and what we should spend it on. It appears that we may have an upper limit on spending as long as we have a clear understanding of our potential loss and vulnerability/threat.

Per Professor Gordon’s warning, we need a way to move from theory of potential loss to practice of potential loss, preferably based on sound business judgement.   If we can do this we can figure out what to spend on cybersecurity.

I believe the missing link between theory and practice is a proper and rigorous accounting for risk (including residual risk).  Calculating risk occurs at the intersection of all the factors we’ve been discussing: loss, threats, vulnerabilities, costs, benefits and sound business judgement.

The great news is there is a framework for calculating risk called Factor Analysis of Information Risk (FAIR). In my quest, when I learned about FAIR I was so excited I ran out and became certified on it! As I’ll discuss in my next post, I’m hoping that combining the Gordon-Loeb Model and FAIR and using NPV will give us the framework we need to determine what we should spend on cybersecurity and, potentially, what we should spend that money on.

*Gordon, L. A. and M. P. Loeb, 2002, “The Economics of Information Security Investment,” ACM Transactions on Information and System Security, pp. 438-457.

**Gordon, L. A. and M. P. Loeb, 2011, “You May Be Fighting the Wrong Security Battles,” The Wall Street Journal, September 26.

Cybersecurity ROI – Realm of Irrationality?

Do you know how much you should spend on cybersecurity?  It’s such a simple question, yet the answer is terribly elusive.  By the end of this series I hope to have some answers.  Specifically, my goal is answering the following questions:

  1. How much should we spend on cybersecurity?
  2. What should we spend the cybersecurity dollars on?

Based on various analyst firms’ research, companies spend about 5% of their revenue on IT and about 5% of that on cybersecurity. To put this in perspective, small companies typically spend about $125K per year on cybersecurity; midsized companies about $1.25M; large companies about $12.5M; and very large companies about $50M annually. This is a conservative view, especially given companies like JPMorgan Chase plan to spend $500 million this year.
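As a quick sanity check, here is the arithmetic behind that rule of thumb; the revenue tiers are my own assumptions, chosen to reproduce the figures above rather than taken from any one analyst report.

    # Rule of thumb: ~5% of revenue goes to IT, and ~5% of the IT budget goes to
    # cybersecurity, i.e. roughly 0.25% of revenue. Revenue tiers are illustrative.
    IT_SHARE = 0.05
    SECURITY_SHARE_OF_IT = 0.05

    revenue_by_size = {
        "small": 50_000_000,
        "midsized": 500_000_000,
        "large": 5_000_000_000,
        "very large": 20_000_000_000,
    }

    for size, revenue in revenue_by_size.items():
        budget = revenue * IT_SHARE * SECURITY_SHARE_OF_IT
        print(f"{size}: revenue ${revenue:,.0f} -> ~${budget:,.0f}/yr on cybersecurity")
    # small: ~$125,000/yr ... very large: ~$50,000,000/yr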

Independent of any justification, we also know that this spending is going up. For example, accounting firm BDO USA LLP surveyed 100 company CFOs; two-thirds said they had increased cybersecurity spending in the past 12 months.

I’ve got two fundamental concerns. First, just because companies are spending this money doesn’t mean they should spend it. And second, even if we can justify spending the money, what should we spend it on? Yes, these are loaded questions, because “should” carries great judgement and bias. In a later post I’ll get back to “should” and put it in the perspective of risk. But for now, let’s assume “should” means there is a clear justification for spending the money.

So what’s the justification? My experience is that most cybersecurity spending justification is part economic, part fear-based and part peer pressure. The fear and peer-pressure components certainly garner the most attention in the media: they’re highly speculative, vary from one company to the next, and have no clear right or wrong positions. However, to stay focused, I’m sidestepping these factors and targeting the economic component of the cybersecurity spending justification equation.

Seeking an Economic Model: Cybersecurity ROI, Cost Avoidance or Something Else?

In a recent Lockheed Martin-sponsored survey by Ponemon, 70 percent of IT/security professionals believe ROI is important when selecting security technologies. Ponemon takes an interesting approach to cybersecurity ROI. According to its 2015 IBM-sponsored Cost of Data Breach report, companies face an average of $833,800 annually in data breach costs. As Ponemon states:

“If forgone costs are the same as realized revenue (which is to say, ‘a penny saved is a penny earned’), then whatever you spend toward avoiding that $833,800 cost is an investment. So if you implement a security program, you should divide $833,800 by the total cost of the program and express the result as a percentage. I warn you. You could be in for a shock. A program costing $10,000, for example, results in a return on investment of 8,338 percent. How often do you encounter an opportunity with a cybersecurity ROI of 8,338 percent?”

This approach has great merit. If I spend $10K to avert $833,800 in incident response costs, then I’m getting a very good return on my investment. However, this isn’t really a cybersecurity ROI so much as a non-spend, or cost avoidance, model.
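For what it’s worth, here is Ponemon’s penny-saved arithmetic as a short sketch; the $10K program cost is the hypothetical from the quote above.

    # Cost avoidance "ROI" per the Ponemon quote: avoided breach cost / program cost.
    avoided_breach_cost = 833_800  # average annual data breach cost (2015 report)
    program_cost = 10_000          # hypothetical program cost from the quote
    print(f"Cost avoidance 'ROI': {avoided_breach_cost / program_cost:.0%}")  # ~8338%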

The closest thing to a cybersecurity ROI model I’ve seen is dubbed security enablement by some vendors and analyst firms. It goes something like this: if I spend $1M on multi-factor authentication and it allows me to deliver online investment advice to high-net-worth investors, I can increase the company’s revenue by $10M. On the surface this appears to answer my second question (what we should spend our money on). Unfortunately, it’s extremely rare to directly link a revenue increase to a single security investment. Plus, what about all the meat-and-potatoes security controls that can’t be linked to any revenue impact?

I’m finding a real cybersecurity ROI model to be elusive. To underscore this point, the same 70% of respondents in Ponemon’s survey state it’s difficult to accurately calculate the ROI of any given security solution. I’m starting to wonder whether the search for a cybersecurity ROI might be a red herring and whether I should focus on cost avoidance to answer my questions. We’ll see.

A Cost Avoidance Model

The best cost avoidance model I’ve seen is the Booz Allen Cyber ROI model (yes, I know ROI is in the title!). The model’s theme is incident avoidance. For example, Booz references a different Ponemon report’s (2014 Cost of Cyber Crime) calculation that cyber crime costs companies on average $12.7M annually. Booz makes the case that blocking the cyber crime in the first place has clear financial benefits. Agreed.

Booz states: “Many aspects of cyber investment financial value are the same as those for any traditional investment… The differentiating factor, however, is that cyber investment value is based on three key cost avoidance components:

  1. Cost to fix
  2. Opportunity cost
  3. Equity loss”

Booz goes on to state “the downstream impacts from opportunity costs and equity losses can account for as much as 25 percent of the true total cost of a successful attack.”

Their cyber cost avoidance model has a lot more to it than I can do justice to in this post, and I urge everyone to read the white paper. My primary takeaway from the model is the following:

Attack costs (fix + opportunity + equity) × probability of a successful attack × attack frequency = expected loss value

This is a good equation that is applicable to my quest. As I’ll discuss in a future post, there is a related, but even better, equation in the Open Group’s Factor Analysis of Information Risk (FAIR).
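Here is the Booz expected-loss equation as a small sketch; the structure comes from the white paper, but every input number below is a placeholder I’ve made up for illustration.

    # Booz-style expected loss: attack costs x probability of success x attack frequency.
    def expected_loss_value(cost_to_fix, opportunity_cost, equity_loss,
                            success_probability, attacks_per_year):
        attack_cost = cost_to_fix + opportunity_cost + equity_loss
        return attack_cost * success_probability * attacks_per_year

    # Placeholder inputs for illustration only.
    loss = expected_loss_value(
        cost_to_fix=2_000_000,
        opportunity_cost=400_000,
        equity_loss=600_000,
        success_probability=0.05,  # 5% of attempted attacks succeed
        attacks_per_year=20,
    )
    print(f"Expected annual loss: ${loss:,.0f}")  # $3,000,000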

I’ve Eaten My Meat, Now Can I Have My Pudding?

Does the Booz model answer my questions? Unfortunately, no. Though I really like aspects of the model, I see some challenges with it. For example, Booz calls it an ROI model but it’s really a cost avoidance model; this is a pretty minor point. My greater concern is that relying on equity loss and opportunity costs is highly problematic: research indicates most companies experience only a temporary equity hit after an incident, and it’s very tough to distinguish lost income from delayed income (how many people simply delayed their purchase until the website was back up?).1 Finally, the model doesn’t have the granularity I’m looking for to help determine not just whether we should spend money, but what to spend it on. Still, please review the model, since it lays out an excellent five-step framework of which I’m focusing only on step 3: Quantify Value.

What Now?

I still need to answer my simple questions, and cost avoidance gets me closer to answers, but not close enough. In my next post I’ll discuss the work of Dr. Lawrence Gordon and Dr. Marty Loeb at the University of Maryland’s Robert H. Smith School of Business. Linking their cybersecurity spending model to a cost avoidance model may help get the answers I seek. I hope you will seek the answers with me. In the meantime, please comment so we can get the discussion going.

 

1Gordon, L. A., M. P. Loeb, and L. Zhou, 2011, “The Impact of Information Security Breaches: Has There Been a Downward Shift in Costs?” Journal of Computer Security.