Going the distance with cyber risk
Assessments of risk and probability are central to security work. Through workshops, many practitioners have learned that, in practice, some risk assessments are highly speculative and lack data to support them. I have seen heated, strongly opinionated discussions on this topic: can we really know anything about the future, and do we have enough data? In a world where most companies still struggle with basic cyber hygiene, discussions about probabilities can quickly become a derailment. Here is my contribution to understanding the subject; it may help unstick those discussions.
What is the need?
A more pressing question than probabilities is: what will it take to convince managers and decision-makers that investing in measures against unacceptable risk is a good idea? We are in a battle for scarce resources, and the answer is person- and business-dependent. For some decision-makers, it is enough for the expert to say that the measure is a good idea because the risk is serious. Others will require a more thoroughly documented risk assessment, visualized as a risk matrix or radar diagram with pre- and post-assessments of the control effect. Some face even more demanding decision-makers, who require still higher quality before they allocate resources; in such cases they may ask for thorough audit reports or economic analyses of risk and control cost. Map stakeholder expectations and needs before you start.
Approach to probabilities
We can understand risk as the probability that an unwanted event will occur and cause consequence(s). To understand probabilities, we should look at the basic data types in statistics:
- Categorical (nominal) data assigns a category, e.g., gender or diagnosis. In cyber risk, we categorize by attack type, incident type, consequence, etc.
- Ordinal data indicates a sequence, order, or ranking, e.g., position in a competition or a priority list.
- Quantitative data has numerical values where the distance between values is meaningful. It can be interval data, measured on a scale with defined intervals but no true zero (e.g., temperature in Celsius), or ratio data with a true zero (e.g., speed, length, and event costs).
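A minimal sketch may help make the distinction concrete. The incident names and numbers below are hypothetical examples, chosen only to show what each data type supports:

```python
# Illustrative sketch: the same three incidents described with each data type.
# All names and figures are hypothetical.

# Categorical (nominal): labels with no inherent order
attack_types = ["phishing", "ransomware", "insider"]

# Ordinal: a ranking -- we know the order, not the distance (1 = most severe)
severity_rank = {"phishing": 3, "ransomware": 1, "insider": 2}

# Quantitative (ratio): measured values with meaningful distances and a true zero
incident_cost = {"phishing": 40_000, "ransomware": 850_000, "insider": 120_000}

# Ordinal data supports sorting...
by_rank = sorted(severity_rank, key=severity_rank.get)

# ...but only quantitative data supports distances between values:
gap = incident_cost["ransomware"] - incident_cost["insider"]
```

Sorting the ordinal ranks tells us ransomware comes first, but only the quantitative costs tell us it is 730,000 more expensive than the insider incident.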
The picture above contextualizes the data types with competitors in a women's alpine skiing event. The podium provides the ranking (ordinal); the time differences between competitors describe the quantitative distance between them.
Most cyber risk assessments are ordinal
An expert can often say that one risk is more severe and more likely than another and be entirely right based on their knowledge. Whether this is sufficient depends on the situation. When the purpose is to rank risks by severity to prioritize resources and work, ordinal assessments are adequate and the most common approach; these are often called qualitative assessments. The risk matrix and similar tools provide a ranked risk picture (ordinal) but do not properly describe the distance between the risks.
Ordinal assessments say that some events are more severe than others, but just how much more likely or severe requires measurement.
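The ranking that a risk matrix produces can be sketched as below. The risks, the 5-level scales, and the likelihood-times-consequence scoring rule are all hypothetical illustrations of a common matrix convention, not a prescribed method:

```python
# Hypothetical sketch: ranking risks on a 5x5 ordinal risk matrix.
LEVELS = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

# (name, likelihood, consequence) -- illustrative entries only
risks = [
    ("Stolen laptop", "high", "low"),
    ("Ransomware outage", "medium", "very high"),
    ("Misdirected email", "very high", "low"),
]

def matrix_score(likelihood, consequence):
    # The product yields an ordinal ranking score, not a measured loss:
    # a score of 15 is worse than 10, but not "1.5 times as bad".
    return LEVELS[likelihood] * LEVELS[consequence]

ranked = sorted(risks, key=lambda r: matrix_score(r[1], r[2]), reverse=True)
```

The output is exactly what the text describes: an ordering for prioritization, with no meaningful distance between the entries.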
The distance between risks
We quantify our risks with probability estimates and loss distributions in quantitative risk analysis. The simplest form of quantitative assessment might be: based on historical data, we expect to receive a fine for an offense once every ten years (entirely without fault, of course). We see that the fines given to similar companies average 1 million. That gives an expected loss of 100,000 per year. If we do this for all our risks, we get more than just a ranking; we get frequencies and costs, a measurable distance between the risks.
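The arithmetic from the fine example, written out:

```python
# The fine example from the text: one fine every ten years, average fine 1 million.
frequency_per_year = 1 / 10              # expected fines per year
average_loss = 1_000_000                 # average observed fine

# Expected annual loss = frequency x average loss
expected_annual_loss = frequency_per_year * average_loss  # 100,000 per year
```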
This example was simple, but we can build better statistical models to estimate cyber risk. That said, we are approaching the domain of specialist knowledge, and the reliability of such models for cyber is a contentious topic that this article will stay away from. Quantification is gaining a foothold as the industry matures and is a necessary step if cyber is to be treated on par with other business risks. But I think this push has yet to arrive here in Scandinavia.
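To give a flavor of what "better statistical models" can mean, here is a minimal Monte Carlo sketch of an annual loss distribution: a rare event (roughly the one-in-ten-years frequency from the example) paired with a lognormal severity. The parameters are illustrative and uncalibrated, and real models (e.g., in the FAIR tradition) are considerably more careful about frequency and severity distributions:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_annual_losses(freq=0.1, mu=13.1, sigma=1.0, years=100_000):
    """Monte Carlo sketch: simulate many years of losses.

    freq  -- probability of the event occurring in a given year (illustrative)
    mu, sigma -- parameters of a lognormal severity distribution (illustrative)
    """
    losses = []
    for _ in range(years):
        total = 0.0
        # Crude Bernoulli approximation of a rare event occurring this year
        if random.random() < freq:
            total += random.lognormvariate(mu, sigma)
        losses.append(total)
    return losses

losses = simulate_annual_losses()
mean_annual_loss = statistics.mean(losses)
```

Instead of a single cell in a matrix, we now get a full distribution: a mean annual loss, but also tail outcomes ("how bad is a 1-in-100 year?") that a qualitative ranking cannot express.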
Increase the quality of the assessments
We can take steps to improve the qualitative assessment process without quantification:
- Follow a repeatable risk assessment method that can be improved over time. The method should include asset evaluation, then vulnerability and threat assessments, prioritized in that order.
- Create predefined probability and consequence categories that standardize the language across the organization. If I say that something is high risk or very likely, the terms are defined in the guideline so that everyone understands the same thing.
- Assign clear ownership and responsibility for following up on risks and measures, so that the improvement process actually functions.
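A guideline's predefined categories can be as simple as a shared lookup table. The category names and frequency bands below are hypothetical; the point is only that every reader resolves "likely" to the same definition:

```python
# Hypothetical guideline: shared, predefined probability categories.
PROBABILITY_CATEGORIES = {
    "rare":     "less than once per 10 years",
    "unlikely": "once per 3-10 years",
    "possible": "once per 1-3 years",
    "likely":   "more than once per year",
}

def describe(category: str) -> str:
    # Everyone reads the same definition, so "likely" means
    # the same thing across the whole organization.
    return f'"{category}" means: {PROBABILITY_CATEGORIES[category]}'
```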
Risk visualization
The risk matrix has a solid footing in the industry but gets a lot of criticism in the academic literature because it miscommunicates risk in various ways. However, the risk matrix is not a high-precision scientific instrument, and we should not attribute to it properties it does not have. The risk matrix works just fine when we understand it as a ranking and prioritization tool.
There are also several initiatives for improving the risk matrix, such as using logarithmic scales on the axes to enhance communication and precision, or stating the level of knowledge behind the estimates as a spread or interval in the matrix, as illustrated below. It remains to be seen whether these ideas will gain a foothold.
Many decision-makers also like radar/spider diagrams. One caveat: compared with the risk matrix, these diagrams reduce the granularity of the information by aggregating the X and Y axes into a single score. The pie chart at the beginning of this article is a good example of this issue; we lose sight of the individual probability and consequence estimates.
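The information loss from aggregation is easy to demonstrate. In this sketch (hypothetical risks, and the common product scoring rule as an assumption), two very different risks collapse into the same single score:

```python
# Sketch of the information loss when two axes collapse into one score.
risk_a = {"likelihood": 5, "consequence": 1}   # frequent but minor
risk_b = {"likelihood": 1, "consequence": 5}   # rare but severe

def aggregate(risk):
    # One common aggregation: likelihood x consequence
    return risk["likelihood"] * risk["consequence"]

# Both aggregate to 5: a single-axis view can no longer tell apart
# a nuisance risk from a rare catastrophe.
```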
The need for precision varies
In reality, it is not uncommon for dozens of risks to appear in the risk assessment of a single IT system. Building statistical models with loss distributions for each of them feels far-fetched; we might as well spend the time fixing the risks. It may be worth it when the decision-maker demands it, but if the purpose is to identify where the shoe pinches and prioritize the work, then qualitative methods and ordinal rankings are more than good enough. Don't get hung up on probability assessments; the utility of a risk assessment often lies in the review of the problem itself, the resulting awareness, and problem ownership.
Then all that remains is to get to work:
- Get the guideline in place
- Distribute roles and responsibilities
- Choose a method (Use Diri)
- Find a tool (Use Diri)
And for me to wish you good luck!
If you enjoyed this text, make sure to visit us at https://www.diri.ai/.