Risk, measured: components of cyber risk

Why risk in the context of cybersecurity means more than simply the chance of something bad happening.

This is a contributed article by Stephen Roostan, VP EMEA at Kenna Security.


The day-to-day vulnerabilities and security weaknesses found in IT infrastructure and software are often evaluated in terms of risk. One of the immediate issues organisations face, however, is working out what constitutes risk, and how we judge whether it is serious enough to require investment in prevention or remediation.

Indeed, the cybersecurity industry has debated the meaning of risk for many years, and has yet to settle on a definition that everyone involved can unite behind. Dictionaries explain risk as the possibility of something bad happening, or the chance of loss, injury or danger, while others, such as the economist Professor Elroy Dimson, define it simply but effectively: ‘more things can happen than will happen'.

Useful as those definitions are in everyday situations, they don't take us far enough to satisfy the parameters of cyber risk. That's because when assessing cyber risk we also need to measure consequences, and in doing so we get much closer to a useful definition. For cybersecurity professionals, risk is not just the chance of something happening, or the likelihood of an event. It must also include the magnitude of that event if it were to occur.

For example, when we talk about risk management in the cybersecurity context, there might be a 95% chance that something happens, but the impact of that event could be zero. In our terms, that's not a lot of risk. Or, there could be a risk of significant financial loss due to a breach but the probability of that occurrence is no more than 2%. In that situation, an organisation might choose to accept that risk because a fix would seriously degrade the availability of that application or service, for example.
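The two scenarios above can be sketched as simple arithmetic: treating risk as expected loss, i.e. likelihood multiplied by magnitude. This is a minimal illustration of the point, not any particular vendor's scoring model, and the figures are the hypothetical ones from the text.

```python
# A minimal sketch: risk as expected loss (probability x impact).
# Figures mirror the hypothetical scenarios in the text; they are
# illustrative, not real measurements.

def expected_loss(probability: float, impact: float) -> float:
    """Expected loss of an event: likelihood multiplied by magnitude."""
    return probability * impact

# Scenario 1: a 95% chance of an event with zero impact.
scenario_1 = expected_loss(probability=0.95, impact=0)

# Scenario 2: a 2% chance of a breach with a large financial impact
# (illustrative figure of 1,000,000 in some currency unit).
scenario_2 = expected_loss(probability=0.02, impact=1_000_000)

print(scenario_1)  # zero: high likelihood alone is not high risk
print(scenario_2)  # non-trivial: low likelihood can still carry real risk
```

The point of the sketch is that neither probability nor impact alone orders these scenarios correctly; only their product does.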

So, in assessing cyber risk, impact and magnitude must accompany probability and likelihood if we are to make pragmatic judgements and decisions. The benefit of defining risk this way is that it becomes something practical to act on: by considering all the potential consequences, organisations can work to avoid the bad ones (data loss, breach, compromised systems, etc.), as long as the definition of ‘bad' reflects their own unique circumstances.

Security incidents happen every second of every day, but most of them won't have a significant financial impact. A WordPress site that hasn't been updated for years might be compromised, but the loss that occurs as a result is negligible. At the other extreme are the very rare events, such as a piece of malware that jumps across multiple vulnerabilities and machines but where the eventual repercussions could be catastrophic, such as the loss of a customer database or ransomware that locks down systems.

Notable recent examples include the Honda ransomware attack, which prompted speculation that production had been affected. One of the most worrying incidents was the ransomware attack on the foreign currency exchange business Travelex. The company's core IT systems were disrupted for a month and, compounded by the outbreak of COVID-19, the business went into administration, having been "acutely" impacted by the combination of cybercrime and economic downturn.

Despite the regularity of high-profile incidents such as these, the likelihood of a similar event is fairly low for an organisation that manages and patches its systems. But when one does occur, the impact is so large that it gives other businesses pause for thought, and reminds everyone that this kind of risk may need to be managed out of their vulnerability profile on the way to a more secure future.


Data driven decision making

In other markets, such as life insurance, businesses evaluate risk by drawing on huge volumes of cross-industry data collected over many years. By looking at the distribution over time and how data is shifting, insurance companies can make very informed probability and outcome measurements based on the circumstances of each person. The security industry is now taking a similar approach. While not quite as proficient at collecting that outcome data, or as old as the insurance sector, it is building data and insight at an increasing pace.

Yet the challenge isn't just about collecting the maximum amount of data. Security teams already have huge volumes of log and scanner data at their disposal. Until recently, the challenge was that resource-limited security teams had to manually correlate, analyse and interpret this data. Given the exponentially growing volume of vulnerability data and the increasing complexity of IT environments, many found the task all but impossible.

Instead, security teams need to be given the tools and support to evaluate and assess risk by applying automation to data analysis. For example, when we look at exploitation events, we find that only about 34% of the 150,000 vulnerabilities in the US National Vulnerability Database (NVD) have ever been observed as active on business IT infrastructure by security vendors' scanners. Only around 20% of those have published exploits, and just 2-5% of all vulnerabilities are ever used in successful exploitation events. At the lower bound, that means only around 3,000 of the vulnerabilities in the NVD are ever used in successful exploitation.
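The funnel above can be checked with back-of-the-envelope arithmetic. The percentages are the approximate figures quoted in the text, so the results are rough orders of magnitude rather than precise counts.

```python
# Back-of-the-envelope check of the vulnerability funnel described
# above, using the approximate percentages quoted in the text.

total_nvd_vulns = 150_000

# ~34% have been observed as active in real business environments.
observed = round(total_nvd_vulns * 0.34)

# ~20% of those observed have a published exploit.
with_exploit = round(observed * 0.20)

# 2-5% of all vulnerabilities are ever successfully exploited;
# the 2% lower bound yields the ~3,000 figure cited in the text.
exploited_low = round(total_nvd_vulns * 0.02)
exploited_high = round(total_nvd_vulns * 0.05)

print(observed)        # 51000
print(with_exploit)    # 10200
print(exploited_low)   # 3000
print(exploited_high)  # 7500
```

However the percentages are chained, the conclusion is the same: the set of vulnerabilities that actually matter is a small fraction of the total, which is exactly what makes automated prioritisation feasible.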

The reason for this is simple. Without automation to process data quickly and effectively, security professionals' ability to derive clear insights from a rapidly increasing volume of data soon deteriorates. Armed with the right tools, data analysis is no longer overwhelming. In fact, the application of automation through machine learning and data science is so effective that security teams can easily consolidate internal data while applying continuous, comprehensive, real-time threat intelligence from multiple sources.

Like life insurance, effective risk management in security requires aggregating huge amounts of data, as well as the capability to process it quickly and effectively to pinpoint issues and make clear decisions based on actionable insight. A successful data-driven approach combines the ability to aggregate and normalise data at speed with comprehensive, real-time threat intelligence. When this is carried out continuously, organisations can establish and maintain a consistent risk tolerance level.
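To make this concrete, here is a minimal sketch of risk-based prioritisation, assuming a simple model in which each finding's risk score is its estimated exploitation likelihood (e.g. from threat intelligence) multiplied by the business impact of the affected asset. The field names, CVE labels and figures are all hypothetical, not Kenna Security's actual scoring method.

```python
# A minimal sketch of risk-based prioritisation. The model (likelihood
# x asset impact) and all figures are illustrative assumptions, not a
# real vendor's scoring algorithm.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str                     # hypothetical placeholder label
    exploitation_likelihood: float  # 0.0-1.0, e.g. from threat intel
    asset_impact: float             # business impact if compromised

    @property
    def risk_score(self) -> float:
        return self.exploitation_likelihood * self.asset_impact

findings = [
    Finding("CVE-A", exploitation_likelihood=0.90, asset_impact=10),   # old blog
    Finding("CVE-B", exploitation_likelihood=0.05, asset_impact=900),  # customer DB
    Finding("CVE-C", exploitation_likelihood=0.40, asset_impact=300),  # mail server
]

# Remediate the highest-risk findings first, not the most likely ones.
for f in sorted(findings, key=lambda f: f.risk_score, reverse=True):
    print(f"{f.cve_id}: risk score {f.risk_score:.0f}")
```

Note that the near-certain, low-impact finding ranks last: exactly the "95% chance, zero impact" intuition from earlier in the article, applied at the scale of a vulnerability backlog.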

The answer lies in harnessing advances in machine learning and automation to deliver actionable insights, when and where they're needed most. Automating routine tasks, for example, frees security teams to act on data, rather than spend valuable time cleaning and correlating it. This can deliver a strategy that optimises internal operations and drives cyber security investment decisions based on transparent levels of acceptable risk.


Stephen Roostan is VP EMEA at Kenna Security. With over a decade of experience in cyber security and transformation projects, his role at Kenna is to rapidly grow the EMEA organisation to meet the customer demand for risk-based vulnerability management.