
The InfoSec issues more dangerous than Heartbleed or Shellshock

Heartbleed, Stagefright, Shellshock, Poodle, Freak. It’s amazing what giving a vulnerability a name and a fancy logo will do to its popularity and to the level of concern it generates among businesses.

“I think on April 6th 2014, you never would have heard a CEO be worried about OpenSSL 1.0.1b,” says Gavin Millard, Technical Director at Tenable Security. “But roll around to April 7th, and a simple vector logo and a catchy name changed behaviour. Suddenly it was: ‘Are we affected by Heartbleed?’”

“In fact my mum texted me the day Heartbleed broke and said, ‘I was worried Dad and I were affected by Heartbleed.’ Suddenly it really captured people’s minds and focused attention on vulnerabilities.”

These days more and more vulnerabilities are getting fun-sounding names and their own little logos for analysts, experts and press to use, but while they grab attention, are they all equally bad?

“It’s actually driving the wrong behaviour, because when we see these vulnerabilities come up it suddenly drives a knee-jerk, ‘go patch all the things’ reaction.” Millard’s argument is that the addition of a logo and a catchy name masks the real seriousness of a vulnerability: where Heartbleed and Shellshock were very dangerous, Freak and Poodle weren’t as bad, something that is impossible to tell without the right background knowledge, yet all of them create similar levels of ‘Are we adequately protected?’ hysteria.

“While Freak and Poodle were really hitting the headlines, there were some quite disastrously bad vulnerabilities also being discovered in Microsoft products,” he explains. “MS14-066 was a remote code execution in Schannel, a really, really bad vulnerability, but it didn’t get as much press because it didn’t have a vector image, because it didn’t have a catchy name.” Another example he gives is MS14-068, which would allow any user on a network to give themselves admin rights and delete the logs.

“It didn't have a vector logo, everyone ignored it. That is still driving the wrong behaviour.”
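The prioritisation Millard is arguing for is straightforward to sketch: rank findings by severity and exposure and ignore the branding entirely. The snippet below is a toy illustration with entirely hypothetical vulnerability names and figures, not anything Tenable ships:

```python
# Toy sketch: rank vulnerabilities by severity and blast radius, not branding.
# All names and numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Vuln:
    name: str
    cvss: float          # CVSS base score, 0.0-10.0
    exposed_hosts: int   # how many of our hosts are affected
    has_logo: bool       # ignored by the ranking, on purpose

findings = [
    Vuln("BrandedBug", cvss=4.3, exposed_hosts=12,  has_logo=True),
    Vuln("QuietRCE",   cvss=9.8, exposed_hosts=340, has_logo=False),
    Vuln("OldPrivEsc", cvss=8.1, exposed_hosts=95,  has_logo=False),
]

# Sort by score, then by how widely exposed we are; the logo plays no part.
for v in sorted(findings, key=lambda v: (v.cvss, v.exposed_hosts), reverse=True):
    print(f"{v.name:<12} CVSS {v.cvss:4.1f}  hosts {v.exposed_hosts}")
```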

The new vulnerability logos

“Instead of us having these knee-jerk reactions to the latest, greatest zero-day, let's actually take a step back and say: ‘What security issues that we're facing actually deserve a logo, which actually deserve the press, which deserve the focus from organisations to address?’”

In an effort to drive better behaviours and avoid that “go patch all the things” mentality, Millard has created his own logo vulnerabilities.

The first new one is Glimpse: the lack of visibility within infrastructures brought on by years of compound growth. “One of the biggest issues organisations face today is not zero-day, it’s not these latest, greatest threats, it’s the fact they have no idea what their infrastructure looks like because they’ve grown exponentially over the last few years.

“15 years ago, when I still had lustrous hair, we used to have conversations about what we were going to call the next server,” says the follicle-challenged Millard. “We used to name them after racehorses. Now we’re seeing this migration of services to virtual or cloud, and so organisations really lack that visibility, which has a ripple effect all the way through their security posture.”

Also part of the Glimpse problem are mobile and remote workers. If there are workers who rarely come into the office and patch their devices even less often, and you don’t know about them, they become an unknown risk the second they do join the network.

“If I don't know what I have, how can I design the appropriate controls to protect it? Am I spending a pound to protect a penny or a penny to protect a pound?”
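Closing that visibility gap starts with something mundane: comparing what actually answers on the network with what the asset register says should be there. The sketch below is a crude illustration of that idea, not Tenable’s tooling; the subnet, port and register contents are all assumptions:

```python
# Crude sketch: flag hosts that answer on the network but are missing from
# the asset register. Subnet, port and register contents are assumptions.
import ipaddress
import socket

KNOWN_ASSETS = {"192.0.2.10", "192.0.2.11"}    # hypothetical asset register
SUBNET = ipaddress.ip_network("192.0.2.0/28")  # hypothetical office subnet

def responds(ip: str, port: int = 443, timeout: float = 0.3) -> bool:
    """Very rough liveness check: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

discovered = {str(ip) for ip in SUBNET.hosts() if responds(str(ip))}

for ip in sorted(discovered - KNOWN_ASSETS):
    print(f"Unknown device answering on the network: {ip}")
```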

Another of Millard’s new logo vulnerabilities is Shakespeare. A riff on the idea that an infinite number of monkeys on typewriters will produce the writings of the great Bard, Millard’s mantra is “give a chimp a keyboard and a few bashes and he’ll be able to guess your password.”

“We have fostered this view in security that creating a complex recipe for a password is the right way to solve the problem, which is total bullshit.” He explains that aside from the danger of organisations keeping users’ passwords in plain text – something which happens alarmingly often – people are very bad at making up and remembering complex passwords, and inevitably reuse the same ones for multiple accounts.
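The plain-text storage he mentions is the part with a well-understood fix: store only a salted, slow hash of each password, never the password itself. A minimal sketch using Python’s standard library (the iteration count and salt size here are illustrative, not a policy recommendation):

```python
# Minimal sketch: store a salted, slow hash instead of the password itself.
# Iteration count and salt size are illustrative, not a recommendation.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("Tr0ub4dor&3", salt, stored))                   # False
```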

“Complex passwords are not the answer, it’s single-use machine-generated passwords. We know this in the industry yet people keep peddling this view,” he says. “I have over 150 passwords, all of them 26 characters long. Unless I was Stephen Hawking I wouldn’t be able to remember any of them, so I use a password manager.

“This is why your PIN is four digits, because humans aren’t smart and they can’t remember really complex things. So asking them to do this is a really silly thing, and then blaming them when it goes wrong is insane; it’s like blaming the chicken when the fox gets into the coop. It’s not the chicken’s fault, it’s the owner of the chicken that didn’t use the right fences.”
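Both halves of that argument fit in a few lines: a 26-character machine-generated password is trivial for software to produce and remember, and the guessing space it creates dwarfs anything a human will memorise. The alphabet and lengths below are assumptions chosen to match the quotes above:

```python
# Sketch: generate a 26-character random password and compare its search
# space with a 4-digit PIN. The 62-symbol alphabet is an assumption.
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def generate_password(length: int = 26) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different every run

pin_space = 10 ** 4                      # 10,000 possible 4-digit PINs
password_space = len(ALPHABET) ** 26     # 62^26 possible passwords
print(f"PIN guesses:      {pin_space:,}")
print(f"Password guesses: about 2^{math.log2(password_space):.0f}")
```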

The human factor

There are others: Subversion is about insider threats, Bandit is about spending too much on poor fixes, Invader focuses on the inability to identify attack paths in the event of a breach, and EagerBeaver is the user who clicks on every link ever sent to them without considering whether it’s safe. But one of the most important is Stutter: the inability to communicate with other areas of the business.

“Security can sit in this echo chamber shouting the right things, but they lack the ability to communicate effectively to other parts of the business what needs to be done to solve security issues,” he says. “Instead of security trying to teach the business ‘the business of security’, security should be using business language. Bits and bytes don’t belong in the boardroom, basically.”

Instead of answering questions over security with “We’ve just deployed Palo Alto with AES 256-bit encryption and an IPS,” Millard suggests security should be using metrics, KPIs and the things that business people understand. Obviously there’s no shortage of introverts in tech who probably find the idea of making a concerted effort to communicate horribly unpleasant, but if you want to succeed in the industry, there’s not much choice. “Communication is one of the most important tools in the InfoSec person’s toolkit. If you can’t communicate, then you’re always going to fail.”
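One way to act on that advice is to report security work as a metric the board already tracks rather than as a list of products. The sketch below turns hypothetical patching data into a single KPI; the SLA and the figures are made up for illustration:

```python
# Hypothetical example: express patching progress as a board-level KPI
# ("% of critical vulnerabilities fixed within SLA") instead of product names.
SLA_DAYS = 14  # assumed remediation deadline for critical findings

critical_vulns = [
    {"id": "VULN-001", "days_to_fix": 3},
    {"id": "VULN-002", "days_to_fix": 21},
    {"id": "VULN-003", "days_to_fix": 10},
    {"id": "VULN-004", "days_to_fix": 7},
]

within_sla = sum(1 for v in critical_vulns if v["days_to_fix"] <= SLA_DAYS)
kpi = 100 * within_sla / len(critical_vulns)
print(f"Critical vulnerabilities remediated within {SLA_DAYS}-day SLA: {kpi:.0f}%")
```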

“There's us in our small industry that know how to do good security well and solve the problems, and then we have 6.9 billion people doing the opposite of what we want them to do. We sit up on our ivory towers abhorring every time a breach occurs, but in reality that's actually a failure of the people in our industry not being able to tell people what to do. As I say, blaming the chicken for being eaten by the fox.”

Hacking vs. disclosure

I put it to Millard that the world of cyber-security is now a case of professionals versus professionals, except the criminals are better paid and probably more collaborative. “Unfortunately that comes down to morals,” he replies. “And we are struggling in this industry to find more people to help defend against this.

“There’s always going to be people that will want to profit out of other people’s misery. I could make more money being a hacker, you could make money being a hacker, but we don’t do that, because we have a desire to be good, upstanding citizens, and we also don’t want the repercussions of being caught.”

Part of the problem is essentially down to the risk/reward factor. “The vast majority of people want to do the right thing, but sometimes I imagine it’s very tempting to hide a bug when a buyer on the dark web is offering you $100,000, no questions asked.”

“Some of the bug bounty programs are ‘here’s a t-shirt and a pen’. $100,000, or a t-shirt and a pen. I’m not going to live for a year on a t-shirt and a pen. So it’s difficult.”

He uses Hacking Team as an example – a company of professional security experts which held on to various Flash vulnerabilities because it could build a multi-million dollar business selling them to government agencies rather than disclosing them to the public.

“If we’re not enabling researchers to report them to the right people, and then rewarding them the same as they would be down the darker path, then we get a very different situation.”

While bug bounty programs are slowly growing both in volume and reward, Millard wants there to be more recognition for the work security researchers do. “In reality it's only a few of us that get to see it [security researchers getting plaudits] because it’s in the release notes - when was the last time you read the release notes? I do, because I’m a weirdo, but most people don't.

“If we encourage researchers to get value for money but also recognition and value from disclosing these vulnerabilities, then that pivot point between the good and the bad will be far more difficult to cross.”

Dan Swinhoe

Dan is a journalist at CSO Online. Previously he was Senior Staff Writer at IDG Connect.
