Why Spectre demands more elegantly-coded software

Developing bloated software was almost excusable while processors were forever getting faster. That changed in January.

For the foreseeable future, developers are going to have to get used to coding for slower hardware. Leaving aside the headline slowdowns seen by some systems with Meltdown patches applied, the longer-lasting problem is Spectre. As its prescient namers realized, this flaw will haunt the IT world for years to come.

Spectre is the gift that keeps on giving. Mitigating it requires recompiling applications with new instruction sequences that work around speculative-execution vulnerabilities. But that's just putting a sticking plaster on a festering wound. Fundamentally we need new processor designs, ones that work differently, since what we have now just isn't secure. Unfortunately, new CPU designs aren't likely to appear any time soon.
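To make that recompilation point concrete: recent GCC and Clang releases added "retpoline"-style code-generation flags that rewrite indirect branches to blunt Spectre variant 2. The flag names below are the real GCC 8 and Clang 6 options, though availability depends on your toolchain version, and `app.c` is just a placeholder source file:

```shell
# GCC 8+: replace indirect branches and returns with retpoline thunks
gcc -O2 -mindirect-branch=thunk -mfunction-return=thunk -o app app.c

# Clang 6+: the equivalent single flag
clang -O2 -mretpoline -o app app.c
```

Note the trade-off the article describes: every indirect call now goes through a thunk instead of being predicted, which is exactly the kind of across-the-board slowdown developers must now code around.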

For many development projects, the lack of total security may not matter too much. The risk of compromise is fairly low, and no in-the-wild exploitation has yet been proven. But mission-critical applications require a higher level of data security than “It's probably OK”.


There are some Spectre-immune systems, but they tend to be slow or old or both. The Raspberry Pi is one, and I'm writing this article on another, a pre-2013 Intel Atom box. It doesn't do Out-of-Order (OoO) execution, and therefore it doesn't do anything particularly quickly. This is now the choice facing everyone who cares about security: fast and flawed, or slow and safe (or at least safer).

Until that changes, until a new generation of chips somehow circumvents the flaws and gives us full-speed computing without speculative execution vulnerabilities, software developers must step up and make a difference. Yes, efficient coding is suddenly back in fashion, because developers can no longer assume that tomorrow's hardware will be faster than today's. But what does this mean in development terms?

For parallel processing on multiple cores it means a better understanding of the workflow within the application being developed, which is hard. In fact it makes the traveling salesman problem look like child's play. Figuring out which parts of an application's workload can be split apart, processed in parallel and then safely recombined requires more than merely understanding the application's code. It requires insight into the user's mind. How will they use feature X? Which data streams will require the most work?
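The split-process-recombine pattern described above can be sketched in a few lines. This is a minimal illustration, not a recipe from the article: the function names are invented, and the workload (a toy checksum) is chosen only because its partial results recombine safely by simple addition:

```python
from multiprocessing import Pool


def checksum(chunk):
    # CPU-bound work on one independent slice of the data.
    return sum(b * 31 for b in chunk)


def parallel_checksum(data, workers=4):
    # Split the workload into independent chunks...
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...process the chunks in parallel on separate cores...
    with Pool(workers) as pool:
        partials = pool.map(checksum, chunks)
    # ...then safely recombine the partial results.
    return sum(partials)


if __name__ == "__main__":
    data = bytes(range(256)) * 100
    # Addition is commutative, so split-and-recombine gives the
    # same answer as processing the data in one piece.
    assert parallel_checksum(data) == checksum(data)
```

The hard part the article points to is precisely what this toy hides: real workloads rarely decompose this cleanly, and deciding where the split points are requires knowing how the application will actually be used.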
