Amidst all this evangelism and hype (together with pop-star examples of startups taking the world by storm) it’s sometimes worth assessing how things actually are, and why they are as they are. I found myself doing just that recently, following a day’s session getting the latest from HPE Software on its strategy and approach to Big Data and Information Management. In HPE’s world, this means how it deals with structured data analytics and unstructured data management respectively, with overlaps in between.
I’ve been monitoring the impact of technology for 15 years now, having spent a similar period working in IT. Call me a Kool-Aid drinker, but I’m left with an overwhelming feeling that it really has had a profound effect. At the same time, however, some things have stayed exactly the same. We have seen technology companies come from nowhere, take the world by storm and then abruptly vanish. For every Kodak there is an AltaVista, for every Blockbuster a Digital Equipment Corporation. And even as we are wowed by the Amazons and Ubers, nobody knows which will still be around in 10 years, with the rest no doubt acquired out of existence.
Truth and fiction in Big Data analytics
To wit: Big Data analytics, machine learning, artificial intelligence and all that clever stuff that’s going to rock our worlds, if it isn’t already. According to the rhetoric, we’re heading towards a moment in which decisions will be automated out of existence through the use of smart algorithms. But just how true is this? Of course someone has to present such a singular (if you’ll forgive the pun) view of the future. It gives everyone else something to triangulate against, as do any Luddite positions or disaster scenarios.
And as with any set of polarised perspectives, logic would suggest that the answer lies ‘somewhere in the middle’, which is where prediction gets a whole lot harder. Part of the challenge lies in the fact that we are seeing changes not only in how IT is delivered but also in the kinds of business that result. We will always need manufacturing, power generation, transportation, healthcare, haircuts and manicures and a whole bunch of other industries, products and services. But rare is the industry that is not worried about the effects of so-called ‘digital transformation’ right now.
Insurers are concerned about the threat of data-oriented companies to their core underwriting business; retailers are pushed to the edge by online-only companies; banks face the buzz of fintechs that exploit their core services to deliver far better customer experiences. Meanwhile, utilities are losing the fight to control the increasingly smart home, and traditional car companies seem only to flounder in the face of a smarter generation of vehicle manufacturers.
Where’s the truth? Should established companies seek to get more out of their vast pools of data, or would such exercises amount to fiddling while entire industries burn? These are tricky questions, and nobody has a monopoly on the answers. HPE Software is taking a pragmatic view: it believes at least part of the response lies in what the company is calling ‘augmented intelligence’, which is as much a manifesto as a glib marketing phrase. It is our intelligence being augmented, you see – technology exists to serve our needs as (therefore) smarter beings, who can then build upon the insights they are offered.
It’s about the information (and not the data), stupid
To understand better where things are going, I believe we can start from a reasonably solid foundation – that all companies are information companies. Indeed, they always were, ever since Joe the blacksmith developed his skills and knowledge about what shoe to put on what horse, and Freda learned how to discern a good from a bad payer.
Over recent decades, we have been generating data like it is going out of fashion, but so much of it prevents us from actually being informed. We spend person-years of corporate time and millions of dollars trying to pull together disparate data sources, in the hope we might unlock the value that lies within. All companies thrive or survive based on the quality of the information they maintain about their customers, their back-office processes, their finances and supply chains. And therein lies the challenge, as this data-rich world is also, and increasingly, information-poor.
Speaking of Kodak, the digital camera analogy isn’t bad. We have gone from taking 24 carefully planned shots of a two-week holiday to snapping hundreds, or even thousands of photos, which we either painstakingly file over many hours or leave languishing on hard drives. It’s the same for insurers or retailers looking for the elusive single customer view. “If you can’t measure, you can’t manage” goes the adage, and rare is the company today that can successfully measure. If it can, chances are any such metrics will quickly be out of date.
Perhaps this issue will remain unresolved, at least for as long as we see generating increasing amounts of data as good, or inevitable. At the same time however, we can discern the characteristics of the ‘better-informed’ organisation. First, given the deluge of data faced by any organisation (large or small), just being able to make sense of it is already a good start. Success in this area amounts to basic hygiene factors, delivering capabilities in data integration, quality and structure. A failure in this area possibly also amounts to a breach of regulation, so it is where much attention is focused.
Second comes the ability to drive the organisation with whatever the information is saying: not just understanding the data, but also being able to conduct more detailed analytics and start to make more predictive decisions. It strikes me that this is where many organisations are struggling with the wrong mindset, as they see information as something ‘out there’ which should be consulted on occasion, like the oracle (sic) up the mountain. The fact is, however, that the oracle has come down the mountain, available for consultation at any point.
This gives us an additional hygiene factor, based on a choice: do you use data in your decision making, or do you still hit and hope, based on what you believe might work? As HPE ‘Distinguished Technologist’ Chuck Bear noted:
“Look at two gaming companies with the same idea: the one that does A/B testing will get more market share out of it.” All companies have a choice – to make decisions based on the information they have freely available, or to increase the risk of doing the wrong thing. Organisations really can be their own worst enemies.
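Chuck Bear’s A/B testing point is easy to make concrete. A sketch of the standard approach, using a two-proportion z-test on hypothetical conversion numbers (the figures and function name here are illustrative, not anything HPE described):

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate
    differ significantly from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical trial: 200/4000 conversions for A, 260/4000 for B
z = ab_test_z(200, 4000, 260, 4000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

The point being that the maths is trivial; the differentiator is whether a company bothers to run the experiment at all before committing to a feature.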
The third area is then to use information to learn and improve, to change and become more dynamic. Information is not static but is constantly changing, meaning yesterday’s insights could well be inaccurate or, indeed, wrongly framed. In healthcare, for example, no doubt it made sense at one point to measure ‘bed occupation’ as an indication of utilisation – what this failed to take into account was that as soon as the measure was used, it became skewed by the behavioural changes its measurement caused. The same consequence holds, for different reasons, across all industries. As HPE Software’s marketing VP Jeff Veis remarked: “Most metrics are backward looking - we often see companies creating dashboards that put them out of business.”
To really mess things up will always need people
To be able to benefit from information, therefore, requires a level of human savvy that doesn’t look like it is going to go away: to put it bluntly, we can be remarkably thick when it comes to how we use information, and no amount of algorithmic automation is going to change that. Does this mean all existing businesses are doomed, and that start-ups will mop up? Not necessarily: even the most disruptive startups are going for lower-hanging fruit, exploiting the fact that big business, bizarrely, can’t engage with its customers in any new way without spending aeons of time in meetings about it. Yes, it’s dumb, but that’s where we are.
Equally, while some such ripe pickings may generate such huge revenues that they can create global businesses out of a relatively tiny investment, they are a symptom of the times. Because of the exponential nature of complexity, the higher fruit (and to switch food-oriented analogies, the bread and butter of big business) may be way further up than such examples suggest. What this means in consequence is that people, not computers, will remain a significant factor long after technology has commoditised – and that hygiene factors in how we use information will differentiate successful organisations from the less successful. To be clear, when we all have the same tools, the least dumb will win.
Perhaps one day computers will exist that can make absolute sense of the pools of data we continue to generate. And yes, machine learning and even machine deciding will increasingly come into the picture. But, as Chuck Bear noted, “There’s no magic algorithm that, given garbage data, will give magical insights.” And we have plenty of the former right now, with more coming on stream all the time. “It’s very easy to get the simple stuff wrong; that will be true five years from now and in 500 years from now.”
So, yes, the shorter term will be more about augmenting our intelligence than replacing it, offering situational awareness and empowering people to make the right choices – there’s just too much in the mix for things to be otherwise. We are constantly processing more information than we ask machines to do, and we will continue to do so, even as we can stand on the shoulders of such giants. To think otherwise makes the worst assumption of all: not that computers can become as intelligent as us, but that they can prevent us from being stupid. That really would require some magical algorithm.