The following is a contributed article by Anjan Srinivas, senior director of product management at Nutanix
As estimates of global storage scale towards zettabyte territory, the term Big Data is becoming increasingly irrelevant: all data is now big data. Likewise, if “big” is taken to mean complicated, the exponential growth in unstructured data has driven complexity (and the processing power needed to cope with it) to levels hard to imagine even a few years ago.
Storage and processing concerns, however, are far from the only issues. When it comes to the Big Data revolution, we need to appreciate that it’s not just volume and complexity that are revolutionary. It’s our growing ability to understand and make use of this massive resource which is proving the most disruptive, transforming our whole approach to IT.
Size isn’t everything
Look closely and you’ll find that the real driving force behind the Big Data revolution lies in advances in our ability to combine advanced statistical models and new compute technologies to better understand and benefit from data resources. More specifically, the ability to generate rules, or algorithms, to look for patterns in data and solve problems in a fraction of the time and effort required using conventional computing methods.
Just as apps have revolutionised the way we, as humans, interact with computers, now algorithms are facilitating a quantum leap in machine learning, enabling us to do something really useful with our big data resources.
Want proof? Algorithms are what drive Google’s search engine. Algorithms are what power Netflix recommendations, voice-driven assistants, driverless cars, next-day deliveries, high-speed trading and an ever-growing number of services and technologies that we already take for granted.
Crucially, it’s not just the data that’s of value in these examples, or the amount being processed, vast though it may be. It’s the intelligence that algorithms are able to provide that matters, enabling machines to make sense of the data and learn how to use it. Moreover, it’s this value which companies will be looking to monetise, with Gartner predicting that the “algorithm economy” will be the next big thing in big data, as algorithms come to be developed, traded and exploited just like mobile apps over the coming years.
Coping with disruption
The inevitable flipside to the algorithm economy will be even greater pressure on data storage and processing resources, with CIOs increasingly turning to public cloud services to soak up this demand. Many organisations will, however, prefer to keep their algorithms, as well as business-critical data, behind the corporate firewall. It’s here that, paradoxically, algorithms could have their biggest impact, by empowering the enterprise to dip in and out of the public cloud on its own terms.
Think of it as “physician heal thyself”. While the rise of the algorithm economy will compound data storage and processing issues, algorithms will also be essential when it comes to solving those issues by leveraging big data insights to better manage the Big Data resources they come from.
We’re already seeing this start to happen, most notably in the form of machine learning tools able to automatically balance storage demands and compute workloads across a mix of public and private platforms. Balancing these workloads in response to subtle changes in demand profiles is out of the question for human operators; it has to be done by machines. So, if you haven’t already, expect to become more familiar with algorithms and machine learning as essential to the ongoing digital transformation process. The algorithms are coming, and to an infrastructure near you, sooner than you might think.
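To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of placement decision such a tool automates: sensitive, business-critical workloads stay behind the firewall, while the rest burst to public cloud once private capacity runs out. All names, thresholds and the policy itself are hypothetical assumptions for illustration, not any vendor’s actual implementation (real tools use far richer, learned demand models).

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_demand: float   # hypothetical: normalised share of private capacity
    sensitive: bool     # business-critical data stays behind the firewall

def place_workloads(workloads, private_capacity=1.0):
    """Toy policy: pin sensitive workloads to the private platform,
    then fill remaining private capacity and burst the rest to public."""
    placement = {}
    used = 0.0
    # Sort so sensitive workloads are considered (and pinned) first.
    for w in sorted(workloads, key=lambda w: not w.sensitive):
        if w.sensitive or used + w.cpu_demand <= private_capacity:
            placement[w.name] = "private"
            used += w.cpu_demand
        else:
            placement[w.name] = "public"
    return placement

demand = [
    Workload("payments-db", 0.5, sensitive=True),
    Workload("analytics", 0.4, sensitive=False),
    Workload("batch-render", 0.3, sensitive=False),
]
print(place_workloads(demand))
```

In this toy run the payments database is pinned private, analytics fits in the remaining private capacity, and the render job overflows to public cloud; a production system would re-run such a decision continuously as demand profiles shift.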