With technology advancing rapidly, companies need to work out how to piece together and integrate the components necessary to maximise their ability to produce disruptive insight. They need to define what they must produce to cause disruption, understand their data and analytical requirements, and then select the technology components needed to get started. It should be possible to start quickly in the cloud and then move on-premises if need be.
In addition, they need to produce disruptive insight productively, without major re-training to use new technologies like Hadoop, Spark and streaming analytics. To that end, if companies can use the tools they already rely on in traditional data warehousing to clean, integrate and analyse data in big data environments, then time to value will come down significantly. Tools should also exploit the scalability of the underlying hardware when data volumes and velocity are high, without users needing to know how that is done.
Today we are beyond the point where big data is in the prototype stage. We are entering an era where automation, integration and end-to-end solutions need to be built rapidly to facilitate disruption. Companies need to architect a platform for big data (and traditional data) analytics. Given this requirement, IBM would have to be a shortlist contender to help any organisation become a disrupter, whether in the cloud or on-premises.