Why distributed processing helps data scale-out & speed-up

For CIOs working across the full spectrum of industry verticals, in small, medium-sized and large enterprises alike, there is an increased focus on data-driven business. So, as data needs to scale out and speed up, could distributed processing be a key enabling technology for the next wave of data change?


Data is on the move, figuratively, literally and architecturally. We are now building enterprise software systems with increasingly complex and distributed data channels that push data workflows around at an ever more rapid (often real-time) pace. Organisations are also moving their use of data outward, with some of that processing, analytics and storage happening locally, out on the ‘edge’, on the Internet of Things (IoT).

Technology analysts are fond of sticking their finger in the air to assess potential growth in this space. The magical eye-of-newt soothsayers at Gartner have estimated that some 75% of enterprise-generated data will be created and processed at the edge, outside a traditional centralised datacentre, by 2025.

Whether it turns out to be 66% by 2026 or 77% by 2027 matters less than acknowledging that data is becoming more distributed. That reality begs one core question: if data is becoming more distributed, shouldn't data processing become more distributed too?
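To make that question concrete, here is a minimal Python sketch, not drawn from the article itself; the node names, readings and summary fields are invented for illustration. Each edge node processes its own readings where they are generated and ships only a small summary to the centre, rather than streaming every raw record back to a central datacentre.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EdgeNode:
    # Hypothetical edge device holding its own locally generated readings.
    name: str
    readings: List[float]

    def local_summary(self) -> Dict[str, float]:
        # Process the data where it lives; only this small summary leaves the node.
        return {
            "count": float(len(self.readings)),
            "total": sum(self.readings),
            "minimum": min(self.readings),
            "maximum": max(self.readings),
        }

def merge_summaries(summaries: List[Dict[str, float]]) -> Dict[str, float]:
    # The central site combines per-node summaries instead of ingesting raw data.
    count = sum(s["count"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return {
        "count": count,
        "mean": total / count,
        "minimum": min(s["minimum"] for s in summaries),
        "maximum": max(s["maximum"] for s in summaries),
    }

if __name__ == "__main__":
    nodes = [
        EdgeNode("factory-sensor-a", [21.2, 22.8, 23.1]),
        EdgeNode("factory-sensor-b", [19.5, 20.1]),
        EdgeNode("retail-gateway-c", [24.0, 23.7, 22.9, 23.3]),
    ]
    print(merge_summaries([node.local_summary() for node in nodes]))

Because the per-node summaries can be merged in any order, the central site's workload grows with the number of nodes rather than with the volume of raw readings, which is the basic appeal of pushing processing out to where the data is created.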
