The simplest definition of Edge Computing is that it brings both storage and compute resources nearer to where data is generated. At first, this description belies the magnitude of what the technology really means for how we gather, process, store and use data. Data holds the key to digital transformation, as well as to the success of 5G and many of the other technologies featured in tech trends lists for 2021 and beyond.
To extract its immense value, data has to be analysed, used and stored effectively, but this becomes a greater challenge each year: as data becomes increasingly important, its sheer volume makes it harder to harness and use. This is where Edge has the potential to unlock digital transformation for organisations across the globe.
Traditionally, IT infrastructure requires data to be transmitted over large distances, journeys that often mean traversing numerous network connections along the way. This process demands not only a lot of hardware but also a lot of expensive bandwidth, presenting major cost and efficiency challenges for small and medium-sized businesses in particular. By bringing storage and compute power to the source of the data, Edge reduces the need for expensive hardware and bandwidth, while simultaneously providing the user on the ground with real-time data insights, delivered more reliably and with higher availability.
With the scene set to explore this revolutionary technology, we can now look at some of the foundational building blocks in its history through to the present day, delve into the details of how the technology works, and conclude with a spotlight on enterprise applications and the future.
Monolithic origins
In the early days of mainframe computing, the systems organisations adopted were vast and extremely expensive. The large organisations capable of spending hundreds of thousands of dollars on one needed to dedicate entire rooms to housing it, while carrying out all network computing via a physical datacentre. Systems required staff to carry out manual data entry, processing power was low, and customisation was very limited.
These systems nonetheless marked the beginning of data being captured more effectively at the user level, which, along with the cheaper minicomputers emerging at the time, began paving the way for more rapid development. In the 1970s the origins of the desktop computer as we know it were appearing, and the most important component in this step change was the microprocessor. The Fairchild Semiconductor Corporation and Texas Instruments Incorporated led the charge toward the microprocessor's development, the culmination of their work manufacturing integrated circuits.
Servers and mobility
Microprocessors began changing the way the world saw and understood computing. Household names of today, the likes of Hewlett-Packard and Dell, were helping to launch accessible systems built on this technology. This important step made computing far less exclusive and slow-moving, allowing smaller businesses to access the technology and individuals to use it at home. By the 1980s, computing was becoming truly user-centric, and applications like spreadsheets were in their infancy.
These breakthroughs were pivotal, but organisations still relied on network servers connected to datacentres for storage and processing, especially for larger projects. This was slow, inefficient and inflexible: the beginnings of the modern problem that Edge Computing has evolved to tackle.
The dawn of decentralised computing
We have looked at the key historical building blocks that led up to the arrival of Edge, but the technology itself originated in the 1990s, when Akamai launched its first content delivery network, built on the idea of bringing compute nodes closer to the user's geographical location. The purpose of this innovation was to deliver visual content such as videos and image files.
In 2001, this idea was developed and solidified by the computer scientist Mahadev Satyanarayanan in a paper titled “Pervasive Computing: Vision and Challenges.” The concept made way for peer-to-peer overlay networks, which enabled more efficient routing that avoided long-distance network links where possible during transmission. The benefits were twofold: it reduced application latency, while also reducing the load that networks had to deal with.
The cloud conundrum
In 2006, Amazon first launched its “Elastic Compute Cloud”, marking a milestone in the availability of storage capacity, virtualisation and computation, all central to what Edge Computing brings together today.
The arrival of cloud technology ticked some important boxes, including accessibility for mobile devices and remote storage and processing. While cloud may at first have seemed like a silver bullet for the challenge of computing, high fees made the likes of AWS and Azure imperfect options for many small and medium-sized organisations.
In addition, cloud alone could not cater for the already growing need for real-time decision-making. This key requirement meant that local data processing would be essential; cloud could not solve data management on its own, yet it remains no less integral to what Edge has become.
Present day
Statista predicts that by 2025 there will be 21.5 billion connected IoT devices in operation worldwide. In data terms, that equates to tens of additional zettabytes to process, manage and store, making a decentralised solution like Edge Computing absolutely crucial to many industries.
Unlike some technology trends, Edge is neither nascent nor vapourware. In 2020, Hewlett Packard Enterprise (HPE) invested $4 billion in its Edge network, while the likes of Microsoft, Amazon and Alphabet were busy incorporating the technology into their existing IoT platforms.
When discussing Edge in the present day, it would be remiss not to mention 5G, whose success is sure to be linked directly to the technology. Ericsson predicts that by 2023, 5G will carry one-fifth of all mobile traffic, with 25% of use cases depending on Edge. The majority of new 5G revenue, meanwhile, is predicted to come from enterprise and IoT services. IBM is another name focussing on the interplay of 5G and Edge, offering solutions that leverage these technologies to automate operations and enhance safety.
Why Edge is so special
With Edge Computing, organisations are no longer restricted to pushing information to a datacentre or directing it to the cloud; data can instead be processed in real time and made immediately available to the user on the ground. For heavily data-intensive projects, hybrid Edge Computing can be the more appropriate option, incorporating a local datacentre into the solution to help manage the load.
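As a rough sketch of this division of labour, assuming a simulated sensor and an invented sixty-reading window, the following Python snippet processes raw readings on-site and prepares only a compact summary for any upstream datacentre or cloud:

```python
import random
import statistics

def read_sensor() -> float:
    """Stand-in for a real device driver: one simulated temperature reading."""
    return 20.0 + random.gauss(0, 1.5)

def summarise_at_edge(window: int = 60) -> dict:
    """Process the raw readings on-site; only this summary leaves the edge."""
    readings = [read_sensor() for _ in range(window)]
    return {
        "n": window,
        "mean": round(statistics.mean(readings), 2),
        "min": round(min(readings), 2),
        "max": round(max(readings), 2),
    }

if __name__ == "__main__":
    summary = summarise_at_edge()
    # In a hybrid deployment, this compact summary (not the raw stream)
    # would be forwarded to a local datacentre or the cloud.
    print(summary)
```

The design choice is simply that the heavy, latency-sensitive work happens where the data is born, and only the far smaller result pays the bandwidth cost of travelling upstream.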
Part of the efficiency of Edge technology lies in its use of software-defined storage, which provides a much greater degree of freedom in managing both the way data is stored and the way it is retrieved. Being restricted to physical storage hardware prevents users from storing and using their data freely, drastically limiting how it can be applied and the results that can be achieved.
From a reliability perspective, Edge brings peace of mind by removing the single points of failure present in traditional systems, while also replicating data and distributing it across multiple locations. The benefit is that a single local outage cannot destabilise an entire organisation. Taken together, these characteristics give the positive disruption that Edge Computing promises three prongs: improving on traditional approaches to managing data in terms of cost, reliability and availability.
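A toy sketch of that replication idea, assuming three stand-in storage sites (local directories here, where a real system would write to separate network nodes):

```python
import json
from pathlib import Path

# Hypothetical stand-ins for storage nodes at separate sites; in a real
# system these would be network targets rather than local directories.
REPLICAS = [Path("site_a"), Path("site_b"), Path("site_c")]

def replicated_write(key: str, record: dict) -> int:
    """Write one record to every reachable replica and count the successes.

    Because copies live in several places, a single local outage can
    neither lose the data nor destabilise the wider system.
    """
    successes = 0
    for site in REPLICAS:
        try:
            site.mkdir(exist_ok=True)
            (site / f"{key}.json").write_text(json.dumps(record))
            successes += 1
        except OSError:
            continue  # one unreachable site must not block the others
    return successes

if __name__ == "__main__":
    ok = replicated_write("reading-0001", {"temp_c": 21.4})
    print(f"stored on {ok} of {len(REPLICAS)} replicas")
```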
A spotlight on enterprise applications
Farmers and growers represent a key enterprise segment where Edge Computing has the power to be revolutionary. By bringing compute power and data storage nearer to connected sensors in the field, workers gain data insights in real time and can use them to improve processes drastically. With insights into things like ground moisture and soil conditions, growers can fine-tune irrigation systems to achieve greater yields and higher produce quality.
Safety can also be improved by Edge: oil refinery workers, for example, can monitor critical conditions like pipeline pressure, ensuring that potentially dangerous situations are dealt with as rapidly and effectively as possible. Energy grid control and analytics are other prime examples, whereby two-way communication can be established between distributor and consumer to stay ahead of serious outages, while also making maintenance more direct and efficient.
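The pattern underlying both the irrigation and the pipeline examples is threshold checking performed at the edge, so that an alarm can fire in milliseconds rather than after a round trip to a distant datacentre. A minimal sketch, with an invented limit purely for illustration:

```python
from typing import Optional

PRESSURE_LIMIT_KPA = 800.0  # illustrative threshold, not a real refinery spec

def check_pressure(reading_kpa: float) -> Optional[str]:
    """Evaluate a reading on-site so the alert path never leaves the edge."""
    if reading_kpa > PRESSURE_LIMIT_KPA:
        return (f"ALERT: pressure {reading_kpa} kPa exceeds "
                f"limit of {PRESSURE_LIMIT_KPA} kPa")
    return None

if __name__ == "__main__":
    for sample_kpa in (750.0, 812.5):
        alert = check_pressure(sample_kpa)
        if alert:
            print(alert)  # a real system would also trip local safeguards
```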
Remote monitoring powered by Edge Computing is also proving highly beneficial across a number of industries, enabling staff in control centres to remain in constant, real-time contact with workers at remote sites in sectors like mining and transport. While we have touched on some of the applications in action right now, countless disruptive applications for Edge lie in the future, from traffic management systems to the development of autonomous vehicle technology.