By Samantha Barnes, International Banker
As information technology (IT) continues to evolve at a rapid pace, its role in adding value to businesses around the world is coming more sharply into focus. And among the most hotly anticipated technologies that businesses and enterprises will look to for generating that value is edge computing.
IBM defines edge computing as “a distributed computing framework that brings enterprise applications closer to data sources such as IoT devices or local edge servers. This proximity to data at its source can deliver substantial business benefits: faster insights, improved response times and better bandwidth availability”. The term is derived from network-architecture diagrams that show the “edge” as the point at which traffic enters and leaves the network. It is at this peripheral point of the network that data is processed, rather than being sent all the way to and from a centralised server, as is the case with cloud-computing models.
Given that the edge-computing model involves computing near the location where data is being collected and analysed, as opposed to in the Cloud or a centralised server, the term is often used interchangeably with “fog computing” to describe a model in which data is processed near its source. “Fog computing”, however, is typically favoured by the OpenFog Consortium, whose members include Cisco Systems, Dell, Intel, Microsoft and Princeton University, whereas “edge computing” is used more commercially, especially in connection with the Internet of Things (IoT).
As data-centre firm vXchnge observed, fog computing “processes data via a single, powerful processing device, such as an IoT gateway or ‘fog node,’ located close to its source. It acts as a centralised, local source fed by multiple data points”. vXchnge noted that edge computing, by contrast, “extends the idea of localised processing to include the devices themselves on the network as well as a local data centre”. Instead of automatically sending all information to the fog node, therefore, devices in an edge-computing architecture “can determine what information should be stored and processed locally and what should be sent to a local node or the cloud for further use”.
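The routing behaviour vXchnge describes can be sketched in a few lines of code. The function and thresholds below are hypothetical, invented purely to illustrate how an edge device might decide what to keep local, what to escalate to a fog node and what to send to the cloud:

```python
# Hypothetical sketch of the decision logic vXchnge describes: each edge
# device inspects a reading and routes it accordingly. The field names and
# threshold are illustrative assumptions, not a real device API.

def route_reading(reading: dict, escalation_threshold: float = 0.8) -> str:
    """Decide where a sensor reading should be processed.

    Routine data is handled on the device itself; anomalous data is
    forwarded to a local fog node; periodic summaries go to the cloud.
    """
    if reading["anomaly_score"] >= escalation_threshold:
        return "fog-node"  # unusual event: escalate for local analysis
    if reading.get("summary", False):
        return "cloud"     # periodic summaries reach the central server
    return "device"        # routine data is processed and discarded locally

# A routine, low-anomaly sample never leaves the device.
print(route_reading({"anomaly_score": 0.1}))  # device
```

The point of the sketch is that the filtering happens before any network hop: only the small fraction of data that genuinely needs wider processing consumes bandwidth.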
Indeed, edge computing is often associated with IoT. As IoT devices take on increasingly powerful processes, much of the data they generate can be processed at the “edge” of the network, meaning it does not have to be continuously transmitted back and forth between the device and a centralised server. As such, edge computing is more efficient at managing the massive volumes of data IoT devices produce, offering the low latency and scalability that fast processing requires. And when combined with 5G, which expands the high-bandwidth and low-latency capabilities of wireless data transmission, there is much anticipation about what edge computing will be able to achieve. Such technology will enable edge-computing systems to improve speed dramatically and, ultimately, enhance their ability to support real-time applications.
So, what are the benefits of edge computing? Low latency is perhaps the most obvious one. By processing data at the source, edge computing can be critically important in helping real-time applications operate without delays or downtime. This matters all the more given how quickly the world is being connected through IoT devices, many of which serve applications reliant on real-time computing power. The need for a solution that doesn’t require such devices to send data to and receive data from the Cloud thus becomes increasingly pressing. And with such devices often handling substantial volumes of data, localised solutions become essential. For instance, thousands of security cameras monitoring public spaces across a sprawling city will produce vast quantities of data that, if continuously transmitted back and forth to a centralised server, will likely suffer latency issues due to the enormous bandwidth required. Instead, localised processing and data storage through edge gateways means that much less data needs to be sent back to the server or cloud, or, if real-time applications require it, to the relevant edge-enabled device, such as a security camera.
Indeed, any application that relies on real-time decision-making will find edge computing’s faster data-processing capabilities critical in ensuring continuous service. Traders, for example, need to frequently make real-time trading decisions within volatile financial markets, and they must ensure there are no lags in data computations or risk facing substantial monetary losses. And in some examples, smart systems must respond to data immediately without any lag times as a matter of life or death. Self-driving cars come to mind as they depend on the copious data they receive from devices in their vicinities that constantly track their surrounding environments. Such vehicles can’t allow for even the slightest delays in data computing, as they could cost drivers and passengers their lives.
Indeed, latency remains the biggest challenge for IoT devices to overcome. Cloud computing is simply deemed too slow at present to support any serious real-time activity, such as financial-market trading, autonomous-vehicle driving and traffic routing. But edge computing provides perhaps the best solution to this challenge by relocating the data-processing functionality to the edge of the network, thus allowing edge-enabled devices to gather and process data in real time.
When combined with edge data centres, which are usually smaller facilities located close to the edge of the network, edge computing’s processing power is further enhanced, with processors positioned in those data centres much closer to the actual appliances used and processes being undertaken. Indeed, the edge of any network provides scope for placing not only data centres but also servers, gateways, processors and storage facilities. This reduces the distance that data needs to travel to be processed, which can greatly reduce latency. In turn, it can add significant value to applications that require speed and scalability. And substantially reducing processing time opens up a whole new world for real-time analytics to flourish.
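A back-of-the-envelope calculation (not from the article) illustrates why the distance data travels sets a hard floor on latency. Light in optical fibre covers roughly 200 kilometres per millisecond, about two-thirds of its speed in a vacuum, so propagation delay alone scales with distance before any processing or queueing is added:

```python
# Illustrative propagation-delay arithmetic. The distances are assumed
# examples; the ~200 km/ms figure is the approximate speed of light in fibre.

SPEED_IN_FIBRE_KM_PER_MS = 200  # light travels ~200 km per millisecond in fibre

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay, in milliseconds, over fibre."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

# A centralised data centre 2,000 km away vs a hypothetical edge site 10 km away:
print(round_trip_ms(2000))  # 20.0 ms before any processing even begins
print(round_trip_ms(10))    # 0.1 ms
```

Real-world latency is higher still once routing, queueing and processing are included, which is precisely why shortening the physical path matters for real-time applications.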
And with shorter distances to travel come significantly easier, more cost-effective maintenance requirements. Edge data centres can be significantly smaller than their centralised counterparts and are thus much more portable and flexible in terms of where they can be located. As such, maintenance teams no longer need to travel long distances to reach them; servicing can be carried out close by.
Cooling is also a concern when processing large datasets, with the electricity needed to cool data centres representing an especially high cost. A data centre’s efficiency can be measured by its power usage effectiveness (PUE): the ratio of the total energy the facility consumes to the energy consumed by its IT equipment alone, with an ideal value of 1.0. Utilising smaller data centres (with some near the edge of the network) may end up being substantially more energy-efficient than using one big centre. And with fewer electricity demands come more environmental benefits. Whether smaller data facilities distributed over a larger area consume less power than one large centralised facility, however, remains to be seen. But the more accurate and efficient the programming and computations are that take place at the edge of the network, the less wasteful the overall operation will be.
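The PUE metric reduces to a single division, which a worked example makes concrete. The energy figures below are illustrative assumptions, not measurements from any real facility:

```python
# Worked example of power usage effectiveness (PUE): total facility energy
# divided by the energy consumed by IT equipment alone. A value of 1.0 would
# mean every watt goes to computing; everything above that is overhead such
# as cooling. The kWh figures are invented for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# A large centralised facility: 1,600 kWh total for 1,000 kWh of IT load.
print(pue(1600, 1000))  # 1.6

# A smaller, hypothetically more efficient edge site: 1,200 kWh for the same IT load.
print(pue(1200, 1000))  # 1.2
```

Comparing PUE across one large facility and several small edge sites is one way to test the article’s open question of whether distributed facilities actually consume less power overall.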
Ultimately, edge computing will become a leading enabler of digital transformations in businesses. By establishing autonomous systems, companies can not only boost their productivity levels but also free up staff to focus on higher-end tasks. Stratus Technologies, a producer of fault-tolerant computer servers and software, identifies four key reasons why edge computing will be so important going forward:
- It will power the next industrial revolution, transforming manufacturing and services;
- It optimizes data capture and analysis at the edge to create actionable business intelligence;
- It creates a flexible, scalable, secure and more automated technology, systems and core business-process environment;
- It promotes an agile business ecosystem that is more efficient, performs faster, saves costs and is easier to manage and maintain.
As Stratus acknowledges, edge computing “creates new and improved ways for industrial and enterprise-level businesses to maximize operational efficiency, improve performance and safety, automate all core business processes, and ensure ‘always on’ availability”.
The IDC Data Age 2025 report “The Digitization of the World: From Edge to Core” predicts that by 2025, 175 zettabytes (or 175 trillion gigabytes) of data will be generated around the globe, of which edge devices will create more than 90 zettabytes. And according to Gartner, 91 percent of today’s data is created and processed in centralised data centres, but by 2022, around 75 percent of all data will need analysis and action at the edge, which underlines how crucial edge computing has already become and how much potential it will have over the coming years.