The evolution of the Internet and how we use it sometimes leads to renewed discussion of topics that are actually not new. An example of this is edge computing. The first steps in this direction were taken as early as the late 1990s with the Akamai content delivery network for applications such as location-based delivery of advertising. The basic idea of edge computing is still the same as the underlying principle back then. In 2014, Cisco revitalized this technology under the name fog computing, while IBM and Microsoft continue to use the term "edge computing" to refer to the same ideas.
To get an idea of what edge computing means, you first have to consider the network architecture. In the typical architecture, a single data center sits at the middle of the network. The edges are the places where data enters the network. This data is sent directly from the edges to the data center, which requires corresponding bandwidth and can incur latency before results are available. A current example of this architecture is cloud computing, with the cloud forming the center to which all data is sent.
Edge computing, by contrast, is based on distributed data processing. It can be regarded as an extension of this architecture, which means there is still a large data center in the network. There are various approaches to implementing edge computing. One is to make the sensors smart enough to evaluate time-critical data directly and to send the data center only the data needed for other purposes, such as archiving or big data analysis. Another is to place several small data centers at various points, as close as possible to the edge, in order to shorten the transmission paths; here too, the data is filtered so that only what is actually needed for other purposes is passed on. A combination of these two approaches is also possible.
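The first approach, smart sensors that act on time-critical readings locally and forward only a reduced record to the center, can be sketched roughly as follows. All names, fields, and the threshold here are hypothetical, chosen only to illustrate the split between local reaction and forwarded data:

```python
# Sketch: edge-side filtering of sensor readings (hypothetical setup).
# Time-critical conditions trigger an immediate local action; only a
# slimmed-down record is forwarded to the central data center.

TEMP_ALARM_THRESHOLD = 80.0  # hypothetical limit for local intervention


def handle_reading(reading):
    """Decide locally what to do with one sensor reading.

    Returns (local_action, forward): local_action is an immediate
    response taken at the edge (or None), forward is the reduced
    record to send on for archiving or big data analysis.
    """
    if reading["temp"] > TEMP_ALARM_THRESHOLD:
        # Time-critical: react at the edge, no network round trip needed.
        local_action = "shutdown"
    else:
        local_action = None
    # Forward only the fields the center actually needs; the bulky
    # raw measurement stays at the edge.
    forward = {"sensor": reading["sensor"], "temp": reading["temp"]}
    return local_action, forward


readings = [
    {"sensor": "s1", "temp": 21.5, "raw": [0.1] * 1000},
    {"sensor": "s1", "temp": 85.2, "raw": [0.2] * 1000},
]
results = [handle_reading(r) for r in readings]
```

The point of the sketch is the division of labor: the latency-sensitive decision never leaves the device, while the data center still receives everything it needs, minus the bulk.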
Due to the rise of the Internet of Things, many more devices, and many more kinds of devices, are now integrated into networks. One thing that smartphones, fitness trackers, refrigerators, parking control systems and smart thermostats have in common is that they are equipped with sensors and collect data that has to be processed. In practice, the slight delays that arise from sending this data are acceptable in all of these cases, because no real-time decisions depend on it. Edge computing is therefore not necessarily advantageous here.
In the Industry 4.0 environment, by contrast, things are different. For example, robots on a production line must always know how to act in the current situation. Driverless cars are perhaps an even better example: in an emergency, decisions have to be made in a fraction of a second, which means that data processing must be carried out as fast as possible. Here the shorter transmission paths of edge computing pay off.
In addition, multiple processing units improve scalability, because new units can be brought online or existing ones taken offline at any time. Difficulties can arise, however, when data volumes fluctuate strongly and capacities have to be adjusted to match actual demand, or when such variations were not anticipated. Another advantage of decentralization is greater resistance to DDoS attacks: attackers cannot cripple the entire network by targeting a single data center. Depending on the network architecture, the outage of one data center has little or no effect.
In addition, data passes through several nodes on its way to the center, and threats such as viruses can be detected and eliminated at these nodes before they reach the data center. With multiple data centers and numerous devices, however, it also takes more effort to keep everything secure. In the worst case, individual units become easier to attack if the same security precautions cannot be applied throughout the network or security updates are not rolled out everywhere at the same time.