Whether it’s applications that live in the cloud or the proliferation of video, companies rely more than ever on fast, reliable network connections to keep employees productive. But applications can suffer when those connections aren’t as speedy as they should be, often due not to a lack of bandwidth but to excessive network latency. In many cases, a good solution is to push more computing power and content closer to the users who consume it, a concept known as edge computing.
The latency problem stems from the way the Internet is designed to operate and the protocols that drive it, particularly the Border Gateway Protocol (BGP). BGP helps make the Internet good at surviving outages by being able to route around any problems. But it’s not so good at accounting for how long any given packet will take to reach its destination. That’s because all BGP cares about is the number of hops between the source and destination addresses – the route with the fewest hops wins, no matter how long or congested it may be.
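A toy sketch can make the difference concrete. The routes and latency figures below are invented for illustration; the point is simply that hop-count-based selection, as in BGP, can pick a slower path than a latency-aware choice would.

```python
# Hypothetical illustration: hop-count-based path selection (BGP-style)
# can prefer a route that is actually slower end to end.
# Hop counts and latencies are invented example data.

routes = [
    {"path": ["NYC", "LON"], "hops": 2, "latency_ms": 250},        # fewest hops, congested link
    {"path": ["NYC", "TOR", "LON"], "hops": 3, "latency_ms": 90},  # more hops, faster links
]

# BGP-style choice: fewest hops wins, regardless of delay.
bgp_choice = min(routes, key=lambda r: r["hops"])

# Latency-aware choice: lowest measured delay wins.
latency_aware_choice = min(routes, key=lambda r: r["latency_ms"])

print("Fewest hops:", bgp_choice["path"], "->", bgp_choice["latency_ms"], "ms")
print("Lowest latency:", latency_aware_choice["path"], "->", latency_aware_choice["latency_ms"], "ms")
```

Here the two-hop route wins under hop counting even though it is nearly three times slower, which is exactly the behavior that lets latency creep into long-haul traffic.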
As a result, some traffic may suffer excessive delays, or latency. The issue is particularly acute for delay-sensitive traffic such as voice and video because it can cause images to freeze, and voices to drop out – a problem we’ve all likely experienced more times than we care to remember.
Edge computing can help alleviate the problem. In this instance, the concept may take one of two forms.
One solution is to implement a series of computers all around the Internet to cache content so it’s served up closer to the users who are consuming it. That’s essentially what content delivery network (CDN) providers such as Akamai do.
Consider a multinational corporation headquartered in New York. It has users around the world who routinely need to access corporate data, often downloading large files, streaming stored video webcasts and the like. Users located in or near its headquarters will likely have no problem accessing whatever they need. But the further the users are from that headquarters location, the more latency will be introduced and the longer it will take them to access and download files.
To remedy the situation, the company could sign on for a service from a CDN that has caching devices near where its end users are located. The first time a far-flung user requests access to a webcast, for example, it will be downloaded to a caching device near that user. The next time any user in the same or nearby location wants that same webcast, it’ll come from the caching device, with far less latency than that first download.
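The cache-miss-then-hit pattern just described can be sketched in a few lines. This is not a real CDN API; the class and method names (`EdgeCache`, `fetch_from_origin`) are illustrative stand-ins for the behavior of a caching device near the user.

```python
# Hypothetical sketch of edge caching: the first request for a piece of
# content travels all the way to the origin; later requests from the same
# region are served from the local cache with far less latency.

class EdgeCache:
    def __init__(self):
        self.store = {}        # content held at this edge location
        self.origin_fetches = 0

    def fetch_from_origin(self, content_id):
        # Stand-in for a slow, high-latency request back to headquarters.
        self.origin_fetches += 1
        return f"bytes-of-{content_id}"

    def get(self, content_id):
        if content_id not in self.store:           # cache miss: first request
            self.store[content_id] = self.fetch_from_origin(content_id)
        return self.store[content_id]              # cache hit: served locally

tokyo_edge = EdgeCache()
tokyo_edge.get("q3-webcast")   # first far-flung user: fetched from origin
tokyo_edge.get("q3-webcast")   # every later nearby user: served from the edge
print(tokyo_edge.origin_fetches)  # prints 1 -- only one trip to the origin
```

However many nearby users request the same webcast, the long-haul trip happens once; everyone after the first user is served from the edge.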
Another option is for the multinational company to build a series of small data centers around the globe, close to where it has large concentrations of employees. It could then replicate any critical and delay-sensitive applications at those data centers, such as unified communications suites that support voice, instant messaging and video. The end result is much the same as in the CDN example. Users connect to the data center closest to their location and experience far better performance due to dramatically reduced latency.
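The "connect to the closest data center" step can be sketched as a simple lowest-round-trip-time selection. The site names and latency figures below are invented for the example; real deployments typically use DNS-based or anycast steering rather than client-side measurement.

```python
# Hypothetical sketch: a client in Europe picks the replicated data center
# with the lowest measured round-trip time. RTT values are example data.

datacenter_rtt_ms = {
    "new-york": 180,
    "frankfurt": 35,
    "singapore": 210,
}

def nearest_datacenter(rtt_table):
    # Choose the site with the lowest round-trip time from this client.
    return min(rtt_table, key=rtt_table.get)

print(nearest_datacenter(datacenter_rtt_ms))  # prints "frankfurt" for this client
```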
To learn more about the concept of edge computing and how it can help your organization deal with latency issues, download Schneider Electric’s free white paper number 226, “The Drivers and Benefits of Edge Computing.”
The post Why Latency is the Enemy and How Edge Computing Can Combat It appeared first on Schneider Electric Blog.