In 2006 at an industry event, then-Google CEO Eric Schmidt introduced the term “cloud computing”, which by the simplest definition means storing and accessing data and programs over the internet instead of on your computer’s hard drive. Today, many enterprises and governments have adopted a “cloud-first” strategy, mainly because subscribing to a service provider that offers premium services is more practical than building and managing their own data centers.
It started with what were considered large data centers that served massive numbers of people in multiple countries. These data centers were strategically located far outside major city limits to avoid network congestion (it is easier to get on and off long-haul fiber without going through city network hops). Next, in the quest for better performance, smaller versions of cloud data center solutions migrated to the regional edge (mainly in large cities), which improved service with faster speeds and reduced latency for the dedicated, local city area they served.
Introducing the Tethered Cloud
Fast forward to 2020, when cloud-first users are demanding even better performance in the form of speed, availability, and capacity. In response, cloud providers have been developing plans to extend the centralized data center cloud stacks all the way to the “local edge” (very close to the user, maybe even on-site). This would increase speed, lower costs, and allow businesses to keep data within their own four walls, giving them greater control over that information and complying with data regulations where applicable. The goal for the local edge versions is to use the same tools, application programming interfaces or APIs, hardware, and functionality across their local edge cloud and the central clouds. Businesses would then be able to run similar IT infrastructure on-premises and offer “like for like” services.
This new mini cloud stack, or “tethered cloud”, is the goal, but it’s taking much longer than I and others initially believed it would. While most enterprises have some form of hybrid cloud strategy – or an implementation without a strategy – many of these deployments dance around the edges (yes, pun intended), picking easy projects with limited functionality that are simple and fast but still have decent ROI. The journey toward the “like for like” tethered cloud has begun.
Examples of tethered clouds include:
- Microsoft was first to market with a tethered cloud, Azure Stack. Microsoft sells the software, while certified IT partners provide the physical solutions – servers, switches, enclosures, and UPSs – and also install and manage them. An example is the solution that Schneider Electric has put together with HPE.
- Amazon announced a slightly different business model with AWS Outposts in December 2019. AWS sells the hardware and service stack. Customers have the option to deploy and manage it themselves, or they can pay AWS to have a certified partner do it for them. It provides the same services, tools, and APIs as the full AWS cloud.
- Google is the newest entry with Google Anthos, a more open system that’s an extension of the Google Cloud Platform. It provides a platform based on software containers and Google Kubernetes Engine (GKE) that enables organizations to develop workloads that can be deployed on-premises in tethered clouds or in the Google central cloud.
The benefits at the local edge
The most basic hybrid cloud means all of your applications or services are on the same network but your IT and software sit in different locations; some of your applications run locally and some run in the cloud. When you start integrating the data plane – how data is transmitted and stored – you get what we call a loosely tied hybrid cloud: the local edge applications and the central core share the same data plane. The next level of integration is the tightly tied hybrid cloud, which is integrated at the control plane. The control plane governs the data plane, applying the routing protocols, rules, and policies used by every device or node in the network to generate or respond to data traffic. This is essentially reproducing the central cloud services locally.
However, that will most likely not cover all of your cloud services. When every cloud service is the same – the exact same experience everywhere and from any device or node on the network – that is what we call a true distributed local edge tethered hybrid cloud. The added benefit is redundancy: should you lose the connection to the central cloud or to the local tethered cloud, the other tier can keep serving.
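Because a “like for like” tethered cloud exposes the same services and APIs at every tier, the redundancy described above can be as simple as retrying the identical request against another endpoint. The sketch below is a minimal illustration of that idea, not any vendor’s API; the endpoint names and the `probe` check are invented for the example.

```python
# Hypothetical sketch of failover between a local tethered cloud and
# the central cloud. Because the tiers are "like for like", the same
# request works against either endpoint; only the target changes.

LOCAL_EDGE = "https://edge.example.internal"   # assumed local edge endpoint
CENTRAL = "https://central.example.com"        # assumed central cloud endpoint

class ServiceClient:
    def __init__(self, endpoints):
        # Preferred order: try the local edge first for low latency.
        self.endpoints = endpoints

    def call(self, probe):
        """Try each tier in order; 'probe' stands in for a real
        reachability check (e.g., a health-check request)."""
        for endpoint in self.endpoints:
            if probe(endpoint):
                return f"served by {endpoint}"
        raise ConnectionError("no tier reachable")

# Simulate losing the local edge: only the central cloud responds.
client = ServiceClient([LOCAL_EDGE, CENTRAL])
result = client.call(lambda ep: ep == CENTRAL)
print(result)  # served by https://central.example.com
```

The same client, with the probe succeeding locally, would be served from the edge – the application code never changes, which is the point of the tethered model.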
These local edge tethered clouds will bring benefits to enterprise users like:
- Scale IT Elastically – bursting to the central core cloud when needed
- Reduce Latency – especially valuable for time-sensitive applications
- Raise Redundancy – “like-for-like” services that mirror those in the central cloud
- Address Geopolitical Issues – censorship, security, privacy, and data sovereignty
- Simplify – leverage the cloud provider to design, architect, distribute, manage, and update the services
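The first benefit, elastic bursting, can be sketched with a simple placement rule: fill the local tethered cloud until its capacity is exhausted, then overflow to the central core cloud. The scheduler logic, workload names, and capacity units below are all assumptions for illustration, not a real scheduler.

```python
# Minimal sketch (assumed greedy logic, not any vendor's scheduler) of
# "bursting": keep workloads on the local tethered cloud while capacity
# remains, and overflow the rest to the central core cloud.

def place_workloads(workloads, local_capacity):
    """Return (local, burst) placement lists by greedy capacity fill."""
    local, burst = [], []
    used = 0
    for name, demand in workloads:
        if used + demand <= local_capacity:
            local.append(name)        # fits on the local edge
            used += demand
        else:
            burst.append(name)        # overflow to the central cloud
    return local, burst

# Hypothetical jobs with capacity demands in arbitrary units.
jobs = [("analytics", 4), ("video-cache", 3), ("ml-training", 6)]
local, burst = place_workloads(jobs, local_capacity=8)
print(local)  # ['analytics', 'video-cache']
print(burst)  # ['ml-training']
```

A real deployment would make this decision continuously and per service, but the shape is the same: latency-sensitive work stays local, and the central cloud absorbs the peaks.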
Gaining more control over applications and data
Despite the need for the local edge, centralized cloud services will still play an important role: they remain vital for provisioning, managing, and updating software and services on the tethered clouds. Different organizations face different challenges in their cloud strategies, so there is no one-size-fits-all answer. Tethered clouds, however, give an organization options to reduce latency, add redundancy, or take more control over its applications and data.
The future is local edge tethered clouds and it’s coming to a data center near you.
The post Coming to a Data Center Near You – Distributed, Hybrid, Tethered Edge Cloud appeared first on Schneider Electric Blog.