To deliver five-9’s high availability in the public cloud, most cloud providers offer SLAs with a 99.999% uptime guarantee, combined with redundant configurations that span multiple CSP regions. These measures are meant to increase customer confidence that availability will be adequate.
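As a back-of-the-envelope sanity check (not part of any provider's SLA wording), it helps to translate uptime percentages into the downtime they actually permit per year:

```python
# Allowed downtime per year implied by common SLA uptime guarantees.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

for uptime in (0.999, 0.9999, 0.99999):
    downtime_s = SECONDS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.3%} uptime -> ~{downtime_s / 60:.1f} minutes of downtime per year")
```

Five-9’s (99.999%) allows only about 5.3 minutes of downtime per year, versus roughly 53 minutes at 99.99% and almost nine hours at 99.9%, which is why the exact SLA figure matters so much.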

For many applications, this arrangement is good enough. But when latency, availability, and control are critical, enterprises are still hesitant to move their workloads to the cloud. And for good reason: the cloud still has availability and assurance problems.


The Public Cloud Was Not Tailored to Your Needs


When control, latency, and reliability come before convenience, public clouds cannot compete with on-prem and hybrid solutions that clients have tailored to their own needs.

Very often, critical operational benchmarks cannot be met in public clouds. This might mean inconsistent application performance, high network latency due to congestion, or concerns about data security.

“While cloud computing provides a strong enabling platform for these next-gen technologies because it provides the necessary scale, storage, and processing power, edge computing environments will be needed to overcome limitations in latency and the demand for more local processing.”
Rami Radi, Senior Application Engineer of Intel® Data Center Management Solutions


When it Comes to SLAs


One of the reasons enterprises prefer not to trust the public cloud with critical applications is that the definitions of “downtime” and “unavailable” in SLAs exclude many of the reasons applications fail: for example, issues with the customer’s software and any third-party software, or planned hardware and software maintenance. Failures attributed to “human error” are also typically excluded.

While it is reasonable for Cloud Service Providers (CSPs) to exclude some causes of failure, businesses that require a high level of data consistency cannot afford to accept these exclusions as excuses for why an application is down.

This makes it necessary to ensure availability by other means, such as building complex failover clustering configurations, a burden that leads some companies to give up on moving to the public cloud altogether.


What is Cloud Repatriation?


The “cloud-first” approach has been a dominant mantra in computing for quite a few years now. After all, the cloud offers a host of advantages over traditional data centers, including savings on capital expenditures, improved time to market, and the flexibility to adjust provisioning.

But enterprises are starting to realize that the public cloud cannot always deliver on its promises when it comes to performance and availability, and sometimes it simply isn’t a good fit for their needs.

Organizations are increasingly moving workloads back to traditional data centers, private clouds, and hybrid environments. This shift of workloads from the public cloud to local infrastructure is known as cloud repatriation, and the trend is picking up speed. An IDC survey found that 80% of respondents had moved cloud workloads on-premises or to a private cloud solution within the past year.


Why are Companies Shifting Workloads Back to On-Premise Resources?


Need for Tailored Solutions

Organizations that expected the public cloud to be the easy answer to all of their problems now realize that it is not always suited to their requirements.

For many companies, a private cloud or private infrastructure environment better suits their needs. A recent survey by Yankee Group reported that 67% of respondents favored private cloud services, whereas only 28% wanted a fully managed public cloud. 


Meeting the Operational Performance Benchmarks

Failing to meet critical operational benchmarks is a sign that applications may perform better in a private environment. Inconsistent application performance, high network latency, and concerns about data security are all major reasons to stay away from public clouds.

“Applications that are latency-sensitive, have long-running I/O intensive periods or have datasets that are large and require transport between various locations for processing are generally prime candidates for repatriation.”
Jeff Slapp, Vice President of Cloud and Managed Services at 365 Data Centers


Control and Security

The shared nature of the public cloud leaves most of the control up to the provider. In some cases, this is not acceptable due to regulatory and security concerns or the company’s internal policies. 

Private solutions are designed to put the control back into the hands of the user, and the organization has full power to configure and manage its resources without the restrictions of a multi-tenant solution.


Cutting Costs

The public cloud is not always the more cost-efficient option. Some companies, such as Dropbox, have chosen to migrate from the public cloud to a hybrid environment to benefit their bottom line, saving nearly $75 million in operating expenses over a two-year period.

New Belgium Brewing recently migrated its core applications from an off-premises managed cloud to an on-premises colocation facility.

“ROI for cloud diminishes with a hyperconverged stack, as maintenance is simplified.”
Travis Morrison, Director of IT at New Belgium Brewing


Can Public Clouds Reach the Same Level of Availability as On-Prem SLAs?


With the right technology, it is possible to achieve five-9’s high availability for applications running in public cloud environments with CSPs such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

How Does it Work?

While on-premises and hybrid resources are planned around where customers and employees actually are, the availability zones of a public cloud region are fixed by the cloud vendor.

That geographical distance comes with speed-of-light limitations: when most of a deployment’s data sits hundreds of miles away, latency and performance suffer.
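A rough sketch makes the physics concrete. Assuming light travels at about two-thirds of its vacuum speed in optical fiber (the figures below are illustrative, not measurements of any particular provider's network), the round-trip propagation delay alone grows linearly with distance:

```python
# Rough round-trip propagation delay over optical fiber (illustrative only).
SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 0.66             # light in fiber travels at roughly 2/3 of c

def rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds, propagation delay only."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for km in (100, 500, 1000):
    print(f"{km:>5} km -> ~{rtt_ms(km):.2f} ms round trip (propagation only)")
```

Even before queuing, routing, and processing overhead, a deployment 500 km away costs roughly 5 ms per round trip, and chatty, I/O-intensive applications pay that cost on every exchange.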

For the public cloud to reach the availability levels of on-prem SLAs, this geographical distance must be made irrelevant.

Content delivery networks (CDNs) provide a model of how these higher levels of availability can be achieved. A CDN reduces content-delivery latency by placing its own servers close to the audience.

The same can be achieved with public clouds. The approach CDNs take to solving content latency is very similar to the way we solved the public cloud latency problem at Statehub.


Statehub Global Data Business Continuity Service  


Statehub delivers five-9’s high availability as well as robust disaster recovery protection in public clouds. To reach a higher level of assurance in the public cloud, Statehub created a globally available service that lets organizations bridge the distance between data centers and achieve a level of availability comparable to on-prem environments.

Statehub is purpose-built to provide a complete high availability and disaster recovery solution for any application running across public clouds. It eliminates potential single points of failure, giving enterprises the ability to work in the public cloud while achieving zero RPO (Recovery Point Objective) and minimizing RTO (Recovery Time Objective).

By using Statehub, companies gain immediate access to enterprise-grade data resiliency and zero-RPO replication without compromising on SLAs, solving the latency and availability problems of the public cloud.