Kubernetes Popularity is on the Rise


Kubernetes is the de-facto standard for container orchestration and management. According to the latest 2020 CNCF survey, 91% of respondents report using Kubernetes, 83% of them in production. This continues a steady increase from 78% the year before and 58% in 2018, and adoption shows no sign of slowing.

The meteoric rise in Kubernetes production deployments is a testament to its capabilities. The main attraction of Kubernetes is that it allows developers to deploy applications quickly and stay focused on writing new code.

However, it is important to be aware of its shortcomings too. One of the most common pitfalls when adopting Kubernetes for the first time is underestimating how complex it is.

Numerous studies have also pointed out the growing pains associated with running production-grade Kubernetes deployments. In practice, this means we will have to manage many more Kubernetes clusters going forward, and deal with the overwhelming management complexity they bring with them.


The Advantages of Kubernetes


Kubernetes removes the time-consuming, laborious manual work from container management by automating the deployment and distribution of application services, resource allocation, application network configuration, and even load balancing across a distributed infrastructure.

As stated by Kelsey Hightower on his personal Kubernetes journey:

“I don’t want 10 pages of instructions. The benefit of having Kubernetes is that as a person making a product, your installation guide has the potential to be super, super streamlined, meaning you already know what Kubernetes can do. If I have a valid Kubernetes cluster, you can check for that immediately, and if you find it, you can almost do a zero configuration.”


Amplifying Resources


One of the advantages of Kubernetes is that lean organizations can deploy and maintain substantial infrastructure with fewer people. As a result, small teams can efficiently manage significant amounts of infrastructure while maintaining operational velocity.



Thanks to a layer of abstraction at the infrastructure level, Kubernetes may be deployed in any environment. This boosts scalability and makes collaboration and decision-making easier for groups that operate across several different platforms and infrastructure types, from public cloud to on-prem. As a result, teams can focus on developing applications rather than maintaining the infrastructure.




Kubernetes also includes built-in resilience mechanisms such as high availability, automatic failover, and the capacity to self-heal by decommissioning, replacing, and launching new containers and services.


Extensive Documentation


Since Kubernetes is open source, there is a wealth of information available, including online documentation, books, training courses, and community support. It also integrates easily with a growing number of tools in the ecosystem and offers developer-friendly APIs.

With so many advantages, it’s inevitable to ask – is Kubernetes the silver bullet we’ve been waiting for?


#1 Kubernetes Challenge – Complexity


Although the orchestration platform is well-known for its capabilities that simplify development, the implementation and administration of the platform itself can get quite complex.

According to the 2020 CNCF survey, complexity is the most pressing issue, followed by cultural changes and deployment team development difficulties.

Even Google acknowledges the complexity of Kubernetes, as Drew Bradstock, product lead for Google Kubernetes Engine (GKE), puts it:

“Despite 6 years of progress, Kubernetes is still incredibly complex. What we’ve seen in the past year or so is a lot of enterprises are embracing Kubernetes, but then they run headlong into the difficulty.”


Why is Kubernetes Complex to Use and Deploy?


Complexity is a multi-faceted issue, and the way it manifests itself depends on how you choose to use the platform. In some cases, it’s easy to underestimate how difficult it can be to implement Kubernetes in complex environments.

Kubernetes “out-of-the-box” is simply a cluster with a set of nodes to run containerized applications. Any advanced configuration that would fit your specific business and application needs takes a long time to properly set up.

Troubleshooting is also time consuming and complex, especially for less experienced DevOps teams. It’s easy to go down a rabbit hole of figuring out what each setting does on your own.
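When something does go wrong, a handful of kubectl commands cover most first-pass diagnostics. A minimal sketch, assuming a live cluster; the namespace and pod names are hypothetical placeholders:

```shell
# List pods and their status in the application namespace
kubectl get pods -n my-app

# Inspect a misbehaving pod: recent events, restart counts, probe failures
kubectl describe pod my-app-7d4b9c -n my-app

# Read logs from the current container, and from the previous one if it crashed
kubectl logs my-app-7d4b9c -n my-app
kubectl logs my-app-7d4b9c -n my-app --previous

# Namespace events, oldest first, often reveal scheduling or image-pull issues
kubectl get events -n my-app --sort-by=.metadata.creationTimestamp
```

Building a shared runbook around checks like these helps less experienced team members stay out of that rabbit hole.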

Flexibility further adds to complexity: there are several methods to use and implement Kubernetes, with many choices of managed Kubernetes providers, or a complete DIY approach. Flexibility is especially a challenge for large, distributed organizations with numerous teams using the same resources. The wide range of options and techniques that can be used to manage the platform can quickly get overwhelming.

A major challenge is the time and effort required to manage, monitor, update, or upgrade to new releases. Kubernetes itself has no built-in process for upgrades (although some managed offerings might have solutions). Upgrade tasks are delegated to user tools, which need to be efficient to avoid bottlenecks and downtime during critical operations.
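For DIY clusters, one common upgrade flow (by no means the only one) uses kubeadm and kubectl to move through the cluster one node at a time. A sketch, assuming a kubeadm-managed cluster; the node name and version are hypothetical:

```shell
# On the control plane: review what an upgrade would change, then apply it
kubeadm upgrade plan
kubeadm upgrade apply v1.22.0

# For each worker node: evict workloads before touching the kubelet
kubectl drain worker-node-1 --ignore-daemonsets

# On the drained node: upgrade the kubelet package and restart it
# (package commands vary by distribution)

# Mark the node schedulable again
kubectl uncordon worker-node-1
```

Each of these steps is an opportunity for downtime if performed out of order, which is exactly the operational burden managed offerings take on.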

When properly configured, automation saves time, but incorrectly configured automation can quickly become unmanageable and cause applications to malfunction or perform poorly.

One of Kubernetes’ main advantages might also be its biggest drawback. Users don’t write commands or instructions telling Kubernetes what to do. Instead, they describe their desired state, and Kubernetes determines how to get there.

If the desired state requires four containers with a specific amount of allocated memory, Kubernetes will launch them and then monitor them. If one fails, it will spin up another to take its place. Self-healing keeps applications up and running, but it can also mask developing issues.
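That desired-state model can be made concrete with a minimal Deployment manifest; the names, image, and memory sizes below are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4            # Kubernetes keeps exactly four pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:1.0
          resources:
            requests:
              memory: "256Mi"   # scheduler only places pods where this fits
            limits:
              memory: "512Mi"   # container is killed if it exceeds this
```

Nothing in this file says how to reach that state; Kubernetes reconciles toward it continuously, replacing any pod that dies.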

While everything may appear to be in order, the application might be throwing errors every hour. Without insight into the code, Kubernetes components can degrade without anyone noticing.


What Are Some Best Practices to Handle Kubernetes Complexity?


Look Under the Hood


Despite all the hoopla, Kubernetes isn’t a mystical cure-all. It’s a collection of programs and features linked together, and understanding the individual components can be quite beneficial.


Map it Out


Companies can reduce complexity by bringing teams together to map out all applications; where they are, how they operate, how they connect, and how to migrate them. Determine who oversees management and, most importantly, who is responsible for what when things go wrong.


Develop In-House Documentation


When teams discuss and agree on how to implement and utilize Kubernetes as well as how to deal with problems that may arise, they can more easily manage complexity.


Consider Using a Managed Service


Managing a distributed infrastructure with different clouds and multiple hosting providers on Kubernetes, while possible, quickly becomes unmanageably complex.

How much spare time and resources do you have to devote to Kubernetes?

Managing and troubleshooting Kubernetes require significant domain knowledge, which is scarce in most organizations. Using a managed service offers more flexibility, along with access to additional tools and resources, on-site support, monitoring, troubleshooting, and maintenance.

Statehub simplifies the world of Kubernetes storage, allowing for application mobility out of the box. It shortens time to market by providing a ready-to-use production environment with all necessary components to guarantee mobility and resilience while protecting your applications from data gravity and vendor dependency risks.

We designed Statehub with the objective of making resiliency and flexibility a reality for stateful Kubernetes apps. It achieves this by providing simple yet scalable storage that can be deployed across multiple clouds and regions in a single click.