Kubernetes is now one of the most popular open-source projects, backed by a community that supports it across multiple cloud vendors and distributions. Its widespread adoption gives it strong momentum.
Kubernetes was designed from the get-go for the cloud-native world, where software development and operations work together. Kubernetes’ main strength lies in its declarative infrastructure approach.
What is Declarative Infrastructure?
Kubernetes makes it possible for a user to specify the desired state of the infrastructure, and Kubernetes makes it happen.
Almost everything in Kubernetes is declarative, meaning it works backward from a desired state that describes how applications are composed, how they are managed, and how they interact with one another.
Its declarative nature enables a significant increase in operability and portability. It is also the basis of Kubernetes’ self-healing properties. As a result, Kubernetes provides a self-healing infrastructure across multiple cloud vendors with support for running containerized, highly available, scalable, and elastic applications.
Without its declarative approach, Kubernetes would be a complex system to operate in production with a significant barrier to entry.
“Instead of taking an imperative or procedural approach, Kubernetes relies on the notion of… taking a declarative approach to deploying and managing cloud infrastructure as well as applications. You declare your desired state without specifying the precise actions or steps for how to achieve it.”
The Benefits of a K8s Declarative Infrastructure
One benefit is that Kubernetes uses YAML files, which are easily human-readable and largely self-explanatory. Kubernetes interprets these files and figures out how to create the relevant resources in the cluster. It uses mechanisms like reconciliation loops to reach the intended end state, whereas previously this was a manual process that required human interaction.
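To make this concrete, here is a minimal Deployment manifest of the kind described above. The names are illustrative; what matters is that the file declares only the desired end state, and the Deployment controller's reconciliation loop drives the cluster toward it:

```yaml
# Desired state: three replicas of an nginx web server.
# Kubernetes' reconciliation loops compare this declaration against the
# cluster's actual state and create, delete, or restart Pods as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # the desired end state; no steps are specified
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```

Applying this file with `kubectl apply -f deployment.yaml` hands the desired state to Kubernetes. If a Pod crashes or a node disappears, the controller recreates the missing replicas without any human interaction.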
Another benefit of the declarative configuration approach is that Kubernetes users do not have to define (or even know) how Kubernetes runs and manages their applications. Kubernetes does not need to tell the user how the desired state was achieved; it only needs to ensure that the system reaches the desired end state.
What about the Kubernetes Storage Architecture?
Kubernetes was designed to be a high-performance orchestrator of computing resources, not as a low-level infrastructure for storage. The container orchestrator was never intended to provide persistent storage in the first place. As a result, Kubernetes is a great tool for running containers, but not for managing the storage of those containers.
The problem is that businesses run on data, and no business-critical application is truly stateless. Kubernetes only provides an interface that abstracts the implementation details of how resources are managed. It orchestrates resources, but storage lacks industry-wide standardization to an extent that makes it difficult to manage declaratively.
The ability to rapidly create containers, destroy them, and then recreate them somewhere else is one of the central benefits of containerization. However, pinning persistent data to containers would negate that benefit. It would prevent the fast failover, easy updates, lightweight provisioning, and other benefits that stem from containers’ ephemeral nature. You can reference a volume, but not its data. What if the volume is no longer available? What if the container needs to be moved elsewhere?
Most business-critical applications are not stateless, which makes running them on Kubernetes in production incredibly difficult. Kubernetes is designed for stateless workloads and lacks critical features required by stateful applications – including shared storage, replication and disaster recovery resource definitions, and self-healing for your data.
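Kubernetes' built-in answer to persistence is the PersistentVolumeClaim, which lets a workload declare a volume. But as noted above, you can reference a volume, not its data: the claim below (names illustrative) declares storage capacity and access mode, and nothing about the data's contents, replication, or recovery:

```yaml
# A PersistentVolumeClaim declares a request for storage capacity.
# Note what it does NOT declare: the data itself, its replication,
# or how it should be recovered if the volume becomes unavailable.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi     # capacity only; the volume's contents are opaque
```

A Pod then mounts the claim by name. If the underlying volume is lost, or the Pod must move to a cluster where that volume does not exist, the declaration above offers no help – which is exactly the gap this article describes.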
A Recent DoKC Survey Supports the Need for Declarative Data
A recent survey by the Data on Kubernetes Community addresses this issue exactly. According to the survey, a majority would like to see data (not volumes) become declarative, just like other Kubernetes resources, so organizations can more seamlessly react to data in real-time.
A 2:1 majority also believe that leveraging their real-time data is key to competitive advantage. The rise of real-time data is fueled by organizations’ desire to quickly react to actionable insights that drive customer satisfaction and boost revenue.
The next generation of Kubernetes operators should enable users to declare their application's data just as simply as they declare its container image: to specify which data, not which volume, should be used with each application, without going deep into configuring the storage itself.
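A custom resource along those lines might look like the sketch below. This is purely hypothetical – no such resource exists in core Kubernetes – but it illustrates the idea of referencing a named piece of data the way a Pod references an image:

```yaml
# Hypothetical sketch only — not an existing Kubernetes API.
# The idea: declare which data an application uses, not which volume.
apiVersion: example.io/v1alpha1   # illustrative API group
kind: DataClaim                   # illustrative kind
metadata:
  name: orders-data
spec:
  data: customer-orders           # which data, not which volume
  accessPolicy: read-write
```

Under such a model, where the data physically lives, how it is replicated, and how it fails over would be the platform's concern, just as scheduling Pods onto nodes is today.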
According to Melissa Logan, the DoKC director, in an interview with The New Stack:
“Kubernetes was not originally intended to run stateful workloads. Now, the early adopters saw some success, but there’s still a lot of challenges with it. And so the community was formed to bring people together around Kubernetes.”
Statehub Turns Data into an Always-Available, Declarative Resource
Kubernetes simplifies the process of deploying applications, but it does not manage storage well. This creates a need for data management tools that Kubernetes can use to interact with storage. We need tools that address the problem of persistent state at the global level, and that’s where Statehub comes in.
Statehub is a managed Kubernetes Data Fabric that ensures data availability across all public cloud regions, turning the data into an always available, declarative resource.
Thanks to Statehub’s innovative approach of decoupling data from its physical location, stateful applications can now enjoy the same user experience formerly available only to stateless containerized apps.
With Statehub’s App Warping™ technology, you can extend your applications’ state across any infrastructure and distance simply by registering your clusters with Statehub with a single command. Get started with our free tier today.