Replicating a database across regions raises many issues due to the high latency between the physical locations. It drastically increases both the complexity of the MongoDB deployment and the probability of failures.

In addition, egress traffic charges and the need to keep MongoDB instances running on the secondary side make cross-region replication very expensive. With Statehub, you can simply move your state and data to different locations with no additional complexity and at a much more predictable cost.

This guide walks you through deploying MongoDB with the MongoDB Community Operator on a Kubernetes cluster, using Statehub as a data service that enables cross-region and multi-cloud continuity.

This means that in the event of a failure of your cluster, or of the entire cloud region where it is deployed, you'll be able to start your MongoDB processes, with all of their latest data intact, on another cluster in a different location in just a few seconds.


Deploying MongoDB on K8s with Statehub

Before You Begin

Before you begin, it might be useful to familiarize yourself with Statehub concepts such as Clusters, States, Volumes, and the Default State.


The following is necessary to use Statehub with your Kubernetes clusters:

  1. A Statehub account. Sign up here
  2. A UNIX-compatible command-line interface (Linux or Mac terminal, WSL or CygWin for Windows)
  3. The kubectl command-line utility to control your clusters

Initial Setup

This guide assumes you have two Kubernetes clusters in two distinct cloud locations, similar to the topology below:

[Diagram: initial topology — two Kubernetes clusters in distinct cloud locations]

Step 1: Set Up the Statehub CLI

Go to Get-Statehub to Download and Install the Statehub CLI

Setting up your Kubernetes clusters to use Statehub requires the Statehub CLI. After installation is complete, copy the provided link into your browser and go to the Statehub login page.

If you don’t already have a Statehub account, you can create one on the Statehub login page.

Once you’ve logged in to your Statehub account, you should be automatically redirected to the tokens page and prompted to create a token for your CLI. Click “Yes” to create a token. Copy the token to the CLI prompt and press Enter.

Your Statehub CLI installation should now be configured.

Step 2: Register Your Clusters with Statehub

Register the Clusters Using the Statehub CLI

The following command registers the cluster associated with your current context:

kubectl config use-context my-cluster
statehub register-cluster

To register another cluster, switch to its context and register it:

kubectl config use-context another-cluster
statehub register-cluster

For further information about the cluster registration process, see Statehub – Cluster Registration.

💡 Please Note:
In certain scenarios, you might not yet have a second cluster at another location. If you want your data replicated to another location, with a cluster created on demand when needed, follow the procedure to add a location to a state.

What We’ve Done

To use Statehub with your Kubernetes clusters, register them using the Statehub CLI. The Statehub CLI makes use of your Kubernetes configuration contexts to identify your clusters and is aware of the current context.

To register another cluster, switch to the appropriate context and run the same command. You need to register all the clusters between which you want to move your stateful applications.
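For example, registering several clusters can be sketched as a loop over your kubeconfig contexts (the context names below are the ones used in this guide; substitute your own):

```shell
# Register each cluster context with Statehub in turn.
for ctx in my-cluster another-cluster; do
  kubectl config use-context "$ctx"
  statehub register-cluster
done
```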

This operation will:

  1. Make Statehub aware of your clusters
  2. Identify your clusters’ location(s) and report them to Statehub
  3. Generate a cluster token for your cluster and save it in your cluster’s secrets, so that your cluster can access the Statehub REST API.
  4. Install the Statehub Controller components and the Statehub CSI driver on your cluster
  5. Add the cluster’s location(s) to the default state, if the default state doesn’t span this location yet
  6. Configure the default state’s corresponding storage class as the default storage class.

Once you’ve registered all of your clusters, you should be able to start stateful applications and fail them over between your clusters.

At this point, your topology will be as follows:

[Diagram: topology after both clusters are registered with Statehub]


Step 3: Configure and Apply Your Cluster Components

Install the MongoDB Community Operator

💡 Please note:
This guide walks you through deploying the operator in the same namespace as your other MongoDB resources. If you wish to deploy the operator in a different namespace, follow this guide instead: Installing the Operator in a Different Namespace

Clone the operator repository:

git clone <>

Install the needed Custom Resource Definitions (CRDs):

kubectl apply -f config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml

Verify the installation:

kubectl get crd/

Install the roles and the role-bindings:

kubectl apply -k config/rbac/ --namespace <my-namespace>

Verify resource creation:

kubectl get role mongodb-kubernetes-operator --namespace <my-namespace>

kubectl get rolebinding mongodb-kubernetes-operator --namespace <my-namespace>

kubectl get serviceaccount mongodb-kubernetes-operator --namespace <my-namespace>

Install the operator:

kubectl create -f config/manager/manager.yaml --namespace <my-namespace>

Verify the operator installation:

kubectl get pods --namespace <my-namespace>

If you wish to upgrade the operator to the latest version, follow this guide: Operator Upgrading

Deploy a Replica Set

In our example, we launch our replica set with a StatefulSet that provisions the logs volumes. This creates PVCs (persistent volume claims) with Statehub as the storage provisioner, which keeps your data persistent, attached to specific volumes, and consistent across the different regions/clouds associated with your Statehub organization.

In config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml, edit the following two items:

  1. Change <your-password-here> to the password you wish to use.
  2. Add a statefulSet section so that a PVC is created; make sure you configure a minimum storage size of 4Gi.

An example of a modified mongodb.com_v1_mongodbcommunity_cr.yaml:
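Since the full modified file isn't reproduced here, the following is a minimal sketch of what it might look like; the MongoDB version, user, secret, and volume names are illustrative assumptions based on the upstream sample, not a verbatim copy:

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"               # illustrative; use the version you need
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef:
        name: my-user-password   # secret holding the password you chose
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
      scramCredentialsSecretName: my-scram
  # statefulSet section added so PVCs are created with at least 4Gi
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 4Gi
        - metadata:
            name: logs-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 4Gi
```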

💡 Please Note:

In our example, we launch a replica set that has 3 members. Make sure your cluster has enough nodes available, or reduce the number of replicas. The minimum storage size supported by the Statehub volume provisioner is 4Gi.

Apply the YAML to deploy the replica set:

kubectl apply -f config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml --namespace <my-namespace>

Verify that the resources were launched properly (may take a few minutes):

kubectl get mongodbcommunity --namespace <my-namespace>

What We’ve Done

As soon as Kubernetes creates a PVC with a Statehub-provided storage class (i.e., the default storage class configured during registration), Statehub creates a volume on the state corresponding to that storage class, replicated between all of the locations the state spans.

Since the owner of the state is our primary cluster, only it is allowed to access the volumes and create new volumes on the state.

Connecting to your Resources from Outside the Cluster

The MongoDB Community Kubernetes Operator creates secrets that contain the user’s connection strings and credentials.

The secrets follow this naming convention: <metadata.name>-<auth-db>-<username>

In our example:

  • metadata.name = example-mongodb
  • auth-db = admin
  • username = my-user
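Putting those values together, the secret name can be composed as follows (the values are the ones used in this guide):

```shell
# Compose the secret name from the operator's naming convention
name="example-mongodb"; auth_db="admin"; username="my-user"
echo "${name}-${auth_db}-${username}"
# → example-mongodb-admin-my-user
```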

Retrieve the connection strings and credentials using the following command:

kubectl get secret example-mongodb-admin-my-user -o json | jq -r '.data | with_entries(.value |= @base64d)'

💡 Please Note:

This command requires jq version 1.6 or higher. For installation, see jq – command-line JSON processor.
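To see what the jq filter does, here it is run on a hypothetical secret payload (the base64 values are illustrative, not real credentials): `with_entries` rewrites every key/value pair in the secret's `.data` map, and `@base64d` (jq ≥ 1.6) decodes each value.

```shell
# Decode every base64 value in a secret-like .data map
echo '{"data":{"username":"bXktdXNlcg==","password":"YmVzdHBhc3N3b3Jk"}}' \
  | jq '.data | with_entries(.value |= @base64d)'
```

This prints the decoded map, here with username my-user and password bestpassword.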


The response should include the username, the password configured in the YAML, and a connection string.

In order to access MongoDB from outside the cluster, launch a standalone mongo shell pod and connect to the replica set using the provided connection string.

Invoke the following:

kubectl run mongoshell -it --rm --restart=Never --image=mongo -- bash

Inside the pod’s command prompt, use the connection string provided earlier (make sure to enclose it in quotes):

mongosh "mongodb+srv://my-user:bestpassword@example-mongodb-svc.default.svc.cluster.local/admin?ssl=false"

You are now able to access your MongoDB resources and manage your DBs and users.

💡 Please Note:
The user created in this guide has the userAdmin and clusterAdmin roles. In order to write data to DBs, another user must be created and granted the readWrite role.

Step 4: Creating a readWrite User

If you wish to add data to your DB and test the failover between locations through Statehub (Step 5 of this guide), you can create a user authorized to write to a certain DB in your MongoDB environment.

In our example, we create a database called products and insert a document into it. In the command prompt, switch to the new database:

use products

This command creates a new DB called products. However, this DB is not listed by the show dbs command until data is inserted into it.

Create a new user (accountAdmin01) and grant it roles and a password (in our case, 123456):

db.createUser( { user: "accountAdmin01",
                 pwd: passwordPrompt(),  // Or  "<cleartext password>"
                 customData: { employeeId: 12345 },
                 roles: [ { role: "clusterAdmin", db: "admin" },
                          { role: "readAnyDatabase", db: "admin" },
                          "readWrite"] },
               { w: "majority" , wtimeout: 5000 } )

After applying this command, terminate the mongoshell pod using this command (outside the pod’s command prompt):

kubectl delete pod mongoshell

Re-launch the mongo shell pod, but this time connect to the MongoDB resources using an edited connection string to authenticate as the newly created user:

kubectl run mongoshell -it --rm --restart=Never --image=mongo -- bash

Inside the command prompt, invoke the following command:

mongosh "mongodb+srv://accountAdmin01:123456@example-mongodb-svc.default.svc.cluster.local/products?ssl=false"

Insert a document into the products DB:

db.products.insertOne({"name":"tutorials point"})

Verify that a new DB (products) was created:

show dbs

You now have new data ready to be tested for a switch over between locations using Statehub.

After starting your application, your topology will resemble this:

[Diagram: topology after deploying the MongoDB replica set]


Step 5: Failing Over Between Clusters in Different Locations

When you need to switch over your application to another cluster, set the other cluster as the owner of the state and apply the configuration to it.

[Diagram: topology after transferring state ownership to the other cluster]


Optionally, in the case of a planned switchover, remove the application from the cluster you’re moving away from. The data will not be affected:

kubectl delete service/example-mongodb-svc
kubectl delete statefulset.apps/example-mongodb
kubectl delete pod mongoshell

To start your application on the other cluster:

1. Choose the cluster on which you want your application to run and switch to its context:

kubectl config use-context another-cluster

2. Make sure this cluster is the owner of the default state by running the following command:

statehub set-owner default another-cluster

3. Apply your stateful application configuration by running through Step 3 of this guide.
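The three steps above can be sketched as a single sequence (the context, state, and file names are the ones used earlier in this guide; substitute your own namespace):

```shell
# 1. Switch to the target cluster's context
kubectl config use-context another-cluster
# 2. Transfer ownership of the default state to it
statehub set-owner default another-cluster
# 3. Re-apply the stateful application configuration (Step 3)
kubectl apply -f config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml --namespace my-namespace
```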


[Diagram: topology after failing over to the other cluster]



While failures can happen at any time, Statehub’s features offer a simple solution for preventing data loss and managing stateful data and applications.

In this guide, we went through the basic steps of deploying a MongoDB environment on Kubernetes with cross-region or multi-cloud business continuity using Statehub, giving you the freedom to run stateful MongoDB databases without worrying about data loss in case of a failure or a need for change.

As always, feel free to ping us back with any questions at