New to KubeDB? Please start here.
Don’t know how to take continuous backups? Check the tutorial on Continuous Archiving.
KubeDB supports PostgreSQL database initialization. When you create a new Postgres object, you can provide existing WAL files to restore from by “replaying” the log entries.
At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using minikube.
Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps here.
To keep things isolated, this tutorial uses a separate namespace called demo throughout.
$ kubectl create ns demo
namespace "demo" created
$ kubectl get ns demo
NAME STATUS AGE
demo Active 5s
Note: YAML files used in this tutorial are stored in the docs/examples/postgres folder of the GitHub repository kubedb/cli.
You can create a new database from archived WAL files using wal-g. Specify the storage backend in the spec.init.postgresWAL field of a new Postgres object. See the example Postgres object below:
apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: replay-postgres
  namespace: demo
spec:
  version: "9.6"
  replicas: 2
  databaseSecret:
    secretName: wal-postgres-auth
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 50Mi
  archiver:
    storage:
      storageSecretName: s3-secret
      s3:
        bucket: kubedb
  init:
    postgresWAL:
      storageSecretName: s3-secret
      s3:
        bucket: kubedb
        prefix: 'kubedb/demo/wal-postgres/archive'
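The manifest above references a Secret named s3-secret for both archiving and initialization, but does not show how to create it. A minimal sketch is given below; the key names AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are an assumption based on the standard AWS credential environment variables read by wal-g — consult the Continuous Archiving tutorial for the exact keys your setup expects.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
  namespace: demo
type: Opaque
stringData:
  # Assumed key names, matching the standard AWS credential
  # environment variables that wal-g reads:
  AWS_ACCESS_KEY_ID: "<your-access-key>"
  AWS_SECRET_ACCESS_KEY: "<your-secret-key>"
```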
Here,

- spec.init.postgresWAL specifies the storage information that will be used by wal-g.
- storageSecretName points to the Secret containing the credentials for the cloud storage destination.
- s3.bucket points to the bucket name used to store continuous archiving data.
- s3.prefix points to the path where archived WAL data is stored.

wal-g retrieves archived WAL data from a folder called /kubedb/{namespace}/{postgres-name}/archive/. Here, {namespace} & {postgres-name} indicate the Postgres object whose archived WAL data will be replayed.

Note: Postgres replay-postgres must have the same postgres superuser password as Postgres wal-postgres.
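To make the prefix convention concrete, the archive path that spec.init.postgresWAL should point at can be derived from the source Postgres object's namespace and name. A small sketch, assuming the kubedb/{namespace}/{postgres-name}/archive layout described above:

```shell
# Derive the WAL archive prefix for a source Postgres object, following
# the kubedb/{namespace}/{postgres-name}/archive convention.
namespace=demo
postgres_name=wal-postgres
prefix="kubedb/${namespace}/${postgres_name}/archive"
echo "$prefix"   # kubedb/demo/wal-postgres/archive
```

This matches the prefix used in the example manifest, which replays WAL data archived by the Postgres object wal-postgres in the demo namespace.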
Now, create this Postgres object:
$ kubedb create -f https://raw.githubusercontent.com/kubedb/cli/0.8.0/docs/examples/postgres/initialization/replay-postgres.yaml
postgres "replay-postgres" created
This will create a new database and restore it from the existing base backup and archived WAL files. Once this database is ready, wal-g takes a base backup of it and uploads it to the cloud storage defined by the storage backend in spec.archiver.
To cleanup the Kubernetes resources created by this tutorial, run:
$ kubectl patch -n demo pg/replay-postgres -p '{"spec":{"doNotPause":false}}' --type="merge"
$ kubectl delete -n demo pg/replay-postgres
$ kubectl patch -n demo drmn/replay-postgres -p '{"spec":{"wipeOut":true}}' --type="merge"
$ kubectl delete -n demo drmn/replay-postgres
$ kubectl delete ns demo