New to KubeDB? Please start here.
KubeDB has native support for monitoring via Prometheus. You can use the builtin Prometheus scraper or the Prometheus operator to monitor KubeDB managed databases. This tutorial will show you how database monitoring works with KubeDB and how to configure the database crd to enable monitoring.
KubeDB uses Prometheus exporter images to export Prometheus metrics for the respective databases. The following diagram shows the logical flow of database monitoring with KubeDB.
When a user creates a database crd with the `spec.monitor` section configured, the KubeDB operator provisions the respective database and injects an exporter image as a sidecar into the database pod. It also creates a dedicated stats service named `{database-crd-name}-stats` for monitoring. The Prometheus server can scrape metrics using this stats service.
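As a rough illustration only, the generated stats service might look similar to the following sketch. The database name `sample-mysql`, the port number, and the selector label are all hypothetical; the actual object is created by the operator, so inspect it with `kubectl get service` rather than relying on this shape.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-mysql-stats        # follows the {database-crd-name}-stats pattern
  namespace: demo
spec:
  ports:
    - name: metrics
      port: 56790                 # hypothetical exporter port
      targetPort: metrics
  selector:
    app.kubernetes.io/instance: sample-mysql   # illustrative selector for the database pods
```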
In order to enable monitoring for a database, you have to configure the `spec.monitor` section. KubeDB provides the following options to configure the `spec.monitor` section:
| Field | Type | Uses |
| --- | --- | --- |
| `spec.monitor.agent` | Required | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
| `spec.monitor.prometheus.exporter.port` | Optional | Port number where the exporter sidecar will serve metrics. |
| `spec.monitor.prometheus.exporter.args` | Optional | Arguments to pass to the exporter sidecar. |
| `spec.monitor.prometheus.exporter.env` | Optional | List of environment variables to set in the exporter sidecar container. |
| `spec.monitor.prometheus.exporter.resources` | Optional | Resources required by the exporter sidecar container. |
| `spec.monitor.prometheus.exporter.securityContext` | Optional | Security options the exporter should run with. |
| `spec.monitor.prometheus.serviceMonitor.labels` | Optional | Labels for the ServiceMonitor crd. |
| `spec.monitor.prometheus.serviceMonitor.interval` | Optional | Interval at which metrics should be scraped. |
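To see how the optional exporter fields from the table fit together, here is a hedged sketch of a `spec.monitor` section that sets all of them. The specific values (the port number, the `--web.listen-address` flag, and the `EXPORTER_LOG_LEVEL` variable) are illustrative assumptions, not documented defaults:

```yaml
spec:
  monitor:
    agent: prometheus.io/operator
    prometheus:
      exporter:
        port: 56790                       # illustrative metrics port
        args:
          - --web.listen-address=:56790   # hypothetical exporter flag
        env:
          - name: EXPORTER_LOG_LEVEL      # hypothetical variable
            value: info
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          runAsNonRoot: true
      serviceMonitor:
        labels:
          release: prometheus
        interval: 10s
```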
A sample YAML for a MySQL crd with the `spec.monitor` section configured to enable monitoring with the Prometheus operator is shown below.
```yaml
apiVersion: kubedb.com/v1alpha2
kind: MySQL
metadata:
  name: prom-operator-mysql
  namespace: demo
spec:
  version: "8.0.35"
  terminationPolicy: WipeOut
  storage:
    storageClassName: "standard"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  monitor:
    agent: prometheus.io/operator
    prometheus:
      serviceMonitor:
        labels:
          release: prometheus
        interval: 10s
```
Assume that the above MySQL instance is configured to use basic authentication. In that case, the exporter also needs the password to collect metrics, which can be passed through the `spec.monitor.prometheus.exporter.args` field.
Here, we have specified that we are going to monitor this server using the Prometheus operator through `spec.monitor.agent: prometheus.io/operator`. KubeDB will create a `ServiceMonitor` crd in the `monitoring` namespace, and this `ServiceMonitor` will have the `release: prometheus` label.
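For reference, the `ServiceMonitor` that KubeDB generates might look roughly like the following sketch. The object name, port name, and selector labels are illustrative assumptions; inspect the actual object with `kubectl get servicemonitor` to see what the operator produced.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prom-operator-mysql-stats          # illustrative name
  namespace: monitoring                    # per the text above
  labels:
    release: prometheus                    # copied from spec.monitor.prometheus.serviceMonitor.labels
spec:
  endpoints:
    - port: metrics                        # illustrative port name on the stats service
      interval: 10s                        # from spec.monitor.prometheus.serviceMonitor.interval
  selector:
    matchLabels:
      app.kubernetes.io/instance: prom-operator-mysql   # illustrative selector
  namespaceSelector:
    matchNames:
      - demo
```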