New to KubeDB? Please start here.
An arbiter is a member of a MongoDB replica set. It does not hold a copy of the data set and cannot become primary. In some circumstances (such as when you have a primary and a secondary, but cost constraints prohibit adding another secondary), you may choose to add an arbiter to your replica set. Arbiters exist to add a vote in elections for primary: an arbiter always has exactly 1 election vote, which allows a replica set to have an uneven number of voting members without the overhead of an additional member that replicates data. By default, an arbiter is a priority-0 member.
For example, in the following replica set with two data-bearing members (the primary and a secondary), an arbiter gives the set an odd number of votes to break ties:
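KubeDB's operator configures the replica set for you, but purely to illustrate the topology, here is a minimal mongosh sketch that initiates such a primary-secondary-arbiter (PSA) set. The replica set name and hostnames are hypothetical placeholders:

```javascript
// Hypothetical three-member PSA replica set: two data-bearing members
// plus one arbiter. Hostnames are placeholders.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-0.example.com:27017" }, // data-bearing, eligible primary
    { _id: 1, host: "mongo-1.example.com:27017" }, // data-bearing secondary
    // The arbiter votes in elections but stores no data; since MongoDB 3.6
    // it always runs with priority 0.
    { _id: 2, host: "mongo-arb.example.com:27017", arbiterOnly: true }
  ]
});
```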
There are some important considerations that database administrators should take into account when deploying MongoDB with arbiters.
Starting in MongoDB 3.6, arbiters have priority 0. When you upgrade a replica set to MongoDB 3.6, if the existing configuration has an arbiter with priority 1, MongoDB 3.6 reconfigures the arbiter to have priority 0.
IMPORTANT: Do not run an arbiter on a system that also hosts the primary or a secondary member of the replica set. [reference].
If you are using a three-member primary-secondary-arbiter (PSA) architecture, consider the following:
The write concern “majority” can cause performance issues if a secondary is unavailable or lagging. See Mitigate Performance Issues with PSA Replica Set to mitigate these issues; a sketch of the reconfiguration follows this list.
If you are using a global default “majority” read concern and the write concern is less than the size of the majority, your queries may return stale (not fully replicated) data.
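The mitigation referenced above boils down to temporarily stripping the affected secondary's vote so the majority commit point can advance from the primary alone. The following mongosh sketch follows the general shape of the MongoDB reconfiguration procedure; the member index 1 is an assumption about which member is the unavailable or lagging secondary:

```javascript
// Assumes members[1] is the unavailable or lagging secondary in a PSA set.
const cfg = rs.conf();
cfg.members[1].votes = 0;    // stop counting the secondary toward the majority
cfg.members[1].priority = 0; // a non-voting member must have priority 0
rs.reconfig(cfg);
// Once the secondary recovers, restore its votes and priority following
// the safe re-add steps in the MongoDB procedure.
```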
Using multiple arbiters in the same replica set can cause data inconsistency, and multiple arbiters prevent the reliable use of the “majority” write concern. For more details on these concerns, read this.
Taking this issue into account, KubeDB does not support deploying multiple arbiters in a single replica set.
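To see why extra arbiters are harmful, consider the vote arithmetic. This rough mongosh sketch (assuming you are connected to a replica set member) compares the majority required for a “majority” write against the number of members that can actually acknowledge one, since arbiters never can:

```javascript
// w: "majority" needs acknowledgement from a majority of voting members,
// but arbiters hold no data and can never acknowledge a write.
const conf = rs.conf();
const voters = conf.members.filter((m) => m.votes > 0);
const arbiters = voters.filter((m) => m.arbiterOnly);
const majority = Math.floor(voters.length / 2) + 1;
const ackers = voters.length - arbiters.length; // data-bearing voters
print(`majority required: ${majority}, members able to acknowledge: ${ackers}`);
// Example: with 3 data-bearing voters and 2 arbiters (5 votes), majority = 3,
// so every majority write needs all 3 data-bearing members; a single
// data-bearing failure stalls majority writes indefinitely.
```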
As arbiters do not store data, they do not possess the internal table of user and role mappings used for authentication. Thus, when running with authorization, arbiters exchange credentials with the other members of the set to authenticate.
The MongoDB documentation also suggests using TLS to avoid leaking unencrypted data when an arbiter communicates with other replica set members.
For replica sets, the write concern { w: 1 } only provides acknowledgement of write operations on the primary. Data may be rolled back if the primary steps down before the write operations have replicated to any of the secondaries. This type of behavior is called a w:1 rollback.
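For illustration, the following mongosh sketch contrasts the two write concerns on a hypothetical orders collection:

```javascript
// The { w: 1 } write is acknowledged by the primary alone and can be rolled
// back if the primary steps down before the write replicates.
db.orders.insertOne({ sku: "demo-1" }, { writeConcern: { w: 1 } });

// The { w: "majority" } write waits until a majority of voting members have
// the write; wtimeout avoids blocking forever if the majority is unreachable.
db.orders.insertOne({ sku: "demo-2" }, { writeConcern: { w: "majority", wtimeout: 5000 } });
```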
For the following MongoDB versions, pv1 (protocol version 1, the default starting in MongoDB 4.0) increases the likelihood of w:1 rollbacks compared to pv0 (no longer supported in MongoDB 4.0+) for replica sets with arbiters:
i) MongoDB 3.4.1
ii) MongoDB 3.4.0
iii) MongoDB 3.2.11 or earlier
NB: The images on this page are taken from the MongoDB website.