Question

I use MongoDB 4.0.10. I want to establish a quorum for a cluster of one primary node and two secondary nodes, as written here. When the number of available nodes is less than the quorum (3 nodes in my case), the cluster should become read-only (no election).

I've tried setting the priority of the two secondaries to 0. In that case, if the primary goes down, there is no election, but if one of the secondaries goes down, the old primary remains primary.

In MongoDB docs terminology, is it possible to set a replica set's fault tolerance to zero? That is, if any cluster node goes down, no new primary will be elected.

UPDATE
rs.conf():

rs0:PRIMARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 4,
        "protocolVersion" : NumberLong(1),
        "writeConcernMajorityJournalDefault" : true,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "mongo0:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "mongo1:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 0
                },
                {
                        "_id" : 2,
                        "host" : "mongo2:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 0
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5cf5011183ea2fa5beade86b")
        }
}

I want to set a quorum rule of 3 as a split-brain protection measure: if fewer than 3 nodes in the cluster are alive, writes should be impossible and only reads allowed. I set the priority of the secondary nodes to 0, so if the primary fails, no new primary is elected. That works fine, but I also want the primary to behave as follows: if any secondary is unavailable, the primary steps down to secondary until all cluster members are available again.
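
A minimal sketch of a reconfig that produces the priorities and votes shown above (assuming the member array indexes match the rs.conf() output):

// run on the current primary; members[1] and members[2] are the two secondaries
cfg = rs.conf()
cfg.members[1].priority = 0
cfg.members[1].votes = 0
cfg.members[2].priority = 0
cfg.members[2].votes = 0
rs.reconfig(cfg)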


Solution

I want to set a quorum rule of 3. It's a split-brain protection measure: if fewer than 3 nodes in the cluster are alive, writes should be impossible and only reads allowed.

Split brain usually refers to a scenario where you have inconsistent data on both sides of a partition, for example with different writes accepted. The quorum requirement in MongoDB replication is designed to avoid that: a primary can only be elected (or sustained) in a partition that has a majority of configured voting members available. Any minority partitions will be increasingly stale, but will still have consistent history with a possibility of resuming sync when the network partition is resolved.

I also want the primary to behave as follows: if any secondary is unavailable, the primary steps down to secondary until all cluster members are available again.

This is an atypical configuration since it does not allow for any fault tolerance for writes (which is one of the key benefits of replica sets). A recommended approach would be to configure all of your nodes as voting members, use a w:majority write concern to require majority acknowledgement of writes, and use a majority read concern to ensure documents you read are guaranteed not to roll back. The maxStalenessSeconds read preference can also be useful if you are concerned about reading from a secondary that has fallen too far behind.
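
As a rough illustration of that recommended setup in the mongo shell (the test collection name and the 120-second staleness bound are placeholders):

// write acknowledged by a majority of voting members; wtimeout bounds the wait
db.test.insertOne(
    { status: "ok" },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
)

// read only data that is durable on a majority of members and cannot roll back
db.test.find({ status: "ok" }).readConcern("majority")

// connection-string read preference that skips secondaries lagging more than 120s:
// mongodb://mongo0:27017,mongo1:27017,mongo2:27017/?replicaSet=rs0&readPreference=secondaryPreferred&maxStalenessSeconds=120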

However, if you want to ensure writes only succeed with all members available, you could consider one of the following approaches (both sketched after the list):

  • Use a w:3 write concern to require acknowledgement from all 3 members of your replica set.
  • Configure your replica set with 2 members (instead of 3), both voting. This will require both members to be online in order to elect or sustain a primary.
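
A rough sketch of both options in the mongo shell (host name and member index taken from the rs.conf() above; the collection name is a placeholder):

// option 1: require acknowledgement from all 3 members; wtimeout makes the
// acknowledgement wait error out instead of blocking indefinitely when a member is down
db.test.insertOne(
    { status: "ok" },
    { writeConcern: { w: 3, wtimeout: 5000 } }
)

// option 2: shrink to a 2-member, both-voting replica set; a primary can then only
// be elected or sustained while both members are reachable
rs.remove("mongo2:27017")
cfg = rs.conf()
cfg.members[1].votes = 1      // make the remaining secondary a voter again
cfg.members[1].priority = 1   // and electable
rs.reconfig(cfg)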

OTHER TIPS

Setting w:3 will require acknowledgement from all 3 members of your replica set. However, if any one of them is unavailable, the write will still be applied and buffered locally on the primary, which may lead to a disk-full scenario if the downtime is long. I believe w:majority is the best approach with a primary-secondary-secondary configuration.
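
If you do settle on w:majority, one way to make it the default on 4.0 (instead of passing it on every operation) is the getLastErrorDefaults setting already visible in the posted rs.conf(). This is only a sketch; newer MongoDB versions replace this mechanism with a cluster-wide default write concern:

// make { w: "majority" } the default write concern for this replica set (MongoDB 4.0)
cfg = rs.conf()
cfg.settings.getLastErrorDefaults = { w: "majority", wtimeout: 5000 }
rs.reconfig(cfg)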

Meanwhile, it would be great if you could share why you want your cluster to become read-only even when only one member is down? That simply sacrifices the high availability of your cluster. If you just want to make sure reads are not dirty reads, I think w:majority (combined with a majority read concern) will let you read data that will not be rolled back.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange