Question

I have two databases (MongoDB 3.6.5) in a replica set. When I create a split in the network, I force the secondary to become the primary:

cfg = rs.conf()
cfg.members[0].votes = 0      // original primary loses its vote
cfg.members[1].votes = 1
cfg.members[0].priority = 0   // original primary can no longer be elected
cfg.members[1].priority = 1
rs.reconfig(cfg, { force: true })

I already tested inserting on the secondary while it was acting as primary, and after restoring the network and demoting it back to secondary, the data was replicated: the original primary matched the secondary.

Now I disconnected the network again and ran several operations on the secondary (acting as primary) until the size of the oplog exceeded its maxSize. After some time without running operations, the size of the oplog decreased somewhat. I then ran more operations to grow the oplog again, but it is no longer growing as it did before.

My next step is to restore the network, demote the member back to secondary, and see whether it synchronizes.

I would like to know: how does the oplog work?


Solution

I have two databases (MongoDB 3.6.5) in a replica set. When I create a split in the network, I force the secondary to become the primary.

Forced reconfiguration should only be used as an administrative last resort. It is a risky approach with only two replica set members, because you can easily create data inconsistencies that require significant manual recovery effort.

If you have a network partition and inadvertently reconfigure both sides to have a primary, you will have to deal with rollbacks and manual reconciliation of conflicting writes. The worst case scenario would be having enough independent writes for the oplogs to no longer have a common point: you would have to choose one member to use as the primary and then resync the other.
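As an illustration of what to check once the partition heals (hostnames and exact output will vary by deployment), you can compare each member's state and last applied optime from the mongo shell; documents that were rolled back on the former primary are written as BSON files under the member's dbPath for manual reconciliation:

// Compare member states and optimes after the network is restored
rs.status().members.forEach(function (m) {
    print(m.name + "  " + m.stateStr + "  last applied: " + m.optimeDate);
});

// Rolled-back documents end up as BSON files under <dbPath>/rollback/
// and can be inspected with bsondump before deciding how to merge them.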

Now I disconnected the network again and ran several operations on the secondary (acting as primary) until the size of the oplog exceeded its maxSize. After some time without running operations, the size of the oplog decreased somewhat. I then ran more operations to grow the oplog again, but it is no longer growing as it did before.

The replication oplog is a system capped collection whose implementation varies somewhat based on the underlying MongoDB storage engine and version of MongoDB server. Conceptually a capped collection is maintained in insertion order with a maximum size in bytes or documents. You can think of this as similar to a circular buffer: once the capped collection reaches its maximum size, the storage engine removes or overwrites the oldest documents based on insertion order.
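As a rough illustration of this circular-buffer behaviour (the collection name and limits here are made up for the example), a small capped collection in the mongo shell discards the oldest documents once its limit is reached:

// Capped collection limited to 3 documents / 4096 bytes
db.createCollection("capped_demo", { capped: true, size: 4096, max: 3 })

for (var i = 1; i <= 5; i++) {
    db.capped_demo.insert({ n: i });
}

// Only the 3 newest documents remain, in insertion order: n = 3, 4, 5
db.capped_demo.find()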

The capped collection implementation for the oplog has some additional nuances (as compared to a standard capped collection) to improve performance for the replication use case. WiredTiger (the default storage engine in MongoDB 3.2+) does not preallocate the full space for the oplog so this collection will grow until it approaches the maximum oplog size. WiredTiger has a background thread that periodically prunes a batch of the oldest entries from the oplog.rs collection, so the size of this collection on disk is expected to fluctuate depending on overall write activity.
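To observe this on your own deployment, you can compare the configured maximum with the current size of the oplog from the mongo shell (the field names come from collStats and may vary slightly by version), along with the time window the oplog currently covers:

// Run against the member you are inspecting
var oplogStats = db.getSiblingDB("local").oplog.rs.stats();
print("configured max size (bytes): " + oplogStats.maxSize);
print("current data size (bytes):   " + oplogStats.size);

// Shows the configured oplog size and the time span between the
// first and last oplog entries (the replication window)
rs.printReplicationInfo()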
