Question

I'm wondering if it is possible (within reason) to sync a new MongoDB Replica Set (RS) with an RS currently in production.

The Problem

We have a site running on AWS that has its own MongoDB Replica Set and we're working on migrating that site to Google Cloud. We want to start a separate RS on Google Compute that will catch up with the AWS RS and stay up to date during our QA lifecycle until we're ready to cut DNS over to Google.

We originally tried adding the replicas on Google as secondaries to the AWS RS to keep them in sync. Our plan was to then take the AWS RS offline and let the Google RS reassign a Primary and continue as the only RS after the site was completely migrated.

This caused problems when one of the Google replicas was elected primary, which created syncing issues with our current live site.

The Solution??

I've been debating whether it's possible to rsync from the AWS primary to the GCloud primary. Is this doable with Mongo? Will the GCloud primary then properly replicate to its secondaries?

Or...

Is there a better solution that anyone knows of for this problem?

UPDATE

After adding the Google nodes to the replica set, we're not seeing them sync:

INFRA-GENERAL-00:SECONDARY> rs.status()
{
"set" : "INFRA-GENERAL-00",
"date" : ISODate("2016-08-03T00:22:37.275Z"),
"myState" : 2,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
    {
        "_id" : 0,
        "name" : "****:27017",
        "health" : 0,
        "state" : 8,
        "stateStr" : "(not reachable/healthy)",
        "uptime" : 0,
        "optime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
        "lastHeartbeat" : ISODate("2016-08-03T00:22:35.205Z"),
        "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
        "pingMs" : NumberLong(0),
        "lastHeartbeatMessage" : "exception: field not found, expected type 16",
        "configVersion" : -1
    },
    {
        "_id" : 1,
        "name" : "****:27017",
        "health" : 0,
        "state" : 8,
        "stateStr" : "(not reachable/healthy)",
        "uptime" : 0,
        "optime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
        "lastHeartbeat" : ISODate("2016-08-03T00:22:34.728Z"),
        "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
        "pingMs" : NumberLong(0),
        "lastHeartbeatMessage" : "exception: field not found, expected type 16",
        "configVersion" : -1
    },
    {
        "_id" : 2,
        "name" : "****:27017",
        "health" : 0,
        "state" : 8,
        "stateStr" : "(not reachable/healthy)",
        "uptime" : 0,
        "optime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
        "lastHeartbeat" : ISODate("2016-08-03T00:22:35.248Z"),
        "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
        "pingMs" : NumberLong(0),
        "lastHeartbeatMessage" : "exception: field not found, expected type 16",
        "configVersion" : -1
    },
    {
        "_id" : 3,
        "name" : "****:27017",
        "health" : 1,
        "state" : 2,
        "stateStr" : "SECONDARY",
        "uptime" : 338,
        "optime" : {
            "ts" : Timestamp(1470169610, 1),
            "t" : NumberLong(2)
        },
        "optimeDate" : ISODate("2016-08-02T20:26:50Z"),
        "configVersion" : 175485,
        "self" : true
    }
],
"ok" : 1
}

INFRA-GENERAL-00:SECONDARY> rs.config()
2016-08-03T00:23:49.051+0000 E QUERY    [thread1] Error: Could not retrieve replica set config: {
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { replSetGetConfig: 1.0 }",
"code" : 13
} :
rs.conf@src/mongo/shell/utils.js:1091:11
@(shell):1:1

Solution 2

It turns out we were unable to add Mongo 3.2 servers to a Mongo 2.4 cluster (which explains the "exception: field not found, expected type 16" heartbeat errors above), so we decided to downgrade the cluster we're migrating to and look into upgrading later. After downgrading the Google cluster to Mongo 2.4, replication is working well.

A helpful strategy when migrating to a new system is to make sure none of the servers in the new cluster can vote. This avoids the replica set passing through phases with an even number of voting members, which can cause election problems. More on non-voting replicas here: https://docs.mongodb.com/manual/tutorial/configure-a-non-voting-replica-set-member/
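As a rough sketch, making the new members non-voting (and priority 0) could look like the following from the mongo shell on the current primary. The member indexes used here are hypothetical; check rs.conf() for the actual positions of the new nodes in your members array first.

```javascript
// Run against the current primary. Member indexes 3-5 are placeholders;
// verify which entries in cfg.members are the new GC nodes before editing.
cfg = rs.conf()
cfg.members[3].votes = 0
cfg.members[3].priority = 0
cfg.members[4].votes = 0
cfg.members[4].priority = 0
cfg.members[5].votes = 0
cfg.members[5].priority = 0
rs.reconfig(cfg)
```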

Also, as @Adam C mentioned, it is important to set the new members to priority 0; this keeps them from being elected primary. (It did not seem to keep them from participating in elections, though, so we used the votes setting as well.)

OTHER TIPS

You can follow your original strategy with a minor tweak: add Google Compute (GC) nodes into the current replica set and keep them in sync until ready to cut over. The only thing you need to change is to set those new GC nodes to have priority 0. They will be full members of the set, they just cannot be elected primary (or trigger elections).
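Adding the GC nodes with priority 0 might look something like this from the mongo shell on the current AWS primary. The hostnames and _id values here are placeholders, not from the original setup:

```javascript
// Run on the current (AWS) primary. Hostnames and _id values are hypothetical.
// priority: 0 means these members can never be elected primary.
rs.add({ _id : 4, host : "gc-node-1.example.com:27017", priority : 0 })
rs.add({ _id : 5, host : "gc-node-2.example.com:27017", priority : 0 })
```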

Once you are ready to switch over to the Google nodes, you need only reset the priority to 1 (or whatever your normal values are), and then step down or remove the AWS nodes from the set. You can guarantee that a GC node is elected primary by giving them a higher priority than the AWS nodes before issuing the step down.
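A sketch of that cutover, again with hypothetical member indexes and hostnames — verify them against rs.conf() before running anything:

```javascript
// On the current primary: raise the GC nodes' priority above the AWS nodes',
// then force an election by stepping the AWS primary down.
cfg = rs.conf()
cfg.members[4].priority = 2   // GC node (index is a placeholder)
cfg.members[5].priority = 2   // GC node (index is a placeholder)
rs.reconfig(cfg)
rs.stepDown(60)               // AWS primary steps down for 60 seconds

// Once a GC node is primary, remove the AWS members:
rs.remove("aws-node-1.example.com:27017")
```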

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange