The purpose of replication is to provide data redundancy and ensure availability. By keeping several copies of the data on different DB servers, we can avoid losing data when a single machine fails.
Replication can also improve read throughput: clients can send read and write operations to different servers. We can also deploy replicas across multiple data centers, so that users around the world connect to the nearest data center, improving application performance.
mongod instances in a Replica Set
In a replica set, the mongod daemons play two main roles: Primary and Secondary. Clients talk mainly to the Primary node, and every Secondary node replays the same operations as the Primary, so each ends up with an identical data set.
Each replica set has exactly one Primary node. To support replication, the Primary must record every data change in its oplog.
The Primary is the only node that accepts writes. Although both Primary and Secondary nodes are capable of serving reads, by default only the Primary accepts reads.
A Secondary copies the Primary's oplog and replays those operations against its own data set.
If the Primary cannot reach the other members for more than 10 seconds, the replica set elects one of the Secondaries as the next Primary: the first member to receive a majority of votes becomes the new Primary.
A Secondary can also be configured for special purposes, for example whether it participates in elections, or with priority set to 0 so that it purely keeps a backup of the data and can never become the Primary node.
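As a sketch of such a configuration (the hostname below is hypothetical), a backup-only member entry in the replica set config document might look like this:

```javascript
// Hypothetical member entry for a backup-only Secondary.
// priority: 0  -> can never be elected Primary
// hidden: true -> invisible to clients, so it never serves reads
// votes: 0     -> does not participate in elections at all
var backupMember = {
  _id: 3,
  host: "backup-host:27017", // hypothetical hostname
  priority: 0,
  hidden: true,
  votes: 0
};
```

Such a member would be added to the set with rs.add() or rs.reconfig() on the Primary.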
A replica set has a third role: the Arbiter. An Arbiter keeps no copy of the data; its job is to cast a tie-breaking vote when an even number of members would otherwise deadlock while electing a new Primary.
A replica set can contain at most 12 members, of which at most 7 may vote. The minimum configuration is one Primary, one Secondary, and one Arbiter, but the usual deployment is one Primary plus two Secondaries.
We can add a standalone mongod instance to act as the replica set's Arbiter. Since an Arbiter stores no data and only helps maintain the replica set's quorum and elect the next Primary, it does not require a dedicated machine.
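As a sketch (the hostname is hypothetical): an Arbiter can be added with the rs.addArb() shell helper, or expressed as a member entry with arbiterOnly set in the config document:

```javascript
// In the mongo shell, against a running replica set:
//   rs.addArb("arbiter-host:30000")   // hypothetical host
// The equivalent member entry in the config document:
var arbiterMember = {
  _id: 2,
  host: "arbiter-host:30000", // hypothetical host
  arbiterOnly: true           // stores no data, only votes
};
```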
Secondary nodes replicate data asynchronously. In other words, if you always read from a Secondary, you may occasionally read stale data.
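One way to bound that staleness is to make important writes wait for acknowledgment from a majority of members. A minimal sketch of such a write concern document (the timeout value is illustrative):

```javascript
// Write concern: the write is acknowledged only after a majority of
// replica set members have applied it, or it times out after 5 seconds.
var majorityWrite = { w: "majority", wtimeout: 5000 };
// In the mongo shell it would be used as:
//   db.temp.insert({ age: 22 }, { writeConcern: majorityWrite })
```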
When we deploy a replica set across data centers to improve data redundancy, we can follow the Geographically Distributed Replica Sets pattern:
- In the main data center, create a Primary with priority 1
- In the main data center, create a Secondary with priority 1
- In the second data center, create a Secondary with priority 0
Because the Secondary in the second data center has priority 0, it will never become the next Primary. When the main data center fails, the remaining member cannot elect a new Primary on its own, but it still holds a complete copy of the data.
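The three members above can be sketched as a config document (the hostnames are hypothetical):

```javascript
// Two priority-1 members in the main data center and one
// priority-0 member in the second data center.
var geoConfig = {
  _id: "rs1",
  members: [
    { _id: 0, host: "dc1-a.example.com:27017", priority: 1 },
    { _id: 1, host: "dc1-b.example.com:27017", priority: 1 },
    { _id: 2, host: "dc2-a.example.com:27017", priority: 0 }
  ]
};
// This document would be passed to rs.initiate(geoConfig) in the mongo shell.
```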
Testing replica sets on a single machine
Although a production deployment must place the Primary, Secondary, and Arbiter on different machines, or even in different data centers, for a first test of replica sets we will run everything on a single machine.
Create the data and log directories
mkdir -p /usr/share/mongodb/repl/data/r0
mkdir -p /usr/share/mongodb/repl/data/r1
mkdir -p /usr/share/mongodb/repl/data/r2
mkdir -p /usr/share/mongodb/repl/logs
Create the key files
Replica set members use the key file to authenticate each other. This step is optional; to skip it, simply leave out the --keyFile option when starting mongod. Note that the key file's permissions must be changed to 600.
mkdir -p /usr/share/mongodb/repl/key
echo "rs1 keyfile" > /usr/share/mongodb/repl/key/r0
echo "rs1 keyfile" > /usr/share/mongodb/repl/key/r1
echo "rs1 keyfile" > /usr/share/mongodb/repl/key/r2
chmod 600 /usr/share/mongodb/repl/key/r*
Start three mongod daemons
Start three mongod processes from the command line
/usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r0 --fork --port 27017 --dbpath /usr/share/mongodb/repl/data/r0 --logpath /usr/share/mongodb/repl/logs/r0.log --logappend
/usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r1 --fork --port 28017 --dbpath /usr/share/mongodb/repl/data/r1 --logpath /usr/share/mongodb/repl/logs/r1.log --logappend
/usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r2 --fork --port 29017 --dbpath /usr/share/mongodb/repl/data/r2 --logpath /usr/share/mongodb/repl/logs/r2.log --logappend
Check the mongod daemons with ps aux
# ps aux|grep mongod
root 13478 1.2 1.3 531552 50020 ? Sl 14:24 0:02 /usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r0 --fork --port 27017 --dbpath /usr/share/mongodb/repl/data/r0 --logpath /usr/share/mongodb/repl/logs/r0.log --logappend
root 13623 1.7 1.3 531556 52800 ? Sl 14:25 0:02 /usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r1 --fork --port 28017 --dbpath /usr/share/mongodb/repl/data/r1 --logpath /usr/share/mongodb/repl/logs/r1.log --logappend
root 13665 3.2 1.3 531552 53232 ? Sl 14:27 0:02 /usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r2 --fork --port 29017 --dbpath /usr/share/mongodb/repl/data/r2 --logpath /usr/share/mongodb/repl/logs/r2.log --logappend
Connect to r0 and initialize the replica set
Use mongo to connect to the r0 node, run rs.initiate to initialize the replica set, and finally check the state with rs.status().
mongo --port 27017
config_rs1={
_id: 'rs1',
members: [
{_id:0, host:'localhost:27017', priority:1},
{_id:1, host:'localhost:28017'},
{_id:2, host:'localhost:29017'}
]
};
rs.initiate(config_rs1);
rs.status();
rs.isMaster();
The test session follows
# mongo -port 27017
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:27017/test
> config_rs1={
... _id: 'rs1',
... members: [
... {_id:0, host:'localhost:27017', priority:1},
... {_id:1, host:'localhost:28017'},
... {_id:2, host:'localhost:29017'}
... ]
... };
{
"_id" : "rs1",
"members" : [
{
"_id" : 0,
"host" : "localhost:27017",
"priority" : 1
},
{
"_id" : 1,
"host" : "localhost:28017"
},
{
"_id" : 2,
"host" : "localhost:29017"
}
]
}
> rs.status();
{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94
}
> rs.initiate(config_rs1);
{ "ok" : 1 }
rs1:SECONDARY> rs.status();
{
"set" : "rs1",
"date" : ISODate("2015-11-12T06:29:01.774Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "localhost:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 280,
"optime" : Timestamp(1447309732, 1),
"optimeDate" : ISODate("2015-11-12T06:28:52Z"),
"electionTime" : Timestamp(1447309734, 1),
"electionDate" : ISODate("2015-11-12T06:28:54Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "localhost:28017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 9,
"optime" : Timestamp(1447309732, 1),
"optimeDate" : ISODate("2015-11-12T06:28:52Z"),
"lastHeartbeat" : ISODate("2015-11-12T06:29:00.389Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T06:29:00.533Z"),
"pingMs" : 0,
"configVersion" : 1
},
{
"_id" : 2,
"name" : "localhost:29017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 9,
"optime" : Timestamp(1447309732, 1),
"optimeDate" : ISODate("2015-11-12T06:28:52Z"),
"lastHeartbeat" : ISODate("2015-11-12T06:29:00.425Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T06:29:00.659Z"),
"pingMs" : 0,
"configVersion" : 1
}
],
"ok" : 1
}
rs1:PRIMARY> rs.isMaster();
{
"setName" : "rs1",
"setVersion" : 1,
"ismaster" : true,
"secondary" : false,
"hosts" : [
"localhost:27017",
"localhost:28017",
"localhost:29017"
],
"primary" : "localhost:27017",
"me" : "localhost:27017",
"electionId" : ObjectId("564431a624f5e9c6e3c56242"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2015-11-12T06:29:09.040Z"),
"maxWireVersion" : 3,
"minWireVersion" : 0,
"ok" : 1
}
oplog
MongoDB records every data change in the oplog, and replica sets actually replicate the oplog. oplog.rs is a fixed-length capped collection stored in the local database; its size can be adjusted with the --oplogSize option.
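For example, a member could be started with a larger oplog like this (the 2048 MB value is illustrative; --oplogSize is in megabytes and only takes effect when the oplog is first created):

```shell
/usr/share/mongodb/bin/mongod --replSet rs1 --oplogSize 2048 \
    --fork --port 27017 --dbpath /usr/share/mongodb/repl/data/r0 \
    --logpath /usr/share/mongodb/repl/logs/r0.log --logappend
```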
To browse the oplog, we must first authenticate as an admin.
First, create an admin account.
mongo --port 27017
db.createUser({
user: "admin",
pwd: "pass",
roles: [ { role: "root", db: "admin" } ]
});
exit;
Then reconnect to mongod as admin
mongo --port 27017 -u admin -p pass --authenticationDatabase admin
Now the following commands can be used to inspect the oplog
use local;
show collections;
db.oplog.rs.find();
db.printReplicationInfo();
db.printSlaveReplicationInfo();
db.system.replset.find();
The test session follows
rs1:PRIMARY> use local;
switched to db local
rs1:PRIMARY> show collections;
me
oplog.rs
startup_log
system.indexes
system.replset
Browse the documents in oplog.rs
rs1:PRIMARY> db.oplog.rs.find();
{ "ts" : Timestamp(1447309732, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1447310547, 1), "h" : NumberLong("-5005889511402504175"), "v" : 2, "op" : "c", "ns" : "admin.$cmd", "o" : { "create" : "system.version" } }
{ "ts" : Timestamp(1447310547, 2), "h" : NumberLong("7158333189499425797"), "v" : 2, "op" : "i", "ns" : "admin.system.version", "o" : { "_id" : "authSchema", "currentVersion" : 5 } }
{ "ts" : Timestamp(1447310547, 3), "h" : NumberLong("-1384756181233850446"), "v" : 2, "op" : "c", "ns" : "admin.$cmd", "o" : { "create" : "system.users" } }
{ "ts" : Timestamp(1447310547, 4), "h" : NumberLong("-8251644809692327494"), "v" : 2, "op" : "i", "ns" : "admin.system.users", "o" : { "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "YfZzd27A323g5bWhk+TAcA==", "storedKey" : "27CwlfcUPxr92bB6yr+AoExiBmE=", "serverKey" : "SxiNruT9vznek+uSkM8xIKlK3a8=" } }, "roles" : [ { "role" : "root", "db" : "admin" } ] } }
View the replication info, which is essentially a summary of the oplog
rs1:PRIMARY> db.printReplicationInfo();
configured oplog size: 990MB
log length start to end: 815secs (0.23hrs)
oplog first event time: Thu Nov 12 2015 14:28:52 GMT+0800 (CST)
oplog last event time: Thu Nov 12 2015 14:42:27 GMT+0800 (CST)
now: Thu Nov 12 2015 14:50:58 GMT+0800 (CST)
Check the secondaries' sync status
rs1:PRIMARY> db.printSlaveReplicationInfo();
source: localhost:28017
syncedTo: Thu Nov 12 2015 14:42:27 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: localhost:29017
syncedTo: Thu Nov 12 2015 14:42:27 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
The local database also contains system.replset, which records the replica set configuration
rs1:PRIMARY> db.system.replset.find();
{ "_id" : "rs1", "version" : 1, "members" : [ { "_id" : 0, "host" : "localhost:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 1, "host" : "localhost:28017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : 0, "votes" : 1 }, { "_id" : 2, "host" : "localhost:29017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : 0, "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatTimeoutSecs" : 10, "getLastErrorModes" : { }, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 } } }
Enabling reads on Secondaries
By default a Secondary does not allow queries: because replication is asynchronous, the data might not have been copied over yet at query time. But if the application can tolerate this kind of staleness, enabling reads on Secondaries separates reads from writes and reduces the load on the Primary.
Because we enabled authentication earlier for the oplog exercise, we must first create an account with read/write access to the test database
mongo --port 27017 -u admin -p pass --authenticationDatabase admin
use test
db.createUser(
{
user: "test",
pwd: "pass",
roles: [
{ role: "readWrite", db: "test" }
]
}
);
db.getUsers();
Connect to r0 (port 27017) and insert a document into temp
mongo --port 27017 -u test -p pass --authenticationDatabase test
db.temp.insert({age:22});
db.temp.find();
When we connect to a Secondary and run the show collections command, an error occurs.
# mongo --port 28017 -u test -p pass --authenticationDatabase test
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:28017/test
rs1:SECONDARY> show collections
2015-11-12T15:31:34.394+0800 E QUERY Error: listCollections failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
at Error (<anonymous>)
at DB._getCollectionInfosCommand (src/mongo/shell/db.js:646:15)
at DB.getCollectionInfos (src/mongo/shell/db.js:658:20)
at DB.getCollectionNames (src/mongo/shell/db.js:669:17)
at shellHelper.show (src/mongo/shell/utils.js:625:12)
at shellHelper (src/mongo/shell/utils.js:524:36)
at (shellhelp2):1:1 at src/mongo/shell/db.js:646
After running db.getMongo().setSlaveOk(), data can be read from the Secondary.
rs1:SECONDARY> db.getMongo().setSlaveOk();
rs1:SECONDARY> show collections;
system.indexes
temp
rs1:SECONDARY> db.temp.find();
{ "_id" : ObjectId("56443fedeabb66f27f42b08a"), "age" : 22 }
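Note that setSlaveOk() applies only to the current shell connection. Drivers express the same intent with a read preference, for example in the connection string (a sketch; the hosts match this test setup, and readPreference is a standard MongoDB connection string option):

```javascript
// Reads go to a Secondary when one is available,
// otherwise they fall back to the Primary.
var uri = "mongodb://localhost:27017,localhost:28017,localhost:29017/test" +
          "?replicaSet=rs1&readPreference=secondaryPreferred";
```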
Primary failover
When a node in a replica set fails, an automatic failover procedure takes over. If the Primary node is stopped, the remaining members automatically elect the next Primary node.
First, check the three running mongod processes
# ps aux|grep mongod
root 13478 0.4 2.6 3366244 99916 ? Sl 14:24 0:20 /usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r0 --fork --port 27017 --dbpath /usr/share/mongodb/repl/data/r0 --logpath /usr/share/mongodb/repl/logs/r0.log --logappend
root 13623 0.4 2.4 3353924 95288 ? Sl 14:25 0:19 /usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r1 --fork --port 28017 --dbpath /usr/share/mongodb/repl/data/r1 --logpath /usr/share/mongodb/repl/logs/r1.log --logappend
root 13665 0.4 2.4 3352892 93024 ? Sl 14:27 0:19 /usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r2 --fork --port 29017 --dbpath /usr/share/mongodb/repl/data/r2 --logpath /usr/share/mongodb/repl/logs/r2.log --logappend
Kill the process on port 27017 directly
kill -9 13478
When we connect to r1 (port 28017) and check the replica set state with rs.status(), we find that r1 (port 28017) has automatically become the Primary node.
mongo --port 28017 -u admin -p pass --authenticationDatabase admin
rs1:SECONDARY> rs.status();
{
"set" : "rs1",
"date" : ISODate("2015-11-12T07:44:33.873Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "localhost:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2015-11-12T07:44:33.798Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T07:44:29.359Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "Failed attempt to connect to localhost:27017; couldn't connect to server localhost:27017 (127.0.0.1), connection attempt failed",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "localhost:28017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4734,
"optime" : Timestamp(1447313389, 1),
"optimeDate" : ISODate("2015-11-12T07:29:49Z"),
"electionTime" : Timestamp(1447314272, 1),
"electionDate" : ISODate("2015-11-12T07:44:32Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "localhost:29017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4541,
"optime" : Timestamp(1447313389, 1),
"optimeDate" : ISODate("2015-11-12T07:29:49Z"),
"lastHeartbeat" : ISODate("2015-11-12T07:44:33.521Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T07:44:33.713Z"),
"pingMs" : 0,
"configVersion" : 1
}
],
"ok" : 1
}
If we start r0 (port 27017) again
/usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r0 --fork --port 27017 --dbpath /usr/share/mongodb/repl/data/r0 --logpath /usr/share/mongodb/repl/logs/r0.log --logappend
r1 (port 28017) remains the Primary node; the only difference is that r0's (port 27017) health goes from 0 back to 1, and it stays a SECONDARY.
rs1:PRIMARY> rs.status();
{
"set" : "rs1",
"date" : ISODate("2015-11-12T07:49:30.493Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "localhost:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 46,
"optime" : Timestamp(1447313389, 1),
"optimeDate" : ISODate("2015-11-12T07:29:49Z"),
"lastHeartbeat" : ISODate("2015-11-12T07:49:29.975Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T07:49:30.440Z"),
"pingMs" : 0,
"configVersion" : 1
},
{
"_id" : 1,
"name" : "localhost:28017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 5031,
"optime" : Timestamp(1447313389, 1),
"optimeDate" : ISODate("2015-11-12T07:29:49Z"),
"electionTime" : Timestamp(1447314272, 1),
"electionDate" : ISODate("2015-11-12T07:44:32Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "localhost:29017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4837,
"optime" : Timestamp(1447313389, 1),
"optimeDate" : ISODate("2015-11-12T07:29:49Z"),
"lastHeartbeat" : ISODate("2015-11-12T07:49:29.722Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T07:49:29.909Z"),
"pingMs" : 0,
"configVersion" : 1
}
],
"ok" : 1
}
Maintaining replica set members
If three machines can no longer handle the load, we can maintain the replica set by adding or removing members.
First, create two new mongod nodes, r3 and r4.
mkdir -p /usr/share/mongodb/repl/data/r3
mkdir -p /usr/share/mongodb/repl/data/r4
echo "rs1 keyfile" > /usr/share/mongodb/repl/key/r3
echo "rs1 keyfile" > /usr/share/mongodb/repl/key/r4
chmod 600 /usr/share/mongodb/repl/key/r*
/usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r3 --fork --port 30017 --dbpath /usr/share/mongodb/repl/data/r3 --logpath /usr/share/mongodb/repl/logs/r3.log --logappend
/usr/share/mongodb/bin/mongod --replSet rs1 --keyFile /usr/share/mongodb/repl/key/r4 --fork --port 31017 --dbpath /usr/share/mongodb/repl/data/r4 --logpath /usr/share/mongodb/repl/logs/r4.log --logappend
Connect to the Primary node
mongo --port 28017 -u admin -p pass --authenticationDatabase admin
Add a node with the rs.add command
rs.add("localhost:30017");
Check the replica set state with rs.status()
rs1:PRIMARY> rs.status();
{
"set" : "rs1",
"date" : ISODate("2015-11-12T08:55:57.447Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "localhost:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4033,
"optime" : Timestamp(1447313389, 1),
"optimeDate" : ISODate("2015-11-12T07:29:49Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:55:56.100Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:55:57.086Z"),
"pingMs" : 0,
"configVersion" : 1
},
{
"_id" : 1,
"name" : "localhost:28017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 9018,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"electionTime" : Timestamp(1447314272, 1),
"electionDate" : ISODate("2015-11-12T07:44:32Z"),
"configVersion" : 2,
"self" : true
},
{
"_id" : 2,
"name" : "localhost:29017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 8824,
"optime" : Timestamp(1447313389, 1),
"optimeDate" : ISODate("2015-11-12T07:29:49Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:55:56.101Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:55:56.550Z"),
"pingMs" : 0,
"configVersion" : 1
},
{
"_id" : 3,
"name" : "localhost:30017",
"health" : 1,
"state" : 0,
"stateStr" : "STARTUP",
"uptime" : 1,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:55:56.116Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:55:56.201Z"),
"pingMs" : 16,
"configVersion" : -2
}
],
"ok" : 1
}
rs1:PRIMARY> rs.status();
{
"set" : "rs1",
"date" : ISODate("2015-11-12T08:55:58.609Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "localhost:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4034,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:55:58.100Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:55:57.086Z"),
"pingMs" : 0,
"syncingTo" : "localhost:28017",
"configVersion" : 2
},
{
"_id" : 1,
"name" : "localhost:28017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 9019,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"electionTime" : Timestamp(1447314272, 1),
"electionDate" : ISODate("2015-11-12T07:44:32Z"),
"configVersion" : 2,
"self" : true
},
{
"_id" : 2,
"name" : "localhost:29017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 8826,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:55:58.102Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:55:58.564Z"),
"pingMs" : 0,
"syncingTo" : "localhost:28017",
"configVersion" : 2
},
{
"_id" : 3,
"name" : "localhost:30017",
"health" : 1,
"state" : 5,
"stateStr" : "STARTUP2",
"uptime" : 2,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:55:58.116Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:55:58.201Z"),
"pingMs" : 12,
"configVersion" : 2
}
],
"ok" : 1
}
rs1:PRIMARY> rs.status();
{
"set" : "rs1",
"date" : ISODate("2015-11-12T08:56:07.981Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "localhost:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4044,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:56:06.100Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:56:07.089Z"),
"pingMs" : 0,
"syncingTo" : "localhost:28017",
"configVersion" : 2
},
{
"_id" : 1,
"name" : "localhost:28017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 9028,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"electionTime" : Timestamp(1447314272, 1),
"electionDate" : ISODate("2015-11-12T07:44:32Z"),
"configVersion" : 2,
"self" : true
},
{
"_id" : 2,
"name" : "localhost:29017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 8835,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:56:06.119Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:56:06.566Z"),
"pingMs" : 0,
"syncingTo" : "localhost:28017",
"configVersion" : 2
},
{
"_id" : 3,
"name" : "localhost:30017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 11,
"optime" : Timestamp(1447318556, 1),
"optimeDate" : ISODate("2015-11-12T08:55:56Z"),
"lastHeartbeat" : ISODate("2015-11-12T08:56:06.119Z"),
"lastHeartbeatRecv" : ISODate("2015-11-12T08:56:06.205Z"),
"pingMs" : 4,
"configVersion" : 2
}
],
"ok" : 1
}
Adding r4 (port 31017) to the replica set shows a similar progression
rs.add("localhost:31017");
Connect to the newly added r4, and we can see the data has already been replicated.
# mongo --port 31017 -u test -p pass --authenticationDatabase test
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:31017/test
rs1:SECONDARY> db.temp.find();
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
rs1:SECONDARY> db.setSlaveOk();
rs1:SECONDARY> db.temp.find();
{ "_id" : ObjectId("56443fedeabb66f27f42b08a"), "age" : 22 }
A node can be removed with the rs.remove command
rs.remove("localhost:30017");
rs.remove("localhost:31017");