I. MongoDB Sharding
1. The Concept of Sharding
A MongoDB replica set resembles Redis high availability: secondaries are read-only, and the amount of data stored is bounded by the primary. If a secondary has lower specs than the primary, replication runs into trouble; if the primary has lower specs than the secondaries, a lot of resources are wasted.
Sharding solves this by distributing data evenly across shards on multiple nodes, so the usable resources are the sum of all the nodes.
2. Characteristics of Sharding
#Pros:
1. Better utilization of machine resources
2. Less pressure on the primary
#Cons:
1. Requires more machines
2. More complicated to administer
3. Once sharding is configured it is hard to change, so avoid modifying it
3. How Sharding Works
1) mongos server (query router)
Acts like a proxy (similar to Atlas for MySQL), routing data to the backend mongod services.
2) config server (cluster metadata)
The mongos server does not know how many backend hosts exist or what they look like; it connects to the config server, which records the addresses and metadata of the backend mongod servers.
Role:
1. Records the backend mongod node information
2. Records which shard each piece of data was assigned to, so it can be found again
3) Shard key
The config server stores the backend mongod information, but data must be distributed according to a rule, not arbitrarily.
The shard key defines that distribution rule.
Role:
1. Distributes data to the different shards according to the rule
2. Works like an index, so it can also speed up access
Types:
1. Ranged shard key (data may end up distributed unevenly)
2. Hashed shard key (data is distributed very evenly)
4) Shard
MongoDB's unit of distributed storage
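The difference between the two shard key types can be sketched with a tiny simulation (a hypothetical three-shard model for illustration, not MongoDB's real chunk/balancer mechanics): a monotonically increasing id placed by range lands on a single shard, while hashing the same id spreads documents almost evenly.

```python
import hashlib

SHARDS = ["shard1", "shard2", "shard3"]

def range_shard(doc_id, boundaries=(10000, 20000)):
    """Ranged shard key: pick a shard by comparing the key to chunk boundaries (made-up values)."""
    for i, upper in enumerate(boundaries):
        if doc_id < upper:
            return SHARDS[i]
    return SHARDS[-1]

def hash_shard(doc_id):
    """Hashed shard key: hash the key first, then place the document by the hash value."""
    h = int(hashlib.md5(str(doc_id).encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

# Insert ids 1..9999 (monotonically increasing, like an auto-increment id)
range_counts = {s: 0 for s in SHARDS}
hash_counts = {s: 0 for s in SHARDS}
for i in range(1, 10000):
    range_counts[range_shard(i)] += 1
    hash_counts[hash_shard(i)] += 1

print(range_counts)  # {'shard1': 9999, 'shard2': 0, 'shard3': 0} -- the monotonic key piles up
print(hash_counts)   # roughly one third per shard -- the hash spreads the ids
```

This is exactly the skew warned about above: a ranged key on an increasing id keeps every new write in one chunk, while a hashed key trades that for even distribution (at the cost of efficient range queries).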
II. Sharded High-Availability Architecture

III. Deploying MongoDB
1. Server Plan
| Host | IP | Services | Ports |
|---|---|---|---|
| mongodb01 | 10.0.0.81 | mongos server, config server, shard1 master, shard3 slave, shard2 arbiter | 60000, 40000, 28010, 28020, 28030 |
| mongodb02 | 10.0.0.82 | mongos server, config server, shard2 master, shard1 slave, shard3 arbiter | 60000, 40000, 28010, 28020, 28030 |
| mongodb03 | 10.0.0.83 | mongos server, config server, shard3 master, shard2 slave, shard1 arbiter | 60000, 40000, 28010, 28020, 28030 |
2. Directory Plan
mkdir /server/mongodb/master/{conf,log,pid,data} -p
mkdir /server/mongodb/slave/{conf,log,pid,data} -p
mkdir /server/mongodb/arbiter/{conf,log,pid,data} -p
mkdir /server/mongodb/config/{conf,log,pid,data} -p
mkdir /server/mongodb/mongos/{conf,log,pid,data} -p
3. Installing MongoDB
#Install dependencies
yum install -y libcurl openssl
#Upload the package
rz
#Extract the package
tar xf mongodb-linux-x86_64-3.6.13.tgz -C /usr/local/
ln -s /usr/local/mongodb-linux-x86_64-3.6.13 /usr/local/mongodb
4. Configuring mongodb01
1) Configure the master
[root@mongodb01 ~]# vim /server/mongodb/master/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/master/log/master.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/master/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/master/pid/master.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28010
  bindIp: 127.0.0.1,10.0.0.81
replication:
  oplogSizeMB: 1024
  replSetName: shard1
sharding:
  clusterRole: shardsvr
2) Configure the slave
[root@mongodb01 ~]# vim /server/mongodb/slave/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/slave/log/slave.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/slave/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/slave/pid/slave.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28020
  bindIp: 127.0.0.1,10.0.0.81
replication:
  oplogSizeMB: 1024
  replSetName: shard3
sharding:
  clusterRole: shardsvr
3) Configure the arbiter
[root@mongodb01 ~]# vim /server/mongodb/arbiter/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/arbiter/log/arbiter.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/arbiter/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/arbiter/pid/arbiter.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28030
  bindIp: 127.0.0.1,10.0.0.81
replication:
  oplogSizeMB: 1024
  replSetName: shard2
sharding:
  clusterRole: shardsvr
5. Configuring mongodb02
1) Configure the master
[root@mongodb02 ~]# vim /server/mongodb/master/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/master/log/master.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/master/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/master/pid/master.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28010
  bindIp: 127.0.0.1,10.0.0.82
replication:
  oplogSizeMB: 1024
  replSetName: shard2
sharding:
  clusterRole: shardsvr
2) Configure the slave
[root@mongodb02 ~]# vim /server/mongodb/slave/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/slave/log/slave.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/slave/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/slave/pid/slave.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28020
  bindIp: 127.0.0.1,10.0.0.82
replication:
  oplogSizeMB: 1024
  replSetName: shard1
sharding:
  clusterRole: shardsvr
3) Configure the arbiter
[root@mongodb02 ~]# vim /server/mongodb/arbiter/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/arbiter/log/arbiter.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/arbiter/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/arbiter/pid/arbiter.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28030
  bindIp: 127.0.0.1,10.0.0.82
replication:
  oplogSizeMB: 1024
  replSetName: shard3
sharding:
  clusterRole: shardsvr
6. Configuring mongodb03
1) Configure the master
[root@mongodb03 ~]# vim /server/mongodb/master/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/master/log/master.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/master/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/master/pid/master.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28010
  bindIp: 127.0.0.1,10.0.0.83
replication:
  oplogSizeMB: 1024
  replSetName: shard3
sharding:
  clusterRole: shardsvr
2) Configure the slave
[root@mongodb03 ~]# vim /server/mongodb/slave/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/slave/log/slave.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/slave/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/slave/pid/slave.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28020
  bindIp: 127.0.0.1,10.0.0.83
replication:
  oplogSizeMB: 1024
  replSetName: shard2
sharding:
  clusterRole: shardsvr
3) Configure the arbiter
[root@mongodb03 ~]# vim /server/mongodb/arbiter/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/arbiter/log/arbiter.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/arbiter/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/arbiter/pid/arbiter.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 28030
  bindIp: 127.0.0.1,10.0.0.83
replication:
  oplogSizeMB: 1024
  replSetName: shard1
sharding:
  clusterRole: shardsvr
7. Configure Environment Variables
[root@mongodb01 ~]# vim /etc/profile.d/mongo.sh
export PATH="/usr/local/mongodb/bin:$PATH"
[root@mongodb01 ~]# source /etc/profile
8. Silence Startup Warnings
useradd mongo -s /sbin/nologin -M
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
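The two echo commands above only last until the next reboot. One common workaround (an assumption here; adapt it to your distro's init setup) is to replay them at boot from /etc/rc.local:

```shell
cat >> /etc/rc.local <<'EOF'
# Disable transparent hugepages at boot (suppresses the MongoDB startup warning)
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
EOF
chmod +x /etc/rc.local
```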
9. Managing mongod with systemd
1) systemd unit for the master
[root@mongodb01 ~]# vim /usr/lib/systemd/system/mongod-master.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongo
Group=mongo
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/master/
ExecStart=/usr/local/mongodb/bin/mongod -f /server/mongodb/master/conf/mongo.conf
ExecStop=/usr/local/mongodb/bin/mongod -f /server/mongodb/master/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/master/pid/master.pid
Type=forking
[Install]
WantedBy=multi-user.target
2) systemd unit for the slave
[root@mongodb01 ~]# vim /usr/lib/systemd/system/mongod-slave.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongo
Group=mongo
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/slave/
ExecStart=/usr/local/mongodb/bin/mongod -f /server/mongodb/slave/conf/mongo.conf
ExecStop=/usr/local/mongodb/bin/mongod -f /server/mongodb/slave/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/slave/pid/slave.pid
Type=forking
[Install]
WantedBy=multi-user.target
3) systemd unit for the arbiter
[root@mongodb01 ~]# vim /usr/lib/systemd/system/mongod-arbiter.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongo
Group=mongo
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/arbiter/
ExecStart=/usr/local/mongodb/bin/mongod -f /server/mongodb/arbiter/conf/mongo.conf
ExecStop=/usr/local/mongodb/bin/mongod -f /server/mongodb/arbiter/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/arbiter/pid/arbiter.pid
Type=forking
[Install]
WantedBy=multi-user.target
4) Reload the systemd configuration
systemctl daemon-reload
10. Start the Services
systemctl start mongod-master.service
systemctl start mongod-slave.service
systemctl start mongod-arbiter.service
IV. Building the Replica Sets
1. Initialize the shard1 replica set on mongodb01
#Connect to the master
mongo --port 28010
#Build the replica set config
config={_id:"shard1",members:[{_id:0,host:"10.0.0.81:28010"},{_id:1,host:"10.0.0.82:28020"},{_id:2,host:"10.0.0.83:28030"}]}
#Initiate the replica set
rs.initiate(config)
2. Initialize the shard2 replica set on mongodb02
#Connect to the master
mongo --port 28010
#Build the replica set config
config={_id:"shard2",members:[{_id:0,host:"10.0.0.82:28010"},{_id:1,host:"10.0.0.83:28020"},{_id:2,host:"10.0.0.81:28030"}]}
#Initiate the replica set
rs.initiate(config)
3. Initialize the shard3 replica set on mongodb03
#Connect to the master
mongo --port 28010
#Build the replica set config
config={_id:"shard3",members:[{_id:0,host:"10.0.0.83:28010"},{_id:1,host:"10.0.0.81:28020"},{_id:2,host:"10.0.0.82:28030"}]}
#Initiate the replica set
rs.initiate(config)
4. Check the replica set status
rs.status()
#Reconfigure the replica set (only if the configuration needs fixing)
config={_id:"shard3",members:[{_id:0,host:"10.0.0.83:28010"},{_id:1,host:"10.0.0.81:28020"},{_id:2,host:"10.0.0.82:28030"}]}
rs.reconfig(config)
V. Configuring the config server
1. Create directories
mkdir /server/mongodb/config/{conf,log,pid,data} -p
2. Configure the config server (on all three machines)
[root@mongodb01 ~]# vim /server/mongodb/config/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/config/log/config.log
storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/config/data/
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /server/mongodb/config/pid/config.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 40000
  bindIp: 127.0.0.1,10.0.0.81
  #bindIp: 127.0.0.1,10.0.0.82 (use on mongodb02)
  #bindIp: 127.0.0.1,10.0.0.83 (use on mongodb03)
replication:
  replSetName: configset
sharding:
  clusterRole: configsvr
3. systemd unit for the config server (on all three machines)
[root@mongodb01 ~]# vim /usr/lib/systemd/system/mongod-config.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongo
Group=mongo
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/config/
ExecStart=/usr/local/mongodb/bin/mongod -f /server/mongodb/config/conf/mongo.conf
ExecStop=/usr/local/mongodb/bin/mongod -f /server/mongodb/config/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/config/pid/config.pid
Type=forking
[Install]
WantedBy=multi-user.target
[root@mongodb01 ~]# systemctl daemon-reload
4. Start
[root@mongodb01 ~]# systemctl start mongod-config.service
[root@mongodb02 ~]# systemctl start mongod-config.service
[root@mongodb03 ~]# systemctl start mongod-config.service
5. Initialize the config server replica set
rs.initiate({_id:"configset",configsvr:true,members:[{_id:0,host:"10.0.0.81:40000"},{_id:1,host:"10.0.0.82:40000"},{_id:2,host:"10.0.0.83:40000"}]})
VI. Configuring the mongos server
1. Create directories
mkdir /server/mongodb/mongos/{conf,log,pid,data} -p
2. Configure mongos
[root@mongodb01 ~]# vim /server/mongodb/mongos/conf/mongo.conf
systemLog:
  destination: file
  logAppend: true
  path: /server/mongodb/mongos/log/mongos.log
processManagement:
  fork: true
  pidFilePath: /server/mongodb/mongos/pid/mongos.pid
  timeZoneInfo: /usr/share/zoneinfo
net:
  port: 60000
  bindIp: 127.0.0.1,10.0.0.81
sharding:
  configDB: configset/10.0.0.81:40000,10.0.0.82:40000,10.0.0.83:40000
3. systemd unit for mongos
[root@mongodb01 ~]# vim /usr/lib/systemd/system/mongod-mongos.service
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongo
Group=mongo
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/mongos/
ExecStart=/usr/local/mongodb/bin/mongos -f /server/mongodb/mongos/conf/mongo.conf
ExecStop=/usr/local/mongodb/bin/mongos -f /server/mongodb/mongos/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/mongos/pid/mongos.pid
Type=forking
[Install]
WantedBy=multi-user.target
[root@mongodb01 ~]# systemctl daemon-reload
4. Start
systemctl start mongod-mongos.service
5. Add the shard members
#Log in to mongos
mongo --port 60000
#Add the shards
use admin
db.runCommand({addShard:"shard1/10.0.0.81:28010,10.0.0.82:28020,10.0.0.83:28030"})
db.runCommand({addShard:"shard2/10.0.0.82:28010,10.0.0.83:28020,10.0.0.81:28030"})
db.runCommand({addShard:"shard3/10.0.0.83:28010,10.0.0.81:28020,10.0.0.82:28030"})
#List the shards
db.runCommand({listshards: 1})
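Besides listshards, the shell helper sh.status() (run against mongos) prints shards, sharded databases, and chunk distribution in one view, which is a quick sanity check that all three shards registered:

```javascript
// In the mongo shell connected to mongos (port 60000)
sh.status()
// The "shards" section should list shard1, shard2 and shard3
// with their replica-set member addresses.
```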
VII. Configuring Range Sharding
1. Enable sharding on a database
mongos> use admin
#Enable sharding for the test database
mongos> db.runCommand({enablesharding:"test"})
2. Index the shard key field
mongos> use test
#Create an index on the id field of the range collection
mongos> db.range.ensureIndex({id:1})
3. Shard the collection
mongos> use admin
#Shard test.range on id
mongos> db.runCommand({shardcollection:"test.range",key:{id:1}})
4. Insert test data
#Insert data
mongo --port 60000
mongos> use test
switched to db test
mongos> for(i=1;i<10000;i++){ db.range.insert({"id":i,"name":"shanghai","age":28,"date":new Date()}); }
#Log in to a shard to check
[root@mongodb02 ~]# mongo --port 28010
shard2:PRIMARY> use test
switched to db test
shard2:PRIMARY> show tables
range
shard2:PRIMARY> db.range.count()
9999
#All the data piles up on one shard, because no specific range boundaries were defined
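If range sharding is really wanted, the chunk ranges can be split by hand before the load arrives; a sketch using the shell helper sh.splitAt() (the boundary values here are arbitrary examples, not recommendations):

```javascript
// Run against mongos; split test.range into chunks at example boundaries
sh.splitAt("test.range", { id: 2500 })
sh.splitAt("test.range", { id: 5000 })
sh.splitAt("test.range", { id: 7500 })
// The balancer can then migrate chunks across shards. Note that with a
// monotonically increasing id, new writes still land in the top chunk.
```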
VIII. Configuring Hash Sharding
1. Enable sharding on the database
mongos> use admin
#Enable sharding for the testhash database
mongos> db.runCommand({enablesharding:"testhash"})
2. Index the shard key field
mongos> use testhash
#Create a hashed index on the id field
mongos> db.hash.ensureIndex({id:"hashed"})
3. Shard the collection with a hashed key
mongos> use admin
#Shard testhash.hash on the hashed id
mongos> db.runCommand({shardcollection:"testhash.hash",key:{id:"hashed"}})
4. Generate test data
mongo --port 60000
mongos> use testhash
#Generate data
mongos> for(i=1;i<10000;i++){ db.hash.insert({"id":i,"name":"shanghai","age":28,"date":new Date()}); }
#Check the data on each shard
[root@mongodb01 ~]# mongo --port 28010
shard1:PRIMARY> use testhash
shard1:PRIMARY> show tables
hash
shard1:PRIMARY> db.hash.count()
3349
shard2:PRIMARY> db.hash.count()
3366
shard3:PRIMARY> db.hash.count()
3284
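Instead of logging into each shard primary, the per-shard spread can also be read from mongos itself with db.collection.getShardDistribution():

```javascript
// In the mongo shell connected to mongos (port 60000)
use testhash
db.hash.getShardDistribution()
// Prints, per shard, the estimated data size, document count,
// and each shard's percentage of the collection.
```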