The sharding mechanism
Sharding concepts
Sharding is the process of splitting a database and spreading it across multiple machines. By distributing data over several machines, you can store more data and handle a larger load without needing a single powerful server. The basic idea is to cut collections into small chunks, scatter those chunks across a number of shards so that each shard is responsible for only part of the total data, and let a balancer keep the shards even by migrating chunks between them. Operations go through a routing process called mongos, which knows (via the config servers) which data lives on which shard. Most deployments shard to solve disk-space problems; writes may span shards, while queries should avoid crossing shards wherever possible.
The main use cases for MongoDB sharding:
- The data set is too large for a single machine's disk.
- A single mongod cannot keep up with the write load, and sharding is needed to spread writes across shards.
- You want to keep a large working set in memory for performance, using the combined resources of the shard servers.
Advantages of MongoDB sharding:
- Fewer requests per shard, and higher storage capacity and throughput for the cluster as a whole. For example, when inserting a document, the application only needs to talk to the shard that stores it.
- Less data per shard, which improves availability and the performance of queries against large databases.
Sharding is the tool to reach for when a single MongoDB server becomes a storage or performance bottleneck, or when a large deployment needs to make full use of memory across machines.
Sharded cluster architecture
Components:
- **Config Server:** stores the configuration of the entire sharded cluster, including chunk metadata.
- **Shard:** stores the actual data chunks; each shard holds a portion of the cluster's data. For example, in a cluster with three shards and a hashed sharding rule, the data is split across the three shards accordingly. If any one shard goes down, that part of the cluster's data becomes unavailable, so in production each shard is normally a 3-node replica set to avoid a single point of failure.
- **mongos:** the front-end router and the entry point of the cluster. Client applications connect through mongos, which makes the cluster look like a single database that clients can use transparently.
What the sharded cluster provides:
- Request routing: the router forwards each request to the shard and chunk that own the data.
- Data distribution: a built-in balancer keeps data evenly distributed, which is the precondition for evenly distributed requests.
- Chunk splitting: a chunk is capped at 64 MB (128 MB by default since MongoDB 6.0) or roughly 100,000 documents; when a chunk reaches the threshold it is split in two.
- Chunk migration: to keep data evenly spread across the shard servers, chunks migrate between shards; migration is typically triggered when shards differ by about eight chunks.
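The splitting and routing behaviour above can be sketched as a toy model (illustrative JavaScript only, not MongoDB's implementation; the names and the document-count threshold are simplifications, and migration between shards is not modelled):

```javascript
// Toy model of chunk routing and splitting.
// A chunk owns a half-open range [min, max) of shard-key values.
const MAX_DOCS_PER_CHUNK = 100000; // the ~100k-document split threshold

function findChunk(chunks, key) {
  // mongos routes a request to the one chunk whose range contains the key
  return chunks.find(c => key >= c.min && key < c.max);
}

function splitChunk(chunks, chunk) {
  // split a full chunk into two halves at the midpoint of its range
  const mid = Math.floor((chunk.min + chunk.max) / 2);
  const right = { min: mid, max: chunk.max, shard: chunk.shard, docs: Math.floor(chunk.docs / 2) };
  chunk.max = mid;
  chunk.docs -= right.docs;
  chunks.push(right);
}

function insert(chunks, key) {
  const chunk = findChunk(chunks, key);
  chunk.docs += 1;
  if (chunk.docs > MAX_DOCS_PER_CHUNK) splitChunk(chunks, chunk);
}

// One initial chunk covering the whole key space:
const chunks = [{ min: 0, max: 2 ** 32, shard: 'shard1', docs: 0 }];
for (let i = 0; i < 100001; i++) insert(chunks, i);
console.log(chunks.length); // crossing the threshold left 2 chunks
```

In the real cluster the balancer would then migrate one of the two resulting chunks to a less loaded shard.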
Deploying the sharded cluster
Deployment plan
- shard: 3 replica sets (3 members each)
- config server: 3 replica sets (3 members each)
- mongos: 3 instances
Host planning
shard

| IP | role | port | shard name |
| --- | --- | --- | --- |
| 192.168.142.157 | shard1 | 27181 | shard1 |
| 192.168.142.157 | shard2 | 27182 | shard1 |
| 192.168.142.157 | shard3 | 27183 | shard1 |
| 192.168.142.155 | shard1 | 27181 | shard2 |
| 192.168.142.155 | shard2 | 27182 | shard2 |
| 192.168.142.155 | shard3 | 27183 | shard2 |
| 192.168.142.156 | shard1 | 27181 | shard3 |
| 192.168.142.156 | shard2 | 27182 | shard3 |
| 192.168.142.156 | shard3 | 27183 | shard3 |
config server

| IP | role | port | config name |
| --- | --- | --- | --- |
| 192.168.142.157 | config server1 | 27281 | config1 |
| 192.168.142.157 | config server2 | 27282 | config1 |
| 192.168.142.157 | config server3 | 27283 | config1 |
| 192.168.142.155 | config server1 | 27281 | config2 |
| 192.168.142.155 | config server2 | 27282 | config2 |
| 192.168.142.155 | config server3 | 27283 | config2 |
| 192.168.142.156 | config server1 | 27281 | config3 |
| 192.168.142.156 | config server2 | 27282 | config3 |
| 192.168.142.156 | config server3 | 27283 | config3 |
mongos

| IP | role | port |
| --- | --- | --- |
| 192.168.142.155 | mongos | 27381 |
| 192.168.142.155 | mongos | 27382 |
| 192.168.142.155 | mongos | 27383 |
开始部署
创建搭建分片集群的文件夹- mkdir /docker/mongo-zone/{configsvr,shard,mongos} -p
复制代码 进入文件夹
configsvr 副本集文件夹准备- mkdir configsvr/{configsvr1,configsvr2,configsvr3}/{data,logs} -p
复制代码 shard 副本集文件夹准备- mkdir shard/{shard1,shard2,shard3}/{data,logs} -p
复制代码 mongos 副本集文件夹准备- mkdir mongos/{mongos1,mongos2,mongos3}/{data,logs} -p
复制代码 生成密钥- openssl rand -base64 756 > mongo.key
复制代码 发放给其他主机- scp mongo.key slave@192.168.142.156:/home/slave
- scp mongo.key slave02@192.168.142.155:/home/slave02
复制代码- mv /home/slave02/mongo.key .mv /home/slave/mongo.key .
复制代码- chown root:root mongo.key
Build the shard replica set

```shell
cd /docker/mongo-zone/shard/shard1
```

docker-compose.yml:

```yaml
services:
  mongo-shard1:
    image: mongo:7.0
    container_name: mongo-shard1
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard1/data:/data/db
      - /docker/mongo-zone/shard/shard1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27181:27181"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27181
  mongo-shard2:
    image: mongo:7.0
    container_name: mongo-shard2
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard2/data:/data/db
      - /docker/mongo-zone/shard/shard2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27182:27182"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27182
  mongo-shard3:
    image: mongo:7.0
    container_name: mongo-shard3
    restart: always
    volumes:
      - /docker/mongo-zone/shard/shard3/data:/data/db
      - /docker/mongo-zone/shard/shard3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27183:27183"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: shard1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --shardsvr --directoryperdb --replSet shard1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27183
```
The other two hosts (192.168.142.155 and 192.168.142.156) are set up the same way; follow the tables above. In their docker-compose.yml, change the replica-set name wherever it appears: the MONGO_INITDB_REPLICA_SET_NAME variable and the --replSet flag of each of the three services.
Initialize the replica set:

```shell
docker exec -it mongo-shard1 mongosh --port 27181
```

Add a root user (initiate the replica set first, e.g. with `rs.initiate()`, if it has not been initiated yet, since users can only be created on a primary):

```javascript
db.createUser({user: "root", pwd: "123456", roles: [{role: "root", db: "admin"}]})
```

Log in as root and add the other members:

```javascript
rs.add({host: "192.168.142.157:27182", priority: 2})
rs.add({host: "192.168.142.157:27183", priority: 3})
```
View the cluster status:

```
{
  set: 'shard1',
  date: ISODate('2024-10-15T03:25:48.706Z'),
  myState: 1,
  term: Long('2'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    lastCommittedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    appliedOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    durableOpTime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
    lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
    lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1728962730, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'priorityTakeover',
    lastElectionDate: ISODate('2024-10-15T03:21:50.316Z'),
    electionTerm: Long('2'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1728962500, i: 1 }), t: Long('1') },
    numVotesNeeded: 2,
    priorityAtElection: 2,
    electionTimeoutMillis: Long('10000'),
    priorPrimaryMemberId: 0,
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-10-15T03:21:50.320Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-10-15T03:21:50.327Z')
  },
  members: [
    {
      _id: 0,
      name: '4590140ce686:27181',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 250,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastHeartbeat: ISODate('2024-10-15T03:25:47.403Z'),
      lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.403Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27182',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    },
    {
      _id: 1,
      name: '192.168.142.157:27182',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 435,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1728962510, i: 1 }),
      electionDate: ISODate('2024-10-15T03:21:50.000Z'),
      configVersion: 5,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: '192.168.142.157:27183',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 7,
      optime: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1728962743, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-10-15T03:25:43.000Z'),
      optimeDurableDate: ISODate('2024-10-15T03:25:43.000Z'),
      lastAppliedWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastDurableWallTime: ISODate('2024-10-15T03:25:43.400Z'),
      lastHeartbeat: ISODate('2024-10-15T03:25:47.405Z'),
      lastHeartbeatRecv: ISODate('2024-10-15T03:25:47.906Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27182',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    }
  ],
  ok: 1
}
```
Build the config server cluster

The procedure is much the same as above; only the docker-compose.yml is shown:

```yaml
services:
  mongo-config1:
    image: mongo:7.0
    container_name: mongo-config1
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr1/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27281:27281"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27281
  mongo-config2:
    image: mongo:7.0
    container_name: mongo-config2
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr2/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27282:27282"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27282
  mongo-config3:
    image: mongo:7.0
    container_name: mongo-config3
    restart: always
    volumes:
      - /docker/mongo-zone/configsvr/configsvr3/data:/data/db
      - /docker/mongo-zone/configsvr/configsvr3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27283:27283"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: config1
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongod --configsvr --directoryperdb --replSet config1 --bind_ip_all --auth --keyFile /etc/mongo.key --wiredTigerCacheSizeGB 1 --oplogSize 5000 --port 27283
```
Build the mongos cluster

The procedure is much the same as above; only the docker-compose.yml is shown. Note that the key file path in the command must match the mounted path (/etc/mongo.key):

```yaml
services:
  mongo-mongos1:
    image: mongo:7.0
    container_name: mongo-mongos1
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos1/data:/data/db
      - /docker/mongo-zone/mongos/mongos1/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27381:27381"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config1/192.168.142.157:27281,192.168.142.157:27282,192.168.142.157:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27381
  mongo-mongos2:
    image: mongo:7.0
    container_name: mongo-mongos2
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos2/data:/data/db
      - /docker/mongo-zone/mongos/mongos2/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27382:27382"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config2/192.168.142.155:27281,192.168.142.155:27282,192.168.142.155:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27382
  mongo-mongos3:
    image: mongo:7.0
    container_name: mongo-mongos3
    restart: always
    volumes:
      - /docker/mongo-zone/mongos/mongos3/data:/data/db
      - /docker/mongo-zone/mongos/mongos3/logs:/var/log/mongodb
      - /docker/mongo-zone/mongo.key:/etc/mongo.key
    ports:
      - "27383:27383"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        chmod 400 /etc/mongo.key
        chown 999:999 /etc/mongo.key
        mongos --configdb config3/192.168.142.156:27281,192.168.142.156:27282,192.168.142.156:27283 --bind_ip_all --keyFile /etc/mongo.key --port 27383
```
A mongos does not need its own key; copy over the config servers' key file. Be sure it really is the same key the config servers use, otherwise you will not be able to log in. Connect:

```shell
docker exec -it mongo-mongos1 mongosh --port 27381 -u root -p 123456 --authenticationDatabase admin
```

If there is no user yet, create one the same way as before. Then add the shards:

```javascript
sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183")
sh.addShard("shard3/192.168.142.156:27181,192.168.142.156:27182,192.168.142.156:27183")
sh.addShard("shard2/192.168.142.155:27181,192.168.142.155:27182,192.168.142.155:27183")
```
At this point you may hit an error: host 192.168.142.157:27181 not found / not a member of shard1. But it plainly is in shard1:

```
[direct: mongos] admin> sh.addShard("shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183")
MongoServerError[OperationFailed]: in seed list shard1/192.168.142.157:27181,192.168.142.157:27182,192.168.142.157:27183, host 192.168.142.157:27181 does not belong to replica set shard1; found { compression: [ "snappy", "zstd", "zlib" ], topologyVersion: { processId: ObjectId('670e225373d36364f75d8336'), counter: 7 }, hosts: [ "b170b4e78bc6:27181", "192.168.142.157:27182", "192.168.142.157:27183" ], setName: "shard1", setVersion: 5, isWritablePrimary: true, secondary: false, primary: "192.168.142.157:27183", me: "192.168.142.157:27183", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1728984093, 1), t: 3 }, lastWriteDate: new Date(1728984093000), majorityOpTime: { ts: Timestamp(1728984093, 1), t: 3 }, majorityWriteDate: new Date(1728984093000) }, isImplicitDefaultMajorityWC: true, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1728984102377), logicalSessionTimeoutMinutes: 30, connectionId: 57, minWireVersion: 0, maxWireVersion: 21, readOnly: false, ok: 1.0, $clusterTime: { clusterTime: Timestamp(1728984093, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configTime: Timestamp(0, 1), $topologyTime: Timestamp(0, 1), operationTime: Timestamp(1728984093, 1) }
```

The problem is in the reported `hosts` array: the first member registered itself as `b170b4e78bc6:27181`, its container hostname, rather than its IP address. So either use that opaque hostname in the seed list, or rename the member. Renaming is straightforward: remove the member from the replica set and re-add it under the right address (details omitted here).
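A simplified sketch may make the failure clearer: every host in the seed list must appear verbatim in the `hosts` array the replica set reports about itself, so the IP form of member 27181 cannot match its self-registered container hostname. (Illustrative JavaScript, not MongoDB's actual implementation; `validateSeedList` is a made-up name.)

```javascript
// Simplified model of the seed-list validation behind the addShard error.
// The replica set reports its members exactly as they appear in its own
// config -- here member 0 registered itself under the container hostname.
const reportedHosts = ['b170b4e78bc6:27181', '192.168.142.157:27182', '192.168.142.157:27183'];

function validateSeedList(seeds, hosts) {
  // every seed must match a reported host string verbatim;
  // DNS/IP equivalence is not considered
  const missing = seeds.filter(s => !hosts.includes(s));
  if (missing.length > 0) {
    throw new Error(`host ${missing[0]} does not belong to replica set shard1`);
  }
  return true;
}

// The failing call: the seed uses the IP, the set reports the hostname.
try {
  validateSeedList(
    ['192.168.142.157:27181', '192.168.142.157:27182', '192.168.142.157:27183'],
    reportedHosts
  );
} catch (e) {
  console.log(e.message); // host 192.168.142.157:27181 does not belong to replica set shard1
}

// The fix used below: seed with the exact names the set reports.
console.log(validateSeedList(reportedHosts, reportedHosts)); // true
```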
Re-add the shards using the names the replica sets report:

```javascript
sh.addShard("shard1/b170b4e78bc6:27181,192.168.142.157:27182,192.168.142.157:27183")
sh.addShard("shard3/cbfa7ed4415f:27181,192.168.142.156:27182,192.168.142.156:27183")
sh.addShard("shard2/444e6ad7d88c:27181,192.168.142.155:27182,192.168.142.155:27183")
```
View the sharding status:

```
shardingVersion
{ _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') }
---
shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]
---
active mongoses
[ { '7.0.14': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
  'Currently enabled': 'yes',
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': 'No recent migrations'
}
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      }
    }
  }
]
```
Focus on the shards array:

```
shards
[
  {
    _id: 'shard1',
    host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728984938, i: 3 })
  },
  {
    _id: 'shard2',
    host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985069, i: 1 })
  },
  {
    _id: 'shard3',
    host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181',
    state: 1,
    topologyTime: Timestamp({ t: 1728985021, i: 3 })
  }
]
```
If all the nodes are present, the sharded cluster is up.

Verification

Enable sharding for a database:

```javascript
sh.enableSharding("test")
```

Result:

```
{ ok: 1, '$clusterTime': { clusterTime: Timestamp({ t: 1728985516, i: 9 }), signature: { hash: Binary.createFromBase64('QWe6Dj8TwrM1aVVHmnOtihKsFm0=', 0), keyId: Long('7425924310763569175') } }, operationTime: Timestamp({ t: 1728985516, i: 3 })}
```

Shard the test collection of the test database on a hashed _id:

```javascript
sh.shardCollection("test.test", {"_id": "hashed" })
```

Result:

```
{ collectionsharded: 'test.test', ok: 1, '$clusterTime': { clusterTime: Timestamp({ t: 1728985594, i: 48 }), signature: { hash: Binary.createFromBase64('SqkMn9xNXjnsNfNd4WTFiHajLPc=', 0), keyId: Long('7425924310763569175') } }, operationTime: Timestamp({ t: 1728985594, i: 48 })}
```
Allow the collection to be balanced:

```javascript
sh.enableBalancing("test.test")
```

```
{
  acknowledged: true,
  insertedId: null,
  matchedCount: 1,
  modifiedCount: 0,
  upsertedCount: 0
}
```

Enabling the balancer returns:

```
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728985656, i: 4 }),
    signature: {
      hash: Binary.createFromBase64('jTVkQGDtAHtLTjhZkBc3CQx+tzM=', 0),
      keyId: Long('7425924310763569175')
    }
  },
  operationTime: Timestamp({ t: 1728985656, i: 4 })
}
```
Create a user

In the test database:

```javascript
db.createUser({user: "shardtest", pwd: "shardtest", roles: [{role: 'dbOwner', db: 'test'}]})
```

Insert test data:

```javascript
for (i = 1; i <= 300; i = i + 1) { db.test.insertOne({'name': "test"}) }
```
View the detailed sharding information again:

```
shardingVersion
{ _id: 1, clusterId: ObjectId('670e2ed1c3ccdfa3427b6b97') }
---
shards
[
  { _id: 'shard1', host: 'shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181', state: 1, topologyTime: Timestamp({ t: 1728984938, i: 3 }) },
  { _id: 'shard2', host: 'shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181', state: 1, topologyTime: Timestamp({ t: 1728985069, i: 1 }) },
  { _id: 'shard3', host: 'shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181', state: 1, topologyTime: Timestamp({ t: 1728985021, i: 3 }) }
]
---
active mongoses
[
  { _id: '3158a5543d69:27381', advisoryHostFQDNs: [], created: ISODate('2024-10-15T09:03:06.663Z'), mongoVersion: '7.0.14', ping: ISODate('2024-10-15T09:51:18.345Z'), up: Long('2891'), waiting: true },
  { _id: 'c5a08ca76189:27381', advisoryHostFQDNs: [], created: ISODate('2024-10-15T09:03:06.647Z'), mongoVersion: '7.0.14', ping: ISODate('2024-10-15T09:51:18.119Z'), up: Long('2891'), waiting: true },
  { _id: '5bb8b2925f52:27381', advisoryHostFQDNs: [], created: ISODate('2024-10-15T09:03:06.445Z'), mongoVersion: '7.0.14', ping: ISODate('2024-10-15T09:51:18.075Z'), up: Long('2891'), waiting: true }
]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
  'Currently enabled': 'yes',
  'Currently running': 'no',
  'Failed balancer rounds in last 5 attempts': 0,
  'Migration Results for the last 24 hours': 'No recent migrations'
}
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard1', nChunks: 1 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: MaxKey() }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      }
    }
  },
  {
    database: { _id: 'test', primary: 'shard2', partitioned: false, version: { uuid: UUID('3b193276-e88e-42e1-b053-bcb61068a865'), timestamp: Timestamp({ t: 1728985516, i: 1 }), lastMod: 1 } },
    collections: {
      'test.test': {
        shardKey: { _id: 'hashed' },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard1', nChunks: 2 }, { shard: 'shard2', nChunks: 2 }, { shard: 'shard3', nChunks: 2 } ],
        chunks: [
          { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) },
          { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) },
          { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) },
          { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) },
          { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
          { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
        ],
        tags: []
      }
    }
  }
]
```
Pay particular attention to the chunks:

```
chunks: [
  { min: { _id: MinKey() }, max: { _id: Long('-6148914691236517204') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) },
  { min: { _id: Long('-6148914691236517204') }, max: { _id: Long('-3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 1 }) },
  { min: { _id: Long('-3074457345618258602') }, max: { _id: Long('0') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 2 }) },
  { min: { _id: Long('0') }, max: { _id: Long('3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 3 }) },
  { min: { _id: Long('3074457345618258602') }, max: { _id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
  { min: { _id: Long('6148914691236517204') }, max: { _id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
],
```
We can see the layout clearly. View the collection's distribution across shards:

```javascript
db.test.getShardDistribution()
```

```
Shard shard2 at shard2/192.168.142.155:27182,192.168.142.155:27183,444e6ad7d88c:27181
{
  data: '3KiB',
  docs: 108,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 54
}
---
Shard shard1 at shard1/192.168.142.157:27182,192.168.142.157:27183,b170b4e78bc6:27181
{
  data: '3KiB',
  docs: 89,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 44
}
---
Shard shard3 at shard3/192.168.142.156:27182,192.168.142.156:27183,cbfa7ed4415f:27181
{
  data: '3KiB',
  docs: 103,
  chunks: 2,
  'estimated data per chunk': '1KiB',
  'estimated docs per chunk': 51
}
---
Totals
{
  data: '10KiB',
  docs: 300,
  chunks: 6,
  'Shard shard2': [ '36 % data', '36 % docs in cluster', '37B avg obj size on shard' ],
  'Shard shard1': [
    '29.66 % data',
    '29.66 % docs in cluster',
    '37B avg obj size on shard'
  ],
  'Shard shard3': [
    '34.33 % data',
    '34.33 % docs in cluster',
    '37B avg obj size on shard'
  ]
}
```
All three shards hold a roughly equal share of the data.
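The Totals section is easy to sanity-check by hand: the per-shard document counts add up to 300, and truncating each share to two decimal places reproduces the printed percentages:

```javascript
// Recompute the per-shard percentages reported by getShardDistribution().
const docs = { shard2: 108, shard1: 89, shard3: 103 };
const total = Object.values(docs).reduce((a, b) => a + b, 0);

const pct = {};
for (const [shard, n] of Object.entries(docs)) {
  // truncated to two decimals, matching the output above
  pct[shard] = Math.floor((n / total) * 10000) / 100;
}

console.log(total); // 300
console.log(pct);   // { shard2: 36, shard1: 29.66, shard3: 34.33 }
```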
Disable balancing for the collection:

```javascript
sh.disableBalancing("test.test")
```

Result:

```
{ acknowledged: true, insertedId: null, matchedCount: 1, modifiedCount: 1, upsertedCount: 0}
```
This concludes the walkthrough of deploying a MongoDB sharded cluster with docker compose.
Source: https://www.jb51.net/server/3288693gf.htm