Contents

Preface
1. Redis Cluster Setup
 1.1 Prepare the Nodes
 1.2 Prepare the Configuration Files
 1.3 Check Cluster Status
2. Client Access
3. Connecting to the Cluster from Python
4. Redis Cluster Maintenance
 4.1 Adding Nodes
 4.2 Manually Assigning Slots
 4.3 Removing Nodes
5. Cluster Operations
 5.1 Cluster Skew
 5.2 Manual Failover

Preface
Redis 3.0 introduced the Redis Cluster architecture, which addresses the need to run Redis in a distributed fashion. When a single instance runs into memory, concurrency, or network-traffic bottlenecks, cluster mode can be used to spread the load. This article walks through building and operating a Redis cluster.
1. Redis Cluster Setup
With only a limited number of servers available, this walkthrough deploys multiple instances per host:
Host            Redis ports         Program
172.16.104.57   6379, 6389, 6399    Redis 7.0
172.16.104.56   6379, 6389, 6399    Redis 7.0
1.1 Prepare the Nodes
Redis Cluster needs at least 6 Redis nodes to form a complete, highly available cluster: 3 masters, each with one replica.
1.2 Prepare the Configuration Files
The configuration files are essentially identical apart from the port and the paths. Since the 6 Redis instances run as multiple instances on 2 servers, each instance needs its own log file and working directory. Below is a configuration template; the parameters to change per instance are port, logfile, and dir (a small generation sketch appears after the template).
# Node port
port 6379
# Bind address
bind 0.0.0.0
# Run as a daemon
daemonize yes
# Log file
logfile /data/redis/6379/redis.log
# RDB file name
dbfilename dump.rdb
# Working directory
dir /data/redis/6379
# AOF
appendonly no
appendfilename appendonly.aof
# Password
requirepass Redis123
# Enable cluster mode
cluster-enabled yes
# Node timeout, in milliseconds
cluster-node-timeout 15000
# Cluster config file maintained by the node itself
cluster-config-file nodes-6379.conf
# Password used when replicating from the master
masterauth Redis123

Prepare one configuration file per instance.
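Since only a handful of values differ between instances, the files can be stamped out from the template. A minimal sketch, assuming the template above is saved as redis_template.conf (a hypothetical file name):

# Generate redis_<port>.conf for each local instance from a shared template,
# rewriting the per-instance parameters: port, the logfile/dir paths, and the
# cluster config file name.
TEMPLATE = open("redis_template.conf").read()  # hypothetical template path

for port in (6379, 6389, 6399):
    conf = (TEMPLATE
            .replace("port 6379", f"port {port}")
            .replace("/data/redis/6379", f"/data/redis/{port}")   # logfile + dir
            .replace("nodes-6379.conf", f"nodes-{port}.conf"))
    with open(f"redis_{port}.conf", "w") as f:
        f.write(conf)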
[root@172-16-104-57 redis]# ll redis_*
-rw-rw-r--. 1 redis redis 107542 Jul  3 14:31 redis_6379.conf
-rw-r--r--. 1 redis redis 107542 Jul  3 14:33 redis_6389.conf
-rw-r--r--. 1 redis redis 107542 Jul  3 14:36 redis_6399.conf

Start the Redis services on node 57:
redis-server redis_6379.conf
redis-server redis_6389.conf
redis-server redis_6399.conf

Check the processes to confirm all three instances are up, then repeat the same steps on server 56:
[root@172-16-104-56 redis]# ps -ef|grep redis
root 113871 1 0 14:45 ? 00:00:00 redis-server 0.0.0.0:6379 [cluster]
root 113873 1 0 14:45 ? 00:00:00 redis-server 0.0.0.0:6389 [cluster]
root 113884 1 0 14:45 ? 00:00:00 redis-server 0.0.0.0:6399 [cluster]

Run the following command on any one of the nodes:
redis-cli -a Redis123 --cluster create --cluster-replicas 1 172.16.104.57:6379 172.16.104.57:6389 172.16.104.57:6399 172.16.104.56:6379 172.16.104.56:6389 172.16.104.56:6399

Type yes at the prompt to create the cluster:

>>> Performing hash slots allocation on 6 nodes...
Master[0] - Slots 0 - 5460
Master[1] - Slots 5461 - 10922
Master[2] - Slots 10923 - 16383
Adding replica 172.16.104.56:6399 to 172.16.104.57:6379
Adding replica 172.16.104.57:6399 to 172.16.104.56:6379
Adding replica 172.16.104.56:6389 to 172.16.104.57:6389
M: 4598508e4e82cb2261aca847a870123d8d4a5622 172.16.104.57:6379
   slots:[0-5460] (5461 slots) master
M: 322f148444e409d58dedcde5c111db2f73de80a2 172.16.104.57:6389
   slots:[10923-16383] (5461 slots) master
S: e83d857527b04d522f297a93ee50c65059f4981b 172.16.104.57:6399
   replicates 0da7e019328170fd63d2e9c6197e6d31b116e304
M: 0da7e019328170fd63d2e9c6197e6d31b116e304 172.16.104.56:6379
   slots:[5461-10922] (5462 slots) master
S: 796d75c8f043a3ca6d33677b8f3a533154f9fd19 172.16.104.56:6389
   replicates 322f148444e409d58dedcde5c111db2f73de80a2
S: 1d1e0552263e7410f7e165ce64097c7c9c74b39c 172.16.104.56:6399
   replicates 4598508e4e82cb2261aca847a870123d8d4a5622
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.16.104.57:6379)
M: 4598508e4e82cb2261aca847a870123d8d4a5622 172.16.104.57:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 0da7e019328170fd63d2e9c6197e6d31b116e304 172.16.104.56:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 322f148444e409d58dedcde5c111db2f73de80a2 172.16.104.57:6389
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: e83d857527b04d522f297a93ee50c65059f4981b 172.16.104.57:6399
   slots: (0 slots) slave
   replicates 0da7e019328170fd63d2e9c6197e6d31b116e304
S: 796d75c8f043a3ca6d33677b8f3a533154f9fd19 172.16.104.56:6389
   slots: (0 slots) slave
   replicates 322f148444e409d58dedcde5c111db2f73de80a2
S: 1d1e0552263e7410f7e165ce64097c7c9c74b39c 172.16.104.56:6399
   slots: (0 slots) slave
   replicates 4598508e4e82cb2261aca847a870123d8d4a5622
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The line [OK] All 16384 slots covered. indicates the cluster was created successfully.
1.3 Check Cluster Status
cluster nodes shows the cluster topology: each node's ID, role, link state, and slot ranges. The output below shows a healthy cluster with three masters and three replicas:
0da7e019328170fd63d2e9c6197e6d31b116e304 172.16.104.56:6379@16379 master - 0 1720071937000 4 connected 5461-10922
322f148444e409d58dedcde5c111db2f73de80a2 172.16.104.57:6389@16389 master - 0 1720071936000 2 connected 10923-16383
e83d857527b04d522f297a93ee50c65059f4981b 172.16.104.57:6399@16399 slave 0da7e019328170fd63d2e9c6197e6d31b116e304 0 1720071938000 4 connected
796d75c8f043a3ca6d33677b8f3a533154f9fd19 172.16.104.56:6389@16389 slave 322f148444e409d58dedcde5c111db2f73de80a2 0 1720071937915 2 connected
4598508e4e82cb2261aca847a870123d8d4a5622 172.16.104.57:6379@16379 myself,master - 0 1720071937000 1 connected 0-5460
1d1e0552263e7410f7e165ce64097c7c9c74b39c 172.16.104.56:6399@16399 slave 4598508e4e82cb2261aca847a870123d8d4a5622 0 1720071938921 1 connected

cluster info shows cluster-wide status:
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:81881
cluster_stats_messages_pong_sent:90585
cluster_stats_messages_sent:172466
cluster_stats_messages_ping_received:90580
cluster_stats_messages_pong_received:81881
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:172466
total_cluster_links_buffer_limit_exceeded:0
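For monitoring, the same status can be read programmatically. A minimal sketch, assuming the redis-py package is installed:

import redis

# Query CLUSTER INFO from any node; redis-py parses the reply into a dict.
r = redis.Redis(host="172.16.104.57", port=6379, password="Redis123",
                decode_responses=True)
info = r.execute_command("CLUSTER INFO")
# Alert if the cluster is unhealthy.
assert info["cluster_state"] == "ok", f"cluster_state={info['cluster_state']}"
print(info["cluster_known_nodes"], info["cluster_size"])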
2. Client Access

When any node of a Redis cluster is accessed, it computes which slot the key belongs to; if that slot lives on another node, it returns a MOVED redirection telling the client which node the key is assigned to.
127.0.0.1:6379> set user1 wdwedwedwed
(error) MOVED 8106 172.16.104.56:6379

After connecting to 172.16.104.56:6379 and re-running the set command, the write succeeds:
172.16.104.56:6379> set user1 wdwedwedwed
OK
172.16.104.56:6379> get user1
"wdwedwedwed"
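The slot number in the MOVED reply is simply CRC16(key) mod 16384; Redis Cluster uses the XModem variant of CRC16. A minimal sketch that reproduces the 8106 above, assuming that variant:

# CRC16-XModem (polynomial 0x1021, initial value 0), the checksum Redis
# Cluster uses for its key -> slot mapping: slot = crc16(key) % 16384.
def crc16(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(crc16(b"user1") % 16384)  # expected: 8106, matching the MOVED reply above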
3. Connecting to the Cluster from Python

Install the cluster client module with python3 -m pip install redis-py-cluster. A demo follows.
from rediscluster import RedisCluster

# List all the nodes
startup_nodes = [
    {"host": "172.16.104.56", "port": "6379"},
    {"host": "172.16.104.56", "port": "6389"},
    {"host": "172.16.104.56", "port": "6399"},
    {"host": "172.16.104.57", "port": "6379"},
    {"host": "172.16.104.57", "port": "6389"},
    {"host": "172.16.104.57", "port": "6399"},
]

# Connect to the cluster
redis_server = RedisCluster(startup_nodes=startup_nodes, decode_responses=True,
                            password="Redis123")
# set
redis_server.set("user2", "aaaaaaaa")
# get
print(redis_server.get("user2"))
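Note that redis-py-cluster targets older redis-py releases; in redis-py 4.1 and later, cluster support ships in the library itself. A minimal sketch of the equivalent with a recent redis-py:

from redis.cluster import RedisCluster, ClusterNode

# A couple of startup nodes are enough; the client discovers the rest.
nodes = [ClusterNode("172.16.104.56", 6379), ClusterNode("172.16.104.57", 6379)]
rc = RedisCluster(startup_nodes=nodes, decode_responses=True, password="Redis123")
rc.set("user2", "aaaaaaaa")
print(rc.get("user2"))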
4. Redis Cluster Maintenance
4.1 Adding Nodes
The current topology is 3 masters, each with one replica; next we add a shard to make it 4 masters, each with one replica. First, prepare two more Redis instances by copying and editing a configuration file and starting them.
[root@172-16-104-57 redis]# mkdir 6340
[root@172-16-104-57 redis]# ls
6340 6379 6389 6399

Copy a configuration file and modify the port, logfile, and dir parameters:
cp redis_6379.conf redis_6340.conf

Start the Redis instance, then repeat the operation on the other node. The two new nodes are ready:
172.16.104.55:6379
172.16.104.55:6389

Add the two new nodes to the cluster to form the 4-master, one-replica-each topology:
redis-cli --cluster check 172.16.104.57:6379 -a Redis123
redis-cli -h 172.16.104.57 -p 6389 -a Redis123 --cluster add-node 172.16.104.55:6379 172.16.104.55:6389
>>> Adding node 172.16.104.55:6379 to cluster 172.16.104.55:6389
>>> Performing Cluster Check (using node 172.16.104.55:6389)
M: 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 172.16.104.55:6389
   slots: (0 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.

According to the official documentation, the add-node command alone should be enough to add a node, but my tests failed with [ERR] Not all 16384 slots are covered by nodes, even though a check showed every slot already assigned. I have not found the cause or a fix; if you know, please tell me, thanks.
Since add-node failed here, below is an alternative method that does work.
# Current cluster state
172.16.104.57:6379 (603450ca...) - 0 keys | 5460 slots | 1 slaves.
172.16.104.57:6389 (db174078...) - 0 keys | 4370 slots | 1 slaves.
172.16.104.56:6379 (5be79052...) - 0 keys | 6554 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.104.57:6379)
M: 603450cae36e4d2f43b5ba98da52f1c35f87683e 172.16.104.57:6379
   slots:[0-1089],[1091-5460] (5460 slots) master
   1 additional replica(s)
M: db1740788ce051b82014a1d461a289ec669e59e7 172.16.104.57:6389
   slots:[1090],[5461-6553],[10923-13828],[16014-16383] (4370 slots) master
   1 additional replica(s)
M: 5be790525143f1a5ba8b697095d2233536bc7e70 172.16.104.56:6379
   slots:[6554-10922],[13829-16013] (6554 slots) master
   1 additional replica(s)
S: 8a960375d75e19299b712982a1c178a15b86bfc2 172.16.104.56:6399
   slots: (0 slots) slave
   replicates 603450cae36e4d2f43b5ba98da52f1c35f87683e
S: ead3184f5323fc8f0f8ba6f42b122b10af058388 172.16.104.57:6399
   slots: (0 slots) slave
   replicates 5be790525143f1a5ba8b697095d2233536bc7e70
S: d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389
   slots: (0 slots) slave
   replicates db1740788ce051b82014a1d461a289ec669e59e7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Connect to any node in the cluster and run the following commands to join the new nodes:
cluster meet 172.16.104.55 6379
cluster meet 172.16.104.55 6389

Confirm the nodes have joined. The two new nodes are now in the cluster, but with no slots assigned they are isolated for the moment:
127.0.0.1:6379> cluster nodes
db1740788ce051b82014a1d461a289ec669e59e7 172.16.104.57:6389@16389 master - 0 1720164972000 10 connected 1090 5461-6553 10923-13828 16014-16383
603450cae36e4d2f43b5ba98da52f1c35f87683e 172.16.104.57:6379@16379 myself,master - 0 1720164971000 8 connected 0-1089 1091-5460
5be790525143f1a5ba8b697095d2233536bc7e70 172.16.104.56:6379@16379 master - 0 1720164971000 11 connected 6554-10922 13829-16013
3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 172.16.104.55:6389@16389 master - 0 1720164973000 9 connected
8a960375d75e19299b712982a1c178a15b86bfc2 172.16.104.56:6399@16399 slave 603450cae36e4d2f43b5ba98da52f1c35f87683e 0 1720164971000 8 connected
ead3184f5323fc8f0f8ba6f42b122b10af058388 172.16.104.57:6399@16399 slave 5be790525143f1a5ba8b697095d2233536bc7e70 0 1720164974656 11 connected
9d37453a597f2cc4d8d4d4f788c513a82ea89c59 172.16.104.55:6379@16379 master - 0 1720164973651 0 connected
d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389@16389 slave db1740788ce051b82014a1d461a289ec669e59e7 0 1720164972646 10 connected

4.2 Manually Assigning Slots
Nodes added to the cluster with add-node or meet hold no slots, so slots must be assigned manually.
9d37453a597f2cc4d8d4d4f788c513a82ea89c59 172.16.104.55:6379@16379 master - 0 1720164973651 0 connected
3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 172.16.104.55:6389@16389 master - 0 1720164973000 9 connected

Next, assign slots to the new master. The 16384 slots split evenly across 4 masters comes to 4096 each, so a portion must be moved from each of the 3 existing masters to the new node.
M: 603450cae36e4d2f43b5ba98da52f1c35f87683e 172.16.104.57:6379
   slots:[0-1089],[1091-5460] (5460 slots) master
   1 additional replica(s)
M: db1740788ce051b82014a1d461a289ec669e59e7 172.16.104.57:6389
   slots:[1090],[5461-6553],[10923-13828],[16014-16383] (4370 slots) master
   1 additional replica(s)
M: 5be790525143f1a5ba8b697095d2233536bc7e70 172.16.104.56:6379
   slots:[6554-10922],[13829-16013] (6554 slots) master
   1 additional replica(s)
M: 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 172.16.104.55:6389
   slots: (0 slots) master

redis-cli --cluster reshard 172.16.104.57:6379 -a Redis123

Step 1: enter how many slots to move.
How many slots do you want to move (from 1 to 16384)? 4096

Step 2: enter the ID of the node that will receive the slots.
What is the receiving node ID? 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa

Step 3: choose the source nodes; enter all to draw evenly from every master, or list specific node IDs.
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all

Then type yes to confirm the migration; a non-interactive variant is sketched below, and check verifies the result afterwards.
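For scripted maintenance, recent redis-cli versions can run the same reshard without prompts; the node IDs here are placeholders:

redis-cli -a Redis123 --cluster reshard 172.16.104.57:6379 --cluster-from <source-node-id,...|all> --cluster-to <target-node-id> --cluster-slots 4096 --cluster-yes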
redis-cli --cluster check 172.16.104.57:6379 -a Redis123
172.16.104.57:6379 (603450ca...) - 0 keys | 4096 slots | 1 slaves.
172.16.104.57:6389 (db174078...) - 0 keys | 4078 slots | 1 slaves.
172.16.104.56:6379 (5be79052...) - 0 keys | 4114 slots | 1 slaves.
172.16.104.55:6389 (3a0d53c0...) - 0 keys | 4096 slots | 0 slaves.
172.16.104.55:6379 (9d37453a...) - 0 keys | 0 slots | 0 slaves.
[OK] 0 keys in 5 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.104.57:6379)
M: 603450cae36e4d2f43b5ba98da52f1c35f87683e 172.16.104.57:6379
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: db1740788ce051b82014a1d461a289ec669e59e7 172.16.104.57:6389
   slots:[6552-6553],[8194-8993],[10923-13828],[16014-16383] (4078 slots) master
   1 additional replica(s)
M: 5be790525143f1a5ba8b697095d2233536bc7e70 172.16.104.56:6379
   slots:[8994-10922],[13829-16013] (4114 slots) master
   1 additional replica(s)
M: 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 172.16.104.55:6389
   slots:[0-1364],[5461-6551],[6554-8193] (4096 slots) master
S: 8a960375d75e19299b712982a1c178a15b86bfc2 172.16.104.56:6399
   slots: (0 slots) slave
   replicates 603450cae36e4d2f43b5ba98da52f1c35f87683e
S: ead3184f5323fc8f0f8ba6f42b122b10af058388 172.16.104.57:6399
   slots: (0 slots) slave
   replicates 5be790525143f1a5ba8b697095d2233536bc7e70
M: 9d37453a597f2cc4d8d4d4f788c513a82ea89c59 172.16.104.55:6379
   slots: (0 slots) master
S: d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389
   slots: (0 slots) slave
   replicates db1740788ce051b82014a1d461a289ec669e59e7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Notice that 172.16.104.55:6379 still holds no slots and is not serving as a replica either. Of the two newly added nodes, the slots went to one of them as a new master; 172.16.104.55:6379 still has to be made that master's replica to complete the 4-master, one-replica-each topology.
Connect to 172.16.104.55:6379 and run the command below; the node ID is that of the master that just received the slots.
172.16.104.55:6379> cluster replicate 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa
OK
172.16.104.55:6379> cluster nodes
3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 172.16.104.55:6389@16389 master - 0 1720166382000 12 connected 0-1364 5461-6551 6554-8193
8a960375d75e19299b712982a1c178a15b86bfc2 172.16.104.56:6399@16399 slave 603450cae36e4d2f43b5ba98da52f1c35f87683e 0 1720166385000 8 connected
603450cae36e4d2f43b5ba98da52f1c35f87683e 172.16.104.57:6379@16379 master - 0 1720166385754 8 connected 1365-5460
9d37453a597f2cc4d8d4d4f788c513a82ea89c59 172.16.104.55:6379@16379 myself,slave 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 0 1720166385000 12 connected
db1740788ce051b82014a1d461a289ec669e59e7 172.16.104.57:6389@16389 master - 0 1720166384749 13 connected 6552-6553 8194-8993 10923-13828 16014-16383
5be790525143f1a5ba8b697095d2233536bc7e70 172.16.104.56:6379@16379 master - 0 1720166383000 11 connected 8994-10922 13829-16013
ead3184f5323fc8f0f8ba6f42b122b10af058388 172.16.104.57:6399@16399 slave 5be790525143f1a5ba8b697095d2233536bc7e70 0 1720166382738 11 connected
d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389@16389 slave db1740788ce051b82014a1d461a289ec669e59e7 0 1720166386760 13 connected

4.3 Removing Nodes
A replica can be removed directly with the command below:
redis-cli -a Redis123 --cluster del-node 172.16.104.55:6379 9d37453a597f2cc4d8d4d4f788c513a82ea89c59
# Output
>>> Removing node 9d37453a597f2cc4d8d4d4f788c513a82ea89c59 from cluster 172.16.104.55:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

To remove a master, first migrate its slots to the other nodes:
redis-cli --cluster reshard 172.16.104.57:6379 -a Redis123

The steps are the same as the migration in section 4.2. Once the slots of 172.16.104.55:6389 have been scattered across the other masters, the node's role changes to slave:
ead3184f5323fc8f0f8ba6f42b122b10af058388 172.16.104.57:6399@16399 slave 5be790525143f1a5ba8b697095d2233536bc7e70 0 1720167846000 16 connected
3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa 172.16.104.55:6389@16389 slave 5be790525143f1a5ba8b697095d2233536bc7e70 0 1720167848000 16 connected
5be790525143f1a5ba8b697095d2233536bc7e70 172.16.104.56:6379@16379 myself,master - 0 1720167847000 16 connected 6763-8193 8994-10922 13829-16013
db1740788ce051b82014a1d461a289ec669e59e7 172.16.104.57:6389@16389 master - 0 1720167847743 15 connected 5461-6762 8194-8993 10923-13828 16014-16383
8a960375d75e19299b712982a1c178a15b86bfc2 172.16.104.56:6399@16399 slave 603450cae36e4d2f43b5ba98da52f1c35f87683e 0 1720167848748 14 connected
603450cae36e4d2f43b5ba98da52f1c35f87683e 172.16.104.57:6379@16379 master - 0 1720167845000 14 connected 0-5460
d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389@16389 slave db1740788ce051b82014a1d461a289ec669e59e7 0 1720167849753 15 connected

Remove the node with:
[root@172-16-104-56 redis]# redis-cli -a Redis123 --cluster del-node 172.16.104.55:6389 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa
# Output
>>> Removing node 3a0d53c0ab6e2a0f6f53ea2036dc1fe33b58d9aa from cluster 172.16.104.55:6389
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

A check shows the cluster is back to 3 masters with one replica each:
172.16.104.57:6379 (603450ca...) - 0 keys | 5461 slots | 1 slaves.
172.16.104.57:6389 (db174078...) - 0 keys | 5378 slots | 1 slaves.
172.16.104.56:6379 (5be79052...) - 0 keys | 5545 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.104.57:6379)
M: 603450cae36e4d2f43b5ba98da52f1c35f87683e 172.16.104.57:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: db1740788ce051b82014a1d461a289ec669e59e7 172.16.104.57:6389
   slots:[5461-6762],[8194-8993],[10923-13828],[16014-16383] (5378 slots) master
   1 additional replica(s)
M: 5be790525143f1a5ba8b697095d2233536bc7e70 172.16.104.56:6379
   slots:[6763-8193],[8994-10922],[13829-16013] (5545 slots) master
   1 additional replica(s)
S: 8a960375d75e19299b712982a1c178a15b86bfc2 172.16.104.56:6399
   slots: (0 slots) slave
   replicates 603450cae36e4d2f43b5ba98da52f1c35f87683e
S: ead3184f5323fc8f0f8ba6f42b122b10af058388 172.16.104.57:6399
   slots: (0 slots) slave
   replicates 5be790525143f1a5ba8b697095d2233536bc7e70
S: d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389
   slots: (0 slots) slave
   replicates db1740788ce051b82014a1d461a289ec669e59e7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

5. Cluster Operations
5.1 Cluster Skew
Cluster skew is a problem I run into regularly when maintaining Redis clusters, whether self-hosted or a cloud Redis service. Skew means data volume and request volume differ noticeably between nodes, which makes load balancing and day-to-day operations harder. This subsection covers the common causes.

1. Uneven slot assignment. If slots are assigned unevenly, data volume very easily becomes unbalanced. Check the slot distribution with:

redis-cli --cluster check 172.16.104.57:6379 -a Redis123

2. Big keys. For example, a single list or hash holding hundreds of thousands of elements makes that one key oversized and skews the cluster's data volume. Scan for big keys with:

redis-cli --bigkeys -a Redis123

3. Some slots holding far more data than others. Keys are mapped to slots by the CRC16 hash function, so normally the key count per slot is fairly even. Heavy use of hash tags, however, maps many different keys onto the same slot: when a key name contains {}, only the content inside the braces is fed into the CRC16, as below (see the sketch after this list).

127.0.0.1:6379> set user:info{1} aaa
OK
127.0.0.1:6379> set user:other{1} aa
OK
127.0.0.1:6379> set user:age{1} aa
OK
127.0.0.1:6379> keys *
1) "user:age{1}"
2) "user:other{1}"
3) "user:info{1}"

4. Inconsistent memory-related configuration. This refers to settings for compressed data structures, such as hash-max-ziplist-value and set-max-intset-entries. When a cluster makes heavy use of hashes, sets, and similar structures, inconsistent compression settings can in extreme cases produce severalfold differences in memory usage, skewing memory across nodes.
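A minimal sketch of the hash-tag rule, assuming a recent redis-py is installed (its redis.crc.key_slot helper implements the same key-to-slot mapping as the server):

from redis.crc import key_slot

# All three keys share the hash tag {1}, so only "1" is hashed and they all
# map to the same slot.
for key in (b"user:info{1}", b"user:other{1}", b"user:age{1}"):
    print(key, key_slot(key))

# Without a tag, the whole key name is hashed, so it usually lands elsewhere.
print(key_slot(b"user:info"))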
In the cloud, the fourth cause rarely appears; most skew comes from big keys and hash tags.
5.2 Manual Failover
Every master in a Redis Cluster has a replica for failover. Redis also supports triggering the switch manually, which is useful in scenarios such as planned master migration: simply run cluster failover on the replica that should take over.
127.0.0.1:6389> cluster nodes
# Other nodes' lines omitted
d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389@16389 myself,slave db1740788ce051b82014a1d461a289ec669e59e7 0 1720511649000 15 connected
127.0.0.1:6389> cluster failover
OK
127.0.0.1:6389> cluster nodes
# This node has been promoted from slave to master
d0fb82c46f800ac06bb2c8533fefa8141a1dcdd2 172.16.104.56:6389@16389 myself,master - 0 1720511671000 17 connected 5461-6762 8194-8993 10923-13828 16014-16383
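The switch can also be scripted. A minimal sketch, assuming redis-py is installed; CLUSTER FAILOVER must be sent to the replica that should be promoted:

import redis

# Connect to the replica that should take over and ask it to fail over.
replica = redis.Redis(host="172.16.104.56", port=6389, password="Redis123")
print(replica.execute_command("CLUSTER FAILOVER"))  # OK/True on success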