A detailed walkthrough of quickly deploying a distributed Redis cluster with Docker Swarm

When I first tried to deploy a Redis cluster with Swarm, most of the posts I found online covered only single-host clusters: several Redis containers are started on one server, and then you exec into one of them to build the cluster by hand. After some experimentation, I worked out a way to deploy Redis across different machines with nothing more than a docker-compose.yml file and a single start command, placing the nodes on separate hosts and assembling the cluster automatically.

Environment preparation

Four virtual machines

  • 192.168.2.38 (manager node)
  • 192.168.2.81 (worker node)
  • 192.168.2.100 (worker node)
  • 192.168.2.102 (worker node)
Time synchronization

Run on every machine:
yum install -y ntp
cat <<EOF>>/var/spool/cron/root
00 12 * * * /usr/sbin/ntpdate -u ntp1.aliyun.com && /usr/sbin/hwclock -w
EOF
## list the scheduled jobs
crontab -l
## run a sync manually
/usr/sbin/ntpdate -u ntp1.aliyun.com && /usr/sbin/hwclock -w
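Optionally, you can also query the clock offset against the same NTP server without actually stepping the clock (a small check, not in the original steps):
## query only, do not change the time
/usr/sbin/ntpdate -q ntp1.aliyun.com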
Docker

Install Docker
curl -sSL https://get.daocloud.io/docker | sh
Start Docker
sudo systemctl start docker
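Optionally (not part of the original steps), have Docker come back up automatically after a reboot:
sudo systemctl enable docker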
Set up the Swarm cluster


Open the firewall ports (required by Swarm)

Open port 2377 on the manager node:
# manager
firewall-cmd --zone=public --add-port=2377/tcp --permanent
Open the following ports on all nodes:
# all nodes
firewall-cmd --zone=public --add-port=7946/tcp --permanent
firewall-cmd --zone=public --add-port=7946/udp --permanent
firewall-cmd --zone=public --add-port=4789/tcp --permanent
firewall-cmd --zone=public --add-port=4789/udp --permanent
Reload the firewall on all nodes, then restart Docker:
# all nodes
firewall-cmd --reload
systemctl restart docker
For convenience you can simply disable the firewall instead.
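A minimal sketch of that shortcut, assuming the hosts run firewalld and this is a test environment rather than production:
# all nodes: stop firewalld now and keep it from starting on boot
systemctl stop firewalld
systemctl disable firewalld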

Create the Swarm
docker swarm init --advertise-addr your_manager_ip
Check the join token:
[root@manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-51b7t8whxn8j6mdjt5perjmec9u8qguxq8tern9nill737pra2-ejc5nw5f90oz6xldcbmrl2ztu 192.168.2.61:2377

[root@manager ~]#
Join the Swarm (run on each worker node):
docker swarm join --token SWMTKN-1-51b7t8whxn8j6mdjt5perjmec9u8qguxq8tern9nill737pra2-ejc5nw5f90oz6xldcbmrl2ztu 192.168.2.38:2377
# list the nodes
docker node ls
Service constraints

Add labels:
sudo docker node update --label-add redis1=true <manager node name>
sudo docker node update --label-add redis2=true <worker node name>
sudo docker node update --label-add redis3=true <worker node name>
sudo docker node update --label-add redis4=true <worker node name>
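To double-check that each label landed on the intended node (a hedged check; the node names come from docker node ls):
# print the labels attached to a node
docker node inspect <node name> --format '{{ .Spec.Labels }}'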
Single-host cluster

Drawback: all the containers run on one machine, so if that machine goes down, the whole cluster goes down with it.

Create the containers

Tip: these commands can be wrapped in a small startup script (a minimal sketch is included after the start command below), although this layout is not used often.
docker create --name redis-node1 --net host -v /data/redis-data/node1:/data redis --cluster-enabled yes --cluster-config-file nodes-node-1.conf --port 6379
docker create --name redis-node2 --net host -v /data/redis-data/node2:/data redis --cluster-enabled yes --cluster-config-file nodes-node-2.conf --port 6380
docker create --name redis-node3 --net host -v /data/redis-data/node3:/data redis --cluster-enabled yes --cluster-config-file nodes-node-3.conf --port 6381
docker create --name redis-node4 --net host -v /data/redis-data/node4:/data redis --cluster-enabled yes --cluster-config-file nodes-node-4.conf --port 6382
docker create --name redis-node5 --net host -v /data/redis-data/node5:/data redis --cluster-enabled yes --cluster-config-file nodes-node-5.conf --port 6383
docker create --name redis-node6 --net host -v /data/redis-data/node6:/data redis --cluster-enabled yes --cluster-config-file nodes-node-6.conf --port 6384
Start the containers
docker start redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-node6
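A minimal sketch of the startup script mentioned in the tip above (same image, ports and volume paths as the commands just shown):
#!/usr/bin/env bash
# create six cluster-enabled Redis containers on one host (ports 6379-6384)
for i in $(seq 1 6); do
  docker create --name redis-node$i --net host \
    -v /data/redis-data/node$i:/data redis \
    --cluster-enabled yes --cluster-config-file nodes-node-$i.conf --port $((6378 + i))
done
# start them all
docker start $(seq -f 'redis-node%g' 1 6)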
Exec into a container and create the cluster
# exec into one of the nodes
docker exec -it redis-node1 /bin/bash
# create the cluster
redis-cli --cluster create 192.168.2.38:6379 192.168.2.38:6380 192.168.2.38:6381 192.168.2.38:6382 192.168.2.38:6383 192.168.2.38:6384 --cluster-replicas 1
# --cluster-replicas 1 means a 1:1 ratio, one replica per master
Distributed cluster

A Redis cluster needs at least 3 master nodes, so we build a three-master, three-replica cluster here. Since there are only 4 machines, the compose file places the first three nodes on the same machine (the manager).

Deployment

On the manager node of the Swarm cluster, create the working directory and the compose file:
mkdir /root/redis-swarm
cd /root/redis-swarm
vi docker-compose.yml
docker-compose.yml

Notes:

  • The first six services are the Redis nodes; the last one, redis-start, exists only to assemble the cluster with the redis-cli client, and it stops on its own once the cluster has been created.
  • redis-start has to wait for all six Redis nodes to come up before it can create the cluster, which is what the wait-for-it.sh script is for.
  • Because redis-cli --cluster create does not accept network aliases, a separate redis-start.sh script resolves the service names to IPs first.
The same files can also be used for a single-host cluster: just start them without Swarm and comment out the overlay network driver (driver: overlay) in docker-compose.yml; a sketch follows the compose file below.
version: '3.7'
services:
  redis-node1:
    image: redis
    hostname: redis-node1
    ports:
      - 6379:6379
    networks:
      - redis-swarm
    volumes:
      - "node1:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-1.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.role==manager

  redis-node2:
    image: redis
    hostname: redis-node2
    ports:
      - 6380:6379
    networks:
      - redis-swarm
    volumes:
      - "node2:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-2.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.role==manager

  redis-node3:
    image: redis
    hostname: redis-node3
    ports:
      - 6381:6379
    networks:
      - redis-swarm
    volumes:
      - "node3:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-3.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.role==manager

  redis-node4:
    image: redis
    hostname: redis-node4
    ports:
      - 6382:6379
    networks:
      - redis-swarm
    volumes:
      - "node4:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-4.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.labels.redis2==true

  redis-node5:
    image: redis
    hostname: redis-node5
    ports:
      - 6383:6379
    networks:
      - redis-swarm
    volumes:
      - "node5:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-5.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.labels.redis3==true

  redis-node6:
    image: redis
    hostname: redis-node6
    ports:
      - 6384:6379
    networks:
      - redis-swarm
    volumes:
      - "node6:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-6.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.labels.redis4==true

  redis-start:
    image: redis
    hostname: redis-start
    networks:
      - redis-swarm
    volumes:
      - "$PWD/start:/redis-start"
    depends_on:
      - redis-node1
      - redis-node2
      - redis-node3
      - redis-node4
      - redis-node5
      - redis-node6
    command: /bin/bash -c "chmod 777 /redis-start/redis-start.sh && chmod 777 /redis-start/wait-for-it.sh && /redis-start/redis-start.sh"
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 5
      placement:
        constraints:
          - node.role==manager

networks:
  redis-swarm:
    driver: overlay

volumes:
  node1:
  node2:
  node3:
  node4:
  node5:
  node6:
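As noted above, the same file can be run on a single host without Swarm once driver: overlay is commented out; a hedged sketch, assuming Docker Compose is installed:
cd /root/redis-swarm
# the placement constraints under deploy: are ignored outside Swarm mode
docker-compose up -d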
wait-for-it.sh
mkdir /root/redis-swarm/start
cd /root/redis-swarm/start
vi wait-for-it.sh
vi redis-start.sh
#!/usr/bin/env bash
#   Use this script to test if a given TCP host/port are available

cmdname=$(basename $0)

echoerr() { if [[ $QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }

usage()
{
    cat << USAGE >&2
Usage:
    $cmdname host:port [-s] [-t timeout] [-- command args]
    -h HOST | --host=HOST       Host or IP under test
    -p PORT | --port=PORT       TCP port under test
                                Alternatively, you specify the host and port as host:port
    -s | --strict               Only execute subcommand if the test succeeds
    -q | --quiet                Don't output any status messages
    -t TIMEOUT | --timeout=TIMEOUT
                                Timeout in seconds, zero for no timeout
    -- COMMAND ARGS             Execute command with args after the test finishes
USAGE
    exit 1
}

wait_for()
{
    if [[ $TIMEOUT -gt 0 ]]; then
        echoerr "$cmdname: waiting $TIMEOUT seconds for $HOST:$PORT"
    else
        echoerr "$cmdname: waiting for $HOST:$PORT without a timeout"
    fi
    start_ts=$(date +%s)
    while :
    do
        (echo > /dev/tcp/$HOST/$PORT) >/dev/null 2>&1
        result=$?
        if [[ $result -eq 0 ]]; then
            end_ts=$(date +%s)
            echoerr "$cmdname: $HOST:$PORT is available after $((end_ts - start_ts)) seconds"
            break
        fi
        sleep 1
    done
    return $result
}

wait_for_wrapper()
{
    # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
    if [[ $QUIET -eq 1 ]]; then
        timeout $TIMEOUT $0 --quiet --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    else
        timeout $TIMEOUT $0 --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    fi
    PID=$!
    trap "kill -INT -$PID" INT
    wait $PID
    RESULT=$?
    if [[ $RESULT -ne 0 ]]; then
        echoerr "$cmdname: timeout occurred after waiting $TIMEOUT seconds for $HOST:$PORT"
    fi
    return $RESULT
}

# process arguments
while [[ $# -gt 0 ]]
do
    case "$1" in
        *:* )
        hostport=(${1//:/ })
        HOST=${hostport[0]}
        PORT=${hostport[1]}
        shift 1
        ;;
        --child)
        CHILD=1
        shift 1
        ;;
        -q | --quiet)
        QUIET=1
        shift 1
        ;;
        -s | --strict)
        STRICT=1
        shift 1
        ;;
        -h)
        HOST="$2"
        if [[ $HOST == "" ]]; then break; fi
        shift 2
        ;;
        --host=*)
        HOST="${1#*=}"
        shift 1
        ;;
        -p)
        PORT="$2"
        if [[ $PORT == "" ]]; then break; fi
        shift 2
        ;;
        --port=*)
        PORT="${1#*=}"
        shift 1
        ;;
        -t)
        TIMEOUT="$2"
        if [[ $TIMEOUT == "" ]]; then break; fi
        shift 2
        ;;
        --timeout=*)
        TIMEOUT="${1#*=}"
        shift 1
        ;;
        --)
        shift
        CLI="$@"
        break
        ;;
        --help)
        usage
        ;;
        *)
        echoerr "Unknown argument: $1"
        usage
        ;;
    esac
done

if [[ "$HOST" == "" || "$PORT" == "" ]]; then
    echoerr "Error: you need to provide a host and port to test."
    usage
fi

TIMEOUT=${TIMEOUT:-15}
STRICT=${STRICT:-0}
CHILD=${CHILD:-0}
QUIET=${QUIET:-0}

if [[ $CHILD -gt 0 ]]; then
    wait_for
    RESULT=$?
    exit $RESULT
else
    if [[ $TIMEOUT -gt 0 ]]; then
        wait_for_wrapper
        RESULT=$?
    else
        wait_for
        RESULT=$?
    fi
fi

if [[ $CLI != "" ]]; then
    if [[ $RESULT -ne 0 && $STRICT -eq 1 ]]; then
        echoerr "$cmdname: strict mode, refusing to execute subprocess"
        exit $RESULT
    fi
    exec $CLI
else
    exit $RESULT
fi
redis-start.sh

getent hosts xxx returns the IP address that the container's /etc/hosts (and the embedded DNS) maps a hostname to; the script uses it to turn the service names into addresses that redis-cli will accept:
cd /redis-start/
bash wait-for-it.sh redis-node1:6379 --timeout=0
bash wait-for-it.sh redis-node2:6379 --timeout=0
bash wait-for-it.sh redis-node3:6379 --timeout=0
bash wait-for-it.sh redis-node4:6379 --timeout=0
bash wait-for-it.sh redis-node5:6379 --timeout=0
bash wait-for-it.sh redis-node6:6379 --timeout=0
echo 'redis-cluster begin'
echo 'yes' | redis-cli --cluster create --cluster-replicas 1 \
`getent hosts redis-node1 | awk '{ print $1 ":6379" }'` \
`getent hosts redis-node2 | awk '{ print $1 ":6379" }'` \
`getent hosts redis-node3 | awk '{ print $1 ":6379" }'` \
`getent hosts redis-node4 | awk '{ print $1 ":6379" }'` \
`getent hosts redis-node5 | awk '{ print $1 ":6379" }'` \
`getent hosts redis-node6 | awk '{ print $1 ":6379" }'`
echo 'redis-cluster end'
Start


Directory layout
├── docker-compose.yml
└── start
    ├── redis-start.sh
    └── wait-for-it.sh
Run on the Swarm manager node:
cd /root/redis-swarm
docker stack deploy -c docker-compose.yml redis_cluster
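To watch the bootstrap, you can list the stack's services and follow the redis-start logs (with the stack deployed as redis_cluster above, the service is named redis_cluster_redis-start):
docker stack services redis_cluster
docker service logs -f redis_cluster_redis-start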
Check the redis-start service logs; output like the following means the cluster was created successfully:
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: waiting for redis-node1:6379 without a timeout
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: redis-node1:6379 is available after 18 seconds
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: waiting for redis-node2:6379 without a timeout
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: redis-node2:6379 is available after 13 seconds
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: waiting for redis-node3:6379 without a timeout
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: redis-node3:6379 is available after 0 seconds
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: waiting for redis-node4:6379 without a timeout
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: redis-node4:6379 is available after 0 seconds
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: waiting for redis-node5:6379 without a timeout
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: redis-node5:6379 is available after 0 seconds
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: waiting for redis-node6:6379 without a timeout
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | wait-for-it.sh: redis-node6:6379 is available after 0 seconds
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | redis-cluster begin
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | >>> Performing hash slots allocation on 12 nodes...
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Master[0] -> Slots 0 - 2730
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Master[1] -> Slots 2731 - 5460
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Master[2] -> Slots 5461 - 8191
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Master[3] -> Slots 8192 - 10922
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Master[4] -> Slots 10923 - 13652
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Master[5] -> Slots 13653 - 16383
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Adding replica 10.0.5.6:6379 to 10.0.5.17:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Adding replica 10.0.5.9:6379 to 10.0.5.16:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Adding replica 10.0.5.8:6379 to 10.0.5.18:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Adding replica 10.0.5.12:6379 to 10.0.5.19:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Adding replica 10.0.5.11:6379 to 10.0.5.3:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Adding replica 10.0.5.5:6379 to 10.0.5.2:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: 6ce90be6daabc0c700471d03deb3c6bd88c9f0e1 10.0.5.17:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[0-2730] (2731 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: 6ce90be6daabc0c700471d03deb3c6bd88c9f0e1 10.0.5.16:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[2731-5460] (2730 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: ea9b45ec64c08c17283239f8b8e5405b2d182428 10.0.5.18:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[5461-8191] (2731 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: ea9b45ec64c08c17283239f8b8e5405b2d182428 10.0.5.19:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[8192-10922] (2731 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: 935c177308232de05b5483776478020de51bc578 10.0.5.3:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[10923-13652] (2730 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: 935c177308232de05b5483776478020de51bc578 10.0.5.2:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[13653-16383] (2731 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: 1c99e42bcfb28a9fe72952d4e4cc5cd88aded0f9 10.0.5.5:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates 935c177308232de05b5483776478020de51bc578
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: 1c99e42bcfb28a9fe72952d4e4cc5cd88aded0f9 10.0.5.6:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates 6ce90be6daabc0c700471d03deb3c6bd88c9f0e1
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: 73cf232f232e83126f058cc01458df11146d8537 10.0.5.9:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates 6ce90be6daabc0c700471d03deb3c6bd88c9f0e1
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: 73cf232f232e83126f058cc01458df11146d8537 10.0.5.8:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates ea9b45ec64c08c17283239f8b8e5405b2d182428
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: ca3c50899d6deb04e296c542cd485791fb3e8922 10.0.5.12:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates ea9b45ec64c08c17283239f8b8e5405b2d182428
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: ca3c50899d6deb04e296c542cd485791fb3e8922 10.0.5.11:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates 935c177308232de05b5483776478020de51bc578
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Can I set the above configuration? (type 'yes' to accept): >>> Nodes configuration updated
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | >>> Assign a different config epoch to each node
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | >>> Sending CLUSTER MEET messages to join the cluster
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | Waiting for the cluster to join
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | .
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | >>> Performing Cluster Check (using node 10.0.5.17:6379)
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: 6ce90be6daabc0c700471d03deb3c6bd88c9f0e1 10.0.5.17:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[0-5460] (5461 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    1 additional replica(s)
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: ca3c50899d6deb04e296c542cd485791fb3e8922 10.0.5.12:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots: (0 slots) slave
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates 935c177308232de05b5483776478020de51bc578
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: ea9b45ec64c08c17283239f8b8e5405b2d182428 10.0.5.19:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[5461-10922] (5462 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    1 additional replica(s)
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | M: 935c177308232de05b5483776478020de51bc578 10.0.5.3:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots:[10923-16383] (5461 slots) master
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    1 additional replica(s)
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: 1c99e42bcfb28a9fe72952d4e4cc5cd88aded0f9 10.0.5.6:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots: (0 slots) slave
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates 6ce90be6daabc0c700471d03deb3c6bd88c9f0e1
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | S: 73cf232f232e83126f058cc01458df11146d8537 10.0.5.9:6379
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    slots: (0 slots) slave
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    |    replicates ea9b45ec64c08c17283239f8b8e5405b2d182428
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | [OK] All nodes agree about slots configuration.
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | >>> Check for open slots...
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | >>> Check slots coverage...
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | [OK] All 16384 slots covered.
redis-swarm_redis-start.1.6xawjqf5shfw@hyx-test3    | redis-cluster end
Tear down the deployment
docker stack rm redis_cluster
If you need to redeploy the cluster, the data volumes must be cleared first so that the Redis nodes start from a consistent state. Note that docker volume prune only removes volumes that are no longer in use, so run it after the stack has been removed.
# run on every node
docker volume prune
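To confirm the old data volumes are really gone before redeploying (a hedged check; docker stack deploy prefixes the volume names with the stack name):
docker volume ls | grep node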
Testing

Exec into one of the node containers and check the cluster information:
docker exec -it xxx bash
redis-cli -c -h redis-node1 info
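From inside the container, the cluster-specific commands give a quicker overview of roles and slot coverage than the plain info output (standard redis-cli subcommands, not from the original post):
redis-cli -c -h redis-node1 cluster info
redis-cli -c -h redis-node1 cluster nodes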

Test reading and writing data
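A hedged read/write check, run from inside a node container (the key name is made up; with -c, redis-cli follows the MOVED redirect to whichever master owns the key's slot):
redis-cli -c -h redis-node1 set demo_key hello
redis-cli -c -h redis-node1 get demo_key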

Test the failure of a master node. Here master node 1 is removed; node 4 is node 1's replica, and once node 1 goes down, node 4 is promoted to master.
docker service rm redis-swarm_redis-node1
# check
root@redis-node2:/data# redis-cli -c -h redis-node1
Could not connect to Redis at redis-node1:6379: Name or service not known
not connected>
root@redis-node2:/data# redis-cli -c -h redis-node4
redis-node4:6379> info


Known issue
redis-cli --cluster create redis-node1:6379 ...(omitted)
When creating the cluster with redis-cli inside a container, container names like the above cannot be used; only container IPs work, because redis-cli does not resolve network aliases.


Script download + quick start
Link: https://pan.baidu.com/s/18_YS9ng29e31Az_HBzBC1w?pwd=sp8w
Extraction code: sp8w
That wraps up this detailed walkthrough of quickly deploying a distributed Redis cluster with Docker Swarm; for more on deploying Redis with Docker Swarm, see the site's earlier articles.

Source: https://www.jb51.net/article/265927.htm