This article walks through building a Redis cluster in sentinel mode, so the same steps can be reused for later deployments without repeating the groundwork.
When learning any technology, start with the official documentation, so please refer to the official site for the configuration details. To save time, the setup here follows my company's conventions. Official site:
The official download page lists every Redis release:

Since we are building on Linux, the commands can be collected into a script and run in one go, and the installation package can be downloaded directly on the Linux host (if the machine has no network access, download it in advance). I chose Redis 3.2.8 for this installation.

The cluster consists of three machines, set up as one master, two replicas, and three sentinels.

1. Building Redis itself

Run the following script to create the directories and install Redis.

# Create a temporary directory for the Redis sources
mkdir /app/tmp
cd /app/tmp/
chmod -R 777 /app
wget http://download.redis.io/releases/redis-3.2.8.tar.gz
tar -zxvf redis-3.2.8.tar.gz
cd /app/tmp/redis-3.2.8
# Build and install Redis
make PREFIX=/app/redis install
# Create the Redis directory layout
mkdir -p /app/redis/conf
mkdir -p /app/redis/data
mkdir -p /app/redis/run
mkdir -p /app/redis/log
mkdir -p /app/redis/sentinel
mkdir -p /app/redis/scripts
# Create the data directory
cd /app/redis/data
mkdir REDIS_CLUSTER_SVR_03
# Create the configuration file
cd /app/redis/conf
touch REDIS_CLUSTER_SVR_03.conf

Then edit the configuration file (vi REDIS_CLUSTER_SVR_03.conf) and save the following into it. Adjust the log and other file/directory paths as needed, and set a password.
daemonize yes  # run Redis as a daemon
pidfile /app/redis/run/  # PID file for the Redis process
port 8080  # service port
tcp-backlog 511  # size of the TCP backlog
bind  # IP address the Redis service listens on
timeout 180  # close a client connection after this many idle seconds
tcp-keepalive 60  # interval, in seconds, for sending TCP ACKs to clients; 60 is the recommended value
loglevel notice  # log level
logfile "/app/redis/log/REDIS_CLUSTER_SVR_03.log"  # log file
databases 16  # number of databases (16 by default)
# RDB snapshot policy: save after <seconds> if at least <changes> keys changed
save 900 1
save 300 10
save 60 100
stop-writes-on-bgsave-error yes  # with RDB snapshots enabled, reject writes if the last background save failed
rdbcompression yes  # compress the RDB file when dumping
rdbchecksum yes  # checksum the RDB file after dumping
dbfilename REDIS_CLUSTER_SVR_03.rdb  # local filename of the RDB file
dir /app/redis/data/REDIS_CLUSTER_SVR_03  # directory for the RDB file
slave-serve-stale-data yes  # keep answering client requests while the link to the master is down or while syncing
slave-read-only yes  # replicas are read-only
repl-diskless-sync no  # diskless replication is still experimental, so leave it off ("no")
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass "xxx.123456"  # password required of clients; used when this node is the master
# masterauth "xxx.123456"  # password for authenticating against the master; used on replica nodes
maxclients 1000  # maximum number of client connections
maxmemory 512mb  # memory limit for Redis
maxmemory-policy volatile-lru  # eviction policy used once maxmemory is reached
maxmemory-samples 3  # number of keys sampled when applying the eviction policy
appendonly yes  # enable append-only (AOF) persistence
appendfilename "REDIS_CLUSTER_SVR_03.aof"  # AOF filename
appendfsync everysec  # how often the AOF is fsynced to disk
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000  # maximum execution time of a Lua script, in milliseconds
slowlog-log-slower-than 10000  # log commands that take longer than 10000 microseconds
slowlog-max-len 128  # slowlog length
latency-monitor-threshold 0  # latency-monitor threshold; 0 disables it
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

Then start Redis and check that it is running.
# Start Redis
/app/redis/bin/redis-server /app/redis/conf/REDIS_CLUSTER_SVR_03.conf
# Check that the process is running
ps -ef | grep redis

Configure the other two machines the same way and start them. At this point, three standalone Redis nodes are up.
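Since the other two nodes differ only in the node name embedded in their filenames and paths, the per-node configuration files can be stamped out from one template. Below is a minimal sketch of that idea; the template here is abbreviated to a few lines rather than the full configuration above, the `REDIS_CLUSTER_SVR_NN` naming convention is assumed from this article, and in practice each node's `bind` address must also be set by hand:

```shell
# Work in a scratch directory so nothing under /app is touched by this sketch
conf_dir=$(mktemp -d)

# A stand-in for the full REDIS_CLUSTER_SVR_03.conf written above (abbreviated)
cat > "$conf_dir/REDIS_CLUSTER_SVR_03.conf" <<'EOF'
port 8080
logfile "/app/redis/log/REDIS_CLUSTER_SVR_03.log"
dbfilename REDIS_CLUSTER_SVR_03.rdb
dir /app/redis/data/REDIS_CLUSTER_SVR_03
EOF

# Derive the configs for nodes 01 and 02 by substituting the node name
for node in 01 02; do
    sed "s/REDIS_CLUSTER_SVR_03/REDIS_CLUSTER_SVR_${node}/g" \
        "$conf_dir/REDIS_CLUSTER_SVR_03.conf" \
        > "$conf_dir/REDIS_CLUSTER_SVR_${node}.conf"
done

ls "$conf_dir"
```

The same substitution approach works for the full configuration file, since every node-specific value in it carries the node name.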
2. Setting up the master and replicas

On the two replica machines, add a single line to the configuration file, pointing at the master's IP and port:

slaveof <master IP> <master port>
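For illustration, suppose the master runs at 192.168.1.101 (a hypothetical address; substitute your master's real IP) on port 8080 as configured above. Each replica's configuration file would then gain:

```conf
# Hypothetical master address; substitute your master's real IP
slaveof 192.168.1.101 8080
# The master requires a password (requirepass above), so replicas must authenticate too
masterauth "xxx.123456"
```

After restarting the replicas, running `info replication` against the master should show two connected slaves.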
3. Deploying the sentinels

Create the following directories and files:

cd /app/redis/conf
mkdir /app/redis/sentinel/REDIS_CLUSTER_SEN_01
touch REDIS_CLUSTER_SEN_01.conf

Edit the sentinel configuration file (vi REDIS_CLUSTER_SEN_01.conf) and add the following:

daemonize yes
port 8001
bind  # this node's IP address
sentinel announce-ip ""
dir "/app/redis/sentinel/REDIS_CLUSTER_SEN_01"
pidfile "/app/redis/run/"
loglevel notice
logfile "/app/redis/log/REDIS_CLUSTER_SEN_01.log"
# sentinel monitor <cluster name> <master IP> <master port> <quorum>
# the final number is the quorum: how many sentinels must agree before the master is marked down
sentinel monitor REDIS_CLUSTER 8080 2
sentinel failover-timeout REDIS_CLUSTER 60000
sentinel auth-pass REDIS_CLUSTER admin.123
sentinel config-epoch REDIS_CLUSTER 0
sentinel leader-epoch REDIS_CLUSTER 0
Start the sentinel and check its process:

/app/redis/bin/redis-sentinel /app/redis/conf/REDIS_CLUSTER_SEN_01.conf
ps -ef | grep redis

Configure the other two sentinel nodes the same way and start them.

With that, the sentinel-mode Redis high-availability cluster is complete.
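To confirm the sentinels are actually watching the master, you can query any sentinel with redis-cli. The port and cluster name below follow the configuration above; these commands only make sense on a host where the cluster is actually running:

```shell
# Ask the sentinel which master it currently tracks for this cluster name
/app/redis/bin/redis-cli -p 8001 sentinel get-master-addr-by-name REDIS_CLUSTER

# List the monitored masters with their state and the number of known slaves/sentinels
/app/redis/bin/redis-cli -p 8001 sentinel masters
```

The first command should return the master's IP and port; after you stop the master, it should switch to one of the replicas, which is the failover the sentinels exist to perform.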