Redis Configuration Manual

Compiled, organized, and written by snow chuai --- 2020/2/1


1. Installing and Configuring Redis
1. Install Redis
[root@node1 ~]# yum --enablerepo=epel install redis -y
2. Configure Redis
[root@node1 ~]# vim /etc/redis.conf
# Line 61: change the listen address
bind 0.0.0.0
# Line 84: confirm the listening port
port 6379
# Line 178: confirm the default number of databases
databases 16
# Lines 202-204: confirm the persapshot persistence settings. The defaults mean:
# save to disk if at least 1 key changed within 900 seconds,
# if at least 10 keys changed within 300 seconds,
# or if at least 10000 keys changed within 60 seconds.
save 900 1
save 300 10
save 60 10000
# Line 480: set the authentication password
requirepass password
# Line 593: leave append-only (AOF) persistence disabled.
# Enabling it costs Redis some performance but improves durability.
appendonly no
# Line 623: confirm the data-sync (fsync) policy
appendfsync everysec
[root@node1 ~]# systemctl enable --now redis
3. Set firewall rules
[root@node1 ~]# firewall-cmd --add-port=6379/tcp --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
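The three save lines configured above share one pattern: snapshot once at least N keys have changed and at least T seconds have passed since the last save. A minimal sketch of that decision in plain Python (illustrative only, no Redis required; the function name is invented for this example):

```python
# Each rule is (seconds, min_changes): snapshot if at least min_changes
# keys changed and at least `seconds` seconds elapsed since the last save.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(seconds_since_last_save, changes_since_last_save,
                    rules=SAVE_RULES):
    """Return True if any configured save rule is satisfied."""
    return any(seconds_since_last_save >= secs and changes_since_last_save >= n
               for secs, n in rules)

print(should_snapshot(905, 1))     # one change, more than 900s elapsed
print(should_snapshot(61, 9999))   # not enough changes for the 60s rule
print(should_snapshot(61, 10000))  # 10000 changes within the 60s window
```

A higher `save` threshold means fewer snapshots and more potential data loss on crash; `appendfsync everysec` (above) bounds AOF loss to about one second instead.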
2. Basic Redis Operations
2.1 Basic server operations
1. Connect to the Redis server
(1) Local connection
[root@node1 ~]# redis-cli -a password
(2) Remote connection
[root@node1 ~]# redis-cli -h node1.1000cc.net -a password
(3) Quit
127.0.0.1:6379> quit
(4) Connect first, then authenticate
[root@node1 ~]# redis-cli
127.0.0.1:6379> auth password
OK
(5) Connect to a specific database
[root@node1 ~]# redis-cli -a password -n 1
127.0.0.1:6379[1]>
(6) Switch to database 2
127.0.0.1:6379[1]> select 2
OK
127.0.0.1:6379[2]>
(7) Check Redis status
127.0.0.1:6379[1]> info
# Server
redis_version:3.2.12
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:7897e7d0e13773f
redis_mode:standalone
os:Linux 3.10.0-1062.el7.x86_64 x86_64
......
(8) Show information about connected clients
127.0.0.1:6379> client list
id=8 addr=127.0.0.1:47000 fd=5 name= age=62 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
(9) Disconnect a client
127.0.0.1:6379> client kill 192.168.10.12:43668
OK
(10) Dump all requests
127.0.0.1:6379> monitor
OK
1469078099.850114 [0 10.0.0.31:43666] "get" "key01"
1469078112.319154 [0 10.0.0.31:43666] "set" "key02" "value02"
.....
(11) Save data to disk
127.0.0.1:6379> save
OK
(12) Save data to disk in the background
127.0.0.1:6379> bgsave
OK
(13) Get the UNIX timestamp of the last save to disk
127.0.0.1:6379> lastsave
(integer) 1580544310
(14) Save data to disk and shut down Redis
127.0.0.1:6379> shutdown
not connected> quit
2.2 Basic key/value operations
(1) Set a value for a key
[root@node1 ~]# redis-cli -a password
127.0.0.1:6379> set key1 value1
OK
(2) Get a key's value
127.0.0.1:6379> get key1
"value1"
(3) Delete a key
127.0.0.1:6379> del key1
(integer) 1
(4) Check whether a key exists
127.0.0.1:6379> exists key01
(integer) 0    # does not exist
127.0.0.1:6379> exists key1
(integer) 1    # exists
(5) Create a new key with a value, but give up if the key already exists
127.0.0.1:6379> setnx key2 value2    # key2 does not exist
(integer) 1
127.0.0.1:6379> get key2
"value2"
127.0.0.1:6379> setnx key1 value2    # key1 already exists
(integer) 0
127.0.0.1:6379> get key1
"value1"
(6) Set a key with a 60-second lifetime
127.0.0.1:6379> setex key3 60 value3
OK
(7) Set a 30-second lifetime on an existing key
127.0.0.1:6379> expire key1 30
(integer) 1
(8) Append to an existing key's value
127.0.0.1:6379> append key1 value12
(integer) 12
(9) Get a substring of a value
Syntax: substr [key] [start index] [end index]
127.0.0.1:6379> substr key1 0 3
"valu"
(10) Increment
127.0.0.1:6379> set key2 2
OK
127.0.0.1:6379> incr key2
(integer) 3
(11) Increment by a given amount
127.0.0.1:6379> incrby key2 100
(integer) 103
(12) Decrement
127.0.0.1:6379> decr key2
(integer) 102
(13) Decrement by a given amount
127.0.0.1:6379> decrby key2 50
(integer) 52
(14) Set multiple key/value pairs at once
127.0.0.1:6379> mset key01 value01 key02 value02 key03 value03
OK
(15) Get multiple keys' values
127.0.0.1:6379> mget key01 key02 key03
1) "value01"
2) "value02"
3) "value03"
(16) Rename a key (note: if the new name already exists, rename silently overwrites it; use renamenx to fail instead)
127.0.0.1:6379> rename key01 key100
OK
127.0.0.1:6379> mget key01 key100
1) (nil)
2) "value01"
(17) Count the keys in the current database
127.0.0.1:6379> dbsize
(integer) 5
(18) Move a key to another database
127.0.0.1:6379> move key03 1
(integer) 1
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> get key03
"value03"
(19) Delete all keys in the current database
127.0.0.1:6379> flushdb
OK
(20) Delete all keys in all databases
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> quit
(21) Read a value from standard input and store it under a key
[root@node1 ~]# echo 'test_words' | redis-cli -a password -x set key209
OK
[root@node1 ~]# redis-cli -a password get key209
"test_words\n"
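The return codes above (SETNX returning 0/1, APPEND returning the new length, INCRBY returning the new integer) can be mimicked with a toy in-memory model. This is plain Python for illustration only, not the redis-py client API; the class and its methods are invented for this sketch:

```python
class MiniKV:
    """Toy model of a few Redis string-command semantics."""
    def __init__(self):
        self.data = {}

    def setnx(self, key, value):
        # Set only if the key does not exist; 1 on success, 0 otherwise.
        if key in self.data:
            return 0
        self.data[key] = value
        return 1

    def append(self, key, value):
        # Append to the existing value (creating it if absent);
        # return the new string length, as APPEND does.
        self.data[key] = self.data.get(key, "") + value
        return len(self.data[key])

    def incrby(self, key, n):
        # Treat the value as an integer (missing keys count as 0),
        # add n, and return the new value, as INCRBY does.
        self.data[key] = int(self.data.get(key, 0)) + n
        return self.data[key]

kv = MiniKV()
print(kv.setnx("key1", "value1"))  # 1: created
print(kv.setnx("key1", "other"))   # 0: already exists, value kept
print(kv.append("key1", "x"))      # 7: len("value1x")
print(kv.incrby("key2", 100))      # 100: missing key starts at 0
```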
2.3 Basic list operations
(1) Push a value onto the head of list01
127.0.0.1:6379> lpush list01 value01
(integer) 1
(2) Append a value to the tail of list01
127.0.0.1:6379> rpush list01 value02
(integer) 2
(3) Get the list length
127.0.0.1:6379> llen list01
(integer) 2
(4) Get the element at a given index
127.0.0.1:6379> lindex list01 0
"value01"
(5) Get a range of elements
127.0.0.1:6379> lrange list01 0 1
1) "value01"
2) "value02"
(6) Change the value at a given index
127.0.0.1:6379> lset list01 1 value03
OK
127.0.0.1:6379> lindex list01 1
"value03"
(7) Pop the head element
127.0.0.1:6379> lpop list01
"value01"
(8) Pop the tail element
127.0.0.1:6379> rpop list01
"value03"
(9) Trim the list to a given index range
127.0.0.1:6379> ltrim list01 1 3
OK
(10) Remove a given number of occurrences of a value
127.0.0.1:6379> lrange list02 0 7
1) "value01"
2) "test"
3) "value02"
4) "value03"
5) "value04"
6) "test"
7) "value05"
8) "test"
127.0.0.1:6379> lrem list02 2 test
(integer) 2
127.0.0.1:6379> lrange list02 0 7
1) "value01"
2) "value02"
3) "value03"
4) "value04"
5) "value05"
6) "test"
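The head/tail distinction between LPUSH and RPUSH, and LREM's "remove up to count occurrences scanning from the head" behavior, can be sketched with plain Python lists (hypothetical helper functions, for illustration only):

```python
def lpush(lst, value):
    # Like LPUSH: insert at the head; return the new length.
    lst.insert(0, value)
    return len(lst)

def rpush(lst, value):
    # Like RPUSH: append at the tail; return the new length.
    lst.append(value)
    return len(lst)

def lrem(lst, count, value):
    # Like LREM with count > 0: remove up to `count` occurrences of
    # `value`, scanning from the head; return how many were removed.
    removed = 0
    while removed < count and value in lst:
        lst.remove(value)
        removed += 1
    return removed

l = ["value01", "test", "value02", "test", "test"]
print(lrem(l, 2, "test"))  # 2
print(l)                   # ['value01', 'value02', 'test']
```

This mirrors the lrem transcript above: two of the three "test" elements are removed, leaving the last one in place.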
2.4 Basic hash operations
(1) Set a field in a hash
127.0.0.1:6379> hset hash01 field01 value01
(integer) 1
(2) Get a hash field's value
127.0.0.1:6379> hget hash01 field01
"value01"
(3) Set multiple hash fields at once
127.0.0.1:6379> hmset hash01 field02 value02 field03 value03
OK
(4) Get multiple hash fields
127.0.0.1:6379> hmget hash01 field01 field02 field03
1) "value01"
2) "value02"
3) "value03"
(5) Get all field names in a hash
127.0.0.1:6379> hkeys hash01
1) "field01"
2) "field02"
3) "field03"
(6) Get all field values in a hash
127.0.0.1:6379> hvals hash01
1) "value01"
2) "value02"
3) "value03"
(7) Get all fields and their values in a hash
127.0.0.1:6379> hgetall hash01
1) "field01"
2) "value01"
3) "field02"
4) "value02"
5) "field03"
6) "value03"
(8) Increment a hash field by a given amount
127.0.0.1:6379> hincrby hash01 field04 100
(integer) 101
(9) Check whether a hash field exists
127.0.0.1:6379> hexists hash01 field01
(integer) 1
(10) Count the fields in a hash
127.0.0.1:6379> hlen hash01
(integer) 4
(11) Remove a hash field
127.0.0.1:6379> hdel hash01 field04
(integer) 1
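HINCRBY treats a missing field as 0 and returns the new value, so the (integer) 101 in step (8) implies field04 already held 1 before the command ran. A one-function sketch of that semantics in plain Python (illustrative; the function is invented for this example):

```python
def hincrby(h, field, n):
    # Like HINCRBY: a missing field counts as 0; the stored value is
    # parsed as an integer; the new value is returned.
    h[field] = int(h.get(field, 0)) + n
    return h[field]

hash01 = {}
print(hincrby(hash01, "field04", 100))  # 100: missing field starts at 0
print(hincrby(hash01, "field04", 1))    # 101: increments the stored 100
```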
2.5 Basic set operations
(1) Add member01 to the set set01
127.0.0.1:6379> sadd set01 member01
(integer) 1
(2) Count the members of set01
127.0.0.1:6379> scard set01
(integer) 1
(3) Remove member03 from set01
127.0.0.1:6379> srem set01 member03
(integer) 1
(4) Check whether member01 is in set01
127.0.0.1:6379> sismember set01 member01
(integer) 1
(5) Get all members of a set
127.0.0.1:6379> smembers set01
1) "member03"
2) "member02"
3) "member01"
127.0.0.1:6379> smembers set02
1) "member02"
2) "member05"
3) "member04"
127.0.0.1:6379> smembers set03
1) "member06"
2) "member02"
3) "member01"
(6) Get the members present in all the given sets (intersection)
127.0.0.1:6379> sinter set01 set02 set03
1) "member02"
(7) Store the intersection of the given sets in a destination set
127.0.0.1:6379> sinterstore set04 set01 set02 set03
(integer) 1
127.0.0.1:6379> smembers set04
1) "member02"
(8) Get the members of the first set not present in the others (difference)
127.0.0.1:6379> sdiff set01 set02 set03
1) "member03"
(9) Store the difference in a destination set
127.0.0.1:6379> sdiffstore set05 set01 set02 set03
(integer) 1
127.0.0.1:6379> smembers set05
1) "member03"
(10) Get the union of the given sets
127.0.0.1:6379> sunion set01 set02 set03
1) "member06"
2) "member03"
3) "member04"
4) "member02"
5) "member01"
6) "member05"
(11) Store the union of the given sets in a destination set
127.0.0.1:6379> sunionstore set06 set01 set02 set03
(integer) 6
127.0.0.1:6379> smembers set06
1) "member06"
2) "member03"
3) "member04"
4) "member02"
5) "member01"
6) "member05"
(12) Move member03 from set01 to set02
127.0.0.1:6379> smove set01 set02 member03
(integer) 1
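SINTER, SDIFF, and SUNION are ordinary set algebra, so Python's built-in set type reproduces the example results above directly (member names copied from the transcripts; this is an analogy, not a Redis client):

```python
set01 = {"member01", "member02", "member03"}
set02 = {"member02", "member04", "member05"}
set03 = {"member01", "member02", "member06"}

# SINTER: members present in every set
print(sorted(set01 & set02 & set03))   # ['member02']
# SDIFF: members of the first set missing from all the others
print(sorted(set01 - set02 - set03))   # ['member03']
# SUNION: every member seen in any set (6 in total)
print(sorted(set01 | set02 | set03))
```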
3. Script Support
3.1 Python support
[root@node1 ~]# yum --enablerepo=epel install python2-redis -y
[root@node1 ~]# vim use-redis.py
#!/usr/bin/env python
import redis

client = redis.StrictRedis(host='127.0.0.1', port=6379, db=0, password='password')

client.set("key1", "value1")
print "key1.value :", client.get("key1")

client.append("key1", ",value2")
print "key1.value :", client.get("key1")

client.set("key2", 1)
client.incr("key2", 200)
print "key2.value :", client.get("key2")

client.decr("key2", 100)
print "key2.value :", client.get("key2")

client.lpush("list1", "value1", "value2", "value3")
print "list1.value :", client.lrange("list1", "0", "2")

client.hmset("hash1", {"key1": "value1", "key2": "value2", "key3": "value3"})
print "hash1.value :", client.hmget("hash1", ["key1", "key2", "key3"])

client.sadd("set1", "member1", "member2", "member3")
print "set01.value :", client.smembers("set1")

[root@node1 ~]# python use-redis.py
key1.value : value1
key1.value : value1,value2
key2.value : 201
key2.value : 101
list1.value : ['value3', 'value2', 'value1']
hash1.value : ['value1', 'value2', 'value3']
set01.value : set(['member1', 'member3', 'member2'])
3.2 PHP support
[root@node1 ~]# yum --enablerepo=epel install php-pecl-redis -y
[root@node1 ~]# vim use-php.php
<?php
$redis = new Redis();
$redis->connect("127.0.0.1", 6379);
$redis->auth("password");

$redis->set('key1', 'value1');
print 'key1.value : ' . $redis->get('key1') . "\n";

$redis->append('key1', ',value2');
print 'key1.value : ' . $redis->get('key1') . "\n";

$redis->set('key2', 1);
print 'key2.value : ' . $redis->get('key2') . "\n";

$redis->incr('key2', 100);
print 'key2.value : ' . $redis->get('key2') . "\n";

$redis->decr('key2', 51);
print 'key2.value : ' . $redis->get('key2') . "\n";

$redis->lPush('list1', 'value1');
$redis->rPush('list1', 'value2');
print 'list1.value : ';
print_r($redis->lRange('list1', 0, -1));

$redis->hSet('hash1', 'key1', 'value1');
$redis->hSet('hash1', 'key2', 'value2');
print 'hash1.value : ';
print_r($redis->hGetAll('hash1'));

$redis->sAdd('set1', 'member1');
$redis->sAdd('set1', 'member2');
print 'set1.value : ';
print_r($redis->sMembers('set1'));
?>

[root@node1 ~]# php use-php.php
key1.value : value1
key1.value : value1,value2
key2.value : 1
key2.value : 101
key2.value : 50
list1.value : Array ( [0] => value1 [1] => value1 [2] => value3 [3] => value2 [4] => value1 [5] => value3 [6] => value2 [7] => value1 [8] => value2 [9] => value2 )
hash1.value : Array ( [key3] => value3 [key2] => value2 [key1] => value1 )
set1.value : Array ( [0] => member2 [1] => member1 [2] => member3 )
3.3 Node.js support
1) Install the SCL repository
[root@node1 ~]# yum install yum-plugin-priorities -y
[root@node1 ~]# yum install centos-release-scl-rh centos-release-scl -y
[root@node1 ~]# sed -i -e "s/\]$/\]\npriority=10/g" /etc/yum.repos.d/CentOS-SCLo-scl.repo
[root@node1 ~]# sed -i -e "s/\]$/\]\npriority=10/g" /etc/yum.repos.d/CentOS-SCLo-scl-rh.repo
[root@node1 ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-SCLo-scl.repo
[root@node1 ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-SCLo-scl-rh.repo
2) Install and configure Node.js
[root@node1 ~]# yum --enablerepo=centos-sclo-rh install rh-nodejs12 -y
[root@node1 ~]# scl enable rh-nodejs12 bash
[root@node1 ~]# vi /etc/profile.d/rh-nodejs12.sh
source /opt/rh/rh-nodejs12/enable
export X_SCLS="`scl enable rh-nodejs12 'echo $X_SCLS'`"
3) Install and configure npm
[root@node1 ~]# yum --enablerepo=epel install npm -y
[root@node1 ~]# npm init
This utility will walk you through creating a package.json file. It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (root)     # this walkthrough accepts every default: just press Enter
version: (1.0.0)         # Enter
description:             # Enter
git repository:          # Enter
keywords:                # Enter
author:                  # Enter
license: (ISC)           # Enter
About to write to /root/package.json:

{
  "name": "root",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "redis": "^2.8.0"
  },
  "devDependencies": {},
  "description": ""
}

Is this OK? (yes)    # Enter
4) Install the redis client module
[root@node1 ~]# npm install redis
5) Create a test file
[root@node1 ~]# vim use-redis.js
var redis = require('redis');
var client = redis.createClient();

client.auth('password');
client.set('key01', 'value01');
client.get('key01', function (err, val) { console.log("key01.value :", val); });

client.append('key01', ',value02');
client.get('key01', function (err, val) { console.log("key01.value :", val); });

client.set('key02', 1);
client.get('key02', function (err, val) { console.log("key02.value :", val); });

client.incrby('key02', 100);
client.get('key02', function (err, val) { console.log("key02.value :", val); });

client.decrby('key02', 51);
client.get('key02', function (err, val) { console.log("key02.value :", val); });

client.rpush('list01', 'value01');
client.rpush('list01', 'value02');
client.lrange('list01', 0, -1, function (err, val) { console.log("list01.value :", val); });

client.hset("hash01", "key01", "value01");
client.hset("hash01", "key02", "value02");
client.hgetall('hash01', function (err, val) { console.log("hash01.value :", val); });

client.sadd("set01", "member01");
client.sadd("set01", "member02");
client.smembers('set01', function (err, val) { console.log("set01.value :", val); });

6) Test results
[root@node1 ~]# node use-redis.js
key01.value : value01
key01.value : value01,value02
key02.value : 1
key02.value : 101
key02.value : 50
list01.value : [ 'value01', 'value02', 'value01', 'value02' ]
hash01.value : { key01: 'value01', key02: 'value02' }
set01.value : [ 'member01', 'member02' ]
4. Redis Master-Slave Replication
1) Install Redis on all nodes
[root@node1 ~]# yum --enablerepo=epel install redis -y
[root@node2 ~]# yum --enablerepo=epel install redis -y
2) Configure the master node
[root@node1 ~]# vim /etc/redis.conf
......
# Line 61: set the listen address
bind 0.0.0.0
......
# Lines 430-431: set the following.
# min-slaves-to-write: if fewer than N healthy slaves are alive, the master
# refuses writes. This does not guarantee that N slaves receive every write,
# but it keeps the master from accepting writes (and risking data loss) when
# too few healthy slaves remain. A value of 0 disables the feature.
# min-slaves-max-lag: a slave counts as healthy only if its replication lag
# is under this many seconds.
min-slaves-to-write 1
min-slaves-max-lag 10
......
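The combined effect of min-slaves-to-write 1 and min-slaves-max-lag 10 can be sketched as a simple predicate over the slaves' lags (plain Python, an illustration of the rule rather than Redis source code; the function name is invented):

```python
def master_accepts_writes(slave_lags, min_slaves_to_write=1, max_lag=10):
    """Return True if the master should accept writes: at least
    min_slaves_to_write slaves have replication lag <= max_lag seconds.
    A min_slaves_to_write of 0 disables the check entirely."""
    if min_slaves_to_write == 0:
        return True
    healthy = sum(1 for lag in slave_lags if lag <= max_lag)
    return healthy >= min_slaves_to_write

print(master_accepts_writes([3]))    # one slave lagging 3s: writes allowed
print(master_accepts_writes([42]))   # only slave lags 42s: writes refused
print(master_accepts_writes([], 0))  # feature disabled: always allowed
```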
[root@node1 ~]# systemctl restart redis
3) Configure the slave node
[root@node2 ~]# vim /etc/redis.conf
......
# Line 61: set the listen address
bind 0.0.0.0
......
# Line 266: add the master server's IP address (or FQDN) and port
slaveof 192.168.10.11 6379
......
# Line 273: add the master server's authentication password
masterauth password
......
# Line 301: confirm the slave is read-only
slave-read-only yes
......

[root@node2 ~]# systemctl restart redis
4) Set firewall rules on the master and slave nodes
[root@node1 ~]# pssh -h host-list.txt -i 'firewall-cmd --add-port=6379/tcp --permanent'
[root@node1 ~]# pssh -h host-list.txt -i 'firewall-cmd --reload'
5) Configure SELinux on the master and slave nodes
(1) Master node
[root@node1 ~]# vim redis_repl.te
module redis_repl 1.0;

require {
        type redis_port_t;
        type redis_t;
        class tcp_socket name_connect;
}

#============= redis_t ==============
allow redis_t redis_port_t:tcp_socket name_connect;

[root@node1 ~]# checkmodule -m -M -o redis_repl.mod redis_repl.te
checkmodule: loading policy configuration from redis_repl.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 17) to redis_repl.mod
[root@node1 ~]# semodule_package --outfile redis_repl.pp --module redis_repl.mod
[root@node1 ~]# semodule -i redis_repl.pp
(2) Slave node
[root@node2 ~]# vim redis_repl.te
module redis_repl 1.0;

require {
        type redis_port_t;
        type redis_t;
        class tcp_socket name_connect;
}

#============= redis_t ==============
allow redis_t redis_port_t:tcp_socket name_connect;

[root@node2 ~]# checkmodule -m -M -o redis_repl.mod redis_repl.te
checkmodule: loading policy configuration from redis_repl.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 17) to redis_repl.mod
[root@node2 ~]# semodule_package --outfile redis_repl.pp --module redis_repl.mod
[root@node2 ~]# semodule -i redis_repl.pp
6) Verify replication status
[root@node2 ~]# redis-cli info Replication
# Replication
role:slave
master_host:192.168.10.11
master_port:6379
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:505
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

[root@node1 ~]# redis-cli -a password set key2 value2
OK
[root@node2 ~]# redis-cli get key2
"value2"
5. Implementing Sentinel
1) Complete the master-slave replication setup first
2) Configure SELinux on the master and slave nodes
[root@allnode ~]# vim redis_ha.te
module redis_ha 1.0;

require {
        type etc_t;
        type redis_t;
        class file write;
}

#============= redis_t ==============
allow redis_t etc_t:file write;

[root@allnode ~]# checkmodule -m -M -o redis_ha.mod redis_ha.te
checkmodule: loading policy configuration from redis_ha.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 17) to redis_ha.mod
[root@allnode ~]# semodule_package --outfile redis_ha.pp --module redis_ha.mod
[root@allnode ~]# semodule -i redis_ha.pp
3) Install and configure Sentinel
[root@node3 ~]# yum --enablerepo=epel install redis -y
[root@node3 ~]# vim /etc/redis-sentinel.conf
......
# Line 69: define the master to monitor.
# Format: sentinel monitor <any name> <master's IP> <master's port> <quorum>
# Quorum: the number of Sentinels that must consider a master unreachable
# before it is truly treated as failed and a failover is triggered.
sentinel monitor mymaster 192.168.10.11 6379 1
......
# Line 89: add the master's password
sentinel auth-pass mymaster password
......
# Line 98: confirm this is enabled.
# This is the number of milliseconds after which Sentinel considers a Redis
# instance failed. If an instance does not answer PING within this time, or
# replies with an error, that Sentinel marks it subjectively down. A single
# Sentinel marking an instance subjectively down does not by itself trigger
# failover: only when enough Sentinels agree is the instance marked
# objectively down, and only then does automatic failover run (30 seconds).
sentinel down-after-milliseconds mymaster 30000
......
# Line 106: confirm this is enabled.
# This is the maximum number of slaves that may resynchronize with the new
# master at the same time during a failover. With many slaves, the smaller
# this number, the longer the total resynchronization, and therefore the
# failover, takes.
sentinel parallel-syncs mymaster 1
......
[root@node3 ~]# systemctl enable --now redis-sentinel
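With a quorum of 1 as configured above, a single Sentinel's subjective-down verdict is enough to mark the master objectively down. The objective-down decision is simply a vote count against the quorum (illustrative Python, not Sentinel source; the function is invented for this sketch):

```python
def objectively_down(sentinel_votes, quorum):
    """True if enough Sentinels have marked the master subjectively down.
    sentinel_votes is one boolean per Sentinel; reaching the quorum marks
    the master objectively down and allows failover to start."""
    return sum(sentinel_votes) >= quorum

print(objectively_down([True], quorum=1))         # quorum 1: one vote suffices
print(objectively_down([False, True, False], 2))  # 1 of 3 agree: still up
print(objectively_down([True, True, False], 2))   # 2 of 3 agree: failover
```

In production a quorum of 1 with a single Sentinel is fragile; running three or more Sentinels with a majority quorum is the usual recommendation.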
4) Sentinel server SELinux configuration
[root@node3 ~]# vim redis_sentinel.te
module redis_sentinel 1.0;

require {
        type redis_port_t;
        type etc_t;
        type redis_t;
        class tcp_socket name_connect;
        class file write;
}

#============= redis_t ==============
allow redis_t redis_port_t:tcp_socket name_connect;
allow redis_t etc_t:file write;

[root@node3 ~]# checkmodule -m -M -o redis_sentinel.mod redis_sentinel.te
checkmodule: loading policy configuration from redis_sentinel.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 17) to redis_sentinel.mod
[root@node3 ~]# semodule_package --outfile redis_sentinel.pp --module redis_sentinel.mod
[root@node3 ~]# semodule -i redis_sentinel.pp
5) Test Sentinel
[root@node3 ~]# redis-cli -p 26379
# Get the master's address
127.0.0.1:26379> sentinel get-master-addr-by-name mymaster
1) "192.168.10.11"
2) "6379"
# Get detailed master information
127.0.0.1:26379> sentinel master mymaster
1) "name"
2) "mymaster"
3) "ip"
4) "192.168.10.11"
5) "port"
6) "6379"
......
# Get slave information
127.0.0.1:26379> sentinel slaves mymaster
1) 1) "name"
   2) "192.168.10.12:6379"
   3) "ip"
   4) "192.168.10.12"
   5) "port"
   6) "6379"
.....
6. Stress-Testing Redis with redis-benchmark
[root@node3 ~]# redis-benchmark -h node1.1000cc.net -p 6379
====== PING_INLINE ======
  100000 requests completed in 3.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

39.80% <= 1 milliseconds
99.71% <= 2 milliseconds
99.89% <= 3 milliseconds
99.94% <= 4 milliseconds
99.95% <= 5 milliseconds
99.98% <= 6 milliseconds
100.00% <= 7 milliseconds
31036.62 requests per second
====== PING_BULK ======
  100000 requests completed in 2.82 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

58.85% <= 1 milliseconds
100.00% <= 1 milliseconds
35435.86 requests per second

====== SET ======
  100000 requests completed in 2.66 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

83.01% <= 1 milliseconds
99.92% <= 2 milliseconds
99.98% <= 3 milliseconds
100.00% <= 3 milliseconds
37537.54 requests per second

====== GET ======
  100000 requests completed in 2.44 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

72.79% <= 1 milliseconds
99.90% <= 3 milliseconds
99.92% <= 4 milliseconds
100.00% <= 5 milliseconds
40983.61 requests per second

====== INCR ======
  100000 requests completed in 2.04 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.90% <= 1 milliseconds
99.95% <= 2 milliseconds
100.00% <= 2 milliseconds
48923.68 requests per second

====== LPUSH ======
  100000 requests completed in 2.64 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

71.53% <= 1 milliseconds
99.89% <= 2 milliseconds
99.90% <= 3 milliseconds
99.90% <= 4 milliseconds
99.99% <= 5 milliseconds
100.00% <= 5 milliseconds
37821.48 requests per second

====== RPUSH ======
  100000 requests completed in 2.70 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

78.50% <= 1 milliseconds
99.87% <= 2 milliseconds
99.90% <= 4 milliseconds
99.90% <= 5 milliseconds
99.93% <= 6 milliseconds
100.00% <= 7 milliseconds
37037.04 requests per second

====== LPOP ======
  100000 requests completed in 2.84 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

70.25% <= 1 milliseconds
99.91% <= 2 milliseconds
99.92% <= 3 milliseconds
99.94% <= 4 milliseconds
99.97% <= 9 milliseconds
100.00% <= 9 milliseconds
35174.11 requests per second

====== RPOP ======
  100000 requests completed in 2.73 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

78.35% <= 1 milliseconds
99.96% <= 2 milliseconds
99.97% <= 4 milliseconds
99.98% <= 5 milliseconds
100.00% <= 5 milliseconds
36630.04 requests per second

====== SADD ======
  100000 requests completed in 2.61 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

70.31% <= 1 milliseconds
99.96% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
38343.56 requests per second

====== HSET ======
  100000 requests completed in 2.59 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

65.97% <= 1 milliseconds
99.82% <= 2 milliseconds
99.90% <= 4 milliseconds
99.93% <= 5 milliseconds
99.96% <= 6 milliseconds
99.97% <= 7 milliseconds
100.00% <= 7 milliseconds
38624.95 requests per second

====== SPOP ======
  100000 requests completed in 2.67 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

71.48% <= 1 milliseconds
99.97% <= 2 milliseconds
99.98% <= 3 milliseconds
100.00% <= 3 milliseconds
37467.22 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 2.89 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

75.54% <= 1 milliseconds
99.94% <= 2 milliseconds
99.95% <= 5 milliseconds
99.99% <= 6 milliseconds
100.00% <= 6 milliseconds
34638.03 requests per second

====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 2.58 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

70.23% <= 1 milliseconds
99.94% <= 2 milliseconds
99.95% <= 4 milliseconds
99.96% <= 5 milliseconds
100.00% <= 5 milliseconds
38759.69 requests per second

====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 2.45 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

72.08% <= 1 milliseconds
99.96% <= 2 milliseconds
100.00% <= 2 milliseconds
40766.41 requests per second

====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 2.43 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

83.52% <= 1 milliseconds
99.97% <= 3 milliseconds
99.98% <= 4 milliseconds
100.00% <= 4 milliseconds
41084.63 requests per second

====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 2.18 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

89.58% <= 1 milliseconds
99.90% <= 3 milliseconds
99.93% <= 4 milliseconds
99.95% <= 8 milliseconds
100.00% <= 8 milliseconds
45808.52 requests per second

====== MSET (10 keys) ======
  100000 requests completed in 2.65 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

63.72% <= 1 milliseconds
99.71% <= 2 milliseconds
99.88% <= 3 milliseconds
99.95% <= 5 milliseconds
99.97% <= 6 milliseconds
100.00% <= 7 milliseconds
37693.18 requests per second

 
