KeepAlived Configuration Manual
Compiled, organized, and written by snow chuai --- 2020/2/2
Last updated: 2024-01-31
1. Implementing KeepAlived --- Preempt and Non-Preempt Modes
1) Preempt mode
1. In preempt mode, when the keepalived node holding the VIP fails, the VIP floats to the backup node; once the original master recovers, it takes the VIP back. This is keepalived's preempt mode, and preemption is the default behaviour.
2. keepalived works in preempt mode by default. In preempt mode, the master node's state is set to MASTER, the backup node's state is set to BACKUP, and the master's priority must be higher than the backup's.
2) Non-preempt mode
1. Both nodes set state to BACKUP (per the official documentation); one node is given a higher priority than the other, and the higher-priority node also sets the nopreempt parameter, which tells it not to preempt the VIP.
2. This way, when the higher-priority node fails, the VIP floats to the lower-priority node; when the higher-priority node recovers and comes back up, it does not take the VIP back, because of the nopreempt parameter.
3. If both nodes set state to MASTER, the cluster behaves in preempt mode.
3) Use cases
When the machines have identical specs and the business can tolerate the VIP floating repeatedly, preempt mode is fine;
when the VIP must not move unnecessarily, non-preempt mode is recommended.
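The preempt/nopreempt behaviour above can be sketched as a tiny election function. This is a hypothetical model for illustration only, not keepalived code (real VRRP also breaks priority ties by comparing IP addresses):

```python
def elect(nodes, current_master, preempt=True):
    """Pick which node holds the VIP.

    nodes maps name -> (priority, alive). With preempt=True the
    highest-priority live node always wins; with preempt=False
    (nopreempt) a live current master keeps the VIP even after a
    higher-priority node comes back up.
    """
    alive = {name: prio for name, (prio, up) in nodes.items() if up}
    if not alive:
        return None
    if not preempt and current_master in alive:
        return current_master
    return max(alive, key=lambda name: alive[name])

# Preempt mode: node3 (priority 100) fails, node4 (priority 50) takes
# the VIP, then node3 recovers.
nodes = {"node3": (100, True), "node4": (50, True)}
master = elect(nodes, None)                   # node3 holds the VIP
nodes["node3"] = (100, False)                 # node3 goes down
master = elect(nodes, master)                 # node4 takes over
nodes["node3"] = (100, True)                  # node3 recovers
print(elect(nodes, master, preempt=True))     # node3 (VIP reclaimed)
print(elect(nodes, master, preempt=False))    # node4 (VIP stays put)
```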
2. Implementing KeepAlived --- Preempt Mode
1) Topology
+----------+
| client |
+-----+----+
eth1|192.168.10.15/24
|
+------------+ | +------------+
| Backend1 |192.168.10.13|192.168.10.14| Backend2 |
| Web Server +-------------+-------------+ Web Server |
| node3 | | node4 |
+------------+eth0 eth0+------------+
2) Install KeepAlived
[root@node3 ~]# yum install keepalived httpd -y
[root@node4 ~]# yum install keepalived httpd -y
3) Configure and start KeepAlived
# Edit node3's configuration
[root@node3 ~]# cd /etc/keepalived/
[root@node3 keepalived]# ls
keepalived.conf
[root@node3 keepalived]# mv keepalived.conf keepalived.conf.bak
[root@node3 keepalived]# vim keepalived.conf
# Add the following content
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node3.1000cc.net
}
vrrp_instance websrv {
state MASTER # designate this node as the Master
interface eth0 # NIC to bind the VRRP instance to
virtual_router_id 51
priority 100 # priority; the higher number wins
advert_int 1 # advertisement (heartbeat) interval, in seconds
authentication {
auth_type PASS # authentication method
auth_pass 1111 # the password, here 1111
}
virtual_ipaddress {
192.168.10.250 # the VIP
}
}
[root@node3 ~]# systemctl enable --now keepalived.service
[root@node3 ~]# echo "node3.1000cc.net" > /var/www/html/index.html
[root@node3 ~]# systemctl enable --now httpd
# Edit node4's configuration
[root@node4 ~]# cd /etc/keepalived/
[root@node4 keepalived]# ls
keepalived.conf
[root@node4 keepalived]# mv keepalived.conf keepalived.conf.bak
[root@node4 keepalived]# vim keepalived.conf
# Add the following content
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node4.1000cc.net
}
vrrp_instance websrv {
state BACKUP # designate this node as the Backup
interface eth0
virtual_router_id 51
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.250
}
}
[root@node4 ~]# systemctl enable --now keepalived.service
[root@node4 ~]# echo "node4.1000cc.net" > /var/www/html/index.html
[root@node4 ~]# systemctl enable --now httpd
################################################## Error Summary ##################################################
1. If systemctl status keepalived shows errors such as:
VRRP_Instance(your instance name) ignoring received advertisment...
(your instance name): received an invalid passwd!
this means your virtual_router_id conflicts with another VRRP instance on the network
2. Fix
Change your virtual_router_id to a unique value
################################################## End of Summary ##################################################
4) Client tests
(1) Client test
# The client consistently receives node3's content
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
(2) Stop keepalived on node3 (to simulate an outage)
[root@node3 ~]# systemctl stop keepalived
(3) Client test
# The client is automatically served node4's content
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
(4) Start keepalived on node3
[root@node3 ~]# systemctl start keepalived
(5) Client test
# The client is automatically switched back to node3's content
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
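The failover in the transcript above can also be spotted programmatically. The helper below is purely illustrative (a hypothetical function, not part of the setup): it scans a sequence of responses and reports each point where the answering node changed, i.e. where the VIP moved:

```python
def failover_events(responses):
    """Return (index, old_node, new_node) for every response where the
    node answering the VIP differs from the previous response."""
    return [(i, responses[i - 1], responses[i])
            for i in range(1, len(responses))
            if responses[i] != responses[i - 1]]

# The preempt-mode transcript: node3 answers, fails over to node4,
# then node3 reclaims the VIP after recovering.
seq = ["node3"] * 3 + ["node4"] * 3 + ["node3"] * 3
print(failover_events(seq))  # [(3, 'node3', 'node4'), (6, 'node4', 'node3')]
```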
3. Implementing KeepAlived --- Non-Preempt Mode
1) Topology
+----------+
| client |
+-----+----+
eth1|192.168.10.15/24
|
+------------+ | +------------+
| Backend1 |192.168.10.13|192.168.10.14| Backend2 |
| Web Server +-------------+-------------+ Web Server |
| node3 | | node4 |
+------------+eth0 eth0+------------+
2) Install KeepAlived
[root@node3 ~]# yum install keepalived httpd -y
[root@node4 ~]# yum install keepalived httpd -y
3) Configure and start KeepAlived
# Edit node3's configuration
[root@node3 ~]# cd /etc/keepalived/
[root@node3 keepalived]# ls
keepalived.conf
[root@node3 keepalived]# mv keepalived.conf keepalived.conf.bak
[root@node3 keepalived]# vim keepalived.conf
# Add the following content
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node3.1000cc.net
}
vrrp_instance websrv {
state BACKUP
nopreempt # enable non-preempt mode
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.250
}
}
[root@node3 ~]# systemctl enable --now keepalived.service
[root@node3 ~]# echo "node3.1000cc.net" > /var/www/html/index.html
[root@node3 ~]# systemctl enable --now httpd
# Edit node4's configuration
[root@node4 ~]# cd /etc/keepalived/
[root@node4 keepalived]# ls
keepalived.conf
[root@node4 keepalived]# mv keepalived.conf keepalived.conf.bak
[root@node4 keepalived]# vim keepalived.conf
# Add the following content
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node4.1000cc.net
}
vrrp_instance websrv {
state BACKUP
# With multiple keepalived nodes, set nopreempt on every node; the lowest-priority node may omit it
nopreempt
interface eth0
virtual_router_id 51
# lower priority
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.250
}
}
[root@node4 ~]# systemctl enable --now keepalived.service
[root@node4 ~]# echo "node4.1000cc.net" > /var/www/html/index.html
[root@node4 ~]# systemctl enable --now httpd
4) Client tests
(1) Client test
# The client consistently receives node3's content
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
(2) Stop keepalived on node3 (to simulate an outage)
[root@node3 ~]# systemctl stop keepalived
(3) Client test
# The client is automatically served node4's content --- the VIP has floated
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
(4) Start keepalived on node3
[root@node3 ~]# systemctl start keepalived
(5) Client test
# The client still receives node4's content --- the VIP did not float back
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
4. KeepAlived + LVS --- AP (Active/Passive) Mode
1) Topology
+----------+
| client |
+-----+----+
eth1|192.168.10.15/24
|
|-----------VIP:192.168.10.250 ------------|
+------+------+ | +-------+-----+
| Keepalived1 |192.168.10.11|192.168.10.12| KeepAlived2 |
| node1 +-------------+-------------+ node2 |
+-------------+eth0 | eth0+-------------+
|
+------------+ | +------------+
| Backend1 |192.168.10.13|192.168.10.14| Backend2 |
| Web Server +-------------+-------------+ Web Server |
| node3 | | node4 |
+------------+eth0 eth0+------------+
2) Install KeepAlived and LVS
(1) node1 configuration
[root@node1 ~]# yum install ipvsadm keepalived -y
[root@node1 ~]# touch /etc/sysconfig/ipvsadm
[root@node1 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from root@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1.1000cc.net
}
vrrp_instance websrv {
# make this node the Master
state MASTER
interface eth0
virtual_router_id 51
# set the priority to 100
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.250
}
}
virtual_server 192.168.10.250 80 {
delay_loop 3 # health-check polling interval, in seconds
lvs_sched rr # LVS scheduling algorithm (rr = round robin)
lvs_method DR # LVS forwarding mode (direct routing)
protocol TCP # protocol type
real_server 192.168.10.13 80 { # real server IP and port
weight 1
HTTP_GET {
url {
path / # path to check
status_code 200 # a 200 response means the real server is alive
}
connect_timeout 3 # connect timeout to the real server, in seconds
}
}
real_server 192.168.10.14 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
}
}
}
[root@node1 ~]# systemctl enable --now ipvsadm keepalived
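The HTTP_GET checker configured above is essentially an HTTP probe with a timeout: fetch the path, treat a matching status code as healthy, and treat any connection failure or timeout as unhealthy. A rough Python equivalent (an illustrative sketch, not what keepalived actually executes):

```python
import socket
import urllib.error
import urllib.request

def http_healthy(host, port=80, path="/", timeout=3):
    """Probe http://host:port/path; healthy only on a 200 response.

    A refused connection, timeout, or non-200 status all count as
    unhealthy, mirroring HTTP_GET's status_code/connect_timeout checks.
    """
    try:
        resp = urllib.request.urlopen(
            f"http://{host}:{port}{path}", timeout=timeout)
        return resp.status == 200
    except (urllib.error.URLError, socket.timeout, OSError):
        return False

# A port with nothing listening is reported unhealthy.
print(http_healthy("127.0.0.1", port=9))  # False (connection refused)
```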
(2) node2 configuration
[root@node2 ~]# yum install ipvsadm keepalived -y
[root@node2 ~]# touch /etc/sysconfig/ipvsadm
[root@node2 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@node2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from root@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node2.1000cc.net
}
vrrp_instance websrv {
# make this node the Backup
state BACKUP
interface eth0
virtual_router_id 51
# set the priority to 50
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.250
}
}
virtual_server 192.168.10.250 80 {
delay_loop 3
lvs_sched rr
lvs_method DR
protocol TCP
real_server 192.168.10.13 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
}
}
real_server 192.168.10.14 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
}
}
}
[root@node2 ~]# systemctl enable --now ipvsadm keepalived
(3) node3 configuration
[root@node3 ~]# yum install httpd -y
[root@node3 ~]# echo "node3.1000cc.net" > /var/www/html/index.html
[root@node3 ~]# systemctl enable --now httpd
[root@node3 ~]# vim lvs-real-dr.sh
#!/bin/bash
VIP=192.168.10.250
# Bind the VIP to a loopback alias with a /32 mask so this real server
# accepts packets addressed to the VIP
ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
/sbin/route add -host $VIP dev lo:0
# Suppress ARP replies/announcements for the VIP so only the director
# answers ARP for it (required by LVS DR mode)
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
[root@node3 ~]# chmod 700 lvs-real-dr.sh
[root@node3 ~]# ./lvs-real-dr.sh
(4) node4 configuration
[root@node4 ~]# yum install httpd -y
[root@node4 ~]# echo "node4.1000cc.net" > /var/www/html/index.html
[root@node4 ~]# systemctl enable --now httpd
[root@node4 ~]# vim lvs-real-dr.sh
#!/bin/bash
VIP=192.168.10.250
# Bind the VIP to a loopback alias with a /32 mask so this real server
# accepts packets addressed to the VIP
ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
/sbin/route add -host $VIP dev lo:0
# Suppress ARP replies/announcements for the VIP so only the director
# answers ARP for it (required by LVS DR mode)
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
[root@node4 ~]# chmod 700 lvs-real-dr.sh
[root@node4 ~]# ./lvs-real-dr.sh
(5) Client test
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
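The alternating responses above are exactly what lvs_sched rr produces: each new connection goes to the next real server in the list. The rotation can be sketched in a few lines (illustrative only; the real scheduling happens inside the kernel's IPVS module):

```python
from itertools import cycle

# Round-robin over the two real servers configured above.
real_servers = cycle(["192.168.10.13", "192.168.10.14"])

# Four successive "connections" alternate between the two backends,
# matching the curl output in the test.
picks = [next(real_servers) for _ in range(4)]
print(picks)
# ['192.168.10.13', '192.168.10.14', '192.168.10.13', '192.168.10.14']
```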
(6) Stop httpd on node3, then test from the client again
[root@node3 ~]# systemctl stop httpd
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
(7) Restart httpd on node3, then test from the client again
[root@node3 ~]# systemctl start httpd
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
5. KeepAlived + Apache LB --- AA (Active/Active) Mode
1) Topology
+----------+
| client |
+-----+----+
eth1|192.168.10.15/24
|
|-----------VIP:192.168.10.250 ------------|
+------+------+ | +-------+-----+
| Keepalived1 |192.168.10.11|192.168.10.12| KeepAlived2 |
| Apache LB +-------------+-------------+ Apache LB |
| node1 | | | node2 |
+-------------+eth0 | eth0+-------------+
|-----------VIP:192.168.10.251 ------------|
|
+------------+ | +------------+
| Backend1 |192.168.10.13|192.168.10.14| Backend2 |
| Web Server +-------------+-------------+ Web Server |
| node3 | | node4 |
+------------+eth0 eth0+------------+
2) Install and configure the Apache load balancers
(1) Install the packages on Node1 and Node2
[root@node1 ~]# yum install psmisc keepalived httpd -y
[root@node2 ~]# yum install psmisc keepalived httpd -y
(2) Configure Apache load balancing on Node1 and Node2
# Node1 configuration
# Confirm the mod_proxy modules are present and loaded
[root@node1 ~]# grep "mod_proxy" /etc/httpd/conf.modules.d/00-proxy.conf
[root@node1 ~]# vim /etc/httpd/conf.d/1000cc-proxy.conf
<IfModule mod_proxy.c>
ProxyRequests Off
<Proxy *>
Require all granted
</Proxy>
ProxyPass / balancer://1000cc stickysession=JSESSIONID nofailover=Off
<proxy balancer://1000cc>
BalancerMember http://node3.1000cc.net/ loadfactor=1
BalancerMember http://node4.1000cc.net/ loadfactor=1
ProxySet lbmethod=bybusyness
</proxy>
</IfModule>
[root@node1 ~]# systemctl enable --now httpd
# Node2 configuration
[root@node2 ~]# grep "mod_proxy" /etc/httpd/conf.modules.d/00-proxy.conf
[root@node2 ~]# vim /etc/httpd/conf.d/1000cc-proxy.conf
<IfModule mod_proxy.c>
ProxyRequests Off
<Proxy *>
Require all granted
</Proxy>
ProxyPass / balancer://1000cc stickysession=JSESSIONID nofailover=Off
<proxy balancer://1000cc>
BalancerMember http://node3.1000cc.net/ loadfactor=1
BalancerMember http://node4.1000cc.net/ loadfactor=1
ProxySet lbmethod=bybusyness
</proxy>
</IfModule>
[root@node2 ~]# systemctl enable --now httpd
(3) Configure Apache on Node3/Node4
[root@node3 ~]# yum install httpd -y
[root@node3 ~]# echo "node3.1000cc.net" > /var/www/html/index.html
[root@node3 ~]# systemctl enable --now httpd
[root@node4 ~]# yum install httpd -y
[root@node4 ~]# echo "node4.1000cc.net" > /var/www/html/index.html
[root@node4 ~]# systemctl enable --now httpd
(4) Confirm both load balancers are working
[root@client ~]# curl 192.168.10.11
node3.1000cc.net
[root@client ~]# curl 192.168.10.11
node4.1000cc.net
[root@client ~]# curl 192.168.10.12
node3.1000cc.net
[root@client ~]# curl 192.168.10.12
node4.1000cc.net
3) Configure KeepAlived for the load balancers
# Node1: KeepAlived configuration and httpd check script
[root@node1 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id node1.1000cc.net
script_user root
enable_script_security
}
vrrp_script check_apache {
script "/etc/keepalived/check_apache.sh"
interval 2 # health-check interval, in seconds
weight 20 # priority adjustment applied on script failure
fall 3 # consecutive failed checks before the service is considered down
rise 2 # consecutive successful checks before it is considered up again
}
vrrp_instance webproxy1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.250
}
track_script {
check_apache
}
}
vrrp_instance webproxy2 {
state BACKUP
interface eth0
virtual_router_id 52
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.251
}
track_script {
check_apache
}
}
[root@node1 ~]# vim /etc/keepalived/check_apache.sh
#!/bin/bash
# If no httpd process is running, stop keepalived so the VIPs fail over
httpdok=$(pstree | grep httpd | cut -d - -f 2)
if [ -z "$httpdok" ]
then
systemctl stop keepalived
fi
[root@node1 ~]# chmod 755 /etc/keepalived/check_apache.sh
[root@node1 ~]# systemctl enable --now keepalived
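The interval/fall/rise parameters above implement a debounced state machine: a tracked service is only marked down after `fall` consecutive failed checks, and only marked up again after `rise` consecutive successes. The sketch below (an illustrative model, not keepalived source) replays that logic over a series of check results:

```python
def track_health(results, fall=3, rise=2):
    """Replay vrrp_script-style debouncing over boolean check results.

    Returns the up/down state after each check: down only after `fall`
    consecutive failures, up again only after `rise` consecutive
    successes.
    """
    states = []
    up = True        # keepalived assumes the script is healthy at start
    fails = 0
    successes = 0
    for ok in results:
        if ok:
            successes += 1
            fails = 0
            if not up and successes >= rise:
                up = True
        else:
            fails += 1
            successes = 0
            if up and fails >= fall:
                up = False
        states.append(up)
    return states

# Two failures are tolerated; the third marks the service down
# (fall=3), and two successes in a row bring it back (rise=2).
print(track_health([False, False, True, False, False, False, True, True]))
# [True, True, True, True, True, False, False, True]
```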
# Node2: KeepAlived configuration and httpd check script
[root@node2 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@node2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id node2.1000cc.net
script_user root
enable_script_security
}
vrrp_script check_apache {
script "/etc/keepalived/check_apache.sh"
interval 2
weight 20
fall 3
rise 2
}
vrrp_instance webproxy1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.250
}
track_script {
check_apache
}
}
vrrp_instance webproxy2 {
state MASTER
interface eth0
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.251
}
track_script {
check_apache
}
}
[root@node2 ~]# vim /etc/keepalived/check_apache.sh
#!/bin/bash
# If no httpd process is running, stop keepalived so the VIPs fail over
httpdok=$(pstree | grep httpd | cut -d - -f 2)
if [ -z "$httpdok" ]
then
systemctl stop keepalived
fi
[root@node2 ~]# chmod 755 /etc/keepalived/check_apache.sh
[root@node2 ~]# systemctl enable --now keepalived
4) Confirm the VIPs are bound on Node1 and Node2
[root@node1 ~]# ip a s | grep 192.168.10.250
inet 192.168.10.250/32 scope global eth0
[root@node2 ~]# ip a s | grep 192.168.10.251
inet 192.168.10.251/32 scope global eth0
5) Client test
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.251
node3.1000cc.net
[root@client ~]# curl 192.168.10.251
node4.1000cc.net
6) Kill httpd on Node2 to verify the active/active failover
[root@node2 ~]# killall -9 httpd
7) Confirm keepalived on node2 has stopped
[root@node2 ~]# systemctl is-active keepalived
inactive
8) Confirm node1 now holds both VIPs
[root@node1 ~]# ip a s | grep 192.168.10.25[0-1]
inet 192.168.10.250/32 scope global eth0
inet 192.168.10.251/32 scope global eth0
9) Client test
[root@client ~]# curl 192.168.10.250
node3.1000cc.net
[root@client ~]# curl 192.168.10.250
node4.1000cc.net
[root@client ~]# curl 192.168.10.251
node3.1000cc.net
[root@client ~]# curl 192.168.10.251
node4.1000cc.net
If this guide helped you, feel free to leave a tip. ^-^