Pacemaker Configuration Manual

Compiled, organized, and written by snow chuai --- 2020/2/2

Last updated --- 2021/08/27


1. Install and Start Pacemaker
1) Topology
                                      |
                             VIP:192.168.10.250
|----------------------|              |              |----------------------|
|   node1.1000cc.net   |192.168.10.11 | 192.168.10.12|   node2.1000cc.net   |
|       WEB Server     +--------------+--------------+      WEB Server      |
|----------------------|                             |----------------------|
2) Install Pacemaker on all nodes
[root@node1 ~]# pssh -h host-list.txt -i 'yum install pacemaker pcs -y'
[1] 21:59:08 [SUCCESS] root@node1.1000cc.net
[2] 21:59:23 [SUCCESS] root@node2.1000cc.net

[root@node1 ~]# pssh -h host-list.txt -i 'systemctl enable --now pcsd'
[1] 22:00:52 [SUCCESS] root@node2.1000cc.net
[2] 22:00:53 [SUCCESS] root@node1.1000cc.net
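The host-list.txt file passed to pssh is not shown in this manual; it is assumed here to simply list the two cluster nodes, one hostname per line, for example:

[root@node1 ~]# cat host-list.txt
node1.1000cc.net
node2.1000cc.net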
2. Configure the Pacemaker Cluster
1) Set the cluster administrator (hacluster) password
[root@node1 ~]# pssh -h host-list.txt -i 'echo 123456 | passwd --stdin hacluster'
[1] 22:05:55 [SUCCESS] root@node1.1000cc.net
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
[2] 22:05:55 [SUCCESS] root@node2.1000cc.net
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
2) Configure the firewall
[root@node1 ~]# pssh -h host-list.txt -i 'firewall-cmd --add-service=high-availability --permanent'
[1] 22:05:55 [SUCCESS] root@node1.1000cc.net
success
[2] 22:05:55 [SUCCESS] root@node2.1000cc.net
success

[root@node1 ~]# pssh -h host-list.txt -i 'firewall-cmd --reload'
[1] 22:05:58 [SUCCESS] root@node1.1000cc.net
success
[2] 22:05:58 [SUCCESS] root@node2.1000cc.net
success
3) Authorize the cluster nodes
[root@node1 ~]# pcs cluster auth node1.1000cc.net node2.1000cc.net
Username: hacluster
Password:                 # enter the hacluster account password
node2.1000cc.net: Authorized
node1.1000cc.net: Authorized
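As a non-interactive alternative, the pcs version shipped with CentOS/RHEL 7 also accepts the credentials directly on the command line (a sketch, assuming the same hacluster password as above):

[root@node1 ~]# pcs cluster auth node1.1000cc.net node2.1000cc.net -u hacluster -p 123456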
4) Create the cluster (cluster name: ha-cluster)
[root@node1 ~]# pcs cluster setup --name ha-cluster node1.1000cc.net node2.1000cc.net
Destroying cluster on nodes: node1.1000cc.net, node2.1000cc.net...
node1.1000cc.net: Stopping Cluster (pacemaker)...
node2.1000cc.net: Stopping Cluster (pacemaker)...
node1.1000cc.net: Successfully destroyed cluster
node2.1000cc.net: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'node1.1000cc.net', 'node2.1000cc.net'
node1.1000cc.net: successful distribution of the file 'pacemaker_remote authkey'
node2.1000cc.net: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
node1.1000cc.net: Succeeded
node2.1000cc.net: Succeeded

Synchronizing pcsd certificates on nodes node1.1000cc.net, node2.1000cc.net...
node2.1000cc.net: Success
node1.1000cc.net: Success
Restarting pcsd on the nodes in order to reload the certificates...
node2.1000cc.net: Success
node1.1000cc.net: Success
5) Start the cluster
[root@node1 ~]# pcs cluster start --all
node1.1000cc.net: Starting Cluster (corosync)...
node2.1000cc.net: Starting Cluster (corosync)...
node2.1000cc.net: Starting Cluster (pacemaker)...
node1.1000cc.net: Starting Cluster (pacemaker)...
6) Enable the cluster to start at boot
[root@node1 ~]# pcs cluster enable --all
node1.1000cc.net: Cluster Enabled
node2.1000cc.net: Cluster Enabled
7) Verify the cluster status
[root@node1 ~]# pcs status cluster
Cluster Status:
 Stack: corosync
 Current DC: node1.1000cc.net (version 1.1.20-5.el7_7.2-3c4c782f70) - partition with quorum
 Last updated: Sun Feb 2 22:12:53 2020
 Last change: Sun Feb 2 22:11:51 2020 by hacluster via crmd on node1.1000cc.net
 2 nodes configured
 0 resources configured

PCSD Status:
  node1.1000cc.net: Online
  node2.1000cc.net: Online
3. Add Cluster Resources
1) Install httpd on all nodes, but do not start the httpd service
2) Edit the Apache configuration on all nodes as follows:
(1) node1 configuration
[root@node1 ~]# vim /etc/httpd/conf.d/server_status.conf
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Require local
</Location>

[root@node1 ~]# echo "node1.1000cc.net" > /var/www/html/index.html
(2) node2 configuration
[root@node2 ~]# vim /etc/httpd/conf.d/server_status.conf
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Require local
</Location>

[root@node2 ~]# echo "node2.1000cc.net" > /var/www/html/index.html
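Before handing httpd over to the cluster, it may be worth checking the new configuration syntax on both nodes with apachectl (a sketch; the expected result is 'Syntax OK' from each node):

[root@node1 ~]# pssh -h host-list.txt -i 'apachectl configtest'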
3) Configure cluster properties and the VIP
[root@node1 ~]# pcs property set stonith-enabled=false                    # disable STONITH
[root@node1 ~]# pcs property set no-quorum-policy=ignore                  # ignore loss of quorum
[root@node1 ~]# pcs property set default-resource-stickiness="INFINITY"   # keep resources on their current node (prevent automatic fail-back)
[root@node1 ~]# pcs property list                                         # show the settings
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: ha-cluster
 dc-version: 1.1.20-5.el7_7.2-3c4c782f70
 default-resource-stickiness: INFINITY
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false
[root@node1 ~]# pcs resource create VIP ocf:heartbeat:IPaddr2 ip=192.168.10.250 cidr_netmask=24 op monitor interval=30s    # create the VIP resource
[root@node1 ~]# pcs status resources                                      # show resource status
 VIP    (ocf::heartbeat:IPaddr2):       Started node1.1000cc.net
4) Add the httpd resource
[root@node1 ~]# pcs resource create Web-Cluster ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://127.0.0.1/server-status" op monitor interval=1min
[root@node1 ~]# pcs constraint colocation add Web-Cluster with VIP INFINITY
[root@node1 ~]# pcs constraint order VIP then Web-Cluster                 # start order: VIP first, then Web-Cluster
Adding VIP Web-Cluster (kind: Mandatory) (Options: first-action=start then-action=start)
[root@node1 ~]# pcs constraint                                            # show the constraints
Location Constraints:
Ordering Constraints:
  start VIP then start Web-Cluster (kind:Mandatory)
Colocation Constraints:
  Web-Cluster with VIP (score:INFINITY)
Ticket Constraints:
5) Test from a client
[root@node5 ~]# curl 192.168.10.250
node1.1000cc.net
[root@node5 ~]# curl 192.168.10.250
node1.1000cc.net
6) Stop the cluster on node1, then test again from the client
[root@node1 ~]# pcs cluster stop node1.1000cc.net
node1.1000cc.net: Stopping Cluster (pacemaker)...
node1.1000cc.net: Stopping Cluster (corosync)...

[root@node5 ~]# curl 192.168.10.250
node2.1000cc.net
[root@node5 ~]# curl 192.168.10.250
node2.1000cc.net
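To finish the test, node1 can be brought back into the cluster. Because default-resource-stickiness is set to INFINITY, the resources are expected to stay on node2 rather than fail back automatically (a sketch):

[root@node1 ~]# pcs cluster start node1.1000cc.net
[root@node1 ~]# pcs status resources      # VIP and Web-Cluster should still be running on node2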
4. CLVM+GFS2
4.1 Configure the Pacemaker Cluster
[root@node1 ~]# pssh -h host-list.txt -i 'yum install pacemaker pcs -y'
[root@node1 ~]# pssh -h host-list.txt -i 'systemctl enable --now pcsd'
[root@node1 ~]# pssh -h host-list.txt -i 'echo 123456 | passwd --stdin hacluster'
[root@node1 ~]# pcs cluster auth node1.1000cc.net node2.1000cc.net
Username: hacluster
Password:                 # enter the hacluster account password
node2.1000cc.net: Authorized
node1.1000cc.net: Authorized
[root@node1 ~]# pcs cluster setup --name ha-cluster node1.1000cc.net node2.1000cc.net
Destroying cluster on nodes: node1.1000cc.net, node2.1000cc.net
node1.1000cc.net: Stopping Cluster (pacemaker)...
node2.1000cc.net: Stopping Cluster (pacemaker)...
node1.1000cc.net: Successfully destroyed cluster
node2.1000cc.net: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'node1.1000cc.net', 'node2.1000cc.net'
node1.1000cc.net: successful distribution of the file 'pacemaker_remote authkey'
node2.1000cc.net: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
node1.1000cc.net: Succeeded
node2.1000cc.net: Succeeded

Synchronizing pcsd certificates on nodes node1.1000cc.net, node2.1000cc.net...
node2.1000cc.net: Success
node1.1000cc.net: Success
Restarting pcsd on the nodes in order to reload the certificates...
node2.1000cc.net: Success
node1.1000cc.net: Success
[root@node1 ~]# pcs cluster start --all
node1.1000cc.net: Starting Cluster (corosync)...
node2.1000cc.net: Starting Cluster (corosync)...
node1.1000cc.net: Starting Cluster (pacemaker)...
node2.1000cc.net: Starting Cluster (pacemaker)...

[root@node1 ~]# pcs cluster enable --all
node1.1000cc.net: Cluster Enabled
node2.1000cc.net: Cluster Enabled

[root@node1 ~]# pcs status cluster
Cluster Status:
 Stack: corosync
 Current DC: node2.1000cc.net (version 1.1.20-5.el7_7.2-3c4c782f70) - partition with quorum
 Last updated: Mon Feb 3 15:09:14 2020
 Last change: Mon Feb 3 15:09:06 2020 by hacluster via crmd on node2.1000cc.net
 2 nodes configured
 0 resources configured

PCSD Status:
  node2.1000cc.net: Online
  node1.1000cc.net: Online

[root@node1 ~]# pcs status corosync
Membership information
----------------------
    Nodeid      Votes Name
         1          1 node1.1000cc.net (local)
         2          1 node2.1000cc.net
4.2 Configure iSCSI
1) Topology
                          |----------------------|
                          |   node3.1000cc.net   |
                          |      iSCSI Target    |
                          |     192.168.10.13    |
                          |-----------+----------|
                                      |
|----------------------|              |              |----------------------|
|   node1.1000cc.net   |              |              |   node2.1000cc.net   |
|     192.168.10.11    +--------------+--------------+     192.168.10.12    |
|----------------------|                             |----------------------|
2) Install and configure the iSCSI Target
[root@node3 ~]# yum install targetcli -y
[root@node3 ~]# mkdir /iscsi
[root@node3 ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb49
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> cd /backstores/fileio
/backstores/fileio> create disk1 /iscsi/disk1 3G
Created fileio disk1 with size 3221225472

/backstores/fileio> create disk2 /iscsi/disk2 3G
Created fileio disk2 with size 3221225472

/backstores/fileio> cd /iscsi

/iscsi> create iqn.2020-02.net.1000cc:fence
Created target iqn.2020-02.net.1000cc:fence.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

/iscsi> create iqn.2020-02.net.1000cc:data
Created target iqn.2020-02.net.1000cc:data.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2020-02.net.1000cc:data/tpg1/luns
/iscsi/iqn.20...ata/tpg1/luns> create /backstores/fileio/disk1
Created LUN 0.

/iscsi/iqn.20...ata/tpg1/luns> cd ../acls

/iscsi/iqn.20...ata/tpg1/acls> create iqn.2020-02.net.1000cc:node1
Created Node ACL for iqn.2020-02.net.1000cc:node1
Created mapped LUN 0.

/iscsi/iqn.20...ata/tpg1/acls> cd iqn.2020-02.net.1000cc:node1/
/iscsi/iqn.20....1000cc:node1> set auth userid=snow
Parameter userid is now 'snow'.
/iscsi/iqn.20....1000cc:node1> set auth password=123456
Parameter password is now '123456'.

/iscsi/iqn.20....1000cc:node1> cd ../
/iscsi/iqn.20...ata/tpg1/acls> create iqn.2020-02.net.1000cc:node2
Created Node ACL for iqn.2020-02.net.1000cc:node2
Created mapped LUN 0.

/iscsi/iqn.20...ata/tpg1/acls> cd iqn.2020-02.net.1000cc:node2/
/iscsi/iqn.20....1000cc:node2> set auth userid=snow
Parameter userid is now 'snow'.
/iscsi/iqn.20....1000cc:node2> set auth password=123456
Parameter password is now '123456'.
/iscsi/iqn.20...ata/tpg1/acls> cd /iscsi/iqn.2020-02.net.1000cc:fence/tpg1/luns
/iscsi/iqn.20...nce/tpg1/luns> create /backstores/fileio/disk2
Created LUN 0.

/iscsi/iqn.20...nce/tpg1/luns> cd ../acls
/iscsi/iqn.20...nce/tpg1/acls> create iqn.2020-02.net.1000cc:node1
Created Node ACL for iqn.2020-02.net.1000cc:node1
Created mapped LUN 0.

/iscsi/iqn.20...nce/tpg1/acls> cd iqn.2020-02.net.1000cc:node1/
/iscsi/iqn.20....1000cc:node1> set auth userid=snow
Parameter userid is now 'snow'.
/iscsi/iqn.20....1000cc:node1> set auth password=123456
Parameter password is now '123456'.

/iscsi/iqn.20....1000cc:node1> cd ../

/iscsi/iqn.20...nce/tpg1/acls> create iqn.2020-02.net.1000cc:node2
Created Node ACL for iqn.2020-02.net.1000cc:node2
Created mapped LUN 0.

/iscsi/iqn.20...nce/tpg1/acls> cd iqn.2020-02.net.1000cc:node2/
/iscsi/iqn.20....1000cc:node2> set auth userid=snow
Parameter userid is now 'snow'.
/iscsi/iqn.20....1000cc:node2> set auth password=123456
Parameter password is now '123456'.
/> ls
o- / ....................................................................... [...]
  o- backstores ............................................................ [...]
  | o- block ................................................ [Storage Objects: 0]
  | o- fileio ............................................... [Storage Objects: 2]
  | | o- disk1 ...................... [/iscsi/disk1 (3.0GiB) write-back activated]
  | | | o- alua ................................................. [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ..................... [ALUA state: Active/optimized]
  | | o- disk2 ...................... [/iscsi/disk2 (3.0GiB) write-back activated]
  | |   o- alua ................................................. [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ..................... [ALUA state: Active/optimized]
  | o- pscsi ................................................ [Storage Objects: 0]
  | o- ramdisk .............................................. [Storage Objects: 0]
  o- iscsi .......................................................... [Targets: 2]
  | o- iqn.2020-02.net.1000cc:data ..................................... [TPGs: 1]
  | | o- tpg1 ............................................. [no-gen-acls, no-auth]
  | |   o- acls ........................................................ [ACLs: 2]
  | |   | o- iqn.2020-02.net.1000cc:node1 ....................... [Mapped LUNs: 1]
  | |   | | o- mapped_lun0 .............................. [lun0 fileio/disk1 (rw)]
  | |   | o- iqn.2020-02.net.1000cc:node2 ....................... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 .............................. [lun0 fileio/disk1 (rw)]
  | |   o- luns ........................................................ [LUNs: 1]
  | |   | o- lun0 ............... [fileio/disk1 (/iscsi/disk1) (default_tg_pt_gp)]
  | |   o- portals .................................................. [Portals: 1]
  | |     o- 0.0.0.0:3260 ................................................... [OK]
  | o- iqn.2020-02.net.1000cc:fence .................................... [TPGs: 1]
  |   o- tpg1 ............................................. [no-gen-acls, no-auth]
  |     o- acls ........................................................ [ACLs: 2]
  |     | o- iqn.2020-02.net.1000cc:node1 ....................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 .............................. [lun0 fileio/disk2 (rw)]
  |     | o- iqn.2020-02.net.1000cc:node2 ....................... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 .............................. [lun0 fileio/disk2 (rw)]
  |     o- luns ........................................................ [LUNs: 1]
  |     | o- lun0 ............... [fileio/disk2 (/iscsi/disk2) (default_tg_pt_gp)]
  |     o- portals .................................................. [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................... [OK]
  o- loopback ....................................................... [Targets: 0]
/> exit
Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json
[root@node3 ~]# systemctl enable --now target.service
[root@node3 ~]# netstat -lantp | grep 3260
tcp        0      0 192.168.10.13:3260      0.0.0.0:*               LISTEN      -
tcp        0      0 192.168.10.23:3260      0.0.0.0:*               LISTEN      -
3) Configure the iSCSI Initiator (on both node1 and node2)
[root@node1 ~]# pssh -h host-list.txt -i 'yum install iscsi-initiator-utils -y'
[root@node1 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2020-02.net.1000cc:node1

[root@node1 ~]# vim /etc/iscsi/iscsid.conf    # change lines 57, 61, and 62 as follows
node.session.auth.authmethod = CHAP
node.session.auth.username = snow
node.session.auth.password = 123456
[root@node2 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2020-02.net.1000cc:node2

[root@node2 ~]# vim /etc/iscsi/iscsid.conf    # change lines 57, 61, and 62 as follows
node.session.auth.authmethod = CHAP
node.session.auth.username = snow
node.session.auth.password = 123456
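If iscsid was already running when initiatorname.iscsi and iscsid.conf were edited, restarting the service on both initiators ensures the new initiator name and CHAP settings are picked up before discovery (a sketch):

[root@node1 ~]# pssh -h host-list.txt -i 'systemctl restart iscsid'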
4) Connect to the iSCSI Target
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p node3.1000cc.net
192.168.10.13:3260,1 iqn.2020-02.net.1000cc:fence
192.168.10.13:3260,1 iqn.2020-02.net.1000cc:data

[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p node3.1000cc.net
192.168.10.13:3260,1 iqn.2020-02.net.1000cc:fence
192.168.10.13:3260,1 iqn.2020-02.net.1000cc:data

[root@node1 ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2020-02.net.1000cc:data, portal: 192.168.10.13,3260] (multiple)
Logging in to [iface: default, target: iqn.2020-02.net.1000cc:fence, portal: 192.168.10.13,3260] (multiple)
Login to [iface: default, target: iqn.2020-02.net.1000cc:data, portal: 192.168.10.13,3260] successful.
Login to [iface: default, target: iqn.2020-02.net.1000cc:fence, portal: 192.168.10.13,3260] successful.

[root@node2 ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2020-02.net.1000cc:data, portal: 192.168.10.13,3260] (multiple)
Logging in to [iface: default, target: iqn.2020-02.net.1000cc:fence, portal: 192.168.10.13,3260] (multiple)
Login to [iface: default, target: iqn.2020-02.net.1000cc:data, portal: 192.168.10.13,3260] successful.
Login to [iface: default, target: iqn.2020-02.net.1000cc:fence, portal: 192.168.10.13,3260] successful.
[root@node1 ~]# pssh -h host-list.txt -i 'lsblk | grep sd'
[1] 14:58:04 [SUCCESS] root@node1.1000cc.net
sda   8:0    0   3G  0 disk
sdb   8:16   0   3G  0 disk
[2] 14:58:04 [SUCCESS] root@node2.1000cc.net
sda   8:0    0   3G  0 disk
sdb   8:16   0   3G  0 disk
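The iSCSI sessions themselves can also be listed; each node should show one session for the data target and one for the fence target (a sketch):

[root@node1 ~]# pssh -h host-list.txt -i 'iscsiadm -m session'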
4.3 Install the CLVM and GFS2 Packages
[root@node1 ~]# pssh -h host-list.txt -i 'yum install fence-agents-all lvm2-cluster gfs2-utils -y'
[root@node1 ~]# pssh -h host-list.txt -i 'lvmconf --enable-cluster'
[root@node1 ~]# pssh -h host-list.txt -i 'sync && reboot'
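lvmconf --enable-cluster works by setting locking_type = 3 in /etc/lvm/lvm.conf; after the reboot this can be spot-checked on both nodes (a sketch):

[root@node1 ~]# pssh -h host-list.txt -i 'grep -E "^[[:space:]]*locking_type" /etc/lvm/lvm.conf'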
4.4 Configure CLVM + GFS2
(1) Confirm that sda exists; sda will be used as the fence device
[root@node1 ~]# cat /proc/partitions | grep sda
   8        0    3145728 sda
(2) Check the ID of sda
[root@node1 ~]# ll /dev/disk/by-id | grep sda
lrwxrwxrwx 1 root root 9 Feb 3 15:19 wwn-0x60014056fbc16fce01b4b968998e3399 -> ../../sda
(3) Create the fence device
[root@node1 ~]# pcs stonith create scsi-shooter fence_scsi pcmk_host_list="node1.1000cc.net node2.1000cc.net" pcmk_reboot_action="off" devices=/dev/disk/by-id/wwn-0x60014056fbc16fce01b4b968998e3399 meta provides=unfencing

# When quorum is lost, freeze resource management instead of stopping resources
[root@node1 ~]# pcs property set no-quorum-policy=freeze
[root@node1 ~]# pcs stonith show scsi-shooter
 Resource: scsi-shooter (class=stonith type=fence_scsi)
  Attributes: devices=/dev/disk/by-id/wwn-0x60014056fbc16fce01b4b968998e3399
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (scsi-shooter-monitor-interval-60s)

[root@node1 cluster]# pcs stonith show
 scsi-shooter   (stonith:fence_scsi):   Started node1.1000cc.net
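fence_scsi works by placing SCSI-3 persistent reservations on the shared device. If the sg3_utils package is installed, the registered keys can be inspected directly; the device path below is the same by-id path used when creating the stonith resource (a sketch):

[root@node1 ~]# sg_persist --in --read-keys /dev/disk/by-id/wwn-0x60014056fbc16fce01b4b968998e3399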
(4) Add the DLM and CLVM resources
[root@node1 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@node1 ~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@node1 ~]# pcs constraint order start dlm-clone then clvmd-clone
Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@node1 ~]# pcs constraint colocation add clvmd-clone with dlm-clone

[root@node1 ~]# pcs status resources
 Clone Set: dlm-clone [dlm]
     Started: [ node1.1000cc.net node2.1000cc.net ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ node1.1000cc.net node2.1000cc.net ]
(5) Create the LVM volume for GFS2
[root@node1 ~]# fdisk /dev/sdb    # create partition sdb1 and set its type to 8e (Linux LVM)
[root@node1 ~]# pvcreate /dev/sdb1 Physical volume "/dev/sdb1" successfully created
[root@node1 ~]# vgcreate -cy vg-cluster /dev/sdb1 Clustered volume group "vg-cluster" successfully created
[root@node1 ~]# lvcreate -l100%FREE -n snowlv vg-cluster Logical volume "snowlv" created.
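The clustered attribute of the new volume group can be verified from either node; vgs shows a 'c' in the Attr column for a clustered VG (a sketch):

[root@node1 ~]# vgs vg-cluster
[root@node1 ~]# lvs vg-cluster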
##################################################
Error Summary
##################################################
Error 1. If the following error occurs:
Error locking on node UNKNOWN 2: Reading VG not found for LVID 3gCSHJ120vbmGEqtu3zZ2F20Gm42NJFBnpy5Nh25jIcXtYm3zTUM20yA0MGIYs0o
Failed to activate new LV.

# Solution (run on both hosts):
# clvmd -R                        # refresh the LVM metadata cache on all nodes
# iscsiadm -m node --logout
# clvmd -R
# iscsiadm -m node --login
##################################################
Technical Notes
##################################################
Parameter descriptions:
-p lock_dlm           use the lock_dlm shared locking protocol
-t ha-cluster:gfs2    lock table name, in the format clustername:lockspace
-j 2                  number of journals; with 2 journals only two nodes can mount the
                      filesystem. More journals can be added later with 'gfs2_jadd -j 1'
[root@node1 ~]# mkfs.gfs2 -p lock_dlm -t ha-cluster:gfs2 -j 2 /dev/vg-cluster/snowlv
It appears to contain an existing filesystem (gfs2)
/dev/vg-cluster/snowlv is a symbolic link to /dev/dm-0
This will destroy any data on /dev/dm-0
Are you sure you want to proceed? [y/n] y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done
Building resource groups: Done
Creating quota file: Done
Writing superblock and syncing: Done
Device:                    /dev/vg-cluster/snowlv
Block size:                4096
Device size:               2.99 GB (784384 blocks)
Filesystem size:           2.99 GB (784380 blocks)
Journals:                  2
Journal size:              16MB
Resource groups:           14
Locking protocol:          "lock_dlm"
Lock table:                "ha-cluster:gfs2"
UUID:                      58d96745-4f8d-4bf6-9674-81c5269721d2
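If the locking protocol or lock table ever needs to be checked later, the GFS2 superblock can be listed with tunegfs2 from the gfs2-utils package (a sketch):

[root@node1 ~]# tunegfs2 -l /dev/vg-cluster/snowlv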
(6) Add the filesystem resource to Pacemaker
[root@node1 ~]# pcs resource create fs-gfs2 ocf:heartbeat:Filesystem device="/dev/vg-cluster/snowlv" directory="/mnt" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
[root@node1 ~]# pcs resource show
 Clone Set: dlm-clone [dlm]
     Started: [ node1.1000cc.net node2.1000cc.net ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ node1.1000cc.net node2.1000cc.net ]
 Clone Set: fs-gfs2-clone [fs-gfs2]
     Started: [ node1.1000cc.net node2.1000cc.net ]

[root@node1 ~]# pcs constraint order start clvmd-clone then fs-gfs2-clone
Adding clvmd-clone fs-gfs2-clone (kind: Mandatory) (Options: first-action=start then-action=start)

[root@node1 ~]# pcs constraint colocation add fs-gfs2-clone with clvmd-clone
[root@node1 ~]# pcs constraint show
Location Constraints:
Ordering Constraints:
  start dlm-clone then start clvmd-clone (kind:Mandatory)
  start clvmd-clone then start fs-gfs2-clone (kind:Mandatory)
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY)
  fs-gfs2-clone with clvmd-clone (score:INFINITY)
Ticket Constraints:
(7) Verify
[root@node1 ~]# df -Th | grep gfs2
/dev/mapper/vg--cluster-snowlv gfs2      3.0G   35M  3.0G   2% /mnt
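A quick way to confirm that both nodes really share the same filesystem is to write a file from one node and read it from the other (a sketch; the file name is arbitrary):

[root@node1 ~]# echo "hello from node1" > /mnt/gfs2-test.txt
[root@node2 ~]# cat /mnt/gfs2-test.txt      # should print: hello from node1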
5. Manage Pacemaker with the Web GUI
1) Prerequisites
1. Pacemaker is installed on all nodes
2. The hacluster account password has been set
3. A cluster has been created
4. At least one cluster resource has been configured
# No fence device is used in this exercise, so the cluster GUI will show a related warning at the end; it can be ignored.
3) Configure the firewall
[root@tsrv1 ~]# firewall-cmd --add-port=2224/tcp --permanent
success
[root@tsrv1 ~]# firewall-cmd --reload
success
4) Access the Web GUI
[Browser] ==> https://srv1.1000cc.net:2224

# Log in with the hacluster user name and its password

5) Add an existing cluster

# Enter the hacluster password
# After the cluster has been added, allow it a little time to appear


 

If this helped you, feel free to leave a small tip. ^-^
