GlusterFS 6 Configuration Manual

Compiled, organized, and written by snow chuai --- 2020/1/30


0. GFS Topology
+----------------------+              |              +----------------------+
| [GlusterFS Server#1] |192.168.10.11 | 192.168.10.12| [GlusterFS Server#2] |
|   node1.1000cc.net   +--------------+--------------+   node2.1000cc.net   |
|                      |              |              |                      |
+----------------------+              |              +----------------------+
                                      |
+----------------------+              |              +----------------------+
| [GlusterFS Server#3] |192.168.10.13 | 192.168.10.14| [GlusterFS Server#4] |
|   node3.1000cc.net   +--------------+--------------+   node4.1000cc.net   |
|                      |              |              |                      |
+----------------------+              |              +----------------------+
                                      |
+----------------------+              |              +----------------------+
| [GlusterFS Server#5] |192.168.10.15 | 192.168.10.16| [GlusterFS Server#6] |
|   node5.1000cc.net   +--------------+--------------+   node6.1000cc.net   |
|                      |              |              |                      |
+----------------------+              |              +----------------------+
                                      |
                                      |192.168.10.17
                          +-----------+---------+               
                          |  [GlusterFS Client] |
                          |   node7.1000cc.net  |
                          |                     |                
                          +---------------------+                                     
1. GFS Installation and Configuration
1) Install GFS Server on all nodes
[root@node1 ~]# vim host-list.txt
node1.1000cc.net
node2.1000cc.net
node3.1000cc.net
[root@node1 ~]# pssh -h host-list.txt -i 'yum install centos-release-gluster6 -y'
[root@node1 ~]# sed -i "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-6.repo
[root@node1 ~]# pscp.pssh -h host-list.txt /etc/yum.repos.d/CentOS-Gluster-6.repo /etc/yum.repos.d/
[1] 14:10:38 [SUCCESS] node1.1000cc.net
[2] 14:10:38 [SUCCESS] node2.1000cc.net
[3] 14:10:38 [SUCCESS] node3.1000cc.net
[root@node1 ~]# pssh -h host-list.txt -i 'yum --enablerepo=centos-gluster6 install glusterfs-server -y'
[root@node1 ~]# pssh -h host-list.txt -i 'systemctl enable --now glusterd'
[1] 14:13:07 [SUCCESS] node1.1000cc.net
[2] 14:13:07 [SUCCESS] node2.1000cc.net
[3] 14:13:08 [SUCCESS] node3.1000cc.net
[root@node1 ~]# gluster --version
glusterfs 6.7
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License,
version 3 or any later version (LGPLv3 or later), or the GNU General Public
License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
2) Configure the firewall
[root@node1 ~]# firewall-cmd --add-service=glusterfs --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
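The rule above only opens the firewall on node1, but every Gluster node needs it. A minimal sketch that applies it cluster-wide with the same pssh host list (assuming firewalld is active on all nodes):
[root@node1 ~]# pssh -h host-list.txt -i 'firewall-cmd --add-service=glusterfs --permanent && firewall-cmd --reload'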
2. Distributed Configuration
1) Create the dist vol directory
[root@node1 ~]# pssh -h host-list.txt -i 'mkdir -p /gfs/dist'
[1] 14:19:55 [SUCCESS] node2.1000cc.net
[2] 14:19:55 [SUCCESS] node1.1000cc.net
[3] 14:19:55 [SUCCESS] node3.1000cc.net
2) Join node2 and node3 to the GFS Cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 2
Hostname: node2
Uuid: 57610722-25b5-40df-9f49-c76c9b3b9cc3
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: 178730f4-9c44-4e0b-8f2e-50045173e63c
State: Peer in Cluster (Connected)
3) Create the Dist Vol
[root@node1 ~]# gluster volume create dist_vol transport tcp node1:/gfs/dist node2:/gfs/dist node3:/gfs/dist force
volume create: dist_vol: success: please start the volume to access data
[root@node1 ~]# gluster volume start dist_vol
volume start: dist_vol: success
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: c6dd6c62-d537-4a06-a3f8-df3191d594be
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
3. Client Configuration
1) Install the GFS repository and related packages
[root@node7 ~]# yum install centos-release-gluster6 -y
[root@node7 ~]# sed -i "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-6.repo
[root@node7 ~]# yum --enablerepo=centos-gluster6 install glusterfs glusterfs-fuse -y
2) Mount
[root@node7 ~]# mount -t glusterfs node1.1000cc.net:/dist_vol /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/dist_vol fuse.glusterfs   50G  5.1G   43G  11% /mnt
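On a distributed volume each file is stored whole on exactly one brick. A quick way to see the hash-based placement from the client mount (hypothetical test file names):
[root@node7 ~]# touch /mnt/test-file{1..6}
[root@node1 ~]# pssh -h host-list.txt -i 'ls /gfs/dist'
Each node should list a different subset of the six files.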
4. GlusterFS + NFS-Ganesha Configuration
1) Complete the Distributed configuration first
2) Disable Gluster's built-in NFS support (disabled by default in Distributed mode)
[root@node1 ~]# gluster volume set dist_vol nfs.disable on
volume set: success
[root@node1 ~]# gluster volume get dist_vol nfs.disable
Option                                  Value
------                                  -----
nfs.disable                             on
# If the kernel NFS service is running, stop it (on all nodes)
[root@node1 ~]# systemctl stop nfs-server
[root@node1 ~]# systemctl disable nfs-server
3) Install and configure NFS-Ganesha on the Gluster cluster nodes (a single node is also fine; note that host-list.txt now lists all six nodes, as the output below shows)
[root@node1 ~]# pssh -h host-list.txt 'yum --enablerepo=centos-gluster6 install nfs-ganesha-gluster -y'
[root@node1 ~]# pssh -h host-list.txt -i 'mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.bak'
[root@node1 ~]# vim /etc/ganesha/ganesha.conf
NFS_CORE_PARAM {
    # Map NFSv3 mounts onto the NFSv4 pseudo path
    mount_path_pseudo = true;
    # NFS protocol versions to serve
    Protocols = 3,4;
}
EXPORT_DEFAULTS {
    # Default NFS access permissions
    Access_Type = RW;
}
EXPORT {
    Export_Id = 101;
    # Gluster volume mount path
    Path = "/dist_vol";
    FSAL {
        # FSAL name
        name = GLUSTER;
        # Hostname/IP of this node
        hostname="node1.1000cc.net";
        # Gluster volume name
        volume="dist_vol";
    }
    # NFS squash setting
    Squash="No_root_squash";
    # NFSv4 pseudo path
    Pseudo="/vfs_dist";
    SecType = "sys";
}
LOG {
    # Log level
    Default_Log_Level = WARN;
}
[root@node1 ~]# pscp.pssh -h host-list.txt /etc/ganesha/ganesha.conf /etc/ganesha/
[root@node1 ~]# vim mod-ganesha.sh
#!/bin/bash
sed -i 's/hostname="node1.1000cc.net"/hostname="'$HOSTNAME'"/g' /etc/ganesha/ganesha.conf
[root@node1 ~]# chmod 700 mod-ganesha.sh
[root@node1 ~]# pscp.pssh -h host-list.txt ./mod-ganesha.sh ~
[1] 16:22:53 [SUCCESS] node2.1000cc.net
[2] 16:22:53 [SUCCESS] node1.1000cc.net
[3] 16:22:53 [SUCCESS] node3.1000cc.net
[4] 16:22:53 [SUCCESS] node4.1000cc.net
[5] 16:22:53 [SUCCESS] node5.1000cc.net
[6] 16:22:53 [SUCCESS] node6.1000cc.net
[root@node1 ~]# pssh -h host-list.txt -i '~/mod-ganesha.sh'
[1] 16:24:27 [SUCCESS] node1.1000cc.net
[2] 16:24:27 [SUCCESS] node2.1000cc.net
[3] 16:24:27 [SUCCESS] node4.1000cc.net
[4] 16:24:27 [SUCCESS] node3.1000cc.net
[5] 16:24:27 [SUCCESS] node5.1000cc.net
[6] 16:24:27 [SUCCESS] node6.1000cc.net
[root@node1 ~]# pssh -h host-list.txt -i 'grep -i "hostname=" /etc/ganesha/ganesha.conf'
[1] 16:34:38 [SUCCESS] node3.1000cc.net
hostname="node3.1000cc.net";
[2] 16:34:38 [SUCCESS] node1.1000cc.net
hostname="node1.1000cc.net";
[3] 16:34:38 [SUCCESS] node4.1000cc.net
hostname="node4.1000cc.net";
[4] 16:34:38 [SUCCESS] node2.1000cc.net
hostname="node2.1000cc.net";
[5] 16:34:38 [SUCCESS] node5.1000cc.net
hostname="node5.1000cc.net";
[6] 16:34:38 [SUCCESS] node6.1000cc.net
hostname="node6.1000cc.net";
[root@node1 ~]# pssh -h host-list.txt -i 'systemctl enable --now nfs-ganesha'
[1] 16:35:57 [SUCCESS] node6.1000cc.net
[2] 16:35:57 [SUCCESS] node3.1000cc.net
[3] 16:35:57 [SUCCESS] node2.1000cc.net
[4] 16:35:57 [SUCCESS] node4.1000cc.net
[5] 16:35:57 [SUCCESS] node1.1000cc.net
[6] 16:35:57 [SUCCESS] node5.1000cc.net
[root@node1 ~]# pssh -h host-list.txt -i 'showmount -e localhost'
[1] 17:00:43 [SUCCESS] node1.1000cc.net
Export list for localhost:
/vfs_dist (everyone)
[2] 17:00:43 [SUCCESS] node3.1000cc.net
Export list for localhost:
/vfs_dist (everyone)
[3] 17:00:43 [SUCCESS] node2.1000cc.net
Export list for localhost:
/vfs_dist (everyone)
[4] 17:00:43 [SUCCESS] node4.1000cc.net
Export list for localhost:
/vfs_dist (everyone)
[5] 17:00:43 [SUCCESS] node5.1000cc.net
Export list for localhost:
/vfs_dist (everyone)
[6] 17:00:43 [SUCCESS] node6.1000cc.net
Export list for localhost:
/vfs_dist (everyone)
4) Configure SELinux
[root@node1 ~]# setsebool -P nis_enabled on
5) Configure the firewall
[root@node1 ~]# firewall-cmd --add-service=nfs --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
6) Test from the client
[root@node7 ~]# yum install nfs-utils -y
[root@node7 ~]# mount.nfs node1.1000cc.net:/vfs_dist /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/vfs_dist nfs4   50G  5.2G   43G  11% /mnt
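Every node exports the same /vfs_dist pseudo path (see the showmount output above), so a client may mount any of them. To make the mount persistent, an /etc/fstab entry along these lines should work (a sketch; adjust server and mount point as needed):
node1.1000cc.net:/vfs_dist  /mnt  nfs4  defaults,_netdev  0  0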
5. Adding Gluster Nodes (Bricks)
1) Install the Gluster tools and start the Gluster service
[root@node4 ~]# yum install centos-release-gluster6 -y
[root@node4 ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-6.repo
[root@node4 ~]# yum --enablerepo=centos-gluster6 install glusterfs-server -y
[root@node4 ~]# systemctl enable --now glusterd
[root@node4 ~]# gluster --version
glusterfs 6.7
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License,
version 3 or any later version (LGPLv3 or later), or the GNU General Public
License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
[root@node4 ~]# mkdir -p /gfs/dist
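If firewalld is active on the new node, it also needs the glusterfs service opened, mirroring step 1-2) above:
[root@node4 ~]# firewall-cmd --add-service=glusterfs --permanent
[root@node4 ~]# firewall-cmd --reload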
2) Add the node
[root@node1 ~]# gluster peer probe node4
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 3
Hostname: node2
Uuid: a09fe588-4cc2-4eb6-bb45-48a7b2cff70d
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: c3a5f8ea-a464-433f-ad63-fb6e6ce98a8b
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: 093e0ed2-4019-4945-a47a-ca4ae9199b2a
State: Peer in Cluster (Connected)
3) Check the current volume information
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
4) Add the new node's brick to the volume
[root@node1 ~]# gluster volume add-brick dist_vol node4:/gfs/dist force
volume add-brick: success
5) Check the volume information
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Brick4: node4:/gfs/dist
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
6) Rebalance the volume
[root@node1 ~]# gluster volume rebalance dist_vol fix-layout start
volume rebalance: dist_vol: success: Rebalance on dist_vol has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 593ebb77-232c-49a2-9553-822c0e7b82f2
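Note that fix-layout only spreads the directory layout so that new files can land on the new brick; migrating existing files would take a full 'gluster volume rebalance dist_vol start'. Either way, progress can be polled with the status subcommand named in the message above:
[root@node1 ~]# gluster volume rebalance dist_vol status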
6. Removing Nodes (Bricks) from the Gluster Cluster
1) Check the volume information
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Brick4: node4:/gfs/dist
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
2) Remove the brick from the volume and wait for the rebalance to finish
[root@node1 ~]# gluster volume remove-brick dist_vol node4:/gfs/dist start
Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated. Files that are not migrated can then be manually copied after the remove-brick commit operation. Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: f2e9e0c3-469a-4f3d-834c-37467f25e873
3) Check the status
[root@node1 ~]# gluster volume remove-brick dist_vol node4:/gfs/dist status
        Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
   ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
       node4                0        0Bytes             0             0             0            completed            0:00:01
4) Once the status shows completed, commit the removal
[root@node1 ~]# gluster volume remove-brick dist_vol node4:/gfs/dist commit
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated. If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
5) Verify the volume information after the removal
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
6) Detach the node from the Gluster cluster
[root@node1 ~]# gluster peer detach node4
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@node1 ~]# gluster peer status
Number of Peers: 2
Hostname: node2
Uuid: a09fe588-4cc2-4eb6-bb45-48a7b2cff70d
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: c3a5f8ea-a464-433f-ad63-fb6e6ce98a8b
State: Peer in Cluster (Connected)
7. Stopping and Removing a Vol
1) Stop the Vol
[root@node1 ~]# gluster volume stop dist_vol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dist_vol: success
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: efcdb74c-e6bb-48ec-beed-98a88562cbf7
Status: Stopped
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
performance.client-io-threads: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
2) Delete the Vol
[root@node1 ~]# gluster volume delete dist_vol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: dist_vol: success
[root@node1 ~]# gluster volume info
No volumes present
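Deleting the volume leaves the file data and Gluster's extended attributes on each brick directory. If the bricks will be reused for a new volume, wiping them first avoids stale-state conflicts; a destructive sketch (double-check the path before running):
[root@node1 ~]# pssh -h host-list.txt -i 'rm -rf /gfs/dist'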
8. Replication Configuration
1) Create the replica vol directory
[root@node1 ~]# pssh -h host-list.txt -i 'mkdir -p /gfs/replica' | grep -v Stderr | grep -v 'open terminal'
[1] 14:20:32 [SUCCESS] node1.1000cc.net
[2] 14:20:32 [SUCCESS] node2.1000cc.net
[3] 14:20:32 [SUCCESS] node3.1000cc.net
2) Add the nodes to the GFS Cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: aa391bf8-7d54-44b9-92a4-8ac9cdcb5205
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: bfb56075-9c57-489f-bc7f-3cf48b994152
State: Peer in Cluster (Connected)
3) Create and start rep_vol
[root@node1 ~]# gluster volume create rep_vol replica 3 transport tcp node1:/gfs/replica node2:/gfs/replica node3:/gfs/replica force
volume create: rep_vol: success: please start the volume to access data
[root@node1 ~]# gluster volume start rep_vol
volume start: rep_vol: success
4) Verify rep_vol
[root@node1 ~]# gluster volume info
Volume Name: rep_vol
Type: Replicate
Volume ID: 58c3f396-c703-49ca-b8af-9f8782c93652
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/replica
Brick2: node2:/gfs/replica
Brick3: node3:/gfs/replica
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
5) Test from the client
[root@node5 ~]# mount -t glusterfs node1.1000cc.net:/rep_vol /mnt
[root@node5 ~]# df -Th | grep /mnt
node1.1000cc.net:/rep_vol fuse.glusterfs   17G  1.8G   15G  11% /mnt
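With replica 3 every file is stored in full on all three bricks, which is why the usable size equals a single brick. A quick check (hypothetical file name):
[root@node5 ~]# touch /mnt/replica-test.txt
[root@node1 ~]# pssh -h host-list.txt -i 'ls /gfs/replica'
All three nodes should list replica-test.txt.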
9. Distributed + Replication Configuration
1) Create the Dist+Rep directory
[root@node1 ~]# pssh -h host-list.txt -i 'mkdir -p /gfs/dr'
[1] 15:09:31 [SUCCESS] node1.1000cc.net
[2] 15:09:31 [SUCCESS] node2.1000cc.net
[3] 15:09:31 [SUCCESS] node3.1000cc.net
[4] 15:09:31 [SUCCESS] node4.1000cc.net
[5] 15:09:31 [SUCCESS] node6.1000cc.net
[6] 15:09:31 [SUCCESS] node5.1000cc.net
2) Build the GFS Cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4
peer probe: success.
[root@node1 ~]# gluster peer probe node5
peer probe: success.
[root@node1 ~]# gluster peer probe node6
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 5

Hostname: node2
Uuid: 01f47b68-8567-4d11-80fa-3a6e4feffa3d
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: f6af5f4d-f75d-4e9e-9318-284dd4bbe2cd
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: da4e46d9-7820-437e-aa9d-a6755be2c477
State: Peer in Cluster (Connected)
Hostname: node5
Uuid: f3bebec5-a745-48ac-a720-9f8c1e708482
State: Peer in Cluster (Connected)
Hostname: node6
Uuid: ee82bcd2-9b0b-4277-82a9-66e604cd2e9b
State: Peer in Cluster (Connected)
3) Create the Distributed+Replication Volume
[root@node1 ~]# gluster volume create dr_vol replica 3 arbiter 1 transport tcp node1:/gfs/dr node2:/gfs/dr node3:/gfs/dr node4:/gfs/dr node5:/gfs/dr node6:/gfs/dr force
volume create: dr_vol: success: please start the volume to access data
[root@node1 ~]# gluster volume start dr_vol
volume start: dr_vol: success
[root@node1 ~]# gluster volume info
Volume Name: dr_vol
Type: Distributed-Replicate
Volume ID: 25358291-7cf1-4ef3-9ed1-c9a566b1a57e
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dr
Brick2: node2:/gfs/dr
Brick3: node3:/gfs/dr (arbiter)
Brick4: node4:/gfs/dr
Brick5: node5:/gfs/dr
Brick6: node6:/gfs/dr (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
4) Test from the client
[root@node7 ~]# mount -t glusterfs node1.1000cc.net:/dr_vol /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/dr_vol fuse.glusterfs   34G  3.5G   29G  11% /mnt
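In this layout each replica set holds two full data copies plus one arbiter brick that stores only metadata for quorum, which is why usable space is roughly two bricks per set. A sketch to observe this (hypothetical file name; the file may hash to the node4-node6 set instead):
[root@node7 ~]# dd if=/dev/zero of=/mnt/arbiter-test.bin bs=1M count=10
[root@node1 ~]# ls -lh /gfs/dr/arbiter-test.bin
[root@node3 ~]# ls -lh /gfs/dr/arbiter-test.bin
On the data bricks the file shows its full 10M size; on the arbiter brick it appears as a zero-length entry.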
10. Dispersed Configuration
1) Create the storage directories
[root@node1 ~]# pssh -h host-list.txt -i 'mkdir -p /gfs/disp'
[1] 15:22:47 [SUCCESS] node1.1000cc.net
[2] 15:22:47 [SUCCESS] node2.1000cc.net
[3] 15:22:47 [SUCCESS] node4.1000cc.net
[4] 15:22:47 [SUCCESS] node5.1000cc.net
[5] 15:22:47 [SUCCESS] node3.1000cc.net
[6] 15:22:47 [SUCCESS] node6.1000cc.net
2) Build the GFS Cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4
peer probe: success.
[root@node1 ~]# gluster peer probe node5
peer probe: success.
[root@node1 ~]# gluster peer probe node6
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 5

Hostname: node2
Uuid: 01f47b68-8567-4d11-80fa-3a6e4feffa3d
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: f6af5f4d-f75d-4e9e-9318-284dd4bbe2cd
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: da4e46d9-7820-437e-aa9d-a6755be2c477
State: Peer in Cluster (Connected)
Hostname: node5
Uuid: f3bebec5-a745-48ac-a720-9f8c1e708482
State: Peer in Cluster (Connected)
Hostname: node6
Uuid: ee82bcd2-9b0b-4277-82a9-66e604cd2e9b
State: Peer in Cluster (Connected)
3) Create the Dispersed Volume
[root@node1 ~]# gluster volume create disp_vol disperse-data 4 redundancy 2 transport tcp node1:/gfs/disp node2:/gfs/disp node3:/gfs/disp node4:/gfs/disp node5:/gfs/disp node6:/gfs/disp force
volume create: disp_vol: success: please start the volume to access data
[root@node1 ~]# gluster volume start disp_vol
volume start: disp_vol: success
[root@node1 ~]# gluster volume info
Volume Name: disp_vol
Type: Disperse
Volume ID: 0e5a690e-af2b-4cb6-a485-559b95efb78f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/disp
Brick2: node2:/gfs/disp
Brick3: node3:/gfs/disp
Brick4: node4:/gfs/disp
Brick5: node5:/gfs/disp
Brick6: node6:/gfs/disp
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
4) Test from the client
[root@node7 ~]# mount -t glusterfs node1.1000cc.net:/disp_vol /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/disp_vol fuse.glusterfs   67G  6.9G   57G  11% /mnt
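A dispersed volume's usable capacity is data-bricks/total-bricks of the raw space. Assuming six equally sized bricks of roughly 17G each (the size seen in the rep_vol test above), disperse-data 4 redundancy 2 gives 4 x 17G ≈ 67G usable out of 6 x 17G ≈ 102G raw, matching the df output, while tolerating the loss of any two bricks.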

 
