GFS8 Configuration Manual

Compiled, organized, and written by snow chuai --- 2021/04/30


0. GFS Topology
+----------------------+              |              +----------------------+
| [GlusterFS Server#1] |192.168.10.11 | 192.168.10.12| [GlusterFS Server#2] |
|   node1.1000cc.net   +--------------+--------------+   node2.1000cc.net   |
|                      |              |              |                      |
+----------------------+              |              +----------------------+
                                      |
+----------------------+              |              +----------------------+
| [GlusterFS Server#3] |192.168.10.13 | 192.168.10.14| [GlusterFS Server#4] |
|   node3.1000cc.net   +--------------+--------------+   node4.1000cc.net   |
|                      |              |              |                      |
+----------------------+              |              +----------------------+
                                      |
+----------------------+              |              +----------------------+
| [GlusterFS Server#5] |192.168.10.15 | 192.168.10.16| [GlusterFS Server#6] |
|   node5.1000cc.net   +--------------+--------------+   node6.1000cc.net   |
|                      |              |              |                      |
+----------------------+              |              +----------------------+
                                      |
                                      |192.168.10.17
                          +-----------+---------+               
                          |  [GlusterFS Client] |
                          |   node7.1000cc.net  |
                          |                     |                
                          +---------------------+                                     
1. GFS Installation and Configuration
1) Install GlusterFS Server on all nodes
[root@node1 ~]# yum clean all
[root@node1 ~]# yum makecache
[root@node1 ~]# yum install centos-release-gluster8 -y
[root@node1 ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-8.repo
[root@node1 ~]# yum --enablerepo=centos-gluster8 install glusterfs-server -y
[root@node1 ~]# systemctl enable --now glusterd
[root@node1 ~]# gluster --version
glusterfs 8.4
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
2) Firewall configuration
[root@node1 ~]# firewall-cmd --add-service=glusterfs --permanent
success
[root@node1 ~]# firewall-cmd --reload
success
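The manual repeats the installation above on node2 through node6. As a convenience, a sketch of the same steps driven from node1 over ssh is shown below; it assumes passwordless root ssh to the other nodes and resolvable hostnames, and running the commands manually on each node works just as well.
# Optional convenience sketch (assumption: root ssh access to node2..node6); repeats the
# installation, service start, and firewall steps shown above on the remaining nodes.
[root@node1 ~]# for h in node2 node3 node4 node5 node6; do
    ssh root@$h "yum install centos-release-gluster8 -y && \
                 sed -i -e 's/enabled=1/enabled=0/g' /etc/yum.repos.d/CentOS-Gluster-8.repo && \
                 yum --enablerepo=centos-gluster8 install glusterfs-server -y && \
                 systemctl enable --now glusterd && \
                 firewall-cmd --add-service=glusterfs --permanent && firewall-cmd --reload"
  done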
2. Distributed Volume Configuration
1) Create the dist volume directory on all nodes
[root@node1 ~]# mkdir -p /gfs/dist
2) Add node2 and node3 to the GFS cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.

[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: a09fe588-4cc2-4eb6-bb45-48a7b2cff70d
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: c3a5f8ea-a464-433f-ad63-fb6e6ce98a8b
State: Peer in Cluster (Connected)
3) Create the distributed volume
[root@node1 ~]# gluster volume create dist_vol transport tcp node1:/gfs/dist node2:/gfs/dist node3:/gfs/dist force
volume create: dist_vol: success: please start the volume to access data

[root@node1 ~]# gluster volume start dist_vol
volume start: dist_vol: success

[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
3. Client Configuration
1) Install the GlusterFS repository and related packages
[root@node7 ~]# yum clean all
[root@node7 ~]# yum makecache
[root@node7 ~]# yum install centos-release-gluster8 -y
[root@node7 ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-8.repo
[root@node7 ~]# yum --enablerepo=centos-gluster8 install glusterfs glusterfs-fuse -y
2) Mount the volume
[root@node7 ~]# mount.glusterfs node1.1000cc.net:/dist_vol /mnt
[root@node7 ~]# df -Th /mnt
Filesystem                  Type            Size  Used Avail Use% Mounted on
node1.1000cc.net:/dist_vol  fuse.glusterfs  113G  6.3G  102G   6% /mnt
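The following check is not part of the original manual; it is a quick way to observe the distribute behaviour: files created through the client mount are hashed to different bricks, so each node ends up holding only a subset of them.
# Illustrative check: create ten files through the mount, then list each brick.
[root@node7 ~]# touch /mnt/file{01..10}
[root@node1 ~]# ls /gfs/dist     # each of node1..node3 holds only some of file01..file10
[root@node2 ~]# ls /gfs/dist
[root@node3 ~]# ls /gfs/dist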
4. GlusterFS + NFS-Ganesha Configuration
1) Complete the Distributed volume configuration first
2) Disable Gluster's built-in NFS support (in the default Distributed setup it is already disabled)
[root@node1 ~]# gluster volume get dist_vol nfs.disable
Option                                  Value
------                                  -----
nfs.disable                             on

# If the value is off, disable it:
[root@node1 ~]# gluster volume set dist_vol nfs.disable on
volume set: success

# If the kernel NFS server is running, stop it (on all nodes):
[root@node1 ~]# systemctl disable --now nfs-server
3) Install and configure NFS-Ganesha on one of the Gluster cluster nodes
[root@node1 ~]# yum install centos-release-nfs-ganesha30 -y
[root@node1 ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-NFS-Ganesha-30.repo
[root@node1 ~]# yum --enablerepo=centos-nfs-ganesha30 install nfs-ganesha-gluster -y
[root@node1 ~]# mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.bak
[root@node1 ~]# vim /etc/ganesha/ganesha.conf
NFS_CORE_PARAM {
    # Map NFSv3 mounts onto the NFSv4 pseudo path
    mount_path_pseudo = true;
    # NFS protocol versions to serve
    Protocols = 3,4;
}
EXPORT_DEFAULTS {
    # Default NFS access permissions
    Access_Type = RW;
}
EXPORT {
    Export_Id = 101;
    # Gluster volume path to export
    Path = "/dist_vol";
    FSAL {
        # FSAL name
        name = GLUSTER;
        # Hostname/IP of this node
        hostname = "node1.1000cc.net";
        # Gluster volume name
        volume = "dist_vol";
    }
    # NFS squash setting
    Squash = "No_root_squash";
    # NFSv4 pseudo path
    Pseudo = "/vfs_dist";
    SecType = "sys";
}
LOG {
    # Log level
    Default_Log_Level = WARN;
}
[root@node1 ~]# systemctl enable --now nfs-ganesha
[root@node1 ~]# showmount -e localhost
Export list for localhost:
/vfs_dist (everyone)
4) Configure SELinux
[root@node1 ~]# vim nfs-ganesha.te
module nfs-ganesha 1.0;

require {
        type var_lib_nfs_t;
        type init_t;
        class dir create;
}

#============= init_t ==============
allow init_t var_lib_nfs_t:dir create;

[root@node1 ~]# checkmodule -m -M -o nfs-ganesha.mod nfs-ganesha.te
checkmodule: loading policy configuration from nfs-ganesha.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 19) to nfs-ganesha.mod
[root@node1 ~]# semodule_package --outfile nfs-ganesha.pp --module nfs-ganesha.mod
[root@node1 ~]# semodule -i nfs-ganesha.pp
5) Configure the firewall
[root@node1 ~]# firewall-cmd --add-service=nfs --permanent
success
[root@node1 ~]# firewall-cmd --reload
success

6) Test from the client
[root@node7 ~]# yum install nfs-utils -y
[root@node7 ~]# mount.nfs node1.1000cc.net:/vfs_dist /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/vfs_dist  nfs4   50G  5.2G   43G  11% /mnt
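Optionally, the NFS mount can be made persistent across reboots. The fstab entry below is a sketch; the /mnt mount point and the nfs4 type are carried over from the test above and should be adjusted to your environment.
# Optional: persistent NFS mount (sketch)
[root@node7 ~]# echo "node1.1000cc.net:/vfs_dist  /mnt  nfs4  defaults,_netdev  0 0" >> /etc/fstab
[root@node7 ~]# mount -a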
5. Adding a Gluster Node (Bricks)
1) Install the Gluster packages and start the Gluster service
[root@node4 ~]# yum clean all
[root@node4 ~]# yum makecache
[root@node4 ~]# yum install centos-release-gluster8 -y
[root@node4 ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-8.repo
[root@node4 ~]# yum --enablerepo=centos-gluster8 install glusterfs-server -y
[root@node4 ~]# systemctl enable --now glusterd
[root@node4 ~]# gluster --version
glusterfs 8.4
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
[root@node4 ~]# mkdir -p /gfs/dist
2) Add the node
[root@node1 ~]# gluster peer probe node4
peer probe: success.

[root@node1 ~]# gluster peer status
Number of Peers: 3

Hostname: node2
Uuid: a09fe588-4cc2-4eb6-bb45-48a7b2cff70d
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: c3a5f8ea-a464-433f-ad63-fb6e6ce98a8b
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 093e0ed2-4019-4945-a47a-ca4ae9199b2a
State: Peer in Cluster (Connected)
3) Check the current volume information
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

4) Add the new brick to the volume
[root@node1 ~]# gluster volume add-brick dist_vol node4:/gfs/dist force
volume add-brick: success

5) Check the volume information again
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Brick4: node4:/gfs/dist
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

6) Rebalance the volume
[root@node1 ~]# gluster volume rebalance dist_vol fix-layout start
volume rebalance: dist_vol: success: Rebalance on dist_vol has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 593ebb77-232c-49a2-9553-822c0e7b82f2
7) Check that the task status is "completed"
[root@node1 ~]# gluster volume status
Status of volume: dist_vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/gfs/dist                       49152     0          Y       24844
Brick node2:/gfs/dist                       49152     0          Y       24785
Brick node3:/gfs/dist                       49152     0          Y       24761
Brick node4:/gfs/dist                       49152     0          Y       24816

Task Status of Volume dist_vol
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : c099b507-2f2e-4e37-a252-b428beb2e455
Status               : completed
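Note that fix-layout only rewrites the directory layout so that newly created files can be placed on the new brick; if existing files should be migrated onto node4 as well, a full rebalance can be started instead. The commands below are an optional extra step, not part of the original run.
# Optional: migrate existing files onto the new brick as well
[root@node1 ~]# gluster volume rebalance dist_vol start
[root@node1 ~]# gluster volume rebalance dist_vol status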
6. Removing a Node (Bricks) from the Gluster Cluster
1) Check the volume information
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Brick4: node4:/gfs/dist
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
2) Remove the brick from the volume and wait for the rebalance to finish
[root@node1 ~]# gluster volume remove-brick dist_vol node4:/gfs/dist start
Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated. Files that are not migrated can then be manually copied after the remove-brick commit operation. Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: f2e9e0c3-469a-4f3d-834c-37467f25e873

3) Verify the status
[root@node1 ~]# gluster volume remove-brick dist_vol node4:/gfs/dist status
        Node   Rebalanced-files        size     scanned    failures     skipped       status   run time in h:m:s
------------   ----------------   ---------   ---------   ---------   ---------   ----------   -----------------
       node4                  0      0Bytes           0           0           0    completed             0:00:01

4) Once the status is "completed", remove the brick
[root@node1 ~]# gluster volume remove-brick dist_vol node4:/gfs/dist commit
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated. If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.

5) Verify the volume information after the removal
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: 89287b49-1c7c-4331-bbe0-f5182d807085
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
transport.address-family: inet
performance.client-io-threads: on
storage.fips-mode-rchecksum: on
nfs.disable: on

6) Detach the node from the Gluster cluster
[root@node1 ~]# gluster peer detach node4
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: a09fe588-4cc2-4eb6-bb45-48a7b2cff70d
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: c3a5f8ea-a464-433f-ad63-fb6e6ce98a8b
State: Peer in Cluster (Connected)
7. Stopping and Removing a Volume
1) Stop the volume
[root@node1 ~]# gluster volume stop dist_vol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dist_vol: success
[root@node1 ~]# gluster volume info
Volume Name: dist_vol
Type: Distribute
Volume ID: efcdb74c-e6bb-48ec-beed-98a88562cbf7
Status: Stopped
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dist
Brick2: node2:/gfs/dist
Brick3: node3:/gfs/dist
Options Reconfigured:
performance.client-io-threads: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

2) Delete the volume
[root@node1 ~]# gluster volume delete dist_vol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: dist_vol: success

[root@node1 ~]# gluster volume info
No volumes present
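Deleting a volume does not clean the brick directories: Gluster leaves a .glusterfs metadata directory and volume-related extended attributes on each brick path, and a brick that still carries them cannot simply be reused for a new volume. The cleanup below is an optional sketch; it assumes the data in /gfs/dist is no longer needed and should be run on every node that held a brick.
# Optional cleanup sketch: strip leftover Gluster metadata so the directory can be reused
[root@node1 ~]# setfattr -x trusted.glusterfs.volume-id /gfs/dist
[root@node1 ~]# setfattr -x trusted.gfid /gfs/dist
[root@node1 ~]# rm -rf /gfs/dist/.glusterfs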
8. Replication Configuration
1) Create the directory required by GFS on all nodes
[root@node1 ~]# mkdir -p /gfs/replica
2) Add the nodes to the GFS cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.

[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: aa391bf8-7d54-44b9-92a4-8ac9cdcb5205
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: bfb56075-9c57-489f-bc7f-3cf48b994152
State: Peer in Cluster (Connected)
3) Create rep_vol and start it
[root@node1 ~]# gluster volume create rep_vol replica 3 transport tcp node1:/gfs/replica node2:/gfs/replica node3:/gfs/replica force
volume create: rep_vol: success: please start the volume to access data

[root@node1 ~]# gluster volume start rep_vol
volume start: rep_vol: success

4) Verify rep_vol
[root@node1 ~]# gluster volume info
Volume Name: rep_vol
Type: Replicate
Volume ID: 14aa2b38-ba48-4ab8-bfda-2220be421ec1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/replica
Brick2: node2:/gfs/replica
Brick3: node3:/gfs/replica
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
5) Test from the client
[root@node7 ~]# mount.glusterfs node1.1000cc.net:/rep_vol /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/rep_vol  fuse.glusterfs   17G  1.8G   15G  11% /mnt
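As a quick sanity check (not part of the original manual): with a 3-way replica, every file written through the client mount should appear on all three bricks.
# Illustrative check: write one file through the mount, then look at each brick
[root@node7 ~]# echo "replica test" > /mnt/test.txt
[root@node1 ~]# ls -l /gfs/replica/test.txt
[root@node2 ~]# ls -l /gfs/replica/test.txt
[root@node3 ~]# ls -l /gfs/replica/test.txt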
9. Distributed + Replication Configuration
0) Make sure the GFS packages are installed on all six nodes
1) Create the Dist+Rep directory
[root@node1 ~]# mkdir -p /gfs/dr

2) Build the GFS cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4
peer probe: success.
[root@node1 ~]# gluster peer probe node5
peer probe: success.
[root@node1 ~]# gluster peer probe node6
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 5

Hostname: node2
Uuid: 01f47b68-8567-4d11-80fa-3a6e4feffa3d
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: f6af5f4d-f75d-4e9e-9318-284dd4bbe2cd
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: da4e46d9-7820-437e-aa9d-a6755be2c477
State: Peer in Cluster (Connected)

Hostname: node5
Uuid: f3bebec5-a745-48ac-a720-9f8c1e708482
State: Peer in Cluster (Connected)

Hostname: node6
Uuid: ee82bcd2-9b0b-4277-82a9-66e604cd2e9b
State: Peer in Cluster (Connected)
3) Create the Distributed + Replication volume
# With "replica 3 arbiter 1", every third brick acts as an arbiter (metadata only), so the six
# bricks form two distributed subvolumes of 2 data bricks + 1 arbiter each.
[root@node1 ~]# gluster volume create dr_vol replica 3 arbiter 1 transport tcp node1:/gfs/dr node2:/gfs/dr node3:/gfs/dr node4:/gfs/dr node5:/gfs/dr node6:/gfs/dr force
volume create: dr_vol: success: please start the volume to access data

[root@node1 ~]# gluster volume start dr_vol
volume start: dr_vol: success

[root@node1 ~]# gluster volume info
Volume Name: dr_vol
Type: Distributed-Replicate
Volume ID: c5092513-e3dc-49b7-ac50-a5973dc2a76c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/dr
Brick2: node2:/gfs/dr
Brick3: node3:/gfs/dr (arbiter)
Brick4: node4:/gfs/dr
Brick5: node5:/gfs/dr
Brick6: node6:/gfs/dr (arbiter)
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
4) Test from the client
[root@node7 ~]# mount.glusterfs node1.1000cc.net:/dr_vol /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/dr_vol  fuse.glusterfs   34G  3.5G   29G  11% /mnt
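As an optional check (sketch, not from the original run): the arbiter bricks (node3 and node6) store only metadata, so the copy of a file on an arbiter brick has a size of 0 bytes. Which replica set a given file lands on depends on the distribute hash, so it may live on node1-node3 or on node4-node6.
# Illustrative check: write a 10M file, then compare a data brick with an arbiter brick
[root@node7 ~]# dd if=/dev/zero of=/mnt/arbiter-test bs=1M count=10
[root@node1 ~]# du -h /gfs/dr/arbiter-test   # full-size copy on a data brick (or check node4)
[root@node3 ~]# du -h /gfs/dr/arbiter-test   # 0 on the arbiter brick (or check node6)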
10. Dispersed Volume Configuration
1) Create the storage directory on all nodes
[root@node1 ~]# mkdir -p /gfs/disp
2) Build the GFS cluster
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4
peer probe: success.
[root@node1 ~]# gluster peer probe node5
peer probe: success.
[root@node1 ~]# gluster peer probe node6
peer probe: success.

[root@node1 ~]# gluster peer status
Number of Peers: 5

Hostname: node2
Uuid: 01f47b68-8567-4d11-80fa-3a6e4feffa3d
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: f6af5f4d-f75d-4e9e-9318-284dd4bbe2cd
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: da4e46d9-7820-437e-aa9d-a6755be2c477
State: Peer in Cluster (Connected)

Hostname: node5
Uuid: f3bebec5-a745-48ac-a720-9f8c1e708482
State: Peer in Cluster (Connected)

Hostname: node6
Uuid: ee82bcd2-9b0b-4277-82a9-66e604cd2e9b
State: Peer in Cluster (Connected)
3) Create the Dispersed volume
# "disperse-data 4 redundancy 2" stripes each file across 4 data bricks plus 2 redundancy
# bricks, so the volume tolerates the loss of up to 2 bricks.
[root@node1 ~]# gluster volume create disp_vol disperse-data 4 redundancy 2 transport tcp node1:/gfs/disp node2:/gfs/disp node3:/gfs/disp node4:/gfs/disp node5:/gfs/disp node6:/gfs/disp force
volume create: disp_vol: success: please start the volume to access data

[root@node1 ~]# gluster volume start disp_vol
volume start: disp_vol: success

[root@node1 ~]# gluster volume info
Volume Name: disp_vol
Type: Disperse
Volume ID: bc6d58be-0591-459e-a379-4a1f5568fb0d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/disp
Brick2: node2:/gfs/disp
Brick3: node3:/gfs/disp
Brick4: node4:/gfs/disp
Brick5: node5:/gfs/disp
Brick6: node6:/gfs/disp
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
4) Test from the client
[root@node7 ~]# mount.glusterfs node1.1000cc.net:/disp_vol /mnt
[root@node7 ~]# df -Th | grep /mnt
node1.1000cc.net:/disp_vol  fuse.glusterfs   67G  6.9G   57G  11% /mnt
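As an optional check (sketch, not from the original run): with 4 data + 2 redundancy bricks, every brick stores an erasure-coded fragment of roughly one quarter of the file size, and the volume remains usable with up to two bricks offline.
# Illustrative check: write a 400M file, then inspect the fragment size on any brick
[root@node7 ~]# dd if=/dev/zero of=/mnt/ec-test bs=1M count=400
[root@node1 ~]# du -h /gfs/disp/ec-test   # roughly 100M on each of node1..node6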
11. GFS Quotas
1) Enable the quota mechanism
[root@node1 ~]# gluster volume quota disp_vol enable
volume quota : success
[root@node1 ~]# gluster volume info
Volume Name: disp_vol
Type: Disperse
Volume ID: d3f584c2-72c3-429f-9492-9ca16d9a448e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: node1:/gfs/disp
Brick2: node2:/gfs/disp
Brick3: node3:/gfs/disp
Brick4: node4:/gfs/disp
Brick5: node5:/gfs/disp
Brick6: node6:/gfs/disp
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
2) Disable the quota mechanism
[root@node1 ~]# gluster volume quota disp_vol disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success

3) Set a quota on the volume
(1) Set the limit
[root@node1 ~]# gluster volume quota disp_vol limit-usage / 1GB
volume quota : success
(2) Test from the client
[root@node7 ~]# mkdir /mnt/gfs
[root@node7 ~]# mount.glusterfs node1.1000cc.net:/disp_vol /mnt/gfs
[root@node7 ~]# df -Th | grep /mnt/gfs
node1.1000cc.net:/disp_vol  fuse.glusterfs  1.0G     0  1.0G   0% /mnt/gfs
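A quick enforcement test (sketch, not from the original manual): writing more than the 1GB hard limit through the mount should eventually fail. Quota is enforced somewhat lazily, so the write may overshoot the limit slightly before the error appears.
# Illustrative check: try to write 1200M into a volume limited to 1GB
[root@node7 ~]# dd if=/dev/zero of=/mnt/gfs/quota-test bs=1M count=1200
# expected to abort with a "Disk quota exceeded" error once the limit is reached
[root@node7 ~]# rm -f /mnt/gfs/quota-test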
(3) If you do not want the quota size reported on the client and prefer to show the full disk size instead, run:
[root@node1 ~]# gluster volume set disp_vol quota-deem-statfs off
volume set: success

4) Set a quota on a directory of the volume
(1) Create a directory on the GFS volume from the client
[root@node7 ~]# mkdir /mnt/gfs/qyy

(2) Set the limit on the specified directory of the GFS volume and confirm it
[root@node1 ~]# gluster volume quota disp_vol limit-usage /qyy 1GB
volume quota : success
[root@node1 ~]# gluster volume quota disp_vol list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/qyy                                       1.0GB     80%(819.2MB)   0Bytes     1.0GB                   No                   No

(3) Verify from the client
[root@node7 ~]# df -Th /mnt/gfs/qyy
Filesystem                  Type            Size  Used Avail Use% Mounted on
node1.1000cc.net:/disp_vol  fuse.glusterfs  1.0G     0  1.0G   0% /mnt/gfs
5) Change the quota soft limit --- set the soft limit to 70%
[root@node1 ~]# gluster volume quota disp_vol limit-usage /qyy 1GB 70
volume quota : success

[root@node1 ~]# gluster volume quota disp_vol list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/qyy                                       1.0GB     70%(716.8MB)   0Bytes     1.0GB                   No                   No
6) Remove quotas
(1) Remove the quota on the directory of the GFS volume
[root@node1 ~]# gluster volume quota disp_vol remove /qyy
volume quota : success

(2) Remove the quota on / of the GFS volume
[root@node1 ~]# gluster volume quota disp_vol remove /
volume quota : success

(3) Verify
[root@node1 ~]# gluster volume quota disp_vol list
quota: No quota configured on volume disp_vol

 
