Ceph Octopus Configuration Manual

Compiled, organized, and written by snow chuai --- 2021/04/29


1. Configure a Ceph Octopus Distributed Cluster
1.1 Topology
        +--------------------+           |           +----------------------+
        |[client.1000y.cloud]|           |           |  [srv4.1000y.cloud]  |
        |     Ceph Client    +-----------+-----------+        RADOSGW       |
        |    192.168.1.19    |           |           |     192.168.1.14     |
        +--------------------+           |           +----------------------+
            +----------------------------+----------------------------+----------------------------+
            |                            |                            |                            |
            |192.168.1.11                |192.168.1.12                |192.168.1.13                |192.168.1.15
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [srv1.1000y.cloud]  |    |   [srv2.1000y.cloud]  |    |   [srv3.1000y.cloud]  |    |   [srv5.1000y.cloud]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |    |                       |
|    add one sdb disk   |    |    add one sdb disk   |    |    add one sdb disk   |    |    add one sdb disk   |
+-----------------------+    +-----------------------+    +-----------------------+    +-----------------------+
1.2 Configure the Monitor Daemon and Manager Daemon
1) Generate an ssh key on the admin node srv1 and distribute it to all nodes (including the admin node)
[root@srv1 ~]# ssh-keygen -q -N ''
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@srv1 ~]# vim ~/.ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
[root@srv1 ~]# chmod 600 ~/.ssh/config
[root@srv1 ~]# ssh-copy-id srv1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'srv1.1000y.cloud (192.168.1.11)' can't be established.
ECDSA key fingerprint is SHA256:gS5CEP/KsM6sZ/Bt1w9J0u/U0neykXBI95gLFr1YOo4.
ECDSA key fingerprint is MD5:1f:b1:2b:ac:4a:94:cd:49:8a:a4:73:c7:a8:60:4c:5e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@srv1.1000y.cloud's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'srv1.1000y.cloud'" and check to make sure that only the key(s) you wanted were added.
[root@srv1 ~]# ssh-copy-id srv2
[root@srv1 ~]# ssh-copy-id srv3
2) Install Ceph on all nodes (including the admin node) --- enable the EPEL repository first
[root@srv1 ~]# pssh -h host-list.txt -i 'yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y'
[root@srv1 ~]# pssh -h host-list.txt -i 'yum install ceph -y'
[root@srv1 ~]# pssh -h host-list.txt -i 'pip3 install pecan werkzeug cherrypy'
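The pssh commands above read their targets from host-list.txt, which the original text does not show. A minimal sketch of what it might contain (one host per line; the FQDNs are assumed from the topology above):
[root@srv1 ~]# vim host-list.txt
srv1.1000y.cloud
srv2.1000y.cloud
srv3.1000y.cloud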
3) Configure the Monitor and Manager Daemon on the admin node
3.1) Generate a UUID
[root@srv1 ~]# uuidgen
85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
3.2) Create a configuration file named after the cluster (the cluster name here is ceph)
[root@srv1 ~]# vim /etc/ceph/ceph.conf
# global settings
[global]
# cluster network address
cluster network = 192.168.1.0/24
# public network address
public network = 192.168.1.0/24
# the cluster UUID (fsid) generated above
fsid = 85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
# IP address of the Monitor Daemon
mon host = 192.168.1.11
# hostname of the Monitor Daemon
mon initial members = srv1
# CRUSH rule used when creating OSD pools (decides which OSDs store which objects)
# -1: pick the rule with the lowest numeric ID and use it
osd pool default crush rule = -1
# monitor definition, in the form mon.$Hostname
[mon.srv1]
host = srv1
mon addr = 192.168.1.11
mon allow pool delete = true

3.3) Generate the cluster monitoring keyring
[root@srv1 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring
3.4) Generate the cluster admin keyring
[root@srv1 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
--gen-key -n client.admin \
--cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring
3.5) Generate the bootstrap keyring
[root@srv1 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
--gen-key -n client.bootstrap-osd \
--cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
3.6) Import the keys
[root@srv1 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
[root@srv1 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
3.7) Generate the Monitor Map
[root@srv1 ~]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@srv1 ~]# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@srv1 ~]# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@srv1 ~]# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to 85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
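If you want to sanity-check the map before it is used, monmaptool can print it back out; the fsid and the mon.srv1 entry should match what was just written (this check is an addition to the original steps):
[root@srv1 ~]# monmaptool --print /etc/ceph/monmap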
3.8) Create a directory for the Monitor Daemon [directory name format: cluster-name-hostname]
[root@srv1 ~]# mkdir /var/lib/ceph/mon/ceph-srv1
3.9) Associate the key and Monitor Map with the Mon Daemon
[root@srv1 ~]# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[root@srv1 ~]# chown ceph. /etc/ceph/ceph.*
[root@srv1 ~]# chown -R ceph. /var/lib/ceph/mon/ceph-srv1 /var/lib/ceph/bootstrap-osd
3.10) Start the ceph-mon service
[root@srv1 ~]# systemctl enable --now ceph-mon@$NODENAME
3.11) Additional settings
# Enable the Messenger v2 protocol
[root@srv1 ~]# ceph mon enable-msgr2
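As a quick sanity check (not in the original write-up), the monitor map can be dumped to confirm that the monitor now advertises a v2 address alongside the legacy v1 one:
[root@srv1 ~]# ceph mon dump
# expect mon.srv1 to be listed with both a v2 (port 3300) and a v1 (port 6789) address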
# Enable the autoscaler module
[root@srv1 ~]# ceph mgr module enable pg_autoscaler
3.12) Configure and start the Manager Daemon
# Create a directory for the Manager Daemon [format: cluster-name-hostname]
[root@srv1 ~]# mkdir /var/lib/ceph/mgr/ceph-srv1
# Create the authentication key
[root@srv1 ~]# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.srv1]
        key = AQC0bgdftAHzERAAspH7BODMhDoouvOjyMfHyg==
[root@srv1 ~]# ceph auth get-or-create mgr.srv1 > /etc/ceph/ceph.mgr.admin.keyring
[root@srv1 ~]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-srv1/keyring
[root@srv1 ~]# chown ceph. /etc/ceph/ceph.mgr.admin.keyring
[root@srv1 ~]# chown -R ceph. /var/lib/ceph/mgr/ceph-srv1
[root@srv1 ~]# systemctl enable --now ceph-mgr@$NODENAME
3.13) Firewalld configuration
[root@srv1 ~]# firewall-cmd --add-service=ceph-mon --permanent
success
[root@srv1 ~]# firewall-cmd --reload
success
3.14) Verify the Ceph cluster and confirm that the Mon and Manager Daemons are running
# Since no OSDs have been added yet, the cluster state is HEALTH_WARN, which is expected
[root@srv1 ~]# ceph -s
  cluster:
    id:     e4953e75-15d4-4d7c-b66d-d748b4747bb1
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum srv1 (age 8m)
    mgr: srv1(active, since 72s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
1.3 Configure and Add OSDs
1) Firewall configuration
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    ssh $ALLNODE "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
done 
2) Configure an OSD on every node
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    if [ ! ${ALLNODE} = "srv1" ]
    then
        scp /etc/ceph/ceph.conf ${ALLNODE}:/etc/ceph/ceph.conf
        scp /etc/ceph/ceph.client.admin.keyring ${ALLNODE}:/etc/ceph
        scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${ALLNODE}:/var/lib/ceph/bootstrap-osd
    fi
    ssh $ALLNODE \
    "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
    parted --script /dev/sdb 'mklabel gpt'; \
    parted --script /dev/sdb 'mkpart primary 0% 100%'; \
    ceph-volume lvm create --data /dev/sdb1"
done
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3716b4ad-ccde-4609-a30d-a5b1c46c01f9
Running command: /usr/sbin/vgcreate --force --yes ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3 /dev/sdb1
 stdout: Physical volume "/dev/sdb1" successfully created.
 stdout: Volume group "ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3" successfully created
Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-3716b4ad-ccde-4609-a30d-a5b1c46c01f9 ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3
......
......
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/sdb1
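Before moving on, it can be useful to confirm on each node which logical volume ceph-volume created and which OSD ID it was assigned. A quick check (the commands are standard Ceph tooling; the grep field names are an assumption and may vary slightly by version):
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    ssh $ALLNODE "ceph-volume lvm list | grep -E 'osd id|devices'"
done
[root@srv1 ~]# ceph osd stat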
3) Verify the Ceph cluster
[root@srv1 ~]# ceph -s
  cluster:
    id:     85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv1 (age 4m)
    mgr: srv1(active, since 3m)
    osd: 3 osds: 3 up (since 4m), 3 in (since 11m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 93 GiB / 96 GiB avail
    pgs:     1 active+clean
[root@srv1 ~]# ceph health
HEALTH_OK
# If error 1 appears
[root@srv1 ~]# ceph health
HEALTH_WARN mon is allowing insecure global_id reclaim; Module 'restful' has failed dependency: No module named 'pecan'

# Fix
[root@srv1 ~]# pip3 install pecan werkzeug cherrypy ; reboot

# If error 2 appears
[root@srv1 ~]# ceph health
HEALTH_WARN mon is allowing insecure global_id reclaim

# Fix
[root@srv1 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
4) Verify the OSD tree
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.09357  root default
-3         0.03119      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
-5         0.03119      host srv2
 1    hdd  0.03119          osd.1      up   1.00000  1.00000
-7         0.03119      host srv3
 2    hdd  0.03119          osd.2      up   1.00000  1.00000

[root@srv1 ~]# ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED    RAW USED  %RAW USED
hdd    96 GiB  93 GiB  12 MiB   3.0 GiB       3.14
TOTAL  96 GiB  93 GiB  12 MiB   3.0 GiB       3.14

--- POOLS ---
POOL                   ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1    1     0 B        0   0 B      0     29 GiB

[root@srv1 ~]# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE    RAW USE  DATA     OMAP     META      AVAIL   %USE  VAR   PGS  STATUS
 0    hdd  0.03119   1.00000  32 GiB  1.0 GiB  3.9 MiB    3 KiB  1024 MiB  31 GiB  3.14  1.00    1      up
 1    hdd  0.03119   1.00000  32 GiB  1.0 GiB  3.9 MiB      0 B     1 GiB  31 GiB  3.14  1.00    1      up
 2    hdd  0.03119   1.00000  32 GiB  1.0 GiB  3.9 MiB      0 B     1 GiB  31 GiB  3.14  1.00    1      up
                       TOTAL  96 GiB  3.0 GiB   12 MiB  3.4 KiB   3.0 GiB  93 GiB  3.14
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
2. Using Ceph Block Devices
1) Copy srv1's ssh key to the client node
[root@srv1 ~]# vim .ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host client
    Hostname client.1000y.cloud
    User root
[root@srv1 ~]# ssh-copy-id client
2) Install the required packages on the client node
[root@srv1 ~]# ssh client "yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y"
[root@srv1 ~]# ssh client "yum install ceph-common -y"
3) Copy the cluster configuration file and keyring to the client node
[root@srv1 ~]# scp /etc/ceph/ceph.conf client:/etc/ceph/
ceph.conf                             100%  285   106.4KB/s   00:00
[root@srv1 ~]# scp /etc/ceph/ceph.client.admin.keyring client:/etc/ceph/
ceph.client.admin.keyring             100%  151    50.6KB/s   00:00
[root@srv1 ~]# ssh client "chown ceph. /etc/ceph/ceph.*"
4) Create and mount a block device on the client node
# Create the default RBD pool (pool name: rbd)
[root@client ~]# ceph osd pool create rbd 32
pool 'rbd' created
[root@client ~]# ceph osd lspools
1 device_health_metrics
2 rbd
# Enable the autoscaler on the pool
[root@client ~]# ceph osd pool set rbd pg_autoscale_mode on
set pool 2 pg_autoscale_mode to on
# Initialize the pool
[root@client ~]# rbd pool init rbd
[root@client ~]# ceph osd pool autoscale-status
POOL                    SIZE  TARGET SIZE  RATE  RAW CAPACITY  RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics      0               3.0         98292M  0.0000                                 1.0       1              on
rbd                       19               3.0         98292M  0.0000                                 1.0      32              on
5) Create a 2G block device [i.e. an image] (image name: rbd1)
[root@client ~]# rbd create --size 2G --pool rbd rbd1
[root@client ~]# rbd ls -l
NAME  SIZE   PARENT  FMT  PROT  LOCK
rbd1  2 GiB            2
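To inspect the image that was just created (its size, object layout, and which features are still enabled), rbd info can be used; this check is not part of the original steps:
[root@client ~]# rbd info rbd/rbd1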
6) Map rbd1 and verify
# The CentOS 7 3.10 kernel only supports the layering feature, so disable the unsupported features
[root@client ~]# rbd feature disable rbd1 exclusive-lock, object-map, fast-diff, deep-flatten
[root@client ~]# rbd map rbd1
/dev/rbd0
[root@client ~]# rbd showmapped
id  pool  namespace  image  snap  device
0   rbd              rbd1   -     /dev/rbd0
[root@client ~]# ll /dev/rbd0
brw-rw---- 1 root disk 253, 0 Apr 29 21:35 /dev/rbd0
7) Mount and use it
[root@client ~]# mkfs.ext4 /dev/rbd0
mke2fs 1.45.4 (23-Sep-2019)
Discarding device blocks: done
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: 6af602e4-1e32-4960-9786-50c4c8032bb9
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# df -Th /mnt
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd0      ext4  2.0G  6.0M  1.8G   1% /mnt
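The mapping above does not survive a reboot. If the image should be mapped and mounted automatically at boot, a minimal sketch using the rbdmap helper shipped with ceph-common looks like this (the keyring path and mount options are assumptions; adjust to your environment):
# register the image with rbdmap
[root@client ~]# vim /etc/ceph/rbdmap
rbd/rbd1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# enable the rbdmap service and add an fstab entry that waits for the network
[root@client ~]# systemctl enable rbdmap.service
[root@client ~]# vim /etc/fstab
/dev/rbd/rbd/rbd1    /mnt    ext4    noatime,_netdev    0 0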
8) Remove the block device
[root@client ~]# umount /mnt
[root@client ~]# rbd unmap /dev/rbd/rbd/rbd1
[root@client ~]# rbd rm rbd1 -p rbd
Removing image: 100% complete...done.
[root@client ~]# rbd ls -l
[root@client ~]#
9) Delete the pool
# Note: deleting a pool requires mon allow pool delete = true in the [Monitor Daemon] section of the configuration file
# Syntax: ceph osd pool delete [pool-name] [pool-name] <options>
[root@client ~]# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
pool 'rbd' removed
[root@client ~]# ceph osd pool autoscale-status
POOL                    SIZE  TARGET SIZE  RATE  RAW CAPACITY  RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics      0               3.0         98292M  0.0000                                 1.0       1              on
3. Using the File System (CephFS)
1) Copy srv1's ssh key to the client node
[root@srv1 ~]# vim .ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host client
    Hostname client.1000y.cloud
    User root
[root@srv1 ~]# ssh-copy-id client
2) Install the required packages on the client node
[root@srv1 ~]# ssh client "yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y"
[root@srv1 ~]# ssh client "yum install ceph-fuse -y"
3) Copy the cluster configuration file and keyring to the client node
[root@srv1 ~]# scp /etc/ceph/ceph.conf client:/etc/ceph/
ceph.conf                             100%  285   106.4KB/s   00:00
[root@srv1 ~]# scp /etc/ceph/ceph.client.admin.keyring client:/etc/ceph/
ceph.client.admin.keyring             100%  151    50.6KB/s   00:00
[root@srv1 ~]# ssh client "chown ceph. /etc/ceph/ceph.*"
4) Configure the MetaData Server (MDS) on srv1
# Create the directory required by the MDS. Directory name format: cluster-name-hostname
[root@srv1 ~]# mkdir -p /var/lib/ceph/mds/ceph-srv1
[root@srv1 ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-srv1/keyring --gen-key -n mds.srv1
creating /var/lib/ceph/mds/ceph-srv1/keyring
[root@srv1 ~]# chown -R ceph. /var/lib/ceph/mds/ceph-srv1
[root@srv1 ~]# ceph auth add mds.srv1 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-srv1/keyring
added key for mds.srv1
[root@srv1 ~]# systemctl enable --now ceph-mds@srv1
Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@srv1.service → /usr/lib/systemd/system/ceph-mds@.service.
5) Create two RADOS pools on the MDS node, one for data and one for metadata
# For the PG count, refer to the official documentation; 64 is used in this example
[root@srv1 ~]# ceph osd pool create cephfs_data 64
pool 'cephfs_data' created
[root@srv1 ~]# ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
[root@srv1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 4 and data pool 3
[root@srv1 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@srv1 ~]# ceph mds stat
cephfs:1 {0=srv1=up:active}
[root@srv1 ~]# ceph fs status cephfs
cephfs - 0 clients
======
RANK  STATE   MDS   ACTIVITY     DNS    INOS
 0    active  srv1  Reqs:    0 /s    10     13
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  1536k  29.3G
  cephfs_data      data       0   29.3G
MDS version: ceph version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb) octopus (stable)
6) Use CephFS
# Extract the base64 secret from the admin keyring
[root@client ~]# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
[root@client ~]# chmod 600 admin.key
[root@client ~]# mount -t ceph srv1.1000y.cloud:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@client ~]# df -Th /mnt
Filesystem           Type  Size  Used Avail Use% Mounted on
192.168.1.11:6789:/  ceph   30G     0   30G   0% /mnt
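To remount the file system automatically after a reboot, an fstab entry of the following shape can be used (a sketch; the secretfile path /root/admin.key is an assumption based on where the key was generated above):
[root@client ~]# vim /etc/fstab
srv1.1000y.cloud:6789:/    /mnt    ceph    name=admin,secretfile=/root/admin.key,noatime,_netdev    0 0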
4. Ceph Object Gateway
1) From the admin node (srv1), install the packages required by the RADOSGW node
[root@srv1 ~]# vim .ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host client
    Hostname client.1000y.cloud
    User root
Host srv4
    Hostname srv4.1000y.cloud
    User root

[root@srv1 ~]# ssh-copy-id srv4
[root@srv1 ~]# ssh srv4 "yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y"
[root@srv1 ~]# ssh srv4 "yum install ceph-radosgw -y"
2) Configure the RADOSGW node from the admin node (srv1)
[root@srv1 ~]# vim /etc/ceph/ceph.conf
......
......
......
# Append the following at the end of the file
# RGW definition, in the form client.rgw.<node-name>
[client.rgw.srv4]
host = 192.168.1.14
# listening port
rgw frontends = "civetweb port=8080"
# FQDN
rgw dns name = srv4.1000y.cloud
[root@srv1 ~]# scp /etc/ceph/ceph.conf srv4:/etc/ceph/
ceph.conf                             100%  395   222.5KB/s   00:00
[root@srv1 ~]# scp /etc/ceph/ceph.client.admin.keyring srv4:/etc/ceph/
ceph.client.admin.keyring             100%  151    85.1KB/s   00:00
# Configure RADOSGW
# Without a firewall
[root@srv1 ~]# ssh srv4 \
"mkdir -p /var/lib/ceph/radosgw/ceph-rgw.srv4; \
ceph auth get-or-create client.rgw.srv4 osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.srv4/keyring; \
chown ceph. /etc/ceph/ceph.*; \
chown -R ceph. /var/lib/ceph/radosgw; \
systemctl enable --now ceph-radosgw@rgw.srv4"
Created symlink /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.srv4.service → /usr/lib/systemd/system/ceph-radosgw@.service.
# With a firewall
[root@srv1 ~]# ssh srv4 \
"mkdir -p /var/lib/ceph/radosgw/ceph-rgw.srv4; \
ceph auth get-or-create client.rgw.srv4 osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.srv4/keyring; \
chown ceph. /etc/ceph/ceph.*; \
chown -R ceph. /var/lib/ceph/radosgw; \
systemctl enable --now ceph-radosgw@rgw.srv4; \
firewall-cmd --add-port=8080/tcp --permanent; firewall-cmd --reload"
Created symlink /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.srv4.service → /usr/lib/systemd/system/ceph-radosgw@.service.
3) Verify the RADOSGW status
[root@srv1 ~]# curl srv4.1000y.cloud:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
4) On the Object Gateway node (srv4), create an S3-compatible user for authenticating against the Object Gateway
[root@srv4 ~]# radosgw-admin user create --uid=snowchuai --display-name="Snow Chuai" --email=snow@1000y.cloud
{
    "user_id": "snowchuai",
    "display_name": "Snow Chuai",
    "email": "snow@1000y.cloud",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "snowchuai",
            "access_key": "30HYLAB0X7UCXMAWRRZX",
            "secret_key": "XldbFTsRy2uZkYs5heRIOHDRGZGxtqABX4LrPrkC"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
# Display the created user
[root@srv4 ~]# radosgw-admin user info --uid=snowchuai
{
    "user_id": "snowchuai",
    "display_name": "Snow Chuai",
    "email": "snow@1000y.cloud",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "snowchuai",
            "access_key": "30HYLAB0X7UCXMAWRRZX",
            "secret_key": "XldbFTsRy2uZkYs5heRIOHDRGZGxtqABX4LrPrkC"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
[root@srv4 ~]# radosgw-admin user list
[
    "snowchuai"
]
# To delete the user
[root@srv4 ~]# radosgw-admin user rm --uid=snowchuai
5) Create a Python script to verify S3 access (python-boto is a Python 2 library, so the script is run with python)
[root@client ~]# yum install python-boto -y
[root@client ~]# vim s3.py
import sys
import boto
import boto.s3.connection
# the user's keys
ACCESS_KEY = '30HYLAB0X7UCXMAWRRZX'
SECRET_KEY = 'XldbFTsRy2uZkYs5heRIOHDRGZGxtqABX4LrPrkC'
# FQDN and listening port of the Object Gateway
HOST = 'srv4.1000y.cloud'
PORT = 8080
conn = boto.connect_s3(
    aws_access_key_id = ACCESS_KEY,
    aws_secret_access_key = SECRET_KEY,
    port = PORT,
    host = HOST,
    is_secure = False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)
# create a bucket named [snow-test]
bucket = conn.create_bucket('snow-test')
# list all buckets
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

[root@client ~]# python s3.py
snow-test    2021-04-29T14:07:57.880Z
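The same result can be cross-checked on the gateway node itself; radosgw-admin can list the buckets that now exist (an extra verification step, not part of the original procedure):
[root@srv4 ~]# radosgw-admin bucket list
# the output should include "snow-test"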
5. Enable the Dashboard --- missing packages
# The following dependencies are missing from CentOS 7 and EPEL, so this cannot be done for now
python3-routes
python3-cherrypy
python3-jwt
6. Adding/Removing OSDs
1) Add an OSD node (srv5) from the admin node (srv1)
[root@srv1 ~]# vim .ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host client
    Hostname client.1000y.cloud
    User root
Host srv4
    Hostname srv4.1000y.cloud
    User root
Host srv5
    Hostname srv5.1000y.cloud
    User root
[root@srv1 ~]# ssh-copy-id srv5
# Configure the firewall rules on srv5
[root@srv1 ~]# ssh srv5 "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
[root@srv1 ~]# ssh srv5 "yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y"
[root@srv1 ~]# ssh srv5 "yum install ceph -y"
[root@srv1 ~]# scp /etc/ceph/ceph.conf srv5:/etc/ceph/ceph.conf
[root@srv1 ~]# scp /etc/ceph/ceph.client.admin.keyring srv5:/etc/ceph
[root@srv1 ~]# scp /var/lib/ceph/bootstrap-osd/ceph.keyring srv5:/var/lib/ceph/bootstrap-osd
# Configure the OSD
[root@srv1 ~]# ssh srv5 \
"chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
parted --script /dev/sdb 'mklabel gpt'; \
parted --script /dev/sdb 'mkpart primary 0% 100%'; \
ceph-volume lvm create --data /dev/sdb1"
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fb4306d1-ec4a-4e7c-ae08-7d6f9fb7fed6
......
......
Running command: /usr/bin/systemctl start ceph-osd@3
--> ceph-volume lvm activate successful for osd ID: 3
--> ceph-volume lvm create successful for: /dev/sdb1
# Verification 1
[root@srv1 ~]# ceph -s
  cluster:
    id:     85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv1 (age 96m)
    mgr: srv1(active, since 96m)
    mds: cephfs:1 {0=srv1=up:active}
    osd: 4 osds: 4 up (since 0.961726s), 4 in (since 0.961726s)
    rgw: 1 daemon active (srv4)

  task status:

  data:
    pools:   8 pools, 193 pgs
    objects: 226 objects, 8.2 KiB
    usage:   3.2 GiB used, 93 GiB / 96 GiB avail
    pgs:     193 active+clean
2) Remove the OSD node (srv5) from the admin node (srv1)
[root@srv1 ~]# ceph -s
  cluster:
    id:     e4953e75-15d4-4d7c-b66d-d748b4747bb1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv1 (age 4h)
    mgr: srv1(active, since 40m)
    mds: cephfs:1 {0=srv1=up:active}
    osd: 4 osds: 4 up (since 2m), 4 in (since 2m)
    rgw: 1 daemon active (srv4)

  task status:
  scrub status:
    mds.srv1: idle

  data:
    pools:   8 pools, 193 pgs
    objects: 246 objects, 13 KiB
    usage:   4.4 GiB used, 156 GiB / 160 GiB avail
    pgs:     193 active+clean
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.12476  root default
-3         0.03119      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
-5         0.03119      host srv2
 1    hdd  0.03119          osd.1      up   1.00000  1.00000
-7         0.03119      host srv3
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
-9         0.03119      host srv5
 3    hdd  0.03119          osd.3      up   1.00000  1.00000
# Mark the OSD ID to be removed as out
[root@srv1 ~]# ceph osd out 3
marked out osd.3.
# After ceph osd out, the cluster status can be watched in real time while it rebalances automatically
# Press [Ctrl+c] to exit the live watch
[root@srv1 ~]# ceph -w
  cluster:
    id:     85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
    health: HEALTH_WARN
            Reduced data availability: 41 pgs inactive, 8 pgs peering
            Degraded data redundancy: 166/678 objects degraded (24.484%), 12 pgs degraded

  services:
    mon: 1 daemons, quorum srv1 (age 98m)
    mgr: srv1(active, since 98m)
    mds: cephfs:1 {0=srv1=up:active}
    osd: 4 osds: 4 up (since 115s), 3 in (since 5s); 21 remapped pgs
    rgw: 1 daemon active (srv4)

  task status:

  data:
    pools:   8 pools, 193 pgs
    objects: 226 objects, 8.2 KiB
    usage:   3.2 GiB used, 93 GiB / 96 GiB avail
    pgs:     81.347% pgs not active
             166/678 objects degraded (24.484%)
             125 activating
             35  active+clean
             21  remapped+peering
             11  activating+degraded
             1   active+recovery_wait+degraded

  progress:
    Rebalancing after osd.3 marked out (1s)
      [............................]

......
......
2021-04-29T22:31:15.996882+0800 mon.srv1 [INF] Cluster is now healthy
# Exit
^C
# Confirm the cluster status
[root@srv1 ~]# ceph -s | grep health
    health: HEALTH_OK
# Once the status is HEALTH_OK, stop the OSD service on srv5
[root@srv1 ~]# ssh srv5 "systemctl disable --now ceph-osd@3.service"
Removed /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service.
# Purge the OSD ID of the removed node
[root@srv1 ~]# ceph osd purge 3 --yes-i-really-mean-it
purged osd.3
# Verify the cluster status
[root@srv1 ~]# ceph -s
  cluster:
    id:     85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv1 (age 100m)
    mgr: srv1(active, since 100m)
    mds: cephfs:1 {0=srv1=up:active}
    osd: 3 osds: 3 up (since 20s), 3 in (since 101s)
    rgw: 1 daemon active (srv4)

  task status:

  data:
    pools:   8 pools, 193 pgs
    objects: 226 objects, 8.2 KiB
    usage:   3.2 GiB used, 93 GiB / 96 GiB avail
    pgs:     193 active+clean
3) Remove the host from the CRUSH tree
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.09357  root default
-3         0.03119      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
-5         0.03119      host srv2
 1    hdd  0.03119          osd.1      up   1.00000  1.00000
-7         0.03119      host srv3
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
-9               0      host srv5
[root@srv1 ~]# ceph osd crush remove srv5
removed item id -9 name 'srv5' from crush map
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.09357  root default
-3         0.03119      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
-5         0.03119      host srv2
 1    hdd  0.03119          osd.1      up   1.00000  1.00000
-7         0.03119      host srv3
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
4) Remove hosts --- delete only the data (under /var/lib/ceph) and keep the Ceph software (ceph-deploy-managed clusters)
[snow@node1 ceph]$ sudo ceph-deploy purgedata srv6
[snow@node1 ceph]$ sudo ceph-deploy purgedata srv6 srv7
5) Remove hosts --- delete both the data and the software
[snow@node1 ceph]$ sudo ceph-deploy purge srv6
[snow@node1 ceph]$ sudo ceph-deploy purge srv6 srv7
7. Configuring Ceph Octopus -- SSD Cache Pool and Journal Disks
7.1 Topology
        +--------------------+           |           +----------------------+
        |[client.1000y.cloud]|           |           |  [srv4.1000y.cloud]  |
        |     Ceph Client    +-----------+-----------+        RADOSGW       |
        |    192.168.1.19    |           |           |     192.168.1.14     |
        +--------------------+           |           +----------------------+
            +----------------------------+----------------------------+----------------------------+
            |                            |                            |                            |
            |192.168.1.11                |192.168.1.12                |192.168.1.13                |192.168.1.15
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [srv1.1000y.cloud]  |    |   [srv2.1000y.cloud]  |    |   [srv3.1000y.cloud]  |    |   [srv5.1000y.cloud]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |    |                       |
|  add sdb and sdc disks|    |  add sdb and sdc disks|    |  add sdb and sdc disks|    |  add sdb and sdc disks|
+-----------------------+    +-----------------------+    +-----------------------+    +-----------------------+
7.2 Configure the Monitor Daemon and Manager Daemon
1) Generate an ssh key on the admin node srv1 and distribute it to all nodes (including the admin node)
[root@srv1 ~]# ssh-keygen -q -N ''
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@srv1 ~]# vim ~/.ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
[root@srv1 ~]# chmod 600 ~/.ssh/config
[root@srv1 ~]# ssh-copy-id srv1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'srv1.1000y.cloud (192.168.1.11)' can't be established.
ECDSA key fingerprint is SHA256:gS5CEP/KsM6sZ/Bt1w9J0u/U0neykXBI95gLFr1YOo4.
ECDSA key fingerprint is MD5:1f:b1:2b:ac:4a:94:cd:49:8a:a4:73:c7:a8:60:4c:5e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@srv1.1000y.cloud's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'srv1.1000y.cloud'" and check to make sure that only the key(s) you wanted were added.
[root@srv1 ~]# ssh-copy-id srv2
[root@srv1 ~]# ssh-copy-id srv3
2) Install Ceph on all nodes (including the admin node) --- enable the EPEL repository first
[root@srv1 ~]# pssh -h host-list.txt -i 'yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y'
[root@srv1 ~]# pssh -h host-list.txt -i 'yum install ceph -y'
[root@srv1 ~]# pssh -h host-list.txt -i 'pip3 install pecan werkzeug cherrypy'
3) Configure the Monitor and Manager Daemon on the admin node
3.1) Generate a UUID
[root@srv1 ~]# uuidgen
85675a50-2a0c-46f5-bf37-ed6fb37bb9a1

3.2) Create a configuration file named after the cluster (the cluster name here is ceph)
[root@srv1 ~]# vim /etc/ceph/ceph.conf
# global settings
[global]
# cluster network address
cluster network = 192.168.1.0/24
# public network address
public network = 192.168.1.0/24
# the cluster UUID (fsid) generated above
fsid = 85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
# IP address of the Monitor Daemon
mon host = 192.168.1.11
# hostname of the Monitor Daemon
mon initial members = srv1
# CRUSH rule used when creating OSD pools (decides which OSDs store which objects)
# -1: pick the rule with the lowest numeric ID and use it
osd pool default crush rule = -1

# monitor definition, in the form mon.$Hostname
[mon.srv1]
host = srv1
mon addr = 192.168.1.11
mon allow pool delete = true

3.3) Generate the cluster monitoring keyring
[root@srv1 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring

3.4) Generate the cluster admin keyring
[root@srv1 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
--gen-key -n client.admin \
--cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring

3.5) Generate the bootstrap keyring
[root@srv1 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
--gen-key -n client.bootstrap-osd \
--cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring

3.6) Import the keys
[root@srv1 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring

[root@srv1 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring

3.7) Generate the Monitor Map
[root@srv1 ~]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@srv1 ~]# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@srv1 ~]# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@srv1 ~]# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to 85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
3.8) Create a directory for the Monitor Daemon [directory name format: cluster-name-hostname]
[root@srv1 ~]# mkdir /var/lib/ceph/mon/ceph-srv1

3.9) Associate the key and Monitor Map with the Mon Daemon
[root@srv1 ~]# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[root@srv1 ~]# chown ceph. /etc/ceph/ceph.*
[root@srv1 ~]# chown -R ceph. /var/lib/ceph/mon/ceph-srv1 /var/lib/ceph/bootstrap-osd
3.10) Start the ceph-mon service
[root@srv1 ~]# systemctl enable --now ceph-mon@$NODENAME

3.11) Additional settings
# Enable the Messenger v2 protocol
[root@srv1 ~]# ceph mon enable-msgr2

# Enable the autoscaler module
[root@srv1 ~]# ceph mgr module enable pg_autoscaler

3.12) Configure and start the Manager Daemon
# Create a directory for the Manager Daemon [format: cluster-name-hostname]
[root@srv1 ~]# mkdir /var/lib/ceph/mgr/ceph-srv1

# Create the authentication key
[root@srv1 ~]# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.srv1]
        key = AQC0bgdftAHzERAAspH7BODMhDoouvOjyMfHyg==

[root@srv1 ~]# ceph auth get-or-create mgr.srv1 > /etc/ceph/ceph.mgr.admin.keyring
[root@srv1 ~]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-srv1/keyring
[root@srv1 ~]# chown ceph. /etc/ceph/ceph.mgr.admin.keyring
[root@srv1 ~]# chown -R ceph. /var/lib/ceph/mgr/ceph-srv1
[root@srv1 ~]# systemctl enable --now ceph-mgr@$NODENAME
3.13) Firewalld configuration
[root@srv1 ~]# firewall-cmd --add-service=ceph-mon --permanent
success
[root@srv1 ~]# firewall-cmd --reload
success

3.14) Verify the Ceph cluster and confirm that the Mon and Manager Daemons are running
# Since no OSDs have been added yet, the cluster state is HEALTH_WARN, which is expected
[root@srv1 ~]# ceph -s
  cluster:
    id:     e4953e75-15d4-4d7c-b66d-d748b4747bb1
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum srv1 (age 8m)
    mgr: srv1(active, since 72s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
7.3 Configure and Add OSDs
1) Firewall configuration
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    ssh $ALLNODE "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
done
2) Configure OSDs on every node
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    if [ ! ${ALLNODE} = "srv1" ]
    then
        scp /etc/ceph/ceph.conf ${ALLNODE}:/etc/ceph/ceph.conf
        scp /etc/ceph/ceph.client.admin.keyring ${ALLNODE}:/etc/ceph
        scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${ALLNODE}:/var/lib/ceph/bootstrap-osd
    fi
    ssh $ALLNODE \
    "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
    parted --script /dev/sdb 'mklabel gpt'; \
    parted --script /dev/sdc 'mklabel gpt'; \
    parted --script /dev/sdb 'mkpart primary 0% 100%'; \
    parted --script /dev/sdc 'mkpart primary 0% 100%'; \
    ceph-volume lvm create --data /dev/sdb1; \
    ceph-volume lvm create --data /dev/sdc1"
done
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3716b4ad-ccde-4609-a30d-a5b1c46c01f9
Running command: /usr/sbin/vgcreate --force --yes ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3 /dev/sdb1
 stdout: Physical volume "/dev/sdb1" successfully created.
 stdout: Volume group "ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3" successfully created
Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-3716b4ad-ccde-4609-a30d-a5b1c46c01f9 ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3
......
......
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/sdb1
3) Verify the Ceph cluster
[root@srv1 ~]# ceph -s
  cluster:
    id:     85675a50-2a0c-46f5-bf37-ed6fb37bb9a1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv1 (age 4m)
    mgr: srv1(active, since 3m)
    osd: 3 osds: 3 up (since 4m), 3 in (since 11m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 93 GiB / 96 GiB avail
    pgs:     1 active+clean

[root@srv1 ~]# ceph health
HEALTH_OK
# If error 1 appears
[root@srv1 ~]# ceph health
HEALTH_WARN mon is allowing insecure global_id reclaim; Module 'restful' has failed dependency: No module named 'pecan'

# Fix
[root@srv1 ~]# pip3 install pecan werkzeug cherrypy ; reboot

# If error 2 appears
[root@srv1 ~]# ceph health
HEALTH_WARN mon is allowing insecure global_id reclaim

# Fix
[root@srv1 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
4) Verify the OSD tree
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.18713  root default
-3         0.06238      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
 1    hdd  0.03119          osd.1      up   1.00000  1.00000
-5         0.06238      host srv2
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
 3    hdd  0.03119          osd.3      up   1.00000  1.00000
-7         0.06238      host srv3
 4    hdd  0.03119          osd.4      up   1.00000  1.00000
 5    hdd  0.03119          osd.5      up   1.00000  1.00000

[root@srv1 ~]# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED    RAW USED  %RAW USED
hdd    192 GiB  186 GiB  27 MiB   6.0 GiB       3.14
TOTAL  192 GiB  186 GiB  27 MiB   6.0 GiB       3.14

--- POOLS ---
POOL                   ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1    1     0 B        0   0 B      0     59 GiB

[root@srv1 ~]# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE    RAW USE  DATA     OMAP  META   AVAIL   %USE  VAR   PGS  STATUS
 0    hdd  0.03119   1.00000  32 GiB  1.0 GiB  4.4 MiB   0 B  1 GiB  31 GiB  3.14  1.00    1      up
 1    hdd  0.03119   1.00000  32 GiB  1.0 GiB  4.4 MiB   0 B  1 GiB  31 GiB  3.14  1.00    0      up
 2    hdd  0.03119   1.00000  32 GiB  1.0 GiB  4.4 MiB   0 B  1 GiB  31 GiB  3.14  1.00    0      up
 3    hdd  0.03119   1.00000  32 GiB  1.0 GiB  4.3 MiB   0 B  1 GiB  31 GiB  3.14  1.00    1      up
 4    hdd  0.03119   1.00000  32 GiB  1.0 GiB  4.4 MiB   0 B  1 GiB  31 GiB  3.14  1.00    0      up
 5    hdd  0.03119   1.00000  32 GiB  1.0 GiB  4.4 MiB   0 B  1 GiB  31 GiB  3.14  1.00    1      up
                       TOTAL  192 GiB  6.0 GiB  26 MiB   0 B  6 GiB  186 GiB  3.14
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
7.4 Configure the SSD Cache
1) Check the current device class of every OSD
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.18713  root default                            
-3         0.06238      host srv1                           
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
 1    hdd  0.03119          osd.1      up   1.00000  1.00000
-5         0.06238      host srv2                           
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
 3    hdd  0.03119          osd.3      up   1.00000  1.00000
-7         0.06238      host srv3                           
 4    hdd  0.03119          osd.4      up   1.00000  1.00000
 5    hdd  0.03119          osd.5      up   1.00000  1.00000
[root@srv1 ~]# ceph osd crush class ls
[
    "hdd"
]
2) Remove the selected OSDs from their current class and change it to ssd [osd.1, osd.3 and osd.5 in this example] --- [valid classes: hdd, ssd, nvme]
[root@srv1 ~]# for i in 1 3 5; do ceph osd crush rm-device-class osd.$i; done
done removing class of osd(s): 1
done removing class of osd(s): 3
done removing class of osd(s): 5
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.18713  root default
-3         0.06238      host srv1
 1         0.03119          osd.1      up   1.00000  1.00000
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
-5         0.06238      host srv2
 3         0.03119          osd.3      up   1.00000  1.00000
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
-7         0.06238      host srv3
 5         0.03119          osd.5      up   1.00000  1.00000
 4    hdd  0.03119          osd.4      up   1.00000  1.00000
[root@srv1 ~]# for i in 1 3 5; do ceph osd crush set-device-class ssd osd.$i; done
set osd(s) 1 to class 'ssd'
set osd(s) 3 to class 'ssd'
set osd(s) 5 to class 'ssd'
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.18713  root default
-3         0.06238      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
 1    ssd  0.03119          osd.1      up   1.00000  1.00000
-5         0.06238      host srv2
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
 3    ssd  0.03119          osd.3      up   1.00000  1.00000
-7         0.06238      host srv3
 4    hdd  0.03119          osd.4      up   1.00000  1.00000
 5    ssd  0.03119          osd.5      up   1.00000  1.00000
[root@srv1 ~]# ceph osd crush class ls
[
    "hdd",
    "ssd"
]
3) Create a CRUSH rule based on the ssd class
[root@srv1 ~]# ceph osd crush rule create-replicated ssd_rule default host ssd
[root@srv1 ~]# ceph osd crush rule list
replicated_rule
ssd_rule
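To see exactly what the new rule selects (its steps and the device class it is bound to), the rule can be dumped; this check is an addition to the original steps:
[root@srv1 ~]# ceph osd crush rule dump ssd_rule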
4) Create the data pool and the cache pool
(1) Create a data pool --- qyydata
[root@srv1 ~]# ceph osd pool create qyydata 16
pool 'qyydata' created
(2) Create a cache pool --- qyycache
[root@srv1 ~]# ceph osd pool create qyycache 16 ssd_rule
pool 'qyycache' created
[root@srv1 ~]# ceph osd pool get qyycache crush_rule
crush_rule: ssd_rule
[root@srv1 ~]# ceph osd lspools
1 device_health_metrics
2 qyydata
3 qyycache
5) Set up the cache tier
# Put the qyycache pool in front of the qyydata pool
[root@srv1 ~]# ceph osd tier add qyydata qyycache
pool 'qyycache' is now (or already was) a tier of 'qyydata'
# Set the cache mode to writeback
[root@srv1 ~]# ceph osd tier cache-mode qyycache writeback
set cache-mode for pool 'qyycache' to writeback
# Direct all client traffic from the base pool to the cache pool
[root@srv1 ~]# ceph osd tier set-overlay qyydata qyycache
overlay for 'qyydata' is now (or already was) 'qyycache'

# To make the cache pool read-only instead, proceed as follows
# Put the cache pool in front of the data pool
[root@srv1 ~]# ceph osd tier add qyydata qyycache
# Set the cache mode to readonly
[root@srv1 ~]# ceph osd tier cache-mode qyycache readonly


6) View the details of the qyydata and qyycache pools
[root@srv1 ~]# ceph osd dump | egrep 'qyydata|qyycache'
pool 2 'qyydata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 54 lfor 54/54/54 flags hashpspool tiers 3 read_tier 3 write_tier 3 stripe_width 0
pool 3 'qyycache' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 54 lfor 54/54/54 flags hashpspool,incomplete_clones tier_of 2 cache_mode writeback stripe_width 0
7) Basic cache tuning
# Use a bloom filter for the cache tier's hit_set_type
[root@srv1 ~]# ceph osd pool set qyycache hit_set_type bloom
set pool 2 hit_set_type to bloom
# By default the cache pool decides cache hits based on when the data was modified. You can also set the hit count (hit_set_count), the hit period (hit_set_period), and the maximum amount of cached data (target_max_bytes).
# hit_set_count and hit_set_period define how long each HitSet covers and how many HitSets are kept. By retaining this access history, Ceph can tell whether a client accessed an object once or many times over a given period (lifetime vs. hotness).
[root@srv1 ~]# ceph osd pool set qyycache hit_set_count 1
set pool 2 hit_set_count to 1
[root@srv1 ~]# ceph osd pool set qyycache hit_set_period 3600
set pool 2 hit_set_period to 3600             # 1 hour
[root@srv1 ~]# ceph osd pool set qyycache target_max_bytes 1073741824
set pool 2 target_max_bytes to 1073741824     # 1 GB
# Set a hard limit on the number of objects (or amount of data) in the cache pool to force flushing and eviction
[root@srv1 ~]# ceph osd pool set qyycache target_max_objects 10000
set pool 2 target_max_objects to 10000
# Set min_read_recency_for_promote and min_write_recency_for_promote
The cache tiering agent performs two main operations:
  - Flushing: writes modified (dirty) objects back to the slower backing pool, while keeping them in the cache pool.
  - Evicting: removes unmodified (clean) objects from the cache pool.
Flushing and eviction are driven mainly by how full the cache pool is. When the amount of modified data in the cache pool reaches a threshold (a percentage of its capacity), the agent starts flushing it to the backing pool; with a dirty ratio of 40%, for example, flushing is triggered once 40% of the cached data has been modified.
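The 40% dirty threshold mentioned above corresponds to the cache_target_dirty_ratio pool option (it appears again in section 7.6); setting it explicitly would look like this:
[root@srv1 ~]# ceph osd pool set qyycache cache_target_dirty_ratio 0.4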
[root@srv1 ~]# ceph osd pool set qyycache min_read_recency_for_promote 1
set pool 2 min_read_recency_for_promote to 1
[root@srv1 ~]# ceph osd pool set qyycache min_write_recency_for_promote 1
set pool 3 min_write_recency_for_promote to 1
[root@srv1 ~]# ceph -s
  cluster:
    id:     6ebd60d5-204c-4e7f-9c3a-87b6650b6c20
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv1 (age 25m)
    mgr: srv1(active, since 24m)
    osd: 6 osds: 6 up (since 17m), 6 in (since 17m)

  data:
    pools:   4 pools, 49 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 188 GiB / 192 GiB avail
    pgs:     49 active+clean
7.5 Client Configuration
1) Copy srv1's ssh key to the client node
[root@srv1 ~]# vim .ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host client
    Hostname srv4.1000y.cloud
    User root
[root@srv1 ~]# ssh-copy-id client
2) Install the required packages on the client node
[root@srv1 ~]# ssh srv4 "yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y"
[root@srv1 ~]# ssh srv4 "yum install ceph-common -y"
3) Copy the cluster configuration file and keyring to the client node
[root@srv1 ~]# scp /etc/ceph/ceph.conf srv4:/etc/ceph/
ceph.conf                             100%  285   106.4KB/s   00:00
[root@srv1 ~]# scp /etc/ceph/ceph.client.admin.keyring srv4:/etc/ceph/
ceph.client.admin.keyring             100%  151    50.6KB/s   00:00
[root@srv1 ~]# ssh srv4 "chown ceph. /etc/ceph/ceph.*"
4) Carve out an RBD on srv4
root@srv4:~# ceph osd lspools
1 device_health_metrics
2 qyydata
3 qyycache
root@srv4:~# rbd pool init qyydata
# Create an image named rbd1
root@srv4:~# rbd -p qyydata create rbd1 --size 2G --image-feature layering
root@srv4:~# rbd -p qyydata ls -l
NAME  SIZE   PARENT  FMT  PROT  LOCK
rbd1  2 GiB            2
5) Map it
root@srv4:~# rbd map -p qyydata rbd1
/dev/rbd0
root@srv4:~# rbd showmapped
id  pool     namespace  image  snap  device
0   qyydata             rbd1   -     /dev/rbd0
6) Use it
root@srv4:~# mkfs.ext4 /dev/rbd0
root@srv4:~# mount /dev/rbd0 /mnt
root@srv4:~# df -Th /mnt
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd0      ext4  2.0G  6.0M  1.8G   1% /mnt
7.6 Other Operations on the SSD Cache Pool
1) To remove a writeback cache pool, first switch it to forward mode
[root@srv1 ~]# ceph osd tier cache-mode qyycache forward --yes-i-really-mean-it
2) Check the cache pool to make sure all objects have been flushed (this may take a while)
[root@srv1 ~]# rados -p qyycache ls
3) If there are still objects in the cache pool, flush and evict them manually
[root@srv1 ~]# rados -p qyycache cache-flush-evict-all
4) Remove the overlay so that clients no longer direct traffic to the cache
[root@srv1 ~]# ceph osd tier remove-overlay qyydata
5) Detach the cache pool from the base pool
[root@srv1 ~]# ceph osd tier remove qyydata qyycache
6) Cache pool tuning parameters
(1) Hit-set filter, a Bloom filter by default
[root@srv1 ~]# ceph osd pool set qyycache hit_set_type bloom
[root@srv1 ~]# ceph osd pool set qyycache hit_set_count 1
# Set the false-positive rate of the Bloom filter
[root@srv1 ~]# ceph osd pool set qyycache hit_set_fpp 0.15
# Set the cache validity period, in seconds
[root@srv1 ~]# ceph osd pool set qyycache hit_set_period 3600
(2) Set how many bytes or how many objects the cache pool may hold before the cache tiering agent starts flushing objects to the backing pool and evicting them
# Start flushing and evicting when the cache pool holds 1 GB of data
[root@srv1 ~]# ceph osd pool set qyycache target_max_bytes 1073741824
# Start flushing and evicting when the cache pool holds 10,000 objects
[root@srv1 ~]# ceph osd pool set qyycache target_max_objects 10000
(3) Define the minimum age before the cache tier flushes an object to the backing tier or evicts it
[root@srv1 ~]# ceph osd pool set qyycache cache_min_flush_age 600
[root@srv1 ~]# ceph osd pool set qyycache cache_min_evict_age 600
(4) Define the percentage of dirty (modified) objects at which the cache tiering agent starts flushing them from the cache tier to the backing tier
[root@srv1 ~]# ceph osd pool set qyycache cache_target_dirty_ratio 0.4
(5) When the cache pool reaches the given fill ratio, the cache tiering agent evicts objects to maintain free capacity, flushing clean (unmodified) objects as needed
[root@srv1 ~]# ceph osd pool set qyycache cache_target_full_ratio 0.8
(6) Set how many HitSets are checked on reads and writes to decide whether to promote an object asynchronously (i.e. move it from cold to hot data in the cache pool).
The value should be between 0 and hit_set_count. At 0, every object is promoted immediately after being read or written; at 1, only the current HitSet is checked and the object is promoted only if it appears there. Any higher value checks that many historical HitSets, and the object is promoted if it appears in any of the min_read_recency_for_promote HitSets examined.
[root@srv1 ~]# ceph osd pool set qyycache min_read_recency_for_promote 1
[root@srv1 ~]# ceph osd pool set qyycache min_write_recency_for_promote 1
7.7 Using an SSD as the ceph-osd Journal Disk
1) Add four 32 GB disks to every OSD node
2) Create a VG on all OSD nodes
[root@srv1 ~]# pvcreate /dev/sdd
[root@srv1 ~]# vgcreate data /dev/sdd
[root@srv1 ~]# lvcreate -l 100%FREE --name log data
[root@srv2 ~]# pvcreate /dev/sdd
[root@srv2 ~]# vgcreate data /dev/sdd
[root@srv2 ~]# lvcreate -l 100%FREE --name log data
[root@srv3 ~]# pvcreate /dev/sdd
[root@srv3 ~]# vgcreate data /dev/sdd
[root@srv3 ~]# lvcreate -l 100%FREE --name log data

3) Use filestore in journal mode
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    ssh $ALLNODE "ceph-volume lvm create --filestore --data /dev/sde --journal data/log"
done
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.28070  root default
-3         0.09357      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
 6    hdd  0.03119          osd.6      up   1.00000  1.00000
 1    ssd  0.03119          osd.1      up   1.00000  1.00000
-5         0.09357      host srv2
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
 7    hdd  0.03119          osd.7      up   1.00000  1.00000
 3    ssd  0.03119          osd.3      up   1.00000  1.00000
-7         0.09357      host srv3
 4    hdd  0.03119          osd.4      up   1.00000  1.00000
 8    hdd  0.03119          osd.8      up   1.00000  1.00000
 5    ssd  0.03119          osd.5      up   1.00000  1.00000
4) Use bluestore
[root@srv1 ~]# pvcreate /dev/sdf
[root@srv1 ~]# vgcreate cache /dev/sdf
[root@srv1 ~]# lvcreate -l 50%FREE --name db-lv-0 cache
[root@srv1 ~]# lvcreate -l 50%FREE --name wal-lv-0 cache
[root@srv2 ~]# pvcreate /dev/sdf
[root@srv2 ~]# vgcreate cache /dev/sdf
[root@srv2 ~]# lvcreate -l 50%FREE --name db-lv-0 cache
[root@srv2 ~]# lvcreate -l 50%FREE --name wal-lv-0 cache
[root@srv3 ~]# pvcreate /dev/sdf
[root@srv3 ~]# vgcreate cache /dev/sdf
[root@srv3 ~]# lvcreate -l 50%FREE --name db-lv-0 cache
[root@srv3 ~]# lvcreate -l 50%FREE --name wal-lv-0 cache
# Create the OSDs
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    ssh $ALLNODE "ceph-volume lvm create --bluestore --data /dev/sdg --block.db cache/db-lv-0 --block.wal cache/wal-lv-0"
done
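To confirm that the db and wal of the new OSDs actually landed on the cache VG, ceph-volume can list the devices behind each OSD on a node (an extra check, not in the original text; the exact section labels may vary slightly by version):
[root@srv1 ~]# ceph-volume lvm list
# each bluestore OSD should be shown with its [block], [db] and [wal] devices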
[root@srv1 ~]# ceph -s
  cluster:
    id:     6ebd60d5-204c-4e7f-9c3a-87b6650b6c20
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv1 (age 110m)
    mgr: srv1(active, since 109m)
    osd: 12 osds: 12 up (since 45s), 12 in (since 85s)

  data:
    pools:   4 pools, 49 pgs
    objects: 28 objects, 4.4 MiB
    usage:   55 GiB used, 377 GiB / 432 GiB avail
    pgs:     49 active+clean
7.8 Creating a Pool on Specific OSDs
1) Add OSDs
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    ssh $ALLNODE \
    "parted --script /dev/sdh 'mklabel gpt'; \
    parted --script /dev/sdh 'mkpart primary 0% 100%'; \
    ceph-volume lvm create --data /dev/sdh1"
done
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.51494  root default
-3         0.17165      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
 6    hdd  0.03119          osd.6      up   1.00000  1.00000
 9    hdd  0.04689          osd.9      up   1.00000  1.00000
12    hdd  0.03119          osd.12     up   1.00000  1.00000
 1    ssd  0.03119          osd.1      up   1.00000  1.00000
-5         0.17165      host srv2
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
 7    hdd  0.03119          osd.7      up   1.00000  1.00000
10    hdd  0.04689          osd.10     up   1.00000  1.00000
13    hdd  0.03119          osd.13     up   1.00000  1.00000
 3    ssd  0.03119          osd.3      up   1.00000  1.00000
-7         0.17165      host srv3
 4    hdd  0.03119          osd.4      up   1.00000  1.00000
 8    hdd  0.03119          osd.8      up   1.00000  1.00000
11    hdd  0.04689          osd.11     up   1.00000  1.00000
14    hdd  0.03119          osd.14     up   1.00000  1.00000
 5    ssd  0.03119          osd.5      up   1.00000  1.00000
2) Assign a custom device class
[root@srv1 ~]# for i in 12 13 14; do ceph osd crush rm-device-class osd.$i; done
done removing class of osd(s): 12
done removing class of osd(s): 13
done removing class of osd(s): 14
[root@srv1 ~]# for i in 12 13 14; do ceph osd crush set-device-class rbd osd.$i; done
set osd(s) 12 to class 'rbd'
set osd(s) 13 to class 'rbd'
set osd(s) 14 to class 'rbd'
[root@srv1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1         0.51494  root default
-3         0.17165      host srv1
 0    hdd  0.03119          osd.0      up   1.00000  1.00000
 6    hdd  0.03119          osd.6      up   1.00000  1.00000
 9    hdd  0.04689          osd.9      up   1.00000  1.00000
12    rbd  0.03119          osd.12     up   1.00000  1.00000
 1    ssd  0.03119          osd.1      up   1.00000  1.00000
-5         0.17165      host srv2
 2    hdd  0.03119          osd.2      up   1.00000  1.00000
 7    hdd  0.03119          osd.7      up   1.00000  1.00000
10    hdd  0.04689          osd.10     up   1.00000  1.00000
13    rbd  0.03119          osd.13     up   1.00000  1.00000
 3    ssd  0.03119          osd.3      up   1.00000  1.00000
-7         0.17165      host srv3
 4    hdd  0.03119          osd.4      up   1.00000  1.00000
 8    hdd  0.03119          osd.8      up   1.00000  1.00000
11    hdd  0.04689          osd.11     up   1.00000  1.00000
14    rbd  0.03119          osd.14     up   1.00000  1.00000
 5    ssd  0.03119          osd.5      up   1.00000  1.00000
3) Create a CRUSH rule for the rbd class
[root@srv1 ~]# ceph osd crush rule create-replicated rbd_rule default host rbd
[root@srv1 ~]# ceph osd crush rule list
replicated_rule
ssd_rule
rbd_rule
4) Create a pool
[root@srv1 ~]# ceph osd pool create qyy 16 rbd_rule
pool 'qyy' created
[root@srv1 ~]# ceph osd lspools
1 device_health_metrics
2 qyydata
3 qyycache
4 qyy
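As with the cache pool earlier, it is worth confirming that the new pool really picked up the rbd-class rule:
[root@srv1 ~]# ceph osd pool get qyy crush_rule
crush_rule: rbd_rule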
5) Carve out an RBD on srv4
root@srv4:~# ceph osd lspools
1 device_health_metrics
2 qyydata
3 qyycache
4 qyy
root@srv4:~# rbd pool init qyy
# Create an image named rbd2
root@srv4:~# rbd -p qyy create rbd2 --size 2G --image-feature layering
root@srv4:~# rbd -p qyy ls -l
NAME  SIZE   PARENT  FMT  PROT  LOCK
rbd2  2 GiB            2
6) Map it
root@srv4:~# rbd map -p qyy rbd2
/dev/rbd1
root@srv4:~# rbd showmapped
id  pool     namespace  image  snap  device
0   qyydata             rbd1   -     /dev/rbd0
2   qyy                 rbd2   -     /dev/rbd1
7) Use it
root@srv4:~# mkfs.ext4 /dev/rbd1
root@srv4:~# mount /dev/rbd1 /mnt
root@srv4:~# df -Th /mnt
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd1      ext4  2.0G  6.0M  1.8G   1% /mnt

 

 
