Compiled, organized, and written by snow chuai --- 2024/01/03

1.1 Topology
+--------------------+            |            +----------------------+
| [srv9.1000y.cloud] |            |            |  [srv7.1000y.cloud]  |
|    Ceph Client     +------------+------------+        RADOSGW       |
|    192.168.1.19    |            |            |     192.168.1.17     |
+--------------------+            |            +----------------------+
             +--------------------+---------------------+
             |                    |                     |
             |192.168.1.11        |192.168.1.12         |192.168.1.13
 +-----------+-----------+  +-----------+-----------+  +-----------+-----------+
 | [srv1.1000y.cloud]    |  | [srv2.1000y.cloud]    |  | [srv3.1000y.cloud]    |
 |    Object Storage     +--+    Object Storage     +--+    Object Storage     |
 |    Monitor Daemon     |  |    Monitor Daemon     |  |    Monitor Daemon     |
 |    Manager Daemon     |  |    Manager Daemon     |  |    Manager Daemon     |
 +-----------+-----------+  +-----------+-----------+  +-----------+-----------+
             |----------------------------|----------------------------|
             |192.168.1.18                |192.168.1.20                 |192.168.1.21
 +-----------+-----------+  +-----------+-----------+  +-----------+-----------+
 | [srv8.1000y.cloud]    |  | [srv10.1000y.cloud]   |  | [srv11.1000y.cloud]   |
 |         Cache         +--+         Cache         +--+         Cache         |
 |                       |  |                       |  |                       |
 | 1 sdb disk added      |  | 1 sdb disk added      |  | 1 sdb disk added      |
 +-----------+-----------+  +-----------+-----------+  +-----------+-----------+
             |----------------------------|----------------------------|
             |192.168.1.22                |192.168.1.23                 |192.168.1.24
 +-----------+-----------+  +-----------+-----------+  +-----------+-----------+
 | [srv12.1000y.cloud]   |  | [srv13.1000y.cloud]   |  | [srv14.1000y.cloud]   |
 |       Log Disk        +--+       Log Disk        +--+       Log Disk        |
 |                       |  |                       |  |                       |
 | 4 sd[b-e] disks added |  | 4 sd[b-e] disks added |  | 4 sd[b-e] disks added |
 +-----------+-----------+  +-----------+-----------+  +-----------+-----------+
             |----------------------------|----------------------------|
             |192.168.1.14                |192.168.1.15                 |192.168.1.16
 +-----------+-----------+  +-----------+-----------+  +-----------+-----------+
 | [srv4.1000y.cloud]    |  | [srv5.1000y.cloud]    |  | [srv6.1000y.cloud]    |
 |    Object Storage     +--+    Object Storage     +--+    Object Storage     |
 |                       |  |                       |  |                       |
 | 1 sdb disk added      |  | 1 sdb disk added      |  | 1 sdb disk added      |
 +-----------------------+  +-----------------------+  +-----------------------+

1.2 Configure the Monitor Daemon and Manager Daemon
1) On the management node (srv1), generate an SSH key and copy it to all nodes (including the management node itself)
[root@srv1 ~]# ssh-keygen -q -N ''
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@srv1 ~]# vim ~/.ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host srv4
    Hostname srv4.1000y.cloud
    User root
Host srv5
    Hostname srv5.1000y.cloud
    User root
Host srv6
    Hostname srv6.1000y.cloud
    User root
[root@srv1 ~]# chmod 600 ~/.ssh/config
[root@srv1 ~]# ssh-copy-id srv1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'srv1.1000y.cloud (192.168.1.11)' can't be established.
ECDSA key fingerprint is SHA256:uoSm+T/hW98A1OoH1Mxb4AJS8FPylojFSEhI2J4la+Q.
ECDSA key fingerprint is MD5:98:0c:c1:97:25:ca:bd:ae:a0:6d:98:6b:15:b8:d9:2e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@srv1.1000y.cloud's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'srv1.1000y.cloud'" and check to make sure that only the key(s) you wanted were added.
[root@srv1 ~]# ssh-copy-id srv2
[root@srv1 ~]# ssh-copy-id srv3
[root@srv1 ~]# ssh-copy-id srv4
[root@srv1 ~]# ssh-copy-id srv5
[root@srv1 ~]# ssh-copy-id srv6
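Before moving on, it is worth confirming that passwordless login works for every node. A minimal check (not part of the original procedure, just a sketch using the aliases defined in ~/.ssh/config above):
[root@srv1 ~]# for allnode in srv1 srv2 srv3 srv4 srv5 srv6
do
    ssh $allnode hostname
done
# each iteration should print the node's hostname without prompting for a password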
2) Install Ceph on all nodes (including the 3 management nodes; the EPEL repository must be enabled)
[root@srv1 ~]# for allnode in srv1 srv2 srv3 srv4 srv5 srv6
do
    ssh $allnode "yum install centos-release-ceph-nautilus -y; yum install ceph python-enum34 -y"
done

3) Configure the Monitor and Manager Daemons on the management node
3.1) Generate a UUID
[root@srv1 ~]# uuidgen
019823c5-518e-4d7e-862c-46f828ec6275
3.2) Create a configuration file named after the cluster (this cluster is named "ceph")
[root@srv1 ~]# vim /etc/ceph/ceph.conf
# global settings
[global]
# cluster network address
cluster network = 192.168.1.0/24
# public network address
public network = 192.168.1.0/24
# the cluster UUID (fsid) generated above
fsid = 019823c5-518e-4d7e-862c-46f828ec6275
# IP addresses of the Monitor Daemons
mon host = 192.168.1.11, 192.168.1.12, 192.168.1.13
# hostnames of the Monitor Daemons
mon initial members = srv1, srv2, srv3
# CRUSH rule used when creating OSD pools (it decides which OSDs store which objects)
# -1 means: pick the rule with the lowest numeric ID and use it
osd pool default crush rule = -1

# Monitor settings: allow pools to be deleted
[mon]
mon allow pool delete = true
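As a quick sanity check before generating the keyrings, confirm that the fsid written to ceph.conf matches the UUID produced in step 3.1; a minimal sketch:
[root@srv1 ~]# grep "^fsid" /etc/ceph/ceph.conf
fsid = 019823c5-518e-4d7e-862c-46f828ec6275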
3.3) Generate the cluster monitor keyring
[root@srv1 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring

3.4) Generate the cluster admin keyring
[root@srv1 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
    --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring

3.5) Generate the bootstrap-osd keyring
[root@srv1 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
    --gen-key -n client.bootstrap-osd \
    --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring

3.6) Import the keys into the monitor keyring
[root@srv1 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring

[root@srv1 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
################################################## Note ##################################################

1. If a key was set up incorrectly, list the key names with the following command and then delete the offending entry
[root@srv1 ~]# ceph auth list
installed auth entries:

mds.srv1
        key: AQCcHRVltQu5IRAANkAz0Ac10uRiKHLUERJLYQ==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
......
......
......

[root@srv1 ~]# ceph auth del mds.srv1
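To inspect a single entry before deleting it, ceph auth get prints just that key; a small sketch (mds.srv1 is simply the example name taken from the listing above):
[root@srv1 ~]# ceph auth get mds.srv1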
################################################## End of note ##################################################
3.7) Generate the Monitor map
[root@srv1 ~]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@srv1 ~]# monmaptool --create --add srv1 192.168.1.11 --add srv2 192.168.1.12 --add srv3 192.168.1.13 --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to 019823c5-518e-4d7e-862c-46f828ec6275
monmaptool: writing epoch 0 to /etc/ceph/monmap (3 monitors)

# Show the resulting monitor map
[root@srv1 ~]# monmaptool --print /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
epoch 0
fsid 019823c5-518e-4d7e-862c-46f828ec6275
last_changed 2023-07-14 15:56:04.105901
created 2023-07-14 15:56:04.105901
min_mon_release 0 (unknown)
0: v1:192.168.1.11:6789/0 mon.srv1
1: v1:192.168.1.12:6789/0 mon.srv2
2: v1:192.168.1.13:6789/0 mon.srv3
################################################## Note ##################################################
# If you plan to add more Mon/Mgr nodes later, use the following command:
[root@srv1 ~]# monmaptool --add srv10 192.168.1.20 --add srv11 192.168.1.21 --fsid $FSID /etc/ceph/monmap

# If a host was added to the monmap by mistake, it can be removed again
[root@srv1 ~]# monmaptool --rm srv10 --rm srv11 --fsid $FSID /etc/ceph/monmap
################################################## End of note ##################################################
3.8) Create a directory for each Monitor Daemon (directory name format: <cluster name>-<hostname>)
[root@srv1 ~]# mkdir /var/lib/ceph/mon/ceph-srv1
[root@srv2 ~]# mkdir /var/lib/ceph/mon/ceph-srv2
[root@srv3 ~]# mkdir /var/lib/ceph/mon/ceph-srv3

3.9) Populate the Mon Daemon with the keyring and the monitor map
[root@srv1 ~]# ceph-mon --cluster ceph --mkfs -i srv1 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[root@srv1 ~]# chown ceph. /etc/ceph/ceph.*
[root@srv1 ~]# chown -R ceph. /var/lib/ceph/mon/ceph-srv1 /var/lib/ceph/bootstrap-osd
[root@srv1 ~]# for cephnode in srv2 srv3
do
    scp /etc/ceph/ceph.conf /etc/ceph/ceph.mon.keyring /etc/ceph/monmap /etc/ceph/ceph.client.admin.keyring $cephnode:/etc/ceph/
    ssh $cephnode "ceph-mon --cluster ceph --mkfs -i $cephnode --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring"
    ssh $cephnode "chown -R ceph. /etc/ceph /var/lib/ceph/mon/ceph-$cephnode /var/lib/ceph/bootstrap-osd"
done
3.10) Enable and start the ceph-mon service
[root@srv1 ~]# systemctl enable --now ceph-mon@srv1
[root@srv2 ~]# systemctl enable --now ceph-mon@srv2
[root@srv3 ~]# systemctl enable --now ceph-mon@srv3
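At this point the three monitors should have formed a quorum. A quick check (sketch), run from any mon node that holds the admin keyring:
[root@srv1 ~]# ceph mon stat
# expected: 3 mons at srv1, srv2 and srv3, all in quorum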
3.11) Additional configuration
# On all mon nodes, enable the Messenger v2 protocol
[root@srv1 ~]# ceph mon enable-msgr2
[root@srv2 ~]# ceph mon enable-msgr2
[root@srv3 ~]# ceph mon enable-msgr2

# On all mon nodes, enable the PG autoscaler module
[root@srv1 ~]# ceph mgr module enable pg_autoscaler --force
[root@srv2 ~]# ceph mgr module enable pg_autoscaler --force
[root@srv3 ~]# ceph mgr module enable pg_autoscaler --force

3.12) Configure and start the Manager Daemon
# Create a directory for each Manager Daemon (format: <cluster name>-<hostname>)
[root@srv1 ~]# mkdir /var/lib/ceph/mgr/ceph-srv1
[root@srv2 ~]# mkdir /var/lib/ceph/mgr/ceph-srv2
[root@srv3 ~]# mkdir /var/lib/ceph/mgr/ceph-srv3

# Create the authentication key
[root@srv1 ~]# ceph auth get-or-create mgr.srv1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.srv1]
        key = AQCHA7FkB2RYIhAAnghox+n3O1eYZ8acoCcfyA==

[root@srv1 ~]# ceph auth get-or-create mgr.srv1 > /etc/ceph/ceph.mgr.admin.keyring
[root@srv1 ~]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-srv1/keyring
[root@srv1 ~]# chown ceph. /etc/ceph/ceph.mgr.admin.keyring
[root@srv1 ~]# chown -R ceph. /var/lib/ceph/mgr/ceph-srv1
[root@srv1 ~]# systemctl enable --now ceph-mgr@srv1
# Set up the remaining Mgr nodes as standbys
[root@srv1 ~]# for cephnode in srv2 srv3
do
    ssh $cephnode "ceph auth get-or-create mgr.$cephnode mon 'allow profile mgr' osd 'allow *' mds 'allow *'"
    ssh $cephnode "ceph auth get-or-create mgr.$cephnode > /etc/ceph/ceph.mgr.admin.keyring"
    ssh $cephnode "mkdir /var/lib/ceph/mgr/ceph-$cephnode"
    ssh $cephnode "cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-$cephnode/keyring"
    ssh $cephnode "chown ceph. /etc/ceph/ceph.mgr.admin.keyring"
    ssh $cephnode "chown -R ceph. /var/lib/ceph/mgr/ceph-$cephnode"
    ssh $cephnode "systemctl enable --now ceph-mgr@$cephnode"
done

3.13) Firewalld configuration
[root@srv1 ~]# firewall-cmd --add-service=ceph-mon --permanent
success
[root@srv1 ~]# firewall-cmd --reload
success
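The commands above only open the ceph-mon service on srv1; the same rule is also needed on the other two mon nodes. A minimal sketch, run from srv1 over the SSH aliases set up earlier:
[root@srv1 ~]# for monnode in srv2 srv3
do
    ssh $monnode "firewall-cmd --add-service=ceph-mon --permanent; firewall-cmd --reload"
done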
3.14) Verify the Ceph cluster and confirm that the Mon and Manager Daemons are running
# Since no OSDs have been added yet, a HEALTH_WARN status is expected
[root@srv1 ~]# ceph -s
  cluster:
    id:     019823c5-518e-4d7e-862c-46f828ec6275
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum srv1,srv2,srv3 (age 6m)
    mgr: srv1(active, since 2m), standbys: srv2, srv3
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
1.3 Configure and Add OSDs

1) Firewall configuration
[root@srv1 ~]# for ALLNODE in srv4 srv5 srv6
do
    ssh $ALLNODE "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
done

2) Configure an OSD on each node
[root@srv1 ~]# for ALLNODE in srv4 srv5 srv6
do
    # If this is not an all-in-one layout (i.e. MGR+MON+OSD are not on the same hosts), this if-block can be omitted
    if [ ! ${ALLNODE} = "srv1" ]
    then
        scp /etc/ceph/ceph.conf ${ALLNODE}:/etc/ceph/ceph.conf
        scp /etc/ceph/ceph.client.admin.keyring ${ALLNODE}:/etc/ceph
        scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${ALLNODE}:/var/lib/ceph/bootstrap-osd
    fi
    ssh $ALLNODE \
        "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
        parted --script /dev/sdb 'mklabel gpt'; \
        parted --script /dev/sdb 'mkpart primary 0% 100%'; \
        ceph-volume lvm create --data /dev/sdb1"
done
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3716b4ad-ccde-4609-a30d-a5b1c46c01f9
Running command: /usr/sbin/vgcreate --force --yes ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3 /dev/sdb1
 stdout: Physical volume "/dev/sdb1" successfully created.
 stdout: Volume group "ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3" successfully created
Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-3716b4ad-ccde-4609-a30d-a5b1c46c01f9 ceph-ae54b17d-99e4-423e-a4ec-b70c6222dfe3
......
......
......
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/sdb1
3) Verify the Ceph cluster
[root@srv1 ~]# ceph -s
  cluster:
    id:     019823c5-518e-4d7e-862c-46f828ec6275
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum srv1,srv2,srv3 (age 17m)
    mgr: srv1(active, since 13m), standbys: srv2, srv3
    osd: 3 osds: 3 up (since 3m), 3 in (since 3m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 93 GiB / 96 GiB avail
    pgs:
################################################## Troubleshooting ##################################################

# Possible error 1
[root@srv1 ~]# ceph health
HEALTH_WARN mon is allowing insecure global_id reclaim; Module 'restful' has failed dependency: No module named 'pecan'

# Fix
[root@srv1 ~]# pip3 install pecan werkzeug cherrypy ; reboot

# Possible error 2
[root@srv1 ~]# ceph health
HEALTH_WARN mon is allowing insecure global_id reclaim

# Fix
[root@srv1 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false

################################################## End of troubleshooting ##################################################
4) Verify the OSD tree
[root@srv1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.09357 root default
-3       0.03119     host srv4
 0   hdd 0.03119         osd.0     up  1.00000 1.00000
-5       0.03119     host srv5
 1   hdd 0.03119         osd.1     up  1.00000 1.00000
-7       0.03119     host srv6
 2   hdd 0.03119         osd.2     up  1.00000 1.00000

[root@srv1 ~]# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    hdd       96 GiB     93 GiB     5.4 MiB      3.0 GiB          3.13
    TOTAL     96 GiB     93 GiB     5.4 MiB      3.0 GiB          3.13

POOLS:
    POOL     ID     PGS     STORED     OBJECTS     USED     %USED     MAX AVAIL

[root@srv1 ~]# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   RAW USE DATA    OMAP META  AVAIL  %USE VAR  PGS STATUS
 0   hdd 0.03119  1.00000 32 GiB 1.0 GiB 1.8 MiB  0 B 1 GiB 31 GiB 3.13 1.00   0     up
 1   hdd 0.03119  1.00000 32 GiB 1.0 GiB 1.8 MiB  0 B 1 GiB 31 GiB 3.13 1.00   0     up
 2   hdd 0.03119  1.00000 32 GiB 1.0 GiB 1.8 MiB  0 B 1 GiB 31 GiB 3.13 1.00   0     up
                    TOTAL 96 GiB 3.0 GiB 5.2 MiB  0 B 3 GiB 93 GiB 3.13
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
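For a combined view of the CRUSH tree and per-OSD utilization, the same data can also be displayed with the tree variant of the command; an optional check:
[root@srv1 ~]# ceph osd df tree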
8.1 Add New OSDs

1) On the admin node (srv1), add the new OSD nodes (srv8, srv10, srv11)
[root@srv1 ~]# vim .ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host srv4
    Hostname srv4.1000y.cloud
    User root
Host srv5
    Hostname srv5.1000y.cloud
    User root
Host srv6
    Hostname srv6.1000y.cloud
    User root
Host srv7
    Hostname srv7.1000y.cloud
    User root
Host srv8
    Hostname srv8.1000y.cloud
    User root
Host srv9
    Hostname srv9.1000y.cloud
    User root
Host srv10
    Hostname srv10.1000y.cloud
    User root
Host srv11
    Hostname srv11.1000y.cloud
    User root
[root@srv1 ~]# ssh-copy-id srv8
[root@srv1 ~]# ssh-copy-id srv10
[root@srv1 ~]# ssh-copy-id srv11

# Configure firewall rules on srv8, srv10 and srv11
[root@srv1 ~]# ssh srv8 "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
[root@srv1 ~]# ssh srv10 "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
[root@srv1 ~]# ssh srv11 "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"

# Install the package repositories and Ceph
[root@srv1 ~]# for node in srv8 srv10 srv11
do
    ssh $node "yum install centos-release-ceph-nautilus epel-release -y; yum install ceph python-enum34 -y"
done

[root@srv1 ~]# for node in srv8 srv10 srv11
do
    scp /etc/ceph/ceph.conf $node:/etc/ceph/ceph.conf
    scp /etc/ceph/ceph.client.admin.keyring $node:/etc/ceph
    scp /var/lib/ceph/bootstrap-osd/ceph.keyring $node:/var/lib/ceph/bootstrap-osd
done

# Configure the OSDs
[root@srv1 ~]# for node in srv8 srv10 srv11
do
    ssh $node "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
        parted --script /dev/sdb 'mklabel gpt'; \
        parted --script /dev/sdb 'mkpart primary 0% 100%'; \
        ceph-volume lvm create --data /dev/sdb1"
done
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
......
......
......
--> ceph-volume lvm activate successful for osd ID: 5
--> ceph-volume lvm create successful for: /dev/sdb1
################################################## Note ##################################################

# Adding OSDs in an all-in-one layout
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    if [ ! ${ALLNODE} = "srv1" ]
    then
        scp /etc/ceph/ceph.conf ${ALLNODE}:/etc/ceph/ceph.conf
        scp /etc/ceph/ceph.client.admin.keyring ${ALLNODE}:/etc/ceph
        scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${ALLNODE}:/var/lib/ceph/bootstrap-osd
    fi
    ssh $ALLNODE \
        "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
        parted --script /dev/sdb 'mklabel gpt'; \
        parted --script /dev/sdc 'mklabel gpt'; \
        parted --script /dev/sdb 'mkpart primary 0% 100%'; \
        parted --script /dev/sdc 'mkpart primary 0% 100%'; \
        ceph-volume lvm create --data /dev/sdb1; \
        ceph-volume lvm create --data /dev/sdc1"
done

################################################## End of note ##################################################
# Verify
[root@srv1 ~]# ceph -s
  cluster:
    id:     019823c5-518e-4d7e-862c-46f828ec6275
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum srv1,srv2,srv3 (age 3h)
    mgr: srv1(active, since 2h), standbys: srv3, srv2
    mds: cephfs:1 {0=srv1=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 36s), 6 in (since 36s)
    rgw: 1 daemon active (srv7)

  task status:
    scrub status:
        mds.srv1: idle

  data:
    pools:   7 pools, 224 pgs
    objects: 213 objects, 6.3 KiB
    usage:   9.1 GiB used, 279 GiB / 288 GiB avail
    pgs:     224 active+clean

[root@srv1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
 -1       0.28070 root default
-11       0.06238     host srv10
  4   hdd 0.03119         osd.4      up  1.00000 1.00000
-13       0.06238     host srv11
  5   hdd 0.03119         osd.5      up  1.00000 1.00000
 -3       0.03119     host srv4
  0   hdd 0.03119         osd.0      up  1.00000 1.00000
 -5       0.03119     host srv5
  1   hdd 0.03119         osd.1      up  1.00000 1.00000
 -7       0.03119     host srv6
  2   hdd 0.03119         osd.2      up  1.00000 1.00000
 -9       0.06238     host srv8
  3   hdd 0.03119         osd.3      up  1.00000 1.00000
8.2 Configure an SSD Cache Tier
1) Check the current device class of every disk
[root@srv1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
 -1       0.28070 root default
-11       0.06238     host srv10
  4   hdd 0.03119         osd.4      up  1.00000 1.00000
-13       0.06238     host srv11
  5   hdd 0.03119         osd.5      up  1.00000 1.00000
 -3       0.03119     host srv4
  0   hdd 0.03119         osd.0      up  1.00000 1.00000
 -5       0.03119     host srv5
  1   hdd 0.03119         osd.1      up  1.00000 1.00000
 -7       0.03119     host srv6
  2   hdd 0.03119         osd.2      up  1.00000 1.00000
 -9       0.06238     host srv8
  3   hdd 0.03119         osd.3      up  1.00000 1.00000
[root@srv1 ~]# ceph osd crush class ls
[
    "hdd"
]
2) Remove the device class of the selected disks so it can be changed to ssd (here osd.3, osd.4 and osd.5; valid classes are hdd, ssd and nvme)
[root@srv1 ~]# for i in 3 4 5; do ceph osd crush rm-device-class osd.$i; done
done removing class of osd(s): 3
done removing class of osd(s): 4
done removing class of osd(s): 5
[root@srv1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
 -1       0.28070 root default
-11       0.06238     host srv10
  4       0.03119         osd.4      up  1.00000 1.00000
-13       0.06238     host srv11
  5       0.03119         osd.5      up  1.00000 1.00000
 -3       0.03119     host srv4
  0   hdd 0.03119         osd.0      up  1.00000 1.00000
 -5       0.03119     host srv5
  1   hdd 0.03119         osd.1      up  1.00000 1.00000
 -7       0.03119     host srv6
  2   hdd 0.03119         osd.2      up  1.00000 1.00000
 -9       0.06238     host srv8
  3       0.03119         osd.3      up  1.00000 1.00000
# Set osd.3, osd.4 and osd.5 to the ssd class
[root@srv1 ~]# for i in 3 4 5; do ceph osd crush set-device-class ssd osd.$i; done
set osd(s) 3 to class 'ssd'
set osd(s) 4 to class 'ssd'
set osd(s) 5 to class 'ssd'

[root@srv1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
 -1       0.28070 root default
-11       0.06238     host srv10
  4   ssd 0.03119         osd.4      up  1.00000 1.00000
-13       0.06238     host srv11
  5   ssd 0.03119         osd.5      up  1.00000 1.00000
 -3       0.03119     host srv4
  0   hdd 0.03119         osd.0      up  1.00000 1.00000
 -5       0.03119     host srv5
  1   hdd 0.03119         osd.1      up  1.00000 1.00000
 -7       0.03119     host srv6
  2   hdd 0.03119         osd.2      up  1.00000 1.00000
 -9       0.06238     host srv8
  3   ssd 0.03119         osd.3      up  1.00000 1.00000

[root@srv1 ~]# ceph osd crush class ls
[
    "hdd",
    "ssd"
]
3) Create a CRUSH rule based on the ssd class
[root@srv1 ~]# ceph osd crush rule create-replicated ssd_rule default host ssd

[root@srv1 ~]# ceph osd crush rule list
replicated_rule
ssd_rule
4) Create the data and cache pools
(1) Create a data pool: qyydata
[root@srv1 ~]# ceph osd pool create qyydata 16
pool 'qyydata' created

(2) Create a cache pool (qyycache) that uses the ssd_rule
[root@srv1 ~]# ceph osd pool create qyycache 16 ssd_rule
pool 'qyycache' created

[root@srv1 ~]# ceph osd pool get qyycache crush_rule
crush_rule: ssd_rule

[root@srv1 ~]# ceph osd lspools
4 .rgw.root
5 default.rgw.control
6 default.rgw.meta
7 default.rgw.log
8 default.rgw.buckets.index
9 cephfs_data
10 cephfs_metadata
11 qyydata
12 qyycache
5) Set up the cache tier
# Place the qyycache pool in front of the qyydata pool
[root@srv1 ~]# ceph osd tier add qyydata qyycache
pool 'qyycache' is now (or already was) a tier of 'qyydata'

# Set the cache mode to writeback
[root@srv1 ~]# ceph osd tier cache-mode qyycache writeback
set cache-mode for pool 'qyycache' to writeback

# Direct all client requests from the base pool to the cache pool
[root@srv1 ~]# ceph osd tier set-overlay qyydata qyycache
overlay for 'qyydata' is now (or already was) 'qyycache'

# If you want a read-only cache pool instead, proceed as follows
# Place the cache pool in front of the data pool
[root@srv1 ~]# ceph osd tier add qyydata qyycache
# Set the cache mode to readonly
[root@srv1 ~]# ceph osd tier cache-mode qyycache readonly --yes-i-really-mean-it
6) View the details of the qyydata and qyycache pools
[root@srv1 ~]# ceph osd dump | egrep 'qyydata|qyycache'
pool 11 'qyydata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 144 lfor 144/144/144 flags hashpspool tiers 12 read_tier 12 write_tier 12 stripe_width 0
pool 12 'qyycache' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 144 lfor 144/144/144 flags hashpspool,incomplete_clones tier_of 11 cache_mode writeback stripe_width 0

7) Basic cache tuning
# Set the cache tier's hit_set_type to use a Bloom filter
[root@srv1 ~]# ceph osd pool set qyycache hit_set_type bloom
set pool 12 hit_set_type to bloom
# By default the cache pool decides whether data is "hot" based on when it was last modified. You can also tune the
# hit set count (hit_set_count), the hit set period (hit_set_period) and the maximum amount of cached data
# (target_max_bytes). hit_set_period defines the time interval each HitSet covers and hit_set_count defines how many
# such HitSets are retained; by keeping a window of access history, Ceph can tell whether a client accessed an object
# once or repeatedly within that period (retention versus "temperature").
[root@srv1 ~]# ceph osd pool set qyycache hit_set_count 1
set pool 12 hit_set_count to 1
[root@srv1 ~]# ceph osd pool set qyycache hit_set_period 3600
set pool 12 hit_set_period to 3600        # 1 hour

[root@srv1 ~]# ceph osd pool set qyycache target_max_bytes 1073741824
set pool 12 target_max_bytes to 1073741824        # 1 GB

# Set a maximum object count (or data size) for the cache pool to force flush and eviction operations
[root@srv1 ~]# ceph osd pool set qyycache target_max_objects 10000
set pool 12 target_max_objects to 10000
# Set min_read_recency_for_promote and min_write_recency_for_promote

The cache tiering agent performs two main operations:
• Flushing: writes objects that have been modified (dirty objects) to the backing slow storage, while keeping them in the cache pool.
• Evicting: removes objects that have not been modified (clean objects) from the cache pool.

Flushing and eviction are driven mainly by the capacity of the cache pool itself. When the amount of modified (dirty) data in the cache pool reaches a given threshold (a percentage of its capacity), the agent starts flushing that data to the backing slow storage; with the dirty ratio set to 40%, flushing is triggered once 40% of the data in the cache pool has been modified.
[root@srv1 ~]# ceph osd pool set qyycache min_read_recency_for_promote 1
set pool 12 min_read_recency_for_promote to 1

[root@srv1 ~]# ceph osd pool set qyycache min_write_recency_for_promote 1
set pool 12 min_write_recency_for_promote to 1
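To read back the values that were just applied, ceph osd pool get can query each parameter individually; a minimal sketch:
[root@srv1 ~]# for opt in hit_set_type hit_set_count hit_set_period target_max_bytes target_max_objects min_read_recency_for_promote min_write_recency_for_promote
do
    ceph osd pool get qyycache $opt
done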
[root@srv1 ~]# ceph -s
  cluster:
    id:     019823c5-518e-4d7e-862c-46f828ec6275
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum srv1,srv2,srv3 (age 4h)
    mgr: srv1(active, since 2h), standbys: srv3, srv2
    mds: cephfs:1 {0=srv1=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 3m), 6 in (since 4m)
    rgw: 1 daemon active (srv7)

  task status:
    scrub status:
        mds.srv1: idle
  data:
    pools:   9 pools, 256 pgs
    objects: 213 objects, 6.3 KiB
    usage:   6.1 GiB used, 186 GiB / 192 GiB avail
    pgs:     256 active+clean

8.3 Client Configuration
1) Create an RBD image from the client (srv9)
[root@srv9 ~]# ceph osd lspools
4 .rgw.root
5 default.rgw.control
6 default.rgw.meta
7 default.rgw.log
8 default.rgw.buckets.index
9 cephfs_data
10 cephfs_metadata
11 qyydata
12 qyycache
[root@srv9 ~]# rbd pool init qyydata
# Create an image named rbd1
[root@srv9 ~]# rbd -p qyydata create rbd1 --size 2G --image-feature layering

[root@srv9 ~]# rbd -p qyydata ls -l
NAME SIZE  PARENT FMT PROT LOCK
rbd1 2 GiB          2

2) Map the image
[root@srv9 ~]# rbd map -p qyydata rbd1
/dev/rbd0
[root@srv9 ~]# rbd showmapped
id pool    namespace image snap device
0  qyydata           rbd1  -    /dev/rbd0

3) Use the image
[root@srv9 ~]# mkfs.ext4 /dev/rbd0

[root@srv9 ~]# mount /dev/rbd0 /mnt
[root@srv9 ~]# df -Th /mnt
Filesystem     Type Size  Used Avail Use% Mounted on
/dev/rbd0      ext4 2.0G  6.0M  1.8G   1% /mnt
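The rbd map above does not survive a reboot. If the mapping should come back automatically, the rbdmap unit shipped with ceph-common can be used; a hedged sketch (assuming the default /etc/ceph/rbdmap format of "pool/image" followed by options):
[root@srv9 ~]# echo "qyydata/rbd1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
[root@srv9 ~]# systemctl enable rbdmap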
8.4 Other Operations on the SSD Cache Pool

1) Remove a writeback cache pool
[root@srv1 ~]# ceph osd tier cache-mode qyycache forward --yes-i-really-mean-it

2) List the cache pool to make sure all objects have been flushed (this may take a while)
[root@srv1 ~]# rados -p qyycache ls

3) If objects remain in the cache pool, they can also be flushed manually
[root@srv1 ~]# rados -p qyycache cache-flush-evict-all

4) Remove the overlay so that clients no longer send traffic to the cache
[root@srv1 ~]# ceph osd tier remove-overlay qyydata

5) Detach the cache pool from the base pool
[root@srv1 ~]# ceph osd tier remove qyydata qyycache
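To confirm the tier really is gone, the ceph osd dump filter from step 6 of section 8.2 can be re-run; after removal the qyydata entry should no longer carry the tiers/read_tier/write_tier fields, and qyycache should no longer show tier_of:
[root@srv1 ~]# ceph osd dump | egrep 'qyydata|qyycache'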
6) Cache pool parameter reference
(1) Hit set filter; the default is a Bloom filter
[root@srv1 ~]# ceph osd pool set qyycache hit_set_type bloom
[root@srv1 ~]# ceph osd pool set qyycache hit_set_count 1
# Set the false-positive rate of the Bloom filter
[root@srv1 ~]# ceph osd pool set qyycache hit_set_fpp 0.15

# Set the cache validity period, in seconds
[root@srv1 ~]# ceph osd pool set qyycache hit_set_period 3600

(2) Set how many bytes or objects the cache pool may hold before the cache tiering agent starts flushing objects to the backing pool and evicting them
# Start flushing and evicting once the cache pool holds 1 GB of data
[root@srv1 ~]# ceph osd pool set qyycache target_max_bytes 1073741824

# Start flushing and evicting once the cache pool holds 10,000 objects
[root@srv1 ~]# ceph osd pool set qyycache target_max_objects 10000

(3) Define the minimum age (in seconds) before the cache tier flushes an object to the storage tier or evicts it
[root@srv1 ~]# ceph osd pool set qyycache cache_min_flush_age 600
[root@srv1 ~]# ceph osd pool set qyycache cache_min_evict_age 600

(4) Define the fraction of dirty objects (modified objects) in the cache pool at which the cache tiering agent starts flushing them to the storage tier
[root@srv1 ~]# ceph osd pool set qyycache cache_target_dirty_ratio 0.4

(5) When the cache pool reaches the specified utilization, the cache tiering agent evicts objects to maintain free capacity; unmodified (clean) objects are dropped from the cache at this point
[root@srv1 ~]# ceph osd pool set qyycache cache_target_full_ratio 0.8
(6) Set how many HitSets are checked when handling read/write operations to decide whether to promote an object asynchronously (i.e. move it from cold to hot data in the cache pool).
The value should be between 0 and hit_set_count. With 0, every object is promoted immediately after being read or written. With 1, only the current HitSet is checked: the object is promoted if it appears there, otherwise it is not. With any other value, that many historical HitSets are examined, and the object is promoted if it appears in any of the min_read_recency_for_promote most recent HitSets.
[root@srv1 ~]# ceph osd pool set qyycache min_read_recency_for_promote 1
[root@srv1 ~]# ceph osd pool set qyycache min_write_recency_for_promote 1

8.5 Using an SSD as the ceph-osd Journal Disk
1) On the admin node (srv1), add the new OSD nodes (srv12, srv13, srv14)
[root@srv1 ~]# vim .ssh/config
Host srv1
    Hostname srv1.1000y.cloud
    User root
Host srv2
    Hostname srv2.1000y.cloud
    User root
Host srv3
    Hostname srv3.1000y.cloud
    User root
Host srv4
    Hostname srv4.1000y.cloud
    User root
Host srv5
    Hostname srv5.1000y.cloud
    User root
Host srv6
    Hostname srv6.1000y.cloud
    User root
Host srv7
    Hostname srv7.1000y.cloud
    User root
Host srv8
    Hostname srv8.1000y.cloud
    User root
Host srv9
    Hostname srv9.1000y.cloud
    User root
Host srv10
    Hostname srv10.1000y.cloud
    User root
Host srv11
    Hostname srv11.1000y.cloud
    User root
Host srv12
    Hostname srv12.1000y.cloud
    User root
Host srv13
    Hostname srv13.1000y.cloud
    User root
Host srv14
    Hostname srv14.1000y.cloud
    User root
[root@srv1 ~]# ssh-copy-id srv12
[root@srv1 ~]# ssh-copy-id srv13
[root@srv1 ~]# ssh-copy-id srv14

# Configure firewall rules on srv12, srv13 and srv14
[root@srv1 ~]# ssh srv12 "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
[root@srv1 ~]# ssh srv13 "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"
[root@srv1 ~]# ssh srv14 "firewall-cmd --add-service=ceph --permanent; firewall-cmd --reload"

# Install the package repositories and Ceph
[root@srv1 ~]# for node in srv12 srv13 srv14
do
    ssh $node "yum install centos-release-ceph-nautilus epel-release -y; yum install ceph python-enum34 -y"
done

[root@srv1 ~]# for node in srv12 srv13 srv14
do
    scp /etc/ceph/ceph.conf $node:/etc/ceph/ceph.conf
    scp /etc/ceph/ceph.client.admin.keyring $node:/etc/ceph
    scp /var/lib/ceph/bootstrap-osd/ceph.keyring $node:/var/lib/ceph/bootstrap-osd
done

# Fix the ownership of the copied files
[root@srv1 ~]# for node in srv12 srv13 srv14
do
    ssh $node "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*"
done

2) Create a volume group and a journal LV on all of the new OSD nodes
[root@srv1 ~]# for node in srv12 srv13 srv14
do
    ssh $node "pvcreate /dev/sdb; \
        vgcreate data /dev/sdb; \
        lvcreate -l 100%FREE --name log data"
done

3) Create the OSDs with filestore, using the LV as the journal
[root@srv1 ~]# for ALLNODE in srv12 srv13 srv14
do
    ssh $ALLNODE "ceph-volume lvm create --filestore --data /dev/sdc --journal data/log"
done
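To verify where each new OSD put its data and journal, ceph-volume can list the LVM devices it manages; a quick check on one of the new nodes (a sketch):
[root@srv1 ~]# ssh srv12 "ceph-volume lvm list"
# the [journal] section of each listed OSD should point at the data/log logical volume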
[root@srv1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
 -1       0.28070 root default
-11       0.03119     host srv10
  4   ssd 0.03119         osd.4      up  1.00000 1.00000
-13       0.03119     host srv11
  5   ssd 0.03119         osd.5      up  1.00000 1.00000
-22       0.03119     host srv12
  6   hdd 0.03119         osd.6      up  1.00000 1.00000
-25       0.03119     host srv13
  7   hdd 0.03119         osd.7      up  1.00000 1.00000
-28       0.03119     host srv14
  8   hdd 0.03119         osd.8      up  1.00000 1.00000
 -3       0.03119     host srv4
  0   hdd 0.03119         osd.0      up  1.00000 1.00000
 -5       0.03119     host srv5
  1   hdd 0.03119         osd.1      up  1.00000 1.00000
 -7       0.03119     host srv6
  2   hdd 0.03119         osd.2      up  1.00000 1.00000
 -9       0.03119     host srv8
  3   ssd 0.03119         osd.3      up  1.00000 1.00000

[root@srv1 ~]# ceph -s
  cluster:
    id:     019823c5-518e-4d7e-862c-46f828ec6275
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum srv1,srv2,srv3 (age 4h)
    mgr: srv1(active, since 3h), standbys: srv3, srv2
    mds: cephfs:1 {0=srv1=up:active} 2 up:standby
    osd: 9 osds: 9 up (since 6m), 9 in (since 6m)
    rgw: 1 daemon active (srv7)

  task status:
    scrub status:
        mds.srv1: idle

  data:
    pools:   9 pools, 256 pgs
    objects: 262 objects, 135 MiB
    usage:   6.9 GiB used, 281 GiB / 288 GiB avail
    pgs:     256 active+clean
4) Use bluestore with separate DB and WAL devices
[root@srv1 ~]# for node in srv12 srv13 srv14
do
    ssh $node "pvcreate /dev/sdd; \
        vgcreate cache /dev/sdd; \
        lvcreate -l 50%FREE --name db-lv-0 cache; \
        lvcreate -l 50%FREE --name wal-lv-0 cache"
done

# Create the OSDs
[root@srv1 ~]# for ALLNODE in srv12 srv13 srv14
do
    ssh $ALLNODE "ceph-volume lvm create --bluestore --data /dev/sde --block.db cache/db-lv-0 --block.wal cache/wal-lv-0"
done
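To confirm that the new OSDs really use the separate DB and WAL devices, the OSD metadata can be inspected; a sketch (osd.9 below is just one of the OSDs created in this step, and the exact bluefs key names may vary between releases):
[root@srv1 ~]# ceph osd metadata 9 | grep bluefs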
[root@srv1 ~]# ceph -s
  cluster:
    id:     019823c5-518e-4d7e-862c-46f828ec6275
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum srv1,srv2,srv3 (age 4h)
    mgr: srv1(active, since 3h), standbys: srv3, srv2
    mds: cephfs:1 {0=srv1=up:active} 2 up:standby
    osd: 12 osds: 12 up (since 72s), 12 in (since 72s)
    rgw: 1 daemon active (srv7)

  task status:
    scrub status:
        mds.srv1: idle

  data:
    pools:   9 pools, 256 pgs
    objects: 262 objects, 135 MiB
    usage:   58 GiB used, 374 GiB / 432 GiB avail
    pgs:     256 active+clean

[root@srv1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
 -1       0.28070 root default
-11       0.03119     host srv10
  4   ssd 0.03119         osd.4      up  1.00000 1.00000
-13       0.03119     host srv11
  5   ssd 0.03119         osd.5      up  1.00000 1.00000
-22       0.03119     host srv12
  6   hdd 0.03119         osd.6      up  1.00000 1.00000
  9   hdd 0.04689         osd.9      up  1.00000 1.00000
-25       0.03119     host srv13
  7   hdd 0.03119         osd.7      up  1.00000 1.00000
 10   hdd 0.04689         osd.10     up  1.00000 1.00000
-28       0.03119     host srv14
  8   hdd 0.03119         osd.8      up  1.00000 1.00000
 11   hdd 0.04689         osd.11     up  1.00000 1.00000
 -3       0.03119     host srv4
  0   hdd 0.03119         osd.0      up  1.00000 1.00000
 -5       0.03119     host srv5
  1   hdd 0.03119         osd.1      up  1.00000 1.00000
 -7       0.03119     host srv6
  2   hdd 0.03119         osd.2      up  1.00000 1.00000
 -9       0.03119     host srv8
  3   ssd 0.03119         osd.3      up  1.00000 1.00000
5) Confirm whether an OSD uses filestore or bluestore
# Log in to the host that holds the OSD
[root@srv4 ~]# cat /var/lib/ceph/osd/ceph-0/type
bluestore

8.6 Create a Pool on Specific OSDs

1) Add OSDs
[root@srv1 ~]# for ALLNODE in srv1 srv2 srv3
do
    ssh $ALLNODE \
        "parted --script /dev/sdh 'mklabel gpt'; \
        parted --script /dev/sdh 'mkpart primary 0% 100%'; \
        ceph-volume lvm create --data /dev/sdh1"
done
[root@srv1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.51494 root default
-3       0.17165     host srv1
 0   hdd 0.03119         osd.0     up  1.00000 1.00000
 6   hdd 0.03119         osd.6     up  1.00000 1.00000
 9   hdd 0.04689         osd.9     up  1.00000 1.00000
12   hdd 0.03119         osd.12    up  1.00000 1.00000
 1   ssd 0.03119         osd.1     up  1.00000 1.00000
-5       0.17165     host srv2
 2   hdd 0.03119         osd.2     up  1.00000 1.00000
 7   hdd 0.03119         osd.7     up  1.00000 1.00000
10   hdd 0.04689         osd.10    up  1.00000 1.00000
13   hdd 0.03119         osd.13    up  1.00000 1.00000
 3   ssd 0.03119         osd.3     up  1.00000 1.00000
-7       0.17165     host srv3
 4   hdd 0.03119         osd.4     up  1.00000 1.00000
 8   hdd 0.03119         osd.8     up  1.00000 1.00000
11   hdd 0.04689         osd.11    up  1.00000 1.00000
14   hdd 0.03119         osd.14    up  1.00000 1.00000
 5   ssd 0.03119         osd.5     up  1.00000 1.00000
2) Assign a custom device class
[root@srv1 ~]# for i in 12 13 14; do ceph osd crush rm-device-class osd.$i; done
done removing class of osd(s): 12
done removing class of osd(s): 13
done removing class of osd(s): 14

[root@srv1 ~]# for i in 12 13 14; do ceph osd crush set-device-class rbd osd.$i; done
set osd(s) 12 to class 'rbd'
set osd(s) 13 to class 'rbd'
set osd(s) 14 to class 'rbd'

[root@srv1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.51494 root default
-3       0.17165     host srv1
 0   hdd 0.03119         osd.0     up  1.00000 1.00000
 6   hdd 0.03119         osd.6     up  1.00000 1.00000
 9   hdd 0.04689         osd.9     up  1.00000 1.00000
12   rbd 0.03119         osd.12    up  1.00000 1.00000
 1   ssd 0.03119         osd.1     up  1.00000 1.00000
-5       0.17165     host srv2
 2   hdd 0.03119         osd.2     up  1.00000 1.00000
 7   hdd 0.03119         osd.7     up  1.00000 1.00000
10   hdd 0.04689         osd.10    up  1.00000 1.00000
13   rbd 0.03119         osd.13    up  1.00000 1.00000
 3   ssd 0.03119         osd.3     up  1.00000 1.00000
-7       0.17165     host srv3
 4   hdd 0.03119         osd.4     up  1.00000 1.00000
 8   hdd 0.03119         osd.8     up  1.00000 1.00000
11   hdd 0.04689         osd.11    up  1.00000 1.00000
14   rbd 0.03119         osd.14    up  1.00000 1.00000
 5   ssd 0.03119         osd.5     up  1.00000 1.00000
3) Create a CRUSH rule for the rbd class
[root@srv1 ~]# ceph osd crush rule create-replicated rbd_rule default host rbd

[root@srv1 ~]# ceph osd crush rule list
replicated_rule
ssd_rule
rbd_rule
4) Create a pool that uses the rule
[root@srv1 ~]# ceph osd pool create qyy 16 rbd_rule
pool 'qyy' created
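A quick check that the new pool picked up the intended rule, using the same query shown earlier for qyycache:
[root@srv1 ~]# ceph osd pool get qyy crush_rule
# expected: crush_rule: rbd_rule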
[root@srv1 ~]# ceph osd lspools
1 device_health_metrics
2 qyydata
3 qyycache
4 qyy
5) Create an RBD image on srv4
root@srv4:~# ceph osd lspools
1 device_health_metrics
2 qyydata
3 qyycache
4 qyy
root@srv4:~# rbd pool init qyy
# Create an image named rbd2
root@srv4:~# rbd -p qyy create rbd2 --size 2G --image-feature layering
root@srv4:~# rbd -p qyy ls -l
NAME SIZE  PARENT FMT PROT LOCK
rbd2 2 GiB          2
6) Map the image
root@srv4:~# rbd map -p qyy rbd2
/dev/rbd1
root@srv4:~# rbd showmapped
id pool    namespace image snap device
0  qyydata           rbd1  -    /dev/rbd0
2  qyy               rbd2  -    /dev/rbd1
7) Use the image
root@srv4:~# mkfs.ext4 /dev/rbd1
root@srv4:~# mount /dev/rbd1 /mnt
root@srv4:~# df -Th /mnt
Filesystem     Type Size  Used Avail Use% Mounted on
/dev/rbd1      ext4 2.0G  6.0M  1.8G   1% /mnt
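When the image is no longer needed on this client, it can be released again; a short cleanup sketch:
root@srv4:~# umount /mnt
root@srv4:~# rbd unmap /dev/rbd1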