Ceph Nautilus Configuration Guide

Compiled, organized, and written by snow chuai --- 2020/01/26
Last updated --- 2021/12/25


1. Ceph Nautilus Configuration and Implementation
1.1 Topology
                 +--------------------+        +--------------------+ 
                 | [node1.1000cc.net] |        | [node5.1000cc.net] |
                 |    Ceph-Deploy     |        |        client      |
                 |   192.168.10.11    |        |   192.168.10.15    |     
                 +--------------------+        +--------------------+   
                          |--------------|-----------------|
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |192.168.10.12               |192.168.10.13               |192.168.10.14
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node2.1000cc.net]  |    |   [node3.1000cc.net]  |    |   [node4.1000cc.net]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
1.2 Preliminary Preparation
1) Add one disk of at least 5 GB to every OSD host; it appears as sdb on each host (as /dev/vdb on virtio-backed VMs, which is what the commands below use).
2) Enable the EPEL repository on all hosts (a quick check/setup sketch follows below).
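A minimal sketch of these two preparation items, assuming the new disk shows up as /dev/vdb and the hosts can reach the standard CentOS 7 repositories (run on every host as needed):
[root@node2 ~]# lsblk /dev/vdb                        # confirm the new, empty data disk is visible
[root@node2 ~]# yum install epel-release -y           # enable the EPEL repository
[root@node2 ~]# yum repolist enabled | grep -i epel   # verify the repository is active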
3) Create an administrative account on all nodes, grant it sudo privileges, and set up passwordless SSH login.
[root@node1 ~]# echo -e 'Defaults:snow !requiretty\nsnow ALL = (root) NOPASSWD:ALL' > /etc/sudoers.d/ceph

[root@node1 ~]# pscp.pssh -h host-list.txt /etc/sudoers.d/ceph /etc/sudoers.d/
[1] 16:25:39 [SUCCESS] root@node4.1000cc.net
[2] 16:25:39 [SUCCESS] root@node2.1000cc.net
[3] 16:25:39 [SUCCESS] root@node3.1000cc.net
[4] 16:25:39 [SUCCESS] root@node5.1000cc.net

[root@node1 ~]# pssh -h host-list.txt -i 'ls -l /etc/sudoers.d/ceph'
[1] 16:35:58 [SUCCESS] root@node4.1000cc.net
-rw-r--r-- 1 root root 57 Jan 26 16:35 /etc/sudoers.d/ceph
[2] 16:35:58 [SUCCESS] root@node3.1000cc.net
-rw-r--r-- 1 root root 57 Jan 26 16:35 /etc/sudoers.d/ceph
[3] 16:35:58 [SUCCESS] root@node2.1000cc.net
-rw-r--r-- 1 root root 57 Jan 26 16:35 /etc/sudoers.d/ceph
[4] 16:35:58 [SUCCESS] root@node5.1000cc.net
-rw-r--r-- 1 root root 57 Jan 26 16:35 /etc/sudoers.d/ceph

[root@node1 ~]# pssh -h host-list.txt -i 'useradd snow'
[1] 16:39:11 [SUCCESS] root@node1.1000cc.net
[2] 16:39:11 [SUCCESS] root@node2.1000cc.net
[3] 16:39:11 [SUCCESS] root@node4.1000cc.net
[4] 16:39:11 [SUCCESS] root@node3.1000cc.net
[5] 16:39:11 [SUCCESS] root@node5.1000cc.net

[root@node1 ~]# pssh -h host-list.txt -i 'echo 123456 | passwd --stdin snow'
[1] 16:39:37 [SUCCESS] root@node2.1000cc.net
Changing password for user snow.
passwd: all authentication tokens updated successfully.
[2] 16:39:37 [SUCCESS] root@node1.1000cc.net
Changing password for user snow.
passwd: all authentication tokens updated successfully.
[3] 16:39:37 [SUCCESS] root@node4.1000cc.net
Changing password for user snow.
passwd: all authentication tokens updated successfully.
[4] 16:39:37 [SUCCESS] root@node3.1000cc.net
Changing password for user snow.
passwd: all authentication tokens updated successfully.
[5] 16:39:37 [SUCCESS] root@node5.1000cc.net
Changing password for user snow.
passwd: all authentication tokens updated successfully.
# Firewall configuration (allow SSH plus the Ceph MON and OSD/MDS ports)
[root@node1 ~]# firewall-cmd --add-service=ssh --permanent
[root@node1 ~]# firewall-cmd --add-port=6789/tcp --permanent
[root@node1 ~]# firewall-cmd --add-port=6800-7100/tcp --permanent
[root@node1 ~]# firewall-cmd --reload

# Install the Ceph repository package
[root@node1 ~]# yum install python2-pip https://download.ceph.com/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y
# Set up the ceph admin account
[root@node1 ~]# su - snow
[snow@node1 ~]$ ssh-keygen -N ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/snow/.ssh/id_rsa):
Created directory '/home/snow/.ssh'.
Your identification has been saved in /home/snow/.ssh/id_rsa.
Your public key has been saved in /home/snow/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:2nrNBsv6c4ZBFtBKQk691gTS6BsoPGvAGRWDF0FF6ho snow@node1.1000cc.net
The key's randomart image is:
+---[RSA 2048]----+
| +OX*o+ |
| o +=.+ + |
|o ++.o = . |
|o=o o + + |
|.Eo. + oS |
| oo . oo |
|.. ...B |
| .* * |
| o+.= |
+----[SHA256]-----+

[snow@node1 ~]$ vim ~/.ssh/config
Host node1
    Hostname node1.1000cc.net
    User snow
Host node2
    Hostname node2.1000cc.net
    User snow
Host node3
    Hostname node3.1000cc.net
    User snow
Host node4
    Hostname node4.1000cc.net
    User snow
Host node5
    Hostname node5.1000cc.net
    User snow

[snow@node1 ~]$ chmod 600 ~/.ssh/config
[snow@node1 ~]$ ssh-copy-id node2
[snow@node1 ~]$ ssh-copy-id node3
[snow@node1 ~]$ ssh-copy-id node4
[snow@node1 ~]$ ssh-copy-id node5
1.3 Configuring Ceph Nautilus
1) Install Ceph on all nodes
[snow@node1 ~]$ sudo yum -y install ceph-deploy
[snow@node1 ~]$ mkdir ceph
[snow@node1 ~]$ cd ceph
[snow@node1 ceph]$ ceph-deploy new node2
# Install Ceph on the storage nodes
[snow@node1 ceph]$ ceph-deploy install --release nautilus \
    --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-nautilus/el7 \
    --gpg-url http://mirrors.ustc.edu.cn/ceph/keys/release.asc \
    --nogpgcheck node2 node3 node4

# Create the initial monitor and gather the keys
[snow@node1 ceph]$ ceph-deploy mon create-initial

2) Configure the Ceph cluster
[snow@node1 ceph]$ ceph-deploy osd create node2 --data /dev/vdb
[snow@node1 ceph]$ ceph-deploy osd create node3 --data /dev/vdb
[snow@node1 ceph]$ ceph-deploy osd create node4 --data /dev/vdb
[snow@node1 ceph]$ ceph-deploy admin node1 node2 node3 node4
# Set up the mgr node
[snow@node1 ceph]$ ceph-deploy mgr create node2
[snow@node1 ceph]$ ceph-deploy mon add node2
[snow@node1 ceph]$ sudo ceph -s
  cluster:
    id:     022a3132-fabf-434b-bebc-64a8f67b4bf0
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv2 (age 20m)
    mgr: srv2(active, since 60m)
    osd: 3 osds: 3 up (since 62m), 3 in (since 62m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 93 GiB / 96 GiB avail
    pgs:
# If you hit the error: [ceph_deploy][ERROR ] RuntimeError: bootstrap-rgw keyring not found; run 'gatherkeys'
# run "ceph-deploy gatherkeys node2" to fix it.

# If the cluster reports: HEALTH_WARN mon is allowing insecure global_id reclaim
# Fix:
# ceph config set mon auth_allow_insecure_global_id_reclaim false
2. Using Ceph
2.1 Block Storage (RBD)
1) Create the RBD pool
[root@node1 ~]# su - snow
[snow@node1 ~]$ cd ceph

# On the admin node: install Ceph on the client node and push the admin credentials to it
[snow@node1 ceph]$ ceph-deploy install --release nautilus \
    --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-nautilus/el7 \
    --gpg-url http://mirrors.ustc.edu.cn/ceph/keys/release.asc \
    --nogpgcheck node5

[snow@node1 ceph]$ ceph-deploy admin node5

# On the client node
[root@node5 ~]# chmod 644 /etc/ceph/ceph.client.admin.keyring
[root@node5 ~]# ceph osd pool create rbd 0
pool 'rbd' created
[root@node5 ~]# rbd pool init rbd
# Create an image named rbd1
[root@node5 ~]# rbd create rbd1 --size 2G --image-feature layering

[root@node5 ~]# rbd ls -l
NAME SIZE  PARENT FMT PROT LOCK
rbd1 2 GiB          2

2) Map the image
[root@node5 ~]# rbd map rbd1
/dev/rbd0

[root@node5 ~]# rbd showmapped
id pool namespace image snap device
0  rbd            rbd1  -    /dev/rbd0
3) Use the device
[root@node5 ~]# mkfs.ext4 /dev/rbd0

[root@node5 ~]# mount /dev/rbd0 /mnt

[root@node5 ~]# df -Th | grep /mnt
/dev/rbd0      ext4      2.0G  6.0M  1.8G   1% /mnt
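If the image later needs more space, it can be grown online; a brief sketch using the standard rbd and ext4 tools (the 4G size is just an example):
[root@node5 ~]# rbd resize rbd1 --size 4G       # grow the image
[root@node5 ~]# resize2fs /dev/rbd0             # grow the ext4 filesystem to match
[root@node5 ~]# df -Th | grep /mnt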
4) Remove the block device
[root@node5 ~]# umount /mnt
[root@node5 ~]# rbd unmap /dev/rbd/rbd/rbd1
[root@node5 ~]# rbd rm rbd1 -p rbd
Removing image: 100% complete...done.
[root@node5 ~]# rbd ls -l
[root@node5 ~]#

5) Delete the pool
# Note: deleting a pool requires mon_allow_pool_delete = true to be declared in the monitor's configuration, followed by a restart of the ceph-mon.target service.
# After the change, synchronize the file to all other nodes.
# To avoid missing this option when nodes are added later, you can also add it to the template ceph.conf.
[root@node2 ~]# vim /etc/ceph/ceph.conf
[global]
fsid = 510ca313-4976-492d-8aca-978392d654be
mon_initial_members = node2
mon_host = 192.168.1.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# Add the following line
mon_allow_pool_delete = true
[root@node2 ~]# systemctl restart ceph-mon.target
# Syntax: ceph osd pool delete <pool-name> <pool-name> <options>
[root@node5 ~]# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
pool 'rbd' removed

[root@node5 ~]# ceph osd pool ls
[root@node5 ~]#
2.2 Using CephFS
1) Create the MDS
[root@node1 ~]# su - snow
[snow@node1 ~]$ cd ceph
[snow@node1 ceph]$ ceph-deploy mds create node2
# On node2
[root@node2 ~]# chmod 644 /etc/ceph/ceph.client.admin.keyring

# Create the CephFS pools
[root@node2 ~]# ceph osd pool create cephfs_data 128
[root@node2 ~]# ceph osd pool create cephfs_metadata 128

# Create the CephFS filesystem
[root@node2 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
[root@node2 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[root@node2 ~]# ceph mds stat
cephfs:1 {0=node2=up:active}
[root@node2 ~]# ceph fs status cephfs
cephfs - 1 clients
======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | node2 | Reqs:    0 /s |   10  |   13  |
+------+--------+-------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 1536k | 16.9G |
|   cephfs_data   |   data   |    0  | 16.9G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
2) Use CephFS from the client
[root@node5 ~]# yum install ceph-fuse -y

# Fetch the admin key
[root@node5 ~]# ssh snow@node2.1000cc.net "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
[root@node5 ~]# chmod 600 admin.key
[root@node5 ~]# sudo mount.ceph node2.1000cc.net:/ /mnt -o name=admin,secretfile=admin.key
[root@node5 ~]# df -Th | grep /mnt
192.168.10.12:/      ceph      17G     0   17G   0% /mnt
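To make the mount persistent across reboots, an /etc/fstab entry along the following lines can be used; this is just a sketch, assuming the key file was saved as /root/admin.key:
[root@node5 ~]# vim /etc/fstab
# Append one line:
node2.1000cc.net:/  /mnt  ceph  name=admin,secretfile=/root/admin.key,noatime,_netdev  0 0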
3) Remove CephFS
[root@node5 ~]# umount /mnt/

# On node2
[root@node2 ~]# systemctl disable --now ceph-mds@node2
[root@node2 ~]# ceph fs rm cephfs --yes-i-really-mean-it
[root@node2 ~]# ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
pool 'cephfs_metadata' removed

[root@node2 ~]# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
pool 'cephfs_data' removed

# Re-enable the MDS and verify
[root@node2 ~]# systemctl enable --now ceph-mds@node2
[root@node2 ~]# ceph fs ls
No filesystems enabled

[root@node2 ~]# ceph osd pool ls
device_health_metrics

[root@node2 ~]# ceph mds stat
 1 up:standby
2.3 Enabling the Object Gateway
1) Configuration
[root@node1 ~]# su - snow
[snow@node1 ~]$ cd ceph
[snow@node1 ceph]$ ceph-deploy install \
    --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-nautilus/el7 \
    --gpg-url http://mirrors.ustc.edu.cn/ceph/keys/release.asc \
    --nogpgcheck --rgw node5

[snow@node1 ceph]$ ceph-deploy admin node5
[snow@node1 ceph]$ ceph-deploy rgw create node5
......
......
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host node5 and default port 7480

[snow@node1 ceph]$ curl node5.1000cc.net:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
2) Client-side usage
[root@node5 ~]# radosgw-admin user create --uid=gzliu --display-name="Gz Lau" --email=admin@1000cc.net
{
    "user_id": "gzliu",
    "display_name": "Gz Lau",
    "email": "admin@1000cc.net",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "gzliu",
            "access_key": "KCJ634DW8NXLEKZ84B6D",
            "secret_key": "RpMj9yaqaKvMVyRNO3Iw0TkU07K820jH2TJG1BFv"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

[root@node5 ~]# radosgw-admin user list
[
    "gzliu"
]

# The output is identical to that of "user create" above
[root@node5 ~]# radosgw-admin user info --uid=gzliu
{
    "user_id": "gzliu",
    "display_name": "Gz Lau",
    ......
    ......
}

3) Verification
[root@node1 ~]# yum install python-boto -y
[root@node1 ~]# vim s3.py
import sys
import boto
import boto.s3.connection

# The S3 user's keys
ACCESS_KEY = 'KCJ634DW8NXLEKZ84B6D'
SECRET_KEY = 'RpMj9yaqaKvMVyRNO3Iw0TkU07K820jH2TJG1BFv'

# FQDN and listening port of the Object Gateway
HOST = 'node5.1000cc.net'
PORT = 7480

conn = boto.connect_s3(
    aws_access_key_id = ACCESS_KEY,
    aws_secret_access_key = SECRET_KEY,
    port = PORT,
    host = HOST,
    is_secure = False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

# Create a bucket named [snow-test]
bucket = conn.create_bucket('snow-test')

# List all buckets
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

# Optional cleanup: remove the (empty) test bucket afterwards
bucket.delete()

[root@node1 ~]# python s3.py
snow-test    2020-01-29T17:54:42.371Z
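The result can also be cross-checked on the gateway host with radosgw-admin (if the bucket.delete() cleanup at the end of s3.py is skipped, snow-test should still show up in the listing); a quick sketch:
[root@node5 ~]# radosgw-admin bucket list
[root@node5 ~]# radosgw-admin bucket stats --bucket=snow-test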
3. Enabling the Dashboard
1) Enable the Dashboard module
[root@node2 ~]# yum install ceph-mgr-dashboard -y
[root@node2 ~]# ceph mgr module enable dashboard
[root@node2 ~]# ceph mgr module ls
{
    "enabled_modules": [
        "dashboard",
        "iostat",
        "restful"
    ],
    "disabled_modules": [
        {
            "name": "ansible",
            "can_run": true,
.....
.....
2) Create a self-signed certificate for the dashboard
[root@node2 ~]# ceph dashboard create-self-signed-cert
Self-signed certificate created

3) Create an account with a password and the administrator role
[root@node2 ~]# ceph dashboard ac-user-create lisa password administrator
{"username": "lisa", "lastUpdate": 1580330090, "name": null, "roles": ["administrator"], "password": "$2b$12$4ZBKZidynw.HcID8PgTwO.NpwPsEJA1cpudSOxXk7c/snmddVaqAW", "email": null}

# Newer Nautilus releases changed this command to read the password from a file; in that case use the following form
[root@node2 ~]# cat >> password.txt << EOF
password
EOF

[root@node2 ~]# sudo ceph dashboard ac-user-create lisa administrator -i password.txt
{"username": "lisa", "lastUpdate": 1580330090, "name": null, "roles": ["administrator"], "password": "$2b$12$4ZBKZidynw.HcID8PgTwO.NpwPsEJA1cpudSOxXk7c/snmddVaqAW", "email": null}
4) Get the Dashboard URL
[root@node2 ~]# sudo ceph mgr services
{
    "dashboard": "https://node2.1000cc.net:8443/",
    "prometheus": "http://node2.1000cc.net:9283/"
}

5) Firewall settings
[root@node2 ~]# sudo firewall-cmd --add-port=8443/tcp --permanent
[root@node2 ~]# sudo firewall-cmd --add-port=9283/tcp --permanent
[root@node2 ~]# sudo firewall-cmd --reload

6) Access: open a browser and go to https://node2.1000cc.net:8443
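Before opening the browser, it can be worth confirming from a shell that the mgr is actually listening on the expected port; a small sketch:
[root@node2 ~]# ss -tlnp | grep 8443                          # the ceph-mgr process should be listening
[root@node2 ~]# curl -k -I https://node2.1000cc.net:8443/     # -k accepts the self-signed certificate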
4. Ceph Nautilus Configuration and Implementation -- with an SSD Cache
4.1 Topology
                 +--------------------+        +--------------------+ 
                 | [node1.1000cc.net] |        | [node5.1000cc.net] |
                 |    Ceph-Deploy     |        |        client      |
                 |   192.168.10.11    |        |   192.168.10.15    |     
                 +--------------------+        +--------------------+   
                          |--------------|-----------------|
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |192.168.10.12               |192.168.10.13               |192.168.10.14
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node2.1000cc.net]  |    |   [node3.1000cc.net]  |    |   [node4.1000cc.net]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
4.2 Preliminary Preparation
1) Add two 32 GB disks to every OSD host; they appear as sdb and sdc on each host.
2) Enable the EPEL repository on all hosts.
3) Create the administrative account on all nodes, grant it sudo privileges and passwordless SSH login, configure the firewall, and install the Ceph repository package -- exactly the same steps (and commands) as in section 1.2.
4.3 Configuring Ceph Nautilus
1) Install Ceph on all nodes
[snow@node1 ~]$ sudo yum -y install ceph-deploy
[snow@node1 ~]$ mkdir ceph
[snow@node1 ~]$ cd ceph
[snow@node1 ceph]$ ceph-deploy new node2

# Install Ceph on all nodes
[snow@node1 ceph]$ ceph-deploy install --release nautilus \
    --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-nautilus/el7 \
    --gpg-url http://mirrors.ustc.edu.cn/ceph/keys/release.asc \
    --nogpgcheck node1 node2 node3 node4

# Create the initial monitor and gather the keys
[snow@node1 ceph]$ ceph-deploy mon create-initial

2) Configure the Ceph cluster
[snow@node1 ceph]$ ceph-deploy osd create node2 --data /dev/sdb
[snow@node1 ceph]$ ceph-deploy osd create node3 --data /dev/sdb
[snow@node1 ceph]$ ceph-deploy osd create node4 --data /dev/sdb
[snow@node1 ceph]$ ceph-deploy osd create node2 --data /dev/sdc
[snow@node1 ceph]$ ceph-deploy osd create node3 --data /dev/sdc
[snow@node1 ceph]$ ceph-deploy osd create node4 --data /dev/sdc
[snow@node1 ceph]$ ceph-deploy admin node1 node2 node3 node4

# Set up the mgr node
[snow@node1 ceph]$ ceph-deploy mgr create node2
[snow@node1 ceph]$ ceph-deploy mon add node2
[snow@node1 ceph]$ sudo ceph -s

# If you hit the error: [ceph_deploy][ERROR ] RuntimeError: bootstrap-rgw keyring not found; run 'gatherkeys'
# run "ceph-deploy gatherkeys node2" to fix it.
4.4 Configuring the SSD Cache
1) Check the current device class of all disks
[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF 
-1       0.18713 root default                          
-3       0.06238     host node2                         
 0   hdd 0.03119         osd.0     up  1.00000 1.00000 
 3   hdd 0.03119         osd.3     up  1.00000 1.00000 
-5       0.06238     host node3                         
 1   hdd 0.03119         osd.1     up  1.00000 1.00000 
 4   hdd 0.03119         osd.4     up  1.00000 1.00000 
-7       0.06238     host node4                         
 2   hdd 0.03119         osd.2     up  1.00000 1.00000 
 5   hdd 0.03119         osd.5     up  1.00000 1.00000
[snow@node1 ceph]$ sudo ceph osd crush class ls
[
    "hdd"
]

2) Remove the device class from the chosen OSDs and re-mark them as ssd (osd.0, osd.1, and osd.2 in this example)
[snow@node1 ceph]$ for i in 0 1 2; do sudo ceph osd crush rm-device-class osd.$i; done
done removing class of osd(s): 0
done removing class of osd(s): 1
done removing class of osd(s): 2
[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.18713 root default
-3       0.06238     host node2
 0       0.03119         osd.0      up  1.00000 1.00000
 3   hdd 0.03119         osd.3      up  1.00000 1.00000
-5       0.06238     host node3
 1       0.03119         osd.1      up  1.00000 1.00000
 4   hdd 0.03119         osd.4      up  1.00000 1.00000
-7       0.06238     host node4
 2       0.03119         osd.2      up  1.00000 1.00000
 5   hdd 0.03119         osd.5      up  1.00000 1.00000

[snow@node1 ceph]$ for i in 0 1 2; do sudo ceph osd crush set-device-class ssd osd.$i; done
set osd(s) 0 to class 'ssd'
set osd(s) 1 to class 'ssd'
set osd(s) 2 to class 'ssd'

[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.18713 root default
-3       0.06238     host node2
 3   hdd 0.03119         osd.3      up  1.00000 1.00000
 0   ssd 0.03119         osd.0      up  1.00000 1.00000
-5       0.06238     host node3
 4   hdd 0.03119         osd.4      up  1.00000 1.00000
 1   ssd 0.03119         osd.1      up  1.00000 1.00000
-7       0.06238     host node4
 5   hdd 0.03119         osd.5      up  1.00000 1.00000
 2   ssd 0.03119         osd.2      up  1.00000 1.00000

[snow@node1 ceph]$ sudo ceph osd crush class ls
[
    "hdd",
    "ssd"
]
3) Create a CRUSH rule based on the ssd class
[snow@node1 ceph]$ sudo ceph osd crush rule create-replicated ssd_rule default host ssd

[snow@node1 ceph]$ sudo ceph osd crush rule list
replicated_rule
ssd_rule

4) Create the data pool and the cache pool
(1) Create a data pool --- qyydata
[snow@node1 ceph]$ sudo ceph osd pool create qyydata 16
pool 'qyydata' created

(2) Create a cache pool --- qyycache, bound to the ssd_rule
[snow@node1 ceph]$ sudo ceph osd pool create qyycache 16 ssd_rule
pool 'qyycache' created

[snow@node1 ceph]$ sudo ceph osd pool get qyycache crush_rule
crush_rule: ssd_rule

[snow@node1 ceph]$ sudo ceph osd lspools
1 qyydata
2 qyycache
5) Set up the cache tier
# Place the qyycache pool in front of the qyydata pool
[snow@node1 ceph]$ sudo ceph osd tier add qyydata qyycache
pool 'qyycache' is now (or already was) a tier of 'qyydata'

# Set the cache mode to writeback
[snow@node1 ceph]$ sudo ceph osd tier cache-mode qyycache writeback
set cache-mode for pool 'qyycache' to writeback

# Direct all client requests from the base pool to the cache pool
[snow@node1 ceph]$ sudo ceph osd tier set-overlay qyydata qyycache
overlay for 'qyydata' is now (or already was) 'qyycache'

# If instead you want a read-only cache pool, proceed as follows
# Place the cache pool in front of the data pool
[snow@node1 ceph]$ sudo ceph osd tier add qyydata qyycache

# Set the cache mode to readonly
[snow@node1 ceph]$ sudo ceph osd tier cache-mode qyycache readonly


6) Inspect the details of the qyydata and qyycache pools
[snow@node1 ceph]$ sudo ceph osd dump |egrep 'qyydata|qyycache'
pool 1 'qyydata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 41 lfor 41/41/41 flags hashpspool tiers 2 read_tier 2 write_tier 2 stripe_width 0
pool 2 'qyycache' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn last_change 41 lfor 41/41/41 flags hashpspool,incomplete_clones tier_of 1 cache_mode writeback stripe_width 0

7) Basic cache tuning
# Use a bloom filter for the cache tier's hit_set_type.
# This enables hit-set tracking on the cache pool. Valid values: bloom, explicit_hash, explicit_object. Default: bloom.
[snow@node1 ceph]$ sudo ceph osd pool set qyycache hit_set_type bloom
set pool 2 hit_set_type to bloom

# By default the cache pool judges hits by modification time; you can also tune the number of hit sets (hit_set_count), the hit-set period (hit_set_period), and the maximum amount of cached data (target_max_bytes).
# hit_set_count and hit_set_period define how much time each HitSet covers and how many HitSets to keep. Keeping a window of access history lets Ceph tell whether a client accessed an object once or repeatedly within that period (recency and hotness).
# Number of hit sets kept for the cache pool. The higher the value, the more memory the ceph-osd daemon consumes.
[snow@node1 ceph]$ sudo ceph osd pool set qyycache hit_set_count 1
set pool 2 hit_set_count to 1

# Lifetime of each hit set kept for the cache pool. The higher the value, the more memory ceph-osd consumes.
[snow@node1 ceph]$ sudo ceph osd pool set qyycache hit_set_period 3600
set pool 2 hit_set_period to 3600          # 1 hour

# When the target_max_bytes threshold is reached, Ceph flushes or evicts objects.
[snow@node1 ceph]$ sudo ceph osd pool set qyycache target_max_bytes 1073741824
set pool 2 target_max_bytes to 1073741824  # 1 GB

# You can cap either the object count or the data volume; setting a maximum on the cache pool forces flushing and eviction once the limit is reached.
[snow@node1 ceph]$ sudo ceph osd pool set qyycache target_max_objects 10000
set pool 2 target_max_objects to 10000

# Set min_read_recency_for_promote and min_write_recency_for_promote
The cache-tier agent performs two main operations:
•Flushing: writes modified (dirty) objects back to the slower backing store while keeping them in the cache pool.
•Evicting: removes unmodified (clean) objects from the cache pool.

Whether the agent flushes or evicts is driven mainly by how full the cache pool is: once the modified data in the cache pool reaches a threshold (a percentage of its capacity), the agent starts flushing it to the backing store. With the setting used here, flushing is triggered when 40% of the cached data is dirty (see the sketch below).
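The 40% figure above corresponds to the cache_target_dirty_ratio pool parameter (it is set again, together with the other tuning knobs, in section 4.6); a quick sketch of setting and checking it:
[snow@node1 ceph]$ sudo ceph osd pool set qyycache cache_target_dirty_ratio 0.4
[snow@node1 ceph]$ sudo ceph osd pool get qyycache cache_target_dirty_ratio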
[snow@node1 ceph]$ sudo ceph osd pool set qyycache min_read_recency_for_promote 1
set pool 2 min_read_recency_for_promote to 1

[snow@node1 ceph]$ sudo ceph osd pool set qyycache min_write_recency_for_promote 1
set pool 2 min_write_recency_for_promote to 1

[snow@node1 ceph]$ sudo ceph -s
  cluster:
    id:     9aa456f7-2ce5-4918-ad1c-95b07d23df9c
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node2 (age 11m)
    mgr: node2(active, since 8m)
    osd: 6 osds: 6 up (since 8m), 6 in (since 8m)

  data:
    pools:   2 pools, 32 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 186 GiB / 192 GiB avail
    pgs:     32 active+clean
4.5 Client Configuration
1) Prepare the RBD pool

# On the admin node: install Ceph on the client node and push the admin credentials to it
[snow@node1 ceph]$ ceph-deploy install --release nautilus \
    --repo-url http://csrv.1000y.cloud/repos/ceph/rpm-nautilus/el7 \
    --gpg-url http://csrv.1000y.cloud/repos/ceph/keys/release.asc \
    --nogpgcheck node5

[snow@node1 ceph]$ ceph-deploy admin node5

# On the client node
[root@node5 ~]# chmod 644 /etc/ceph/ceph.client.admin.keyring

[root@node5 ~]# ceph osd lspools
1 qyydata
2 qyycache
[root@node5 ~]# rbd pool init qyydata

# Create an image named rbd1
[root@node5 ~]# rbd -p qyydata create rbd1 --size 2G --image-feature layering

[root@node5 ~]# rbd -p qyydata ls -l
NAME SIZE  PARENT FMT PROT LOCK
rbd1 2 GiB          2
2) Map the image
[root@node5 ~]# rbd map -p qyydata rbd1
/dev/rbd0

[root@node5 ~]# rbd showmapped
id pool    namespace image snap device
0  qyydata           rbd1  -    /dev/rbd0
3) Use the device
[root@node5 ~]# mkfs.ext4 /dev/rbd0

[root@node5 ~]# mount /dev/rbd0 /mnt

[root@node5 ~]# df -Th | grep /mnt
/dev/rbd0      ext4      2.0G  6.0M  1.8G   1% /mnt
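To confirm that the writeback tier is actually absorbing the writes, compare the object counts of the two pools after writing some data; a brief sketch:
[root@node5 ~]# dd if=/dev/zero of=/mnt/testfile bs=1M count=100 oflag=direct
[root@node5 ~]# rados df | egrep 'POOL|qyydata|qyycache'   # new objects should land in qyycache first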
4.6 Other Operations on the SSD Cache Pool
1) Removing a writeback cache pool: first switch the cache mode to forward so new and modified objects get flushed to the backing pool
[snow@node1 ceph]$ sudo ceph osd tier cache-mode qyycache forward --yes-i-really-mean-it
2) Inspect the cache pool to make sure all objects have been flushed (this may take a while)
[snow@node1 ceph]$ sudo rados -p qyycache ls

3) If objects remain in the cache pool, they can also be flushed and evicted manually
[snow@node1 ceph]$ sudo rados -p qyycache cache-flush-evict-all

4) Remove the overlay so that clients no longer send traffic to the cache
[snow@node1 ceph]$ sudo ceph osd tier remove-overlay qyydata

5) Detach the cache pool from the backing pool
[snow@node1 ceph]$ sudo ceph osd tier remove qyydata qyycache

6) Cache-pool tuning parameters
(1) Hit-set filter; the default is a Bloom filter
[snow@node1 ceph]$ sudo ceph osd pool set qyycache hit_set_type bloom
[snow@node1 ceph]$ sudo ceph osd pool set qyycache hit_set_count 1
# Set the false-positive rate of the Bloom filter
[snow@node1 ceph]$ sudo ceph osd pool set qyycache hit_set_fpp 0.15

# Set the hit-set period, in seconds
[snow@node1 ceph]$ sudo ceph osd pool set qyycache hit_set_period 3600

(2) Set how many bytes or objects the cache pool may hold before the tiering agent starts flushing objects to the backing pool and evicting them
# Start flushing and evicting once the cache pool holds 1 GB of data
[snow@node1 ceph]$ sudo ceph osd pool set qyycache target_max_bytes 1073741824

# Start flushing and evicting once the cache pool holds 10,000 objects
[snow@node1 ceph]$ sudo ceph osd pool set qyycache target_max_objects 10000

(3) Define the minimum age (in seconds) before the cache tier flushes an object to the backing tier or evicts it
[snow@node1 ceph]$ sudo ceph osd pool set qyycache cache_min_flush_age 600
[snow@node1 ceph]$ sudo ceph osd pool set qyycache cache_min_evict_age 600

(4) Define at what percentage of dirty (modified) objects the tiering agent starts flushing them from the cache tier to the backing tier
[snow@node1 ceph]$ sudo ceph osd pool set qyycache cache_target_dirty_ratio 0.4

(5) When the cache pool reaches this fullness ratio, the tiering agent evicts objects to keep capacity available; clean (unmodified) objects are flushed out at this point
[snow@node1 ceph]$ sudo ceph osd pool set qyycache cache_target_full_ratio 0.8

(6) Set how many HitSets are checked on read and write operations to decide whether an object should be promoted asynchronously (i.e. moved from the cold backing pool into the cache pool). The value should lie between 0 and hit_set_count: 0 promotes an object as soon as it is read or written; 1 checks only the current HitSet and promotes the object only if it is found there; any larger value checks that many historical HitSets, and the object is promoted if it appears in any of them.
[snow@node1 ceph]$ sudo ceph osd pool set qyycache min_read_recency_for_promote 1
[snow@node1 ceph]$ sudo ceph osd pool set qyycache min_write_recency_for_promote 1
4.7 Using an SSD as the ceph-osd Journal/DB Device
1) Add four more 32 GB disks to every OSD host (sdd through sdg in this example)
2) Create a VG and a journal LV on every OSD node
[root@node2 ~]# pvcreate /dev/sdd
[root@node2 ~]# vgcreate data /dev/sdd
[root@node2 ~]# lvcreate -l 100%FREE --name log data

[root@node3 ~]# pvcreate /dev/sdd
[root@node3 ~]# vgcreate data /dev/sdd
[root@node3 ~]# lvcreate -l 100%FREE --name log data

[root@node4 ~]# pvcreate /dev/sdd
[root@node4 ~]# vgcreate data /dev/sdd
[root@node4 ~]# lvcreate -l 100%FREE --name log data

3) Create filestore OSDs in journal mode
[snow@node1 ceph]$ ceph-deploy --overwrite-conf osd create --filestore --fs-type xfs --data /dev/sde --journal data/log node2
[snow@node1 ceph]$ ceph-deploy --overwrite-conf osd create --filestore --fs-type xfs --data /dev/sde --journal data/log node3
[snow@node1 ceph]$ ceph-deploy --overwrite-conf osd create --filestore --fs-type xfs --data /dev/sde --journal data/log node4
[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.28070 root default
-3       0.09357     host srv2
 6   hdd 0.03119         osd.6     up  1.00000 1.00000
 0   ssd 0.03119         osd.0     up  1.00000 1.00000
 1   ssd 0.03119         osd.1     up  1.00000 1.00000
-5       0.09357     host srv3
 3   hdd 0.03119         osd.3     up  1.00000 1.00000
 7   hdd 0.03119         osd.7     up  1.00000 1.00000
 2   ssd 0.03119         osd.2     up  1.00000 1.00000
-7       0.09357     host srv4
 4   hdd 0.03119         osd.4     up  1.00000 1.00000
 5   hdd 0.03119         osd.5     up  1.00000 1.00000
 8   hdd 0.03119         osd.8     up  1.00000 1.00000

4) Create bluestore OSDs with a separate DB and WAL
[root@node2 ~]# pvcreate /dev/sdf
[root@node2 ~]# vgcreate cache /dev/sdf
[root@node2 ~]# lvcreate -l 50%FREE --name db-lv-0 cache
[root@node2 ~]# lvcreate -l 50%FREE --name wal-lv-0 cache

[root@node3 ~]# pvcreate /dev/sdf
[root@node3 ~]# vgcreate cache /dev/sdf
[root@node3 ~]# lvcreate -l 50%FREE --name db-lv-0 cache
[root@node3 ~]# lvcreate -l 50%FREE --name wal-lv-0 cache

[root@node4 ~]# pvcreate /dev/sdf
[root@node4 ~]# vgcreate cache /dev/sdf
[root@node4 ~]# lvcreate -l 50%FREE --name db-lv-0 cache
[root@node4 ~]# lvcreate -l 50%FREE --name wal-lv-0 cache

# Create the OSDs
[snow@node1 ceph]$ sudo ceph-deploy --overwrite-conf osd create --bluestore node2 --data /dev/sdg --block-db cache/db-lv-0 --block-wal cache/wal-lv-0
[snow@node1 ceph]$ sudo ceph-deploy --overwrite-conf osd create --bluestore node3 --data /dev/sdg --block-db cache/db-lv-0 --block-wal cache/wal-lv-0
[snow@node1 ceph]$ sudo ceph-deploy --overwrite-conf osd create --bluestore node4 --data /dev/sdg --block-db cache/db-lv-0 --block-wal cache/wal-lv-0
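A sketch for checking where each new OSD placed its data, DB, and WAL devices (run on any OSD node, plus the usual tree check from the deploy node):
[root@node2 ~]# ceph-volume lvm list            # shows the block, block.db and block.wal devices per OSD
[snow@node1 ceph]$ sudo ceph osd tree           # the new bluestore OSDs should show up as up/in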
5. Creating a Pool on Specific OSDs
5.1 Preliminary Preparation
1) Add two disks of at least 5 GB to every OSD host; they appear as sdb and sdc on each host.
2) Enable the EPEL repository on all hosts.
3) Create the administrative account on all nodes, grant it sudo privileges and passwordless SSH login, configure the firewall, and install the Ceph repository package -- exactly the same steps (and commands) as in section 1.2.
5.2 Configuring Ceph Nautilus
1) Install Ceph on all nodes
[snow@node1 ~]$ sudo yum -y install ceph-deploy
[snow@node1 ~]$ mkdir ceph
[snow@node1 ~]$ cd ceph
[snow@node1 ceph]$ ceph-deploy new node2

# Install Ceph on all nodes
[snow@node1 ceph]$ ceph-deploy install --release nautilus \
    --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-nautilus/el7 \
    --gpg-url http://mirrors.ustc.edu.cn/ceph/keys/release.asc \
    --nogpgcheck node1 node2 node3 node4

# Create the initial monitor and gather the keys
[snow@node1 ceph]$ ceph-deploy mon create-initial

2) Configure the Ceph cluster
[snow@node1 ceph]$ ceph-deploy osd create node2 --data /dev/sdb
[snow@node1 ceph]$ ceph-deploy osd create node3 --data /dev/sdb
[snow@node1 ceph]$ ceph-deploy osd create node4 --data /dev/sdb
[snow@node1 ceph]$ ceph-deploy osd create node2 --data /dev/sdc
[snow@node1 ceph]$ ceph-deploy osd create node3 --data /dev/sdc
[snow@node1 ceph]$ ceph-deploy osd create node4 --data /dev/sdc

[snow@node1 ceph]$ ceph-deploy admin node1 node2 node3 node4

# Set up the mgr node
[snow@node1 ceph]$ ceph-deploy mgr create node2
[snow@node1 ceph]$ ceph-deploy mon add node2
[snow@node1 ceph]$ sudo ceph -s

# If you hit the error: [ceph_deploy][ERROR ] RuntimeError: bootstrap-rgw keyring not found; run 'gatherkeys'
# run "ceph-deploy gatherkeys node2" to fix it.
5.3 Creating a Pool on the Designated OSDs
1) Check the OSD tree
[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF 
-1       0.18713 root default                          
-3       0.06238     host srv2                         
 0   hdd 0.03119         osd.0     up  1.00000 1.00000 
 3   hdd 0.03119         osd.3     up  1.00000 1.00000 
-5       0.06238     host srv3                         
 1   hdd 0.03119         osd.1     up  1.00000 1.00000 
 4   hdd 0.03119         osd.4     up  1.00000 1.00000 
-7       0.06238     host srv4                         
 2   hdd 0.03119         osd.2     up  1.00000 1.00000 
 5   hdd 0.03119         osd.5     up  1.00000 1.00000
2) Assign a custom device class
[snow@node1 ceph]$ for i in 3 4 5; do sudo ceph osd crush rm-device-class osd.$i; done
done removing class of osd(s): 3
done removing class of osd(s): 4
done removing class of osd(s): 5

[snow@node1 ceph]$ for i in 3 4 5; do sudo ceph osd crush set-device-class qyy osd.$i; done
set osd(s) 3 to class 'qyy'
set osd(s) 4 to class 'qyy'
set osd(s) 5 to class 'qyy'

[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.18713 root default
-3       0.06238     host srv2
 0   hdd 0.03119         osd.0     up  1.00000 1.00000
 3   qyy 0.03119         osd.3     up  1.00000 1.00000
-5       0.06238     host srv3
 1   hdd 0.03119         osd.1     up  1.00000 1.00000
 4   qyy 0.03119         osd.4     up  1.00000 1.00000
-7       0.06238     host srv4
 2   hdd 0.03119         osd.2     up  1.00000 1.00000
 5   qyy 0.03119         osd.5     up  1.00000 1.00000
3) Create a CRUSH rule for the qyy class
[snow@node1 ceph]$ sudo ceph osd crush rule create-replicated qyy_rule default host qyy

[snow@node1 ceph]$ sudo ceph osd crush rule list
replicated_rule
qyy_rule

4) Create a pool bound to that rule
[snow@node1 ceph]$ sudo ceph osd pool create qyy 16 qyy_rule
pool 'qyy' created

[snow@node1 ceph]$ sudo ceph osd lspools
1 qyy
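As with ssd_rule in section 4.4, the pool-to-rule binding can be double-checked; the command should report crush_rule: qyy_rule:
[snow@node1 ceph]$ sudo ceph osd pool get qyy crush_rule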
5) Carve an RBD image out of the pool on the client (node5)
# On the admin node: install Ceph on the client node and push the admin credentials to it
[snow@node1 ceph]$ ceph-deploy install --release nautilus \
    --repo-url http://csrv.1000y.cloud/repos/ceph/rpm-nautilus/el7 \
    --gpg-url http://csrv.1000y.cloud/repos/ceph/keys/release.asc \
    --nogpgcheck node5

[snow@node1 ceph]$ ceph-deploy admin node5

# On the client node
[root@node5 ~]# chmod 644 /etc/ceph/ceph.client.admin.keyring

[root@node5 ~]# ceph osd lspools
1 qyy
[root@node5 ~]# rbd pool init qyy
# Create an image named rbd2
[root@node5 ~]# rbd -p qyy create rbd2 --size 2G --image-feature layering

[root@node5 ~]# rbd -p qyy ls -l
NAME SIZE  PARENT FMT PROT LOCK
rbd2 2 GiB          2

6) Map the image
[root@node5 ~]# rbd map -p qyy rbd2
/dev/rbd0

[root@node5 ~]# rbd showmapped
id pool namespace image snap device
0  qyy            rbd2  -    /dev/rbd0
7) Use the device
[root@node5 ~]# mkfs.ext4 /dev/rbd0

[root@node5 ~]# mount /dev/rbd0 /mnt

[root@node5 ~]# df -Th /mnt
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd0      ext4  2.0G  6.0M  1.8G   1% /mnt
6. Adding/Removing OSDs and Deleting Pools and Hosts
6.1 Adding an OSD
1) Prerequisites
1. Bring up the new node srv6.1000y.cloud
2. Enable EPEL on the new node
3. Create the snow account on the new node with full sudo rights
4. Close any running tmux sessions
5. Give the deploy node passwordless SSH access to the new node
(A minimal sketch of items 2-5 follows below.)
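A sketch of the prerequisite steps, assuming a Host entry for srv6 is added to ~/.ssh/config on the deploy node just like the entries from section 1.2:

# On the new node
[root@srv6 ~]# yum install epel-release -y
[root@srv6 ~]# useradd snow
[root@srv6 ~]# echo 123456 | passwd --stdin snow
[root@srv6 ~]# echo -e 'Defaults:snow !requiretty\nsnow ALL = (root) NOPASSWD:ALL' > /etc/sudoers.d/ceph

# On the deploy node
[snow@node1 ~]$ ssh-copy-id srv6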
2) Check the existing OSDs
[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.18713 root default
-3       0.06238     host srv2
 0   hdd 0.03119         osd.0     up  1.00000 1.00000
 3   qyy 0.03119         osd.3     up  1.00000 1.00000
-5       0.06238     host srv3
 1   hdd 0.03119         osd.1     up  1.00000 1.00000
 4   qyy 0.03119         osd.4     up  1.00000 1.00000
-7       0.06238     host srv4
 2   hdd 0.03119         osd.2     up  1.00000 1.00000
 5   qyy 0.03119         osd.5     up  1.00000 1.00000

3) Install Ceph on the new node and add its OSD
[snow@node1 ceph]$ ceph-deploy install --release nautilus \
    --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-nautilus/el7 \
    --gpg-url http://mirrors.ustc.edu.cn/ceph/keys/release.asc \
    --nogpgcheck srv6

[snow@node1 ceph]$ ceph-deploy osd create srv6 --data /dev/sdb

[snow@node1 ceph]$ ceph-deploy admin srv6

[snow@node1 ceph]$ sudo ceph osd tree
ID  CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
 -1       0.18713 root default
 -3       0.06238     host srv2
  0   hdd 0.03119         osd.0     up  1.00000 1.00000
  3   qyy 0.03119         osd.3     up  1.00000 1.00000
 -5       0.06238     host srv3
  1   hdd 0.03119         osd.1     up  1.00000 1.00000
  4   qyy 0.03119         osd.4     up  1.00000 1.00000
 -7       0.06238     host srv4
  2   hdd 0.03119         osd.2     up  1.00000 1.00000
  5   qyy 0.03119         osd.5     up  1.00000 1.00000
-13       0.03119     host srv6
  6   hdd 0.03119         osd.6     up  1.00000 1.00000
6.2 Removing an OSD
1) Check the existing OSDs
[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF 
-1       0.18713 root default                          
-3       0.06238     host srv2                         
 0   hdd 0.03119         osd.0     up  1.00000 1.00000 
 3   qyy 0.03119         osd.3     up  1.00000 1.00000 
-5       0.06238     host srv3                         
 1   hdd 0.03119         osd.1     up  1.00000 1.00000 
 4   qyy 0.03119         osd.4     up  1.00000 1.00000 
-7       0.06238     host srv4                         
 2   hdd 0.03119         osd.2     up  1.00000 1.00000 
 5   qyy 0.03119         osd.5     up  1.00000 1.00000
-13      0.03119     host srv6                         
 6   hdd 0.03119         osd.6     up  1.00000 1.00000
2) Remove the chosen OSD
# First take the OSD out of the cluster
# When an OSD is marked out, the cluster keeps monitoring its state and rebalances automatically
[snow@node1 ceph]$ sudo ceph osd out 6
marked out osd.6.

# Stop the OSD service on its host
[root@srv6 ~]# systemctl disable --now ceph-osd@6

# Remove the OSD's entry from the CRUSH map so it no longer receives data
[snow@node1 ceph]$ sudo ceph osd crush remove osd.6
removed item id 6 name 'osd.6' from crush map

# Delete the OSD's authentication key
[snow@node1 ceph]$ sudo ceph auth del osd.6
updated

# Remove osd.6
[snow@node1 ceph]$ sudo ceph osd rm osd.6
removed osd.6
3) Verify
[snow@node1 ceph]$ sudo ceph -s
  cluster:
    id:     bab8eeb7-3d95-4f1e-a9a0-8977fc7f39f1
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum srv2 (age 43m)
    mgr: srv2(active, since 39m)
    osd: 6 osds: 6 up (since 4m), 6 in (since 10m)

  data:
    pools:   1 pools, 16 pgs
    objects: 28 objects, 70 MiB
    usage:   6.2 GiB used, 186 GiB / 192 GiB avail
    pgs:     16 active+clean

[snow@node1 ceph]$ sudo ceph osd tree
ID  CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
 -1       0.18713 root default
 -3       0.06238     host srv2
  0   hdd 0.03119         osd.0     up  1.00000 1.00000
  3   qyy 0.03119         osd.3     up  1.00000 1.00000
 -5       0.06238     host srv3
  1   hdd 0.03119         osd.1     up  1.00000 1.00000
  4   qyy 0.03119         osd.4     up  1.00000 1.00000
 -7       0.06238     host srv4
  2   hdd 0.03119         osd.2     up  1.00000 1.00000
  5   qyy 0.03119         osd.5     up  1.00000 1.00000
-13             0     host srv6

4) Replace the disk
# Add a new disk to srv6; the new device file is /dev/sdc
[snow@node1 ceph]$ ceph-deploy osd create srv6 --data /dev/sdc

[snow@node1 ceph]$ sudo ceph osd tree
ID  CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
 -1       0.18713 root default
 -3       0.06238     host srv2
  0   hdd 0.03119         osd.0     up  1.00000 1.00000
  3   qyy 0.03119         osd.3     up  1.00000 1.00000
 -5       0.06238     host srv3
  1   hdd 0.03119         osd.1     up  1.00000 1.00000
  4   qyy 0.03119         osd.4     up  1.00000 1.00000
 -7       0.06238     host srv4
  2   hdd 0.03119         osd.2     up  1.00000 1.00000
  5   qyy 0.03119         osd.5     up  1.00000 1.00000
-13       0.03119     host srv6
  6   hdd 0.03119         osd.6     up  1.00000 1.00000

# If ceph -s shows the warning "1 daemons have recently crashed"
# Cause: a daemon crashed while data was being rebalanced or rolled back, and the crash report was never archived, so the cluster keeps warning.
# Fix:
[snow@node1 ceph]$ sudo ceph crash ls
[snow@node1 ceph]$ sudo ceph crash archive <id>
[snow@node1 ceph]$ sudo ceph crash archive-all
6.3 Deleting a Pool
1) Check the existing pools
[snow@node1 ceph]$ sudo ceph osd pool ls
qyy
2) Update the monitor configuration
[root@node2 ~]# vim /etc/ceph/ceph.conf
[global]
fsid = bab8eeb7-3d95-4f1e-a9a0-8977fc7f39f1
mon_initial_members = srv2
mon_host = 192.168.1.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# Append the following at the bottom of the file
[mon]
mon_allow_pool_delete = true
[root@node2 ~]# systemctl restart ceph-mon@node2
3) Delete the pool
[snow@node1 ceph]$ sudo ceph osd pool rm qyy qyy --yes-i-really-really-mean-it
pool 'qyy' removed

[snow@node1 ceph]$ sudo ceph osd pool ls
[snow@node1 ceph]$
6.4 Removing a Host
1) Remove a host that no longer has any OSDs from the CRUSH map
[snow@node1 ceph]$ sudo ceph osd tree
ID  CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF 
 -1       0.18713 root default                          
 -3       0.06238     host srv2                         
  0   hdd 0.03119         osd.0     up  1.00000 1.00000 
  3   qyy 0.03119         osd.3     up  1.00000 1.00000 
 -5       0.06238     host srv3                         
  1   hdd 0.03119         osd.1     up  1.00000 1.00000 
  4   qyy 0.03119         osd.4     up  1.00000 1.00000 
 -7       0.06238     host srv4                         
  2   hdd 0.03119         osd.2     up  1.00000 1.00000 
  5   qyy 0.03119         osd.5     up  1.00000 1.00000 
-13             0     host srv6
[snow@node1 ceph]$ sudo ceph osd crush remove srv6
removed item id -13 name 'srv6' from crush map

[snow@node1 ceph]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.18713 root default
-3       0.06238     host srv2
 0   hdd 0.03119         osd.0     up  1.00000 1.00000
 3   qyy 0.03119         osd.3     up  1.00000 1.00000
-5       0.06238     host srv3
 1   hdd 0.03119         osd.1     up  1.00000 1.00000
 4   qyy 0.03119         osd.4     up  1.00000 1.00000
-7       0.06238     host srv4
 2   hdd 0.03119         osd.2     up  1.00000 1.00000
 5   qyy 0.03119         osd.5     up  1.00000 1.00000
2) Remove a host --- delete only its data (everything under /var/lib/ceph) but keep the Ceph packages
[snow@node1 ceph]$ sudo ceph-deploy purgedata srv6
[snow@node1 ceph]$ sudo ceph-deploy purgedata srv6 srv7

3) Remove a host --- delete both its data and the Ceph packages
[snow@node1 ceph]$ sudo ceph-deploy purge srv6
[snow@node1 ceph]$ sudo ceph-deploy purge srv6 srv7

 

If this document helped you, feel free to leave a tip. ^-^
