OpenStack Configuration Handbook - Cinder Configuration Guide

Compiled, organized, and written by snow chuai --- 2020/2/6
Last updated --- 2025/03/14


1. Topology
     ------------+---------------------------+---------------------------+------------
                 |                           |                           |
             eth0|192.168.10.11          eth0|192.168.10.13          eth0|192.168.10.12
     +-----------+-----------+   +-----------+-----------+   +-----------+-----------+
     |    [ Control Node ]   |   |    [ Storage Node ]   |   |    [ Compute Node ]   |
     |   [node1.1000cc.net]  |   |   [node3.1000cc.net]  |   |   [node2.1000cc.net]  |
     |  MariaDB    RabbitMQ  |   |      Open vSwitch     |   |        Libvirt        |
     |  Memcached  httpd     |   |        L2 Agent       |   |     Nova Compute      |
     |  Keystone   Glance    |   |        L3 Agent       |   |      Open vSwitch     |
     |  Nova API             |   |     Metadata Agent    |   |        L2 Agent       |
     |  Neutron Server       |   |     Cinder-Volume     |   |                       |
     |  Metadata Agent       |   |     Cinder-Backup     |   |                       |
     |  Cinder API           |   |      [vdb disk]       |   |                       |
     +-----------------------+   +-----------+-----------+   +-----------------------+
                                             |
                                             |
                                             |
     ----------------------------------------+----------------------------------------
                                             |                         
                                         eth0|192.168.10.14
                                 +-----------+-----------+
                                 |     [ NFS Server ]    |
                                 |   [node4.1000cc.net]  |
                                 +-----------------------+
2. Install and Configure Cinder on the Control Node
2.1 Set Up the Cinder User and Sync the Database
1) Create the cinder user and set up the endpoint information
[root@node1 ~(keystone)]# openstack user create --domain default --project service --password servicepassword cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 1be75e85ec9445ab9ff7dd7ec2f02b71 |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fd4fd34318fe48bbbfb28b3952dbd0e8 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@node1 ~(keystone)]# openstack role add --project service --user cinder admin
[root@node1 ~(keystone)]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | a1d5f4ae19ae4870a123a8ffdbafaab8 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@node1 ~(keystone)]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | f95e355f84754bbf86503fca5e230a6e |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
[root@node1 ~(keystone)]# openstack endpoint create --region RegionOne volumev2 public http://192.168.10.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | fcc6ea9e7c4d4ca0966cbc06d0c851da           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | a1d5f4ae19ae4870a123a8ffdbafaab8           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.10.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@node1 ~(keystone)]# openstack endpoint create --region RegionOne volumev2 internal http://192.168.10.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | c7c98e3d68f74df7ac6cf850304123bd           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | a1d5f4ae19ae4870a123a8ffdbafaab8           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.10.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@node1 ~(keystone)]# openstack endpoint create --region RegionOne volumev2 admin http://192.168.10.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 71978605eaad4895b721d456ea70f5da           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | a1d5f4ae19ae4870a123a8ffdbafaab8           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.10.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@node1 ~(keystone)]# openstack endpoint create --region RegionOne volumev3 public http://192.168.10.11:8776/v3/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 59bbbae5707b4d30817471be81d96465           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f95e355f84754bbf86503fca5e230a6e           |
| service_name | cinderv3                                   |
| service_type | volumev3                                   |
| url          | http://192.168.10.11:8776/v3/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@node1 ~(keystone)]# openstack endpoint create --region RegionOne volumev3 internal http://192.168.10.11:8776/v3/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | ae45967111e34363924bb91ed0d6b503           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f95e355f84754bbf86503fca5e230a6e           |
| service_name | cinderv3                                   |
| service_type | volumev3                                   |
| url          | http://192.168.10.11:8776/v3/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@node1 ~(keystone)]# openstack endpoint create --region RegionOne volumev3 admin http://192.168.10.11:8776/v3/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 2a7cdcf21ee8445fb87aa14fe73fbf7c           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f95e355f84754bbf86503fca5e230a6e           |
| service_name | cinderv3                                   |
| service_type | volumev3                                   |
| url          | http://192.168.10.11:8776/v3/%(tenant_id)s |
+--------------+--------------------------------------------+
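The `%\(tenant_id\)s` in the commands above is only shell escaping; the stored template is `%(tenant_id)s`, which the endpoint machinery substitutes with the caller's project ID at request time. A minimal local sketch of that expansion (the project ID below is simply the one from the user-create output, reused for illustration):

```shell
# Reproduce the endpoint template expansion locally (no OpenStack needed).
PROJECT_ID="1be75e85ec9445ab9ff7dd7ec2f02b71"   # example ID from the output above
TEMPLATE='http://192.168.10.11:8776/v3/%(tenant_id)s'
# At request time, %(tenant_id)s is replaced by the caller's project ID:
URL=$(printf '%s' "$TEMPLATE" | sed "s/%(tenant_id)s/$PROJECT_ID/")
echo "$URL"   # http://192.168.10.11:8776/v3/1be75e85ec9445ab9ff7dd7ec2f02b71
```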
2) Set up the database
[root@node1 ~(keystone)]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 216
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database cinder;
Query OK, 1 row affected (0.02 sec)

MariaDB [(none)]> grant all privileges on cinder.* to cinder@'localhost' identified by 'password';
Query OK, 0 rows affected (0.13 sec)

MariaDB [(none)]> grant all privileges on cinder.* to cinder@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
2.2 Install and Configure Cinder
1) Install the cinder packages
[root@node1 ~(keystone)]# yum --enablerepo=centos-openstack-queens,epel install openstack-cinder -y
2) Configure cinder
[root@node1 ~(keystone)]# mv /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@node1 ~(keystone)]# vim /etc/cinder/cinder.conf
[DEFAULT]
# Set this node's IP
my_ip = 192.168.10.11
log_dir = /var/log/cinder
state_path = /var/lib/cinder
auth_strategy = keystone
transport_url = rabbit://openstack:password@192.168.10.11

[database]
connection = mysql+pymysql://cinder:password@192.168.10.11/cinder

[keystone_authtoken]
www_authenticate_uri = http://192.168.10.11:5000
auth_url = http://192.168.10.11:5000
memcached_servers = 192.168.10.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = servicepassword

[oslo_concurrency]
lock_path = $state_path/tmp

[root@node1 ~(keystone)]# chmod 640 /etc/cinder/cinder.conf
[root@node1 ~(keystone)]# chgrp cinder /etc/cinder/cinder.conf
[root@node1 ~(keystone)]# su -s /bin/bash cinder -c "cinder-manage db sync"
[root@node1 ~(keystone)]# systemctl enable --now openstack-cinder-api openstack-cinder-scheduler
[root@node1 ~(keystone)]# openstack volume service list
+------------------+------------------+------+---------+-------+----------------------------+
| Binary           | Host             | Zone | Status  | State | Updated At                 |
+------------------+------------------+------+---------+-------+----------------------------+
| cinder-scheduler | node1.1000cc.net | nova | enabled | up    | 2020-02-05T19:37:53.000000 |
+------------------+------------------+------+---------+-------+----------------------------+
2.3 SELinux and Firewall Settings
[root@node1 ~(keystone)]# yum --enablerepo=centos-openstack-queens install openstack-selinux -y
[root@node1 ~(keystone)]# firewall-cmd --add-port=8776/tcp --permanent
success
[root@node1 ~(keystone)]# firewall-cmd --reload
success
3. Configure Cinder on the Storage Node
1) Install Cinder
[root@node3 ~]# yum --enablerepo=centos-openstack-queens,epel install openstack-cinder python2-crypto targetcli -y
2) Configure and start Cinder
[root@node3 ~]# mv /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@node3 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.10.13
log_dir = /var/log/cinder
state_path = /var/lib/cinder
auth_strategy = keystone
transport_url = rabbit://openstack:password@192.168.10.11
glance_api_servers = http://192.168.10.11:9292

[database]
connection = mysql+pymysql://cinder:password@192.168.10.11/cinder

[keystone_authtoken]
www_authenticate_uri = http://192.168.10.11:5000
auth_url = http://192.168.10.11:5000
memcached_servers = 192.168.10.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = servicepassword

[oslo_concurrency]
lock_path = $state_path/tmp

[root@node3 ~]# chmod 640 /etc/cinder/cinder.conf
[root@node3 ~]# chgrp cinder /etc/cinder/cinder.conf
[root@node3 ~]# systemctl enable --now openstack-cinder-volume
4. Using Cinder with LVM
1) Partition the disk and create the LVM volume group
[root@node3 ~]# fdisk /dev/vdb
[root@node3 ~]# pvcreate /dev/vdb1
[root@node3 ~]# vgcreate -s 32M snowvg /dev/vdb1
  Physical volume "/dev/vdb1" successfully created.
  Volume group "snowvg" successfully created
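With `-s 32M` the volume group uses 32 MiB physical extents, so every LVM volume Cinder carves out of snowvg is rounded to a multiple of 32 MiB. A quick sanity check (plain shell arithmetic) of the extent count for a 2 GiB volume:

```shell
# Extents consumed by a 2 GiB volume in a VG with 32 MiB physical extents.
PE_SIZE_MIB=32
VOLUME_GIB=2
EXTENTS=$(( VOLUME_GIB * 1024 / PE_SIZE_MIB ))
echo "$EXTENTS"   # 64
```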
2) Configure the Cinder service
[root@node3 ~]# vim /etc/cinder/cinder.conf
# In the [DEFAULT] section, set the backend storage type to LVM
[DEFAULT]
...... ...... ......
enabled_backends = lvm
...... ...... ......

# Append the following at the end of the file
[lvm]
iscsi_helper = lioadm
volume_group = snowvg
iscsi_ip_address = 192.168.10.13
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir = $state_path/volumes
iscsi_protocol = iscsi
[root@node3 ~]# systemctl restart openstack-cinder-volume
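cinder-volume resolves each name listed in enabled_backends to a config section of the same name and loads that section's volume_driver. The self-contained sketch below (plain shell against a temporary file, no OpenStack required) mimics that lookup on a minimal copy of the configuration above:

```shell
# Write a minimal cinder.conf fragment and resolve the backend section from it.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_group = snowvg
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
EOF
BACKEND=$(sed -n 's/^enabled_backends *= *//p' "$CONF")
# Find the volume_driver inside the [lvm] section, as cinder-volume would.
DRIVER=$(awk -v s="[$BACKEND]" '$0==s{f=1;next} /^\[/{f=0} f&&$1=="volume_driver"{print $3}' "$CONF")
echo "backend=$BACKEND driver=$DRIVER"
rm -f "$CONF"
```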
3) Firewalld settings
[root@node3 ~]# firewall-cmd --add-service=iscsi-target --permanent
success
[root@node3 ~]# firewall-cmd --reload
success
4) Nova-Compute settings
[root@node2 ~]# vim /etc/nova/nova.conf
...... ...... ......
# Append the following at the end of the file
[cinder]
os_region_name = RegionOne
[root@node2 ~]# systemctl restart openstack-nova-compute
5) Using Cinder
(1) Set up the client environment
[root@node1 ~(keystone)]# su - snow
[snow@node1 ~(keystone)]$ echo "export OS_VOLUME_API_VERSION=2" >> ~/keystonerc
[snow@node1 ~(keystone)]$ source ~/keystonerc
(2) Create a 2 GB volume
[snow@node1 ~(keystone)]$ openstack volume create --size 2 disk1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-02-05T20:21:26.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 3268dc70-8558-4f44-b184-fcd2aef6638c |
| multiattach         | False                                |
| name                | disk1                                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 2                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 54f3ece13d6147928303ef4112e1f0e9     |
+---------------------+--------------------------------------+
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-------+-----------+------+-------------+
| ID                                   | Name  | Status    | Size | Attached to |
+--------------------------------------+-------+-----------+------+-------------+
| 78b2d60d-ea3e-4aa5-8878-3a858cd580ad | disk1 | available | 2    |             |
+--------------------------------------+-------+-----------+------+-------------+
(3) Attach disk1 to instance c7
[snow@node1 ~(keystone)]$ openstack server list
+---------+------+--------+---------------------------------------+-------+----------+
| ID      | Name | Status | Networks                              | Image | Flavor   |
+---------+------+--------+---------------------------------------+-------+----------+
| 1bc4... | c7   | ACTIVE | int_net=192.168.188.5, 192.168.10.223 | c77   | m1.small |
+---------+------+--------+---------------------------------------+-------+----------+
[snow@node1 ~(keystone)]$ openstack server add volume c7 disk1
# Note:
# 1. If a Nova API 500 error occurs, and
# 2. nova-api.log reports: UnsupportedCinderAPIVersion: Nova does not support Cinder API version 2
# Fix:
# 1. Edit nova.conf on all Nova service nodes and all Nova Compute nodes
# 2. Add the following to the [cinder] section:
#    endpoint_template=http://$controller-ip:8776/v3/%(project_id)s
# 3. Restart the nova-api and nova-compute services

[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-------+--------+------+-----------------------------+
| ID                                   | Name  | Status | Size | Attached to                 |
+--------------------------------------+-------+--------+------+-----------------------------+
| 78b2d60d-ea3e-4aa5-8878-3a858cd580ad | disk1 | in-use | 2    | Attached to c7 on /dev/vdb  |
+--------------------------------------+-------+--------+------+-----------------------------+
[snow@node1 ~(keystone)]$ openstack server remove volume c7 disk1
5. Using Cinder with NFS
1) Prepare the NFS server
[root@node4 ~]# yum install nfs-utils -y
[root@node4 ~]# vim /etc/idmapd.conf
# Change line 5: set the domain to 1000cc.net
Domain = 1000cc.net
[root@node4 ~]# vim /etc/exports
/nfs4cinder *(rw,no_root_squash)
[root@node4 ~]# mkdir /nfs4cinder
[root@node4 ~]# systemctl enable --now rpcbind nfs-server
2) Configure the Cinder service
[root@node3 ~]# yum install nfs-utils
[root@node3 ~]# vim /etc/idmapd.conf
# Change line 5: set the domain to 1000cc.net
Domain = 1000cc.net
[root@node3 ~]# systemctl enable --now rpcbind
[root@node3 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
...... ...... ......
# In the [DEFAULT] section, set the backend storage type to NFS
enabled_backends = nfs
...... ...... ......

# Append the following at the end of the file
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs-shares-dir
nfs_mount_point_base = $state_path/mnt
[root@node3 ~]# vim /etc/cinder/nfs-shares-dir
192.168.10.14:/nfs4cinder
[root@node3 ~]# chmod 640 /etc/cinder/nfs-shares-dir
[root@node3 ~]# chgrp cinder /etc/cinder/nfs-shares-dir
[root@node3 ~]# systemctl restart openstack-cinder-volume
[root@node3 ~]# chown -R cinder. /var/lib/cinder/mnt
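Cinder's NFS driver mounts each share listed in nfs_shares_config under nfs_mount_point_base, in a directory named after the md5 hash of the share string (which is why /var/lib/cinder/mnt is chowned above). The expected mount point can be computed locally, as a rough sanity check:

```shell
# Directory name the NFS driver derives for this share (md5 of the share string).
SHARE="192.168.10.14:/nfs4cinder"
HASH=$(printf '%s' "$SHARE" | md5sum | cut -d' ' -f1)
echo "/var/lib/cinder/mnt/$HASH"
```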
3) Nova-Compute settings
[root@node2 ~]# yum install nfs-utils -y
[root@node2 ~]# vim /etc/idmapd.conf
# Change line 5: set the domain to 1000cc.net
Domain = 1000cc.net
[root@node2 ~]# systemctl enable --now rpcbind
[root@node2 ~]# vim /etc/nova/nova.conf
...... ...... ......
# Append the following at the end of the file
[cinder]
os_region_name = RegionOne
[root@node2 ~]# systemctl restart openstack-nova-compute
4) Using Cinder
(1) Set up the client environment and create a volume
[root@node1 ~(keystone)]# su - snow
[snow@node1 ~(keystone)]$ echo "export OS_VOLUME_API_VERSION=2" >> ~/keystonerc
[snow@node1 ~(keystone)]$ source ~/keystonerc
[snow@node1 ~(keystone)]$ openstack volume create --size 2 disk1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-02-05T20:26:58.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 78133c6a-350b-4d03-9f1a-6cfebc9db548 |
| multiattach         | False                                |
| name                | disk1                                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 2                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 54f3ece13d6147928303ef4112e1f0e9     |
+---------------------+--------------------------------------+
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+-----------+------+-------------+
| ID                                   | Name      | Status    | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1     | available | 2    |             |
+--------------------------------------+-----------+-----------+------+-------------+
(2) Attach disk1 to instance c7
[snow@node1 ~(keystone)]$ openstack server list
+---------+------+--------+---------------------------------------+-------+----------+
| ID      | Name | Status | Networks                              | Image | Flavor   |
+---------+------+--------+---------------------------------------+-------+----------+
| 1bc4... | c7   | ACTIVE | int_net=192.168.188.5, 192.168.10.223 | c77   | m1.small |
+---------+------+--------+---------------------------------------+-------+----------+
[snow@node1 ~(keystone)]$ openstack server add volume c7 disk1
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+--------+------+-----------------------------+
| ID                                   | Name      | Status | Size | Attached to                 |
+--------------------------------------+-----------+--------+------+-----------------------------+
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1     | in-use | 2    | Attached to c7 on /dev/vdb  |
+--------------------------------------+-----------+--------+------+-----------------------------+
[snow@node1 ~(keystone)]$ openstack server remove volume c7 disk1
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+-----------+------+-------------+
| ID                                   | Name      | Status    | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1     | available | 2    |             |
+--------------------------------------+-----------+-----------+------+-------------+
6. Using Cinder with LVM + NFS
1) Prepare the NFS server
[root@node4 ~]# yum install nfs-utils -y
[root@node4 ~]# vim /etc/idmapd.conf
# Change line 5: set the domain to 1000cc.net
Domain = 1000cc.net
[root@node4 ~]# vim /etc/exports
/nfs4cinder *(rw,no_root_squash)
[root@node4 ~]# mkdir /nfs4cinder
[root@node4 ~]# systemctl enable --now rpcbind nfs-server
2) Configure the Cinder service
[root@node3 ~]# fdisk /dev/vdb
[root@node3 ~]# pvcreate /dev/vdb1
[root@node3 ~]# vgcreate -s 32M snowvg /dev/vdb1
  Physical volume "/dev/vdb1" successfully created.
  Volume group "snowvg" successfully created
[root@node3 ~]# yum install nfs-utils -y
[root@node3 ~]# vim /etc/idmapd.conf
# Change line 5: set the domain to 1000cc.net
Domain = 1000cc.net
[root@node3 ~]# systemctl enable --now rpcbind
[root@node3 ~]# vim /etc/cinder/cinder.conf
# In the [DEFAULT] section, set the backend storage types to LVM and NFS
[DEFAULT]
...... ...... ......
enabled_backends = lvm,nfs
...... ...... ......

# Append the following at the end of the file
[lvm]
iscsi_helper = lioadm
volume_group = snowvg
iscsi_ip_address = 192.168.10.13
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir = $state_path/volumes
iscsi_protocol = iscsi

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs-shares-dir
nfs_mount_point_base = $state_path/mnt

[root@node3 ~]# vim /etc/cinder/nfs-shares-dir
192.168.10.14:/nfs4cinder
[root@node3 ~]# chmod 640 /etc/cinder/nfs-shares-dir
[root@node3 ~]# chgrp cinder /etc/cinder/nfs-shares-dir
[root@node3 ~]# systemctl restart openstack-cinder-volume
[root@node3 ~]# chown -R cinder. /var/lib/cinder/mnt
3) Nova-Compute settings
[root@node2 ~]# yum install nfs-utils -y
[root@node2 ~]# vim /etc/idmapd.conf
# Change line 5: set the domain to 1000cc.net
Domain = 1000cc.net
[root@node2 ~]# systemctl enable --now rpcbind
[root@node2 ~]# vim /etc/nova/nova.conf
...... ...... ......
# Append the following at the end of the file
[cinder]
os_region_name = RegionOne
[root@node2 ~]# systemctl restart openstack-nova-compute
4) Using Cinder
(1) Create volume types
[root@node1 ~(keystone)]# echo "export OS_VOLUME_API_VERSION=2" >> ~/keystonerc
[root@node1 ~(keystone)]# source ~/keystonerc
[root@node1 ~(keystone)]# openstack volume type create lvm
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | None                                 |
| id          | 3222aecc-8211-4d5c-9a60-299cd8115655 |
| is_public   | True                                 |
| name        | lvm                                  |
+-------------+--------------------------------------+
[root@node1 ~(keystone)]# openstack volume type create nfs
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | None                                 |
| id          | 830d43c0-4f4b-4798-a7d4-f3d22b666ab2 |
| is_public   | True                                 |
| name        | nfs                                  |
+-------------+--------------------------------------+
[root@node1 ~(keystone)]# openstack volume type list
+--------------------------------------+------+-----------+
| ID                                   | Name | Is Public |
+--------------------------------------+------+-----------+
| 830d43c0-4f4b-4798-a7d4-f3d22b666ab2 | nfs  | True      |
| 3222aecc-8211-4d5c-9a60-299cd8115655 | lvm  | True      |
+--------------------------------------+------+-----------+
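As created here, the lvm and nfs types carry no extra specs, so the scheduler is free to place a volume of either type on either backend. A common hardening step, shown as a sketch only (it is not part of the original transcript, and the BackendLVM/BackendNFS labels are arbitrary example names), is to pin each type to its backend with the volume_backend_name property:

```shell
# In cinder.conf on node3 (label values are examples):
#   [lvm]
#   volume_backend_name = BackendLVM
#   [nfs]
#   volume_backend_name = BackendNFS
# Then restart openstack-cinder-volume and tag the types:
[root@node1 ~(keystone)]# openstack volume type set --property volume_backend_name=BackendLVM lvm
[root@node1 ~(keystone)]# openstack volume type set --property volume_backend_name=BackendNFS nfs
```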
(2) Set up the client environment and create volumes
[root@node1 ~(keystone)]# su - snow
[snow@node1 ~(keystone)]$ echo "export OS_VOLUME_API_VERSION=2" >> ~/keystonerc
[snow@node1 ~(keystone)]$ source ~/keystonerc
[snow@node1 ~(keystone)]$ openstack volume create --type lvm --size 2 disk1_lvm
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-02-05T20:26:58.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 78133c6a-350b-4d03-9f1a-6cfebc9db548 |
| multiattach         | False                                |
| name                | disk1_lvm                            |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 2                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | lvm                                  |
| updated_at          | None                                 |
| user_id             | 54f3ece13d6147928303ef4112e1f0e9     |
+---------------------+--------------------------------------+
[snow@node1 ~(keystone)]$ openstack volume create --type nfs --size 2 disk1_nfs
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-02-05T20:27:29.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 08714f15-9331-40cb-95e4-2e1c06b65097 |
| multiattach         | False                                |
| name                | disk1_nfs                            |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 2                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | nfs                                  |
| updated_at          | None                                 |
| user_id             | 54f3ece13d6147928303ef4112e1f0e9     |
+---------------------+--------------------------------------+
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+-----------+------+-------------+
| ID                                   | Name      | Status    | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| 08714f15-9331-40cb-95e4-2e1c06b65097 | disk1_nfs | available | 2    |             |
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1_lvm | available | 2    |             |
+--------------------------------------+-----------+-----------+------+-------------+
(3) Attach the volumes to instance c7
[snow@node1 ~(keystone)]$ openstack server list
+---------+------+--------+---------------------------------------+-------+----------+
| ID      | Name | Status | Networks                              | Image | Flavor   |
+---------+------+--------+---------------------------------------+-------+----------+
| 1bc4... | c7   | ACTIVE | int_net=192.168.188.5, 192.168.10.223 | c77   | m1.small |
+---------+------+--------+---------------------------------------+-------+----------+
[snow@node1 ~(keystone)]$ openstack server add volume c7 disk1_lvm
[snow@node1 ~(keystone)]$ openstack server add volume c7 disk1_nfs
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+--------+------+-----------------------------+
| ID                                   | Name      | Status | Size | Attached to                 |
+--------------------------------------+-----------+--------+------+-----------------------------+
| 08714f15-9331-40cb-95e4-2e1c06b65097 | disk1_nfs | in-use | 2    | Attached to c7 on /dev/vdc  |
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1_lvm | in-use | 2    | Attached to c7 on /dev/vdb  |
+--------------------------------------+-----------+--------+------+-----------------------------+
[snow@node1 ~(keystone)]$ openstack server remove volume c7 disk1_lvm
[snow@node1 ~(keystone)]$ openstack server remove volume c7 disk1_nfs
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+-----------+------+-------------+
| ID                                   | Name      | Status    | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| 08714f15-9331-40cb-95e4-2e1c06b65097 | disk1_nfs | available | 2    |             |
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1_lvm | available | 2    |             |
+--------------------------------------+-----------+-----------+------+-------------+
7. Implementing the Cinder Backup Service
1) Configure the NFS service
[root@node4 ~]# yum install nfs-utils -y
[root@node4 ~]# vim /etc/idmapd.conf
# Change line 5
Domain = 1000cc.net
[root@node4 ~]# vim /etc/exports
/cinder4backup *(rw,no_root_squash)
[root@node4 ~]# mkdir /cinder4backup
[root@node4 ~]# systemctl enable --now rpcbind nfs-server
2) Modify the storage node configuration
[root@node3 ~]# yum install nfs-utils -y
[root@node3 ~]# vim /etc/idmapd.conf
# Change line 5
Domain = 1000cc.net
[root@node3 ~]# systemctl enable --now rpcbind
[root@node3 ~]# vim /etc/cinder/cinder.conf
# Append the following at the end of the [DEFAULT] section
[DEFAULT]
...... ...... ......
backup_driver = cinder.backup.drivers.nfs
backup_mount_point_base = $state_path/backup_nfs
backup_share = 192.168.10.14:/cinder4backup
[root@node3 ~]# systemctl enable --now openstack-cinder-backup
[root@node3 ~]# chown -R cinder. /var/lib/cinder/backup_nfs
3) Using the backup service
# Show the existing volumes
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+-----------+------+-------------+
| ID                                   | Name      | Status    | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| 08714f15-9331-40cb-95e4-2e1c06b65097 | disk1_nfs | available | 2    |             |
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1_lvm | available | 2    |             |
+--------------------------------------+-----------+-----------+------+-------------+
(1) Create a full backup of the disk1_lvm volume
[snow@node1 ~(keystone)]$ openstack volume backup create --name backup-disk1_lvm disk1_lvm
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 7fb59a03-0a09-4cfc-af80-a1f1d68a6597 |
| name  | backup-disk1_lvm                     |
+-------+--------------------------------------+
[snow@node1 ~(keystone)]$ openstack volume backup list
+--------------------------------------+------------------+-------------+----------+------+
| ID                                   | Name             | Description | Status   | Size |
+--------------------------------------+------------------+-------------+----------+------+
| 61fd3957-4ded-4242-889c-e19b3c6d2356 | backup-disk1_lvm | None        | creating | 2    |
+--------------------------------------+------------------+-------------+----------+------+
[snow@node1 ~(keystone)]$ openstack volume backup list
+--------------------------------------+------------------+-------------+-----------+------+
| ID                                   | Name             | Description | Status    | Size |
+--------------------------------------+------------------+-------------+-----------+------+
| 61fd3957-4ded-4242-889c-e19b3c6d2356 | backup-disk1_lvm | None        | available | 2    |
+--------------------------------------+------------------+-------------+-----------+------+
(2) Incremental backup
[snow@node1 ~(keystone)]$ openstack volume backup create --name backup-disk1_lvm-1 --incremental disk1_lvm
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | d5e0d220-6e0e-4812-a8dd-5cb7efbe41b2 |
| name  | backup-disk1_lvm-1                   |
+-------+--------------------------------------+
[snow@node1 ~(keystone)]$ openstack volume backup list
+--------------------------------------+--------------------+-------------+-----------+------+
| ID                                   | Name               | Description | Status    | Size |
+--------------------------------------+--------------------+-------------+-----------+------+
| d5e0d220-6e0e-4812-a8dd-5cb7efbe41b2 | backup-disk1_lvm-1 | None        | creating  | 2    |
| 61fd3957-4ded-4242-889c-e19b3c6d2356 | backup-disk1_lvm   | None        | available | 2    |
+--------------------------------------+--------------------+-------------+-----------+------+
[snow@node1 ~(keystone)]$ openstack volume backup list
+--------------------------------------+--------------------+-------------+-----------+------+
| ID                                   | Name               | Description | Status    | Size |
+--------------------------------------+--------------------+-------------+-----------+------+
| d5e0d220-6e0e-4812-a8dd-5cb7efbe41b2 | backup-disk1_lvm-1 | None        | available | 2    |
| 61fd3957-4ded-4242-889c-e19b3c6d2356 | backup-disk1_lvm   | None        | available | 2    |
+--------------------------------------+--------------------+-------------+-----------+------+
# If the volume being backed up is attached to an instance, add the --force flag
[snow@node1 ~(keystone)]$ openstack volume backup create --name backup-disk1_lvm-1 --incremental --force disk1_lvm
(3) Restore a backup
[snow@node1 ~(keystone)]$ openstack volume backup list
+--------------------------------------+--------------------+-------------+-----------+------+
| ID                                   | Name               | Description | Status    | Size |
+--------------------------------------+--------------------+-------------+-----------+------+
| d5e0d220-6e0e-4812-a8dd-5cb7efbe41b2 | backup-disk1_lvm-1 | None        | available | 2    |
| 61fd3957-4ded-4242-889c-e19b3c6d2356 | backup-disk1_lvm   | None        | available | 2    |
+--------------------------------------+--------------------+-------------+-----------+------+
[snow@node1 ~(keystone)]$ openstack volume list
+--------------------------------------+-----------+-----------+------+-------------+
| ID                                   | Name      | Status    | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| 08714f15-9331-40cb-95e4-2e1c06b65097 | disk1_nfs | available | 2    |             |
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1_lvm | available | 2    |             |
+--------------------------------------+-----------+-----------+------+-------------+
[snow@node1 ~(keystone)]$ openstack volume backup restore backup-disk1_lvm-1 disk1_lvm
'VolumeBackupsRestore' object is not iterable
# Note: this message is a client-side display issue; the restore itself still starts
[snow@node1 ~(keystone)]$ openstack volume list     # Check the current status
+--------------------------------------+-----------+------------------+------+-------------+
| ID                                   | Name      | Status           | Size | Attached to |
+--------------------------------------+-----------+------------------+------+-------------+
| 08714f15-9331-40cb-95e4-2e1c06b65097 | disk1_nfs | available        | 2    |             |
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1_lvm | restoring-backup | 2    |             |
+--------------------------------------+-----------+------------------+------+-------------+
[snow@node1 ~(keystone)]$ openstack volume list     # restore complete
+--------------------------------------+-----------+-----------+------+-------------+
| ID                                   | Name      | Status    | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| 08714f15-9331-40cb-95e4-2e1c06b65097 | disk1_nfs | available | 2    |             |
| 78133c6a-350b-4d03-9f1a-6cfebc9db548 | disk1_lvm | available | 2    |             |
+--------------------------------------+-----------+-----------+------+-------------+
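The restore passes through a transient restoring-backup state before returning to available, so the wait can be scripted instead of re-running openstack volume list by hand. A minimal Python sketch; the get_status callable is a hypothetical stand-in for whatever queries the real volume status (e.g. wrapping the openstack CLI):

```python
import time

def wait_for_status(get_status, target="available", interval=2, timeout=300):
    """Poll get_status() until it returns `target` or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == target:
            return status
        if status.startswith("error"):
            raise RuntimeError("volume entered %s state" % status)
        time.sleep(interval)
    raise TimeoutError("volume never reached %r" % target)

# simulated status sequence standing in for real `openstack volume show` calls
states = iter(["restoring-backup", "restoring-backup", "available"])
print(wait_for_status(lambda: next(states), interval=0))  # -> available
```

The same helper works for any of the transient states seen in this handbook (creating, backing-up, restoring-backup).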
8. Using Ceph as the Backend Storage for Glance, Cinder and Nova
8.1 Configure Ceph
1) This section continues from the glance+ceph setup, and from the nova live-migration unit in which libvirtd's migration listening port 16509 was enabled
2) Complete Stage 1 of the HandBook-ceph unit --- the Cinder/Nova nodes act as Ceph clients
   # in this example the four hosts are: node5 (admin node), node6, node7, node8
3) On every ceph admin node, edit ceph.conf to add the mon nodes [srv1 control node / srv2 compute node / srv3 Cinder node / srv4 compute node] and disable ceph authentication
   # here ceph serves storage only to internal services and is not exposed externally,
   # so it is safe to run without ceph authentication
[snow@srv5 ceph]$ pwd
/home/snow/ceph
[snow@srv5 ceph]$ vim ceph.conf
[global]
fsid = 499a065f-652a-433c-a7d9-bbbd3f63204f
public_network = 192.168.1.0/24
mon_initial_members = srv6, srv1, srv2, srv3, srv4
mon_host = 192.168.1.16, 192.168.1.11, 192.168.1.12, 192.168.1.13, 192.168.1.14
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
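Before pushing this template to every node, it can be sanity-checked: every name in mon_initial_members should have a matching address in mon_host, and all three auth options should be none. A small sketch using only the Python standard library; the inline string stands in for the real /etc/ceph/ceph.conf:

```python
import configparser

# abbreviated stand-in for the real ceph.conf from the handbook
ceph_conf = """\
[global]
fsid = 499a065f-652a-433c-a7d9-bbbd3f63204f
public_network = 192.168.1.0/24
mon_initial_members = srv6, srv1, srv2, srv3, srv4
mon_host = 192.168.1.16, 192.168.1.11, 192.168.1.12, 192.168.1.13, 192.168.1.14
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
"""

cfg = configparser.ConfigParser()
cfg.read_string(ceph_conf)
mons  = [m.strip() for m in cfg["global"]["mon_initial_members"].split(",")]
hosts = [h.strip() for h in cfg["global"]["mon_host"].split(",")]
# every mon member needs a matching address, and auth must be fully disabled
assert len(mons) == len(hosts), "mon member/address count mismatch"
assert all(cfg["global"][k] == "none"
           for k in ("auth_cluster_required", "auth_service_required",
                     "auth_client_required"))
print(dict(zip(mons, hosts)))
```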
4) Copy the prepared ceph.conf template to all ceph nodes (including the admin node)
[root@srv5 ~]# pscp.pssh -h host-list.txt /home/snow/ceph/ceph.conf /etc/ceph/
[1] 19:12:49 [SUCCESS] srv5.1000y.cloud
[2] 19:12:49 [SUCCESS] srv1.1000y.cloud
[3] 19:12:49 [SUCCESS] srv6.1000y.cloud
[4] 19:12:49 [SUCCESS] srv7.1000y.cloud
[5] 19:12:49 [SUCCESS] srv8.1000y.cloud
5) Restart all ceph nodes and the control node
6) Check the ceph cluster
[root@srv5 ~]# ceph -s | grep -A 1 health
    health: HEALTH_OK
# if instead the output is
[root@srv5 ~]# ceph -s | grep -A 1 health
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
# fix the problem as follows
(1) Inspect the problem
[root@srv5 ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'glance'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
(2) Fix the problem
# note: any of the listed tags clears the warning; since glance accesses this pool
# through RBD, 'rbd' would be the more accurate application tag
[root@srv5 ~]# ceph osd pool application enable glance rgw
enabled application 'rgw' on pool 'glance'
(3) Verify
[root@srv5 ~]# ceph -s | grep -A 1 health
    health: HEALTH_OK

7) Preparations for deploying the ceph clients
(1) Copy snow's sudo file to each nova-compute and cinder node
[root@srv5 ~]# scp /etc/sudoers.d/ceph srv2.1000y.cloud:/etc/sudoers.d/
[root@srv5 ~]# scp /etc/sudoers.d/ceph srv3.1000y.cloud:/etc/sudoers.d/
[root@srv5 ~]# scp /etc/sudoers.d/ceph srv4.1000y.cloud:/etc/sudoers.d/
(2) Copy the snow account's ssh public key on node 5 to each nova-compute and cinder node
[root@srv5 ~]# su - snow
[snow@srv5 ~]$ vim .ssh/config
Host srv5
    Hostname srv5.1000y.cloud
    User snow
Host srv6
    Hostname srv6.1000y.cloud
    User snow
Host srv7
    Hostname srv7.1000y.cloud
    User snow
Host srv8
    Hostname srv8.1000y.cloud
    User snow
Host srv1
    Hostname srv1.1000y.cloud
    User snow
Host srv2
    Hostname srv2.1000y.cloud
    User snow
Host srv3
    Hostname srv3.1000y.cloud
    User snow
Host srv4
    Hostname srv4.1000y.cloud
    User snow
[snow@srv5 ~]$ ssh-copy-id srv2
[snow@srv5 ~]$ ssh-copy-id srv3
[snow@srv5 ~]$ ssh-copy-id srv4
8) Enable the EPEL repository on all openstack nodes; if repository priorities are set, remove them
9) Deploy the ceph client on all openstack nodes
[snow@srv5 ~]$ cd ceph
[snow@srv5 ceph]$ ceph-deploy install --release nautilus \
--repo-url http://mirrors.ustc.edu.cn/ceph/rpm-nautilus/el7/ \
--gpg-url http://mirrors.ustc.edu.cn/ceph/keys/release.asc \
--nogpgcheck srv1 srv2 srv3 srv4
10) Push the admin config to each openstack node and add them to the mon quorum
[snow@srv5 ceph]$ ceph-deploy admin srv1 srv2 srv3 srv4
[snow@srv5 ceph]$ ceph-deploy mon add srv1
[snow@srv5 ceph]$ ceph-deploy mon add srv2
[snow@srv5 ceph]$ ceph-deploy mon add srv3
[snow@srv5 ceph]$ ceph-deploy mon add srv4
11) Create and initialize the pools
# create the pool 'vms' used by nova
[snow@srv5 ceph]$ sudo ceph osd pool create vms 0
pool 'vms' created
# create the pool 'volumes' used by cinder
[snow@srv5 ceph]$ sudo ceph osd pool create volumes 0
pool 'volumes' created
# create the pool 'backups' used by cinder backup
[snow@srv5 ceph]$ sudo ceph osd pool create backups 0
pool 'backups' created
[snow@srv5 ceph]$ rbd pool init volumes
[snow@srv5 ceph]$ rbd pool init vms
[snow@srv5 ceph]$ rbd pool init backups
[snow@srv5 ceph]$ rados lspools
glance
vms
volumes
backups
8.2 Integrating Ceph with Cinder
1) Configure the cinder service
[root@srv3 ~]# vim /etc/cinder/cinder.conf
# add the following content
[DEFAULT]
......
......
enabled_backends = ceph
glance_api_version = 2
# add cinder backup support
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = none
# (integer) the chunk size, in bytes, that a backup is split into before transfer to the Ceph object store
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
# (integer) RBD stripe unit to use when creating a backup image
backup_ceph_stripe_unit = 0
# (integer) RBD stripe count to use when creating a backup image
backup_ceph_stripe_count = 0
# if True, always discard excess bytes when restoring a volume, i.e. pad with zeros
restore_discard_excess_bytes = true
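As a quick check of the value above: backup_ceph_chunk_size is given in bytes, and 134217728 is exactly 128 MiB. A one-line helper makes the conversion explicit when tuning it:

```python
# backup_ceph_chunk_size is expressed in bytes; this converts a MiB figure to it
def mib_to_bytes(mib):
    return mib * 1024 * 1024

# the handbook's value of 134217728 bytes is exactly 128 MiB
assert mib_to_bytes(128) == 134217728
print(mib_to_bytes(128))  # -> 134217728
```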

......
......
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
# creating a volume from a snapshot:
#   False: the volume created in Ceph is merely a clone of the snapshot and depends on it,
#          so the volume must be deleted before the snapshot can be deleted
#   True:  the volume created in Ceph is a new RBD image with no dependency on the snapshot
rbd_flatten_volume_from_snapshot = false
# rbd_max_clone_depth limits the maximum clone depth; set as needed
rbd_max_clone_depth = 5
# the chunk size is set by rbd_store_chunk_size, in MB (the default is 8MB)
rbd_store_chunk_size = 4
rados_connect_timeout = -1
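The key invariant in this file is that every name listed in enabled_backends has a matching configuration section. A quick Python check of that invariant; the inline string is an abbreviated stand-in for the real /etc/cinder/cinder.conf:

```python
import configparser

# abbreviated stand-in for /etc/cinder/cinder.conf as configured in this section
cinder_conf = """\
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
"""

cfg = configparser.ConfigParser()
cfg.read_string(cinder_conf)
# each backend named in enabled_backends must have its own section
for backend in cfg["DEFAULT"]["enabled_backends"].split(","):
    backend = backend.strip()
    assert cfg.has_section(backend), "missing [%s] section" % backend
    print(backend, cfg[backend]["volume_driver"])
```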
[root@srv3 ~]# systemctl restart openstack-cinder-volume
2) Verify the cinder service
[root@srv1 ~(keystone)]# openstack volume service list
+------------------+-----------------------+------+---------+-------+----------------------------+
| Binary           | Host                  | Zone | Status  | State | Updated At                 |
+------------------+-----------------------+------+---------+-------+----------------------------+
| cinder-scheduler | srv1.1000y.cloud      | nova | enabled | up    | 2020-10-24T12:02:21.000000 |
| cinder-volume    | srv3.1000y.cloud@ceph | nova | enabled | up    | 2020-10-24T12:01:53.000000 |
+------------------+-----------------------+------+---------+-------+----------------------------+
8.3 Integrating Ceph with Nova
1) Configure all compute nodes to add ceph support
(1) Configure nova-compute on the control node
# append the following content to nova.conf
[root@srv1 ~(keystone)]# vim /etc/nova/nova.conf
......
......
......
......
......
......
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
disk_cachemodes="network=writeback"
# disable file injection. When an instance boots, Nova normally tries to open the VM's
# root filesystem and inject values such as passwords and ssh keys into it;
# it is better to rely on the metadata service and cloud-init instead.
inject_password = false
inject_key = false
inject_partition = -2
# make sure live migration can proceed
# the following parameter is one single line, not two
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

[root@srv1 ~(keystone)]# systemctl restart openstack-nova-api \
openstack-nova-compute openstack-nova-novncproxy openstack-nova-conductor \
openstack-nova-consoleauth openstack-nova-scheduler
(2) Configure nova-compute on srv2
[root@srv2 ~]# vim /etc/nova/nova.conf
......
......
# append the following content to nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
disk_cachemodes="network=writeback"
# disable file injection; rely on the metadata service and cloud-init instead
inject_password = false
inject_key = false
inject_partition = -2
# make sure live migration can proceed
# the following parameter is one single line, not two
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

[root@srv2 ~]# systemctl restart openstack-nova-compute
(3) Configure nova-compute on srv4
[root@srv4 ~]# vim /etc/nova/nova.conf
......
......
# append the following content to nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
disk_cachemodes="network=writeback"
# disable file injection; rely on the metadata service and cloud-init instead
inject_password = false
inject_key = false
inject_partition = -2
# make sure live migration can proceed
# the following parameter is one single line, not two
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

[root@srv4 ~]# systemctl restart openstack-nova-compute
8.4 Testing on the Compute Nodes
1) Create an instance on ceph storage
[root@srv1 ~(keystone)]# su - snow
[snow@srv1 ~(keystone)]$
[snow@srv1 ~(keystone)]$ netID=$(openstack network list | grep int_net | awk '{ print $2 }')
[snow@srv1 ~(keystone)]$ openstack server create --flavor m1.small \
--image c78 --security-group secgroup1 \
--nic net-id=$netID --key-name snowkey \
cent78
[snow@srv1 ~(keystone)]$ openstack server list
+----------------+----------+--------+--------------------------------------+-------+----------+
| ID             | Name     | Status | Networks                             | Image | Flavor   |
+----------------+----------+--------+--------------------------------------+-------+----------+
| b34de6f8...... | cent78   | ACTIVE | int_net=192.168.188.3                | c78   | m1.small |
| 20bd4e28...... | centos78 | ACTIVE | int_net=192.168.188.4, 192.168.1.251 | c78   | m1.small |
+----------------+----------+--------+--------------------------------------+-------+----------+
[snow@srv1 ~(keystone)]$ openstack floating ip create ext_net
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2020-10-24T12:33:38Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.1.252                        |
| floating_network_id | 940cbd6c-ab0f-434e-a970-64a357239ff7 |
| id                  | a1ca381e-b94e-4641-b832-b5d50c63f78f |
| name                | 192.168.1.252                        |
| port_id             | None                                 |
| project_id          | 169153fc58b34081b6e9d0a6ea5830b9     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| updated_at          | 2020-10-24T12:33:38Z                 |
+---------------------+--------------------------------------+
[snow@srv1 ~(keystone)]$ openstack server add floating ip cent78 192.168.1.252
[snow@srv1 ~(keystone)]$ ping -c 2 192.168.1.252
PING 192.168.1.252 (192.168.1.252) 56(84) bytes of data.
64 bytes from 192.168.1.252: icmp_seq=1 ttl=63 time=5.25 ms
64 bytes from 192.168.1.252: icmp_seq=2 ttl=63 time=2.08 ms
--- 192.168.1.252 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 2.084/3.667/5.250/1.583 ms
[snow@srv1 ~(keystone)]$ ssh centos@192.168.1.252
[centos@cent78 ~]$
[root@srv1 ~(keystone)]# rados -p vms ls | grep id
rbd_id.b34de6f8-d806-4a33-b1ce-515367b413e1_disk
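With images_type = rbd, the object names in the vms pool follow the pattern rbd_id.&lt;instance-uuid&gt;_disk, so the owning instance can be recovered from a pool listing. A small sketch; the helper name is ours for illustration, not part of any OpenStack API:

```python
import re

def instance_id_from_rbd(name):
    """Extract the Nova instance UUID from an RBD object name such as
    'rbd_id.<uuid>_disk' (naming used when images_type = rbd)."""
    m = re.fullmatch(r"rbd_id\.([0-9a-f-]{36})_disk", name)
    return m.group(1) if m else None

print(instance_id_from_rbd("rbd_id.b34de6f8-d806-4a33-b1ce-515367b413e1_disk"))
# -> b34de6f8-d806-4a33-b1ce-515367b413e1
```

The returned UUID can be matched directly against the ID column of `openstack server list`.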
2) Live migration on ceph shared storage
(1) Find the ID of the cent78 instance
[snow@srv1 ~(keystone)]$ openstack server list | grep cent78 | awk '{print $2}'
b34de6f8-d806-4a33-b1ce-515367b413e1
(2) Find the compute node hosting cent78
[root@srv1 ~(keystone)]# openstack server list --all-projects --long -c Name -c Host
+----------+------------------+
| Name     | Host             |
+----------+------------------+
| cent78   | srv2.1000y.cloud |
| centos78 | srv4.1000y.cloud |
+----------+------------------+
(3) Live migration --- move the cent78 instance to compute node srv4.1000y.cloud
[root@srv1 ~(keystone)]# openstack server migrate \
--live srv4.1000y.cloud b34de6f8-d806-4a33-b1ce-515367b413e1
######################################## Problem Summary ########################################
# Fixing live-migration failures
1. If the live migration fails with
   "Unacceptable CPU info: CPU doesn't have compatibility.", make the following change:
[root@srv1 ~(keystone)]# vim /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py
......
......
# comment out lines 6576-6580
"""
if not instance.vcpu_model or not instance.vcpu_model.model:
    source_cpu_info = src_compute_info['cpu_info']
    self._compare_cpu(None, source_cpu_info, instance)
else:
    self._compare_cpu(instance.vcpu_model, None, instance)
"""
......
......
# comment out lines 6823-6824
#for f in info['features']:
#    cpu.add_feature(vconfig.LibvirtConfigCPUFeature(f))
......
......
2. Restart the compute node service
[root@srv1 ~(keystone)]# systemctl restart openstack-nova-compute
######################################## End of Summary ########################################
(4) Confirm the migration
[root@srv1 ~(keystone)]# openstack server list --all-projects --long -c Name -c Host
+----------+------------------+
| Name     | Host             |
+----------+------------------+
| cent78   | srv4.1000y.cloud |
| centos78 | srv4.1000y.cloud |
+----------+------------------+
8.5 Testing on the Cinder Node
1) Set up the environment
[snow@srv1 ~(keystone)]$ echo "export OS_VOLUME_API_VERSION=2" >> ~/keystonerc
[snow@srv1 ~(keystone)]$ source ./keystonerc
2) Create a 1G volume on the ceph backend
[snow@srv1 ~(keystone)]$ openstack volume create --size 1 disk1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-10-24T12:44:46.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 9990bfb3-e5eb-47d1-b2a8-a22bbbc1fdf0 |
| multiattach         | False                                |
| name                | disk1                                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 3f0eefbb0448421e8138ba93a0e50230     |
+---------------------+--------------------------------------+
[snow@srv1 ~(keystone)]$ openstack volume list
+--------------------------------------+-------+-----------+------+-------------+
| ID                                   | Name  | Status    | Size | Attached to |
+--------------------------------------+-------+-----------+------+-------------+
| 9990bfb3-e5eb-47d1-b2a8-a22bbbc1fdf0 | disk1 | available | 1    |             |
+--------------------------------------+-------+-----------+------+-------------+
[snow@srv1 ~(keystone)]$ openstack server add volume cent78 disk1
[snow@srv1 ~(keystone)]$ openstack volume list
+--------------------------------------+-------+--------+------+---------------------------------+
| ID                                   | Name  | Status | Size | Attached to                     |
+--------------------------------------+-------+--------+------+---------------------------------+
| 9990bfb3-e5eb-47d1-b2a8-a22bbbc1fdf0 | disk1 | in-use | 1    | Attached to cent78 on /dev/vdb  |
+--------------------------------------+-------+--------+------+---------------------------------+
[snow@srv1 ~(keystone)]$ ssh centos@192.168.1.252
[centos@cent78 ~]$ lsblk | grep vdb
vdb    253:16   0   1G  0 disk
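The check that the 1G disk really appeared in the guest can also be scripted by parsing lsblk's output. A minimal sketch; the helper is illustrative and assumes lsblk's default NAME MAJ:MIN RM SIZE RO TYPE [MOUNTPOINT] column order:

```python
def parse_lsblk(line):
    """Split one default-format `lsblk` output line into (name, size)."""
    fields = line.split()
    # column order: NAME MAJ:MIN RM SIZE RO TYPE [MOUNTPOINT]
    return fields[0], fields[3]

# sample line matching the guest output above
name, size = parse_lsblk("vdb    253:16   0   1G  0 disk")
print(name, size)  # -> vdb 1G
```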
[snow@srv1 ~(keystone)]$ openstack server remove volume cent78 disk1
[snow@srv1 ~(keystone)]$ openstack volume list
+--------------------------------------+-------+-----------+------+-------------+
| ID                                   | Name  | Status    | Size | Attached to |
+--------------------------------------+-------+-----------+------+-------------+
| 9990bfb3-e5eb-47d1-b2a8-a22bbbc1fdf0 | disk1 | available | 1    |             |
+--------------------------------------+-------+-----------+------+-------------+
8.6 Glance + Ceph Storage
1) Modify the glance-api configuration file
[root@srv1 ~(keystone)]# vim /etc/glance/glance-api.conf
......
......
......
......
......
......
# in the [glance_store] section, comment out the existing lines below and add the new content
[glance_store]
#stores = file,http
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/
default_store = rbd
stores = rbd
rbd_store_pool = glance
rbd_store_user = none
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[root@srv1 ~(keystone)]# systemctl restart openstack-glance-api openstack-glance-registry
2) Test
[root@srv1 ~(keystone)]# openstack image create "centos78" \
--file /var/lib/libvirt/images/c7.img \
--disk-format qcow2 \
--container-format bare \
--public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 17a50d56f38e781d8fa649754a7d1a93                     |
| container_format | bare                                                 |
| created_at       | 2020-10-24T15:05:46Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/c329a233-abe8-4664-a6fa-382ffc4c773a/file |
| id               | c329a233-abe8-4664-a6fa-382ffc4c773a                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | centos78                                             |
| owner            | db3f07c0f90041e7ba6624de7b96c41a                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1675362304                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2020-10-24T15:09:16Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
[root@srv1 ~(keystone)]# openstack image list
+--------------------------------------+----------+--------+
| ID                                   | Name     | Status |
+--------------------------------------+----------+--------+
| cf3fb321-27a5-4f72-a0fb-b553fe5ded34 | c78      | active |
| c329a233-abe8-4664-a6fa-382ffc4c773a | centos78 | active |
+--------------------------------------+----------+--------+