Proxmox VE 7 Hyperconverged Configuration Manual

Release date: 2021/07
Compiled, organized, and written by snow chuai --- 2021/11/01


1. Install PVE
1) Download the PVE ISO
[root@srv1 ~]# curl -O https://mirrors.tuna.tsinghua.edu.cn/proxmox/iso/proxmox-ve_7.0-2.iso
2) Create a bootable USB drive
[root@srv1 ~]# dd if=./proxmox-ve_7.0-2.iso of=/dev/sdc
3) Install PVE










4) Log in to the PVE console    [Browser]===>https://$srv_ip:8006






5) Make sure all three nodes can resolve each other's FQDNs (either DNS or /etc/hosts works; see the sketch below)
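A minimal /etc/hosts sketch, assuming the node names and addresses used later in this manual (pve1/pve2/pve3, 192.168.1.11-13); adjust to your environment and apply it on every node:
root@pve1:~# cat >> /etc/hosts << 'EOF'
192.168.1.11 pve1.1000y.cloud pve1
192.168.1.12 pve2.1000y.cloud pve2
192.168.1.13 pve3.1000y.cloud pve3
EOF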
2. Adjust PVE
1) Remove the subscription notice
root@pve1:~# sed -i "s/.data.status.toLowerCase() !== 'active'/.data.status.toLowerCase() === 'active'/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
In the browser, press Shift+F5 to force-refresh and log in again.
2) Switch the package repositories to domestic (China) mirrors
root@pve1:~# echo "#deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list
root@pve1:~# wget https://mirrors.ustc.edu.cn/proxmox/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
root@pve1:~# echo "deb https://mirrors.ustc.edu.cn/proxmox/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
root@pve1:~# echo "deb https://mirrors.ustc.edu.cn/proxmox/debian/ceph-pacific bullseye main" > /etc/apt/sources.list.d/ceph.list
root@pve1:~# sed -i.bak "s#http://download.proxmox.com/debian#https://mirrors.ustc.edu.cn/proxmox/debian#g" /usr/share/perl5/PVE/CLI/pveceph.pm
root@pve1:~# sed -i.bak "s#ftp.debian.org/debian#mirrors.aliyun.com/debian#g" /etc/apt/sources.list
root@pve1:~# sed -i "s#security.debian.org#mirrors.aliyun.com/debian-security#g" /etc/apt/sources.list
3) Update the system
root@pve1:~# apt-get update && apt-get dist-upgrade -y
root@pve1:~# reboot
4) Enable nested virtualization
root@pve1:~# echo 'options kvm_intel nested=1' >> /etc/modprobe.d/qemu-system-x86.conf
root@pve1:~# reboot
root@pve1:~# cat /sys/module/kvm_intel/parameters/nested
Y
3. Build the PVE Cluster
1) Install chrony (NTP service) on all hosts
root@pve1:~# timedatectl set-timezone Asia/Shanghai
root@pve1:~# apt install chrony -y
root@pve1:~# systemctl enable --now chrony

root@pve2:~# timedatectl set-timezone Asia/Shanghai
root@pve2:~# apt install chrony -y
root@pve2:~# systemctl enable --now chrony

root@pve3:~# timedatectl set-timezone Asia/Shanghai
root@pve3:~# apt install chrony -y
root@pve3:~# systemctl enable --now chrony
2) Create the cluster on the pve1 node
# "1000y" is the cluster name
root@pve1:~# pvecm create 1000y
Corosync Cluster Engine Authentication key generator.
Gathering 2048 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
3) Check the cluster status on the pve1 node
root@pve1:~# pvecm status
Cluster information
-------------------
Name:             1000y
Config Version:   1
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Oct 29 15:14:58 2021
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.5
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.1.11 (local)
4) Join the pve2 node to the cluster --- run on pve2
root@pve2:~# pvecm add pve1.1000y.cloud
Please enter superuser (root) password for 'pve1.1000y.cloud': ******   # enter the root password of pve1
Establishing API connection with host 'pve1.1000y.cloud'
The authenticity of host 'pve1.1000y.cloud' can't be established.
X509 SHA256 key fingerprint is D0:9E:D4:27:97:97:94:CD:42:2C:78:19:02:CF:A7:51:96:56:C6:26:6E:FC:EE:29:D6:79:A9:A0:4C:D8:A3:D7.
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
check cluster join API version
No cluster network links passed explicitly, fallback to local node IP '192.168.1.12'
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1635491793.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'pve2' to cluster.   # joined successfully

5) Join the pve3 node to the cluster --- run on pve3
root@pve3:~# pvecm add pve1.1000y.cloud
Please enter superuser (root) password for 'pve1.1000y.cloud': ******   # enter the root password of pve1
Establishing API connection with host 'pve1.1000y.cloud'
The authenticity of host 'pve1.1000y.cloud' can't be established.
X509 SHA256 key fingerprint is D0:9E:D4:27:97:97:94:CD:42:2C:78:19:02:CF:A7:51:96:56:C6:26:6E:FC:EE:29:D6:79:A9:A0:4C:D8:A3:D7.
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
check cluster join API version
No cluster network links passed explicitly, fallback to local node IP '192.168.1.13'
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1635491859.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'pve3' to cluster.   # joined successfully
6) Check the cluster node status on the pve1 node
root@pve1:~# pvecm status
Cluster information
-------------------
Name:             1000y
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Oct 29 15:19:00 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1.d
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.1.11 (local)
0x00000002          1 192.168.1.12
0x00000003          1 192.168.1.13
7) Refresh the web UI on each node to verify    [Browser]===>https://$srv_ip:8006===>Shift+F5
4. Install the PVE Ceph cluster
1) Add an extra 100 GB disk to every node
2) Install Ceph on all nodes and form the Ceph cluster (see the CLI sketch below)
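This step is performed through the web UI in this manual; a rough CLI equivalent (a sketch only, assuming the 192.168.1.0/24 network used above and the Ceph Pacific repository configured earlier) would look like:
# Install the Ceph packages on every node (repeat on pve2 and pve3)
root@pve1:~# pveceph install
# Initialize the Ceph configuration once, on pve1
root@pve1:~# pveceph init --network 192.168.1.0/24
# Create the first monitor and a manager on pve1
root@pve1:~# pveceph mon create
root@pve1:~# pveceph mgr create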






3) Problems encountered
(1) Warning: clock skew detected on mon.pve3, mon.pve2
root@pve1:~# vi /etc/ceph/ceph.conf
# Append the following at the end of the [global] section
......
......
mon clock drift allowed = 2
mon clock drift warn backoff = 30

root@pve1:~# systemctl restart ceph-mon.target

(2) Warning: 1 daemons have recently crashed
# Inspect the problem
root@pve1:~# ceph crash ls-new
ID                                                                ENTITY    NEW
2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9  mgr.pve1

# View the details
root@pve1:~# ceph crash info 2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9
{
    "os_version_id": "10",
    "utsname_release": "5.4.44-2-pve",
    "os_name": "Debian GNU/Linux 10 (buster)",
    "entity_name": "mgr.pve1",
    "timestamp": "2020-08-04 09:50:21.698657Z",
    "process_name": "ceph-mgr",
    "utsname_machine": "x86_64",
    "utsname_sysname": "Linux",
    "os_version": "10 (buster)",
    "os_id": "10",
    "utsname_version": "#1 SMP PVE 5.4.44-2 (Wed, 01 Jul 2020 16:37:57 +0200)",
    "backtrace": [
        "(()+0x12730) [0x7fea11954730]",
        "(bool ProtocolV2::append_frame<ceph::msgr::v2::MessageFrame>(ceph::msgr::v2::MessageFrame&)+0x4d9) [0x7fea12c2efc9]",
        "(ProtocolV2::write_message(Message*, bool)+0x4d9) [0x7fea12c12349]",
        "(ProtocolV2::write_event()+0x3a5) [0x7fea12c276a5]",
        "(AsyncConnection::handle_write()+0x43) [0x7fea12be8933]",
        "(EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0x135f) [0x7fea12c3a11f]",
        "(()+0x5b9fab) [0x7fea12c3ffab]",
        "(()+0xbbb2f) [0x7fea115fbb2f]",
        "(()+0x7fa3) [0x7fea11949fa3]",
        "(clone()+0x3f) [0x7fea112db4cf]"
    ],
    "utsname_hostname": "pve1",
    "crash_id": "2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9",
    "ceph_version": "14.2.9"
}

# Archive the crash report, or archive all of them at once
root@pve1:~# ceph crash archive 2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9
root@pve1:~# ceph crash archive-all
4) Add the pve2 and pve3 nodes as monitors
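This is done in the web UI in this manual; a CLI sketch of the equivalent would be to run the following on the two additional nodes:
root@pve2:~# pveceph mon create
root@pve3:~# pveceph mon create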




5) Configure the OSDs --- perform the following on each of the three nodes
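A CLI sketch of the same step, assuming the extra 100 GB disk shows up as /dev/sdb on every node (adjust the device name to your hardware):
root@pve1:~# pveceph osd create /dev/sdb
root@pve2:~# pveceph osd create /dev/sdb
root@pve3:~# pveceph osd create /dev/sdb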




6) Create the storage pool
Official recommendation:
    fewer than 5 OSDs: set pg_num to 128
    5~10 OSDs: set pg_num to 512
    10~50 OSDs: set pg_num to 4096
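With only 3 OSDs here, pg_num 128 applies. A CLI sketch of the pool creation (the pool name "qyy" is taken from the storage.cfg output later in this manual; --add_storages also registers the pool as a PVE RBD storage):
root@pve1:~# pveceph pool create qyy --pg_num 128 --add_storages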




5. Connect external storage
1) Prepare one or more external storages such as NFS/GFS/Ceph/iSCSI --- GlusterFS (GFS) is used in this example
2) Connect the external storage (the web UI is used here; see the CLI sketch below)
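A pvesm CLI sketch of the same, using the server, volume, and storage ID that appear in the storage.cfg output later in this manual (srv4.1000y.cloud, dist_vol, GFS1):
root@pve1:~# pvesm add glusterfs GFS1 --server srv4.1000y.cloud --volume dist_vol --content images,iso,backup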

6. Create a virtual machine
1) Upload an ISO image
(1) Upload the ISO via the web UI





(2) Upload the ISO via the CLI
[root@srv1 ~]# scp ./Rocky-8.4-x86_64-minimal.iso root@pve1.1000y.cloud:/var/lib/vz/template/iso/
2) Create a VM on the Ceph storage pool and install the OS (Rocky Linux 8.4 in this example)
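The VM is created through the web UI below; a rough qm CLI sketch under the same assumptions (VM ID 100, the Ceph-backed storage "qyy" from storage.cfg, the ISO uploaded above; 2 cores, 2 GB RAM, and a 32 GB disk are chosen only for illustration, as is the default vmbr0 bridge):
root@pve1:~# qm create 100 --name rocky84 --memory 2048 --cores 2 \
             --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
             --scsi0 qyy:32 --ide2 local:iso/Rocky-8.4-x86_64-minimal.iso,media=cdrom \
             --ostype l26
root@pve1:~# qm start 100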











The Rocky Linux OS installation steps are omitted here.



7. Migration
1) Prepare the PVE cluster
2) Live migration --- Ceph-backed
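Migration is triggered from the web UI in this manual; the CLI sketch below shows the equivalent, assuming VM 100 lives on the shared Ceph pool and is moved from pve1 to pve2:
root@pve1:~# qm migrate 100 pve2 --online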





3) Live migration --- local storage (Local) backed
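For a VM whose disks sit on local storage, live migration has to copy the disks as well; a CLI sketch, assuming VM 101 on pve1 is moved to pve2:
root@pve1:~# qm migrate 101 pve2 --online --with-local-disks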





4) Cold migration
1. Shut down the virtual machine first.
2. The procedure is the same as live migration and is not repeated here.
8. Other VM operations
1) VM snapshots
(1) Create a snapshot
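Snapshots are taken from the web UI here; a qm CLI sketch, assuming VM 100 and a snapshot name chosen purely for illustration:
root@pve1:~# qm snapshot 100 snap1 --description "before changes"
root@pve1:~# qm listsnapshot 100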





(2) Roll back to a snapshot
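The CLI sketch for the same, using the snapshot name from the previous example:
root@pve1:~# qm rollback 100 snap1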



2) VM backup
(1) Create a backup
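The backups shown below were produced from the web UI; a vzdump CLI sketch that produces the same kind of .vma.zst archives, using the storage names from this manual:
root@pve1:~# vzdump 101 --storage local --mode snapshot --compress zstd
root@pve1:~# vzdump 100 --storage GFS1 --mode snapshot --compress zstd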





# Path of the VM backups --- local storage backup path
root@pve1:~# ls -l /var/lib/vz/dump/*.vma.zst
-rw-r--r-- 1 root root 970605999 Oct 29 19:44 /var/lib/vz/dump/vzdump-qemu-101-2021_10_29-19_51_32.vma.zst

# Path of the VM backups --- GFS backup path
root@pve1:~# df -Th | grep GFS1
srv4.1000y.cloud:dist_vol fuse.glusterfs  171G   11G  154G   7% /mnt/pve/GFS1
root@pve1:~# ls -l /mnt/pve/GFS1/dump/*.vma.zst
-rw-r--r-- 1 root root 970605999 Oct 29 19:44 /mnt/pve/GFS1/dump/vzdump-qemu-100-2021_10_29-19_43_53.vma.zst

(2) Restore a backup
# Before restoring, shut down the VM that is going to be restored; do not leave it running.
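A CLI sketch of the restore, using the local backup file listed above and restoring it over VM 101 (the --force flag, which allows overwriting an existing VM, is an assumption of this sketch; the web UI restore used in this manual handles that prompt itself):
root@pve1:~# qm stop 101
root@pve1:~# qmrestore /var/lib/vz/dump/vzdump-qemu-101-2021_10_29-19_51_32.vma.zst 101 --force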





3) VM cloning --- full clone




4) Convert a VM into a template
1. Get the VM fully prepared first: install and tune everything that should be baked in.
2. Create the template.
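A CLI sketch of the conversion, assuming VM 100 is the one being turned into a template:
root@pve1:~# qm template 100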



5) Clone from the template
# Cloning from a template supports both full clones and linked clones.
A VM that has not been converted into a template can only be fully cloned.
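CLI sketches for both clone types, with illustrative new VM IDs and names (the source is the template created in the previous step):
# Full clone
root@pve1:~# qm clone 100 110 --name full-clone01 --full
# Linked clone (only possible from a template)
root@pve1:~# qm clone 100 111 --name linked-clone01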




6) Delete a virtual machine
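A CLI sketch with an illustrative VM ID; --purge additionally removes the VM from backup jobs and HA configuration:
root@pve1:~# qm stop 110
root@pve1:~# qm destroy 110 --purge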
9. Implementing PVE HA
1) Prerequisites
1. At least three cluster nodes (for a reliable quorum)
2. Shared storage for VMs and containers
3. Hardware redundancy (everywhere)
4. Reliable "server-grade" components
5. A hardware watchdog --- if none is available, PVE falls back to the Linux kernel software watchdog (softdog)
6. Optional hardware fencing devices

2) Create an HA group and add the virtual machines (including the template-based VM) to it
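This is configured through the web UI (Datacenter -> HA) in this manual; a rough ha-manager CLI sketch under the same assumptions (the group name and VM ID are illustrative):
root@pve1:~# ha-manager groupadd ha-group1 --nodes "pve1,pve2,pve3"
root@pve1:~# ha-manager add vm:100 --group ha-group1 --state started
root@pve1:~# ha-manager status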










3) Understand the Proxmox HA trigger conditions
1. Manual server reboot: no VMs are migrated; all VMs enter the freeze state and are started again once the host comes back up.
2. Manual server shutdown: no VMs are migrated; all VMs enter the freeze state and are started again once the host comes back up.
3. Abnormal interruption (network loss, sudden power failure, etc.): in theory the VMs are migrated to the other healthy hosts.

4) Test HA
(1) Check the current HA cluster status

(2) Disconnect pve2's network


10. Monitoring Proxmox VE
10.1 Collecting metrics with prometheus-pve-exporter
1) Deploy Prometheus; see step 1 of the Prometheus configuration manual --- setting up Prometheus
2) Deploy Grafana; see step 5 of the Prometheus configuration manual --- visualizing with Grafana --- subsections 1 and 2 (install the latest Grafana)
3) Install prometheus-pve-exporter on all PVE nodes
(1) Create the prometheus account
root@pve1:~# groupadd --system prometheus
root@pve1:~# useradd -s /sbin/nologin --system -g prometheus prometheus
root@pve1:~# mkdir /etc/prometheus/

root@pve2:~# groupadd --system prometheus
root@pve2:~# useradd -s /sbin/nologin --system -g prometheus prometheus
root@pve2:~# mkdir /etc/prometheus/

root@pve3:~# groupadd --system prometheus
root@pve3:~# useradd -s /sbin/nologin --system -g prometheus prometheus
root@pve3:~# mkdir /etc/prometheus/
(2) Install prometheus-pve-exporter
root@pve1:~# apt install python3 python3-pip -y
root@pve1:~# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple prometheus-pve-exporter

root@pve2:~# apt install python3 python3-pip -y
root@pve2:~# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple prometheus-pve-exporter

root@pve3:~# apt install python3 python3-pip -y
root@pve3:~# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple prometheus-pve-exporter
(3) Create the configuration file
root@pve1:~# vi /etc/prometheus/pve.yml
default:
    user: root@pam
    password: 123456      # your PVE root password
    verify_ssl: false

root@pve1:~# chown -R prometheus:prometheus /etc/prometheus/
root@pve1:~# chmod -R 775 /etc/prometheus/

root@pve2:~# vi /etc/prometheus/pve.yml
default:
    user: root@pam
    password: 123456      # your PVE root password
    verify_ssl: false

root@pve2:~# chown -R prometheus:prometheus /etc/prometheus/
root@pve2:~# chmod -R 775 /etc/prometheus/

root@pve3:~# vi /etc/prometheus/pve.yml
default:
    user: root@pam
    password: 123456      # your PVE root password
    verify_ssl: false

root@pve3:~# chown -R prometheus:prometheus /etc/prometheus/
root@pve3:~# chmod -R 775 /etc/prometheus/
(4) Create the service file
root@pve1:~# vi /etc/systemd/system/prometheus-pve-exporter.service
[Unit]
Description=Prometheus exporter for Proxmox VE
Documentation=https://github.com/znerol/prometheus-pve-exporter

[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/pve_exporter /etc/prometheus/pve.yml

[Install]
WantedBy=multi-user.target

root@pve1:~# systemctl daemon-reload
root@pve1:~# systemctl enable --now prometheus-pve-exporter

root@pve2:~# vi /etc/systemd/system/prometheus-pve-exporter.service
[Unit]
Description=Prometheus exporter for Proxmox VE
Documentation=https://github.com/znerol/prometheus-pve-exporter

[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/pve_exporter /etc/prometheus/pve.yml

[Install]
WantedBy=multi-user.target

root@pve2:~# systemctl daemon-reload
root@pve2:~# systemctl enable --now prometheus-pve-exporter

root@pve3:~# vi /etc/systemd/system/prometheus-pve-exporter.service
[Unit]
Description=Prometheus exporter for Proxmox VE
Documentation=https://github.com/znerol/prometheus-pve-exporter

[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/pve_exporter /etc/prometheus/pve.yml

[Install]
WantedBy=multi-user.target

root@pve3:~# systemctl daemon-reload
root@pve3:~# systemctl enable --now prometheus-pve-exporter

(5) Access test    [Browser]===>http://$pve_ip:9221/pve
4) Add prometheus-pve-exporter to Prometheus
[root@srv7 ~]# vim /etc/prometheus/prometheus.yml
# Append the following at the end of the file
......
......
  - job_name: "srv7"
    static_configs:
      - targets: ['srv7.1000y.cloud:9100']

  - job_name: 'proxmox'
    metrics_path: /pve
    static_configs:
      - targets: ['pve1.1000y.cloud:9221', 'pve2.1000y.cloud:9221', 'pve3.1000y.cloud:9221']

[root@srv7 ~]# systemctl restart prometheus
5) Confirm that Prometheus has started collecting metrics    [Browser]===>http://$prometheus_srv_ip:9090===>Status===>Targets
6) Add the Prometheus data source in Grafana





7) Add a monitoring dashboard (template) in Grafana


10.2 Collecting metrics with node-exporter
1) Deploy Prometheus; see step 1 of the Prometheus configuration manual --- setting up Prometheus
2) Deploy Grafana; see step 5 of the Prometheus configuration manual --- visualizing with Grafana --- subsections 1 and 2 (install the latest Grafana)
3) Install node-exporter on all nodes
root@pve1:~# wget -P /tmp https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
root@pve1:~# tar -xzvf /tmp/node_exporter-1.2.2.linux-amd64.tar.gz -C /tmp
root@pve1:~# mv /tmp/node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin/

root@pve2:~# wget -P /tmp https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
root@pve2:~# tar -xzvf /tmp/node_exporter-1.2.2.linux-amd64.tar.gz -C /tmp
root@pve2:~# mv /tmp/node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin/

root@pve3:~# wget -P /tmp https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
root@pve3:~# tar -xzvf /tmp/node_exporter-1.2.2.linux-amd64.tar.gz -C /tmp
root@pve3:~# mv /tmp/node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin/
4) Create the account and the service file on all nodes
root@pve1:~# useradd -rs /bin/false node_exporter
root@pve1:~# vi /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

root@pve1:~# systemctl daemon-reload && systemctl enable --now node_exporter

root@pve2:~# useradd -rs /bin/false node_exporter
root@pve2:~# vi /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

root@pve2:~# systemctl daemon-reload && systemctl enable --now node_exporter

root@pve3:~# useradd -rs /bin/false node_exporter
root@pve3:~# vi /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

root@pve3:~# systemctl daemon-reload && systemctl enable --now node_exporter

5) Access test    [Browser]===>http://$pvesrv_ip:9100

6) Edit the Prometheus configuration file
[root@srv7 ~]# vim /etc/prometheus/prometheus.yml
# Append the following at the bottom of the file
......
......
  - job_name: 'proxmox'
    metrics_path: /pve
    static_configs:
      - targets: ['pve1.1000y.cloud:9221', 'pve2.1000y.cloud:9221', 'pve3.1000y.cloud:9221']

  - job_name: 'node_exporter_metrics'
    scrape_interval: 5s
    static_configs:
      - targets: ['pve1.1000y.cloud:9100', 'pve2.1000y.cloud:9100', 'pve3.1000y.cloud:9100']

[root@srv7 ~]# systemctl restart prometheus
7) Test Prometheus    [Browser]===>http://$Prometheus_ip:9090===>Status===>Targets
8) Add a node_exporter-based dashboard in Grafana

10.3 Collecting metrics with Graphite
1) Install and run Graphite --- see the Graphite section of the Service module for details
2) Make Graphite accept UDP requests --- Proxmox sends its metrics over UDP by default
[root@srv8 ~]# vim /data/graphite/conf/carbon.conf
......
......
# Change the values around line 103
ENABLE_UDP_LISTENER = True
UDP_RECEIVER_INTERFACE = 0.0.0.0
UDP_RECEIVER_PORT = 2003
......
......

[root@srv8 ~]# podman restart graphite
3) Configure Proxmox (done through the web UI here; a sketch of the resulting configuration follows)
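The external metric server is added under Datacenter -> Metric Server in the web UI; the settings end up in /etc/pve/status.cfg, roughly like the sketch below (the entry name "graphite1" is an assumption; the server name follows the srv8 host used above):
root@pve1:~# cat /etc/pve/status.cfg
graphite: graphite1
        server srv8.1000y.cloud
        port 2003
        path proxmox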



4) Check the data on the Graphite side
11. Enabling the cloud desktop
1) Enable SPICE
1. Shut down the virtual machine.
2. Set the VM's display to the SPICE protocol (done in the web UI here; see the CLI sketch below).
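A qm CLI sketch of the same display change, assuming VM 100 on pve2 as used in the next step:
root@pve2:~# qm set 100 --vga qxl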


2) Edit the VM's configuration file to set the listen address and port
root@pve2:~# cd /etc/pve/qemu-server
root@pve2:/etc/pve/qemu-server# vi 100.conf
agent: 1
# Append the following as the second line of the file
# Syntax notes:
#   args:                   extra arguments
#   -spice                  configure SPICE
#   port:                   listen port
#   addr:                   listen address
#   disable-ticketing:      disable password authentication; change to password=xxxx if a password is needed
#   seamless-migration=on   for QMP support
args: -spice port=61001,addr=0.0.0.0,disable-ticketing,seamless-migration=on
......
......
3) Start the virtual machine
4) Connect to the virtual machine with a SPICE client
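For example, with virt-viewer's remote-viewer as the SPICE client (a sketch; the client host is arbitrary, and any SPICE-capable client pointed at the address and port configured above should work):
[root@client ~]# remote-viewer spice://pve2.1000y.cloud:61001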
12. Backing up PVE with Proxmox Backup Server
12.1 Install and configure Proxmox Backup Server
1) Proxmox Backup Server --- PBS first released --- November 2020
2) Download PBS
https://mirrors.tuna.tsinghua.edu.cn/proxmox/iso/proxmox-backup-server_2.0-1.iso
3) Hardware of the PBS host used in this manual
1. 4 vCPUs
2. 8 GB RAM
3. sda: 32 GB system disk / sdb: 100 GB data disk
4) Install PBS

















5) Log in to PBS    [Browser]===>https://pbs-srv-ip:8007




6) Create a PBS account
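Account creation is done in the web UI here; a CLI sketch of the equivalent, assuming the snow@pbs user that appears in the storage.cfg output later (the password still has to be set afterwards, e.g. in the web UI):
root@pbs:~# proxmox-backup-manager user create snow@pbs
root@pbs:~# proxmox-backup-manager user list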





7) Create the storage space (datastore) needed for backups
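The datastore is created in the web UI on the sdb data disk; a rough CLI sketch of the equivalent disk-management step (a sketch only, assuming the datastore name "backup" and the sdb disk used in this manual):
root@pbs:~# proxmox-backup-manager disk fs create backup --disk sdb --filesystem xfs --add-datastore true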









The mount point of the "backup" datastore can be confirmed on the PBS command line:
root@pbs:~# more /etc/proxmox-backup/datastore.cfg
datastore: backup
        path /mnt/datastore/backup

root@pbs:~# df -Th /mnt/datastore/backup
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb1      xfs   100G  780M  100G   1% /mnt/datastore/backup
8) Grant the account permissions on the backup datastore
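Permissions are granted in the web UI here; a CLI sketch of the equivalent ACL, giving snow@pbs the DatastoreAdmin role on the "backup" datastore:
root@pbs:~# proxmox-backup-manager acl update /datastore/backup DatastoreAdmin --auth-id snow@pbs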







Whenever a PVE VM or container backup runs, you can verify that data is being written by watching the size of the .chunks directory.
root@pbs:~# ls -la /mnt/datastore/backup/
total 2112
drwxr-xr-x     3 backup backup      34 Nov  1 19:38 .
drwxr-xr-x     3 root   root      4096 Nov  1 19:38 ..
drwxr-x--- 65538 backup backup 1069056 Nov  1 19:38 .chunks
-rw-r--r--     1 backup backup       0 Nov  1 19:38 .lock
12.2 Configure the Proxmox Backup Server client
1) Install the Proxmox Backup client on the PVE nodes
# If your PVE version is older than 6.2-1, install the client manually; from 6.2-1 onward it is already installed.
root@pve:~# apt install proxmox-backup-client
2) Connect PVE's backup client to PBS (done via the web UI here; see the CLI sketch below)
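Adding the PBS storage is done via Datacenter -> Storage -> Add -> Proxmox Backup Server in the web UI; a pvesm CLI sketch of the same, using the values that appear in the storage.cfg output below (the password placeholder stands for whatever was set for snow@pbs):
root@pve1:~# pvesm add pbs pbs01 --server pbs.1000y.cloud --datastore backup \
             --username snow@pbs --password '<password-of-snow@pbs>' \
             --fingerprint 47:9b:ac:2e:87:fa:f7:ff:b3:e4:71:51:38:23:a5:39:15:b4:34:b3:6b:36:81:52:09:59:0e:58:c5:7e:00:c1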











3) Verify that the PBS client and server are connected correctly
root@pve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: qyy
        content rootdir,images
        krbd 0
        pool qyy

glusterfs: GFS1
        path /mnt/pve/GFS1
        volume dist_vol
        content images,backup,iso
        prune-backups keep-all=1
        server srv4.1000y.cloud

pbs: pbs01
        datastore backup
        server pbs.1000y.cloud
        content backup
        fingerprint 47:9b:ac:2e:87:fa:f7:ff:b3:e4:71:51:38:23:a5:39:15:b4:34:b3:6b:36:81:52:09:59:0e:58:c5:7e:00:c1
        prune-backups keep-all=1
        username snow@pbs

root@pve2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: qyy
        content rootdir,images
        krbd 0
        pool qyy

glusterfs: GFS1
        path /mnt/pve/GFS1
        volume dist_vol
        content images,backup,iso
        prune-backups keep-all=1
        server srv4.1000y.cloud

pbs: pbs01
        datastore backup
        server pbs.1000y.cloud
        content backup
        fingerprint 47:9b:ac:2e:87:fa:f7:ff:b3:e4:71:51:38:23:a5:39:15:b4:34:b3:6b:36:81:52:09:59:0e:58:c5:7e:00:c1
        prune-backups keep-all=1
        username snow@pbs

root@pve3:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: qyy
        content rootdir,images
        krbd 0
        pool qyy

glusterfs: GFS1
        path /mnt/pve/GFS1
        volume dist_vol
        content images,backup,iso
        prune-backups keep-all=1
        server srv4.1000y.cloud

pbs: pbs01
        datastore backup
        server pbs.1000y.cloud
        content backup
        fingerprint 47:9b:ac:2e:87:fa:f7:ff:b3:e4:71:51:38:23:a5:39:15:b4:34:b3:6b:36:81:52:09:59:0e:58:c5:7e:00:c1
        prune-backups keep-all=1
        username snow@pbs
12.3 Back up PVE virtual machines
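The backups in this section are launched from the web UI against the pbs01 storage; an equivalent vzdump CLI sketch, using VM 100 as an example:
root@pve1:~# vzdump 100 --storage pbs01 --mode snapshot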
















  
13. Upgrading PVE 6 to PVE 7
1) Remove the enterprise repository
[root@srv1 ~]# rm -rf /etc/apt/sources.list.d/pve-enterprise.list
2) Download the repository key
[root@srv1 ~]# wget http://mirrors.ustc.edu.cn/proxmox/debian/proxmox-ve-release-6.x.gpg \
               -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
3) Upgrade PVE 6 to its latest version
[root@srv1 ~]# apt update && apt dist-upgrade -y
4) Reboot
[root@srv1 ~]# reboot
5) Check upgrade readiness
# This check only reports applications that may stop working after the upgrade; it does not fix anything by itself
[root@srv1 ~]# pve6to7 --full
......
......
= SUMMARY =

TOTAL:     25
PASSED:    19
SKIPPED:   2
WARNINGS:  4      # the 4 warnings seen while writing this manual can be ignored; they do not affect the upgrade
FAILURES:  0
ATTENTION: Please check the output for detailed information
6) Add the domestic pve-no-subscription repository
[root@srv1 ~]# echo "deb http://mirrors.ustc.edu.cn/proxmox/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
7) Point Debian at a domestic mirror
[root@srv1 ~]# vim /etc/apt/sources.list
# Delete everything in the file and add the following
deb http://mirrors.aliyun.com/debian/ bullseye main non-free contrib
deb http://mirrors.aliyun.com/debian-security bullseye-security main
deb http://mirrors.aliyun.com/debian/ bullseye-updates main non-free contrib
deb http://mirrors.aliyun.com/debian/ bullseye-backports main non-free contrib
8) Set a domestic Ceph repository
[root@srv1 ~]# echo "deb http://mirrors.ustc.edu.cn/proxmox/debian/ceph-octopus bullseye main" > /etc/apt/sources.list.d/ceph.list
9) Make sure the PVE enterprise repository no longer exists; delete it if it does
[root@srv1 ~]# rm -rf /etc/apt/sources.list.d/pve-enterprise.list
10) Upgrade to PVE 7
# The upgrade asks interactive questions; accepting the defaults with "Enter" is fine.
[root@srv1 ~]# apt update && apt dist-upgrade -y
11) Reboot after the upgrade
[root@srv1 ~]# reboot
12) Remove packages that are no longer needed
[root@srv1 ~]# apt autoremove -y
13) If you want to remove obsolete kernels, proceed as follows
(1) Confirm the kernel currently in use
[root@srv1 ~]# uname -r
5.11.22-5-pve
(2) List the kernels present on the system
[root@srv1 ~]# dpkg --get-selections | grep kernel
pve-kernel-5.11                 install
pve-kernel-5.11.22-5-pve        install
pve-kernel-5.4                  install
pve-kernel-5.4.106-1-pve        deinstall
pve-kernel-5.4.119-1-pve        deinstall
pve-kernel-5.4.124-1-pve        deinstall
pve-kernel-5.4.128-1-pve        deinstall
pve-kernel-5.4.140-1-pve        deinstall
pve-kernel-5.4.143-1-pve        install
pve-kernel-5.4.34-1-pve         install
pve-kernel-5.4.44-2-pve         deinstall
pve-kernel-5.4.55-1-pve         deinstall
pve-kernel-5.4.65-1-pve         deinstall
pve-kernel-5.4.78-2-pve         deinstall
pve-kernel-helper               install
(3) Remove the unneeded kernel
[root@srv1 ~]# dpkg --purge --force-remove-essential pve-kernel-5.4.106-1-pve
(4) Regenerate grub
[root@srv1 ~]# update-grub
[root@srv1 ~]# reboot
14) Install the tool that reloads the network configuration automatically after changes
[root@srv1 ~]# apt install ifupdown2 -y
15) Use chrony for time synchronization --- [if needed]
[root@srv1 ~]# apt install chrony -y
[root@srv1 ~]# apt remove systemd-timesyncd -y
[root@srv1 ~]# apt remove --purge systemd-timesyncd
16) The upgrade is complete; log in to the PVE management UI with a browser and confirm the version is 7.0

17) Troubleshooting
1. After the PVE server is shut down and started again, VMs may fail to start because the VM disk cannot be found:
kvm: -drive file=/dev/pve/vm-102-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on: Could not open '/dev/pve/vm-102-disk-0': No such file or directory
Use of uninitialized value $tpmpid in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer.pm line 5465.
stopping swtpm instance (pid ) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1
2. Solution:
1) Confirm whether the VM disk exists
[root@srv1 ~]# lvscan
......
......
# "inactive" means the LV exists but has not been activated
inactive          '/dev/pve/vm-102-disk-0' [1.46 TiB] inherit
......
......
2) Repair the pve volume group to activate the VM disk
[root@srv1 ~]# lvchange -a n /dev/pve/data_tmeta
[root@srv1 ~]# lvchange -a n /dev/pve/data_tdata
[root@srv1 ~]# vgchange -a y pve
3) Verify
[root@srv1 ~]# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 931.5G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part
└─sda3                         8:3    0   931G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.1G  0 lvm
  │ └─pve-data-tpool         253:4    0   3.5T  0 lvm
  │   ├─pve-data             253:5    0   3.5T  1 lvm
  │   ├─pve-vm--102--disk--0 253:6    0   1.5T  0 lvm
......
......
# If the VM disk still does not show up, run the following as well
[root@srv1 ~]# lvchange -a y /dev/pve/data
[root@srv1 ~]# lvconvert --repair pve/data
4) Back up the new VG configuration and overwrite the old one
[root@srv1 ~]# vgcfgbackup --file lvmbackup.txt --force pve
[root@srv1 ~]# vgcfgrestore --file lvmbackup.txt --force pve
5) Shut down and start the PVE server again to test

 

If this manual helped you, feel free to leave a tip. ^-^
