Proxmox VE 6.x Hyper-Converged Configuration Manual

Compiled, organized, and written by snow chuai --- 2020/08/04
Last updated --- 2020/12/24

1. Install PVE
1) Download the PVE ISO
[root@srv1 ~]# curl -O https://mirrors.tuna.tsinghua.edu.cn/proxmox/iso/proxmox-ve_6.2-1.iso
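# Optionally verify the download before writing it to USB (compare against the checksum published on the mirror):
[root@srv1 ~]# sha256sum ./proxmox-ve_6.2-1.iso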
2) Create a bootable USB installer
# Assumes the USB stick is /dev/sdc; confirm with lsblk before writing
[root@srv1 ~]# dd if=./proxmox-ve_6.2-1.iso of=/dev/sdc
3) Install PVE

4) Log in to the PVE console [Browser]===>https://$srv_ip:8006

5) Make sure all three nodes can resolve each other's FQDNs (either DNS or the hosts file works); see the sketch below.
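For the hosts-file approach, each node's /etc/hosts would contain entries like the following (a sketch using the node names and IPs that appear in the cluster section later in this manual):
192.168.10.21   pve1.1000y.cloud   pve1
192.168.10.22   pve2.1000y.cloud   pve2
192.168.10.23   pve3.1000y.cloud   pve3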
2. Adjust PVE
1) Remove the subscription notice
root@pve1:~# sed -i "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
In the browser, press Shift+F5 to force-refresh and log in again.
2) Switch the package sources to domestic mirrors
root@pve1:~# mv /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.bak
root@pve1:~# vi /etc/apt/sources.list
# Clear the file completely and add the following
deb http://mirrors.aliyun.com/debian/ buster main non-free contrib
deb http://mirrors.aliyun.com/debian-security buster/updates main
deb http://mirrors.aliyun.com/debian/ buster-updates main non-free contrib
deb http://mirrors.aliyun.com/debian/ buster-backports main non-free contrib
# Use the community (no-subscription) repository; a paid subscription gets you better support
deb http://mirrors.ustc.edu.cn/proxmox/debian/pve buster pve-no-subscription
# Set the Ceph repository
deb http://mirrors.ustc.edu.cn/proxmox/debian/ceph-nautilus buster main

3) Update the system
root@pve1:~# apt-get update && apt-get dist-upgrade -y
root@pve1:~# reboot
4) Monitor server CPU temperatures
(1) Install lm-sensors and check the current CPU temperatures
root@pve1:~# apt install lm-sensors -y
root@pve1:~# sensors
coretemp-isa-0000
Adapter: ISA adapter
Core 0:       +40.0°C  (high = +79.0°C, crit = +89.0°C)
Core 1:       +45.0°C  (high = +79.0°C, crit = +89.0°C)
Core 2:       +41.0°C  (high = +79.0°C, crit = +89.0°C)
Core 8:       +42.0°C  (high = +79.0°C, crit = +89.0°C)
Core 9:       +39.0°C  (high = +79.0°C, crit = +89.0°C)
Core 10:      +41.0°C  (high = +79.0°C, crit = +89.0°C)

coretemp-isa-0001
Adapter: ISA adapter
Core 0:       +42.0°C  (high = +79.0°C, crit = +89.0°C)
Core 1:       +40.0°C  (high = +79.0°C, crit = +89.0°C)
Core 2:       +43.0°C  (high = +79.0°C, crit = +89.0°C)
Core 8:       +44.0°C  (high = +79.0°C, crit = +89.0°C)
Core 9:       +45.0°C  (high = +79.0°C, crit = +89.0°C)
Core 10:      +42.0°C  (high = +79.0°C, crit = +89.0°C)
(2) Back up the files that will be modified
root@pve1:~# cp /usr/share/perl5/PVE/API2/Nodes.pm /usr/share/perl5/PVE/API2/Nodes.pm.bak
root@pve1:~# cp /usr/share/pve-manager/js/pvemanagerlib.js /usr/share/pve-manager/js/pvemanagerlib.js.bak
(3) Modify Nodes.pm
root@pve1:~# vi /usr/share/perl5/PVE/API2/Nodes.pm
# Find line 357 and add the following content below it
......
......
$res->{pveversion} = PVE::pvecfg::package() . "/" . PVE::pvecfg::version_text();
# If there is only one physical CPU (coretemp-isa-0000), this single line is enough
$res->{thermalstate} = `sensors`;
# With a second physical CPU, also expose its readings; sed skips the first package's 9 lines of output
$res->{thermalstate1} = `sensors|sed '1,9d'`;
(4) Modify pvemanagerlib.js
root@pve1:~# vi /usr/share/pve-manager/js/pvemanagerlib.js
......
......
# Find line 21103 and change height: 400 to 420
var win = Ext.create('Ext.window.Window', {
    items: [ logView ],
    layout: 'fit',
    width: 800,
    height: 420,
    modal: true,
    title: gettext("Replication Log")
});
......
......
# Find line 28017 and change height: 300 to 320
......
......
Ext.define('PVE.node.StatusView', {
    extend: 'PVE.panel.StatusView',
    alias: 'widget.pveNodeStatus',
    height: 320,
    bodyPadding: '20 15 20 15',
......
......
# With multiple physical CPUs the panel may render misaligned; adjust the two height values
# above, or comment out lines 28119-28126:
//{
//    itemId: 'kversion',
//    colspan: 2,
//    title: gettext('Kernel Version'),
//    printBar: false,
//    textField: 'kversion',
//    value: ''
//},
......
......
# Find line 28134, append a comma after the "}", then add the following content
# (with only one physical CPU, the single 'thermal' item is enough; add one item per additional CPU)
......
......
},
{
    itemId: 'thermal',
    colspan: 2,
    printBar: false,
    title: gettext('CPU1 Temperature'),
    textField: 'thermalstate',
    renderer: function(value) {
        const c0 = value.match(/Core 0.*?\+([\d\.]+)?/)[1];
        const c1 = value.match(/Core 1.*?\+([\d\.]+)?/)[1];
        const c2 = value.match(/Core 2.*?\+([\d\.]+)?/)[1];
        const c8 = value.match(/Core 8.*?\+([\d\.]+)?/)[1];
        const c9 = value.match(/Core 9.*?\+([\d\.]+)?/)[1];
        const c10 = value.match(/Core 10.*?\+([\d\.]+)?/)[1];
        return `Core: ${c0} | ${c1} | ${c2} | ${c8} | ${c9} | ${c10}`;
    }
},
{
    itemId: 'thermal1',
    colspan: 2,
    printBar: false,
    title: gettext('CPU2 Temperature'),
    textField: 'thermalstate1',
    renderer: function(value) {
        const c0 = value.match(/Core 0.*?\+([\d\.]+)?/)[1];
        const c1 = value.match(/Core 1.*?\+([\d\.]+)?/)[1];
        const c2 = value.match(/Core 2.*?\+([\d\.]+)?/)[1];
        const c8 = value.match(/Core 8.*?\+([\d\.]+)?/)[1];
        const c9 = value.match(/Core 9.*?\+([\d\.]+)?/)[1];
        const c10 = value.match(/Core 10.*?\+([\d\.]+)?/)[1];
        return `Core: ${c0} | ${c1} | ${c2} | ${c8} | ${c9} | ${c10}`;
    }
}
],
......
......
root@pve1:~# systemctl restart pveproxy


5) Enable nested virtualization (CPU feature passthrough)
root@pve1:~# echo 'options kvm_intel nested=1' >> /etc/modprobe.d/qemu-system-x86.conf
root@pve1:~# reboot
root@pve1:~# cat /sys/module/kvm_intel/parameters/nested
Y
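# Note: kvm_intel applies to Intel hosts. On an AMD host the equivalent would be
# 'options kvm_amd nested=1', verified via /sys/module/kvm_amd/parameters/nested.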
3. Build a PVE Cluster
1) Install the chrony service (an NTP service) on all hosts
root@pve1:~# timedatectl set-timezone Asia/Shanghai
root@pve1:~# apt install chrony -y
root@pve1:~# systemctl enable --now chrony
root@pve2:~# timedatectl set-timezone Asia/Shanghai
root@pve2:~# apt install chrony -y
root@pve2:~# systemctl enable --now chrony

root@pve3:~# timedatectl set-timezone Asia/Shanghai
root@pve3:~# apt install chrony -y
root@pve3:~# systemctl enable --now chrony
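# Optional check that time is actually synchronizing (repeat on each node):
root@pve1:~# chronyc tracking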
2) Create the cluster on the pve1 node
# 1000y is the cluster name
root@pve1:~# pvecm create 1000y
Corosync Cluster Engine Authentication key generator.
Gathering 2048 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
3) Check the cluster status on the pve1 node
root@pve1:~# pvecm status
Cluster information
-------------------
Name:             1000y
Config Version:   1
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Aug  4 17:04:29 2020
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.5
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.10.21 (local)
4) Join the pve2 node to the cluster --- performed on pve2
root@pve2:~# pvecm add pve1.1000y.cloud
Please enter superuser (root) password for 'pve1.1000y.cloud': **********   # enter the root password of the pve1 node
Establishing API connection with host 'pve1.1000y.cloud'
The authenticity of host 'pve1.1000y.cloud' can't be established.
X509 SHA256 key fingerprint is A2:C8:2D:ED:B0:63:C7:2A:21:3E:D3:86:4D:64:1C:D7:36:CC:F8:5D:3E:80:E7:05:B3:D3:86:21:9C:A7:3E:4B.
Are you sure you want to continue connecting (yes/no)? yes   # type yes
Login succeeded.
check cluster join API version
No cluster network links passed explicitly, fallback to local node IP '192.168.10.22'
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1596532047.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'pve2' to cluster.   <=== joined successfully
5) Join the pve3 node to the cluster --- performed on pve3
root@pve3:~# pvecm add pve1.1000y.cloud
......
......
successfully added node 'pve3' to cluster.
6) Check the cluster node status
root@pve1:~# pvecm status
Cluster information
-------------------
Name:             1000y
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Aug  4 17:12:24 2020
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1.d
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.10.21 (local)
0x00000002          1 192.168.10.22
0x00000003          1 192.168.10.23
7) Refresh the Web UI on each node to verify [Browser]===>https://$srv_ip:8006===>Shift+F5

4. Install the PVE Ceph Cluster
1) Add a 100G disk to each node
2) Install Ceph on all nodes and form the Ceph cluster (see the CLI sketch below)
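# The original shows Web UI screenshots here; a rough CLI equivalent (the cluster network
# 192.168.10.0/24 is assumed from the node IPs used in this manual):
root@pve1:~# pveceph install                          # run on every node
root@pve1:~# pveceph init --network 192.168.10.0/24   # once, on the first node
root@pve1:~# pveceph mon create                       # first monitor, on pve1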

3) Issues encountered
(1) Warning: clock skew detected on mon.pve3, mon.pve2
root@pve1:~# vi /etc/ceph/ceph.conf
# Append the following at the end of the [global] section
......
......
mon clock drift allowed = 2
mon clock drift warn backoff = 30
root@pve1:~# systemctl restart ceph-mon.target
(2) Warning: 1 daemons have recently crashed
# Check the problem
root@pve1:~# ceph crash ls-new
ID                                                               ENTITY   NEW
2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9 mgr.pve1
# View the details
root@pve1:~# ceph crash info 2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9
{
    "os_version_id": "10",
    "utsname_release": "5.4.44-2-pve",
    "os_name": "Debian GNU/Linux 10 (buster)",
    "entity_name": "mgr.pve1",
    "timestamp": "2020-08-04 09:50:21.698657Z",
    "process_name": "ceph-mgr",
    "utsname_machine": "x86_64",
    "utsname_sysname": "Linux",
    "os_version": "10 (buster)",
    "os_id": "10",
    "utsname_version": "#1 SMP PVE 5.4.44-2 (Wed, 01 Jul 2020 16:37:57 +0200)",
    "backtrace": [
        "(()+0x12730) [0x7fea11954730]",
        "(bool ProtocolV2::append_frame<ceph::msgr::v2::MessageFrame>(ceph::msgr::v2::MessageFrame&)+0x4d9) [0x7fea12c2efc9]",
        "(ProtocolV2::write_message(Message*, bool)+0x4d9) [0x7fea12c12349]",
        "(ProtocolV2::write_event()+0x3a5) [0x7fea12c276a5]",
        "(AsyncConnection::handle_write()+0x43) [0x7fea12be8933]",
        "(EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0x135f) [0x7fea12c3a11f]",
        "(()+0x5b9fab) [0x7fea12c3ffab]",
        "(()+0xbbb2f) [0x7fea115fbb2f]",
        "(()+0x7fa3) [0x7fea11949fa3]",
        "(clone()+0x3f) [0x7fea112db4cf]"
    ],
    "utsname_hostname": "pve1",
    "crash_id": "2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9",
    "ceph_version": "14.2.9"
}
# Archive the crash report so the warning clears
root@pve1:~# ceph crash archive 2020-08-04_09:50:21.698657Z_27fdbfb6-b2fd-46fb-9594-5828909802f9
# Or archive all recent crash reports at once
root@pve1:~# ceph crash archive-all
4) Add the pve2 and pve3 nodes as monitors (CLI sketch below)
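# Screenshot-driven in the original; the CLI equivalent would be roughly:
root@pve2:~# pveceph mon create
root@pve3:~# pveceph mon create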




5) Configure OSDs --- perform the following on each of the three nodes (CLI sketch below)
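# Screenshot-driven in the original; assuming the 100G disk added in step 1 shows up
# as /dev/sdb (check with lsblk first), the CLI equivalent on each node would be:
root@pve1:~# pveceph osd create /dev/sdb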




6) Create a resource pool (CLI sketch below)
Official guidance for pg_num:
    fewer than 5 OSDs: set pg_num to 128
    5-10 OSDs:         set pg_num to 512
    10-50 OSDs:        set pg_num to 4096
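# With one OSD per node (3 total), the first bucket applies; a CLI sketch
# ('vm-pool' is an example name):
root@pve1:~# pveceph pool create vm-pool --pg_num 128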




5. Create Virtual Machines
1) Upload an ISO image
(1) Upload the ISO via the Web UI




(2) Upload the ISO via the CLI
[root@srv1 ~]# scp ./CentOS7.iso root@pve1.1000y.cloud:/var/lib/vz/template/iso/
2) Create a VM on the Ceph storage pool and install CentOS 7 (CLI sketch below)
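# The VM is created in the Web UI (screenshots omitted); a rough qm sketch, where the
# VM ID, sizes, and the 'vm-pool' storage name from section 4 are all examples:
root@pve1:~# qm create 100 --name centos7 --memory 2048 --cores 2 \
      --net0 virtio,bridge=vmbr0 --cdrom local:iso/CentOS7.iso \
      --scsihw virtio-scsi-pci --scsi0 vm-pool:32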

The CentOS installation steps are omitted here.

6. Connect External Storage
1) Prepare one or more external storage backends (NFS/GFS/Ceph/iSCSI, etc.)
2) Connect the external storage (see the pvesm sketch below)
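# Done from Datacenter -> Storage in the Web UI (screenshots omitted). For NFS, for
# example, a pvesm sketch (the server address and export path are placeholders):
root@pve1:~# pvesm add nfs nfs-store --server 192.168.10.30 --export /srv/nfs --content images,backup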

7. Migration
1) Prepare the PVE Cluster
2) Live migration --- Ceph-backed (CLI sketch below)
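# Screenshot-driven in the original; the CLI equivalent (VM 100 to node pve2) would be roughly:
root@pve1:~# qm migrate 100 pve2 --online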

3) Live migration --- local storage (Local)




4) Cold migration
1. Shut down the virtual machine first.
2. The procedure is otherwise identical to live migration and is not repeated here.
8. Other VM Operations
1) VM snapshots
(1) Create a snapshot
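# CLI sketch (the snapshot name 'snap1' is an example):
root@pve1:~# qm snapshot 100 snap1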

(2) Restore a snapshot
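# CLI sketch, rolling VM 100 back to the snapshot created above:
root@pve1:~# qm rollback 100 snap1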



2) VM backups
(1) Create a backup
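# The Web UI steps are screenshots; a CLI sketch that produces the file shown below (VM 100 assumed):
root@pve1:~# vzdump 100 --storage local --mode snapshot --compress zstd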

# Path where VM backups are stored
root@pve1:~# ls -l /var/lib/vz/dump/*.vma.zst
-rw-r--r-- 1 root root 651310008 Aug  4 20:33 /var/lib/vz/dump/vzdump-qemu-100-2020_08_04-20_31_57.vma.zst

(2) Restore a backup
# Before restoring, shut down the VM to be restored; do not leave it running
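# CLI sketch of the restore (file name taken from the listing above; --force overwrites the existing VM):
root@pve1:~# qmrestore /var/lib/vz/dump/vzdump-qemu-100-2020_08_04-20_31_57.vma.zst 100 --force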

3) VM cloning --- full clone




4) Convert a VM into a template
1. Prepare the VM first: install and optimize everything it should include.
2. Create the template.
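# CLI sketch (VM ID 100 is an example):
root@pve1:~# qm template 100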



5) Clone from a template (CLI sketch below)
# Cloning from a template supports both full clones and linked clones;
# a VM that has not been converted into a template can only be fully cloned.
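# A rough CLI sketch (IDs and the name are examples): clone template 100 into a new VM 101;
# --full forces a full clone, omitting it gives a linked clone of a template:
root@pve1:~# qm clone 100 101 --name centos7-clone --full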




6) Delete a virtual machine

9. Implement PVE HA
1) Prerequisites
1. At least three cluster nodes (for a reliable quorum)
2. Shared storage for VMs and containers
3. Hardware redundancy (everywhere)
4. Reliable "server-grade" components
5. A hardware watchdog --- if none is available, PVE falls back to the Linux kernel software watchdog (softdog)
6. Optional hardware fencing devices

2) Create an HA group and add the VMs and template VMs to it (CLI sketch below)
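# Screenshot-driven in the original; a rough CLI sketch (the group name 'ha1' is an example):
root@pve1:~# ha-manager groupadd ha1 --nodes pve1,pve2,pve3
root@pve1:~# ha-manager add vm:100 --group ha1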

3) Understand the Proxmox HA trigger conditions
1. Manual server reboot: VMs are not migrated; they all enter the freeze state and start once the host is back up.
2. Manual server shutdown: VMs are not migrated; they all enter the freeze state and start once the host is back up.
3. Abnormal interruption (network loss, sudden power failure, etc.): in theory the VMs are migrated to the remaining healthy hosts.

4) Test HA
(1) Check the current HA cluster status
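# CLI equivalent of the status screenshot:
root@pve1:~# ha-manager status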

(2) Disconnect pve2's network



10. Monitor Proxmox VE
10.1 Collect metrics with prometheus-pve-exporter
1) Deploy Prometheus; see step 1 of the Prometheus configuration manual --- Implementing Prometheus
2) Deploy Grafana; see step 5 of the Prometheus configuration manual --- Visualization with Grafana, subsections 1 and 2 (installing the latest Grafana)
3) Install prometheus-pve-exporter on all PVE nodes
(1) Create the prometheus account
root@pve1:~# groupadd --system prometheus
root@pve1:~# useradd -s /sbin/nologin --system -g prometheus prometheus
root@pve1:~# mkdir /etc/prometheus/
root@pve2:~# groupadd --system prometheus
root@pve2:~# useradd -s /sbin/nologin --system -g prometheus prometheus
root@pve2:~# mkdir /etc/prometheus/

root@pve3:~# groupadd --system prometheus
root@pve3:~# useradd -s /sbin/nologin --system -g prometheus prometheus
root@pve3:~# mkdir /etc/prometheus/
(2) Install prometheus-pve-exporter
root@pve1:~# apt install python python-pip -y
root@pve1:~# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple prometheus-pve-exporter

root@pve2:~# apt install python python-pip -y
root@pve2:~# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple prometheus-pve-exporter

root@pve3:~# apt install python python-pip -y
root@pve3:~# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple prometheus-pve-exporter
(3) Create the configuration file
root@pve1:~# vi /etc/prometheus/pve.yml
default:
    user: root@pam
    password: 123456      # your PVE admin password
    verify_ssl: false

root@pve1:~# chown -R prometheus:prometheus /etc/prometheus/
root@pve1:~# chmod -R 775 /etc/prometheus/
root@pve2:~# vi /etc/prometheus/pve.yml
default:
    user: root@pam
    password: 123456      # your PVE admin password
    verify_ssl: false

root@pve2:~# chown -R prometheus:prometheus /etc/prometheus/
root@pve2:~# chmod -R 775 /etc/prometheus/

root@pve3:~# vi /etc/prometheus/pve.yml
default:
    user: root@pam
    password: 123456      # your PVE admin password
    verify_ssl: false

root@pve3:~# chown -R prometheus:prometheus /etc/prometheus/
root@pve3:~# chmod -R 775 /etc/prometheus/
(4) Create the service file
root@pve1:~# vi /etc/systemd/system/prometheus-pve-exporter.service
[Unit]
Description=Prometheus exporter for Proxmox VE
Documentation=https://github.com/znerol/prometheus-pve-exporter

[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/pve_exporter /etc/prometheus/pve.yml

[Install]
WantedBy=multi-user.target

root@pve1:~# systemctl daemon-reload
root@pve1:~# systemctl enable --now prometheus-pve-exporter

root@pve2:~# vi /etc/systemd/system/prometheus-pve-exporter.service
[Unit]
Description=Prometheus exporter for Proxmox VE
Documentation=https://github.com/znerol/prometheus-pve-exporter

[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/pve_exporter /etc/prometheus/pve.yml

[Install]
WantedBy=multi-user.target

root@pve2:~# systemctl daemon-reload
root@pve2:~# systemctl enable --now prometheus-pve-exporter

root@pve3:~# vi /etc/systemd/system/prometheus-pve-exporter.service
[Unit]
Description=Prometheus exporter for Proxmox VE
Documentation=https://github.com/znerol/prometheus-pve-exporter

[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/pve_exporter /etc/prometheus/pve.yml

[Install]
WantedBy=multi-user.target

root@pve3:~# systemctl daemon-reload
root@pve3:~# systemctl enable --now prometheus-pve-exporter

(5) Access test [Browser]===>http://$pve_ip:9221/pve
4) Add prometheus-pve-exporter to Prometheus
[root@srv1 ~]# vi /etc/prometheus/prometheus.yml
# Append the following at the end of the file
......
......
      - targets: ['srv1.1000y.cloud:9090']

  - job_name: 'proxmox'
    metrics_path: /pve
    static_configs:
      - targets: ['pve1.1000y.cloud:9221', 'pve2.1000y.cloud:9221', 'pve3.1000y.cloud:9221']
[root@srv1 ~]# systemctl restart prometheus
5) Confirm that Prometheus has started collecting metrics [Browser]===>http://$prometheus_srv_ip:9090===>Status===>Targets
6) Add the Prometheus data source in Grafana

7) Add a monitoring dashboard in Grafana



10.2 Collect metrics with node-exporter
1) Deploy Prometheus; see step 1 of the Prometheus configuration manual --- Implementing Prometheus
2) Deploy Grafana; see step 5 of the Prometheus configuration manual --- Visualization with Grafana, subsections 1 and 2 (installing the latest Grafana)
3) Install node-exporter on all nodes
root@pve1:~# wget -P /tmp https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz
root@pve1:~# tar -xzvf /tmp/node_exporter-1.0.1.linux-amd64.tar.gz -C /tmp
root@pve1:~# mv /tmp/node_exporter-1.0.1.linux-amd64/node_exporter /usr/local/bin/
root@pve2:~# wget -P /tmp https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz
root@pve2:~# tar -xzvf /tmp/node_exporter-1.0.1.linux-amd64.tar.gz -C /tmp
root@pve2:~# mv /tmp/node_exporter-1.0.1.linux-amd64/node_exporter /usr/local/bin/

root@pve3:~# wget -P /tmp https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz
root@pve3:~# tar -xzvf /tmp/node_exporter-1.0.1.linux-amd64.tar.gz -C /tmp
root@pve3:~# mv /tmp/node_exporter-1.0.1.linux-amd64/node_exporter /usr/local/bin/
4) Create the account and service file on all nodes
root@pve1:~# useradd -rs /bin/false node_exporter
root@pve1:~# vi /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

root@pve1:~# systemctl daemon-reload && systemctl enable --now node_exporter

root@pve2:~# useradd -rs /bin/false node_exporter
root@pve2:~# vi /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

root@pve2:~# systemctl daemon-reload && systemctl enable --now node_exporter

root@pve3:~# useradd -rs /bin/false node_exporter
root@pve3:~# vi /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

root@pve3:~# systemctl daemon-reload && systemctl enable --now node_exporter

5) Access test [Browser]===>http://$pvesrv_ip:9100
6) Modify the Prometheus configuration file
[root@srv1 ~]# vi /etc/prometheus/prometheus.yml
# Append the node_exporter job at the bottom of the file (the proxmox job from 10.1 is shown for context)
......
......
  - job_name: 'proxmox'
    metrics_path: /pve
    static_configs:
      - targets: ['pve1.1000y.cloud:9221', 'pve2.1000y.cloud:9221', 'pve3.1000y.cloud:9221']

  - job_name: 'node_exporter_metrics'
    scrape_interval: 5s
    static_configs:
      - targets: ['pve1.1000y.cloud:9100', 'pve2.1000y.cloud:9100', 'pve3.1000y.cloud:9100']

[root@srv1 ~]# systemctl restart prometheus
7) Test in Prometheus [Browser]===>http://$Prometheus_ip:9090===>Status===>Targets
8) Add a node_exporter-based dashboard in Grafana


10.3 Collect metrics with Graphite
1) Install and run Graphite --- see Graphite in the Service module for details
2) Make Graphite accept UDP --- Proxmox sends its metrics over UDP by default
[root@srv2 ~]# vim /etc/carbon/carbon.conf
......
......
# Change the values around line 93
ENABLE_UDP_LISTENER = True
UDP_RECEIVER_INTERFACE = 0.0.0.0
UDP_RECEIVER_PORT = 2003
......
......
[root@srv2 ~]# systemctl restart carbon-cache
3) Configure Proxmox (sketch below)
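# The original configures this via the Web UI (screenshots omitted). The same settings can be
# written to /etc/pve/status.cfg; a sketch, where the section id 'graphite1' and the Graphite
# server address are placeholders and the exact syntax should be checked against the PVE admin
# guide for your version:
root@pve1:~# vi /etc/pve/status.cfg
graphite: graphite1
        server 192.168.10.30
        port 2003
        path proxmox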



4) Check on the Graphite side
11. Enable Cloud Desktops
1) Enable SPICE
1. Shut down the virtual machine
2. Set the virtual machine's display to the SPICE protocol


2) Modify the VM's configuration file to set the listen address and port
root@pve:~# cd /etc/pve/qemu-server
root@pve:/etc/pve/qemu-server# vi 103.conf
# Add the following at line 2 of the file
agent: 1
# Syntax notes
#   args:                   extra arguments
#   -spice:                 configure SPICE
#   port:                   listening port
#   addr:                   listening address
#   disable-ticketing:      disable password authentication; change to password=xxxx if one is required
#   seamless-migration=on:  QMP support
args: -spice port=61001,addr=0.0.0.0,disable-ticketing,seamless-migration=on
......
......
3) Start the virtual machine
4) Connect to the VM with SPICE client software

12. Back Up PVE with Proxmox Backup Server
12.1 Install and Configure Proxmox Backup Server
1) Proxmox Backup Server --- PBS release date --- November 2020
2) Download PBS
https://mirrors.tuna.tsinghua.edu.cn/proxmox/iso/proxmox-backup-server_1.0-1.iso
3) PBS host hardware used in this walkthrough
1. 4 vCPU cores
2. 8G RAM
3. sda: 32G system disk / sdb: 100G data disk
4) Install PBS

5) Log in to PBS [Browser]===>https://pbs-srv-ip:8007




6) Create a PBS account (CLI sketch below)
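# Accounts are created in the PBS Web UI in the original (screenshots omitted); a CLI sketch
# for the snow@pbs account used later in this section (the --password flag is an assumption;
# the password can also be set in the GUI):
root@pbs:~# proxmox-backup-manager user create snow@pbs --password xxxxxx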

7) Create the datastore needed for backups (CLI sketch below)
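# Datastore creation is screenshot-driven in the original; assuming /dev/sdb1 is already
# formatted and mounted at /mnt/datastore/data, the CLI equivalent would be:
root@pbs:~# proxmox-backup-manager datastore create data /mnt/datastore/data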

The mount point of the data datastore can be confirmed from the PBS command line:
root@pbs:~# more /etc/proxmox-backup/datastore.cfg
datastore: data
        path /mnt/datastore/data

root@pbs:~# df -Th | grep /mnt/datastore/data
/dev/sdb1      xfs   100G  169M  100G   1% /mnt/datastore/data
8) Grant the account permissions on the backup datastore (CLI sketch below)
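# A CLI sketch of the same permission grant, using the role and path conventions from the PBS docs:
root@pbs:~# proxmox-backup-manager acl update /datastore/data DatastoreAdmin --auth-id snow@pbs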

Whenever a PVE VM or container backup runs, you can watch the size of the .chunks directory change to verify that data is being written.
root@pbs:~# ls -la /mnt/datastore/data/
total 2112
drwxr-xr-x      3 backup backup      34 Dec 22 19:20 .
drwxr-xr-x      3 root   root      4096 Dec 22 19:20 ..
drwxr-x--- 65538 backup backup 1069056 Dec 22 19:20 .chunks
-rw-r--r--      1 backup backup       0 Dec 22 19:20 .lock
12.2 Configure the Proxmox Backup Server Client
1) Install the PBS client tool
# On PVE versions older than 6.2-1, install the proxmox-backup-client tool; on 6.2-1 and later it is already installed
root@pve:~# apt-get install proxmox-backup-client
2) Connect PVE's PBS client to the PBS server (CLI sketch below)
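# The connection is added via Datacenter -> Storage -> Add -> Proxmox Backup Server in the
# Web UI (screenshots omitted). A pvesm sketch that would produce the storage.cfg entry shown
# below (the password is a placeholder; the fingerprint is the one reported by PBS):
root@pve:~# pvesm add pbs pbs100 --server 192.168.1.254 --datastore data \
      --username snow@pbs --password xxxxxx \
      --fingerprint 42:12:7d:64:fa:7b:9c:bb:c5:53:ef:92:89:7f:44:99:aa:0e:51:be:88:f0:6e:18:7e:ab:ab:f7:81:63:73:97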

3) Verify that the PBS client connects to the server correctly
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

pbs: pbs100
        datastore data
        server 192.168.1.254
        content backup
        encryption-key 1
        fingerprint 42:12:7d:64:fa:7b:9c:bb:c5:53:ef:92:89:7f:44:99:aa:0e:51:be:88:f0:6e:18:7e:ab:ab:f7:81:63:73:97
        maxfiles 1
        nodes pve
        username snow@pbs
12.3 Back Up PVE Virtual Machines (CLI sketch below)
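# The backup job is set up in the Web UI in the original (screenshots omitted); the one-off
# CLI equivalent, backing VM 100 up to the PBS storage added above, would be roughly:
root@pve:~# vzdump 100 --storage pbs100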

If this manual helped you, feel free to leave a tip. ^-^
