Regarding 1]:
a] 1 corosync link on ceph bond + 1 corosync link in mesh
b] 1 corosync link on 1gbps link on dedicated switch + 1 corosync link in mesh
c] 1 corosync link on 10gbps + 1 corosync link on 10gbps - all without mesh
etc...
tl;dr: split the corosync links so they aren't dependent on one logical...
Check https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network and, for other network-related topics, https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
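As a minimal sketch (the cluster name and the IP addresses are made up, and it assumes two separate corosync networks as in the options above), redundant links can be passed to pvecm directly:
# Create the cluster with two independent corosync links
pvecm create my-cluster --link0 10.10.10.1 --link1 10.10.20.1
# Join another node, again giving both links
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.10.20.2
Corosync then treats the links as redundant and fails over between them on its own.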
You can use the same name. Search the PVE docs for removing/adding a node.
Non-UEFI boot method: https://aaronlauterer.com/blog/2021/move-grub-and-boot-to-other-disk/ (or search Google).
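For legacy/BIOS boot the core of it is roughly the following; this is only a sketch assuming the new disk is /dev/sdb and /boot has already been copied there, so follow the linked article for the full procedure:
# Write the GRUB boot loader into the new disk's MBR
grub-install /dev/sdb
# Regenerate grub.cfg so it points at the right devices
update-grub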
Do you really understand what you are benchmarking? You can't calculate Ceph IOPS from SSD IOPS.
Read this: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
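If you want numbers for the Ceph layer itself, benchmark the pool rather than extrapolating from raw SSD specs; a minimal sketch with rados bench (the pool name "testpool" is made up) looks like this:
# 60s of 4K writes with 16 parallel threads, keeping the objects for the read test
rados bench -p testpool 60 write -b 4K -t 16 --no-cleanup
# Random reads against the objects written above
rados bench -p testpool 60 rand -t 16
# Remove the benchmark objects afterwards
rados -p testpool cleanup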
Hi,
We run a backup of a stopped VM regularly, and when it overlaps with a Zabbix check it raises a notification, because the VM's interface on PVE goes up/down during the backup:
PVE:
INFO: Finished Backup of VM 101 (00:00:10)
INFO: Backup finished at 2022-02-11 02:00:35
INFO: Starting Backup of VM 102 (qemu)
INFO...
There are so many HW controllers that PVE doesn't support this; you need your own monitoring. SMART reports the state of a disk, but that's not the same as the state of the disk within the array.
Result: use your own monitoring.
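As an illustration only (this assumes an HP Smart Array controller in slot 0; other vendors have their own CLIs such as storcli or perccli), such monitoring usually polls the controller CLI instead of SMART:
# Overall status of all Smart Array controllers
ssacli ctrl all show status
# Status of the logical drives (the arrays) on the controller in slot 0
ssacli ctrl slot=0 ld all show status
# Status of the physical disks behind them
ssacli ctrl slot=0 pd all show status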
All our nodes are HP DL3xx G8 running PVE 7.1; the versions from the last upgraded node are below:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-8
pve-kernel-5.13: 7.1-6
pve-kernel-5.13.19-3-pve: 5.13.19-7
ceph: 15.2.15-pve1
ceph-fuse...
The Notes tab has broken formatting. I restored a VM from PVE 6.4 to 7.1 with these notes:
In the edit panel the lines appear one per line:
root
IP
vg0 - root 8G, swap 2G
v20210914
In the view panel they are all joined onto one line:
root IP vg0 - root 8G, swap 2G v20210914
Clearing the notes to empty -> save -> re-enter...
So you upgraded one node to PVE 7 and upgraded Ceph to Octopus at the same time. That's the problem.
Until the PVE team replies, here are my possible theoretical solutions:
1] downgrade Ceph on the PVE 7 node
or
2] stop the VMs, back them up, and upgrade the rest of the cluster.
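Whichever way you go, first confirm which Ceph versions are actually running where; a quick check is:
# Versions of all running Ceph daemons (mon/mgr/osd/mds), grouped by type
ceph versions
# PVE / kernel / Ceph package versions on the local node
pveversion -v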
No warranty from me for any of the points written above.