Re 1]
a] 1 corosync link on the Ceph bond + 1 corosync link in the mesh
b] 1 corosync link on a 1 Gbps link on a dedicated switch + 1 corosync link in the mesh
c] 1 corosync link on 10 Gbps + 1 corosync link on 10 Gbps - all without mesh
etc...
tl;dr: split the corosync links so they aren't dependent on one logical... (see the example below)
Check https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network and, for other network-related things, https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
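A minimal sketch of option b], assuming the dedicated 1 Gbps network is 10.10.10.0/24 and the mesh is 10.20.20.0/24 (addresses and cluster name are placeholders):

# on the first node: create the cluster with two independent corosync links
pvecm create my-cluster --link0 10.10.10.1 --link1 10.20.20.1
# on every other node: join it, giving that node's own address on both networks
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.20.20.2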
You can use the same name. Search the PVE docs for removing/adding a node.
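Rough sketch of the cycle, assuming the node is called pve2 and the cluster is reachable at 10.10.10.1 (both placeholders) - the PVE docs list the details and caveats:

# on a remaining cluster node: remove the old node from the cluster first
pvecm delnode pve2
# clean up leftovers of the old node (e.g. /etc/pve/nodes/pve2) as per the docs,
# reinstall with the same name/IP, then join again from the reinstalled node:
pvecm add 10.10.10.1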
Non-UEFI boot way: https://aaronlauterer.com/blog/2021/move-grub-and-boot-to-other-disk/ (or search Google).
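Very condensed sketch of the legacy-BIOS part of that post, assuming /boot has already been copied over and the new boot disk is /dev/sdb (a placeholder):

# install GRUB onto the new disk and regenerate its config
grub-install /dev/sdb
update-grub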
Do you really understand what you are benchmarking? You can't calculate Ceph IOPS based on SSD IOPS.
Read this: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
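If you want a number that reflects Ceph rather than the raw SSD, benchmark the pool itself; a minimal sketch, assuming a test pool named testpool exists (name is a placeholder):

# 60 s of object writes into the pool, keep the objects for the read test
rados bench -p testpool 60 write --no-cleanup
# sequential read test against those objects, then remove them
rados bench -p testpool 60 seq
rados -p testpool cleanup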
Hi,
I run a backup of a stopped VM regularly, and when it coincides with a Zabbix check it raises a notification, because the VM's interface on PVE goes up/down during the backup:
PVE:
INFO: Finished Backup of VM 101 (00:00:10)
INFO: Backup finished at 2022-02-11 02:00:35
INFO: Starting Backup of VM 102 (qemu)
INFO...
There are so many HW controllers that PVE doesn't support this; you need your own monitoring. SMART is for checking the state of a single disk, but that's not the same as the state of the disk within an array.
Result - use your own monitoring.
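As an example of what "your own monitoring" can look like: SMART can often still be read through the controller. A sketch assuming a MegaRAID-based controller where device ID 0 is one of the member disks:

# SMART of a physical disk sitting behind a MegaRAID controller
smartctl -a -d megaraid,0 /dev/sda
# the array/controller state itself needs the vendor CLI
# (storcli, perccli, ssacli, arcconf, ... depending on the HW)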