When joining a cluster, a lot of what lives in /etc/pve is overwritten by the cluster.
The data there is part of the pmxcfs: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pmxcfs
A backup is made beforehand, though. Take a look on the...
How many corosync networks do you have configured?
If it is only one, chances are high that it is the same network used for the live migration. Then the live migration most likely consumed all the bandwidth. In turn, the latency for Corosync went...
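For reference, Corosync can be given a second link per node in /etc/pve/corosync.conf so cluster traffic has a fallback on a network separate from the one used for migration. The excerpt below is only a sketch; the node name and addresses are made up:

```
nodelist {
  node {
    name: pve1              # example node name
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.1.11  # dedicated corosync network
    ring1_addr: 10.10.2.11  # second link on a separate network
  }
}
```

When editing this file, also increment config_version in the totem section so the change propagates cleanly.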
With PVE on a shared LUN you use LVM to split it up. That is block storage.
See https://pve.proxmox.com/wiki/Multipath which also touches on the final setup after multipathing (which you will usually have in such setups).
No, that is a complex...
What does "goes offline" mean exactly? Does it reboot?
How is the network set up? Do you have a dedicated physical network just for corosync (PVE cluster communication) or is it sharing that with the network used for the live migration? Unless...
That indicates that you have a network issue. Possibly firewall related.
You can run the same command directly from the PVE and it will show you what the appropriate response should look like. You can also try running this command across the PVE...
You can also try curl -k https://[fqdn or IP]:8006; this will show not only that the port is open but also that you get a full response.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Is the DNS working on the PDM machine? What if you check the result in the shell of the PDM with dig? dig HOSTNAME
Does a fallback to IPs work?
Can you open a connection to the PVE nodes from PDM? nc -z PVEHOST 8006 && echo "worked".
If you...
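The checks above can be wrapped in a small loop to test every node at once. This is only a sketch: the node names are placeholders for your PVE hosts, and it uses bash's built-in /dev/tcp instead of nc so there is no extra dependency:

```shell
#!/bin/bash
# Return success if a TCP connection to host:port can be opened.
check_node() {
    timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder node names -- replace with your PVE hosts (or IPs, to rule out DNS).
for host in pve1 pve2 pve3; do
    if check_node "$host" 8006; then
        echo "$host: port 8006 reachable"
    else
        echo "$host: NOT reachable on 8006"
    fi
done
```

Testing with the plain IPs as well as the hostnames separates DNS problems from firewall problems.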
Do you see shutdown tasks happening, or are the guests being turned off some other way?
Are you migrating some guests to the remaining host before you shut down the other host? Out of memory could be one cause where the OOM killer stops the guests if...
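To check whether the OOM killer was involved, you can search the kernel log on the affected host; this is a generic Linux check, not PVE-specific:

```shell
# Search the kernel log for OOM-killer activity (run on the PVE host).
journalctl -k --no-pager 2>/dev/null | grep -iE "out of memory|oom-killer" \
    || echo "no OOM events in the kernel log"
```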
Do you mean this:
What would be a use case you have in mind? Because there are ways to import disk images in a storage-agnostic way.
Integrating special features for a very specific storage would need to have a good use case to warrant the...
One more follow-up: by having the host part in there as well, it is very clear which subnet is used when you are not using a /24 but, for example, a /26.
For example with 10.10.20.81/26 as host IP:
The result would be that the network ID address...
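The network part for that example is easy to verify; the one-liner below just uses Python's ipaddress module as a convenient calculator:

```shell
# Network ID for host 10.10.20.81 with a /26 prefix.
python3 -c "import ipaddress; print(ipaddress.ip_interface('10.10.20.81/26').network)"
# -> 10.10.20.64/26
```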
Yeah, since the CIDR prefix (/24) marks which part of the IP address defines the network and which the host, it is easy for Ceph to calculate the network part from the configuration.
If you allow me some comment here ;)
That classification regarding type1/2 hypervisors is a very strict approach that does not do justice to the nuances that have come up since it was coined back in the 1970s.
See this quote from that same wiki...
Move the file (delete the VM config on the source node) to be sure it is not restarted on old01 by mistake while it is running on NEW01, or you'll break your VM filesystem.
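For illustration, that move boils down to relocating the VMID's config file under /etc/pve/nodes/. The demo below recreates the directory layout in a temp dir so it can run anywhere; on a real cluster you would operate on /etc/pve directly (node names and VMID 100 are just example values):

```shell
# Stand-in for /etc/pve so the demo is safe to run anywhere.
PVE_DIR=$(mktemp -d)
mkdir -p "$PVE_DIR/nodes/old01/qemu-server" "$PVE_DIR/nodes/NEW01/qemu-server"
touch "$PVE_DIR/nodes/old01/qemu-server/100.conf"   # example VMID 100

# Moving the config is what makes the VM show up on the new node only.
mv "$PVE_DIR/nodes/old01/qemu-server/100.conf" "$PVE_DIR/nodes/NEW01/qemu-server/"
ls "$PVE_DIR/nodes/NEW01/qemu-server/"   # -> 100.conf
```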
That is generally not a great idea as each Proxmox VE cluster will assume it is the sole user of a storage. If you would remove a VM and check the "Destroy unreferenced disks owned by guest" checkbox for example, and have a VM with the same VMID...
I assume this is a Linux guest right?
If you run ps auxwf for example, you have the STAT column. If a process is in D state, it is waiting for IO to finish. See man 1 ps in the PROCESS STATE CODES section.
If a process is in that state you...
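To list only the processes currently in that state, something like the following works (generic procps, not PVE-specific); the wchan column hints at what kernel call the process is blocked in:

```shell
# PID, state, kernel wait channel and command for processes in D state;
# the header line is kept for readability.
ps axo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'
```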