Thanks for reporting this, but the forum is not the right place for it. Please open a new bug report in our bug tracker at https://bugzilla.proxmox.com/ where we can keep better track of it.
This is a situation that a stretch cluster does not protect you against. The main goal of a stretch cluster is to keep the cluster functional if one location is completely down, for example, due to a fire.
The network between the locations needs...
You mean in the ceph.conf file? That is no problem, as the /24 defines the subnet, so the last octet does not matter.
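For illustration, assuming the Ceph public network is 10.10.10.0/24 (the addresses here are placeholders, not from the original thread), the entry in ceph.conf would look like this:

    [global]
        public_network = 10.10.10.0/24

Whether it is written as 10.10.10.0/24 or 10.10.10.1/24, the /24 mask defines the same subnet, and every host address in that range is covered.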
What is interesting is that, according to the ceph -s output and the config file, only one MON is known to the running Ceph cluster...
The ZFS message sounds like a red herring, but the ext4 journal messages look somewhat more problematic.
Can you log in to the host? Either via SSH (which would be nicer, as you could copy & paste output) or directly on the screen from which you...
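Once you are on the host, the kernel log should show the full ext4 errors. Something along these lines (standard Linux tools, nothing Proxmox-specific assumed):

    dmesg -T | grep -i ext4
    journalctl -k -b | grep -i -e ext4 -e journal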
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#user_mgmt
You will need to give the user access to the resources they need. So if they should be able to edit virtual disks or change which ISO is used, they need access to the storages involved. For networks, you...
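As a rough sketch (user name, storage name, VM ID and roles here are just placeholders; check the linked docs for which privileges each role actually contains), such ACLs can be set under Datacenter -> Permissions in the GUI, or on the CLI:

    pveum acl modify /storage/local --users alice@pve --roles PVEDatastoreUser
    pveum acl modify /vms/100 --users alice@pve --roles PVEVMAdmin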
If you have a pull request for Proxmox, please be so kind as to link it here so I can review/improve it before there is a chance Proxmox will merge it.
I'd add the option to the Ceph pool configuration UI because it's linked on a per-pool basis and...
Please look at the explanation at https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher and at what has been discussed here. The behavior is as expected.
Definitely not tested!
Is there a reason not to upgrade the existing cluster node by node? Too little space on the other nodes to free one up?
Alternatively, one could perhaps take a completely different approach: PVE 7.4 should...
For stretch PVE + Ceph clusters we recommend a full PVE install for the tie-breaker node. See the newly published guide: https://pve.proxmox.com/wiki/Stretch_Cluster
With a 3-node Ceph cluster you need to be careful when planning how many disks you add as OSDs. More but smaller disks are preferred, because if just a single disk fails, Ceph can only recover its data onto the remaining disks of that same node in such a small cluster.
For example, if...
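As a rough illustration with assumed numbers (not necessarily the example meant above): with 2x 8 TB OSDs per node and the cluster about 60% full, each node holds roughly 9.6 TB of data. If one of the two disks fails, the remaining 8 TB disk would have to absorb all of it, which cannot work. With 4x 4 TB OSDs instead, the remaining 12 TB on that node would end up around 80% full after recovery, tight, but at least possible.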
If I have not misunderstood anything: since the linked clones are only tied to the template via the disk images, you could (temporarily) move the VMs' disks to another storage (Disk Action -> Move Disk). That way, the entire...
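On the CLI, the same move could be done per disk with qm (VM ID, disk name and target storage are placeholders):

    qm disk move 100 scsi0 other-storage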