KRBD is an option in the Proxmox storage configuration. It can only be enabled or disabled for the whole storage (and thus the Ceph pool behind it) and affects all VMs stored there.
You would need to create a new pool in Ceph and a new storage on that pool in Proxmox without the KRBD setting.
After that you could migrate the VM image to...
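Roughly like this (pool and storage names here are just placeholders, and the move-disk syntax differs slightly between PVE versions):
# Create a new Ceph pool, e.g. "vm-librbd":
pveceph pool create vm-librbd
# Add it as a storage in Proxmox with KRBD disabled:
pvesm add rbd vm-librbd-storage --pool vm-librbd --content images --krbd 0
# Move the disk of the affected VM (e.g. VM 100, disk scsi0) to the new storage,
# either via the GUI ("Move disk") or on the CLI:
qm move-disk 100 scsi0 vm-librbd-storage
# (on older PVE releases the command is "qm move_disk")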
Do you happen to use KRBD for the VMs? With LVM on top of a mapped RBD?
What is the load of the machines and how saturated is the network?
Have you tried to switch to qemu+rbd (userspace RBD) for this VM?
Have you enabled compression on the RBD pool for the "vm-compression" storage?
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#inline-compression
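If not, something along these lines should do it, per the linked BlueStore docs (assuming the pool behind the "vm-compression" storage is also called "vm-compression" -- adjust the name if it differs):
ceph osd pool set vm-compression compression_algorithm lz4
ceph osd pool set vm-compression compression_mode aggressive
# verify the settings
ceph osd pool get vm-compression compression_algorithm
ceph osd pool get vm-compression compression_mode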
I do not think that it is possible to live migrate a container.
A container is just a set of processes running in separate namespaces on the host's kernel.
You cannot freeze those processes, transport them to another host, and unfreeze them there the way you can with a virtual machine.
AFAIK the IPv6 link-local address is assigned automatically by the Linux kernel whenever a new interface comes up.
Since the bridge interface "Servers" should only carry Ethernet frames to and from the VMs and not to the host, you could disable IPv6 on it entirely:
echo '1' >...
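For example (assuming the bridge's actual interface name on the host is vmbr1 -- replace it with whatever "Servers" maps to on your node):
# temporary, until the next reboot (vmbr1 is a placeholder for the real bridge device)
echo 1 > /proc/sys/net/ipv6/conf/vmbr1/disable_ipv6
# persistent across reboots
echo 'net.ipv6.conf.vmbr1.disable_ipv6 = 1' > /etc/sysctl.d/80-vmbr1-no-ipv6.conf
sysctl -p /etc/sysctl.d/80-vmbr1-no-ipv6.conf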
With replication size 3 you have a max usable capacity of 5.2 TB as this is the capacity of your smallest host.
Your OSDs are very unbalanced across the hosts, which is not good for such a small cluster.
This is why the OSDs in pve10 reach the nearfull ratio and refuse to take any more data.
You...
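To see how the data is actually distributed, you could check on any node:
# raw and per-pool usage
ceph df
# per-OSD utilization grouped by host -- this makes the imbalance visible
ceph osd df tree
# current nearfull/backfillfull/full ratios
ceph osd dump | grep ratio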
Do you run Ceph along with the VMs on the same hardware? The load may be too high for scrubs to start:
osd_scrub_load_threshold
Description: The maximum load. Ceph will not scrub when the system load (as defined by the getloadavg() function) is higher than this number. Default is 0.5.
Type: Float...
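A quick sketch of how to check and, if needed, raise it (the ceph config commands need a reasonably recent Ceph release; pick a threshold that matches your hardware):
# current load average on the node
cat /proc/loadavg
# current value of the option
ceph config get osd osd_scrub_load_threshold
# raise it, e.g. to 2.0, if the nodes are constantly above 0.5
ceph config set osd osd_scrub_load_threshold 2.0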
There is now a kernel patch available: https://lore.kernel.org/io-uring/8b7a8200-f616-46a8-bc44-5af7ce9b081a@kernel.dk/T/#u
OCFS2 seems to be unmaintained.