Slow performance on Ceph per VM

Hmm, okay. If you get roughly double the aggregate performance with 2 VMs running in parallel, the Ceph cluster itself is probably not the limit, so it is worth checking whether the VM configs can be improved.

One option is to switch from direct RBD access to KRBD (the host kernel maps the RBD image instead of QEMU connecting via librbd). To change this, edit the storage under Datacenter -> Storage and enable the KRBD checkbox. The change takes effect for a VM once it is fully stopped and started again, or once it is live migrated to another node.
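If you prefer the CLI, the checkbox corresponds to the krbd option of the RBD storage definition. As a rough sketch, the entry in /etc/pve/storage.cfg then looks something like this (the storage ID and pool name are placeholders, not taken from your setup):

    rbd: ceph-vm
            content images
            krbd 1
            pool vm-pool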

Also, use SCSI disks together with the VirtIO SCSI single controller instead of VirtIO Block.
This is a bit more involved, especially on Windows VMs when the boot disk is affected. The procedure is basically the same as described here: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows
Attach a dummy disk with the SCSI bus type first and wait until Windows detects it before you attempt to switch the boot disk.
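Just as a rough sketch of the steps on the CLI (VM ID 100, the storage ceph-vm and the volume names are only examples, not your actual config; the controller change itself only applies after a full stop/start):

    # switch the SCSI controller type and attach a small dummy SCSI disk
    # so Windows detects the new bus and loads the driver
    qm set 100 --scsihw virtio-scsi-single
    qm set 100 --scsi1 ceph-vm:1

    # once Windows has seen the disk: shut the VM down, then detach the old
    # virtio-blk boot disk (the volume shows up as "unused" afterwards)
    qm set 100 --delete virtio0

    # reattach the same volume on the SCSI bus and fix the boot order
    qm set 100 --scsi0 ceph-vm:vm-100-disk-0
    qm set 100 --boot order=scsi0

    # remove the dummy disk again once the VM boots fine
    qm set 100 --delete scsi1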
Hello Aaron,
I want to apply these changes to my Proxmox Cluster, but I don't have a test environment, and some of my VMs have disks between 6 and 10 TB.

How risky are the following changes in a production setup?
  • Enable the KRBD checkbox for the Ceph storage
  • Switch from VirtIO Block to SCSI with the VirtIO SCSI single controller
  • Change the Async IO setting from the default (io_uring) to threads (see the example below)
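If I understood the Async IO part correctly, it ends up as an aio=threads option on each disk line of the VM config, something like this (VM ID, storage and volume name are just placeholders, not my actual setup):

    # /etc/pve/qemu-server/100.conf
    scsihw: virtio-scsi-single
    scsi0: ceph-vm:vm-100-disk-0,aio=threads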


I see that I have to stop the VMs and detach the disks, so I will experience downtime, which is fine for me.
However, I would like to know:
  • How high are the risks?
  • What happens to the VMs that I don’t change immediately?
  • Will there be any performance inconsistencies if I migrate only some VMs at a time?
  • I will need time to reconfigure all of them. Do you see any potential issues with a step-by-step migration?
Thanks in advance for any help!
Best regards