Hi all, I'm facing a strange problem. I'm running the latest Proxmox with a Ceph storage backend (SSD only), a 10 Gbit network, KVM virtualization, and CentOS in the guest.

When I create a fresh VM with 10 GB of attached Ceph storage (cache disabled, virtio drivers), fio reports roughly these speeds:

READ: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1209MiB (1267MB), run=10552-10552msec
WRITE: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=518MiB (543MB), run=10552-10552msec

After resizing the storage to 100 GB (I only resize the attached image in the Proxmox interface; I do not touch the filesystem or partition table, so inside the guest there is still a 10 GB partition), the fio benchmark drops to:

READ: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=504MiB (529MB), run=24359-24359msec
WRITE: bw=9039KiB/s (9256kB/s), 9039KiB/s-9039KiB/s (9256kB/s-9256kB/s), io=215MiB (225MB), run=24359-24359msec

No other changes were made to the system (no reboot, etc.). Proxmox is running in test mode, and no other VMs affect cluster performance (there is no other workload).

Thank you for your tips and advice.
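For anyone who wants to reproduce the numbers, a fio job roughly like the one used would look like this. The block size, queue depth, read/write mix, and target device here are illustrative assumptions, not necessarily the exact parameters behind the results above:

```ini
; Sketch of a mixed read/write fio job against the Ceph-backed virtio disk.
; All values below are assumptions for illustration.
[global]
ioengine=libaio      ; async I/O, typical for Linux guest benchmarks
direct=1             ; bypass the guest page cache
size=1G              ; total I/O per job (run length varies with throughput)
group_reporting

[mixed-rw]
filename=/dev/vdb    ; assumed device name of the attached Ceph disk
rw=randrw            ; mixed random reads and writes
rwmixread=70         ; assumed 70/30 read/write split
bs=4k
iodepth=16
```

Running the same job file before and after the resize keeps the comparison apples-to-apples.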