Hello! The title says it all, but here are the details:
Host: Proxmox VE 6.3-6
Guest: Ubuntu Server 18.04.2 LTS
SSD: ADATA XPG SX8200 Pro 2 TB (two in ZFS mirror)
ZVOL: 90 GB, thin-provisioned, sync=disabled
When I benchmark this ZVOL directly on the host, I get close to the maximum performance this SSD can actually provide.
But when I run the same fio test inside the VM on the very same ZVOL (attached to the VM as a data disk, and with cache=unsafe!), I get roughly three times worse results.
So I don't know what to do... The problem definitely lies in the virtualization layer, but where? I even tried both VirtIO and SCSI bus types.
---
Here below benchmarks (fio config and result):
1) On the Host:
fio --name RWRITE --rw=randwrite --filename=/dev/zvol/apool/vm-147-disk-x --size=4g --blocksize=4k --iodepth=1 --numjobs=1 --ioengine=posixaio
IOPS=72.5k, BW=283MiB/s
2) Inside VM:
fio --name RWRITE --rw=randwrite --filename=/dev/sda --size=4g --blocksize=4k --iodepth=1 --numjobs=1 --ioengine=posixaio
IOPS=18.9k, BW=73.9MiB/s
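For anyone reproducing this, here is the same workload as a fio job file, so both runs use identical parameters (volume path as above). The second job is just an extra sanity check I have not run yet: buffered posixaio and O_DIRECT libaio can behave very differently, so it might help separate page-cache effects from the virtualization overhead.

```ini
; RWRITE job, same parameters as the command lines above
[global]
rw=randwrite
size=4g
blocksize=4k
iodepth=1
numjobs=1
ioengine=posixaio

[buffered-run]
filename=/dev/zvol/apool/vm-147-disk-x

; optional sanity check: bypass the page cache with O_DIRECT
; (untested here - results may differ a lot from the buffered run)
[direct-run]
stonewall
ioengine=libaio
direct=1
filename=/dev/zvol/apool/vm-147-disk-x
```

Inside the VM, `filename` would be `/dev/sda` instead.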