We are currently evaluating Proxmox with the intention of switching from VMware ESXi. We have been using sysbench within a VM to test performance. All is fine except for random disk reads and writes, which are significantly slower.
Machine specs:-
Lenovo SR630
RAID bus controller: Broadcom / LSI MegaRAID Tri-Mode SAS3508
4x Lenovo 960GB SSDs configured as RAID-5. These are read-intensive drives rated at 5 full drive writes per day for 5 years.
Testing commands :-
sysbench fileio --file-total-size=4G --file-test-mode=rndrw prepare
sysbench fileio --file-total-size=4G --file-test-mode=rndrw run
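As a cross-check with a second tool, an fio run that roughly mirrors sysbench's fileio defaults (16 KiB blocks, about a 60/40 read/write mix, an fsync every 100 writes, buffered synchronous I/O) might look like the following; the test directory is just an example:
mkdir -p /var/tmp/fio-test
fio --name=vm-rndrw --directory=/var/tmp/fio-test --rw=randrw --rwmixread=60 --bs=16k --size=4G --ioengine=psync --fsync=100 --runtime=60 --time_based --group_reporting
If fio inside the VM shows the same collapse as sysbench, that points at the virtualization layer rather than at the benchmark itself.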
VMware ESXi 8.0 :-
File operations:
reads/s: 8109.14
writes/s: 5406.03
fsyncs/s: 17308.89
Throughput:
read, MiB/s: 126.71
written, MiB/s: 84.47
Proxmox 8.2.2 host :-
File operations:
reads/s: 7155.38
writes/s: 4770.25
fsyncs/s: 15269.00
Throughput:
read, MiB/s: 111.80
written, MiB/s: 74.54
Ubuntu 24.04.1 LTS VM :-
File operations:
reads/s: 420.16
writes/s: 280.04
fsyncs/s: 903.12
Throughput:
read, MiB/s: 6.56
written, MiB/s: 4.38
The VM is configured to use VirtIO SCSI single as the SCSI controller.
I have tried different VM disk cache and Async IO options with little difference; at best the read throughput goes up to about 8 MiB/s.
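For reference, the kind of settings being varied live in the VM's config under /etc/pve/qemu-server/ and look like this (the VM ID, storage name, and option values here are illustrative, not a recommendation):
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,cache=none,aio=native,iothread=1,ssd=1
The iothread=1 flag only takes effect with the VirtIO SCSI single controller, which is why that controller was chosen.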
Does anyone know why random disk I/O would be so much worse within the VM compared to the host itself?
For most things it won't be a problem, but for our bigger and busier databases it certainly will, so I would like to improve this if possible.
Thanks