Disk IO rate comparison with VMware VMs on NVMe SSD

kunwarakanksha

New Member
May 30, 2025
We are testing the IO rate of Proxmox VMs against VMware VMs to make sure our environment can be hosted on Proxmox without added latency. We compared the disk IO rate using the fio tool on Ubuntu, run on one Proxmox VM and one VMware VM, and got the following results (a representative fio invocation is sketched after the table):

Metric                Test Scenario              Proxmox VM   VMware VM
IOPS (kIOPS)          Random Write (4k block)    15.4         103
Throughput (MiB/s)    Random Write (4k block)    60.1         403
Latency Avg (ms)      Random Write (4k block)    33.05        4.94
Latency 99th % (ms)   Random Write (4k block)    39.06        11.21
System CPU (%)        Random Write (4k block)    10.01        11.86
Disk Util (%)         Random Write (4k block)    99.88        99.91
IOPS (kIOPS)          Random Read (4k block)     156          292
Throughput (MiB/s)    Random Read (4k block)     611          1139
Latency Avg (ms)      Random Read (4k block)     3.27         1.76
Latency 99th % (ms)   Random Read (4k block)     16.45        3.1
System CPU (%)        Random Read (4k block)     21.07        82.91
Disk Util (%)         Random Read (4k block)     99.9         99.93
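
For reference, the 4k random write scenario above can be reproduced with an fio command along the lines of the sketch below; the test file path, size, queue depth, job count, and runtime are illustrative assumptions, not the exact job behind the table (switching --rw=randwrite to --rw=randread gives the read case):

# 4k random write against the XFS mount, direct IO; parameters are examples only
$ fio --name=randwrite-4k --filename=/ESHotStore/fio-test.bin --size=10G \
      --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
      --direct=1 --runtime=120 --time_based --group_reporting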


For random read and write IO at larger block sizes (we also tested a 1K block size, but the random read IO rates are still not satisfactory) we are not getting comparable results. Is there any configuration we are missing, or are these the final results? According to these numbers, VMware is clearly the winner for a production environment. Kindly help; we want to move to Proxmox, but such results are making us doubtful, while a Blockbridge article (https://kb.blockbridge.com/technote...) clearly says that both perform the same, yet according to our results they do not.

NOTE: The underlying server and hardware configuration is the same for both the Proxmox and VMware hypervisors.
 
Hi @kunwarakanksha ,

The performance numbers you're seeing do seem quite low. My guess is that there is a misconfiguration somewhere. Even the results on VMware appear to be underperforming, but that could be down to limitations of your storage.

Could you provide more details about your setup?
- What type of storage are you using?
- How is the storage configured?
- How is the VM configured?

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
We cannot use shared storage, as shared storage adds latency to the IO path and our application needs the data at a faster rate, so we have to preserve time at each step.

- What type of storage are you using?
Local storage on the physical server (NVMe SSD); we mounted the folder on this SSD.

- How is the storage configured?
Images are attached to show how we have configured the storage:
Screenshot from 2025-06-11 09-49-23.png

Screenshot from 2025-06-11 09-49-33.png

- How is the VM configured?
Here sdb1 is the disk we assigned to our VM using the above configuration, and /ESHotStore is then mounted on it with an XFS filesystem.
# Create an XFS filesystem on the disk, label it, and mount it
$ mkfs.xfs /dev/sdb1
$ xfs_admin -L /ESHotStore /dev/sdb1
$ mount /dev/sdb1 /ESHotStore

$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0  63.3M  1 loop /snap/core20/1822
loop1                       7:1    0 111.9M  1 loop /snap/lxd/24322
loop2                       7:2    0  49.8M  1 loop /snap/snapd/18357
sda                         8:0    0   320G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0     2G  0 part /boot
└─sda3                      8:3    0   318G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 317.9G  0 lvm  /
sdb                         8:16   0    11T  0 disk
└─sdb1                      8:17   0    11T  0 part /ESHotStore
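
For reference, the hypervisor-side settings of this virtual disk (bus type, cache mode, aio backend, iothread) can be dumped on the Proxmox host; a minimal sketch, where 100 is a placeholder VMID:

# Run on the Proxmox host; 100 is an example VMID, not the actual one
$ qm config 100 | grep -E 'scsi|virtio|sata|ide'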
 
We cannot use shared storage, as shared storage adds latency to the IO path and our application needs the data at a faster rate, so we have to preserve time at each step.
Based on your results with both VMware and Proxmox VE, enterprise shared storage would deliver better performance than your current local disk setup.

It looks like you're using local disks with a filesystem layer, and then placing QCOW images on top, which is far from ideal for performance-oriented environments.

Assuming you're working with enterprise-grade disks (can you confirm?), I’d recommend switching to a block storage configuration and re-running your benchmarks. Also, start by benchmarking the hypervisor layer itself to isolate bottlenecks.
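
As a starting point, a host-level fio read test against the raw NVMe device (or a scratch logical volume) takes the guest stack out of the picture; a minimal sketch, where the device path, queue depth, and runtime are assumptions:

# Run directly on the Proxmox host against a scratch device; read-only to avoid touching data
$ fio --name=host-randread-4k --filename=/dev/nvme0n1 --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
      --runtime=60 --time_based --group_reporting --readonly

If the host-level numbers are close to the drive's rated performance, the gap is more likely in the VM disk configuration (bus, cache, aio, iothread) than in the hardware.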

Here are a couple of performance-related articles we’ve published that may help:

https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage
https://kb.blockbridge.com/technote/proxmox-aio-vs-iouring/index.html



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I don't think the OP uses qcow2, as "Format" is greyed out to raw, which confirms it's already block-based.
You are right, I misread the screenshot

@kunwarakanksha When I asked about storage configuration, the question was about Storage Pool, not how the disk is added to the VM.



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@kunwarakanksha : is it ZFS or LVM-thin?
Is the server's BIOS power/energy profile set to Maximum Performance?
Hey @_gabriel and @bbgeek17, we have configured the storage as LVM in Proxmox, as our disk was already set up in a RAID-0 layout with NVMe SSD media. The BIOS server power/energy profile is set to Maximum Performance (images attached for reference).
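
For reference, the same details can be cross-checked from the Proxmox host shell; a minimal sketch, where the device names are examples and the cpufreq path assumes a scaling governor is exposed:

$ pvesm status                      # storage type as Proxmox sees it (lvm, lvmthin, zfspool, ...)
$ lvs -a                            # logical volumes backing the VM disks
$ lsblk -o NAME,TYPE,SIZE,ROTA      # device layout; ROTA=0 means non-rotational (SSD/NVMe)
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # ideally "performance"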
 

Attachments

  • Screenshot from 2025-06-12 14-19-00.png
  • Screenshot from 2025-06-12 14-19-33.png
  • Screenshot from 2025-06-13 10-04-21.png