[SOLVED] Unusually slow IO on a PCIe SSD

Vladimir Bulgaru

Active Member
Jun 1, 2019
Moscow, Russia
Hello, everyone!

My PCIe SSD runs very slowly inside the guest Windows VM; it performs like a very, very slow SSD. I can't seem to find the cause. Maybe you can help me out?

I was exploring options for making Windows VMs run extremely fast on top of Proxmox. One of the solutions I wanted to test was a Fusion-io ioDrive2 PCIe SSD, which should be capable of very good IOPS and quite decent throughput.

The problem
The read/write speed, tested with CrystalDiskMark from within the guest OS, is around 120-170 MB/s. The PCIe SSD contains only one guest VM, and the Proxmox root is on another drive, so essentially all of the bandwidth should be available, yet something seems to throttle the speed.

Best guesses
I am not very familiar with all the settings out there, but these are the possible causes I can think of:
  1. Proxmox emulates a controller (the SCSI Controller Type) through which the guest OS passes data to the physical drives. It may not be designed for sending data straight to a PCIe SSD.
  2. The VM hard disk is poorly configured: the Discard and SSD emulation flags and the cache setting may not be optimal for a PCIe SSD.
  3. The speed of the VM depends on the speed of the Proxmox OS. Having the Proxmox root on a slower drive (currently ZFS RAID10 on 10K SAS HDDs) may hold back the guest OS, even if the guest OS itself is on a fast PCIe SSD.
  4. There may be a bottleneck in how much data can pass through a PCIe lane via the hypervisor (unlikely).
  5. There may be a bottleneck related to guest OS RAM and CPU cores (the test was done on a VM with 4/8 GB ballooning RAM and 4 cores; the physical CPUs are two E5-2630 v2).
  6. The hardware is old: a Dell R620 12th-generation server.
Please help me out if you have experienced something similar, or if you know how to integrate PCIe SSDs into Proxmox setups.
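Regarding guess 2: for what it's worth, this is the disk configuration I would normally aim for with a fast local SSD. A sketch of the relevant lines from the VM config file; the VMID (101) and storage name (fio-lvm) are placeholders, not from my actual setup:

```
# Hypothetical fragment of /etc/pve/qemu-server/101.conf
# (VMID 101 and storage "fio-lvm" are made up; adapt to your own setup)
scsihw: virtio-scsi-single
scsi0: fio-lvm:vm-101-disk-0,cache=none,discard=on,iothread=1,ssd=1
```

Here cache=none avoids double caching on the host, iothread=1 gives the disk its own I/O thread, and discard/ssd pass TRIM and the SSD hint through to the guest.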

Update 1:
I've benchmarked the PCIe SSD from within the Proxmox OS. Here is the output (it looks very good):
hdparm -Tt /dev/fioa
 Timing cached reads:   17514 MB in  1.99 seconds = 8782.40 MB/sec
 Timing buffered disk reads: 3582 MB in  3.00 seconds = 1193.62 MB/sec
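Note that hdparm mostly measures sequential buffered reads, so it says little about random IOPS, which is where these cards shine. If anyone wants to reproduce this, a fio job file along these lines would give a fuller picture (the job names are made up; /dev/fioa is the ioDrive2 device from above; this sketch sticks to randread because write tests against a raw device destroy its data):

```
; randread.fio -- hypothetical fio job for benchmarking the host-side device
[global]
ioengine=libaio     ; async I/O, needed to reach real queue depth
direct=1            ; bypass the page cache
runtime=30
time_based
group_reporting

[randread-4k]
filename=/dev/fioa  ; the ioDrive2 device node
rw=randread
bs=4k
iodepth=32
numjobs=4
```

Run it with `fio randread.fio` and compare the reported IOPS against the card's spec sheet.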

Update 2:
The same benchmark run within a guest Ubuntu OS (so the hypervisor and hardware do not seem to be the issue):
hdparm -tT /dev/sda
 Timing cached reads:   12704 MB in  1.99 seconds = 6369.66 MB/sec
 Timing buffered disk reads: 2664 MB in  3.00 seconds = 887.84 MB/sec

Update 3 (and final):
As is often the case, the simplest explanation is the best. The issue was the lack of VirtIO SCSI drivers for the disk. During the setup I was confused by the separate drivers for the controller and for the disk itself, and had installed only the ones for the controller. After adding the disk drivers, the speed went up 5-8x. Here's the link, so that you don't end up wasting time like I did: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#Setup_Steps
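For anyone hitting the same thing: a quick sanity check from the host is to look at the scsihw and scsiN lines of the VM config. A sketch (on a real Proxmox host you would pipe `qm config <vmid>` into the grep instead of this here-doc; the VMID, storage, and MAC below are made up):

```shell
# Sketch: filter the controller/disk lines out of a VM config.
# On a Proxmox host, replace the here-doc with: qm config 101
cat <<'EOF' | grep -E '^(scsihw|scsi[0-9])'
scsihw: virtio-scsi-single
scsi0: fio-lvm:vm-101-disk-0,cache=none,discard=on,iothread=1,ssd=1
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
EOF
```

This prints only the scsihw and scsi0 lines. If scsihw shows a VirtIO SCSI variant, the remaining question is whether the guest has the matching disk driver; inside Windows, the disk should sit under a Red Hat VirtIO SCSI controller in Device Manager.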