NVMe performance inside the guest OS

Shadow2091

Member
Sep 25, 2019
Hello.
I assembled an mdadm RAID0 array from two Samsung 970 EVO Plus NVMe SSDs, created an LVM VG on it, and passed a thick-provisioned LV to a CentOS 8 virtual machine as its disk.
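For reference, the array and VG were set up roughly along these lines (a sketch only; the device names and LV size are assumptions, not my exact commands):
Bash:
# Stripe both NVMe drives into a RAID0 md array
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# Layer LVM on top and carve out a thick-provisioned LV for the VM disk
pvcreate /dev/md0
vgcreate nvme_vg /dev/md0
lvcreate -L 500G -n vm-disk nvme_vg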
On the hypervisor, this RAID delivers about 7 GB/s read throughput.
When I test inside the guest OS with:
Bash:
fio --readonly --name=onessd \
    --filename=/dev/sdc \
    --filesize=100g --rw=randread --bs=4m --direct=1 --overwrite=0 \
    --numjobs=3 --iodepth=32 --time_based=1 --runtime=30 \
    --gtod_reduce=1 --group_reporting
I get a maximum performance of about 6.8 GB/s, which is good.
But if I use standard read tools inside the VM without the --direct flag (pv, cp, dd), read performance drops to about 2.2 GB/s.
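For reference, dd can be switched between buffered and direct reads like this (same device as in the fio test above; the count is illustrative), which should reproduce the gap:
Bash:
# Buffered read through the guest page cache (this is the ~2.2 GB/s path)
dd if=/dev/sdc of=/dev/null bs=4M count=2560
# Direct I/O read that bypasses the guest page cache
dd if=/dev/sdc of=/dev/null bs=4M count=2560 iflag=direct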
What do I need to configure so that the VMs always access the disk directly in order to get the best speed?
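In case it helps, I assume the relevant knob is the per-disk cache mode on the Proxmox side; a minimal sketch, assuming VM ID 100 and a storage named nvme-lvm (both made up):
Bash:
# Hypothetical: attach the VM disk with host-side caching disabled (cache=none uses O_DIRECT on the host)
qm set 100 --scsi1 nvme-lvm:vm-100-disk-1,cache=none
Though as I understand it, cache=none only controls host-side caching, so cp and dd inside the guest would still go through the guest page cache.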
 
