Hello.
I assembled an mdadm RAID0 array from two Samsung 970 EVO Plus NVMe SSDs, created an LVM VG on it, and handed a thick-provisioned LV to a CentOS 8 virtual machine as its disk.
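For context, the storage stack looks roughly like this (a sketch with assumed device names and LV size, not the exact commands I ran):
Bash:
# Assumed device names and size; the real layout may differ.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
pvcreate /dev/md0
vgcreate vg_nvme /dev/md0
lvcreate -L 500G -n lv_guest vg_nvme    # thick (fully pre-allocated) LV
# /dev/vg_nvme/lv_guest is then attached to the CentOS 8 guest as its virtual disk.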
On the hypervisor, this RAID delivers about 7 GB/s read performance.
When I test inside the guest OS with:
Bash:
fio --readonly --name=onessd \
--filename=/dev/sdc \
--filesize=100g --rw=randread --bs=4m --direct=1 --overwrite=0 \
--numjobs=3 --iodepth=32 --time_based=1 --runtime=30 \
--gtod_reduce=1 --group_reporting
With --direct=1 I get a maximum of about 6.8 GB/s, which is good. But if I use the standard read tools inside the VM, which do buffered rather than direct I/O (pv, cp, dd), read performance is only about 2.2 GB/s.
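For example, the gap shows up even with plain dd inside the guest; a sketch (the 25 GiB read size is an arbitrary amount, and iflag=direct is GNU dd's way of requesting O_DIRECT):
Bash:
# Buffered read, the default for cp/pv/dd: data goes through the guest page cache.
dd if=/dev/sdc of=/dev/null bs=4M count=6400 status=progress
# The same read with O_DIRECT, bypassing the guest page cache (the analogue of fio's --direct=1).
dd if=/dev/sdc of=/dev/null bs=4M count=6400 iflag=direct status=progress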
What do I need to configure so that the VMs always access the disk directly in order to get the best speed?
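For example, if the relevant knob is on the hypervisor side and the stack is libvirt/QEMU (an assumption on my part; placeholder names below), is the disk cache mode what I should be changing?
Bash:
# Hypothetical libvirt example; "myvm" is a placeholder domain name.
virsh edit myvm
# ...then, in the <disk> element backed by the LV, set the driver line to:
#   <driver name='qemu' type='raw' cache='none' io='native'/>
# cache='none' makes QEMU open the backing LV with O_DIRECT on the host,
# and io='native' uses Linux AIO instead of the userspace thread pool.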