Raw LVM partition - 4k random reads just 15% of native performance

marcosscriven
I have a WD SN850 that shows 3700MB/s random reads and 3000MB/s random writes.

I'm using a 64GB LVM partition (no snapshots) on that drive within a Windows VM, and the same test shows only 540MB/s read and 530MB/s write.
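(For reference, the numbers above are from a 4k random read/write test; something like the fio runs below should reproduce it. The block size, queue depth and job count here are just example values, and the device path / file path are specific to my setup.)

Code:
# On the Proxmox host, directly against the LV (read-only test, example parameters)
fio --name=host-randread --filename=/dev/fast/vm-101-disk-0 --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=30 \
    --time_based --readonly --group_reporting

# Inside the Windows guest, against a file on the virtual disk
fio --name=guest-randread --filename=C\:\fio.tmp --size=4G --direct=1 ^
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=30 ^
    --time_based --group_reporting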

I've tried the 'IO thread' option, but that made only a very small improvement. I'd prefer not to turn on caching yet, as that just masks the problem.
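(For completeness, the 'IO thread' toggle just ends up as options on the disk line of the VM config. Mine looks roughly like the lines below; the storage name and size are from my box, and cache=none is the default as far as I know.)

Code:
# /etc/pve/qemu-server/101.conf (disk-related lines only)
scsihw: virtio-scsi-single
scsi0: fast:vm-101-disk-0,cache=none,iothread=1,size=64G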

Sequential reads/writes are at 98% of native, so I'm guessing there's some severe latency issue going on somewhere.
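(If anyone wants to check the latency theory directly, a per-request probe with ioping against the host LV is a quick way to see the baseline; the flags below are just an example, with -D forcing direct I/O so the page cache doesn't hide anything.)

Code:
# On the host: 20 direct 4k reads straight from the LV
ioping -D -s 4k -c 20 /dev/fast/vm-101-disk-0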

Here's my startcmd:

Code:
/usr/bin/kvm \
  -id 101 \
  -name Windows10 \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/101.pid \
  -daemonize \
  -smbios 'type=1,uuid=9e10b180-29a7-4505-a672-273d1cbac184' \
  -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
  -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=131072,file=/dev/fast/vm-101-disk-1' \
  -smp '24,sockets=1,cores=24,maxcpus=24' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vga none \
  -nographic \
  -no-hpet \
  -cpu 'kvm64,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vendor_id=proxmox,hv_vpindex,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep' \
  -m 16000 \
  -object 'iothread,id=iothread-virtioscsi0' \
  -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
  -device 'vmgenid,guid=50eb75dd-2978-439a-950a-c921522cf6c0' \
  -device 'nec-usb-xhci,id=xhci,bus=pci.1,addr=0x1b' \
  -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \
  -device 'vfio-pci,host=0000:07:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' \
  -device 'vfio-pci,host=0000:07:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' \
  -device 'usb-host,bus=xhci.0,hostbus=3,hostport=1.1,id=usb0' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:8a3cdd8d25e' \
  -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' \
  -drive 'file=/dev/fast/vm-101-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' \
  -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' \
  -device 'e1000,mac=CE:20:AE:69:94:A7,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=101' \
  -rtc 'driftfix=slew,base=localtime' \
  -machine 'type=pc-q35-5.2+pve0' \
  -global 'kvm-pit.lost_tick_policy=discard'
 
I have this issue too. Did you ever solve it? It's making me hesitate to hit purchase on a server for work. Luckily I only need SATA SSD RAID 10 performance for the app, and this is just about at that level, but it's still pathetic for a PCIe 3 NVMe to be this crippled.

What version of Proxmox were you testing this on? I'm on 6.4. I get the feeling they improved this in later releases.

I also found that if you have the controller set to virtio-scsi-single, you can create multiple drives on the same NVMe LVM pool, pass two of them through to your guest, and stripe them there, and you'll get better performance. It goes to roughly double.
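(Roughly what I mean, in case it's unclear; the sizes and storage name below are placeholders, and the striping itself is done inside Windows, e.g. with Disk Management or Storage Spaces.)

Code:
# One SCSI controller (and IO thread) per disk
qm set 101 --scsihw virtio-scsi-single
# Two extra disks carved out of the same LVM pool
qm set 101 --scsi1 fast:32,iothread=1
qm set 101 --scsi2 fast:32,iothread=1
# Then stripe scsi1 + scsi2 into a single volume inside the guest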

Which, to me, definitely points to something with threading or how the disk is presented to the guest.

Because with the same physical I/O split into two, it's clearly not the physical drive slowing it down; it's the interface to the VM.


I also tested mine with straight PCIe passthrough, and it performed how you would expect in the guest (like native).
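(For comparison, the passthrough test was just handing the whole NVMe controller to the guest, something along these lines; the PCI address below is an example and will differ on your machine.)

Code:
# Find the NVMe controller's PCI address
lspci -nn | grep -i nvme
# Pass the whole controller through to the VM (q35 machine type, so pcie=1)
qm set 101 --hostpci1 0000:01:00.0,pcie=1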