[SOLVED] CPU arguments cut passed-through NVMe performance by 4x

hawxxer

Member
Jul 19, 2023
Hi Forum,
I hope someone can help me identify my current issue. I followed a tutorial from this forum about working around EasyAntiCheat.

I found that using args: -cpu host,-hypervisor fixes my issues with EasyAntiCheat, but at the cost of a hard performance hit.

I checked my CPU, GPU and RAM speed with benchmarks and saw no performance difference with that line appended to or removed from my "/etc/pve/qemu-server/100.conf". What I did notice is that my PCIe 4.0 NVMe drops from 8000 MB/s to only 2000 MB/s in CrystalDiskMark when those flags are enabled. Removing the -hypervisor part and keeping only args: -cpu host causes the same performance hit.

It looks like this argument overrides the Proxmox default CPU configuration, which I can check with qm showcmd 100 --pretty:
-cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt'
So what I did was add all the default arguments to my custom line:
args: -cpu host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,-hypervisor
-> Still bad performance

Doing:
args: -cpu host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt
-> Normal, good performance, but the game does not work.
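
For reference, the override can also be toggled from the shell instead of editing the conf file by hand (a minimal sketch, using VM 100 as above):
Code:
# set the custom CPU arguments (this overrides the generated -cpu line)
qm set 100 --args '-cpu host,-hypervisor'

# drop the override again and fall back to the Proxmox defaults
qm set 100 --delete args

# inspect the -cpu line QEMU will actually be started with
qm showcmd 100 --pretty | grep -- '-cpu'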

Can someone explain what is happening here, and why my PCIe-passthrough NVMe takes such a big hit when I define the args myself?

Here is my 100.conf file; the NVMe I pass through is hostpci2. It does not share its IOMMU group with any other device (see the check after the config).
Code:
affinity: 0-11
args: -cpu host,-hypervisor,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0
cores: 12
cpu: host
efidisk0: local-zfs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:0a:00.3,pcie=1
hostpci1: 0000:08:00,pcie=1
hostpci2: 0000:01:00.0,pcie=1
hotplug: disk,network,usb
machine: pc-q35-10.1
memory: 24576
meta: creation-qemu=10.0.2,ctime=1762547116
name: Quantus
net0: virtio=3c:7c:3f:xx:xx:xx,bridge=vmbr0,firewall=1,tag=20
numa: 0
ostype: win11
scsi0: local-zfs:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=128G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=9a86bf75-1801-4c4c-a727-8xxxxxxxxxxxa5,manufacturer=QVNVUw==,base64=1
sockets: 1
tpmstate0: local-zfs:vm-100-disk-2,size=4M,version=v2.0
usb0: host=8087:0029,usb3=1
vga: none
vmgenid: 4e3dc43e-f8d5-485f-b4f0-01xxxxxxxxx8
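
For completeness, the isolation can be checked on the host like this (a quick sketch using the standard sysfs paths; 0000:01:00.0 is the NVMe from hostpci2 above):
Code:
# list every device that shares an IOMMU group with the passed-through NVMe
ls /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/

# or dump all IOMMU groups and their members
find /sys/kernel/iommu_groups/ -type l | sort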

Should further information be required, I will be happy to provide it.

Thank you very much!
Eric

-----------Edit-------
Hardware is:

AMD Ryzen 5800X3D
Asus ROG STRIX B550-I GAMING
32 GB DDR4
RTX 3090
SK hynix Platinum P41 (the drive to be passed through)
 
Thanks for the tip, but unfortunately it didn't help. I also tried kvm64 and x86-64-v3, but there was no difference. BTW, the system is a 5800X3D on an Asus ROG STRIX B550-I GAMING.
 
I don't want to go into detail about EAC.

We have not experienced significant performance impacts outside of hypervisor-protected code integrity (HVCI).

https://learn.microsoft.com/en-us/w...ed-protection-of-code-integrity?tabs=security

The impact in HVCI is due to KVM's lack of support for Intel MBEC/AMD GMET, which is by design.

*It will likely be added as a feature at some point.

https://lwn.net/Articles/1051782/

I use a host-type CPU and Windows 11, and as long as the Memory Integrity feature is disabled, I see no performance degradation even with Hyper-V installed.

*The first image shows Windows 11 installed with CPU type "host" and only the Memory Integrity feature disabled, yielding results that do not indicate any impact on the SN7100.

My settings (Intel Core Ultra 265K)

Code:
qm set vmid -args '-cpu host,hv_passthrough,-hypervisor,level=30,+vmx'
qm set vmid -cpu host,hidden=1,flags=+pdpe1gb
qm set vmid -bios ovmf
qm set vmid -machine pc-q35-10.1
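
For readers following along: those commands end up writing roughly the following entries into /etc/pve/qemu-server/<vmid>.conf (illustrative excerpt, not a complete config):
Code:
args: -cpu host,hv_passthrough,-hypervisor,level=30,+vmx
bios: ovmf
cpu: host,hidden=1,flags=+pdpe1gb
machine: pc-q35-10.1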

*Enabling Hyper-V in the guest while a virtual IOMMU (viommu) is active will cause a boot loop.

*I think vmx and svm could also be configured on the host, so I'm adding them.

I wrote this off the cuff, so I don't know if it'll work, but...

Code:
# AMD
qm set vmid -args '-cpu host,migratable=off,hv_passthrough,-hypervisor,hv-vendor-id=0123456789AB,level=16,+svm,invtsc=on'

# Intel
qm set vmid -args '-cpu host,migratable=off,hv_passthrough,-hypervisor,hv-vendor-id=0123456789AB,level=30,+vmx,invtsc=on'
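
Whichever variant is used, it is worth confirming what was stored and what QEMU will actually be launched with (a minimal sketch; replace vmid with the real VM ID):
Code:
# the stored override
qm config vmid | grep '^args'

# the -cpu line of the generated QEMU command
qm showcmd vmid --pretty | grep -- '-cpu'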


Edit: attached are results from enabling/disabling the Memory Integrity feature during earlier testing.
 

Attachments: IMG_0861.jpeg · 画像.jpeg
Thanks for the tips, your flags worked and Star Citizen is now running again with normal performance.

The parameter I could isolate is migratable=off. When I add that together with -hypervisor, my read speeds come back to 7000 MB/s. Even after googling its meaning, I am not quite sure why this parameter should change anything in that regard. Does anyone have an idea?
The currently working line is: args: -cpu host,-hypervisor,migratable=off


I also tried the following, and maybe that contributed as well:
Under Windows Security -> Device security -> Core isolation, Memory integrity is off (I also can't enable it, it simply fails to do so). I guess that is what you meant?

I updated my UEFI to the latest revision and reset all settings (besides enabling SVM) -> No difference

Tried a different Windows VM with the same drive passed through -> No difference

Updated the guest tools in the VM -> No difference

Updated my cmdline:
root=ZFS=rpool/ROOT/pve-1 boot=zfs nomodeset amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction -> No difference
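
For reference, a quick sanity check that the new cmdline and IOMMU settings are actually active (generic commands, nothing specific to this box):
Code:
# confirm the running kernel actually picked up the new parameters
cat /proc/cmdline

# confirm the AMD IOMMU is active and running in passthrough mode
dmesg | grep -i -e 'AMD-Vi' -e 'iommu'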

Only the migratable argument brings back the speed and lets me play Star Citizen in the VM with normal performance. Weird. Nevertheless, thank you very much! Kind of solved, but it would be nice to know why this parameter changes anything. In case it matters: the VMs I am using (both Windows VMs I tested) are clones of a Windows template I made so I don't have to set up Windows from scratch every time I try something new.
 
I just wanted to point out that, at least on my machine, the virtual machine doesn't experience any slowdown just from running with the host CPU type.

To keep live migration possible, the exposed CPU flags are likely restricted, so the configuration without migratable=off probably lacks some flags the guest needs.

migratable=off presumably means live migration isn't required, so all host flags are exposed to the guest.

Comparing the flags with lscpu would probably show the difference, but I haven't gone that far, because my machine isn't particularly slow even without that setting.
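
For anyone who wants to verify this, a rough way to compare what the guest actually sees, sketched for a Linux test guest (lscpu isn't available on Windows; the file names are just examples):
Code:
# boot the guest with args: -cpu host,-hypervisor and capture the visible CPU info
lscpu > /tmp/flags-default.txt

# switch to args: -cpu host,-hypervisor,migratable=off, reboot, capture again
lscpu > /tmp/flags-migratable-off.txt

# the extra features exposed by migratable=off show up in the Flags line of the diff
diff /tmp/flags-default.txt /tmp/flags-migratable-off.txt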
 