QEMU 9.0 available as of now

After another two weeks with no issues surfacing, we decided it was safe to move the QEMU 9.0 package to the enterprise repositories.

Version 9.0.2-3 (and also the earlier builds, for archival purposes) should be available there as of now.
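
For anyone who wants to double-check before pulling it in, a minimal sketch of how one might confirm and install the new build on a node (standard apt tooling; your repository configuration is assumed to be in place already):

# Refresh the package index and see which versions the repositories offer
apt update
apt-cache policy pve-qemu-kvm

# Pull in the new build (9.0.2-3 should show up as the candidate)
apt install pve-qemu-kvm

Keep in mind that already running VMs continue to use the previously installed QEMU binary until they are restarted or migrated.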
 
Thank you for the update, @t.lamprecht. We've been running PVE/QEMU 9 in parallel with PVE/QEMU 8 in our CI/CD since the original announcement, and it has been rock solid.


 
I'm a bit late to the party, but I see an issue with PCIe passthrough (Intel X710 NICs) and OpenBSD 7.6.

After updating to pve-qemu-kvm 9.0.2-3 (from 8.1.5-6), double-checking the PCIe device mappings, and restarting the VM, the X710 cards cannot be initialized anymore. dmesg on OpenBSD says:

"ixl0 at pci6 dev 16 function 0 "Intel X710 SFP+" rev 0x02: unable to map registers"

I'm running the latest Intel firmware, v9.52 (but had no luck with v9.50 either).

If I revert pve-qemu-kvm to 8.1.5-6 via

# apt upgrade pve-qemu-kvm=8.1.5-6

everything immediately runs fine again. No passthrough issues at all.

Were there any changes or are special flags needed with the move to QEMU 9.0?
 
Hi,
please share the output of qm config <ID> for the affected VM. Is there anything in the host's system log/journal?
 
please share the output of qm config <ID> for the affected VM.
agent: 1,type=isa
balloon: 0
boot: order=scsi0
cores: 8
cpu: host
hostpci0: 0000:02:00
hostpci1: 0000:85:00
hotplug: usb
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=8.0.2,ctime=1693738625
name: g01
numa: 1
numa0: cpus=0-7,hostnodes=1,memory=8192,policy=bind
ostype: l26
scsi0: datastore0:vm-102-disk-0,iothread=1,size=16G
scsihw: virtio-scsi-single
smbios1: uuid=7ba6b669-4214-47a7-9f6b-6e6962d27945
sockets: 1
vmgenid: 84ccd666-8d4f-ac1b-befb-1a590539e511


Is there anything in the host's system log/journal?
I didn't spot anything that wasn't already there with QEMU 8. The "Masking broken INTx support" message applies only to the X710, not to the Intel i350 NICs I also pass through; those are recognized without issues.

Oct 23 08:59:28 ahv qm[1002163]: start VM 102: UPID:ahv:000F3B91:00880A1F:67169ED0:qmstart:102:root@pam:
Oct 23 08:59:29 ahv systemd[1]: Started 102.scope.
Oct 23 08:59:29 ahv kernel: vfio-pci 0000:02:00.0: Masking broken INTx support
Oct 23 08:59:30 ahv kernel: vfio-pci 0000:02:00.1: Masking broken INTx support




UPDATE 09:31
I've found another error in the OpenBSD VMs when running with QEMU 9, although I don't see any drawbacks for OpenBSD VMs other than the one with the passed-through PCIe devices. Nonetheless, I can imagine it's related to my passthrough problem, since I see "mem address conflict" messages for different devices with QEMU 9.

Here is a short excerpt from dmesg on VM 102:

ppb0 at pci0 dev 28 function 0 "Red Hat PCIE" rev 0x00: apic 0 int 16
pci1 at ppb0 bus 1
1:0:0: mem address conflict 0x383800000000/0x800000
1:0:0: mem address conflict 0x383801000000/0x8000
1:0:1: mem address conflict 0x383800800000/0x800000
1:0:1: mem address conflict 0x383801008000/0x8000
ixl0 at pci1 dev 0 function 0 "Intel X710 SFP+" rev 0x02: unable to map registers
ixl1 at pci1 dev 0 function 1 "Intel X710 SFP+" rev 0x02: unable to map registers


The relevant BIOS changes (to separate the IOMMU groups) are in place.
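
In case it helps anyone double-check the same thing on their host, a small sketch (standard sysfs layout assumed) for listing which devices ended up in which IOMMU group:

# List every PCI device per IOMMU group; the X710 functions
# (0000:02:00.0 / 0000:02:00.1 here) should sit in their own group(s)
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done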


UPDATE 10:20
I've also verified the situation on another, older AMD EPYC host (Gen 2, Rome) with slightly different BIOS settings: same issue. With QEMU 9 I get a lot of "mem address conflict" messages and the X710 NICs are no longer initialized.
 
I can reproduce the mem address conflict messages. They appear after changes in SeaBIOS that also caused issues for 32-bit guests: https://mail.coreboot.org/hyperkitt...org/message/R7FOQMMYWVX577QNIA2AKUAGOZKNJIAP/

The question is whether that is the same root cause as the passthrough breakage, or whether the conflict messages are just a red herring.

A workaround is using less memory; e.g. with 2048 MiB I do not get the messages. Could you check whether that works for you too? If yes, and if it also fixes passthrough, that would be a good hint that it's the same root cause.

Another workaround would be using OVMF/UEFI instead of SeaBIOS.
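
For completeness, a rough sketch of how either workaround could be applied to the affected VM (using VM ID 102 from above; the storage name is only a placeholder, and the OVMF variant needs an EFI vars disk):

# Workaround 1: lower the VM's memory (2048 avoided the messages here)
qm set 102 --memory 2048

# Workaround 2: switch from SeaBIOS to OVMF/UEFI
# (an EFI disk is required; "datastore0" is just an example storage,
#  and switching firmware may affect how an existing guest boots)
qm set 102 --bios ovmf --efidisk0 datastore0:1,efitype=4m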
 
Thanks for checking.

I can confirm that lowering memory to 2048 MB fixes not only the mem address conflict messages, but also brings back my passed-through PCIe X710 NICs. Unfortunately I cannot run like this in production (OVMF is not an option either), so I'll stick to pve-qemu-kvm 8.1.5-6 until a fix has landed.
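
In case it is useful for others staying on the older build for now, one way (just a sketch, standard apt behavior) to keep a routine upgrade from pulling QEMU 9.0 back in is to hold the package:

# Pin the known-good build and prevent apt from upgrading it
apt install pve-qemu-kvm=8.1.5-6
apt-mark hold pve-qemu-kvm

# Later, once a fixed build has landed
apt-mark unhold pve-qemu-kvm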
 
Hello all, we also upgraded 2 nodes from 8.1.5-6 to 9.0.2-3. Since the upgrade we now see 80% SQL Server CPU usage instead of 30%. Any clue how to downgrade only the pve-qemu-kvm package back to 8.1.5-6?


Any help appreciated, thank you.
 
I thought you could just select the version per VM in the hardware options, or am I confusing this with something else?
That's the machine version used for the VM, which controls some of the (virtual) hardware layout and the like, not the version of the installed QEMU software package.
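
A quick sketch to illustrate the difference (standard PVE/Debian commands; the VM ID and version strings below are only examples):

# Per-VM machine version (what the hardware options dialog sets)
qm set <ID> --machine pc-i440fx-8.1   # pins this VM's virtual machine type/version

# Installed QEMU package on the node (what this update changes)
dpkg -l pve-qemu-kvm                  # e.g. 9.0.2-3
pveversion -v | grep pve-qemu-kvm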
 
Hi,
please share the VM configuration qm config <ID>. Do you mean CPU usage inside the VM or for the QEMU process on the host or both?
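
If it helps, a small sketch for checking the host side (the QEMU process for a VM runs as kvm and its PID file lives under the usual PVE path; both are assumptions about a default setup):

# CPU usage of VM 102's QEMU process as seen on the host
pid=$(cat /var/run/qemu-server/102.pid)
ps -o pid,pcpu,pmem,etime,comm -p "$pid"

# or watch it live
top -p "$pid"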
 
Hi Fiona,

The CPU usage inside the Windows Server 2019 VM shot through the roof. Also, in SSMS I saw massive peaks, and colleagues called in saying the ERP is super slow.

root@pve5:~# qm config 102
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=virtio0
cores: 4
cpu: host
description: SQL DB - kein reboot unter Tags%0A%0AFunktion%3A SQL Server
efidisk0: zfs:vm-102-disk-0,size=128K
machine: pc-i440fx-8.1
memory: 184320
meta: creation-qemu=8.1.5,ctime=1715797046
name: sql2-pve5
net0: virtio=00:50:56:96:4b:27,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win10
parent: autodaily241025000620
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=421674e2-4932-35d4-247a-a9e8beba28f2
sockets: 1
tags: vm_winsrv
virtio0: zfs:vm-102-disk-1,discard=on,iothread=1,size=150G
virtio1: zfs:vm-102-disk-2,discard=on,iothread=1,size=700G
virtio2: zfs:vm-102-disk-3,discard=on,iothread=1,size=100G
vmgenid: f47c0d0b-048b-4603-8b99-63f4eed5898c
 
Important update.

I've made an additional test with a new VM configured with OVMF (UEFI), disabled Secure Boot, and booted from cd76.iso. Here too I immediately see the mem address conflict messages, regardless of the amount of memory I configure (even with memory: 1024).

Now the interesting part: the issues also arise with pve-qemu-kvm 8.1.5-6, so there is a general problem with OpenBSD + UEFI + KVM.
 
Does passthrough work, or is it just the mem address conflict messages? I'm not very knowledgeable in this area, but I'm not sure these messages are necessarily bad; I get them even without a passthrough device. AFAIU, it's just where the 64-bit PCI MMIO window resides if SeaBIOS/UEFI enables it. Of course, if the registers for the passthrough device can't be mapped, that is an issue.
 
Does passthrough work or is it just the mem address conflict messages?
No. A test VM with UEFI + 2 GB memory + pve-qemu-kvm 8.1.5-6 shows the same messages

bridge mem address conflict
mem address conflict

and for the X710 devices
ixl0.....unable to map registers
ixl1.....unable to map registers
 
