QEMU 9.0 available as of now

I'm hitting another bug that is confirmed to go away when downgrading pve-qemu-kvm to <9:

https://forum.proxmox.com/threads/i...in-recent-versions-was-working-before.150727/

It affects importing disks and creating full clones (not linked clones) on a Ceph RBD pool, but may affect other operations that I haven't tested.
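For context, the kind of CLI operations affected would look roughly like this (the VM IDs, image path, and storage name are just placeholders for my setup):
Code:
# import an existing disk image into a VM, targeting the RBD storage
qm importdisk 100 /mnt/images/disk0.qcow2 ceph-rbd

# create a full (not linked) clone onto the RBD storage
qm clone 100 101 --full --storage ceph-rbd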

More reports that look the same/similar:
 
Hi,
Thank you for the report! I can reproduce the issue and will investigate. Turning on the krbd setting in the RBD storage definition should also be a workaround.
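If you want to try that, a minimal sketch of enabling krbd (assuming your RBD storage is named ceph-rbd; adjust the name to yours):
Code:
# switch the storage to the kernel RBD client instead of going through librbd in QEMU
pvesm set ceph-rbd --krbd 1

# equivalently, add "krbd 1" to the matching rbd section in /etc/pve/storage.cfg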

EDIT: It should be enough to downgrade to pve-qemu-kvm=9.0.0-4; the problematic commit is likely https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=b242e7f196acf53ef57a4a51539e4800a6e53cb4
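A rough sketch of downgrading and pinning the package until a fix is released (assuming 9.0.0-4 is still available in your configured repositories):
Code:
apt install pve-qemu-kvm=9.0.0-4
apt-mark hold pve-qemu-kvm    # keep apt from upgrading it again; release later with: apt-mark unhold pve-qemu-kvm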

EDIT2: a preliminary patch has been sent to the mailing list: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064511.html
 
FYI, the package pve-qemu-kvm=9.0.0-6 contains a fix for the "import/clone to RBD" issue and is currently available on the pvetest repository. If you'd like to install the package, you can temporarily enable that repository, run apt update, run apt install pve-qemu-kvm, and then disable the repository again (e.g. via the Repositories section in the UI) and run apt update once more.

To have a VM pick up the new version, you need to shutdown+start the VM, migrate it to an already upgraded node, or use the Reboot button in the UI (a reboot within the guest is not enough).
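For reference, a rough CLI equivalent of the above (the repository line and VM ID are assumptions for a PVE 8 / Debian Bookworm setup; the Repositories section in the UI works just as well):
Code:
# temporarily enable the pvetest repository
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
apt install pve-qemu-kvm

# disable the repository again and refresh the package index
rm /etc/apt/sources.list.d/pvetest.list
apt update

# restart a VM so it picks up the new QEMU binary (a reboot inside the guest is not enough)
qm shutdown 100 && qm start 100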

EDIT: now also available on the no-subscription repository.
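To double-check which version is installed and which repository a newer candidate would come from, for example:
Code:
pveversion -v | grep pve-qemu-kvm
apt policy pve-qemu-kvm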
 
Can confirm the patch worked for me.

I'm able to import a disk from the CLI as I used to do before upgrading to 9.0.0-4.

Thanks for the quick fix, well done!
 
Can also confirm that 9.0.0-6 fixes the issue on our side. Thanks very much for the extremely swift reaction and fix!
 
I posted this on the main board and a member recommended I add it to this thread:

After updating Proxmox VE, I am no longer able to successfully bring up Check Point Gaia virtual firewall appliances. These are enterprise firewall appliances running a hardened Red Hat 3.10 kernel. Prior to QEMU 9, there were no issues. After upgrading, I can deploy the appliance successfully and the first boot generally works just fine. As soon as I reboot the appliance it enters a crash and reboot loop that I am unable to interrupt. Basic hardware config:

- SeaBIOS
- Default LSI disk controller
- i440fx machine (Latest, also tried 8.2, 8.1, 8.0)

After downgrading pve-qemu-kvm from 9.0.0-6 to 8.1.5-6 with no other changes to the VM configuration, the issues vanished, and I was able to work as usual.
 
Hi,
Please share the VM configuration (qm config <ID>). Is there a publicly available ISO of the software for testing?

Please also test with pve-qemu-kvm=8.2.2-1 to see if the regression came between QEMU 8.1 and 8.2 or between QEMU 8.2 and 9.0.

If this is 32-bit, please try the workaround mentioned here (adding lm=off to the CPU argument) to see if it has the same root cause.
 
I also have problems upgrading from 8.1.5-6 to the latest 9.0.0-6 ... I have three Windows VMs (Win10, Win XP, ...) which run into a BSOD boot loop when starting with QEMU 9. Everything is fine after reverting to 8.1.5-6.

root@minimaster ~ # qm config 101
agent: 0
balloon: 0
boot: order=ide0;net0
cores: 4
cpu: host
ide0: local-zfs:vm-101-disk-0,discard=on,size=50G
machine: pc-i440fx-6.1,viommu=virtio
memory: 4096
meta: creation-qemu=6.1.1,ctime=1643979509
name: winxp
net0: e1000=1A:D8:DB:50:55:DC,bridge=vmbr0
numa: 0
onboot: 1
ostype: wxp
smbios1: uuid=450d0fbc-bdfd-4ccb-a1ca-4c0540b6efb5
sockets: 1
startup: order=6
vmgenid: 48779bf2-2c5f-4d7f-9570-7495ff94780b

Just tried QEMU 8.2.2 -> same result as with QEMU 9 -> BSOD at boot.
 
The mentioned 32-bit workaround did the job ... Win XP is running now on QEMU 9 ... sorry for not doing the homework :-)
 
Hi,

Details below. The ISOs are available to anyone with an account at checkpoint.com and come with a trial license. I can provide the ISO if needed.

I'm now seeing something very odd. Normally, I use a KVM QCOW2 image provided by Check Point for fast deployments. It's mainly designed for cloud deployments, but it has worked 100% reliably in Proxmox automated by Terraform -- I've been using it for almost exactly one year and have brought up dozens of test environments in that time, easily 100+ VMs. I can spin that up now with pve-qemu-kvm 8 and it works, where it didn't with pve-qemu-kvm 9. I've also brought up at least 20 instances using fresh installs from ISO.

However, if I try to boot from the ISO now, it doesn't work. It can see the media, but I just get "boot:" as if it can't find a bootable kernel. This works just fine on another Proxmox installation running QEMU 9 (though it exhibits the reboot issue as reported). If I upgrade the main Proxmox back to 9.0.0-6, I can boot the ISO, but the reboot issue occurs. If I downgrade to 8.1.5-6, the reboot issue disappears, but the ISO won't boot at all. The QCOW2 image boots either way, has no issues with 8.1.5-6 or 8.2.2-1, and fails with the reboot issue on 9.0.0-6.

Other ISOs work fine on either version, and no other VMs are having issues. I've already confirmed via checksum the ISOs are good. I'm at a bit of a loss.

VM config:

boot: order=scsi0;ide2;net0
cores: 8
cpu: host
ide2: iso:iso/Check_Point_R81.20_T631.iso,media=cdrom,size=4335904K
memory: 32768
meta: creation-qemu=8.1.5,ctime=1720806618
name: cust-mgmt
net0: virtio=BC:24:11:BD:38:24,bridge=vmbr30
numa: 0
ostype: l26
scsi0: vmpool:vm-103-disk-0,size=120G
smbios1: uuid=9d9fb94e-6762-4cb8-8497-cbb8cc712ae7
sockets: 1
tags: work
vmgenid: e75d0905-e97e-4f94-a908-a09264100139

--

I upgraded to 8.2.2-1 and the reboot issue did not return.

---

This is not 32-bit.

---

For fun, here is my pveversion as it stands:

root@vmhost:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.8-2
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
 
I used the following ISO https://support.checkpoint.com/results/download/124397 and pve-qemu-kvm=9.0.0-6, installed and rebooted, configured via Web UI, rebooted again, and it came up just fine for me.
Code:
root@pve8a1 ~ # sha256sum /var/lib/vz/template/iso/CP.iso
5da59f6422d871cc0624c8b3ab20bf015c494fad8a3c4817a69f684dcf3b4d4c  /var/lib/vz/template/iso/CP.iso

EDIT: Regarding the "Default LSI disk controller" -- oh, I had missed that. With that, I get an error while booting with pve-qemu-kvm=9.0.0-6, but it boots fine with pve-qemu-kvm=8.2.2-1.

Please try using VirtIO SCSI as a workaround. I'll try to investigate the issue with the default controller.
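A minimal sketch of switching the controller type from the CLI (assuming VM ID 103 from the config above; the scsi0 disk itself can stay as it is):
Code:
# change the SCSI controller from the default LSI to VirtIO SCSI
qm set 103 --scsihw virtio-scsi-pci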

EDIT2: proposed fix: https://lore.kernel.org/qemu-devel/20240715131403.223239-1-f.ebner@proxmox.com/T/#u
 
Thank you for your diligence, it's greatly appreciated. The earlier releases of Gaia (R81.10 and below) would not boot with the VirtIO SCSI controller, as they did not include a driver for it. I'm seeing all kinds of weirdness. The system has been rock-solid stable since I built it a year ago, and continues to be rock solid for everything but these Check Point virtual appliances -- unfortunately, these are the most important, as labbing firewalls is a critical part of my workload.

I booted the R81.20 ISO (same one you're using) this morning and had a kernel panic after the initial install. I upgraded again, removing the hold on the pve-qemu-kvm package to get 9.0.0-6, then deleted the VM and recreated it with VirtIO SCSI. That install went perfectly fine. The first-time configuration is running. I will continue to test to see if the reboot breaks anything once I've applied the latest hotfix to the CP appliance.
 
Confirmed that all issues are resolved on 9.0.0-6 using the VirtIO SCSI storage controller for the R81.20 release. I see a proposed fix was added to your edit and am happy to test it if desired, with all of the currently supported Check Point Gaia releases (R81, R81.10, R81.20). I would just need documentation on doing so.
 
Unfortunately, that would mean applying the patch and building QEMU yourself, so it's not super straightforward. If you would like to do it, start with https://git.proxmox.com/?p=pve-qemu.git;a=summary, then add the patch in debian/patches/pve and a line in debian/patches/series. Install the build dependencies with apt and build the package with make deb.
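A rough sketch of those steps (the clone URL is the one listed on git.proxmox.com; the patch filename is a placeholder, and the submodule and output-file steps are assumptions about how the build is laid out):
Code:
git clone git://git.proxmox.com/git/pve-qemu.git
cd pve-qemu
git submodule update --init --recursive        # assumption: QEMU sources are pulled in as a submodule

# drop the patch from the mailing list into the pve patch directory and register it
cp ~/lsi-fix.patch debian/patches/pve/         # placeholder filename
echo 'pve/lsi-fix.patch' >> debian/patches/series

# install the build dependencies from debian/control, then build and install the package
apt install devscripts equivs
mk-build-deps --install debian/control
make deb
apt install ./pve-qemu-kvm_*.deb               # assumption: the built .deb lands in the current directory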
 
Wondering when this is expected to go into pve-enterprise? I've been running it on my test box, a mix of Windows and Linux machines, since it became available on pve-no-subscription, and I don't think it's caused me any issues.
 
Yeah, the current QEMU 9.0 package revision has been pretty stable for a while, so we will probably move it to the enterprise repository over the next couple of weeks.
 
