QEMU 7.2 available on no-subscription as of now

t.lamprecht

Proxmox Staff Member
FYI: The next Proxmox VE point release, 7.4 (2023/H1), will default to QEMU 7.2. Internal testing for that version started in December, and it has been available on the pvetest repository since last Wednesday without any relevant issues coming up, but with a lot of positive feedback about odd bugs getting resolved.
Therefore, it was made available on the pve-no-subscription repository today.

To upgrade, ensure you have the Proxmox VE No-Subscription repository configured (it can be added through the Web UI's Repositories panel), and then use the standard:
Bash:
apt update
apt full-upgrade
or use the upgrade functionality of the web-interface.
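For reference, the entry added by the Repositories panel corresponds to an apt source line like the sketch below; this is only a manual alternative, assuming Proxmox VE 7 on Debian Bullseye, and the file name is just an example:
Bash:
# Manually add the pve-no-subscription repository (example file name;
# Proxmox VE 7.x is based on Debian Bullseye):
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list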

pveversion -v (or the web interface's Node Summary -> Packages versions) should then include something like pve-qemu-kvm: 7.2.0-5.
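A quick way to check just that package from the shell, for example:
Bash:
# Show only the QEMU package line from the version overview; the exact
# suffix may differ (7.2.0-5 or a later build).
pveversion -v | grep pve-qemu-kvm
# expected output along the lines of: pve-qemu-kvm: 7.2.0-5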

Note, as with all QEMU updates: a VM needs to be either fully restarted (shutdown/start, or using restart via the CLI or web interface) or, to avoid downtime, live-migrated to an already upgraded host to actually run with the newer QEMU version.
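A minimal sketch of both options from the CLI, assuming a VM with ID 100 and a placeholder target node name:
Bash:
# Full restart so the VM picks up the newly installed QEMU binary
# (a reboot from inside the guest is not enough):
qm shutdown 100 && qm start 100
# Or, to avoid downtime, live-migrate to a node that already runs the new version:
qm migrate 100 targetnode --online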

While we have been successfully running our production and a lot of testing loads on this version for a while now, no software is bug-free, and such issues are often related to the specific setup. So, if you run into regressions that are definitively caused by installing the new QEMU version (and not some other change), please always include the affected VM config and some basic HW and storage details.
 
Are there any updates related to Emulated NVMe Controllers?
Do you mean exposing them directly via Proxmox VE or from the QEMU POV? For the former, a colleague here picked it up again only recently; the biggest upfront change there is to upstream (live) migration support for those controllers, as that's currently missing (independent of the underlying storage used).
 
Do you mean exposing them directly via Proxmox VE or from the QEMU POV? For the former, a colleague here picked it up again only recently; the biggest upfront change there is to upstream (live) migration support for those controllers, as that's currently missing (independent of the underlying storage used).

Specifically, I'd like to know if there are any plans or progress toward exposing emulated NVMe controllers through PVE. We can configure them natively with QEMU today (i.e., host NVMe/TCP <-> guest virtual NVMe, via the PVE/qm --args parameter). That said, I wasn't aware of the challenges with live migration; I'll have to check that out.

TBH, we have yet to spend time quantifying the benefits of it from a performance perspective. For some time, the guest/QEMU interactions were not competitive with virtio. However, there may have been some recent progress. I'll get some cycles on this to understand whether there is any value in it.
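For readers wondering what that args-based setup can look like, here is a minimal, hypothetical sketch; the VM ID, backing device path, drive ID and serial are placeholders, and a disk attached this way bypasses the PVE storage stack (so PVE features such as live migration don't cover it):
Bash:
# Hypothetical example: attach an emulated NVMe controller by passing raw QEMU
# arguments through the VM's 'args' option (all IDs and paths are placeholders).
qm set 100 --args '-drive file=/dev/disk/by-id/nvme-example-ns1,if=none,id=nvmedrv0 -device nvme,drive=nvmedrv0,serial=example0'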


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Anyone else having trouble migrating Windows-based VMs from a non-updated machine to a machine with the new QEMU package?
I have only tested with two Windows VMs (Server 2016), and both went unresponsive and had to be rebooted on the server running the new QEMU version. Linux VMs are working fine, though.

The old Proxmox version installed is 7.2.7.
 
I'm having boot issues with a FreeBSD VM (OPNsense). It fails to boot with the error message "Root mount waiting for: CAM".
Does anyone have an idea how to fix this (other than reverting to the previous working QEMU version, 7.1.0-4 in my case)?
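(For reference, reverting to a specific older build would look like the sketch below, assuming that version is still available in the configured repositories or the local apt cache; already running VMs keep using the binary they were started with either way.)
Bash:
# Downgrade to a specific older package version, e.g. the one mentioned above:
apt install pve-qemu-kvm=7.1.0-4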

If you need more information for investigating this, let me know.

EDIT: The recently released newer package version 7.2.0-7 of pve-qemu-kvm seems to have solved the boot error, as it no longer occurs on the VM in question (which did show it with version 7.2.0-5).
 
Hi,
Anyone else having trouble migrating Windows-based VMs from a non-updated machine to a machine with the new QEMU package?
I have only tested with two Windows VMs (Server 2016), and both went unresponsive and had to be rebooted on the server running the new QEMU version. Linux VMs are working fine, though.

The old Proxmox version installed is 7.2.7.
Please post the output of qm config <ID> for the affected VMs and pveversion -v on both nodes. What CPUs do the nodes have? Please also share /var/log/syslog from around the time of the migration from both nodes.

EDIT: The migration task log would also be interesting.
 
Hi,
I'm having boot issues with a FreeBSD VM (OPNsense). It fails to boot with the error message "Root mount waiting for: CAM".
Does anyone have an idea how to fix this (other than reverting to the previous working QEMU version, 7.1.0-4 in my case)?

If you need more information for investigating this, let me know.
Please share the output of qm config <ID> for the VM and pveversion -v. Is there anything interesting in the VM <ID> - Start task log or /var/log/syslog when you try to start the VM?

Are you using the MegaRAID SAS SCSI controller by chance? There is a regression with that: https://lists.proxmox.com/pipermail/pve-devel/2023-March/056015.html
 
Hello.
I think we have an issue with this QEMU version.
We tried it on one cluster where we use NFS and LVM-over-iSCSI storages.
We can't move disks from NFS to LVM storage (not even local-lvm) when the VM is running and AIO is the default io_uring.
We get:

TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)

EDIT:
OK, found out it's not related to QEMU, but to the libpve-storage-perl package.
 
Hello.
I think we have an issue with this QEMU version.
We tried it on one cluster where we use NFS and LVM-over-iSCSI storages.
We can't move disks from NFS to LVM storage (not even local-lvm) when the VM is running and AIO is the default io_uring.
We get:

TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)

EDIT:
OK, found out it's not related to QEMU, but to the libpve-storage-perl package.

I guess it is this?!:
 
Hi,
Hello.
I think we have an issue with this QEMU version.
We tried it on one cluster where we use NFS and LVM-over-iSCSI storages.
We can't move disks from NFS to LVM storage (not even local-lvm) when the VM is running and AIO is the default io_uring.
We get:

TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive)
EDIT:
OK, found out it's not related to QEMU, but to the libpve-storage-perl package.
No, it's the qemu-server package ;)
Yes, this is intentional. Certain storage configurations have issues with io_uring, and therefore we can't safely allow moving disks that use io_uring onto them. If we didn't block it, you would likely run into crashes/hangs/IO errors. We are going to re-evaluate these storage configurations with newer kernels and allow them once they are deemed safe. But LVM-thin as the target should work; are you really sure you get the same error there?
 
Yes, we don't use LVM-thin on local storage, just plain LVM. We saw that aio=native works, so we are going to change that setting on running VMs before updating QEMU.
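A sketch of what that change can look like from the CLI, with placeholder VM ID, bus/slot and volume; the existing volume and any other options of that drive line have to be repeated as they currently are in the VM config:
Bash:
# Switch an existing disk to aio=native (placeholders; keep the drive's other
# options exactly as they are in the current VM config):
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native
# The new setting only takes effect after the VM is fully stopped and started again.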
 
Hi,

I see from the QEMU 7.2 changelog that it is going to use the 'max' CPU model instead of 'qemu32'/'qemu64'. Is PVE going to be affected by this? I guess not, but can you confirm?

Thanks!
 
I see from the QEMU 7.2 changelog that it is going to use the 'max' CPU model instead of 'qemu32'/'qemu64'. Is PVE going to be affected by this? I guess not, but can you confirm?
Proxmox VE's default stays kvm64.

For completeness' sake: if you use the qemu-system-x86_64 command directly, i.e., outside Proxmox VE's management stack, then yes, you'd be affected by this change. Simply set the CPU model explicitly via your CLI invocation, or just use Proxmox VE directly ;)
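A minimal sketch of what setting it explicitly means for a direct invocation; everything besides -cpu is a placeholder, and the PVE-side equivalent is shown for comparison:
Bash:
# Direct QEMU invocation with the CPU model pinned explicitly, so the changed
# upstream default does not matter (memory/disk values are placeholders):
qemu-system-x86_64 -accel kvm -m 2048 -cpu qemu64 -drive file=disk.qcow2,if=virtio
# Within Proxmox VE, the CPU type is part of the VM config instead, e.g.:
qm set 100 --cpu kvm64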
 
There is also a regression with the LSI SCSI controller in our QEMU 7.2 that leads to boot failures. A workaround is to use a different SCSI controller. The patch causing the problem has been identified, and while it is not directly related to LSI, making the patch work would require changes to the LSI controller emulation code. But we might need to revert the patch for now, because those changes should be evaluated more carefully: https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg03739.html
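One way to apply that workaround from the CLI, as a sketch with a placeholder VM ID; the guest needs drivers for the chosen controller type, and the change only applies after a full VM stop/start:
Bash:
# Switch the VM's SCSI controller type away from LSI, e.g. to VirtIO SCSI:
qm set 100 --scsihw virtio-scsi-pci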
 
