QEMU 10.1 available on pve-test and pve-no-subscription as of now

fiona

Proxmox Staff Member
There is a new QEMU 10.1 package available in the pve-test and pve-no-subscription repositories for Proxmox VE 9.

After internally testing QEMU 10.1 for over a month, and having this version available on the pve-test repository for almost as long, we have now (2025-11-04) made our QEMU 10.1 package available in the pve-no-subscription repository.
Our QEMU package version 10.1.2-1 includes some important stable fixes that have been developed since the original 10.1.0 release.

Note: While some of our production workloads already use this version and run stable, we cannot test every possible hardware and configuration combination, so we recommend testing the upgrade before applying it to mission-critical setups.

To upgrade, make sure you have configured either the Proxmox VE No-Subscription repository or the Proxmox VE Test repository.
Either use the web-interface to refresh and then upgrade using the Node -> Updates panel, or use a console with the following standard apt commands:
Bash:
apt update
apt full-upgrade

The output of pveversion -v (or the web-interface's Node Summary -> Packages versions) should then include a line like pve-qemu-kvm: 10.1.2-1.
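From a shell, filtering the pveversion output is a quick way to check (a minimal sketch; the grep filter is just one way to do this):

```shell
# On the upgraded node, confirm the installed QEMU package version:
pveversion -v | grep pve-qemu-kvm
# should print a line like: pve-qemu-kvm: 10.1.2-1
```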

Note, as with all QEMU updates: A VM must either be completely restarted (shut it down and then start it again, or use the restart command via the CLI or web-interface) or, to avoid downtime, consider live-migrating to a host that has already been upgraded to the new QEMU package version.
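On the CLI, that can look like the following (a sketch; 100 is a placeholder VMID and node2 a placeholder target node, and qm is the Proxmox VE VM manager):

```shell
# Full restart, so the VM is started with the new QEMU binary
# (a reboot from inside the guest is NOT enough):
qm shutdown 100 && qm start 100

# Or, to avoid downtime, live-migrate to a node that already
# runs the new QEMU package version:
qm migrate 100 node2 --online
```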

While we have been successfully running our production and many test loads on this version for some time now, no software is bug-free, and such issues are often related to the specific setup. So if you encounter regressions that are definitely caused by installing the new QEMU version (and not some other change), please always include the affected VM configuration and some basic hardware (e.g. CPU model) and memory details.
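Details like these can be gathered on the node, for example (a sketch; 100 is a placeholder VMID, and the lscpu label may differ by architecture):

```shell
qm config 100                      # affected VM configuration
lscpu | grep 'Model name'          # CPU model
free -h                            # memory details
pveversion -v | grep pve-qemu-kvm  # installed QEMU package version
```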

We welcome your feedback!

Known issues:
None at the time of publishing.
 
I see some mentions of multifd in the change log. Does that enable multi-stream live migration of VMs for local directory qcow2 storage? The change log is not terribly descriptive.
We have yet to upgrade our switches to 10GbE so live migrations are slow. Our servers do have several LACP bonded 1GbE NICs. Multi-stream live migration would help us a lot.
 
Hi,
If you’re interested, here’s the changelog for 10.1: https://wiki.qemu.org/ChangeLog/10.1
I see some mentions of multifd in the change log. Does that enable multi-stream live migration of VMs for local directory qcow2 storage? The change log is not terribly descriptive.
We have yet to upgrade our switches to 10GbE so live migrations are slow. Our servers do have several LACP bonded 1GbE NICs. Multi-stream live migration would help us a lot.
This is the upstream changelog; it does not mean that all features are already exposed in Proxmox VE. See apt changelog pve-qemu-kvm and apt changelog qemu-server for the downstream changelogs. Local disks are mirrored via blockdev-mirror for migration; there is no multifd support for that, nor is any planned as far as I'm aware. Support for multifd migration of VM RAM is tracked here: https://bugzilla.proxmox.com/show_bug.cgi?id=4152
 
I have tested this pve-qemu-kvm: 10.1.2-1 on my system today, on a number of VMs, including some more exotic ones, and I have so far had no issues.
 
Code:
Block jobs

- Non-active block-commit was optimized to keep sparseness
- blockdev-mirror was optimized to do less work with zero blocks
- blockdev-mirror and blockdev-backup gained new options, see QMP section
interesting :)
https://github.com/qemu/qemu/commit/7e277545b90874171128804e256a538fb0e8dd7e

"With this patch, if I create a raw sparse destination file, connect it with QMP 'blockdev-add' while leaving it at the default "discard": "ignore", then run QMP 'blockdev-mirror' with "sync": "full", the destination remains sparse rather than fully allocated."

So, finally, a sparse destination with online mirror, without the need to trim afterwards?
 
Hi spirit,
https://github.com/qemu/qemu/commit/7e277545b90874171128804e256a538fb0e8dd7e

"With this patch, if I create a raw sparse destination file, connect it with QMP 'blockdev-add' while leaving it at the default "discard": "ignore", then run QMP 'blockdev-mirror' with "sync": "full", the destination remains sparse rather than fully allocated."

So, finally, a sparse destination with online mirror, without the need to trim afterwards?
Some of this was already covered by the zeroinit driver that Proxmox VE uses downstream for mirroring. And IIRC, with the new upstream approach there are still some limitations in scenarios using the blockdev driver host_device rather than file, i.e. when non-file-based storages are involved (RBD without krbd has its own driver), since the host_device driver does not (yet) implement bdrv_co_block_status. But this is just recollection from a brief discussion with colleagues a few months ago, so take it with a grain of salt ;) Still, nice to see that improvement!
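For anyone wanting to check whether a destination image stayed sparse, comparing the apparent size with the allocated blocks works without any QMP at all (a sketch; dest.raw is a placeholder filename, and exact allocation numbers depend on the filesystem):

```shell
# Create a 1 GiB sparse raw file: apparent size is 1 GiB,
# but (almost) no blocks are allocated yet.
truncate -s 1G dest.raw
stat -c 'apparent:  %s bytes' dest.raw
stat -c 'allocated: %b blocks of %B bytes' dest.raw
# After a full mirror, an allocation close to the apparent size
# would mean the destination did not stay sparse.
rm dest.raw
```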
 
Yeah, that matches what I remember too. The zeroinit driver handled part of it before, but there are still some edge cases with non-file-based storages where host_device doesn't fully support bdrv_co_block_status yet. Hopefully future updates smooth that out; still, definitely a solid improvement overall.