QEMU 10.1 available on pve-test and pve-no-subscription as of now

fiona

Proxmox Staff Member
There is a new QEMU 10.1 package available in the pve-test and pve-no-subscription repositories for Proxmox VE 9.

After internally testing QEMU 10.1 for over a month and having this version available on the pve-test repository for almost as long, we have now (2025-11-04) made our QEMU 10.1 package available in the pve-no-subscription repository.
Our QEMU package version 10.1.2-1 includes some important stable fixes that have been developed since the original 10.1.0 release.

Note: While some of our production workloads already use this version and run stable, we cannot test every possible hardware and configuration combination, so we recommend testing the upgrade before applying it to mission-critical setups.

To upgrade, make sure you have configured either the Proxmox VE No-Subscription or the Proxmox VE Test repository.
Either use the web-interface to refresh and then upgrade via the Node -> Updates panel, or use a console with the following standard apt commands:
Bash:
apt update
apt full-upgrade

The output of pveversion -v (or the web-interface's Node Summary -> Packages versions) should then include something like pve-qemu-kvm: 10.1.2-1
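For example, a quick check on the console could look like this (the exact version suffix may differ on your system):
Bash:
pveversion -v | grep pve-qemu-kvm
# should print something like: pve-qemu-kvm: 10.1.2-1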

Note, as with all QEMU updates: A VM must either be completely restarted (shut it down and then start it again, or use the restart command via the CLI or web-interface) or, to avoid downtime, consider live-migrating to a host that has already been upgraded to the new QEMU package version.
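As a sketch with a hypothetical VM ID 100 and a placeholder target node name, the CLI variants could look like this:
Bash:
# full stop/start cycle so the VM picks up the new QEMU binary
qm shutdown 100 && qm start 100
# or, to avoid downtime, live-migrate to a node that already runs the new version
qm migrate 100 pve-node2 --online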

While we have been successfully running our production and many test loads on this version for some time now, no software is bug-free, and often such issues are related to the specific setup. So if you encounter regressions that are definitely caused by installing the new QEMU version (and not some other change), please always include the affected VM configuration and some basic HW (e.g. CPU model) and memory details.

We welcome your feedback!

Known issues:
None at the time of publishing.
 
I see some mentions of multifd in the change log. Does that enable multi-stream live migration of VMs for local directory qcow2 storage? The change log is not terribly descriptive.
We have yet to upgrade our switches to 10GbE so live migrations are slow. Our servers do have several LACP bonded 1GbE NICs. Multi-stream live migration would help us a lot.
 
Hi,
If you’re interested, here’s the changelog for 10.1: https://wiki.qemu.org/ChangeLog/10.1
I see some mentions of multifd in the change log. Does that enable multi-stream live migration of VMs for local directory qcow2 storage? The change log is not terribly descriptive.
We have yet to upgrade our switches to 10GbE so live migrations are slow. Our servers do have several LACP bonded 1GbE NICs. Multi-stream live migration would help us a lot.
This is the upstream changelog; it does not mean that all features are already exposed in Proxmox VE. See apt changelog pve-qemu-kvm and apt changelog qemu-server for the downstream changelogs. Local disks are mirrored via blockdev-mirror for migration; there is no multifd support for that, nor is any planned as far as I'm aware. Support for multifd migration of VM RAM is tracked here: https://bugzilla.proxmox.com/show_bug.cgi?id=4152
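For example, on the console:
Code:
apt changelog pve-qemu-kvm
apt changelog qemu-server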
 
Code:
Block jobs

- Non-active block-commit was optimized to keep sparseness
- blockdev-mirror was optimized to do less work with zero blocks
- blockdev-mirror and blockdev-backup gained new options, see QMP section
interesting :)
https://github.com/qemu/qemu/commit/7e277545b90874171128804e256a538fb0e8dd7e

"With this patch, if I create a raw sparse destination file, connect itwith QMP 'blockdev-add' while leaving it at the default "discard":"ignore", then run QMP 'blockdev-mirror' with "sync": "full", thedestination remains sparse rather than fully allocated. "

So, finally, a sparse destination with online mirror without the need to trim afterwards?
 
Hi spirit,
https://github.com/qemu/qemu/commit/7e277545b90874171128804e256a538fb0e8dd7e

"With this patch, if I create a raw sparse destination file, connect itwith QMP 'blockdev-add' while leaving it at the default "discard":"ignore", then run QMP 'blockdev-mirror' with "sync": "full", thedestination remains sparse rather than fully allocated. "

So, finally, a sparse destination with online mirror without the need to trim afterwards?
Some of this was already covered by the zeroinit driver that Proxmox VE uses downstream for mirroring. IIRC, with the new approach upstream there are still some limitations in scenarios using the blockdev driver host_device rather than file, i.e. when non-file-based storages are involved (RBD without krbd has its own driver), since the host_device driver does not (yet) implement bdrv_co_block_status. But this is just recollection from a brief discussion with colleagues a few months ago, so take it with a grain of salt ;) Still, nice to see that improvement!
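If you want to check whether a destination image on a file-based storage actually stayed sparse after such a mirror or full clone, comparing the apparent size with the allocated size is enough (the path below is just an example):
Code:
# apparent (logical) size of the destination image
du -h --apparent-size /var/lib/vz/images/100/vm-100-disk-0.raw
# blocks actually allocated on disk - much smaller if the image stayed sparse
du -h /var/lib/vz/images/100/vm-100-disk-0.raw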
 
There seem to be issues with Windows VMs when you install spice tools and set the clipboard to VNC with this version.
The mouse is not working on VNC connections, as discussed here:
 
There seem to be issues with Windows VMs when you install spice tools and set the clipboard to VNC with this version.
The mouse is not working on VNC connections, as discussed here:
For completeness, posting it here too: upstream already has a fix, which has now been sent to the pve-devel mailing list as well:
https://lore.proxmox.com/pve-devel/20251117114324.113404-1-f.ebner@proxmox.com/T/
 
I have a problem with cloning templates. I have an LVM storage on which I keep my templates. When I want to clone a template, I get this error:
Code:
create full clone of drive ide0 (lvm-nvme:vm-9922-cloudinit)
  Logical volume "vm-219-cloudinit" created.
create full clone of drive virtio0 (lvm-nvme:vm-9922-disk-0)
  Logical volume "vm-219-disk-1" created.
transferred 0.0 B of 5.0 GiB (0.00%)
transferred 52.2 MiB of 5.0 GiB (1.02%)
transferred 103.9 MiB of 5.0 GiB (2.03%)
transferred 156.2 MiB of 5.0 GiB (3.05%)
transferred 207.9 MiB of 5.0 GiB (4.06%)
transferred 260.1 MiB of 5.0 GiB (5.08%)
transferred 311.8 MiB of 5.0 GiB (6.09%)
....
transferred 1.9 GiB of 5.0 GiB (38.59%)
transferred 2.0 GiB of 5.0 GiB (39.61%)
qemu-img: error while writing at byte 2145386496: Invalid argument
  Logical volume "vm-219-cloudinit" successfully removed.
  Logical volume "vm-219-disk-1" successfully removed.
TASK ERROR: clone failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f raw -O raw /dev/vg_data/vm-9922-disk-0 /dev/vg_data/vm-219-disk-1' failed: exit code 1
I have everything up to date; I updated today just to be sure. pve-qemu-kvm is at version 10.1.2-4.
If I downgrade to version 10.0.2-4, which I did, it's OK. Am I the only one who has this problem?
 
Hi @drmartins,
does it always fail for the same volume/template or also for others? Does it always fail at the same byte value? Could you share the template configuration (qm config 9922) as well as the section for the lvm-nvme storage in /etc/pve/storage.cfg?

Are there any messages in the system journal around the time of the issue?
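For example, something like this could capture the relevant window (adjust the time range to roughly when the clone failed; the output file name is just a suggestion):
Code:
journalctl --since "2 hours ago" > /tmp/journal-clone-issue.txt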

To get a bit more information out of qemu-img, you could manually create an LV with lvcreate --size 5G vg_data -n dummy and then run the following:
Code:
lvchange -ay vg_data/vm-9922-disk-0
lvchange -ay vg_data/dummy
qemu-img --trace '*' convert -p -n -f raw -O raw /dev/vg_data/vm-9922-disk-0 /dev/vg_data/dummy 2>&1 | tail -n 1000 > /tmp/qemu-img-trace.txt
and then provide the /tmp/qemu-img-trace.txt file.
 
We did some more research and found that the problem is that the qemu-img convert command uses -t writeback by default; when we change it to -t none or -t directsync, it works.
A quick fix for us was to move the templates to local storage.

Here is our debug
Code:
root@pve-node1:/var/log# qemu-img  convert -t none -p  -f raw -O raw /dev/vg_data/vm-9920-disk-0 /dev/vg_data/dummy
    (100.00/100%)

root@pve-node1:/var/log# qemu-img  convert -p  -f raw -O raw /dev/vg_data/vm-9920-disk-0 /dev/vg_data/dummy
qemu-img: error while writing at byte 2145386496: Invalid argument

root@pve-node1:/var/log# qemu-img  convert -t directsync -p -n -f raw -O raw /dev/vg_data/vm-9920-disk-0 /dev/vg_data/dummy
    (100.00/100%)
root@pve-node1:/var/log# qemu-img  convert -t writeback  -p -n -f raw -O raw /dev/vg_data/vm-9920-disk-0 /dev/vg_data/dummy
qemu-img: error while writing at byte 2145386496: Invalid argument

root@pve-node1:/var/log# qemu-img  convert -p -n -f raw -O raw ./debian-13-genericcloud-amd64.raw /dev/vg_data/dummy
    (100.00/100%)

The information you requested:
Template:
Code:
root@pve-node1:~# qm config 9900
agent: 1
balloon: 0
boot: order=virtio0
ciuser: ansible
cores: 2
cpu: host
ide0: lvm-nvme:vm-9900-cloudinit,media=cdrom,size=4M
ipconfig0: ip=10.1.99.233/24,gw=10.1.99.1
memory: 2048
meta: creation-qemu=10.0.2,ctime=1759822368
name: debian-13-tpl
nameserver: 8.8.8.8 1.0.0.1 1.1.1.1
net0: virtio=BC:24:11:C5:38:DB,bridge=vmbr199
numa: 1
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=80f58566-0cac-4c5f-9231-9b1d64ab6ab3
sockets: 1
sshkeys: %20%20%20%20%20%20%20%20no-port-forwarding%2Cenvironment%3D%22SSH_USER%3Dansible%22%20ssh-rsa%***********%0A%0A%0A
template: 1
virtio0: lvm-nvme:vm-9900-disk-0,discard=on,iothread=1,replicate=0,size=5G
vmgenid: 4c4a5642-c861-40c8-a823-83685c202334

Storage
Code:
root@pve-node1:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content backup,vztmpl,images,iso
    shared 0

lvm: lvm-nvme
    vgname vg_data
    content images,rootdir
    saferemove 0
    shared 0

And the file is attached.
 

Attachments

We did some more research and found that the problem is that the qemu-img convert command uses -t writeback by default; when we change it to -t none or -t directsync, it works.
When you downgraded, was it really only pve-qemu-kvm=10.0.2-4 and not also the qemu-server package? The qemu-img command should be the very same if qemu-server didn't change, and AFAICS from the docs, the default was already -t writeback in QEMU 10.0. If you issue the commands with the different cache modes after the downgrade to pve-qemu-kvm=10.0.2-4, does it behave differently?
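As a sketch, testing with only the QEMU package downgraded could look like this (the version string is taken from this thread; check what apt policy actually offers on your node first):
Code:
# show installed and available versions of both packages
apt policy pve-qemu-kvm qemu-server
# downgrade only the QEMU package for testing, keep qemu-server as it is
apt install pve-qemu-kvm=10.0.2-4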
 
My suspicion is that while the issue is related to the cache mode, something else changed in QEMU 10.1 to make it trigger.

Sorry, I had messed up the trace command; it should be using tail instead of head. Edited now.