This looks exciting. I know it'll be a while before this gets to Proxmox, probably somewhere well into the PVE 9 release cycle, but it's always fun to see what's coming. Being involved with using such an actively developed piece of software is kind of exciting ... so long as things don't break too often.
Lots to digest, but a couple of things jumped out at me. They've apparently revamped all their documentation, too.
Release Notes: https://wiki.qemu.org/ChangeLog/10.0
Removed Features: https://qemu-project.gitlab.io/qemu/about/removed-features.html
Deprecated Features: https://qemu-project.gitlab.io/qemu/about/deprecated.html
The `virtio-scsi` device has gained true multiqueue support, where different queues of a single controller can be processed by different I/O threads (catching up to the `virtio-blk` support added in QEMU 9.0). This can improve scalability when the guest submits enough I/O to saturate the host CPU running a single I/O thread processing the virtio-scsi requests. Multiple I/O threads can be configured using the new `iothread-vq-mapping` property.
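Going by the release notes and the existing `virtio-blk` syntax from QEMU 9.0, I'd expect the mapping to look something like the sketch below. Untested, and the device ID and iothread names are just illustrative; JSON `-device` syntax is needed for list-valued properties like this:

```shell
qemu-system-x86_64 \
  -object iothread,id=iot0 \
  -object iothread,id=iot1 \
  -device '{"driver":"virtio-scsi-pci","id":"scsi0",
            "iothread-vq-mapping":[{"iothread":"iot0"},
                                   {"iothread":"iot1"}]}'
```

With no explicit `"vqs"` lists, the virtqueues should get spread round-robin across the listed iothreads (that's how the virtio-blk version of the property behaves, at least).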
VirtIO
- virtio-mem is now also supported on s390x
- virtio-balloon guest stats are now cleared (set to zero) upon device/machine reset.
VFIO
- Improved support for IGD passthrough on all Intel Gen 11 and 12 devices
- Refactored dirty tracking engine to include VFIO state in calc-dirty-rate
- Improved error reporting for MMIO region mapping failures
- Improved property documentation
- Implemented basic PCI PM capability backing
- Added multifd support for VFIO migration
- Added support for old ATI GPUs (x550)
- Deprecated vfio-platform
- Misc fixes
Block device backends and tools
- The Linux AIO and io_uring backends can now use the RWF_DSYNC flag for FUA write requests instead of emulating them with a normal write followed by an fdatasync() call. This can significantly improve performance for guest disks with a disabled write cache (cache=writethrough and cache=directsync result in such configurations), particularly if the host disk is already operating in a write-through cache mode.
- The user can now actively manage if nodes are active or inactive. Amongst others, this is required to perform safe live migration with a qemu-storage-daemon based backend. It also allows starting block device operation on the live migration destination of a paused VM without first resuming the VM (which was previously the only way to activate images).
- The vpc block driver has been fixed to correctly handle VHD images exported from Azure.
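Out of curiosity, the RWF_DSYNC path from the first bullet is easy to poke at from userspace: Python exposes it through `os.pwritev`, so you can issue the same single-syscall durable write QEMU now uses instead of write + fdatasync(). Just a sketch against a throwaway temp file (RWF_DSYNC needs Linux >= 4.7; the `getattr` fallback keeps it running where the flag is missing):

```python
import os
import tempfile

# RWF_DSYNC makes this one write durable before returning -- O_DSYNC
# semantics for just this request, no separate fdatasync() needed.
# Fall back to a plain positional write on platforms without the flag.
flags = getattr(os, "RWF_DSYNC", 0)

fd, path = tempfile.mkstemp()
try:
    data = b"durable write\n"
    n = os.pwritev(fd, [data], 0, flags)
    print(n)  # number of bytes written
finally:
    os.close(fd)
    os.unlink(path)
```

The point of the changelog entry is that this collapses two syscalls (and two trips through the I/O stack) into one per FUA request, which is why write-through configurations benefit the most.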
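For the active/inactive node management in the second bullet, the QMP side appears to be a new `blockdev-set-active` command, going by the 10.0 QAPI additions; check `query-commands` on your build before relying on it. A minimal sketch of what the wire command would look like, with a made-up node name:

```python
import json


def set_active_cmd(node_name: str, active: bool) -> str:
    """Serialize the QMP command that flips a block node active/inactive.

    The command name is assumed from the QEMU 10.0 QAPI changes, and
    "drive0" in the example below is a hypothetical node-name.
    """
    return json.dumps({
        "execute": "blockdev-set-active",
        "arguments": {"node-name": node_name, "active": active},
    })


# e.g. deactivate the image on the migration source before handover,
# or activate it on the destination without resuming the paused VM
cmd = set_active_cmd("drive0", False)
print(cmd)
```

Sent over the QMP socket, that would let a qemu-storage-daemon backend hand the image off cleanly during live migration, which per the notes was previously only possible by resuming the VM.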