[SOLVED] VM won't start with PCI passthrough after upgrade to 9.0

jmjordan

New Member
Aug 11, 2025
I am receiving the following error when trying to start a VM after the upgrade to 9.0. The PCI device is a SAS controller.

Code:
kvm: -device vfio-pci,host=0000:01:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:01:00.0: Failed to set vIOMMU: aw-bits 48 > host aw-bits 39
TASK ERROR: start failed: QEMU exited with code 1
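For context: the host IOMMU here only supports a 39-bit address width, while the guest's vIOMMU was configured for 48 bits. If you want to check what your own hardware supports, a rough sketch like the following can decode the Intel VT-d capability register (the sysfs path and the SAGAW field layout are assumptions based on Intel's VT-d specification; adjust for your system):

```python
# Sketch: decode the host IOMMU's supported address widths (Intel VT-d).
# Assumed sysfs location; may differ or be absent on non-Intel systems.
from pathlib import Path

CAP_PATH = Path("/sys/class/iommu/dmar0/intel-iommu/cap")

def supported_aw_bits(cap: int) -> list[int]:
    """Decode the SAGAW field (bits 12:8) of the VT-d capability register.

    Each set bit advertises one supported address width:
    bit 1 -> 39-bit (3-level paging), bit 2 -> 48-bit, bit 3 -> 57-bit.
    """
    sagaw = (cap >> 8) & 0x1F
    widths = {1: 39, 2: 48, 3: 57}
    return [bits for pos, bits in widths.items() if sagaw & (1 << pos)]

if CAP_PATH.exists():
    cap = int(CAP_PATH.read_text().strip(), 16)
    print("host IOMMU supports:", supported_aw_bits(cap), "bit address widths")
```

If the output only lists 39, the host cannot back a 48-bit vIOMMU, which is exactly what the error message is complaining about.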
 
Ah, I'm not alone. Just upgraded to PVE 9.0 and can't get VMs to start, with exactly the same error message.
 
I was able to get comparable performance by switching the vIOMMU from intel to virtio. Leaving it unset usually gives me audio/video sync issues, particularly when streaming video.
 
I was able to get comparable performance by switching the vIOMMU from intel to virtio. Leaving it unset usually gives me audio/video sync issues, particularly when streaming video.
For me, choosing virtio instead of intel results, as in PVE 8, in Manjaro and IPFire not working.
 
It looks like they are correcting the problem. Let's wait for the results.

https://github.com/proxmox/qemu-server/blob/master/src/PVE/QemuServer/CPUConfig.pm

They are updating this file today to fix that.

https://github.com/proxmox/qemu-server/commit/2f3c741bfdffc49b50e17274bc25881a41f57ff7

Code:
    'guest-phys-bits' => {
        type => 'integer',
        minimum => 32, # see target/i386/cpu.c in QEMU
        maximum => 64,
        description => "Number of physical address bits available to the guest.",
        optional => 1,
    },
    'phys-bits' => {
        type => 'string',
        format => 'pve-phys-bits',
        format_description => '8-64|host',
        description =>
            "The physical memory address bits that are reported to the guest OS. Should"
            . " be smaller or equal to the host's. Set to 'host' to use value from host CPU, but"
            . " note that doing so will break live migration to CPUs with other values.",
        optional => 1,
    },
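If this schema ships as part of the `cpu` property string, per-VM usage might look something like the following (hypothetical and untested; wait for the release notes to confirm the exact syntax):

```
# /etc/pve/qemu-server/<vmid>.conf -- illustrative only
cpu: host,phys-bits=host
```

As the description above notes, setting `phys-bits` to `host` mirrors the host CPU's address width, at the cost of breaking live migration to CPUs with different values.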
 
Hopefully. I only use PVE privately, but it's running a few things and IOMMU is important to me. I was already thinking about going back to 8.4.
 
For me, choosing virtio instead of intel results, as in PVE 8, in Manjaro and IPFire not working.
Does manjaro have the qemu-guest-agent installed? I only mention it because I read a few articles that talk about the agent helping with viommu as well as other virtio things. I do not know if there is anything required there. However it all may be moot if the patch is released soon to address the intel issue.
 
Why do you need a vIOMMU (inside the VM)? This is not necessary for PCI(e) passthrough as far as I know (but feel free to correct me).
In my experience, PCI passthrough of a video card can help performance in video and audio processing (output via HDMI) when both NUMA and vIOMMU are enabled. Without them, audio and video can get out of sync. I've read about other performance-sensitive devices, such as network cards, seeing similar results. Passthrough doesn't require them, but they improve performance and the overall experience.
 
In my experience, PCI passthrough of a video card can help performance in video and audio processing (output via HDMI) when both NUMA and vIOMMU are enabled. Without them, audio and video can get out of sync. I've read about other performance-sensitive devices, such as network cards, seeing similar results. Passthrough doesn't require them, but they improve performance and the overall experience.
I never would have guessed that it would be influenced by a vIOMMU. Nor have I noticed video and audio out of sync with passthrough without it. Thanks for letting me know.
 
Everyone is happy that these unwanted errors no longer occur.

Code:
QEMU[11923]: kvm: VFIO_MAP_DMA failed: Invalid argument
Apr 29 09:12:09 pve QEMU[11923]: kvm: vfio_container_dma_map(0x5c9222494280, 0x380000000000, 0x10000, 0x78075ee70000) = -22 (Invalid argument)
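As a sanity check on why that mapping fails on this hardware: the address being mapped sits far above what a 39-bit IOMMU can reach. A quick back-of-the-envelope calculation (my own decomposition of the log values, not from the thread):

```python
# Values taken from the failing vfio_container_dma_map() call in the log above.
iova = 0x380000000000  # guest-physical address of the mapping (likely the 64-bit PCI MMIO window)
size = 0x10000         # length of the mapping

top = iova + size - 1
print(f"highest mapped address: {top:#x} -> needs {top.bit_length()} address bits")
print(f"39-bit IOMMU limit:     {(1 << 39) - 1:#x}")
```

The mapping needs 46 address bits, so a host IOMMU limited to 39 bits rejects it with -EINVAL.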

https://bugzilla.kernel.org/show_bug.cgi?id=220057

P2P DMA mapping and vfio seem to be related, but it seems to be working despite the error.

The result is these error logs, which indicate P2P DMA mappings are not being created. With the fix we're pursuing above, this should not result in a performance/efficiency loss relative to the page table use though.
 
In my experience, PCI passthrough of a video card can help performance in video and audio processing (output via HDMI) when both NUMA and vIOMMU are enabled. Without them, audio and video can get out of sync. I've read about other performance-sensitive devices, such as network cards, seeing similar results. Passthrough doesn't require them, but they improve performance and the overall experience.
Is there somewhere we can read more about properly configuring vIOMMU and NUMA for VMs that would benefit?

I only have a single CPU, so I would have never considered messing with the NUMA settings.
 
Is there somewhere we can read more about properly configuring vIOMMU and NUMA for VMs that would benefit?

I only have a single CPU, so I would have never considered messing with the NUMA settings.
I don't know if there's anything that specific and all-encompassing. This was mostly many Google searches I did while trying to solve my own audio and video glitches. That said, the Proxmox wiki on PCI(e) Passthrough even covers the use of a vIOMMU: https://pve.proxmox.com/wiki/PCI(e)_Passthrough. NUMA isn't mentioned in that article, but when I noticed it as an option in the Proxmox VM processor settings, I researched what the feature does and how it might help, and after enabling it, it seemed to improve my situation.

Here's an older post where I described some of my trial and error for someone else having similar problems. I now use q35 v9.2 under PVE 9.x, as opposed to the older version listed there.
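For concreteness, the two settings discussed above end up as ordinary lines in the VM config (an illustrative sketch; `viommu=virtio` requires a q35 machine type):

```
# /etc/pve/qemu-server/<vmid>.conf -- illustrative only
machine: q35,viommu=virtio
numa: 1
```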
 
The next update of qemu-server will be in 5 days, even if the fix is included in it.
 
Hi,
The next update of qemu-server will be in 5 days, even if the fix is included in it.
FYI, we do not have a timed schedule for package updates. What do you base your prediction on? The timing depends on how urgent fixes are, where current development focus is, etc.
 
Hi everyone,

despite a wide variety of installation/configuration guides (plus some no-longer-current Proxmox best-practice notes/videos), the best of which (in my view) see here:


when installing/configuring Windows 11 Pro / Windows Server 2025 on Proxmox 9 I repeatedly and consistently get the following messages:

Code:
WARNING: iothread is only valid with virtio disk or virtio-scsi-single controller, ignored
swtpm_setup: Not overwriting existing state file.
kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory
stopping swtpm instance (pid 5428) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1

Does anyone have an idea how to avoid or fix this error?

Best regards,

Arnoux13
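The "cannot allocate memory" line for 'pc.ram' usually means the host couldn't find enough free memory for the VM at start time. A quick diagnostic sketch to run on the host before starting the VM:

```shell
# How much memory does the host actually have available right now?
# The VM's configured RAM plus some overhead has to fit into this.
awk '/MemAvailable/ {printf "MemAvailable: %.1f GiB\n", $2 / 1024 / 1024}' /proc/meminfo
```

If the available memory is smaller than the VM's configured RAM, reduce the VM's memory, free up host memory, or check for ballooning/hugepage settings that reserve it.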
 
