PCI Passthrough not working after update.


Active Member
Mar 2, 2018
I have a small Proxmox cluster with a couple of VMs, some of which use passed-through PCI devices. After the latest update, passthrough no longer works. Whenever I start a VM with a passed-through device, I get the following error:
"no pci device info for device '02:00.0'" (or whichever device address is configured). I suspect this comes from the qemu-server update (6.0-13 to 6.0-16); at least I found that log line on GitHub in an older version of qemu-server.
I have not been able to roll back the update (or just qemu-server) to verify that this is really the component causing the issue.

This is the upgrade delta:
Start-Date: 2019-11-22  09:23:10

Commandline: apt-get dist-upgrade

Install: libpve-cluster-api-perl:amd64 (6.0-9, automatic), libpve-cluster-perl:amd64 (6.0-9, automatic), pve-kernel-5.0.21-5-pve:amd64 (5.0.21-10, automatic)

Upgrade: proxmox-widget-toolkit:amd64 (2.0-8, 2.0-9), pve-kernel-5.0:amd64 (6.0-10, 6.0-11), postfix:amd64 (3.4.5-1, 3.4.7-0+deb10u1), libpve-access-control:amd64 (6.0-3, 6.0-4), linux-libc-dev:amd64 (4.19.67-2+deb10u1, 4.19.67-2+deb10u2), libpve-storage-perl:amd64 (6.0-9, 6.0-11), libsystemd0:amd64 (241-7~deb10u1, 241-7~deb10u2), libgs9:amd64 (9.27~dfsg-2+deb10u2, 9.27~dfsg-2+deb10u3), python2.7-minimal:amd64 (2.7.16-2, 2.7.16-2+deb10u1), postfix-sqlite:amd64 (3.4.5-1, 3.4.7-0+deb10u1), pve-qemu-kvm:amd64 (4.0.1-4, 4.0.1-5), libpython2.7:amd64 (2.7.16-2, 2.7.16-2+deb10u1), libncurses5:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), libncurses6:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), python2.7:amd64 (2.7.16-2, 2.7.16-2+deb10u1), pve-docs:amd64 (6.0-8, 6.0-9), pve-ha-manager:amd64 (3.0-2, 3.0-5), pve-firewall:amd64 (4.0-7, 4.0-8), udev:amd64 (241-7~deb10u1, 241-7~deb10u2), pve-container:amd64 (3.0-10, 3.0-13), libncursesw5:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), libncursesw6:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), pve-cluster:amd64 (6.0-7, 6.0-9), libudev1:amd64 (241-7~deb10u1, 241-7~deb10u2), python-cryptography:amd64 (2.6.1-3, 2.6.1-3+deb10u2), pve-kernel-5.0.21-4-pve:amd64 (5.0.21-8, 5.0.21-9), python3-cryptography:amd64 (2.6.1-3, 2.6.1-3+deb10u2), pve-manager:amd64 (6.0-11, 6.0-15), linux-image-4.19.0-6-amd64:amd64 (4.19.67-2+deb10u1, 4.19.67-2+deb10u2), libtinfo5:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), libtinfo6:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), libpve-guest-common-perl:amd64 (3.0-2, 3.0-3), systemd-sysv:amd64 (241-7~deb10u1, 241-7~deb10u2), libpve-common-perl:amd64 (6.0-6, 6.0-8), libpam-systemd:amd64 (241-7~deb10u1, 241-7~deb10u2), distro-info-data:amd64 (0.41, 0.41+deb10u1), ncurses-term:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), ghostscript:amd64 (9.27~dfsg-2+deb10u2, 9.27~dfsg-2+deb10u3), systemd:amd64 (241-7~deb10u1, 241-7~deb10u2), qemu-server:amd64 
(6.0-13, 6.0-16), ncurses-bin:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), libnss-systemd:amd64 (241-7~deb10u1, 241-7~deb10u2), libgs9-common:amd64 (9.27~dfsg-2+deb10u2, 9.27~dfsg-2+deb10u3), pve-kernel-helper:amd64 (6.0-11, 6.0-12), ncurses-base:amd64 (6.1+20181013-2+deb10u1, 6.1+20181013-2+deb10u2), libpython2.7-minimal:amd64 (2.7.16-2, 2.7.16-2+deb10u1), libfreetype6:amd64 (2.9.1-3, 2.9.1-3+deb10u1), python-werkzeug:amd64 (0.14.1+dfsg1-4, 0.14.1+dfsg1-4+deb10u1), cron:amd64 (3.0pl1-134, 3.0pl1-134+deb10u1), xsltproc:amd64 (1.1.32-2.1~deb10u1, 1.1.32-2.2~deb10u1), libpython2.7-stdlib:amd64 (2.7.16-2, 2.7.16-2+deb10u1), rpcbind:amd64 (1.2.5-0.3, 1.2.5-0.3+deb10u1), libglib2.0-0:amd64 (2.58.3-2+deb10u1, 2.58.3-2+deb10u2), libfribidi0:amd64 (1.0.5-3.1, 1.0.5-3.1+deb10u1), libxslt1.1:amd64 (1.1.32-2.1~deb10u1, 1.1.32-2.2~deb10u1), base-files:amd64 (10.3+deb10u1, 10.3+deb10u2), tzdata:amd64 (2019b-0+deb10u1, 2019c-0+deb10u1)

End-Date: 2019-11-22  09:24:27
This is the output of pveversion -v (after the update):
proxmox-ve: 6.0-2 (running kernel: 5.0.21-5-pve)
pve-manager: 6.0-15 (running version: 6.0-15/52b91481)
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
ceph: 12.2.12-pve1
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-4
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-8
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-11
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-9
pve-cluster: 6.0-9
pve-container: 3.0-13
pve-docs: 6.0-9
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-8
pve-firmware: 3.0-4
pve-ha-manager: 3.0-5
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-4
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-16
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

I would definitely appreciate any help in getting to the root cause and remediating the issue :)

Thanks a lot!
The issue is caused by this commit:
* fix #2436: pci: do not hardcode pci domain to 0000

Because of this, an address like:
hostpci0: 02:00.0,pcie=1,x-vga=on
must now have the PCI domain 0000 prepended, like this:
hostpci0: 0000:02:00.0,pcie=1,x-vga=on
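The missing fallback amounts to: if a configured address has no PCI domain, assume the default "0000". A minimal sketch of that normalization in Python (illustrative only; the actual qemu-server code is Perl, and normalize_hostpci is a made-up name):

```python
import re

def normalize_hostpci(addr: str) -> str:
    """Prepend the default PCI domain '0000' when an address
    like '02:00.0' lacks one. Hypothetical helper for
    illustration, not the actual qemu-server logic."""
    # A full address is domain:bus:slot.function, e.g. 0000:02:00.0;
    # a leading run of four hex digits followed by ':' is the domain.
    if re.match(r'^[0-9a-fA-F]{4}:', addr):
        return addr
    return '0000:' + addr

print(normalize_hostpci('02:00.0'))  # 0000:02:00.0
```

In practice the workaround is just editing the hostpciX line in the VM config, e.g. with something like `qm set <vmid> -hostpci0 0000:02:00.0,pcie=1,x-vga=on` (adjust the VM id and options to your setup).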
Hi, yes, your fix of prepending the "0000" is correct, but it was never intended that you should have to do that.

I rechecked, and it seems the improvement that allows PCI domains other than the default "0000" missed a fallback in one specific case.
A qemu-server update (version 6.0-17) has been uploaded to the pvetest repository, which should handle this case again.
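For anyone wanting to try the fixed package before it reaches the regular repositories, the pvetest repository can be enabled with an APT source entry like the following (line as documented for PVE 6.x on Debian buster; verify against the official repository documentation before using):

```
deb http://download.proxmox.com/debian/pve buster pvetest
```

placed in a file such as /etc/apt/sources.list.d/pvetest.list, followed by an apt update and an upgrade of qemu-server; the entry can be removed again once the fix lands in the regular repositories.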

