[SOLVED] VM w/ PCIe Passthrough not working after upgrading to 6.0

ctmartin

New Member
Jul 17, 2019
I have a FreeNAS VM with PCIe passthrough for an LSI HBA/RAID card. The VM was working before upgrading to Proxmox 6.0. After the upgrade, the PCIe card does not seem to be getting attached to the VM (based on the qm monitor output and the fact that I can't see the disks or the card inside the VM). I re-checked the steps on the PCI(e) Passthrough wiki page after noticing this issue. The PCIe card is still detected by lspci on the host.

Output of "info pci" in qm monitor:

Code:
  Bus  0, device   0, function 0:
   Host bridge: PCI device 8086:29c0
     PCI subsystem 1af4:1100
     id ""
  Bus  0, device   1, function 0:
   VGA controller: PCI device 1234:1111
     PCI subsystem 1af4:1100
     BAR0: 32 bit prefetchable memory at 0xc0000000 [0xc0ffffff].
     BAR2: 32 bit memory at 0xc230b000 [0xc230bfff].
     BAR6: 32 bit memory at 0xffffffffffffffff [0x0000fffe].
     id "vga"
  Bus  0, device  26, function 0:
   USB controller: PCI device 8086:2937
     PCI subsystem 1af4:1100
     IRQ 16.
     BAR4: I/O at 0xe100 [0xe11f].
     id "uhci-4"
  Bus  0, device  26, function 1:
   USB controller: PCI device 8086:2938
     PCI subsystem 1af4:1100
     IRQ 17.
     BAR4: I/O at 0xe0e0 [0xe0ff].
     id "uhci-5"
  Bus  0, device  26, function 2:
   USB controller: PCI device 8086:2939
     PCI subsystem 1af4:1100
     IRQ 18.
     BAR4: I/O at 0xe0c0 [0xe0df].
     id "uhci-6"
  Bus  0, device  26, function 7:
   USB controller: PCI device 8086:293c
     PCI subsystem 1af4:1100
     IRQ 19.
     BAR0: 32 bit memory at 0xc230a000 [0xc230afff].
     id "ehci-2"
  Bus  0, device  27, function 0:
   Audio controller: PCI device 8086:293e
     PCI subsystem 1af4:1100
     IRQ 16.
     BAR0: 32 bit memory at 0xc2300000 [0xc2303fff].
     id "audio0"
  Bus  0, device  28, function 0:
   PCI bridge: PCI device 1b36:000c
     IRQ 16.
     BUS 0.
     secondary bus 1.
     subordinate bus 1.
     IO range [0xd000, 0xdfff]
     memory range [0xc2000000, 0xc21fffff]
     prefetchable memory range [0xfffffffffff00000, 0x000fffff]
     BAR0: 32 bit memory at 0xc2309000 [0xc2309fff].
     id "ich9-pcie-port-1"
  Bus  1, device   0, function 0:
   SAS controller: PCI device 1000:0087
     PCI subsystem 1000:3020
     IRQ 10.
     BAR0: I/O at 0xd000 [0xd0ff].
     BAR1: 64 bit memory at 0xc2040000 [0xc204ffff].
     BAR3: 64 bit memory at 0xc2000000 [0xc203ffff].
     BAR6: 32 bit memory at 0xffffffffffffffff [0x000ffffe].
     id "hostpci0"
  Bus  0, device  28, function 1:
   PCI bridge: PCI device 1b36:000c
     IRQ 16.
     BUS 0.
     secondary bus 2.
     subordinate bus 2.
     IO range [0xc000, 0xcfff]
     memory range [0xc1e00000, 0xc1ffffff]
     prefetchable memory range [0xfffffffffff00000, 0x000fffff]
     BAR0: 32 bit memory at 0xc2308000 [0xc2308fff].
     id "ich9-pcie-port-2"
  Bus  0, device  28, function 2:
   PCI bridge: PCI device 1b36:000c
     IRQ 16.
     BUS 0.
     secondary bus 3.
     subordinate bus 3.
     IO range [0xb000, 0xbfff]
     memory range [0xc1c00000, 0xc1dfffff]
     prefetchable memory range [0xfffffffffff00000, 0x000fffff]
     BAR0: 32 bit memory at 0xc2307000 [0xc2307fff].
     id "ich9-pcie-port-3"
  Bus  0, device  28, function 3:
   PCI bridge: PCI device 1b36:000c
     IRQ 16.
     BUS 0.
     secondary bus 4.
     subordinate bus 4.
     IO range [0xa000, 0xafff]
     memory range [0xc1a00000, 0xc1bfffff]
     prefetchable memory range [0xfffffffffff00000, 0x000fffff]
     BAR0: 32 bit memory at 0xc2306000 [0xc2306fff].
     id "ich9-pcie-port-4"
  Bus  0, device  29, function 0:
   USB controller: PCI device 8086:2934
     PCI subsystem 1af4:1100
     IRQ 16.
     BAR4: I/O at 0xe0a0 [0xe0bf].
     id "uhci-1"
  Bus  0, device  29, function 1:
   USB controller: PCI device 8086:2935
     PCI subsystem 1af4:1100
     IRQ 17.
     BAR4: I/O at 0xe080 [0xe09f].
     id "uhci-2"
  Bus  0, device  29, function 2:
   USB controller: PCI device 8086:2936
     PCI subsystem 1af4:1100
     IRQ 18.
     BAR4: I/O at 0xe060 [0xe07f].
     id "uhci-3"
  Bus  0, device  29, function 7:
   USB controller: PCI device 8086:293a
     PCI subsystem 1af4:1100
     IRQ 19.
     BAR0: 32 bit memory at 0xc2305000 [0xc2305fff].
     id "ehci"
  Bus  0, device  30, function 0:
   PCI bridge: PCI device 8086:244e
     BUS 0.
     secondary bus 5.
     subordinate bus 9.
     IO range [0x6000, 0x9fff]
     memory range [0xc1000000, 0xc18fffff]
     prefetchable memory range [0x800000000, 0x8000fffff]
     id "pcidmi"
  Bus  5, device   1, function 0:
   PCI bridge: PCI device 1b36:0001
     IRQ 21.
     BUS 5.
     secondary bus 6.
     subordinate bus 6.
     IO range [0x9000, 0x9fff]
     memory range [0xc1600000, 0xc17fffff]
     prefetchable memory range [0x800000000, 0x8000fffff]
     BAR0: 64 bit memory at 0xc1800000 [0xc18000ff].
     id "pci.0"
  Bus  6, device  10, function 0:
   SCSI controller: PCI device 1af4:1001
     PCI subsystem 1af4:0002
     IRQ 23.
     BAR0: I/O at 0x9000 [0x907f].
     BAR1: 32 bit memory at 0xc1601000 [0xc1601fff].
     BAR4: 64 bit prefetchable memory at 0x800004000 [0x800007fff].
     id "virtio0"
  Bus  6, device  18, function 0:
   Ethernet controller: PCI device 1af4:1000
     PCI subsystem 1af4:0001
     IRQ 23.
     BAR0: I/O at 0x9080 [0x909f].
     BAR1: 32 bit memory at 0xc1600000 [0xc1600fff].
     BAR4: 64 bit prefetchable memory at 0x800000000 [0x800003fff].
     BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
     id "net0"
  Bus  5, device   2, function 0:
   PCI bridge: PCI device 1b36:0001
     IRQ 22.
     BUS 5.
     secondary bus 7.
     subordinate bus 7.
     IO range [0x8000, 0x8fff]
     memory range [0xc1400000, 0xc15fffff]
     prefetchable memory range [0xfffffffffff00000, 0x000fffff]
     BAR0: 64 bit memory at 0xc1801000 [0xc18010ff].
     id "pci.1"
  Bus  5, device   3, function 0:
   PCI bridge: PCI device 1b36:0001
     IRQ 23.
     BUS 5.
     secondary bus 8.
     subordinate bus 8.
     IO range [0x7000, 0x7fff]
     memory range [0xc1200000, 0xc13fffff]
     prefetchable memory range [0xfffffffffff00000, 0x000fffff]
     BAR0: 64 bit memory at 0xc1802000 [0xc18020ff].
     id "pci.2"
  Bus  5, device   4, function 0:
   PCI bridge: PCI device 1b36:0001
     IRQ 20.
     BUS 5.
     secondary bus 9.
     subordinate bus 9.
     IO range [0x6000, 0x6fff]
     memory range [0xc1000000, 0xc11fffff]
     prefetchable memory range [0xfffffffffff00000, 0x000fffff]
     BAR0: 64 bit memory at 0xc1803000 [0xc18030ff].
     id "pci.3"
  Bus  0, device  31, function 0:
   ISA bridge: PCI device 8086:2918
     PCI subsystem 1af4:1100
     id ""
  Bus  0, device  31, function 2:
   SATA controller: PCI device 8086:2922
     PCI subsystem 1af4:1100
     IRQ 16.
     BAR4: I/O at 0xe040 [0xe05f].
     BAR5: 32 bit memory at 0xc2304000 [0xc2304fff].
     id ""
  Bus  0, device  31, function 3:
   SMBus: PCI device 8086:2930
     PCI subsystem 1af4:1100
     IRQ 16.
     BAR4: I/O at 0xe000 [0xe03f].
     id ""

This appears in dmesg on the host every time I start the VM:
Code:
[ 6138.508905] vfio-pci 0000:81:00.0: enabling device (0400 -> 0403)
[ 6138.616771] vfio_ecap_init: 0000:81:00.0 hiding ecap 0x19@0x1e0
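
For completeness, this is roughly how I check the host side (81:00.0 being the card's address from the dmesg lines above):
Code:
# on the Proxmox host: show which kernel driver is bound to the card;
# while the VM is running this should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 81:00.0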

Attachments:
  • Proxmox GUI info for LSI card
  • VM Hardware
  • Output of lspci from within the VM
Possibly related thread: https://forum.proxmox.com/threads/proxmox-6-0-beta-1-pci-e-passthrough-sr-iov.55917/

EDIT: clarity note
 

there was a change in the qemu 4.0 machine type defaults
can you try to set the following args for the machine (turn off the vm, set it and then turn it on again):

Code:
qm set ID -args '-machine type=q35,kernel_irqchip=on'
 
@dcsapak that didn't do the trick unfortunately :(

I still don't see the disks nor do I see the PCIe card in lspci. Do you have anything else I can try?

Looking at things with fresh eyes today, I noticed that the card does show up in qm monitor's info pci (the entry marked "hostpci0" above), but I'm not seeing that bus/device number in FreeNAS via lspci.
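
Side note: since FreeNAS is FreeBSD-based, pciconf can also be used inside the guest to double-check. Assuming the 1000:0087 SAS card from the "info pci" output above, something like this should list it if the guest sees the device at all:
Code:
# inside the FreeNAS/FreeBSD guest: pciconf prints the IDs as
# chip=0x<device><vendor>, so the HBA would show up as chip=0x00871000
pciconf -lv | grep -A3 "0x00871000"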
 
dcsapak, after upgrading to Proxmox 6 some of the passthrough devices in my q35 guests failed to operate (they malfunction inside the guest):

Windows 10 guest:
- Failed: GTX 1060
- Failed: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller (x2)
- Working: Fresco Logic FL1100 USB 3.0 Host Controller

macOS Mojave guest:
- Failed: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller (x2)
- Working: Radeon R9 280X
- Working: Fresco Logic FL1100 USB 3.0 Host Controller
- Working: Intel Corporation 82574L Gigabit Network Connection
- Working: Samsung Electronics Co Ltd NVMe SSD Controller

Adding that "machine" option fixed all of the devices on both guests, thanks!

I found some more info about this here:

https://bugs.launchpad.net/qemu/+bug/1826422
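
For anyone applying the same workaround: the qm set command just writes the option into the VM config (on my host that's /etc/pve/qemu-server/<ID>.conf), which afterwards contains a line like:
Code:
args: -machine type=q35,kernel_irqchip=on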
 
@dcsapak that didn't do the trick unfortunately :(

I still don't see the disks nor do I see the PCIe card in lspci. Do you have anything else I can try?

did you do it while the machine was off? (otherwise the changes will not get applied)

if you are sure that qemu started with the args (you can verify with the 'ps' tool on the cli) you can try the following:

we changed the default pcie root ports for vms with 4.0 machine type

can you try to set the machine type to e.g. 3.1?
Code:
qm set ID -machine pc-q35-3.1

do not forget to remove the 'args' again with
Code:
qm set ID -delete args
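
for example, to check whether the args/machine type actually made it onto the QEMU command line (100 here just stands for your VM ID):
Code:
# print the KVM command line Proxmox generates for the VM
qm showcmd 100
# or inspect the already running process
ps aux | grep "/usr/bin/kvm -id 100"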
 
@dcsapak setting the machine type to 3.1 did the trick and I set the thread title prefix to [SOLVED].

Two questions for you:
1) Could you add both of the fixes you suggested to the troubleshooting sections of the "PCI(e) Passthrough" & "Upgrade from 5.x to 6.0" wiki pages?
2) Is there anything I could do to help debug why this happened?

Thank you!
 
2) Is there anything I could do to help debug why this happened?
we changed the pcie root port hardware for machines with type q35 >= 4.0 because it fixes some passthrough issues on some platforms

obviously that hardware does not work in FreeBSD as it is; I have to investigate more for a fix (maybe we revert this change for some OS types?)
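
for reference, pinning the older machine type as suggested above ends up as a single line in the VM config, roughly:
Code:
machine: pc-q35-3.1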
 
I tried upgrading my main server this morning and had two FreeBSD VMs using PCIe passthrough that quit working after the upgrade. One VM was passing through a network card and the other a SATA controller. Everything looked configured correctly, based on how it had worked under v5. Unfortunately I didn't find this thread in my quick searches, so I ended up reinstalling 5.4 and restoring my backups to get back up and running.

I'd like to know when a decision or update to 6 is made regarding this so I can schedule the upgrade again. Could that info be posted to this thread, or where should I watch for any updates on this issue? TIA
 
@ctmartin I did a little bit of investigation and it looks like a FreeBSD kernel issue

this bug https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236922 and the FreeBSD kernel source https://svnweb.freebsd.org/base/release/12.0.0/sys/net/netmap_virt.h?view=markup
point to the fact that the FreeBSD kernel claims the vendor/device ID of QEMU's PCIe root port for a 'ptnetmap-memdev' device (which of course is wrong)
the reporter of the bug says that with a kernel built without netmap support, the correct kernel driver is loaded...

maybe you can open a bug report at FreeBSD and ask why that vendor/device ID was chosen (I cannot seem to find any reference to this device anywhere else)
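
if someone wants to see the collision from inside a FreeBSD guest, something like this should show the qemu pcie root ports (vendor 1b36, device 000c from the 'info pci' output above) and which driver claimed them:
Code:
# inside the FreeBSD guest: the name before the '@' is the attached driver
# ("none" means no driver); the root ports appear with chip=0x000c1b36
pciconf -l | grep 0x000c1b36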
 
I found a thread discussing a breaking change in QEMU that might be related as well. It looks like they plan to revert the change in an update and take a different approach when reintroducing it in a later version. I'm not sure how this will all shake out; it looks like these issues are related in some way, but I haven't taken the time to delve into it deeply enough to say for sure.

https://bugs.launchpad.net/qemu/+bug/1826422
 
Hello,

I am a new Proxmox (free) user and wanted to say that this:

Code:
qm set ID -args '-machine type=q35,kernel_irqchip=on'

It has made my VMs with PCIe passthrough of a GeForce 970 (UEFI) and a GeForce 560 Ti (SeaBIOS) work. Finally.

Without this bit, the VM would fail right after the driver install (official or via Windows Update auto-install); I then wouldn't be able to see anything going on, and the VM would fail to boot Windows properly.

This information seems invaluable and should really be mentioned in the wiki pages...

Best,
 
I am sure the forum is also a decent place to find it when searching Google. Right now this is only a workaround.
 
did you do it while the machine was off? (otherwise the changes will not get applied)

if you are sure that qemu started with the args (you can verify with the 'ps' tool on the cli) you can try the following:

we changed the default pcie root ports for vms with 4.0 machine type

can you try to set the machine type to e.g. 3.1?
Code:
qm set ID -machine pc-q35-3.1

do not forget to remove the 'args' again with
Code:
qm set ID -delete args

Running FreeNAS with an LSI SAS card passed through. Upgrading to 6 also broke it for me, and these are the commands that got it working for me too. I originally tried the commands in post #2, but they didn't work.
 
Apparently this is still a problem even after the update to QEMU 4.0.1.

I had to update the machine option in my VM configs to get PCI passthrough working on my two FreeBSD VMs. Unfortunately I also get a verification error due to the machine arguments. I guess I will wait to update my clients' servers until this is fixed.
 
@Kevo, please read my post

@ctmartin I did a little bit of investigation and it looks like a FreeBSD kernel issue

this bug https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236922 and the FreeBSD kernel source https://svnweb.freebsd.org/base/release/12.0.0/sys/net/netmap_virt.h?view=markup
point to the fact that the FreeBSD kernel claims the vendor/device ID of QEMU's PCIe root port for a 'ptnetmap-memdev' device (which of course is wrong)
the reporter of the bug says that with a kernel built without netmap support, the correct kernel driver is loaded...

maybe you can open a bug report at FreeBSD and ask why that vendor/device ID was chosen (I cannot seem to find any reference to this device anywhere else)

this will probably not be fixed with a qemu update since it seems to be a FreeBSD kernel issue; my tip is to talk to the FreeBSD developers and see if they can fix it there...

what you can try though is to customize pve-q35-4.0.cfg to use the older pcie root port hardware (like in pve-q35.cfg), but this is not supported by us and you are on your own..
or you use i440fx instead
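
if you want to see what would need to be changed, comparing the two definitions is a start (paths from my install, they may differ):
Code:
# compare the pre-4.0 and 4.0 q35 machine definitions shipped by qemu-server
diff /usr/share/qemu-server/pve-q35.cfg /usr/share/qemu-server/pve-q35-4.0.cfg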
 
@Kevo, please read my post



this will probably not be fixed with a qemu update since it seems to be a FreeBSD kernel issue; my tip is to talk to the FreeBSD developers and see if they can fix it there...

what you can try though is to customize pve-q35-4.0.cfg to use the older pcie root port hardware (like in pve-q35.cfg), but this is not supported by us and you are on your own..
or you use i440fx instead

I was under the impression that PCIE passthrough had to use q35 and it wouldn't work with i440fx.
 
I was under the impression that PCIE passthrough had to use q35 and it wouldn't work with i440fx.
yes, pcie is only available with q35, but you can pass PCIe devices through as plain PCI devices using i440fx, and since the pci hardware is all virtual anyway there should be no noticeable speed impact
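
in the vm config that only changes the hostpci entry, roughly like this (81:00.0 being the address used earlier in the thread):
Code:
# q35 guest, device passed through as PCIe:
hostpci0: 81:00.0,pcie=1
# i440fx guest, same device passed through as plain PCI (no pcie flag):
hostpci0: 81:00.0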
 
Just got done fighting this for hours, and for me the key differentiator was the "rombar" option that the GUI wants to insert. When it is set to "0", the driver fails to load (Code 43). I wasn't familiar with the option because I had never used it before, and "1" is apparently the default used if it's absent (e.g. when adding the entry manually in 5.x).
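
In config terms (the device address here is just an example), the difference was roughly:
Code:
# entry with rombar=0 (as the GUI inserted it for me) -> driver fails, Code 43
hostpci0: 01:00.0,pcie=1,rombar=0
# what worked: rombar=1, or simply leaving the option out (1 is the default)
hostpci0: 01:00.0,pcie=1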
 