Proxmox requires full reboot after shutting down VM with PCI passthrough

EDIT: this is the output of lspci -v on the Proxmox host:

Does this mean that the kernel is still using the card? Should I blacklist amdgpu as well?

Code:
83:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 23 (rev ff) (prog-if ff)
        !!! Unknown header type 7f
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu




There is a distinct and repeatable series of events here which is very confusing, and diagnosing its cause is far beyond my expertise.

The short version is that I am having trouble with PCIe passthrough of an AMD Radeon RX 6600 on an ArchLinux VM Guest. Basically every time I shut down the VM with PCI passthrough, I have to do a full reboot of Proxmox before I can boot that VM again. It's very frustrating.

The hardware is as follows:

- Gigabyte MZ72-HBO
- AMD Epyc 7402
- AMD Radeon RX 6600

The long version is as follows:

Prior to enabling interrupt remapping, I had to perform a full reboot of Proxmox and also had to remove and then re-add the PCI device (steps listed below) to get the VM to boot. Simply rebooting the VM did not work. The syslogs are included below. I then enabled interrupt remapping with the following commands:

Code:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

Steps to get the VM booting before enabling interrupt remapping. To get the VM to boot with the PCI device after it has been shut down, I have to do the following steps exactly:

- reboot Proxmox
- remove the PCI device from the VM
- boot the VM without the PCI device
- shut down the VM
- reboot Proxmox again
- boot the VM without the PCI device
- shut down the VM
- add the PCI device back to the VM
- boot the VM with the PCI device

Since enabling interrupt remapping, the VM boots and shuts down just fine the first time, but after that I need to reboot Proxmox. Also, as I shut the VM down with the PCI device attached, what appears to be some kind of kernel panic is output to the noVNC viewer. See below.

This is the syslog when booting the VM the second time, prior to rebooting Proxmox:

Code:
Dec 16 22:01:46 central pvedaemon[10228]: start VM 310: UPID:central:000027F4:00006B22:61BBB74A:qmstart:310:root@pam:
Dec 16 22:01:46 central pvedaemon[4441]: <root@pam> starting task UPID:central:000027F4:00006B22:61BBB74A:qmstart:310:root@pam:
Dec 16 22:01:46 central systemd[1]: Created slice qemu.slice.
Dec 16 22:01:46 central systemd[1]: Started 310.scope.
Dec 16 22:01:46 central systemd-udevd[10009]: Using default interface naming scheme 'v247'.
Dec 16 22:01:46 central systemd-udevd[10009]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 16 22:01:47 central kernel: device tap310i0 entered promiscuous mode
Dec 16 22:01:47 central systemd-udevd[10243]: Using default interface naming scheme 'v247'.
Dec 16 22:01:47 central systemd-udevd[10243]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 16 22:01:47 central systemd-udevd[10243]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 16 22:01:47 central systemd-udevd[10009]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 16 22:01:47 central kernel: fwbr310i0: port 1(fwln310i0) entered blocking state
Dec 16 22:01:47 central kernel: fwbr310i0: port 1(fwln310i0) entered disabled state
Dec 16 22:01:47 central kernel: device fwln310i0 entered promiscuous mode
Dec 16 22:01:47 central kernel: fwbr310i0: port 1(fwln310i0) entered blocking state
Dec 16 22:01:47 central kernel: fwbr310i0: port 1(fwln310i0) entered forwarding state
Dec 16 22:01:47 central kernel: vmbr20: port 2(fwpr310p0) entered blocking state
Dec 16 22:01:47 central kernel: vmbr20: port 2(fwpr310p0) entered disabled state
Dec 16 22:01:47 central kernel: device fwpr310p0 entered promiscuous mode
Dec 16 22:01:47 central kernel: device eno2np1.20 entered promiscuous mode
Dec 16 22:01:47 central kernel: device eno2np1 entered promiscuous mode
Dec 16 22:01:47 central kernel: vmbr20: port 2(fwpr310p0) entered blocking state
Dec 16 22:01:47 central kernel: vmbr20: port 2(fwpr310p0) entered forwarding state
Dec 16 22:01:47 central kernel: fwbr310i0: port 2(tap310i0) entered blocking state
Dec 16 22:01:47 central kernel: fwbr310i0: port 2(tap310i0) entered disabled state
Dec 16 22:01:47 central kernel: fwbr310i0: port 2(tap310i0) entered blocking state
Dec 16 22:01:47 central kernel: fwbr310i0: port 2(tap310i0) entered forwarding state
Dec 16 22:01:49 central kernel: vfio-pci 0000:83:00.0: enabling device (0002 -> 0003)
Dec 16 22:01:49 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
Dec 16 22:01:49 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
Dec 16 22:01:49 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x26@0x410
Dec 16 22:01:49 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x27@0x440
Dec 16 22:01:49 central pvedaemon[4441]: <root@pam> end task UPID:central:000027F4:00006B22:61BBB74A:qmstart:310:root@pam: OK
Dec 16 22:01:50 central pvedaemon[10289]: starting vnc proxy UPID:central:00002831:00006C81:61BBB74E:vncproxy:310:root@pam:
Dec 16 22:01:50 central pvedaemon[4442]: <root@pam> starting task UPID:central:00002831:00006C81:61BBB74E:vncproxy:310:root@pam:
Dec 16 22:01:57 central kernel: kvm [10237]: ignored rdmsr: 0xc0011020 data 0x0
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:01:58 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:02 central kernel: kvm_msr_ignored_check: 6137 callbacks suppressed
Dec 16 22:02:02 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:02 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:02 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:02 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:03 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:03 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:03 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:03 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:03 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:03 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:07 central kernel: kvm_msr_ignored_check: 378 callbacks suppressed
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:07 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:08 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:08 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:13 central kernel: kvm_msr_ignored_check: 36 callbacks suppressed
Dec 16 22:02:13 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:13 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:13 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:13 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:14 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:14 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:14 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:14 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:15 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:15 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:19 central kernel: kvm_msr_ignored_check: 4 callbacks suppressed
Dec 16 22:02:19 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:19 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:22 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:22 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:25 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:25 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:27 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:27 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:28 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:28 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:31 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:31 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:02:35 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:03:48 central kernel: kvm_msr_ignored_check: 49 callbacks suppressed
Dec 16 22:03:48 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x400
Dec 16 22:03:48 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0
Dec 16 22:03:48 central kernel: kvm [10237]: ignored wrmsr: 0xc0011020 data 0x0


This is what I assume to be some kind of kernel panic on shutting down the VM:

[attached screenshot: amderrors.png]

Installation and configuration

Here is my /etc/default/grub file:

Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"


I followed a series of different posts all mashed together to form some kind of cogent guide.

The exact steps are as follows:

ON PROXMOX:
- enable IOMMU in BIOS (enabled by default)
- SVM mode and SR-IOV are also enabled by default in the BIOS (there are no visible options for disabling CSM)
- set the following kernel parameters in Grub on Proxmox: amd_iommu=on, iommu=pt, pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off
- add the following to /etc/modules: vfio, vfio_iommu_type1, vfio_pci, vfio_virqfd
- test that remapping is enabled: dmesg | grep 'remapping'
- confirm dedicated groups: find /sys/kernel/iommu_groups/ -type l
- update-grub
- blacklist all drivers: echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf, echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf, echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
- add the vendor ids to block the host from taking the device: echo "options vfio-pci ids=<id>:<id>,<id>:<id> disable_vga=1" > /etc/modprobe.d/vfio.conf (see the sketch after this list for looking up the IDs)
- enable interrupt remapping: echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
- update-initramfs -u
- reset
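
As a side note on finding the IDs for the vfio.conf line above, here is a minimal sketch (assuming the GPU sits at 83:00.0 as in the lspci output further down; the ID in the comment is only an example):

Code:
# list the GPU (and its HDMI audio function, if present) with [vendor:device] IDs
lspci -nn -s 83:00
# the IDs appear in square brackets at the end of each line, e.g. [1002:73ff],
# and go into the "options vfio-pci ids=..." line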

ON THE GUEST:

- installed the following drivers: mesa, lib32-mesa, xf86-video-amdgpu, amdvlk, lib32-amdvlk, mesa-vdpau, lib32-mesa-vdpau
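
For reference, installing those on Arch amounts to something like this (a sketch; the lib32-* packages need the multilib repository enabled):

Code:
pacman -S mesa lib32-mesa xf86-video-amdgpu amdvlk lib32-amdvlk mesa-vdpau lib32-mesa-vdpau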

This is my guest VM config:

Arch Linux 5.15.8-arch1-1

[attached screenshot: guestconfig.png]

Here is the output from lspci -v on the guest, under the VGA controller:

Code:
00:10.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 23 [Radeon RX 6600/6600 XT/6600M] (rev c7) (prog-if 00 [VGA controller])
    Subsystem: Micro-Star International Co., Ltd. [MSI] Device 5025
    Physical Slot: 16
    Flags: bus master, fast devsel, latency 0, IRQ 42
    Memory at 800000000 (64-bit, prefetchable) [size=256M]
    Memory at 810000000 (64-bit, prefetchable) [size=2M]
    I/O ports at 1000 [size=256]
    Memory at c1400000 (32-bit, non-prefetchable) [size=1M]
    Expansion ROM at c1560000 [disabled] [size=128K]
    Capabilities: <access denied>
    Kernel driver in use: amdgpu
    Kernel modules: amdgpu
 
EDIT 2 (for the above post): after adding amdgpu to the blocklist (derr!) on Proxmox and rebuilding the initramfs, it now shows as follows:

Code:
83:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 23 (rev c7) (prog-if 00 [VGA controller])
        Subsystem: Micro-Star International Co., Ltd. [MSI] Navi 23
        Flags: bus master, fast devsel, latency 0, IRQ 152, IOMMU group 46
        Memory at 20010000000 (64-bit, prefetchable) [size=256M]
        Memory at 20020000000 (64-bit, prefetchable) [size=2M]
        I/O ports at c000 [size=256]
        Memory at f2000000 (32-bit, non-prefetchable) [size=1M]
        Expansion ROM at f2100000 [disabled] [size=128K]
        Capabilities: [48] Vendor Specific Information: Len=08 <?>
        Capabilities: [50] Power Management version 3
        Capabilities: [64] Express Legacy Endpoint, MSI 00
        Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
        Capabilities: [150] Advanced Error Reporting
        Capabilities: [200] Physical Resizable BAR
        Capabilities: [240] Power Budgeting <?>
        Capabilities: [270] Secondary PCI Express
        Capabilities: [2a0] Access Control Services
        Capabilities: [2d0] Process Address Space ID (PASID)
        Capabilities: [320] Latency Tolerance Reporting
        Capabilities: [410] Physical Layer 16.0 GT/s <?>
        Capabilities: [440] Lane Margining at the Receiver <?>
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu
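
For completeness, the blocklist change amounted to something like the following (a sketch, appending to the same blacklist.conf used in the steps above):

Code:
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u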

Shutting down the VM still outputs the errors above, and Proxmox still requires a full reboot before the VM will boot again.

Here is the syslog:

Code:
Dec 16 23:06:16 central kernel: vfio-pci 0000:83:00.0: not ready 16383ms after FLR; waiting
Dec 16 23:06:22 central pvedaemon[4465]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:06:22 central pvestatd[4435]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:06:23 central pvestatd[4435]: status update time (6.449 seconds)
Dec 16 23:06:24 central pvedaemon[80258]: start failed: command '/usr/bin/kvm -id 310 -name VM-Gaming-ArchLinux -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/310.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/310.pid -daemonize -smbios 'type=1,uuid=36db9ebc-8b3a-4e47-baf7-ab581e8dc6f9' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/VMs/vm-310-disk-0' -smp '12,sockets=1,cores=12,maxcpus=12' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -cpu 'EPYC-Rome,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,vendor=AuthenticAMD' -m 16834 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=25a8745d-f40c-43b3-a864-d2d5ee54209f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'vfio-pci,host=0000:83:00.0,id=hostpci0,bus=pci.0,addr=0x10' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:e82c6ed5e391' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/VMs/vm-310-disk-1,if=none,id=drive-scsi0,cache=writethrough,format=raw,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' -drive 'file=/dev/zvol/Media/vm-310-disk-0,if=none,id=drive-scsi1,cache=writethrough,format=raw,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,rotation_rate=1' -netdev 'type=tap,id=net0,ifname=tap310i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=EA:ED:EF:A7:2F:25,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -machine 'type=pc+pve0'' failed: got timeout
Dec 16 23:06:24 central pvedaemon[4465]: <root@pam> end task UPID:central:00013982:0001E4BC:61BBC60B:qmstart:310:root@pam: start failed: command '/usr/bin/kvm -id 310 -name VM-Gaming-ArchLinux -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/310.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/310.pid -daemonize -smbios 'type=1,uuid=36db9ebc-8b3a-4e47-baf7-ab581e8dc6f9' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/VMs/vm-310-disk-0' -smp '12,sockets=1,cores=12,maxcpus=12' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -cpu 'EPYC-Rome,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,vendor=AuthenticAMD' -m 16834 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=25a8745d-f40c-43b3-a864-d2d5ee54209f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'vfio-pci,host=0000:83:00.0,id=hostpci0,bus=pci.0,addr=0x10' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:e82c6ed5e391' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/VMs/vm-310-disk-1,if=none,id=drive-scsi0,cache=writethrough,format=raw,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' -drive 'file=/dev/zvol/Media/vm-310-disk-0,if=none,id=drive-scsi1,cache=writethrough,format=raw,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,rotation_rate=1' -netdev 'type=tap,id=net0,ifname=tap310i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=EA:ED:EF:A7:2F:25,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -machine 'type=pc+pve0'' failed: got timeout
Dec 16 23:06:32 central pvestatd[4435]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:06:32 central pvestatd[4435]: status update time (6.440 seconds)
Dec 16 23:06:33 central kernel: vfio-pci 0000:83:00.0: not ready 32767ms after FLR; waiting
Dec 16 23:06:41 central pvedaemon[4466]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:06:42 central pvestatd[4435]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:06:43 central pvestatd[4435]: status update time (6.455 seconds)
Dec 16 23:06:52 central pvestatd[4435]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:06:52 central pvestatd[4435]: status update time (6.467 seconds)
Dec 16 23:07:00 central pvedaemon[4464]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:07:02 central pvestatd[4435]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31 retries
Dec 16 23:07:02 central pvestatd[4435]: status update time (6.451 seconds)
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: not ready 65535ms after FLR; giving up
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
Dec 16 23:07:08 central kernel: vfio-pci 0000:83:00.0: vfio_cap_init: hiding cap 0xff@0xff
 
Re-opening this as it has started happening again seemingly without reason. Unless some kind of update has been pushed from the backend, I haven't touched a single thing. In fact, I've not used this VM in about a week.

This is the only relevant output:

Code:
Jan 04 18:26:49 central pvedaemon[4081939]: VM 210 qmp command failed - VM 210 qmp command 'query-proxmox-support' failed - got timeout
Jan 04 18:26:57 central pvedaemon[4081939]: worker exit
Jan 04 18:26:57 central pvestatd[4360]: VM 210 qmp command failed - VM 210 qmp command 'query-proxmox-support' failed - unable to connect to VM 210 qmp socket - timeout after 31 retries

Any suggestions on how I can investigate this further would be appreciated.
 
If you switched to version 5.15 of the Proxmox kernel, you'll need this to get vendor-reset to work again. Otherwise, I cannot explain it with no changes to the Proxmox system or the VM configuration.
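
For reference, the workaround usually cited for vendor-reset on the 5.15 kernel boils down to selecting the device-specific reset method via sysfs before starting the VM; a minimal sketch (assuming vendor-reset is installed and loaded, and the GPU is at 0000:83:00.0):

Code:
# on the Proxmox host, after vendor-reset is loaded
echo device_specific > /sys/bus/pci/devices/0000:83:00.0/reset_method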
 
If you switched to version 5.15 of the Proxmox kernel, you'll need this to get vendor-reset to work again. Otherwise, I cannot explain it with no changes to the Proxmox system or the VM configuration.
Negative, running kernel 5.13. I will update and apply the workaround.

I'm also getting random shutdowns of the VM, very annoying.

Really looking forward to stable PCIe passthrough; these issues are starting to become a barrier to continued use.
 
I have needed to blacklist amdgpu on the Proxmox host since the 5.13 (or late 5.11) kernel, as described in this post. This happened unexpectedly (and automatically) when updating Proxmox to 7.1. Maybe my solution can also work for your RX 6600?
Already blacklisted that aaaaaages ago. Passthrough wouldn't even work without first blacklisting that driver.
 
Managed to get some logs after the VM randomly crashed and I tried to boot it again:

Code:
Jan 06 11:38:22 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
Jan 06 11:38:22 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
Jan 06 11:38:22 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x26@0x410
Jan 06 11:38:22 central kernel: vfio-pci 0000:83:00.0: vfio_ecap_init: hiding ecap 0x27@0x440
Jan 06 11:38:23 central kernel: fwbr210i0: port 2(tap210i0) entered disabled state
Jan 06 11:38:23 central kernel: fwbr210i0: port 1(fwln210i0) entered disabled state
Jan 06 11:38:23 central kernel: vmbr30: port 3(fwpr210p0) entered disabled state
Jan 06 11:38:23 central kernel: device fwln210i0 left promiscuous mode
Jan 06 11:38:23 central kernel: fwbr210i0: port 1(fwln210i0) entered disabled state
Jan 06 11:38:23 central kernel: device fwpr210p0 left promiscuous mode
Jan 06 11:38:23 central kernel: vmbr30: port 3(fwpr210p0) entered disabled state
Jan 06 11:38:23 central kernel:  zd32: p1 p2 p3 p4
Jan 06 11:38:24 central kernel:  zd0: p1 p2
Jan 06 11:38:24 central pvestatd[5103]: VM 210 qmp command failed - VM 210 not running
 
I've noticed this behaviour with a NIC (Broadcom BCM57810S) interface passed through to my OPNsense or pfSense routers: the enp1s0f0 / enp1s0f1 interface didn't show up again until 1) the VM using the passthrough interface was no longer enabled at bootup and 2) Proxmox was rebooted to clear usage of the enp1s0f0 / enp1s0f1 interface.

I have recently found a way to get the same or better performance using virtio interfaces by doing the 2.5Gbps synchronization of the BCM57810S interface in Proxmox, adding the interface to a bridge (vmbr2) with the single port (rather than the whole card), and passing that into OPNsense, where it is used as vtnet2.
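
Roughly, that bridge looks like this in /etc/network/interfaces (a sketch; enp1s0f0 and vmbr2 are the names from my setup, adjust to suit):

Code:
auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0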

I don't really have this issue anymore using virtio interfaces, and I can potentially share the interface with another VM to run another PPPoE session on the same physical interface, which I couldn't do with passthrough (and I get better performance, since Linux can actually sync the link properly, which FreeBSD cannot at this point).

Looking forward to OPNsense 22.1, based on a newer FreeBSD, as it should be able to use Q35 rather than i440fx virtualized HW; FreeBSD will then be able to support virtio properly, with likely better performance.


Anyway, just sharing my observation about passing a PCIe card into a VM and not being able to recover the card until Proxmox is rebooted. I thought perhaps it was due to the PCIe card being tied to the VM and tried to remove it, but it still didn't show up in Linux (Proxmox) until after a reboot.
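
For what it's worth, the remove/rescan sequence I tried looks roughly like this (a sketch; the PCI address is only an example, substitute your own device's):

Code:
# drop the device from the PCI bus, then ask the kernel to re-enumerate it
echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
echo 1 > /sys/bus/pci/rescan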
 
Looks like I'll be sticking with PVE V6X until GPU passthru is ironed out a bit more.
 
Looks like I'll be sticking with PVE V6X until GPU passthru is ironed out a bit more.

Is it possible to downgrade from PVE7?

As an aside, I'm starting to get a little frustrated with this situation. Given I am new to the community, I am obviously missing the historical and contextual information that may help to understand more about this particular problem.

Why is this not a consistently operational feature?
Why is it not a priority? Or is it?
Will it become stable soon? Will that stability remain across versions? If not, what kind of upgrade schedule should I set to avoid losing this functionality?
Finally, are there alternatives to Proxmox that don't face this issue?

I am perfectly willing to accept that this is a community-driven project and that this alone will come with its pros and cons. However, I do pay for a subscription and I do use it for my work to pass through more than one device, so even though I'm not exactly entitled to or guaranteed certain functionality, I would feel better knowing the current state and roadmap, as I just can't afford to keep having this problem.

If this issue isn't present on something like ESXi for example, then I have no option but to consider that as an alternative.
 
I wouldn't try to downgrade a host to a previous major version.
You can, however, back up the VM disks, put them on a new host (previous version on a fresh install), and reuse the VM configs from the old host as well.

Will it become stable soon? Will that stability remain across versions? If not, what kind of upgrade schedule should I set to avoid losing this functionality?
I stick with the yearly major versions; we're now at PVE7 and PVE6 is on life support for just under a year now. At that time I'll be running my tests again for PVE7.

I can't comment too much about GPU passthru stability, but it's seemingly worked well enough with PVE V6X. It has never been a huge focus for the project, but maybe the PVE team can chime in and answer some of your questions. There are lots of moving parts with GPU passthru. But I use vGPUs for a production system, tied to a specific kernel and PVE version I know it works on, so I will be keeping PVE V6 around for this workload until such time as I test later versions and they work for my use case.
Finally, are there alternatives to Proxmox that don't face this issue?
xcp-ng, ESXI, if you have the money. :D

With major releases, I recommend finding time to reproduce your setup and test that it works the way you expect before planning to upgrade.
 
I have seen some remarks from staff members that PCI passthrough cannot be guaranteed by Proxmox, as it very much depends on up-stream QEMU, drivers and kernels.
If you have a subscription and this is important to you, maybe you can escalate your issue to an official support ticket?
 
I have seen some remarks from staff members that PCI passthrough cannot be guaranteed by Proxmox, as it very much depends on up-stream QEMU, drivers and kernels.
If you have a subscription and this is important to you, maybe you can escalate your issue to an official support ticket?

That might be a good idea. I was hesitant to do that in case the project is focused on more important things that may or may not appear immediately beneficial to me. I'll give it a shot and see what they say.
 
I wouldn't try to downgrade a host to a previous major version.
You can, however, back up the VM disks, put them on a new host (previous version on a fresh install), and reuse the VM configs from the old host as well.


I stick with the yearly major versions; we're now at PVE7 and PVE6 is on life support for just under a year now. At that time I'll be running my tests again for PVE7.

I can't comment too much about GPU passthru stability, but it's seemingly worked well enough with PVE V6X. It has never been a huge focus for the project, but maybe the PVE team can chime in and answer some of your questions. There are lots of moving parts with GPU passthru. But I use vGPUs for a production system, tied to a specific kernel and PVE version I know it works on, so I will be keeping PVE V6 around for this workload until such time as I test later versions and they work for my use case.

xcp-ng, ESXI, if you have the money. :D

With major releases, I recommend finding time to reproduce your setup and test that it works the way you expect before planning to upgrade.

Indeed, I need to operate a little better if I'm going to do this properly. Right now I'm just flying by the seat of my pants so to speak and it appears I'm flying a little close to the sun!

Will it just be a matter of re-installing the PVE instance? i.e. will any block storage allocated to VMs remain persistent across installs if that storage is on separate disks?
 
Will it just be a matter of re-installing the PVE instance? i.e. will any block storage allocated to VMs remain persistent across installs if that storage is on separate disks?
Grab the ISO and make a bootdrive, however you wish: https://proxmox.com/en/downloads/item/proxmox-ve-6-4-iso-installer
Grab all the configs from /etc/pve/qemu-server and /etc/pve/lxc/

Have all the VMs backed up in PBS; if not, keep the .raw/.qcow2 files and throw them on a storage you will make available to your PVE V6X install, put your old VM/LXC configs in the proper place on the PVE V6X instance, and make sure the location/datastore/storage names are correct in your config. If you run Windows with TPM stuff (cough, Windows 11 only), remove it from the configs - PVE 6 doesn't support that. Then you should be good to go. I don't think there are any other hardware settings in the configs to worry about. And of course, follow the GPU passthru documentation again.
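
A minimal sketch of grabbing the configs (the destination path is just an example):

Code:
# run on the old host; /mnt/backup is only an example destination
mkdir -p /mnt/backup/qemu-server /mnt/backup/lxc
cp -a /etc/pve/qemu-server/*.conf /mnt/backup/qemu-server/
cp -a /etc/pve/lxc/*.conf /mnt/backup/lxc/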

As others said, GPU support will only be stable and prioritised with paid customers... raising the issues. :)
 