I am using the GPU SeaBIOS PCI passthrough method to pass through an EVGA GTS 450 graphics card:
Code:
bootdisk: virtio0
cores: 4
memory: 4096
name: MINT
hostpci0: 03:00.0,x-vga=on
net0: bridge=vmbr0,virtio=36:30:64:33:65:65
numa: 0
ostype: l26
serial0: socket
smbios1: uuid=b1937d68-5b23-4dfe-a5fe-d7877b8886e1
sockets: 1
tablet: 0
virtio0: dat:vm-101-disk-1,size=32G
This works great on first boot of the VM, but after a shutdown/stop and subsequent restart, the VM hangs and dmesg on the host shows "vfio-pci 0000:03:00.0: Invalid ROM contents" while the VM is trying to start.
I have tried modifying the QEMU command by hand to specify a romfile for my graphics card, since adding a romfile reportedly solved the same VM-restart problem for others in an old, closed Arch Linux forum thread. To do so, I first used wget to download the ROM file from TechPowerUp into /root on the host:
Code:
wget https://www.techpowerup.com/vgabios/88345/EVGA.GTS450.1024.100929_1.rom
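Incidentally, "Invalid ROM contents" is vfio-pci complaining that the expansion ROM it read back does not look sane; per the PCI spec, a valid expansion ROM must begin with the bytes 0x55 0xAA. A quick way to sanity-check any romfile before handing it to QEMU is to inspect those two bytes (`check_rom` below is just an illustrative helper name, not an existing tool):

```shell
# Sketch: check that a romfile carries the 0x55 0xAA expansion-ROM
# signature that the PCI specification requires in its first two bytes.
check_rom() {
    sig=$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')
    if [ "$sig" = "55aa" ]; then
        echo "valid ROM signature"
    else
        echo "invalid ROM signature"
    fi
}
```

Running `check_rom /root/EVGA.GTS450.1024.100929_1.rom` against the downloaded file should print "valid ROM signature"; if it does not, the download itself is suspect before passthrough even enters the picture.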
Then I got the QEMU command as generated by Proxmox:
Code:
qm showcmd 101
And added "romfile=" to the end of the "-device" flag argument for the GPU passthrough:
Code:
/usr/bin/systemd-run --scope --slice qemu --unit 101 -p KillMode=none -p CPUShares=1000 /usr/bin/kvm -id 101 -chardev socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/101.pid -daemonize -smbios type=1,uuid=b1937d68-5b23-4dfe-a5fe-d7877b8886e1 -name MINT -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga none -nographic -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce,kvm=off -m 4096 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -readconfig /usr/share/qemu-server/pve-usb.cfg -device vfio-pci,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10,x-vga=on,romfile=/root/EVGA.GTS450.1024.100929_1.rom -device usb-host,hostbus=3,hostport=1 -chardev socket,id=serial0,path=/var/run/qemu-server/101.serial0,server,nowait -device isa-serial,chardev=serial0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:89de2dd6ea29 -drive file=/dev/zvol/tank/dat/vm-101-disk-1,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=36:30:64:33:65:65,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300
(Notice the part that says romfile=/root/EVGA.GTS450.1024.100929_1.rom)
However, running this modified command on the Proxmox host's command line produced exactly the same results as starting the VM from the web GUI without the romfile: the VM boots fine the first time after a fresh reboot of the host, but hangs, with the host reporting "Invalid ROM contents", when it is stopped and started again.
The only remedy is to reboot the Proxmox host.
I also tried manipulating the PCI bus from the Proxmox host: unbinding and rebinding the drivers, power-cycling the slot, and even unloading and reloading the kernel modules, all to no avail. Once I shut down the VM with GPU passthrough enabled, I must always restart the Proxmox host to avoid the "Invalid ROM contents" error on the next VM start.
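For reference, the unbind/rebind experiments looked roughly like the following. This is a sketch of the standard sysfs sequence, not a recipe: 0000:03:00.0 is the GPU address from my setup, and these commands must be run as root on that specific host.

```shell
# Detach the GPU from vfio-pci, drop it from the bus, then rescan.
# All paths are the standard sysfs PCI interfaces; adjust the address.
echo 0000:03:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind   # release the device
echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove          # remove it from the bus
echo 1 > /sys/bus/pci/rescan                               # rediscover devices
echo 0000:03:00.0 > /sys/bus/pci/drivers/vfio-pci/bind     # hand it back to vfio-pci
```

After the rescan/rebind the card shows up again under vfio-pci, but starting the VM still fails with the same ROM error.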
Finally, I tried the GPU SeaBIOS PCI Express passthrough method, using "pcie=1" and "machine: q35", which yielded the same results as regular PCI passthrough.
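For completeness, the q35 variant only changed the passthrough-related lines of the VM config shown above (everything else stayed the same):

```
machine: q35
hostpci0: 03:00.0,pcie=1,x-vga=on
```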
On the freshly booted Proxmox host node...
Here's the dmesg output before starting the VM:
Code:
...
[ 20.553643] vmbr0: port 1(eth0) entered forwarding state
[ 20.553709] vmbr0: port 1(eth0) entered forwarding state
[ 20.554087] IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
[ 20.643298] cgroup: new mount options do not match the existing superblock, will be ignored
[ 20.769322] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 20.781974] cgroup: new mount options do not match the existing superblock, will be ignored
[ 23.208443] ip6_tables: (C) 2000-2006 Netfilter Core Team
[ 23.231828] ip_set: protocol 6
Continuation of dmesg after starting the VM (via web GUI; no romfile addition):
Code:
[ 1715.320243] device tap101i0 entered promiscuous mode
[ 1715.328424] vmbr0: port 2(tap101i0) entered forwarding state
[ 1715.328472] vmbr0: port 2(tap101i0) entered forwarding state
[ 1717.488302] vfio-pci 0000:03:00.0: enabling device (0100 -> 0103)
[ 1717.490112] pmd_set_huge: Cannot satisfy [mem 0xe8000000-0xe8200000] with a huge-page mapping due to MTRR override.
[ 1719.612842] vgaarb: device changed decodes: PCI:0000:03:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
[ 1723.411206] kvm: zapping shadow pages for mmio generation wraparound
[ 1724.519222] kvm: zapping shadow pages for mmio generation wraparound
Continuation of dmesg after stopping the VM (via web GUI):
Code:
[ 20.781974] cgroup: new mount options do not match the existing superblock, will be ignored
[ 23.208443] ip6_tables: (C) 2000-2006 Netfilter Core Team
[ 23.231828] ip_set: protocol 6
[ 1715.320243] device tap101i0 entered promiscuous mode
[ 1715.328424] vmbr0: port 2(tap101i0) entered forwarding state
[ 1715.328472] vmbr0: port 2(tap101i0) entered forwarding state
[ 1717.488302] vfio-pci 0000:03:00.0: enabling device (0100 -> 0103)
[ 1717.490112] pmd_set_huge: Cannot satisfy [mem 0xe8000000-0xe8200000] with a huge-page mapping due to MTRR override.
[ 1719.612842] vgaarb: device changed decodes: PCI:0000:03:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
[ 1723.411206] kvm: zapping shadow pages for mmio generation wraparound
[ 1724.519222] kvm: zapping shadow pages for mmio generation wraparound
[ 1878.234205] zd0: p1 p2 < p5 >
[ 1878.463434] vmbr0: port 2(tap101i0) entered disabled state
Continuation of dmesg after re-starting the VM (via web GUI):
Code:
[ 1961.193135] device tap101i0 entered promiscuous mode
[ 1961.201196] vmbr0: port 2(tap101i0) entered forwarding state
[ 1961.201241] vmbr0: port 2(tap101i0) entered forwarding state
[ 1965.311554] vfio-pci 0000:03:00.0: Invalid ROM contents
[ 1968.066620] kvm: zapping shadow pages for mmio generation wraparound
[ 1968.760277] kvm: zapping shadow pages for mmio generation wraparound
(Notice the message: vfio-pci 0000:03:00.0: Invalid ROM contents)