QEMU exited with code 1 - GPU passthrough attempt

wawariors

New Member
Jul 14, 2023
Hello,
I've been trying different solutions to get my GPU passthrough to work, but the VM with the PCI passthrough doesn't start. The only error in the CLI is "start failed: QEMU exited with code 1".
The GPU used is an Nvidia Quadro P400.
I've followed and tried solutions in the following places:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
https://pve.proxmox.com/wiki/PCI_Passthrough
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
and gone through a lot of forum posts online, but I can't seem to find a solution.
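For reference, the basic checks from those guides boil down to something like this (just a sketch of the commands, not my full output; my GPU sits at 0000:81:00.0):
Code:
# check that IOMMU is enabled
dmesg | grep -e DMAR -e IOMMU
# list IOMMU groups to make sure the GPU is properly isolated
find /sys/kernel/iommu_groups/ -type l
# confirm the card is bound to vfio-pci
lspci -nnk -s 81:00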
Here is the syslog for the VM start:
Jul 14 21:29:12 pve pvedaemon[3771]: <root@pam> starting task UPID:pve:000311D9:00027544:64B1A208:qmstart:109:root@pam:
Jul 14 21:29:12 pve pvedaemon[201177]: start VM 109: UPID:pve:000311D9:00027544:64B1A208:qmstart:109:root@pam:
Jul 14 21:29:12 pve systemd[1]: Started 109.scope.
Jul 14 21:29:12 pve kernel: device tap109i0 entered promiscuous mode
Jul 14 21:29:12 pve kernel: fwbr109i0: port 1(fwln109i0) entered disabled state
Jul 14 21:29:12 pve kernel: vmbr0: port 10(fwpr109p0) entered disabled state
Jul 14 21:29:12 pve kernel: device fwln109i0 left promiscuous mode
Jul 14 21:29:12 pve kernel: fwbr109i0: port 1(fwln109i0) entered disabled state
Jul 14 21:29:12 pve kernel: device fwpr109p0 left promiscuous mode
Jul 14 21:29:12 pve kernel: vmbr0: port 10(fwpr109p0) entered disabled state
Jul 14 21:29:13 pve kernel: vmbr0: port 10(fwpr109p0) entered blocking state
Jul 14 21:29:13 pve kernel: vmbr0: port 10(fwpr109p0) entered disabled state
Jul 14 21:29:13 pve kernel: device fwpr109p0 entered promiscuous mode
Jul 14 21:29:13 pve kernel: vmbr0: port 10(fwpr109p0) entered blocking state
Jul 14 21:29:13 pve kernel: vmbr0: port 10(fwpr109p0) entered forwarding state
Jul 14 21:29:13 pve kernel: fwbr109i0: port 1(fwln109i0) entered blocking state
Jul 14 21:29:13 pve kernel: fwbr109i0: port 1(fwln109i0) entered disabled state
Jul 14 21:29:13 pve kernel: device fwln109i0 entered promiscuous mode
Jul 14 21:29:13 pve kernel: fwbr109i0: port 1(fwln109i0) entered blocking state
Jul 14 21:29:13 pve kernel: fwbr109i0: port 1(fwln109i0) entered forwarding state
Jul 14 21:29:13 pve kernel: fwbr109i0: port 2(tap109i0) entered blocking state
Jul 14 21:29:13 pve kernel: fwbr109i0: port 2(tap109i0) entered disabled state
Jul 14 21:29:13 pve kernel: fwbr109i0: port 2(tap109i0) entered blocking state
Jul 14 21:29:13 pve kernel: fwbr109i0: port 2(tap109i0) entered forwarding state
Jul 14 21:29:15 pve kernel: vfio-pci 0000:81:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Jul 14 21:29:15 pve kernel: kvm[201211]: segfault at b8 ip 000055ef918a8805 sp 00007fff7d242f10 error 4 in qemu-system-x86_64[55ef914fb000+613000] likely on CPU 1 (core 1, socket 0)
Jul 14 21:29:15 pve kernel: Code: 48 85 c0 75 f0 48 8b 6b 60 48 89 b3 80 00 00 00 e8 20 6b 00 00 48 8b 7b 40 83 05 c1 1f b1 00 01 48 85 ff 74 05 e8 cb e7 09 00 <48> 8b 85 b8 00 00 00 48 85 c0 74 7f 8b 93 b0 00 00 00 eb 13 0f 1f
Jul 14 21:29:16 pve kernel: fwbr109i0: port 2(tap109i0) entered disabled state
Jul 14 21:29:16 pve kernel: fwbr109i0: port 2(tap109i0) entered disabled state
Jul 14 21:29:16 pve pvestatd[3670]: VM 109 qmp command failed - VM 109 qmp command 'query-proxmox-support' failed - client closed connection
Jul 14 21:29:16 pve pvedaemon[201177]: start failed: QEMU exited with code 1
Jul 14 21:29:16 pve pvedaemon[3771]: <root@pam> end task UPID:pve:000311D9:00027544:64B1A208:qmstart:109:root@pam: start failed: QEMU exited with code 1
Jul 14 21:29:16 pve systemd[1]: 109.scope: Deactivated successfully.
Jul 14 21:29:16 pve systemd[1]: 109.scope: Consumed 3.346s CPU time.


What other information should I give to help troubleshoot my problem?
I hope you have a great day,
wawariors
 
Here is my VM config:
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
efidisk0: local-zfs:vm-109-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide2: local:iso/Win10_22H2_English_x64.iso,media=cdrom,size=5971862K
machine: q35
memory: 8192
meta: creation-qemu=8.0.2,ctime=1689361481
name: test
net0: virtio=7A:BB:39:68:4A:45,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-109-disk-1,iothread=1,size=100G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=22b9279f-9f1b-45d2-8234-a9d6aee2de91
sockets: 2
vmgenid: 1e5fce5c-2e17-49bb-8a81-c1326d974471
hostpci0: 81:00,x-vga=1
 
Sorry for the necro. Were you able to find a solution to this? I'm having a nearly identical issue, but with a Quadro K4000. Your segfault code is the most similar one to mine that I've been able to find through Google. Same deal: went through the listed guides to no avail.

Diff:
48 85 c0 75 f0 48 8b 6b 60 48 89 b3 80 00 00 00 e8 20 6b 00 00 48 8b 7b 40 83 05 c1 1f b1 00 01 48 85 ff 74 05 e8 cb e7 09 00 <48> 8b 85 b8 00 00 00 48 85 c0 74 7f 8b 93 b0 00 00 00 eb 13 0f 1f

48 85 c0 75 f0 48 8b 6b 60 48 89 b3 80 00 00 00 e8 60 6b 00 00 48 8b 7b 40 83 05 e1 49 b3 00 01 48 85 ff 74 05 e8 5b ea 06 00 <48> 8b 85 b8 00 00 00 48 85 c0 74 7f 8b 93 b0 00 00 00 eb 13 0f 1f
 
Hi,
please share the VM configuration (qm config <ID>) and the output of pveversion -v. You can also get a complete backtrace of the failure by installing the debug packages with apt install systemd-coredump gdb pve-qemu-kvm-dbgsym and then, after the failure happens, running coredumpctl -1 gdb and entering thread apply all backtrace at the GDB prompt.
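For reference, on the host that is:
Code:
apt install systemd-coredump gdb pve-qemu-kvm-dbgsym
# reproduce the failed VM start, then open the most recent core dump:
coredumpctl -1 gdb
# at the (gdb) prompt:
thread apply all backtrace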
 
Code:
bios: ovmf
boot: order=ide2;net0
cores: 2
cpu: host
efidisk0: local-lvm:vm-105-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:0b:00,pcie=1,x-vga=1
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.1.5,ctime=1730770372
name: pt-test
net0: virtio=BC:24:11:FA:A0:E2,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=457d6c31-f3b6-4e40-bd7d-25dde35076ea
sockets: 1
vga: none
vmgenid: 91bae75f-6f82-4097-af25-e3fdf699f70b

Code:
root@vault:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-8-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-8
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.4
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-3
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1

Code:
(gdb) thread apply all backtrace

Thread 4 (Thread 0x7f72516466c0 (LWP 457116)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5583893bbd08) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x5583893bbd08, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007f7254ea4efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5583893bbd08, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f7254ea7558 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x558387aa0c00 <bql>, cond=0x5583893bbce0) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x5583893bbce0, mutex=mutex@entry=0x558387aa0c00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x0000558386bd11bb in qemu_cond_wait_impl (cond=0x5583893bbce0, mutex=0x558387aa0c00 <bql>, file=0x558386d419f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x00005583867f556e in qemu_wait_io_event (cpu=cpu@entry=0x558389734d60) at ../system/cpus.c:451
#7  0x0000558386a19888 in kvm_vcpu_thread_fn (arg=arg@entry=0x558389734d60) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x0000558386bd05c8 in qemu_thread_start (args=0x5583893bbd20) at ../util/qemu-thread-posix.c:541
#9  0x00007f7254ea8144 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007f7254f287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 3 (Thread 0x7f7250b046c0 (LWP 457117)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55838976e168) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55838976e168, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007f7254ea4efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55838976e168, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f7254ea7558 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x558387aa0c00 <bql>, cond=0x55838976e140) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55838976e140, mutex=mutex@entry=0x558387aa0c00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x0000558386bd11bb in qemu_cond_wait_impl (cond=0x55838976e140, mutex=0x558387aa0c00 <bql>, file=0x558386d419f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x00005583867f556e in qemu_wait_io_event (cpu=cpu@entry=0x5583897650f0) at ../system/cpus.c:451
#7  0x0000558386a19888 in kvm_vcpu_thread_fn (arg=arg@entry=0x5583897650f0) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x0000558386bd05c8 in qemu_thread_start (args=0x55838976e180) at ../util/qemu-thread-posix.c:541
#9  0x00007f7254ea8144 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007f7254f287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 2 (Thread 0x7f7251f486c0 (LWP 457036)):
#0  futex_wait (private=0, expected=2, futex_word=0x558387aa0c00 <bql>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait (futex=futex@entry=0x558387aa0c00 <bql>, private=0) at ./nptl/lowlevellock.c:49
#2  0x00007f7254eab3d2 in lll_mutex_lock_optimized (mutex=0x558387aa0c00 <bql>) at ./nptl/pthread_mutex_lock.c:48
#3  ___pthread_mutex_lock (mutex=mutex@entry=0x558387aa0c00 <bql>) at ./nptl/pthread_mutex_lock.c:93
#4  0x0000558386bd09c3 in qemu_mutex_lock_impl (mutex=0x558387aa0c00 <bql>, file=0x558386e29505 "../util/rcu.c", line=286) at ../util/qemu-thread-posix.c:94
#5  0x00005583867f57c6 in bql_lock_impl (file=file@entry=0x558386e29505 "../util/rcu.c", line=line@entry=286) at ../system/cpus.c:525
#6  0x0000558386bdc842 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:286
#7  0x0000558386bd05c8 in qemu_thread_start (args=0x5583893bde40) at ../util/qemu-thread-posix.c:541
#8  0x00007f7254ea8144 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#9  0x00007f7254f287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 1 (Thread 0x7f72521b0480 (LWP 457035)):
#0  memory_region_update_container_subregions (subregion=0x55838aa08a70) at ../system/memory.c:2637
#1  memory_region_add_subregion_common (mr=<optimized out>, offset=<optimized out>, subregion=0x55838aa08a70) at ../system/memory.c:2661
#2  0x0000558386985d8f in vfio_probe_nvidia_bar0_quirk (nr=0, vdev=0x55838a94a620) at ../hw/vfio/pci-quirks.c:966
#3  vfio_bar_quirk_setup (vdev=vdev@entry=0x55838a94a620, nr=nr@entry=0) at ../hw/vfio/pci-quirks.c:1259
#4  0x000055838698cfff in vfio_realize (pdev=<optimized out>, errp=<optimized out>) at ../hw/vfio/pci.c:3124
#5  0x000055838672847e in pci_qdev_realize (qdev=<optimized out>, errp=<optimized out>) at ../hw/pci/pci.c:2093
#6  0x0000558386a2633b in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7ffc94e5dc80) at ../hw/core/qdev.c:510
#7  0x0000558386a2ab6d in property_set_bool (obj=0x55838a94a620, v=<optimized out>, name=<optimized out>, opaque=0x5583893c5710, errp=0x7ffc94e5dc80) at ../qom/object.c:2358
#8  0x0000558386a2e0cb in object_property_set (obj=obj@entry=0x55838a94a620, name=name@entry=0x558386d42739 "realized", v=v@entry=0x55838a94c820, errp=errp@entry=0x7ffc94e5dc80) at ../qom/object.c:1472
#9  0x0000558386a319af in object_property_set_qobject (obj=obj@entry=0x55838a94a620, name=name@entry=0x558386d42739 "realized", value=value@entry=0x55838a94a2f0, errp=errp@entry=0x7ffc94e5dc80) at ../qom/qom-qobject.c:28
#10 0x0000558386a2e744 in object_property_set_bool (obj=obj@entry=0x55838a94a620, name=name@entry=0x558386d42739 "realized", value=value@entry=true, errp=errp@entry=0x7ffc94e5dc80) at ../qom/object.c:1541
#11 0x0000558386a26e2c in qdev_realize (dev=dev@entry=0x55838a94a620, bus=bus@entry=0x55838a384910, errp=errp@entry=0x7ffc94e5dc80) at ../hw/core/qdev.c:292
#12 0x00005583867fb3d3 in qdev_device_add_from_qdict (opts=opts@entry=0x55838a249400, from_json=from_json@entry=false, errp=0x7ffc94e5dc80, errp@entry=0x558387ab82f8 <error_fatal>) at ../system/qdev-monitor.c:718
#13 0x00005583867fb841 in qdev_device_add (opts=0x5583893c0ab0, errp=errp@entry=0x558387ab82f8 <error_fatal>) at ../system/qdev-monitor.c:737
#14 0x00005583868007ff in device_init_func (opaque=<optimized out>, opts=<optimized out>, errp=0x558387ab82f8 <error_fatal>) at ../system/vl.c:1201
#15 0x0000558386bdaa91 in qemu_opts_foreach (list=<optimized out>, func=func@entry=0x5583868007f0 <device_init_func>, opaque=opaque@entry=0x0, errp=errp@entry=0x558387ab82f8 <error_fatal>) at ../util/qemu-option.c:1135
#16 0x00005583868032ca in qemu_create_cli_devices () at ../system/vl.c:2644
#17 qmp_x_exit_preconfig (errp=0x558387ab82f8 <error_fatal>) at ../system/vl.c:2713
#18 0x000055838680747c in qemu_init (argc=<optimized out>, argv=<optimized out>) at ../system/vl.c:3782
#19 0x000055838659e8d9 in main (argc=<optimized out>, argv=<optimized out>) at ../system/main.c:47
 
Please try again after updating to the current QEMU version and kernel.
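For reference, without a subscription that roughly means enabling the pve-no-subscription repository and doing a full upgrade (a sketch for PVE 8 on Debian Bookworm):
Code:
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

apt update && apt full-upgrade
reboot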
 
Ah, sorry about that. Legitimately just discovered that the pve-no-subscription repo is a thing. Lol. Here ya go:

Code:
bios: ovmf
boot: order=ide2;net0
cores: 2
cpu: host
efidisk0: local-lvm:vm-105-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:0b:00,pcie=1,x-vga=1
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=8.1.5,ctime=1730770372
name: pt-test
net0: virtio=BC:24:11:FA:A0:E2,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=457d6c31-f3b6-4e40-bd7d-25dde35076ea
sockets: 1
vga: none
vmgenid: 91bae75f-6f82-4097-af25-e3fdf699f70b

Code:
root@vault:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-3-pve)
pve-manager: 8.2.7 (running version: 8.2.7/3e0176e6bb2ade3b)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-3
proxmox-kernel-6.8.12-3-pve-signed: 6.8.12-3
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.8
libpve-cluster-perl: 8.0.8
libpve-common-perl: 8.2.5
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.10
libpve-storage-perl: 8.2.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-4
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.2.4
pve-cluster: 8.0.8
pve-container: 5.2.0
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.0.7
pve-firmware: 3.14-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.4
pve-qemu-kvm: 9.0.2-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
 
Part 2 because character limit:

The issue still persists:
Code:
(gdb) thread apply all backtrace

Thread 5 (Thread 0x78bee2e006c0 (LWP 3802)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x56c48923ca18) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x56c48923ca18, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x000078bee6701efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x56c48923ca18, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x000078bee6704558 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x56c487a2ac00 <bql>, cond=0x56c48923c9f0) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x56c48923c9f0, mutex=mutex@entry=0x56c487a2ac00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x000056c486b5b1bb in qemu_cond_wait_impl (cond=0x56c48923c9f0, mutex=0x56c487a2ac00 <bql>, file=0x56c486ccb9f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x000056c48677f56e in qemu_wait_io_event (cpu=cpu@entry=0x56c4895b5d50) at ../system/cpus.c:451
#7  0x000056c4869a3888 in kvm_vcpu_thread_fn (arg=arg@entry=0x56c4895b5d50) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x000056c486b5a5c8 in qemu_thread_start (args=0x56c48923ca30) at ../util/qemu-thread-posix.c:541
#9  0x000078bee6705144 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x000078bee67857dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 4 (Thread 0x78bee20006c0 (LWP 3803)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x56c4895ef928) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x56c4895ef928, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x000078bee6701efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x56c4895ef928, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x000078bee6704558 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x56c487a2ac00 <bql>, cond=0x56c4895ef900) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x56c4895ef900, mutex=mutex@entry=0x56c487a2ac00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x000056c486b5b1bb in qemu_cond_wait_impl (cond=0x56c4895ef900, mutex=0x56c487a2ac00 <bql>, file=0x56c486ccb9f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x000056c48677f56e in qemu_wait_io_event (cpu=cpu@entry=0x56c4895e5de0) at ../system/cpus.c:451
#7  0x000056c4869a3888 in kvm_vcpu_thread_fn (arg=arg@entry=0x56c4895e5de0) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x000056c486b5a5c8 in qemu_thread_start (args=0x56c4895ef940) at ../util/qemu-thread-posix.c:541
#9  0x000078bee6705144 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x000078bee67857dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 3 (Thread 0x78bee38006c0 (LWP 3684)):
#0  futex_wait (private=0, expected=2, futex_word=0x56c487a2ac00 <bql>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait (futex=futex@entry=0x56c487a2ac00 <bql>, private=0) at ./nptl/lowlevellock.c:49
#2  0x000078bee67083d2 in lll_mutex_lock_optimized (mutex=0x56c487a2ac00 <bql>) at ./nptl/pthread_mutex_lock.c:48
#3  ___pthread_mutex_lock (mutex=mutex@entry=0x56c487a2ac00 <bql>) at ./nptl/pthread_mutex_lock.c:93
#4  0x000056c486b5a9c3 in qemu_mutex_lock_impl (mutex=0x56c487a2ac00 <bql>, file=0x56c486db3505 "../util/rcu.c", line=286) at ../util/qemu-thread-posix.c:94
#5  0x000056c48677f7c6 in bql_lock_impl (file=file@entry=0x56c486db3505 "../util/rcu.c", line=line@entry=286) at ../system/cpus.c:525
#6  0x000056c486b66842 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:286
#7  0x000056c486b5a5c8 in qemu_thread_start (args=0x56c48923ee40) at ../util/qemu-thread-posix.c:541
#8  0x000078bee6705144 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#9  0x000078bee67857dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 2 (Thread 0x78bee10006c0 (LWP 3804)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x78bee0ffb060, op=393, expected=0, futex_word=0x56c489246fb4) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x56c489246fb4, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x78bee0ffb060, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x000078bee6701efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x56c489246fb4, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x78bee0ffb060, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x000078bee670483c in __pthread_cond_wait_common (abstime=0x78bee0ffb060, clockid=0, mutex=0x56c489246f20, cond=0x56c489246f88) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_timedwait64 (cond=cond@entry=0x56c489246f88, mutex=mutex@entry=0x56c489246f20, abstime=abstime@entry=0x78bee0ffb060) at ./nptl/pthread_cond_wait.c:643
#5  0x000056c486b5a751 in qemu_cond_timedwait_ts (cond=cond@entry=0x56c489246f88, mutex=mutex@entry=0x56c489246f20, ts=ts@entry=0x78bee0ffb060, file=file@entry=0x56c486db5bd8 "../util/thread-pool.c", line=line@entry=91) at ../util/qemu-thread-posix.c:239
#6  0x000056c486b5b3f8 in qemu_cond_timedwait_impl (cond=0x56c489246f88, mutex=0x56c489246f20, ms=<optimized out>, file=0x56c486db5bd8 "../util/thread-pool.c", line=91) at ../util/qemu-thread-posix.c:253
#7  0x000056c486b721ac in worker_thread (opaque=opaque@entry=0x56c489246f10) at ../util/thread-pool.c:91
#8  0x000056c486b5a5c8 in qemu_thread_start (args=0x56c4894d3520) at ../util/qemu-thread-posix.c:541
#9  0x000078bee6705144 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x000078bee67857dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81


Thread 1 (Thread 0x78bee3d7b480 (LWP 3683)):
#0  memory_region_update_container_subregions (subregion=0x56c48a888ba0) at ../system/memory.c:2637
#1  memory_region_add_subregion_common (mr=<optimized out>, offset=<optimized out>, subregion=0x56c48a888ba0) at ../system/memory.c:2661
#2  0x000056c48690fd8f in vfio_probe_nvidia_bar0_quirk (nr=0, vdev=0x56c48a7ca630) at ../hw/vfio/pci-quirks.c:966
#3  vfio_bar_quirk_setup (vdev=vdev@entry=0x56c48a7ca630, nr=nr@entry=0) at ../hw/vfio/pci-quirks.c:1259
#4  0x000056c486916fff in vfio_realize (pdev=<optimized out>, errp=<optimized out>) at ../hw/vfio/pci.c:3124
#5  0x000056c4866b247e in pci_qdev_realize (qdev=<optimized out>, errp=<optimized out>) at ../hw/pci/pci.c:2093
#6  0x000056c4869b033b in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7fff11083150) at ../hw/core/qdev.c:510
#7  0x000056c4869b4b6d in property_set_bool (obj=0x56c48a7ca630, v=<optimized out>, name=<optimized out>, opaque=0x56c489246710, errp=0x7fff11083150) at ../qom/object.c:2358
#8  0x000056c4869b80cb in object_property_set (obj=obj@entry=0x56c48a7ca630, name=name@entry=0x56c486ccc739 "realized", v=v@entry=0x56c48a7cc850, errp=errp@entry=0x7fff11083150) at ../qom/object.c:1472
#9  0x000056c4869bb9af in object_property_set_qobject (obj=obj@entry=0x56c48a7ca630, name=name@entry=0x56c486ccc739 "realized", value=value@entry=0x56c48a7ca320, errp=errp@entry=0x7fff11083150) at ../qom/qom-qobject.c:28
#10 0x000056c4869b8744 in object_property_set_bool (obj=obj@entry=0x56c48a7ca630, name=name@entry=0x56c486ccc739 "realized", value=value@entry=true, errp=errp@entry=0x7fff11083150) at ../qom/object.c:1541
#11 0x000056c4869b0e2c in qdev_realize (dev=dev@entry=0x56c48a7ca630, bus=bus@entry=0x56c48a204d60, errp=errp@entry=0x7fff11083150) at ../hw/core/qdev.c:292
#12 0x000056c4867853d3 in qdev_device_add_from_qdict (opts=opts@entry=0x56c48a0c9400, from_json=from_json@entry=false, errp=0x7fff11083150, errp@entry=0x56c487a422f8 <error_fatal>) at ../system/qdev-monitor.c:718
#13 0x000056c486785841 in qdev_device_add (opts=0x56c489241ab0, errp=errp@entry=0x56c487a422f8 <error_fatal>) at ../system/qdev-monitor.c:737
#14 0x000056c48678a7ff in device_init_func (opaque=<optimized out>, opts=<optimized out>, errp=0x56c487a422f8 <error_fatal>) at ../system/vl.c:1201
#15 0x000056c486b64a91 in qemu_opts_foreach (list=<optimized out>, func=func@entry=0x56c48678a7f0 <device_init_func>, opaque=opaque@entry=0x0, errp=errp@entry=0x56c487a422f8 <error_fatal>) at ../util/qemu-option.c:1135
#16 0x000056c48678d2ca in qemu_create_cli_devices () at ../system/vl.c:2644
#17 qmp_x_exit_preconfig (errp=0x56c487a422f8 <error_fatal>) at ../system/vl.c:2713
#18 0x000056c48679147c in qemu_init (argc=<optimized out>, argv=<optimized out>) at ../system/vl.c:3782
#19 0x000056c4865288d9 in main (argc=<optimized out>, argv=<optimized out>) at ../system/main.c:47
 
If it's relevant, here's a copy of my own thread on the issue with more info (including hardware, though prior to updating):
Hi, I'm currently running into an issue where I can't pass through a known working GPU (Quadro K4000/GK106GL) to a VM. Whenever I start up a VM with this device passed through, I get QEMU exit code 1 after about 10-30 seconds. As far as I can tell, it doesn't reach the bios.

Here is the output of "journalctl -f" while trying to start a test VM with the video card:
Code:
Nov 04 19:07:33 vault pvedaemon[2729]: start VM 169: UPID:vault:00000AA9:000014D6:67298BF5:qmstart:169:root@pam:
Nov 04 19:07:33 vault pvedaemon[2609]: <root@pam> starting task UPID:vault:00000AA9:000014D6:67298BF5:qmstart:169:root@pam:
Nov 04 19:07:33 vault kernel: vfio-pci 0000:0b:00.0: vgaarb: deactivate vga console
Nov 04 19:07:33 vault kernel: vfio-pci 0000:0b:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
Nov 04 19:07:33 vault kernel: vfio-pci 0000:0b:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
Nov 04 19:07:33 vault kernel: vfio-pci 0000:0b:00.0: vgaarb: deactivate vga console
Nov 04 19:07:33 vault kernel: vfio-pci 0000:0b:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
Nov 04 19:07:33 vault systemd[1]: Created slice qemu.slice - Slice /qemu.
Nov 04 19:07:33 vault systemd[1]: Started 169.scope.
Nov 04 19:07:35 vault kernel: tap169i0: entered promiscuous mode
Nov 04 19:07:35 vault kernel: vmbr0: port 2(fwpr169p0) entered blocking state
Nov 04 19:07:35 vault kernel: vmbr0: port 2(fwpr169p0) entered disabled state
Nov 04 19:07:35 vault kernel: fwpr169p0: entered allmulticast mode
Nov 04 19:07:35 vault kernel: fwpr169p0: entered promiscuous mode
Nov 04 19:07:35 vault kernel: bond0: entered promiscuous mode
Nov 04 19:07:35 vault kernel: ixgbe 0000:0c:00.0 eth11: entered promiscuous mode
Nov 04 19:07:35 vault kernel: ixgbe 0000:0c:00.1 eth12: entered promiscuous mode
Nov 04 19:07:35 vault kernel: ixgbe 0000:0a:00.0 eth13: entered promiscuous mode
Nov 04 19:07:35 vault kernel: ixgbe 0000:0a:00.1 eth14: entered promiscuous mode
Nov 04 19:07:43 vault pvedaemon[2610]: VM 169 qmp command failed - VM 169 qmp command 'query-proxmox-support' failed - got timeout
Nov 04 19:07:49 vault kernel: vmbr0: port 2(fwpr169p0) entered blocking state
Nov 04 19:07:49 vault kernel: vmbr0: port 2(fwpr169p0) entered forwarding state
Nov 04 19:07:49 vault kernel: fwbr169i0: port 1(fwln169i0) entered blocking state
Nov 04 19:07:49 vault kernel: fwbr169i0: port 1(fwln169i0) entered disabled state
Nov 04 19:07:49 vault kernel: fwln169i0: entered allmulticast mode
Nov 04 19:07:49 vault kernel: fwln169i0: entered promiscuous mode
Nov 04 19:07:49 vault kernel: fwbr169i0: port 1(fwln169i0) entered blocking state
Nov 04 19:07:49 vault kernel: fwbr169i0: port 1(fwln169i0) entered forwarding state
Nov 04 19:07:49 vault kernel: fwbr169i0: port 2(tap169i0) entered blocking state
Nov 04 19:07:49 vault kernel: fwbr169i0: port 2(tap169i0) entered disabled state
Nov 04 19:07:49 vault kernel: tap169i0: entered allmulticast mode
Nov 04 19:07:49 vault kernel: fwbr169i0: port 2(tap169i0) entered blocking state
Nov 04 19:07:49 vault kernel: fwbr169i0: port 2(tap169i0) entered forwarding state
Nov 04 19:07:51 vault kernel: show_signal_msg: 13 callbacks suppressed
---
Nov 04 19:07:51 vault kernel: kvm[2762]: segfault at b8 ip 0000559534cd1ba5 sp 00007ffc492f8ab0 error 4 in qemu-system-x86_64[55953491f000+625000] likely on CPU 1 (core 1, socket 0)
Nov 04 19:07:51 vault kernel: Code: 48 85 c0 75 f0 48 8b 6b 60 48 89 b3 80 00 00 00 e8 60 6b 00 00 48 8b 7b 40 83 05 e1 49 b3 00 01 48 85 ff 74 05 e8 5b ea 06 00 <48> 8b 85 b8 00 00 00 48 85 c0 74 7f 8b 93 b0 00 00 00 eb 13 0f 1f
---
Nov 04 19:07:51 vault kernel: fwbr169i0: port 2(tap169i0) entered disabled state
Nov 04 19:07:51 vault kernel: tap169i0 (unregistering): left allmulticast mode
Nov 04 19:07:51 vault kernel: fwbr169i0: port 2(tap169i0) entered disabled state
Nov 04 19:07:51 vault pvedaemon[2729]: start failed: QEMU exited with code 1
Nov 04 19:07:51 vault pvedaemon[2609]: <root@pam> end task UPID:vault:00000AA9:000014D6:67298BF5:qmstart:169:root@pam: start failed: QEMU exited with code 1
Nov 04 19:07:51 vault pvestatd[2578]: VM 169 qmp command failed - VM 169 qmp command 'query-proxmox-support' failed - unable to connect to VM 169 qmp socket - Connection refused
Nov 04 19:07:51 vault pvedaemon[2611]: VM 169 qmp command failed - VM 169 qmp command 'query-proxmox-support' failed - unable to connect to VM 169 qmp socket - Connection refused

The "Code:" byte sequence in the segfault message is exactly as follows, every time:
Code:
48 85 c0 75 f0 48 8b 6b 60 48 89 b3 80 00 00 00 e8 60 6b 00 00 48 8b 7b 40 83 05 e1 49 b3 00 01 48 85 ff 74 05 e8 5b ea 06 00 <48> 8b 85 b8 00 00 00 48 85 c0 74 7f 8b 93 b0 00 00 00 eb 13 0f 1f
The "likely CPU" is random though.

My setup is as follows:

Motherboard: ASUS X99-E WS/USB3.1 (The block diagram for my mobo can be found on page 183 of the manual.)
CPU: Xeon E5-1660V3
RAM: 8x16GB DDR4 ECC 2133

PCIE slots:
1: PCIE SSD (Intel p3600 1.4tb)
2: LSI HBA card in IT mode
3: PCIE SSD (Intel p3600 1.4tb)
4: Empty, previously where my GPU was
5: X520 DA2 NIC
6: Quadro K4000
7: X520 DA2 NIC

Slots 1/2/3 are passed through to a TrueNAS SCALE VM with no hiccups. Trying to pass the Quadro through to any VM, whether in slot 4 or slot 6, results in this error on startup. None of the PCI passthrough options in the GUI seem to have any bearing on the issue: ROM-Bar, All Functions, PCI-Express with Q35 and OVMF, plain old i440fx/SeaBIOS, x-vga=0 or 1... no dice on anything.
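For what it's worth, the GUI variants I tried correspond to hostpci0 lines roughly like these (just a sketch, not my exact configs):
Code:
# Q35 + OVMF, PCI-Express, primary GPU:
hostpci0: 0000:0b:00,pcie=1,x-vga=1
# same, with ROM-Bar disabled:
hostpci0: 0000:0b:00,pcie=1,x-vga=1,rombar=0
# single function only, i440fx/SeaBIOS:
hostpci0: 0000:0b:00.0,x-vga=0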

Proxmox itself is running in UEFI mode.

IOMMU is enabled. The GPU and its HDMI audio controller are both in IOMMU group 37. Nothing else is in said group.

My kernel cmdline options: quiet intel_iommu=on iommu=pt pcie_aspm=off (ASPM is disabled for now because the LSI card spits out a ton of "recovered error" messages otherwise). In the BIOS, VT-d, Intel Virtualization Technology, and ACS are all enabled. I've disabled ASPM everywhere I can there as well, which makes no difference as far as I can tell. I've also blacklisted the nvidia and nouveau drivers in /etc/modprobe.d/blacklist.conf.
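The relevant modprobe config is roughly the following (a sketch; the vfio-pci ids line is the optional early-binding step from the wiki, using the device IDs from the lspci output further down):
Code:
# /etc/modprobe.d/blacklist.conf
blacklist nouveau
blacklist nvidia

# /etc/modprobe.d/vfio.conf (optional early binding, file name as in the wiki example)
options vfio-pci ids=10de:11fa,10de:0e0b

# then rebuild the initramfs and reboot:
update-initramfs -u -k all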

I've beaten the passthrough guide to death. See the following outputs for the commands in it:

lspci -nnk (just the relevant GPU stuff):
Code:
0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [10de:11fa] (rev a1)
        Subsystem: Hewlett-Packard Company GK106GL [Quadro K4000] [103c:079c]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
0b:00.1 Audio device [0403]: NVIDIA Corporation GK106 HDMI Audio Controller [10de:0e0b] (rev a1)
        Subsystem: Hewlett-Packard Company GK106 HDMI Audio Controller [103c:079c]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel

dmesg | grep -e DMAR -e IOMMU:
Code:
[    0.021251] ACPI: DMAR 0x00000000BB1C0270 0000E4 (v01 ALASKA A M I    00000001 INTL 20091013)
[    0.021292] ACPI: Reserving DMAR table memory at [mem 0xbb1c0270-0xbb1c0353]
[    0.385224] DMAR: IOMMU enabled
[    1.066154] DMAR: Host address width 46
[    1.066156] DMAR: DRHD base: 0x000000fbffd000 flags: 0x0
[    1.066168] DMAR: dmar0: reg_base_addr fbffd000 ver 1:0 cap d2008c10ef0466 ecap f0205b
[    1.066173] DMAR: DRHD base: 0x000000fbffc000 flags: 0x1
[    1.066181] DMAR: dmar1: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    1.066185] DMAR: RMRR base: 0x000000bdb73000 end: 0x000000bdb81fff
[    1.066189] DMAR: ATSR flags: 0x0
[    1.066192] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x0
[    1.066196] DMAR-IR: IOAPIC id 1 under DRHD base  0xfbffc000 IOMMU 1
[    1.066200] DMAR-IR: IOAPIC id 2 under DRHD base  0xfbffc000 IOMMU 1
[    1.066202] DMAR-IR: HPET id 0 under DRHD base 0xfbffc000
[    1.066205] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    1.066207] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    1.067145] DMAR-IR: Enabled IRQ remapping in xapic mode
[    3.761644] DMAR: [Firmware Bug]: RMRR entry for device 13:00.0 is broken - applying workaround
[    3.761651] DMAR: No SATC found
[    3.761654] DMAR: IOMMU feature sc_support inconsistent
[    3.761656] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    3.761658] DMAR: dmar0: Using Queued invalidation
[    3.761672] DMAR: dmar1: Using Queued invalidation
[    3.782899] DMAR: Intel(R) Virtualization Technology for Directed I/O

dmesg | grep 'remapping'
Code:
[    1.067145] DMAR-IR: Enabled IRQ remapping in xapic mode
[    1.067148] x2apic: IRQ remapping doesn't support X2APIC mode

I've tried "intremap=no_x2apic_optout" on the kernel command line. It seems to work as far as enabling x2apic goes; rerunning these commands after a reboot indicates as much, but the main problem still persists. I've since rolled it back.
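(For the record, I added and later removed the option the usual way; a sketch, which file applies depends on the bootloader:)
Code:
# GRUB: edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub
# systemd-boot via proxmox-boot-tool: edit /etc/kernel/cmdline, then:
proxmox-boot-tool refresh
# reboot for the change to take effect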

I've tried changing the CPU type to host, QEMU64, x86-64-vxxxx, Haswell. No luck.

Enabling Above 4G Decoding in the BIOS causes the web panel to never load. I have no clue what's going on there.

MCTP in the BIOS does not stay enabled after a reboot. Not sure if it's related, but it's something I've tried, since it sits alongside ACS under the VT-d section.

All I can think of is that some virtualization/IOMMU-related option in the BIOS isn't actually functioning despite claiming to. Does anyone have thoughts on what other troubleshooting steps I can take?
 
Code:
Thread 1 (Thread 0x78bee3d7b480 (LWP 3683)):
#0  memory_region_update_container_subregions (subregion=0x56c48a888ba0) at ../system/memory.c:2637
#1  memory_region_add_subregion_common (mr=<optimized out>, offset=<optimized out>, subregion=0x56c48a888ba0) at ../system/memory.c:2661
#2  0x000056c48690fd8f in vfio_probe_nvidia_bar0_quirk (nr=0, vdev=0x56c48a7ca630) at ../hw/vfio/pci-quirks.c:966
#3  vfio_bar_quirk_setup (vdev=vdev@entry=0x56c48a7ca630, nr=nr@entry=0) at ../hw/vfio/pci-quirks.c:1259
#4  0x000056c486916fff in vfio_realize (pdev=<optimized out>, errp=<optimized out>) at ../hw/vfio/pci.c:3124
#5  0x000056c4866b247e in pci_qdev_realize (qdev=<optimized out>, errp=<optimized out>) at ../hw/pci/pci.c:2093
#6  0x000056c4869b033b in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7fff11083150) at ../hw/core/qdev.c:510
#7  0x000056c4869b4b6d in property_set_bool (obj=0x56c48a7ca630, v=<optimized out>, name=<optimized out>, opaque=0x56c489246710, errp=0x7fff11083150) at ../qom/object.c:2358
#8  0x000056c4869b80cb in object_property_set (obj=obj@entry=0x56c48a7ca630, name=name@entry=0x56c486ccc739 "realized", v=v@entry=0x56c48a7cc850, errp=errp@entry=0x7fff11083150) at ../qom/object.c:1472
#9  0x000056c4869bb9af in object_property_set_qobject (obj=obj@entry=0x56c48a7ca630, name=name@entry=0x56c486ccc739 "realized", value=value@entry=0x56c48a7ca320, errp=errp@entry=0x7fff11083150) at ../qom/qom-qobject.c:28
#10 0x000056c4869b8744 in object_property_set_bool (obj=obj@entry=0x56c48a7ca630, name=name@entry=0x56c486ccc739 "realized", value=value@entry=true, errp=errp@entry=0x7fff11083150) at ../qom/object.c:1541
#11 0x000056c4869b0e2c in qdev_realize (dev=dev@entry=0x56c48a7ca630, bus=bus@entry=0x56c48a204d60, errp=errp@entry=0x7fff11083150) at ../hw/core/qdev.c:292
#12 0x000056c4867853d3 in qdev_device_add_from_qdict (opts=opts@entry=0x56c48a0c9400, from_json=from_json@entry=false, errp=0x7fff11083150, errp@entry=0x56c487a422f8 <error_fatal>) at ../system/qdev-monitor.c:718
#13 0x000056c486785841 in qdev_device_add (opts=0x56c489241ab0, errp=errp@entry=0x56c487a422f8 <error_fatal>) at ../system/qdev-monitor.c:737
#14 0x000056c48678a7ff in device_init_func (opaque=<optimized out>, opts=<optimized out>, errp=0x56c487a422f8 <error_fatal>) at ../system/vl.c:1201
#15 0x000056c486b64a91 in qemu_opts_foreach (list=<optimized out>, func=func@entry=0x56c48678a7f0 <device_init_func>, opaque=opaque@entry=0x0, errp=errp@entry=0x56c487a422f8 <error_fatal>) at ../util/qemu-option.c:1135
#16 0x000056c48678d2ca in qemu_create_cli_devices () at ../system/vl.c:2644
#17 qmp_x_exit_preconfig (errp=0x56c487a422f8 <error_fatal>) at ../system/vl.c:2713
#18 0x000056c48679147c in qemu_init (argc=<optimized out>, argv=<optimized out>) at ../system/vl.c:3782
#19 0x000056c4865288d9 in main (argc=<optimized out>, argv=<optimized out>) at ../system/main.c:47
So the issue most likely occurs here, when initially configuring the device (and apparently applying some quirks - a wild guess would be that the ones used are not fully applicable to your model).

Is there maybe a line mentioning SIGSEGV when you load up the coredump?

What you could still try is the new 6.11 kernel: https://forum.proxmox.com/threads/o...e-8-available-on-test-no-subscription.156818/

Be careful not to install libpve-common-perl=8.2.6 if you are using the pvetest repository: https://forum.proxmox.com/threads/warning-updating-these-packages-broke-my-pci-passthrough.156848/
 
There's a SEGV mentioned on line 4 of the core dump output ("Signal: 11 (SEGV)"), and at the end: "Program terminated with signal SIGSEGV, Segmentation fault."
Code:
root@vault:~# coredumpctl -1 gdb
           PID: 7463 (gdb)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 11 (SEGV)
     Timestamp: Fri 2024-11-08 05:43:17 PST (48s ago)
  Command Line: gdb /usr/bin/qemu-system-x86_64 -c /var/tmp/coredump-mlfEDw
    Executable: /usr/bin/gdb
 Control Group: /user.slice/user-0.slice/session-7.scope
          Unit: session-7.scope
         Slice: user-0.slice
       Session: 7
     Owner UID: 0 (root)
       Boot ID: 3c9740035b444e90a829789dd749d63e
    Machine ID: ab5586ad017c462d89f8197f43c2b8e6
      Hostname: vault
       Storage: /var/lib/systemd/coredump/core.gdb.0.3c9740035b444e90a829789dd749d63e.7463.1731073397000000.zst (present)
  Size on Disk: 47.1M
       Message: Process 7463 (gdb) of user 0 dumped core.
                
                Stack trace of thread 7463:
                #0  0x0000772f556a9e3c __pthread_kill_implementation (libc.so.6 + 0x8ae3c)
                #1  0x0000772f5565afb2 __GI_raise (libc.so.6 + 0x3bfb2)
                #2  0x00006452e2d7c5ae n/a (gdb + 0x29f5ae)
                #3  0x00006452e2d7c777 n/a (gdb + 0x29f777)
                #4  0x0000772f5565b050 __restore_rt (libc.so.6 + 0x3c050)
                #5  0x0000772f5615c578 PyErr_SetInterruptEx (libpython3.11.so.1.0 + 0x35c578)
                #6  0x00006452e2d7c3f2 n/a (gdb + 0x29f3f2)
                #7  0x0000772f5565b050 __restore_rt (libc.so.6 + 0x3c050)
                #8  0x0000772f560d55a4 n/a (libpython3.11.so.1.0 + 0x2d55a4)
                #9  0x0000772f560d731c PyGC_Collect (libpython3.11.so.1.0 + 0x2d731c)
                #10 0x0000772f560ad747 Py_FinalizeEx (libpython3.11.so.1.0 + 0x2ad747)
                #11 0x00006452e2ee47d4 n/a (gdb + 0x4077d4)
                #12 0x00006452e3160b21 n/a (gdb + 0x683b21)
                #13 0x00006452e2fb5c0a n/a (gdb + 0x4d8c0a)
                #14 0x00006452e2c9c7d9 n/a (gdb + 0x1bf7d9)
                #15 0x00006452e31641d6 n/a (gdb + 0x6871d6)
                #16 0x00006452e3164cb3 n/a (gdb + 0x687cb3)
                #17 0x00006452e2e462fa n/a (gdb + 0x3692fa)
                #18 0x00006452e2e47f75 n/a (gdb + 0x36af75)
                #19 0x00006452e2bd6caa n/a (gdb + 0xf9caa)
                #20 0x0000772f5564624a __libc_start_call_main (libc.so.6 + 0x2724a)
                #21 0x0000772f55646305 __libc_start_main_impl (libc.so.6 + 0x27305)
                #22 0x00006452e2bdde31 n/a (gdb + 0x100e31)
                
                Stack trace of thread 7477:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7465:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7470:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7468:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7467:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7474:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7466:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7473:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7471:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7469:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7476:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7475:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7472:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7478:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7479:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                
                Stack trace of thread 7480:
                #0  0x0000772f556a4e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96)
                #1  0x0000772f556a7558 __pthread_cond_wait_common (libc.so.6 + 0x88558)
                #2  0x00006452e316dd13 n/a (gdb + 0x690d13)
                #3  0x0000772f558d44a3 n/a (libstdc++.so.6 + 0xd44a3)
                #4  0x0000772f556a8144 start_thread (libc.so.6 + 0x89144)
                #5  0x0000772f557287dc __clone3 (libc.so.6 + 0x1097dc)
                ELF object binary architecture: AMD x86-64

GNU gdb (Debian 13.1-3) 13.1
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/gdb...
(No debugging symbols found in /usr/bin/gdb)
[New LWP 7463]
[New LWP 7477]
[New LWP 7465]
[New LWP 7470]
[New LWP 7468]
[New LWP 7467]
[New LWP 7474]
[New LWP 7466]
[New LWP 7473]
[New LWP 7471]
[New LWP 7469]
[New LWP 7476]
[New LWP 7475]
[New LWP 7472]
[New LWP 7478]
[New LWP 7479]
[New LWP 7480]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `gdb /usr/bin/qemu-system-x86_64 -c /var/tmp/coredump-mlfEDw'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=11, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
44      ./nptl/pthread_kill.c: No such file or directory.
[Current thread is 1 (Thread 0x772f547e0180 (LWP 7463))]
(gdb)

No success on the 6.11 kernel.
 
Did passthrough of this card ever work in the past? If not, there's unfortunately not much we can do, and you'd have to hope this gets fixed in a future kernel/QEMU/firmware release.
 
Unfortunately I don't believe it has worked on this machine yet. Anyway, thanks for your time. I learned a good bit of stuff diving into this. I'll post if I find a solution.
 
