Help with video passthrough

jawt

New Member
Jan 30, 2025
Hi all,

I configured a Windows 10 VM on my Proxmox server (v8.4.1) and successfully passed through the GPU (an NVIDIA Quadro P4000). What I'm trying to achieve is to connect a monitor (or three) to the Windows 10 VM via the passed-through Quadro P4000. I plugged my DisplayPort cable into each of the four DP ports, and none of them output a display signal to my monitor. I'm not sure if there's a setting or configuration that I'm missing.

Your help and experience are much appreciated.
 
If I understand correctly, everything has been successfully passed through, drivers installed, but no signal on the monitor?

RDP/VNC works?

Please post your VM configuration:
Code:
qm config <VMID>
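A quick sketch of what to run on the Proxmox host (the VMID 110 and PCI address 8d:00 are taken from later in this thread, so treat them as assumptions for your setup):

```shell
# Dump the VM configuration (replace 110 with your VMID)
qm config 110

# Check which kernel driver owns the GPU; for passthrough it
# should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 8d:00
```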
Thank you for replying, Mariol. Appreciate your help.

Yes, RDP and VNC work.

Here is the VM config (not sure whether smbios1 and vmgenid are unique to me or meant to stay private, so I removed them):
FYI: 0000:8d:00 is my Nvidia Quadro P4000
Code:
agent: 1
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
bios: ovmf
boot: order=ide0;scsi0
cores: 12
cpu: host,hidden=1,flags=+pcid
efidisk0: local-lvm01:vm-110-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:8d:00,pcie=1,x-vga=1
hotplug: usb
machine: pc-q35-9.0
memory: 65536
meta: creation-qemu=9.0.2,ctime=1741551573
name: Win10-PC3
net0: virtio=BC:24:11:B0:52:DA,bridge=vmbr0,firewall=1
numa: 1
ostype: win10
scsi0: local-lvm01:vm-110-disk-1,iothread=1,size=256G
scsihw: virtio-scsi-single
smbios1: uuid=removed
sockets: 1
tpmstate0: local-lvm01:vm-110-disk-2,size=4M,version=v2.0
vga: virtio
vmgenid: removed
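One thing worth noting in this config: `vga: virtio` together with `hostpci0: ...,x-vga=1` means the VM has both an emulated display and the passed-through GPU. A common experiment (an assumption worth testing, not a confirmed fix for this thread) is to disable the emulated display so the Quadro is the only GPU Windows sees:

```shell
# Shut the VM down first, then make the passed-through GPU the only display.
# Note: this removes the Proxmox console view, so keep RDP/VNC access handy.
qm set 110 --vga none

# Revert if needed:
qm set 110 --vga virtio
```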
 
Yes, RDP and VNC work.
Yes, that's good.

Your VM configuration looks fine. Many NVIDIA cards activate their outputs only if a monitor is detected during boot. Was the monitor connected before the VM was started? If not, shut down the VM, connect the monitor, turn it on, and start the VM. Do you see an image?

I'm guessing that the monitor works with other servers/PCs? And the graphics card connectors are working too.
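The boot-time detection test above can be done from the host like this (VMID 110 is assumed from the posted config):

```shell
# Power the VM off completely; a reboot inside Windows is not enough,
# since the GPU only re-probes its outputs on a cold VM start
qm shutdown 110

# ...now connect the monitor, switch it on, then:
qm start 110
```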
 
Yes, that's good.

Your VM configuration looks fine. Many NVIDIA cards activate their outputs only if a monitor is detected during boot. Was the monitor connected before the VM was started? If not, shut down the VM, connect the monitor, turn it on, and start the VM. Do you see an image?

I'm guessing that the monitor works with other servers/PCs? And the graphics card connectors are working too.
Yes sir(/madam?). I have 3 DP dummy plugs connected to the graphics card's DP ports at all times. The VM boots with the dummy plugs connected.

Yep, the monitor works just fine with other PCs. I use the monitor daily on my main PC.

Help
 
Yes sir(/madam?). I have 3 DP dummy plugs connected to the graphics card's DP ports at all times. The VM boots with the dummy plugs connected.

Yep, the monitor works just fine with other PCs. I use the monitor daily on my main PC.

Okay, and if you disconnect all the dummy plugs and only connect the monitor, and then start the VM, does it work?

If it doesn't work, please send me the log from the time when you start the VM. Adjust the date and time in the command to cover that window.

Code:
journalctl --since "2025-10-01 00:00" --until "2025-10-03 16:00" > "$(hostname)-journal.log"
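An alternative (a suggestion, not required) is to capture only the VM start live, so the log stays small and the time window doesn't need adjusting:

```shell
# Follow the current boot's journal while starting the VM in another
# terminal; stop with Ctrl-C once the VM is up
journalctl -b -f | tee vm110-start.log
```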
 
Okay, and if you disconnect all the dummy plugs and only connect the monitor, and then start the VM, does it work?

If it doesn't work, please send me the log from the time when you start the VM. Adjust the date and time in the command to cover that window.

Code:
journalctl --since "2025-10-01 00:00" --until "2025-10-03 16:00" > "$(hostname)-journal.log"
Just tried what you recommended, and I don't have any output on my physical monitor (I can see one of the three screens in the Proxmox VM console).

I pulled the log file and I'm trying to send it over via direct messaging but it's not letting me. Can you send a message over DM so I can reply with the log file contents?
 
Thank you for the tests. Could it be that the external monitor is disabled in the display settings on Windows?
 
Okay, and if you disconnect all the dummy plugs and only connect the monitor, and then start the VM, does it work?

If it doesn't work, please send me the log from the time when you start the VM. Adjust the date and time in the command to cover that window.

Code:
journalctl --since "2025-10-01 00:00" --until "2025-10-03 16:00" > "$(hostname)-journal.log"
I'm unable to send you the logfile via DM. Please find it below:
I appreciate your help!

Code:
May 12 11:32:29 pve pvedaemon[925256]: start VM 110: UPID:pve:000E1E48:011E7F0A:6822148D:qmstart:110:root@pam:
May 12 11:32:29 pve pvedaemon[922952]: <root@pam> starting task UPID:pve:000E1E48:011E7F0A:6822148D:qmstart:110:root@pam:
May 12 11:32:29 pve pvedaemon[925256]: error writing '1' to '/sys/bus/pci/devices/0000:8d:00.0/reset': Inappropriate ioctl for device
May 12 11:32:29 pve pvedaemon[925256]: failed to reset PCI device '0000:8d:00.0', but trying to continue as not all devices need a reset
May 12 11:32:29 pve systemd[1]: Started 110.scope.
May 12 11:32:30 pve kernel: tap110i0: entered promiscuous mode
May 12 11:32:30 pve kernel: vmbr0: port 7(fwpr110p0) entered blocking state
May 12 11:32:30 pve kernel: vmbr0: port 7(fwpr110p0) entered disabled state
May 12 11:32:30 pve kernel: fwpr110p0: entered allmulticast mode
May 12 11:32:30 pve kernel: fwpr110p0: entered promiscuous mode
May 12 11:32:30 pve kernel: vmbr0: port 7(fwpr110p0) entered blocking state
May 12 11:32:30 pve kernel: vmbr0: port 7(fwpr110p0) entered forwarding state
May 12 11:32:30 pve kernel: fwbr110i0: port 1(fwln110i0) entered blocking state
May 12 11:32:30 pve kernel: fwbr110i0: port 1(fwln110i0) entered disabled state
May 12 11:32:30 pve kernel: fwln110i0: entered allmulticast mode
May 12 11:32:30 pve kernel: fwln110i0: entered promiscuous mode
May 12 11:32:30 pve kernel: fwbr110i0: port 1(fwln110i0) entered blocking state
May 12 11:32:30 pve kernel: fwbr110i0: port 1(fwln110i0) entered forwarding state
May 12 11:32:30 pve kernel: fwbr110i0: port 2(tap110i0) entered blocking state
May 12 11:32:30 pve kernel: fwbr110i0: port 2(tap110i0) entered disabled state
May 12 11:32:30 pve kernel: tap110i0: entered allmulticast mode
May 12 11:32:30 pve kernel: fwbr110i0: port 2(tap110i0) entered blocking state
May 12 11:32:30 pve kernel: fwbr110i0: port 2(tap110i0) entered forwarding state
May 12 11:32:38 pve pvedaemon[87655]: VM 110 qmp command failed - VM 110 qmp command 'query-proxmox-support' failed - unable to connect to VM 110 qmp socket - timeo>
May 12 11:32:38 pve pvedaemon[76846]: VM 110 qmp command failed - VM 110 qmp command 'guest-ping' failed - got timeout
May 12 11:32:38 pve pvestatd[2579]: VM 110 qmp command failed - VM 110 qmp command 'query-proxmox-support' failed - unable to connect to VM 110 qmp socket - timeout>
May 12 11:32:39 pve pvestatd[2579]: status update time (8.258 seconds)
May 12 11:32:45 pve pvedaemon[925256]: VM 110 started with PID 925278.
May 12 11:32:45 pve pvedaemon[922952]: <root@pam> end task UPID:pve:000E1E48:011E7F0A:6822148D:qmstart:110:root@pam: OK
May 12 11:32:45 pve pvestatd[2579]: status update time (5.442 seconds)
May 12 11:32:57 pve pvedaemon[922952]: VM 110 qmp command failed - VM 110 qmp command 'guest-ping' failed - got timeout
May 12 11:33:16 pve pvedaemon[76846]: VM 110 qmp command failed - VM 110 qmp command 'guest-ping' failed - got timeout
May 12 11:33:55 pve pvedaemon[925970]: starting vnc proxy UPID:pve:000E2112:011EA0AD:682214E3:vncproxy:110:root@pam:
May 12 11:33:55 pve pvedaemon[922952]: <root@pam> starting task UPID:pve:000E2112:011EA0AD:682214E3:vncproxy:110:root@pam:
May 12 11:33:59 pve pvedaemon[922952]: <root@pam> end task UPID:pve:000E2112:011EA0AD:682214E3:vncproxy:110:root@pam: OK
May 12 11:34:32 pve pvedaemon[926167]: starting termproxy UPID:pve:000E21D7:011EAF02:68221508:vncshell::root@pam:
May 12 11:34:32 pve pvedaemon[922952]: <root@pam> starting task UPID:pve:000E21D7:011EAF02:68221508:vncshell::root@pam:
May 12 11:34:32 pve pvedaemon[76846]: <root@pam> successful auth for user 'root@pam'
May 12 11:34:32 pve login[926170]: pam_unix(login:session): session opened for user root(uid=0) by (uid=0)
May 12 11:34:32 pve systemd[1]: Created slice user-0.slice - User Slice of UID 0.
May 12 11:34:32 pve systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
May 12 11:34:32 pve systemd-logind[2238]: New session 294 of user root.
May 12 11:34:32 pve systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
May 12 11:34:32 pve systemd[1]: Starting user@0.service - User Manager for UID 0...
May 12 11:34:32 pve (systemd)[926176]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
May 12 11:34:32 pve systemd[926176]: Queued start job for default target default.target.
May 12 11:34:32 pve systemd[926176]: Created slice app.slice - User Application Slice.
May 12 11:34:32 pve systemd[926176]: Reached target paths.target - Paths.
May 12 11:34:32 pve systemd[926176]: Reached target timers.target - Timers.
May 12 11:34:32 pve systemd[926176]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
May 12 11:34:32 pve systemd[926176]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
May 12 11:34:32 pve systemd[926176]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
May 12 11:34:32 pve systemd[926176]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
May 12 11:34:32 pve systemd[926176]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
May 12 11:34:32 pve systemd[926176]: Reached target sockets.target - Sockets.
May 12 11:34:32 pve systemd[926176]: Reached target basic.target - Basic System.
May 12 11:34:32 pve systemd[926176]: Reached target default.target - Main User Target.
May 12 11:34:32 pve systemd[926176]: Startup finished in 158ms.
May 12 11:34:32 pve systemd[1]: Started user@0.service - User Manager for UID 0.
May 12 11:34:32 pve systemd[1]: Started session-294.scope - Session 294 of User root.
May 12 11:34:32 pve login[926192]: ROOT LOGIN  on '/dev/pts/0'
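The "Inappropriate ioctl for device" line at the start of the log means the kernel could not find a usable function-level reset for the card. One way to inspect this on the host (the `reset_method` sysfs file exists on recent kernels, so its presence here is an assumption):

```shell
# List the reset methods the kernel believes this device supports;
# empty output or a missing file means no function-level reset is available
cat /sys/bus/pci/devices/0000:8d:00.0/reset_method
```

This doesn't by itself explain the missing display output, but it confirms whether the reset warning is expected for this card.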