Nvidia RTX Pro 6000 MaxQ - No Video Output on client VM (Windows/Ubuntu) - No Errors - GPU gets detected in VM

hawxxer

Member
Jul 19, 2023
Hi,
coming from this thread I opened a separate thread, as @dcsapak suggested.
I have issues with the video output when passing through the latest Blackwell generation RTX Pro 6000 Max-Q. The GPU passthrough itself works fine; the GPU is detected in both Windows and Ubuntu Server. In Windows 11 there is no Code 43, and I use the latest 580 driver from the Nvidia website. In the Nvidia Manager on the Windows VM the GPU is also available and shows the name of my display, but there is no video output; the screen is just black (the backlight is on, though, so there is a signal).
The same goes for Ubuntu Server 24.04. The GPU is recognized with the latest Nvidia driver from their website (I downloaded and ran the abc.run file from the Nvidia site and selected the MIT/open version of the driver; the proprietary one does not work for that GPU). Here are some outputs:

System is Proxmox 9.0.6 with zfs install:

(screenshot attached: 1756983259768.png)

Code:
root@titan:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on initcall_blacklist=sysfb_init vfio_iommu_type1.allow_unsafe_interrupts=1 vfio-pci.ids=10de:2bb4,10de:22e8
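For completeness: on a ZFS install the parameters in /etc/kernel/cmdline only take effect after refreshing the boot entries and rebooting, roughly like this:

Code:
# re-sync the boot entries so the new kernel cmdline is actually used, then reboot
proxmox-boot-tool refresh
reboot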

Code:
root@titan:~# cat /etc/modules
# /etc/modules is obsolete and has been replaced by /etc/modules-load.d/.
# Please see modules-load.d(5) and modprobe.d(5) for details.
#
# Updating this file still works, but it is undocumented and unsupported.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
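
These module entries only end up in the early boot environment after rebuilding the initramfs, e.g.:

Code:
# rebuild the initramfs for all installed kernels so the vfio modules load early
update-initramfs -u -k all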

Code:
root@titan:~# ls /etc/modprobe.d/
blacklist.conf  intel-microcode-blacklist.conf  pve-blacklist.conf  vfio.conf  zfs.conf
root@titan:~# cat /etc/modprobe.d/*
blacklist radeon
blacklist nouveau
blacklist nvidia
blacklist snd_hda_intel
blacklist amd76x_edac
blacklist vga16fb
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
# The microcode module attempts to apply a microcode update when
# it autoloads.  This is not always safe, so we block it by default.
blacklist microcode
# This file contains a list of modules which are not supported by Proxmox VE

# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
options vfio_iommu_type1 allow_unsafe_interrupts=1
options kvm ignore_msrs=1 report_ignored_msrs=0
options vfio-pci ids=10de:2bb4,10de:22e8 disable_vga=1 disable_idle_d3=1
options zfs zfs_arc_max=13477347328
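
To double-check that the card is really bound to vfio-pci on the host (and not grabbed by nouveau or nvidia), something like this should list vfio-pci as the kernel driver in use for both functions:

Code:
# show driver binding for the GPU (01:00.0) and its audio function (01:00.1)
lspci -nnk -s 01:00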


Code:
root@titan:~# cat /etc/pve/qemu-server/1000.conf #WINDOWS
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;usb0
cores: 16
cpu: host
efidisk0: local-zfs:vm-1000-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:01:00,pcie=1
machine: pc-q35-10.0
memory: 65536
meta: creation-qemu=10.0.2,ctime=1756918785
name: WIN-TEMPLATE
net0: virtio=xx:xx:11:xx:21:F9,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
parent: IDLE
scsi0: local-zfs:vm-1000-disk-1,cache=writeback,discard=on,iothread=1,size=64G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=ab850xxb-1b2d-4xx65-axx7-316f27xxxxxx
sockets: 1
tpmstate0: local-zfs:vm-1000-disk-2,size=4M,version=v2.0
usb0: host=8564:1000,usb3=1
vga: none
vmgenid: efce54da-2664-xx8e-xx4b-afxxxxxxxxx
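
For reference, the IOMMU grouping on the host can be checked with a small loop like this (the GPU and its audio function should ideally be alone in their group):

Code:
# list every IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done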


 
can you post the output of
Code:
lspci
dmesg
from the host?
and
Code:
nvidia-smi -q
from the guest?

since you can install the driver and nvidia-smi (in the guest) shows the card, it's a different issue than the OP, so it might be better to open a new thread

I attached the dmesg and nvidia-smi -q output as files. Also, here is the config for that specific Ubuntu 24.04 client VM:

Code:
root@titan:~# cat /etc/pve/qemu-server/1102.conf
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;net0
cores: 8
cpu: host
efidisk0: local-zfs:vm-1102-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:01:00,pcie=1,x-vga=1
machine: q35
memory: 102400
meta: creation-qemu=10.0.2,ctime=1756919760
name: UBUSERVER24-TEMPLATE
net0: virtio=BC:24:xx:xx:18:xx,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: INIT
scsi0: local-zfs:vm-1102-disk-1,cache=writeback,discard=on,iothread=1,size=64G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=6xxxxf3f-axx7-465c-xx2b-d4e96xxxxdxb
sockets: 1
vga: none
vmgenid: 4axxxx3b-2f56-4xx9-a3c4-45xxxxcaxx2f
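
One thing I still want to rule out in the Ubuntu guest is whether KMS is enabled for the Nvidia driver, since without nvidia-drm modeset there is usually no console output on the physical display. This is just an idea I have not verified yet; the .conf filename below is arbitrary:

Code:
# inside the Ubuntu guest: check whether KMS is enabled for the Nvidia driver
cat /sys/module/nvidia_drm/parameters/modeset
# if it prints N, enable it, rebuild the initramfs and reboot the guest
echo "options nvidia-drm modeset=1" | sudo tee /etc/modprobe.d/nvidia-drm.conf
sudo update-initramfs -u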
 

Hm, I checked that tool, but I am not quite sure about it. I guess this tool is meant for the server cards, to switch between compute and display mode, not for the workstation cards. Running the tool shows a warning that a vGPU license needs to be in place, otherwise this could permanently damage the card:
Using the NVIDIA Display Mode Selector Tool on systems that have not passed vGPU software certification can cause the GPU PCIe board and system to be permanently unusable.

As the workstation variants of the RTX Pro 6000 do not support vGPU, I guess that tool is not applicable, but maybe someone else knows more?
Nevertheless, I checked with sudo ./displaymodeselector --listgpumodes and only got Display as a possible mode.
Also, display output works fine as long as the GPU is not passed through; the screen only goes black once it is passed through. I guess I will also try to extract the vBIOS and add it to the PCIe passthrough.
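If anyone wants to try the same, my plan is roughly the following (untested so far; the device address matches my setup, the ROM can usually only be dumped while no VM is using the card, and the filename is arbitrary):

Code:
# on the host: dump the GPU's vBIOS via sysfs
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom           # enable reading the ROM BAR
cat rom > /usr/share/kvm/rtx6000-vbios.rom
echo 0 > rom           # disable it again

and then reference it in the VM config, e.g. hostpci0: 0000:01:00,pcie=1,x-vga=1,romfile=rtx6000-vbios.rom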