[SOLVED] "Guest has not initialized the display (yet)" when adding PCIe GPU

2Stoned

I am trying to pass an AMD RX 580 GPU through to a VM that boots fine without the GPU. With the GPU added as an "All Functions" PCIe device, all I get in the console is the message "Guest has not initialized the display (yet)".

CPU: AMD Ryzen 3950X; mainboard: Gigabyte Aorus Ultra. The virtualisation-related BIOS options (virtualisation, IOMMU, ARI, ACS) are all enabled.
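To confirm the IOMMU is actually active on the host, something along these lines should work (plain kernel-log and sysfs checks, nothing Proxmox-specific):
Code:
# IOMMU / AMD-Vi messages in the kernel log
dmesg | grep -i -e iommu -e amd-vi
# list the IOMMU groups; no output means the IOMMU is not active
find /sys/kernel/iommu_groups/ -type l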

My VM config is the following:
Code:
agent: 1
balloon: 8192
bios: ovmf
bootdisk: scsi0
cores: 12
cpu: host
efidisk0: VMstorage:vm-102-disk-1,size=1M
hostpci0: 0a:00,pcie=1
ide2: local:iso/ubuntu-20.04-desktop-amd64.iso,media=cdrom
machine: q35
memory: 60000
name: Ubuntu
net0: virtio=B2:5E:C6:CA:A0:A5,bridge=vmbr0,firewall=1
numa: 1
ostype: l26
scsi0: VMstorage:vm-102-disk-0,cache=writeback,discard=on,size=32G,ssd=1
scsihw: virtio-scsi-pci
shares: 5000
smbios1: uuid=bd85f4fe-370e-4299-9ce2-a39f2f997fe9
sockets: 1
vmgenid: fdfa3441-a8f0-48f5-8063-2d633183ffa4
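The 0a:00 in hostpci0 is the PCI address of the GPU; without a function suffix, all functions on that slot (the VGA part plus the HDMI audio part of the RX 580) are passed through. The address can be looked up on the host like this:
Code:
# find the GPU
lspci -nn | grep -i vga
# show all functions on that slot
lspci -nn -s 0a:00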

The syslog shows the following (while booting up only this VM):
Code:
Apr 24 16:18:00 pvehost systemd[1]: Starting Proxmox VE replication runner...
Apr 24 16:18:00 pvehost systemd[1]: pvesr.service: Succeeded.
Apr 24 16:18:00 pvehost systemd[1]: Started Proxmox VE replication runner.
Apr 24 16:18:27 pvehost pvedaemon[5286]: start VM 102: UPID:pvehost:000014A6:00012328:5EA2F533:qmstart:102:root@pam:
Apr 24 16:18:27 pvehost pvedaemon[1520]: <root@pam> starting task UPID:pvehost:000014A6:00012328:5EA2F533:qmstart:102:root@pam:
Apr 24 16:18:27 pvehost kernel: pcieport 0000:00:03.1: AER: Uncorrected (Non-Fatal) error received: 0000:00:03.1
Apr 24 16:18:27 pvehost kernel: pcieport 0000:00:03.1: AER: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
Apr 24 16:18:27 pvehost kernel: pcieport 0000:00:03.1: AER:   device [1022:1483] error status/mask=00100000/04400000
Apr 24 16:18:27 pvehost kernel: pcieport 0000:00:03.1: AER:    [20] UnsupReq               (First)
Apr 24 16:18:27 pvehost kernel: pcieport 0000:00:03.1: AER:   TLP Header: 34000000 0a000010 00000000 80008000
Apr 24 16:18:27 pvehost kernel: pcieport 0000:00:03.1: AER: Device recovery successful
Apr 24 16:18:27 pvehost systemd[1]: Started 102.scope.
Apr 24 16:18:27 pvehost systemd-udevd[5290]: Using default interface naming scheme 'v240'.
Apr 24 16:18:27 pvehost systemd-udevd[5290]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 24 16:18:27 pvehost systemd-udevd[5290]: Could not generate persistent MAC address for tap102i0: No such file or directory
Apr 24 16:18:27 pvehost kernel: device tap102i0 entered promiscuous mode
Apr 24 16:18:27 pvehost systemd-udevd[5290]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 24 16:18:27 pvehost systemd-udevd[5290]: Could not generate persistent MAC address for fwbr102i0: No such file or directory
Apr 24 16:18:27 pvehost systemd-udevd[5289]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 24 16:18:27 pvehost systemd-udevd[5296]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Apr 24 16:18:27 pvehost systemd-udevd[5289]: Using default interface naming scheme 'v240'.
Apr 24 16:18:27 pvehost systemd-udevd[5296]: Using default interface naming scheme 'v240'.
Apr 24 16:18:27 pvehost systemd-udevd[5296]: Could not generate persistent MAC address for fwln102i0: No such file or directory
Apr 24 16:18:27 pvehost systemd-udevd[5289]: Could not generate persistent MAC address for fwpr102p0: No such file or directory
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Apr 24 16:18:27 pvehost kernel: device fwln102i0 entered promiscuous mode
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 1(fwln102i0) entered forwarding state
Apr 24 16:18:27 pvehost kernel: vmbr0: port 2(fwpr102p0) entered blocking state
Apr 24 16:18:27 pvehost kernel: vmbr0: port 2(fwpr102p0) entered disabled state
Apr 24 16:18:27 pvehost kernel: device fwpr102p0 entered promiscuous mode
Apr 24 16:18:27 pvehost kernel: vmbr0: port 2(fwpr102p0) entered blocking state
Apr 24 16:18:27 pvehost kernel: vmbr0: port 2(fwpr102p0) entered forwarding state
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Apr 24 16:18:27 pvehost kernel: fwbr102i0: port 2(tap102i0) entered forwarding state
Apr 24 16:18:33 pvehost kernel: vfio-pci 0000:0a:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
Apr 24 16:18:33 pvehost kernel: vfio-pci 0000:0a:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
Apr 24 16:18:33 pvehost kernel: vfio-pci 0000:0a:00.0: vfio_ecap_init: hiding ecap 0x1e@0x370
Apr 24 16:18:34 pvehost kernel: pcieport 0000:00:03.1: AER: Uncorrected (Non-Fatal) error received: 0000:00:03.1
Apr 24 16:18:34 pvehost kernel: pcieport 0000:00:03.1: AER: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
Apr 24 16:18:34 pvehost kernel: pcieport 0000:00:03.1: AER:   device [1022:1483] error status/mask=00100000/04400000
Apr 24 16:18:34 pvehost kernel: pcieport 0000:00:03.1: AER:    [20] UnsupReq               (First)
Apr 24 16:18:34 pvehost kernel: pcieport 0000:00:03.1: AER:   TLP Header: 34000000 0a000010 00000000 80008000
Apr 24 16:18:34 pvehost kernel: pcieport 0000:00:03.1: AER: Device recovery successful
Apr 24 16:18:34 pvehost pvedaemon[1520]: <root@pam> end task UPID:pvehost:000014A6:00012328:5EA2F533:qmstart:102:root@pam: OK
Apr 24 16:19:00 pvehost systemd[1]: Starting Proxmox VE replication runner...
Apr 24 16:19:00 pvehost systemd[1]: pvesr.service: Succeeded.
Apr 24 16:19:00 pvehost systemd[1]: Started Proxmox VE replication runner.
Apr 24 16:20:00 pvehost systemd[1]: Starting Proxmox VE replication runner...
Apr 24 16:20:00 pvehost systemd[1]: pvesr.service: Succeeded.
Apr 24 16:20:00 pvehost systemd[1]: Started Proxmox VE replication runner.
Apr 24 16:21:00 pvehost systemd[1]: Starting Proxmox VE replication runner...
Apr 24 16:21:00 pvehost systemd[1]: pvesr.service: Succeeded.
Apr 24 16:21:00 pvehost systemd[1]: Started Proxmox VE replication runner.
Apr 24 16:21:05 pvehost systemd[1]: Starting Cleanup of Temporary Directories...
Apr 24 16:21:05 pvehost systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Apr 24 16:21:05 pvehost systemd[1]: Started Cleanup of Temporary Directories.
Apr 24 16:21:37 pvehost pvedaemon[1520]: <root@pam> successful auth for user 'root@pam'
Apr 24 16:22:00 pvehost systemd[1]: Starting Proxmox VE replication runner...
Apr 24 16:22:00 pvehost systemd[1]: pvesr.service: Succeeded.
Apr 24 16:22:00 pvehost systemd[1]: Started Proxmox VE replication runner.
Apr 24 16:23:00 pvehost systemd[1]: Starting Proxmox VE replication runner...
Apr 24 16:23:00 pvehost systemd[1]: pvesr.service: Succeeded.
Apr 24 16:23:00 pvehost systemd[1]: Started Proxmox VE replication runner.
Apr 24 16:23:02 pvehost pvedaemon[6094]: starting vnc proxy UPID:pvehost:000017CE:00018E7A:5EA2F646:vncproxy:102:root@pam:
Apr 24 16:23:02 pvehost pvedaemon[1520]: <root@pam> starting task UPID:pvehost:000017CE:00018E7A:5EA2F646:vncproxy:102:root@pam:
Apr 24 16:23:05 pvehost pvedaemon[1520]: <root@pam> end task UPID:pvehost:000017CE:00018E7A:5EA2F646:vncproxy:102:root@pam: OK

Proxmox is up to date and I have kernel 5.4 installed already:
Code:
proxmox-ve: 6.1-2 (running kernel: 5.4.27-1-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-5.4: 6.1-8
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.27-1-pve: 5.4.27-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

In GRUB I have the following line:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 biosdevname=0 quiet amd_iommu=on iommu=pt"
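For completeness: after editing that line, update-grub has to be run and the host rebooted; /proc/cmdline shows whether the parameters were actually picked up:
Code:
update-grub
# reboot the host, then check:
cat /proc/cmdline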

I get the same "Guest has not ..." message with a VM running Arch Linux and a very similar config.

Is there a solution for this issue, so that I can successfully pass the GPU through in this setup? What do I need to tweak?

Thank you very much in advance!
 
In the host BIOS I disabled ARI, ACS, and AER. Now I am able to pass the AMD GPU through to the Ubuntu 20.04 VM. At first I got no graphical output from the VM, but the GPU was listed by lspci; after a reboot of the host I now get graphical output from the passed-through GPU.
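Inside the guest, something like this shows whether the card is visible and which driver is bound to it:
Code:
lspci -nnk | grep -i -A 3 vga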
But now I can't log in to Ubuntu: I type in my password and hit Enter, the screen goes black, and I am back at the login screen.

edit: if I choose "Ubuntu on Wayland", login is successful.

edit2: Every time I shut down the VM, I have to restart the host as well to get passthrough working again.

edit3: qm set ID -args '-machine type=q35,kernel_irqchip=on' sometimes allows me to reboot the VM successfully from within the VM, but never from the host.
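For the host-side reset problem I have not found a real fix. One thing worth trying between VM runs is removing and rescanning the card on the host via sysfs (just a sketch, assuming the GPU is still at 0000:0a:00 with its audio function at .1; Polaris cards like the RX 580 are known to not always reset cleanly, so this may not help):
Code:
echo 1 > /sys/bus/pci/devices/0000:0a:00.1/remove
echo 1 > /sys/bus/pci/devices/0000:0a:00.0/remove
echo 1 > /sys/bus/pci/rescan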
 
Some issues remain (I cannot shut the VM down and start it again later from the Proxmox GUI, but a reboot from within the VM works); apart from that, passthrough of my RX 580 seems to work.

I used the following guides: [1], [2], [3], [4]

In addition to that I had to:
  • disable AER (and therefore also ACS) in the BIOS
  • set GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 biosdevname=0 quiet amd_iommu=on iommu=pt video=efifb:off"
    • it works without video=efifb:off as well; either way, Ubuntu takes about 10 seconds to display the icons in the "favourite applications" bar on the left side
  • blacklist the amdgpu driver (not radeon, as mentioned in the guides above); see the sketch after this list
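Blacklisting on Proxmox (Debian) is just a modprobe.d entry plus an initramfs rebuild, roughly like this:
Code:
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u
# then reboot the host

For reference, my final VM config: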
Code:
agent: 1
args: -machine type=q35
audio0: device=ich9-intel-hda,driver=spice
balloon: 8192
bios: ovmf
bootdisk: scsi0
cores: 12
cpu: host
efidisk0: VMstorage:vm-102-disk-1,size=1M
hostpci0: 0a:00,pcie=1
machine: q35
memory: 60000
name: Ubuntu
net0: virtio=B2:5E:C6:CA:A0:A5,bridge=vmbr0,firewall=1
numa: 1
ostype: l26
scsi0: VMstorage:vm-102-disk-0,cache=writeback,discard=on,size=32G,ssd=1
scsihw: virtio-scsi-pci
shares: 5000
smbios1: uuid=bd85f4fe-370e-4299-9ce2-a39f2f997fe9
sockets: 1
usb0: host=046d:c52b,usb3=1
usb1: host=1af3:0001,usb3=1
usb2: host=04d9:0169,usb3=1
usb3: host=046d:0819,usb3=1
usb4: host=048d:8297,usb3=1
vga: none
vmgenid: fdfa3441-a8f0-48f5-8063-2d633183ffa4
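The usb0-usb4 entries pass my keyboard, mouse, and other peripherals through to the VM; the vendor:device IDs come straight from lsusb on the host and are added with qm set, e.g.:
Code:
lsusb
# example output line: Bus 003 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
qm set 102 -usb0 host=046d:c52b,usb3=1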

Maybe this is of use to anyone with a similar issue in the future. :)
 