How to have a console for the host

pengu1n

Hello. I'm new to Proxmox. I'd like some pointers please.
As background: For home virtualisation I use ESXi. I want to move my VMs to a new Proxmox host and switch the ESXi host off.
I did my tests of Proxmox by installing it on a separate SSD on the new host. The main guest is an Ubuntu machine and I wanted single-GPU passthrough. I struggled before deciding to purchase a dedicated GPU for it, and then I struggled again with the widely documented combination of vfio config file + driver blacklist + GRUB settings. I started all of this on Proxmox version 7.1.
I then realised I had lost track of which combinations of those three configurations I had used, had run out of space on the LVM, and had hit other tribulations, so I started from scratch again. To my surprise, with a vanilla installation of Proxmox, creating the Ubuntu VM and passing through the GPU worked pretty much first time.
Proxmox developers, you make and support a wonderful product!

What is giving me trouble, and what I need to fix before moving on to create other VMs and deal with FreeBSD-to-Linux conversions, is this:
When the host boots, there are framebuffer messages that eventually freeze. The Ubuntu VM is set to autostart; I can switch the physical monitor input to the output of the GPU given to the VM and use the VM perfectly. When the user logs out or the screen saver kicks in, the monitor switches back to the frozen boot console characters. The VM uses the OVMF BIOS, the q35 machine type, and Display set to "Default".
So the question is: which system messages should I investigate to work out how to get the host console unfrozen and usable?

The host has two discrete GPUs: an NVIDIA G96 (exact model to be provided) that I want to use for the host console, and an AMD RX 6600 XT that is passed through to the Ubuntu VM.
The host CPU is an AMD Ryzen 5900X, running Proxmox 7.2-3 with kernel 5.15.35-1.

An additional, non-critical problem I'd like to solve is that I can't connect to the VM's virtual console. With the current Display type "Default", attempting to connect to the VM console via the Proxmox UI gives me a noVNC "Failed to connect to server", but a physical screen works perfectly.
Changing the Display type to "Standard VGA" lets me use noVNC, but then there is only a black screen on the physical monitor.
Changing the Display type to "Virtio-GPU" also lets me use noVNC, but there is a problem I fail to recall at present; I will update.
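For reference, I have been changing the Display type through the UI; the same can be done from the CLI with qm, should that be easier to test with (VM 100 is the Ubuntu VM):
Code:
# show the current display setting of VM 100 (no vga line means "Default")
qm config 100 | grep -i '^vga'
# switch to Standard VGA, or remove the override to go back to Default
qm set 100 --vga std
qm set 100 --delete vga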

Any pointers will be greatly appreciated.
 
In an attempt to make my question clearer, my setup is like this:
gpus_schema.png
When the host boots, the HDMI output freezes. The Ubuntu VM auto-starts and I can switch the monitor to the DisplayPort (DP) input and use the VM nicely. When the VM is inactive, the monitor switches back to the frozen Nvidia HDMI signal.
The main problem is that this will burn the characters into the monitor, because the panel never goes to sleep on the frozen signal. Right now I ask the user to switch the monitor off until they need it again, then switch it back on and select the DP input.
What do I need to do to get usable input/output on the host card, please?
 
Is this what you mean by "frozen boot characters"?

0888OS_01_14.png

Can you not just unplug the HDMI cable and access the host by SSH or the web GUI if needed?

alternatively - the discussion on this page suggests that

sudo nano /etc/default/grub
then setting
GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=60"
followed by
sudo update-grub
then rebooting the host should blank the screen after 60 seconds

haven't tried it though and it may be reliant on acpi settings in the bios etc
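if you do try it, a quick way to confirm the parameter took effect after the reboot (standard kernel interfaces, untested here):
Code:
# confirm the parameter made it onto the kernel command line
grep -o 'consoleblank=[0-9]*' /proc/cmdline
# current blanking timeout in seconds (0 means blanking is disabled)
cat /sys/module/kernel/parameters/consoleblank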
 
thanks for the suggestion, I'll read the page.
However, the frozen screen is not that; I'll get a picture up tomorrow when I can physically get to it. It is the message buffer contents up to a point just before the login prompt in your picture. And yes, I can access the host by SSH and the web GUI, which is how I've been living with it. Ideally I'd like to fix it so I can access that console directly.
 
I had to remove an unused NVMe drive and move the second one into the place of the first. That caused PCI device reordering, which prevented the host network from starting. It took me a while to realise that and to manually rename the network interface and fix the bridge configuration. I learned something new about Proxmox today.
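For anyone hitting the same thing, the fix was along these lines in /etc/network/interfaces; the interface name and addresses below are illustrative, yours will differ:
Code:
# /etc/network/interfaces (excerpt) - the NIC was renamed by the PCI reshuffle, e.g. enp4s0 -> enp5s0
iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp5s0
        bridge-stp off
        bridge-fd 0
followed by ifreload -a (or a reboot) to apply.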

Back to it then. I won't be able to use the suggestion to blank the console, as I want to make it usable; my earlier network problem shows it is well worth being able to work at the host console directly.
The screen stops at a point like this:
stuck-console.png

I've attached a redacted log with what seem to be the relevant sections. Could someone please look at it and tell me what I need to do to be able to use this host console.

Jun 09 13:53:45 pve kernel: amdgpu 0000:08:00.0: amdgpu: VRAM: 8176M 0x0000008000000000 - 0x00000081FEFFFFFF (8176M used)
Jun 09 13:53:45 pve kernel: amdgpu 0000:08:00.0: amdgpu: GART: 512M 0x0000000000000000 - 0x000000001FFFFFFF
Jun 09 13:53:45 pve kernel: amdgpu 0000:08:00.0: amdgpu: AGP: 267894784M 0x0000008400000000 - 0x0000FFFFFFFFFFFF
Jun 09 13:53:45 pve kernel: [drm] Detected VRAM RAM=8176M, BAR=8192M
Jun 09 13:53:45 pve kernel: [drm] RAM width 128bits GDDR6
Jun 09 13:53:45 pve kernel: [drm] amdgpu: 8176M of VRAM memory ready
Jun 09 13:53:45 pve kernel: [drm] amdgpu: 8176M of GTT memory ready.
Jun 09 13:53:45 pve kernel: [drm] GART: num cpu pages 131072, num gpu pages 131072
Jun 09 13:53:45 pve kernel: [drm] PCIE GART of 512M enabled (table at 0x0000008000300000).

Jun 09 13:53:45 pve kernel: nouveau 0000:03:00.0: DRM: allocated 1920x1200 fb: 0x50000, bo 00000000a53d776e
Jun 09 13:53:45 pve kernel: Console: switching to colour frame buffer device 240x75
Jun 09 13:53:45 pve kernel: nouveau 0000:03:00.0: [drm] fb0: nouveaudrmfb frame buffer device

Jun 09 13:53:48 pve kernel: amdgpu 0000:08:00.0: amdgpu: Will use PSP to load VCN firmware

Jun 09 13:53:48 pve kernel: [drm] reserve 0xa00000 from 0x81fe000000 for PSP TMR
Jun 09 13:53:48 pve kernel: [drm] Initialized nouveau 1.3.1 20120801 for 0000:03:00.0 on minor 0
Jun 09 13:53:48 pve kernel: amdgpu 0000:08:00.0: amdgpu: RAS: optional ras ta ucode is not available
Jun 09 13:53:48 pve kernel: amdgpu 0000:08:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Jun 09 13:53:48 pve kernel: amdgpu 0000:08:00.0: amdgpu: smu driver if version = 0x0000000f, smu fw if version = 0x00000013, smu fw version = 0x003b2800 (59.40.0)
Jun 09 13:53:48 pve kernel: amdgpu 0000:08:00.0: amdgpu: SMU driver if version not matched
Jun 09 13:53:48 pve kernel: amdgpu 0000:08:00.0: amdgpu: use vbios provided pptable
Jun 09 13:53:48 pve kernel: amdgpu 0000:08:00.0: amdgpu: SMU is initialized successfully!
Jun 09 13:53:48 pve kernel: [drm] Display Core initialized with v3.2.149!
Jun 09 13:53:48 pve kernel: [drm] DMUB hardware initialized: version=0x0202000C
Jun 09 13:53:48 pve kernel: [drm] REG_WAIT timeout 1us * 100000 tries - mpc2_assert_idle_mpcc line:479
Jun 09 13:53:48 pve kernel: snd_hda_intel 0000:08:00.1: bound 0000:08:00.0 (ops amdgpu_dm_audio_component_bind_ops [amdgpu])

Jun 09 13:53:49 pve kernel: kfd kfd: amdgpu: Allocated 3969056 bytes on gart
Jun 09 13:53:49 pve kernel: memmap_init_zone_device initialised 2097152 pages in 8ms
Jun 09 13:53:49 pve kernel: amdgpu: HMM registered 8176MB device memory
Jun 09 13:53:49 pve kernel: amdgpu: SRAT table not found
Jun 09 13:53:49 pve kernel: amdgpu: Virtual CRAT table created for GPU
Jun 09 13:53:49 pve kernel: amdgpu: Topology: Add dGPU node [0x73ff:0x1002]
Jun 09 13:53:49 pve kernel: kfd kfd: amdgpu: added device 1002:73ff
Jun 09 13:53:49 pve kernel: amdgpu 0000:08:00.0: amdgpu: SE 2, SH per SE 2, CU per SH 8, active_cu_number 32
Jun 09 13:53:49 pve kernel: [drm] fb mappable at 0x7C004CF000
Jun 09 13:53:49 pve kernel: [drm] vram apper at 0x7C00000000
Jun 09 13:53:49 pve kernel: [drm] size 14745600
Jun 09 13:53:49 pve kernel: [drm] fb depth is 24
Jun 09 13:53:49 pve kernel: [drm] pitch is 10240
Jun 09 13:53:49 pve kernel: fbcon: amdgpudrmfb (fb1) is primary device
Jun 09 13:53:49 pve kernel: fbcon: Remapping primary device, fb1, to tty 1-63
Jun 09 13:53:49 pve kernel: [drm] REG_WAIT timeout 1us * 100000 tries - mpc2_assert_idle_mpcc line:479
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: [drm] fb1: amdgpudrmfb frame buffer device
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring kiq_2.1.0 uses VM inv eng 11 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring sdma0 uses VM inv eng 12 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring sdma1 uses VM inv eng 13 on hub 0
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring vcn_dec_0 uses VM inv eng 0 on hub 1
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring vcn_enc_0.0 uses VM inv eng 1 on hub 1
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring vcn_enc_0.1 uses VM inv eng 4 on hub 1
Jun 09 13:53:50 pve kernel: amdgpu 0000:08:00.0: amdgpu: ring jpeg_dec uses VM inv eng 5 on hub 1
Jun 09 13:53:50 pve kernel: [drm] Initialized amdgpu 3.42.0 20150101 for 0000:08:00.0 on minor 1

Jun 09 13:53:51 pve systemd[1]: Started Getty on tty1.
Jun 09 13:53:51 pve systemd[1]: Reached target Login Prompts.

Jun 09 14:02:05 pve pvedaemon[1510]: <root@pam> starting task UPID:pve:00000AEF:0000FCD7:62A1EF4D:qmstart:100:root@pam:
Jun 09 14:02:05 pve pvedaemon[2799]: start VM 100: UPID:pve:00000AEF:0000FCD7:62A1EF4D:qmstart:100:root@pam:
Jun 09 14:02:05 pve kernel: VFIO - User Level meta-driver version: 0.3
Jun 09 14:02:05 pve kernel: amdgpu 0000:08:00.0: amdgpu: amdgpu: finishing device.

Jun 09 14:02:05 pve kernel: Console: switching to colour dummy device 80x25
Jun 09 14:02:09 pve kernel: amdgpu: cp queue pipe 4 queue 0 preemption failed
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: amdgpu: Fail to disable thermal alert!
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa9900 flags=0x0020]
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa9920 flags=0x0020]
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa9940 flags=0x0020]
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa9960 flags=0x0020]
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa9980 flags=0x0020]
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa99a0 flags=0x0020]
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa99c0 flags=0x0020]
Jun 09 14:02:09 pve kernel: amdgpu 0000:08:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0012 address=0xfffa99e0 flags=0x0020]
Jun 09 14:02:09 pve kernel: [drm] free PSP TMR buffer
Jun 09 14:02:09 pve kernel: [drm] amdgpu: ttm finalized
Jun 09 14:02:09 pve kernel: vfio-pci 0000:08:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
Jun 09 14:02:09 pve systemd[1]: Created slice qemu.slice.
Jun 09 14:02:09 pve systemd[1]: Started 100.scope.

Jun 09 14:02:11 pve kernel: vfio-pci 0000:08:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
Jun 09 14:02:11 pve kernel: vfio-pci 0000:08:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
Jun 09 14:02:11 pve kernel: vfio-pci 0000:08:00.0: vfio_ecap_init: hiding ecap 0x26@0x410
Jun 09 14:02:11 pve kernel: vfio-pci 0000:08:00.0: vfio_ecap_init: hiding ecap 0x27@0x440
Jun 09 14:02:12 pve pvedaemon[1510]: <root@pam> end task UPID:pve:00000AEF:0000FCD7:62A1EF4D:qmstart:100:root@pam: OK
Jun 09 14:02:23 pve kernel: usb 3-1: reset low-speed USB device number 2 using xhci_hcd
Jun 09 14:02:24 pve kernel: usb 1-10: reset full-speed USB device number 4 using xhci_hcd
Jun 09 14:02:25 pve pvedaemon[2990]: starting vnc proxy UPID:pve:00000BAE:000104C1:62A1EF61:vncproxy:100:root@pam:
Jun 09 14:02:25 pve pvedaemon[1510]: <root@pam> starting task UPID:pve:00000BAE:000104C1:62A1EF61:vncproxy:100:root@pam:
Jun 09 14:02:26 pve qm[2995]: VM 100 qmp command failed - VM 100 qmp command 'set_password' failed - Could not set password
Jun 09 14:02:26 pve pvedaemon[2990]: Failed to run vncproxy.
Jun 09 14:02:26 pve pvedaemon[1510]: <root@pam> end task UPID:pve:00000BAE:000104C1:62A1EF61:vncproxy:100:root@pam: Failed to run vncproxy.
Jun 09 14:06:28 pve systemd[1]: Starting Cleanup of Temporary Directories...
Jun 09 14:06:28 pve systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Jun 09 14:06:28 pve systemd[1]: Finished Cleanup of Temporary Directories.
Jun 09 14:13:01 pve pvedaemon[1511]: <root@pam> successful auth for user 'root@pam'
Jun 09 14:17:01 pve CRON[5170]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jun 09 14:17:01 pve CRON[5171]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 09 14:17:01 pve CRON[5170]: pam_unix(cron:session): session closed for user root
Jun 09 14:17:48 pve pveproxy[1521]: worker exit
Jun 09 14:17:48 pve pveproxy[1518]: worker 1521 finished
Jun 09 14:17:48 pve pveproxy[1518]: starting 1 worker(s)
Jun 09 14:17:48 pve pveproxy[1518]: worker 5289 started
Jun 09 14:18:59 pve pvedaemon[5466]: starting vnc proxy UPID:pve:0000155A:000288FB:62A1F343:vncproxy:100:root@pam:
Jun 09 14:18:59 pve pvedaemon[1511]: <root@pam> starting task UPID:pve:0000155A:000288FB:62A1F343:vncproxy:100:root@pam:
Jun 09 14:18:59 pve qm[5468]: VM 100 qmp command failed - VM 100 qmp command 'set_password' failed - Could not set password
Jun 09 14:18:59 pve pvedaemon[5466]: Failed to run vncproxy.
Jun 09 14:18:59 pve pvedaemon[1511]: <root@pam> end task UPID:pve:0000155A:000288FB:62A1F343:vncproxy:100:root@pam: Failed to run vncproxy.



The AMD GPU for the VM is PCI 0000:08:00.0; it seems to use the amdgpu driver before being handed to vfio-pci.
The Nvidia GPU for the host is PCI 0000:03:00.0; it seems to use nouveau.
I see the line "amdgpudrmfb (fb1) is primary device", but I don't know how to make the Nvidia card the primary device instead.
Any help will be appreciated.
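For reference, this is how I've been checking which framebuffer devices exist and which driver is bound to each GPU (plain kernel and lspci queries, nothing Proxmox-specific):
Code:
# list registered framebuffer devices
cat /proc/fb
# show the kernel driver currently bound to each GPU
lspci -nnk -s 03:00.0
lspci -nnk -s 08:00.0
# follow the fbcon/framebuffer messages from boot
dmesg | grep -Ei 'fbcon|frame buffer|drmfb'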
 
Additionally, here is the actual hardware. I have nothing in /etc/modules, only defaults in the GRUB configuration, and nothing blacklisted in /etc/modprobe.d/ by me, although there is the default blacklisting of nvidiafb. Could any of this be the problem, i.e. that the Nvidia card is not used by the host?
I'll appreciate inputs.

Code:
root@pve:~# lshw -c video
  *-display                 
       description: VGA compatible controller
       product: G96C [GeForce 9400 GT]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:03:00.0
       logical name: /dev/fb0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress vga_controller bus_master cap_list rom fb
       configuration: depth=32 driver=nouveau latency=0 mode=1920x1200 visual=truecolor xres=1920 yres=1200
       resources: iomemory:7e0-7df irq:89 memory:fa000000-faffffff memory:7e20000000-7e2fffffff memory:f8000000-f9ffffff ioport:f000(size=128) memory:fb000000-fb07ffff
  *-display
       description: VGA compatible controller
       product: Navi 23
       vendor: Advanced Micro Devices, Inc. [AMD/ATI]
       physical id: 0
       bus info: pci@0000:08:00.0
       version: c1
       width: 64 bits
       clock: 33MHz
       capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
       configuration: driver=vfio-pci latency=0
       resources: iomemory:7c0-7bf iomemory:7e0-7df irq:90 memory:7c00000000-7dffffffff memory:7e00000000-7e0fffffff ioport:e000(size=256) memory:fb900000-fb9fffff memory:fba00000-fba1ffff
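For completeness, these are roughly the commands I used to check the module configuration described above (standard locations on a default Proxmox install):
Code:
# modules force-loaded at boot (empty here)
cat /etc/modules
# every blacklist entry currently in effect
grep -rn blacklist /etc/modprobe.d/
# kernel command line actually in use
cat /proc/cmdline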
 
Does your mainboard BIOS have an option to specify the primary graphics adapter (via PCIe slot)?
 
Hi, it seems not. There is an option to select the mode it runs in, as in the number of PCIe lanes, but not which GPU is the primary.
By the way it is appearing as: DMI: ASUS System Product Name/ROG STRIX B550-F GAMING, BIOS 2423 08/10/2021.
Any more ideas, please send them my way :)
 
I think I've overcomplicated the question.
How do I make the Proxmox host use the Nvidia GPU rather than the AMD one that is to be passed through to a VM?
Anyone ?
 
I think I would blacklist the AMD drivers, not blacklist the Nvidia drivers, and then go into the BIOS/UEFI and select which GPU to use as the primary GPU. If there is no option to select the PCI slot for the primary GPU and the wrong GPU is used, I would swap the slots the GPUs are in.
 
As far as I know, only Gigabyte AM4 motherboards allow selecting the primary PCIe/discrete GPU. Please, please correct me if you know this to be wrong!

With the STRIX B550-F GAMING, the second x16 slot is only x4 and it's PCIe 3.0 instead of 4.0. Assuming you want the most performance for the passed-through GPU, swapping the cards is not ideal. Unfortunately, this motherboard is just not that great for PCIe passthrough. Swapping out the Nvidia card and the CPU for a CPU with integrated graphics would probably allow you to pass through the primary slot, but it would run at x8 and you would have fewer cores.

I had much better luck with not blacklisting amdgpu, and I would expect the AMD 6000 series to reset properly. It does require disabling Resizable BAR (Smart Access Memory), which is not supported by KVM/QEMU yet.
 
Perfect. Thank you both @leesteken and @Dunuin. That gives me some things to try.
Indeed there is no choice of GPU in the BIOS for this motherboard. Also, changing the CPU to one with integrated video isn't a viable option for me at the moment.
I had much better luck with not blacklisting amdgpu, and I would expect the AMD 6000 series to reset properly. It does require disabling Resizable BAR (Smart Access Memory), which is not supported by KVM/QEMU yet.
This is additionally interesting. I have Resizable BAR on at present; could that be what is causing me trouble here?
 
So far no success yet.
I blacklisted amdgpu with
Code:
cat /etc/modprobe.d/blacklist.conf
blacklist amdgpu
followed by update-initramfs -u and reboot.
Then, for my tracking, the results were:
- overwhelming number of errors of "Jun 16 10:09:00 pve kernel: vfio-pci 0000:08:00.0: BAR 0: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]"
- no VNC console for VM. This is a current issue.
- blank screen on host console
So instead of freezing, there is no output at all (blank screen).
I've just finished clearing space; the errors consumed all the disk so I couldn't stop the VM. I then went and disabled Resizable BAR in the motherboard BIOS, since according to leesteken it is of no use for passthrough anyway, and I guess it causes those errors. I didn't have them before blacklisting amdgpu, but there might be an explanation.

I have also verified that CSM is disabled.

I had to undo the change: I commented out the line in blacklist.conf, regenerated the grub config, and rebooted. Back to the start.

Should I try blacklisting radeon and nouveau instead or as well as amdgpu?
 
So far no success yet.
I blacklisted amdgpu with
Code:
cat /etc/modprobe.d/blacklist.conf
blacklist amdgpu
followed by update-initramfs -u and reboot.
I had more success (with an RX570) with not blacklisting amdgpu, because loading amdgpu for the GPU makes BOOTFB disappear from /proc/iomem; video=efifb:off video=simplefb:off no longer works for anyone.
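A quick way to see whether BOOTFB is still holding the GPU's memory region, if you want to check on your system:
Code:
# show any range still claimed by the boot framebuffer
grep -i bootfb /proc/iomem
# compare with the BAR addresses reported for the passed-through GPU
lspci -vvv -s 08:00.0 | grep -i region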
Then, for my tracking, the results were:
- overwhelming number of errors of "Jun 16 10:09:00 pve kernel: vfio-pci 0000:08:00.0: BAR 0: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]"
Probably because BOOTFB appears in /proc/iomem.
- no VNC console for VM. This is a current issue.
- blank screen on host console
So instead of freezing, there is no output at all (blank screen).
I've just finished clearing space; the errors consumed all the disk so I couldn't stop the VM. I then went and disabled Resizable BAR in the motherboard BIOS, since according to leesteken it is of no use for passthrough anyway, and I guess it causes those errors. I didn't have them before blacklisting amdgpu, but there might be an explanation.
The BAR can't reserve errors are because BOOTFB does not release the iomem. The work-around is loading amdgpu, or virtually disconnecting the GPU and rescanning the PCIe bus.
Resizable BAR is not supported for passthrough and needs to be turned off, but it is not the cause of the BAR can't reserve errors. It is, in my opinion, likely to be the cause of your previous error.
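The virtual disconnect and rescan looks roughly like this (a sketch; 0000:08:00.0 is the AMD GPU on your system, adjust if the address differs):
Code:
# virtually unplug the GPU so the kernel drops its claim on the BARs
echo 1 > /sys/bus/pci/devices/0000:08:00.0/remove
# re-enumerate the bus so the device comes back with clean BAR assignments
echo 1 > /sys/bus/pci/rescan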
I have also verified that CSM is disabled.
On some motherboards, turning it on (when it's off) will switch the boot GPU.
I had to undo the change: I commented out the line in blacklist.conf, regenerated the grub config, and rebooted. Back to the start.

Should I try blacklisting radeon and nouveau instead or as well as amdgpu?
There's no point in blacklisting unused (not applicable) drivers.
 
Thank you for all your answers, I appreciate it. I'm coming from Ubuntu, ESXi and FreeBSD experience, and although the Ubuntu experience should help with Proxmox, I'm new to passthrough.
It sounds like I'm stumped for now then. I'll keep reading https://docs.kernel.org/gpu/drm-internals.html for clues.
For now I'll need to keep reminding my wife to switch the monitor off, to remember not to press the PC power button thinking the machine is off, and to switch the monitor input to get to the VM's desktop when she needs it again.
I can't get to it remotely but I'll open a different question for that.
Much obliged leesteken.
 
