Ryzen 9700X iGPU not being used by Proxmox

arch14

New Member
Jan 18, 2025
Hello,

I installed Proxmox 8.3 on my newish motherboard and CPU:

Motherboard: Gigabyte B650 Aorus Elite AX Ice
CPU: AMD Ryzen 9700x
GPU: EVGA GTX1050 SC GAMING 2GB G5


Code:
root@proxmox:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.11.0-2-pve)
pve-manager: 8.3.2 (running version: 8.3.2/3e76eec21c4a14a7)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.11.0-2-pve-signed: 6.11.0-2
proxmox-kernel-6.11: 6.11.0-2
proxmox-kernel-6.8: 6.8.12-6
proxmox-kernel-6.8.12-6-pve-signed: 6.8.12-6
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-2
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.3
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1

Questions:

I want to pass audio, keyboard, mouse, and the GPU through to an Ubuntu VM and use it like a regular Linux machine, similar to what VirtualBox offers. If I do that, would the dedicated GPU be unavailable to any other VMs? I also want to use one LXC or VM for Plex.

Before I pass the GPU through, I noticed that the AMD iGPU is not being used by Proxmox; it's using the Nvidia GPU instead.

Code:
root@proxmox:~# dmesg | grep -i drm
[    4.297948] ACPI: bus type drm_connector registered
[    4.299519] [drm] Initialized simpledrm 1.0.0 for simple-framebuffer.0 on minor 0
[    4.299799] simple-framebuffer simple-framebuffer.0: [drm] fb0: simpledrmdrmfb frame buffer device
[    7.132328] systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
[    7.135511] systemd[1]: modprobe@drm.service: Deactivated successfully.
[    7.135541] systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
[    8.427191] [drm] amdgpu kernel modesetting enabled.
[    8.430572] [drm] initializing kernel modesetting (IP DISCOVERY 0x1002:0x13C0 0x1458:0xD000 0xC5).
[    8.430577] [drm] register mmio base: 0xF6700000
[    8.430578] [drm] register mmio size: 524288
[    8.432086] [drm] add ip block number 0 <nv_common>
[    8.432087] [drm] add ip block number 1 <gmc_v10_0>
[    8.432087] [drm] add ip block number 2 <navi10_ih>
[    8.432088] [drm] add ip block number 3 <psp>
[    8.432088] [drm] add ip block number 4 <smu>
[    8.432089] [drm] add ip block number 5 <dm>
[    8.432090] [drm] add ip block number 6 <gfx_v10_0>
[    8.432090] [drm] add ip block number 7 <sdma_v5_2>
[    8.432091] [drm] add ip block number 8 <vcn_v3_0>
[    8.432092] [drm] add ip block number 9 <jpeg_v3_0>
[    8.436988] [drm] vm size is 262144 GB, 4 levels, block size is 9-bit, fragment size is 9-bit
[    8.437000] [drm] Detected VRAM RAM=512M, BAR=512M
[    8.437000] [drm] RAM width 128bits DDR5
[    8.437044] [drm] amdgpu: 512M of VRAM memory ready
[    8.437045] [drm] amdgpu: 15599M of GTT memory ready.
[    8.437056] [drm] GART: num cpu pages 262144, num gpu pages 262144
[    8.437154] [drm] PCIE GART of 1024M enabled (table at 0x000000F41FC00000).
[    8.437387] [drm] Loading DMUB firmware via PSP: version=0x05001C00
[    8.437627] [drm] use_doorbell being set to: [true]
[    8.437636] [drm] Found VCN firmware Version ENC: 1.33 DEC: 4 VEP: 0 Revision: 3
[    8.524621] [drm] Seamless boot condition check passed
[    8.525465] [drm] Display Core v3.2.291 initialized on DCN 3.1.5
[    8.525467] [drm] DP-HDMI FRL PCON supported
[    8.526196] [drm] DMUB hardware initialized: version=0x05001C00
[    8.527420] [drm] kiq ring mec 2 pipe 1 q 0
[    8.533183] [drm] Initialized amdgpu 3.58.0 for 0000:11:00.0 on minor 1
[    8.535066] amdgpu 0000:11:00.0: [drm] Cannot find any crtc or sizes
[    8.535081] [drm] DSC precompute is not needed.
[    8.571249] nouveau 0000:01:00.0: DRM: VRAM: 2048 MiB
[    8.571251] nouveau 0000:01:00.0: DRM: GART: 536870912 MiB
[    8.571252] nouveau 0000:01:00.0: DRM: BIT table 'A' not found
[    8.571253] nouveau 0000:01:00.0: DRM: BIT table 'L' not found
[    8.571254] nouveau 0000:01:00.0: DRM: TMDS table version 2.0
[    8.572413] nouveau 0000:01:00.0: DRM: MM: using COPY for buffer copies
[    8.574321] [drm] Initialized nouveau 1.4.0 for 0000:01:00.0 on minor 0
[    8.681163] fbcon: nouveaudrmfb (fb0) is primary device
[    8.828048] nouveau 0000:01:00.0: [drm] fb0: nouveaudrmfb frame buffer device
 
Hello arch14! As you probably know, the Proxmox VE documentation for PCIe passthrough says:
But, if you pass through a device to a virtual machine, you cannot use that device anymore on the host or in any other VM.
This is otherwise possible using Intel’s GVT-g or NVIDIA's vGPU, but this requires explicit support from the GPU manufacturer.

Another possibility would be to use VirGL for host offloading, meaning that GPU processing is done on the host, enabling multiple VMs to use hardware acceleration. The disadvantage of this approach is that it requires a special driver in the VMs that enables this functionality, and these drivers are not available for all platforms.
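If you want to try VirGL, it can be enabled per VM from the host shell. A minimal sketch, assuming VM ID 100 (the ID is an example; the host needs the libgl1 and libegl1 packages, and the guest needs a virtio-gpu-capable kernel and Mesa, so this generally works for Linux guests only):

```shell
# Install the host-side GL libraries that VirGL rendering needs
# (Debian package names, as on a stock Proxmox VE install)
apt install -y libgl1 libegl1

# Switch VM 100's display type to VirGL
# (shows up as "VirGL GPU" in the web UI's Display dropdown)
qm set 100 --vga virtio-gl
```

Note that this gives the VM accelerated rendering via the host GPU rather than exclusive ownership of it, so the host and other VMs keep using the card.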

The only other possibility would be to use multiple GPUs and pass one through to each VM. If you need a GPU but not a powerful one, you can use the integrated GPU (if the machine has one) and/or buy some cheap low-power dedicated GPUs for this purpose. Of course, this also depends on the number of PCIe slots on your motherboard, so installing multiple GPUs might only be possible on some more expensive motherboards.
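For completeness, passing a whole GPU through is done with the hostpci option. A rough sketch, assuming VM ID 100, a q35 machine type, IOMMU already enabled, and the GTX 1050 at 0000:01:00.0 as in the dmesg output above:

```shell
# Identify the GPUs and which kernel driver currently claims each one
lspci -nnk | grep -A 3 -Ei 'vga|3d|display'

# Hand the whole card at 0000:01:00.0 to VM 100 as a PCIe device;
# x-vga=1 marks it as the VM's primary GPU
qm set 100 --hostpci0 0000:01:00.0,pcie=1,x-vga=1
```

Once the VM starts with this config, vfio-pci takes the card away from the host, which is exactly why the host then needs the iGPU (or another card) for its own console.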
 
I was able to pass my Nvidia GTX 1050 through as a raw device to the Ubuntu VM. However, it seems the Proxmox host is also using it for the console display; my monitor is plugged into the Nvidia card. How is this possible? Is there a way to check which GPU the Proxmox host is using?
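For reference, these are the checks I've been poking at to see which GPU the host console is on (a sketch; fb0 is the active framebuffer device, and the "primary device" line is what fbcon logged at boot in my dmesg output above):

```shell
# Which DRM/framebuffer driver owns the console framebuffer
cat /sys/class/graphics/fb0/name

# Which kernel driver is currently bound to each GPU
lspci -k | grep -A 3 -Ei 'vga|3d|display'

# fbcon logs which framebuffer it picked as the primary console
dmesg | grep -i 'primary device'
```

In my case the earlier dmesg shows "fbcon: nouveaudrmfb (fb0) is primary device", i.e. the host console landed on the Nvidia card rather than the iGPU.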