[SOLVED] Linux guest not detecting hypervisor when GPU is passed to it

anh0516

New Member
Jul 25, 2024
I run a Debian 12 VM with XFCE on my home server so that it can serve a dual purpose as a desktop. Obviously, that needs a GPU and USB controllers to be passed through to the VM.

Without the GPU passed through, the guest kernel detects the hypervisor and everything works as expected:
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000001] kvm-clock: using sched offset of 10747905976 cycles
[ 0.000002] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.022687] kvm-guest: KVM setup pv remote TLB flush
[ 0.022690] kvm-guest: setup PV sched yield
[ 0.022719] Booting paravirtualized kernel on KVM
[ 0.026355] kvm-guest: PV spinlocks enabled
[ 0.054739] kvm-guest: setup PV IPIs
[ 0.384071] clocksource: Switched to clocksource kvm-clock
[ 2.158906] systemd[1]: Detected virtualization kvm.

However, with the GPU passed through (it doesn't matter whether the virtual display is enabled or not), the guest kernel does not detect the hypervisor. Everything is perfectly functional, but there are performance implications.
[ 0.023278] Booting paravirtualized kernel on bare hardware
[ 0.098690] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x30e5ee828bd, max_idle_ns: 440795330237 ns
[ 0.655009] clocksource: Switched to clocksource tsc-early
[ 1.779190] tsc: Refined TSC clocksource calibration: 3392.210 MHz
[ 1.779211] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x30e58ff2122, max_idle_ns: 440795341059 ns
[ 1.780029] clocksource: Switched to clocksource tsc
[ 2.381796] systemd[1]: Detected virtualization qemu.
You can see the kernel itself just straight up doesn't see KVM. It is not enabling kvm-clock or other guest features. Whatever systemd is doing results in it detecting "qemu" instead of "kvm."
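
For anyone who wants to check the same thing, these are the commands I'm looking at inside the guest (nothing exotic, just the standard tools; the lscpu check is an extra one beyond what's in the logs above):

dmesg | grep -i -e "Hypervisor detected" -e kvm-clock
systemd-detect-virt          # prints "kvm" when the hypervisor is visible, "qemu" otherwise
lscpu | grep -i hypervisor   # the "Hypervisor vendor: KVM" line only shows up when it's detected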

Just for testing purposes, I also reproduced this with Arch Linux, so this is not Debian-specific.

Is this expected behavior? I couldn't find any other information online about it. Has anyone else experienced this?
 
I have several Linux Mint VMs (Ubuntu-based, but also the Debian-based LMDE) with USB and GPU passthrough, and they detect the hypervisor just fine.

Can you please show your VM configuration file (qm showconfig VM_NUMBER)? Did you enable Primary GPU? Please try the VM without it.

What kind of "performance implications" are you experiencing? Please do not give the VM all of the hardware resources, as that introduces stutter; Proxmox (and other VMs/CTs) also need some. To troubleshoot PCI(e) passthrough, please share information about the GPU and host machine.
 
Disabling x-vga did it. Not sure why that matters. The docs say "x-vga=on|off marks the PCI(e) device as the primary GPU of the VM. With this enabled the vga configuration option will be ignored."

Performance was fine already, but there are advantages to having the guest kernel know it's running under a hypervisor. That's why I said "implications."

Anyways, problem solved. Thanks.
 
Figured I'd post this anyways:
agent: 1
balloon: 2048
bios: ovmf
boot: order=sata0;virtio0
cores: 8
cpu: host
efidisk0: local-zfs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:01:00,pcie=1,romfile=amdgpu.rom   (removing x-vga=1 from here fixed the problem)
hostpci2: 0000:00:14,pcie=1
hostpci3: 0000:00:1d,pcie=1
hostpci4: 0000:00:1a,pcie=1
hotplug: 0
machine: q35
memory: 8192
meta: creation-qemu=9.0.0,ctime=1721794579
name: debian-desktop
net0: virtio=BC:24:11:63:C7:52,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
sata0: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=96c055d7-1e18-4a8c-a3c4-6d7ea072f6be
sockets: 1
tpmstate0: local-zfs:vm-100-disk-1,size=4M,version=v2.0
vga: none
virtio0: vms:vm-100-disk-0,cache=writethrough,discard=on,iothread=1,size=64G
vmgenid: 618d99ad-eb3f-4741-b12e-64f99934bffb

The GPU I am using (Radeon R5 220) needs a romfile; otherwise it doesn't work.
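
For anyone else hitting this, the fix was effectively just rewriting the hostpci0 entry without x-vga=1. From the host shell that would be something like this (same VM ID and device as in my config above; adjust for your setup):

qm set 100 -hostpci0 0000:01:00,pcie=1,romfile=amdgpu.rom   # no x-vga=1
# (the romfile path is relative to /usr/share/kvm/ on the host, if I remember right)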

Host is ASUSTeK COMPUTER INC. M51AC/M51AC, BIOS 0801 06/25/2013 from DMI. It's a Haswell desktop with an i7-4770 and 32GB RAM. IOMMU is in passthrough mode.
00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 04)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d4)
00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d4)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation H87 Express LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 04)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cedar [Radeon HD 5000/6000/7350/8350 Series]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cedar HDMI Audio [Radeon HD 5400/6300/7300 Series]
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
 
Disabling x-vga did it. Not sure why that matters. The docs say "x-vga=on|off marks the PCI(e) device as the primary GPU of the VM."
It's really an NVIDIA setting that also hides the hypervisor, because NVIDIA does not support passthrough (or did not; I'm not sure at the moment).
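As far as I understand, x-vga=1 implies the same hypervisor hiding that you can also turn on by hand, roughly:

cpu: host,hidden=1        # in the PVE VM config: hide the KVM signature from the guest
# which ends up as "-cpu host,kvm=off" on the QEMU command line

That would explain the "Booting paravirtualized kernel on bare hardware" line in your log.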
Performance was fine already, but there are advantages to having the guest kernel know it's running under a hypervisor. That's why I said "implications."
When using paravirtualization like VirtIO devices, it helps if the system knows about those devices, so it can be more efficient than with emulated devices.
Anyways, problem solved. Thanks.
Please edit your first post and select Solved from the pull-down menu to mark it as [SOLVED], so other people can find it easier in the future.
 
I added my hardware info in a reply before yours. I was having IOMMU-related issues (something platform-specific or some sort of firmware bug) passing through the Intel integrated graphics on the Optiplex 9020 I was using before I moved everything to the current Asus host (I moved it because I wanted the proper midtower instead of the SFF; it fits more drives and cards), so I bought an AMD R5 220, the cheapest UEFI-capable GPU I could get my hands on. All this system does is run a web browser and scan/print documents, so I don't need more than that. I could test whether the Intel graphics work properly on the current Asus system, but it works now, so it's not worth my time.

So there was a solved option. It was just hidden under "Edit thread."
 
Thanks for reporting back on the solution (and marking the thread as solved). I do see the advantage of using PVE for a specific use case with just one or two VMs, as management and backups are much easier. Do please note that replacing the hardware (when it eventually fails) might be more troublesome as PCI(e) passthrough is never guaranteed.
Maybe try to separate the scan/print functionality (which can probably run in a container) from the desktop environment (which needs a VM with passthrough)?
Some people on this forum managed to get a container working with the graphics card (and input devices), in which case you would not even need a VM with passthrough...
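The container route usually comes down to a couple of lines in the container config (/etc/pve/lxc/<ID>.conf), roughly like this (untested on my side; the exact device major numbers depend on what you want to pass in):

# bind the host's GPU render devices into the container
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
# and similar lines for /dev/input if you also want keyboard/mouse (major 13)
lxc.cgroup2.devices.allow: c 13:* rwm
lxc.mount.entry: /dev/input dev/input none bind,optional,create=dir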
 
I do see the advantage of using PVE for a specific use case with just one or two VMs, as management and backups are much easier.
I do run some other stuff in different VMs. I have a Minecraft server running on OpenBSD (running it in an LXC container is probably the smarter option, but I wanted to play with it, and firewalling it with pf was really easy) and Jellyfin running in Docker on an Ubuntu VM (yes, Docker can be made to work in containers, but I tried and it just wasn't going right). And I can easily spin up whatever else I want to play with or need to actually run.
Do please note that replacing the hardware (when it eventually fails) might be more troublesome as PCI(e) passthrough is never guaranteed.
You mean the PCI bus ID changing? I could see that being a pain if you have a lot of devices. Afaik pretty much all reasonably modern hardware has the IOMMU necessary for it.
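(To check what the groups actually look like on the host, I use the usual bash loop that floats around the wikis, more or less:)

# bash; list each IOMMU group and the devices in it (run on the PVE host)
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done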
Maybe try to separate the scan/print functionality (which can probably run in a container) from the desktop environment (which needs a VM with passthrough)?
It's not running a print/scan server for other devices. I meant it's used to print documents from Google Chrome and scan documents with simple-scan directly from the desktop GUI. But if I did want to run a standalone server I'd definitely do it in a container.
Some people on this forum managed to get a container working with the graphics card (and input devices), in which case you would not even need a VM with passthrough...
That would be interesting. I just went with the easiest approach that would work.
 
You mean the PCI bus ID changing? I could see that being a pain if you have a lot of devices. Afaik pretty much all reasonably modern hardware has the IOMMU necessary for it.
No, I mean that passthrough might fail completely when you use different hardware. It's really finicky. If you get it working today, it might fail as soon as you change any hardware or update any firmware. Virtualization really breaks down when passing through hardware.
That would be interesting. I just went with the easiest approach that would work.
With such limited hardware, it might be worth pursuing. As I said before, you're really not using the right hardware for what you want. And it might break at any time.
 
No, I mean that passthrough might fail completely when you use different hardware. It's really finicky. If you get it working today, it might fail as soon as you change any hardware or update any firmware. Virtualization really breaks down when passing through hardware.
I didn't know it was that finicky. As I said, I did have some IOMMU errors passing through the iGPU to the VM on an Optiplex 9020, but it still functioned despite the errors.
With such limited hardware, it might be worth pursuing. As I said before, you're really not using the right hardware for what you want. And it might break at any time.
This isn't critical infrastructure. It's a home server I put together out of otherwise unused hardware I had lying around (though I did buy 8GB DIMMs to max it out at 32GB, and two 2TB SSDs for redundant storage). When you say "right hardware," do you mean actual server hardware? I wasn't really interested in spending the money on something like that. Worse comes to worst, I can just image the Debian VM to a spare SSD and move it to a whole separate PC. That was actually kind of how it was beforehand: the Optiplex was running the Minecraft server bare metal along with a desktop, and the Asus was running Jellyfin on TrueNAS SCALE. I eliminated an entire PC of power draw by moving the Minecraft server, the desktop, and Jellyfin onto one machine, and chose to use Proxmox to isolate them. Realistically, I could have just run a bare-metal desktop and containerized the other stuff, but I wanted the flexibility of being able to easily scale it up and install different operating systems.
 
