Comet Lake NUC (NUC10i5FNH / NUC10i7FNH) gvt-g support?

n1nj4888
Hi Guys,

I currently have a Coffee Lake NUC (NUC8i5BEH) where I use GVT-g for virtual GPUs for some Ubuntu VMs (mainly for Intel Quick Sync), and I wondered whether anyone had tried this on the Comet Lake NUCs (either the NUC10i5FNH or the NUC10i7FNH), as I'm thinking about buying one of those...

According to the GVT-g GitHub site, Comet Lake is supported, but I wondered whether anyone had tried it and could confirm from experience?
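(For anyone who tries it: on my NUC8 I confirm GVT-g is active by checking which mdev types the kernel exposes, and I assume the same check would work on a Comet Lake box if the iGPU sits at 00:02.0 there too.)

Code:
# a working GVT-g host lists the available vGPU slice types here
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types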

Thanks!
 
Hi, I don't have a NUC, but a "server" with the Intel Core i3-10100 running Proxmox VE 6.2-11.

Quick Sync on the UHD 630 seems to work (Plex transcodes run in hardware in an Ubuntu 20.04 LTS LXC), but the output of vainfo and neofetch (GPU: Intel Device 9bc8) looks like Comet Lake isn't (fully?) supported yet.

I think the reason is the kernel, which is 5.4.60-1-pve at the time of this post, but I could be wrong (maybe my drivers are messed up).
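One thing worth ruling out first: the generic "Intel Device 9bc8" name may just mean the local PCI ID database predates Comet Lake. Updating it should be harmless (it needs internet access on the host):

Code:
# refresh the PCI ID database so lspci can name the Comet Lake iGPU
update-pciids
lspci -nn -s 00:02.0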
 
Hi Guys,

I currently have a Coffee Lake NUC (NUC8i5BEH) where I use GVT-g for virtual GPUs for some Ubuntu VMs (mainly for Intel Quick Sync), and I wondered whether anyone had tried this on the Comet Lake NUCs (either the NUC10i5FNH or the NUC10i7FNH), as I'm thinking about buying one of those...

According to the GVT-g GitHub site, Comet Lake is supported, but I wondered whether anyone had tried it and could confirm from experience?

Thanks!
Can you tell me exactly what procedure you followed in order to set up GVT-g mediated passthrough? I have the same hardware and am trying to do the same.
Thanks
 
Are you trying to pass through to an LXC or a VM?
To a VM with Windows. Thanks for your reply!
Actually, I was about to give up after several hours, but on the last try it worked! I'm speaking about the NUC8i3.

I never succeeded in full passthrough of the integrated GPU; I read a lot of posts, but found no working solution.
I will test the performance of mediated passthrough (GVT-g) some more. Maybe I'm doing something wrong. So far:

Pros
- The GPU is still available to the host and other VMs
- The VM recognizes the Intel GPU and installed the drivers; hardware acceleration works (I believe), and some graphical software (Blender, Kodi...) now works
- Audio and video are now always in sync

Cons
- Full-screen multimedia playback (2560x1080) is still not completely fluid when connecting to the VM over RDP (it seems to perform better than SPICE, and according to the virtual network card, the bitrate is around 100 Mb/s!).
 
Cons
- Full-screen multimedia playback (2560x1080) is still not completely fluid when connecting to the VM over RDP (it seems to perform better than SPICE, and according to the virtual network card, the bitrate is around 100 Mb/s!).
Do you use the resolution of 2560x1080 for the RDP session as well? 100 Mbit/s sounds way too high, even at this resolution.
 
Do you use the resolution of 2560x1080 for the RDP session as well? 100 Mbit/s sounds way too high, even at this resolution.
Yes.
Video playback, even when not full screen, can raise the upstream of the Red Hat VirtIO Ethernet adapter to 200 Mbps.
 
Hi all, I also followed the descriptions of the official guide in "General Requirements" and "Mediated Devices (vGPU, GVT-g)" like @n1nj4888.
Now I can choose the mediated devices in the VM's hardware properties, but it just doesn't want to work... The VM starts fine, but the screen still shows the host's output. Can someone tell me how to set up the VM so that it will work? I'm using an Intel i3-10100, and I'm trying to pass through the integrated GPU to an Ubuntu Desktop 20.04 LTS VM using Proxmox 6.4-6.
 
Hi all, I also followed the descriptions of the official guide in "General Requirements" and "Mediated Devices (vGPU, GVT-g)" like @n1nj4888.
Now I can choose the mediated devices in the VM's hardware properties, but it just doesn't want to work... The VM starts fine, but the screen still shows the host's output. Can someone tell me how to set up the VM so that it will work? I'm using an Intel i3-10100, and I'm trying to pass through the integrated GPU to an Ubuntu Desktop 20.04 LTS VM using Proxmox 6.4-6.
I think you need to blacklist the Intel driver for the Proxmox VE host, which also means you need another GPU for the host, since the VM uses it already...

I don't need the VM to have a real display connected to it; I only need the Quick Sync hardware of the Intel UHD 610 for my Plex server.
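(For that use case, a quick way to confirm Quick Sync is reachable from inside the guest, assuming the vainfo tool from the distro's VA-API packages is installed:)

Code:
# inside the guest: list the VA-API driver and the supported codec profiles
# working H.264/HEVC entrypoints indicate usable Quick Sync
vainfo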
 
I think you need to blacklist the Intel driver for the Proxmox VE host, which also means you need another GPU for the host, since the VM uses it already...

I don't need the VM to have a real display connected to it; I only need the Quick Sync hardware of the Intel UHD 610 for my Plex server.
Thank you for the quick response! Unfortunately, blacklisting did not lead to success. I've added the following to the blacklist.conf:
Code:
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
Is that right? Any other suggestions?
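One thing I wasn't sure about is whether the blacklist takes effect without rebuilding the initramfs, so I ran the standard Debian refresh and rebooted, just in case:

Code:
# rebuild the initramfs so the new blacklist entries apply at boot
update-initramfs -u -k all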
Here are the hardware settings of my Ubuntu Desktop VM:
(screenshot: 007_Proxmox_VM103_Hardware_01.png)
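In case the screenshot doesn't load, the mediated-device line in the VM config looks like this in text form (the VM ID and the exact mdev type name are from my setup and may differ elsewhere):

Code:
# /etc/pve/qemu-server/103.conf (excerpt)
hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4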
 
For posterity, I found this reddit thread:

https://www.reddit.com/r/homelab/comments/jyudnn/enable_mediated_intel_igpu_gvtg_for_vms_in/

It gave me the GRUB changes and modules needed to get this working. I was trying to use xorgxrdp-glamor in Manjaro and could not get hardware acceleration working. I also had to change the Proxmox display to None or Serial console, as the default VGA was conflicting somehow.

I now have both of my Comet Lake NUCs doing hardware-accelerated GVT-g to Linux and Windows VMs.
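From memory (so treat this as a sketch rather than a verified recipe), the changes from that thread boiled down to:

Code:
# /etc/default/grub -- enable the IOMMU and GVT-g on the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"

# /etc/modules -- load the GVT-g and VFIO modules at boot
kvmgt
vfio
vfio_iommu_type1
vfio_pci

# apply and reboot
update-grub
update-initramfs -u -k all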
 
For posterity, I found this reddit thread:

https://www.reddit.com/r/homelab/comments/jyudnn/enable_mediated_intel_igpu_gvtg_for_vms_in/

It gave me the GRUB changes and modules needed to get this working. I was trying to use xorgxrdp-glamor in Manjaro and could not get hardware acceleration working. I also had to change the Proxmox display to None or Serial console, as the default VGA was conflicting somehow.

I now have both of my Comet Lake NUCs doing hardware-accelerated GVT-g to Linux and Windows VMs.
I have multiple NUC10i7s and am currently unable to get this working; do you mind posting your GRUB file, modules, blacklist, etc.? Any help you could give would be much appreciated, since you were able to get it working properly. I have attempted to follow the instructions in the link you included; I am running Proxmox 7.2 and have been unsuccessful.
 
I have multiple NUC10i7s and am currently unable to get this working; do you mind posting your GRUB file, modules, blacklist, etc.? Any help you could give would be much appreciated, since you were able to get it working properly. I have attempted to follow the instructions in the link you included; I am running Proxmox 7.2 and have been unsuccessful.
Sure.

Here's pastebin of modules/grub and some kernel/pci outputs: https://pastebin.com/khjUtyKE

Here's an image of the options when adding the PCI device: https://pasteboard.co/sPUanAhrR4kP.png <--- this link didn't get all the options, use https://pasteboard.co/cZTTGY9EywNL.png
 
Sure.

Here's pastebin of modules/grub and some kernel/pci outputs: https://pastebin.com/khjUtyKE

Here's an image of the options when adding the PCI device: https://pasteboard.co/sPUanAhrR4kP.png <--- this link didn't get all the options, use https://pasteboard.co/cZTTGY9EywNL.png
What version of Proxmox are you currently running? On version 7.2, I suspect the in-kernel drivers for my NUC10i7FNH graphics are not correct, since I am getting this output for the device.

lspci -v
Code:
00:02.0 VGA compatible controller: Intel Corporation Device 9bca (rev 04) (prog-if 00 [VGA controller])
        DeviceName:  GPU
        Subsystem: Intel Corporation Device 2081
        Flags: bus master, fast devsel, latency 0, IRQ 160, IOMMU group 1
        Memory at 6022000000 (64-bit, non-prefetchable) [size=16M]
        Memory at 4000000000 (64-bit, prefetchable) [size=256M]
        I/O ports at 3000 [size=64]
        Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
        Capabilities: [40] Vendor Specific Information: Len=0c <?>
        Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Capabilities: [d0] Power Management version 2
        Capabilities: [100] Process Address Space ID (PASID)
        Capabilities: [200] Address Translation Service (ATS)
        Capabilities: [300] Page Request Interface (PRI)
        Kernel driver in use: i915
        Kernel modules: i915
Code:
root@pve-01:~# dmesg | grep IOMMU
[    0.467951] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.869021] DMAR: Intel-IOMMU force enabled due to platform opt in
[    0.869057] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.869058] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.869061] DMAR: IOMMU feature nwfs inconsistent
[    0.869064] DMAR: IOMMU feature pasid inconsistent
[    0.869067] DMAR: IOMMU feature eafs inconsistent
[    0.869069] DMAR: IOMMU feature prs inconsistent
[    0.869071] DMAR: IOMMU feature nest inconsistent
[    0.869074] DMAR: IOMMU feature mts inconsistent
[    0.869076] DMAR: IOMMU feature sc_support inconsistent
[    0.869078] DMAR: IOMMU feature dev_iotlb_support inconsistent
Code:
root@pve-01:~# dmesg | grep DMAR
[    0.024149] ACPI: DMAR 0x00000000B9EF8000 0000A8 (v01 INTEL  NUC9i5FN 00000039      01000013)
[    0.024224] ACPI: Reserving DMAR table memory at [mem 0xb9ef8000-0xb9ef80a7]
[    0.467906] DMAR: Host address width 39
[    0.467909] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.467920] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.467928] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.467934] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.467941] DMAR: RMRR base: 0x000000ba446000 end: 0x000000ba68ffff
[    0.467946] DMAR: RMRR base: 0x000000bb800000 end: 0x000000bfffffff
[    0.467951] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.467956] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.467960] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.470330] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.869021] DMAR: Intel-IOMMU force enabled due to platform opt in
[    0.869051] DMAR: No ATSR found
[    0.869054] DMAR: No SATC found
[    0.869057] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.869058] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.869061] DMAR: IOMMU feature nwfs inconsistent
[    0.869064] DMAR: IOMMU feature pasid inconsistent
[    0.869067] DMAR: IOMMU feature eafs inconsistent
[    0.869069] DMAR: IOMMU feature prs inconsistent
[    0.869071] DMAR: IOMMU feature nest inconsistent
[    0.869074] DMAR: IOMMU feature mts inconsistent
[    0.869076] DMAR: IOMMU feature sc_support inconsistent
[    0.869078] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.869081] DMAR: dmar0: Using Queued invalidation
[    0.869089] DMAR: dmar1: Using Queued invalidation
[    0.870412] DMAR: Intel(R) Virtualization Technology for Directed I/O
Code:
root@pve-01:~# lspci
00:00.0 Host bridge: Intel Corporation Device 9b51
00:02.0 VGA compatible controller: Intel Corporation Device 9bca (rev 04)
00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Comet Lake Thermal Subsytem
00:14.0 USB controller: Intel Corporation Comet Lake PCH-LP USB 3.1 xHCI Host Controller
00:14.2 RAM memory: Intel Corporation Comet Lake PCH-LP Shared SRAM
00:15.0 Serial bus controller [0c80]: Intel Corporation Serial IO I2C Host Controller
00:15.2 Serial bus controller [0c80]: Intel Corporation Device 02ea
00:16.0 Communication controller: Intel Corporation Comet Lake Management Engine Interface
00:17.0 SATA controller: Intel Corporation Comet Lake SATA AHCI Controller
00:1c.0 PCI bridge: Intel Corporation Device 02bc (rev f0)
00:1d.0 PCI bridge: Intel Corporation Device 02b0 (rev f0)
00:1f.0 ISA bridge: Intel Corporation Comet Lake PCH-LP LPC Premium Controller/eSPI Controller
00:1f.4 SMBus: Intel Corporation Comet Lake PCH-LP SMBus Host Controller
00:1f.5 Serial bus controller [0c80]: Intel Corporation Comet Lake SPI (flash) Controller
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (10) I219-V
01:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)
02:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)
02:01.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)
02:02.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 2C 2018] (rev 06)
03:00.0 System peripheral: Intel Corporation JHL7540 Thunderbolt 3 NHI [Titan Ridge 2C 2018] (rev 06)
39:00.0 USB controller: Intel Corporation JHL7540 Thunderbolt 3 USB Controller [Titan Ridge 2C 2018] (rev 06)
3a:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
For some reason my IOMMU is showing "force enabled" instead of just enabled by default.

I have the exact same values in my default GRUB as well as in the modules. I have been able to pass through the entire GPU to a Linux/Windows VM; however, that prevents the host from using the GPU, whereas GVT-g allows splitting the GPU, including host usage. I am not sure what I could be doing differently other than running a different Proxmox version from you. Everything else seems to be the same, but it just isn't working properly.
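For what it's worth, here is how I understand the difference between the two approaches in the VM config (device address taken from my lspci output above; the mdev type name is just an example from my host):

Code:
# full passthrough -- the host loses the iGPU entirely
hostpci0: 0000:00:02.0,pcie=1

# mediated passthrough (GVT-g) -- the host keeps the iGPU, the VM gets a vGPU slice
hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4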
 
I'm running 7.2-4; it's been so long that I'm not 100% sure what I enabled/disabled.

I did forget these:

Code:
root@pve:~# cd /etc/modprobe.d
root@pve:/etc/modprobe.d# ls
iommu.conf  pve-blacklist.conf
root@pve:/etc/modprobe.d# more iommu.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
root@pve:/etc/modprobe.d# more pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE

# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
root@pve:/etc/modprobe.d#
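After a reboot I check that GVT-g actually came up, with a quick sanity check like this:

Code:
# confirm the mediated-device module loaded and GVT initialized
lsmod | grep kvmgt
dmesg | grep -i gvt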
 
I'm running 7.2-4; it's been so long that I'm not 100% sure what I enabled/disabled.

I did forget these:

Code:
root@pve:~# cd /etc/modprobe.d
root@pve:/etc/modprobe.d# ls
iommu.conf  pve-blacklist.conf
root@pve:/etc/modprobe.d# more iommu.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
root@pve:/etc/modprobe.d# more pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE

# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
root@pve:/etc/modprobe.d#
Thank you for trying to help me out. I appreciate you sending all of this information. Sadly, GVT-g is still not working properly. Are you also using an Intel NUC10i7? I know you said you are using a Comet Lake NUC, so I am assuming you are using a NUC10 of some sort.

I am assuming my issue may come from the BIOS version rather than from Proxmox itself. I have upgraded to the most recent BIOS version. Other than that, my BIOS settings should be fine. VT-d and VT-x are both enabled. Secure Boot is disabled. And all other settings are mainly the defaults, besides a few disabled devices such as the microphone, etc.
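If it helps with comparing, the BIOS version can also be read from the running host (dmidecode ships with Proxmox, as far as I know):

Code:
# read the firmware version without rebooting into the BIOS setup
dmidecode -s bios-version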
 

Intel NUC 10 BXNUC10i5FNHN1

I have 2x of these; i5, not i7.
I know it's kind of a hassle, but if you're able, can you check which BIOS version your NUCs are using? I am currently on FNCML357.0057.2022.0520.1803. I feel that the BIOS version may be the issue and want to compare if possible, since we have basically the same setup.
 
No bother, here are both.


Code:
# dmidecode 3.3
Getting SMBIOS data from sysfs.
SMBIOS 3.3.0 present.
Table at 0x6F9DC000.

Handle 0x0000, DMI type 0, 26 bytes
BIOS Information
        Vendor: Intel Corp.
        Version: FNCML357.0055.2021.1202.1748
        Release Date: 12/02/2021
        Address: 0xF0000
        Runtime Size: 64 kB
        ROM Size: 16 MB
        Characteristics:
                PCI is supported
                BIOS is upgradeable
                BIOS shadowing is allowed
                Boot from CD is supported
                Selectable boot is supported
                BIOS ROM is socketed
                EDD is supported
                5.25"/1.2 MB floppy services are supported (int 13h)
                3.5"/720 kB floppy services are supported (int 13h)
                3.5"/2.88 MB floppy services are supported (int 13h)
                Print screen service is supported (int 5h)
                Serial services are supported (int 14h)
                Printer services are supported (int 17h)
                ACPI is supported
                USB legacy is supported
                BIOS boot specification is supported
                Targeted content distribution is supported
                UEFI is supported
        BIOS Revision: 5.16
        Firmware Revision: 3.9

Code:
# dmidecode 3.3
Getting SMBIOS data from sysfs.
SMBIOS 3.3.0 present.
Table at 0x6F9DC000.

Handle 0x0000, DMI type 0, 26 bytes
BIOS Information
        Vendor: Intel Corp.
        Version: FNCML357.0055.2021.1202.1748
        Release Date: 12/02/2021
        Address: 0xF0000
        Runtime Size: 64 kB
        ROM Size: 16 MB
        Characteristics:
                PCI is supported
                BIOS is upgradeable
                BIOS shadowing is allowed
                Boot from CD is supported
                Selectable boot is supported
                BIOS ROM is socketed
                EDD is supported
                5.25"/1.2 MB floppy services are supported (int 13h)
                3.5"/720 kB floppy services are supported (int 13h)
                3.5"/2.88 MB floppy services are supported (int 13h)
                Print screen service is supported (int 5h)
                Serial services are supported (int 14h)
                Printer services are supported (int 17h)
                ACPI is supported
                USB legacy is supported
                BIOS boot specification is supported
                Targeted content distribution is supported
                UEFI is supported
        BIOS Revision: 5.16
        Firmware Revision: 3.9
 
