Intel N100/iGPU Passthrough to VM and use with Docker

Iacov

Member
Jan 24, 2024
hey

I hope this is the right subforum to post this.
It all started with the idea of getting an Intel N100 mini PC to run Plex or Jellyfin on it.
Then I thought to myself that it would be handy to make the mini PC my second PVE node (for easier management, backups etc.) and use the resources for maybe some other VMs too.

The issue is that I don't know whether this is a good idea (a Docker container in a VM on a hypervisor), I have never passed through hardware, and I have no experience with graphics drivers on Linux (but I try to be a fast learner).

Neither searches nor lengthy talks with ChatGPT could answer my questions. Asking on Reddit ended with people simply recommending tteck's LXC scripts - but I have doubts about how portable that container would be and how easily (or not) I could copy the relevant data off of it to migrate the Plex (or Jellyfin) installation if necessary.

What is your opinion and recommendation?
Is Docker in a VM with hardware passthrough a bad idea?
How could I achieve it?
Would I lose too much (graphics) performance to the overhead?
What part of the chip actually needs to be passed through? As far as I understand, the QuickSync encoder is often tied to the CPU's IOMMU group instead of the iGPU's - is that right?

If I pass through the iGPU to the VM, only the VM needs the drivers installed, right? When using an LXC, the container as well as PVE need the correct drivers, right?

Sorry for asking so many questions - maybe trivial ones for the experienced users - but I would really like to learn.

P.S.: I already have one PVE node and have gained my first experience with creating VMs, some LXCs, backups etc. Not very experienced, but I'm not starting from zero ;)
 
Then I thought to myself that it would be handy to make the mini PC my second PVE node
Two-node clusters are problematic, as you can also see from threads on this forum: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_quorum
Having two separate Proxmox VE systems is not a problem. You might want to run Proxmox Backup Server (PBS) in a container on both and have them back up each other.
The issue is that I don't know whether this is a good idea (a Docker container in a VM on a hypervisor)
Docker in a VM is better than Docker in a container: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct
I have never passed through hardware, and I have no experience with graphics drivers on Linux (but I try to be a fast learner)
PCI(e) passthrough is hit or miss and trial and error, as you can also see from many, many, many threads on this forum: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough. Make sure to search for your specific hardware and passthrough (and all the necessary work-arounds or show-stoppers) before committing.
If I pass through the iGPU to the VM, only the VM needs the drivers installed, right?
Yes. There is also mediated passthrough to multiple VMs but that does not allow display output. And there is VirGL for VMs: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_display
When using an LXC, the container as well as PVE need the correct drivers, right?
Yes.
 
Would I lose too much (graphics) performance to the overhead?
There is no overhead. You pass through the physical GPU and the guest OS directly accesses the real hardware, just like with a bare-metal install.

How could I achieve it?
See the PCI passthrough guide: https://pve.proxmox.com/wiki/PCI_Passthrough

Is Docker in a VM with hardware passthrough a bad idea?
Docker is recommended to be run in a VM. And if you want hardware-accelerated transcoding in a VM, you need to pass a GPU through via PCI(e).
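For illustration only, here is roughly what attaching the iGPU to a VM looks like from the host CLI (the VMID 100 and the PCI address are examples; on Alder Lake-N boxes the iGPU usually sits at 00:02.0, and pcie=1 assumes a q35 machine type). The GUI equivalent is Hardware > Add > PCI Device.
Code:
# on the PVE host: find the iGPU's PCI address
lspci -nn | grep -i vga
# attach it to the VM (example VMID 100)
qm set 100 --hostpci0 0000:00:02.0,pcie=1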

What part of the chip actually needs to be passed through? As far as I understand, the QuickSync encoder is often tied to the CPU's IOMMU group instead of the iGPU's - is that right?
You can only pass through whole IOMMU groups, so first I would check whether the iGPU actually has its own IOMMU group.
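A minimal sketch for checking that on the PVE host (purely illustrative, not from the posts above):
Code:
#!/bin/bash
# list every IOMMU group and the PCI devices it contains
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "    "
        lspci -nns "${d##*/}"
    done
done
If the iGPU (typically 00:02.0) shares a group with other devices you still need, passing through just the iGPU won't work without extra measures.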

If I pass through the iGPU to the VM, only the VM needs the drivers installed, right?
Yes, it's then exclusive to that single VM. Neither the host nor any other VM or LXC will be able to use the iGPU.

When using an LXC, the container as well as PVE need the correct drivers, right?
Yes. LXCs don't get their own kernels. They share the kernel with the PVE host, so the PVE host has to drive all the hardware.
 
Two-node clusters are problematic, as you can also see from threads on this forum: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_quorum
Having two separate Proxmox VE systems is not a problem. You might want to run Proxmox Backup Server (PBS) in a container on both and have them back up each other.
Thank you :)
Sorry if I worded it wrong - I'm not thinking about a cluster per se, but about being able to manage both machines/nodes via the same dashboard... maybe similar to Portainer?
Or am I mistaken and such "unified" management is not possible without creating a subpar cluster? (As far as I understand, a cluster aims at migration and reliability/HA, right?)
 

That requires a cluster...and for that you need 3 machines.
 
Everything about PCIe passthrough sounds logical and straightforward, until you actually try it and run into issues with the device, the motherboard and the BIOS. I have been down this road of trial and error and work-arounds many a time. I wish you the best of luck.
Yes, that was not clearly worded by me. I mean it sounds logical compared to other guides that I have found.
 
I have three OASLOA Mini PCs in a 3-node HA Proxmox Cluster (Intel N95 Processor, 16 GB LPDDR5, 512 GB NVMe).

I don't know if Portainer can monitor containers across different machines, but that sounds a tad more like a Kubernetes thing.

In terms of passing the iGPU through: yes, you can pass it through to either a privileged or an unprivileged LXC container, no issues.

Jim's Garage and apalrd's adventures on YouTube have, I think, information about passing stuff through to a privileged container.

(A lot of people recommend running unprivileged containers where and when possible. I have argued that for a home lab, if you're not going to be opening/exposing your services to the internet, you can probably get away with using privileged containers.)

In either case, iGPU passthrough is possible.

If you want the deployment notes for iGPU passthrough from my Intel N95 Processor (the process should be very similar for the N100), please let me know and I can dump my OneNote notes here.

Thanks.
 
I would be very interested to have a look at those notes. I am trying to implement video passthrough to the N100 mini PC's HDMI output, without good results.
 
So a couple of notes before I dump my deployment notes from my OneNote:

My system is an OASLOA Mini PC and, as such, has an Intel N95 Processor rather than the N100 (with 16 GB of RAM and a 512 GB SSD).

So the deployment notes should be VERY similar, as they should give you the gist of what you would need to do, but the specific implementation may be ever so slightly different (possibly/potentially).

With that out of the way, here are my deployment notes:

[begin deployment notes dump]

On the host:

Do not create/edit the /etc/modprobe.d/blacklist.conf!!!

Do not edit /etc/modules!!!

Do not enable IOMMU!!!

Do not unload the i915 kernel module/driver!!!

# apt install -y build-essential pve-headers-$(uname -r) intel-gpu-tools

# lspci -k


00:02.0 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
DeviceName: Onboard - Video
Subsystem: Intel Corporation Alder Lake-N [UHD Graphics]
Kernel driver in use: i915
Kernel modules: i915


# ls -l /dev/dri
total 0
drwxr-xr-x 2 root root 80 Feb 5 01:16 by-path
crw-rw---- 1 root video 226, 0 Feb 5 01:16 card0
crw-rw---- 1 root render 226, 128 Feb 5 01:16 renderD128


# cd /etc/pve/nodes/minipc3/lxc

# vi 4239.conf

Append to end of file:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file


Inside the privileged LXC container:

Verify that the LXC container can still see the Intel UHD Graphics
# ls -l /dev/dri
total 0
drwxr-xr-x 2 root root 80 Feb 5 06:16 by-path
crw-rw---- 1 root video 226, 0 Feb 5 06:16 card0
crw-rw---- 1 root ssl-cert 226, 128 Feb 5 06:16 renderD128


# vi /etc/hosts

192.168.4.160 pve
192.168.4.241 os-mirror


Install Plex Media Server
# wget https://downloads.plex.tv/plex-medi...exmediaserver_1.32.8.7639-fb6452ebf_amd64.deb


# apt install -y gpg nfs-common

# dpkg -i plexmediaserver_1.32.8.7639-fb6452ebf_amd64.deb

# mkdir /export/myfs

# vi /etc/fstab
pve:/export/myfs /export/myfs nfs defaults 0 0


save,quit

# mount -a

# df -h

[end of deployment notes dump]
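One thing the notes above don't spell out: if the container was already running while its .conf was edited, the new device entries only take effect after a full stop/start, e.g. (using the CTID from the notes, 4239):
Code:
pct stop 4239
pct start 4239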

If you have any questions, please feel free to ask.

Thanks.
 
On my Minisforum with i5-12450H with Alder-Lake iGPU:

On the Proxmox Host

- Enable IOMMU in BIOS

- blacklist.conf:
Code:
blacklist radeon
blacklist nvidia
blacklist nouveau
blacklist i2c_nvidia_gpu

- Grub:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt intel_pstate=disable"

Of course with all the "update-initramfs -u -k all" & "update-grub"... whatever is needed to make these changes take effect.
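For reference, a sketch of that apply-and-verify step (the proxmox-boot-tool line only applies to systemd-boot/ZFS installs; pick whichever your system uses):
Code:
update-initramfs -u -k all
update-grub                  # or: proxmox-boot-tool refresh
reboot
# after the reboot, confirm the IOMMU is active:
dmesg | grep -e DMAR -e IOMMU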


On the VM (no LXC in my case)

In the Proxmox UI: of course I turned off ballooning and passed the whole iGPU through as a PCIe device with all options enabled.

Installed packages from the corresponding repos:
- firmware-misc-nonfree
- jellyfin-ffmpeg5
- intel-opencl-icd

... then "sudo /usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device opencl@va" gave me everything green & OK, which wasn't the case without "firmware-misc-nonfree". Pass /dev/dri into the Jellyfin Docker container, which is then able to play everything with VAAPI.
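In case it helps anyone, passing the render node into the Jellyfin container can look roughly like this (image name and port are the Jellyfin defaults, the paths are placeholders to adjust):
Code:
docker run -d --name jellyfin \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin
Then enable VAAPI under Dashboard > Playback in Jellyfin and point it at /dev/dri/renderD128.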

Hopefully I didn't miss anything - maybe some of it isn't necessary, but several days of trial & error brought me to this final result.
 
Thank you for your deployment notes.

I think that I might have deployment notes for an Ubuntu VM as well, but I've been slowly migrating away from using Linux VMs (if I don't need to) and doing the same thing using LXC containers.

Yes, I understand that for business and commercial applications you probably wouldn't want to use LXC containers (which share the host's kernel) for security reasons, but for a homelabber such as myself this isn't really much of a risk (as none of the services that I am hosting are open to the public internet).

Even my Docker application containers are deployed from within an LXC container, and I have found that this makes my Proxmox server more flexible in terms of the allocation and utilisation of resources.

As a result, the only VMs that I have now are those that can't use or share the host's Linux kernel (so everything that's NOT Linux).

My sub-$1000 used dual-Xeon server doesn't have enough PCIe x16 slots for me to put a high-speed networking card (100 Gbps InfiniBand) AND more than one GPU in there, so if, say, my Windows VM wants a GPU, I am currently using a separate 5950X system for that.

(I think I initially was learning how to pass the Intel iGPU through to a VM, but then switched over to using LXC containers instead, because I can share the iGPU between LXC containers more consistently than trying to share it between an LXC and a VM, or between VMs. The hand-off isn't always clean.)
 
I'm using it in my homelab as well. I ended up setting up a Debian Docker VM because of some restrictions of an LXC - when it comes to accessing shared folders and such - with PaperlessNGX, e.g., I ran into problems. That's why I have a Docker LXC and a Docker VM right now. I even have 2 more VMs: HomeAssistant and 3CX. Both came as an ISO to install, which was way easier than manually trying to LXC them. I don't even know if that's possible...
 
So it depends on how your shared folders are set up.

On my Proxmox server, the host itself (which runs Debian 11 underneath the Proxmox middleware) is set up to export both an NFS share and a Samba share.

So, if you are using a privileged container, there is more work to get the shares up and running.

For an unprivileged container, whenever I create an LXC container, I always have to remember to go to "Options" and enable the type of sharing that I want -- usually NFS -- and that way, once I install the nfs-common package, I can then mount said NFS share.

The newer thing that I've been doing is that, instead of actually using NFS or SMB, there is an option to pass the "share" through to the LXC container as a mount point (a bind mount).

edit /etc/pve/lxc/<<CTID>>.conf:

Code:
mp0: /path/on/host,mp=/path/in/container

And then once you start up the container, you can check to make sure that the mount point got passed through correctly and successfully.
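The same thing can also be done from the host CLI instead of editing the file by hand; the paths are placeholders, same as above:
Code:
pct set <<CTID>> -mp0 /path/on/host,mp=/path/in/container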

re: HomeAssistant
It's available as a Docker container, which means that you can set up an LXC container with any base Linux distro that you want/are comfortable with, deploy Docker inside it, and then deploy the Docker application container there.

cf. https://www.home-assistant.io/installation/linux#install-home-assistant-container
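The container setup from that page boils down to something like this (TZ and the config path are placeholders to adjust):
Code:
docker run -d \
  --name homeassistant \
  --privileged \
  --restart=unless-stopped \
  -e TZ=Europe/Berlin \
  -v /PATH_TO_YOUR_CONFIG:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable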

It doesn't look like this will be an option for 3CX.

Admittedly, I don't really know anything about 3CX, but from googling it, it looks like it's a PBX phone system, so I'm not sure why you would need or want to pass the iGPU through to that. (Or maybe it's completely unrelated.) In either case, some stuff works, but not all.

Thanks.
 
This is going off topic ;)
... OK, I started it. My statements about the VMs are not targeted at iGPU passthrough - just, in general, why I use them and that I ran into problems with the sharing. Thanks for the tips - I guess I'll take a look at that! Thanks a lot!
 
No problem.

You're welcome.

*edit*
I was checking my OneNote a little while ago, and it would appear that I actually do NOT have deployment notes for the Intel iGPU for VMs (just for the LXC container).

However, having said that, I am pretty sure that I can morph my deployment notes for my Nvidia GPUs so that they will work with the Intel iGPU as well, and publish them on here, if someone needs or wants them.

Otherwise, as I mentioned before, the Intel N95 isn't that powerful a processor (compared to the rest of Intel's CPU portfolio), and as a result I don't really use that system much beyond being a Windows AD DC, DNS server, and AdGuard Home server.

I tested it with some Plex HW-accelerated transcoding and it can do the video just fine, but the audio transcoding still runs on the CPU, and the CPU is quite slow for that, so it wasn't particularly effective or efficient at that task.
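If anyone wants to verify that the iGPU is really doing the video work during such a transcode, intel_gpu_top (from the intel-gpu-tools package installed in the notes above) shows the per-engine load; the Video engine should be busy while a stream is transcoding:
Code:
# run as root on the PVE host while a transcode is running
intel_gpu_top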
 
I followed the first part of your instructions on a brand-new installation of Proxmox, thank you for that.
I wish to apply the second half of your instructions to a Windows 11 VM. Do you have any ideas? I installed Windows 11 on a new VM, but it can't see the graphics card, nor does HDMI passthrough work.
 
I was checking my OneNote and apparently, I did not have any notes for deploying the iGPU to a Windows 11 VM.

So, to that end, the first part of my deployment notes won't be applicable to the Windows 11 VM, since it's not an LXC container.

For that, the deployment notes will probably look more like what it takes to pass a discrete Nvidia GPU through to a VM.

I haven't personally done it with my N95 system, so I am not 100% confident that my attempt to morph the instructions will actually work for other people.

If I get time this weekend, maybe I'll play around with trying to actually deploy it, since I think you're at least the second person who has explicitly made mention of it.
 
Thanks! I have followed some guides, and the closest I got was making the graphics card available within Windows, in case someone wanted to use its compute power. Unfortunately, no real HDMI output.
 
