Intel N100 MiniServer GPU passthrough results in black screen


New Member
Aug 20, 2023
Hello all,

I just bought the CWWK N100 mini PC to run as a low-powered server.

I've followed this tutorial to set up GPU passthrough:

It took me quite a while to get everything working, as my system apparently uses systemd-boot rather than GRUB.

Anyway, I've set up my VM as follows:


When I start this VM, the screen of the HOST turns black (I think this is expected, as the VM is now taking over the GPU).

However no output from the VM is displayed.

I did try setting the Display to "SPICE" and then booting the VM; this made noVNC work again, but Device Manager inside the guest shows the Intel iGPU with error 43 (Code 43).

Things I've already tried that didn't work:
  1. Tick the PCI-Express box.
  2. Ensure IOMMU is enabled (it shows up in dmesg and I can select the raw device).
  3. Unload the i915 driver by adding it to the modprobe blacklist.
  4. Unload the i915 driver via modprobe.blacklist=i915 on the kernel command line in the systemd-boot config.
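One thing I haven't tried yet: most passthrough guides bind the iGPU to vfio-pci via /etc/modprobe.d instead of (or in addition to) blacklisting i915. A rough sketch of what that could look like — the 8086:xxxx device ID below is a placeholder, the real one has to be looked up with lspci first:

root@proxmox1:~# lspci -nn -s 00:02.0
root@proxmox1:~# cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:xxxx
softdep i915 pre: vfio-pci
root@proxmox1:~# update-initramfs -u -k all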

All my configs:
VM Config:
root@proxmox1:~# cat /etc/pve/qemu-server/104.conf
agent: enabled=1
bios: ovmf
boot: order=scsi0;net0
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-zfs:vm-104-disk-0,efitype=4m,pre-enrolled-keys=0,size=1M
hostpci0: 0000:00:02
machine: pc-q35-8.0
memory: 4096
meta: creation-qemu=8.0.2,ctime=1692529766
name: GraphicsVM2
net0: virtio=8E:DD:FD:17:66:14,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-zfs:vm-104-disk-1,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=7fc64ba0-d8c4-4156-99c7-337082b9b63e
sockets: 1
vmgenid: 193fdb93-f121-42a0-b99b-fa05ab97b7ad
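(Side note: some guides pass the full function address and set the PCIe flag explicitly rather than only ticking the box in the GUI; if that matters, the hostpci line would look something like this — untested on my box:)

hostpci0: 0000:00:02.0,pcie=1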

systemd-boot config:
root@proxmox1:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt modprobe.blacklist=i915
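Since this is a systemd-boot (ZFS) install, edits to /etc/kernel/cmdline only take effect after refreshing the boot entries and rebooting, which I did:

root@proxmox1:~# proxmox-boot-tool refresh
(reboot, then verify the active command line)
root@proxmox1:~# cat /proc/cmdline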

Modules config:

root@proxmox1:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
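Note that my /etc/modules has no vfio entries; most passthrough guides add these three (on kernel 6.2, vfio_virqfd has been merged into vfio, so only these should be needed — if I understand correctly):

vfio
vfio_iommu_type1
vfio_pci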

dmesg output:

root@proxmox1:~# dmesg | grep -i -e DMAR -e IOMMU
[    0.000000] Command line: initrd=\EFI\proxmox\6.2.16-8-pve\initrd.img-6.2.16-8-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt modprobe.blacklist=i915
[    0.011893] ACPI: DMAR 0x000000007550E000 000088 (v02 INTEL  EDK2     00000002      01000013)
[    0.011929] ACPI: Reserving DMAR table memory at [mem 0x7550e000-0x7550e087]
[    0.062089] Kernel command line: initrd=\EFI\proxmox\6.2.16-8-pve\initrd.img-6.2.16-8-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt modprobe.blacklist=i915
[    0.062139] DMAR: IOMMU enabled
[    0.145002] DMAR: Host address width 39
[    0.145003] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.145010] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[    0.145013] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.145019] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[    0.145022] DMAR: RMRR base: 0x0000007c000000 end: 0x000000803fffff
[    0.145025] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.145028] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.145029] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.146698] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.331970] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    0.367410] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.416155] DMAR: No ATSR found
[    0.416157] DMAR: No SATC found
[    0.416158] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.416159] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.416161] DMAR: IOMMU feature nwfs inconsistent
[    0.416163] DMAR: IOMMU feature dit inconsistent
[    0.416164] DMAR: IOMMU feature sc_support inconsistent
[    0.416166] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.416167] DMAR: dmar0: Using Queued invalidation
[    0.416172] DMAR: dmar1: Using Queued invalidation
[    0.416398] pci 0000:00:02.0: Adding to iommu group 0
[    0.416437] pci 0000:00:00.0: Adding to iommu group 1
[    0.416453] pci 0000:00:14.0: Adding to iommu group 2
[    0.416461] pci 0000:00:14.2: Adding to iommu group 2
[    0.416475] pci 0000:00:16.0: Adding to iommu group 3
[    0.416487] pci 0000:00:19.0: Adding to iommu group 4
[    0.416494] pci 0000:00:19.1: Adding to iommu group 4
[    0.416502] pci 0000:00:1a.0: Adding to iommu group 5
[    0.416517] pci 0000:00:1c.0: Adding to iommu group 6
[    0.416530] pci 0000:00:1d.0: Adding to iommu group 7
[    0.416541] pci 0000:00:1d.1: Adding to iommu group 8
[    0.416556] pci 0000:00:1d.2: Adding to iommu group 9
[    0.416569] pci 0000:00:1d.3: Adding to iommu group 10
[    0.416586] pci 0000:00:1f.0: Adding to iommu group 11
[    0.416595] pci 0000:00:1f.3: Adding to iommu group 11
[    0.416603] pci 0000:00:1f.4: Adding to iommu group 11
[    0.416611] pci 0000:00:1f.5: Adding to iommu group 11
[    0.416625] pci 0000:01:00.0: Adding to iommu group 12
[    0.416640] pci 0000:02:00.0: Adding to iommu group 13
[    0.416651] pci 0000:03:00.0: Adding to iommu group 14
[    0.416666] pci 0000:04:00.0: Adding to iommu group 15
[    0.416678] pci 0000:05:00.0: Adding to iommu group 16
[    0.416771] DMAR: Intel(R) Virtualization Technology for Directed I/O
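To check which driver actually claims the iGPU at boot (for passthrough to work it should say vfio-pci, not i915):

root@proxmox1:~# lspci -nnk -s 00:02.0
(look for "Kernel driver in use: vfio-pci" in the output)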

