Topton NAS motherboard N17 (not sure) with Ryzen 8845HS: issue with IOMMU and AMD-VT

FancyBee

Member
Mar 7, 2024
Hi. I finally bought one for my NAS server, but I can't activate IOMMU for passthrough. The company claims that IOMMU and AMD-VT are activated by default. In the BIOS firmware there is no such option (or maybe I just can't find it). And all the GRUB manipulations that I did on my previous motherboard with Ryzen and Proxmox 8.1 don't work here.

Maybe someone has already completed this quest or has fresher firmware. My BIOS version is from May 07, 2024.
 
AMD-V is for running virtual machines (like Intel VT-x) and AMD-Vi is for PCI(e) passthrough (like VT-d). Proxmox enables amd_iommu by default (and has for a long time on AMD), but it needs to be enabled in the motherboard BIOS (setting IOMMU to Auto is not always enough). You don't need to do anything in Proxmox, only in the BIOS.

What is this AMD-VT you are talking about? And what makes you think IOMMU is not enabled or working on your AMD system? Which motherboard and Ryzen chipset?
 
AMD-V is for running virtual machines (like Intel VT-x) and AMD-Vi is for PCI(e) passthrough (like VT-d). Proxmox enables amd_iommu by default (and has for a long time on AMD)
On 8.1 it was not enabled; I needed to edit the GRUB config.
but it needs to be enabled in the motherboard BIOS (setting IOMMU to Auto is not always enough). You don't need to do anything in Proxmox only in the BIOS.
In the BIOS there is no option related to IOMMU, and nothing about AMD-Vi. The dealer said that both are enabled; I'm trying to verify that and set up passthrough for the GPU and the SATA controller.
What is this AMD-VT you are talking about? And what makes you think IOMMU is not enabled or working on your AMD system? Which motherboard Ryzen chipset?
The motherboard is a Topton NAS board (potential model number N17) with the mobile CPU Ryzen 8845HS. Topton doesn't provide a motherboard model, and the PCB only says "topton nas". It has several additional controllers from ASMedia (8x SATA) and Intel (4x 2.5G Ethernet). A cheap choice for a home-lab NAS. I don't want to waste its potential on a dedicated TrueNAS box and want to virtualise it inside Proxmox.

I don't know the exact chipset for this mobile CPU. The 8845HS was released this year and supports DDR5, so I think it is something generic.
 
I bought the same board but with the 7840HS; it is still in transit. Can you tell me (and your future self in 2 years :D ) how you did that?
What exactly did you edit in GRUB?
 
I bought the same board but with the 7840HS; it is still in transit. Can you tell me (and your future self in 2 years :D ) how you did that?
What exactly did you edit in GRUB?
About purchasing it and usage scenarios:
I looked up several reviews on YouTube. On paper it looks great: you buy it on AliExpress, install Proxmox, set up TrueNAS on top, connect the disks, and it works. I prefer Proxmox with TrueNAS as one of several OSes, and for some of them Nvidia vGPU passthrough. Right now it looks like I'm stuck with TrueNAS and Docker images without any vGPU.

About IOMMU:
There is an article https://forum.proxmox.com/threads/p...x-ve-8-installation-and-configuration.130218/ that describes GPU passthrough. Usually the nano or vi editor is used to edit the files. For AMD CPUs, in /etc/default/grub:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

After that, run update-grub without arguments and reboot.

I planned to use this motherboard as a home server for NAS and virtualisation, so IOMMU support was crucial. The CPU supports it; the motherboard BIOS doesn't, so I'm stuck for now. I had a long conversation with the manufacturer's support: they say this is an OEM board, so it is shipped as is. It is functioning, so no additional service.
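For reference, a quick way to confirm the CPU side of this (a minimal sketch; the svm flag only tells you about AMD-V, while the AMD-Vi/IOMMU messages also depend on the BIOS exposing an ACPI IVRS table):

Code:
# AMD-V (SVM) support in the CPU: a non-zero count means the flag is present
grep -cw svm /proc/cpuinfo
# AMD-Vi/IOMMU initialization messages, if the firmware exposes it
dmesg | grep -i 'AMD-Vi'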
 
The board and the other parts have arrived, so today I installed Proxmox.
I have the 7840HS CPU, but it seems IOMMU works; here is the guide I followed:
https://www.servethehome.com/how-to-pass-through-pcie-nics-with-proxmox-ve-on-intel-and-amd/

Very similar to yours, but the grub file is a bit different:

nano /etc/default/grub
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
update-grub

nano /etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Reboot PVE -> dmesg | grep -e DMAR -e IOMMU should show something like this:

(screenshot: dmesg output with the AMD-Vi / IOMMU initialization messages)

My BIOS is v0.01 and, like other OEM BIOSes, it has almost ZERO configuration. I can't even see whether the HDDs are detected, and I just hope everything will work in the future :\
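As an extra sanity check beyond dmesg (a sketch, assuming the standard sysfs layout): when the IOMMU is actually active the kernel populates /sys/kernel/iommu_groups, so an empty or missing directory means it did not come up.

Code:
# count the IOMMU groups; 0 means the IOMMU did not come up
ls /sys/kernel/iommu_groups | wc -l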
 
Looks like I have problems with my motherboard and its BIOS.
This is the motherboard: https://www.aliexpress.com/item/1005006597893262.html (I chose the 8845HS variant with cables and fan).

I use Proxmox 8.2.1 with kernel 6.5.13-6-pve.
It uses GRUB, and I also tried the systemd-boot setup; same result.
--grub case:
grep iommu /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

executed "update-grub"

--systemd case, just in case:
grep iommu /etc/kernel/cmdline
quiet amd_iommu=on iommu=pt

executed "proxmox-boot-tool refresh"

rebooted
executed "dmesg | grep -e DMAR -e IOMMU" and got an empty result. No IOMMU for me from this motherboard.

In dmesg, related to module loading:

[ 21.203767] Modules linked in: intel_rapl_msr intel_rapl_common nvidia_vgpu_vfio(OE) nvidia(OE) snd_soc_dmic snd_soc_ps_mach snd_ps_pdm_dma amdgpu(+) snd_sof_amd_rembrandt edac_mce_amd snd_sof_amd_renoir snd_hda_codec_realtek snd_sof_amd_acp snd_hda_codec_generic snd_sof_pci snd_sof_xtensa_dsp kvm_amd ledtrig_audio snd_sof crct10dif_pclmul polyval_clmulni polyval_generic ghash_clmulni_intel snd_hda_codec_hdmi snd_sof_utils amdxcp sha256_ssse3 iommu_v2 snd_soc_core snd_hda_intel drm_buddy sha1_ssse3 gpu_sched snd_intel_dspcfg aesni_intel snd_compress snd_intel_sdw_acpi drm_suballoc_helper ac97_bus drm_ttm_helper snd_pcm_dmaengine snd_hda_codec ttm crypto_simd cryptd snd_pci_ps snd_hda_core snd_rpl_pci_acp6x drm_display_helper snd_acp_pci snd_pci_acp6x snd_hwdep cec snd_pcm snd_pci_acp5x rc_core snd_rn_pci_acp3x mdev snd_timer snd_acp_config drm_kms_helper snd snd_soc_acpi soundcore i2c_algo_bit rapl pcspkr kvm k10temp snd_pci_acp3x ccp mac_hid zfs(PO) spl(O) vhost_net vhost vhost_iotlb tap vfio_pci vfio_pci_core

[ 21.203840] irqbypass vfio_iommu_type1 vfio iommufd drm efi_pstore dmi_sysfs ip_tables x_tables autofs4 xfs btrfs blake2b_generic xor raid6_pq simplefb dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c xhci_pci xhci_pci_renesas nvme crc32_pclmul thunderbolt xhci_hcd ahci nvme_core i2c_piix4 igc libahci nvme_common video wmi

The IOMMU and vfio modules are loaded.


In the BIOS there are no mentions of IOMMU or VT-d. The CPU handles it, but the BIOS doesn't seem to have any setup options for it.

I have one idea: check that the cmdline was actually applied to the kernel (see the sketch below).
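For reference, a quick way to do that check (a sketch; /proc/cmdline shows what the running kernel was actually booted with):

Code:
# what the kernel actually received at boot
cat /proc/cmdline
# what GRUB was configured to pass
grep CMDLINE /etc/default/grub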
 
Since you have nothing to lose, have you tried Step 3a from the link I posted?

Step 3a: Enable IOMMU using systemd


My BIOS is pretty much empty, I see no VT-d or IOMMU option either; it is a crap BIOS, but it is what it is for this price :\
 
After testing, Proxmox does not recognize my iGPU either. I have no idea why; IOMMU works, but I can't pass it through. :(

Since I use DSM, I tried to install that bare metal, but after booting it restarts, probably a kernel panic, and I'm not good enough to find out why. I will wait a little bit more and then send it back, since it is useless for the average user. :(
 
I tried passthrough on my previous motherboard, a Minisforum BD770i. It works only after I dumped the vBIOS from the audio device and the iGPU; without that, the host rebooted as soon as a VM with passthrough started.

I needed to build a small C++ video BIOS dumper utility; it dumped the vBIOS from my board. Dumps from other motherboards or iGPUs were not compatible.
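For reference, a sketch of how a dumped vBIOS can be attached to a Proxmox VM (the VM ID 100, the PCI address and the file name are only placeholders; look up your own address with lspci, and note that romfile paths are resolved relative to /usr/share/kvm/):

Code:
# put the dumped ROM where QEMU looks for ROM files
cp vbios_dump.rom /usr/share/kvm/igpu-vbios.rom
# attach the iGPU to VM 100 and point it at the dumped ROM
qm set 100 -hostpci0 0000:05:00.0,pcie=1,romfile=igpu-vbios.rom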
 
Since you have nothing to lose, have you tried Step 3a from the link I posted?

Step 3a: Enable IOMMU using systemd


My BIOS is pretty much empty, I see no VT-d or IOMMU option either; it is a crap BIOS, but it is what it is for this price :\
I tried this; it doesn't work. A simple check is the boot screen colour: GRUB is blue, systemd-boot is black. Mine is blue ))

I'm waiting for a fresh Proxmox release to reinstall my current setup. Right now I'm stuck without IOMMU and waiting for a rack case to finally move it from my desk to the rack.
 
An interesting motherboard. Yeah, the iGPU is a pain to pass through.

Yeah, it turned out I need Phoenix1, which is 05:00 :) now it works.
Did you get it all working? I noticed on your other thread, your SATA controller is 04:00 from lspci.
I have low hopes for the AMD iGPU. Better to pass through a proper one.

 
Yes, it is working pretty well since I found out how to properly pass it through. :)

It was a test server, so I did a few clean Proxmox installs, and it turned out it was really easy if you know what to do.
You don't have to install custom drivers, and by default IOMMU is enabled for AMD (the drivers are preinstalled).
BUT somehow, at some point of testing, I lost the /dev/dri/renderD128 device node, and this is bad: if you don't have that node you have to get it back somehow, because it is how the AMD Ryzen iGPU is exposed to userspace, and it is what you need to pass through in the LXC container config file.
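A quick way to check whether the render node is present (a sketch; the exact card/render numbering can vary):

Code:
# the iGPU should show up as a card* and a renderD* device node
ls -l /dev/dri
# typically: card0  renderD128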

But, because I use other PCI devices (a Coral TPU), I did this:
nano /etc/default/grub
Code:
GRUB_CMDLINE_LINUX="quiet amd_iommu=on iommu=pt"
update-grub

nano /etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Restart and see if IOMMU still works:
dmesg | grep -e DMAR -e IOMMU

It should show something like this:
(screenshot: dmesg output with the AMD-Vi / IOMMU messages)


The easiest is if you use a VM, because you just have to give the VM the Phoenix1 PCI device (see the sketch below).
If you use an LXC, and I strongly suggest you do, you can share the GPU across as many LXCs as you like instead of limiting it to just one VM.
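For the VM case, a minimal sketch (VM ID 100 and the 05:00.0 address are only examples; look up your own address with lspci first):

Code:
# find the iGPU (Phoenix1) address
lspci | grep -i vga
# hand the whole PCI device to VM 100
qm set 100 -hostpci0 0000:05:00.0,pcie=1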

To install Frigate in Docker inside an LXC, I followed this guide.
When you reach "Mapping through the USB Coral TPU and Hardware Acceleration", edit the container config:
nano /etc/pve/lxc/XXX.conf
Add these lines to the end; the first is for the AMD Ryzen 780M iGPU and the second is for the M.2 Coral TPU.
Code:
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file 0 0
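Depending on the container settings, the device cgroup may also have to allow those nodes; a sketch (226:0 and 226:128 are the usual major:minor numbers for card0/renderD128, while the Coral's numbers have to be read from ls -l /dev/apex_0 on your own host):

Code:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm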

This is the compose file I use; ignore the Intel part, it works for AMD too! (Devices: apex_0 = Coral TPU, renderD128 = AMD iGPU)
YAML:
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "256mb" # update for your cameras based on calculation above
    devices:
      - /dev/apex_0:/dev/apex_0
      - /dev/dri/renderD128:/dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime
      - /opt/frigate/config:/config
      - /mnt/pve/Surveillance:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1073741824
    ports:
      - "5000:5000"
      - "1935:1935" # RTMP feeds
    environment:
      FRIGATE_RTSP_PASSWORD: "password"
      NVIDIA_VISIBLE_DEVICES: void
      LIBVA_DRIVER_NAME: radeonsi


And this was my starter frigate config for Annke C800, to see if iGPU works:
YAML:
mqtt:
  enabled: false
ffmpeg:
  hwaccel_args: preset-vaapi
detectors:
  coral:
    type: edgetpu

#Global Object Settings
objects:
  track:
    - person
cameras:
  annkec800: # <------ Name the camera
    ffmpeg:
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac

      inputs:
        - path: rtsp://admin:password@192.168.0.200:554/H264/ch1/main/av_stream # <----- Update for your camera
          roles:
            - detect
            - record


And here is a pic in Frigate showing the iGPU:
(screenshot: Frigate's system page showing the AMD iGPU in use)
 
AMD-V is for running virtual machines (like Intel VT-x) and AMD-Vi is for PCI(e) passthrough (like VT-d). Proxmox enables amd_iommu by default (and has for a long time on AMD) but it needs to be enabled in the motherboard BIOS (setting IOMMU to Auto is not always enough). You don't need to do anything in Proxmox only in the BIOS.

What is this AMD-VT you are talking about? And what makes you think IOMMU is not enabled or working on your AMD system? Which motherboard Ryzen chipset?
I have it on my GMKtec K8 with AMI BIOS version 1.0.7, where this option is called SVM. I set it to Auto and configured video playback in the virtual machine, but it is strange that when checking amd_iommu and AMD-Vi it reports an unknown option "on".
 
I don't understand, sorry. How did you check and what did the actual message look like?
Maybe you can simply check for IOMMU/AMD-Vi by looking at your IOMMU groups in Proxmox: https://pve.proxmox.com/wiki/PCI_Passthrough#Verify_IOMMU_isolation
Code:
 dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.022887] AMD-Vi: Unknown option - 'on'
[    0.065784] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR0, rdevid:160
[    0.065786] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR1, rdevid:160
[    0.065786] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR2, rdevid:160
[    0.065787] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR3, rdevid:160
[    0.065787] AMD-Vi: Using global IVHD EFR:0x246577efa2054ada, EFR2:0x0
[    0.591824] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.592922] AMD-Vi: Extended features (0x246577efa2054ada, 0x0): PPR NX GT IA GA PC
[    0.592928] AMD-Vi: Interrupt remapping enabled
[    0.594247] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
 
Code:
 dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.022887] AMD-Vi: Unknown option - 'on'
[    0.065784] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR0, rdevid:160
[    0.065786] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR1, rdevid:160
[    0.065786] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR2, rdevid:160
[    0.065787] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR3, rdevid:160
[    0.065787] AMD-Vi: Using global IVHD EFR:0x246577efa2054ada, EFR2:0x0
[    0.591824] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.592922] AMD-Vi: Extended features (0x246577efa2054ada, 0x0): PPR NX GT IA GA PC
[    0.592928] AMD-Vi: Interrupt remapping enabled
[    0.594247] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
on is indeed an invalid option for amd_iommu, because it is on by default: https://www.kernel.org/doc/html/v6.8/admin-guide/kernel-parameters.html. You can safely remove amd_iommu=on.

I don't think that the test you used is a good one for determining whether IOMMU is enabled for AMD (and I know it's on the Wiki but it's wrong for AMD).
Maybe use dmesg | grep iommu instead and see if you see lines like "Adding to iommu group".
Or simply check if you have multiple groups, as described on the Wiki (as I linked to before), in the Proxmox web GUI when adding a (raw) PCI(e) device to a VM, or with the sketch below.
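A sketch of that groups check from the shell, assuming the standard sysfs layout:

Code:
# list every PCI device together with its IOMMU group
for g in /sys/kernel/iommu_groups/*; do
  echo "group ${g##*/}:"
  ls "$g/devices"
done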
 
