Topton NAS motherboard N17 (not sure) with Ryzen 8845HS: issue with IOMMU and AMD-Vi

Code:
dmesg | grep iommu
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.8.12-2-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt
[    0.022850] Kernel command line: BOOT_IMAGE=/vmlinuz-6.8.12-2-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt
[    0.545194] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.591872] pci 0000:00:01.0: Adding to iommu group 0
[    0.591888] pci 0000:00:01.2: Adding to iommu group 1
[    0.591918] pci 0000:00:02.0: Adding to iommu group 2
[    0.591933] pci 0000:00:02.1: Adding to iommu group 3
[    0.591949] pci 0000:00:02.2: Adding to iommu group 4
[    0.591965] pci 0000:00:02.3: Adding to iommu group 5
[    0.591980] pci 0000:00:02.4: Adding to iommu group 6
[    0.592009] pci 0000:00:03.0: Adding to iommu group 7
[    0.592026] pci 0000:00:03.1: Adding to iommu group 7
[    0.592045] pci 0000:00:04.0: Adding to iommu group 8
[    0.592081] pci 0000:00:08.0: Adding to iommu group 9
[    0.592097] pci 0000:00:08.1: Adding to iommu group 10
[    0.592113] pci 0000:00:08.2: Adding to iommu group 11
[    0.592129] pci 0000:00:08.3: Adding to iommu group 12
[    0.592155] pci 0000:00:14.0: Adding to iommu group 13
[    0.592169] pci 0000:00:14.3: Adding to iommu group 13
[    0.592234] pci 0000:00:18.0: Adding to iommu group 14
[    0.592248] pci 0000:00:18.1: Adding to iommu group 14
[    0.592262] pci 0000:00:18.2: Adding to iommu group 14
[    0.592276] pci 0000:00:18.3: Adding to iommu group 14
[    0.592289] pci 0000:00:18.4: Adding to iommu group 14
[    0.592303] pci 0000:00:18.5: Adding to iommu group 14
[    0.592317] pci 0000:00:18.6: Adding to iommu group 14
[    0.592331] pci 0000:00:18.7: Adding to iommu group 14
[    0.592351] pci 0000:01:00.0: Adding to iommu group 15
[    0.592367] pci 0000:02:00.0: Adding to iommu group 16
[    0.592383] pci 0000:03:00.0: Adding to iommu group 17
[    0.592398] pci 0000:04:00.0: Adding to iommu group 18
[    0.592414] pci 0000:05:00.0: Adding to iommu group 19
[    0.592442] pci 0000:66:00.0: Adding to iommu group 20
[    0.592460] pci 0000:66:00.1: Adding to iommu group 21
[    0.592477] pci 0000:66:00.2: Adding to iommu group 22
[    0.592497] pci 0000:66:00.3: Adding to iommu group 23
[    0.592514] pci 0000:66:00.4: Adding to iommu group 24
[    0.592531] pci 0000:66:00.6: Adding to iommu group 25
[    0.592550] pci 0000:67:00.0: Adding to iommu group 26
[    0.592568] pci 0000:67:00.1: Adding to iommu group 27
[    0.592587] pci 0000:68:00.0: Adding to iommu group 28
[    0.592605] pci 0000:68:00.3: Adding to iommu group 29
[    0.592622] pci 0000:68:00.4: Adding to iommu group 30
[    0.592640] pci 0000:68:00.5: Adding to iommu group 31
[    0.594247] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
 
Hi, I can't tell whether IOMMU works properly on this AMD Ryzen 7 8845HS 9-bay NAS motherboard.
I want to install an additional card in the PCIe slot and pass it through to a VM.
I would like to know before spending over €500 on it :p
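
A side note on the dmesg output above: several devices share an IOMMU group (00:03.0/00:03.1 in group 7, 00:14.0/00:14.3 in group 13, and all of 00:18.x in group 14). Devices in the same group can only be passed through to a guest together, so it's worth checking the grouping of the physical slot before buying a card. A rough sketch (my own helper, not from this thread) that spots shared groups in `dmesg | grep iommu` output:

```python
import re
from collections import defaultdict

def shared_iommu_groups(dmesg_text):
    """Map IOMMU group number -> PCI addresses, keeping only groups with >1 device."""
    groups = defaultdict(list)
    for m in re.finditer(r"pci (\S+): Adding to iommu group (\d+)", dmesg_text):
        groups[int(m.group(2))].append(m.group(1))
    return {g: devs for g, devs in groups.items() if len(devs) > 1}

# Three lines taken from the dmesg output quoted above
sample = """
[    0.592009] pci 0000:00:03.0: Adding to iommu group 7
[    0.592026] pci 0000:00:03.1: Adding to iommu group 7
[    0.592351] pci 0000:01:00.0: Adding to iommu group 15
"""
print(shared_iommu_groups(sample))  # only group 7 remains; group 15 has a single device
```

If the slot you want to use ends up in a group of its own (like 01:00.0 / group 15 above), passthrough of just that card is straightforward.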
 
Hi,

I'm new to this forum and purchased a similar mainboard for my new home lab with Proxmox. Comparing the board designs, I guess it's always the same manufacturer under different labels.
So, may I ask what kind of RAM you use with this mainboard?

TIA
 
Yes, it is working pretty well since I found out how to do the passthrough properly. :)

It was a test server, so I did a few clean Proxmox installs, and it turned out to be really easy if you know what to do.
You don't have to install custom drivers; IOMMU is enabled by default for AMD (the drivers are preinstalled).
BUT at some point during testing I lost the /dev/dri/renderD128 device node, and that is bad: if you don't have this node, you have to get it back somehow, because it is how the AMD Ryzen iGPU is exposed, and it's what you need to pass through in the LXC container config file.

But because I use other PCI devices (a Coral TPU), I did this:
nano /etc/default/grub
Code:
GRUB_CMDLINE_LINUX="quiet amd_iommu=on iommu=pt"
update-grub

nano /etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Restart and see if IOMMU still works:
dmesg | grep -e DMAR -e IOMMU

It should show AMD-Vi lines confirming the IOMMU is active (see the attached screenshot).


The easiest way is a VM, because you just hand the VM the Phoenix1 PCI device.
But I strongly suggest LXC, because you can share the GPU across any number of LXCs instead of limiting it to a single VM.

To install Frigate in Docker inside an LXC I followed this guide.
When you reach "Mapping through the USB Coral TPU and Hardware Acceleration":
nano /etc/pve/lxc/XXX.conf
Add these lines to the end; the first is for the AMD Radeon 780M iGPU and the second for the M.2 Coral TPU.
Code:
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/apex_0 dev/apex_0 none bind,optional,create=file 0 0
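
If you pass several device nodes this way, the entries all follow the same pattern: host path, container path without the leading slash, then `none bind,optional,create=file 0 0`. A tiny generator I'd use as a sanity check (a hypothetical helper of my own, not part of Proxmox or LXC):

```python
def lxc_mount_entry(dev_path):
    """Build an lxc.mount.entry line for a device node like /dev/dri/renderD128."""
    # The container target path has no leading slash; create=file creates the
    # node inside the container if it doesn't exist; bind mounts the host node.
    return (f"lxc.mount.entry: {dev_path} {dev_path.lstrip('/')} "
            f"none bind,optional,create=file 0 0")

for dev in ("/dev/dri/renderD128", "/dev/apex_0"):
    print(lxc_mount_entry(dev))
```

This prints exactly the two lines shown above, which makes it easy to add more devices without typos.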

This is the compose file I use; ignore the Intel comment, it works for AMD too! (Devices: apex_0 = Coral TPU, renderD128 = AMD iGPU)
YAML:
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "256mb" # update for your cameras based on calculation above
    devices:
      - /dev/apex_0:/dev/apex_0
      - /dev/dri/renderD128:/dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime
      - /opt/frigate/config:/config
      - /mnt/pve/Surveillance:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1073741824
    ports:
      - "5000:5000"
      - "1935:1935" # RTMP feeds
    environment:
      FRIGATE_RTSP_PASSWORD: "password"
      NVIDIA_VISIBLE_DEVICES: void
      LIBVA_DRIVER_NAME: radeonsi
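
One note on the compose file above: the tmpfs `size` is given in raw bytes, and 1073741824 is exactly 1 GiB (1024³). If you want a different cache size, the conversion is simple (a quick check of my own, not from the thread):

```python
def gib_to_bytes(gib):
    """Convert GiB to the raw byte count the compose tmpfs 'size' field expects."""
    return gib * 1024 ** 3

print(gib_to_bytes(1))  # the 1073741824 used above
print(gib_to_bytes(2))  # double the cache if your cameras need it
```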


And this was my starter frigate config for Annke C800, to see if iGPU works:
YAML:
mqtt:
  enabled: false
ffmpeg:
  hwaccel_args: preset-vaapi
detectors:
  coral:
    type: edgetpu

#Global Object Settings
objects:
  track:
    - person
cameras:
  annkec800: # <------ Name the camera
    ffmpeg:
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac

      inputs:
        - path: rtsp://admin:password@192.168.0.200:554/H264/ch1/main/av_stream # <----- Update for your camera
          roles:
            - detect
            - record


And here is a pic in Frigate showing the iGPU (see the attached screenshot).

This is nothing new... all credit goes to @Merwenus.
It's just my brain dump so far... anyway... open source! :)

ATM: Confirmed working.

My system:
AMD Ryzen 7 8845HS on Chinese NAS Board

Command line in /etc/default/grub
Code:
GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt"

update-grub

edit /etc/modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Looking at the modules above: these are the VFIO (Virtual Function I/O) modules, which are needed for passing devices through.

reboot

dmesg | grep -e DMAR -e IOMMU:
Code:
[    0.404847] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.407151] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Code:
[    0.107287] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR0, rdevid:160
[    0.107289] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR1, rdevid:160
[    0.107290] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR2, rdevid:160
[    0.107291] AMD-Vi: ivrs, add hid:AMDI0020, uid:\_SB.FUR3, rdevid:160
[    0.107292] AMD-Vi: Using global IVHD EFR:0x246577efa2054ada, EFR2:0x0
[    0.404847] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.406168] AMD-Vi: Extended features (0x246577efa2054ada, 0x0): PPR NX GT IA GA PC
[    0.406176] AMD-Vi: Interrupt remapping enabled
[    0.407151] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

dmesg|grep iommu
Code:
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.12-5-pve root=/dev/mapper/pve-root ro iommu=pt
[    0.039828] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.8.12-5-pve root=/dev/mapper/pve-root ro iommu=pt
[    0.367941] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.404903] pci 0000:00:01.0: Adding to iommu group 0
[    0.404920] pci 0000:00:01.2: Adding to iommu group 1
[    0.404938] pci 0000:00:01.3: Adding to iommu group 2
[    0.404956] pci 0000:00:01.4: Adding to iommu group 3
[    0.404994] pci 0000:00:02.0: Adding to iommu group 4
[    0.405013] pci 0000:00:02.1: Adding to iommu group 5
[    0.405031] pci 0000:00:02.2: Adding to iommu group 6
[    0.405049] pci 0000:00:02.3: Adding to iommu group 7
[    0.405068] pci 0000:00:02.4: Adding to iommu group 8
[    0.405087] pci 0000:00:02.5: Adding to iommu group 9
[    0.405121] pci 0000:00:03.0: Adding to iommu group 10
[    0.405140] pci 0000:00:03.1: Adding to iommu group 10
[    0.405174] pci 0000:00:04.0: Adding to iommu group 11
[    0.405193] pci 0000:00:04.1: Adding to iommu group 11
[    0.405222] pci 0000:00:08.0: Adding to iommu group 12
[    0.405239] pci 0000:00:08.1: Adding to iommu group 13
[    0.405257] pci 0000:00:08.2: Adding to iommu group 14
[    0.405275] pci 0000:00:08.3: Adding to iommu group 15
[    0.405305] pci 0000:00:14.0: Adding to iommu group 16
[    0.405322] pci 0000:00:14.3: Adding to iommu group 16
[    0.405399] pci 0000:00:18.0: Adding to iommu group 17
[    0.405416] pci 0000:00:18.1: Adding to iommu group 17
[    0.405433] pci 0000:00:18.2: Adding to iommu group 17
[    0.405450] pci 0000:00:18.3: Adding to iommu group 17
[    0.405467] pci 0000:00:18.4: Adding to iommu group 17
[    0.405484] pci 0000:00:18.5: Adding to iommu group 17
[    0.405501] pci 0000:00:18.6: Adding to iommu group 17
[    0.405518] pci 0000:00:18.7: Adding to iommu group 17
[    0.405540] pci 0000:02:00.0: Adding to iommu group 18
[    0.405557] pci 0000:03:00.0: Adding to iommu group 19
[    0.405576] pci 0000:04:00.0: Adding to iommu group 20
[    0.405595] pci 0000:05:00.0: Adding to iommu group 21
[    0.405614] pci 0000:06:00.0: Adding to iommu group 22
[    0.405632] pci 0000:07:00.0: Adding to iommu group 23
[    0.405650] pci 0000:08:00.0: Adding to iommu group 24
[    0.405681] pci 0000:c9:00.0: Adding to iommu group 25
[    0.405700] pci 0000:c9:00.2: Adding to iommu group 26
[    0.405722] pci 0000:c9:00.3: Adding to iommu group 27
[    0.405741] pci 0000:c9:00.4: Adding to iommu group 28
[    0.405760] pci 0000:c9:00.5: Adding to iommu group 29
[    0.405780] pci 0000:c9:00.6: Adding to iommu group 30
[    0.405801] pci 0000:ca:00.0: Adding to iommu group 31
[    0.405822] pci 0000:ca:00.1: Adding to iommu group 32
[    0.405842] pci 0000:cb:00.0: Adding to iommu group 33
[    0.405862] pci 0000:cb:00.3: Adding to iommu group 34
[    0.405881] pci 0000:cb:00.4: Adding to iommu group 35
[    0.405901] pci 0000:cb:00.5: Adding to iommu group 36
[    0.405920] pci 0000:cb:00.6: Adding to iommu group 37
[    0.407151] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

The main thing is:
If
Code:
dmesg|grep iommu
returns something like
Code:
[    0.367935] iommu: Default domain type: Translated
then there is no way to pass devices through to a VM or LXC container!
=>> "Translated" is the killer for any device passthrough!

What dmesg needs to show for passing devices through to a VM or an LXC container is something like this:
Code:
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.12-5-pve root=/dev/mapper/pve-root ro iommu=pt
[    0.039828] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.8.12-5-pve root=/dev/mapper/pve-root ro iommu=pt
[    0.367941] iommu: Default domain type: Passthrough (set via kernel command line)

The third line of the last code block is the magic one: "Passthrough" enables passing devices through to VMs and LXC containers. Literally... it sounds like magic! ;-)
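
The check above is easy to automate. A small sketch of my own (not from this thread), fed with the exact log lines quoted in this post:

```python
import re

def passthrough_ready(dmesg_text):
    """Return True if the default IOMMU domain type reported by the kernel is Passthrough."""
    m = re.search(r"iommu: Default domain type: (\w+)", dmesg_text)
    return bool(m) and m.group(1) == "Passthrough"

good = "[    0.367941] iommu: Default domain type: Passthrough (set via kernel command line)"
bad  = "[    0.367935] iommu: Default domain type: Translated"
print(passthrough_ready(good), passthrough_ready(bad))  # True False
```

On a live Proxmox host you would feed it the output of `dmesg | grep iommu` instead of the hard-coded strings.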
 
I found this link with some suggestions
https://github.com/theodric/kvm-vfio-notes

Among these I found this part useful:
Enable Above 4G decoding
Disable Resizable BAR
Enable SR-IOV support
Enable BME DMA Mitigation

Running the command # pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""

I get the attached list.
 


Here is the blacklist in a slightly more readable format. Just rename the file extension to ".html"; this is necessary due to the forum's file upload restrictions, where uploading HTML files is not allowed.
 


Here is some info for new owners of the Topton mainboard: mine was delivered with the SFF-8643 sockets only plugged in, not soldered! So none of the SATA ports worked.
Because I was focused on installing and configuring Proxmox itself and some virtual machines on the NVMe, I only discovered this failure while trying to add some SATA disks to OpenMediaVault running in a VM.
So, when you receive the board, check the solder joints.
 
Update: The SFF-8643 sockets are soldered to the mainboard, but the pins of the sockets are much too short for this thick mainboard; they can't reach the underside of the board, so they are not really fixed... just glued! Hence, if you pull the SFF-8643 plug out of the socket with just a little too much force, the socket is pulled off the mainboard. Keep this in mind when detaching this cable.
Hope it helps.
 
Support for the "ITE IT8613E Super I/O sensors".
The current PVE kernel has no driver for the Super I/O sensors of the Topton mainboard. There is a project on GitHub which adds a driver for them.
To compile the driver:
  1. apt update && apt install git sysfsutils pve-headers mokutil -y
  2. apt-get install lm-sensors read-edid i2c-tools
  3. Download / pull the driver package from GitHub
  4. make dkms
  5. To ensure the driver is loaded after a reboot, add it87 to /etc/modules
  6. sensors-detect
  7. sensors
Now you should see something like this:
Code:
it8613-isa-0a20
Adapter: ISA adapter
in0:         660.00 mV (min =  +1.35 V, max =  +2.65 V)  ALARM
in1:           1.18 V  (min =  +1.06 V, max =  +0.19 V)  ALARM
in2:           2.06 V  (min =  +2.52 V, max =  +1.69 V)  ALARM
in4:           2.07 V  (min =  +1.56 V, max =  +1.87 V)  ALARM
in5:           1.87 V  (min =  +1.78 V, max =  +0.40 V)  ALARM
3VSB:          3.37 V  (min =  +0.22 V, max =  +4.49 V)
Vbat:          3.28 V
+3.3V:         3.37 V
fan2:        1708 RPM  (min =   55 RPM)
fan3:        1073 RPM  (min =   11 RPM)
temp1:        +43.0°C  (low  =  +0.0°C, high = +100.0°C)
temp2:        +34.0°C  (low  =  +0.0°C, high = +100.0°C)  sensor = thermistor
temp3:       -128.0°C  (low  =  +0.0°C, high = +100.0°C)
intrusion0:  ALARM

k10temp-pci-00c3
Adapter: PCI adapter
Tctl:         +28.6°C

acpitz-acpi-0
Adapter: ACPI interface
temp1:        +20.0°C

amdgpu-pci-c900
Adapter: PCI adapter
vddgfx:      715.00 mV
vddnb:       760.00 mV
edge:         +26.0°C
PPT:           6.10 W  (avg =   4.05 W)

Without the it87 module, sensors would only show:
Code:
k10temp-pci-00c3
Adapter: PCI adapter
Tctl:         +28.9°C

acpitz-acpi-0
Adapter: ACPI interface
temp1:        +20.0°C

amdgpu-pci-c900
Adapter: PCI adapter
vddgfx:      715.00 mV
vddnb:       664.00 mV
edge:         +26.0°C
PPT:           4.13 W  (avg =   4.13 W)

Now it's possible to track and control the CPU temperature and fan speed.
Don't forget to recompile the driver after each kernel update.
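
If you want to log those values over time, the plain-text `sensors` output is easy to parse. A minimal sketch of my own, matching the field layout in the sample output above:

```python
import re

def parse_sensors(text):
    """Extract fan speeds (RPM) and temperatures (deg C) from `sensors` text output."""
    fans = {m.group(1): int(m.group(2))
            for m in re.finditer(r"^(fan\d+):\s+(\d+) RPM", text, re.M)}
    temps = {m.group(1): float(m.group(2))
             for m in re.finditer(r"^(temp\d+|Tctl|edge):\s+\+?(-?[\d.]+)°C", text, re.M)}
    return fans, temps

# Three lines taken from the sensors output quoted above
sample = """fan2:        1708 RPM  (min =   55 RPM)
temp1:        +43.0°C  (low  =  +0.0°C, high = +100.0°C)
Tctl:         +28.6°C
"""
fans, temps = parse_sensors(sample)
print(fans, temps)
```

On the host you would feed it the output of `subprocess.run(["sensors"], ...)` instead. (The -128.0°C on temp3 in the output above is typical of a sensor input with nothing connected, so a real logger should filter implausible values.)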

For those who want to integrate the sensor data into the PVE GUI overview dashboard: this patch is working fine on my system!

Hope it helps. :cool:
 