GPU Passthrough

Indirectelex

May 26, 2021
Hey you!
So I know how to pass through a PCI device like the LSI SAS3008 PCI-Express Fusion-MPT SAS-3; now I wonder whether an NVIDIA Quadro K420 is even compatible with Proxmox?

# lspci -nnk
When I run that command, I get Kernel driver in use: vfio-pci for both the VGA and audio devices. But when I attach the devices to my VM, Proxmox freezes...
 
Check your IOMMU groups. Devices in the same group cannot be split between VMs, or between a VM and the host, so they are all passed through together (or at least the host loses access to them). This usually happens when your GPU is in a PCIe slot that comes off the motherboard chipset and is therefore in the same group as the disk and network controllers (and often some USB too), which causes Proxmox to lose its root disk and network and to become unreachable and/or crash.
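If you only want to know which group one specific device landed in, sysfs answers directly; a quick sketch (the address 0000:25:00.0 is just an example taken from further down this thread, substitute your own):

```shell
# the iommu_group entry is a symlink whose target ends in the group number
readlink /sys/bus/pci/devices/0000:25:00.0/iommu_group
# list every device that shares that group
ls /sys/bus/pci/devices/0000:25:00.0/iommu_group/devices/
```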
 
Don't you mean GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"?

In /etc/modules (edited with nano) I put:
vfio
vfio_iommu_type1
vfio_pci ids=10de:0ff3,10de:0e1b
vfio_virqfd

I added the IDs of my GPU's VGA and audio devices.
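Side note for readers: on a stock Proxmox/Debian install, the ids= option is more commonly placed in a modprobe config file than appended on the /etc/modules line; roughly like this (the filename vfio.conf is only a convention, and the IDs are the ones quoted in this post):

```shell
# create /etc/modprobe.d/vfio.conf so vfio-pci claims the GPU functions at boot
cat > /etc/modprobe.d/vfio.conf <<'EOF'
options vfio-pci ids=10de:0ff3,10de:0e1b
EOF
# rebuild the initramfs so the option is applied early during boot
update-initramfs -u -k all
```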
 
Okay, I think you are right: my devices are all in the same group. My device is 25:00.0 and 25:00.1.
Code:
root@myhomelab:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/17/devices/0000:28:00.1
/sys/kernel/iommu_groups/7/devices/0000:00:07.0
/sys/kernel/iommu_groups/15/devices/0000:27:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:04.0
/sys/kernel/iommu_groups/13/devices/0000:03:00.0
/sys/kernel/iommu_groups/13/devices/0000:20:00.0
/sys/kernel/iommu_groups/13/devices/0000:25:00.0
/sys/kernel/iommu_groups/13/devices/0000:03:00.1
/sys/kernel/iommu_groups/13/devices/0000:25:00.1
/sys/kernel/iommu_groups/13/devices/0000:22:00.0
/sys/kernel/iommu_groups/13/devices/0000:20:01.0
/sys/kernel/iommu_groups/13/devices/0000:20:04.0
/sys/kernel/iommu_groups/13/devices/0000:03:00.2
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/11/devices/0000:00:14.3
/sys/kernel/iommu_groups/11/devices/0000:00:14.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.3
/sys/kernel/iommu_groups/18/devices/0000:28:00.3
/sys/kernel/iommu_groups/8/devices/0000:00:07.1
/sys/kernel/iommu_groups/16/devices/0000:28:00.0
/sys/kernel/iommu_groups/6/devices/0000:00:05.0
/sys/kernel/iommu_groups/14/devices/0000:26:00.0
/sys/kernel/iommu_groups/4/devices/0000:00:03.1
/sys/kernel/iommu_groups/12/devices/0000:00:18.3
/sys/kernel/iommu_groups/12/devices/0000:00:18.1
/sys/kernel/iommu_groups/12/devices/0000:00:18.6
/sys/kernel/iommu_groups/12/devices/0000:00:18.4
/sys/kernel/iommu_groups/12/devices/0000:00:18.2
/sys/kernel/iommu_groups/12/devices/0000:00:18.0
/sys/kernel/iommu_groups/12/devices/0000:00:18.7
/sys/kernel/iommu_groups/12/devices/0000:00:18.5
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/10/devices/0000:00:08.1
/sys/kernel/iommu_groups/0/devices/0000:00:01.0
/sys/kernel/iommu_groups/19/devices/0000:28:00.4
/sys/kernel/iommu_groups/9/devices/0000:00:08.0
 
I also want to ask: should I add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on", since that NVIDIA GPU apparently uses kernel modules from Intel?
Code:
25:00.0 VGA compatible controller: NVIDIA Corporation GK107GL [Quadro K420] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: NVIDIA Corporation GK107GL [Quadro K420]
        Flags: bus master, fast devsel, latency 0, IRQ 11, IOMMU group 13
        Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
        Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Memory at e0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at d000 [size=128]
        Expansion ROM at 000c0000 [disabled] [size=128K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [b4] Vendor Specific Information: Len=14 <?>
        Capabilities: [100] Virtual Channel
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [420] Advanced Error Reporting
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Kernel modules: nvidiafb, nouveau

25:00.1 Audio device: NVIDIA Corporation GK107 HDMI Audio Controller (rev a1)
        Subsystem: NVIDIA Corporation GK107 HDMI Audio Controller
        Flags: bus master, fast devsel, latency 0, IRQ 34, IOMMU group 13
        Memory at fc080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
 
There is never a need to add amd_iommu=on, because it is already on by default. Also, intel_iommu has nothing to do with snd_hda_intel.

Your GPU's two functions (25:00.0 and 25:00.1) are in the same IOMMU group (13) as several other devices that are needed by the Proxmox host. That is the cause of your freeze/crash problem.

The IOMMU groups are determined by your physical motherboard and your motherboard BIOS. You could try using another PCIe slot. You can get a nice overview of the groups with this command:
Code:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nnks "${d##*/}"; done
Can you tell us what motherboard you are using? Currently, it is not even clear whether you are on AMD or Intel. It is clear that PCI passthrough can work in general, so that's good.
 
Well, you should know I am on an AMD CPU, because I succeeded with PCI passthrough using the command line GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on". Anyway, thank you for your command; it showed me this:
IOMMU group 13 25:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [Quadro K420] [10de:0ff3] (rev a1)
Subsystem: NVIDIA Corporation GK107GL [Quadro K420] [10de:1162]
Kernel driver in use: nouveau
Kernel modules: nvidiafb, nouveau
IOMMU group 13 25:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)
Subsystem: NVIDIA Corporation GK107 HDMI Audio Controller [10de:1162]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
I don't really know what to think about it for now, except that my GPU components are in the same IOMMU group.
 
The GPU is in IOMMU group 13 together with at least a network and a SATA controller (I presume, because you did not share all of the IOMMU group output). Devices in the same group cannot be split or shared between VMs, or between a VM and the Proxmox host, because of ACS security isolation. The groups are determined by your physical motherboard and its BIOS. When passing through the GPU, the Proxmox host loses its disks and network, which explains why it appears to freeze or crash (a rather common thing with AMD Ryzen on this forum).
Your NVIDIA Quadro K420 is not incompatible with Proxmox, but you need to put it in another PCIe slot, one where it ends up in an IOMMU group without other devices (except for some bridges). Without the make and model of your motherboard, I cannot advise a specific PCIe slot.
 
Really, that sounds like bad news to me, because I have a B450 Gaming Plus Max with only one PCI-E 2.0 x16 slot; my other PCI-E 3.0 x16 slot is taken by the LSI SAS HBA, and the rest are 4x PCI-E x1 slots. Do you think there is a way to do something from the BIOS?
 
So maybe this helps to understand why it appears to freeze:
Code:
root@myhomelab:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nnks "${d##*/}"; done
IOMMU group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 10 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
        Kernel driver in use: pcieport
IOMMU group 11 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
        Subsystem: Micro-Star International Co., Ltd. [MSI] FCH SMBus Controller [1462:7b86]
        Kernel driver in use: piix4_smbus
        Kernel modules: i2c_piix4, sp5100_tco
IOMMU group 11 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
        Subsystem: Micro-Star International Co., Ltd. [MSI] FCH LPC Bridge [1462:7b86]
IOMMU group 12 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0 [1022:1440]
IOMMU group 12 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1 [1022:1441]
IOMMU group 12 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2 [1022:1442]
IOMMU group 12 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3 [1022:1443]
        Kernel driver in use: k10temp
        Kernel modules: k10temp
IOMMU group 12 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4 [1022:1444]
IOMMU group 12 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5 [1022:1445]
IOMMU group 12 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6 [1022:1446]
IOMMU group 12 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7 [1022:1447]
IOMMU group 13 03:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset USB 3.1 XHCI Controller [1022:43d5] (rev 01)
        Subsystem: ASMedia Technology Inc. 400 Series Chipset USB 3.1 XHCI Controller [1b21:1142]
        Kernel driver in use: xhci_hcd
        Kernel modules: xhci_pci
IOMMU group 13 03:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller [1022:43c8] (rev 01)
        Subsystem: ASMedia Technology Inc. 400 Series Chipset SATA Controller [1b21:1062]
        Kernel driver in use: ahci
        Kernel modules: ahci
IOMMU group 13 03:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge [1022:43c6] (rev 01)
        Kernel driver in use: pcieport
IOMMU group 13 20:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
        Kernel driver in use: pcieport
IOMMU group 13 20:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
        Kernel driver in use: pcieport
IOMMU group 13 20:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
        Kernel driver in use: pcieport
IOMMU group 13 22:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
        Subsystem: Micro-Star International Co., Ltd. [MSI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [1462:7b86]
        Kernel driver in use: r8169
        Kernel modules: r8169
IOMMU group 13 25:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [Quadro K420] [10de:0ff3] (rev a1)
        Subsystem: NVIDIA Corporation GK107GL [Quadro K420] [10de:1162]
        Kernel driver in use: nouveau
        Kernel modules: nvidiafb, nouveau
IOMMU group 13 25:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)
        Subsystem: NVIDIA Corporation GK107 HDMI Audio Controller [10de:1162]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
IOMMU group 14 26:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)
        Subsystem: Broadcom / LSI SAS9300-8i [1000:30e0]
        Kernel driver in use: vfio-pci
        Kernel modules: mpt3sas
IOMMU group 15 27:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
        Subsystem: Micro-Star International Co., Ltd. [MSI] Starship/Matisse PCIe Dummy Function [1462:7b86]
IOMMU group 16 28:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
        Subsystem: Micro-Star International Co., Ltd. [MSI] Starship/Matisse Reserved SPP [1462:7b86]
IOMMU group 17 28:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
        Subsystem: Micro-Star International Co., Ltd. [MSI] Starship/Matisse Cryptographic Coprocessor PSPCPP [1462:7b86]
        Kernel driver in use: ccp
        Kernel modules: ccp
IOMMU group 18 28:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
        Subsystem: Micro-Star International Co., Ltd. [MSI] Matisse USB 3.0 Host Controller [1462:7b86]
        Kernel driver in use: xhci_hcd
        Kernel modules: xhci_pci
IOMMU group 19 28:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
        Subsystem: Micro-Star International Co., Ltd. [MSI] Starship/Matisse HD Audio Controller [1462:cb86]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
IOMMU group 1 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
        Kernel driver in use: pcieport
IOMMU group 2 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 3 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 4 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
        Kernel driver in use: pcieport
IOMMU group 5 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 6 00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 7 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 8 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
        Kernel driver in use: pcieport
IOMMU group 9 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
 
As I said, your SATA, USB and network controllers are in the same group, and therefore cannot be split/shared between the Proxmox host and the VM. This is the reason your system freezes: it can no longer use the drives and the network, because they are passed to the VM together with the GPU. That's why you cannot reach the Proxmox host via the network, and it probably endlessly tries to write to the missing drives. Except for X570 boards, almost all Ryzen motherboards have this issue: most devices connect via the chipset.

If you are not passing through the LSI, then you could maybe swap them? The x16 slot closest to the CPU is actually in a separate IOMMU group. Note, though, that your PCIe 2.0 x16 slot is only wired for x4. Alternatively, buy a Ryzen motherboard that has two x16 slots, with either the second one wired for x8, or the second one wired for x4 and shared with the first M.2 controller (which is also in its own group). Or buy an X570 motherboard. Or force the system to ignore the IOMMU groups.

Proxmox has the ACS override patch included to "break up" these IOMMU groups. There is no guarantee that it will actually work, and it does guarantee that the security isolation between the devices currently in the same group is no longer enforced. So don't run any untrusted software or give untrusted users access! Try adding pcie_acs_override=downstream,multifunction to your kernel parameters (in the same place as amd_iommu=on, which you don't actually need).

EDIT: Note that on a motherboard with a second x8 slot, that slot is only in a separate IOMMU group if your Ryzen CPU has no integrated graphics (i.e. is not an APU).
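For readers following along, applying that kernel parameter on a GRUB-booted install looks roughly like this (the path is the Debian default; adapt if your system boots via systemd-boot):

```shell
# /etc/default/grub -- extend the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"
```

Then run update-grub and reboot; afterwards, cat /proc/cmdline should show the parameter.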
 
The GPU is a PCI Express 2.0 x16 card, and my board has one PCI-E 3.0 and one 2.0 slot, so technically I am supposed to be okay... You say the PCI-E 2.0 slot is only wired for x4, but just have a look at the box:
 

According to the Detail tab of the specifications, it is physically a 2.0 x16 slot, but it is only wired for x4 electrically, so you get only a quarter of the bandwidth. Sorry for not being clearer about this beforehand.
 
Ok, thanks. If the GPU is PCI-E 2.0 and the slot is also 2.0, I guess the GPU will work in that specific slot of the board, right? And then the GPU can't be passed through because of that IOMMU group #13, right? Have a good day.
 
PCIe version is unimportant because it is backwards compatible both ways. The number of lanes, x16 versus x4, is also generally not an issue; only the performance will be lower, which is sometimes not even noticeable in practice. EDIT: PCIe 3.0 is twice as fast as PCIe 2.0 per lane, and double the lanes also doubles the bandwidth.
The problem is that you want to use some of the devices in IOMMU group 13 for the Proxmox host and pass another device in the same group to the VM.
Any luck with using pcie_acs_override?
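To put rough numbers on that (the per-lane figures are approximate effective throughput after encoding overhead: about 500 MB/s per lane for PCIe 2.0 and about 985 MB/s for PCIe 3.0):

```shell
# approximate usable bandwidth for the slot configurations discussed above
echo "PCIe 2.0 x4:  $((500 * 4)) MB/s"    # prints 2000 MB/s (the chipset-wired slot)
echo "PCIe 2.0 x16: $((500 * 16)) MB/s"   # prints 8000 MB/s (a fully wired slot)
echo "PCIe 3.0 x16: $((985 * 16)) MB/s"   # prints 15760 MB/s (the CPU-attached slot)
```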
 
Hi all,

This is an old post, but since my equipment is also rather old, I'll give it a shot.

I've recently installed Proxmox on my QNAP TVS-473. All went well without problems.

My setup:

  • 4x 6TB HDD
  • 2x 1TB WD Blue SSD (SATA On-board slots)
I'm now at the point that I want to install TrueNAS Scale and pass through my 4x 6TB HDDs and 2x 1TB SSDs.
I have found the FCH SATA Controller and successfully passed it through to TrueNAS... more or less: the only disks I get in TrueNAS are the two 1TB SATA SSDs.

I have tried a lot of stuff, but either Proxmox crashes or simply nothing happens.
Here is some info I managed to get; perhaps it helps the community help me.
I've used the following commands; the output is in the Pastebin below.

https://pastebin.com/70EN7g96

  • pvesh get /nodes/QNAP-pve/hardware/pci --pci-class-blacklist ""
  • dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
  • lsmod | grep vfio
  • lsblk
  • lspci
Besides the FCH SATA Controller, I also have these:

0c:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)

0d:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)

0e:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)

03:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)

But trying to pass through any of the above crashes Proxmox, and I need to do a hard reset of my QNAP.

I have TrueNAS running, or rather set up, with UEFI. I found a post where this was the solution for someone... clearly not for me.

I know there's a possibility to pass through the individual disks, but it's not recommended. I know.
https://www.youtube.com/watch?v=2mvCaqra6qY
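For completeness, attaching individual disks is typically done with qm set, addressing the disk by its stable ID; the VM ID 100 and the disk name below are placeholders, not taken from this setup:

```shell
# attach a whole physical disk to VM 100 as its scsi1 device;
# /dev/disk/by-id paths stay stable across reboots, unlike /dev/sdX names
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD60EFRX-XXXXXXXX
```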

I also tried the override patch; this is my GRUB config:

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
GRUB_CMDLINE_LINUX=""

But, unfortunately, no change. :/ Do I need to do some more tweaking, or should I just accept that this isn't possible and look for other ways, such as setting up SMB in a Proxmox CT and sharing the disks that way?
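One thing worth double-checking at this point (a generic sketch, nothing board-specific): /etc/default/grub is sourced as a shell script, so only the last GRUB_CMDLINE_LINUX_DEFAULT assignment takes effect, and an edit does nothing until the boot configuration is regenerated:

```shell
update-grub   # or: proxmox-boot-tool refresh, if the system boots via systemd-boot
reboot
# after the reboot, confirm the parameters actually reached the running kernel:
cat /proc/cmdline
dmesg | grep -i -e iommu -e AMD-Vi
```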

I'd appreciate all the help I can get.
I'm not an expert; if more info is needed, let me know and I'll gather as much as possible.

Thank you in advance.
 
