TrueNAS Scale VM problems with PCIe NVMe passthrough

Tijay99

New Member
May 25, 2024
I am having problems with my TrueNAS VM: the host loses its connection when I start the VM with my NVMe SSD passed through to it.
The VM runs fine if I don't pass through the SSD. IOMMU is active and no other VMs are running or taking resources.

I am very new to working with Linux and VMs, so it might be a simple oversight on my part, but I am not able to figure out what the problem is or why it only shows up when the NVMe is passed through.
If anyone has an idea what to check or if you need more information, then please let me know.

Specs of the system:
- AMD Ryzen 5 5500GT
- MSI MPG B550 Gaming Plus
- 16 GB RAM
- 2x 1 TB SATA SSD (passed through to the VM)
- 1x 1 TB KIOXIA EXCERIA NVMe
- 1x 512 GB Samsung NVMe (boot drive for Proxmox)
 
I am having problems with my TrueNAS VM: the host loses its connection when I start the VM with my NVMe SSD passed through to it.
Check your IOMMU groups: https://pve.proxmox.com/wiki/PCI_Passthrough#Verify_IOMMU_isolation
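One common way to produce that table from the shell is a short sysfs walk (a sketch, assuming the standard `/sys/kernel/iommu_groups` layout and `lspci` from pciutils):

```shell
#!/bin/sh
# Print every PCI device grouped by IOMMU group so shared groups stand out.
base=/sys/kernel/iommu_groups
if [ -d "$base" ] && [ -n "$(ls -A "$base" 2>/dev/null)" ]; then
    for link in "$base"/*/devices/*; do
        if [ -e "$link" ]; then
            group=${link#"$base"/}   # strip the sysfs prefix ...
            group=${group%%/*}       # ... leaving just the group number
            addr=$(basename "$link")
            printf 'group %s: %s\n' "$group" "$(lspci -nns "$addr" 2>/dev/null)"
        fi
    done | sort -V
else
    echo "No IOMMU groups found: check that IOMMU is enabled in BIOS/UEFI and the kernel."
fi
```

Devices that print with the same group number can only be passed through together.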
- AMD Ryzen 5 5500GT
- MSI MPG B550 Gaming Plus
- 2x 1 TB SATA SSD (passed through to the VM)
Any Ryzen motherboard (except X570) can only pass through a single x16 PCIe slot and one x4 M.2 slot (both connected to the CPU instead of the chipset).
 
Thanks for the quick reply.

I have checked this and the table I get looks fine to me, but I also don't really know what to look for.

[Screenshot: IOMMU group listing]
Any Ryzen motherboard (except X570) can only pass through a single x16 PCIe slot and one x4 M.2 slot (both connected to the CPU instead of the chipset).
I think this shouldn't matter, since the SSDs are passed through from the motherboard SATA controller, or have I reached this limit by doing so?
I used this command to do so:

https://youtu.be/M3pKprTdNqQ?list=PLQS-uPhGcHwfyYMQObGgmmB9DCk7OkzYM&t=726
 
I think this shouldn't matter, since the SSDs are passed through from the motherboard SATA controller, or have I reached this limit by doing so?
Sorry, but I don't want to watch YouTube for this. All of this does matter if you pass through the SATA controller. If you don't use PCI(e) passthrough, then everything I'm saying does not matter.
I have checked this and the table I get looks fine to me, but I also don't really know what to look for.

[Attachment: IOMMU group listing]
Your SATA controller shares group 8 with the network controller and other devices (as you can see from the table). You cannot securely share devices from the same IOMMU group between VMs and/or the Proxmox host. The host loses all devices in group 8 (NVMe, network, USB) as soon as you start the VM with the SATA controller.
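To see this directly, the members of a single group can be listed from sysfs (a sketch; group 8 is taken from the screenshot in this thread and will differ per board):

```shell
#!/bin/sh
# Show what shares one IOMMU group. Everything listed here moves to the VM
# together when any one member is passed through.
g=8
for dev in /sys/kernel/iommu_groups/"$g"/devices/*; do
    if [ -e "$dev" ]; then
        lspci -nns "$(basename "$dev")"
    fi
done
```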
 
I am having problems with my TrueNAS VM: the host loses its connection when I start the VM with my NVMe SSD passed through to it.
One NVMe device is also in group 8.
- 2x 1 TB SATA SSD (passed through to the VM)
I think this shouldn't matter, since the SSDs are passed through from the motherboard SATA controller, or have I reached this limit by doing so?
It's confusing to me that you talk about an NVMe SSD being passed through and then later talk about only SATA SSD passthrough. Which is it? Not every SSD is NVMe, but every NVMe drive is a PCIe device. Passthrough of one NVMe device (probably the one in IOMMU group 8), passthrough of the SATA controller, or disk passthrough without PCI(e) passthrough?
 
One NVMe device is also in group 8.
This should be the NVMe SSD I am trying to pass through to the VM. But if I try to add it to the VM (as shown in the picture below), then the host loses connection. Is this because the Ethernet controller is also in group 8 and therefore "lost" to the VM?

[Screenshot: adding the NVMe as a PCI device to the VM]
It's confusing to me that you talk about an NVMe SSD being passed through and then later talk about only SATA SSD passthrough. Which is it? Not every SSD is NVMe, but every NVMe drive is a PCIe device. Passthrough of one NVMe device (probably the one in IOMMU group 8), passthrough of the SATA controller, or disk passthrough without PCI(e) passthrough?
I know the difference between SSDs. I am currently trying to pass an NVMe SSD to the VM. I have already passed two SATA SSDs to the VM. Those two are connected to the motherboard SATA ports, so they don't connect to PCIe in any way.
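For reference on the distinction being drawn here, the two approaches look quite different in a Proxmox VM config. A hypothetical fragment of `/etc/pve/qemu-server/100.conf` (the VM ID, disk serial, and PCI address are placeholders):

```
# Disk passthrough: one block device handed to the VM as a SCSI disk.
# No PCI(e)/IOMMU involvement, so IOMMU groups don't matter here.
scsi1: /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL

# PCI(e) passthrough: the whole NVMe controller at an example address.
# This is what drags every other device in the same IOMMU group along.
hostpci0: 0000:04:00.0,pcie=1
```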
 
This should be the NVMe SSD I am trying to pass through to the VM. But if I try to add it to the VM (as shown in the picture below), then the host loses connection. Is this because the Ethernet controller is also in group 8 and therefore "lost" to the VM?
Yes, that's what I'm saying. Swapping the two NVMe drives will probably fix this particular issue, as the other M.2 slot is in another IOMMU group.
[Attachment: adding the NVMe as a PCI device to the VM]

I know the difference between SSDs. I am currently trying to pass an NVMe SSD to the VM. I have already passed two SATA SSDs to the VM. Those two are connected to the motherboard SATA ports, so they don't connect to PCIe in any way.
I did not realize you were passing three drives, sorry. It was not clear to me whether you used disk passthrough of the drives or PCI(e) passthrough of the SATA controller. It's obvious now that it's the former, otherwise you would already have encountered this problem, because the SATA controller is also in group 8.

You have selected a motherboard that is very limited for PCI(e) passthrough, which is not an uncommon situation on this forum. I would suggest simply using virtual disks for your VM instead of passthrough; that also makes backups much easier and more reliable. Or maybe Proxmox (as a clustered enterprise hypervisor) is not the best fit for your use case, and Unraid (which ignores IOMMU groups and does not securely isolate VMs) or another Linux with virsh might work better or be easier.
 
Yes, that's what I'm saying. Swapping the two NVMe drives will probably fix this particular issue, as the other M.2 slot is in another IOMMU group.

I did not realize you were passing three drives, sorry. It was not clear to me whether you used disk passthrough of the drives or PCI(e) passthrough of the SATA controller. It's obvious now that it's the former, otherwise you would already have encountered this problem, because the SATA controller is also in group 8.

You have selected a motherboard that is very limited for PCI(e) passthrough, which is not an uncommon situation on this forum. I would suggest simply using virtual disks for your VM instead of passthrough; that also makes backups much easier and more reliable. Or maybe Proxmox (as a clustered enterprise hypervisor) is not the best fit for your use case, and Unraid (which ignores IOMMU groups and does not securely isolate VMs) or another Linux with virsh might work better or be easier.
I appreciate the quick help; now I at least know what the problem is. Maybe I will search for a fix or, as you said, switch from Proxmox to something else.
I will tinker a bit more and then see.

Have a good weekend.
 
Small final update on the issue: I fixed the problem by swapping the SSDs on the motherboard, so the NVMe I wanted to pass through ended up in IOMMU group 9, which doesn't include any other devices.

This fixed the issue I had with the system.
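A quick way to confirm a fix like this (a sketch; group 9 comes from this thread and will differ per board) is to count the members of the NVMe's new group:

```shell
#!/bin/sh
# Sanity check after reshuffling drives: count the devices in the NVMe's
# new IOMMU group. A count of 1 means nothing else tags along on passthrough.
g=9
count=$(ls /sys/kernel/iommu_groups/"$g"/devices 2>/dev/null | wc -l)
echo "devices in IOMMU group $g: $count"
```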
 
