[SOLVED] No connection after adding SATA controller to VM

AkiVonAkira
Sep 30, 2023
Hello, I'm new to Proxmox and was setting up a TrueNAS Scale VM, passing through my SATA controller. After a reboot, I could no longer reach Proxmox from my router, the web UI, or SSH.
I have tried numerous fixes from popular search results for similar issues, but none of them worked. I have also reinstalled PVE from scratch more than five times, yet the same issue persists. (I used to have IOMMU issues because I was missing a single BIOS setting, but that seems to work now(?).)

The rig was built with leftover parts from upgrading my gaming PC.
CPU: Ryzen 5 3600
MOBO: ASRock B550 Steel Legend
 

Attachments

  • 20231006_181522.jpg
See the thread from yesterday here:
https://forum.proxmox.com/threads/lost-network-after-nvme-hba-install.134500/

If it's not a change of the NIC name, then check your IOMMU groups. You can only pass through whole IOMMU groups, not single devices. If your SATA controller shares an IOMMU group with the NIC, you are probably passing the NIC and every other device in that group into the VM along with the SATA controller, so your PVE host then loses the NIC.

Run this to verify that the SATA controller is the only device in its IOMMU group:
Code:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU Group %s ' "$n"; lspci -nns "${d##*/}"; done;
 
I don't seem to have a NIC name to begin with.
When trying to start the VM, I think it's the IOMMU groups; could you point me in the right direction?
I have followed this for IOMMU.
It is enabled in the BIOS, along with ACS and all the virtualization options.

Also, when I run
Code:
# pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
I can't scroll up to read the whole output ;(
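A general shell workaround for output that scrolls past the screen (nothing pvesh-specific) is to pipe it through a pager or save it to a file:

```
# Page through the output; press q to quit. -S chops long lines instead of wrapping them:
pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist "" | less -S

# Or write it to a file you can read later or attach here:
pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist "" > pci.txt
```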

20231006_190937.jpg
 
What's the output of:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU Group %s ' "$n"; lspci -nns "${d##*/}"; done;
 
I went to eat, then ran the command, but it didn't work. You were missing a do before n=${n%%/*}.
Anyway, here's part of the result. I can't seem to scroll with Shift+Page Up or anything, so: 20231006_194839.jpg 20231006_195201.jpg
 
I can't see the IOMMU group of your NIC. But just passing through the SATA controller won't work in any case, as at least the USB controller is in the same IOMMU group (assuming I read the table correctly and the IOMMU group is 15).
 
That is probably the case. How can I remove the hardware from the VM then? Or is there a way to separate them? ACS is enabled (before enabling it, there were like 10 different devices all in group 0).
 
If acs_override isn't helping, I guess you are out of luck.
Then your best bet would probably be to buy an HBA card and put it in one of the primary or secondary x16 slots that are directly connected to the CPU (and not to the chipset).
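For reference, the ACS override mentioned here is usually enabled via the kernel command line; this is a sketch assuming GRUB on an AMD system (the Proxmox kernel ships the required ACS override patch, but the exact parameter list depends on your setup):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"
```

Then apply with update-grub and reboot. Keep in mind the override only pretends devices are isolated rather than adding real isolation, which is why it's considered a last resort.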
 
ACS override did not work. :(

I could replace my motherboard with something with better server capabilities? I still have two or so weeks left on my return window. It needs to be mATX to fit in my Node 804.
Alternatively, what HBA cards could you recommend?
Thanks for the help so far, though.
 
I could replace my motherboard with something with better server capabilities?
Consumer hardware usually isn't great for stuff like that. People who care about virtualization features usually get enterprise hardware with Xeon/Epyc CPUs that is made for such features.
Alternatively, what HBA cards could you recommend?
Homelabbers usually prefer used, rebranded LSI HBAs: https://www.truenas.com/community/r...9300-9305-9311-9400-94xx-hba-and-variants.54/

I personally use three cross-flashed LSI SAS2008 cards.
 
Consumer hardware usually isn't great for stuff like that. People who care about virtualization features usually get enterprise hardware with Xeon/Epyc CPUs that is made for such features.
I am aware; however, I had hardware lying around (CPU, RAM, cooler, PSU), and I want to learn until I can justify buying a 'proper' server PC (also some space constraints, and I want a rack, haha).

But ~$25 for an LSI 9211-8i doesn't look half bad(?). I'm not the biggest fan of this solution, however, because I didn't really solve anything, but if I can finally get my server up and running, I'll be happy...
 
I am aware; however, I had hardware lying around (CPU, RAM, cooler, PSU), and I want to learn until I can justify buying a 'proper' server PC (also some space constraints, and I want a rack, haha).

But ~$25 for an LSI 9211-8i doesn't look half bad(?). I'm not the biggest fan of this solution, however, because I didn't really solve anything, but if I can finally get my server up and running, I'll be happy...
First, I would verify with any PCIe card you have lying around that you can really pass through that PCIe slot, i.e. that the device in it gets its own dedicated IOMMU group.
 
First, I would verify with any PCIe card you have lying around that you can really pass through that PCIe slot, i.e. that the device in it gets its own dedicated IOMMU group.
1696718561479.png
PCIe passthrough works (GPU), so I ordered the HBA card; it should arrive in 2-3 weeks... Thanks for the help! @Dunuin
 
So, as already guessed, the SATA controller, NIC, and USB controller all share the same IOMMU group (15).
 
1698283098532.png

You've got to be kidding me. Why is it using the same group when the GPU is not? Just wasted 3 weeks and money.
 
As I already said, first verify that the PCIe slot you want to use gets its own IOMMU group.
You probably didn't put the HBA in a PCIe slot that is connected to your CPU (usually only the x16 slots on consumer mainboards); the other slots are connected to the mainboard's chipset and therefore share an IOMMU group with the other onboard devices hanging off the chipset.

So I would try to put it in another PCIe slot.
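To check a single slot quickly, here is a small hypothetical helper built on the same sysfs layout the one-liner above walks; the function name and the IOMMU_BASE override (handy for a dry run) are my own additions for illustration:

```shell
# iommu_peers: list every device sharing the IOMMU group of the given
# PCI address (e.g. 0000:03:00.0). A passthrough-safe slot shows only
# the device itself (plus its own sub-functions).
iommu_peers() {
  base="${IOMMU_BASE:-/sys/kernel/iommu_groups}"
  for g in "$base"/*/devices/"$1"; do
    [ -e "$g" ] || continue        # address not found in any group
    grp="${g%/devices/*}"          # .../iommu_groups/<n>
    echo "IOMMU group ${grp##*/}:"
    for d in "$grp"/devices/*; do
      echo "  ${d##*/}"            # peer PCI addresses in the group
    done
  done
}
```

Run it as `iommu_peers 0000:03:00.0` (substituting the address `lspci` shows for the HBA); if anything besides the card itself is listed, that slot hangs off the chipset.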
 
I swapped the GPU and HBA card, and it's as you told me: the HBA is in its own group and the GPU is on the chipset.
But yeah, it was my fault for not checking properly and assuming that because the GPU had its own group, the HBA card would too. I'm going to return the motherboard and case and go for X570; I'm still researching exactly which board, but from some reading around, it seems to have better groupings.

I'm going for the SilverStone CS380, and I think I'll be happier with it going forward, too; it looks nicer.
 
Hi, I'm a bit late to the party, with maybe not a useful question, anyway :p
new to Proxmox and was setting up a TrueNAS Scale VM
Not so new anymore in the meantime :)

With TrueNAS Scale running Linux, why would you run a VM when a container works perfectly well with fewer resources and higher performance? You could do away with passthrough and offer block devices directly to the container.
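For what it's worth, a raw disk can be handed to an LXC container on Proxmox through the container's config file. A sketch, assuming container ID 101 and a SATA disk at /dev/sdb (major number 8 covers the sd* block devices); adjust both to your system:

```
# /etc/pve/lxc/101.conf
lxc.cgroup2.devices.allow: b 8:* rwm
lxc.mount.entry: /dev/sdb dev/sdb none bind,optional,create=file
```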
 
