Loss of Connectivity After Adding PERC H730 Mini RAID Controller to TrueNAS VM

Raedm

New Member
Feb 16, 2024
I'm experiencing a significant issue with my Proxmox setup after adding a PERC H730 Mini RAID controller (in HBA mode) to my TrueNAS VM as a passed-through PCI device. Here is a brief overview of my current infrastructure and the problem I'm facing:

Current Setup:

Server Hardware:
Dell PowerEdge R730

Proxmox Version: 8.2.4

Network Setup:

NICs:
The server has an Intel(R) GbE 4P I350-t rNDC network card with 4 ports.

Port 1: Connected to the ISP for internet access (WAN).

Port 2: Dedicated for pfSense LAN, connected directly to my main PC.

Port 3: Carries all VLANs and connects to a UniFi USW Lite 16 PoE switch.

Port 4: Used for Proxmox management, also connected to the UniFi switch.
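
For context, the bridge layout behind these four ports in /etc/network/interfaces looks roughly like the sketch below (interface names, bridge numbers, and addresses are illustrative, not copied from my actual config):

# Port 1 - WAN, bridged through to the pfSense VM
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# Port 4 - Proxmox management IP lives here
auto vmbr3
iface vmbr3 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0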

Virtual Machines:

pfSense VM:
Manages all network traffic, with each port assigned to handle WAN, LAN, and VLANs.

TrueNAS VM: Used for storage, with multiple hard drives passed through directly.

Other VMs: Various VMs running different services, including a UniFi controller.


Issue Description:


After adding the PERC H730 Mini RAID controller to the TrueNAS VM as a PCI device, I lost access to the Proxmox GUI and SSH. The Proxmox server's IP is still pingable, but all management interfaces are inaccessible. Other VMs, such as pfSense, continue to function normally, yet the web interfaces of all VMs are also unreachable.


Temporary Fix:

The only way to restore access is to boot the server in recovery mode and remove the RAID controller from the TrueNAS VM configuration file. This workaround resolves the issue temporarily, but it prevents me from using the RAID controller as intended.
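
For reference, what I remove in recovery mode is the passthrough entry in the TrueNAS VM's config under /etc/pve/qemu-server/ (VMID 101 and the PCI address below are placeholders, not my real values). Deleting that line and rebooting brings the management interfaces back:

# /etc/pve/qemu-server/101.conf
hostpci0: 0000:03:00.0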


Question:

How can I resolve this issue and get the RAID controller passthrough working with TrueNAS without disrupting Proxmox network connectivity? Please note that I have followed the PCI(e) Passthrough documentation:

https://pve.proxmox.com/wiki/PCI(e)_Passthrough



Thanks!
 
It is probably due to a Linux naming change of the NICs after that PCI device is passed through.
Check ip a before & after the device is passed through to the VM & compare.
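
Something along these lines makes the before/after comparison and the fix concrete (the file names, the MAC address, and the pinned name nic0 are just examples):

# before adding the PCI device to the VM, record the current names
ip a > /root/nics-before.txt

# after adding it (run from the console/iDRAC since the network may be down)
ip a > /root/nics-after.txt
diff /root/nics-before.txt /root/nics-after.txt

# if the management NIC got renamed (e.g. eno4 -> eno3), either adjust the
# bridge-ports entry in /etc/network/interfaces, or pin a stable name to the
# NIC's MAC address so the bridge always finds it:
cat > /etc/systemd/network/10-nic0.link <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:ff
[Link]
Name=nic0
EOF
# then point bridge-ports at nic0 and reboot (rebuilding the initramfs with
# "update-initramfs -u" may also be needed so the rename applies at boot)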

Also you will want to make sure the IOMMU groupings are distinct; i.e. the PERC controller is not in the same group as the NICs.
Check with:
pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
(as per the above Wiki you mentioned).
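
If you'd rather check straight from sysfs, a quick loop like this (just a sketch) prints every PCI device together with its IOMMU group:

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    echo "group $g: $(lspci -nns "${d##*/}")"
done

If the PERC and a NIC end up in the same group, passing the PERC through will detach that NIC from the host as well.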
 
