Networking Assistance please

TmacTech

New Member
Jan 8, 2026
To preface, I'm still fairly new to working with Proxmox. I'm currently using it to set up my homelab for various use cases and projects.

I'm running into an issue with my latest Proxmox setup and haven't had much luck scouring the forums for a solution thus far. When I start one of my VMs, the NIC bridged to that VM goes offline, and nothing else using that NIC can access the network. I'm beginning to think it may be a restriction of my hardware, but that's where I'm hoping the community can help with whatever config changes might be needed.

Some details about my setup: I'm running Proxmox on an ASUS Prime Z390-A motherboard with an Intel i9-9900K and 64GB of RAM. In the first x16 PCIe slot I have an HBA connected to four 8TB HDDs. In the second x16 PCIe slot I have an Intel X540 10Gbps NIC. The HBA is configured under "Resource Mappings" so I can pass it directly through to my VM running TrueNAS SCALE. The X540 is set up with a Linux bridge, which is added to the TrueNAS VM. As for IOMMU groups, both devices land in group 2 when ACS override is not enabled.
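For reference, the bridge itself is set up the standard Proxmox way in /etc/network/interfaces, roughly along these lines (the port name enp2s0f0 and the bridge name vmbr1 are illustrative placeholders, not necessarily what my node uses):

auto enp2s0f0
iface enp2s0f0 inet manual

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0f0
        bridge-stp off
        bridge-fd 0
        # no IP on the host side; VMs attach their virtual NICs to vmbr1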

Some additional detail: I've tried a 4-port Intel 2.5Gbps NIC in this node with the same effect. I've also updated the BIOS to the latest revision, FWIW, and I believe everything that needs to be enabled in the BIOS (VT-d and so forth) is enabled.
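If it helps anyone reading, VT-d being active on the host can be double-checked from the Proxmox shell with something like:

dmesg | grep -e DMAR -e IOMMU
# should show a line such as "DMAR: IOMMU enabled" if VT-d is working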

I have tried forcing ACS override by modifying the GRUB config, but that resulted in the VMs getting extremely poor network performance during file transfers and other network activity. By poor I mean roughly 15MB/s, where I get around 250MB/s on other VMs with the TrueNAS VM offline.
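For completeness, the GRUB change was along these lines (quoting from memory, so the exact line may differ slightly), followed by update-grub and a reboot:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"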

I know the NIC works with Proxmox and with other VMs, as I can reach the network and the internet when I spin up other VMs that use the 10Gbps NIC. As soon as I start the TrueNAS VM, everything using the 10Gbps NIC loses its connection. The link in the node's "Network" section is best described as flapping: it shows active, then inactive, back and forth, until I restart Proxmox.

My goal is to have the TrueNAS VM and most other VMs use the 10Gbps NIC for anything network-related, and to use the onboard 1Gbps NIC for accessing the Proxmox console and for running most of the containers on this node.

I'm hoping I'm just missing something in the network config or some other Proxmox setting I'm not aware of. Any advice or guidance anyone can provide would be much appreciated.

Thank you
 
Hi TmacTech,

This is a pretty common issue with Intel X540 (and other ixgbe-based NICs) on consumer platforms like Z390 when they’re shared with a TrueNAS SCALE VM through a Linux bridge.

What's happening is that when the TrueNAS VM starts, Proxmox hands the passed-through HBA over to vfio, and for that to work every device in the same IOMMU group has to be detached from the host. On consumer chipsets like Z390 the PCIe slots frequently end up in one group, and by your own description the X540 sits in group 2 alongside the HBA. So starting the VM pulls the NIC out from under the host's ixgbe driver and your bridge loses its physical port. From Proxmox's point of view the NIC keeps disappearing and coming back, which is why you see link flapping and lose all network access until a reboot.
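You can double-check the grouping from a host shell; this loop lists every PCI device per IOMMU group, so it's easy to see exactly what shares group 2 with the HBA:

for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done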

ACS override can change the IOMMU grouping, but on this kind of hardware it often comes with very poor performance, which matches what you’re seeing.

The most stable solution here is to pass the X540 through to the TrueNAS VM entirely. The NIC and the HBA then both end up with the same VM, so the shared IOMMU group stops being a conflict, and TrueNAS gets exclusive, native control of the card.
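As a rough sketch (VM ID 100 and the PCI address 0000:02:00 are placeholders; use the address lspci reports for your X540):

# Pass all functions (both ports) of the X540 to VM 100
# pcie=1 requires the q35 machine type on the VM
qm set 100 -hostpci0 0000:02:00,pcie=1

TrueNAS then sees the card natively and handles the IP configuration itself.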

If you want a simpler workaround, keep the NIC on the host and give TrueNAS a virtio interface on the bridge (lower peak throughput, but very stable). Just be aware that as long as the HBA passthrough drags the X540's group along with it, this only works if the NIC ends up in a different group, for example by moving it to another slot.
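That is just a paravirtualized NIC attached to your existing bridge, e.g. (VM ID and bridge name again placeholders):

# Replace the VM's NIC with a virtio adapter on the 10Gbps bridge
qm set 100 -net0 virtio,bridge=vmbr1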

Ultimately this is a hardware/IOMMU limitation of consumer boards rather than something you've misconfigured.

Could you run the IOMMU group listing above and confirm whether the X540 and the HBA really do share a group?

Thanks
 