PVE nodes communicating with mounted storage (truenas) over cluster network NIC and not primary NIC

nightchrono

New Member
Nov 24, 2025
I have a 3 node cluster running on a variety of hardware. All three nodes and the truenas storage for virtual disks have 10 Gbps NICs. The separate NICs I am using for the cluster network are all only 1 Gbps.

Upon rebooting a node for a kernel update, it lost connection to the TrueNAS NFS share. I went into TrueNAS and added the node's cluster static IP as an allowed connection, and boom, the share mounted in the PVE web UI instantly.

I have run into this in the past and have been unable to find a solution by googling. Is there some way to set NIC priority, or to pin a file-share mount to a specific NIC? The TrueNAS box has 4 NVMe drives in a ZFS pool, and I really need the 10 Gbps networking so the virtual disks are served to the hypervisor with as much bandwidth as possible.
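
For what it's worth, the source interface for an NFS mount is chosen by the kernel's routing table rather than by the storage config, so with two NICs holding addresses on the same subnet either one can end up carrying the traffic. A quick way to check which interface and source address the kernel would pick for a given NFS server (the IP below is just a placeholder):

    ip route get 192.168.50.246
    # example output: 192.168.50.246 dev vmbr0 src 192.168.50.100 ...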

I would be happy to post any requested configs or logs, I am just unsure of what would be needed for this.

Thanks to everyone who helps.
 
All three nodes and the truenas storage for virtual disks have 10 Gbps NICs. The separate NICs I am using for the cluster network are all only 1 Gbps.
What networks/subnets are being used here?
Upon rebooting a node for a kernel update, it lost connection to the truenas NFS share.
Can you expand on this? Are all your connections on the same subnet? What does /etc/pve/storage.cfg say? What about "ip a" from PVE? IPs of the TrueNAS?
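
For reference, that information can be collected on a PVE node with something like:

    cat /etc/pve/storage.cfg
    ip a
    ip route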



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
What networks/subnets are being used here?

Can you expand on this? Are all your connections on the same subnet? What does /etc/pve/storage.cfg say? What about "ip a" from PVE? IPs of the TrueNAS?
Network is 192.168.50.0/24. All devices in question are on the same subnet.

My nodes are as follows:
Node Name    Management IP     Cluster IP
PVX          192.168.50.100    192.168.50.120
PVX2         192.168.50.104    192.168.50.124
PVX3         192.168.50.105    192.168.50.125

I actually have multiple NFS shares mounted, and I lost connection to all of them upon reboot. I added the cluster IP to the TrueNAS share just to get up and running quickly, but I left the NFS share to the Synology in a broken state to reference for this troubleshooting. I am hoping the attached screenshot provides some information.

The Synology is 192.168.50.249. This is used as backup storage.
The TrueNAS is 192.168.50.246.

Attaching the output of "ip a" and "storage.cfg"
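
For anyone comparing, an NFS entry in /etc/pve/storage.cfg generally looks like the sketch below (the storage name, export path and options here are made up). Note that it only specifies the server address to mount from, not which local NIC to use:

    nfs: truenas-nvme
            export /mnt/tank/pve
            path /mnt/pve/truenas-nvme
            server 192.168.50.246
            content images,rootdir
            options vers=4.2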
 

Attachments

  • Screenshot 2025-12-16 171536.png (5.2 KB)
  • ip a.png (110.9 KB)
  • Screenshot 2025-12-16 172316.png (37.5 KB)
Sorry, I was struggling with table formatting and accidentally posted. It is not intuitive to type an additional line after a table, but I digress.

Will edit the previous post with the other information; just posting quickly so you know I didn't ignore half your post.
 
You should not have all devices on the same subnet. Your cluster network should be on a different subnet.

Note that none of this is PVE specific, but rather basic Linux networking.
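
A minimal sketch of what that can look like in /etc/network/interfaces (interface names, addresses and the 10.10.10.0/24 cluster subnet are only examples):

    # 10 Gbps NIC: management, VM and storage traffic
    auto enp1s0f0
    iface enp1s0f0 inet manual

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.50.100/24
            gateway 192.168.50.1
            bridge-ports enp1s0f0
            bridge-stp off
            bridge-fd 0

    # 1 Gbps NIC: dedicated corosync/cluster network on its own subnet
    auto enp2s0
    iface enp2s0 inet static
            address 10.10.10.1/24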

Cheers

PS: I initially misread and thought you had more than 2 NICs.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Okay, I am setting up a new VLAN in my UniFi setup now and will switch the cluster to a different VLAN and subnet. Will report back.
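
If the 1 Gbps port ends up on a tagged VLAN rather than an untagged switch port, one way to handle it is a VLAN sub-interface in /etc/network/interfaces (the VLAN ID 30, interface name and subnet are placeholders):

    auto eno2.30
    iface eno2.30 inet static
            address 10.10.30.1/24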
 
Changing the cluster IPs is not very straightforward. Been doing some reading to check whether I have this right:

1. Disable HA rules
2. Migrate all VMs to the primary node for the time being
3. Back up /etc/pve/corosync.conf
4. Change the IP of the cluster NIC to its new IP
5. Edit corosync.conf and replace "ring0_addr: 192.168.50.120" with the new static IP, and increment "config_version" (sketch below)
6. Repeat the above on the other nodes
7. Reboot the nodes

Does that sound like I have it?
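
For reference, the pieces of corosync.conf touched in step 5 look roughly like this (node name, addresses and version number are placeholders): ring0_addr sits in each node's nodelist entry, while config_version lives in the totem section.

    nodelist {
      node {
        name: PVX
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1
      }
      ...
    }

    totem {
      cluster_name: mycluster
      config_version: 5
      ...
    }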
 

Well, this was not the way to do it, and I am working on recovering my cluster and VMs.