[SOLVED] NFS Shared storage keeps dropping out

mwg47x

New Member
Oct 21, 2025
First post here.

Background:
My NFS share holds all my VM disks, ISOs, basically everything.
It is on a TrueNAS Core box. I had 2 spare 1GbE ports, so I used Link Aggregation in TrueNAS to bond them and gave the lagg an IP of 10.1.1.70.
In Proxmox, I used two 1GbE ports to create bond0 and then used bond0 as the interface for vmbr1.
I gave vmbr1 an IP of 10.1.1.41.
10.1.1.41 is the only IP authorized to use the NFS share in TrueNAS.
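
For reference, the Proxmox side of that setup boils down to roughly this in /etc/network/interfaces (the NIC names and bond mode here are just examples, not necessarily my exact config):

Code:
    # bond of the two 1GbE ports going directly to the TrueNAS box
    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad

    # bridge carrying the storage IP that TrueNAS authorizes
    auto vmbr1
    iface vmbr1 inet static
        address 10.1.1.41/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0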

Please note, these two boxes sit right next to each other.
I have 2 Cat5e cables going directly from NIC port to NIC port.
I can ping .41 from .70 using the shell in TrueNAS, and the same goes for the other direction.
It seems to work fine most of the time.

Issue:
However, every now and then when I try to fire up a VM, it errors out saying storage 'TrueNAS' is not online.
Try it again and it works.
Looking in the system log, I see a lot of entries saying "pvestatd[1619]: storage 'TrueNAS' is not online".

A log entry with the above error shows up every 3 or 4 minutes.
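
In case it helps anyone hitting the same message, these are the kinds of checks I run from the Proxmox shell when it happens (storage name and IPs as above):

Code:
    # does Proxmox consider the storage online right now?
    pvesm status

    # is the TrueNAS box reachable over the storage link?
    ping -c 3 10.1.1.70

    # is the NFS export actually visible from this node?
    showmount -e 10.1.1.70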
What the heck is going on here?
 
Minor update:
I am in the process of moving all VM disks to a local LVM-thin pool.
I don't want to risk disk corruption.
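
Per VM that boils down to something like the following (VMID, disk name and storage name are just examples):

Code:
    # move a VM disk from the NFS storage to the local thin pool,
    # deleting the source copy once the move succeeds
    qm move_disk 100 scsi0 local-lvm --delete 1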

I have a feeling this is totally my fault... I just haven't figured out where I went wrong yet.
 
Looking at this further, I am 99% sure the issue was caused by my own stupidity.
I am also 99% sure it is on the TrueNAS side of things, not the Proxmox side.
Once I have verified this and have it working correctly, I will post what the issue was and mark this as solved.
 
I assume that with 2 direct cables, bonding would not work properly, as you would need a switch and would have to bond the ports there too.
Without a switch, you can configure 2 separate subnets and access the same storage in different ways for static load balancing.
 
I assume that with 2 direct cables, bonding would not work properly, as you would need a switch and would have to bond the ports there too.
Without a switch, you can configure 2 separate subnets and access the same storage in different ways for static load balancing.
By golly, you're right.
I didn't think that thru very well.
I'll get that changed.
 
Well, my issue is resolved.
Ya know, it really helps if you get the dang permissions correct on the dataset in the TrueNAS pool, if you're going to share it out with NFS to Proxmox.
I had that screwed up 7 different ways to Sunday.
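
For anyone chasing the same thing, the sanity check I should have done sooner was to mount the export by hand from the Proxmox node and try to write to it (the dataset path is a placeholder for whatever yours is):

Code:
    # temporary manual mount of the TrueNAS export
    mkdir -p /mnt/nfstest
    mount -t nfs 10.1.1.70:/mnt/tank/proxmox /mnt/nfstest

    # if the dataset permissions or maproot settings are wrong, this write fails
    touch /mnt/nfstest/permtest && echo "write OK"

    # clean up
    umount /mnt/nfstest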

I have it working well now with a single Cat5e cable linking NIC to NIC, since they sit right next to each other.

@waltar I would be interested in hearing how you can access the same NFS share two different ways from a single node.
Got a link?

Thanks for letting me bounce stuff off you folks.
Just writing a post, and describing the problem, helps me out in many cases.
 
Configure, e.g., link 1 over 192.168.1.55(/24) and link 2 over 192.168.2.55(/24) - that's all.
:)
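
On the Proxmox side that would look roughly like this (interface names and the .41 addresses are only examples):

Code:
    # first direct link, subnet 1 (TrueNAS on 192.168.1.55)
    auto eth2
    iface eth2 inet static
        address 192.168.1.41/24

    # second direct link, subnet 2 (TrueNAS on 192.168.2.55)
    auto eth3
    iface eth3 inet static
        address 192.168.2.41/24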
 
I assume that with 2 direct cables, bonding would not work properly, as you would need a switch and would have to bond the ports there too.
Without a switch, you can configure 2 separate subnets and access the same storage in different ways for static load balancing.
That is not quite right; you can definitely connect the machines directly and use LACP as the bonding type. There is no difference between connecting them directly or over a switch; the LACP protocol will work nonetheless. But it will only provide failover; load balancing will not work properly, as you basically have one connection (same IP and port). It is still a bit better than a normal active-backup bond.

Bond modes like round-robin will not work properly, but the OP did not mention which bonding type he uses.

The more performant, but also more invasive, solution would be to assign two subnets (as you mentioned) and connect them with NFS 4.1 session trunking (if it is the same share). That is not yet natively supported by Proxmox.
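
If you are unsure which mode your bond is actually running, the kernel reports it directly (bond0 is just the usual example name):

Code:
    # prints e.g. "Bonding Mode: IEEE 802.3ad Dynamic link aggregation",
    # "load balancing (round-robin)" or "fault-tolerance (active-backup)"
    cat /proc/net/bonding/bond0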
 
Thanks for the reply @Khensu
That helps clear things up for me considerably.
I was trying to use a balance-rr type bond, so that wasn't going to work.

I have a sneaking feeling that 1 GbE access to all these disks is going to become a significant bottleneck for me.
10 GbE cards are cheap, and I have open slots in both servers.
I am most likely going to buy 2 cards and a Cat6 cable and just direct-connect them.
No 10 GbE switch required, and that will be more than enough bandwidth to the NFS share for my needs.
 