I'm having a recurring problem where NFS mounts from my TrueNAS server go "offline" in my PVE cluster, or stay online but perform very poorly. I'm trying to track down why this happens and what I can do to address it.
My TrueNAS server is running TrueNAS CORE 12.0-U5.1. It has 2x Xeon E5-2670s, 128 GB of RAM, and two storage pools. The first storage pool consists of 4x 6-disk RAIDZ2 vdevs (24 disks total) of varying sizes, and is a little over half full. This pool contains my jails, a few SMB shares, and a couple of NFS exports. The second pool consists of 4x 2TB disks in mirrored pairs. It has one NFS export and one iSCSI target. Other client systems, primarily via SMB, don't seem to have any performance issues with the server.
My PVE cluster is running the latest update of PVE 7. It consists of three nodes of a Dell PowerEdge C6220, each with 2x Xeon E5-2680v2 and ~80 GB of RAM. They're connected to each other, and to the TrueNAS box, via 10 GbE. NFS exports from both pools are mounted to the cluster as storage--the first for ISOs, container templates, and backups (the latter not being used much since I started using PBS), and the second pool for a few low-activity VM disk images (most virtual disks are stored on a Ceph pool, about which I have no complaints).
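For reference, the two NFS storages are defined in /etc/pve/storage.cfg roughly like this (storage names, export paths, and the server IP below are placeholders, not my actual values):

```
nfs: truenas-bulk
        server 10.0.0.10
        export /mnt/tank/pve
        path /mnt/pve/truenas-bulk
        content iso,vztmpl,backup

nfs: truenas-mirror
        server 10.0.0.10
        export /mnt/mirror/pve
        path /mnt/pve/truenas-mirror
        content images
```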
Frequently, though not constantly, the cluster reports both storages to be unavailable.
ls /mnt/pve
hangs, and any tasks that involve either of those mounts fail. But there's nothing obviously wrong on the TrueNAS box--I/O latency is low, there's plenty of CPU headroom, and the ARC hit ratio looks fine. I'm a little stumped about how to track this down--any ideas?
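In case it helps, this is the kind of probe I've been running from each node to tell a hung mount from a healthy one without wedging the shell (the /mnt/pve paths are placeholders for my actual storage names):

```shell
#!/bin/sh
# Probe a mount point with a hard timeout: stat returns almost
# instantly on a healthy NFS mount, while timeout(1) kills the
# process if the server has stopped responding.
check_mount() {
    if timeout 5 stat -t "$1" > /dev/null 2>&1; then
        echo "$1 OK"
    else
        echo "$1 HUNG or missing"
    fi
}

check_mount /mnt/pve/truenas-bulk
check_mount /mnt/pve/truenas-mirror
```

When the storages go "unavailable" in the GUI, this reports HUNG for both, so it at least confirms the hang is at the NFS layer rather than something PVE-specific.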