Greetings Forum:
I am running PVE 5.4-11 on an R720xd with a single CT for testing: TKL File Server.
While copying files from another NAS to the TKL File Server CT via a Windows client on the LAN, I noticed that the shares would suddenly become unavailable. After a little digging and research in the forums, I issued:
Code:
root@pve-r720xd1:~# zfs list -r -o name,mountpoint,mounted
NAME                        MOUNTPOINT                   MOUNTED
r720xd1                     /r720xd1                     no
r720xd1/subvol-108-disk-0   /r720xd1/subvol-108-disk-0   no
and realized the ZFS filesystem was no longer mounted. After issuing:
Code:
zfs mount -O -a
the ZFS datasets were mounted again, the storage was available to the TKL File Server CT, and the shares were once again reachable from the Windows client, with no reboot needed. Excellent!
I originally thought this happened because I had ejected one of the hot-swap drives and reinserted it to test the hot-swap functionality. After successfully remounting the pool, I decided to rebuild the pool using wwn- designations instead of /dev/sd[x] device names. For a while I thought that had solved the issue, but after a power blink and reboot this weekend the exact same thing happened: the ZFS filesystem was unmounted again. (I still haven't moved the server to my UPS.)
So, I am not sure why the ZFS filesystem occasionally ends up unmounted in PVE after a reboot. But I do know that issuing
Code:
zfs mount -O -a
after a reboot takes care of the problem every time.
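(I assume the place to probe after the next bad boot would be the ZFS import/mount systemd units, something like the commands below, but I'm not sure what to look for in the output.)
Code:
# check whether the ZFS import/mount units ran cleanly on this boot
systemctl status zfs-import-cache.service zfs-import-scan.service zfs-mount.service
# and read their log messages from the current boot
journalctl -b -u zfs-import-cache.service -u zfs-mount.service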
Where can I include
Code:
zfs mount -O -a
in the PVE configuration so that I can be 100% sure my ZFS filesystem will be (re)mounted on every reboot? (Or is there somewhere else I can probe to discover why the ZFS filesystem is becoming unmounted in the first place?)
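To make the question concrete, the only idea I've come up with myself is a small systemd unit that re-runs the mount after the pool import, roughly like the sketch below (the unit name zfs-mount-all.service and the ordering are just my guesses, and I don't know whether this is the "PVE way" of doing it):
Code:
# /etc/systemd/system/zfs-mount-all.service  (file name is my own invention)
[Unit]
Description=Workaround: remount all ZFS datasets after pool import
After=zfs-import-cache.service zfs-import-scan.service
# guess: try to run before containers/VMs are started
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/sbin/zfs mount -O -a
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
which I would then enable with:
Code:
systemctl daemon-reload
systemctl enable zfs-mount-all.service
Is something like that the right approach, or does PVE have a proper place for this?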