Hi all, hoping you might be able to help with an issue I've got. My Proxmox host has a couple of ZFS pools: a root pool (rpool) and a storage pool (Storage1). My VMs and containers run on the root pool. The Storage1 pool provides shared storage for the VMs and containers (including Docker in LXC) and also serves as an SMB store for my network.
I upgraded the host to the latest version via the dashboard, version 6.3-6 (previously 6.3-2, I believe). The upgrade included an update for ZFS too, I believe. I rebooted the machine to install the new kernel. After the reboot, Docker wouldn't start properly due to a permissions issue, and after some digging I found that my ZFS storage pool had not mounted correctly on reboot. ZFS mounts the Storage pool to /mnt/Storage1, and I can see that the mount point is there;
zfs get mounted shows the pool mounted, but the directories are empty if I navigate there and run ls -l.
zpool status shows the pool healthy and my data is on the drives; I just can't access it.
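In case it's useful, these are the extra checks I ran to confirm the mismatch between what ZFS reports and what's actually on disk (findmnt here is just my guess at what might reveal whether something else is mounted, or not mounted, at that path):

```shell
# Does ZFS believe the dataset is mounted, and where does it think it goes?
zfs get mounted,mountpoint Storage1

# Is anything actually mounted at the path according to the kernel?
findmnt /mnt/Storage1

# An empty directory here, while "mounted" shows yes, suggests the listing
# is showing the underlying mountpoint directory rather than the dataset
ls -la /mnt/Storage1
```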
I was able to manually mount the pool using the following sequence. (I had to stop the PVE services, since they seem to auto-create the mount points; stopping them allowed me to manually mount the pool, after which everything worked.)
Code:
# zfs get mounted
# systemctl stop pve-cluster ; systemctl stop pvedaemon ; systemctl stop pveproxy ; systemctl stop pvestatd
# zfs unmount Storage1
# zfs get mounted
# zfs mount Storage1
# zfs get mounted
# systemctl start pvestatd ; systemctl start pveproxy ; systemctl start pvedaemon ; systemctl start pve-cluster
I seem to have a failure on the zfs-import-scan service, but I have no idea how to debug or resolve it.
Code:
# systemctl status zfs-import-scan.service
● zfs-import-scan.service - Import ZFS pools by device scanning
Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; enabled; vendor
Active: inactive (dead)
Condition: start condition failed at Sat 2021-03-20 02:15:44 AEDT; 3 days ago
└─ ConditionFileNotEmpty=!/etc/zfs/zpool.cache was not met
Docs: man:zpool(8)
Mar 20 02:15:44 pve1 systemd[1]: Condition check resulted in Import ZFS pools by
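Reading the condition line, my guess is that the scan service is being skipped deliberately because /etc/zfs/zpool.cache is non-empty, which (as I understand it) means the pool should instead be imported at boot by zfs-import-cache.service. These are the checks I'm assuming would be the next step (the service names and the cachefile property come from the ZFS/systemd docs; I haven't confirmed this is my actual fault):

```shell
# If a non-empty zpool.cache exists, imports are handled by the cache
# service rather than the scan service, so check that one instead
systemctl status zfs-import-cache.service zfs-mount.service

# See which cachefile (if any) the pool is configured to use
zpool get cachefile Storage1

# If Storage1 is missing from the cache, setting the property again
# should rewrite its entry into /etc/zfs/zpool.cache
zpool set cachefile=/etc/zfs/zpool.cache Storage1
```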
Whilst the manual unmount/mount process got the pool back, I need it to automount after a reboot.
Any assistance would be greatly appreciated.