Zfs datasets don't mount anymore after reboot

gb00s

Well-Known Member
Aug 4, 2017
I updated/upgraded the system today and rebooted:

Start-Date: 2020-08-07 18:12:21
Commandline: apt upgrade
Upgrade: libpve-guest-common-perl:amd64 (3.1-1, 3.1-2), libjson-c3:amd64 (0.12.1+ds-2, 0.12.1+ds-2+deb10u1)
End-Date: 2020-08-07 18:12:26

After the reboot, none of the datasets are available any more. The output of <systemctl status zfs-mount.service> shows me ...
root@pve2:/var/log/apt# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-08-07 21:40:43 CEST; 13min ago
Docs: man:zfs(8)
Process: 1629 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
Main PID: 1629 (code=exited, status=1/FAILURE)

Aug 07 21:40:43 pve2 systemd[1]: Starting Mount ZFS filesystems...
Aug 07 21:40:43 pve2 zfs[1629]: cannot mount '/storage': directory is not empty
Aug 07 21:40:43 pve2 systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Aug 07 21:40:43 pve2 systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Aug 07 21:40:43 pve2 systemd[1]: Failed to start Mount ZFS filesystems.
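If I read the error right, zfs mount -a refuses to mount a dataset on top of a directory that already contains files, so something must have written into /storage before the pool was mounted. This is roughly how I would check that (just a sketch, with the paths from my setup):

root@pve2:~# zfs get mounted,mountpoint storage    # is the dataset actually mounted right now?
root@pve2:~# ls -lA /storage                       # if not, anything listed here lives on the root disk
root@pve2:~# du -sh /storage                       # how much root-disk space those stray files occupy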
My datasets are still there, checked with zfs list:
root@pve2:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
storage                    129G  7.78T  29.3G  /storage
storage/backups             24K  7.78T    24K  /storage/backups
storage/iso                 24K  7.78T    24K  /storage/iso
storage/vm                 100G  7.78T    24K  /storage/vm
storage/vm/vm-100-disk-0   100G  7.86T  9.72G  -
A manual mount with <zfs mount -O -a> brings all the datasets back to the 'right place' until I reboot the machine again.

My question now is: which command do I have to use to get all the datasets mounted automatically again after each reboot?
Any help is much appreciated. Thank you in advance.
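For reference, this is what I was planning to try next, based on other threads about the 'directory is not empty' error. As far as I can tell, the -O in my manual zfs mount -O -a only papers over the problem by doing an overlay mount on top of the non-empty directory. This is only a sketch, not a verified fix; the unit names are the standard ZFS-on-Linux ones:

root@pve2:~# zfs unmount -a                        # make sure nothing from the pool is mounted
root@pve2:~# mv /storage /storage.old              # keep whatever was written into the plain directory for later inspection
root@pve2:~# mkdir /storage                        # recreate an empty mountpoint
root@pve2:~# zfs mount -a                          # should now mount cleanly, without needing -O
root@pve2:~# systemctl enable zfs-import-cache.service zfs-mount.service zfs.target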

Regards

Mike
 
EDIT:
No, this issue is not solved. I have the same behavior on another machine.

I created a 'Backups' directory under my 'storage' pool and scheduled some backups overnight. Unfortunately, the pool must have been unmounted at some point, because all the backups were written to the directory /storage/Backups/ and filled my whole root disk. Full system crash. Before that, the ZFS datasets were definitely mounted.

The ZFS pool 'storage' was there, and checking the ZFS datasets showed all of them. But when I checked the mounts with cd /storage, there was only the 'Backups' folder under /storage; the datasets were missing. So /storage was not the pool at all, it was actually just a folder on the root disk, and that is what filled my whole root. Mounting the datasets with zfs mount -O -a brought everything back as it should be.

Can anybody explain what's going on and where I'm doing something wrong?
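One thing I will look at is the storage definition itself. If I understand the Proxmox storage documentation correctly, a directory storage in /etc/pve/storage.cfg can be told that its path has to be a real mountpoint, so PVE refuses to write backups onto the bare root filesystem when the dataset is not mounted. Something like this (the storage name and path are just my guess at a matching entry, adjust as needed):

dir: backups
        path /storage/backups
        content backup
        is_mountpoint yes
        mkdir 0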

ADD:

root@pve1:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-08-09 20:46:28 CEST; 1 day 16h ago
Docs: man:zpool(8)
Main PID: 1399 (code=exited, status=1/FAILURE)

Aug 09 20:46:28 pve1 systemd[1]: Starting Import ZFS pools by cache file...
Aug 09 20:46:28 pve1 zpool[1399]: invalid or corrupt cache file contents: invalid or missing cache file
Aug 09 20:46:28 pve1 systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Aug 09 20:46:28 pve1 systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Aug 09 20:46:28 pve1 systemd[1]: Failed to start Import ZFS pools by cache file.
root@pve1:~#

That is just what I found; at Aug 09 20:46:28 the machine was sitting around idle. No backups scheduled, no other jobs scheduled at the time, no startup. Nothing. Mysterious.
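Since zfs-import-cache complains about an invalid or missing cache file, the next thing I will try (only a sketch based on the zpool(8) man page, and assuming the pool on pve1 is also called 'storage') is rewriting /etc/zfs/zpool.cache and refreshing the initramfs so the copy inside it matches:

root@pve1:~# zpool set cachefile=/etc/zfs/zpool.cache storage    # regenerate the cache file for the pool
root@pve1:~# update-initramfs -u -k all                          # rebuild the initramfs with the fresh cache file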
 

Attachments

  • Proxmox_Backups.png (91.2 KB)