[SOLVED] ZFS not mounting after upgrade

Sep 17, 2018
After upgrading to 6.0, my ZFS mountpoints no longer mount on boot. If I run zfs mount manually they attach, but they are not coming up at boot. PVE still sees them:

# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        59600812        25246548        31297012   42.36%
local-lvm     lvmthin     active       167772160        41070624       126701535   24.48%
zfs1          zfspool     active      5656018404      1383865596      4272152808   24.47%

It's configured in storage.cfg:
zfspool: zfs1
        pool zfs_pool
        blocksize 4k
        content images,rootdir
        sparse 0

The mountpoints exist:
# zfs list -r -o name,mountpoint,mounted
NAME                        MOUNTPOINT                   MOUNTED
zfs_pool                    /zfs_pool                         no
zfs_pool/subvol-101-disk-1  /zfs_pool/subvol-101-disk-1       no
zfs_pool/subvol-102-disk-0  /zfs_pool/subvol-102-disk-0       no
zfs_pool/subvol-103-disk-1  /zfs_pool/subvol-103-disk-1       no
zfs_pool/subvol-107-disk-0  /zfs_pool/subvol-107-disk-0       no
zfs_pool/subvol-111-disk-0  /zfs_pool/subvol-111-disk-0       no
zfs_pool/vm-201-disk-1      -                                  -
zfs_pool/vm-202-disk-0      -                                  -
zfs_pool/vm-205-disk-0      -                                  -

The zfs-mount service runs:
# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: active (exited) since Tue 2019-07-16 13:19:59 EDT; 3min 1s ago
     Docs: man:zfs(8)
  Process: 911 ExecStart=/sbin/zfs mount -O -a (code=exited, status=0/SUCCESS)
 Main PID: 911 (code=exited, status=0/SUCCESS)

Jul 16 13:19:59 icarus systemd[1]: Starting Mount ZFS filesystems...
Jul 16 13:19:59 icarus systemd[1]: Started Mount ZFS filesystems.

It creates the /zfs_pool folder but none of the subdirectories:
root@icarus:/# cd /zfs_pool/
root@icarus:/zfs_pool# ls -lah
total 8.0K
drwxr-xr-x  2 root root 4.0K Jul 16 13:20 .
drwxr-xr-x 24 root root 4.0K Jul 16 13:20 ..

unless I run zfs mount manually:
root@icarus:/zfs_pool# zfs mount -a
root@icarus:/zfs_pool# zfs list -r -o name,mountpoint,mounted
NAME                        MOUNTPOINT                   MOUNTED
zfs_pool                    /zfs_pool                        yes
zfs_pool/subvol-101-disk-1  /zfs_pool/subvol-101-disk-1      yes
zfs_pool/subvol-102-disk-0  /zfs_pool/subvol-102-disk-0      yes
zfs_pool/subvol-103-disk-1  /zfs_pool/subvol-103-disk-1      yes
zfs_pool/subvol-107-disk-0  /zfs_pool/subvol-107-disk-0      yes
zfs_pool/subvol-111-disk-0  /zfs_pool/subvol-111-disk-0      yes
zfs_pool/vm-201-disk-1      -                                  -
zfs_pool/vm-202-disk-0      -                                  -
zfs_pool/vm-205-disk-0      -                                  -
root@icarus:/zfs_pool# cd ..
root@icarus:/# cd /zfs_pool/
root@icarus:/zfs_pool# ls -lah
total 31K
drwxr-xr-x  7 root root    7 Jun 26 14:46 .
drwxr-xr-x 24 root root 4.0K Jul 16 13:20 ..
drwxr-xr-x  2 root root    2 Sep 18  2018 subvol-101-disk-1
drwxr-xr-x 22 root root   22 May  1 15:35 subvol-102-disk-0
drwxrwxrwx  5 1000 1000    5 Aug 17  2018 subvol-103-disk-1
drwxr-xr-x 22 root root   22 Aug 27  2018 subvol-107-disk-0
drwxr-xr-x  4 root root    4 Nov  2  2018 subvol-111-disk-0

Any ideas why the ZFS mounts aren't attaching? I'm sure it's something dumb I'm forgetting, but I'm at a loss here.
Thanks for posting your solution. The zpool set cachefile command worked for me, even though my situation was a little different. I did a fresh install of 6.0, and I didn't notice any errors from the zfs-import-cache service.

The storage drive I added after installing Proxmox wasn't mounting automatically when I rebooted the host, but everything mounted when I ran zfs mount -a manually. After running the zpool set cachefile command, my storage drive mounts automatically, and all of the containers and VMs on it start automatically too.
I had the same problem after an update, and afterwards also on a fresh install (which I did to try to resolve the error).

This helped me as well! I've used it on two different pools and it worked for both. So big thanks.
I got this exact problem. Everything worked for months, but after an apt-get upgrade some containers wouldn't start. Interestingly, VMs that use the same ZFS pool started without any problems; only containers with a mountpoint on ZFS were affected.
I noticed the cachefile property was set to none; after changing it to /etc/zfs/zpool.cache, everything works fine.
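For anyone hitting this later, the check-and-fix described above can be sketched roughly like this (using the pool name zfs_pool from the original post; run as root, and verify the unit names on your own system):

```shell
# Check where the pool thinks its cache file lives.
# A value of '-' or 'none' means no cache file is being written,
# so zfs-import-cache has nothing to import at boot.
zpool get cachefile zfs_pool

# Point the pool back at the default cache file location.
zpool set cachefile=/etc/zfs/zpool.cache zfs_pool

# Make sure the import and mount units are enabled, then reboot to verify.
systemctl enable zfs-import-cache.service zfs-mount.service
```

Setting the property immediately rewrites /etc/zfs/zpool.cache, which is what the zfs-import-cache service reads on the next boot.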
I had the same problem after upgrading from 5.4 to 6.0. A complete reinstall helped me ))
What if I have more than one pool? Do they use the same cache file?
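As far as I know, yes: a single cache file can hold entries for every pool on the host. You set the property on each pool and they all end up in the same file, roughly like this (pool names here are just examples):

```shell
# One cache file serves all pools; set the property on each one.
# 'tank' and 'backup' are example pool names, not from this thread.
for pool in tank backup; do
    zpool set cachefile=/etc/zfs/zpool.cache "$pool"
done
```

Each zpool set call updates that pool's entry in the shared cache file, so zfs-import-cache imports all of them at boot.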

