ZFS fails to mount datasets

morph027

After the last update from the no-subscription repository on one of my test servers, ZFS fails to import and mount datasets on boot.

PVE, however, creates directories which then prevent ZFS from mounting the datasets afterwards. Also, one of my pools now has to be imported manually instead of being imported automatically as before.

Some debugging:

Code:
# journalctl -lu zfs-import-cache.service
-- Logs begin at Thu 2017-08-03 08:35:19 CEST, end at Thu 2017-08-03 08:47:18 CEST. --
Aug 03 08:35:20 test-pve systemd[1]: Starting Import ZFS pools by cache file...
Aug 03 08:35:26 test-pve zpool[3554]: cannot import 'test-vm': no such pool or dataset
Aug 03 08:35:26 test-pve zpool[3554]:         Destroy and re-create the pool from
Aug 03 08:35:26 test-pve zpool[3554]:         a backup source.
Aug 03 08:35:26 test-pve systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Aug 03 08:35:26 test-pve systemd[1]: Failed to start Import ZFS pools by cache file.
Aug 03 08:35:26 test-pve systemd[1]: zfs-import-cache.service: Unit entered failed state.
Aug 03 08:35:26 test-pve systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.

Code:
# journalctl -lu zfs-mount.service
-- Logs begin at Thu 2017-08-03 08:35:19 CEST, end at Thu 2017-08-03 08:47:18 CEST. --
Aug 03 08:35:26 test-pve systemd[1]: Starting Mount ZFS filesystems...
Aug 03 08:35:26 test-pve zfs[5535]: cannot mount '/rpool': directory is not empty
Aug 03 08:35:27 test-pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Aug 03 08:35:27 test-pve systemd[1]: Failed to start Mount ZFS filesystems.
Aug 03 08:35:27 test-pve systemd[1]: zfs-mount.service: Unit entered failed state.
Aug 03 08:35:27 test-pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.

Code:
# zfs mount -a
cannot mount '/rpool': directory is not empty
cannot mount '/test-vm/data': directory is not empty
cannot mount '/test-vm/vms': directory is not empty

No zfs properties were changed.
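
For anyone hitting the same thing, a minimal recovery sketch (assuming the offending directories are just empty stubs created before the datasets could mount; rmdir refuses to touch anything non-empty or currently mounted, so nothing with real data is at risk):

Code:
# see which datasets failed to mount and where they are supposed to go
zfs list -o name,mountpoint,mounted
# remove empty leftover directories, children before parents
rmdir /test-vm/vms/subvol-* /test-vm/vms /test-vm/data 2>/dev/null
# then retry
zfs mount -a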
 
if you don't want PVE to create directories for configured directory storages, set the mkdir and is_mountpoint options (man pvesm).
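
a rough sketch of what that could look like for a directory storage in /etc/pve/storage.cfg (storage name and path here are made up, check man pvesm for the exact option semantics):

Code:
dir: mydirstorage
   path /rpool/dirstorage
   content iso,vztmpl,backup
   mkdir 0
   is_mountpoint 1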
 
The storages are configured as zfspools and not directories...

please post your storage config, output of 'zfs list' and contents of /etc/fstab. note that 'zfs mount -a' / zfs-mount.service IS racy, so it is entirely possible that mounting happens in the wrong order..
 
Weird thing is that it did work until the last update ;) I do keep datasets at a sublevel...

Code:
dir: local
   path /var/lib/vz
   maxfiles 0
   content rootdir,iso,vztmpl
zfspool: test-vms
   pool test-vm/vms
   sparse 1
   content rootdir,images

Code:
test-vm/vms                                                                         91.6G   233G   104K  /test-vm/vms
test-vm/vms/subvol-106-disk-1                                                       6.93G  12.6G  3.39G  /test-vm/vms/subvol-106-disk-1
test-vm/vms/subvol-108-disk-1                                                       14.4G  5.50G  2.50G  /test-vm/vms/subvol-108-disk-1
test-vm/vms/subvol-109-disk-1                                                       1.62G  7.54G   475M  /test-vm/vms/subvol-109-disk-1
test-vm/vms/subvol-111-disk-1                                                       4.02G  1.73G  2.27G  /test-vm/vms/subvol-111-disk-1
test-vm/vms/subvol-112-disk-1                                                       1.24G   592M   432M  /test-vm/vms/subvol-112-disk-1
test-vm/vms/subvol-113-disk-1                                                       5.56G  3.16G   856M  /test-vm/vms/subvol-113-disk-1
test-vm/vms/subvol-114-disk-1                                                       1.19G  3.60G   408M  /test-vm/vms/subvol-114-disk-1
test-vm/vms/subvol-115-disk-1                                                       1.17G  3.66G   346M  /test-vm/vms/subvol-115-disk-1
test-vm/vms/subvol-117-disk-1                                                       2.94G  3.14G   878M  /test-vm/vms/subvol-117-disk-1
test-vm/vms/subvol-118-disk-1                                                       2.34G  3.40G   610M  /test-vm/vms/subvol-118-disk-1
test-vm/vms/subvol-121-disk-1                                                       1.70G  3.34G   675M  /test-vm/vms/subvol-121-disk-1
test-vm/vms/subvol-122-disk-1                                                       1.36G  3.67G   342M  /test-vm/vms/subvol-122-disk-1
test-vm/vms/subvol-123-disk-1                                                       1.05G   591M   433M  /test-vm/vms/subvol-123-disk-1
test-vm/vms/subvol-124-disk-1                                                       1.71G  1.57G   442M  /test-vm/vms/subvol-124-disk-1
test-vm/vms/subvol-125-disk-1                                                       3.82G  5.69G  2.31G  /test-vm/vms/subvol-125-disk-1
test-vm/vms/vm-116-disk-1                                                           40.6G   233G  35.8G  -

Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/zvol/rpool/swap none swap sw 0 0
proc /proc proc defaults 0 0
 
if it is only the empty 'parent' datasets which are not mounted correctly, it is probably zfs getting the order wrong and mounting one of the children first....
 
Hm... any idea how to deal with this on the ZFS side? I could probably set the mountpoint to none for the parents...

one solution (as long as they are supposed to contain no data ;)).
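
for reference, a hedged sketch of that with the dataset names from this thread (children that currently inherit their mountpoint need an explicit one set first):

Code:
# pin the datasets that actually hold data to their current paths
zfs set mountpoint=/test-vm/vms test-vm/vms
zfs set mountpoint=/test-vm/data test-vm/data
# then keep the empty pool root dataset from being mounted at all
zfs set mountpoint=none test-vm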
 
Ok, somehow the pools become available too late now (which wasn't the case before), and PVE already tries to start containers while the subvol is not there yet. But it creates folders for the mountpoints, which then prevents the real subvol from mounting.

If I see it correctly, all containers are started via the lxc@.service template (which loads after lxc.service), right? Is there another systemd dependency in between which changed recently?
 
no, containers which start on boot are started by pve-manager.service (regular setups) or pve-ha-lrm.service (HA-managed ones). they are started via lxc@ID.service, but those instances are not enabled, so they are not started automatically, only explicitly by our code.

zfs datasets should be mounted before local-fs.target is reached, which is long before pve-manager.service is started.

e.g., on my system I have:
Code:
# journalctl -b -u "zfs*" -u "local-fs.target" -u "pve-manager"
-- Logs begin at Tue 2017-07-25 08:48:09 CEST, end at Tue 2017-08-08 11:23:29 CEST. --
Aug 08 08:11:16 nora systemd[1]: Starting Import ZFS pools by cache file...
Aug 08 08:11:21 nora systemd[1]: Started Import ZFS pools by cache file.
Aug 08 08:11:21 nora systemd[1]: Starting Mount ZFS filesystems...
Aug 08 08:11:26 nora systemd[1]: Started Mount ZFS filesystems.
Aug 08 08:11:26 nora systemd[1]: Reached target Local File Systems.
Aug 08 08:11:26 nora systemd[1]: Starting ZFS file system shares...
Aug 08 08:11:26 nora systemd[1]: Started ZFS file system shares.
Aug 08 08:11:26 nora systemd[1]: Reached target ZFS startup target.
Aug 08 08:11:38 nora systemd[1]: Starting PVE VM Manager...
Aug 08 08:11:39 nora pve-manager[6424]: <root@pam> starting task UPID:nora:0000191B:00000AB3:5989561B:star
Aug 08 08:11:39 nora pve-manager[6427]: <root@pam> starting task UPID:nora:0000191C:00000AB5:5989561B:vzst
Aug 08 08:11:39 nora pve-manager[6428]: starting CT 100023: UPID:nora:0000191C:00000AB5:5989561B:vzstart:1
Aug 08 08:11:45 nora pve-manager[6424]: <root@pam> end task UPID:nora:0000191B:00000AB3:5989561B:startall:
Aug 08 08:11:45 nora systemd[1]: Started PVE VM Manager.

maybe you could run the same command on yours?
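
if the ordering itself is in doubt, these should show what pve-manager.service actually waited for (purely diagnostic, nothing is changed):

Code:
# the chain of units the service had to wait for, with timing
systemd-analyze critical-chain pve-manager.service
# everything ordered before it
systemctl list-dependencies --after pve-manager.service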
 
It looks like the devices for the pools show up too late (these are LUKS-encrypted device-mapper disks). The last batch of updates was related to LVM and devmapper, so probably something changed there and the volumes are not yet present when ZFS tries to import the pools.

Code:
Start-Date: 2017-08-02  20:51:41
Commandline: apt-get dist-upgrade
Upgrade: libdevmapper1.02.1:amd64 (2:1.02.137-pve2, 2:1.02.137-pve3), liblvm2cmd2.02:amd64 (2.02.168-pve2, 2.02.168-pve3), dmsetup:amd64 (2:1.02.137-pve2, 2:1.02.137-pve3), libdevmapper-event1.02.1:amd64 (2:1.02.137-pve2, 2:1.02.137-pve3), dmeventd:amd64 (2:1.02.137-pve2, 2:1.02.137-pve3), lvm2:amd64 (2.02.168-pve2, 2.02.168-pve3), pve-manager:amd64 (5.0-23, 5.0-29), liblvm2app2.2:amd64 (2.02.168-pve2, 2.02.168-pve3), proxmox-ve:amd64 (5.0-16, 5.0-18), pve-kernel-4.10.17-1-pve:amd64 (4.10.17-16, 4.10.17-18)
End-Date: 2017-08-02  20:53:27

Code:
# journalctl -b -u "zfs*" -u "local-fs.target" -u "pve-manager"
-- Logs begin at Tue 2017-08-08 10:04:14 CEST, end at Tue 2017-08-08 13:17:21 CEST. --
Aug 08 10:04:15 test-pve systemd[1]: Starting Import ZFS pools by cache file...
Aug 08 10:04:19 test-pve zpool[3535]: cannot import 'test-vm': no such pool or dataset
Aug 08 10:04:21 test-pve zpool[3535]:         Destroy and re-create the pool from
Aug 08 10:04:21 test-pve zpool[3535]:         a backup source.
Aug 08 10:04:21 test-pve systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Aug 08 10:04:21 test-pve systemd[1]: Failed to start Import ZFS pools by cache file.
Aug 08 10:04:21 test-pve systemd[1]: zfs-import-cache.service: Unit entered failed state.
Aug 08 10:04:21 test-pve systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Aug 08 10:04:21 test-pve systemd[1]: Starting Mount ZFS filesystems...
Aug 08 10:04:21 test-pve systemd[1]: Started Mount ZFS filesystems.
Aug 08 10:04:23 test-pve systemd[1]: Reached target Local File Systems.
Aug 08 10:04:23 test-pve systemd[1]: Started ZFS Event Daemon (zed).
Aug 08 10:04:23 test-pve zed[5789]: ZFS Event Daemon 0.6.5.9-pve16~bpo90 (PID 5789)
Aug 08 10:04:23 test-pve zed[5789]: Processing events since eid=0
Aug 08 10:04:31 test-pve systemd[1]: Starting ZFS file system shares...
Aug 08 10:04:32 test-pve systemd[1]: Started ZFS file system shares.
Aug 08 10:04:32 test-pve systemd[1]: Reached target ZFS startup target.
Aug 08 10:04:35 test-pve systemd[1]: Starting PVE VM Manager...
Aug 08 10:04:36 test-pve pve-manager[7110]: <root@pam> starting task UPID:proxmox:00001BD8:00000A04:59897094:startall::root@pam:
Aug 08 10:04:36 test-pve pve-manager[7110]: <root@pam> end task UPID:proxmox:00001BD8:00000A04:59897094:startall::root@pam: OK
Aug 08 10:04:36 test-pve systemd[1]: Started PVE VM Manager.
 
Ah, clearly too late ;) The pool consists of two 4TB drives (device-mapper names 4tb1 and 4tb2).

Code:
# journalctl -b -u "*cryptsetup*" -u "zfs*" -u "local-fs.target" -u "pve-manager"
-- Logs begin at Tue 2017-08-08 10:04:14 CEST, end at Tue 2017-08-08 13:17:21 CEST. --
Aug 08 10:04:15 test-pve systemd[1]: Starting Cryptography Setup for usb...
Aug 08 10:04:15 test-pve systemd[1]: cryptsetup.target: Job cryptsetup.target/start deleted to break ordering cycle starting with local-fs-pre.target/start
Aug 08 10:04:15 test-pve systemd[1]: Starting Import ZFS pools by cache file...
Aug 08 10:04:19 test-pve zpool[3535]: cannot import 'test-vms': no such pool or dataset
Aug 08 10:04:21 test-pve zpool[3535]:         Destroy and re-create the pool from
Aug 08 10:04:21 test-pve zpool[3535]:         a backup source.
Aug 08 10:04:21 test-pve systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Aug 08 10:04:21 test-pve systemd[1]: Failed to start Import ZFS pools by cache file.
Aug 08 10:04:21 test-pve systemd[1]: zfs-import-cache.service: Unit entered failed state.
Aug 08 10:04:21 test-pve systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Aug 08 10:04:21 test-pve systemd[1]: Starting Mount ZFS filesystems...
Aug 08 10:04:21 test-pve systemd[1]: Started Mount ZFS filesystems.
Aug 08 10:04:22 test-pve systemd-cryptsetup[3461]: Set cipher aes, mode xts-plain64, key size 256 bits for device /dev/disk/by-uuid/3f06dcd7-ffe1-4796-a0c2-ae011b7cc60e.
Aug 08 10:04:23 test-pve systemd[1]: Started Cryptography Setup for derived.
Aug 08 10:04:23 test-pve systemd[1]: Reached target Local File Systems.
Aug 08 10:04:23 test-pve systemd[1]: Starting Cryptography Setup for 4tb1...
Aug 08 10:04:23 test-pve systemd[1]: Starting Cryptography Setup for 4tb2...
Aug 08 10:04:23 test-pve systemd-cryptsetup[5632]: Set cipher aes, mode xts-plain64, key size 512 bits for device /dev/disk/by-uuid/96e80fe2-2187-4877-95cc-5576a3e7e27a.
Aug 08 10:04:23 test-pve systemd-cryptsetup[5634]: Set cipher aes, mode xts-plain64, key size 512 bits for device /dev/disk/by-uuid/dfe75aff-8c8c-435d-8c31-497a884b0c6d.
Aug 08 10:04:23 test-pve systemd[1]: Started ZFS Event Daemon (zed).
Aug 08 10:04:23 test-pve zed[5789]: ZFS Event Daemon 0.6.5.9-pve16~bpo90 (PID 5789)
Aug 08 10:04:23 test-pve zed[5789]: Processing events since eid=0
Aug 08 10:04:27 test-pve systemd[1]: Started Cryptography Setup for 400g1.
Aug 08 10:04:28 test-pve systemd[1]: Started Cryptography Setup for 400g2.
Aug 08 10:04:28 test-pve systemd[1]: Started Cryptography Setup for 2tb1.
Aug 08 10:04:29 test-pve systemd[1]: Started Cryptography Setup for 2tb2.
Aug 08 10:04:29 test-pve systemd[1]: Started Cryptography Setup for 4tb2.
Aug 08 10:04:29 test-pve systemd[1]: Started Cryptography Setup for 4tb1.
Aug 08 10:04:31 test-pve systemd[1]: Starting ZFS file system shares...
Aug 08 10:04:32 test-pve systemd[1]: Started ZFS file system shares.
Aug 08 10:04:32 test-pve systemd[1]: Reached target ZFS startup target.
Aug 08 10:04:35 test-pve systemd[1]: Starting PVE VM Manager...
Aug 08 10:04:36 test-pve pve-manager[7110]: <root@pam> starting task UPID:proxmox:00001BD8:00000A04:59897094:startall::root@pam:
Aug 08 10:04:36 test-pve pve-manager[7110]: <root@pam> end task UPID:proxmox:00001BD8:00000A04:59897094:startall::root@pam: OK
Aug 08 10:04:36 test-pve systemd[1]: Started PVE VM Manager.
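
for completeness, the mapping between the pool and the crypto devices can be double-checked with something like:

Code:
# which device-mapper devices back the pool
zpool status test-vm
# which systemd units set those devices up
systemctl list-units 'systemd-cryptsetup@*'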
 
Ok, and the cryptsetup.target status output differs from another machine that is still running the pre-update packages:

Pre-update:

Code:
# systemctl status cryptsetup.target
● cryptsetup.target - Encrypted Volumes
   Loaded: loaded (/lib/systemd/system/cryptsetup.target; static; vendor preset: enabled)
   Active: active since Wed 2017-07-12 20:58:28 CEST; 3 weeks 5 days ago
     Docs: man:systemd.special(7)

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Post-update:

Code:
# systemctl status cryptsetup.target
● cryptsetup.target - Encrypted Volumes
   Loaded: loaded (/lib/systemd/system/cryptsetup.target; static; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:systemd.special(7)

Aug 08 10:04:15 proxmox systemd[1]: cryptsetup.target: Job cryptsetup.target/start deleted to break ordering cycle starting with local-fs-pre.target/start

So I guess this breaks my (special) cryptsetup setup.
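
A quick way to list every cycle systemd broke on this boot:

Code:
# all ordering-cycle messages from the current boot
journalctl -b | grep -i "ordering cycle"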
 
you should be able to re-order the units (you can even use "systemctl edit" on Stretch ;)) - but in this case I would advise against using a cache file and suggest using zfs-import-scan.service instead (don't forget to set the cachefile property on your pools and to update the initramfs after deleting the zpool.cache file).
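
a hedged sketch of that switch (pool name taken from this thread; double-check the unit names shipped by your zfsutils package before relying on it):

Code:
# stop using a cache file for this pool (repeat for any other pools)
zpool set cachefile=none test-vm
rm /etc/zfs/zpool.cache
# import pools by scanning devices at boot instead of reading the cache file
systemctl disable zfs-import-cache.service
systemctl enable zfs-import-scan.service
# rebuild the initramfs so no stale zpool.cache stays embedded in it
update-initramfs -u -k all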
 
