[SOLVED] mount zfs when directory not empty

RobFantini

/etc/default/zfs should have a way to do that. Does anyone remember the option? Years ago I used to do it, but I lost my notes.
 
Hi,

it is never possible to mount into a non-empty directory.

You can change the mount point, but I would clear the directory.
 
Yeah, I did clear the directory. Somehow the ZFS mounts failed at boot, yet PVE created directories in the mount point. I had not seen that in a while. Just checking if there is another solution; I do not want to automate rm -fr /tank/* ;-)
 

Yes, there is ;) You can set the "mkdir" or "is_mountpoint" options for the directory storage; see "man pvesm" for details.

edit: just realized those are only available in git so far - they should hit no-subscription soon. See bug #1012
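
Once available, a directory storage entry in /etc/pve/storage.cfg could then look roughly like this (storage name, path and content types here are only placeholders):
Code:
dir: tank-dir
        path /tank/pve
        content images,rootdir
        mkdir 0
        is_mountpoint 1

With mkdir disabled and is_mountpoint set, PVE won't create the directory structure itself and will treat the storage as unavailable until the path is actually mounted.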
 
My home PVE system is unable to boot due to ZFS mount issues or something at boot.
The system also has user desktop software. My guess is that a bug related to the init system changes, or something I did wrong, is hitting the system. This is an old system, and the newer ones never have desktop software.

So we'll just reinstall, keep off all desktop software, and keep the ZFS pool. Export/import is so easy.
 
You can use overlay=on (a ZFS setting), which reverts the mount behavior to the standard Linux one, allowing mounting in non-empty directories.
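
For example, on the pool from this thread it would be something like:
Code:
# allow the dataset to mount over a non-empty directory
zfs set overlay=on tank
zfs get overlay tank

Note that overlay only hides whatever was already in the directory underneath; it does not clean it up.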
 
thanks for the replies.

The system had an LSI RAID card with ZFS. There was a bug on an update; I do not know the specifics yet.

So I changed the card to an LSI IT-mode one, booted a ZFS rescue USB, and ran zpool import, a scrub, and zpool export.
That did not fix the issue.

I am back to my original issue, which I have not described well yet.

The system fails to boot past runlevel 1.

The errors on the console are systemd related:
Code:
systemd-udev-settle
lvm2-activation-early.service/start

I searched and found some 2011 bugs related to that. still searching for a solution.

I enter the root password and find:

1- zpool tank, which has most of my ZFS filesystems, is not mounted.
2- the directories are empty [one time, many boots ago, they were not].
3- zfs mount -a works to mount all the ZFS filesystems

Still working on this. Any suggestions are welcome.
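
One thing that might narrow it down is looking at what systemd is still waiting on and at the boot logs of the two units from the console errors, e.g.:
Code:
# jobs systemd is still waiting on
systemctl list-jobs
# boot-time logs for the units shown on the console
journalctl -b -u systemd-udev-settle.service -u lvm2-activation-early.service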
 
The system is working now. During the debug process I had removed a disk used for extra backups; after removing its fstab entry, the system booted.

When I had the original issue, that disk was installed... I think changing to the LSI IT-mode card and the zpool export/import/scrub fixed the big problem, but I am not certain as I cannot replicate the issue.
 
FWIW, I fixed this so that ZFS mounts onto non-empty directories by modifying the file:
Code:
/lib/systemd/system/zfs-mount.service

I found:
Code:
ExecStart=/sbin/zfs mount -a

And I changed it to:
Code:
ExecStart=/sbin/zfs mount -O -a

I had to do this because I wanted to use a ZFS dataset as a directory storage in Proxmox. The problem was that Proxmox creates its storage folder structure (dump, images, private, template) before ZFS mounts, so ZFS complains that the directory is not empty.

I suppose the better fix would be to change the init order so that ZFS imports and mounts before Proxmox creates its directories, but I wasn't entirely sure how to change that order.
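
As an aside, the same -O change can be kept out of /lib (where a package update may overwrite it) by putting it in a systemd drop-in instead; a minimal sketch:
Code:
# "systemctl edit zfs-mount.service" opens an override file at
# /etc/systemd/system/zfs-mount.service.d/override.conf; put this in it:
[Service]
ExecStart=
ExecStart=/sbin/zfs mount -O -a

The empty ExecStart= line clears the packaged command before the replacement is added.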
 
See Comment 7 in bug entry #1012[1]

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=1012

PS: @SirMaster: I recommend undoing your change once you have added the options mentioned in the above link, to keep your ZFS setup "cleaner".

I do not mind that ZFS and PVE are improving, but something changed in the system settings that exposed a bug in my ZFS setup.

For instance, my system became non-bootable.
Code:
pve9  /tank/lxc # zfs mount -a
cannot mount '/tank': directory is not empty
pve9  /tank/lxc # zfs list|grep tank
tank                                     10.3T  3.78T   192K  /tank
tank/bkup                                 208K  3.78T   208K  /bkup
tank/home                                 721G  3.78T   721G  /home
tank/ht                                  8.16T  3.78T  8.16T  /ht
tank/kvm                                  192K  3.78T   192K  /tank/kvm
tank/lxc                                  183G  3.78T   208K  /tank/lxc
tank/lxc/subvol-2200-disk-1              1.50G  4.07G   949M  /tank/lxc/subvol-2200-disk-1
tank/lxc/subvol-2209-disk-1              17.9G  41.3G  8.71G  /tank/lxc/subvol-2209-disk-1
tank/lxc/subvol-2214-disk-1               964M  4.10G   919M  /tank/lxc/subvol-2214-disk-1
tank/lxc/subvol-2217-disk-1              51.3G   309G  10.8G  /tank/lxc/subvol-2217-disk-1
tank/lxc/subvol-2219-disk-1              1.91G  2.96G  1.04G  /tank/lxc/subvol-2219-disk-1
tank/lxc/subvol-2224-disk-1               192K  4.00G   192K  /tank/lxc/subvol-2224-disk-1
tank/lxc/subvol-2227-disk-1              3.53G  2.47G  3.53G  /tank/lxc/subvol-2227-disk-1
tank/lxc/subvol-2228-disk-1              1.99G  2.65G  1.35G  /tank/lxc/subvol-2228-disk-1
tank/lxc/subvol-2244-disk-1               102G   123G  39.4G  /tank/lxc/subvol-2244-disk-1
tank/lxc/subvol-2249-disk-1              1.63G  2.37G  1.63G  /tank/lxc/subvol-2249-disk-1
tank/pve                                 1.04T  3.78T  1.04T  /pve
tank/pve-zsync                            167G  3.78T   208K  /tank/pve-zsync
tank/pve-zsync/15Minutes                  192K  3.78T   192K  /tank/pve-zsync/15Minutes
tank/pve-zsync/Daily                     3.49G  3.78T   192K  /tank/pve-zsync/Daily
tank/pve-zsync/Daily/subvol-2200-disk-1  1.47G  3.78T   993M  /tank/pve-zsync/Daily/subvol-2200-disk-1
tank/pve-zsync/Daily/subvol-2214-disk-1   957M  3.78T   919M  /tank/pve-zsync/Daily/subvol-2214-disk-1
tank/pve-zsync/Daily/subvol-2265-disk-1  1.09G  3.78T  1.09G  /tank/pve-zsync/Daily/subvol-2265-disk-1
tank/pve-zsync/Monthly                    192K  3.78T   192K  /tank/pve-zsync/Monthly
tank/pve-zsync/Weekly                     192K  3.78T   192K  /tank/pve-zsync/Weekly
tank/pve-zsync/subvol-2100-disk-1         554M  3.78T   552M  /tank/pve-zsync/subvol-2100-disk-1
tank/pve-zsync/subvol-2217-disk-1        51.1G  3.78T  10.9G  /tank/pve-zsync/subvol-2217-disk-1
tank/pve-zsync/subvol-2219-disk-1        1.85G  3.78T  1.01G  /tank/pve-zsync/subvol-2219-disk-1
tank/pve-zsync/subvol-2227-disk-1        3.53G  3.78T  3.53G  /tank/pve-zsync/subvol-2227-disk-1
tank/pve-zsync/subvol-2228-disk-1        1.96G  3.78T  1.35G  /tank/pve-zsync/subvol-2228-disk-1
tank/pve-zsync/subvol-2244-disk-1         102G  3.78T  39.5G  /tank/pve-zsync/subvol-2244-disk-1
tank/pve-zsync/subvol-2249-disk-1        1.63G  3.78T  1.63G  /tank/pve-zsync/subvol-2249-disk-1

For some reason zfs mount -a is trying to mount tank to /tank and cannot. I'll have to check whether the old system used to do that; the zpool disks were just moved over from my old system.

I'll leave the systemd settings alone. They catch a bug in my setup, but they broke the old system.

I'll change its mount point and see what is there. My guess is that /tank got some directories created in it.
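
To see why the pool dataset itself wants to mount at /tank, the relevant properties can be listed; just an inspection sketch (setting canmount=off on the pool root is one common way to stop it mounting at all, but I have not tried that here):
Code:
# where each dataset wants to mount, and whether it is allowed to
zfs get -r -o name,property,value mountpoint,canmount tank
# one possible way to keep the pool root itself unmounted:
# zfs set canmount=off tank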
 
I added this line to my directory storage entry, where the directory is on ZFS:
Code:
is_mountpoint 1

But whenever I edit any storage entry from the GUI, the 1 (that follows is_mountpoint) is removed from every storage entry when I click save.
 
Ah yes, it will still be interpreted as true, but this still needs to be changed, since the mkdir option is true by default and could thus be lost unintentionally. (Already made a patch for that.)
 
Code:
ExecStart=/sbin/zfs mount -O -a

That fixed the issue here.

Note: on our production systems we will have the same issue.

The tank dataset for some reason gets mounted on the testing system.

On the production system, /tank is mounted.
Code:
dell1  ~ # zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         40.8G   228G    96K  /rpool
rpool/ROOT                    3.75G   228G    96K  /rpool/ROOT
rpool/ROOT/pve-1              3.75G   228G  3.75G  /
rpool/swap                    37.1G   261G  4.68G  -
tank                           819G  6.22T  83.3G  /tank
tank/bkup                      717G  6.22T   717G  /bkup
tank/kvm                       192K  6.22T   192K  /tank/kvm
tank/lxc                      16.6G  6.22T   208K  /tank/lxc
tank/lxc/subvol-100-disk-1    1.34G  6.66G  1.34G  /tank/lxc/subvol-100-disk-1
tank/lxc/subvol-110-disk-1    2.05G  5.95G  2.05G  /tank/lxc/subvol-110-disk-1
tank/lxc/subvol-116-disk-1    1.30G  4.70G  1.30G  /tank/lxc/subvol-116-disk-1
tank/lxc/subvol-12101-disk-1  1.69G  2.31G  1.69G  /tank/lxc/subvol-12101-disk-1
tank/lxc/subvol-3032-disk-1   1.56G  8.45G  1.55G  /tank/lxc/subvol-3032-disk-1
tank/lxc/subvol-3108-disk-1   1.57G  6.43G  1.57G  /tank/lxc/subvol-3108-disk-1
tank/lxc/subvol-3110-disk-1   1.49G  6.51G  1.49G  /tank/lxc/subvol-3110-disk-1
tank/lxc/subvol-3945-disk-1   1.14G  4.86G  1.14G  /tank/lxc/subvol-3945-disk-1
tank/lxc/subvol-4444-disk-1   2.91G  11.1G  2.91G  /tank/lxc/subvol-4444-disk-1
tank/lxc/subvol-9999-disk-1   1.56G  3.44G  1.56G  /tank/lxc/subvol-9999-disk-1
tank/pve                       192K  6.22T   192K  /tank/pve

The only contents seem to be directories for the other ZFS mount points:
Code:
dell1  ~ # ls /tank
bkup/  kvm/  lxc/  pve/

Production version:
pve-manager/4.3-1/e7cdc165 (running kernel: 4.4.13-2-pve)

Testing:
pve-manager/4.3-3/557191d3 (running kernel: 4.4.19-1-pve)

Unless I am wrong, if the 4.3-3 zfs/systemd code is used in production, there will be start-up issues.
 
For posterity ...
I ran into this exact same thing after upgrading from 5.4 to 6.0.
Editing
Code:
/lib/systemd/system/zfs-mount.service
did the trick.
 
Hmm...
Code:
/lib/systemd/system/zfs-mount.service
still contains the edit,
Code:
ExecStart=/sbin/zfs mount -O -a

Yet, after reboots of the host, the pool doesn't get mounted.
It does mount fine when I ssh into the host and do a
Code:
zfs mount -O -a

How do I troubleshoot this further so that the ZFS pool mounts after reboot, allowing containers to auto-start?
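
For reference, the obvious things to look at would be whether the pool was imported at boot at all and what the ZFS units logged (pool name tank and the stock zfs-import-cache.service assumed):
Code:
# was the pool even imported during boot?
zpool status
systemctl status zfs-import-cache.service zfs-mount.service
journalctl -b -u zfs-import-cache.service -u zfs-mount.service
# the import at boot relies on the pool being recorded in the cache file
zpool get cachefile tank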
 
No, I didn't... but my issue has evolved. I cleaned up the ZFS mount point, and it is now empty. Unfortunately and strangely, the pool _still_ won't mount at reboot.
However, it now _does_ mount when I manually issue the command "zfs mount -a" (without the overlay parameter).
...go figure...
 
