[SOLVED] ZFS pool does not mount at boot-time

Coffeeri

Active Member
Jun 8, 2019
Hello!
I am having problems with two pools that do not get mounted at boot.

The setup:
I have two ZFS pools, each with a dataset used for a different set of backups:
  • TANK/backup
  • GREEN/backup

On top of those I configured two directory storages:

Code:
# extract from
# cat /etc/pve/storage.cfg

zfspool: GREEN
        pool GREEN
        content images,rootdir
        nodes pve

dir: BACKUP_GREEN
        path /green_backup
        content backup
        maxfiles 5
        shared 0

zfspool: TANK
        pool TANK
        content images,rootdir
        nodes pve

dir: BACKUP_TANK
        path /tank_backup
        content backup
        maxfiles 5
        shared 0
Code:
GREEN                         mounted     yes                            -
GREEN                         mountpoint  /GREEN                         default
GREEN/backup                  mounted     no                             -
GREEN/backup                  mountpoint  /green_backup                  local
TANK                          mounted     no                             -
TANK                          mountpoint  /TANK                          default
TANK/backup                   mounted     no                             -
TANK/backup                   mountpoint  /tank_backup                   local
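(The exact command used for this listing isn't shown; one way to reproduce this kind of overview of mount state and mountpoints is a `zfs get` query on the mount-related properties:)

Bash:
# show mount state and mountpoint for all ZFS filesystems
zfs get -t filesystem mounted,mountpoint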
My guess at the current problem: Proxmox creates the directories /tank_backup and /green_backup before the datasets can be mounted. Because of that the mountpoints are never empty, so `zfs mount -a` does not work, while `zfs mount -O -a` does. I want a clean way to have them mounted at boot time.
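As a quick check of that guess, this is roughly how the situation can be confirmed and worked around at runtime (a sketch using the dataset and path names from the config above; note that the overlay mount is only a one-off workaround, it does not fix the boot ordering):

Bash:
# if the mountpoint directories already contain anything, a normal mount fails
ls -lA /tank_backup /green_backup

# refuses to mount datasets whose mountpoint directory is not empty
zfs mount -a

# -O overlay-mounts on top of the non-empty directories
zfs mount -O -a

# verify that the backup datasets are mounted now
zfs get mounted TANK/backup GREEN/backup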

Thanks for your help!
 
I hope the Proxmox team will correct me if this is wrong.

In /lib/systemd/system/pvestatd.service, make pvestatd start after the ZFS mounts by extending the ordering line to:

After=pve-cluster.service zfs-mount.service
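If you prefer not to edit the packaged unit file (a package update may overwrite it), the same ordering can be added with a standard systemd drop-in. A sketch; the drop-in file name here is arbitrary, only the directory and the [Unit]/After= mechanism are standard systemd:

Bash:
# create an override instead of touching /lib/systemd/system/pvestatd.service
mkdir -p /etc/systemd/system/pvestatd.service.d
cat > /etc/systemd/system/pvestatd.service.d/wait-for-zfs.conf <<'EOF'
[Unit]
After=zfs-mount.service
EOF

# pick up the new drop-in
systemctl daemon-reload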
 
set the properties:
`is_mountpoint` to 1 and `mkdir` to 0 for both dir-storages (BACKUP_TANK and BACKUP_GREEN) in `/etc/pve/storage.cfg`
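Applied to the storage.cfg from the first post, the two dir entries would then look roughly like this (a sketch; only the last two lines of each entry are new):

Code:
dir: BACKUP_GREEN
        path /green_backup
        content backup
        maxfiles 5
        shared 0
        is_mountpoint 1
        mkdir 0

dir: BACKUP_TANK
        path /tank_backup
        content backup
        maxfiles 5
        shared 0
        is_mountpoint 1
        mkdir 0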
 
set the properties:
`is_mountpoint` to 1 and `mkdir` to 0 for both dir-storages (BACKUP_TANK and BACKUP_GREEN) in `/etc/pve/storage.cfg`
I saw this in another thread and tried it. It didn't work... I figured it out, though: I just unmounted the dirs and pools and deleted the leftover mountpoint directories manually with
Code:
rm -rf /tank_backup
rm -rf /green_backup
...then did a reboot, and it works!
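For anyone ending up in the same state, the sequence described above looks roughly like this (a sketch using the pool, dataset, and path names from this thread; make sure the directories really contain nothing but the stale mountpoints before removing them):

Bash:
# unmount anything that is currently (overlay-)mounted there
zfs unmount TANK/backup 2>/dev/null
zfs unmount GREEN/backup 2>/dev/null

# the directories should now only contain leftovers created before the mount
ls -lA /tank_backup /green_backup

# remove the prematurely created mountpoint directories
rm -rf /tank_backup /green_backup

# remount; after a reboot this should happen automatically again
zfs mount -a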
Nice, thank you. I'll mark it as solved.
 
I hope the Proxmox team will correct me if this is wrong.

In /lib/systemd/system/pvestatd.service, make pvestatd start after the ZFS mounts by extending the ordering line to:

After=pve-cluster.service zfs-mount.service

This resolved my issue. Shouldn't this ordering be the default?

set the properties:
`is_mountpoint` to 1 and `mkdir` to 0 for both dir-storages (BACKUP_TANK and BACKUP_GREEN) in `/etc/pve/storage.cfg`

This was my location in my cluster:
Bash:
root@dell-precision-t7500:~# find / -name 'storage.cfg'
/etc/pve/storage.cfg
root@dell-precision-t7500:~#

Can these be made into checkboxes in the GUI?
 
Hello,

May I revive this thread?

I just stumbled upon the same issue. While there are workarounds (manipulating the service file or adding two options to the storage config), this does not feel like a real fix.

Would it be possible to either ship a changed service file upstream and/or add matching checkboxes to the GUI, as suggested by @RabidPhilbrick?

Thank you very much.

Regards
Matthias