[SOLVED] Zpools not mounting on boot

Coffeeri

Member
Jun 8, 2019
Hello!
I had a problem with dirs which didn't mount on boot; that is fixed. But for a couple of weeks now I have had mounting problems with two zpools: BLACK and TANK.
At the moment I mount them manually with `zfs mount -a -O` after every reboot...

I am on the newest PVE 6. Can anyone relate or give me advice on how to fix this?

Also, I wanted to check my dedup ratios, because `zpool list` reports dedup 1.00x on all zpools. I wanted to verify with `zdb -S <pool>`. Sadly, only rpool is recognized by zdb, no other pool - does anyone know why?
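Side note on the zdb behaviour (a hedged sketch - TANK is only used here as an example pool name): zdb consults /etc/zfs/zpool.cache by default, so pools that are not recorded in that file are reported as not found. The -e and -U options work around this.

Code:
# -e makes zdb read the pool labels from disk instead of the cache file
zdb -eS TANK
# -U points zdb at an explicit cache file
zdb -S -U /etc/zfs/zpool.cache rpool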

Yeah, I just started scrubbing, so don't mind the scrub notifications.
pve-manager/6.0-5/f8a710d7 (running kernel: 5.0.18-1-pve)
pool: BLACK
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub in progress since Thu Aug 22 13:55:18 2019
162G scanned at 236M/s, 35.5G issued at 51.7M/s, 162G total
0B repaired, 21.86% done, 0 days 00:41:52 to go
config:

  NAME                        STATE     READ WRITE CKSUM
  BLACK                       ONLINE       0     0     0
    mirror-0                  ONLINE       0     0     0
      wwn-0x50014ee206e701db  ONLINE       0     0     0
      wwn-0x50014ee25c3be86b  ONLINE       0     0     0

errors: No known data errors

pool: GREEN
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub in progress since Thu Aug 22 13:55:00 2019
506G scanned at 718M/s, 71.9G issued at 102M/s, 506G total
0B repaired, 14.21% done, 0 days 01:12:31 to go
config:

  NAME                        STATE     READ WRITE CKSUM
  GREEN                       ONLINE       0     0     0
    mirror-0                  ONLINE       0     0     0
      wwn-0x50014ee2b0cd817d  ONLINE       0     0     0
      wwn-0x50014ee206639ffc  ONLINE       0     0     0

errors: No known data errors

pool: TANK
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub in progress since Thu Aug 22 13:55:08 2019
2.39T scanned at 3.43G/s, 93.9G issued at 135M/s, 3.98T total
0B repaired, 2.30% done, 0 days 08:23:49 to go
config:

  NAME                        STATE     READ WRITE CKSUM
  TANK                        ONLINE       0     0     0
    mirror-0                  ONLINE       0     0     0
      wwn-0x50014ee210e2c4ef  ONLINE       0     0     0
      wwn-0x50014ee2bb8dcc44  ONLINE       0     0     0
    mirror-1                  ONLINE       0     0     0
      wwn-0x50014ee2bba425de  ONLINE       0     0     0
      wwn-0x50014ee210f63778  ONLINE       0     0     0

errors: No known data errors

pool: rpool
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0B in 0 days 00:04:25 with 0 errors on Thu Aug 22 13:59:39 2019
config:

  NAME        STATE     READ WRITE CKSUM
  rpool       ONLINE       0     0     0
    mirror-0  ONLINE       0     0     0
      sda3    ONLINE       0     0     0
      sdb3    ONLINE       0     0     0

errors: No known data errors
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.

POOL  FEATURE
---------------
BLACK
      encryption
      project_quota
      device_removal
      obsolete_counts
      zpool_checkpoint
      spacemap_v2
      allocation_classes
      resilver_defer
      bookmark_v2
GREEN
      encryption
      project_quota
      device_removal
      obsolete_counts
      zpool_checkpoint
      spacemap_v2
      allocation_classes
      resilver_defer
      bookmark_v2
TANK
      encryption
      project_quota
      device_removal
      obsolete_counts
      zpool_checkpoint
      spacemap_v2
      allocation_classes
      resilver_defer
      bookmark_v2
rpool
      encryption
      project_quota
      device_removal
      obsolete_counts
      zpool_checkpoint
      spacemap_v2
      allocation_classes
      resilver_defer
      bookmark_v2
dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1

zfspool: GREEN
    pool GREEN
    content rootdir,images
    nodes pve

dir: BACKUP_GREEN
    path /green_backup
    content backup
    is_mountpoint 1
    maxfiles 5
    mkdir 0
    shared 0

zfspool: TANK
    pool TANK
    content rootdir,images
    nodes pve

dir: BACKUP_TANK
    path /tank_backup
    content backup
    is_mountpoint 1
    maxfiles 5
    mkdir 0
    shared 0

dir: Media_TANK
    path /TANK/media
    content snippets,iso
    is_mountpoint 1
    mkdir 0
    shared 0

zfspool: BLACK
    pool BLACK
    content rootdir,images
    nodes pve

Thanks for any help!
 
Hi,
please have a look here: https://forum.proxmox.com/threads/z...t-reboot-since-upgrade-to-6.56857/post-262091 - maybe this helps solve your problem.
Thank you for your response, Chris.
I did this yesterday. I didn't have to re-import my pools, but the zfs-import-scan.service was (and is again) indeed inactive. After that reboot (init 6) everything was mounted, but when I rebooted today, I had the same issue as before.

Code:
# systemctl status zfs-import-scan.service
zfs-import-scan.service - Import ZFS pools by device scanning
   Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
Condition: start condition failed at Thu 2019-08-22 09:05:36 CEST; 8h ago
     Docs: man:zpool(8)

Aug 22 09:05:36 pve systemd[1]: Condition check resulted in Import ZFS pools by device scanning being skipped.
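For context: the "Condition check ... skipped" message typically means the unit's start condition was not met - the stock zfs-import-scan.service only runs when /etc/zfs/zpool.cache is absent or empty. A quick way to inspect this, assuming an unmodified unit file:

Code:
# show the unit file including its start condition on /etc/zfs/zpool.cache
systemctl cat zfs-import-scan.service | grep -i condition
# if the cache file exists and is non-empty, the scan service is skipped
ls -l /etc/zfs/zpool.cache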
 
Please check if the zfs and zfs-import targets are enabled and what's the status of the zfs-import-cache.service. `systemctl status zfs.target zfs-import.target zfs-import-cache.service`
 
Please check if the zfs and zfs-import targets are enabled and what's the status of the zfs-import-cache.service. `systemctl status zfs.target zfs-import.target zfs-import-cache.service`

It seems to be fine.

root@pve:~# systemctl status zfs.target zfs-import.target zfs-import-cache.service
zfs.target - ZFS startup target
Loaded: loaded (/lib/systemd/system/zfs.target; enabled; vendor preset: enabled)
Active: active since Fri 2019-08-23 07:23:15 CEST; 10h ago

Aug 23 07:23:15 pve systemd[1]: Reached target ZFS startup target.

zfs-import.target - ZFS pool import target
Loaded: loaded (/lib/systemd/system/zfs-import.target; enabled; vendor preset: enabled)
Active: active since Fri 2019-08-23 07:23:14 CEST; 10h ago

Aug 23 07:23:14 pve systemd[1]: Reached target ZFS pool import target.

zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2019-08-23 07:23:14 CEST; 10h ago
Docs: man:zpool(8)
Main PID: 1784 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
Memory: 0B
CGroup: /system.slice/zfs-import-cache.service

Aug 23 07:23:14 pve systemd[1]: Starting Import ZFS pools by cache file...
Aug 23 07:23:14 pve zpool[1784]: no pools available to import
Aug 23 07:23:14 pve systemd[1]: Started Import ZFS pools by cache file.
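One way to cross-check why the cache import reports "no pools available to import" is to dump what the current cache file actually contains (a sketch; the exact output format depends on the ZFS version):

Code:
# list the pool configurations stored in the cache file
zdb -C -U /etc/zfs/zpool.cache | grep -w name
# compare against the pools that are actually imported right now
zpool list -H -o name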
 
Do you still have a zpool.cache file present under /etc/zfs? It seems that the command of zfs-import-scan.service is not executed because of an unmet start condition. Try getting more info about that with `journalctl -u zfs-import-scan`.
 
Do you still have a zpool.cache file present under /etc/zfs? It seems that the command of zfs-import-scan.service is not executed because of an unmet start condition. Try getting more info about that with `journalctl -u zfs-import-scan`.

The zpool.cache file gets recreated after a reboot (init 6) [1]:
Code:
mv  /etc/zfs/zpool.cache  /etc/zfs/zpool.cache-
systemctl enable zfs-import-scan.service            

init 6

...

/etc/zfs/zpool.cache is redone

...

root@pve:/etc/zfs# journalctl -u zfs-import-scan
-- Logs begin at Tue 2019-08-27 21:36:25 CEST, end at Tue 2019-08-27 21:56:01 CEST. --
Aug 27 21:36:26 pve systemd[1]: Starting Import ZFS pools by device scanning...
Aug 27 21:38:17 pve systemd[1]: Started Import ZFS pools by device scanning.


+----+


root@pve:/etc/zfs# journalctl -u zfs-import-scan
-- Logs begin at Tue 2019-08-27 21:36:25 CEST, end at Tue 2019-08-27 21:56:01 CEST. --
Aug 27 21:36:26 pve systemd[1]: Starting Import ZFS pools by device scanning...
Aug 27 21:38:17 pve systemd[1]: Started Import ZFS pools by device scanning.
root@pve:/etc/zfs# systemctl status zfs-import-scan.service
● zfs-import-scan.service - Import ZFS pools by device scanning
   Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2019-08-27 21:38:17 CEST; 18min ago
     Docs: man:zpool(8)
  Process: 1791 ExecStart=/sbin/zpool import -aN -d /dev/disk/by-id -o cachefile=none (code=exited, status=0/SUCCESS)
Main PID: 1791 (code=exited, status=0/SUCCESS)

Aug 27 21:36:26 pve systemd[1]: Starting Import ZFS pools by device scanning...
Aug 27 21:38:17 pve systemd[1]: Started Import ZFS pools by device scanning.

...

after another reboot

root@pve:~# journalctl -u zfs-import-scan
-- Logs begin at Tue 2019-08-27 22:03:47 CEST, end at Tue 2019-08-27 23:26:01 CEST.
Aug 27 22:03:48 pve systemd[1]: Condition check resulted in Import ZFS pools by device scanning being skipped.
When I follow the steps at [1], all zpools are mounted correctly. When I do another reboot, the same error appears as before.



[1]
Hi,
please have a look here: https://forum.proxmox.com/threads/z...t-reboot-since-upgrade-to-6.56857/post-262091 - maybe this helps solve your problem.
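A possible way to confirm what happens on that second reboot (assuming the stock unit file): check whether zpool.cache has reappeared and whether the scan service's start condition failed because of it.

Code:
# has the cache file been recreated by the first (working) boot?
ls -l /etc/zfs/zpool.cache
# did the start condition of the scan service evaluate to "no"?
systemctl show zfs-import-scan.service -p ConditionResult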
 
When I follow the steps at [1], all zpools are mounted correctly. When I do another reboot, the same error appears as before.
Did you set cachefile=none on all pools? (Then you need to move the cache file away and run `update-initramfs -k all -u`.)

Otherwise it usually works fine if you have an up-to-date cachefile: just set the cachefile property on all pools and run `update-initramfs -k all -u`. (If not, the system sees an old/corrupted cachefile in the initramfs and the import by cache file fails, as does the import by scanning, since a cachefile is still present in the initramfs.)

hope this helps!
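A minimal sketch of the cache-file route described above, assuming all pools are currently imported (pool names are read from `zpool list` instead of being hard-coded):

Code:
# point every imported pool at the standard cache file ...
for p in $(zpool list -H -o name); do
    zpool set cachefile=/etc/zfs/zpool.cache "$p"
done
# ... then rebuild the initramfs so it ships the fresh cache file
update-initramfs -k all -u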
 
Did you set cachefile=none on all pools? (Then you need to move the cache file away and run `update-initramfs -k all -u`.)

Otherwise it usually works fine if you have an up-to-date cachefile: just set the cachefile property on all pools and run `update-initramfs -k all -u`. (If not, the system sees an old/corrupted cachefile in the initramfs and the import by cache file fails, as does the import by scanning, since a cachefile is still present in the initramfs.)

hope this helps!

I tried this, and after a second reboot BLACK and TANK are not mounted... again. :confused:

Code:
root@pve:~# zpool set cachefile=none TANK
root@pve:~# zpool set cachefile=none BLACK
root@pve:~# zpool set cachefile=none GREEN
root@pve:~# zpool set cachefile=none rpool

root@pve:~# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache-
root@pve:~# update-initramfs -k all -u
update-initramfs: Generating /boot/initrd.img-5.0.18-1-pve
update-initramfs: Generating /boot/initrd.img-4.15.18-19-pve
update-initramfs: Generating /boot/initrd.img-4.15.18-18-pve
update-initramfs: Generating /boot/initrd.img-4.15.18-16-pve
update-initramfs: Generating /boot/initrd.img-4.15.18-15-pve
update-initramfs: Generating /boot/initrd.img-4.15.18-12-pve

root@pve:~# reboot now

# GREAT EVERYTHING MOUNTED!


root@pve:~# reboot now

...

# OH WELL..

root@pve:~# zfs get mounted               
NAME                                      PROPERTY  VALUE    SOURCE
BLACK                                     mounted   no       -
BLACK/subvol-101-disk-0                   mounted   no       -
BLACK/subvol-101-disk-0@vzdump            mounted   -        -
BLACK/subvol-104-disk-0                   mounted   no       -
BLACK/subvol-104-disk-0@DockerClean       mounted   -        -
BLACK/subvol-109-disk-0                   mounted   no       -
BLACK/subvol-110-disk-0                   mounted   no       -
BLACK/subvol-111-disk-0                   mounted   no       -
BLACK/subvol-111-disk-0@Samstag           mounted   -        -
BLACK/subvol-111-disk-1                   mounted   no       -
BLACK/subvol-111-disk-1@Samstag           mounted   -        -
BLACK/subvol-112-disk-0                   mounted   no       -
BLACK/subvol-113-disk-0                   mounted   no       -
BLACK/subvol-113-disk-0@SetupWorking      mounted   -        -
GREEN                                     mounted   yes      -
GREEN@backup                              mounted   -        -
GREEN/backup                              mounted   yes      -
TANK                                      mounted   no       -
TANK@backup                               mounted   -        -
TANK/backup                               mounted   no       -
TANK/media                                mounted   no       -
TANK/subvol-102-disk-0                    mounted   no       -
TANK/subvol-102-disk-1                    mounted   no       -
rpool                                     mounted   yes      -
rpool/ROOT                                mounted   yes      -
rpool/ROOT/pve-1                          mounted   yes      -

I guess there must be some other problem.
 
Is the property 'mountpoint' maybe set to 'none' for the ones which don't get mounted? You can check with `zfs get mountpoint`.

Further, have you checked that the mountpoints are empty? Since you are able to mount with the overlay option `zfs mount -a -O`, this probably indicates that you are mounting over a non-empty mountpoint.
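A quick way to check for such leftovers (a sketch - run it while the pools are not mounted, e.g. right after a boot where the import failed, otherwise you only see the dataset contents):

Code:
# anything listed here lives in the mountpoint directories on the root
# filesystem and would block a normal (non-overlay) zfs mount
find /BLACK /TANK -mindepth 1 -maxdepth 2 | head -n 20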
 
Is the property 'mountpoint' maybe set to 'none' for the ones which don't get mounted? You can check with `zfs get mountpoint`.
Code:
root@pve:~# zfs get mountpoint,mounted
NAME                                      PROPERTY    VALUE                          SOURCE
BLACK                                     mountpoint  /BLACK                         default
BLACK                                     mounted     no                             -
BLACK/subvol-101-disk-0                   mountpoint  /BLACK/subvol-101-disk-0       default
BLACK/subvol-101-disk-0                   mounted     no                             -
BLACK/subvol-101-disk-0@vzdump            mountpoint  -                              -
BLACK/subvol-101-disk-0@vzdump            mounted     -                              -
BLACK/subvol-104-disk-0                   mountpoint  /BLACK/subvol-104-disk-0       default
BLACK/subvol-104-disk-0                   mounted     no                             -
BLACK/subvol-104-disk-0@DockerClean       mountpoint  -                              -
BLACK/subvol-104-disk-0@DockerClean       mounted     -                              -
BLACK/subvol-109-disk-0                   mountpoint  /BLACK/subvol-109-disk-0       default
BLACK/subvol-109-disk-0                   mounted     no                             -
BLACK/subvol-110-disk-0                   mountpoint  /BLACK/subvol-110-disk-0       default
BLACK/subvol-110-disk-0                   mounted     no                             -
BLACK/subvol-111-disk-0                   mountpoint  /BLACK/subvol-111-disk-0       default
BLACK/subvol-111-disk-0                   mounted     no                             -
BLACK/subvol-111-disk-0@Samstag           mountpoint  -                              -
BLACK/subvol-111-disk-0@Samstag           mounted     -                              -
BLACK/subvol-111-disk-1                   mountpoint  /BLACK/subvol-111-disk-1       default
BLACK/subvol-111-disk-1                   mounted     no                             -
BLACK/subvol-111-disk-1@Samstag           mountpoint  -                              -
BLACK/subvol-111-disk-1@Samstag           mounted     -                              -
BLACK/subvol-112-disk-0                   mountpoint  /BLACK/subvol-112-disk-0       default
BLACK/subvol-112-disk-0                   mounted     no                             -
BLACK/subvol-113-disk-0                   mountpoint  /BLACK/subvol-113-disk-0       default
BLACK/subvol-113-disk-0                   mounted     no                             -
BLACK/subvol-113-disk-0@SetupWorking      mountpoint  -                              -
BLACK/subvol-113-disk-0@SetupWorking      mounted     -                              -
GREEN                                     mountpoint  /GREEN                         default
GREEN                                     mounted     yes                            -
GREEN@backup                              mountpoint  -                              -
GREEN@backup                              mounted     -                              -
GREEN/backup                              mountpoint  /green_backup                  local
GREEN/backup                              mounted     yes                            -
TANK                                      mountpoint  /TANK                          default
TANK                                      mounted     no                             -
TANK@backup                               mountpoint  -                              -
TANK@backup                               mounted     -                              -
TANK/backup                               mountpoint  /tank_backup                   local
TANK/backup                               mounted     no                             -
TANK/media                                mountpoint  /TANK/media                    default
TANK/media                                mounted     no                             -
TANK/subvol-102-disk-0                    mountpoint  /TANK/subvol-102-disk-0        default
TANK/subvol-102-disk-0                    mounted     no                             -
TANK/subvol-102-disk-1                    mountpoint  /TANK/subvol-102-disk-1        default
TANK/subvol-102-disk-1                    mounted     no                             -
rpool                                     mountpoint  /rpool                         default
rpool                                     mounted     yes                            -
rpool/ROOT                                mountpoint  /rpool/ROOT                    default
rpool/ROOT                                mounted     yes                            -
rpool/ROOT/pve-1                          mountpoint  /                              local
rpool/ROOT/pve-1                          mounted     yes                            -
rpool/data                                mountpoint  /rpool/data                    default
rpool/data                                mounted     yes                            -
rpool/data/subvol-100-disk-0              mountpoint  /rpool/data/subvol-100-disk-0  default
rpool/data/subvol-100-disk-0              mounted     yes                            -
rpool/data/subvol-107-disk-0              mountpoint  /rpool/data/subvol-107-disk-0  default
rpool/data/subvol-107-disk-0              mounted     yes                            -
rpool/data/subvol-107-disk-0@WithJupyter  mountpoint  -                              -
rpool/data/subvol-107-disk-0@WithJupyter  mounted     -                              -
rpool/data/subvol-108-disk-0              mountpoint  /rpool/data/subvol-108-disk-0  default
rpool/data/subvol-108-disk-0              mounted     yes                            -
rpool/data/subvol-114-disk-0              mountpoint  /rpool/data/subvol-114-disk-0  default
rpool/data/subvol-114-disk-0              mounted     yes                            -
rpool/data/subvol-223-disk-1              mountpoint  /rpool/data/subvol-223-disk-1  default
rpool/data/subvol-223-disk-1              mounted     yes                            -
rpool/data/vm-103-disk-0                  mountpoint  -                              -
rpool/data/vm-103-disk-0                  mounted     -                              -
rpool/data/vm-105-disk-0                  mountpoint  -                              -
rpool/data/vm-105-disk-0                  mounted     -                              -
Further, have you checked that the mountpoints are empty? Since you are able to mount with the overlay option `zfs mount -a -O`, this probably indicates that you are mounting over a non-empty mountpoint.
They are indeed not empty. When I do
Code:
rm -rf /TANK
rm -rf /BLACK
zfs mount -a

# all mounted

reboot now
# BLACK & TANK are not mounted again..
 
On my server, /zpool contained /zpool/subvol-102-disk-1/dev and /zpool/subvol-103-disk-1/dev, resulting in a failed zfs-mount.service and the LXC containers not starting. After running 'rm -rf /zpool/*', zfs-mount.service was able to work properly.
Thanks for all the hints in this thread.
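If someone hits the same symptom, the failed mounts usually show up in the zfs-mount.service journal (exact messages vary between ZFS versions):

Code:
systemctl status zfs-mount.service
# the journal of the current boot typically names the mountpoint that could not be used
journalctl -b -u zfs-mount.service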
 
They are indeed not empty. When I do
Code:
rm -rf /TANK
rm -rf /BLACK
zfs mount -a

# all mounted

reboot now
# BLACK & TANK are not mounted again..
Okay, this is strange... Are the mountpoints empty after the second reboot?
 
Okay, this is strange... Are the mountpoints empty after the second reboot?
Yep, the mountpoints /TANK and /BLACK do exist, but without any content inside them (no files, no folders - completely empty).
 
Any idea how I can debug this?

Edit:
I guess I solved it by:

Bash:
root@pve:~# zpool set cachefile=/etc/zfs/zpool.cache TANK
root@pve:~# zpool set cachefile=/etc/zfs/zpool.cache BLACK

Great! I mark this as solved! Thanks for the help!
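A possible follow-up to make the fix stick across reboots and kernel updates, based on the advice earlier in this thread (a hedged sketch, not something reported by the original poster):

Bash:
# confirm every pool now points at the standard cache file
zpool get cachefile
# rebuild the initramfs so the boot environment carries the up-to-date cache file
update-initramfs -k all -u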
 