[SOLVED] Upgrade to 6.3-6 breaks ZFS auto mount on reboot

TheBokke

Member
Hi All, hoping you might be able to help solve an issue I've got. My Proxmox host has a couple of ZFS pools: a root pool (rpool) and a storage pool (Storage1). My VMs and containers run on the root pool. The Storage1 pool provides shared storage for the VMs and containers (including Docker in LXC) and also serves as an SMB store for my network.

I upgraded the host via the dashboard to the latest version, 6.3-6 (previously 6.3-2, I believe); the upgrade included an update for ZFS too. I rebooted the machine to load the new kernel. After the reboot, Docker would not start properly due to a permissions issue, and after some digging I found that my ZFS storage pool had not mounted correctly on reboot. ZFS mounts the Storage pool to /mnt/Storage1 and I can see that the mount point is there,

zfs get mounted shows the pool as mounted, but the directories are empty if I navigate there and run ls -l.

zpool status shows the pool as healthy and my data is still on the drives; I just can't access it.

I was able to manually mount the pool again using the following sequence. (I had to stop the PVE services first, as they seem to auto-create the mount points; stopping them allowed me to manually mount the pool, after which it worked.)

Code:
# zfs get mounted
# systemctl stop pve-cluster ; systemctl stop pvedaemon ; systemctl stop pveproxy ; systemctl stop pvestatd
# zfs unmount Storage1
# zfs get mounted
# zfs mount Storage1
# zfs get mounted
# systemctl start pvestatd ; systemctl start pveproxy ; systemctl start pvedaemon ; systemctl start pve-cluster

I seem to have a failure on the zfs-import-scan service, but no idea how to debug or resolve this.

Code:
# systemctl status zfs-import-scan.service
● zfs-import-scan.service - Import ZFS pools by device scanning
   Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; enabled; vendor
   Active: inactive (dead)
Condition: start condition failed at Sat 2021-03-20 02:15:44 AEDT; 3 days ago
           └─ ConditionFileNotEmpty=!/etc/zfs/zpool.cache was not met
     Docs: man:zpool(8)

Mar 20 02:15:44 pve1 systemd[1]: Condition check resulted in Import ZFS pools by

Whilst the manual unmount/mount process got the pool back, I need it to auto-mount after a reboot.

Any assistance to resolve is greatly appreciated.
 

Stoiko Ivanov

Proxmox Staff Member
please post `pveversion -v` (and make sure you have the latest updates installed - we had some improvements to zfs-mounting recently)

systemctl status zfs-import-scan.service
this service is not starting because you have a cachefile:
ConditionFileNotEmpty=!/etc/zfs/zpool.cache was not met
What's the contents of the cache-file?
strings /etc/zfs/zpool.cache

I hope this helps!
 

TheBokke

Hello Stoiko, thank you very much for the reply.

Stoiko Ivanov said:
please post `pveversion -v` (and make sure you have the latest updates installed - we had some improvements to zfs-mounting recently)

I have just updated the host to the latest PVE version, and after a reboot I still have the issue: I have to manually mount the ZFS pool again to be able to access my shares, the LXC Docker containers, etc.

Code:
# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.13-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

Stoiko Ivanov said:
What's the contents of the cache-file?
strings /etc/zfs/zpool.cache

I ran the strings command after manually mounting the pool following a reboot; the output is copied below.

Code:
# strings /etc/zfs/zpool.cache
Backup1
version
name
Backup1
state
pool_guid
errata
hostid
hostname
pve1
com.delphix:has_per_vdev_zaps
vdev_children
vdev_tree
type
root
guid
create_txg
children
type
disk
guid
path
/dev/disk/by-id/ata-WDC_WD120EMFZ-11A6JA0_Z2K9HYLT-part1
devid
ata-WDC_WD120EMFZ-11A6JA0_Z2K9HYLT-part1
phys_path
pci-0000:04:00.0-sas-exp0x5003048000aef57f-phy22-lun-0
vdev_enc_sysfs_path
/sys/class/enclosure/0:0:5:0/Slot 11
whole_disk
metaslab_array
metaslab_shift
ashift
asize
is_log
create_txg
com.delphix:vdev_zap_leaf
com.delphix:vdev_zap_top
features_for_read
com.delphix:hole_birth
com.delphix:embedded_data
Storage1
version
name
Storage1
state
pool_guid
errata
hostid
hostname
pve1
com.delphix:has_per_vdev_zaps
vdev_children
vdev_tree
type
root
guid
create_txg
children
type
raidz
guid
nparity
metaslab_array
metaslab_shift
ashift
asize
is_log
create_txg
com.delphix:vdev_zap_top
children
type
disk
guid
path
/dev/disk/by-id/ata-WDC_WD8001FFWX-68J1UN0_R5GBH79V-part1
devid
ata-WDC_WD8001FFWX-68J1UN0_R5GBH79V-part1
phys_path
pci-0000:04:00.0-sas-exp0x5003048000aef57f-phy13-lun-0
vdev_enc_sysfs_path
/sys/class/enclosure/0:0:7:0/Slot 02
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
path
/dev/disk/by-id/ata-WDC_WD8001FFWX-68J1UN0_R6G6768Y-part1
devid
ata-WDC_WD8001FFWX-68J1UN0_R6G6768Y-part1
phys_path
pci-0000:04:00.0-sas-exp0x5003048000aef57f-phy19-lun-0
vdev_enc_sysfs_path
/sys/class/enclosure/0:0:7:0/Slot 08
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
path
/dev/disk/by-id/ata-WDC_WD8001FFWX-68J1UN0_R6GXDTRY-part1
devid
ata-WDC_WD8001FFWX-68J1UN0_R6GXDTRY-part1
phys_path
pci-0000:04:00.0-sas-exp0x5003048000aef57f-phy17-lun-0
vdev_enc_sysfs_path
/sys/class/enclosure/0:0:7:0/Slot 06
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
path
/dev/disk/by-id/ata-WDC_WD8001FFWX-68J1UN0_R6GY0P5Y-part1
devid
ata-WDC_WD8001FFWX-68J1UN0_R6GY0P5Y-part1
phys_path
pci-0000:04:00.0-sas-exp0x5003048000aef57f-phy15-lun-0
vdev_enc_sysfs_path
/sys/class/enclosure/0:0:7:0/Slot 04
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
path
/dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_R6GYE81Y-part1
devid
ata-WDC_WD80EFZX-68UW8N0_R6GYE81Y-part1
phys_path
pci-0000:04:00.0-sas-exp0x5003048000aef57f-phy23-lun-0
vdev_enc_sysfs_path
/sys/class/enclosure/0:0:7:0/Slot 12
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
path
/dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_R6GYLNGY-part1
devid
ata-WDC_WD80EFZX-68UW8N0_R6GYLNGY-part1
phys_path
pci-0000:04:00.0-sas-exp0x5003048000aef57f-phy21-lun-0
vdev_enc_sysfs_path
/sys/class/enclosure/0:0:7:0/Slot 10
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
hC`WcY
path
/dev/disk/by-id/nvme-INTEL_SSDPEK1W060GA_PHBT8033017P064Q-part1
devid
nvme-INTEL_SSDPEK1W060GA_PHBT8033017P064Q-part1
phys_path
pci-0000:01:00.0-nvme-1
whole_disk
metaslab_array
metaslab_shift
ashift
asize
is_log
create_txg
com.delphix:vdev_zap_leaf
com.delphix:vdev_zap_top
features_for_read
com.delphix:hole_birth
com.delphix:embedded_data
rpool
version
name
rpool
state
pool_guid
errata
hostid
hostname
(none)
com.delphix:has_per_vdev_zaps
vdev_children
vdev_tree
type
root
guid
create_txg
children
type
mirror
guid
metaslab_array
metaslab_shift
ashift
asize
is_log
create_txg
com.delphix:vdev_zap_top
children
type
disk
guid
path
/dev/disk/by-id/ata-ZA480NM10001_HKQ02JRM-part3
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
}164Y
path
/dev/disk/by-id/ata-ZA480NM10001_HKQ02J2V-part3
whole_disk
create_txg
com.delphix:vdev_zap_leaf
features_for_read
com.delphix:hole_birth
com.delphix:embedded_data

Thanks once again for your help Stoiko
 
Nov 19, 2020
Hi guys!

Anything new here? I'm also on 6.3-3, and since reading the first post of this thread I have postponed updating, because I also have all my data (VMs) on a ZFS pool and I don't want to end up with a non-working system after a reboot.

Best regards!
 

Mortifier

I am having the same issue upgrading from version 5. I don't know if this is the same issue or a different one, as I also don't want to upgrade another machine until the ZFS mounting works correctly.
 

Stoiko Ivanov

ran the strings command after manually mounting the pool after reboot, output copied below.
Thanks - the cache file seems to contain all the necessary information, and the version of libpve-storage-perl should also contain the latest improvements in that area.

Could you share the journal since boot (from the instance where the pool was not imported and mounted)?
 

TheBokke

Thanks - the cache file seems to contain all the necessary information, and the version of libpve-storage-perl should also contain the latest improvements in that area.

Could you share the journal since boot (from the instance where the pool was not imported and mounted)?
Thanks for the reply Stoiko. Can you direct me to how to find the journal so I can copy and post it here, please? I'll reboot the machine and make a copy, but I must admit I don't really know where to find it.
 

TheBokke

Thanks - the cache file seems to contain all the necessary information, and the version of libpve-storage-perl should also contain the latest improvements in that area.

Could you share the journal since boot (from the instance where the pool was not imported and mounted)?
Hi @Stoiko

I still have the same issue. Can you explain which journal you would like to see and where I can find it, please? We had a power cut last night and the server auto-rebooted when power was restored, but none of the storage mounted correctly and it required a manual unmount and remount. Not ideal.

Thanks for your assistance.
 

Stoiko Ivanov

You can view the journal since boot with `journalctl -b`

I hope this helps
 

TheBokke

You can view the journal since boot with `journalctl -b`

I hope this helps
Thank you Stoiko. I'm still learning Linux, so I appreciate the help finding the journal logs.

Here are the contents of the journal from the last reboot, attached as a txt file due to the size limitation on posts. I copied what I thought might be relevant below; the rest of the logs are in the attached file.



Code:
-- Logs begin at Thu 2021-06-10 20:22:04 AEST, end at Thu 2021-06-10 20:23:46 AEST. --
Jun 10 20:22:04 pve1 kernel: Linux version 5.4.119-1-pve (build@proxmox) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP PVE 5.4.119-1 (Tue, 01 Jun 2021 15:32:00 +0200) ()
Jun 10 20:22:04 pve1 kernel: Command line: initrd=\EFI\proxmox\5.4.119-1-pve\initrd.img-5.4.119-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs


Jun 10 20:22:05 pve1 systemd[1]: Started udev Wait for Complete Device Initialization.
Jun 10 20:22:05 pve1 systemd[1]: Started Helper to synchronize boot up for ifupdown.
Jun 10 20:22:05 pve1 systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Jun 10 20:22:05 pve1 systemd[1]: Starting LVM event activation on device 230:3...
Jun 10 20:22:05 pve1 systemd[1]: Condition check resulted in Import ZFS pools by device scanning being skipped.
Jun 10 20:22:05 pve1 systemd[1]: Starting Import ZFS pools by cache file...
Jun 10 20:22:05 pve1 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jun 10 20:22:05 pve1 systemd[1]: Started LVM event activation on device 230:3.
Jun 10 20:22:15 pve1 systemd[1]: Started Import ZFS pools by cache file.
Jun 10 20:22:15 pve1 systemd[1]: Reached target ZFS pool import target.
Jun 10 20:22:15 pve1 systemd[1]: Starting Mount ZFS filesystems...
Jun 10 20:22:15 pve1 systemd[1]: Starting Wait for ZFS Volume (zvol) links in /dev...
Jun 10 20:22:15 pve1 zvol_wait[2453]: Testing 1 zvol links
Jun 10 20:22:15 pve1 zvol_wait[2453]: All zvol links are now present.
Jun 10 20:22:15 pve1 systemd[1]: Started Wait for ZFS Volume (zvol) links in /dev.
Jun 10 20:22:15 pve1 systemd[1]: Reached target ZFS volumes are ready.
Jun 10 20:22:15 pve1 systemd[1]: Started Mount ZFS filesystems.
Jun 10 20:22:15 pve1 systemd[1]: Reached target Local File Systems.
Jun 10 20:22:15 pve1 systemd[1]: Starting Preprocess NFS configuration...
Jun 10 20:22:15 pve1 systemd[1]: Starting Set console font and keymap...
Jun 10 20:22:15 pve1 systemd[1]: Starting Load AppArmor profiles...
Jun 10 20:22:15 pve1 systemd[1]: Starting Proxmox VE firewall logger...
Jun 10 20:22:15 pve1 systemd[1]: Condition check resulted in Commit a transient machine-id on disk being skipped.
Jun 10 20:22:15 pve1 systemd[1]: Starting Create Volatile Files and Directories...
Jun 10 20:22:15 pve1 systemd[1]: Starting Proxmox VE Login Banner...
Jun 10 20:22:15 pve1 systemd[1]: Starting Commit Proxmox VE network changes...
Jun 10 20:22:15 pve1 systemd[1]: nfs-config.service: Succeeded.
Jun 10 20:22:15 pve1 systemd[1]: Started Preprocess NFS configuration.

My Samba shares on this machine come up empty after a reboot. ZFS lists my "Storage1" pool as mounted with no errors, but I can't see any data unless I manually unmount and then remount. Then everything works fine and I can access my Samba shares and LXC containers until the next reboot.

Hope this provides some useful information.

Thanks
 

Stoiko Ivanov

The logs would indicate that everything is correctly mounted ...

However I'm not completely clear about your setup (e.g. the Samba shares were only mentioned just now...)

to get a better overview could you:
* reboot so that everything is broken
* get the 'mounted' property of all datasets in the pool `zfs get -r mounted <pool>` (replace <pool> by the poolname)
* go in the directory of a samba share and execute: `ls -l`, `df -h .`
* fix the system the way you currently do
* repeat getting the 'mounted' property, `ls -l`, and `df -h .`

Regarding your samba-shares being empty - does restarting nmbd and smbd help for that?
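To tell whether a directory such as /mnt/Storage1/Media is a real mountpoint or just a plain directory on the parent filesystem, the `mountpoint -q` tool from util-linux gives a direct answer. Below is a portable sketch of the same check; the helper name `is_own_mount` is made up for illustration:

```shell
#!/bin/sh
# Sketch: report whether a directory is an active mountpoint, same idea as
# `mountpoint -q DIR`. A directory is a mountpoint when its device number
# differs from its parent's, or when it is its own parent (the root "/").
is_own_mount() {
    dev=$(stat -c %d "$1")
    pdev=$(stat -c %d "$1/..")
    ino=$(stat -c %i "$1")
    pino=$(stat -c %i "$1/..")
    if [ "$dev" != "$pdev" ] || [ "$ino" = "$pino" ]; then
        echo yes
    else
        echo no
    fi
}

is_own_mount /        # prints "yes": the root is always a mountpoint
d=$(mktemp -d)
is_own_mount "$d"     # prints "no": a freshly created directory is not one
rmdir "$d"
```

On the affected host, `is_own_mount /mnt/Storage1/Media` printing "no" while `zfs get mounted` says "yes" would point at exactly the mismatch being debugged here.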
 

TheBokke

The logs would indicate that everything is correctly mounted ...

However I'm not completely clear about your setup (e.g. the Samba shares were only mentioned just now...)
Thanks, that's good to know. Sorry, I should have been clearer about my setup.

My server has three pools:

rpool - local ZFS pool for boot plus VM and CT storage. Currently runs Docker in an LXC with local directory bind mounts to 'Storage1'
Storage1 - main ZFS storage pool, acts as storage for VMs and LXCs. Also contains the Samba shares for a Windows network
Backup1 - ZFS pool for backups. This pool is fully accessible after a reboot; I don't have to run a zfs unmount / mount command here.

After a reboot the Docker LXC does not auto-start, and if started manually I can't access the Docker containers (their storage is mapped to the Storage1 zpool); I also have the issue with the empty Samba shares. After a manual remount everything works fine again.

to get a better overview could you:
* reboot so that everything is broken
* get the 'mounted' property of all datasets in the pool `zfs get -r mounted <pool>` (replace <pool> by the poolname)
* go in the directory of a samba share and execute: `ls -l`, `df -h .`

I ran those commands. Whilst the Samba shares show nothing, the df -h output shows my data on the drives. It looks like I can see the data, I just can't access it directly from the directory or via SMB until I unmount and remount the zpool.

Code:
# zfs get -r mounted rpool
NAME                            PROPERTY  VALUE    SOURCE
rpool                           mounted   yes      -
rpool/ROOT                      mounted   yes      -
rpool/ROOT/pve-1                mounted   yes      -
rpool/data                      mounted   yes      -
rpool/data/subvol-201-disk-1    mounted   yes      -
rpool/data/vm-100-disk-0        mounted   -        -
rpool/data/vm-100-disk-0@Snap1  mounted   -        -

# zfs get -r mounted Storage1
NAME                      PROPERTY  VALUE    SOURCE
Storage1                  mounted   yes      -
Storage1/Backups          mounted   yes      -
Storage1/Backups/Proxmox  mounted   yes      -
Storage1/Media            mounted   yes      -
Storage1/Users            mounted   yes      -

# zfs get -r mounted Backup1
NAME     PROPERTY  VALUE    SOURCE
Backup1  mounted   yes      -

user@pve1:/mnt/Storage1/Media# ls -l
total 0

root@pve1:/mnt/Storage1/Users# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                           63G     0   63G   0% /dev
tmpfs                          13G   11M   13G   1% /run
rpool/ROOT/pve-1              414G  5.8G  408G   2% /
tmpfs                          63G   43M   63G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                          63G     0   63G   0% /sys/fs/cgroup
rpool                         408G  128K  408G   1% /rpool
rpool/ROOT                    408G  128K  408G   1% /rpool/ROOT
rpool/data                    408G  128K  408G   1% /rpool/data
rpool/data/subvol-201-disk-1  100G   13G   88G  13% /rpool/data/subvol-201-disk-1
Storage1                       29T  8.5T   20T  30% /mnt/Storage1
Backup1                        11T  8.5T  2.2T  80% /mnt/Backup1
Storage1/Media                 20T  256K   20T   1% /mnt/Storage1/Media
Storage1/Users                 20T  256K   20T   1% /mnt/Storage1/Users
Storage1/Backups               20T  256K   20T   1% /mnt/Storage1/Backups
Storage1/Backups/Proxmox       20T  256K   20T   1% /mnt/Storage1/Backups/Proxmox
/dev/fuse                      30M   20K   30M   1% /etc/pve
tmpfs                          13G     0   13G   0% /run/user/0

* fix the system the way you currently do
* repeat getting the 'mounted' property, `ls -l`, and `df -h .`

After I do a manual 'zfs unmount Storage1' and then a 'zfs mount Storage1':

Code:
user@pve1:/mnt/Storage1/Media# ls -l
total 916

drwxrwxr-x 1093 docker-media user_media      1093 May 28 19:56  Films
drwxrwxr-x    4 docker-media user_media            4 Aug  8  2020  Kids
drwxrwxr-x  439 docker-media user_media       441 May 16 14:28  Music
drwxrwxr-x  168 docker-media user_media       233 Jan 19 15:52  Pictures

root@pve1:/mnt/Storage1/Media# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                           63G     0   63G   0% /dev
tmpfs                          13G   11M   13G   1% /run
rpool/ROOT/pve-1              414G  5.8G  408G   2% /
tmpfs                          63G   34M   63G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                          63G     0   63G   0% /sys/fs/cgroup
rpool                         408G  128K  408G   1% /rpool
rpool/data                    408G  128K  408G   1% /rpool/data
rpool/ROOT                    408G  128K  408G   1% /rpool/ROOT
rpool/data/subvol-201-disk-1  100G   13G   88G  13% /rpool/data/subvol-201-disk-1
Backup1                        11T  8.5T  2.2T  80% /mnt/Backup1
tmpfs                          13G     0   13G   0% /run/user/0
Storage1                       29T  8.5T   20T  30% /mnt/Storage1
/dev/fuse                      30M   20K   30M   1% /etc/pve

Regarding your samba-shares being empty - does restarting nmbd and smbd help for that?

After a reboot, I stopped and restarted those services; it doesn't make a difference, the Samba shares are still empty.

Any ideas what may be causing this? Something with how ZFS interprets the cache file when auto-mounting the pool?

Thanks again for your help.
 

Stoiko Ivanov

After I do a manual 'ZFS unmount Storage1' and then a 'zfs mount Storage1':
That explains it (it would be even more visible if you run `df -h .` (the dot would make it only print the usage of the current mount-point)
-> after you unmount and mount Storage1 - Storage1/Media is not mounted anymore on /mnt/Storage1/Media
I'd carefully check the output of the mount command, as well as the journal - usually there is a message about a mountpoint not being empty...

I think the problem happened as follows:
* at some point Storage1/Media was not mounted - and at that point it got filled (with Films, Kids, Music, Pictures)
* from then on you cannot mount Storage1/Media anymore...

To resolve this I'd do the following:
* Make a backup! (Always, but especially when moving/deleting data)
* `mv /mnt/Storage1/Media /mnt/Storage1/Media.bck`
* `zfs mount Storage1/Media`
* compare 'Media' and 'Media.bck', and copy what you need into Media
(do the steps above for all other datasets on Storage1 as well)
* reboot
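The steps above, applied to every dataset in the pool, can be sketched as a dry run that only prints the commands for review before executing anything (the dataset names and mountpoints are the ones shown earlier in this thread; on the host, the list would come from `zfs list -H -o name -r Storage1` instead):

```shell
#!/bin/sh
# Dry-run sketch: print the rename-and-remount fix for each dataset under
# Storage1, parents before children so each mountpoint is emptied in order.
# Nothing here touches ZFS; review the output, then run the commands manually.
for ds in Storage1/Backups Storage1/Backups/Proxmox Storage1/Media Storage1/Users; do
    mp="/mnt/$ds"    # mountpoint layout used in this thread
    echo "mv $mp $mp.bck"
    echo "zfs mount $ds"
    echo "# compare $mp.bck against $mp, copy back what is missing, then delete $mp.bck"
done
```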
 

TheBokke

That explains it (it would be even more visible if you ran `df -h .` - the dot makes it print only the usage of the current mount point).
-> after you unmount and mount Storage1, Storage1/Media is no longer mounted on /mnt/Storage1/Media
I'd carefully check the command-output of the mount command, as well as the journal - usually there is a message about a mountpoint not being empty...

I think the problem happened as follows:
* at some point Storage1/Media was not mounted - and at that point it got filled (with Films, Kids, Music, Pictures)
* from then on you cannot mount Storage1/Media anymore...

To resolve this I'd do the following:
* Make a backup! (Always, but especially when moving/deleting data)
* `mv /mnt/Storage1/Media /mnt/Storage1/Media.bck`
* `zfs mount Storage1/Media`
* compare 'Media' and 'Media.bck', and copy what you need into Media
(do the steps above for all other datasets on Storage1 as well)
* reboot

Aah yes, I think you may be right! A while ago, being new to Proxmox and Linux, I messed something up on the 'Storage1' pool and had to copy back from a backup. My 'Backup1' pool is effectively an exact rsync copy of 'Storage1', and maybe I copied it back without the 'Storage1' zpool being mounted by ZFS.

I forgot to show the output of 'zfs get mounted' after I did the zfs unmount / zfs mount fix:

Code:
# zfs get -r mounted Storage1
NAME                      PROPERTY  VALUE    SOURCE
Storage1                  mounted   yes      -
Storage1/Backups          mounted   no       -
Storage1/Backups/Proxmox  mounted   no       -
Storage1/Media            mounted   no       -
Storage1/Users            mounted   no       -

In this case it says Storage1/Backups, /Media, /Users etc. are not mounted, but I can access those directories, files and SMB shares without a problem. How is that possible without them being mounted?

So to be clear with your steps to fix this:

1. I need to move all the data off Storage1 and its subdirectories to a backup location (can I do this for all directories at once?)
2. Then mount the 'Storage1' pool again,
3. and then copy back the data from the backup?

For 2. above, does /mnt/Storage1 need to be empty before mounting again? If so, can the subdirectories, which are also SMB shares, be retained, or won't this matter once they are copied back?

Can my Backup1 pool be used instead of creating a .bck copy?

Thanks again for your patience and help, Stoiko!
 

TheBokke

That explains it (it would be even more visible if you ran `df -h .` - the dot makes it print only the usage of the current mount point).
-> after you unmount and mount Storage1, Storage1/Media is no longer mounted on /mnt/Storage1/Media
I'd carefully check the command-output of the mount command, as well as the journal - usually there is a message about a mountpoint not being empty...

I think the problem happened as follows:
* at some point Storage1/Media was not mounted - and at that point it got filled (with Films, Kids, Music, Pictures)
* from then on you cannot mount Storage1/Media anymore...

To resolve this I'd do the following:
* Make a backup! (Always, but especially when moving/deleting data)
* `mv /mnt/Storage1/Media /mnt/Storage1/Media.bck`
* `zfs mount Storage1/Media`
* compare 'Media' and 'Media.bck', and copy what you need into Media
(do the steps above for all other datasets on Storage1 as well)
* reboot
Hi Stoiko

I think I've tracked it down. I followed your steps and unmounted the 'Storage1' pool; I had to stop the PVE processes and services first, as Proxmox seems to immediately auto-mount it again (confirmed by 'zfs get mounted'), leaving me no time to make any changes.

After this I found that there were directories in /mnt/Storage1/Backups:

Code:
user@pve1:/mnt/Storage1/Backups# ls -l
total 2
drwxr-xr-x 2 root root 2 Mar 20 00:08 dump
drwxr-xr-x 2 root root 2 Mar 20 00:08 images
drwxr-xr-x 2 root root 2 Mar 20 00:08 private
drwxr-xr-x 2 root root 2 Mar 20 00:08 snippets

I moved those directories and files to a backup location so the mount point was empty, and then ran the zfs mount command.

(These directories seem to be created automatically on reboot before ZFS is able to mount the pool, so the mount fails because the mount point is not empty. In the PVE console, that location is set up as directory storage for VZDump backup files, disk images, containers and snippets. It is set to enabled and active.)
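A pre-check for stray files under a dataset's mountpoint catches this situation before a reboot. Depending on the pool's `overlay` property, ZFS either refuses to mount over a non-empty directory or mounts over it and hides the stray files; either way an empty-directory check is useful. A minimal sketch, demonstrated on a temporary directory since the /mnt/Storage1 paths only exist on the affected host:

```shell
#!/bin/sh
# Sketch: report whether a would-be mountpoint is empty. Stray files here are
# exactly what blocks (or gets hidden by) the ZFS mount at boot.
check_mountpoint_empty() {
    if [ -z "$(ls -A "$1" 2>/dev/null)" ]; then
        echo empty
    else
        echo not-empty
    fi
}

d=$(mktemp -d)
check_mountpoint_empty "$d"    # prints "empty"
mkdir "$d/dump"                # simulate PVE pre-creating a storage directory
check_mountpoint_empty "$d"    # prints "not-empty": this would block `zfs mount`
rm -rf "$d"
```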

After running the 'zfs mount Storage1' command I get the following, but I can access the Samba shares (e.g. Storage1/Media, Users, etc.) even though it says these are not mounted:

Code:
user@pve1:/mnt/Storage1# zfs get mounted
NAME                            PROPERTY  VALUE    SOURCE
Backup1                         mounted   yes      -
Storage1                        mounted   yes      -
Storage1/Backups                mounted   no       -
Storage1/Backups/Proxmox        mounted   no       -
Storage1/Media                  mounted   no       -
Storage1/Users                  mounted   no       -
rpool                           mounted   yes      -
rpool/ROOT                      mounted   yes      -
rpool/ROOT/pve-1                mounted   yes      -
rpool/data                      mounted   yes      -
rpool/data/subvol-201-disk-1    mounted   yes      -
rpool/data/vm-100-disk-0        mounted   -        -
rpool/data/vm-100-disk-0@Snap1  mounted   -        -

When I individually mount them ('zfs mount Storage1/Media', 'zfs mount Storage1/Users', etc.) and they all show as mounted in ZFS as below, I can't access the Samba shares in those directories, and 'ls -l' shows no files there:

Code:
user@pve1:/mnt/Storage1# zfs mount Storage1/Media
user@pve1:/mnt/Storage1# zfs get mounted
NAME                            PROPERTY  VALUE    SOURCE
Backup1                         mounted   yes      -
Storage1                        mounted   yes      -
Storage1/Backups                mounted   yes      -
Storage1/Backups/Proxmox        mounted   yes      -
Storage1/Media                  mounted   yes      -
Storage1/Users                  mounted   yes      -
rpool                           mounted   yes      -
rpool/ROOT                      mounted   yes      -
rpool/ROOT/pve-1                mounted   yes      -
rpool/data                      mounted   yes      -

user@pve1:/mnt/Storage1/Media# ls -l
total 0

After a reboot I am back to the same issue. I suspect Proxmox is creating the backup directories under the Storage1/Backups mountpoint before the ZFS pool can be mounted. How can I fix this?
 

Stoiko Ivanov

After a reboot I am back to the same issue. I suspect Proxmox is creating the backup directories under the Storage1/Backups mountpoint before the ZFS pool can be mounted. How can I fix this?
Usually, if you want to have a directory storage on a ZFS dataset, you need to set the is_mountpoint property for the directory storage (see `man pvesm`)
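As a sketch, the directory storage entry in /etc/pve/storage.cfg could then look like this; the storage ID 'backups1' is made up for illustration, while the path and content types are the ones described earlier in the thread:

```
dir: backups1
	path /mnt/Storage1/Backups
	content backup,images,rootdir,snippets
	is_mountpoint yes
```

With is_mountpoint set, PVE considers the storage offline until the path is actually mounted, so it should no longer pre-create the dump/, images/, etc. subdirectories under an unmounted path. The same option can be set from the CLI with `pvesm set backups1 --is_mountpoint yes`.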
 

TheBokke

Usually, if you want to have a directory storage on a ZFS dataset, you need to set the is_mountpoint property for the directory storage (see `man pvesm`)
Hi Stoiko,

I followed your previous instructions: I unmounted each ZFS dataset, removed the files that were in the directories corresponding to the mount points, then mounted each dataset and copied the files back. Everything now works perfectly after each reboot!

I don't really know how that happened, but I think after the upgrade something didn't mount properly and, fearing I had lost my data, I must have copied the files back from a backup while the ZFS datasets were not mounted.

A big thanks for taking the time and helping me resolve this issue! Very much appreciated.
 
