Howdy,
I'm running 5.0 beta 2 and ran into a strange issue that started happening today: zfs-mount.service is failing at boot.
The error itself is self-explanatory, but I am not sure what is happening prior to boot that is creating the directory structure. I am unable to run any KVM guests or containers until I run "zfs mount -O -a", which fixes the problem.
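For reference, this is roughly the workaround I run after each boot (the -O flag overlay-mounts on top of the non-empty directories; the container ID at the end is just a placeholder example):
Code:
# check the failed unit, then force-mount everything with the overlay option
systemctl status zfs-mount.service
zfs mount -O -a
# verify the gdata datasets are now mounted
zfs mount | grep gdata
# then start a guest again, e.g. (102 is just a placeholder ID)
pct start 102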
The curious thing is the error for /gdata/pve in the journal:
Code:
-- Unit zfs-mount.service has begun starting up.
Jun 30 00:42:24 pve zfs[6682]: cannot mount '/gdata': directory is not empty
Jun 30 00:42:24 pve kernel: zd32: p1 p2
Jun 30 00:42:25 pve zfs[6682]: cannot mount '/gdata/pve': directory is not empty
Jun 30 00:42:26 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 00:42:26 pve systemd[1]: Failed to start Mount ZFS filesystems.
-- Subject: Unit zfs-mount.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit zfs-mount.service has failed.
--
-- The result is failed.
Jun 30 00:42:26 pve systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 30 00:42:26 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Jun 30 00:42:26 pve systemd[1]: Reached target Local File Systems.
I tried to unmount, but apparently /gdata and /gdata/pve are not mountpoints? I am confused by the error:
Code:
root@pve:/gdata/vz/template/iso# zfs umount /gdata
cannot unmount '/gdata': not a mountpoint
root@pve:/gdata/vz/template/iso# zfs umount /gdata/pve
cannot unmount '/gdata/pve': not a mountpoint
root@pve:/gdata/vz/template/iso# zpool export gdata
umount: /gdata/xenu: not mounted
cannot unmount '/gdata/xenu': umount failed
root@pve:/gdata/vz/template/iso#
# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
gdata                         9.50T  3.70T  4.97G  /gdata
gdata/data                    1.72T  3.70T  1.72T  /gdata/data
gdata/docs                    48.1G  3.70T  48.1G  /gdata/docs
gdata/fit                     28.8G  21.2G  28.8G  /gdata/fit
gdata/movies                  3.11T  3.70T  3.11T  /gdata/movies
gdata/music                   63.6G  36.4G  63.6G  /gdata/music
gdata/pve                      108G  3.70T   104K  /gdata/pve
gdata/pve/subvol-101-disk-1    466M  29.5G   466M  /gdata/pve/subvol-101-disk-1
gdata/pve/subvol-102-disk-1   21.1G  8.94G  21.1G  /gdata/pve/subvol-102-disk-1
gdata/pve/subvol-104-disk-1    605M   119G   605M  /gdata/pve/subvol-104-disk-1
gdata/pve/subvol-105-disk-1    550M  9.46G   550M  /gdata/pve/subvol-105-disk-1
gdata/pve/subvol-106-disk-1    375M  7.63G   375M  /gdata/pve/subvol-106-disk-1
gdata/pve/subvol-107-disk-1    526M  7.49G   526M  /gdata/pve/subvol-107-disk-1
gdata/pve/subvol-108-disk-1    612M  7.40G   612M  /gdata/pve/subvol-108-disk-1
gdata/pve/subvol-109-disk-1    565M  7.45G   565M  /gdata/pve/subvol-109-disk-1
gdata/pve/subvol-110-disk-1    415M  7.59G   415M  /gdata/pve/subvol-110-disk-1
gdata/pve/vm-103-disk-1       82.5G  3.77T  13.7G  -
gdata/tv                      4.42T  3.70T  4.42T  /gdata/tv
gdata/xenu                    3.22G   498G  2.39G  /gdata/xenu
rpool                         14.9G  57.3G   192K  /rpool
rpool/ROOT                    1.42G  57.3G   192K  /rpool/ROOT
rpool/ROOT/pve-1              1.42G  57.3G  1.11G  /
rpool/data                    2.30G  57.3G   192K  /rpool/data
rpool/data/subvol-106-disk-1   192K  8.00G   192K  /rpool/data/subvol-106-disk-1
rpool/data/vm-100-disk-1      2.30G  57.3G  2.30G  -
rpool/swap                    11.1G  57.3G  11.1G  -
stripe                        1.64M  45.0G   192K  /stripe
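For anyone who wants to dig in, comparing what ZFS thinks is mounted with what the kernel actually has mounted would look something like this (just a sketch of the commands, no output captured yet):
Code:
# what ZFS thinks about each dataset's mount state and target
zfs get -r mounted,mountpoint gdata
# what the kernel actually has mounted at /gdata
findmnt /gdata
mount | grep gdata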
The only weird thing I am doing is mounting host folders like these into my containers (a full config sketch follows below):
mp0: /gdata/music,mp=/media/music
mp1: /gdata/xenu/downloads,mp=/mnt/downloads
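For context, a container config with those mount points looks roughly like this; apart from the mp0/mp1 lines, everything here is a made-up example (container ID, hostname, storage name, sizes and network settings are hypothetical):
Code:
# /etc/pve/lxc/102.conf (illustrative sketch, not my actual config)
arch: amd64
hostname: media-ct
memory: 1024
rootfs: local-zfs:subvol-102-disk-1,size=30G
mp0: /gdata/music,mp=/media/music
mp1: /gdata/xenu/downloads,mp=/mnt/downloads
net0: name=eth0,bridge=vmbr0,ip=dhcp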
There is a post from 2013 referring to unmounting all ZFS filesystems and deleting the leftover folders, but I'm not sure how safe that would be, or whether it is still relevant to 5.0 beta 2.
Even if I unmount everything, rm -rf /mymountpoint/folders and leave it empty, rebooting PVE seems to recreate the files or folders somehow. Not sure what is doing it, but it seems like it may be Proxmox itself, given that it is using the subvol folders....
Code:
root@pve:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2017-06-30 01:20:29 PDT; 53s ago
Process: 6591 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
Main PID: 6591 (code=exited, status=1/FAILURE)
Jun 30 01:20:27 pve systemd[1]: Starting Mount ZFS filesystems...
Jun 30 01:20:27 pve zfs[6591]: cannot mount '/gdata': directory is not empty
Jun 30 01:20:28 pve zfs[6591]: cannot mount '/gdata/pve/subvol-102-disk-1': directory is not empty
Jun 30 01:20:28 pve zfs[6591]: cannot mount '/gdata/pve/subvol-106-disk-1': directory is not empty
Jun 30 01:20:28 pve zfs[6591]: cannot mount '/gdata/pve/subvol-109-disk-1': directory is not empty
Jun 30 01:20:29 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 01:20:29 pve systemd[1]: Failed to start Mount ZFS filesystems.
Jun 30 01:20:29 pve systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 30 01:20:29 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
What I've tried (exact commands below):
- Unmount all ZFS filesystems, then rm -rf /gdata
- Confirm no folders named /gdata are left
- Reboot and check zfs-mount.service status... it still shows failed
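Roughly the commands behind those steps (reconstructed from memory, so treat this as a sketch):
Code:
# unmount everything in the pool and remove the leftover directory tree
zfs umount -a
rm -rf /gdata
ls -ld /gdata          # confirm nothing is left
reboot
# after the reboot
systemctl status zfs-mount.service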