[SOLVED] My container/KVM storage is failing at boot

Giovanni

Howdy,

I'm running 5.0 beta 2 and ran into a strange issue that started today: zfs-mount.service is failing at boot.

The error itself is self-explanatory, but I'm not sure what runs before the mount and creates the directory structure. I can't start any KVM guests or containers until I run "zfs mount -O -a", which works around the problem.

The curious thing is what happens with /gdata/pve:

Code:
-- Unit zfs-mount.service has begun starting up.
Jun 30 00:42:24 pve zfs[6682]: cannot mount '/gdata': directory is not empty
Jun 30 00:42:24 pve kernel:  zd32: p1 p2
Jun 30 00:42:25 pve zfs[6682]: cannot mount '/gdata/pve': directory is not empty
Jun 30 00:42:26 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 00:42:26 pve systemd[1]: Failed to start Mount ZFS filesystems.
-- Subject: Unit zfs-mount.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit zfs-mount.service has failed.
--
-- The result is failed.
Jun 30 00:42:26 pve systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 30 00:42:26 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Jun 30 00:42:26 pve systemd[1]: Reached target Local File Systems.
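
For reference, the workaround itself: the -O flag asks ZFS to overlay-mount on top of the non-empty directory, so it masks the problem rather than fixing it.

Code:
# overlay-mount all datasets despite "directory is not empty" (band-aid, not a fix)
zfs mount -O -a
# confirm everything came up afterwards
zfs mount | grep gdata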

I tried to unmount, but apparently /gdata and /gdata/pve are not mountpoints? The errors confuse me.

Code:
root@pve:/gdata/vz/template/iso# zfs umount /gdata
cannot unmount '/gdata': not a mountpoint
root@pve:/gdata/vz/template/iso# zfs umount /gdata/pve
cannot unmount '/gdata/pve': not a mountpoint
root@pve:/gdata/vz/template/iso# zpool export gdata
umount: /gdata/xenu: not mounted
cannot unmount '/gdata/xenu': umount failed
root@pve:/gdata/vz/template/iso#
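
A quick sketch of how to cross-check what the kernel actually has mounted versus what ZFS thinks is mounted (pool name gdata as in my setup):

Code:
# the mount tree under /gdata as the kernel sees it
findmnt -R /gdata
# what ZFS believes about each dataset
zfs get -r mounted,mountpoint gdata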


Code:
# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
gdata                         9.50T  3.70T  4.97G  /gdata
gdata/data                    1.72T  3.70T  1.72T  /gdata/data
gdata/docs                    48.1G  3.70T  48.1G  /gdata/docs
gdata/fit                     28.8G  21.2G  28.8G  /gdata/fit
gdata/movies                  3.11T  3.70T  3.11T  /gdata/movies
gdata/music                   63.6G  36.4G  63.6G  /gdata/music
gdata/pve                      108G  3.70T   104K  /gdata/pve
gdata/pve/subvol-101-disk-1    466M  29.5G   466M  /gdata/pve/subvol-101-disk-1
gdata/pve/subvol-102-disk-1   21.1G  8.94G  21.1G  /gdata/pve/subvol-102-disk-1
gdata/pve/subvol-104-disk-1    605M   119G   605M  /gdata/pve/subvol-104-disk-1
gdata/pve/subvol-105-disk-1    550M  9.46G   550M  /gdata/pve/subvol-105-disk-1
gdata/pve/subvol-106-disk-1    375M  7.63G   375M  /gdata/pve/subvol-106-disk-1
gdata/pve/subvol-107-disk-1    526M  7.49G   526M  /gdata/pve/subvol-107-disk-1
gdata/pve/subvol-108-disk-1    612M  7.40G   612M  /gdata/pve/subvol-108-disk-1
gdata/pve/subvol-109-disk-1    565M  7.45G   565M  /gdata/pve/subvol-109-disk-1
gdata/pve/subvol-110-disk-1    415M  7.59G   415M  /gdata/pve/subvol-110-disk-1
gdata/pve/vm-103-disk-1       82.5G  3.77T  13.7G  -
gdata/tv                      4.42T  3.70T  4.42T  /gdata/tv
gdata/xenu                    3.22G   498G  2.39G  /gdata/xenu
rpool                         14.9G  57.3G   192K  /rpool
rpool/ROOT                    1.42G  57.3G   192K  /rpool/ROOT
rpool/ROOT/pve-1              1.42G  57.3G  1.11G  /
rpool/data                    2.30G  57.3G   192K  /rpool/data
rpool/data/subvol-106-disk-1   192K  8.00G   192K  /rpool/data/subvol-106-disk-1
rpool/data/vm-100-disk-1      2.30G  57.3G  2.30G  -
rpool/swap                    11.1G  57.3G  11.1G  -
stripe                        1.64M  45.0G   192K  /stripe

The only unusual thing I'm doing is bind-mounting host folders like these into my containers:
Code:
mp0: /gdata/music,mp=/media/music
mp1: /gdata/xenu/downloads,mp=/mnt/downloads
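
(For completeness, these were set with pct along these lines; the container ID and paths here are just illustrative:)

Code:
# bind-mount host directories into an LXC container (ID/paths illustrative)
pct set 101 -mp0 /gdata/music,mp=/media/music
pct set 101 -mp1 /gdata/xenu/downloads,mp=/mnt/downloads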

There is a post from 2013 referring to unmounting all ZFS datasets and deleting all the leftover folders, but I'm not sure how safe that would be, or whether it's even relevant to 5.0 beta 2.

Even if I unmount everything, rm -rf the folders under the mountpoint, and leave it empty, rebooting PVE seems to recreate all the files and folders somehow. I'm not sure what is doing it, but it looks like something in Proxmox, given that it recreates the subvol-* folders....
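
Before the next attempt I'll try stopping pvestatd first, since it's the daemon that activates storages and my prime suspect for recreating those folders (that part is an assumption on my part):

Code:
# hedged sketch: stop the storage-activation daemon before cleaning up
systemctl stop pvestatd
# unmount the affected dataset; "not a mountpoint" here just means it
# was never mounted in the first place
zfs umount gdata/pve
# remove the stale directory stubs only while the dataset is unmounted
rm -rf /gdata/pve
# remount everything and bring the daemon back
zfs mount -a
systemctl start pvestatd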

Code:
root@pve:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2017-06-30 01:20:29 PDT; 53s ago
  Process: 6591 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 6591 (code=exited, status=1/FAILURE)

Jun 30 01:20:27 pve systemd[1]: Starting Mount ZFS filesystems...
Jun 30 01:20:27 pve zfs[6591]: cannot mount '/gdata': directory is not empty
Jun 30 01:20:28 pve zfs[6591]: cannot mount '/gdata/pve/subvol-102-disk-1': directory is not empty
Jun 30 01:20:28 pve zfs[6591]: cannot mount '/gdata/pve/subvol-106-disk-1': directory is not empty
Jun 30 01:20:28 pve zfs[6591]: cannot mount '/gdata/pve/subvol-109-disk-1': directory is not empty
Jun 30 01:20:29 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 01:20:29 pve systemd[1]: Failed to start Mount ZFS filesystems.
Jun 30 01:20:29 pve systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 30 01:20:29 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.

What I've tried:
- Unmounted all ZFS datasets and ran rm -rf /gdata
- Verified no folder named /gdata was left behind
- Rebooted and checked zfs-mount.service status... it still shows failed :(
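
On the next boot I'll also pull the full unit log for just that boot:

Code:
# zfs-mount log from the current boot only
journalctl -b -u zfs-mount.service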
 
Hi
can you show the output of this command?
zfs list -r -o name,mountpoint,mounted
 
I did two tests. I just rebooted, and this was the output:

Code:
Last login: Fri Jun 30 17:27:07 2017 from 192.168.1.145
root@pve:~# uptime
 17:31:34 up 1 min,  1 user,  load average: 0.46, 0.12, 0.04
root@pve:~# zfs list -r -o name,mountpoint,mounted
NAME                          MOUNTPOINT                     MOUNTED
gdata                         /gdata                              no
gdata/data                    /gdata/data                        yes
gdata/docs                    /gdata/docs                        yes
gdata/fit                     /gdata/fit                         yes
gdata/movies                  /gdata/movies                      yes
gdata/music                   /gdata/music                       yes
gdata/pve                     /gdata/pve                         yes
gdata/pve/subvol-101-disk-1   /gdata/pve/subvol-101-disk-1       yes
gdata/pve/subvol-102-disk-1   /gdata/pve/subvol-102-disk-1        no
gdata/pve/subvol-104-disk-1   /gdata/pve/subvol-104-disk-1       yes
gdata/pve/subvol-105-disk-1   /gdata/pve/subvol-105-disk-1       yes
gdata/pve/subvol-106-disk-1   /gdata/pve/subvol-106-disk-1        no
gdata/pve/subvol-107-disk-1   /gdata/pve/subvol-107-disk-1       yes
gdata/pve/subvol-108-disk-1   /gdata/pve/subvol-108-disk-1       yes
gdata/pve/subvol-109-disk-1   /gdata/pve/subvol-109-disk-1        no
gdata/pve/subvol-110-disk-1   /gdata/pve/subvol-110-disk-1       yes
gdata/pve/vm-103-disk-1       -                                    -
gdata/tv                      /gdata/tv                          yes
gdata/xenu                    /gdata/xenu                        yes
rpool                         /rpool                             yes
rpool/ROOT                    /rpool/ROOT                        yes
rpool/ROOT/pve-1              /                                  yes
rpool/data                    /rpool/data                        yes
rpool/data/subvol-106-disk-1  /rpool/data/subvol-106-disk-1      yes
rpool/data/vm-100-disk-1      -                                    -
rpool/swap                    -                                    -
stripe                        /stripe                            yes
root@pve:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2017-06-30 17:31:01 PDT; 1min 1s ago
  Process: 6599 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 6599 (code=exited, status=1/FAILURE)

Jun 30 17:31:00 pve systemd[1]: Starting Mount ZFS filesystems...
Jun 30 17:31:00 pve zfs[6599]: cannot mount '/gdata': directory is not empty
Jun 30 17:31:01 pve zfs[6599]: cannot mount '/gdata/pve/subvol-102-disk-1': directory is not empty
Jun 30 17:31:01 pve zfs[6599]: cannot mount '/gdata/pve/subvol-106-disk-1': directory is not empty
Jun 30 17:31:01 pve zfs[6599]: cannot mount '/gdata/pve/subvol-109-disk-1': directory is not empty
Jun 30 17:31:01 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 17:31:01 pve systemd[1]: Failed to start Mount ZFS filesystems.
Jun 30 17:31:01 pve systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 30 17:31:01 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
root@pve:~#

Before the reboot it looked like this (but I had run zfs mount -O -a, so that may explain why):

Code:
root@pve:~# zfs list -r -o name,mountpoint,mounted
NAME                          MOUNTPOINT                     MOUNTED
gdata                         /gdata                              no
gdata/data                    /gdata/data                        yes
gdata/docs                    /gdata/docs                        yes
gdata/fit                     /gdata/fit                         yes
gdata/movies                  /gdata/movies                      yes
gdata/music                   /gdata/music                       yes
gdata/pve                     /gdata/pve                         yes
gdata/pve/subvol-101-disk-1   /gdata/pve/subvol-101-disk-1       yes
gdata/pve/subvol-102-disk-1   /gdata/pve/subvol-102-disk-1        no
gdata/pve/subvol-104-disk-1   /gdata/pve/subvol-104-disk-1       yes
gdata/pve/subvol-105-disk-1   /gdata/pve/subvol-105-disk-1       yes
gdata/pve/subvol-106-disk-1   /gdata/pve/subvol-106-disk-1        no
gdata/pve/subvol-107-disk-1   /gdata/pve/subvol-107-disk-1       yes
gdata/pve/subvol-108-disk-1   /gdata/pve/subvol-108-disk-1       yes
gdata/pve/subvol-109-disk-1   /gdata/pve/subvol-109-disk-1        no
gdata/pve/subvol-110-disk-1   /gdata/pve/subvol-110-disk-1       yes
gdata/pve/vm-103-disk-1       -                                    -
gdata/tv                      /gdata/tv                          yes
gdata/xenu                    /gdata/xenu                        yes
rpool                         /rpool                             yes
rpool/ROOT                    /rpool/ROOT                        yes
rpool/ROOT/pve-1              /                                  yes
rpool/data                    /rpool/data                        yes
rpool/data/subvol-106-disk-1  /rpool/data/subvol-106-disk-1      yes
rpool/data/vm-100-disk-1      -                                    -
rpool/swap                    -                                    -
stripe                        /stripe                            yes
root@pve:~# uptime
 17:27:29 up 16:00,  1 user,  load average: 0.76, 0.53, 0.44

Digging a bit deeper here.

/etc/pve/storage.cfg
Code:
dir: local
        disable
        path /var/lib/vz
        content backup,iso,vztmpl
        maxfiles 0
        shared 0

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

dir: gdata-dump
        path /gdata/vz
        content iso,vztmpl,backup
        maxfiles 0
        shared 0

zfspool: gdata-zfs
        pool gdata/pve
        content images,rootdir
        sparse 0

I'm going to add 'mkdir 0' and 'is_mountpoint 1' to the gdata-zfs section, since that storage seems to be the problem... and reboot again.
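
For anyone following along, a sketch of the idea in /etc/pve/storage.cfg. Note that mkdir and is_mountpoint are documented for dir storages, so strictly speaking they would go on the gdata-dump section rather than the zfspool one (my assumption), and since /gdata/vz is a plain directory inside the gdata dataset, mkdir 0 is the part that matters here:

Code:
dir: gdata-dump
        path /gdata/vz
        content iso,vztmpl,backup
        maxfiles 0
        shared 0
        mkdir 0

With mkdir 0, PVE should no longer pre-create /gdata/vz (and with it /gdata) at boot, so zfs-mount stops finding a non-empty /gdata.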
 
