'zfs-mount.service failed' after latest update, how do I fix?

jsalas424

Active Member
Jul 5, 2020
I was updating to PVE 6.3-2 and saw this error pop up:

Code:
Setting up zfsutils-linux (0.8.5-pve1) ...
zfs-import-scan.service is a disabled or a static unit not running, not starting it.
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details.

root@TracheServ:/# journalctl -xe
-- A start job for unit pvesr.service has begun execution.
--
-- The job identifier is 1124072.
Nov 27 10:06:01 TracheServ systemd[1]: pvesr.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has successfully entered the 'dead' state.
Nov 27 10:06:01 TracheServ systemd[1]: Started Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished successfully.
--
-- The job identifier is 1124072.
Nov 27 10:06:52 TracheServ systemd[1]: Starting Mount ZFS filesystems...
-- Subject: A start job for unit zfs-mount.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zfs-mount.service has begun execution.
--
-- The job identifier is 1124127.
Nov 27 10:06:52 TracheServ zfs[24968]: cannot mount '/Nextcloud.Storage': directory is not empty
Nov 27 10:06:52 TracheServ systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit zfs-mount.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Nov 27 10:06:52 TracheServ systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit zfs-mount.service has entered the 'failed' state with result 'exit-code'.
Nov 27 10:06:52 TracheServ systemd[1]: Failed to start Mount ZFS filesystems.
-- Subject: A start job for unit zfs-mount.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zfs-mount.service has finished with a failure.
--
-- The job identifier is 1124127 and the job result is failed.

root@TracheServ:/# systemctl status zfs-mount.service
* zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2020-11-27 09:56:49 EST; 4min 45s ago
     Docs: man:zfs(8)
Main PID: 40977 (code=exited, status=1/FAILURE)

Nov 27 09:56:49 TracheServ systemd[1]: Starting Mount ZFS filesystems...
Nov 27 09:56:49 TracheServ zfs[40977]: cannot mount '/Nextcloud.Storage': directory is not empty
Nov 27 09:56:49 TracheServ systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Nov 27 09:56:49 TracheServ systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Nov 27 09:56:49 TracheServ systemd[1]: Failed to start Mount ZFS filesystems.
root@TracheServ:/#
root@TracheServ:/# ls /Nextcloud.Storage
dump  images  private  snippets  template

Any advice on how to deal with this?
 
This is unlikely to have been caused by the update itself; it probably only shows up now because of a restart (of the service).
ZFS does not want to mount a dataset if the mountpoint is not empty: cannot mount '/Nextcloud.Storage': directory is not empty
Most likely the directory is not empty because Proxmox created empty directories there before the pool was mounted. Please show us your /etc/pve/storage.cfg.
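For context, the condition ZFS is complaining about is just "the target directory already has entries". A minimal sketch of that check, simulated in /tmp with hypothetical stand-in paths (no ZFS needed):

```shell
# Simulate the "directory is not empty" condition that makes zfs mount fail.
# /tmp/demo_mountpoint is a hypothetical stand-in for /Nextcloud.Storage.
mp=/tmp/demo_mountpoint
mkdir -p "$mp/images"          # simulate a stray directory under the mountpoint

if [ -n "$(ls -A "$mp")" ]; then
  echo "not empty: zfs mount would fail here"
fi
```

The same check can be pointed at the real mountpoint with `ls -A /Nextcloud.Storage` while the pool is not mounted.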
 
This is unlikely to have been caused by the update itself; it probably only shows up now because of a restart (of the service).
ZFS does not want to mount a dataset if the mountpoint is not empty: cannot mount '/Nextcloud.Storage': directory is not empty
Most likely the directory is not empty because Proxmox created empty directories there before the pool was mounted. Please show us your /etc/pve/storage.cfg.
Code:
root@TracheServ:~# cat /etc/pve/storage.cfg
zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 0

dir: local
        path /var/lib/vz
        content backup,rootdir,snippets,iso,vztmpl,images
        maxfiles 1
        shared 0

zfspool: Storage.1
        pool Storage.1
        content rootdir,images
        mountpoint /Storage.1
        sparse 0

zfspool: Nextcloud.Storage
        pool Nextcloud.Storage
        content images,rootdir
        mountpoint /Nextcloud.Storage
        sparse 0
 
Because Nextcloud.Storage is a zfspool, it is unclear to me why there would be directories below the mountpoint /Nextcloud.Storage.
Can you check whether there are files and/or directories at /Nextcloud.Storage/ when it is not mounted? Maybe you can move those files and/or directories to another location?

Edit:
Do not move directories starting with subvol-; those are an indication that Nextcloud.Storage is mounted (or that it was not mounted before you used it).
Can you show us the files and/or directories that are at /Nextcloud.Storage/?
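To tell such leftovers apart from real container storage before touching anything, the subvol-* entries can be flagged first. A sketch simulated in /tmp with made-up directory names:

```shell
# subvol-* directories are real container storage and must stay put; other
# entries under an unmounted mountpoint are likely leftovers.
# /tmp/demo_mp and its contents are hypothetical examples.
mp=/tmp/demo_mp
mkdir -p "$mp/subvol-101-disk-0" "$mp/dump"

for d in "$mp"/*/; do
  name=$(basename "$d")
  case "$name" in
    subvol-*) echo "keep: $name" ;;
    *)        echo "candidate to move: $name" ;;
  esac
done
```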
 
Because Nextcloud.Storage is a zfspool, it is unclear to me why there would be directories below the mountpoint /Nextcloud.Storage.
Can you check whether there are files and/or directories at /Nextcloud.Storage/ when it is not mounted? Maybe you can move those files and/or directories to another location?
Sure, I can give it a shot. I haven't had to unmount a zpool yet, so just to confirm, the proper command would be
Code:
zpool export Nextcloud.Storage
?
 
Sure, I can give it a shot. I haven't had to unmount a zpool yet, so just to confirm, the proper command would be
Code:
zpool export Nextcloud.Storage
?
That is indeed how you unmount. According to the original error messages it could (and therefore should) not be mounted. Maybe run zfs list and mount first to check?
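Instead of scanning the long `mount` listing by eye, the mounted state of a path can be read from /proc/mounts. A small sketch; the is_mounted helper is made up for illustration:

```shell
# Check whether a given path is an active mountpoint by reading /proc/mounts.
# is_mounted is a hypothetical helper, not a standard command.
is_mounted() {
  awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

is_mounted / && echo "/ is mounted"    # root is always a mountpoint on Linux
is_mounted /Nextcloud.Storage || echo "/Nextcloud.Storage is not mounted"
```

On the ZFS side, `zfs get mounted Nextcloud.Storage` reports the same thing directly.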
 
That is indeed how you unmount. According to the original error messages it could (and therefore should) not be mounted. Maybe run zfs list and mount first to check?
Here's that output:

Code:
root@TracheServ:~# zfs list
NAME                                USED  AVAIL     REFER  MOUNTPOINT
Nextcloud.Storage                  1.80T   854G      105G  /Nextcloud.Storage
Nextcloud.Storage/vm-400-disk-1     378M   854G      378M  -
Nextcloud.Storage/vm-42069-disk-0  3.14M   854G     3.14M  -
Nextcloud.Storage/vm-42069-disk-1  11.5G   854G     11.5G  -
Nextcloud.Storage/vm-600-disk-0    99.5G   854G     99.5G  -
Nextcloud.Storage/vm-600-disk-1     101M   854G      101M  -
Nextcloud.Storage/vm-700-disk-0    1.51T  2.01T      345G  -
Nextcloud.Storage/vm-900-disk-0    79.2G   854G     79.2G  -
Storage.1                           897G  1.76G      104K  /Storage.1
Storage.1/vm-700-disk-0             897G  22.9G      876G  -
rpool                               147G  78.1G      104K  /rpool
rpool/ROOT                         24.5G  78.1G       96K  /rpool/ROOT
rpool/ROOT/pve-1                   24.5G  78.1G     24.5G  /
rpool/data                          123G  78.1G       96K  /rpool/data
rpool/data/vm-300-disk-0           1.78G  78.1G     1.78G  -
rpool/data/vm-400-disk-0           7.37G  78.1G     7.37G  -
rpool/data/vm-400-disk-1           51.6G   115G     14.3G  -
rpool/data/vm-500-disk-0           4.62G  78.1G     4.62G  -
rpool/data/vm-600-disk-0           6.25G  78.1G     6.25G  -
rpool/data/vm-600-disk-1           6.16G  78.1G     6.16G  -
rpool/data/vm-700-disk-0           13.8G  78.1G     13.8G  -
rpool/data/vm-800-disk-0           30.9G  85.5G     23.5G  -

root@TracheServ:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=24624548k,nr_inodes=6156137,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=4929816k,mode=755)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=38,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=28404)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sdc1 on /mnt/pve/spare type ext4 (rw,relatime)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
Storage.1 on /Storage.1 type zfs (rw,xattr,noacl)
192.168.1.139:/data/backups/proxmox on /mnt/pve/Proxmox_backups type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.129,local_lock=none,addr=192.168.1.139)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=4929816k,mode=700)
root@TracheServ:~#

I have disks that I'm actively using mounted to this zpool, so is it mounted or not?
 
Here's that output: (zfs list and mount output quoted above)
I have disks that I'm actively using mounted to this zpool, so is it mounted or not?
Doesn't show up under df -h, but presumably a zpool wouldn't?

Code:
root@TracheServ:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                  24G     0   24G   0% /dev
tmpfs                                4.8G  146M  4.6G   4% /run
rpool/ROOT/pve-1                     103G   25G   79G  24% /
tmpfs                                 24G   40M   24G   1% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                 24G     0   24G   0% /sys/fs/cgroup
/dev/sdc1                            458G  282G  153G  65% /mnt/pve/spare
rpool                                 79G  128K   79G   1% /rpool
rpool/ROOT                            79G  128K   79G   1% /rpool/ROOT
rpool/data                            79G  128K   79G   1% /rpool/data
/dev/fuse                             30M   24K   30M   1% /etc/pve
Storage.1                            1.8G  128K  1.8G   1% /Storage.1
192.168.1.139:/data/backups/proxmox  870G  570G  301G  66% /mnt/pve/Proxmox_backups
tmpfs                                4.8G     0  4.8G   0% /run/user/0
 
Zpools do show up in df, for example your rpool and Storage.1.
Nextcloud.Storage is not mounted. However, ZFS complains that the mountpoint is not empty when mounting it. What is the output of ls -alh /Nextcloud.Storage/?
 
Zpools do show up in df, for example your rpool and Storage.1.
Nextcloud.Storage is not mounted. However, ZFS complains that the mountpoint is not empty when mounting it. What is the output of ls -alh /Nextcloud.Storage/?
Code:
root@TracheServ:~# ls -alh /Nextcloud.Storage/
total 20K
drwxr-xr-x  7 root root  7 Nov  7 08:53 .
drwxr-xr-x 21 root root 27 Nov  7 23:58 ..
drwxr-xr-x  2 root root  2 Nov  7 08:53 dump
drwxr-xr-x  2 root root  2 Nov 13 08:17 images
drwxr-xr-x  2 root root  2 Nov  7 08:53 private
drwxr-xr-x  2 root root  2 Nov  7 08:53 snippets
drwxr-xr-x  4 root root  4 Nov  7 08:53 template
root@TracheServ:~#
 
Code:
root@TracheServ:~# ls -alh /Nextcloud.Storage/
total 20K
drwxr-xr-x  7 root root  7 Nov  7 08:53 .
drwxr-xr-x 21 root root 27 Nov  7 23:58 ..
drwxr-xr-x  2 root root  2 Nov  7 08:53 dump
drwxr-xr-x  2 root root  2 Nov 13 08:17 images
drwxr-xr-x  2 root root  2 Nov  7 08:53 private
drwxr-xr-x  2 root root  2 Nov  7 08:53 snippets
drwxr-xr-x  4 root root  4 Nov  7 08:53 template
root@TracheServ:~#
Those directories are most likely empty (or contain empty subdirectories). It looks like that same directory was used for ISOs, backups, etc. before the Nextcloud.Storage zpool was mounted.
Note that any content in those directories is stored on your rpool and not on the (not yet mounted) Nextcloud.Storage zpool.
Please move the contents of those directories somewhere else and remove them here. That should fix the "mountpoint is not empty" error.
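A more cautious variant of "move the contents somewhere else": move the stray directories aside first, then verify the mountpoint is empty before starting the service. Sketch simulated in /tmp; the paths are hypothetical stand-ins for the real ones:

```shell
# Move leftovers aside (recoverable) instead of deleting them outright, then
# confirm the mountpoint is empty. /tmp paths stand in for /Nextcloud.Storage.
mp=/tmp/demo.Nextcloud.Storage
backup=/tmp/demo.mountpoint-backup
mkdir -p "$mp/dump" "$mp/images" "$mp/template/iso" "$backup"

mv "$mp"/* "$backup"/
[ -z "$(ls -A "$mp")" ] && echo "empty: safe to run zfs mount -a"
```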
 
Those directories are most likely empty (or contain empty subdirectories). It looks like that same directory was used for ISOs, backups, etc. before the Nextcloud.Storage zpool was mounted.
Note that any content in those directories is stored on your rpool and not on the (not yet mounted) Nextcloud.Storage zpool.
Please move the contents of those directories somewhere else and remove them here. That should fix the "mountpoint is not empty" error.
This might be related to a prior issue I had here: https://forum.proxmox.com/threads/w...-much-space-on-local.78967/page-2#post-349804

I mounted a directory from the GUI at /Nextcloud.Storage, but that failed and ended up crashing my whole setup. The directory actually mounted at root and ultimately led to I/O errors when it quickly filled up. I think that's why we're seeing dump, images, etc.: they were autogenerated when I tried to mount the Directory storage. I have not used that directory for anything else.

Those subfolders are actually completely empty:

Code:
root@TracheServ:~# ls -alh /Nextcloud.Storage/
total 20K
drwxr-xr-x  7 root root  7 Nov  7 08:53 .
drwxr-xr-x 21 root root 27 Nov  7 23:58 ..
drwxr-xr-x  2 root root  2 Nov  7 08:53 dump
drwxr-xr-x  2 root root  2 Nov 13 08:17 images
drwxr-xr-x  2 root root  2 Nov  7 08:53 private
drwxr-xr-x  2 root root  2 Nov  7 08:53 snippets
drwxr-xr-x  4 root root  4 Nov  7 08:53 template
root@TracheServ:~# ls -alh /Nextcloud.Storage/dump
total 9.0K
drwxr-xr-x 2 root root 2 Nov  7 08:53 .
drwxr-xr-x 7 root root 7 Nov  7 08:53 ..
root@TracheServ:~# ls /Nextcloud.Storage/dump
root@TracheServ:~# ls /Nextcloud.Storage/images
root@TracheServ:~# ls /Nextcloud.Storage/private
root@TracheServ:~# ls /Nextcloud.Storage/snippets
root@TracheServ:~# ls /Nextcloud.Storage/template
cache  iso
root@TracheServ:~# ls /Nextcloud.Storage/template/cache
root@TracheServ:~# ls /Nextcloud.Storage/template/iso
root@TracheServ:~#

So is the move now to just delete anything mounted at /Nextcloud.Storage?
 
Those subfolders are actually completely empty: (ls output quoted above)

Actually, there seems to be plenty of stuff in the .. sub-directory; not sure how to safely proceed now:

Code:
root@TracheServ:~# ls /Nextcloud.Storage/..
bin   dev  home  lib32  libx32  mnt                opt   root   run   srv        sys   tmp  var
boot  etc  lib   lib64  media   Nextcloud.Storage  proc  rpool  sbin  Storage.1  test  usr

root@TracheServ:~# ls /Nextcloud.Storage/../mnt
hostrun  iso  pve
 
Actually, there seems to be plenty of stuff in the .. sub-directory; not sure how to safely proceed now:

Code:
root@TracheServ:~# ls /Nextcloud.Storage/..
bin   dev  home  lib32  libx32  mnt                opt   root   run   srv        sys   tmp  var
boot  etc  lib   lib64  media   Nextcloud.Storage  proc  rpool  sbin  Storage.1  test  usr

root@TracheServ:~# ls /Nextcloud.Storage/../mnt
hostrun  iso  pve
/Nextcloud.Storage/.. is the same as /; .. is how Linux refers to the parent directory. Of course it has files: it is your Proxmox root directory.
Please remove all "real" subdirectories: dump, images, private, snippets, template.
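If you want a guardrail while deleting, rmdir (unlike rm -r) refuses to remove anything that still contains files. A sketch in /tmp with stand-in directories:

```shell
# rmdir fails loudly on non-empty directories, so nothing with content can be
# deleted by accident. Depth-first so children are removed before parents.
# /tmp/demo_rmdir_mp and its contents are hypothetical examples.
mp=/tmp/demo_rmdir_mp
mkdir -p "$mp/dump" "$mp/template/iso" "$mp/template/cache"

find "$mp" -mindepth 1 -depth -type d -exec rmdir {} +
[ -z "$(ls -A "$mp")" ] && echo "all empty directories removed"
```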
 
/Nextcloud.Storage/.. is the same as /; .. is how Linux refers to the parent directory. Of course it has files: it is your Proxmox root directory.
Please remove all "real" subdirectories: dump, images, private, snippets, template.
You're an MVP
Code:
root@TracheServ:~# ls /Nextcloud.Storage
dump  images  private  snippets  template
root@TracheServ:~# rm -r /Nextcloud.Storage/dump
root@TracheServ:~# rm -r /Nextcloud.Storage/images
root@TracheServ:~# rm -r /Nextcloud.Storage/private
root@TracheServ:~# rm -r /Nextcloud.Storage/snippets
root@TracheServ:~# rm -r /Nextcloud.Storage/template
root@TracheServ:~# ls /Nextcloud.Storage
root@TracheServ:~# systemctl start zfs-mount.service
root@TracheServ:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: active (exited) since Sat 2020-11-28 09:11:52 EST; 8s ago
     Docs: man:zfs(8)
  Process: 48134 ExecStart=/sbin/zfs mount -a (code=exited, status=0/SUCCESS)
 Main PID: 48134 (code=exited, status=0/SUCCESS)

Nov 28 09:11:51 TracheServ systemd[1]: Starting Mount ZFS filesystems...
Nov 28 09:11:52 TracheServ systemd[1]: Started Mount ZFS filesystems.
root@TracheServ:~#
 
You're an MVP (successful zfs-mount.service output quoted above)
Code:
root@TracheServ:~# mount -a
root@TracheServ:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=24624548k,nr_inodes=6156137,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=4929816k,mode=755)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=38,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=28404)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sdc1 on /mnt/pve/spare type ext4 (rw,relatime)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
Storage.1 on /Storage.1 type zfs (rw,xattr,noacl)
192.168.1.139:/data/backups/proxmox on /mnt/pve/Proxmox_backups type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.129,local_lock=none,addr=192.168.1.139)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=4929816k,mode=700)
Nextcloud.Storage on /Nextcloud.Storage type zfs (rw,xattr,noacl)
root@TracheServ:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                  24G     0   24G   0% /dev
tmpfs                                4.8G  146M  4.6G   4% /run
rpool/ROOT/pve-1                     103G   25G   79G  24% /
tmpfs                                 24G   40M   24G   1% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                 24G     0   24G   0% /sys/fs/cgroup
/dev/sdc1                            458G  282G  153G  65% /mnt/pve/spare
rpool                                 79G  128K   79G   1% /rpool
rpool/ROOT                            79G  128K   79G   1% /rpool/ROOT
rpool/data                            79G  128K   79G   1% /rpool/data
/dev/fuse                             30M   24K   30M   1% /etc/pve
Storage.1                            1.8G  128K  1.8G   1% /Storage.1
192.168.1.139:/data/backups/proxmox  870G  570G  300G  66% /mnt/pve/Proxmox_backups
tmpfs                                4.8G     0  4.8G   0% /run/user/0
Nextcloud.Storage                    960G  105G  855G  11% /Nextcloud.Storage
 
So why does my Nextcloud.Storage now look so small?

Code:
Nextcloud.Storage                    960G  105G  855G  11% /Nextcloud.Storage

 
Edit: it is just df that does not take the child volumes into account; that's probably where the 960G comes from.
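A back-of-the-envelope check of that 960G, using the zfs list numbers from earlier in the thread (df's Size for a ZFS filesystem is roughly the dataset's REFER plus AVAIL, with child zvols excluded):

```shell
# df's Size for a ZFS filesystem is approximately REFER + AVAIL of that
# dataset; space consumed by child zvols (the vm-*-disk-* volumes) is not
# counted. Numbers taken from the zfs list output in this thread.
refer=105   # GiB referenced by Nextcloud.Storage itself (zfs list REFER)
avail=854   # GiB still available (zfs list AVAIL)
echo "$((refer + avail))G"   # prints 959G, matching the ~960G df shows
```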
If I'm reading this correctly, it does reflect the used storage appropriately, so I guess we're fine?

Code:
root@TracheServ:~# zfs get all Nextcloud.Storage
NAME               PROPERTY              VALUE                  SOURCE
Nextcloud.Storage  type                  filesystem             -
Nextcloud.Storage  creation              Sat Jun 20 18:30 2020  -
Nextcloud.Storage  used                  1.80T                  -
Nextcloud.Storage  available             854G                   -
Nextcloud.Storage  referenced            105G                   -
Nextcloud.Storage  compressratio         1.01x                  -
Nextcloud.Storage  mounted               yes                    -
Nextcloud.Storage  quota                 none                   default
Nextcloud.Storage  reservation           none                   default
Nextcloud.Storage  recordsize            128K                   default
Nextcloud.Storage  mountpoint            /Nextcloud.Storage     default
Nextcloud.Storage  sharenfs              off                    default
Nextcloud.Storage  checksum              on                     default
Nextcloud.Storage  compression           on                     local
Nextcloud.Storage  atime                 on                     default
Nextcloud.Storage  devices               on                     default
Nextcloud.Storage  exec                  on                     default
Nextcloud.Storage  setuid                on                     default
Nextcloud.Storage  readonly              off                    default
Nextcloud.Storage  zoned                 off                    default
Nextcloud.Storage  snapdir               hidden                 default
Nextcloud.Storage  aclinherit            restricted             default
Nextcloud.Storage  createtxg             1                      -
Nextcloud.Storage  canmount              on                     default
Nextcloud.Storage  xattr                 on                     default
Nextcloud.Storage  copies                1                      default
Nextcloud.Storage  version               5                      -
Nextcloud.Storage  utf8only              off                    -
Nextcloud.Storage  normalization         none                   -
Nextcloud.Storage  casesensitivity       sensitive              -
Nextcloud.Storage  vscan                 off                    default
Nextcloud.Storage  nbmand                off                    default
Nextcloud.Storage  sharesmb              off                    default
Nextcloud.Storage  refquota              none                   default
Nextcloud.Storage  refreservation        none                   default
Nextcloud.Storage  guid                  10708833461188943126   -
Nextcloud.Storage  primarycache          all                    default
Nextcloud.Storage  secondarycache        all                    default
Nextcloud.Storage  usedbysnapshots       0B                     -
Nextcloud.Storage  usedbydataset         105G                   -
Nextcloud.Storage  usedbychildren        1.70T                  -
Nextcloud.Storage  usedbyrefreservation  0B                     -
Nextcloud.Storage  logbias               latency                default
Nextcloud.Storage  objsetid              54                     -
 
