[SOLVED] What is suddenly taking up so much space on local?

Does the graph in the summary for the `local` storage look better now?

Because the root FS on which it is located (/var/lib/vz) currently shows only about 18GB used:
Code:
NAME                                USED  AVAIL     REFER  MOUNTPOINT

rpool/ROOT/pve-1                   17.6G   168G     17.6G  /
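
If you want to cross-check from the directory side, the numbers df reports for the storage path should line up with that dataset, since `local` sits directly on the root filesystem (just a sketch; the figures will of course differ on your system):
Code:
df -h /var/lib/vz
zfs list -o name,used,avail,refer rpool/ROOT/pve-1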

The only new thing I have done recently was migrate disks to qcow2.
Did you place those in the `local` storage and were they thin provisioned?

Because if you create VMs using the `local-zfs` storage, which is on the same pool, the available space (light green in the graph) should go down for `local`.
 
I'm leaving on vacation today, so this was a terrible way to start the day. Moving everything off of local and off of the User.data mount point seems to have me back up and running again, albeit without knowing the cause of the crash or how to prevent it in the future yet.
Ah okay, that answers my question. If those VMs were thin provisioned, it is likely that the disks grew larger and faster than you expected, thus filling up the available space.
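
If you want to keep an eye on that, something along these lines shows how much each thin-provisioned disk has actually grown compared to its configured size (a sketch, assuming the default rpool/data dataset that local-zfs uses):
Code:
# volsize = configured disk size, used/refer = space actually consumed
zfs list -t volume -o name,volsize,used,refer -r rpool/data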
 
Graph: [screenshot attachment]

Local-ZFS is indeed thin provisioned. And yes, as the Local-ZFS storage usage changes, so does the Local storage, as expected.

I thought qcow2 disks couldn't be thin provisioned? Does it have to do with this article of yours about qcow2 disks growing too large? https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files

Okay, here's something: my User.data storage and local are showing the same available size, despite User.data being mounted on the ZFS pool. Is this because User.data is for some reason saving to local?

[screenshot attachment]
 
Did you add is_mountpoint 1 to the config for the User.data storage? It's possible that it has an effect on how space usage is tracked. It should be done anyway, because that directory storage is defined on a ZFS pool, and if the pool is not mounted yet, you would run into the issue I suspected at first.
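
Just as a sketch (assuming your pvesm version accepts the option for dir storages; editing /etc/pve/storage.cfg by hand, as shown below, works just as well):
Code:
# mark the User.data directory storage as requiring its path to be a mount point
pvesm set User.data --is_mountpoint yes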
 
Here is my current storage config:

Code:
root@TracheServ:~# cat /etc/pve/storage.cfg
zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

dir: local
        path /var/lib/vz
        content rootdir,images,vztmpl,backup,snippets,iso
        maxfiles 1
        shared 0

zfspool: Storage.1
        pool Storage.1
        content images,rootdir
        mountpoint /Storage.1
        sparse 1

zfspool: Nextcloud.Storage
        pool Nextcloud.Storage
        content images,rootdir
        mountpoint /Nextcloud.Storage
        sparse 1

dir: spare
        path /mnt/pve/spare
        content backup,rootdir,vztmpl,snippets,iso,images
        is_mountpoint 1
        maxfiles 3
        shared 1

nfs: Proxmox_backups
        export /data/backups/proxmox
        path /mnt/pve/Proxmox_backups
        server 192.168.1.139
        content rootdir,backup,images,vztmpl,iso,snippets
        maxfiles 5
        options vers=4.2

dir: User.data
        path /Nextcloud.Storage
        content images,vztmpl,snippets,iso,rootdir,backup
        maxfiles 10
        shared 1
        is_mountpoint 1

root@TracheServ:~#

That seems to have broken something

[screenshot attachment]
 
Ah, sorry man, it is Friday afternoon...

Yep, seems like that pool isn't actually mounted. It does not show up in the output of df -h and probably also not if you run mount.

Try renaming the directory /Nextcloud.Storage; if you then run zfs mount -a, the pool should be mounted again and show up in the df and mount output.
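
Roughly like this (a sketch; the .old suffix is just an arbitrary name):
Code:
mv /Nextcloud.Storage /Nextcloud.Storage.old   # move the non-empty directory out of the way
zfs mount -a                                   # mount all ZFS filesystems again
df -h | grep Nextcloud                         # verify the pool is now mounted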
 
Hey Aaron, I had abandoned this for the time being, but the issue has come up again after the latest update:


Code:
Setting up zfsutils-linux (0.8.5-pve1) ...
zfs-import-scan.service is a disabled or a static unit not running, not starting it.
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details.

root@TracheServ:/# systemctl status zfs-mount.service
* zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2020-11-27 09:56:49 EST; 4min 45s ago
     Docs: man:zfs(8)
Main PID: 40977 (code=exited, status=1/FAILURE)

Nov 27 09:56:49 TracheServ systemd[1]: Starting Mount ZFS filesystems...
Nov 27 09:56:49 TracheServ zfs[40977]: cannot mount '/Nextcloud.Storage': directory is not empty
Nov 27 09:56:49 TracheServ systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Nov 27 09:56:49 TracheServ systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Nov 27 09:56:49 TracheServ systemd[1]: Failed to start Mount ZFS filesystems.
root@TracheServ:/#
root@TracheServ:/# ls /Nextcloud.Storage
dump  images  private  snippets  template
 

Starting the service manually fails the same way, and journalctl -xe shows the same error:

Code:
root@TracheServ:/# systemctl start zfs-mount.service
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
root@TracheServ:/# journalctl -xe
-- A start job for unit pvesr.service has begun execution.
--
-- The job identifier is 1124072.
Nov 27 10:06:01 TracheServ systemd[1]: pvesr.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has successfully entered the 'dead' state.
Nov 27 10:06:01 TracheServ systemd[1]: Started Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished successfully.
--
-- The job identifier is 1124072.
Nov 27 10:06:52 TracheServ systemd[1]: Starting Mount ZFS filesystems...
-- Subject: A start job for unit zfs-mount.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zfs-mount.service has begun execution.
--
-- The job identifier is 1124127.
Nov 27 10:06:52 TracheServ zfs[24968]: cannot mount '/Nextcloud.Storage': directory is not empty
Nov 27 10:06:52 TracheServ systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit zfs-mount.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Nov 27 10:06:52 TracheServ systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit zfs-mount.service has entered the 'failed' state with result 'exit-code'.
Nov 27 10:06:52 TracheServ systemd[1]: Failed to start Mount ZFS filesystems.
-- Subject: A start job for unit zfs-mount.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zfs-mount.service has finished with a failure.
--
-- The job identifier is 1124127 and the job result is failed.
 
Okay, so AFAICT mounting the pool failed because the mount point /Nextcloud.Storage is not empty. It seems that something wrote to it at a moment when the pool wasn't mounted.

What kind of data are you storing there again? Is it something other than VMs and/or containers?
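
To confirm that, something like this should show whether the dataset is currently mounted and what the directory at the mount point contains (a sketch):
Code:
zfs get mounted,mountpoint Nextcloud.Storage   # is the dataset mounted, and where?
ls -la /Nextcloud.Storage                      # what does the directory currently contain?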
 
Another user helped me get the zfs-mount service back up and running: https://forum.proxmox.com/threads/z...latest-update-how-do-i-fix.79774/#post-353179

When I created the directory storage on /Nextcloud.Storage, it created dump, images, private, snippets, and template folders. There was nothing in them, so I deleted them, and that cleared this up.

That brings me back to the question: is there a correct way to mount a directory storage at the /Nextcloud.Storage ZFS pool? I'd like to store qcow2 disks on this pool to get snapshot functionality.
 
It is a ZFS pool, so you should be able to snapshot guests stored on it directly, without any additional directory storage on top. See the storage overview in the docs: https://pve.proxmox.com/pve-docs/chapter-pvesm.html
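
For example, for a VM whose disks live on a ZFS-backed storage (VMID 100 is just a placeholder here), snapshots work directly from the CLI or the GUI:
Code:
qm snapshot 100 pre_update      # create a snapshot of VM 100
qm listsnapshot 100             # list existing snapshots
qm rollback 100 pre_update      # roll back to the snapshot if needed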

If you have a directory storage that is located right at a mount point, you need to configure the following two parameters for that storage:
Code:
is_mountpoint 1
mkdir 0
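
Put together, a directory storage on top of the pool's mount point could then look roughly like this in /etc/pve/storage.cfg (a sketch based on the config posted earlier; the content types and maxfiles are just examples):
Code:
dir: User.data
        path /Nextcloud.Storage
        content images,vztmpl,snippets,iso,rootdir,backup
        maxfiles 10
        is_mountpoint 1
        mkdir 0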
 
