I am also having this issue after updating to the latest Proxmox version:
root@TracheServ:/# systemctl start zfs-mount.service
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details...
Hey Aaron, I had abandoned this for the time being, but the issue has come up again after the latest update:
Setting up zfsutils-linux (0.8.5-pve1) ...
zfs-import-scan.service is a disabled or a static unit not running, not starting it.
Job for zfs-mount.service failed because the control...
Okay, here's something: my User.data drive and local are showing the same available size, despite User.data being mounted on the ZFS drive. Is this because User.data is, for some reason, saving to local?
Graph:
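One way to rule that out: if the dataset isn't actually mounted, the User.data directory is just an empty folder on the root disk, and everything written "to it" lands on local, which would explain identical free-space figures. A minimal check sketch (the path `/mnt/pve/User.data` is an assumption; substitute wherever your directory storage points):

```shell
# Check whether a directory-storage path is a real mount point. If it is not,
# writes there land on the root filesystem ("local") instead of the ZFS pool.
check_mount() {
    if mountpoint -q "$1"; then
        echo "mounted"       # df on this path reports the pool's space
    else
        echo "not-mounted"   # just a plain directory on the root disk
    fi
}

check_mount /    # "/" is always a mount point, so this prints "mounted"
```

On the server, `check_mount /mnt/pve/User.data` followed by `df -h /mnt/pve/User.data` would show whether the free-space figures come from the pool or from local.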
Local-ZFS is indeed thin provisioned. And yes, as Local-ZFS storage usage changes, so does Local storage, as expected.
I thought qcow2 disks couldn't be thin provisioned? Does it have to do with this article from y'all where my qcow2 disks were growing too large...
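For what it's worth, file-backed qcow2 can be thin at the file level: the virtual size is just metadata, and blocks are only allocated as the guest writes, which is also why an image can keep growing over time. The effect is easy to see with a plain sparse file (not a real qcow2, just an illustration of apparent vs. allocated size):

```shell
# Thin provisioning at the file level: a sparse file claims a large apparent
# size but allocates no blocks until data is actually written into it.
f=$(mktemp)
truncate -s 1G "$f"             # 1 GiB apparent size, nothing written yet
du -h --apparent-size "$f"      # reports roughly 1G
du -h "$f"                      # reports the allocated blocks (near zero)
rm -f "$f"
```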
Oh and I haven't taken the time to thank you yet Aaron! I really appreciate the support.
I'm leaving on vacation today, so this was a terrible way to start the day. Moving everything off of local and off the User.data mount point seems to have me back up and running again, albeit without...
Nextcloud.Storage is a ZFS mirrored pool. I definitely have access to the storage as I'm storing plenty of stuff on it. Pics attached.
So where do I go from here?
I did recently implement the user data backups to the spare drive, though, so there may be a correlation there as well. This disk fill-up happened overnight, when a backup would have been running.
I'm more and more confident that it has to do with this error:
https://forum.proxmox.com/threads/directory-mounted-on-2-5tb-zfs-disk-only-uses-200gb.78969/#post-349639
After waking up to a 100% full root directory, I panicked. The only new thing I've done recently was migrate disks to qcow2...
ncdu 1.13 ~ Use the arrow keys to navigate, press ? for help
--- /mnt ------------------------------------------------------------------------------------------------------------
437.9 GiB [##########] /pve...
root@TracheServ:~# cat /etc/pve/storage.conf
cat: /etc/pve/storage.conf: No such file or directory
root@TracheServ:~# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset...
root@TracheServ:~# cat /etc/pve/storage.conf
cat: /etc/pve/storage.conf: No such file or directory
root@TracheServ:~#
I have only defined storage via the GUI thus far.
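For anyone hitting this later: the file is actually `/etc/pve/storage.cfg` (`.cfg`, not `.conf`), so the "No such file or directory" above is just a wrong path, not a broken cluster filesystem. GUI-defined storage is written to that file. A typical one looks something like this (the storage names, paths, and pool below are examples, not my actual config):

```
dir: local
	path /var/lib/vz
	content iso,vztmpl,backup

zfspool: local-zfs
	pool rpool/data
	sparse 1
	content images,rootdir
```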
I woke up this morning to a locked-out server. My local storage had gone up to 100% usage, resulting in I/O errors. I was able to recover by rebooting the server, which freed up a couple of MB and then allowed me to move disks off of it.
Here is the output of ncdu /
446.8 GiB [##########]...
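For anyone without ncdu handy, here's a rough equivalent sketch for ranking what is eating a disk. The `-x` flag keeps du inside one filesystem, so space on mounted ZFS datasets is not counted against the root disk (the `/var` argument is just an example; on the server you'd point it at `/`):

```shell
# Rank the largest immediate subdirectories of a path, biggest last.
# -x: stay on one filesystem; -k: sizes in KiB so sort -n works cleanly.
top_usage() {
    du -xk --max-depth=1 "$1" 2>/dev/null | sort -n | tail -n 5
}

top_usage /var
```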
I think this may be a related problem to another post I made: https://forum.proxmox.com/threads/what-is-suddenly-taking-up-so-much-space-on-local.78967/
This morning I woke up and found that my local storage had reached 100% and I was getting I/O errors. Interestingly, the size of the directory...
I do have ZFS storage! Thanks for explaining snapshot mode; that's what I suspected from my experiments but wanted to confirm.
It's a shame that there isn't a scheduled snapshot mechanism in the GUI. Any idea if one is in the pipeline?
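Not in the GUI as far as I know, but it's easy to handle outside it: the usual options are the `zfs-auto-snapshot` package, sanoid, or a plain cron entry. A minimal cron sketch (the dataset name `rpool/data`, the schedule, and the file path are assumptions; note that `%` must be escaped as `\%` in crontab lines):

```
# /etc/cron.d/zfs-snap -- daily recursive snapshot of rpool/data at 02:00
0 2 * * * root /sbin/zfs snapshot -r rpool/data@auto-$(date +\%F)
```

Pruning old snapshots still has to be scripted separately (`zfs destroy` on snapshots older than your retention window); a tool like sanoid handles both creation and retention for you.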