[SOLVED] Disk space problem - local ZFS rpool/ROOT seemingly gobbling up all space, probably causing hangups in tasks like backups/upgrades

Bernie2020

New Member
Feb 13, 2020
Hello! Running PVE as my main home system has been a blast for the past year, so I just want to say how much I appreciate the amazing open-source development work!

The one thing I struggle with on and off, though, is local ZFS usage maxing out. I posted some time ago when my system became unusable because of insufficient space; the solution back then included deleting snapshots of VMs on local-zfs as well as manually deleting some old VM disk images that, for some reason, were not removed when I deleted the corresponding VMs via the GUI.
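(In case it helps anyone hitting the same wall: the kind of cleanup I mean looked roughly like this; the dataset names and IDs here are only placeholders, not my actual ones.)
Code:
# list snapshots kept on local-zfs
zfs list -t snapshot -r rpool/data
# remove a snapshot that is no longer needed
zfs destroy rpool/data/vm-999-disk-0@oldsnap
# remove an orphaned disk image whose VM is already gone (double-check the ID first!)
zfs destroy rpool/data/vm-999-disk-0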

This time it is a bit different, I am afraid. The Summary of my PVE node shows HD space (root) at 98.99% (264.4 GiB of 267.14 GiB), and zfs list shows

Code:
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool                          443G  2.71G      104K  /rpool
rpool/ROOT                     264G  2.71G       96K  /rpool/ROOT
rpool/ROOT/pve-1               264G  2.71G      264G  /
rpool/data                     178G  2.71G      136K  /rpool/data
rpool/data/subvol-100-disk-0  1.11G  2.71G     1.11G  /rpool/data/subvol-100-disk-0
rpool/data/subvol-101-disk-0  35.3G     0B       32G  /rpool/data/subvol-101-disk-0
rpool/data/subvol-103-disk-0  1.44G  2.71G     1.01G  /rpool/data/subvol-103-disk-0
rpool/data/subvol-104-disk-0  42.4G  2.71G     42.4G  /rpool/data/subvol-104-disk-0
rpool/data/subvol-105-disk-0   852M  2.71G      852M  /rpool/data/subvol-105-disk-0
rpool/data/subvol-105-disk-1   861M  2.71G      861M  /rpool/data/subvol-105-disk-1
rpool/data/subvol-105-disk-2    96K  2.71G       96K  /rpool/data/subvol-105-disk-2
rpool/data/vm-102-disk-0      1.82G  2.71G     1.82G  -
rpool/data/vm-201-disk-0       192K  2.71G      192K  -
rpool/data/vm-201-disk-1      94.7G  2.71G     94.7G  -
and zfs list -o space shows
Code:
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                         2.70G   443G        0B    104K             0B       443G
rpool/ROOT                    2.70G   264G        0B     96K             0B       264G
rpool/ROOT/pve-1              2.70G   264G        0B    264G             0B         0B
rpool/data                    2.70G   178G        0B    136K             0B       178G
rpool/data/subvol-100-disk-0  2.70G  1.11G        0B   1.11G             0B         0B
rpool/data/subvol-101-disk-0     0B  35.3G     3.31G     32G             0B         0B
rpool/data/subvol-103-disk-0  2.70G  1.44G      448M   1.01G             0B         0B
rpool/data/subvol-104-disk-0  2.70G  42.4G     28.6M   42.4G             0B         0B
rpool/data/subvol-105-disk-0  2.70G   852M        0B    852M             0B         0B
rpool/data/subvol-105-disk-1  2.70G   861M        0B    861M             0B         0B
rpool/data/subvol-105-disk-2  2.70G    96K        0B     96K             0B         0B
rpool/data/vm-102-disk-0      2.70G  1.82G        0B   1.82G             0B         0B
rpool/data/vm-201-disk-0      2.70G   192K        0B    192K             0B         0B
rpool/data/vm-201-disk-1      2.70G  94.7G        0B   94.7G             0B         0B

The situation I had previously was different: back then the root filesystem was not the issue, sitting at around 35 GB; rather, rpool/data was:
Code:
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                         35.0G   411G        0B    104K             0B       411G
rpool/ROOT                    35.0G  34.7G        0B     96K             0B      34.7G
rpool/ROOT/pve-1              35.0G  34.7G        0B   34.7G             0B         0B
rpool/data                    35.0G   376G        0B    144K             0B       376G

I have recently migrated most of my larger VM disks away from that SSD to another drive via "Move Disk" in the GUI under Hardware/Resources. zfs list as well as ls /rpool/data/ seem to confirm that this worked; unfortunately, I can't say whether rpool/ROOT usage was still as low as 35 GB right before that or not.
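(For reference, I believe the GUI's "Move Disk" roughly corresponds to something like the following on the CLI; the VM ID, disk name and target storage below are just example values.)
Code:
# move a VM disk to another storage and drop the source copy afterwards
qm move_disk 201 scsi0 zpool12 --delete 1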

Another couple of notes: I tried to update the host today and saw four error lines in the log. One read
Error: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /usr/bin/pveupgrade --shell' failed: exit code 1
and three were like
command '/usr/bin/termproxy 5903 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
I am not sure what to make of these, except that due to insufficient space some updates may not have gone through properly (worst case), or they just threw some errors but went through fine (better case).
The fragmentation of the SSD is also quite high at 68%; could that be one factor contributing to the issue?
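(The fragmentation value I am quoting comes from the pool overview; it should match something like this:)
Code:
# FRAGMENTATION and CAPACITY columns show per-pool fragmentation and fill level
zpool list -o name,size,allocated,free,fragmentation,capacity rpool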


As I understand it, the Proxmox host itself requires very little space (around 32 GB should suffice), so I don't know where the root of the problem lies here. Any help in resolving it is greatly appreciated. I hope the system is not doomed yet.
 
First off, out of interest, can you please post the output of zpool status to get an idea of the pool layout?

What is the output of zfs list -t all?

Other than that, it looks like the rpool/ROOT/pve-1 dataset, which is mounted at /, is using about 260 GB.

Do you have quite a few ISOs and backups located in the local storage? That would explain it, as that storage is a directory at /var/lib/vz.
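A quick way to check would be something along these lines (assuming the default 'local' directory storage lives at /var/lib/vz):
Code:
# per-directory usage of the 'local' directory storage, without crossing into other file systems
du -xsh /var/lib/vz/*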

Other than that, you could install ncdu and run ncdu / to get a detailed idea of where that space is used.
 
Thank you for your quick response!


First off, out of interest, can you please post the output of zpool status to get an idea of the pool layout?
Code:
pool: mirzpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 08:18:26 with 0 errors on Sun Nov  8 08:42:27 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        mirzpool                                      ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            usb-WD_My_Book_25EE_37484B5435563248-0:0  ONLINE       0     0     0
            usb-WD_My_Book_25EE_37484B523338304A-0:0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:04:08 with 0 errors on Sun Nov  8 00:28:10 2020
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          nvme-eui.0025385781b1fc78-part3  ONLINE       0     0     0

errors: No known data errors

  pool: rpool2
 state: ONLINE
  scan: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        rpool2                                            ONLINE       0     0     0
          nvme-Samsung_SSD_960_EVO_500GB_S3EUNX0HC13248R  ONLINE       0     0     0

errors: No known data errors

  pool: zpool12
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:39:06 with 0 errors on Sun Nov  8 01:03:09 2020
config:

        NAME                                         STATE     READ WRITE CKSUM
        zpool12                                      ONLINE       0     0     0
          usb-WD_Elements_25A3_584A47304C37474D-0:0  ONLINE       0     0     0

What is the output of zfs list -t all?
Code:
NAME                                                              USED  AVAIL     REFER  MOUNTPOINT
mirzpool                                                         5.16T  1.88T       96K  /mirzpool
mirzpool/mirzdata                                                4.41T  1.88T     4.41T  /mirzpool/mirzdata
mirzpool/subvol-105-disk-0                                        650M  7.36G      650M  /mirzpool/subvol-105-disk-0
mirzpool/subvol-901-disk-0                                        917M  31.1G      917M  /mirzpool/subvol-901-disk-0
mirzpool/vm-202-disk-0                                           8.25G  1.88T       56K  -
mirzpool/vm-208-disk-0                                            155G  2.03T     14.8M  -
mirzpool/vm-210-disk-0                                            132G  1.90T      105G  -
mirzpool/vm-210-disk-1                                              3M  1.88T       80K  -
mirzpool/vm-212-disk-0                                           16.5G  1.89T       56K  -
mirzpool/vm-221-disk-0                                           16.5G  1.88T     10.6G  -
mirzpool/vm-222-disk-0                                           16.5G  1.89T     6.84G  -
mirzpool/vm-224-disk-0                                           2.06G  1.88T       56K  -
mirzpool/vm-226-disk-0                                           66.0G  1.93T     13.6G  -
mirzpool/vm-226-disk-1                                              3M  1.88T       68K  -
mirzpool/vm-401-disk-0                                              3M  1.88T       84K  -
mirzpool/vm-401-disk-1                                           88.7G  1.90T     60.1G  -
mirzpool/vm-402-disk-0                                           66.0G  1.91T     33.1G  -
mirzpool/vm-402-disk-1                                              3M  1.88T       68K  -
mirzpool/vm-406-disk-0                                           66.0G  1.92T     20.8G  -
mirzpool/vm-406-disk-1                                              3M  1.88T       68K  -
mirzpool/vm-431-disk-0                                              3M  1.88T       68K  -
mirzpool/vm-431-disk-1                                           74.3G  1.92T     25.2G  -
mirzpool/vm-441-disk-0                                           66.0G  1.91T     33.1G  -
mirzpool/vm-441-disk-1                                              3M  1.88T       68K  -
rpool                                                             443G  2.68G      104K  /rpool
rpool/ROOT                                                        264G  2.68G       96K  /rpool/ROOT
rpool/ROOT/pve-1                                                  264G  2.68G      264G  /
rpool/data                                                        178G  2.68G      136K  /rpool/data
rpool/data/subvol-100-disk-0                                     1.11G  2.68G     1.11G  /rpool/data/subvol-100-disk-0
rpool/data/subvol-101-disk-0                                     35.3G     0B       32G  /rpool/data/subvol-101-disk-0
rpool/data/subvol-101-disk-0@nc1v1_0                             76.3M      -     1.78G  -
rpool/data/subvol-101-disk-0@beforeExternalSMBshare              12.6M      -     4.86G  -
rpool/data/subvol-101-disk-0@beforeSmbInstallAndExternalStorage  79.9M      -     4.88G  -
rpool/data/subvol-103-disk-0                                     1.44G  2.68G     1.01G  /rpool/data/subvol-103-disk-0
rpool/data/subvol-103-disk-0@beforeCurl                          78.4M      -      817M  -
rpool/data/subvol-103-disk-0@working_beforeDHCPPi                2.55M      -      939M  -
rpool/data/subvol-103-disk-0@DHCP_configured                     2.75M      -      939M  -
rpool/data/subvol-104-disk-0                                     42.4G  2.68G     42.4G  /rpool/data/subvol-104-disk-0
rpool/data/subvol-104-disk-0@vzdump                              36.7M      -     42.4G  -
rpool/data/subvol-105-disk-0                                      852M  2.68G      852M  /rpool/data/subvol-105-disk-0
rpool/data/subvol-105-disk-1                                      861M  2.68G      861M  /rpool/data/subvol-105-disk-1
rpool/data/subvol-105-disk-2                                       96K  2.68G       96K  /rpool/data/subvol-105-disk-2
rpool/data/vm-102-disk-0                                         1.82G  2.68G     1.82G  -
rpool/data/vm-201-disk-0                                          192K  2.68G      192K  -
rpool/data/vm-201-disk-1                                         94.7G  2.68G     94.7G  -
rpool2                                                            198G   251G       96K  /rpool2
rpool2/vm-208-disk-0                                             66.0G   280G     37.0G  -
rpool2/vm-208-disk-1                                                3M   251G       84K  -
rpool2/vm-209-disk-0                                             66.0G   278G     39.3G  -
rpool2/vm-209-disk-1                                                3M   251G       84K  -
rpool2/vm-211-disk-0                                                3M   251G       60K  -
rpool2/vm-211-disk-1                                             66.0G   291G     26.7G  -
rpool2/vm-513-disk-0                                                3M   251G       60K  -
zpool12                                                          2.60T  7.96T       96K  /zpool12
zpool12/subvol-304-disk-0                                        29.6G  20.4G     29.6G  /zpool12/subvol-304-disk-0
zpool12/vm-203-disk-0                                            3.06M  7.96T       84K  -
zpool12/vm-203-disk-0@rightAfterCloning                            60K      -       60K  -
zpool12/vm-203-disk-1                                            79.8G  8.02T     16.5G  -
zpool12/vm-203-disk-1@rightAfterCloning                          6.27G      -     13.8G  -
zpool12/vm-203-disk-2                                             155G  8.03T     83.5G  -
zpool12/vm-203-disk-3                                             516G  8.46T       56K  -
zpool12/vm-211-disk-0                                             330G  8.00T      291G  -
zpool12/vm-213-disk-0                                            66.0G  7.99T     37.0G  -
zpool12/vm-213-disk-1                                               3M  7.96T       60K  -
zpool12/vm-213-disk-2                                             155G  8.11T       56K  -
zpool12/vm-301-disk-0                                               3M  7.96T       84K  -
zpool12/vm-301-disk-1                                            88.7G  7.98T     65.2G  -
zpool12/vm-302-disk-0                                            8.25G  7.97T       56K  -
zpool12/vm-305-disk-0                                            66.0G  8.01T     15.1G  -
zpool12/vm-305-disk-1                                               3M  7.96T       68K  -
zpool12/vm-306-disk-0                                            66.0G  8.01T     16.6G  -
zpool12/vm-306-disk-1                                               3M  7.96T       80K  -
zpool12/vm-307-disk-0                                               3M  7.96T       68K  -
zpool12/vm-307-disk-1                                            66.0G  8.01T     13.1G  -
zpool12/vm-308-disk-0                                               3M  7.96T       76K  -
zpool12/vm-308-disk-1                                            66.0G  8.01T     13.3G  -
zpool12/vm-314-disk-0                                            33.0G  7.99T      306M  -
zpool12/vm-316-disk-0                                            33.0G  7.99T     4.40G  -
zpool12/vm-323-disk-0                                            2.06G  7.96T       56K  -
zpool12/vm-325-disk-0                                            3.07M  7.96T       80K  -
zpool12/vm-325-disk-0@before500GBodyssees                          68K      -       68K  -
zpool12/vm-325-disk-1                                            79.8G  8.02T     14.0G  -
zpool12/vm-325-disk-1@before500GBodyssees                        1.22G      -     13.8G  -
zpool12/vm-327-disk-0                                            66.0G  8.01T     13.3G  -
zpool12/vm-327-disk-1                                               3M  7.96T       68K  -
zpool12/vm-330-disk-0                                            76.3G  7.99T     43.9G  -
zpool12/vm-330-disk-1                                               3M  7.96T       88K  -
zpool12/vm-330-disk-2                                             206G  8.16T       56K  -
zpool12/vm-331-disk-0                                               3M  7.96T       80K  -
zpool12/vm-331-disk-1                                            74.3G  7.99T     46.1G  -
zpool12/vm-331-disk-2                                             206G  8.10T     64.0G  -
zpool12/vm-333-disk-0                                               3M  7.96T       68K  -
zpool12/vm-333-disk-1                                            66.0G  8.01T     20.1G  -
zpool12/vm-351-disk-0                                            66.0G  8.01T     15.8G  -
zpool12/vm-351-disk-1                                               3M  7.96T       68K  -
zpool12/vm-513-disk-0                                            66.0G  8.01T     13.8G  -

Do you have quite a few ISOs and backups located in the local storage? That would explain it, as that storage is a directory at /var/lib/vz.
Not that I am aware of, no. I removed most of that over the course of the last year, in particular whenever the system became unusable due to a lack of free space. As far as I can tell, ls /var/lib/vz/* does not list more than what the GUI shows under local-zfs or local:
Code:
root@pve:~# ls /var/lib/vz
dump  images  template
root@pve:~# ls /var/lib/vz/images/
root@pve:~# ls /var/lib/vz/template/
cache  iso  qemu
root@pve:~# ls /var/lib/vz/template/iso/
root@pve:~# ls /var/lib/vz/template/qemu/
root@pve:~# ls /var/lib/vz/template/cache/
archlinux-base_20190924-1_amd64.tar.gz    debian-9-turnkey-nextcloud_15.2-1_amd64.tar.gz
debian-10.0-standard_10.0-1_amd64.tar.gz  ubuntu-19.10-standard_19.10-1_amd64.tar.gz
root@pve:~# ls /var/lib/vz/dump/
vzdump-lxc-100-2020_01_23-02_37_33.log      vzdump-lxc-101-2020_01_31-03_30_18.tar.lzo  vzdump-qemu-102-2020_02_03-15_37_02.log
vzdump-lxc-100-2020_01_23-02_37_33.tar.lzo  vzdump-lxc-105-2020_01_30-23_03_02.log      vzdump-qemu-102-2020_02_03-15_49_55.log
vzdump-lxc-100-2020_01_31-03_43_01.log      vzdump-lxc-105-2020_01_30-23_03_02.tar.lzo  vzdump-qemu-102-2020_02_03-15_49_55.vma.lzo
vzdump-lxc-100-2020_01_31-03_43_01.tar.lzo  vzdump-lxc-105-2020_01_31-01_47_10.log      vzdump-qemu-225-2020_02_02-21_09_04.log
vzdump-lxc-101-2020_01_23-04_18_35.log      vzdump-lxc-105-2020_01_31-01_47_10.tar.lzo  vzdump-qemu-225-2020_02_03-02_55_19.log
vzdump-lxc-101-2020_01_23-04_18_35.tar.lzo  vzdump-lxc-105-2020_01_31-03_23_58.log      vzdump-qemu-327-2020_11_18-12_43_01.log
vzdump-lxc-101-2020_01_31-03_30_18.log      vzdump-lxc-105-2020_01_31-03_23_58.tar.lzo  vzdump-qemu-327-2020_11_18-12_43_01.vma.zst


Other than that, you could install ncdu and run ncdu / to get a detailed idea of where that space is used.
Thank you, I'm gonna check this out.
 
Reporting back after using ncdu.

If I am understanding zfs list correctly, though, everything on the SSD should be covered by checking with ncdu /rpool, which outputs the following:
[screenshots of the ncdu /rpool output]

The command ncdu / just finished, though I do not understand why it does not seem to display everything, for example /rpool2, for which the PVE GUI reports a usage of 44.06% (198.04 GiB of 449.50 GiB).
[screenshots of the ncdu / output]

Very puzzling to me.
 
To match what you see in the zfs list output to what you see in the file system, you have to keep in mind that pools, and datasets within pools, can be mounted at arbitrary points. The last column in the zfs list output shows the mount point.

Secondly, there are different types of datasets, mainly file systems and volumes (zvols). zvols are exposed as block devices and will not show up in the file system; they are used for VM disks, while containers use file-system-based datasets. You can see that the datasets for containers are named "...subvol..." and have a mount point, while the datasets for VMs don't.
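A quick way to see that distinction is to ask ZFS for the dataset types directly, for example:
Code:
# file system datasets (containers) have a mount point ...
zfs list -t filesystem -o name,mountpoint -r rpool/data
# ... while volumes (VM disks) are block devices with a volsize instead
zfs list -t volume -o name,volsize -r rpool/data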

So the dataset of interest is this one:
Code:
NAME                                                              USED  AVAIL     REFER  MOUNTPOINT
rpool/ROOT/pve-1                                                  264G  2.68G      264G  /

As you can see, it is mounted at / and is thus the root file system. All the other pools get mounted somewhere under that hierarchy; that is why you see those directories.

ncdu does not check whether a directory in the file system hierarchy belongs to some other pool or disk; it just traverses the FS.

So if we ignore the paths of the existing pools and do a rough calculation, we see that /var and the directories below it account for only about 16 GB of usage. But there is a /zpool directory present for which no pool is configured, so it must be a directory that holds data and not just a mount point. It is using 241 GB, and with that we get into the region of the space usage of the rpool/ROOT/pve-1 dataset.
Check the contents of the /zpool directory. Do you have a storage configured there? Maybe one of type directory?
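To double-check whether /zpool is a plain directory on the root file system rather than a mount point of its own, something like this helps; ncdu's -x flag also keeps it from descending into other mounted file systems:
Code:
# prints nothing if /zpool is not a mount point
findmnt /zpool
# shows which file system the directory actually lives on
df -h /zpool
# re-run ncdu restricted to the root file system only
ncdu -x /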
 

Thank you so very much for the excellent reply and explanations; they have been tremendously helpful, even above and beyond solving my problem with the system drive filling up! :)

From what I understand, this is where my mistakes started:
- I added a new, single HDD to the host and created a new ZFS pool named zpool12 in the GUI under pve/Disks/ZFS,
- then I must have forgotten to manually run zfs create zpool12/pve_backups, and
- added a directory storage via the GUI under Datacenter/Storage with the ID zpool12pvebackups and the path /zpool/pve_backups (see the sketch right after this list).
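In hindsight, the sequence should probably have been something like this (storage ID and paths as I had intended them):
Code:
# create a dataset on the new pool first ...
zfs create zpool12/pve_backups
# ... then point a directory storage at its mount point
pvesm add dir zpool12pvebackups --path /zpool12/pve_backups --content backup,iso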

I really wonder what would have happened if I had not made that typo of writing /zpool/pve_backups instead of /zpool12/pve_backups, in particular because of what I tried next after reading your reply and finding out that /zpool was the core issue, the space hog on the drive.

Somewhat unfortunately, I got too curious about whether it would work to move the files from /zpool (on the SSD) directly to /zpool12/data on the mounted pool.
After rsync /zpool /zpool12/data -v -r --remove-source-files completed, I confirmed with ncdu that the ~200+ GB had been moved to the /zpool12/data directory. My hope was then to also edit the path (and the name, though that is not important) of the added directory in /etc/pve/storage.cfg to
Code:
dir: zpool12data_dir
path /zpool12/data
in order to make Proxmox aware of the backups and ISOs on the drive. I think (though I am not sure) that this worked, at least to the point where I could see the content showing up in the GUI under e.g. Backups, and the Summary also grew to the expected 200+ GB of 8 TB:
[screenshot of the storage summary]
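(A side note for anyone retracing this: as far as I understand rsync, the trailing slash on the source matters, so the two invocations below do not produce the same layout on the target; the paths are the ones from my command above.)
Code:
# copies the *contents* of /zpool into /zpool12/data
rsync -r -v --remove-source-files /zpool/ /zpool12/data
# copies the /zpool directory itself, so files land under /zpool12/data/zpool/
rsync -r -v --remove-source-files /zpool /zpool12/data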

At this point I thought I'd test a reboot.
After the reboot, the backups did not show up in the GUI, ls and ncdu showed only empty Proxmox standard directories at /zpool12/data, and the Summary changed from the expected 200+ GB to 15.5 of 266 GiB, like so:
[screenshot of the storage summary after the reboot]

I have since tried to find the missing data but have not been able to; even after using zfs create zpool12/data, I cannot find the files anywhere, with
Code:
zfs list
zpool12                       2.84T  7.73T      241G  /zpool12
zpool12/data                    96K  7.73T       96K  /zpool12/data
zpool12/subvol-304-disk-0     29.6G  20.4G     29.6G  /zpool12/subvol-304-disk-0
zpool12/vm-203-disk-0         3.06M  7.73T       84K  -
zpool12/vm-203-disk-1         79.8G  7.78T     16.5G  -
zpool12/vm-203-disk-2          155G  7.80T     83.5G  -
zpool12/vm-203-disk-3          516G  8.23T       56K  -
zpool12/vm-211-disk-0          330G  7.76T      291G  -
zpool12/vm-213-disk-0         66.0G  7.75T     37.0G  -
zpool12/vm-213-disk-1            3M  7.73T       60K  -
zpool12/vm-213-disk-2          155G  7.88T       56K  -
zpool12/vm-301-disk-0            3M  7.73T       84K  -
zpool12/vm-301-disk-1         88.7G  7.75T     65.2G  -
zpool12/vm-302-disk-0         8.25G  7.73T       56K  -
zpool12/vm-305-disk-0         66.0G  7.78T     15.1G  -
zpool12/vm-305-disk-1            3M  7.73T       68K  -
zpool12/vm-306-disk-0         66.0G  7.77T     16.6G  -
zpool12/vm-306-disk-1            3M  7.73T       80K  -
zpool12/vm-307-disk-0            3M  7.73T       68K  -
zpool12/vm-307-disk-1         66.0G  7.78T     13.1G  -
zpool12/vm-308-disk-0            3M  7.73T       76K  -
zpool12/vm-308-disk-1         66.0G  7.78T     13.3G  -
zpool12/vm-314-disk-0         33.0G  7.76T      306M  -
zpool12/vm-316-disk-0         33.0G  7.75T     4.40G  -
zpool12/vm-323-disk-0         2.06G  7.73T       56K  -
zpool12/vm-325-disk-0         3.07M  7.73T       80K  -
zpool12/vm-325-disk-1         79.8G  7.79T     14.0G  -
zpool12/vm-327-disk-0         66.0G  7.78T     13.3G  -
zpool12/vm-327-disk-1            3M  7.73T       68K  -
zpool12/vm-330-disk-0         76.3G  7.76T     43.9G  -
zpool12/vm-330-disk-1            3M  7.73T       88K  -
zpool12/vm-330-disk-2          206G  7.93T       56K  -
zpool12/vm-331-disk-0            3M  7.73T       80K  -
zpool12/vm-331-disk-1         74.3G  7.75T     46.1G  -
zpool12/vm-331-disk-2          206G  7.86T     64.0G  -
zpool12/vm-333-disk-0            3M  7.73T       68K  -
zpool12/vm-333-disk-1         66.0G  7.77T     20.1G  -
zpool12/vm-351-disk-0         66.0G  7.77T     15.8G  -
zpool12/vm-351-disk-1            3M  7.73T       68K  -
zpool12/vm-513-disk-0         66.0G  7.78T     13.8G  -
zpool12/z12data                392M  7.73T      392M  /zpool12/z12data

which does not make me any less clueless as to where they went. I did not copy them onto a properly created dataset; by contrast, zpool12/z12data was created with zfs create zpool12/z12data (plus adding a Directory storage for /zpool12/z12data via GUI/Datacenter), and that works well: testing it with a newly created backup, everything shows up as intended. I am assuming the lost files sit on a file-based dataset somewhere and not on a zvol.
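For completeness, this is roughly the kind of searching I mean; these are not necessarily the exact commands I ran, and the name pattern is just a guess:
Code:
# check which datasets on the pool are actually mounted right now
zfs get -r -o name,value mounted zpool12
# look for the copied directory tree and any large leftovers on the pool
find /zpool12 -maxdepth 3 -type d -name '*zpool*'
du -sh /zpool12/* 2>/dev/null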

You have already helped me a great deal with the information you provided and with what I have been reading up on because of it. Losing the backup and ISO files is not too bad, though I would like to know whether and where they still exist. I'll certainly mark the thread as solved. Thank you once again! :)
 
Can you please show the output of mount?

Comparing the size of the whole pool with the previous output, I can see that it grew from 2.60T to 2.84T, so the data must be somewhere.
 
Can you please show the output of mount?

Comparing the size of the whole pool with the previous output, I can see that it grew from 2.60T to 2.84T, so the data must be somewhere.
Here is the output of mount
Code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=24589212k,nr_inodes=6147303,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=4929344k,mode=755)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=48,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=2896)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool2 on /rpool2 type zfs (rw,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
rpool/data/subvol-100-disk-0 on /rpool/data/subvol-100-disk-0 type zfs (rw,noatime,xattr,posixacl)
rpool/data/subvol-105-disk-1 on /rpool/data/subvol-105-disk-1 type zfs (rw,noatime,xattr,posixacl)
rpool/data/subvol-103-disk-0 on /rpool/data/subvol-103-disk-0 type zfs (rw,noatime,xattr,posixacl)
rpool/data/subvol-104-disk-0 on /rpool/data/subvol-104-disk-0 type zfs (rw,noatime,xattr,posixacl)
rpool/data/subvol-105-disk-0 on /rpool/data/subvol-105-disk-0 type zfs (rw,noatime,xattr,posixacl)
rpool/data/subvol-101-disk-0 on /rpool/data/subvol-101-disk-0 type zfs (rw,noatime,xattr,posixacl)
rpool/data/subvol-105-disk-2 on /rpool/data/subvol-105-disk-2 type zfs (rw,noatime,xattr,posixacl)
mirzpool on /mirzpool type zfs (rw,xattr,noacl)
mirzpool/subvol-105-disk-0 on /mirzpool/subvol-105-disk-0 type zfs (rw,xattr,posixacl)
mirzpool/subvol-901-disk-0 on /mirzpool/subvol-901-disk-0 type zfs (rw,xattr,posixacl)
mirzpool/mirzdata on /mirzpool/mirzdata type zfs (rw,xattr,noacl)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=4929340k,mode=700)
zpool12/z12data on /zpool12/z12data type zfs (rw,xattr,noacl)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)

At least the summary view of the pool (zpool12) seems to suggest that the data is still somewhere on the drive:
[screenshot of the zpool12 pool summary]
 
Okay, so zpool12/z12data is currently mounted. But if you run ls /zpool12/z12data it is empty?

If that is the case, I would unmount the ZFS dataset and run the ls command again to see if there was somehow a mixup with it being a directory directly on the pool (/zpool12) instead of the zpool12/z12data dataset.
Code:
zfs unmount zpool12/z12data
then run ls /zpool12/z12data again.

Now if you see the data that should be there, rename that directory to something else.
Code:
mv /zpool12/z12data /zpool12/dirz12data
Now you can mount the zpool12/z12data dataset again
Code:
zfs mount zpool12/z12data

Now you can move the data from /zpool12/dirz12data to the actual dataset mounted at /zpool12/z12data, depending on what you need from it.
 
