Old disk where I can't find all the data

kk-openbox

Hey Proxmoxers.

I moved to a new server about 2 years ago, and at that point I also tried to move to a new SSD.
Now I have turned the old computer on with the old drives, which were never wiped, because I know there is some data on them that I hope is still there.

When I look in the terminal I see that the 1 TB drive is almost 75% full.
Bash:
root@kamper:/# zpool list mt
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
mt     928G   687G   241G        -         -    54%    74%  1.00x    ONLINE  -
root@kamper:/# zpool status mt
  pool: mt
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:09:55 with 0 errors on Sun Jul 11 00:33:56 2021
config:

    NAME                         STATE     READ WRITE CKSUM
    mt                           ONLINE       0     0     0
      nvme-eui.0026b7683db93b85  ONLINE       0     0     0

errors: No known data errors
root@kamper:/#
And in the PVE front end I see the same.

[screenshot: kamper - Proxmox Virtual Environment]
But I only see one disk image on it when I look at the storage contents:

[screenshot: kamper - Proxmox Virtual Environment]

How do I see what else is on that NVMe that is filled up with approx. 800 GB?
 
I would first try zfs list -o space to see what is stored on ZFS, as well as cat /etc/pve/storage.cfg to see where the storages keep their data. Maybe also df -h to see other mounted filesystems.
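For reference, running those checks back to back might look like this (read-only, nothing gets changed):

Bash:
# per-dataset space usage, including snapshots, refreservation and child datasets
zfs list -o space
# which storages PVE knows about and where they point
cat /etc/pve/storage.cfg
# usage of every currently mounted filesystem
df -h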
 
zfs list -o space shows the same picture:

Bash:
root@kamper:~# zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
NAS                 250G  1.51T        0B     96K             0B      1.51T
NAS/vm-101-disk-0   856G  1.51T        0B    942G           605G         0B
mt                  197G   702G        0B    686G             0B      16.6G
mt/vm-101-disk-0    212G  16.5G        0B   1.33G          15.2G         0B

And
Bash:
root@kamper:/mt# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,backup
    maxfiles 1
    shared 0

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

zfspool: mt1
    pool mt
    content images,rootdir
    mountpoint /mt
    sparse 0

zfspool: StorageNAS
    pool NAS
    content images,rootdir
    mountpoint /NAS
    sparse 0

dir: templates
    path /mt/templates
    content iso,vztmpl,images
    shared 0

Maybe the NVMe used to be passed through to a TrueNAS VM, and hence the drive is a ZFS drive.

When I run fdisk on the drive I get:

Code:
Device           Start        End    Sectors   Size Type
/dev/nvme0n1p1     128    4194431    4194304     2G FreeBSD swap
/dev/nvme0n1p2 4194432 1953525119 1949330688 929.5G FreeBSD ZFS

But this doesn't help me see what is on the ZFS drive...
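If you want to see which pool (if any) a partition carries, one option is to read the ZFS label straight off it; a small sketch, assuming the device names from the fdisk output above:

Bash:
# print the ZFS label stored on the partition (read-only; shows the pool name and GUIDs)
zdb -l /dev/nvme0n1p2
# scan the by-id directory for pools that could be imported
zpool import -d /dev/disk/by-id

If that partition belongs to the already-imported "mt" pool, the label will simply show name: 'mt', and zpool import will not list anything new, since it only shows pools that are not imported yet.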
 
There should be 686 GB of files in "/mt" and its subfolders, and 942 GB of data on that "NAS/vm-101-disk-0" block device.
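The zvol part will not show up as files anywhere; once the NAS pool is imported it appears as a block device under /dev/zvol/, named after the dataset. A sketch of how one could peek at it without mounting anything:

Bash:
# zvols are exposed as block devices named after pool/dataset
ls -l /dev/zvol/NAS/
# look at the partition table inside that VM disk
lsblk /dev/zvol/NAS/vm-101-disk-0
fdisk -l /dev/zvol/NAS/vm-101-disk-0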
 
That is what I was thinking too.


Bash:
root@kamper:/mt# ls
templates
root@kamper:/mt# ls -la
total 12
drwxr-xr-x  3 root root 4096 Feb 15  2021 .
drwxr-xr-x 25 root root 4096 Apr 20 00:27 ..
drwxr-xr-x  4 root root 4096 Feb 15  2021 templates
root@kamper:/mt# du -h .
4.0K    ./templates/template/cache
4.0K    ./templates/template/iso
12K    ./templates/template
4.0K    ./templates/images
20K    ./templates
24K    .
root@kamper:/mt#
 
I was thinking that there might be another ZFS pool on that drive as well, but when I try to do an import I can't see it.

Bash:
root@kamper:/mt# zpool import
   pool: boot-pool
     id: 15513697974996719685
  state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
    cannot be accessed in read-write mode because it uses the following
    feature(s) not supported on this system:
    com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
    com.delphix:livelist (Improved clone deletion performance.)
action: The pool cannot be imported in read-write mode. Import the pool with
    "-o readonly=on", access the pool on a system that supports the
    required feature(s), or recreate the pool from backup.
 config:

    boot-pool   UNAVAIL  unsupported feature(s)
      zd16      ONLINE
root@kamper:/mt# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
NAS   1.81T   942G   914G        -         -    44%    50%  1.00x  DEGRADED  -
NAS7  1.46T   275G  1.19T        -         -     0%    18%  1.00x    ONLINE  -
mt     928G   687G   241G        -         -    54%    74%  1.00x    ONLINE  -
root@kamper:/mt#
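That boot-pool is most likely the TrueNAS VM's boot disk living on the zvol (zd16), so it probably does not hold the missing data, but following the "action" hint it could still be imported read-only for a look. A sketch, with the altroot path chosen arbitrarily:

Bash:
# import read-only under a temporary altroot so nothing on it can be modified
zpool import -o readonly=on -R /mnt/old-bootpool boot-pool
ls /mnt/old-bootpool
# export it again when done
zpool export boot-pool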
 
Check if that pool is actually mounted:
zfs get mounted,mountpoint,canmount mt

Note that you might just be looking at an empty mountpoint on the root filesystem, as PVE also creates these folders on its own when they don't exist.
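If that check shows mounted=no, a possible follow-up (just a sketch; -O does an overlay mount in case the existing /mt directory is not empty):

Bash:
zfs get mounted,mountpoint,canmount mt
# if the dataset is not mounted, mount it over the existing directory
zfs mount -O mt
ls /mt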
 
