[SOLVED] Some zfs problem here...

Vasilij Lebedinskij

Hello! I've just moved my node to new hardware by simply installing all the disks in the new server. PVE booted successfully and I changed the network interfaces. However, some LXCs didn't start, and I saw that two directories on my second pool (a ZFS RAID 10 of HDDs) were missing; they were configured as lxc.mount.entry bind mounts (an example entry is shown below). The first time I started the new server one drive was missing because its SATA power connector had been pulled out accidentally. I fixed it and restarted the server, and the pool resilvered successfully. All VM and LXC data on that pool was preserved. According to zpool status the pool is online and 2.2 TB is in use, but that is many times more than all the VM and LXC data should allocate. Is there any way to scan the ZFS filesystem and recover the data?

Proxmox 5.4.3
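
For context, the missing directories are bind-mounted into the containers with config lines roughly like the one below (container ID and paths are just examples, not my real ones):

Code:
# /etc/pve/lxc/108.conf (illustrative container ID and paths)
lxc.mount.entry: /superslowpool/media mnt/media none bind,create=dir 0 0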

Code:
root@pve:~# zpool status
  pool: superfastpool
 state: ONLINE
  scan: scrub repaired 0B in 0h23m with 0 errors on Sun May 12 00:47:41 2019
config:

        NAME                                                STATE     READ WRITE CKSUM
        superfastpool                                       ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            nvme-Samsung_SSD_960_EVO_250GB_S3ESNX0K500778W  ONLINE       0     0     0
            nvme-Samsung_SSD_960_EVO_250GB_S3ESNX1K402688J  ONLINE       0     0     0

errors: No known data errors

  pool: superslowpool
 state: ONLINE
  scan: resilvered 27.9M in 0h0m with 0 errors on Thu May 16 18:04:48 2019
config:

        NAME                                  STATE     READ WRITE CKSUM
        superslowpool                         ONLINE       0     0     0
          mirror-0                            ONLINE       0     0     0
            ata-TOSHIBA_DT01ACA200_45S64RSGS  ONLINE       0     0     0
            ata-TOSHIBA_DT01ACA200_45S68REAS  ONLINE       0     0     0
          mirror-1                            ONLINE       0     0     0
            ata-TOSHIBA_DT01ACA200_45U7DXDGS  ONLINE       0     0     0
            ata-TOSHIBA_DT01ACA200_45U7DXXGS  ONLINE       0     0     0

errors: No known data errors

Code:
root@pve:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
superfastpool                    83.8G   131G   152K  /superfastpool
superfastpool/subvol-100-disk-0  2.62G  5.38G  2.62G  /superfastpool/subvol-100-disk-0
superfastpool/subvol-101-disk-0  5.31G  19.7G  5.31G  /superfastpool/subvol-101-disk-0
superfastpool/subvol-104-disk-0   400M  7.61G   400M  /superfastpool/subvol-104-disk-0
superfastpool/subvol-105-disk-1  3.34G  4.66G  3.34G  /superfastpool/subvol-105-disk-1
superfastpool/subvol-108-disk-1  2.76G  2.29G  1.71G  /superfastpool/subvol-108-disk-1
superfastpool/subvol-114-disk-1  5.67G  74.3G  5.67G  /superfastpool/subvol-114-disk-1
superfastpool/subvol-117-disk-0  16.5G  33.5G  16.5G  /superfastpool/subvol-117-disk-0
superfastpool/subvol-118-disk-0  1.44G  6.75G  1.25G  /superfastpool/subvol-118-disk-0
superfastpool/subvol-122-disk-0  3.55G  5.93G  2.07G  /superfastpool/subvol-122-disk-0
superfastpool/subvol-123-disk-0  2.30G  5.98G  2.02G  /superfastpool/subvol-123-disk-0
superfastpool/subvol-124-disk-0   427M  7.58G   427M  /superfastpool/subvol-124-disk-0
superfastpool/vm-107-disk-1      30.9G   148G  14.6G  -
superfastpool/vm-112-disk-1      8.25G   135G  4.56G  -
superslowpool                    2.21T  1.30T  2.15T  /superslowpool
superslowpool/subvol-106-disk-1   762M  7.26G   762M  /superslowpool/subvol-106-disk-1
superslowpool/subvol-116-disk-1  8.65G   359M  8.65G  /superslowpool/subvol-116-disk-1
superslowpool/vm-103-disk-1      10.3G  1.30T  10.0G  -
superslowpool/vm-109-disk-1      10.3G  1.30T  10.0G  -
superslowpool/vm-115-disk-1      33.0G  1.33T  3.73G  -
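
For reference, this is roughly how I would try to break down where the 2.2T on superslowpool is going; the pool's root dataset itself REFERs 2.15T, so the space seems to sit in the top-level dataset rather than in the subvols or zvols. This is just a sketch of the checks, not a fix:

Code:
# Per-dataset space breakdown (usedbydataset, usedbysnapshots, usedbychildren, ...).
zfs list -o space -r superslowpool
# List any snapshots that might be holding space.
zfs list -t snapshot -r superslowpool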
 
Hi!

I'm facing the same kind of problem, and I can see the containers' usage by running the zfs list command:

Code:
zfs list -r
NAME                     USED  AVAIL  REFER  MOUNTPOINT
data                     450G   442G   176K  /data
data/subvol-105-disk-0  8,27G  16,7G  8,27G  /data/subvol-105-disk-0
data/subvol-202-disk-0  19,2G  80,8G  19,2G  /data/subvol-202-disk-0
data/subvol-203-disk-0  5,19G  44,8G  5,19G  /data/subvol-203-disk-0
data/subvol-204-disk-0  6,60G  43,6G  6,38G  /data/subvol-204-disk-0
data/subvol-401-disk-0  42,2G   208G  42,2G  /data/subvol-401-disk-0
data/subvol-404-disk-0  24,4G  95,6G  24,4G  /data/subvol-404-disk-0
data/subvol-409-disk-0  21,7G  78,3G  21,7G  /data/subvol-409-disk-0
data/subvol-413-disk-0  6,86G   114G  6,86G  /data/subvol-413-disk-0
data/subvol-602-disk-0   124G  40,6G   124G  /data/subvol-602-disk-0
data/subvol-801-disk-0  95,3G  34,7G  95,3G  /data/subvol-801-disk-0
data/subvol-802-disk-0  62,9G  67,1G  62,9G  /data/subvol-802-disk-0
data/subvol-904-disk-0  24,0G  96,0G  24,0G  /data/subvol-904-disk-0
data/subvol-905-disk-0  8,26G  16,7G  8,26G  /data/subvol-905-disk-0

And zpool list shows the storage as allocated at the pool level:

Code:
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data   920G   450G   470G         -    44%    48%  1.00x  ONLINE  -

But all the folders inside the data pool seem to be empty, so I think I have the same problem as you had.
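
If it helps, this is roughly how I'm checking whether the datasets are actually mounted or whether the folders under /data are just empty directories on the root filesystem (I'm not sure this is the right approach):

Code:
# Show whether each dataset in the pool is mounted, and where.
zfs get -r mounted,mountpoint data
# List everything currently mounted as a ZFS filesystem.
findmnt -t zfs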

Can you tell me how to export and import the zpool as you did?

So many thanks in advance!
 

zpool export tank

Then I renamed the folder where my pool was mounted (you could also remove it completely) and imported the pool again:

zpool import tank

The import recreated the mountpoint and all the folders appeared.
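
Roughly, the whole sequence looks like this (the pool name and mountpoint here are just the usual example names, adjust them to your pool):

Code:
# Sketch with a pool named "tank" mounted at /tank.
zpool export tank        # unmounts the datasets and exports the pool
mv /tank /tank.old       # move the stray mountpoint directory aside (or remove it)
zpool import tank        # re-import; ZFS recreates /tank and mounts the datasets
ls /tank                 # the dataset folders should be visible again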
 
Hi, Vasilij.

I managed to restore the pool by doing something similar. My Proxmox instance didn't clear the mountpoint directory after a forced reboot, so the system couldn't mount the pool datasets for my containers, which is why they wouldn't start.

After renaming/moving the original mountpoint directory, I only had to run this command and all the data was accessible again.
Code:
service zfs-mount restart
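
In case it is useful to someone else, I believe the manual equivalent would be something along these lines, though I have only tested the service restart:

Code:
# Check which datasets report mounted=no.
zfs get -r mounted data
# Mount every ZFS filesystem that is not mounted yet.
zfs mount -a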

Anyway, thank you very much for your kind answer.

Regards!
 
