ZFS pool disappeared

Elinsky

New Member
Sep 16, 2020
2 disks/one ZFS pool
zpool list -v:
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
nnpoolz    3.62T   569G  3.07T        -         -     0%    15%  1.00x  ONLINE  -
  sda      1.81T   230G  1.59T        -         -     0%  12.4%      -  ONLINE
  sdb      1.81T   339G  1.48T        -         -     1%  18.3%      -  ONLINE

zpool history:
2020-02-21.14:27:02 zpool create -f -o ashift=12 nnpoolz /dev/sda /dev/sdb
2020-02-25.16:09:31 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-27.11:24:14 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-27.16:28:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-29.14:54:05 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-29.16:22:47 zpool import -c /etc/zfs/zpool.cache -aN
2020-03-04.21:22:18 zpool import -c /etc/zfs/zpool.cache -aN
2020-03-08.00:24:04 zpool scrub nnpoolz
2020-03-29.18:56:04 zpool import -c /etc/zfs/zpool.cache -aN
2020-04-12.00:24:04 zpool scrub nnpoolz
2020-04-18.19:01:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-07.20:32:41 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-10.00:24:04 zpool scrub nnpoolz
2020-05-11.22:44:34 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-22.19:54:37 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-04.20:29:58 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-08.22:13:13 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-14.00:24:09 zpool scrub nnpoolz
2020-07-05.23:53:10 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-12.00:24:05 zpool scrub nnpoolz
2020-07-13.23:07:38 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-23.20:51:21 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-30.22:22:22 zpool import -c /etc/zfs/zpool.cache -aN
2020-08-07.20:22:43 zpool import -c /etc/zfs/zpool.cache -aN
2020-08-09.00:24:09 zpool scrub nnpoolz
2020-09-01.19:30:54 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-05.21:04:52 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-13.00:24:09 zpool scrub nnpoolz
2020-09-19.18:10:28 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-20.14:30:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-20.17:37:10 zpool import -c /etc/zfs/zpool.cache -aN

zpool status:
  pool: nnpoolz
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:31:39 with 0 errors on Sun Sep 13 00:55:40 2020
config:

    NAME        STATE     READ WRITE CKSUM
    nnpoolz     ONLINE       0     0     0
      sda       ONLINE       0     0     0
      sdb       ONLINE       0     0     0

That was the preamble; here is the current situation.

I tried to add 2 new disks to the system. Immediately the ZFS pool disappeared, because the newly added disks became /dev/sda and /dev/sdb and the "old" ones were shifted to /dev/sdc and /dev/sdd.
Then I removed the new disks and the pool was restored, but all directories with data are invisible now. According to zfs list all the information still exists, but it is not accessible.

So, the question: any advice on how to fully restore the ZFS pool and get access to the data?

Thanks a lot
 
Try to import the pool not with the /dev/sdX paths, which can change, but with the /dev/disk/by-id/ paths.

To do so, run zpool import -d /dev/disk/by-id nnpoolz.

After you have imported the pool like that, you should see the different paths to the disks when you run zpool status.

IIRC you would need to run update-initramfs -u to make sure that the current zpool.cache file is used at the next start.
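
For reference, the whole re-import sequence would look roughly like this. This is only a sketch and assumes nothing is currently reading from or writing to the pool:
Code:
# stop anything that uses the pool, then export it
zpool export nnpoolz
# re-import it using the stable /dev/disk/by-id device names
zpool import -d /dev/disk/by-id nnpoolz
# the vdevs should now show up with their by-id names
zpool status nnpoolz
# rebuild the initramfs so the refreshed zpool.cache is used at the next boot
update-initramfs -u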
 
Aaron, thanks a lot!
Now I feel totally stupid. OK, I just got the following picture after "ls /dev/disk/by-id/":

ata-ST2000DM001-1ER164_Z4Z0GGKL
ata-ST2000DM001-1ER164_Z4Z0GGKL-part1
ata-ST2000DM001-1ER164_Z4Z0GGKL-part9
ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4N2KCNF9U
ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4N2KCNF9U-part1
ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4N2KCNF9U-part9

Please advise which ones I have to include in "zpool import -d /dev/disk/by-id nnpoolz".

Thanks
Dima
 
"/dev/disk/by-id/ata-ST2000DM001-1ER164_Z4Z0GGKL" + "/dev/disk/by-id/ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4N2KCNF9U" are the not changing versions of "/dev/sda" + "/dev/sdb" (which you used to create the pool)
"/dev/disk/by-id/ata-ST2000DM001-1ER164_Z4Z0GGKL-part1" + "/dev/disk/by-id/ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4N2KCNF9U-part1" are "/dev/sda1" + "/dev/sdb1", and so on.

So you need "ata-ST2000DM001-1ER164_Z4Z0GGKL" and "ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4N2KCNF9U".
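
If you want to double-check that mapping yourself: the by-id entries are just symlinks to the current /dev/sdX nodes, so listing them shows which disk they point to right now. A quick check, not required for the import:
Code:
# the arrow at the end of each line shows the /dev/sdX node the id currently points to
ls -l /dev/disk/by-id/ | grep -v part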
 
If you use the import command, the path to the directory should be enough. ZFS searches the disks used for the pool automatically.
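
If you are unsure, you can also run the scan without naming a pool first; with -d and no pool argument, zpool import only lists what it finds in that directory without importing anything:
Code:
# list importable pools found via the by-id device names
zpool import -d /dev/disk/by-id
# then import the pool by name
zpool import -d /dev/disk/by-id nnpoolz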
 
Aaron, since I have never done such a thing before, I am a bit afraid of losing some old, historical information (this is not a big issue, but if I can keep it safe, that will be great).
zpool import --help gives the following:

import [-d dir] [-D]
import [-o mntopts] [-o property=value] ...
    [-d dir | -c cachefile] [-D] [-l] [-f] [-m] [-N] [-R root] [-F [-n]] -a
import [-o mntopts] [-o property=value] ...
    [-d dir | -c cachefile] [-D] [-l] [-f] [-m] [-N] [-R root] [-F [-n]]
    [--rewind-to-checkpoint] <pool | id> [newpool]
There is no reference to "../by-id... " anywhere.
And should I remove the disks from the pool first and then add them again by id (I still didn't catch how)?
So, would you be so kind as to make it clearer for a rookie user?
Thanks a lot
Dima
 
Okay, can you show the output of zpool status in [code][/code] tags please?
 
Hmm... do you mean this:

root@pve:~# zpool status
pool: nnpoolz
state: ONLINE
scan: scrub repaired 0B in 0 days 00:31:39 with 0 errors on Sun Sep 13 00:55:40 2020
config:

    NAME        STATE     READ WRITE CKSUM
    nnpoolz     ONLINE       0     0     0
      sda       ONLINE       0     0     0
      sdb       ONLINE       0     0     0

errors: No known data errors

Just in case: I could let you onto my computer via TeamViewer... Maybe that would be easier for you?
 
Okay, in that case you first need to export the pool. Make sure that anything accessing the pool is stopped.
Then export the pool and reimport it with the -d parameter.
Code:
zpool export nnpoolz
zpool import -d /dev/disk/by-id nnpoolz

A bit of information on why disk/by-id helps. Linux exposes the same disk at multiple locations. The one most people use is /dev/sdX. But it is not guaranteed that a given sdX name will always refer to the same disk, especially if disks are added or removed. That is why under /dev/disk the disks are also exposed by other identifiers, some of which are tied to the physical disk, such as /dev/disk/by-id.
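
To see all the alternative names the system has registered for one disk, something like udevadm should list them (a quick illustration; the exact output depends on your setup):
Code:
# list every udev-created symlink (by-id, by-path, wwn, ...) for the disk behind /dev/sda
udevadm info --query=symlink --name=/dev/sda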

Don't forget to run update-initramfs -u afterwards.
 
Got the following message:

root@pve:~# zpool export nnpoolz
root@pve:~# zpool import -d /dev/disk/by-id nnpoolz
cannot mount '/nnpoolz': directory is not empty
root@pve:~#

Should I start panicking? :)
 
root@pve:~# ls /nnpoolz
home

and one level deeper:

root@pve:~# ls /nnpoolz/home/
D download images template

But the problem is that those directories are without any data!
 
Did you create a directory storage there, or does a user have its home directory configured there?

As a quick workaround, if that dir is empty, remove the /nnpoolz/home directory so that the pool can be imported.
 
Originally this dir was created as the mount point for all applications/services (like Transmission/Plex/Samba) and all the information should be inside. But it is not visible or is unavailable. If I just remove it, it looks like I'll remove all the info.
 
I assume that this data was present on the pool. You did export it but the services are still running and recreating the directories that they expect. Stop them, remove the directories and then the import should work.
 
Aaron, now I feel like a complete dummy. Would you please show (write) it more precisely: do I just have to rm /nnpoolz/home?? But then everything will disappear...
 
If you fear losing something, move that home folder to a USB stick instead of just deleting it, mount the pool, and if the files on the USB stick are newer than the files from the pool, copy them to the pool.
 
You did say that the directories are there, but they are empty, right? Then the services most likely recreated them after they disappeared when you exported the pool.
Stop these services (transmission, plex, samba, ...), move the /nnpoolz/home directory somewhere else, and then try to import the pool as described above. The directories in the pool should be present again with all the data.

ZFS will not mount a pool to a path that is not empty.
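
Roughly, the steps could look like this. The service names and the backup location below are just examples, adjust them to whatever actually runs on your machine:
Code:
# stop the services that keep recreating directories under /nnpoolz (example unit names)
systemctl stop smbd transmission-daemon plexmediaserver
# move the stale, empty mountpoint content out of the way instead of deleting it
mv /nnpoolz/home /root/nnpoolz-home.bak
# with /nnpoolz empty again, the pool can be imported and mounted
zpool import -d /dev/disk/by-id nnpoolz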
 
I've moved the home folder, made all the steps regarding importing the pool, and got the picture below.
 
I've moved all subdirs from nnpoolz, made all the steps with the import, and finally got the following picture:
root@pve:~# zpool list -v
NAME                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
nnpoolz                    3.62T   569G  3.07T        -         -     0%    15%  1.00x  ONLINE  -
  wwn-0x50014ee20d82c0be   1.81T   230G  1.59T        -         -     0%  12.4%      -  ONLINE
  wwn-0x5000c50078dff321   1.81T   339G  1.48T        -         -     1%  18.3%      -  ONLINE

root@pve:~# zpool history
History for 'nnpoolz':
2020-02-21.14:27:02 zpool create -f -o ashift=12 nnpoolz /dev/sda /dev/sdb
2020-02-25.16:09:31 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-27.11:24:14 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-27.16:28:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-29.14:54:05 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-29.16:22:47 zpool import -c /etc/zfs/zpool.cache -aN
2020-03-04.21:22:18 zpool import -c /etc/zfs/zpool.cache -aN
2020-03-08.00:24:04 zpool scrub nnpoolz
2020-03-29.18:56:04 zpool import -c /etc/zfs/zpool.cache -aN
2020-04-12.00:24:04 zpool scrub nnpoolz
2020-04-18.19:01:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-07.20:32:41 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-10.00:24:04 zpool scrub nnpoolz
2020-05-11.22:44:34 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-22.19:54:37 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-04.20:29:58 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-08.22:13:13 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-14.00:24:09 zpool scrub nnpoolz
2020-07-05.23:53:10 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-12.00:24:05 zpool scrub nnpoolz
2020-07-13.23:07:38 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-23.20:51:21 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-30.22:22:22 zpool import -c /etc/zfs/zpool.cache -aN
2020-08-07.20:22:43 zpool import -c /etc/zfs/zpool.cache -aN
2020-08-09.00:24:09 zpool scrub nnpoolz
2020-09-01.19:30:54 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-05.21:04:52 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-13.00:24:09 zpool scrub nnpoolz
2020-09-19.18:10:28 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-20.14:30:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-20.17:37:10 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-22.12:08:04 zpool export nnpoolz
2020-09-22.12:12:24 zpool export nnpoolz
2020-09-23.15:42:26 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-24.13:50:28 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.07:20:47 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.11:25:19 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.20:13:59 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.20:15:15 zpool export nnpoolz

and nothing changed regarding visibility, except one thing: I couldn't reach nnpoolz from the prompt... it's not visible :)
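
One thing that might help narrow this down (just a diagnostic sketch, not a fix): after the import, check whether the dataset is actually mounted, because an imported pool whose datasets are not mounted shows up exactly like this, as empty or missing directories:
Code:
# show the datasets, where they should be mounted, and whether they currently are
zfs list -o name,used,mountpoint,mounted
# try to mount everything that is not mounted yet
zfs mount -a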
 

Attachments

  • ZFS pool.png
