ZFS pool disappeared

and nothing changes in visibility, except one thing: I couldn't reach nnpoolz from the prompt... it's not visible
What do you mean by that? Don't you see the pool's data in /nnpoolz or by running zfs list?
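For example (dataset name taken from this thread; adjust if yours differs), something like this would show whether the dataset is there and whether it is actually mounted:

# Is the dataset there, and is it actually mounted?
zfs list
zfs get mounted,mountpoint nnpoolz
# What is currently visible at the mountpoint?
ls -la /nnpoolz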
 
root@pve:~# zpool list -v
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
nnpoolz                   3.62T   569G  3.07T        -         -     0%    15%  1.00x  ONLINE  -
  wwn-0x50014ee20d82c0be  1.81T   230G  1.59T        -         -     0%  12.4%      -  ONLINE
  wwn-0x5000c50078dff321  1.81T   339G  1.48T        -         -     1%  18.3%      -  ONLINE
root@pve:~#

zpool list looks the same, which seems good.
/nnpoolz exists, but there is still no data, and after deleting all the directories inside it I got the following message:
root@pve:/nnpoolz# rm -R home
root@pve:/nnpoolz# ls
root@pve:/nnpoolz# zpool export nnpoolz
root@pve:/nnpoolz# zpool import -d /dev/disk/by-id nnpoolz
cannot mount '/nnpoolz': directory is not empty
root@pve:/nnpoolz#
So I am stuck right back at the beginning of the problem.
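(For reference, ZFS refuses to mount a dataset onto a non-empty directory. A rough sketch of clearing the mountpoint before re-importing, assuming nothing inside /nnpoolz itself needs to stay in place, would be:)

zpool export nnpoolz
# anything still present under the old mountpoint blocks the mount
find /nnpoolz -mindepth 1
# move leftovers aside instead of deleting them
mv /nnpoolz /nnpoolz.old
mkdir /nnpoolz
zpool import -d /dev/disk/by-id nnpoolz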
 
zpool list shows that it is imported, but what does the zfs list command show?

So I am stuck right back at the beginning of the problem.
If you did not stop the services that access that directory (Transmission, and so on) before you exported the pool, they likely recreated those directories, since from their point of view the directories had gone missing.
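For example (the exact service names are assumptions for a typical Plex/Transmission/Samba setup and may differ on your system; lsof needs to be installed):

# stop everything that might recreate folders under /nnpoolz
systemctl stop transmission-daemon smbd nmbd plexmediaserver
# check whether anything still has files open below the mountpoint
lsof +D /nnpoolz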
 
Everything is stopped. Nothing is running at all. And still the same.
zfs list output below:
root@pve:~# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
nnpoolz  569G  2.96T   569G  /nnpoolz
root@pve:~#
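(Worth noting: zfs list showing a mountpoint does not mean the dataset is actually mounted; the 'mounted' property tells the difference, and mounting only succeeds once /nnpoolz is empty:)

# 'no' here means the dataset is imported but not mounted,
# so /nnpoolz is just an empty directory on the root filesystem
zfs get mounted nnpoolz
# mounting works only once /nnpoolz contains nothing
zfs mount nnpoolz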
 
As far as I understand, nobody in the whole world has ever seen such a problem, not even the Proxmox super pros. So maybe it's not very polite on my side, but I will repeat the problem.

I tried to add 2 new disks to the system. The ZFS pool immediately disappeared, because the newly added disks took /dev/sda and /dev/sdb and the "old" ones were pushed to /dev/sdc and /dev/sdd.
Then I removed the new disks and the pool was restored, but all the directories with data are now invisible. According to zfs list all the data still exists, but it is not accessible.

So, the question: any advice on how to fully restore the ZFS pool and get access to the data?

All my dancing around the problem shows the same picture: the pool is in place, but the data is not accessible.
How can I get access (if at all possible) to the "hidden" data?
Thanks a lot!
 
As far as I understand, nobody in the whole world has ever seen such a problem, not even the Proxmox super pros. So maybe it's not very polite on my side, but I will repeat the problem.
I am trying to understand your problem (I read the whole thread but did not understand everything you meant), so bear with me.

I tried to add 2 new disks to the system. The ZFS pool immediately disappeared, because the newly added disks took /dev/sda and /dev/sdb and the "old" ones were pushed to /dev/sdc and /dev/sdd.
Then I removed the new disks and the pool was restored, but all the directories with data are now invisible. According to zfs list all the data still exists, but it is not accessible.
What do you mean exactly by 'disappear'? Did the zfs list output vanish? Did you reboot in between? What did zpool status / zfs list show before and after?
It is not normal that adding new disks to a running server changes the devpath of the existing disks.
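For reference, the stable identifiers live under /dev/disk/by-id and point to whatever /dev/sdX name the kernel assigned at boot, which is why a pool imported by-id survives a renumbering:

# show stable disk IDs and the /dev/sdX names they currently point to
ls -l /dev/disk/by-id/
# show which device paths the imported pool is currently using
zpool status -P nnpoolz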

So, the question: any advice on how to fully restore the ZFS pool and get access to the data?
That depends on the answers to the questions above.
 
1. The problem is as follows: a Proxmox home server that was working well, running Plex, Transmission and Samba.
I decided to add 2 new HDDs to the system. As soon as I plugged them in, the ZFS pool became unavailable, because the 2 "old" disks in the pool were /dev/sda and /dev/sdb, but after connecting them the "new" ones came up as /dev/sda and /dev/sdb.
After unplugging the newly added disks, the "old" HDDs returned to their original device names and the ZFS pool was available again, but there is no access to the data.

2. The current pool status looks fine according to Proxmox, but all the documents/photos are not accessible. The data directories are gone from the system; only empty, unused folders remain. Status below:
--------------------------------------------------------------------
root@pve:~# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
nnpoolz  569G  2.96T   569G  /nnpoolz
--------------------------------------------------------------------
root@pve:~# zpool list -v
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
nnpoolz                   3.62T   569G  3.07T        -         -     0%    15%  1.00x  ONLINE  -
  wwn-0x50014ee20d82c0be  1.81T   230G  1.59T        -         -     0%  12.4%      -  ONLINE   // originally these vdevs were just /dev/sda and /dev/sdb;
  wwn-0x5000c50078dff321  1.81T   339G  1.48T        -         -     1%  18.3%      -  ONLINE   // after all restore attempts they are imported by-id, but the data is still not accessible
--------------------------------------------------------------------
root@pve:~# zpool history
History for 'nnpoolz':
2020-02-21.14:27:02 zpool create -f -o ashift=12 nnpoolz /dev/sda /dev/sdb
2020-02-25.16:09:31 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-27.11:24:14 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-27.16:28:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-29.14:54:05 zpool import -c /etc/zfs/zpool.cache -aN
2020-02-29.16:22:47 zpool import -c /etc/zfs/zpool.cache -aN
2020-03-04.21:22:18 zpool import -c /etc/zfs/zpool.cache -aN
2020-03-08.00:24:04 zpool scrub nnpoolz
2020-03-29.18:56:04 zpool import -c /etc/zfs/zpool.cache -aN
2020-04-12.00:24:04 zpool scrub nnpoolz
2020-04-18.19:01:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-07.20:32:41 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-10.00:24:04 zpool scrub nnpoolz
2020-05-11.22:44:34 zpool import -c /etc/zfs/zpool.cache -aN
2020-05-22.19:54:37 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-04.20:29:58 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-08.22:13:13 zpool import -c /etc/zfs/zpool.cache -aN
2020-06-14.00:24:09 zpool scrub nnpoolz
2020-07-05.23:53:10 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-12.00:24:05 zpool scrub nnpoolz
2020-07-13.23:07:38 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-23.20:51:21 zpool import -c /etc/zfs/zpool.cache -aN
2020-07-30.22:22:22 zpool import -c /etc/zfs/zpool.cache -aN
2020-08-07.20:22:43 zpool import -c /etc/zfs/zpool.cache -aN
2020-08-09.00:24:09 zpool scrub nnpoolz
2020-09-01.19:30:54 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-05.21:04:52 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-13.00:24:09 zpool scrub nnpoolz
2020-09-19.18:10:28 zpool import -c /etc/zfs/zpool.cache -aN // new disks added and then unplugged (this is where the whole problem started)
2020-09-20.14:30:16 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-20.17:37:10 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-22.12:08:04 zpool export nnpoolz
2020-09-22.12:12:24 zpool export nnpoolz
2020-09-23.15:42:26 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-24.13:50:28 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.07:20:47 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.11:25:19 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.20:13:59 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.20:15:15 zpool export nnpoolz
2020-09-28.20:27:11 zpool import -c /etc/zfs/zpool.cache -aN
2020-09-28.21:39:04 zpool export nnpoolz
2020-09-28.21:40:47 zpool export nnpoolz
2020-09-29.11:53:09 zpool export nnpoolz
2020-10-02.20:24:42 zpool import -c /etc/zfs/zpool.cache -aN
2020-10-03.14:30:12 zpool import -c /etc/zfs/zpool.cache -aN
2020-10-03.14:54:00 zpool import -c /etc/zfs/zpool.cache -aN
----------------------------------------------------------------------------------------
Yes, I restarted the PVE host.
It is not normal that adding new disks to a running server changes the devpath of the existing disks.
I absolutely agree. I switched the server off completely, added the disks, then started it. I got the problem with the changed paths, switched it off again, unplugged the disks and started the server. And then I ran into this problem.
If you need any additional info, or even remote access, you are welcome.
Thanks a lot
Dima
 
OK, good, so I think I understand your problem.

The solution is as follows:

The zpool is imported via /dev/sda etc.; those paths can change when you add a disk (after a reboot).
The first step is to stop anything that uses that zpool and make sure it does not start automatically (e.g. temporarily deactivate your PVE storage that uses that zpool).
Then do an export/import with /dev/disk/by-id (as my colleague described on the previous page).
If there is data left in the mountpoint folder, move it away and make sure nothing writes into it (cronjobs, etc.).
After you have successfully imported the old pool with 'by-id' paths, you can shut your server down, add the disks
and start again. Your old pool should be available and you should be able to use your new disks.
Now you can re-enable everything that accesses the pool (see the sketch below for the whole sequence).
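As a sketch (the service names and the PVE storage ID 'nnpoolz' are assumptions based on this thread; adjust them to your setup and treat this as a guide rather than a copy-paste script):

# 1. stop everything that uses the pool and disable the PVE storage for it
systemctl stop transmission-daemon smbd nmbd plexmediaserver
pvesm set nnpoolz --disable 1

# 2. export the pool and move any leftovers out of the old mountpoint
zpool export nnpoolz
find /nnpoolz -mindepth 1      # inspect what is left first
mv /nnpoolz /nnpoolz.old
mkdir /nnpoolz

# 3. re-import with stable by-id paths and check the mount
zpool import -d /dev/disk/by-id nnpoolz
zfs get mounted,mountpoint nnpoolz

# 4. shut down, add the new disks, boot again, verify the pool,
#    then re-enable the storage and restart the services
pvesm set nnpoolz --disable 0
systemctl start transmission-daemon smbd nmbd plexmediaserver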
 
e.g. temporarily deactivate your PVE storage that uses that zpool
That was the trick! I not only stopped all the services but also disabled nnpoolz as a storage for PVE. So I have to say many thanks for the understanding and patience!
Dominik and Aaron, thanks a lot. It was not so obvious to me, but finally everything is done!
 
