[SOLVED] zpool import -a : no pools available

cvega

Hi forum,

I have been running a Proxmox machine for a while now, using an SSD as the OS drive and 2 x 4TB drives in ZFS RAID1 as main storage.
I've now gotten a new machine and moved the RAID card (Dell PERC H310 in IT mode) to the new motherboard along with the drives, onto a fresh Proxmox install.
However, I cannot seem to import the ZFS pool. It's called "main", but I cannot get ZFS to find it. What did I do wrong when migrating?
The versions on the old and new server are identical (Proxmox and zfs/zfsutils).

Disks are physically showing up in the node interface (screenshot attached).

/dev/sde and /dev/sdg are the two members of the pool that previously worked.
HELP!
 
please post the outputs of:
Code:
zpool status
zpool list
zpool import
lsblk

Thanks!
 
Code:
root@zenon:~# zpool status
no pools available
root@zenon:~# zpool list
no pools available
root@zenon:~# zpool import
no pools available to import
root@zenon:~# lsblk
NAME                                                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                    8:0    0 931.5G  0 disk
sdb                                                    8:16   0 931.5G  0 disk
sdc                                                    8:32   0 931.5G  0 disk
sdd                                                    8:48   0 931.5G  0 disk
sde                                                    8:64   0   3.7T  0 disk
├─sde1                                                 8:65   0   3.7T  0 part
└─sde9                                                 8:73   0     8M  0 part
sdf                                                    8:80   0 931.5G  0 disk
└─sdf1                                                 8:81   0 931.5G  0 part
sdg                                                    8:96   0   3.7T  0 disk
├─sdg1                                                 8:97   0   3.7T  0 part
└─sdg9                                                 8:105  0     8M  0 part
sdh                                                    8:112  0 298.1G  0 disk
├─sdh1                                                 8:113  0  1007K  0 part
├─sdh2                                                 8:114  0   512M  0 part
└─sdh3                                                 8:115  0 297.6G  0 part
  ├─pve-root                                         253:0    0  74.3G  0 lvm  /
  ├─pve-swap                                         253:3    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta                                   253:4    0     2G  0 lvm
  │ └─pve-data                                       253:8    0 195.4G  0 lvm
  └─pve-data_tdata                                   253:5    0 195.4G  0 lvm
    └─pve-data                                       253:8    0 195.4G  0 lvm
sdi                                                    8:128  0 465.8G  0 disk
├─vm--ssd-vm--ssd_tmeta                              253:1    0   120M  0 lvm
│ └─vm--ssd-vm--ssd-tpool                            253:6    0 465.5G  0 lvm
│   ├─vm--ssd-vm--ssd                                253:7    0 465.5G  0 lvm
│   ├─vm--ssd-vm--202--disk--0                       253:9    0    64G  0 lvm
│   ├─vm--ssd-vm--500--state--HP_Sureclick_working   253:10   0   8.5G  0 lvm
│   ├─vm--ssd-vm--500--state--HP_SUreclick_421       253:11   0   8.5G  0 lvm
│   └─vm--ssd-vm--500--disk--0                       253:12   0    64G  0 lvm
└─vm--ssd-vm--ssd_tdata                              253:2    0 465.5G  0 lvm
  └─vm--ssd-vm--ssd-tpool                            253:6    0 465.5G  0 lvm
    ├─vm--ssd-vm--ssd                                253:7    0 465.5G  0 lvm
    ├─vm--ssd-vm--202--disk--0                       253:9    0    64G  0 lvm
    ├─vm--ssd-vm--500--state--HP_Sureclick_working   253:10   0   8.5G  0 lvm
    ├─vm--ssd-vm--500--state--HP_SUreclick_421       253:11   0   8.5G  0 lvm
    └─vm--ssd-vm--500--disk--0                       253:12   0    64G  0 lvm
 
Please use code tags for pasting command line output - it makes reading it much easier.

sde and sdg look like they could belong to a zpool - on a hunch, does:
Code:
zpool import -a -d /dev/disk/by-id
work?
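
Some background on why the -d flag can help (my addition, not part of the original reply): after moving disks to a new motherboard/HBA, the /dev/sdX names often shuffle, while the /dev/disk/by-id links stay stable, so scanning that directory can turn up pool members that the default scan misses. To see which by-id links point at the two suspect disks:
Code:
ls -l /dev/disk/by-id/ | grep -E 'sde|sdg'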
 
Interesting - yes, I moved those drives from another (now dead) node called erebus (the previous server).

Code:
root@zenon:~# zpool import -a -d /dev/disk/by-id/ata-WDC_WD40PURX-64GVNY0_WD-WCC4E6KE8J64-part1
cannot import 'main': pool was previously in use from another system.
Last accessed by erebus (hostid=f0f6e005) at Mon Jul 13 14:02:29 2020
The pool can be imported, use 'zpool import -f' to import the pool.
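
A note on that message (my addition): ZFS records the hostid of the system that last imported the pool, and a fresh install generates a new hostid, so this safeguard trips even though the disks themselves are fine. You can print the new system's hostid to compare against the f0f6e005 recorded by erebus:
Code:
hostid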
 
Code:
root@zenon:~# zpool import -a -f -d /dev/disk/by-id/ata-WDC_WD40PURX-64GVNY0_WD-WCC4E6KE8J64-part1
root@zenon:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
main  3.62T   238G  3.39T        -         -    12%     6%  1.00x    ONLINE  -
root@zenon:~# zfs list
NAME                     USED  AVAIL     REFER  MOUNTPOINT
main                    1.34T  2.17T       96K  /main
main/base-999-disk-0    17.6G  2.19T     1.10G  -
main/vm-201-disk-0      4.11G  2.17T     4.96G  -
main/vm-201-disk-1      1.03T  3.04T      166G  -
main/vm-204-disk-0      21.8G  2.19T     5.38G  -
main/vm-204-state-test  4.63G  2.18T     1.23G  -
main/vm-205-disk-0       132G  2.28T     26.5G  -
main/vm-207-disk-0      33.0G  2.19T     12.8G  -
main/vm-210-disk-0      33.0G  2.20T     2.07G  -
main/vm-210-disk-1      33.0G  2.20T     1.15G  -
main/vm-210-disk-2      33.0G  2.19T     14.3G  -
main/vm-501-disk-0      1.51G  2.17T     2.21G  -
 
Seems that worked ;)
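
For anyone landing here later: a sensible follow-up after a forced import (my suggestion, not from the thread) is to confirm the mirror is healthy and let ZFS verify the data that came along with the move:
Code:
zpool status main
zpool scrub main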
Please mark the thread as 'SOLVED'.
Thanks!
 
It happened to me recently after a restart of the server, and this import option was helpful.
Why did it happen in the first place?
In my case, I had this message:
Code:
$ zfs version
the ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

Although the ZFS packages are installed:
Code:
$ dpkg-query -l | grep zfs
ii  libzfs4linux                   2.1.4-pve1                     amd64        OpenZFS filesystem library for Linux
ii  zfs-dkms                       2.0.3-9                        all          OpenZFS filesystem kernel modules for Linux
ii  zfs-zed                        2.1.4-pve1                     amd64        OpenZFS Event Daemon
ii  zfsutils-linux                 2.1.4-pve1                     amd64        command-line tools to manage OpenZFS filesystems
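
Worth noting (my observation, not stated in the thread): the zfs-dkms package (2.0.3-9) is out of step with the 2.1.4-pve1 userland tools, and a DKMS module that failed to build, or was built for a different version, is a classic cause of "the ZFS modules are not loaded". The on-disk module version can be inspected without loading it:
Code:
$ modinfo zfs | grep -iw version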

After running:
Code:
$ apt install linux-headers-$(uname -r) linux-image-amd64 spl kmod
$ modprobe zfs

I could get everything back with the command from above:
Code:
$ zpool import -a -d /dev/disk/by-id

Do I need to explicitly install linux-headers for each new kernel? This looks strange.
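
For what it's worth (my addition): with zfs-dkms the module is rebuilt from source against each kernel's headers, so on a stock Debian kernel you do indeed need matching linux-headers for every new kernel. Whether DKMS actually built the module for the running kernel shows up in:
Code:
$ dkms status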
 
This is odd/not really correct on a Proxmox system:
* the pve-kernel ships the ZFS module - so there's no need to install linux-headers or linux-image (both from stock Debian)
Make sure you're running a pve-kernel on the system; then this issue should not happen.
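
A quick check (my addition): which kernel is running can be verified with uname -r - on a Proxmox kernel the version string ends in -pve:
Code:
$ uname -r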
 
Thanks for the reply, and you're probably correct. With the command:
Code:
$ uname -a
Linux srv2 5.10.0-13-amd64 #1 SMP Debian 5.10.106-1 (2022-03-17) x86_64 GNU/Linux

it doesn't look like a PVE kernel.
I followed this procedure: https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-debian (installing PBS on top of Debian).

I just installed linux-headers-5.15.30-1-pve; let's see after the reboot (but not today, on a Friday).
 
I'd assume you installed proxmox-backup-server instead of the meta package proxmox-backup - quoting from the docs you posted:
If you want to install the same set of packages as the installer does, please use the following:

# apt-get update
# apt-get install proxmox-backup

This will install all required packages, the Proxmox kernel with ZFS support, and a set of common and useful packages.

I'd suggest just installing proxmox-backup - this will pull in the pve-kernel and provide you with ZFS.

I hope this helps!
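
To quickly check which of the two packages (and which kernel) is currently installed (my addition, along the lines of the dpkg-query used earlier in the thread):
Code:
# dpkg -l | grep -E 'proxmox-backup|pve-kernel'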
 
