[SOLVED] Proxmox 4.0: Failed to boot after zfs install : cannot import rpool: no such pool

bjornoss

New Member
Oct 8, 2015
Hi

Tried to install Proxmox 4 today on a machine with four 2TB disks (+ one SSD).
I selected ZFS RAID10 as the filesystem type and selected my four drives.

When the machine reboots, I see the following after the GRUB menu:
Code:
Loading, please wait...

Command: /sbin/zpool import -N rpool
Message: cannot import 'rpool': no such pool available
Error: 1

No pool imported. Manually import the root pool
at the command prompt and then exit. 
Hint: Try: zpool import -R /root -N rpool

BusyBox v1.22.1 .... 
/ #

I tried the command hinted, but no luck :(
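For reference, the sequence that hint describes looks roughly like this from the BusyBox prompt (a sketch of the usual recovery steps, not output from this machine; the bare "zpool import" listing is an extra step, not part of the hint):
Code:
# list any pools the initramfs can currently see
zpool import
# the hinted manual import: -R sets a temporary altroot, -N skips mounting datasets
zpool import -R /root -N rpool
# if the import succeeds, leaving the shell resumes the normal boot
exit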

I see that others in the forum have had similar problems, but I'm not sure if it is exactly the same issue...

Any suggestions?

Thanks
 
Hi, I have the exact same problem (I came here to post it but noticed your post, so...)!
During installation I selected ZFS RAID10 with the four 1TB (VelociRaptor) HDs and left the SSD unselected (I will use it as L2ARC etc. later on); after rebooting I get the same error you have.
Help is needed here too :)
BTW, what kind (brand/model) of HBA / passthrough controller do you use for direct access of disks?
 
Thanks for the reply, nice to know it is not just me :p
My disks are connected to the integrated controller (Z68) on an ASUS P8Z68-v pro/gen3 motherboard.
I'm also planning to use my SSD as L2ARC.
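(For what it's worth, adding the SSD as an L2ARC cache device later is a single command; a sketch, with a placeholder by-id path standing in for the real SSD:)
Code:
# attach the SSD as an L2ARC cache vdev to the root pool (placeholder device name)
zpool add rpool cache /dev/disk/by-id/ata-<your-ssd-id>
# confirm the cache device shows up
zpool status rpool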
 
Hi,
Here on my machine I have no problems with RAID10 and four 4 TB disks on a Supermicro board with a Haswell-E CPU.
Can you both please post your HW?
 
Mine is an Intel MB, an S2400SC (sigh, we have discovered it is only SATA2, and to use the second 4-port SAS connector you also need an activation key $$$).
The four SATA disks are connected through the onboard SAS controller, set to NO RAID and AHCI.
CPU is 2x Xeon E5-2420, with 64 GB RAM.
The SSD is an Intel DC3710, 200 GB; it was last in the list of available devices during installation and I selected "don't use". It is connected through onboard SATA.
This is an installation of the 4.0 you just released (btw, you are great!).
An ext4 installation on the first disk goes smoothly instead (I haven't tested ZFS on a single disk; I can do that on Monday if needed).
Btw Wolfgang, in the wiki I've seen you used an S2600SC MB as an example. Did you use the onboard SAS controller with RAID disabled in the BIOS? Did you buy the "Intel Raid C600 upgrade Kit" to be able to use the second SAS port as well? Is performance OK with that controller, or would an external PCIe HBA controller be better?
 
Sure Wolfgang

MB: ASUS P8Z68-v pro/gen3
CPU:
INTEL CORE I7 2600K
RAM: 16GB
Disks: 2x Hitachi HDS722020ALA330 and 2x ST2000DM001-1CH164
The disks are all connected to the four 3Gb/s ports on the motherboard.

SSD: Corsair Force GT - connected to one of the 6Gb/s ports.
The SATA config in the BIOS is set to AHCI.

Before installing 4.0 I ran version 3.4 on this machine without any problems, but the disk setup was mdraid + LVM + ext4.

Please let me know if you need any more information.
 
Btw Wolfgang, in the wiki I've seen you used an S2600SC MB as an example. Did you use the onboard SAS controller with RAID disabled in the BIOS? Did you buy the "Intel Raid C600 upgrade Kit" to be able to use the second SAS port as well? Is performance OK with that controller, or would an external PCIe HBA controller be better?

I don't know why you think we used the S2600SC MB, but I'm sorry, we don't, so I can't tell you about this SAS controller.

Can you please try to install PVE 4 on ZFS with RAID1 and tell me if you get the same failure?
 
If you reached BusyBox, that means the kernel and initrd were loaded and executed, so GRUB was able to read them from ZFS.
Try the below command and paste the output:
Code:
zpool import

If nothing is displayed, show output from
Code:
ls /dev/disk/by-id

I do have a mirrored zpool as root, and the funny thing is that at boot it is imported by /dev/sdX instead of the usual /dev/disk/by-id paths.
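(As an aside, if "zpool import" from the initramfs does show the pool but the automatic import keeps failing, forcing the scan to the by-id directory is one thing worth trying; a sketch, assuming the pool itself is healthy, where -d restricts the device search to that directory:)
Code:
# scan only /dev/disk/by-id for pool members, then import without mounting
zpool import -d /dev/disk/by-id -R /root -N rpool
# resume the normal boot once the pool is imported
exit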
 
I do have a mirrored zpool as root, and the funny thing is that at boot it is imported by /dev/sdX instead of the usual /dev/disk/by-id paths.

Newer ZFS versions use libblkid, which always uses /dev/sdX.

The corresponding CLI command to find zfs members is:

# blkid -l -t TYPE=zfs_member
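To see every partition libblkid recognises as a pool member, rather than a single match, the same command also works without -l (a sketch; -o device just trims the output to the device names):
Code:
# list all block devices identified as ZFS pool members
blkid -t TYPE=zfs_member
# or print only the device names
blkid -o device -t TYPE=zfs_member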
 
@dietmar: thanks for the information. Indeed, the "blkid" command shows a lot of ZFS information on my block devices. Seems improved.
@escoreal: Type "blkid" and let's hope it is in the initrd.
 
You had MD (mdraid) on those disks before and didn't erase the superblocks?
Code:
mdadm --zero-superblock /dev/sda2
mdadm --zero-superblock /dev/sdb2

This is the output from a proper 2-drive mirror setup (I didn't select RAID1 during the install, but attached the mirror afterwards):

Code:
/dev/sda2: LABEL="rpool" UUID="13685920825240192355" UUID_SUB="9119363625140310734" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="a6cd9a1a-4098-4719-8d74-c9cab5df7dfa"
/dev/sdb2: LABEL="rpool" UUID="13685920825240192355" UUID_SUB="2512103741661058662" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="07b63574-cf95-4543-9ae8-60147e372405"
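Before zeroing anything, it can be worth confirming that a stale md superblock is actually present on the partitions (a sketch; mdadm --examine is the long form of mdadm -E, and "No md superblock detected" means the partition is already clean):
Code:
# check each partition ZFS should be using for leftover mdraid metadata
mdadm --examine /dev/sda2
mdadm --examine /dev/sdb2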
 
I don't know why you think we used the S2600SC MB, but I'm sorry, we don't, so I can't tell you about this SAS controller.
Can you please try to install PVE 4 on ZFS with RAID1 and tell me if you get the same failure?
About the MB: I've seen it in the Ceph wiki and it is a different model, the S2600CP, but my question remains (http://pve.proxmox.com/wiki/Ceph_Server under "Recommended hardware"). I sadly discovered the S2600CP also has only 3 Gb/s channels (what a shame!).
On Monday I'll try ZFS with RAID1 and also on a single disk (I seem to remember having the issue on a single disk as well, but I'm not sure; better to try again).
I've seen a reply asking about the "superblock": I never used mdadm on those disks, but they were managed by an Adaptec board while I was trying to figure out whether JBOD was a viable solution.
Thanks a lot
 
I can confirm that this was my problem.

I rebooted using a PartedMagic CD, stopped the mdraid, and then zeroed all the superblocks.
Code:
mdadm -E /dev/md0
mdadm --zero-superblock /dev/sda2
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdc2
mdadm --zero-superblock /dev/sdd2

After I rebooted I was able to access the web interface.
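For anyone else hitting this, the whole clean-up from a live/rescue CD looks roughly like this (a sketch of the steps described above; /dev/md0 and the sdX2 partitions are examples, so check /proc/mdstat and your own partition layout first):
Code:
# see which arrays the live system auto-assembled
cat /proc/mdstat
# stop the assembled array before touching its members
mdadm --stop /dev/md0
# wipe the stale md superblock from each partition used by the rpool
mdadm --zero-superblock /dev/sd[abcd]2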

The installation warns that all data will be lost, and I expected it would remove any previous partitions and settings (including the mdraid). Maybe something to warn about?
Should I mark this thread as Solved?
 
