BusyBox error after installation

R.Sato

Nov 27, 2017
Hello,

I installed Proxmox 5.1 for the first time on an HPE ProLiant DL320e Gen8. It has four 1 TB drives and 24 GB of memory.

The installer itself runs fine, but after the first restart the system does not start up:

Code:
[Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 330)
Read all physical volumes. This may take a while...

Command: /sbin/zpool import -N 'rpool'
Message: cannot import 'rpool': no such pool available
Error: 1

Failed to import pool 'rpool'.
Manually import the pool and exit.

BusyBox v1.22.1 (Debian 1:1.22.0-19+b3) built-in shell (ash)
Enter 'help' for a list of built-in commands.

/sbin/sh: can't access tty; job control turned off
/ # [    2.980129] sd 6:0:0:0: [sdb] No Caching mode page found
[    2.980192] sd 6:0:0:0: [sdb] Assuming driver cache: write through
[    2.983238] sd 6:0:0:1: [sdc] No Caching mode page found
[    2.983299] sd 6:0:0:1: [sdc] Assuming driver cache: write through
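
For reference, the prompt's own hint ("Manually import the pool and exit") amounts to roughly this once the controller has settled — a sketch of the manual recovery, not a permanent fix:

Code:
/ # zpool import -N rpool
/ # exit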


Whether I use RAID10 or only a single volume, the system does not work correctly. Where is the issue? Any ideas?

Thank you in advance.
 
I didn't try it; instead I have now successfully installed Proxmox on top of Debian 9. Thank you.
 
I didn't try it; instead I have now successfully installed Proxmox on top of Debian 9. Thank you.
Hi,

Can you say what driver Debian 9 is using for the B120i? Is it ahci?

I have the same server as you (DL320e Gen8) and am experiencing the same BusyBox ZFS boot error. I'll try the GRUB fix when I can fit in some downtime, but I'm interested in what a native Debian 9 install might be doing differently on your server.

I'm also seeing significant drive performance issues with Proxmox 5.1-41, which I suspect might be driver related. It happens whether the B120i is in AHCI or RAID mode, and with both SSDs and HDDs.

FYI - here's the driver in use on my server when in RAID mode...

Code:
DRIVER=ahci
PCI_CLASS=10400
PCI_ID=8086:1C04
PCI_SUBSYS_ID=1590:006C
PCI_SLOT_NAME=0000:00:1f.2
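
If anyone wants to pull the same details on their own box, one way is via lspci or udevadm (the PCI address here is just where the B120i sits on my server — check lspci for yours):

Code:
lspci -k -s 00:1f.2
udevadm info -q property -p /sys/bus/pci/devices/0000:00:1f.2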

Thanks
 
I have the same server as you (DL320e Gen8) and am experiencing the same BusyBox ZFS boot error. I'll try the GRUB fix when I can fit in some downtime, but I'm interested in what a native Debian 9 install might be doing differently on your server.
It uses LVM with ext4, not ZFS. Please try my suggestion above; it usually comes down to the controller initialization taking longer, leaving ZFS with no dataset to boot from.
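
For anyone else landing here: the usual form of that workaround is a rootdelay kernel parameter, which gives the controller time to initialize before the initramfs tries to import rpool. A sketch, assuming a stock GRUB setup; 10 seconds is just a common starting value:

Code:
# in /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# then regenerate the config and reboot
update-grub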

FYI - here's the driver in use on my server when in RAID mode...
With ZFS, don't use any sort of RAID other than what ZFS itself provides. That just kills your performance.
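
With the controller in plain AHCI mode and the disks passed through raw, a RAID10-equivalent layout comes from striped ZFS mirrors. A minimal sketch with a hypothetical pool name — in practice, prefer /dev/disk/by-id paths over sdX names:

Code:
zpool create tank mirror sda sdb mirror sdc sdd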
 
With ZFS, don't use any sort of RAID other than what ZFS itself provides. That just kills your performance.
Sorry, I was interested in the literal driver being used for the AHCI (B120i) controller. As you suggest, I'm using ZFS to do the real RAID; the B120i is just presenting single-drive arrays to the OS.

My initial reason for asking was that I've seen a few issues with these HP Gen8 / B120i devices, both with older Linux builds and with other hypervisors. But after digging into this further, I've found I'm being bitten by the same z_null_int / ZFS ARC bug that others are seeing.
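
FWIW, the telltale symptom of that bug is the z_null_int kernel thread doing constant I/O. A quick way to spot it, assuming iotop is installed:

Code:
iotop -b -n 1 -o -k | grep z_null_int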

Thanks. :)