PVE 4.1 - GRUB rescue shell after adding ZFS cache & log

elurex

Hi all,

I have reinstalled a dozen times and I cannot figure out how PVE's GRUB setup works. I always assumed GRUB is installed on the first two disks, /dev/sda and /dev/sdb.

When I installed PVE 4.1, I used the following ZFS pool configuration:
[screenshot: 2016-04-26_10-58-01.jpg]

After I added two Intel 750 NVMe SSDs as log and cache devices, I can no longer boot into PVE after rebooting.
[screenshot: 2016-04-26_11-01-04.jpg]

I have tried to run grub-install manually:
Code:
grub-install /dev/sda
grub-install /dev/sdb
update-grub
update-initramfs -u

Still, after rebooting I can only get into the GRUB rescue shell.

[screenshot: 2016-04-26_10-54-49.jpg]

In the rescue shell I tried insmod zfs and insmod part_gpt, then ls (hd0), ls (hd0,gpt1), ls (hd0,gpt2), and ls (hd0,gpt9); they all return "unknown filesystem".
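Roughly, the session looked like this (reconstructed from memory, so the exact device list and output may differ):
Code:
grub rescue> insmod part_gpt
grub rescue> insmod zfs
grub rescue> ls
(hd0) (hd0,gpt1) (hd0,gpt2) (hd0,gpt9) ...
grub rescue> ls (hd0,gpt2)
error: unknown filesystem.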

Please help.

I have updated PVE 4.1 to the latest packages and still get the same result.
[screenshot: 2016-04-22_17-03-58.jpg]
 
Please post the resulting GRUB configuration ("/boot/grub/grub.cfg") and the content of "/etc/default/grub".
 
Code:
root@san:~# uname -r
4.4.6-1-pve
root@san:~# pveversion -v
proxmox-ve: 4.1-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.1-34 (running version: 4.1-34/8887b0fd)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-13
pve-container: 1.0-61
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
root@san:~# cat /etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"

# Disable os-prober, it might add menu entries for each guest
# root FS on a local partition
GRUB_DISABLE_OS_PROBER=true

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Disable generation of recovery mode menu entries
GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

grub.cfg uploaded as grub.txt

These are the configs BEFORE ADDING the ZFS cache and log devices.
 

Attachments

  • grub.txt (5.7 KB)
The config that is not working (i.e., after adding the cache and log vdevs and updating the GRUB config) would be more interesting.
 
Code:
root@san:/dev# zpool add rpool cache /dev/nvme0n1p1 -f
root@san:/dev# zpool add rpool log /dev/nvme1n1p1 -f
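# (note: -f presumably overrides zpool's refusal over existing labels or sizes;
#  /dev/disk/by-id/... paths are also generally preferred over /dev/nvmeXn1pY,
#  so device renumbering cannot confuse the pool - though that is a separate issue)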
root@san:/dev# zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        rpool        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            sda2     ONLINE       0     0     0
            sdb2     ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            sdc      ONLINE       0     0     0
            sdd      ONLINE       0     0     0
          mirror-2   ONLINE       0     0     0
            sde      ONLINE       0     0     0
            sdf      ONLINE       0     0     0
          mirror-3   ONLINE       0     0     0
            sdg      ONLINE       0     0     0
            sdh      ONLINE       0     0     0
        logs
          nvme1n1p1  ONLINE       0     0     0
        cache
          nvme0n1p1  ONLINE       0     0     0

errors: No known data errors
root@san:/dev# cat /etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#  info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"

# Disable os-prober, it might add menu entries for each guest
# root FS on a local partition
GRUB_DISABLE_OS_PROBER=true

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Disable generation of recovery mode menu entries
GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

grub.cfg uploaded as grub.txt
 

Attachments

  • grub.txt (5.7 KB)
Did you run "update-grub2" to regenerate the configuration? It is identical.
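
For completeness, the usual sequence on a ZFS-mirrored root would look something like this (a sketch; assuming the mirrored boot disks are /dev/sda and /dev/sdb, as in your pool layout):
Code:
# regenerate /boot/grub/grub.cfg from /etc/default/grub and the installed kernels
update-grub2
# reinstall the boot loader on every disk the BIOS might boot from
grub-install /dev/sda
grub-install /dev/sdb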
 
I could not reproduce this here, but unfortunately we only have limited NVMe testing hardware available.

Since NVMe support is fairly new and we want to rule out bugs on our side, we would allow you (this one time, under these special circumstances) to open a free support ticket at https://my.proxmox.com - if you are willing to give us temporary access to the server and iKVM console. Maybe we can find the root of this issue by looking around in the GRUB rescue shell.

If you decide to take this route, please put a reference to this forum thread into the ticket so that our support staff can assign it to me or one of my colleagues directly.
 
Ticket opened... please give me some time to create the necessary connection profile and account access. Currently that server holds no data... everything on the hard drives can be wiped, repartitioned, or reinstalled.

I have tried reinstalling that server so many times trying to solve this problem, and I have already used wipefs -a on /dev/sdX and /dev/nvme0n1 many times.
 
I have the same problem here, running Proxmox 4.3 with 5x 4 TB drives and a 256 GB SSD. After adding 10 GB as log and the rest as cache, the newly installed server does not boot anymore (stopping at the GRUB rescue shell with the same error, "no such device"). Any solution available?
 
SSD or NVMe? In the latter case I would suggest checking for firmware updates (both for the NVMe disk and the motherboard).
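
If you just need the machine booting again, removing the log and cache vdevs should be possible, since both types can be detached from a pool online. A sketch from a live system with ZFS support (pool and device names assumed to match this thread):
Code:
# import the pool under an alternate root without mounting its datasets
zpool import -N -R /mnt rpool
# log and cache vdevs are removable; use the names shown by "zpool status"
zpool remove rpool nvme1n1p1   # the log device
zpool remove rpool nvme0n1p1   # the cache device
zpool export rpool
After that, GRUB may be able to read the pool again, since the problem appeared only once these vdevs were added.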
 
