[SOLVED] failed to import 'rpool' on bootup after system update

Jul 28, 2017
San Diego, CA
I have a server running proxmox-ve 5.1-25 with all the latest updates. It has been running for about four months with no problems until this last update.

A couple of days after running the update I needed to reboot the server, and it failed to come back online with the "failed to import" error others have had. Someone in another thread said this was often seen when running ZFS-enabled OSs (like FreeNAS) inside Proxmox; however, I am not running any ZFS-enabled OSs at all.

I have 2 x 1TB Samsung Pro SSDs as my primary boot and storage mirror (mirror-0, rpool), and two 8TB helium HGST drives (also mirrored) for data storage and for the VM disks that do not need SSD speed. The spinning pool also has 2 x 256GB SSDs acting as ZIL (log) and cache devices. All drives are connected to an LSI 3008 12Gb/s controller flashed to IT mode.

I run scrubs every couple of weeks, the latest on October 29th. This is a very lightly loaded server with Dual Xeon E5-2640s and 256GB RAM.

I am on the subscription repository. The bpo80 fix that some suggested did not work for me; I am on bpo90.

While the system runs just fine once it has booted, I would really like to get to the bottom of this issue, since I am right in the middle of deploying a second Proxmox server for VM replication. Any ideas I may have missed are welcome!

Thanks



Here are the software versions:
root@proxmox:~# pveversion -v
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-25
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.10.17-3-pve: 4.10.17-23
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90

Here are the ZFS pools:
root@proxmox:~# zpool status
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h3m with 0 errors on Sun Oct 29 23:03:06 2017
config:

NAME                              STATE     READ WRITE CKSUM
rpool                             ONLINE       0     0     0
  mirror-0                        ONLINE       0     0     0
    wwn-0x5002538d421655d5-part2  ONLINE       0     0     0
    wwn-0x5002538d41fb6695-part2  ONLINE       0     0     0

errors: No known data errors

pool: spinning
state: ONLINE
scan: scrub repaired 0B in 0h0m with 0 errors on Sun Oct 29 23:00:02 2017
config:

NAME                                          STATE     READ WRITE CKSUM
spinning                                      ONLINE       0     0     0
  mirror-0                                    ONLINE       0     0     0
    wwn-0x5000cca254ccb54d                    ONLINE       0     0     0
    wwn-0x5000cca23bf6ee31                    ONLINE       0     0     0
logs
  ata-2.5__SATA_SSD_3ME_20140325AA0000000009  ONLINE       0     0     0
cache
  wwn-0x5002538d42182519                      ONLINE       0     0     0

errors: No known data errors

Output of /etc/default/grub:
root@proxmox:~# cat /etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"

# Disable os-prober, it might add menu entries for each guest
GRUB_DISABLE_OS_PROBER=true

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Disable generation of recovery mode menu entries
GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Output of /boot/grub/grub.cfg:
root@proxmox:~# cat /boot/grub/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
set have_grubenv=true
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi

function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}

if [ x$feature_default_font_path = xy ] ; then
font=unicode
else
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
font="/ROOT/pve-1@/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
set gfxmode=auto
load_video
insmod gfxterm
set locale_dir=$prefix/locale
set lang=en_US
insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
set timeout=30
else
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=cyan/blue
set menu_color_highlight=white/blue
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
set gfxpayload="${1}"
}
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Proxmox Virtual Environment GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-f9e591d1b3cbfa85' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
echo 'Loading Linux 4.13.4-1-pve ...'
linux /ROOT/pve-1@/boot/vmlinuz-4.13.4-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
echo 'Loading initial ramdisk ...'
initrd /ROOT/pve-1@/boot/initrd.img-4.13.4-1-pve
}
submenu 'Advanced options for Proxmox Virtual Environment GNU/Linux' $menuentry_id_option 'gnulinux-advanced-f9e591d1b3cbfa85' {
menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 4.13.4-1-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.13.4-1-pve-advanced-f9e591d1b3cbfa85' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
echo 'Loading Linux 4.13.4-1-pve ...'
linux /ROOT/pve-1@/boot/vmlinuz-4.13.4-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
echo 'Loading initial ramdisk ...'
initrd /ROOT/pve-1@/boot/initrd.img-4.13.4-1-pve
}
menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 4.10.17-4-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.17-4-pve-advanced-f9e591d1b3cbfa85' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
echo 'Loading Linux 4.10.17-4-pve ...'
linux /ROOT/pve-1@/boot/vmlinuz-4.10.17-4-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
echo 'Loading initial ramdisk ...'
initrd /ROOT/pve-1@/boot/initrd.img-4.10.17-4-pve
}
menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 4.10.17-3-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.17-3-pve-advanced-f9e591d1b3cbfa85' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
echo 'Loading Linux 4.10.17-3-pve ...'
linux /ROOT/pve-1@/boot/vmlinuz-4.10.17-3-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
echo 'Loading initial ramdisk ...'
initrd /ROOT/pve-1@/boot/initrd.img-4.10.17-3-pve
}
menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 4.10.17-2-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.17-2-pve-advanced-f9e591d1b3cbfa85' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
echo 'Loading Linux 4.10.17-2-pve ...'
linux /ROOT/pve-1@/boot/vmlinuz-4.10.17-2-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
echo 'Loading initial ramdisk ...'
initrd /ROOT/pve-1@/boot/initrd.img-4.10.17-2-pve
}
menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 4.10.15-1-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.15-1-pve-advanced-f9e591d1b3cbfa85' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
echo 'Loading Linux 4.10.15-1-pve ...'
linux /ROOT/pve-1@/boot/vmlinuz-4.10.15-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
echo 'Loading initial ramdisk ...'
initrd /ROOT/pve-1@/boot/initrd.img-4.10.15-1-pve
}
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry "Memory test (memtest86+)" {
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
linux16 /ROOT/pve-1@/boot/memtest86+.bin
}
menuentry "Memory test (memtest86+, serial console 115200)" {
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
linux16 /ROOT/pve-1@/boot/memtest86+.bin console=ttyS0,115200n8
}
menuentry "Memory test (memtest86+, experimental multiboot)" {
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
multiboot /ROOT/pve-1@/boot/memtest86+_multiboot.bin
}
menuentry "Memory test (memtest86+, serial console 115200, experimental multiboot)" {
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 f9e591d1b3cbfa85
else
search --no-floppy --fs-uuid --set=root f9e591d1b3cbfa85
fi
multiboot /ROOT/pve-1@/boot/memtest86+_multiboot.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###

Output of zfs list -t all:
root@proxmox:~# zfs list -t all
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
rpool                                67.6G   393G    96K  /rpool
rpool/ROOT                           1.79G   393G    96K  /rpool/ROOT
rpool/ROOT/pve-1                     1.79G   393G  1.79G  /
rpool/data                           3.20G   393G    96K  /rpool/data
rpool/data/vm-103-disk-1             3.20G   393G  3.20G  -
rpool/ssd_images                     54.0G   393G    96K  /rpool/ssd_images
rpool/ssd_images/vm-100-disk-1       8.70G   393G  7.56G  -
rpool/ssd_images/vm-101-disk-1       14.4G   393G  14.4G  -
rpool/ssd_images/vm-102-disk-1       7.68G   393G  5.64G  -
rpool/ssd_images/vm-102-disk-1@Works 2.04G      -  4.43G  -
rpool/ssd_images/vm-104-disk-1       6.89G   393G  6.89G  -
rpool/ssd_images/vm-105-disk-1       6.89G   393G  6.89G  -
rpool/ssd_images/vm-106-disk-1       7.45G   393G  7.45G  -
rpool/swap                           8.50G   402G    64K  -
spinning

In my particular case, when I boot up I get "Failed to import pool 'rpool'. Manually import the pool and exit".

I am able to get the system back up and running by entering the following and then exiting:

Code:
zpool import -aN -d /dev/disk/by-id -o cachefile=none
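
For reference, here is the same recovery command with its flags annotated (these are standard zpool import options, nothing Proxmox-specific; check man zpool on your system):

```shell
# Run at the initramfs/BusyBox prompt:
#   -a                 import every pool found by the scan
#   -N                 import without mounting any datasets (the boot
#                      scripts mount the root dataset themselves)
#   -d DIR             scan DIR for devices instead of the default /dev
#   -o cachefile=none  don't write a zpool.cache from the rescue shell
zpool import -aN -d /dev/disk/by-id -o cachefile=none
exit  # leave the shell and let the boot continue
```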


I have tried all of the following ideas and none of them worked for me:

1) apt update followed by apt upgrade and apt-get dist-upgrade

2) Setting "ZPOOL_IMPORT_PATH" in /etc/default/zfs to "/dev/disk/by-vdev:/dev/disk/by-id" and regenerating the initramfs with "update-initramfs -u"

3) I was going to try creating a "disk-by-id.conf" file as suggested by @fabian here; however, I noticed that the zfs-import-scan service is neither active nor enabled on my version of Proxmox, so doing so would have no effect at all:

Code:
root@proxmox:~# systemctl status zfs-import-scan
● zfs-import-scan.service - Import ZFS pools by device scanning
   Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:zpool(8)

4) Adding "rootdelay=30" to GRUB_CMDLINE_LINUX in /etc/default/grub and running update-grub.
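
For anyone repeating attempt 2, the edit can be scripted so it is repeatable. A minimal sketch (set_import_path is a made-up helper name, not part of zfsutils; try it on a copy of the file first):

```shell
# Sketch: set ZPOOL_IMPORT_PATH in a zfs defaults file, replacing any
# existing uncommented assignment or appending one if none is present.
set_import_path() {
  file=$1
  want='ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"'
  if grep -q '^ZPOOL_IMPORT_PATH=' "$file"; then
    sed -i "s|^ZPOOL_IMPORT_PATH=.*|$want|" "$file"
  else
    printf '%s\n' "$want" >> "$file"
  fi
}
# Usage on the real system: set_import_path /etc/default/zfs && update-initramfs -u
```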
 

fabian

Proxmox Staff Member
Jan 7, 2016
try setting the SLEEP parameters in /etc/default/zfs and update the initramfs afterwards
 
Thank you @fabian

There are two different "SLEEP" parameters in /etc/default/zfs:

Code:
# Wait for this many seconds in the initrd pre_mountroot?
# This delays startup and should be '0' on most systems.
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='0'

# Wait for this many seconds in the initrd mountroot?
# This delays startup and should be '0' on most systems. This might help on
# systems which have their ZFS root on a USB disk that takes just a little
# longer to be available
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_POST_MODPROBE_SLEEP='0'

Which do you suggest changing?
 

PVEPeta

Member
Oct 6, 2017
try setting the SLEEP parameters in /etc/default/zfs and update the initramfs afterwards

This did the trick for me!

After I found my mistake at the BusyBox prompt (I hadn't been running the right command :D), the system started with

# zpool import -N rpool
# exit

Then I made the adjustments in /etc/default/zfs, setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5' and ZFS_INITRD_POST_MODPROBE_SLEEP='5', and finished with # update-initramfs -u.

The system is running again. I have not yet applied any further updates from the subscription repository.
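
The edit described above can be scripted the same way. A minimal sketch (set_zfs_sleep is a made-up helper name; the variable names are the ones from the stock Debian /etc/default/zfs, and the initramfs still has to be rebuilt afterwards):

```shell
# Sketch: raise both initrd sleep values in a zfs defaults file so the
# import is retried after the controller has had time to enumerate disks.
set_zfs_sleep() {
  file=$1
  secs=$2
  sed -i \
    -e "s/^ZFS_INITRD_PRE_MOUNTROOT_SLEEP=.*/ZFS_INITRD_PRE_MOUNTROOT_SLEEP='$secs'/" \
    -e "s/^ZFS_INITRD_POST_MODPROBE_SLEEP=.*/ZFS_INITRD_POST_MODPROBE_SLEEP='$secs'/" \
    "$file"
}
# Usage on the real system: set_zfs_sleep /etc/default/zfs 5 && update-initramfs -u
```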
 

RollMops

Active Member
Jul 17, 2014
Just encountered the same problem with an upgrade from 4 to 5.1-43 (via apt), and @PVEPeta's post saved my day! Many thanks! ❤️
 

TheMrg

Member
Aug 1, 2019
Same problem with the Proxmox 5 → 6 upgrade. We changed /etc/default/zfs, but update-initramfs was not found, so after a reboot nothing had changed.

What can we do?
 

Kyle

Member
Oct 18, 2017
Code:
# zpool import -N rpool
# exit

Then I made the adjustments in /etc/default/zfs
Code:
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5' and ZFS_INITRD_POST_MODPROBE_SLEEP='5'
and the whole thing ended with # update-initramfs -u.

I had a similar issue after upgrading v4 to v5, not immediately after but a few reboots later.
Manually running the zpool import as posted by PVEPeta worked fine on the physical console.

In the file /etc/default/zfs I set
Code:
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4'
and then ran
Code:
update-initramfs -k $(uname -r) -u

Then tried a cold shutdown and some reboots, all seemed well again.

Thanks all for sharing your solutions.
 
