[SOLVED] Failed To Import Pool 'rpool'

mhayhurst

Hello everyone,

I was using Proxmox 4.2-17... before upgrading yesterday. After upgrading I performed a reboot and received: Failed to import pool 'rpool' immediately following the GRUB screen. After I manually execute: zpool import -N 'rpool' and then exit, everything appears to start loading again but hangs at: A start job is running for Import ZFS pools by devic... I have attached screenshots of what I am seeing. Would anyone be able to provide a solution for this dilemma? I'm really confused as to why the pool appears to import fine when done manually, yet everything hangs at that start job.

(Attachments: Error Message.jpg, Start Job Error.jpg)
 
currently trying to reproduce this / narrow the cause down - do you have VMs with ZFS pools inside? if yes, are any of the vdevs of those pools ZFS zvols on the host?
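for reference, a quick way to check whether any zvols exist on the host at all (a minimal sketch; on PVE, VM disks on ZFS storage typically show up as something like rpool/data/vm-XXX-disk-N):
Code:
zfs list -t volume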
 
if you are able to boot into a zfs-enabled rescue environment, could you try adding "-d /dev/disk/by-id" to the zfs-import-scan.service file?

you can simply drop a file named "disk-by-id.conf" into "/etc/systemd/system/zfs-import-scan.service.d/", with the following content:
Code:
[Service]
ExecStart=
ExecStart=/sbin/zpool import -aN -d /dev/disk/by-id -o cachefile=none

edit: a package with this change should hit the no-subscription repository soon (0.6.5.8-pve12~bpo80) - after you have installed it, you can (and should) remove the above snippet file again.
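for reference, applying the drop-in from a shell looks roughly like this (a minimal sketch, assuming you are working on the installed system's root filesystem, e.g. chrooted from the rescue environment):
Code:
mkdir -p /etc/systemd/system/zfs-import-scan.service.d
cat > /etc/systemd/system/zfs-import-scan.service.d/disk-by-id.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/sbin/zpool import -aN -d /dev/disk/by-id -o cachefile=none
EOF
systemctl daemon-reload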
 

Hey, thanks for replying to this. To answer your first post: NO, I do not have any VMs with ZFS pools inside. I believe you might be referring to something like FreeNAS, which from what I have read should be set up with ZFS...that is not the case here. Please excuse my ignorance, but I am new to ZFS, so I am learning as I go. If I import the pool from the CLI using: zpool import -aN -d /dev/disk/by-id -o cachefile=none it imports okay, and even executing: zfs get mountpoint shows rpool as being mounted (see screenshot). Is there a way I can access the Proxmox /etc directory from those mount points so I can add the conf file to: /etc/systemd/system/zfs-import-scan.service.d/? If not, and I must use a ZFS-enabled rescue environment, can this be any live Linux disc such as Ubuntu Live, or do you suggest something else?
 

(Attachment: ZFS Get Mount Point.jpg)
Hey, thanks for replying to this. To answer your first post: NO, I do not have any VMs with ZFS pools inside. I believe you might be referring to something like FreeNAS, which from what I have read should be set up with ZFS...that is not the case here.

that is strange - it's the only setup where I could reproduce this issue so far..

If I import the pool from the CLI using: zpool import -aN -d /dev/disk/by-id -o cachefile=none it imports okay, and even executing: zfs get mountpoint shows rpool as being mounted (see screenshot). Is there a way I can access the Proxmox /etc directory from those mount points so I can add the conf file to: /etc/systemd/system/zfs-import-scan.service.d/? If not, and I must use a ZFS-enabled rescue environment, can this be any live Linux disc such as Ubuntu Live, or do you suggest something else?

"zfs get mountpoint" only shows you the value of the mountpoint property (i.e., where it should be mounted, relative to the "altroot" property if set). if you import the pool with "-N", it won't be mounted (this makes sense in the service, where first all pools are imported, and then a separate service mounts them, and yet another service shares them (if configured).

given your first screenshot, you should be able to do the following:
  • boot into emergency mode (in the grub menu, hit "e", and add " emergency" to the end of the line starting with "linux", then press ctrl+x)
  • if needed, manually import your rpool in the initramfs prompt like you did in your first post ("zpool import -N rpool" followed by "exit")
  • enter your root password when prompted to enter maintenance/emergency mode
  • disable the zfs-import-scan.service ("systemctl disable zfs-import-scan")
  • continue the boot ("exit" or ctrl+d)
now you should be able to install the update with the fix, and then re-enable the zfs-import-scan service ("systemctl enable zfs-import-scan") and reboot the host. I would recommend regenerating the initramfs ("update-initramfs -u"). if you are still dropped into the initramfs prompt after that, I would be very interested in your "pveversion -v" output and your grub config ("/boot/grub/grub.cfg").
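condensed into commands, the sequence above looks roughly like this (a minimal sketch; the exact packages pulled in depend on your repository state):
Code:
# in the maintenance/emergency shell:
systemctl disable zfs-import-scan
exit                                    # continue the boot
# once the system is up:
apt-get update && apt-get dist-upgrade  # should bring in zfsutils 0.6.5.8-pve12~bpo80 or newer
systemctl enable zfs-import-scan
update-initramfs -u
reboot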
 

THIS WORKED PERFECTLY, THANK YOU!!! I did as you instructed and I was able to boot into Proxmox and update the packages! I still get: Failed to import pool 'rpool', but I just execute that command manually, exit, and then Proxmox boots up. I'm still not sure what causes that, but I plan to add another HDD to my system, and instead of creating another vdev with the new HDD and adding it to the ZFS pool, I'm just going to reinstall Proxmox. THANKS AGAIN!!
 

you could try adding "rootdelay=N" to your linux boot cmdline (e.g., by pressing "e" in the grub menu and adding it to the end of the line starting with "linux"). replace N with the number of seconds that your system should wait for the root device. there are some systems where booting from LVM is not possible without, it might be that the same applies to some hardware with ZFS as well.
 

Once again, thank you!!! This is exactly what I needed and it corrected the problem! I went ahead and added: rootdelay=20 to /etc/default/grub after the: GRUB_CMDLINE_LINUX="... and then ran update-grub from the CLI. I have since rebooted my Proxmox system five times and not once did I have to manually type in the: zpool import... command in order to get Proxmox to boot!! Is this issue caused by certain hardware taking longer than expected to initialize? The SuperMicro hardware I'm using is a lower-end system from 2009 or 2011, I believe.
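For reference, the persistent change described above looks roughly like this (a minimal sketch; the options already present on that line will differ per system):
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX="rootdelay=20"   # appended to whatever options are already set here
# then regenerate the GRUB configuration:
update-grub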
 
Is this issue caused by certain hardware taking longer than expected to initialize?

pretty much. if you don't have such hardware, you don't want to wait, hence it's configurable via the rootdelay option ;)
 

I just want to say that I was a little skeptical about Proxmox at first but decided to give it a try...and I'm sure glad I did! The community is very supportive and I love Proxmox now! I feel that VMware has gone the path of so many other corporations in the sense that they ignore the customer and force changes down our throats. I'm in the process of starting a small business and you can rest assured that a Proxmox subscription will be running on all my systems once I get to that point. Just remember to keep the customer first as Proxmox continues to grow and I promise it will continue to remain a great product! Thanks for all the help!
 
I'm running a couple of sets of older SuperMicro (Opteron 2xxx/8xxx series) motherboards, and they're all having this issue, even on a completely clean new install.

Tried all the options listed here, still can't get a clean boot.

zpool import -aN -d /dev/disk/by-id -o cachefile=none

this works. I can then boot cleanly.

zpool import -aN -d /dev/disk/by-id

also works

you can simply drop a file named "disk-by-id.conf" into "/etc/systemd/system/zfs-import-scan.service.d/", with the following content:

hasn't changed the behavior.

rootdelay=20

Also has no effect.
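One way to double-check whether systemd actually picked up the drop-in override (a minimal sketch):
Code:
systemctl daemon-reload
systemctl cat zfs-import-scan.service     # should show the disk-by-id.conf override appended
systemctl status zfs-import-scan.service  # and whether the scan import succeeded on this boot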



Current Version:
Code:
root@pmx1:~# pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-9 (running version: 4.3-9/f7c6f0cd)
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-92
pve-firmware: 1.1-10
libpve-common-perl: 4.0-79
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-68
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-12
pve-qemu-kvm: 2.7.0-4
pve-container: 1.0-80
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.7-pve10~bpo80

HARDWARE:
Code:
00:00.0 RAM memory: NVIDIA Corporation MCP55 Memory Controller (rev a2)
00:01.0 ISA bridge: NVIDIA Corporation MCP55 LPC Bridge (rev a3)
00:01.1 SMBus: NVIDIA Corporation MCP55 SMBus Controller (rev a3)
00:02.0 USB controller: NVIDIA Corporation MCP55 USB Controller (rev a1)
00:02.1 USB controller: NVIDIA Corporation MCP55 USB Controller (rev a2)
00:05.0 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a3)
00:05.1 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a3)
00:05.2 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a3)
00:06.0 PCI bridge: NVIDIA Corporation MCP55 PCI bridge (rev a2)
00:08.0 Bridge: NVIDIA Corporation MCP55 Ethernet (rev a3)
00:09.0 Bridge: NVIDIA Corporation MCP55 Ethernet (rev a3)
00:0a.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a3)
00:0d.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a3)
00:0e.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a3)
00:0f.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a3)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor HyperTransport Configuration
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Address Map
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Miscellaneous Control
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Link Control
00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor HyperTransport Configuration
00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Address Map
00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor DRAM Controller
00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Miscellaneous Control
00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Link Control
01:05.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ES1000 (rev 02)
02:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 07)
02:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 07)
03:06.0 SCSI storage controller: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller (rev 09)
04:06.0 SCSI storage controller: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller (rev 09)


and finally disks:
Code:
root@pmx1:/dev/disk/by-id# ls -al
total 0
drwxr-xr-x 2 root root 680 Nov  2 08:57 .
drwxr-xr-x 6 root root 120 Nov  2 09:01 ..
lrwxrwxrwx 1 root root   9 Nov  2 09:01 ata-ST3500418AS_6VMD0XXX -> ../../sdd
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_6VMD0XXX-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_6VMD0XXX-part2 -> ../../sdd2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_6VMD0XXX-part9 -> ../../sdd9
lrwxrwxrwx 1 root root   9 Nov  2 09:01 ata-ST3500418AS_9VMDDXXX -> ../../sdc
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_9VMDDXXX-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_9VMDDXXX-part2 -> ../../sdc2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_9VMDDXXX-part9 -> ../../sdc9
lrwxrwxrwx 1 root root   9 Nov  2 09:01 ata-ST3500418AS_9VMJXXXX -> ../../sdb
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_9VMJXXXX-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_9VMJXXXX-part2 -> ../../sdb2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_9VMJXXXX-part9 -> ../../sdb9
lrwxrwxrwx 1 root root   9 Nov  2 09:01 ata-ST3500418AS_Z2AJZXXX -> ../../sda
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_Z2AJZXXX-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_Z2AJZXXX-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 ata-ST3500418AS_Z2AJZXXX-part9 -> ../../sda9
lrwxrwxrwx 1 root root   9 Nov  2 09:01 wwn-0x5000c5002022c6ae -> ../../sdc
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c5002022c6ae-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c5002022c6ae-part2 -> ../../sdc2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c5002022c6ae-part9 -> ../../sdc9
lrwxrwxrwx 1 root root   9 Nov  2 09:01 wwn-0x5000c50022379f0c -> ../../sdd
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c50022379f0c-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c50022379f0c-part2 -> ../../sdd2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c50022379f0c-part9 -> ../../sdd9
lrwxrwxrwx 1 root root   9 Nov  2 09:01 wwn-0x5000c50026c72baf -> ../../sdb
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c50026c72baf-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c50026c72baf-part2 -> ../../sdb2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c50026c72baf-part9 -> ../../sdb9
lrwxrwxrwx 1 root root   9 Nov  2 09:01 wwn-0x5000c5003fa3b51f -> ../../sda
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c5003fa3b51f-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c5003fa3b51f-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Nov  2 09:01 wwn-0x5000c5003fa3b51f-part9 -> ../../sda9

Ideas?
 

On apt-get upgrade, some packages were held back, so I force-updated them all, which seems to include the bpo80 fix.

Code:
proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
pve-manager: 4.3-9 (running version: 4.3-9/f7c6f0cd)
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-92
pve-firmware: 1.1-10
libpve-common-perl: 4.0-79
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-68
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-12
pve-qemu-kvm: 2.7.0-4
pve-container: 1.0-80
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80


Still broken. :(

Code:
root@pmx1:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled)
   Active: failed (Result: exit-code) since Wed 2016-11-02 09:24:53 PDT; 54s ago
  Process: 1987 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
Main PID: 1987 (code=exited, status=1/FAILURE)

Nov 02 09:24:53 pmx1 zfs[1987]: cannot mount '/rpool': directory is not empty
Nov 02 09:24:53 pmx1 systemd[1]: zfs-mount.service: main process exited, code=exited, status=1/FAILURE
Nov 02 09:24:53 pmx1 systemd[1]: Failed to start Mount ZFS filesystems.
Nov 02 09:24:53 pmx1 systemd[1]: Unit zfs-mount.service entered failed state.

I noticed the apt-get install complained about one part - the zfsutils-linux... when I force a reinstall, here's the complaint:
Code:
root@pmx1:~# apt-get install --reinstall zfsutils-linux
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
Need to get 0 B/341 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database ... 44086 files and directories currently installed.)
Preparing to unpack .../zfsutils-linux_0.6.5.8-pve13~bpo80_amd64.deb ...
Warning: Unit file of zfs-import-scan.service changed on disk, 'systemctl daemon-reload' recommended.
Unpacking zfsutils-linux (0.6.5.8-pve13~bpo80) over (0.6.5.8-pve13~bpo80) ...
Processing triggers for man-db (2.7.0.2-5) ...
Setting up zfsutils-linux (0.6.5.8-pve13~bpo80) ...
Job for zfs-mount.service failed. See 'systemctl status zfs-mount.service' and 'journalctl -xn' for details.

root@pmx1:~# journalctl -xn
-- Logs begin at Wed 2016-11-02 09:24:48 PDT, end at Wed 2016-11-02 09:35:02 PDT. --
Nov 02 09:35:02 pmx1 systemd[1]: Started Import ZFS pools by device scanning.
-- Subject: Unit zfs-import-scan.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zfs-import-scan.service has finished starting up.
--
-- The start-up result is done.
Nov 02 09:35:02 pmx1 systemd[1]: Starting Mount ZFS filesystems...
-- Subject: Unit zfs-mount.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zfs-mount.service has begun starting up.
Nov 02 09:35:02 pmx1 zfs[9309]: cannot mount '/rpool': directory is not empty
Nov 02 09:35:02 pmx1 systemd[1]: zfs-mount.service: main process exited, code=exited, status=1/FAILURE
Nov 02 09:35:02 pmx1 systemd[1]: Failed to start Mount ZFS filesystems.
-- Subject: Unit zfs-mount.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zfs-mount.service has failed.
--
-- The result is failed.
Nov 02 09:35:02 pmx1 systemd[1]: Unit zfs-mount.service entered failed state.
Nov 02 09:35:02 pmx1 systemd[1]: Starting ZFS file system shares...
-- Subject: Unit zfs-share.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zfs-share.service has begun starting up.
Nov 02 09:35:02 pmx1 systemd[1]: Started ZFS file system shares.
-- Subject: Unit zfs-share.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zfs-share.service has finished starting up.
--
-- The start-up result is done.
Nov 02 09:35:02 pmx1 systemd[1]: Starting ZFS startup target.
-- Subject: Unit zfs.target has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zfs.target has begun starting up.
Nov 02 09:35:02 pmx1 systemd[1]: Reached target ZFS startup target.
-- Subject: Unit zfs.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zfs.target has finished starting up.
--
-- The start-up result is done.
 
well, the error message says that /rpool is not empty so it cannot mount there.. I'd check that out first! did you install using the PVE installer or manually?
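a minimal sketch for inspecting that (the clean-up itself depends on what turns out to be in the directory, hence only hinted at in the comments):
Code:
zfs list -r -o name,mounted,mountpoint rpool   # which datasets are actually mounted, and where
ls -la /rpool                                  # what is blocking the rpool mountpoint
# if only stale, empty directories are in the way, unmount anything below /rpool,
# remove the leftovers (rmdir refuses to touch non-empty dirs), and retry, e.g.:
#   zfs umount -a; rmdir /rpool/*; zfs mount -a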
 

PVE installer. 4.3, then upgraded.

FWIW, first boot after install won't boot without assistance. There's something missing here....
 

that does indeed sound strange. I wonder if you are able to reproduce this without partial upgrades (i.e., when using "apt-get dist-upgrade", not "apt-get upgrade")?
 
Sorry if I was unclear.

This happens upon first boot after initial install.

Continues to happen after upgrade to latest versions of everything.

If you'd like to give me logs to collect after a fresh install, let me know what files, I'll grab & post them.
 
sorry for misreading. just to make sure:
  • your system drops you to the initramfs shell because it cannot import the rpool? what is the exact error message?
  • have you checked why the rpool is not mounted correctly?
  • can you try regenerating the initramfs and see if the problem persists?
  • output of "zfs list -t all" and "zpool status" and "mount" might help to shed some light
 
Order of things:
* Blue PMX Screen
* Loading Ramdisk
* (clear screen)
* Loading, please wait...

(Screenshot: C9609xE.jpg)



Let me know which logs you need me to pull, etc. I've posted hardware info above.

Code:
root@pmx1:~# zfs list -t all
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         25.5G  1.25T   140K  /rpool
rpool/ROOT                    15.9G  1.25T   140K  /rpool/ROOT
rpool/ROOT/pve-1              15.9G  1.25T  15.9G  /
rpool/data                    1.10G  1.25T   140K  /rpool/data
rpool/data/subvol-101-disk-1   397M  39.6G   397M  /rpool/data/subvol-101-disk-1
rpool/data/subvol-102-disk-1   731M  31.3G   731M  /rpool/data/subvol-102-disk-1
rpool/swap                    8.50G  1.26T    93K  -
root@pmx1:~# zpool status
  pool: rpool
state: ONLINE
  scan: scrub repaired 0 in 0h2m with 0 errors on Wed Nov  2 14:51:24 2016
config:

        NAME                                STATE     READ WRITE CKSUM
        rpool                               ONLINE       0     0     0
          raidz1-0                          ONLINE       0     0     0
            ata-ST3500418AS_Z2AJZCDE-part2  ONLINE       0     0     0
            ata-ST3500418AS_9VMJXQHL-part2  ONLINE       0     0     0
            ata-ST3500418AS_9VMDD41J-part2  ONLINE       0     0     0
            ata-ST3500418AS_6VMD0XDW-part2  ONLINE       0     0     0

errors: No known data errors
root@pmx1:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=8235175,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=13195316k,mode=755)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
rpool/data/subvol-101-disk-1 on /rpool/data/subvol-101-disk-1 type zfs (rw,noatime,xattr,posixacl)
rpool/data/subvol-102-disk-1 on /rpool/data/subvol-102-disk-1 type zfs (rw,noatime,xattr,posixacl)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
 
does "/sbin/zpool import -N 'rpool'" work in the initramfs prompt? are you sure that the rootdelay parameter does not help (try setting it to something like 30 seconds)? this seems like the textbook case..
 
does "/sbin/zpool import -N 'rpool'" work in the initramfs prompt? are you sure that the rootdelay parameter does not help (try setting it to something like 30 seconds)? this seems like the textbook case..

no, I am unable to import by name.
I'll try setting the rootdelay to 30 seconds.. it's at 20 seconds now.
 
