Proxmox 5.1 won't boot on ZFS with a P440ar in HBA mode (HP DL380 Gen9 server)

Dubard

Hi everybody,

I want to install Proxmox 5.1 on an HP DL380 Gen9 in UEFI mode.
I managed to switch the HP Smart Array P440ar controller to HBA mode. The problem is that once Proxmox 5.1 is installed and the server is restarted... it does not boot into the Proxmox installation, because it cannot find the bootloader!

I read here that the current Proxmox installer can't handle UEFI!

I then saw this post: https://forum.proxmox.com/threads/p...440ar-in-hba-mode-hp-gl380-gen9-server.22950/ ...where some people had exactly the same problem as me. I tried to apply the method described in that post:

  • Boot the Proxmox VE 5.1 CD/ISO in "Debug mode", exit the first console with Ctrl+D, then click "Abort" in the installer. This drops you to a shell again.
  • Run cgdisk /dev/sda and delete the ZFS partition (partition number 2). That does not destroy data, as ZFS is in pseudo-RAID1 mode; the pool is simply degraded afterwards.
  • Create a new (second) partition of 260 MB, type EF00, labelled ESP.
  • Create a new (third) partition in the remaining space, type BF01, labelled ZFS.
  • Write that partition layout and exit cgdisk.
  • Verify that you see sda1, sda2 and sda3. In my case I also saw a partition #9 "Solaris Reserved 1" that I did not recognise and did not touch.
  • Then apply the following commands:
Code:

$ modprobe efivars

$ zpool import rpool -R /mnt

$ zpool replace rpool /dev/sda2 /dev/sda3
# watch resilvering finish before you continue!

$ mkfs.vfat -F32 /dev/sda2
$ mount /dev/sda2 /mnt/boot/efi
     
$ mount -t proc /proc /mnt/proc
$ mount --rbind /dev /mnt/dev
$ mount --rbind /sys /mnt/sys

$ chroot /mnt /bin/bash
(chroot)$ source /etc/profile
(chroot)$ grub-install /dev/sda
(chroot)$ update-grub2
(chroot)$ update-initramfs -u
(chroot)$ efibootmgr -c --disk /dev/sda --part 2
# verify a new record called Linux is there
(chroot)$ efibootmgr -v
Ctrl-D
 
$ umount /mnt/sys
$ umount /mnt/dev
$ umount /mnt/proc
$ umount /mnt/boot/efi
$ zfs unmount rpool/ROOT/pve-1
Ctrl-D
Ctrl-D

Regarding the last command ("zfs unmount rpool/ROOT/pve-1")... I have a problem: it returns a "Resource busy" error!
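
A quick way to see what is keeping the dataset busy would be something like this (a hypothetical check, assuming fuser is available in that shell; the usual culprit is a shell or process still sitting under /mnt):

Code:
# list processes holding files open under /mnt
fuser -vm /mnt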

If I ignore this error and restart my server, it finds the UEFI partition, but the Proxmox boot process fails with the error "...Cannot import rpool: pool was previously in use from another system..." (as shown in the image attached to this post).

I must have made a mistake somewhere in the UEFI setup procedure...

Has anyone ever had this problem, and could you please help me?
Thanks
 

Attachments: uefi-proxmox-boot-failed.jpg (285.7 KB)
you can just do "zpool import -N -f rpool" in the busybox / initramfs shell, followed by "exit" and it should boot. we need to re-visit ZFS+EFI support in the near future anyway, and then problems like this should no longer require manual intervention.
 
Re,

It's OK now... thank you @fabian!

Here's what I did:

[Allow the Proxmox distribution to boot...]
Code:
+++++ After boot...Into busybox / initramfs +++++

/ # zpool import -N -f rpool
/ # exit
...
...
The system boots...
..
..

+++++ After system ready +++++

root@myserver:~# zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 786M in 0h0m with 0 errors on Mon Jan 22 16:43:46 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            sdc3    ONLINE       0     0     0
            sdd2    OFFLINE      0     0     0

errors: No known data errors
root@myserver:~#

[Put the pool back in good condition, replacing the "/dev/sdX" device names with disk IDs]

Code:
root@myserver:~# zpool detach rpool /dev/sdd2

root@myserver:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 786M in 0h0m with 0 errors on Mon Jan 22 16:43:46 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          sdc3      ONLINE       0     0     0

errors: No known data errors

root@myserver:~# zpool attach rpool /dev/sdc3 /dev/disk/by-id/scsi-33001438038d46e43-part3
Make sure to wait until resilver is done before rebooting.
Code:
root@myserver:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 790M in 0h0m with 0 errors on Tue Jan 23 11:34:38 2018
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            sdc3                          ONLINE       0     0     0
            scsi-33001438038d46e43-part3  ONLINE       0     0     0

errors: No known data errors
root@myserver:~#

root@myserver:~# zpool detach rpool /dev/sdc3

root@myserver:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 790M in 0h0m with 0 errors on Tue Jan 23 11:34:38 2018
config:

        NAME                            STATE     READ WRITE CKSUM
        rpool                           ONLINE       0     0     0
          scsi-33001438038d46e43-part3  ONLINE       0     0     0

errors: No known data errors
root@myserver:~#

root@myserver:~# zpool attach rpool /dev/disk/by-id/scsi-33001438038d46e43-part3 /dev/disk/by-id/scsi-33001438038d46e42-part3
Make sure to wait until resilver is done before rebooting.
Code:
root@myserver:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 790M in 0h0m with 0 errors on Tue Jan 23 11:50:11 2018
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            scsi-33001438038d46e43-part3  ONLINE       0     0     0
            scsi-33001438038d46e42-part3  ONLINE       0     0     0

errors: No known data errors
root@myserver:~#
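
To wait for the resilver to finish, something like the following can be used (a minimal sketch; it assumes the watch utility is installed):

Code:
root@myserver:~# watch -n 5 zpool status rpool
# wait until "scan:" reports "resilvered ... with 0 errors"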

I now have another quick question... how can I also set up the UEFI boot correctly on the second disk?

Thanks
 
I now have another quick question... how can I also set up the UEFI boot correctly on the second disk?

Thanks

create and format an ESP on the second disk, mount it, and run the proper grub-install incantation. note that nothing will keep those two ESPs in sync, so you might want to repeat the "mount and run grub-install" step every now and then (this is part of the reason why PVE does not support ZFS+EFI currently).
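
For example (a sketch only; it assumes the second disk is /dev/sdd and that its partition 2 is, or will become, the ESP, matching the layout used elsewhere in this thread):

Code:
# create a 260 MB ESP (type EF00) if it does not exist yet
sgdisk -n 2:0:+260M -t 2:EF00 -c 2:ESP /dev/sdd
mkfs.vfat -F32 /dev/sdd2
mkdir -p /mnt/boot/efi
mount /dev/sdd2 /mnt/boot/efi
grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi /dev/sdd
umount /mnt/boot/efi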
 
Re,
create and format an ESP on the second disk, mount it, and run the proper grub-install incantation. note that nothing will keep those two ESPs in sync, so you might want to repeat the "mount and run grub-install" step every now and then (this is part of the reason why PVE does not support ZFS+EFI currently).

Thanks @fabian for your reply... I get an error when I apply the commands. See below:

[For disk number 1...]

Code:
root@myserver:~# fdisk -l
...
...
Disk /dev/sdd: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 555A0971-A247-40DA-95AA-99A4101C692C

Device         Start       End   Sectors   Size Type
/dev/sdd1         34      2047      2014  1007K BIOS boot
/dev/sdd2       2048    534527    532480   260M EFI System
/dev/sdd3     534528 234425229 233890702 111.5G Solaris /usr & Apple ZFS
/dev/sdd9  234425230 234441614     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.
...
...
root@myserver:/# mkdir -p /mnt/boot/efi

root@myserver:/# mount /dev/sdc2 /mnt/boot/efi

root@myserver:/# grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi /dev/disk/by-id/scsi-33001438038d46e42
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@myserver:/#

[For disk number 2...]

Code:
root@myserver:~# fdisk -l
...
...
Disk /dev/sdd: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 555A0971-A247-40DA-95AA-99A4101C692C

Device         Start       End   Sectors   Size Type
/dev/sdd1         34      2047      2014  1007K BIOS boot
/dev/sdd2       2048    534527    532480   260M EFI System
/dev/sdd3     534528 234425229 233890702 111.5G Solaris /usr & Apple ZFS
/dev/sdd9  234425230 234441614     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.
...
...


root@myserver:/# umount /dev/sdc2

root@myserver:/# mount /dev/sdd2 /mnt/boot/efi
mount: unknown filesystem type 'zfs_member'
root@myserver:/#

I don't understand why it tells me "unknown filesystem type 'zfs_member'", because fdisk shows that sdd2 is an "EFI System" partition!
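
(As it turns out below, mount looks at the filesystem signature on the partition rather than the GPT type code; a check like the following, illustrative only, would have shown the leftover ZFS label:)

Code:
root@myserver:/# blkid /dev/sdd2
# before mkfs.vfat this still reports TYPE="zfs_member"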


So, I have another problem... every time I reboot my server, I land in the busybox / initramfs shell!
I have to re-enter these commands every time:
Code:
+++++ Into busybox / initramfs +++++

/ # zpool import -N -f rpool
/ # exit

...and the system boots after this command.

What am I doing wrong?

Thanks
 
you didn't format the ESP..
 
you didn't format the ESP..
Sorry... I'm spending a lot of time on this problem and I'm starting to get a little tired... I actually forgot to format /dev/sdd2!

It's OK now:

Code:
root@myserver:~# mkfs.vfat -F32 /dev/sdd2
mkfs.fat 4.1 (2017-01-24)

root@myserver:~# mount /dev/sdd2 /mnt/boot/efi

root@myserver:~# grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi /dev/disk/by-id/scsi-33001438038d46e43
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@myserver:~#

Do you know why I have to re-enter the commands below every time I reboot my server?
Code:
+++++ Into busybox / initramfs +++++

/ # zpool import -N -f rpool
/ # exit

...and the system boots after this command.

Yet rpool seems okay. See below:

Code:
root@myserver:~# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 790M in 0h0m with 0 errors on Tue Jan 23 11:50:11 2018
config:

   NAME                              STATE     READ WRITE CKSUM
   rpool                             ONLINE       0     0     0
     mirror-0                        ONLINE       0     0     0
       scsi-33001438038d46e43-part3  ONLINE       0     0     0
       scsi-33001438038d46e42-part3  ONLINE       0     0     0

errors: No known data errors
root@myserver:~#

What am I doing wrong?

Thanks
 
try updating your initramfs.
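
i.e. something along these lines ("-k all" regenerates the initramfs for every installed kernel):

Code:
update-initramfs -u -k all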
 
Re,

I found the solution HERE:

a) Edit /etc/default/grub and add "rootdelay=10" to GRUB_CMDLINE_LINUX_DEFAULT (i.e. GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"), then run update-grub.

b) Edit /etc/default/zfs, set ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4', then run "update-initramfs -k 4.2.6-1-pve -u" (with your own kernel version; the resulting config lines are shown below).
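
The resulting configuration lines look like this (reconstructed from the description above):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"

# /etc/default/zfs
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4'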

Code:
root@myserver:/boot# vim /etc/default/grub
 
root@myserver:/boot# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.13.4-1-pve
Found initrd image: /boot/initrd.img-4.13.4-1-pve
Found memtest86+ image: /ROOT/pve-1@/boot/memtest86+.bin
Found memtest86+ multiboot image: /ROOT/pve-1@/boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done

root@myserver:/boot# vim /etc/default/zfs

root@myserver:/boot# ls -altr
total 44146
-rw-r--r--  1 root root   184840 Jun 25  2015 memtest86+_multiboot.bin
-rw-r--r--  1 root root   182704 Jun 25  2015 memtest86+.bin
-rw-r--r--  1 root root  7956240 Oct 13 08:59 vmlinuz-4.13.4-1-pve
-rw-r--r--  1 root root  3914587 Oct 13 08:59 System.map-4.13.4-1-pve
-rw-r--r--  1 root root   212594 Oct 13 08:59 config-4.13.4-1-pve
drwxr-xr-x  2 root root        2 Jan 22 16:27 efi
drwxr-xr-x  2 root root        4 Jan 22 16:30 pve
-rw-r--r--  1 root root 35719979 Jan 22 16:47 initrd.img-4.13.4-1-pve
drwxr-xr-x  5 root root       11 Jan 22 16:47 .
drwxr-xr-x 22 root root       23 Jan 22 16:48 ..
drwxr-xr-x  6 root root        9 Jan 23 14:24 grub

root@myserver:/boot# update-initramfs -k 4.13.4-1-pve -u
update-initramfs: Generating /boot/initrd.img-4.13.4-1-pve

root@myserver:/boot#

...and now...all is done !

Last question... after the commands above, is it required to sync GRUB onto all ESP partitions with the following commands?

Code:
root@myserver:/boot# mount /dev/sdd2 /mnt/boot/efi

root@myserver:/boot# grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi /dev/disk/by-id/scsi-33001438038d46e43
Installing for x86_64-efi platform.
Installation finished. No error reported.

root@myserver:/boot# umount /dev/sdd2

root@myserver:/boot# mount /dev/sdc2 /mnt/boot/efi

root@myserver:/boot# grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi /dev/disk/by-id/scsi-33001438038d46e42
Installing for x86_64-efi platform.
Installation finished. No error reported.

root@myserver:/boot# umount /dev/sdc2

Thanks
 
Hi everybody,

I just upgraded one of my cluster's servers.

Do I need to execute all of the above commands to update GRUB on the disks concerned... or just:
Code:
grub-install /dev/disk/by-id/scsi-33001438038d46e42-part2 && grub-install /dev/disk/by-id/scsi-33001438038d46e43-part2
?

Thanks
 
Hi everybody,

If I understood correctly, every time I update my kernel I have to redo GRUB in this way:

On a server installed in Legacy-mode:

Code:
root@myserver:/boot# grub-install /dev/disk/by-id/scsi-33001438038d46e42-part2 && grub-install /dev/disk/by-id/scsi-33001438038d46e43-part2


On a server installed in UEFI-mode:

Code:
root@myserver:/boot# mount /dev/sdd2 /mnt/boot/efi

root@myserver:/boot# grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi /dev/disk/by-id/scsi-33001438038d46e43
Installing for x86_64-efi platform.
Installation finished. No error reported.

root@myserver:/boot# umount /dev/sdd2

root@myserver:/boot# mount /dev/sdc2 /mnt/boot/efi

root@myserver:/boot# grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi /dev/disk/by-id/scsi-33001438038d46e42
Installing for x86_64-efi platform.
Installation finished. No error reported.

root@myserver:/boot# umount /dev/sdc2


  1. Is what I am saying above correct?
  2. Should I also run the command below for both modes (Legacy and UEFI)?
Code:
root@myserver:/boot# update-initramfs -k 4.13.4-1-pve -u
update-initramfs: Generating /boot/initrd.img-4.13.4-1-pve
...where "4.13.4-1-pve" is replaced by my kernel value !

Thanks
 
no. you only need to re-install grub when grub has been updated, and only for UEFI with multiple ESPs (legacy handles multiple BIOS boot partitions just fine, and a single, mounted ESP is also correctly detected for UEFI). a kernel update will always regenerate both the grub config and the initramfs, without any manual intervention.
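
Since nothing keeps the two ESPs in sync, the "mount and run grub-install" step can be wrapped in a small helper to run after each grub package update (a sketch using the disk IDs from this thread; adjust to your system):

Code:
#!/bin/sh
# re-install GRUB on every ESP after a grub package update
set -e
mkdir -p /mnt/boot/efi
for disk in scsi-33001438038d46e42 scsi-33001438038d46e43; do
    mount /dev/disk/by-id/${disk}-part2 /mnt/boot/efi
    grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi \
        /dev/disk/by-id/${disk}
    umount /mnt/boot/efi
done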
 
Hello @fabian ...thanks for your reply.

To better understand, I take the example below of a server where I have 2 ESPs (one on each SSD forming the ZFS RAID-1 for the rpool):

Code:
...
...
Disk /dev/sdc: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FD58381D-CC85-4E6A-A757-5382F39D8CA0

Device         Start       End   Sectors   Size Type
/dev/sdc1         34      2047      2014  1007K BIOS boot
/dev/sdc2       2048    534527    532480   260M EFI System
/dev/sdc3     534528 234425229 233890702 111.5G Solaris /usr & Apple ZFS
/dev/sdc9  234425230 234441614     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.


Disk /dev/sdd: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 21DEECEC-C4AE-4036-9DA8-9C4B836FE38D

Device         Start       End   Sectors   Size Type
/dev/sdd1         34      2047      2014  1007K BIOS boot
/dev/sdd2       2048    534527    532480   260M EFI System
/dev/sdd3     534528 234425229 233890702 111.5G Solaris /usr & Apple ZFS
/dev/sdd9  234425230 234441614     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.
...
...
root@monserveur:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 789M in 0h0m with 0 errors on Mon Jan 29 14:57:24 2018
config:

   NAME                              STATE     READ WRITE CKSUM
   rpool                             ONLINE       0     0     0
     mirror-0                        ONLINE       0     0     0
       scsi-33001438038d46f83-part3  ONLINE       0     0     0
       scsi-33001438038d46f82-part3  ONLINE       0     0     0

errors: No known data errors
root@monserveur:~#
...
...

I run an upgrade...

Code:
root@monserveur:~# apt-get update && apt-get dist-upgrade
Ign:1 http://ftp.ch.debian.org/debian stretch InRelease
Hit:2 http://ftp.ch.debian.org/debian stretch Release
Get:3 http://security.debian.org stretch/updates InRelease [63.0 kB]
Hit:5 https://enterprise.proxmox.com/debian/pve stretch InRelease
Fetched 63.0 kB in 0s (273 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
  dkms fakeroot gcc gcc-6 libasan3 libatomic1 libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgomp1 libitm1 liblsan0 libmpx2 libtsan0 libubsan0 linux-compiler-gcc-6-x86 linux-headers-4.9.0-5-amd64
  linux-headers-4.9.0-5-common linux-headers-amd64 linux-kbuild-4.9 spl-dkms sudo zfs-dkms
Use 'apt autoremove' to remove them.
The following NEW packages will be installed:
  pve-kernel-4.13.13-5-pve
The following packages will be upgraded:
  iproute2 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux proxmox-ve pve-docs pve-kernel-4.13.13-2-pve pve-manager pve-qemu-kvm qemu-server zfs-initramfs zfs-zed zfsutils-linux
14 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 118 MB of archives.
After this operation, 2,516 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 iproute2 amd64 4.13.0-3 [693 kB]
Get:2 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 libuutil1linux amd64 0.7.4-pve2~bpo9 [49.3 kB]
Get:3 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 libnvpair1linux amd64 0.7.4-pve2~bpo9 [46.4 kB]
Get:4 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 libzpool2linux amd64 0.7.4-pve2~bpo9 [560 kB]
Get:5 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 libzfs2linux amd64 0.7.4-pve2~bpo9 [138 kB]
Get:6 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 pve-kernel-4.13.13-5-pve amd64 4.13.13-38 [50.3 MB]
Get:7 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 pve-docs all 5.1-16 [7,257 kB]
Get:8 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 pve-qemu-kvm amd64 2.9.1-6 [6,371 kB]
Get:9 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 qemu-server amd64 5.0-20 [153 kB]
Get:10 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 pve-manager amd64 5.1-43 [1,992 kB]
Get:11 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 proxmox-ve all 5.1-38 [5,332 B]
Get:12 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 pve-kernel-4.13.13-2-pve amd64 4.13.13-33 [50.3 MB]
Get:13 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 zfsutils-linux amd64 0.7.4-pve2~bpo9 [349 kB]
Get:14 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 zfs-initramfs all 0.7.4-pve2~bpo9 [26.0 kB]
Get:15 https://enterprise.proxmox.com/debian/pve stretch/pve-enterprise amd64 zfs-zed amd64 0.7.4-pve2~bpo9 [62.7 kB]
Fetched 118 MB in 2s (43.7 MB/s)   
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 59538 files and directories currently installed.)
Preparing to unpack .../00-iproute2_4.13.0-3_amd64.deb ...
Unpacking iproute2 (4.13.0-3) over (4.10.0-1) ...
Preparing to unpack .../01-libuutil1linux_0.7.4-pve2~bpo9_amd64.deb ...
Unpacking libuutil1linux (0.7.4-pve2~bpo9) over (0.7.3-pve1~bpo9) ...
Preparing to unpack .../02-libnvpair1linux_0.7.4-pve2~bpo9_amd64.deb ...
Unpacking libnvpair1linux (0.7.4-pve2~bpo9) over (0.7.3-pve1~bpo9) ...
Preparing to unpack .../03-libzpool2linux_0.7.4-pve2~bpo9_amd64.deb ...
Unpacking libzpool2linux (0.7.4-pve2~bpo9) over (0.7.3-pve1~bpo9) ...
Preparing to unpack .../04-libzfs2linux_0.7.4-pve2~bpo9_amd64.deb ...
Unpacking libzfs2linux (0.7.4-pve2~bpo9) over (0.7.3-pve1~bpo9) ...
Selecting previously unselected package pve-kernel-4.13.13-5-pve.
Preparing to unpack .../05-pve-kernel-4.13.13-5-pve_4.13.13-38_amd64.deb ...
Unpacking pve-kernel-4.13.13-5-pve (4.13.13-38) ...
Preparing to unpack .../06-pve-docs_5.1-16_all.deb ...
Unpacking pve-docs (5.1-16) over (5.1-12) ...
Preparing to unpack .../07-pve-qemu-kvm_2.9.1-6_amd64.deb ...
Unpacking pve-qemu-kvm (2.9.1-6) over (2.9.1-5) ...
Preparing to unpack .../08-qemu-server_5.0-20_amd64.deb ...
Unpacking qemu-server (5.0-20) over (5.0-18) ...
Preparing to unpack .../09-pve-manager_5.1-43_amd64.deb ...
Unpacking pve-manager (5.1-43) over (5.1-41) ...
Preparing to unpack .../10-proxmox-ve_5.1-38_all.deb ...
Unpacking proxmox-ve (5.1-38) over (5.1-32) ...
Preparing to unpack .../11-pve-kernel-4.13.13-2-pve_4.13.13-33_amd64.deb ...
Unpacking pve-kernel-4.13.13-2-pve (4.13.13-33) over (4.13.13-32) ...
Preparing to unpack .../12-zfsutils-linux_0.7.4-pve2~bpo9_amd64.deb ...
Unpacking zfsutils-linux (0.7.4-pve2~bpo9) over (0.7.3-pve1~bpo9) ...
Preparing to unpack .../13-zfs-initramfs_0.7.4-pve2~bpo9_all.deb ...
Unpacking zfs-initramfs (0.7.4-pve2~bpo9) over (0.7.3-pve1~bpo9) ...
Preparing to unpack .../14-zfs-zed_0.7.4-pve2~bpo9_amd64.deb ...
Unpacking zfs-zed (0.7.4-pve2~bpo9) over (0.6.5.9-5) ...
Setting up pve-kernel-4.13.13-5-pve (4.13.13-38) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.13.13-5-pve /boot/vmlinuz-4.13.13-5-pve
run-parts: executing /etc/kernel/postinst.d/dkms 4.13.13-5-pve /boot/vmlinuz-4.13.13-5-pve
Error! Your kernel headers for kernel 4.13.13-5-pve cannot be found.
Please install the linux-headers-4.13.13-5-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
Error! Your kernel headers for kernel 4.13.13-5-pve cannot be found.
Please install the linux-headers-4.13.13-5-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.13.13-5-pve /boot/vmlinuz-4.13.13-5-pve
update-initramfs: Generating /boot/initrd.img-4.13.13-5-pve
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.13.13-5-pve /boot/vmlinuz-4.13.13-5-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.13.13-5-pve
Found initrd image: /boot/initrd.img-4.13.13-5-pve
Found linux image: /boot/vmlinuz-4.13.13-2-pve
Found initrd image: /boot/initrd.img-4.13.13-2-pve
Found memtest86+ image: /ROOT/pve-1@/boot/memtest86+.bin
Found memtest86+ multiboot image: /ROOT/pve-1@/boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done
Setting up libuutil1linux (0.7.4-pve2~bpo9) ...
Processing triggers for initramfs-tools (0.130) ...
update-initramfs: Generating /boot/initrd.img-4.13.13-5-pve
Setting up iproute2 (4.13.0-3) ...
Setting up pve-kernel-4.13.13-2-pve (4.13.13-33) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.13.13-2-pve /boot/vmlinuz-4.13.13-2-pve
run-parts: executing /etc/kernel/postinst.d/dkms 4.13.13-2-pve /boot/vmlinuz-4.13.13-2-pve
Error! Your kernel headers for kernel 4.13.13-2-pve cannot be found.
Please install the linux-headers-4.13.13-2-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
Error! Your kernel headers for kernel 4.13.13-2-pve cannot be found.
Please install the linux-headers-4.13.13-2-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.13.13-2-pve /boot/vmlinuz-4.13.13-2-pve
update-initramfs: Generating /boot/initrd.img-4.13.13-2-pve
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.13.13-2-pve /boot/vmlinuz-4.13.13-2-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.13.13-5-pve
Found initrd image: /boot/initrd.img-4.13.13-5-pve
Found linux image: /boot/vmlinuz-4.13.13-2-pve
Found initrd image: /boot/initrd.img-4.13.13-2-pve
Found memtest86+ image: /ROOT/pve-1@/boot/memtest86+.bin
Found memtest86+ multiboot image: /ROOT/pve-1@/boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done
Setting up libnvpair1linux (0.7.4-pve2~bpo9) ...
Setting up pve-docs (5.1-16) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for systemd (232-25+deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up libzpool2linux (0.7.4-pve2~bpo9) ...
Setting up pve-qemu-kvm (2.9.1-6) ...
Setting up libzfs2linux (0.7.4-pve2~bpo9) ...
Setting up qemu-server (5.0-20) ...
Setting up pve-manager (5.1-43) ...
Setting up zfsutils-linux (0.7.4-pve2~bpo9) ...
Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-import.target → /lib/systemd/system/zfs-import.target.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-import.target → /lib/systemd/system/zfs-import.target.
zfs-import-scan.service is a disabled or a static unit, not starting it.
Setting up zfs-zed (0.7.4-pve2~bpo9) ...
Installing new version of config file /etc/zfs/zed.d/zed-functions.sh ...
Installing new version of config file /etc/zfs/zed.d/zed.rc ...
Setting up zfs-initramfs (0.7.4-pve2~bpo9) ...
Processing triggers for pve-ha-manager (2.0-4) ...
Setting up proxmox-ve (5.1-38) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for initramfs-tools (0.130) ...
update-initramfs: Generating /boot/initrd.img-4.13.13-5-pve
root@monserveur:~#

In this case, which commands do I have to execute manually?

Thanks
 
none, since as you can see from your log both update-initramfs and update-grub have been triggered correctly. but you should remove the zfs-dkms and spl-dkms packages from stock debian, as well as any stock debian kernel image and header packages (linux-image-XXX linux-headers-XXX).
 
Thanks @fabian

Here's what I did:

Code:
root@monserveur:~# apt-get remove zfs-dkms
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  dkms fakeroot gcc gcc-6 libasan3 libatomic1 libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgomp1 libitm1 liblsan0 libmpx2 libtsan0 libubsan0 linux-compiler-gcc-6-x86 linux-headers-4.9.0-5-amd64
  linux-headers-4.9.0-5-common linux-headers-amd64 linux-kbuild-4.9 spl-dkms sudo
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  zfs-dkms
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 8,152 kB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 69626 files and directories currently installed.)
Removing zfs-dkms (0.6.5.9-5) ...

------------------------------
Deleting module version: 0.6.5.9
completely from the DKMS tree.
------------------------------
Done.



root@monserveur:~# apt-get remove spl-dkms
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  dkms fakeroot gcc gcc-6 libasan3 libatomic1 libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgomp1 libitm1 liblsan0 libmpx2 libtsan0 libubsan0 linux-compiler-gcc-6-x86 linux-headers-4.9.0-5-amd64
  linux-headers-4.9.0-5-common linux-headers-amd64 linux-kbuild-4.9 sudo
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  spl-dkms
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 3,017 kB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 69226 files and directories currently installed.)
Removing spl-dkms (0.6.5.9-1) ...

-------- Uninstall Beginning --------
Module:  spl
Version: 0.6.5.9
Kernel:  4.9.0-5-amd64 (x86_64)
-------------------------------------

Status: Before uninstall, this module version was ACTIVE on this kernel.

spl.ko:
 - Uninstallation
   - Deleting from: /lib/modules/4.9.0-5-amd64/updates/dkms/
 - Original module
   - No original module was found for this module on this kernel.
   - Use the dkms install command to reinstall any previous module version.


splat.ko:
 - Uninstallation
   - Deleting from: /lib/modules/4.9.0-5-amd64/updates/dkms/
 - Original module
   - No original module was found for this module on this kernel.
   - Use the dkms install command to reinstall any previous module version.

depmod...

DKMS: uninstall completed.

------------------------------
Deleting module version: 0.6.5.9
completely from the DKMS tree.
------------------------------
Done.


root@monserveur:~# apt-get remove linux-headers-4.9.0-5-amd64
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  dkms fakeroot gcc gcc-6 libasan3 libatomic1 libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgomp1 libitm1 liblsan0 libmpx2 libtsan0 libubsan0 linux-compiler-gcc-6-x86 linux-headers-4.9.0-5-common
  linux-kbuild-4.9 sudo
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  linux-headers-4.9.0-5-amd64 linux-headers-amd64
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 4,301 kB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 68939 files and directories currently installed.)
Removing linux-headers-amd64 (4.9+80+deb9u3) ...
Removing linux-headers-4.9.0-5-amd64 (4.9.65-3+deb9u2) ...
dpkg: warning: while removing linux-headers-4.9.0-5-amd64, directory '/lib/modules/4.9.0-5-amd64' not empty so not removed


root@monserveur:~# apt-get remove linux-headers-4.9.0-5-common
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  dkms fakeroot gcc gcc-6 libasan3 libatomic1 libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgomp1 libitm1 liblsan0 libmpx2 libtsan0 libubsan0 linux-compiler-gcc-6-x86 linux-kbuild-4.9 sudo
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  linux-headers-4.9.0-5-common
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 45.4 MB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 61543 files and directories currently installed.)
Removing linux-headers-4.9.0-5-common (4.9.65-3+deb9u2) ...
root@atlas:~# apt-get remove linux-image-4.9.0-5
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Note, selecting 'linux-image-4.9.0-5-rt-amd64-dbg' for regex 'linux-image-4.9.0-5'
Note, selecting 'linux-image-4.9.0-5-amd64' for regex 'linux-image-4.9.0-5'
Note, selecting 'linux-image-4.9.0-5-rt-amd64' for regex 'linux-image-4.9.0-5'
Note, selecting 'linux-image-4.9.0-5-amd64-dbg' for regex 'linux-image-4.9.0-5'
Package 'linux-image-4.9.0-5-amd64' is not installed, so not removed
Package 'linux-image-4.9.0-5-amd64-dbg' is not installed, so not removed
Package 'linux-image-4.9.0-5-rt-amd64' is not installed, so not removed
Package 'linux-image-4.9.0-5-rt-amd64-dbg' is not installed, so not removed
The following packages were automatically installed and are no longer required:
  dkms fakeroot gcc gcc-6 libasan3 libatomic1 libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgomp1 libitm1 liblsan0 libmpx2 libtsan0 libubsan0 linux-compiler-gcc-6-x86 linux-kbuild-4.9 sudo
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.



root@monserveur:~# apt autoremove
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  dkms fakeroot gcc gcc-6 libasan3 libatomic1 libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgomp1 libitm1 liblsan0 libmpx2 libtsan0 libubsan0 linux-compiler-gcc-6-x86 linux-kbuild-4.9 sudo
0 upgraded, 0 newly installed, 19 to remove and 0 not upgraded.
After this operation, 48.1 MB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 53157 files and directories currently installed.)
Removing dkms (2.3-2) ...
Removing fakeroot (1.21-3.1) ...
update-alternatives: using /usr/bin/fakeroot-tcp to provide /usr/bin/fakeroot (fakeroot) in auto mode
Removing gcc (4:6.3.0-4) ...
Removing linux-compiler-gcc-6-x86 (4.9.65-3+deb9u2) ...
Removing gcc-6 (6.3.0-18) ...
Removing libgcc-6-dev:amd64 (6.3.0-18) ...
Removing libasan3:amd64 (6.3.0-18) ...
Removing libatomic1:amd64 (6.3.0-18) ...
Removing libcc1-0:amd64 (6.3.0-18) ...
Removing libcilkrts5:amd64 (6.3.0-18) ...
Removing libfakeroot:amd64 (1.21-3.1) ...
Removing libgomp1:amd64 (6.3.0-18) ...
Removing libitm1:amd64 (6.3.0-18) ...
Removing liblsan0:amd64 (6.3.0-18) ...
Removing libmpx2:amd64 (6.3.0-18) ...
Removing libtsan0:amd64 (6.3.0-18) ...
Removing libubsan0:amd64 (6.3.0-18) ...
Removing linux-kbuild-4.9 (4.9.65-3+deb9u2) ...
Removing sudo (1.8.19p1-2.1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
root@monserveur:~#

Thank you for your advice!
See you soon
 
I'm having a similar problem: a P440ar with a 2-disk ZFS mirror for Proxmox plus a 6-disk ZFS RAID10. The server boots only if I remove the 6-disk RAID10. Has anyone had the same problem?
 
Has there been any update on whether we can install Proxmox with ZFS in UEFI mode using an ISO installer? I just got the NUC7CJYH and it doesn't support legacy boot, so I don't know how else I can install Proxmox with ZFS.
 
Has there been any update on whether we can install Proxmox with ZFS in UEFI mode using an ISO installer? I just got the NUC7CJYH and it doesn't support legacy boot, so I don't know how else I can install Proxmox with ZFS.

not supported (yet). you can manually install it (if you know what you are doing) using a live CD and debootstrap.
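
Roughly, the manual route looks like this (an outline only, not a supported procedure):

Code:
# 1. boot a live CD that ships ZFS support
# 2. partition each disk: BIOS boot + ESP + ZFS partition (as in this thread)
# 3. create the pool and root dataset, e.g.
#      zpool create -o ashift=12 rpool mirror <disk1-part3> <disk2-part3>
#      zfs create rpool/ROOT && zfs create rpool/ROOT/pve-1
# 4. debootstrap a Debian Stretch base system onto the mounted dataset
# 5. chroot in, add the PVE repository, install proxmox-ve, zfs-initramfs
#    and grub-efi-amd64, then run grub-install as shown earlier in the thread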
 
