[SOLVED] Moving to NVMe, initrd doesn't create /dev/nvme* nodes

NStorm

Running a recent & updated Proxmox 5.2. It was installed on a SATA drive, but now I want to move it to NVMe drives. The motherboard supports UEFI boot from NVMe.
So I've created a GPT partition table on the NVMe drive: a 512 MB FAT32 partition with the ESP flag, and the rest is a Linux RAID partition where the LVM volume will reside. I usually have no issues doing this with SATA drives. So I did as before:
1. Ran grub-install /dev/nvme0n1. It went fine, and efibootmgr shows the GRUB option (the EFI BIOS now sees a new 'proxmox' entry, and GRUB boots).
2. Ran update-initramfs -u -k all, which also went without any issues.
3. Ran update-grub to rebuild the GRUB config.
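
For reference, the partitioning described above was done along these lines (a rough sketch using sgdisk; the device name /dev/nvme0n1 is assumed, adjust to your drive):
Code:
# Sketch: GPT layout with an ESP and a Linux RAID partition
sgdisk --new=1:0:+512M --typecode=1:ef00 /dev/nvme0n1  # 512 MB EFI System Partition
sgdisk --new=2:0:0     --typecode=2:fd00 /dev/nvme0n1  # rest of the disk, Linux RAID
mkfs.vfat -F 32 /dev/nvme0n1p1                         # FAT32 filesystem for the ESP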

GRUB boots and launches the kernel and initrd. But the initrd scripts can't find the root partition (the pve LVM volume) and drop to a shell. And I can't see any /dev/nvme* nodes in the initramfs shell:
[Screenshot: Screenshot_20180910_112158.png — initramfs shell]

How can I fix this? I've rebooted with SystemRescueCD 5.3.0 and can chroot into my Proxmox installation; the /dev/nvme* devices are visible from there. I've tried creating a new initrd image with 'rm /boot/initrd.img-4.15.18-4-pve && update-initramfs -c -k all', but it doesn't help.
Code:
root@sysresccd /root % mkdir /mnt/tmp
root@sysresccd /root % mount /dev/pve/root /mnt/tmp
root@sysresccd /root % mount -o bind /proc /mnt/tmp/proc
root@sysresccd /root % mount -o bind /sys /mnt/tmp/sys
root@sysresccd /root % mount -o bind /dev /mnt/tmp/dev
root@sysresccd /root % SHELL=/bin/bash chroot /mnt/tmp
root@sysresccd:/# ls -l /dev/nvm*
crw------- 1 root root 240, 0 Sep 10 11:28 /dev/nvme0
brw-rw---- 1 root disk 259, 3 Sep 10 11:28 /dev/nvme0n1
brw-rw---- 1 root disk 259, 4 Sep 10 11:28 /dev/nvme0n1p1
brw-rw---- 1 root disk 259, 5 Sep 10 11:28 /dev/nvme0n1p2
crw------- 1 root root 240, 1 Sep 10 11:28 /dev/nvme1
brw-rw---- 1 root disk 259, 0 Sep 10 11:28 /dev/nvme1n1
brw-rw---- 1 root disk 259, 1 Sep 10 11:28 /dev/nvme1n1p1
brw-rw---- 1 root disk 259, 2 Sep 10 11:28 /dev/nvme1n1p2

The update-initramfs verbose log is attached.

Any suggestions?
 

Attachments

  • initrd2.log (88.5 KB)
Seems like this problem happens because udev running from the initramfs doesn't populate the /dev/nvme* devices for some reason. According to the PVE kernel config, both the nvme and NVMe block device drivers are built into the kernel.
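
One way to double-check that (a sketch; the config file name depends on the installed kernel version):
Code:
# '=y' means built into the kernel, '=m' means a loadable module
grep -E 'CONFIG_BLK_DEV_NVME|CONFIG_NVME_CORE' /boot/config-4.15.18-4-pve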
 
It may be that the rootdelay needs to be longer for the NVMe.
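
An untested sketch of how to try that, via the kernel command line in GRUB:
Code:
# /etc/default/grub — give the NVMe devices extra time before root is mounted
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
# then regenerate the GRUB config and reboot
update-grub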
 
I've managed to boot the existing installation from NVMe by changing BIOS settings. Will test a few things and post details later.
EDIT: The installer also runs Xorg fine now after changing the BIOS settings.
 
Is the NVMe available for selection as a boot device in the BIOS?


EDIT: Just saw your message. Sometimes a refresh helps ;)
 
Platform: Supermicro SSG-5029P-E1CTR12L
MB: Supermicro X11SPH-nCTF
BIOS Version: 2.0 / 2.0b
BIOS Build Time: 11/29/2017
CPU: Intel Xeon Silver 4108
RAM: 6 x 4 GB DDR4 2400MHz ECC

Symptoms: once the optional NVMe drives (including tray & OCuLink cables) are installed, Proxmox can't boot or start the installation.

Solution: Enter BIOS Setup -> Advanced -> Chipset Configuration -> North Bridge -> IIO Configuration -> Intel VMD Technology -> Intel VMD for Volume Management Device on CPU, and DISABLE "Intel VMD for Volume Management Device" option.

The current version of Proxmox isn't compatible with this option. But SystemRescueCD 5.3.0 was able to recognize the drives with this option enabled.
Sounds like a bug to me, but it could be reported upstream. Should I report this somewhere?

EDIT: The vendor has a newer BIOS version for this MB, 2.0b, but the release notes say nothing about Intel VMD. I'll try the upgrade to see if it helps. Will report back soon.
EDIT2: Upgrading to BIOS version 2.0b didn't help. The issue remains with Intel VMD enabled.
 
Intel VMD support was added sometime around kernel 4.5 and is a separate module. You need to add that module to the initrd to boot. AFAIK, it is not included on the current ISO.
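
A quick way to check whether the module ships with the installed kernel (a sketch):
Code:
# Show module info if vmd is available for this kernel
modinfo vmd
# and check whether it is currently loaded
lsmod | grep vmd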
 
You are right! I've added 'vmd' to /etc/modules and /etc/initramfs-tools/modules, rebuilt the initramfs with 'update-initramfs -u -k all', enabled VMD in the BIOS, rebooted, and Proxmox booted fine. Marking this as solved. But the vmd module should probably be enabled in future releases to avoid such unpredictable behavior on newer Intel systems.
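
For anyone hitting the same thing, the fix boils down to roughly this (a sketch; run it in the installed system or from a chroot):
Code:
# Load the vmd module at boot and include it in the initrd
echo vmd >> /etc/modules
echo vmd >> /etc/initramfs-tools/modules
# Rebuild the initramfs for all installed kernels
update-initramfs -u -k all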
 
Hi! I guess we have a similar issue to yours. Here is our screenshot:

[Screenshot: upload_2018-12-10_10-32-30.png]

Do you have a solution for our problem? If yes, please advise on how to proceed. Thank you!

P.S. I can't use any useful Linux commands. Only a few are available...
 
@Elkoss, please open up a new thread; this issue is different from the OP's. pve-root needs an fsck, no module is missing.
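
A rough sketch of what that usually looks like from the initramfs prompt, assuming the root LV is /dev/mapper/pve-root and fsck is available there:
Code:
# Check and repair the root filesystem, then continue booting
fsck -y /dev/mapper/pve-root
exit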
 
