Proxmox 5.0 and NVMe

David Hooton

Active Member
Apr 12, 2017
Hi Guys,

Do I need to install or configure anything special for Proxmox 5.0 to see a PCIe NVMe disk?

I can't see any kernel modules loaded for nvme, or the disk itself. Any hints?

Code:
root@bruce:~# modinfo nvme
modinfo: ERROR: Module nvme not found.
root@bruce:~#

Code:
root@bruce:~# lsmod | grep nvm
root@bruce:~#
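
Note: on many kernels the NVMe driver is compiled directly into the kernel (CONFIG_BLK_DEV_NVME=y) rather than built as a loadable module, in which case modinfo and lsmod will both come up empty even though NVMe support is present. A quick way to check, assuming the standard Debian location for the kernel config:

Code:
# "=y" means the driver is built in (nothing for lsmod to show);
# "=m" means it should be available as a loadable module.
grep BLK_DEV_NVME /boot/config-$(uname -r)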

Code:
root@bruce:~# pveversion -v
proxmox-ve: 5.0-16 (running kernel: 4.10.15-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.10.5-1-pve: 4.10.5-5
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.10.8-1-pve: 4.10.8-7
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.10.11-1-pve: 4.10.11-9
pve-kernel-4.10.17-1-pve: 4.10.17-16
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-14
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
openvswitch-switch: 2.7.0-2
root@bruce:~#
 
Output of ... dmesg | grep -i nvme on my system ... I've had no problems yet :)
Code:
[    1.714227] nvme nvme0: pci function 0000:05:00.0
[    1.714307] nvme nvme1: pci function 0000:06:00.0
[    1.714449] nvme nvme2: pci function 0000:07:00.0
[    1.714534] nvme nvme3: pci function 0000:08:00.0
[    1.714600] nvme nvme4: pci function 0000:09:00.0
[    1.714817] nvme nvme5: pci function 0000:0a:00.0
[    1.714882] nvme nvme6: pci function 0000:0b:00.0
[    1.830061]  nvme3n1: p1 p2
[    1.830216]  nvme2n1: p1 p2
[    1.830267]  nvme5n1: p1 p2
[    1.830404]  nvme6n1: p1 p2
[    1.830741]  nvme4n1: p1 p2
[    6.796154]  nvme1n1: p1 p2
[    6.909109]  nvme0n1: p1 p2
[   14.759710] XFS (nvme2n1p1): Mounting V5 Filesystem
[   14.769108] XFS (nvme2n1p1): Ending clean mount
[   35.336880] XFS (nvme1n1p1): Mounting V5 Filesystem
[   35.344737] XFS (nvme1n1p1): Ending clean mount
[   55.058668] XFS (nvme5n1p1): Mounting V5 Filesystem
[   55.068472] XFS (nvme5n1p1): Ending clean mount
[   75.331771] XFS (nvme0n1p1): Mounting V5 Filesystem
[   75.338547] XFS (nvme0n1p1): Ending clean mount
[   95.554044] XFS (nvme6n1p1): Mounting V5 Filesystem
[   95.561464] XFS (nvme6n1p1): Ending clean mount
[  116.229031] XFS (nvme3n1p1): Mounting V5 Filesystem
[  116.240562] XFS (nvme3n1p1): Ending clean mount
[  136.890724] XFS (nvme4n1p1): Mounting V5 Filesystem
[  136.900029] XFS (nvme4n1p1): Ending clean mount
 
Sadly mine is not so happy :(

Code:
root@bruce:~# dmesg |grep -i nvme
root@bruce:~#

Hm, it must be a BIOS issue in your case ... can you see the device(s) in the BIOS? Perhaps you need a BIOS update before NVMe disks will work?
Or have you plugged the device into the wrong PCIe slot?
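
To help rule that out without rebooting, you can check whether the controller shows up on the PCI bus at all and force a rescan. A quick sketch (the sysfs rescan path is the standard kernel interface; it assumes nothing about your particular topology):

Code:
# Does the PCI bus see an NVMe controller at all?
lspci -nn | grep -i 'non-volatile'

# Force a PCI bus rescan, then check the kernel log again:
echo 1 > /sys/bus/pci/rescan
dmesg | grep -i nvme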
 
I think it depends on the Linux kernel and whether it supports your hardware (and to what extent). Sometimes not all features of new chipsets are supported. But if Debian can see it (you can try a live DVD), Proxmox should see it too...
 
With NVMe disks, I sometimes needed to create a GPT partition table using an Ubuntu live CD first before Proxmox could create partitions.
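
A minimal sketch of doing the same from any live environment with parted (replace /dev/nvme0n1 with your actual device; this wipes the existing partition table):

Code:
# DANGER: mklabel destroys the existing partition table on the target disk.
parted --script /dev/nvme0n1 mklabel gpt
# Verify the new GPT label:
parted --script /dev/nvme0n1 print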
 
The next ISO includes some NVMe-related bug fixes.
(It will be released on Monday next week.)