Proxmox not recognizing PCIe attached NVMe disks

starportal

New Member
Aug 14, 2024
Hello,

I'm running Proxmox VE on Debian 12.6, with the kernel pinned to v6.5.13-6.

It's a custom-built machine. I have 8x NVMe disks attached by PCIe via 2x ASUS Hyper RAID cards.

I have 3x Windows 10 Pro VMs and 1x TrueNAS SCALE VM.

PCIe Bifurcation is enabled.

I am trying to create a ZFS pool within Proxmox itself from the 8x NVMe disks. However, the individual NVMe disks are not being recognized as storage devices within Proxmox.

For many months I've had these NVMe disks passed through as PCI devices to the TrueNAS VM. The TrueNAS VM was always configured with a RAIDZ pool made out of these NVMe disks.

I want to make the ZFS pool in Proxmox itself instead and get rid of TrueNAS. My intention is to make 3x virtual disks from this new ZFS pool in Proxmox and attach them directly to the Windows VMs. This way the Windows VMs have direct access to the ZFS pool, reducing latency by bypassing the TCP/IP networking stack and eliminating the SMB/NFS sharing protocols.
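For reference, the plan would look something like this from the PVE shell once the disks show up (a sketch only: the pool name nvmepool, the device list, the disk size, and VMID 101 are placeholders, not my actual setup):

Code:
```shell
# Sketch -- pool name, device list, and VMID are placeholders.
# Create a raidz pool from the NVMe disks:
zpool create nvmepool raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Register the pool as a PVE storage backend:
pvesm add zfspool nvmepool --pool nvmepool

# Attach a 100 GB zvol-backed virtual disk to a Windows VM (VMID 101):
qm set 101 --scsi1 nvmepool:100
```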

When I detach these disks from the TrueNAS VM, they are not recognized as storage devices on the host - e.g. there are no corresponding /dev/nvmeX nodes.

When choosing the ZFS pool option in the PVE GUI, it shows "No unused disks", even though I have already wiped the disks and destroyed the RAIDZ pool they belonged to in TrueNAS. So all data, including metadata, should be gone and the disks ready for use by Proxmox.



Here is my output from "lsblk | grep -i nvme":

root@pve:/home# lsblk | grep -i nvme
nvme4n1 259:3 0 3.6T 0 disk
nvme5n1 259:25 0 476.9G 0 disk
├─nvme5n1p1 259:26 0 1007K 0 part
├─nvme5n1p2 259:27 0 1G 0 part /boot/efi
└─nvme5n1p3 259:28 0 475.9G 0 part


Here is my output from "fdisk -l | grep -i nvme":

root@pve:/home# fdisk -l | grep -i nvme
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Disk /dev/nvme4n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/nvme5n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: PC SN810 NVMe WDC 512GB
/dev/nvme5n1p1 34 2047 2014 1007K BIOS boot
/dev/nvme5n1p2 2048 2099199 2097152 1G EFI System
/dev/nvme5n1p3 2099200 1000215182 998115983 475.9G Linux LVM
Partition 1 does not start on physical sector boundary.
Partition 1 does not start on physical sector boundary.
Partition 1 does not start on physical sector boundary.


Here is my output from "nvme list":

root@pve:/home# nvme list
Node Generic SN Model Namespace Usage Format FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme5n1 /dev/ng5n1 220887453304 PC SN810 NVMe WDC 512GB 1 512.11 GB / 512.11 GB 512 B + 0 B 61912524
/dev/nvme4n1 /dev/ng4n1 50026B76869CB0B9 KINGSTON SFYRD4000G 1 4.00 TB / 4.00 TB 512 B + 0 B EIFK31.6


Here is my output from "ls /dev/ | grep -i nvme":

nvme4
nvme4n1
nvme5
nvme5n1
nvme5n1p1
nvme5n1p2
nvme5n1p3
nvme-fabrics


None of the outputs above shows any of my PCIe-attached NVMe disks as storage devices.

However, when I run "lspci | grep -i nvme", it does show them as separate PCIe devices:

root@pve:/home# lspci | grep -i nvme
02:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
03:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
04:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
05:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
2c:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
2d:00.0 Non-Volatile memory controller: Sandisk Corp WD PC SN810 / Black SN850 NVMe SSD (rev 01)
61:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
62:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
63:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
64:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)


Is there something simple I'm missing here? Is this possibly due to my custom kernel (which is mandatory to make my vGPU setup work)? Is there a way to mount these PCIe NVMe disks as storage devices? I have researched endlessly but haven't found much useful information on this.

Thanks!
 
I fixed it.

I had to unbind all the NVMe drives from the vfio-pci driver and bind them to the nvme driver instead.

I ran this command to check whether they were bound to vfio-pci: lshw -class storage
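Another way to check the binding per device is to read the driver symlink in sysfs (sketch; the PCI addresses are from my lspci output above, adjust for your system):

Code:
```shell
# Print the kernel driver currently bound to each NVMe controller.
for addr in 0000:02:00.0 0000:2d:00.0; do
  echo "$addr: $(basename "$(readlink "/sys/bus/pci/devices/$addr/driver")")"
done
```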

I ran these commands to unbind from vfio-pci:

Code:
echo "0000:02:00.0" > /sys/bus/pci/devices/0000:02:00.0/driver/unbind
echo "0000:03:00.0" > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo "0000:04:00.0" > /sys/bus/pci/devices/0000:04:00.0/driver/unbind
echo "0000:05:00.0" > /sys/bus/pci/devices/0000:05:00.0/driver/unbind
echo "0000:61:00.0" > /sys/bus/pci/devices/0000:61:00.0/driver/unbind
echo "0000:62:00.0" > /sys/bus/pci/devices/0000:62:00.0/driver/unbind
echo "0000:63:00.0" > /sys/bus/pci/devices/0000:63:00.0/driver/unbind
echo "0000:64:00.0" > /sys/bus/pci/devices/0000:64:00.0/driver/unbind

I ran these commands to bind them to the nvme driver:

Code:
echo "0000:02:00.0" > /sys/bus/pci/drivers/nvme/bind
echo "0000:03:00.0" > /sys/bus/pci/drivers/nvme/bind
echo "0000:04:00.0" > /sys/bus/pci/drivers/nvme/bind
echo "0000:05:00.0" > /sys/bus/pci/drivers/nvme/bind
echo "0000:61:00.0" > /sys/bus/pci/drivers/nvme/bind
echo "0000:62:00.0" > /sys/bus/pci/drivers/nvme/bind
echo "0000:63:00.0" > /sys/bus/pci/drivers/nvme/bind
echo "0000:64:00.0" > /sys/bus/pci/drivers/nvme/bind
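The same rebind can be scripted over all the addresses. Here is a dry-run version that only prints each unbind/bind command pair instead of executing it, so nothing is written to sysfs until you remove the outer echo and run it as root:

Code:
```shell
# Dry run: print the unbind/bind command pair for each PCI address.
addrs="0000:02:00.0 0000:03:00.0 0000:04:00.0 0000:05:00.0 \
0000:61:00.0 0000:62:00.0 0000:63:00.0 0000:64:00.0"
for addr in $addrs; do
  echo "echo $addr > /sys/bus/pci/devices/$addr/driver/unbind"
  echo "echo $addr > /sys/bus/pci/drivers/nvme/bind"
done
```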

They all now appear as storage devices and are selectable for creating a ZFS pool.
 