Hello,
I'm running Proxmox (Debian version 12.6), with the kernel pinned to v6.5.13-6.
It's a custom-built machine. I have 8x NVMe disks attached by PCIe via 2x ASUS Hyper RAID cards.
I have 3x Windows 10 Pro VMs and 1x TrueNAS SCALE VM.
PCIe Bifurcation is enabled.
I am trying to create a ZFS pool within Proxmox itself from the 8x NVMe disks. However, none of these individual NVMe disks is being recognized as a storage device within Proxmox.
For many months I've had these NVMe disks passed through as PCI devices to the TrueNAS VM, which has always been configured with a RAIDZ pool made from these disks.
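For reference, the passthrough side of the TrueNAS VM config looks roughly like this (excerpt sketched from memory; the VMID 100 is just a placeholder, and the PCI addresses are the ones from the lspci output further down):

# /etc/pve/qemu-server/100.conf (excerpt; VMID is a placeholder)
hostpci0: 0000:02:00.0
hostpci1: 0000:03:00.0
# ...and so on for the rest of the passed-through NVMe controllers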
I want to make a ZFS pool in Proxmox itself instead and get rid of TrueNAS. My intention is to create 3x virtual disks from this new ZFS pool in Proxmox and attach them directly to the Windows VMs. This is so that the Windows VMs have direct access to the ZFS pool, reducing latency by bypassing the TCP/IP networking stack and eliminating SMB/NFS sharing protocols.
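For what it's worth, the end state I'm aiming for is roughly the following (pool name, disk IDs, VMID and disk size are all placeholders, not my actual values):

# create the RAIDZ pool on the host (disk IDs are placeholders, one per NVMe disk)
zpool create -o ashift=12 nvmetank raidz /dev/disk/by-id/<disk-1> /dev/disk/by-id/<disk-2> /dev/disk/by-id/<disk-3>
# register the pool as a PVE storage for VM images
pvesm add zfspool nvmetank --pool nvmetank --content images
# give a Windows VM (VMID 101 as an example) a 512G zvol-backed virtual disk
qm set 101 --scsi1 nvmetank:512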
When I detach these disks from the TrueNAS VM, there seems to be no way to get them recognized as storage devices on the host - e.g. /dev/nvmeX never appears for them.
When I choose the ZFS Pool option in the PVE GUI, it shows "No unused disks", even though I have already wiped the disks and destroyed the RAIDZ pool they belonged to in TrueNAS. So all data, including metadata, should have been wiped, leaving them ready for use by Proxmox instead.
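In case stale labels turn out to be part of the problem, my plan once the disks actually show up on the host was to clear them again from the Proxmox side with something along these lines (device name is just an example):

zpool labelclear -f /dev/nvmeXn1   # clear any leftover ZFS label
wipefs -a /dev/nvmeXn1             # remove remaining filesystem/RAID signatures
sgdisk --zap-all /dev/nvmeXn1      # wipe GPT/MBR structures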
Here is my output from "lsblk | grep -i nvme":
root@pve:/home# lsblk | grep -i nvme
nvme4n1        259:3   0   3.6T  0 disk
nvme5n1        259:25  0 476.9G  0 disk
├─nvme5n1p1    259:26  0  1007K  0 part
├─nvme5n1p2    259:27  0     1G  0 part /boot/efi
└─nvme5n1p3    259:28  0 475.9G  0 part

Here is my output from "fdisk -l | grep -i nvme":

root@pve:/home# fdisk -l | grep -i nvme
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Disk /dev/nvme4n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/nvme5n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: PC SN810 NVMe WDC 512GB
/dev/nvme5n1p1       34       2047      2014  1007K BIOS boot
/dev/nvme5n1p2     2048    2099199   2097152     1G EFI System
/dev/nvme5n1p3  2099200 1000215182 998115983 475.9G Linux LVM
Partition 1 does not start on physical sector boundary.
Partition 1 does not start on physical sector boundary.
Partition 1 does not start on physical sector boundary.

Here is my output from "nvme list":

root@pve:/home# nvme list
Node          Generic      SN                Model                    Namespace Usage                  Format       FW Rev
------------- ------------ ----------------- ------------------------ --------- ---------------------- ------------ --------
/dev/nvme5n1  /dev/ng5n1   220887453304      PC SN810 NVMe WDC 512GB  1         512.11 GB / 512.11 GB  512 B + 0 B  61912524
/dev/nvme4n1  /dev/ng4n1   50026B76869CB0B9  KINGSTON SFYRD4000G      1         4.00 TB / 4.00 TB      512 B + 0 B  EIFK31.6

Here is my output from "ls /dev/ | grep -i nvme":
nvme4
nvme4n1
nvme5
nvme5n1
nvme5n1p1
nvme5n1p2
nvme5n1p3
nvme-fabrics

None of the above outputs shows any of my PCIe-attached NVMe disks as storage devices.
However, when I run the command "lspci | grep -i nvme", it does show them being recognized as separate PCIe devices:
root@pve:/home# lspci | grep -i nvme
02:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
03:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
04:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
05:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
2c:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
2d:00.0 Non-Volatile memory controller: Sandisk Corp WD PC SN810 / Black SN850 NVMe SSD (rev 01)
61:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
62:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
63:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)
64:00.0 Non-Volatile memory controller: Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD E18 (rev 01)

Is there something simple I'm missing here? Is this possibly due to my custom kernel (which is mandatory to make my vGPU setup work)? Is there a way to get these PCIe NVMe disks recognized as storage devices? I've researched endlessly but haven't found much useful information on this.
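One thing I've started to wonder is whether these NVMe controllers are simply still bound to vfio-pci from the old passthrough setup rather than to the nvme driver, which would explain why lspci sees them but no /dev/nvmeX nodes are created. If so, I assume checking and rebinding one of them would look roughly like this (address taken from the lspci output above; I haven't verified this myself):

lspci -nnk -s 02:00.0                                      # "Kernel driver in use:" should say nvme, not vfio-pci
echo 0000:02:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind   # detach the controller from vfio-pci
echo 0000:02:00.0 > /sys/bus/pci/drivers/nvme/bind         # hand it back to the nvme driver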
Thanks!