PCIe SSD - ZFS

jgiddens

Member
Aug 24, 2021
Hey all,

I have a Dell PowerEdge R720 and I want to add some SSDs (M.2, 2.5", NVMe; I haven't decided) on PCIe adapters. I know I can't easily or natively boot from them, but is there any reason to expect Proxmox won't be able to use them? All the other drives are on an IT-mode RAID card and have ZFS deployed.

My goal is to put two adapters into two slots with an SSD in each. My server doesn't support PCIe bifurcation, so I have to use two slots. Once deployed, I would create a mirrored ZFS pool across the two devices.

Will this work?
 
It will work, just not as boot devices. That's also assuming the PCIe slot doesn't disable any slots you're already using, as it does on some boards.
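
Once both SSDs show up in Proxmox, creating the mirror is a single zpool command. This is just a sketch with placeholder device IDs; list /dev/disk/by-id/ to find your actual drives:

# Placeholder IDs below; by-id paths keep the pool stable if /dev/nvmeX names change
zpool create -o ashift=12 nvmepool mirror /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B

# Confirm both devices are in the mirror and ONLINE
zpool status nvmepool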
 
Also make sure the slot is electrically a x4 PCIe slot. A physical x4 slot will often only run electrically as x1.
And even if that slot does get 4 PCIe lanes, that doesn't mean you can use the full speed. The slot needs PCIe lanes connected directly to your CPU rather than to the mainboard's chipset, because otherwise it will easily be capped by the link between the CPU and the chipset. That link is often only 4 PCIe lanes wide, and all the SATA ports, NICs, USB ports and several PCIe slots share it. So if, for example, you have two x4 PCIe slots, both connected to the chipset, and that chipset only has 4 PCIe lanes to the CPU, those electrically x4 slots can only use 50% of the bandwidth. And if you also have heavy SATA/USB/NIC load, this can easily drop to something like 25% of the performance.
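
If you want to check what a slot actually negotiated after installing the adapters, lspci can show the slot's capability versus the current link. The PCI address below is just an example; find yours with the first command:

# Find the NVMe controllers' PCI addresses
lspci | grep -i nvme

# Compare maximum (LnkCap) vs. negotiated (LnkSta) link width and speed
lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'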
 