Upgrading Dell R720 with 2x NVMe drives that are on PCIe cards

mishanw

Feb 2, 2024
Hi Everyone.

I recently purchased 2x NVMe drives that are on PCIe cards. I was running Proxmox on an SSD that I had mounted in the Dell's DVD-drive SATA connection using a caddy. I wanted to ensure there was some redundancy on the Proxmox and system drives, so I opted for 2x PCIe cards to house 2x NVMe SSDs.

How do I go about setting up mirroring on the two SSDs and then migrating everything I currently have running?

Thank you in advance.
 
Is the installation based on LVM2 or ZFS?

With LVM2 you could create an mdadm RAID1 mirror across these NVMe devices, then add that as a PV to the VG and use pvmove to move over all the LVs. Then make the NVMe devices bootable and remove the old SSD from the VG.
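To make that concrete, a minimal sketch of those steps. The device names here are assumptions: it assumes the NVMe cards enumerate as /dev/nvme0n1 and /dev/nvme1n1, the VG has the installer-default name pve, and the old SSD's PV is /dev/sda3 (check with pvs first). These commands are destructive, so verify the devices before running anything:

```shell
# 1. Build a RAID1 mirror across the two NVMe devices (assumed names).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# 2. Turn the mirror into an LVM physical volume and add it to the VG
#    (VG name "pve" is the Proxmox installer default - confirm with vgs).
pvcreate /dev/md0
vgextend pve /dev/md0

# 3. Move all logical volumes off the old SSD's PV (assumed /dev/sda3);
#    pvmove runs online but can take a while.
pvmove /dev/sda3

# 4. Once pvmove finishes, drop the old SSD from the VG.
vgreduce pve /dev/sda3
pvremove /dev/sda3
```

Making the NVMe devices bootable is a separate step (ESP/boot partitions plus the bootloader), and as noted below in the thread, whether the R720 can boot from NVMe at all depends on the card.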
 
Is the installation based on LVM2 or ZFS?

With LVM2 you could create an mdadm RAID1 mirror across these NVMe devices, then add that as a PV to the VG and use pvmove to move over all the LVs. Then make the NVMe devices bootable and remove the old SSD from the VG.
I used the ZFS RAID1 option during setup. Not sure what LVM2 is? The install goes through; however, it doesn't boot.
 
The Dell R720 isn't able to boot from NVMe!! You need those boot files on SAS/SATA or USB...
 
The Dell R720 isn't able to boot from NVMe!! You need those boot files on SAS/SATA or USB...
So this was a bad idea? I was hoping this would provide a good redundant system boot volume, since I'm passing the HBA directly to TrueNAS. I did manage to get it to boot from one of the drives; I just can't do mirrored boot. While trying to do this, I also messed up my primary boot drive, had to reinstall Proxmox, and restored the backed-up VM and LXC. But that isn't working: I realized I had forgotten to back up the actual Proxmox node itself.

I get the following, not sure why.

Task viewer: CT 100 - Start

netdev_configure_server_veth: 662 Operation not supported - Failed to create veth pair "veth100i0" and "vethAWsqOv"
lxc_create_network_priv: 3427 Operation not supported - Failed to create network device
lxc_spawn: 1840 Failed to create the network
TASK ERROR: startup for container '100' failed


Task viewer: VM 102 - Start

TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.
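Both errors above have common causes after a reinstall: the veth failure usually means the veth kernel module (needed for container network pairs) can't be loaded, and the KVM error usually means VT-x is disabled in the BIOS or the kvm modules aren't loaded. A read-only diagnostic sketch, safe to run on the node:

```shell
# Check CPU virtualization flags (vmx = Intel VT-x, svm = AMD-V).
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "CPU virtualization extensions: present"
else
  echo "CPU virtualization extensions: MISSING - enable VT-x in BIOS (System Setup > Processor Settings)"
fi

# Check that the kvm kernel modules are loaded.
if lsmod 2>/dev/null | grep -q '^kvm'; then
  echo "kvm modules: loaded"
else
  echo "kvm modules: not loaded - try 'modprobe kvm_intel'"
fi

# Dry-run load of the veth module needed by LXC networking.
if modprobe -n veth 2>/dev/null; then
  echo "veth module: available"
else
  echo "veth module: unavailable - container networking will fail"
fi
```

If the vmx flag is missing on an R720, the usual fix is enabling Virtualization Technology under Processor Settings in the BIOS and power-cycling.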
 
For the R720XD
The Intel DC P3520 PCIe SSD card is bootable in UEFI mode. I have Proxmox 8.2.4 installed directly to it and it boots every time. Windows Server 2012 also boots off this card with no issue or special configuration. Running BIOS 2.9.0, and the card works on either the CPU 1 or CPU 2 riser. Hope this helps anyone buying old hardware who wants a fast home storage server as cheap as possible.

Also, the R720XD does NOT support PCIe bifurcation. BUT you can get quad-slot PCIe switching cards and they 100% work in Proxmox, TrueNAS Core and Scale, and Unraid. I'll link below what I bought; I have 3 working perfectly, all loaded up with 1TB Optane drives. If the link is dead, just search PLX 8747 chipset on AliExpress and you'll find at least a few people making them. Mine are by LinkReal and they work awesome. https://www.aliexpress.us/item/3256...c4MPfhy&gatewayAdapt=glo2usa&_randl_shipto=US

FYI, you NEED to have Proxmox (or whichever hypervisor) installed on a PCIe SSD if you want to IOMMU hardware-passthrough your PERC LSI controller to a VM (make sure you flash your PERC to IT mode for best results with TrueNAS Scale), because if you install on a drive that's in one of the bays, you're gonna crash your hypervisor when you start the VM with passthrough (for obvious reasons).
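Before passing the PERC through, it's worth confirming IOMMU is actually enabled and seeing which group the controller lands in. A read-only sketch, assuming the standard sysfs layout (VT-d must be on in BIOS and intel_iommu=on on the kernel command line for the groups to exist):

```shell
# If this directory is empty, IOMMU isn't active: enable VT-d in BIOS and
# add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT, then update-grub.
if [ -z "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
  echo "No IOMMU groups found - IOMMU is not enabled"
else
  # Print each PCI device address with its IOMMU group number.
  for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group="${dev%/devices/*}"
    echo "group ${group##*/}: ${dev##*/}"
  done
fi
```

The PERC's PCI address (from lspci) should appear in its own group, or at least one with nothing else you need on the host, before you pass it to TrueNAS.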

Learned all this over 3 months of trial-and-erroring different PCIe adapter cards, 6 different NVMe drives, and 3 different models of Optane drives.
 
Hello all

I just added a thread asking for guidance on my Proxmox 8.x on a Dell R720 about where to create the ZFS pool (Proxmox or TrueNAS).

I will only install TrueNAS Scale, HAOS, WireGuard, and Pi-hole.

I was thinking of adding a larger SSD to use as a boot drive.

However, I see (if I understand you all correctly) that I could add this LinkReal PCIe 3.0 x16 to quad M.2 NVMe SSD switch adapter to my Dell R720 and add 3 NVMe drives to install the VMs on, leaving the SSD for the Proxmox boot. Or add 4 NVMe drives: 2 for boot and 2 for VMs, as mirrors?

In this configuration I would have a backup of my Proxmox environment.

Would this work ?
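For the 4-drive layout, a sketch of the VM-storage half, assuming the two spare drives enumerate as /dev/nvme2n1 and /dev/nvme3n1 (the pool and storage names here are made up). The boot mirror would be selected as "ZFS (RAID1)" in the Proxmox installer, with the caveat from earlier in the thread that the R720 only boots from NVMe if the card itself is UEFI-bootable:

```shell
# Create a mirrored ZFS pool for VM disks on the two spare NVMe drives
# (device names and pool name "vmpool" are assumptions - check with lsblk).
zpool create -o ashift=12 vmpool mirror /dev/nvme2n1 /dev/nvme3n1

# Register the pool with Proxmox as storage for VM images and containers.
pvesm add zfspool vm-store --pool vmpool --content images,rootdir
```

Mirroring protects against a drive failure, but it's not a backup: you'd still want vzdump backups of the guests (and a copy of /etc/pve) somewhere off those drives.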