Dell Hardware Compatibility with Proxmox for Infrastructure Upgrade

Ali73

New Member
Aug 28, 2024
Hi there,

We are planning to upgrade our infrastructure with the latest hypervisor hardware and are considering options such as the PowerEdge R760 with external Dell storage, perhaps an upgraded version of the Dell Storage SCv3020. Our plan includes setting up two nodes at the primary site and two nodes at the secondary site to ensure continuity in case the primary site experiences significant downtime.

Can anyone confirm how to check whether a hardware configuration will be compatible with Proxmox before we buy it? We intend to purchase the hardware first and may opt for paid Proxmox support if necessary, but it's crucial for us to select the right hardware from the start.

Any assistance in confirming the compatibility of the latest hardware with Proxmox would be greatly appreciated.

Please share as many details as possible, such as which particular versions of the hardware have been tested and work without any issues. I am also curious to know whether it would make any difference to use external SAN storage, i.e. the Dell Storage SCv3020, versus the internal storage of the hosts, so that we can plan accordingly.

The workload is a mix of both Windows and Debian.

Thank you in advance for your help!
 
As Aaron mentions, Proxmox does work a little differently to what you might expect. Do note especially that iSCSI SANs will not work the way you want them to. I have spent some time on this and the hyperconverged way is what I have chosen. I have used and owned a lot of EqualLogics, SCs, Nimbles etc. (and still do).

You will want shared storage AND snapshots, and iSCSI SANs cannot be used for snapshots, i.e. checkpointing the VM for backups etc.

I am currently replacing a VMware cluster of Dell Rx20s and an iSCSI EqualLogic with R450s running Proxmox. Boot and root are on a pair of mirrored spinning SAS drives, with six SAS SSDs in each box for Ceph OSDs. We generally specify second-hand/reconditioned kit these days because you get far more bang for your buck. The cluster is n+1 and the storage is also highly available and very, very fast. I have also put eight 10 Gb/s NICs in each box; each pair uses LACP to the switches or RSTP.

The real beauty of a hyperconverged setup is that it scales horizontally. Just add another node as you need more compute and storage. You don't have to budget for lumps of SAN, although to be fair they are rather cheaper these days than they were 10 or 20 years ago.

An "old" SAN could make a very decent backup destination. Mount it with native iSCSI from your Proxmox backup server.

It took me a while to get to grips with not using RAID except for boot and root, i.e. the PVE install itself. Ceph and CephFS are rather nifty. I have just experienced an SSD failure today and an OSD vanished. No sweat. Ceph booted the OSD out and everything carried on working. I've notified the supplier and a new disc is on the way. I'll swap them over and create a new OSD on the new unit.
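The swap itself is only a few commands on PVE; a minimal sketch, assuming a hypothetical OSD ID of 12 and a placeholder replacement device (check the real ID and device on your node):

    # Mark the failed OSD out (Ceph usually does this itself after a timeout)
    ceph osd out 12

    # Stop the OSD service and remove the dead OSD from the cluster
    systemctl stop ceph-osd@12
    pveceph osd destroy 12

    # After physically swapping the disk, create a new OSD on the replacement
    pveceph osd create /dev/sdf   # placeholder device name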

Do note that you must have at least three nodes for Ceph to be suitable. However, if you drop back from Rx60 to Rx50 and drop the SANs, then you can buy a lot of SSDs. Also make sure you have decent switches. A three-node cluster can use direct NIC-to-NIC connections for Ceph to save on switch ports; LACP trunks won't get much benefit from a few streams. You will need Open vSwitch and RSTP - there's a write-up on the Proxmox wiki.
 
perhaps an upgraded version of the Dell Storage SCv3020
I believe that old Compellent stuff is EOL'd. Entry-level Dell storage now is the ME5024; the good news is that it's WAY faster, and doesn't have any unlockable features (you get it all).

As @Blueloop mentioned, before you go the iSCSI route you should familiarize yourself with its limitations in a PVE deployment: no thin provisioning, no snapshots.
LACP trunks won't get much benefit from a few streams. You will need Open vSwitch and RSTP - there's a write-up on the Proxmox wiki.
Not that this is wrong, but it is woefully limited. Architecting a performant, fault-tolerant Ceph networking environment requires some design and investment. There are many discussions on the forums, but in essence: have separate interfaces for Ceph. The lower the latency, the better. In a perfect world, you want to afford enough bandwidth on the Ceph private VLAN to account for the OSD write capability of your OSD nodes, and enough bandwidth on your public VLAN to feed all your guests. Oh, and you want that doubled across two separate switches for fault tolerance. As for OVS and RSTP (edit: don't design your network with the opportunity to "cross the streams"), neither is required, and in most cases they aren't even beneficial. A sketch of the network split is below.
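To make the "separate interfaces" point concrete, here is a minimal sketch of the initial Ceph network split on PVE, assuming hypothetical subnets 10.10.10.0/24 for the Ceph public network and 10.10.20.0/24 for the Ceph cluster (private) network:

    # Run once on the first node; the subnets below are placeholders
    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24

    # Verify the resulting settings
    grep -E 'public_network|cluster_network' /etc/pve/ceph.conf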
 
Hi there,

We are planning to upgrade our infrastructure with the latest hypervisor hardware and are considering options such as the PowerEdge R760 with external Dell storage, perhaps an upgraded version of the Dell Storage SCv3020. Our plan includes setting up two nodes at the primary site and two nodes at the secondary site to ensure continuity in case the primary site experiences significant downtime.
A current replacement for a Compellent (these are all EOL) would be a PowerStore. That money would be better invested in NVMe drives for the servers and a good network.
Can anyone confirm how to check whether a hardware configuration will be compatible with Proxmox before we buy it? We intend to purchase the hardware first and may opt for paid Proxmox support if necessary, but it's crucial for us to select the right hardware from the start.
I often have these Dell servers in projects, although I prefer the PowerEdge R7615 because it gives me more cores per CPU and higher clock speeds.
Please share as many details as possible, such as which particular versions of the hardware have been tested and work without any issues. I am also curious to know whether it would make any difference to use external SAN storage, i.e. the Dell Storage SCv3020, versus the internal storage of the hosts, so that we can plan accordingly.
Like the previous posters, I recommend an HCI solution with Ceph when buying new hardware.
Nowadays I only use NVMe disks, and for the storage network I use dedicated 100 Gbit network cards. With a two-room setup and 4 hosts, you can easily build a Ceph setup that can tolerate the loss of a complete site. But please also remember that you need a quorum, preferably in a third site. A small computer is usually sufficient for this, or if you have a third server room, you can use the backup server there as the quorum.
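If that small machine (or the backup server) is to provide the quorum vote, it would typically be attached as a QDevice; a minimal sketch, assuming a placeholder address of 192.0.2.50 for the third-site machine:

    # On the small third-site machine (Debian-based): install the QDevice daemon
    apt install corosync-qnetd

    # On every cluster node
    apt install corosync-qdevice

    # On one cluster node: register the QDevice (placeholder IP)
    pvecm qdevice setup 192.0.2.50

    # Check that the expected votes now include the QDevice
    pvecm status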
 
Hi everyone,

Thanks again for the valuable insights shared earlier. Based on your recommendations, we're leaning towards internal storage solutions over external SAN iSCSI for our upcoming Proxmox deployment. I'd like to confirm a few additional details to ensure we choose the right hardware configuration.

We aim to achieve a total of 16 TB of usable storage. Given our interest in RAID 1 for mirroring to ensure data redundancy, we understand that each server should ideally have 16 TB of raw storage, resulting in 8 TB of usable space per server after mirroring. This setup will give us 16 TB in total across both servers. Am I correct?

We're evaluating the latest Dell rack servers, comparable in capabilities and price to the PowerEdge R760 or PowerEdge R750, with each server ideally equipped with:

  • 2 physical CPUs (about 80 logical processors), preferably Intel Xeon Gold.
  • Approximately 600 GB of RAM.
  • At least 4 to 8 NICs.
  • 16 TB of raw storage to achieve 8 TB of mirrored usable storage.
Could anyone recommend specific Dell models that have been tested with the latest version of the Proxmox hypervisor and meet these specifications?

Additionally, if we subscribe to Proxmox's paid support, would they assist in setting up the hypervisor in a High Availability (HA) configuration with backup, ensuring adherence to best practices?

Your recommendations will be invaluable as we finalize our hardware selection.
Thank you in advance
 
Additionally, if we subscribe to Proxmox's paid support, would they assist in setting up the hypervisor in a High Availability (HA) configuration with backup, ensuring adherence to best practices?
Yes, you can open a ticket with a current system report for us to go through for any potential improvements or if you have questions on how to actually configure things. We will not do the configuration for you though ;)
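For reference, such a system report can be generated from a node's shell; the output path below is just an example:

    # Generate a PVE system report and save it for attaching to the ticket
    pvereport > /tmp/pvereport-$(hostname)-$(date +%F).txt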
 
Yes, you can open a ticket with a current system report for us to go through for any potential improvements or if you have questions on how to actually configure things. We will not do the configuration for you though ;)
Thanks, Aaron. Can you please also answer my other questions? Just making sure we buy the right hardware from the start.
 
We aim to achieve a total of 16 TB of usable storage. Given our interest in RAID 1 for mirroring to ensure data redundancy, we understand that each server should ideally have 16 TB of raw storage
Assist in setting up the hypervisor in a High Availability (HA) configuration
Understand that these ideas are mutually exclusive. A cluster requires shared storage to be effective. The reason I and others suggested you be aware of the limitations of iSCSI-based shared storage is NOT that shared storage is not required, but rather that this particular method has limitations.

I gather from your questions that you're not familiar with Ceph, and that the discussion so far was not resonating with its purpose. Have a look here: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster which may help explain. A rough sketch of the setup flow follows.
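Purely as orientation for what that article walks through (not a substitute for reading it), a hyper-converged Ceph setup on PVE boils down to a handful of steps; the subnets, device and pool names below are placeholders:

    # On every node: install the Ceph packages
    pveceph install

    # On the first node: initialise Ceph with your storage networks
    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24

    # On (at least) three nodes: create monitors and managers
    pveceph mon create
    pveceph mgr create

    # On each node: create OSDs on the raw NVMe/SSD devices (placeholder device)
    pveceph osd create /dev/nvme0n1

    # Create a replicated pool for VM disks and add it as PVE storage
    pveceph pool create vmdata --size 3 --min_size 2 --add_storages 1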
 
We aim to achieve a total of 16 TB of usable storage. Given our interest in RAID 1 for mirroring to ensure data redundancy, we understand that each server should ideally have 16 TB of raw storage, resulting in 8 TB of usable space per server after mirroring. This setup will give us 16 TB in total across both servers. Am I correct?

We're evaluating the latest Dell rack servers, comparable in capabilities and price to the PowerEdge R760 or PowerEdge R750, with each server ideally equipped with:

  • 2 physical CPUs (about 80 logical processors), preferably Intel Xeon Gold.
  • Approximately 600 GB of RAM.
  • At least 4 to 8 NICs.
  • 16 TB of raw storage to achieve 8 TB of mirrored usable storage.
Could anyone recommend specific Dell models that have been tested with the latest version of the Proxmox hypervisor and meet these specifications?
There is still a lot of room for improvement here.
If you have read the wiki, you will know that you need at least 3 nodes.
You can also use this calculator to estimate the usable storage capacity:
https://florian.ca/ceph-calculator/
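To give a feel for the arithmetic behind such calculators (the drive counts and sizes below are purely illustrative, assuming 3-way replication and staying under roughly 80% utilisation):

    # 4 nodes x 4 x 3.84 TB NVMe = 61.44 TB raw
    # With replication size=3, usable is roughly raw / 3, and you want to stay
    # well under full - an ~80% fill target is a common rule of thumb.
    echo "61.44 / 3 * 0.8" | bc -l    # ~16.4 TB of comfortably usable capacity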
I recommend single-socket servers (AMD), as you don't have to worry about NUMA during configuration.
With Ceph, please do not skimp on the network; 10 Gbit is not fast. Use at least 25 Gbit, better 100 Gbit, for Ceph. 40 Gbit is worse in terms of latency than 25 Gbit, which many people do not take into consideration.

But my most important recommendation is to look for a service provider who has already done such setups several times or attend the Proxmox training courses before ordering.
 
We aim to achieve a total of 16 TB of usable storage. Given our interest in RAID 1 for mirroring to ensure data redundancy, we understand that each server should ideally have 16 TB of raw storage, resulting in 8 TB of usable space per server after mirroring. This setup will give us 16 TB in total across both servers.
Distributed Storage, such as Ceph, has data protection built-in. You do not need to feed it an already protected disk pair.
Since your capacity requirements are very modest, your best bet is likely Ceph.
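In other words, the redundancy lives at the pool level rather than in a per-server disk pair; a minimal sketch for inspecting it, assuming a hypothetical pool named vmdata:

    # Inspect the replication settings of an existing pool (placeholder name)
    ceph osd pool get vmdata size        # number of copies kept, typically 3
    ceph osd pool get vmdata min_size    # copies required to keep serving I/O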

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
