Proxmox installation - hard disks and PowerEdge R620

Zilla · Jan 31, 2024
Hello all. Looking for some thoughts.

We're currently running Hyper-V (the free version) on a few PowerEdge R620s. We have a spare, and I'd like to experiment with Proxmox and see if it's a good fit for our org. I'm "maybe" thinking of moving away from Hyper-V because Microsoft is retiring the free version, plus some other costs (namely Veeam), and managing Hyper-V without a domain has not been fun.

I want to see whether Proxmox (and maybe ZFS?) would be a good fit for us. But from what I can tell out of the gate, we'd have to rework our servers somewhat. They're all filled with 900 GB enterprise Dell HDDs. I've always created one big RAID 10, partitioned off 100 GB for the OS, and left the rest for VMs. I did something similar with ESXi when we ran that. But that doesn't seem to be a recommended, or even possible, path for Proxmox.

Everything I've seen suggests keeping the OS on a pair of mirrored SSDs, with the remaining disks for VMs (SSDs or HDDs, depending). I'm not sure I really want to take the plunge of removing two disks (thus losing some space) and replacing them with SSDs. And if I did, I assume they would have to be enterprise drives? (I've read Proxmox and ZFS will kill consumer drives.)

I already flashed our PERC controller to get rid of hardware RAID, so that isn't an issue. But the disk config is. What have others done?
 
It's considered best practice to mirror the OS onto a pair of small drives, but with an 8-bay R620, yeah, you lose two drive bays. I previously used ZFS RAIDZ2 (the RAID-6 equivalent: you still give up two drives' worth of space, but any two drives can fail before data is lost). This time I went ahead and mirrored the OS and set up the rest as striped RAIDZ, a RAID-50-style layout; see the sketch below.
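For anyone curious what that looks like on the command line, here's a rough sketch of a RAID-50-style ZFS layout, i.e. two RAIDZ1 vdevs striped into one pool. The pool name and device IDs are placeholders, not taken from the setup above:

Code:
# Two 3-disk RAIDZ1 vdevs striped into one pool ("RAID-50"-style).
# Use the stable /dev/disk/by-id names of your own drives.
zpool create -o ashift=12 tank \
  raidz1 /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 /dev/disk/by-id/scsi-DISK3 \
  raidz1 /dev/disk/by-id/scsi-DISK4 /dev/disk/by-id/scsi-DISK5 /dev/disk/by-id/scsi-DISK6
zpool status tank    # shows both raidz1 vdevs striped under the same pool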


One last option is to use a Dell BOSS-S1 or equivalent. You do have to set up the mirroring with a command-line utility, but Proxmox does see the S1 storage. Since the S1 was already mirroring two SSDs, I installed Proxmox on it with XFS and used ZFS for the rest of the drives. This was on another server.
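For reference, a minimal sketch of that kind of split: PVE installed on the boot mirror, then the data drives pooled with ZFS and registered as storage. Pool name, storage ID and device IDs are placeholders:

Code:
# Build a mirrored ZFS pool from the data drives (striped mirrors = RAID 10).
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 \
  mirror /dev/disk/by-id/scsi-DISK3 /dev/disk/by-id/scsi-DISK4
# Register the pool as VM/container storage in Proxmox VE.
pvesm add zfspool tank -pool tank -content images,rootdir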
 
Thanks for the reply. So, if I bit the bullet and replaced two disks with Samsung PM893s (enterprise drives), and left the remaining SAS drives for VMs, all would be well? The VMs do run a few databases (Xibo, a UniFi controller, Zabbix, etc.).

From what I've read, if I "did" want to go all-SSD, just use enterprise drives?
 
One last option is to use a Dell BOSS-S1 or equivalent.
But they weren't supported in the R620 / R630, or am I wrong?
So, if I bit the bullet and replaced two disks with Samsung PM893s (enterprise drives), and left the remaining SAS drives for VMs, all would be well?
That can't be answered in absolute terms. Basically, yes, it works and doesn't cause any problems. But you could also create a RAID 10 across all of them and put PVE directly on that RAID.
Personally, I would still separate PVE from the data disks, because then you can usually just move the OS disks to another server and start Proxmox VE again. If you mix everything together, you may have to reinstall.
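For context, if you let the installer build that all-disk ZFS RAID 10, the OS and the guest data simply end up as datasets on the same pool. Roughly like this (illustrative output; dataset names as created by a default PVE ZFS install):

Code:
zfs list -o name,mountpoint
# NAME               MOUNTPOINT
# rpool              /rpool
# rpool/ROOT         /rpool/ROOT
# rpool/ROOT/pve-1   /            <- the Proxmox VE OS itself
# rpool/data         /rpool/data  <- backs the default "local-zfs" VM storage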
But I wouldn't use SAS disks anymore today, and apart from that, I wouldn't run Proxmox without Ceph or other central storage.

A lot depends on your requirements and expectations.
From what I've read, if I "did" want to go all-SSD, just use enterprise drives?
If you want to use ZFS or Ceph, then absolutely. Enterprise SSDs sometimes don't cost that much more than consumer SSDs, yet they often have double or triple the TBW. In some cases you pay a quarter more and get two-thirds more lifetime in return.
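As a rough, back-of-the-envelope illustration of the TBW gap (the capacity and DWPD figures here are only examples, not quotes from any datasheet):

Code:
# TBW ≈ capacity_TB × DWPD × 365 × warranty_years
echo '1.92 * 1.00 * 365 * 5' | bc   # ≈ 3504 TBW for a 1.92 TB drive rated at 1 DWPD over 5 years
echo '2.00 * 0.33 * 365 * 5' | bc   # ≈ 1205 TBW for a 2 TB consumer drive at roughly 0.33 DWPD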
 
That can't be answered in absolute terms. Basically, yes, it works and doesn't cause any problems. But you could also create a RAID 10 across all of them and put PVE directly on that RAID.
Yeah, this is what I'm used to coming from Hyper-V: one big RAID and just a small OS partition. I can see the benefits of separating out PVE, though.
But I wouldn't use SAS disks anymore today
Why is that? Just due to SSDs finally taking over? Just wondering as all the refurbs I get tend to already have SAS in them.
 
Why is that? Just due to SSDs finally taking over?
I run several servers in a data center, and I have to pay for every single watt of electricity. I want the hardware to run as efficiently as possible, and by that I also mean getting as much performance as possible for low power consumption. Using SSDs is already a no-brainer: they use less power, are often not that much more expensive to buy, and deliver significantly more performance than SAS disks.
I also no longer use hardware RAID; I replace that dependency with ZFS or Ceph, so I can simply throw away a server and put the disks in the next one. I've done that several times so far.
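That disk-portability point really just comes down to ZFS export/import; a minimal sketch, with the pool name as a placeholder:

Code:
# On the old server (if it still boots): cleanly release the pool.
zpool export tank
# On the new server: scan the attached disks for importable pools, then import by name.
zpool import
zpool import tank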

And before the question comes, even my backups are on SSDs.

But like I said, it has to work for you. It's your infrastructure; you may be happy with it, or you may not need more performance, and then that's absolutely fine. What you're running is by no means junk, it's just out of proportion for my purposes.

But to also answer the question of whether SSDs have taken over: I'd say the classic SSD is actually already on its way out again. There are now so many variations of flash storage, especially in the enterprise space. In some cases you can fit 500 TB or more into a single rack unit; you can't achieve that density with either HDDs or standard SATA SSDs.
However, this view comes more from the enterprise environment than the consumer sector. As a consumer you may have just reached 10 GbE, while data centers are already thinking about replacing 400 GbE with 800 GbE.
 
I also no longer use hardware RAID; I replace that dependency with ZFS or Ceph, so I can simply throw away a server and put the disks in the next one. I've done that several times so far.
This is one of the reasons I'd like to start getting into ZFS and move away from hardware RAID. I don't like being tied to a RAID controller.
 
But they weren't supported in the R620 / R630, or am I wrong?
Dell BOSS-S1 cards are technically not supported on 13th-gen Dells, but they do work. This server was previously an ESXi backup host, and ESXi recognized the card during install. It also shows up as an install target during Proxmox installation.

The server it's installed in is a 2U LFF Proxmox Backup Server. I didn't want to lose two LFF drive bays just to mirror the OS.

I did make sure to update the firmware on both the card and the drives before putting it into production.

Never used a BOSS-S1 on 12th-gen Dells.
 
This is one of the reasons I'd like to start getting into ZFS and move away from hardware RAID. I don't like being tied to a RAID controller.
If you're going to use either ZFS or Ceph, you need an IT-mode disk controller. 12th-gen Dell PERC controllers can be flashed to IT mode with the guide at https://fohdeesha.com/docs/perc.html

I've converted a fleet of 12th-gen Dells to IT mode, all running either Ceph or ZFS.
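A quick way to sanity-check the result after crossflashing (the fohdeesha guide flashes the PERC to LSI IT firmware; sas2flash is the LSI utility, and the exact output will vary by controller):

Code:
# List controllers and their firmware; an IT-mode flash should report "IT" firmware
# on the SAS2008/SAS2308 chip instead of the Dell PERC RAID personality.
sas2flash -listall
# Disks should now be passed straight through to the OS as plain block devices.
lsblk -o NAME,MODEL,SERIAL,SIZE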
 
Do you use all the internal PCIe slots on your R620? You could add a small mirrored M.2 device there to install PVE. 12th gen doesn't have bifurcation, but there is some limited NVMe support.

An 8-bay, as opposed to a 10-bay, R620 also implies there's an optical bay? You could just tape a small SATA SSD into the optical slot.
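If you start with a single small boot SSD and add a second one later, the rpool can still be turned into a mirror afterwards; a rough sketch along the lines of the procedure in the PVE admin guide (device names are placeholders, and the -part numbers assume the default installer partition layout):

Code:
# Copy the partition layout from the existing boot SSD to the new one, then randomize GUIDs.
sgdisk /dev/disk/by-id/ata-SSD1 -R /dev/disk/by-id/ata-SSD2
sgdisk -G /dev/disk/by-id/ata-SSD2
# Attach the new ZFS partition as a mirror of the existing one.
zpool attach rpool /dev/disk/by-id/ata-SSD1-part3 /dev/disk/by-id/ata-SSD2-part3
# Make the new disk bootable as well.
proxmox-boot-tool format /dev/disk/by-id/ata-SSD2-part2
proxmox-boot-tool init /dev/disk/by-id/ata-SSD2-part2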
 
