Configuration Suggestion for Proxmox VE and Ceph

brian.vanasten

New Member
Jan 23, 2025
Good Morning,

I'm also new to Proxmox. We have 9 HPE Gen11 servers, each with four 6.4 TB SSDs, for a total of roughly 24 TB of local space per server. We are going to set up a cluster with Ceph and will let Ceph manage the storage.

The question I have is: do I set up any kind of RAID when I initially install a Proxmox VE node, or do I add the disks after installation? Also, do I use any RAID on the disks, or ext4, xfs, or any of the ZFS RAID options? I understand there are a lot of different variables, but is there a standard way of setting up each node for Ceph?

The Proxmox environment is not in production yet. I only have 3 of the servers set up with Proxmox VE so far, and I was about to create the cluster, but losing half of the drive space by selecting ZFS RAID10 may not be the best option if I'm going to use Ceph to manage the storage.

Any insight into this would be greatly appreciated.

Thank you in advance for assistance with this issue.

Brian
 
The question I have is: do I set up any kind of RAID when I initially install a Proxmox VE node, or do I add the disks after installation?
Ceph wants direct access to the disks. If you do any RAID configuration, it should be ONLY for your boot device.
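
For example, before creating any OSDs it's worth a quick sanity check that the controller is presenting the four SSDs as plain individual devices (HBA/IT mode) rather than as logical RAID volumes. Device names below are just examples:
Code:
# each 6.4 TB SSD should show up as its own device, not as one big logical volume
lsblk -o NAME,SIZE,MODEL,SERIAL

# the storage controller should appear as an HBA/SAS controller, not a RAID controller
lspci | grep -iE 'raid|sas'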

but is there a standard way of setting up each node for Ceph?
Your OSD on-disk format will be managed by the Ceph toolset. For the purposes of your question, you will be presenting unformatted raw devices to the Ceph tools.
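
Something like this, assuming /dev/sdb is one of the 6.4 TB SSDs (placeholder name); repeat per disk:
Code:
# wipe any leftover partition/filesystem signatures (destroys everything on the disk)
ceph-volume lvm zap /dev/sdb --destroy

# hand the raw device to the Proxmox Ceph tooling, which creates the OSD on it
pveceph osd create /dev/sdb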

The Proxmox environment is not in production yet. I only have 3 of the servers set up with Proxmox VE so far, and I was about to create the cluster, but losing half of the drive space by selecting ZFS RAID10 may not be the best option if I'm going to use Ceph to manage the storage.
In light of the above, keep in mind that under a normal Ceph deployment for virtualization you will end up with about 33% storage utilization (usable vs. raw), so you should temper your expectations accordingly.
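
To put rough numbers on that for your hardware: 9 nodes × 4 × 6.4 TB ≈ 230 TB raw; with the default replication size of 3 that leaves roughly 77 TB usable, and in practice you plan for less, since you want to stay well under Ceph's default nearfull warning level (~85%) and keep enough free space to re-replicate after a node failure.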
 
Hi Alexskysilk,

Are you saying that with the Ceph deployment, we will want to keep those disks as raw storage instead of using any Proxmox RAID?
 
There is no such thing as "Proxmox RAID." It's probably a good idea for you to study and understand what Ceph is and how it works if you intend to deploy and support it.

"Proxmox RAID" is probably a bad term, so I guess what I should've said was software-level RAID within Proxmox VE. I've attached a picture. In Proxmox, what's that called? I come from a Windows/VMware background, so I apologize if I don't have the terminology and concepts down yet. I will also do some more reading about Ceph.
 

Attachments

  • Proxmox Software RAID.png
The screen you attached is only for the BOOT DEVICE. Honestly, there are only two options that matter here:

ext4 if your boot device is hardware RAID, and ZFS RAID1 if it's not. And for the love of all that's good, UNCHECK all disks before selecting the one(s) you will end up installing the OS to (installer pet peeve).

Since you're coming from a VMware background: Ceph = vSAN.
 
Yep, based on the responses, we are going to add two smaller disks and hardware-RAID them for the OS, then add the 6.4 TB drives after installation. We will then set up Ceph and make sure those drives are available to it.
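
For my own notes, this is the rough sequence I'm expecting to follow based on the pvecm/pveceph docs (cluster name, network, and device names below are just placeholders):
Code:
# create the cluster on the first node, then join the others to it
pvecm create pve-cluster                # run once, on the first node
pvecm add <ip-of-first-node>            # run on each additional node

# install Ceph packages on every node, then initialize the config once
pveceph install
pveceph init --network 10.10.10.0/24    # dedicated Ceph network; placeholder subnet

# create monitors and managers on (at least) three nodes
pveceph mon create
pveceph mgr create

# on every node, turn each raw 6.4 TB SSD into an OSD (placeholder device names)
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
pveceph osd create /dev/sde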

Thanks for helping me work through how to get Proxmox installed, and for the Ceph = vSAN comparison. That makes sense. Thanks again!!
 
Not too relevant for a boot device ;) I typically use a ZFS mirror for boot, but only out of convenience, not because it serves any particular benefit.
One sure benefit is that in case of a broken disk you don't need to reinstall and reconfigure the Proxmox VE host operating system. Whether this is actually a problem of course depends on how much you have customized. And zfs send/receive is quite handy for a backup of the host system.
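
A minimal sketch of what I mean, assuming the default ZFS installation with the root pool named rpool; the backup host and target pool names are placeholders:
Code:
# recursive snapshot of the whole root pool
zfs snapshot -r rpool@pve-backup-2025-01-23

# stream the snapshot tree to another machine (backuppool must already exist there)
zfs send -R rpool@pve-backup-2025-01-23 | ssh backup-host zfs receive backuppool/pve-node1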
 
Hardware RAID 1 accomplishes this just fine, with the added benefit of much better "rescue-ability" of an ext root filesystem.

And zfs send/receive is quite handy for a backup of the host system.
When operating a cluster "for real" this is never required or even desired; your hosts are cattle, not pets. Couple those things together and the mirrored boot is almost unnecessary, but it does mean a single disk fault won't force you to deal with a failed host :)
 