Design Options for 3 Dell Servers

QonaQona

New Member
Feb 10, 2024
Hi,

I'm currently in the midst of exploring Proxmox to replace VMware, and I'm quite new to VE technology.

For this setup I have:
  • 3 x Dell PowerEdge R620
  • Each server has:
    • 2 x 200 GB SSD
    • 4 x 1 TB SAS
    • 12 cores
    • 9x GB RAM (I forgot the exact value)
The servers were factory reset, and I only disabled the integrated SD slot in the BIOS boot settings.
  • I chose ZFS RAID10 for the Proxmox VE installation and was left with 2.2 TB of available storage for VM usage on each server.
  • Cluster created.
I tried to install Ceph but was hit with a "not supported" error because the servers use a RAID controller. Is this setup good, or is there any advice worth exploring with the current hardware I have? I'm open to reinstalling the servers with a different configuration.

These servers are for internal use: maybe a few Windows Server VMs for AD and some Unix VMs for developer usage.
 
I tried to install Ceph but was hit with a "not supported" error because the servers use a RAID controller.
It works wonderfully [0]; just follow the instructions and the RAID controller will be in HBA mode. I also recommend flashing the BIOS so that you can set the two 200 GB SSDs as the boot (ALT + B) and alternative boot (ALT + A) devices.

But don't expect a performance miracle from 12 SAS hard drives with Ceph; you definitely won't get that. I reckon you'll see significantly less than 1,000 IOPS with this setup. Especially with Windows, you won't have any fun here.

[0] https://fohdeesha.com/docs/perc.html
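
Once the controller is crossflashed, a quick way to sanity-check that the disks are really passed through before creating OSDs could look like this (just an illustrative sketch; exact output will vary per node):

Code:
# disks should appear as individual block devices, not as one RAID volume
lsblk -o NAME,SIZE,MODEL,ROTA
# the controller should now identify as an IT-mode/HBA SAS controller
lspci | grep -i -e lsi -e sas
# Ceph's own view of which devices are usable as OSDs (after the Ceph packages are installed)
ceph-volume inventory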
 
I chose ZFS RAID10 for the Proxmox VE installation and was left with 2.2 TB of available storage for VM usage on each server.
So you created a 6-disk raid10 mixing SSDs and HDDs? That means the HDDs will slow down the SSDs. It would be better to install PVE on a 4-disk raid10 using only the HDDs, and later either create a separate 2-disk raid1 pool with the SSDs as faster VM storage or add them as "special" vdevs to that raid10 pool.
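
As a rough sketch of those two options (the pool name "ssdpool", the device paths, and the default installer pool name "rpool" are assumptions; use your actual /dev/disk/by-id paths):

Code:
# Option A: separate SSD mirror as its own fast VM pool
zpool create -o ashift=12 ssdpool mirror /dev/disk/by-id/<ssd1> /dev/disk/by-id/<ssd2>
# Option B: attach the SSDs as a mirrored "special" vdev to the existing HDD pool
# (metadata, and optionally small blocks, then land on the SSDs)
zpool add rpool special mirror /dev/disk/by-id/<ssd1> /dev/disk/by-id/<ssd2>

Note that a special vdev holds the pool's metadata, so it should always be mirrored.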
 
It works wonderfully [0]; just follow the instructions and the RAID controller will be in HBA mode. I also recommend flashing the BIOS so that you can set the two 200 GB SSDs as the boot (ALT + B) and alternative boot (ALT + A) devices.

But don't expect a performance miracle from 12 SAS hard drives with Ceph; you definitely won't get that. I reckon you'll see significantly less than 1,000 IOPS with this setup. Especially with Windows, you won't have any fun here.

[0] https://fohdeesha.com/docs/perc.html

Thanks for the suggestion. After some trial and error, it seems that because I installed PVE on all of the disks during the initial raid10 PVE installation, Ceph was unable to locate the disks during OSD creation.
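
In case someone else runs into the same thing: disks that were part of a previous installation usually still carry old partition/ZFS signatures and have to be wiped before Ceph will offer them for OSD creation. A rough sketch of what that looks like on the CLI (device names are placeholders; double-check before wiping):

Code:
# remove leftover partition tables / ZFS labels from a disk used by the old install
wipefs -a /dev/sdX
ceph-volume lvm zap /dev/sdX --destroy
# then create the OSD (this can also be done in the GUI under Ceph -> OSD)
pveceph osd create /dev/sdX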

But I have one question: I assumed that after I create OSDs for all the HDDs I have in the nodes, I should have 12 TB of available space that I can use for VMs:

(screenshots: 1707822826106.png, 1707823094913.png)

But during VM creation I only have 3.8 TB available for VMs:

(screenshot: 1707822990652.png)

Sorry if I've misunderstood the concept of Ceph; I'm quite new to VE.


So you created a 6-disk raid10 mixing SSDs and HDDs? That means the HDDs will slow down the SSDs. It would be better to install PVE on a 4-disk raid10 using only the HDDs, and later either create a separate 2-disk raid1 pool with the SSDs as faster VM storage or add them as "special" vdevs to that raid10 pool.

Thanks. I noticed the slowdown, so I chose to do raid1 with the 2 SSDs and install PVE on it. For the rest of the HDDs I just created OSDs for Ceph.

Will this cause an issue later on if an HDD fails in one of the nodes?
 
Sorry if I've misunderstood the concept of Ceph; I'm quite new to VE.
Proxmox VE and Ceph are two separate things; Ceph is just natively integrated.

Ceph distributes your data across the servers with replica 3. For this reason, you can effectively use only 1/3 of the raw storage space.
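
As a rough back-of-the-envelope check for this setup (assuming 12 OSDs of roughly 1 TB each; the pool name below is a placeholder):

Code:
# raw capacity: 12 OSDs x ~1 TB = ~12 TB
# usable with size=3 replication: ~12 TB / 3 = ~4 TB
# minus Ceph overhead and the full/nearfull ratios, which is roughly
# the ~3.8 TB shown in the VM creation dialog
ceph df
ceph osd pool get <poolname> size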
 
Proxmox VE and Ceph are two separate things; Ceph is just natively integrated.

Ceph distributes your data across the servers with replica 3. For this reason, you can effectively use only 1/3 of the raw storage space.

Thanks. How about CPU and RAM: are they pooled across the cluster, or still separate for each of the nodes?
 
It is technically possible to use CPU and RAM from another host, but not with the usual, well-known server hardware. The Ceph resources also stay on each node; a hard drive doesn't float in thin air, it belongs to a node. Only the management of the drives integrated into Ceph is handled cluster-wide.
 
You won't have a lot of fun with an HDD-based Ceph pool, as performance is really bad.
The minimum should be SATA SSDs; NVMe would be best. Also make sure you have at least 10, better 25-100 Gbit/s, networking for Ceph.
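
If you want to put a number on the pool's performance, a quick benchmark from one node could look like this (pool name is a placeholder; run it against a test pool or outside production hours):

Code:
# 30-second write benchmark, keeping the objects for the read test
rados bench -p <poolname> 30 write --no-cleanup
# random read benchmark against the objects written above
rados bench -p <poolname> 30 rand
# remove the benchmark objects afterwards
rados -p <poolname> cleanup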
 
It is technically possible to use CPU and RAM from another host, but not with the usual, well-known server hardware. The Ceph resources also stay on each node; a hard drive doesn't float in thin air, it belongs to a node. Only the management of the drives integrated into Ceph is handled cluster-wide.
Thank you for the explanation.

You won't have a lot of fun with an HDD-based Ceph pool, as performance is really bad.
The minimum should be SATA SSDs; NVMe would be best. Also make sure you have at least 10, better 25-100 Gbit/s, networking for Ceph.

I feel it too. Thanks for the suggestion to use SATA SSDs as a bare minimum for good performance with Ceph; I will take note of that.
 
