Ceph 3-node cluster: available data size?

atec666
Mar 8, 2019
Hello,

I plan to install a 3-node Proxmox cluster,
with 1 SSD (120 GB) and 2 HDDs (500 GB) per node.

I want the data (the 2 HDDs) to stay available if I lose one node ...
Which technology is better for me:
  • Ceph RBD (I tested it, but I do not understand the data size calculation and how it works), or
  • GlusterFS?

In both cases, what is the best configuration, and what is the available data size with this setup?
 
One OSD = 500 GB, so 2 OSDs x 3 nodes = 6 OSDs (whole cluster) = 3000 GB raw.

=== size 3 ===
500 * 0.8 = 400 GB (80% fill per OSD)
400 * 6 = 2400 GB (across all 6 OSDs)
2400 / 3 = 800 GB (space after 3x replication)
800 / 3 = 266 GB (space per host)
266 * 2 = 533 GB (data that must be moved after one node fails)
800 - 533 = 266 GB (usable space)
266 * 0.8 = 213 GB (80% of usable space, keeping 20% for growth while degraded)
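To make the arithmetic easier to follow, here is a minimal sketch of the same rule-of-thumb calculation in Python. The 80% fill factor and the degraded-state reserve are assumptions carried over from the steps above, not an official Ceph formula:

```python
# Rough usable-capacity estimate for a 3-node Ceph cluster with size=3,
# mirroring the rule-of-thumb calculation above.
osd_size_gb = 500      # one HDD per OSD
osds_per_node = 2
nodes = 3
replicas = 3           # pool size = 3
fill_factor = 0.8      # keep OSDs below ~80% full

raw = osd_size_gb * osds_per_node * nodes       # 3000 GB raw
usable = raw * fill_factor / replicas           # 800 GB after replication
per_host = usable / nodes                       # ~266 GB per host
to_move = per_host * (nodes - 1)                # ~533 GB to re-replicate on node loss
safe = (usable - to_move) * fill_factor         # ~213 GB with growth reserve

print(f"raw={raw} GB, after replication={usable:.0f} GB, safe={safe:.0f} GB")
```

The large gap between raw and usable space comes from paying for three replicas and then reserving enough headroom to re-replicate a failed node's data while staying under the fill limit.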

WHAT?????
I have 2 x 500 GB x 3 nodes = 3000 GB, and ...
only 213 GB of free space?????

Is this a joke?
 
I think I will do this setup:
2 SSDs 120 GB in RAID1 (mdadm) ==> Proxmox + ISOs and templates.
2 HDDs 500 GB in RAID1 too (mdadm is my friend), as LVM-Thin ==> VMs + LXC.

With 3 nodes ==> 1500 GB of free space, and good hardware fault tolerance.
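For comparison, a minimal sketch of the capacity math behind this local-storage plan, assuming RAID1 simply halves each node's raw HDD capacity. Note that, unlike Ceph, this space is local to each node, so a failed node takes its VMs offline until they are restored from backup:

```python
# Capacity of the local mdadm RAID1 + LVM-Thin plan described above.
hdd_size_gb = 500
hdds_per_node = 2
nodes = 3

per_node = hdd_size_gb * hdds_per_node / 2   # RAID1 mirrors the pair -> 500 GB
total = per_node * nodes                     # 1500 GB across the cluster
print(f"{per_node:.0f} GB per node, {total:.0f} GB total")
```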
 
How well did this setup work?
 
Well, we are hosting a Nextcloud instance, webmail, a mailer, DNS ... We just upgraded all 3 of our nodes with 2 TB drives: the migration was very, very easy!
For 12 TB raw we have 4 TB available, because we chose (advice taken here) 3/2 for Ceph.
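Here 3/2 refers to the Ceph pool's size/min_size: three replicas of every object, with I/O continuing as long as at least two are available. A quick sanity check of the quoted numbers, assuming the same two OSDs per node as the original setup, now 2 TB each:

```python
# size/min_size = 3/2: every object is stored 3 times; writes continue
# while at least 2 copies remain reachable.
raw_tb = 2 * 2 * 3          # 2 TB drives x 2 per node x 3 nodes = 12 TB raw
replicas = 3
usable_tb = raw_tb / replicas
print(f"{raw_tb} TB raw -> {usable_tb:.0f} TB usable, before fill-level headroom")
```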
 
How well does live migration and high availability between the nodes work?
 
Fast: 1 to 3 seconds.
Designing a cluster must be done according to the state of the art:
- 1 NIC for corosync,
- 1 NIC for Ceph,
- 1 NIC for client traffic.
(If you're rich, one more NIC for a backup network, so you can back up during production hours.)
 
