Ceph 3-node cluster: how much data space is available?

atec666
Member · Mar 8, 2019 · Issoire
Hello,

I plan to install a 3-node Proxmox cluster,
with 1 SSD (120 GB) and 2 HDDs (500 GB) per node.

I want the data (the 2x HDDs) to stay available if I lose one node ...
Which technology is better for me:
  • Ceph RBD (I tested it, but I don't understand how the data size is calculated and how it works), or
  • GlusterFS?

In both cases, what is the best configuration, and how much data space is available with this setup?
 

atec666
One OSD = 500 GB, so 2 OSDs x 3 nodes = 6 OSDs (total cluster) = 3000 GB.

=== size 3 ===
500 * 0.8 = 400 GB
400 * 6 = 2400 GB (80% fill of OSDs)
2400 / 3 = 800 GB (space after replication)
800 / 3 = 266 GB (space per host)
266 * 2 = 533 GB (data that needs to be moved after 1 node fails)
800 - 533 = 266 GB (usable space)
266 * 0.8 = 213 GB (80% of usable space, 20% for growth in degraded state)

WHAT ?????
If I have 2 x 500 GB x 3 nodes = 3000 GB and ...
only 213 GB of free space ?????

Is this a joke?
 

atec666
I think I will do this setup:
2x SSD 120 GB in RAID1 (mdadm) ==> Proxmox + ISOs and templates.
2x HDD 500 GB in RAID1 too (mdadm is my friend), as LVM-thin ==> VMs + LXC.

With 3 nodes ==> 1500 GB of free space, and good hardware fault tolerance.
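A minimal provisioning sketch of that layout. The device names (/dev/sda through /dev/sdd) and the storage/VG names are hypothetical, adjust them to your hardware; the last line assumes Proxmox's standard lvmthin storage type.

```shell
# Mirror the two SSDs for the Proxmox install, ISOs and templates
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Mirror the two HDDs and build an LVM-thin pool on top for VMs + LXC
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
pvcreate /dev/md1
vgcreate vmdata /dev/md1
lvcreate --type thin-pool -l 100%FREE -n thinpool vmdata

# Register the thin pool as Proxmox storage for VM disks and containers
pvesm add lvmthin vm-thin --vgname vmdata --thinpool thinpool --content images,rootdir
```

Note that, unlike Ceph, this mdadm RAID1 only protects against disk failure within a node; a full node failure still takes its VMs offline until they are restored elsewhere.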
 

SilverNodashi
Member · Jul 30, 2017
> I think I will do this setup:
> 2x SSD 120 GB in RAID1 (mdadm) ==> Proxmox + ISOs and templates.
> 2x HDD 500 GB in RAID1 too (mdadm is my friend), as LVM-thin ==> VMs + LXC.
> With 3 nodes ==> 1500 GB of free space, and good hardware fault tolerance.

How well did this setup work?
 

atec666
> How well did this setup work?

Well, we are hosting a Nextcloud instance, webmail, a mailer, DNS ... We just upgraded all 3 of our nodes with 2 TB drives ... the migration was very, very easy!
For 12 TB raw we have 4 TB available, because we chose (advice taken here) 3/2 for Ceph.
 

SilverNodashi
> Well, we are hosting a Nextcloud instance, webmail, a mailer, DNS ... We just upgraded all 3 of our nodes with 2 TB drives ... the migration was very, very easy!
> For 12 TB raw we have 4 TB available, because we chose (advice taken here) 3/2 for Ceph.

How well does live migration and high availability between the nodes work?
 

atec666
> How well does live migration and high availability between the nodes work?

Fast: 1 to 3 seconds.
Designing a cluster must be done according to the state of the art:
- 1 NIC for corosync,
- 1 NIC for Ceph,
- 1 NIC for client traffic.
(If you're rich, one more NIC for a backup network, so you can back up during production hours.)
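The NIC split above maps onto Ceph's two network settings in ceph.conf (on Proxmox, /etc/pve/ceph.conf); the corosync link is configured separately in /etc/pve/corosync.conf. The subnets below are made-up examples.

```ini
[global]
    # client / VM traffic network (example subnet)
    public_network = 192.168.10.0/24
    # OSD replication and heartbeat network (example subnet)
    cluster_network = 192.168.20.0/24
```

Keeping OSD replication on its own cluster_network prevents rebalancing traffic (e.g. after a node failure) from competing with client I/O.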
 
