4 Nodes Ceph

yena

Hello, I'm testing a 4-node Ceph cluster:
Each node has two SATA HDDs and two SSDs for the journals.
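For reference, on a Jewel-era cluster an OSD with its journal on a separate SSD is typically prepared roughly like this (a sketch only; the device names are placeholders, not my actual disks):
-----------------------------------------------------------------------------------
# data disk first, SSD journal device second (placeholder device names)
ceph-disk prepare /dev/sdb /dev/sdd
# activate the newly created data partition
ceph-disk activate /dev/sdb1
-----------------------------------------------------------------------------------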
-----------------------------------------------------------------------------------
ceph -w
cluster 1126f843-c89b-4a28-84cd-e89515b10ea2
health HEALTH_OK
monmap e4: 4 mons at {0=10.10.10.1:6789/0,1=10.10.10.2:6789/0,2=10.10.10.3:6789/0,3=10.10.10.4:6789/0}
election epoch 150, quorum 0,1,2,3 0,1,2,3
osdmap e359: 8 osds: 8 up, 8 in
flags sortbitwise,require_jewel_osds
pgmap v28611: 512 pgs, 1 pools, 15124 MB data, 3869 objects
45780 MB used, 29748 GB / 29793 GB avail
512 active+clean
client io 817 B/s wr, 0 op/s rd, 0 op/s wr
-----------------------------------------------------------------------------------

I have tested failing 2 nodes and the cluster went down.
With a 3/2 pool, how many OSDs can I lose? (How can I calculate it?)
 
Min 1, max 2; it all depends on where the last 2 copies of the data end up when you take down 2 nodes.

If you want to be able to survive 2 nodes failing out of 4, you would need a 4/2 pool.
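If it helps, switching the pool to 4/2 is just two settings with the standard Ceph CLI (the pool name below is a placeholder for your actual pool):
-----------------------------------------------------------------------------------
# check the current replication settings
ceph osd pool get <pool> size
ceph osd pool get <pool> min_size

# go to 4 copies, keep serving I/O as long as at least 2 copies are available
ceph osd pool set <pool> size 4
ceph osd pool set <pool> min_size 2
-----------------------------------------------------------------------------------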
 
OK, so now I can lose at most 2 OSDs.
I will try the performance of a 4/2 pool.
Thanks!
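A simple way to get a first performance number for the 4/2 pool is rados bench (pool name is a placeholder; --no-cleanup keeps the written objects so the read test has something to read):
-----------------------------------------------------------------------------------
rados bench -p <pool> 60 write --no-cleanup   # 60-second write test
rados bench -p <pool> 60 seq                  # sequential read of the written objects
rados -p <pool> cleanup                       # remove the benchmark objects afterwards
-----------------------------------------------------------------------------------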
 
If you have 4 monitors, they will not be quorate if any two are down. If you have three monitors, your cluster might still be quorate if a mon and the non-mon node fail. Even monitor counts almost never make sense: either upgrade your cluster to 5 nodes (with 3 or 5 monitors), or keep 4 nodes but only make 3 of them monitors.
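For what it's worth, the quorum math and a rough sketch of dropping the fourth monitor with the plain Ceph CLI (mon IDs 0-3 as in the monmap above; on Proxmox you may prefer the pveceph tooling instead):
-----------------------------------------------------------------------------------
# Quorum needs a strict majority of the monitors: floor(N/2) + 1
#   4 mons -> quorum = 3, so losing any 2 mons stops the cluster
#   3 mons -> quorum = 2, so one failed mon is tolerated
systemctl stop ceph-mon@3    # stop the monitor daemon on the fourth node
ceph mon remove 3            # remove it from the monmap
ceph quorum_status           # verify the remaining 3 mons form quorum
-----------------------------------------------------------------------------------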
 
