Is the following setup possible with Ceph Jewel?
4 Nodes in Proxmox Cluster/Ceph Cluster:
2 Storage nodes, running some testing VMs as well:
--> 2 nodes (128GB RAM, octa-core) with 13 OSDs of 6TB each; MONs on the same disks
2 VM dedicated nodes
--> 2 nodes (265GB RAM, octa-core) with no OSDs, but MONs running on local SSD (GPT) storage
This leaves us with 4 MONs, 2 of them on SSD, and 26 OSDs split across the 2 storage nodes.
- All nodes have additional 10Gbit network cards dedicated to Ceph and cluster communication (via VLANs).
- Public communication runs via 1Gbit network cards (rough ceph.conf split sketched below).
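For reference, this is roughly the network split I have in mind for ceph.conf (the subnets are just placeholders, not our real ones):

    [global]
    # 1Gbit NICs: client/public traffic (placeholder subnet)
    public network = 192.168.10.0/24
    # 10Gbit NICs via VLAN: OSD replication and heartbeat traffic (placeholder subnet)
    cluster network = 10.10.10.0/24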
If I understand Ceph data redundancy (replica count) correctly, it should work to set it to 2 instead of the default 3. If one of the storage nodes then goes down, Ceph should still keep running, even if the load is higher, right?
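If that's viable, I assume the change per pool would be something like this (pool name "rbd" is just an example):

    ceph osd pool set rbd size 2        # keep 2 replicas instead of the default 3
    ceph osd pool set rbd min_size 1    # allow I/O to continue with only 1 copy left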
How many OSDs per node can be missing/damaged before the cluster fails if the replica count is set to 2?
How many OSDs per node can be missing/damaged before the cluster fails if the replica count is set to 3?
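To keep an eye on this I'd probably just watch the usual commands:

    ceph -s                           # overall cluster health
    ceph osd tree                     # which OSDs are up/down per host
    ceph osd pool get rbd min_size    # copies that must stay online for I/O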
Thanks for your help!