Hello,
we are currently evaluating a new Proxmox cluster with Ceph.
Facts:
10 servers in a cluster
2 data centers
5 servers per data center
12 hard disks per server
Installation and configuration are already finished; currently only test VMs are running.
During failure tests I noticed that neither Ceph nor Proxmox works properly when a whole datacenter fails. So I created an additional (virtual) Proxmox node in a third datacenter, which serves as a tiebreaker for the Proxmox and Ceph quorum. With this quorum VM the cluster now survives a datacenter failure.
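For reference, the quorum node was joined roughly like this (the IP is a placeholder; the exact commands may differ depending on the PVE version):

    # On the new quorum node: join the existing Proxmox cluster
    pvecm add 10.0.0.1

    # Create an additional Ceph monitor on the quorum node, so the
    # Ceph MON quorum also survives a full datacenter failure
    pveceph mon create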
My question:
If one datacenter is down, the cluster itself keeps working, but the VMs aren't accessible because Ceph switches their disks to read-only.
Right now we have a Ceph pool with 3/2 (size/min_size) and a correct CRUSH map (correct assignment of datacenter, rack, host, and OSD).
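The replication rule looks roughly like this (a sketch from memory; rule name and id are placeholders):

    # Spread replicas across both datacenters: pick 2 datacenters,
    # then up to 2 hosts in each. With size=3 this yields 2 copies
    # in one datacenter and 1 copy in the other.
    rule replicated_datacenter {
        id 1
        type replicated
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }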
Could a pool size of 3/1 solve this problem? Any other ideas?
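In case it helps, the change I would be testing is simply this (pool name is a placeholder):

    # Allow I/O with only one surviving replica. Risky: a single
    # additional disk failure during the outage could lose data.
    ceph osd pool set vm-pool min_size 1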
Greetings