That's just not how it works.
"Failure domain" is host. You could have shutdown one host with all 4 OSDs going down. That would have worked fine.
All data is distributed across those three nodes. You do not know which OSDs it ends up on. (Well, you...
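For what it's worth, Ceph can tell you where a given object lands. A quick sketch — the pool and object names "mypool" and "myobject" are placeholders:

    # show which PG and which OSDs a specific object maps to
    ceph osd map mypool myobject
    # dump the CRUSH rules, including the chooseleaf step that
    # defines the failure domain (here: host)
    ceph osd crush rule dump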
Yes, most probably you have lost data. Since you keep 3 copies of every PG, it can happen that a PG was located on exactly those three disks, so when you removed them at the same time it became unavailable. As you have wiped the OSDs, I don't think...
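Before wiping, one way to check whether any PG had all of its replicas on the removed disks would have been something like this (the OSD IDs 3, 7 and 11 are placeholders for the three removed OSDs):

    # list the PGs whose acting set includes each OSD
    ceph pg ls-by-osd 3
    ceph pg ls-by-osd 7
    ceph pg ls-by-osd 11
    # any PG ID appearing in all three lists had every replica on the
    # removed disks and becomes unavailable when they go at once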
When you only have one fast network connection, use it for the Ceph public network and do not configure a cluster network.
A cluster network is only useful when you have a separate physical network infrastructure that is at least twice as fast...
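In ceph.conf that boils down to something like the following (the subnet is a placeholder for your own):

    [global]
        # all Ceph traffic, client and replication, uses the one fast link
        public_network = 10.10.10.0/24
        # intentionally no cluster_network entry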
Choosing the interface is currently not possible. It always takes the IP of the interface that provides the default route (or none, if there is no default route).
Better SNAT/DNAT support is something that I am considering implementing with the...
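If you want to see which interface (and thus which IP) will be picked, you can inspect the routing table; 8.8.8.8 below is just an arbitrary external address for illustration:

    # show the default route and the interface it uses
    ip route show default
    # show the source address the kernel would choose for an outside target
    ip route get 8.8.8.8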
If you're going to be running Proxmox on 48 nodes, use your commercial support to lodge a ticket and get exact advice on your setup.
You do have a support subscription, given that Proxmox VE is core to your operation, right?
Hello,
We have seen clusters of around 24 nodes in production. In our experience this can work without fine-tuning if they follow Corosync best practices (see e.g. [1]) and the network latency (of Corosync's network) is small enough and stable...
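A quick way to sanity-check the Corosync links is shown below; "node2" is a placeholder for another cluster member's Corosync address, and a stable latency in the low single-digit milliseconds is roughly what you want to see:

    # show the status of all configured Corosync links
    corosync-cfgtool -s
    # measure latency on the Corosync network to another node
    ping -c 100 node2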
Hi @Nathan Stratton and all,
You need clear guidance here: do not do that unless you have a very compelling reason to.
a) Your hardware is discontinued and past the end of service, which significantly increases the likelihood of component...