Losing data is not the same as losing write access.
In an erasure coded pool you lose data if you lose more than m OSDs in an affected PG.
If fewer than min_size OSDs are up, you lose write access to the placement group.
With size=min_size you cannot lose a single OSD without losing write access to the affected objects.
And none of this depends on the number of nodes or the number of OSDs.
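The two failure thresholds above can be sketched as a toy calculation, assuming the recommended settings size = k + m and min_size = k + 1 (`pg_status` is a made-up helper, not a Ceph API):

```python
# Sketch of when an erasure coded PG loses data vs. write access.
# Assumes size = k + m and min_size = k + 1 (the recommended values);
# pg_status is a toy helper, not a Ceph API.
def pg_status(k: int, m: int, failed_osds: int):
    size = k + m
    min_size = k + 1
    surviving = size - failed_osds
    data_lost = failed_osds > m       # more than m chunks gone -> data gone
    writable = surviving >= min_size  # PG blocks writes below min_size
    return data_lost, writable

# k=6, m=2: two simultaneous failures keep the data but block writes
print(pg_status(6, 2, 2))   # (False, False)
print(pg_status(6, 2, 1))   # (False, True)
print(pg_status(6, 2, 3))   # (True, False)
```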
Yes. In erasure coded pools with m=2 you can lose 2 OSDs for one PG at the same time without losing data.
The same can be achieved in replicated pools with size=3. You can lose 2 OSDs for a PG without losing its data.
This is not recommended and certainly not HA. With m=1 you cannot lose a single disk without losing write access.
An erasure coded pool should have size=k+m and min_size=k+1, which in your case means size=3 and min_size=3.
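As a sketch, for k=2 and m=1 the setup could look like this (profile and pool names are placeholders):

```shell
# Hypothetical names; k=2, m=1 gives size=3 and min_size=3
ceph osd erasure-code-profile set ec-21 k=2 m=1
ceph osd pool create ecpool erasure ec-21
ceph osd pool set ecpool min_size 3
```

This is a config fragment, not something to run blindly; check the profile against your CRUSH topology first.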
No no no. You got your math wrong.
To achieve the same availability as EC with k=6 and m=2 you need triple replication (three copies) meaning a storage efficiency of 33%. It is rarely necessary to go beyond 4 copies.
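The efficiency numbers work out like this (usable capacity divided by raw capacity):

```python
# Raw-to-usable storage efficiency for EC vs. replication.
def ec_efficiency(k: int, m: int) -> float:
    return k / (k + m)

def replica_efficiency(copies: int) -> float:
    return 1 / copies

print(round(ec_efficiency(6, 2) * 100))    # 75 (%) for k=6, m=2
print(round(replica_efficiency(3) * 100))  # 33 (%) for three copies
```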
The failure domain must never be the OSD.
With failure domain = host, only one copy or one chunk of an erasure coded object lives on any single host. All the other copies or chunks live on other hosts.
That is why you need at least three hosts for replication (better four, to be able to recover) and...
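A sketch of how the failure domain is set to host (rule and profile names are placeholders):

```shell
# Replicated rule with host as the failure domain
ceph osd crush rule create-replicated rep_host default host
# For erasure coded pools, the profile carries the failure domain
ceph osd erasure-code-profile set ec-62 k=6 m=2 crush-failure-domain=host
```

These are config fragments; adapt the root (`default`) and k/m values to your cluster.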
iSCSI is deprecated in the Ceph project and should not be used any more.
And there is no need to backup a single Proxmox node (if you have a cluster).
You may want to backup the VM config files but everything else is really not that important.
If you want to lower the time needed to bring up a...
Ceph can deploy NVMe-oF gateways. You need to find hardware that is able to boot from that.
Or you use a PXE network boot where the initrd contains all necessary things to continue with a Ceph RBD as root device.
Have these OSDs been deployed with 19.2?
You may be seeing this bug: https://docs.clyso.com/blog/critical-bugs-ceph-reef-squid/#squid-deployed-osds-are-crashing
Kevin Beaumont is a well-known IT security expert and has, among other things, worked for Microsoft.
But sure, I'll form my own opinion. That has always worked out so well.
OSDs have a minimum allocation size (min_alloc_size) of 4096 bytes which is configured at creation time and cannot be changed afterward.
But this mostly affects small files.
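How much a small object is padded can be sketched with a toy calculation (this is plain arithmetic, not a Ceph API):

```python
import math

# Space a single object occupies on one OSD with min_alloc_size = 4096.
def allocated(file_size: int, min_alloc: int = 4096) -> int:
    return math.ceil(file_size / min_alloc) * min_alloc

print(allocated(1000))    # 4096: a 1 kB object still occupies a full 4 kB block
print(allocated(10_000))  # 12288: the waste shrinks relative to object size
```

Replication and erasure coding then multiply this per-OSD overhead across all copies or chunks.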
No, the separation of the Ceph public and cluster (not private) networks has nothing to do with security.
The cluster network is there to transport replication and recovery traffic between the OSD nodes. It can be configured if there is a separate physical network available that provides more...
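A minimal ceph.conf sketch of the two networks (the subnets are placeholders for your own networks):

```ini
# ceph.conf fragment; subnets are examples only
[global]
public_network  = 192.168.1.0/24
cluster_network = 10.10.10.0/24
```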
All Ceph daemons register in the CRUSH map with exactly one IP and one port.
You can have multiple public networks in the config but this is for the case where different hosts of the Ceph cluster are in different IP networks.
It is not practical to have one Ceph host with IPs from multiple public...