Recent content by gurubert

  1. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    Losing data is not the same as losing write access. In an erasure-coded pool, if you lose more than m OSDs in an affected PG, you lose data. If fewer than min_size OSDs remain, you lose write access to the placement group.
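
The two thresholds can be sketched as a small helper (a hypothetical function for illustration, not part of Ceph; it assumes the commonly recommended min_size = k + 1 default for EC pools):

```python
def ec_pg_status(k, m, failed_osds, min_size=None):
    """Classify a placement group in a k+m erasure-coded pool
    after `failed_osds` of its OSDs have gone down."""
    if min_size is None:
        min_size = k + 1          # recommended default for EC pools
    size = k + m
    alive = size - failed_osds
    if failed_osds > m:
        return "data lost"        # fewer than k chunks survive
    if alive < min_size:
        return "read-only"        # data intact, but writes are blocked
    return "healthy"

# k=4, m=2, min_size=5: one failure leaves full service,
# two failures block writes, three failures lose data.
print(ec_pg_status(4, 2, 1))  # healthy
print(ec_pg_status(4, 2, 2))  # read-only
print(ec_pg_status(4, 2, 3))  # data lost
```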
  2. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    With size=min_size you cannot lose any OSDs without losing write access to the affected objects. And this has nothing to do with the number of nodes or the number of OSDs.
  3. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    Yes. In erasure-coded pools with m=2 you can lose 2 OSDs of a PG at the same time without losing data. The same can be achieved in replicated pools with size=3: you can lose 2 OSDs of a PG without losing its data.
  4. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    This is not recommended and certainly not HA. With m=1 you cannot lose even a single disk. An erasure-coded pool should have size=k+m and min_size=k+1, which in your case means size=3 and min_size=3.
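
As a quick sanity check of those numbers (plain arithmetic, nothing Ceph-specific):

```python
# Recommended erasure-coded pool settings for the k=2, m=1 case above.
k, m = 2, 1
size = k + m          # 3: total chunks stored per object
min_size = k + 1      # 3: writes require this many chunks available
# With size == min_size, a single OSD failure already blocks writes:
print(size - 1 < min_size)  # True
```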
  5. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    No, no, no. Your math is wrong. To achieve the same availability as EC with k=6 and m=2 you need triple replication (three copies), which means a storage efficiency of 33%. It is rarely necessary to go beyond 4 copies.
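
The efficiency comparison is simple arithmetic (illustrative helpers, not a Ceph API):

```python
def ec_efficiency(k, m):
    # Usable fraction of raw capacity in a k+m erasure-coded pool
    return k / (k + m)

def replica_efficiency(size):
    # Usable fraction of raw capacity with `size` full copies
    return 1 / size

# EC k=6, m=2 tolerates two failures at 75% efficiency; matching that
# failure tolerance with replication needs size=3, i.e. 33% efficiency.
print(f"{ec_efficiency(6, 2):.0%}")    # 75%
print(f"{replica_efficiency(3):.0%}")  # 33%
```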
  6. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    The failure domain must never be the OSD. With failure domain = host, only one copy or one chunk of an erasure-coded object lives on any one host; all the other copies or chunks live on other hosts. That is why you need at least three hosts for replication (better four, to be able to recover) and...
  7. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    In this case, just back up the complete /etc. All configuration should be contained therein.
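
A minimal sketch of such a backup using Python's tarfile module (the paths and function name are examples; on a real node you would run this as root and ship the archive off the host):

```python
import os
import tarfile

def backup_dir(src="/etc", dest="/root/etc-backup.tar.gz"):
    """Archive a configuration directory into a gzipped tarball,
    preserving paths, ownership and permission metadata."""
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(src, arcname=os.path.basename(src))
    return dest
```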
  8. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    Think about it the other way around: if you have automation that helps you set up a Proxmox host, you do not need to back up these settings.
  9. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    iSCSI is deprecated in the Ceph project and should no longer be used. And there is no need to back up a single Proxmox node (if you have a cluster). You may want to back up the VM config files, but everything else is really not that important. If you want to lower the time needed to bring up a...
  10. gurubert

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

    Ceph can deploy NVMe-oF gateways; you need to find hardware that is able to boot from that. Or you use PXE network boot, where the initrd contains everything necessary to continue with a Ceph RBD as the root device.
  11. gurubert

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    Have these OSDs been deployed with 19.2? You may be seeing this bug: https://docs.clyso.com/blog/critical-bugs-ceph-reef-squid/#squid-deployed-osds-are-crashing
  12. gurubert

    Dokumentation zum Download für Claude Proxmox Skill

    Kevin Beaumont is a recognized IT security expert and has, among other things, worked for Microsoft. But sure, I'll form my own opinion. That has always worked out so well.
  13. gurubert

    Dokumentation zum Download für Claude Proxmox Skill

    What Claude produces: https://cyberplace.social/@GossiTheDog/116080909947754833
  14. gurubert

    Dokumentation zum Download für Claude Proxmox Skill

    Even with the most precise prompting, I would never entrust production operation to such a stochastic machine.
  15. gurubert

    CephFS "Block Size"?

    OSDs have a minimum allocation size (min_alloc_size) of 4096 bytes, which is configured at creation time and cannot be changed afterward. But this mostly affects small files.
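
The effect on small files can be sketched like this (an illustrative round-up helper, not Ceph internals):

```python
def on_disk_size(file_size, min_alloc_size=4096):
    """Space a file's data occupies on an OSD, rounded up to
    the allocation unit (illustrative sketch only)."""
    if file_size == 0:
        return 0
    units = -(-file_size // min_alloc_size)  # ceiling division
    return units * min_alloc_size

print(on_disk_size(1000))     # 4096: a 1000-byte file still uses one unit
print(on_disk_size(4097))     # 8192: one byte over spills into a second unit
print(on_disk_size(100_000))  # 102400: overhead is negligible for large files
```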