Recent content by gurubert

  1. gurubert

    FSID CLUSTER CEPH

    This is not possible as the CephX keys for the OSDs and all the other internal config will be missing. What happened to your cluster? If you still have all the OSDs you could try to restore the cluster map from the copies stored there and start a new MON with that. Read the Ceph documentation...
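    As a rough sketch of that recovery path, assuming all OSDs are intact (paths are examples; the full procedure is in the Ceph docs under "Recovery using OSDs"):

      ms=/tmp/monstore; mkdir -p "$ms"
      # collect the cluster map from every OSD on this host
      for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path "$osd" --no-mon-config \
          --op update-mon-db --mon-store-path "$ms"
      done
      # rebuild the MON store with the admin keyring and seed a new MON from it
      ceph-monstore-tool "$ms" rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring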
  2. gurubert

    WAL und RocksDB von Ceph auslagern

    The WAL usually lives on the same device as the RocksDB. Configuring a separate WAL device is very unusual. Ceph always uses block devices and sets them up itself.
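    For illustration (device names are examples), an OSD created with only a DB device; BlueStore then keeps the WAL on that device as well:

      # no --block.wal needed, the WAL lands on the DB device automatically
      ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1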
  3. gurubert

    WAL und RocksDB von Ceph auslagern

    That is a very old recommendation from the early days of BlueStore. The RocksDB has become more flexible since then. For RBD and CephFS usage 40 - 70 GB are enough; if S3 is added, it may well be 300 GB. The RadosGW stores a lot of OMAP data.
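    A hedged example for a Proxmox node with an RBD/CephFS-only workload; the device names and the db_dev_size option are assumptions, check pveceph osd create --help on your version:

      # ~70 GiB DB (which also carries the WAL) on the NVMe device
      pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 70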
  4. gurubert

    How to install Windows 10 OS and Proxmox VE in a separate drive?

    The Proxmox installer is not built for that use case. You could install Debian alongside Windows, make it dual-boot and then install Proxmox on Debian.
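    Roughly, that comes down to adding the Proxmox repository to the Debian install and pulling in the meta package; a sketch for Debian 12, following the "Install Proxmox VE on Debian" guide (adjust for your release):

      echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install.list
      wget -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg \
        https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg
      apt update && apt full-upgrade
      apt install proxmox-ve postfix open-iscsi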
  5. gurubert

    Fiberchannel Shared Storage Support

    Thanks for the mention. We only see OCFS2 as a transition technology for the case when the hardware is already there. OCFS2 has its issues. We would always recommend a supported storage setup for new infrastructure.
  6. gurubert

    16 PCI Device limitation

    Why don't you run TrueNAS directly on the hardware? What is the benefit of virtualization when you need to pass 21 drives into the VM?
  7. gurubert

    Classify Cluster Node Roles?

    You could assign the storage for the VMs only to the nodes where they should run. No storage, no images, no VMs.
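    For example (storage and node names are placeholders), restricting a storage to the nodes that should run those VMs:

      pvesm set vm-images --nodes pve1,pve2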
  8. gurubert

    ceph after upgrade to 18.2.6 - observed slow operation indications in BlueStore

    A hotfix release 18.2.7 is currently being prepared by the Ceph project.
  9. gurubert

    [SOLVED] Proxmox on SATA & Ceph on NVMe - does it make sense?

    Do not run Ceph on 1G network equipment. You will be disappointed.
  10. gurubert

    Proxmox 9 Kernel and Ubuntu 25.04?

    proxmox-kernel-6.14 is already available for Proxmox 8.4.
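    Installing it on 8.4 is a normal package install:

      apt update
      apt install proxmox-kernel-6.14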
  11. gurubert

    Ceph Storage (18.2.4) bei 80%

    That looks well balanced. One more recommendation: double the number of PGs for the pool IKT-Labor. At the moment there are fewer than 70 PGs per OSD. SSDs can comfortably hold 200 - 400 PGs per OSD. The algorithm can then distribute the data even better, because the individual PGs are not...
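    A sketch of that change (the target value is an example; pick roughly double the pool's current pg_num):

      ceph osd pool set IKT-Labor pg_num 256
      # if the autoscaler is active it may adjust this again; check with
      ceph osd pool autoscale-status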
  12. gurubert

    Ceph Storage (18.2.4) bei 80%

    "ceph osd df tree" wäre interessant und "ceph df".
  13. gurubert

    [SOLVED] Add WAL/DB to CEPH OSD after installation.

    This cannot be avoided. If Ceph needs more space for the RocksDB, it takes it from the block device.
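    To see whether that spillover has already happened, something like this helps (osd.0 is an example ID):

      ceph health detail                   # reports BLUEFS_SPILLOVER if the DB overflowed
      ceph daemon osd.0 perf dump bluefs   # per-OSD BlueFS usage counters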
  14. gurubert

    [SOLVED] Migrating Proxmox HA-Cluster with Ceph to new IP Subnet

    You cannot just change the IPs in the ceph.conf file. You need to deploy new MONs first that listen on the new IP addresses. These addresses are recorded in the mon map, as you have already seen. The IPs in ceph.conf are only there for all the other processes talking to the Ceph cluster.
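    Sketched with pveceph (node name and address are examples), one MON at a time so quorum is kept:

      pveceph mon destroy pve1                        # remove the MON still bound to the old IP
      pveceph mon create --mon-address 10.10.20.11    # recreate it on the new subnet
      ceph mon dump                                   # verify the new address in the mon map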