Search results

  1. gurubert

    No OSDs in CEPH after recreating monitor

    You really need your old cluster map extracted from the OSDs. If you only deploy a new MON, you create a new Ceph cluster, and the existing OSDs will not be able to join it. The ceph.conf file does not matter here; it only tells the clients and the OSDs where to find the MONs.
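
    If the MON database is lost, the cluster map can be rebuilt from the OSDs. A rough sketch of that recovery procedure, assuming standard paths and that all OSDs on the node are stopped:

      # collect the cluster map from every OSD store on this node
      mkdir /tmp/mon-store
      for osd in /var/lib/ceph/osd/ceph-*; do
          ceph-objectstore-tool --data-path $osd --no-mon-config \
              --op update-mon-db --mon-store-path /tmp/mon-store
      done
      # the rebuilt store is then used to recreate the MON database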
  2. gurubert

    Cluster aware FS for shared datastores?

    You need to create the OCFS2 filesystem with "-T vmstore", which creates 1MB clusters for the files. Each time a file needs to be enlarged, all nodes have to communicate so that they know about the newly allocated blocks. With larger cluster sizes this happens less often.
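
    A minimal sketch of that mkfs call, assuming /dev/sdX is the shared LUN (label and node-slot count are placeholders):

      # OCFS2 filesystem tuned for VM images (1MB clusters)
      mkfs.ocfs2 -T vmstore -L vmstore01 -N 4 /dev/sdX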
  3. gurubert

    Cluster aware FS for shared datastores?

    From kernel 6.5 to at least 6.8 there is an issue with OCFS2 and io_uring that produces IO errors inside the VM. Unfortunately OCFS2 is not well maintained.
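
    A possible workaround (my assumption, not an official fix) is to switch the affected virtual disks away from the io_uring backend:

      # use the threads AIO backend instead of the io_uring default
      # (VM ID, storage and volume names are placeholders)
      qm set 100 --scsi0 ocfs2store:100/vm-100-disk-0.qcow2,aio=threads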
  4. gurubert

    Cluster aware FS for shared datastores?

    There is OCFS2, which can be set up as a clustered filesystem on a shared LUN. You need the 6.14 kernel and will get no official support.
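
    On PVE 8 the opt-in kernel should be installable via the usual package naming scheme (my assumption, check the Proxmox opt-in kernel announcements):

      apt install proxmox-kernel-6.14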
  5. gurubert

    No OSDs in CEPH after recreating monitor

    With "# ceph-mon --monmap /root/monmap --keyring /etc/pve/priv/ceph.mon.keyring --mkfs -i nextclouda -m 10.0.1.1" you created a new MON database (--mkfs) and removed all info from the old one, not only the monmap. You should have just inserted the new monmap with "ceph-mon -i mon.nextclouda...
  6. gurubert

    Migrate to a new cluster

    Yes, this is sufficient.
  7. gurubert

    Migrate to a new cluster

    AFAIK this is safe. It is best to remove the VM/CT config file from /etc/pve on the old cluster. You may encounter some issues with the virtual hardware version on the new cluster; VMs (especially Windows) may be picky.
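
    A sketch of such a migration (VMID 100 is a placeholder; the referenced storages must exist on the new cluster):

      # old cluster: remove the config so the guest cannot start there anymore
      mv /etc/pve/qemu-server/100.conf /root/100.conf.bak
      # new cluster: put the copied config in place and review the virtual
      # hardware (machine type, BIOS, CPU) before the first boot
      cp 100.conf /etc/pve/qemu-server/100.conf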
  8. gurubert

    Ceph Warnung nach Update

    As a first step, I would restart the MDS mds.pox.
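
    Assuming a packaged Ceph deployment on the node, that would be:

      systemctl restart ceph-mds@pox.service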
  9. gurubert

    Ceph 19.2.0 does not distribute PG equally across OSDs

    This is not unusual in such a small cluster with such a low number of PGs. The CRUSH algorithm just does not have enough pieces to distribute the data evenly. You should increase the number of PGs so that you have at least 100 per OSD.
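
    A sketch for raising the PG count (pool name and target value are placeholders):

      # check the current per-OSD distribution
      ceph osd df
      # raise pg_num; in recent releases pgp_num follows automatically
      ceph osd pool set mypool pg_num 128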
  10. gurubert

    osd crashed

    I would replace the disk now.
  11. gurubert

    osd crashed

    Remove this OSD and redeploy it. There may just be a single bit error on the disk.
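
    On Proxmox that could look like this (OSD ID 3 and the device are placeholders; wait for all PGs to become active+clean before destroying):

      ceph osd out 3
      # after rebalancing has finished:
      systemctl stop ceph-osd@3
      pveceph osd destroy 3 --cleanup
      pveceph osd create /dev/sdX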
  12. gurubert

    Ceph placement group remapping

    Erasure coding is not usable in such small clusters. You need at least 10 nodes with enough OSDs to do anything meaningful with erasure coding.
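
    To illustrate the sizing: even a small k=4, m=2 profile needs six failure domains just to place one PG, plus spare hosts to recover onto after a failure. A hypothetical profile for a cluster that is big enough:

      # 4 data + 2 coding chunks, one chunk per host
      ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
      ceph osd pool create ecpool erasure ec42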
  13. gurubert

    OSD struggles

    Yes, do not mix two different device classes in one pool. You will only get HDD performance.
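
    Device classes can be separated with class-specific CRUSH rules; a sketch (rule and pool names are placeholders):

      ceph osd crush rule create-replicated replicated_ssd default host ssd
      ceph osd crush rule create-replicated replicated_hdd default host hdd
      ceph osd pool set vm-pool crush_rule replicated_ssd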
  14. gurubert

    OSD struggles

    You need to replace the sdb drive.
  15. gurubert

    OSD struggles

    Are there any signs in the kernel log about a failure on the device of this OSD?
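
    For example, assuming the OSD sits on /dev/sdb:

      # look for medium or transport errors on the device
      dmesg -T | grep -i sdb
      # and check the drive's own error counters
      smartctl -a /dev/sdb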
  16. gurubert

    Ceph 2 OSD's down and out

    Is data affected? Are there any PGs not active+clean?
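
    A quick way to check:

      ceph -s
      ceph health detail
      # list only PGs that are not healthy
      ceph pg ls | grep -v 'active+clean'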
  17. gurubert

    how to make proxmox node use vmbr1 instead of default vmbr0

    Proxmox does not use DHCP for network configuration. You could remove the IP configuration from vmbr0, add a local IP on vmbr1, and point the default gateway to the opnsense VM. Just make sure to apply these changes in a way that keeps Proxmox reachable (via opnsense).
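
    A sketch of /etc/network/interfaces for such a setup (all addresses and the NIC name are assumptions):

      auto vmbr0
      iface vmbr0 inet manual
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0

      auto vmbr1
      iface vmbr1 inet static
              address 192.168.1.2/24
              gateway 192.168.1.1
              bridge-ports none
              bridge-stp off
              bridge-fd 0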
  18. gurubert

    Ceph support - Not Proxmox

    Ah, I did not know about Ceph nano. It says that it only exposes S3, which is HTTP. You will not be able to access the Ceph "cluster" running inside this container with anything else.
  19. gurubert

    Ceph support - Not Proxmox

    If a MON has 127.0.0.1 as its IP, there is something fundamentally wrong in the setup. The MONs need an IP from Ceph's public_network so that they are reachable from all other Ceph daemons and the clients.
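
    In ceph.conf that looks like this (network and addresses are examples):

      [global]
              public_network = 10.0.1.0/24
              mon_host = 10.0.1.1 10.0.1.2 10.0.1.3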