Search results

  1. gurubert

    Adding a Second Public Network to Proxmox VE with Ceph Cluster

    This will not work. The public network is called public for a reason: each client that wants to access the Ceph storage cluster needs to be able to talk to every node in each public network.
  2. gurubert

    Ceph - feasible for Clustered MSSQL?

If you want to run MSSQL in a clustered setup, use local storage for its nodes. There is no need to replicate on the storage level if the DB already replicates its data on the application level.
  3. gurubert

    Does Proxmox plan to support SCSI over FC?

You need to create an OCFS2 filesystem on the shared LUN, which includes setting up an OCFS2 cluster on all nodes. Read the OCFS2 documentation for details. The OCFS2 filesystem is then mounted at the same mountpoint on all Proxmox nodes. In Proxmox you can then create a new storage of type directory with the shared flag set...
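
    A minimal sketch of that last step, assuming the OCFS2 filesystem and cluster are already set up (device path, mount point and storage id are placeholders, not from the post):

      # mount the filesystem at the same path on every node (e.g. via /etc/fstab)
      mount -t ocfs2 /dev/mapper/shared-lun /mnt/ocfs2
      # register it once as a shared directory storage for the whole cluster
      pvesm add dir ocfs2-store --path /mnt/ocfs2 --content images --shared 1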
  4. gurubert

    Does Proxmox plan to support SCSI over FC?

We have been running such a setup with OCFS2 on the shared LUNs and qcow2 images in the filesystem for several years now. It works but is unsupported by Proxmox.
  5. gurubert

    No OSDs in CEPH after recreating monitor

You really need your old cluster map extracted from the OSDs. If you only deploy a new MON, you are creating a new Ceph cluster, and the existing OSDs will not be able to join it. The ceph.conf file does not matter here; it only tells the clients and the OSDs where to find the MONs.
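
    Very roughly, the upstream Ceph disaster-recovery procedure for rebuilding a MON store from the OSDs looks like this (paths and OSD id are placeholders; check the Ceph documentation on recovering the monitor store using OSDs before running anything):

      # collect cluster map data from each OSD on the node
      ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
          --op update-mon-db --mon-store-path /root/mon-store
      # rebuild the monitor store, supplying the admin keyring
      ceph-monstore-tool /root/mon-store rebuild -- --keyring /etc/pve/priv/ceph.client.admin.keyring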
  6. gurubert

    Cluster aware FS for shared datastores?

You need to create the OCFS2 filesystem with "-T vmstore", which creates 1MB clusters for the files. Each time a file needs to be enlarged, all nodes have to communicate so that they know about the newly allocated blocks. With larger cluster sizes this happens less often.
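
    For reference, that mkfs invocation could look like this (device path and label are placeholders):

      # -T vmstore selects large (1MB) clusters suited for VM image files
      mkfs.ocfs2 -T vmstore -L pve-ocfs2 /dev/mapper/shared-lun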
  7. gurubert

    Cluster aware FS for shared datastores?

From kernel 6.5 to at least 6.8 there is an issue with OCFS2 and io_uring that produces I/O errors inside the VM. Unfortunately, OCFS2 is not well maintained.
  8. gurubert

    Cluster aware FS for shared datastores?

There is OCFS2, which can be set up as a clustered filesystem on a shared LUN. You need the 6.14 kernel and will get no official support.
  9. gurubert

    No OSDs in CEPH after recreating monitor

    With "# ceph-mon --monmap /root/monmap --keyring /etc/pve/priv/ceph.mon.keyring --mkfs -i nextclouda -m 10.0.1.1" you created a new MON database (--mkfs) and removed all info from the old one, not only the monmap. You should have just inserted the new monmap with "ceph-mon -i mon.nextclouda...
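
    For comparison, injecting a monmap into an existing MON database (instead of recreating it with --mkfs) would normally look roughly like this (the MON id is a placeholder; the daemon must be stopped while the map is injected):

      systemctl stop ceph-mon@<mon-id>
      ceph-mon -i <mon-id> --inject-monmap /root/monmap
      systemctl start ceph-mon@<mon-id>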
  10. gurubert

    Migrate to a new cluster

    Yes, this is sufficient.
  11. gurubert

    Migrate to a new cluster

AFAIK this is safe. It would be best to remove the VM/CT config file from /etc/pve on the old cluster. You may encounter some issues with the virtual hardware version on the new cluster. VMs (especially Windows) may be picky.
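
    For example, on the old cluster something like this (VMID 100 and the backup path are just illustrations; VM configs live in /etc/pve/qemu-server/, CT configs in /etc/pve/lxc/):

      mv /etc/pve/qemu-server/100.conf /root/old-cluster-configs/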
  12. gurubert

    Ceph Warnung nach Update

I would first restart the MDS mds.pox.
  13. gurubert

    Ceph 19.2.0 does not distribute PG equally across OSDs

This is not unusual in such a small cluster with such a low number of PGs. The CRUSH algorithm just does not have enough pieces to distribute the data evenly. You should increase the number of PGs so that you have at least 100 per OSD.
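
    As a rough illustration, with 6 OSDs and a replicated pool of size 3 (numbers made up for the example, pool name is a placeholder):

      # target PGs ≈ (OSDs * 100) / replica size, rounded up to a power of two
      # 6 * 100 / 3 = 200  ->  256
      ceph osd pool set <pool> pg_num 256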
  14. gurubert

    osd crashed

    I would replace the disk now.
  15. gurubert

    osd crashed

Remove this OSD and redeploy it. It may just be a single bit error on the disk.
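
    A rough outline of redeploying an OSD on Proxmox (OSD id and device are placeholders; wait for the cluster to finish rebalancing before destroying anything):

      ceph osd out <id>
      systemctl stop ceph-osd@<id>
      pveceph osd destroy <id> --cleanup
      pveceph osd create /dev/sdX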
  16. gurubert

    Ceph placement group remapping

    Erasure coding is not usable in such small clusters. You need at least 10 nodes with enough OSDs to do anything meaningful with erasure coding.
  17. gurubert

    OSD struggles

    Yes, do not mix two different device classes in one pool. You will only get HDD performance.
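
    The usual way to separate them is one CRUSH rule per device class and pinning each pool to a rule (rule and pool names are just examples):

      ceph osd crush rule create-replicated replicated-ssd default host ssd
      ceph osd crush rule create-replicated replicated-hdd default host hdd
      ceph osd pool set <pool> crush_rule replicated-ssd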
  18. gurubert

    OSD struggles

    You need to replace the sdb drive.
  19. gurubert

    OSD struggles

    Are there any signs in the kernel log about a failure on the device of this OSD?
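
    Something along these lines could be used to check (the device name sdb is taken from the thread above):

      dmesg -T | grep -iE 'sdb|i/o error|ata'
      smartctl -a /dev/sdb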