Search results

  1. gurubert

    osd crashed

    Remove this OSD and redeploy it. It may just have been a bit error on the disk.
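    A rough sketch of the procedure, assuming the failed OSD has ID 12 and sat on /dev/sdX (both placeholders, adjust to your setup):

      ceph osd out 12                            # take the OSD out of the data distribution
      systemctl stop ceph-osd@12                 # on the node that hosts it
      ceph osd purge 12 --yes-i-really-mean-it   # remove it from the CRUSH and OSD maps
      ceph-volume lvm zap /dev/sdX --destroy     # wipe the old disk
      pveceph osd create /dev/sdX                # redeploy the OSD via the Proxmox tooling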
  2. gurubert

    Ceph placement group remapping

    Erasure coding is not usable in such small clusters. You need at least 10 nodes with enough OSDs to do anything meaningful with erasure coding.
  3. gurubert

    OSD struggles

    Yes, do not mix two different device classes in one pool. You will only get HDD performance.
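    A sketch of how to keep the classes separate, assuming the usual hdd and ssd device classes and a pool called mypool (placeholder): create one CRUSH rule per class and assign each pool to exactly one of them.

      ceph osd crush rule create-replicated replicated_hdd default host hdd
      ceph osd crush rule create-replicated replicated_ssd default host ssd
      ceph osd pool set mypool crush_rule replicated_ssd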
  4. gurubert

    OSD struggles

    You need to replace the sdb drive.
  5. gurubert

    OSD struggles

    Are there any signs in the kernel log about a failure on the device of this OSD?
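    Assuming the OSD's disk is /dev/sdb (placeholder), a quick check could look like:

      dmesg -T | grep -iE 'sdb|i/o error'    # kernel messages about the device
      journalctl -k | grep -i sdb            # the same via the journal
      smartctl -a /dev/sdb                   # SMART health, needs smartmontools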
  6. gurubert

    Ceph 2 OSD's down and out

    Is data affected? Are there any PGs not active+clean?
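    A quick way to check (just a sketch):

      ceph -s                                # overall cluster health
      ceph health detail                     # which PGs are affected and why
      ceph pg ls | grep -v 'active+clean'    # list anything that is not active+clean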
  7. gurubert

    how to make proxmox node use vmbr1 instead of default vmbr0

    Proxmox does not use DHCP for network configuration. You could remove the IP configuration from vmbr0, add a local IP on vmbr1, and point the default gateway to the OPNsense VM. Just make sure to apply these changes in a way that keeps Proxmox reachable (via OPNsense).
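    A sketch of what /etc/network/interfaces could look like afterwards (interface names and addresses are just assumptions; apply with ifreload -a from a console that does not depend on the network):

      auto vmbr0
      iface vmbr0 inet manual
          # no IP here any more, only the WAN bridge for the OPNsense VM
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0

      auto vmbr1
      iface vmbr1 inet static
          # assumed LAN address of the Proxmox host
          address 192.168.1.10/24
          # assumed LAN IP of the OPNsense VM
          gateway 192.168.1.1
          bridge-ports none
          bridge-stp off
          bridge-fd 0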
  8. gurubert

    Ceph support - Not Proxmox

    Ah, I did not know about Ceph nano. It says that it only exposes S3, which is HTTP. You will not be able to access the Ceph "cluster" running inside this container with anything else.
  9. gurubert

    Ceph support - Not Proxmox

    If a MON has 127.0.0.1 as its IP there is something fundamentally wrong in the setup. The MONs need an IP from Ceph's public_network so that they are reachable from all the other Ceph daemons and the clients.
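    For illustration, with an assumed public network of 10.0.0.0/24 the relevant part of ceph.conf would look something like:

      [global]
          # network the MONs, OSDs and clients talk over
          public_network = 10.0.0.0/24
          # optional separate network for OSD replication traffic
          cluster_network = 10.0.1.0/24
          mon_host = 10.0.0.11 10.0.0.12 10.0.0.13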
  10. gurubert

    Proxmox cluster with shared Ceph partition in VMs

    Are these two separate CephFS volumes? What does "ceph fs status" show? If they are separate, you need to specify the fs name with the fs= option to the mount command.
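    For example (file system name, monitor address and paths are placeholders), list the file systems and mount a specific one:

      ceph fs status
      ceph fs ls
      mount -t ceph 10.0.0.11:/ /mnt/cephfs2 -o name=admin,secretfile=/etc/ceph/admin.secret,fs=cephfs2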
  11. gurubert

    OSD ghost

    Ceph is not able to run dual stack. You have to use either IPv4 or IPv6 on both networks.
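    For an IPv6-only setup, the relevant messenger options in ceph.conf would be something like this sketch:

      [global]
          ms_bind_ipv4 = false
          ms_bind_ipv6 = true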
  12. gurubert

    Ceph total storage/useable

    A failure domain of host will not allow two copies on the same host. All PGs of the pool will be degraded in such a situation, as Ceph is not able to find a location for the fourth copy.
  13. gurubert

    Ceph total storage/useable

    No, the size of the pool defines the number of copies it creates (when being replicated, not erasure coded). It has nothing to do with the number of nodes. A size=3 pool will distribute the three copies over four nodes randomly. If an OSD is down the missing third copy will be recreated from...
  14. gurubert

    Ceph total storage/useable

    Your datastore01 pool has a size of 4. It stores four copies of each object, hence its usable capacity is only 25% of the total capacity.
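    You can check this and do the math yourself; with e.g. 40 TB of raw capacity in the cluster (just an assumed number):

      ceph osd pool get datastore01 size    # -> size: 4
      ceph df                               # raw vs. stored/available per pool
      # usable for this pool ~ raw capacity / size = 40 TB / 4 = 10 TB (25%)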
  15. gurubert

    Multipath Ceph storage network

    This is Ethernet and not Fibre-Channel. You cannot have two separate interfaces in the same VLAN. Create a stack out of the two switches (or MLAG or whatever the vendor calls it) and then a link aggregation group for each Ceph node with one port from each switch. On the Ceph nodes create a...
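    On each Ceph node the bond could then look like this in /etc/network/interfaces (interface names and the address are assumptions):

      auto bond0
      iface bond0 inet static
          # assumed address on the Ceph storage network
          address 10.10.10.11/24
          bond-slaves enp5s0f0 enp5s0f1
          bond-mode 802.3ad
          bond-xmit-hash-policy layer3+4
          bond-miimon 100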
  16. gurubert

    Ceph RBD Storage Shrinking Over Time – From 10TB up to 8.59TB

    This is for the rule; it is not the size of the pool.
  17. gurubert

    Ceph RBD Storage Shrinking Over Time – From 10TB up to 8.59TB

    Why are there 8 TB of data stored in device_health_metrics? Do you store RBD data in this pool? You should create a separate pool for RBD data. https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy#Guest_images_are_stored_on_pool_device_health_metrics
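    A sketch of creating a dedicated pool and moving a disk there (pool name, VM ID and disk name are placeholders):

      pveceph pool create rbd_vm --add_storages   # creates the pool plus a matching Proxmox storage entry
      qm move_disk 100 scsi0 rbd_vm               # move one VM disk to the new pool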