Search results

  1. Increased PG from 256 to 512 and deep scrub running on 2 PGs for over 24 hours

    Hi, we increased our PG count from 256 to 512 over the weekend to accommodate our growing Ceph cluster. The cluster has been in a healthy state and everything appears to be OK, except we have noticed that 2 PGs have been deep scrubbing for over 24 hours now. My questions: 1). Could this be Ceph...
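
    If the deep scrubs look genuinely stuck, a first step is to see exactly which PGs are scrubbing and when they last completed a deep scrub. Below is a minimal sketch in Python, shelling out to the standard `ceph pg dump` command; the JSON layout differs between Ceph releases, so it checks the two common locations of the PG stats.

    ```python
    #!/usr/bin/env python3
    """List PGs whose state includes 'deep', with their last deep-scrub stamp."""
    import json
    import subprocess

    raw = subprocess.check_output(["ceph", "pg", "dump", "--format", "json"])
    data = json.loads(raw)

    # Newer releases nest the stats under "pg_map"; older ones keep them top-level.
    pg_stats = data.get("pg_map", data).get("pg_stats", [])

    for pg in pg_stats:
        if "deep" in pg.get("state", ""):
            print(pg["pgid"], pg["state"],
                  "last deep scrub:", pg.get("last_deep_scrub_stamp"))
    ```
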
  2. Any solution to slow backups?

    We are currently running an 8-node cluster - 5 nodes are for compute, 3 nodes are for Ceph. It's all in HA on a 10Gb network - backups go to NFS. Our data transfer speeds are around 40MB/s, which is unacceptable for our needs. Curious whether migrating our backup solution to PBS would improve this...
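
    Before migrating to PBS, it may be worth ruling out the NFS target itself as the bottleneck. Here is a minimal sketch of a sequential-write benchmark against the backup mount; `/mnt/pve/backup-nfs` is a hypothetical path (substitute your own mount point), and the script writes and then deletes a 1 GiB test file.

    ```python
    #!/usr/bin/env python3
    """Rough sequential-write throughput test against an NFS-mounted backup target."""
    import os
    import time

    TARGET = "/mnt/pve/backup-nfs/throughput-test.bin"  # hypothetical mount point
    CHUNK = 4 * 1024 * 1024        # write in 4 MiB chunks
    TOTAL = 1024 * 1024 * 1024     # 1 GiB overall

    buf = os.urandom(CHUNK)
    start = time.monotonic()
    with open(TARGET, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())       # make sure the data actually reached the server
    elapsed = time.monotonic() - start
    os.remove(TARGET)

    print(f"wrote {TOTAL / 1e6:.0f} MB in {elapsed:.1f}s -> {TOTAL / 1e6 / elapsed:.1f} MB/s")
    ```
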
  3. Ceph configuration issues - Health problems

    Hi all, I've done my best to read the documentation and research the forums / web. I keep having these health errors pop up. I've tried a ton of different PG configurations, including 128, 1024, 2000, etc. I can't seem to nail this setup - I've tried using calculators as well. I have 3 nodes, 8...
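
    For reference, the calculators mentioned above generally implement the same rule of thumb: total PGs ≈ (number of OSDs × target PGs per OSD, typically 100) ÷ replica count, rounded up to a power of two. A minimal sketch follows; the 24 OSDs and 3x replication in the example are purely illustrative, not taken from the post.

    ```python
    #!/usr/bin/env python3
    """Rule-of-thumb PG count, as implemented by the classic Ceph PG calculators."""

    def suggested_pg_count(num_osds: int, replica_count: int,
                           target_per_osd: int = 100) -> int:
        raw = num_osds * target_per_osd / replica_count
        # Round up to the next power of two, as the calculators do.
        power = 1
        while power < raw:
            power *= 2
        return power

    # Illustrative only: 24 OSDs at 3x replication -> 24*100/3 = 800 -> 1024
    print(suggested_pg_count(24, 3))
    ```
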
  4. Can't add OSD for a certain disk

    One disk on one of our servers is labeled as a 'partition' (not sure why - it was a clean install, and all the other nodes don't have this issue), so as a result we are not able to add the disk as an OSD to our Ceph cluster. Has anyone had this issue and know how to fix it? TIA
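
    A disk usually shows up as a 'partition' when it still carries a stale partition table or filesystem signature from a previous use. The sketch below only inspects the device; the actual wipe commands (`wipefs -a`, `ceph-volume lvm zap --destroy`) are deliberately left commented out because they are destructive, and `/dev/sdX` is a placeholder for the affected disk.

    ```python
    #!/usr/bin/env python3
    """Inspect (and optionally wipe) leftover signatures that keep a disk from
    being offered as a clean OSD device."""
    import subprocess

    DEV = "/dev/sdX"  # placeholder: set to the affected disk, and double-check it!

    # Show how the kernel sees the device and any partitions on it.
    subprocess.run(["lsblk", "-o", "NAME,TYPE,SIZE,FSTYPE", DEV], check=True)

    # Without -a, wipefs only *lists* the signatures it finds -- nothing is erased.
    subprocess.run(["wipefs", DEV], check=True)

    # Destructive steps, commented out on purpose:
    # subprocess.run(["wipefs", "-a", DEV], check=True)
    # subprocess.run(["ceph-volume", "lvm", "zap", DEV, "--destroy"], check=True)
    ```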