Search results

  1. Increased PG count from 256 to 512 and deep scrub running on 2 PGs for over 24 hours

    Hi, we increased our PG count from 256 to 512 over the weekend to accommodate our growing Ceph cluster. The cluster has been in a healthy state and everything appears to be OK, except we have noticed 2 PGs have been deep scrubbing for over 24 hours now. My questions: 1) Could this be Ceph...
  2. Any solution to slow backups?

    We are currently running an 8-node cluster - 5 nodes are for compute, 3 nodes are for Ceph. It's all in HA on a 10Gb network - backups go to NFS. Our data transfer speeds are around 40MB/s, which is unacceptable for our needs. Curious if migrating our backup solution to PBS improves this...
  3. Proxmox Backup Server 1.0 (stable)

    We're currently running an 8-node cluster - 3 nodes are dedicated to Ceph. All our QCOW2 files reside on the Ceph cluster and we do a vma.zst backup to an NFS share. Our network is all 10Gbit and we're seeing only 40MB/s backups, which is becoming a big issue as our client base grows on this...
  4. Ceph configuration issues - Health problems

    Figured it out. I need to review my switch config. The server is set up on a bonded interface running LACP. When I shut down the port on one of the two switches, it resolves the OSD flapping and Ceph errors.
  5. Ceph configuration issues - Health problems

    To add - in all scenarios I have OSDs going up and down (mainly on one specific node).
  6. Ceph configuration issues - Health problems

    Hi all, I've done my best to read the documentation and research the forums and the web. I keep having these health errors pop up. I've tried a ton of different PG configurations, including 128, 1024, 2000, etc. I can't seem to nail this setup - I've tried using calculators as well. I have 3 nodes, 8...
  7. Can't add OSD for a certain disk

    Never mind - I used fdisk and purged both partitions on the drive.
  8. Can't add OSD for a certain disk

    One disk on one of our servers is labeled as a 'partition' (not sure why - it was a clean install and the other nodes don't have the same issue), so as a result I'm not able to add the disk as an OSD to our Ceph cluster. Has anyone had this issue and know how to fix it? TIA
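For results 7 and 8, the fix reported in result 7 was to purge the leftover partitions with fdisk so the disk shows up as unused again. Below is a minimal, non-interactive sketch of the same idea, not the exact steps from the thread: it assumes a Linux host with root access, the lsblk, sgdisk, wipefs and partprobe utilities installed, and a placeholder device path such as /dev/sdX. It destroys everything on the device.

```python
#!/usr/bin/env python3
"""Sketch: wipe leftover partition tables and filesystem signatures from a
disk so it can be offered to Ceph as a fresh OSD. Destructive by design.

Assumptions (not from the thread): root access; sgdisk, wipefs, lsblk and
partprobe available; device path passed on the command line, e.g. /dev/sdX.
"""
import subprocess
import sys


def wipe_disk(device: str) -> None:
    # Show what the kernel currently sees on the device (partitions, FS signatures).
    subprocess.run(["lsblk", "-o", "NAME,TYPE,FSTYPE,SIZE", device], check=True)

    answer = input(f"Really wipe ALL partitions and signatures on {device}? [y/N] ")
    if answer.strip().lower() != "y":
        sys.exit("Aborted.")

    # Zap GPT/MBR partition tables, then clear any remaining filesystem signatures.
    subprocess.run(["sgdisk", "--zap-all", device], check=True)
    subprocess.run(["wipefs", "--all", device], check=True)

    # Ask the kernel to re-read the (now empty) partition table.
    subprocess.run(["partprobe", device], check=True)
    print(f"{device} wiped; it should now show up as unused.")


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit(f"usage: {sys.argv[0]} /dev/sdX")
    wipe_disk(sys.argv[1])
```

Once the device is clean, it can usually be added as an OSD from the Proxmox GUI or with something like `pveceph osd create /dev/sdX` (command form assumed from current Proxmox VE releases, not quoted from the thread).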

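For result 1, a quick way to see which PGs are still deep scrubbing (and confirm it really is only the two of them) is to filter the per-PG state dump. The sketch below is an illustration under stated assumptions, not something from the thread: it expects the ceph CLI with an admin keyring on the node, and relies on `ceph pg dump pgs_brief` printing the PG id and state as the first two columns.

```python
#!/usr/bin/env python3
"""Sketch: list placement groups currently in a deep-scrub state, so
long-running scrubs (e.g. stuck for 24h+) are easy to spot.

Assumptions (not from the thread): run where the ceph CLI and an admin
keyring are available; pgs_brief output has PG id first and state second.
"""
import subprocess

# Dump a brief per-PG table: PG_STAT, STATE, UP, UP_PRIMARY, ACTING, ACTING_PRIMARY.
out = subprocess.run(
    ["ceph", "pg", "dump", "pgs_brief"],
    capture_output=True, text=True, check=True,
).stdout

deep = []
for line in out.splitlines():
    fields = line.split()
    # Deep scrubs show up as a state like "active+clean+scrubbing+deep".
    if len(fields) >= 2 and "+deep" in fields[1]:
        deep.append((fields[0], fields[1]))

if not deep:
    print("No PGs are deep scrubbing right now.")
for pgid, state in deep:
    print(f"{pgid}  {state}")
```

For a single stuck PG, `ceph pg <pgid> query` gives more detail on its current state; whether a long deep scrub shortly after a PG split is a problem depends on PG size and disk load, which the snippet above cannot answer.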