ceph bad performance

  1. Ceph Cluster Down/Sorta

    So currently I have 3 nodes, 3x16x18TB HDDs, in a Ceph cluster running normally. Today I went to add 2 more nodes with 2x12x12TB drives. All was fine until I went to add the OSDs into Ceph. I set the norecover flag but forgot to set the noout and norebalance flags. The cluster failed...
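The flag sequence the poster describes can be sketched with the standard Ceph CLI; the flag names (`noout`, `norebalance`, `norecover`) are real cluster flags, but the exact maintenance sequence below is an illustration, not the poster's actual commands:

```shell
# Pause data movement before adding OSDs: no automatic out-marking
# of down OSDs, no rebalancing, no recovery traffic.
ceph osd set noout
ceph osd set norebalance
ceph osd set norecover

# ... create and add the new OSDs here ...

# Re-enable normal behavior once the new OSDs are in and peered.
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout
```

Setting only `norecover` while leaving `noout` and `norebalance` unset, as described in the post, lets the cluster start rebalancing onto the new OSDs while blocking the recovery traffic needed to complete it.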
  2. Proxmox ceph low write iops but good read iops. Why??

    Hi, we are running a 5-node Proxmox Ceph cluster. Three of the nodes have SSD drives, which make up a pool called ceph-ssd-pool1. The configuration is as follows: Ceph network: 10G. SSD drives: Kingston SEDC500M/1920G (which they call datacenter-grade SSDs, claiming to...
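Low write IOPS with good read IOPS on Ceph is commonly diagnosed by measuring single-threaded synchronous write performance of the SSDs, since Ceph's write path issues sync writes. A sketch using the `fio` tool (the file path is an example; run it against a scratch file on the SSD, never a production OSD):

```shell
# Measure queue-depth-1, 4k sync write IOPS -- the pattern that
# dominates Ceph WAL/journal performance. Drives without power-loss
# protection often score far lower here than in cached benchmarks.
fio --name=ceph-sync-write --filename=/mnt/testssd/fio.test \
    --size=1G --ioengine=libaio --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --group_reporting
```

Comparing this result against the drive's advertised (usually cached, high-queue-depth) IOPS typically explains the read/write asymmetry seen in the thread.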
