ceph bottleneck

  1. Jackobli

PVE-Ceph, Adding multiple disks to an existing pool

Hi there, we have a six-server cluster with existing Ceph pools. Now we need to add more disks to one pool, and I am unsure which scenario needs more time and/or causes more «turbulence». The pool consists of 6 x 2 SAS SSDs (3.2 TB and 6.4 TB). We would add another 6 x 2 SAS SSDs (6.4 TB)...
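On Proxmox VE, new disks are normally added as OSDs one at a time and Ceph rebalances onto them. A minimal sketch of the workflow, assuming a hypothetical device name `/dev/sdg` (check with `lsblk` first):

```shell
# Sketch: adding one new SAS SSD as an OSD on a PVE node.
# /dev/sdg is an example device name, not from the post above.
lsblk                         # identify the new, empty disk
pveceph osd create /dev/sdg   # create the OSD on it (wipes the disk)
ceph osd df tree              # watch the new OSD receive placement groups
ceph -s                       # monitor backfill/rebalance progress
```

To reduce «turbulence» during the rebalance, backfill concurrency can be throttled (e.g. `ceph config set osd osd_max_backfills 1`) at the cost of a longer rebalance.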
  2. M

    ceph tuning

First, a disclaimer: this is a lab, definitely not a reference design. The point was to do weird things, learn how Ceph reacts, and then learn how to get myself out of whatever weird scenario I ended up in. I've spent a few days on the forum; it seems many of the resolutions were people...
  3. D

    Ceph OSD Full

Hello, I am a beginner with Proxmox. My Ceph storage is full. I have 3 nodes with one OSD each, and the smallest (80 GB) is full (more than 90%). Each OSD is created from an LVM volume, and they are HDDs. I know that as long as I have a single OSD that is much fuller than the others, it will be the...
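A common first step in this situation is to inspect per-OSD utilization and let Ceph shift data off the over-full OSD by lowering its weight. A hedged sketch using standard Ceph commands (`osd.2` and `0.85` are example values, not from the post):

```shell
# Sketch: inspecting and rebalancing a nearly full OSD.
ceph osd df                        # per-OSD utilization and variance
ceph health detail                 # shows nearfull/full warnings per OSD
# Let Ceph lower the weight of over-utilized OSDs automatically:
ceph osd reweight-by-utilization
# Or reweight a single OSD by hand (example values):
ceph osd reweight osd.2 0.85
```

Reweighting only redistributes data; with OSDs this unevenly sized, the longer-term fix is usually replacing the small OSD with a larger disk.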
  4. B

    Proxmox ceph low write iops but good read iops. Why??

Hi, we are running a 5-node Proxmox Ceph cluster. Three of the nodes have SSD drives, which make up a pool called ceph-ssd-pool1. The configuration is as follows: Ceph network: 10G. SSD drives: Kingston SEDC500M/1920G (which they market as datacenter-grade SSDs, claiming to...
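To separate drive behaviour from VM-level effects, the pool itself can be benchmarked with `rados bench`, which reports write and read IOPS/throughput directly against Ceph. A sketch, reusing the pool name from the post (the 30-second duration is an arbitrary choice):

```shell
# Sketch: comparing raw write vs. read performance of the SSD pool.
rados bench -p ceph-ssd-pool1 30 write --no-cleanup   # 30 s write test, keep objects
rados bench -p ceph-ssd-pool1 30 seq                  # sequential read test on them
rados -p ceph-ssd-pool1 cleanup                       # remove the benchmark objects
```

Writes in Ceph are acknowledged only after all replicas have committed, so write IOPS lagging well behind read IOPS is expected to some degree; the benchmark helps show whether the gap is network-, replication-, or drive-bound.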
