osds

  1. B

    Slow ops in Ceph.

    Hi, I have 2 errors regarding Ceph. I have ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable) on all nodes. 1- Reduced data availability: 128 pgs inactive, 5 pgs stale: pg 1.8 is stuck stale for 2d, current state stale+active+clean, last acting [1,2,3] pg 1.a is stuck... (see the diagnostic sketch after this list)
  2. jsterr

    Ceph: multiple OSDs per NVMe

    Hello Community, what's the recommended way to create multiple OSDs per NVMe? We want to do some IOPS testing with Gen4 U.2 NVMes and see if it's worth it. "Recommended" in terms of: they should be visible in the GUI after creation. Thanks, Jonas (see the sketch after this list)
  3. T

    Ceph maintenance question

    Hi everyone, a quick and simple set of similar questions: Before restarting a Ceph OSD, should you mark it as "out"? Before restarting a monitor, should you do anything? Before restarting a node, should you do anything? Before restarting a manager or manager node, should you do anything... (see the sketch after this list)
  4. ssaman

    Removing a node from CEPH and the cluster

    Hello everyone, we are currently dismantling our cluster piece by piece. It is a 5-node cluster, and we want to remove 2 of those nodes. So far we have no experience with removing nodes, so we wanted to be on the safe side and ask here first. We would proceed as follows... (see the sketch after this list)
  5. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    Hello Proxmox community, today we noticed our health error with the message: HEALTH_ERR 1 backfillfull osd(s); 1 nearfull osd(s); 1 pool(s) backfillfull; Degraded data redundancy: 99961/8029671 objects degraded (1.245%), 19 pgs degraded, 19 pgs undersized; Degraded data redundancy (low space)... (see the sketch after this list)
  6. K

    Ceph Placement Groups

    Hi everyone, when creating pools, what pg_num do you generally shoot for? We have a 16-node cluster with 48 disks spread across it. I believe we need between 512 and 1024 PGs, but what does everyone else go with? Just looking for general suggestions and past experience. (See the worked example after this list.)
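
For thread 1 (slow ops, inactive and stale PGs), a minimal diagnostic sketch; it assumes an admin keyring on the node it is run from, and the PG id 1.8 is simply taken from the quoted health output.

    # Overall cluster state plus the detailed health report (names the OSDs reporting slow ops)
    ceph -s
    ceph health detail

    # List PGs stuck stale/inactive and check whether their acting OSDs are actually up
    ceph pg dump_stuck stale
    ceph pg dump_stuck inactive
    ceph osd tree

    # Query one affected PG for its full peering state
    ceph pg 1.8 query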
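
For thread 2 (multiple OSDs per NVMe), a sketch using ceph-volume's batch mode; the device path /dev/nvme0n1 and the count of 4 are placeholders. OSDs created this way register in the cluster's OSD tree, which is what the Proxmox GUI lists, but it is worth confirming on a test node that they show up as expected.

    # Dry run: report what would be created, without touching the device
    ceph-volume lvm batch --osds-per-device 4 --report /dev/nvme0n1

    # Actually carve 4 OSDs out of the single NVMe device
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1

    # Confirm the new OSDs are up and placed where you expect
    ceph osd tree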
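
For thread 3 (maintenance), the usual practice is not to mark an OSD out for a short restart, since out triggers rebalancing; setting the noout flag keeps data in place while the daemon is down. Monitors and managers can normally be restarted one at a time as long as quorum holds. A sketch, with OSD id 3 as a placeholder:

    # Before the restart: stop CRUSH from marking down OSDs as out
    ceph osd set noout

    # Restart the OSD daemon (placeholder id)
    systemctl restart ceph-osd@3

    # Once all PGs are active+clean again, clear the flag
    ceph osd unset noout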
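
For thread 4 (shrinking the cluster from 5 to 3 nodes), a rough per-OSD sketch. It assumes the remaining nodes have enough capacity and that the pool size/min_size still fits 3 nodes; OSD id 7 is a placeholder, and pveceph osd destroy is the Proxmox-side wrapper (plain Ceph would use ceph osd purge). Repeat for every OSD on a node before removing its monitor/manager and the node itself.

    # Drain one OSD and wait until ceph -s reports all PGs active+clean again
    ceph osd out 7
    ceph -s

    # Once clean: stop the daemon and remove the OSD, wiping its disk
    systemctl stop ceph-osd@7
    pveceph osd destroy 7 --cleanup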
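
For thread 5 (backfillfull/nearfull), a sketch for inspecting utilisation and, strictly as temporary relief, nudging the ratios so backfill can finish; the ratio values are assumptions (defaults are 0.85 nearfull and 0.90 backfillfull), and the durable fix is adding capacity or rebalancing.

    # See which OSDs are nearfull/backfillfull and how unbalanced usage is
    ceph osd df tree

    # Temporary relief: raise the thresholds slightly so recovery can continue
    ceph osd set-nearfull-ratio 0.87
    ceph osd set-backfillfull-ratio 0.92

    # Shift data away from the fullest OSDs
    ceph osd reweight-by-utilization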
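
For thread 6 (placement groups), the common rule of thumb is roughly 100 PGs per OSD divided by the replica count, rounded to a power of two, and on Quincy the pg_autoscaler can manage this automatically. A worked sketch for 48 OSDs and a size-3 pool; the pool name is a placeholder.

    # Rule of thumb: (48 OSDs * 100) / 3 replicas = 1600 -> nearest powers of two are 1024 and 2048;
    # 1024 keeps the per-OSD PG count moderate when several pools share the cluster.

    # Let the autoscaler pick and adjust pg_num itself
    ceph osd pool autoscale-status
    ceph osd pool set mypool pg_autoscale_mode on

    # Or set it explicitly
    ceph osd pool set mypool pg_num 1024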