New Ceph Cluster: degraded data redundancy 256 pgs undersized

repa

Hi,

we have a new Ceph cluster running on 3 nodes. Each node has 6 x 1.96 TB SSDs plus 2 x SAS disks for the system.

After creating the OSDs and the pool, we get the following warning:

Code:
degraded data redundancy 256 pgs undersized

We created a Ceph pool with 256 PGs.
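In case the pool settings matter: this is roughly how we check the replication and PG settings (the pool name below is just a placeholder for ours):

Bash:
# Replica count (size) and minimum replicas (min_size) of the pool
ceph osd pool get <poolname> size
ceph osd pool get <poolname> min_size

# PG count and other settings for all pools
ceph osd pool ls detail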

ceph osd df tree:

Bash:
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP META  AVAIL   %USE VAR  PGS STATUS TYPE NAME
-1       9.60342        - 9.6 TiB 899 GiB 6.2 MiB  0 B 5 GiB 8.7 TiB 9.14 1.00   -        root default
-5       7.68274        - 7.7 TiB 719 GiB 4.9 MiB  0 B 4 GiB 7.0 TiB 9.14 1.00   -            host server01
 1   ssd 1.92068  1.00000 1.9 TiB 180 GiB 1.2 MiB  0 B 1 GiB 1.7 TiB 9.14 1.00  53     up         osd.1
 2   ssd 1.92068  1.00000 1.9 TiB 180 GiB 1.2 MiB  0 B 1 GiB 1.7 TiB 9.14 1.00  64     up         osd.2
 3   ssd 1.92068  1.00000 1.9 TiB 180 GiB 1.2 MiB  0 B 1 GiB 1.7 TiB 9.14 1.00  70     up         osd.3
 4   ssd 1.92068  1.00000 1.9 TiB 180 GiB 1.2 MiB  0 B 1 GiB 1.7 TiB 9.14 1.00  69     up         osd.4
-3       1.92068        - 1.9 TiB 180 GiB 1.2 MiB  0 B 1 GiB 1.7 TiB 9.14 1.00   -            host server02
 0   ssd 1.92068  1.00000 1.9 TiB 180 GiB 1.2 MiB  0 B 1 GiB 1.7 TiB 9.14 1.00 256     up         osd.0
                    TOTAL 9.6 TiB 899 GiB 6.2 MiB  0 B 5 GiB 8.7 TiB 9.14
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
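For completeness, the full context of the warning can also be read from the cluster status and health detail, along these lines:

Bash:
# Overall cluster state, including degraded/undersized PG counts
ceph -s

# Per-PG explanation of the HEALTH_WARN
ceph health detail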

What is the best practice for our configuration?

Thanks
 
3 nodes with 6 OSDs per node = 18 OSDs in total.
Now count how many OSDs you see in your tree output.
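Only 5 OSDs on 2 hosts show up in the tree, so with the default replicated rule (size 3, failure domain = host) every PG is missing a copy on a third host and stays undersized. A rough sketch of how the missing OSDs could be created on each node with Proxmox's pveceph (the device path is a placeholder, adjust it to your disks):

Bash:
# On each node, check which OSDs exist and which SSDs are still unused
ceph osd tree
lsblk

# Create an OSD on every unused SSD (run per node, per disk; /dev/sdX is a placeholder)
pveceph osd create /dev/sdX

# Once all 18 OSDs are up and in, the PGs should get their third replica and the warning clears
ceph -s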