Ceph OSD with 10GB

Guilherme Filippo

New Member
Hello guys

I'm trying to set up Proxmox 5.4 and Ceph on my lab servers.
I have 2 x HP ML110 G6, each with 2 x 480GB SSD drives.

I created a 48GB ZFS RAID1 partition to install Proxmox and a 400GB partition on each SSD for Ceph.
So I have 2 x 400GB OSDs in each server.
But when the OSDs were created, they show up with a size of only 10GB each, and I don't understand why.
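I don't have the exact commands in my shell history any more, but the OSDs were created on those 400GB partitions with something along these lines (assuming the standard Proxmox 5.x tooling; I may be misremembering the exact syntax):
Code:
# partition layout on each node: 48GB for ZFS/Proxmox plus a 400GB Ceph partition per SSD
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# one OSD per 400GB partition, run on each node
pveceph createosd /dev/sda4
pveceph createosd /dev/sdb4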

Node lab1:
Code:
/dev/sda4         399G  9.3G  390G   3% /var/lib/ceph/osd/ceph-0
/dev/sdb4         399G  9.3G  390G   3% /var/lib/ceph/osd/ceph-1

Node lab2:
Code:
/dev/sda4         399G  9.3G  390G   3% /var/lib/ceph/osd/ceph-0
/dev/sdb4         399G  9.3G  390G   3% /var/lib/ceph/osd/ceph-1
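The mounts clearly show 399G, so if it helps, this is how I can check what size Ceph itself has registered for each OSD (just the commands for now, I can post the output if needed):
Code:
# per-OSD size and usage as Ceph sees it, not the filesystem view above
ceph osd df tree

# CRUSH weights, which are derived from the OSD size at creation time
ceph osd tree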

I'm getting the message "osd.3 full" after installing just 2 test VMs.
How can I resize the OSDs so they use the full partition size?
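Is the right way to fix this to throw the OSDs away and recreate them, roughly like below (just a sketch, I haven't run it yet), or can an existing OSD be grown in place?
Code:
# on the node hosting osd.3 -- sketch only, not executed yet
ceph osd out 3
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it
# ...then recreate the OSD (or the pveceph equivalent) so it uses the whole 400GB partition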

Thanks.
 
Code:
root@lab1:~# ceph -s
  cluster:
    id:     c744123b-ff13-4dfb-a678-7a7fa9c9d261
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum lab1,lab2
    mgr: lab2(active), standbys: lab1
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   1 pools, 128 pgs
    objects: 2.08k objects, 8.09GiB
    usage:   20.1GiB used, 19.9GiB / 40GiB avail
    pgs:     128 active+clean

root@lab1:~# ceph osd dump
epoch 49
fsid c744123b-ff13-4dfb-a678-7a7fa9c9d261
created 2019-07-07 18:53:10.634073
modified 2019-07-09 16:11:59.489562
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 9
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release luminous
pool 2 'storage' replicated size 2 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 46 flags hashpspool stripe_width 0 application rbd
    removed_snaps [1~3]
max_osd 4
osd.0 up   in  weight 1 up_from 5 up_thru 40 down_at 0 last_clean_interval [0,0) 10.99.99.2:6800/59322 10.99.99.2:6801/59322 10.99.99.2:6802/59322 10.99.99.2:6803/59322 exists,up 32844129-514d-481c-bad1-80c29a4c4d3c
osd.1 up   in  weight 1 up_from 9 up_thru 40 down_at 0 last_clean_interval [0,0) 10.99.99.2:6804/135064 10.99.99.2:6805/135064 10.99.99.2:6806/135064 10.99.99.2:6807/135064 exists,up 1d089dcc-1ac6-4dbc-8870-71188359b8f4
osd.2 up   in  weight 1 up_from 13 up_thru 40 down_at 0 last_clean_interval [0,0) 10.99.99.1:6800/7683 10.99.99.1:6801/7683 10.99.99.1:6802/7683 10.99.99.1:6803/7683 exists,up 605584f3-b86d-4e0a-9914-0ca6d3953e7e
osd.3 up   in  weight 1 up_from 17 up_thru 40 down_at 0 last_clean_interval [0,0) 10.99.99.1:6804/58114 10.99.99.1:6805/58114 10.99.99.1:6806/58114 10.99.99.1:6807/58114 exists,up 209ac92b-123c-4fe8-8bd0-85d926ced83a
root@lab1:~# ceph df
GLOBAL:
    SIZE      AVAIL       RAW USED     %RAW USED
    40GiB     19.9GiB      20.1GiB         50.28
POOLS:
    NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
    storage     2      8.09GiB     51.95       7.48GiB        2082
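One more thing I can check, in case the 10GB comes from BlueStore itself rather than from the partition: as far as I understand, if the BlueStore "block" ends up as a plain file inside the OSD data directory instead of a raw partition, it gets created with the default bluestore_block_size of 10GiB, which would match exactly what I'm seeing. Something like this should show it (assuming BlueStore OSDs, which is the default here):
Code:
# check whether "block" is a symlink to the partition or just a 10GB file
ls -lh /var/lib/ceph/osd/ceph-0/block

# current bluestore_block_size on the running OSD (run on the node hosting osd.0)
ceph daemon osd.0 config get bluestore_block_size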