Proxmox Ceph OSD Partition Created With Only 10GB

sysvar

How do you define the Ceph OSD disk partition size?
The OSD is always created with only 10 GB of usable space.
  • Disk size = 3.9 TB
  • Partition size = 3.7 TB
  • Using *ceph-disk prepare* and *ceph-disk activate* (See below)
  • OSD created, but with only 10 GB instead of 3.7 TB (see the size check below)
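
For reference, this is the kind of generic check that lists the size each OSD reports (not specific to this box; output omitted here):
Code:
    ceph osd df tree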

Commands Used
Code:
    root@proxmox:~# ceph-disk prepare --cluster ceph --cluster-uuid fea02667-f17d-44fd-a4c2-a8e19d05ed51 --fs-type xfs /dev/sda4

    meta-data=/dev/sda4              isize=2048   agcount=4, agsize=249036799 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
    data     =                       bsize=4096   blocks=996147194, imaxpct=5
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal log           bsize=4096   blocks=486399, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
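
After the prepare step, ceph-disk can also report how it classified the partition; a generic check, output will vary:
Code:
    ceph-disk list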


Code:
    root@proxmox:~# ceph-disk activate /dev/sda4

    creating /var/lib/ceph/tmp/mnt.jyqJTM/keyring
    added entity osd.0 auth auth(auid = 18446744073709551615 key=AQAohgpdjwb3NRAAIrINUiXDWQ5iMWp4Ueah3Q== with 0 caps)
    got monmap epoch 3
    2019-06-19 19:59:54.006226 7f966e628e00 -1 bluestore(/var/lib/ceph/tmp/mnt.jyqJTM/block) _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.jyqJTM/block: (2) No such file or directory
    2019-06-19 19:59:54.006285 7f966e628e00 -1 bluestore(/var/lib/ceph/tmp/mnt.jyqJTM/block) _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.jyqJTM/block: (2) No such file or directory
    2019-06-19 19:59:55.668619 7f966e628e00 -1 created object store /var/lib/ceph/tmp/mnt.jyqJTM for osd.0 fsid fea02667-f17d-44fd-a4c2-a8e19d05ed51
    Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.

    # Don't worry about my keys/IDs, it's just a dev environment.
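
To see what osd.0's block device actually ended up as, and how large BlueStore thinks it is, something like the following should work (the /var/lib/ceph/osd/ceph-0 path is an assumption based on the osd.0 ID in the log above):
Code:
    ls -lh /var/lib/ceph/osd/ceph-0/
    ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0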


Disk Layout
Code:
    root@proxmox:~# fdisk -l
    Disk /dev/sda: 3.9 TiB, 4294967296000 bytes, 8388608000 sectors
    OMITTED
   
    Device         Start        End    Sectors   Size Type
    /dev/sda1         34       2047       2014  1007K BIOS boot
    /dev/sda2       2048    1050623    1048576   512M EFI System
    /dev/sda3    1050624  419430400  418379777 199.5G Linux LVM
    /dev/sda4  419430408 8388607966 7969177559   3.7T Ceph OSD
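
To rule out the partition table itself, the partition's size, filesystem and GPT type GUID can also be double-checked with lsblk (a generic check):
Code:
    lsblk -o NAME,SIZE,FSTYPE,PARTTYPE /dev/sda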



Ceph OSD Disk Size Incorrect (10 GB, not 3.7 TB)
Code:
    root@proxmox:~# ceph status
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0B
        usage:   1.00GiB used, 9.00GiB / 10GiB avail
        pgs:
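
Could the 10 GiB be coming from the bluestore_block_size default (10 GiB), i.e. the block device ended up as a file inside the XFS data partition rather than on the raw partition? One way to check that setting (osd.0 assumed from the log above; run on the node hosting osd.0):
Code:
    ceph daemon osd.0 config get bluestore_block_size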


--------------------------------------------------------------------------------------------------------------------------------------

Full Install Details
If you want details on the Proxmox install and on creating a Ceph OSD on a partition, read on...

Setup
  • Disk Size: 2TB NVMe (/dev/sda)
  • Operating system (Proxmox) installed on 200 GB; the rest of the disk (1800 GB) is left empty.
  • Once booted and in the web interface, create a cluster and join two hosts so the quorum status is green (see the check after this list).
  • Now run the script below.
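
Quorum can be confirmed from the CLI on any node before continuing (standard Proxmox command):
Code:
    pvecm status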

Config Script
Code:
    # Install Ceph
    pveceph install
   
    # Configure the network (run on the primary Proxmox server only; use your LAN network)
    pveceph init --network 192.168.6.0/24
   
    # Create Monitor
    pveceph createmon
   
    # View Disks Before
    sgdisk --print /dev/sda
   
    sgdisk --largest-new=4 --change-name="4:CephOSD" \
    --partition-guid=4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d \
    --typecode=4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sda
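    # (--largest-new=4 creates partition 4 from the largest free region; the typecode above is the
    #  Ceph OSD GPT partition type GUID, and --partition-guid sets the partition's unique GUID)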
     
    # View Disks After (Compare)
    sgdisk --print /dev/sda
   
    # Reboot for the changes to take effect
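    # (partprobe /dev/sda may also pick up the new partition table without a full reboot; untested here)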
    reboot
   
    # Note your cluster ID (fsid) at this point from the web interface:
    # Datacenter > Server > Ceph
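    # (ceph fsid on the CLI should print the same ID)
    ceph fsid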

    # Prepare the Ceph OSD partition; replace cluster-uuid with the fsid noted above
    ceph-disk prepare --cluster ceph --cluster-uuid fea02667-f17d-44fd-a4c2-a8e19d05ed51 --fs-type xfs /dev/sda4

    # Activate the Ceph OSD partition
    ceph-disk activate /dev/sda4

    # Check Ceph OSD Disk Size
    ceph status

Warnings
I have read posts strongly recommending whole disks instead of partitions because of performance issues. I understand the warnings, but in my case I'm using NVMe SSD storage and accept the risks.