[SOLVED] ceph OSD creation using a partition

kenyoukenme

New Member
Aug 27, 2019
Hi all,
I need help creating an OSD on a partition.

On our server we are provided with two NVMe drives in RAID-1. This is the partition layout:

Code:
root@XXXXXXXX:~# lsblk
NAME           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
nvme1n1        259:0    0  1.8T  0 disk
├─nvme1n1p1    259:1    0  511M  0 part
├─nvme1n1p2    259:2    0 19.5G  0 part
│ └─md2          9:2    0 19.5G  0 raid1 /
├─nvme1n1p3    259:3    0   10G  0 part  [SWAP]
├─nvme1n1p4    259:4    0  1.6T  0 part
│ └─md4          9:4    0  1.6T  0 raid1 /mnt/vol
└─nvme1n1p5    259:5    0 97.7G  0 part
  └─md5          9:5    0 97.7G  0 raid1
    └─pve-data 253:0    0 93.7G  0 lvm   /var/lib/vz
nvme0n1        259:6    0  1.8T  0 disk
├─nvme0n1p1    259:7    0  511M  0 part  /boot/efi
├─nvme0n1p2    259:8    0 19.5G  0 part
│ └─md2          9:2    0 19.5G  0 raid1 /
├─nvme0n1p3    259:9    0   10G  0 part  [SWAP]
├─nvme0n1p4    259:10   0  1.6T  0 part
│ └─md4          9:4    0  1.6T  0 raid1 /mnt/vol
└─nvme0n1p5    259:11   0 97.7G  0 part
  └─md5          9:5    0 97.7G  0 raid1
    └─pve-data 253:0    0 93.7G  0 lvm   /var/lib/vz

I want to create an OSD on nvme0n1p4 (or md4). It is a primary partition.

I would appreciate any help, guys.
Thanks :)


***UPDATE***

It turns out the RAID is software RAID, so I was able to remove the array for that partition using mdadm commands, and I followed a couple of tutorials as a guide to prep the partition for OSD creation. You could also use cfdisk and mkfs, or the ceph-disk commands, to prep the partition. However, you still won't be able to create the OSD via the GUI, so OSD creation must be done via the CLI.

I'll post the documentation for the process once I've fixed some kinks.

C. Preparing a disk for a Ceph OSD (OVH dedicated server):
1. Identify the partition intended for the OSDs:
a. Check the partitions by running one of the following (note the partition device name, the RAID array name (mdX), and the mount point):
lsblk
fdisk -l
df -h

2. Remove the RAID array:
a. First check the list of RAID arrays with “cat /proc/mdstat” or “cat /etc/mdadm/mdadm.conf” (Debian)
b. Check the details of the selected array with “mdadm --detail /dev/mdX”
c. Remove and delete the array by running the following commands; zero the superblock on the member partition of both disks:
umount /dev/mdX
mdadm --stop /dev/mdX
mdadm --remove /dev/mdX
mdadm --zero-superblock /dev/nvmeXnXpX
lsblk --fs (check that the md superblock is gone)

d. MAKE SURE THE CONFIG FILES ARE UPDATED. Comment out any line referencing mdX in /etc/fstab and /etc/mdadm/mdadm.conf, then run “update-initramfs -u”
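Put together, step 2 looks roughly like this sketch. The device names (/dev/md4 and its two NVMe member partitions) come from the lsblk output above; substitute your own, and double-check you are not touching the array that holds /.

```shell
# Sketch of step 2: tearing down a soft-RAID array so its members can
# be reused. Device names are from the lsblk output above -- adjust them.
MD=/dev/md4
MEMBERS="/dev/nvme0n1p4 /dev/nvme1n1p4"

if [ -b "$MD" ]; then
    umount "$MD" 2>/dev/null || true      # ignore error if not mounted
    mdadm --stop "$MD"                    # stop the running array
    mdadm --remove "$MD"                  # remove it from the kernel
    for p in $MEMBERS; do
        mdadm --zero-superblock "$p"      # wipe md metadata on each member
    done
    lsblk --fs                            # verify the raid1 entry is gone
else
    echo "$MD not present on this host"
fi
```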

3. Change the partition type and create a filesystem:
a. Use cfdisk to change the partition type:
cfdisk /dev/nvmeXnX
Select the partition
Select Type and choose “Ceph OSD” from the list
Select Write, then Quit
b. Create an XFS filesystem on the partition with “mkfs.xfs /dev/nvmeXnXpX”
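If you would rather script step 3 than click through cfdisk, sgdisk can set the partition type non-interactively. This is a sketch, assuming a GPT disk and the gdisk package installed; the GUID below is the standard “Ceph OSD” partition type code.

```shell
# Non-interactive variant of step 3 using sgdisk (assumes GPT disk).
DISK=/dev/nvme0n1
NUM=4                                               # partition number to retype
CEPH_OSD_TYPE=4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D  # "Ceph OSD" type GUID

if [ -b "${DISK}p${NUM}" ]; then
    sgdisk -t "${NUM}:${CEPH_OSD_TYPE}" "$DISK"     # set the partition type
    mkfs.xfs -f "${DISK}p${NUM}"                    # create the XFS filesystem
else
    echo "${DISK}p${NUM} not present on this host"
fi
```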

4. Create the OSD

Code:
umount /dev/nvme0n1p4
umount /dev/nvme1n1p4
ceph-disk prepare /dev/nvme0n1p4
ceph-disk prepare /dev/nvme1n1p4
ceph-disk activate /dev/nvme0n1p4
ceph-disk activate /dev/nvme1n1p4


BUT I encountered the same problem as described in this thread. (Worth noting: ceph-disk was later deprecated in favour of ceph-volume, which is what finally worked for me.)


********PROBLEM SOLVED*********


I settled on using ceph-volume, and this setup also works for a disk/partition that is part of a RAID array. In my setup I ran these commands:

Code:
pvcreate /dev/md4
pvdisplay
vgcreate cephvg /dev/md4
vgdisplay
lvcreate --name cephlv --size 1454G cephvg
ceph-volume lvm prepare --data /dev/cephvg/cephlv
ceph-volume lvm activate --all
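A couple of read-only checks afterwards confirm the OSD actually came up (a sketch; both commands are standard ceph tooling):

```shell
# Verify the new OSD after ceph-volume prepare/activate (read-only).
CEPH_BIN=$(command -v ceph-volume || true)
if [ -n "$CEPH_BIN" ]; then
    ceph-volume lvm list    # shows the LV-to-OSD-id mapping
    ceph osd tree           # the new OSD should show as up/in
else
    echo "ceph tooling not installed on this host"
fi
```

Note that instead of a fixed "--size 1454G" (which leaves some headroom in the volume group), "lvcreate -l 100%FREE --name cephlv cephvg" would use all remaining space.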


Hope this helps others with the same issue.
 
Do you want to use Ceph on a single server?

Also be aware that we don't support MD Raid.
 

We're going to start with 3 servers and scale up in the future.

I tried making different partition types (logical, primary, LV) with different filesystems (ext4, xfs), but I still can't find the partitions when creating an OSD. I tried creating an OSD via the CLI, but it gives me this error:

ceph-disk: Error: Device /dev/ is in use by a device-mapper mapping (dm-crypt?): md0
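That ceph-disk error usually means the kernel still sees the partition as busy because it belongs to an md array (or a device-mapper mapping). A read-only way to check who is holding a partition; the names below match the layout in this thread, so adjust them to yours:

```shell
# List the "holders" of a partition: if an mdX device shows up here, the
# partition is still an active RAID member and ceph-disk will refuse it.
PART=nvme0n1p4
if [ -d "/sys/block/nvme0n1/$PART/holders" ]; then
    ls "/sys/block/nvme0n1/$PART/holders"   # e.g. "md4" while still in the array
else
    echo "$PART not present on this host"
fi
cat /proc/mdstat 2>/dev/null || true        # active arrays, if any
```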
 
