Ceph: Device usage LVM

Marius Matei

Member
Jun 23, 2014
Bucharest, Romania
Hello everyone,

I'm trying to set up Ceph storage on my 4-node cluster.
The problem is I cannot use the disks.
In the GUI they are marked as in use for LVM.
In the CLI it just outputs this:
Code:
root@pm02:~# pveceph createosd /dev/sdk -journal_dev /dev/sdj
device '/dev/sdk' is in use
root@pm02:~#
I've tried both MBR and GPT partition tables.
I also tried zapping the disks and the dd solution described in the wiki.

At first I thought this was because of multipath (I also have MPIO iSCSI).
I also tried plugging in the disks after a reboot with the multipath tools turned off.
The disks are not mounted in any way:

Code:
root@pm02:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=6179463,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=4945336k,mode=755)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=9890660k)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,data=ordered)
/dev/sda2 on /boot type ext3 (rw,relatime,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
none on /sys/kernel/config type configfs (rw,relatime)
10.100.1.252:/mnt/vol00/nfspm on /mnt/pve/nfspm type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.100.1.252,mountvers=3,mountport=695,mountproto=udp,local_lock=none,addr=10.100.1.252)
Code:
root@pm02:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_sata2-01" using metadata type lvm2
  Found volume group "vg_sata2-03" using metadata type lvm2
  Found volume group "vg_sata2-02" using metadata type lvm2
  Found volume group "vg_sata2-00" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2
Code:
root@pm02:~# lvscan
  inactive          '/dev/vg_sata2-01/vm-101-disk-1' [927.00 GiB] inherit
  inactive          '/dev/vg_sata2-02/vm-102-disk-1' [927.00 GiB] inherit
  inactive          '/dev/vg_sata2-00/vm-100-disk-1' [10.00 GiB] inherit
  ACTIVE            '/dev/pve/swap' [13.88 GiB] inherit
  ACTIVE            '/dev/pve/root' [27.75 GiB] inherit
  ACTIVE            '/dev/pve/data' [55.79 GiB] inherit
root@pm02:~#

Any ideas?


(Attachment: pm.PNG)

Marius Matei

Member
Jun 23, 2014
Bucharest, Romania
Hello again,

Just tried this and it worked:

Code:
root@pm02:~# aptitude purge multipath-tools
The following packages will be REMOVED:
  kpartx{u} multipath-tools{p}
0 packages upgraded, 0 newly installed, 2 to remove and 9 not upgraded.
Need to get 0 B of archives. After unpacking 745 kB will be freed.
Do you want to continue? [Y/n/?] y
(Reading database ... 36220 files and directories currently installed.)
Removing multipath-tools ...
Stopping multipath daemon: multipathd.
Purging configuration files for multipath-tools ...
Processing triggers for man-db ...
(Reading database ... 36178 files and directories currently installed.)
Removing kpartx ...
Processing triggers for man-db ...

Code:
root@pm02:~# apt-cache search multipath | grep kpartx
kpartx - create device mappings for partitions
root@pm02:~#

So it seems kpartx was the issue, because it creates entries in /dev/mapper/.

So no Ceph and iSCSI MPIO on the same node?
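For anyone who wants to confirm this on their own node before purging packages: each device-mapper node claiming a disk shows up under that disk's holders/ directory in sysfs, so you can see directly what is marking it "in use". A quick sketch (sdk is just the example disk from my output above; substitute your own device name):

```shell
#!/bin/sh
# Check which device-mapper nodes, if any, are holding a disk.
# A dm-* entry here typically means kpartx/multipath has mapped it.
dev=sdk
holders=$(ls "/sys/block/$dev/holders" 2>/dev/null)
if [ -n "$holders" ]; then
    echo "held by: $holders"
else
    echo "no holders (or no such device)"
fi
```

If this prints a dm-* holder for a disk you want as an OSD, that mapping is what pveceph is tripping over.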

markmarkmia

New Member
Feb 5, 2018
I came across this old post while searching for answers to the same issue myself, so in case someone else finds it with the same problem, I thought I'd add the solution (though I'm sure it's too late to help the OP, lol).

It turns out that you need to blacklist the WWIDs of your Ceph OSDs, because they are being picked up by DM-multipath, which marks them as 'in use'.

Code:
# multipath -l

(Copy the WWIDs listed next to your OSD devices. Don't copy any of your iSCSI devices, obviously, since you don't want to blacklist those.)

Code:
# vi /etc/multipath.conf

Add a blacklist section, where each ########## is the WWID of one of your OSDs:

Code:
blacklist {
    wwid ##########
    wwid ##########
    ...
}

Code:
# service multipathd restart

Code:
# pveceph createosd /dev/xxxxx

(and it'll work)
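As an alternative to maintaining the WWID list by hand, newer versions of multipath-tools also have a find_multipaths setting, which tells multipathd to only claim devices that actually show up on more than one path, so local single-path OSD disks get left alone. A sketch for multipath.conf, assuming your multipath-tools version supports the option (check man multipath.conf on your node first):

```
defaults {
    find_multipaths yes
}
```

Your iSCSI LUNs would still get multipathed, since they appear on multiple paths.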
 
