PVE7 pveceph create osd, always "device is already in use"

StefanSa

New Member
Feb 13, 2022
Hi there,
I have an FC-HBA with multipath here, and I cannot manage to create an OSD on these devices.
I always get the error message "device is already in use". I have already deleted the partition with sgdisk and fdisk, but no luck.

Any ideas?
Thanks for any help.
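
For reference, the wipe the poster describes would typically look something like the sketch below (using /dev/sdb from the output further down; sgdisk, wipefs and ceph-volume are standard tools here, and the whole sequence is destructive, so the device name needs to be double-checked):

Code:
# zap GPT/MBR structures on the disk
sgdisk --zap-all /dev/sdb
# remove any remaining filesystem/LVM signatures
wipefs -a /dev/sdb
# let Ceph clean up leftover LVM state from earlier OSD attempts, if any
ceph-volume lvm zap /dev/sdb --destroy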

Code:
root@pve01:~# multipath -ll
mpatha (3600d023100072719000000005340993a) dm-10 IFT,S16F-R1840-4
size=2.4T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:1:0 sde 8:64 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 1:0:0:0 sdb 8:16 active ready running

Code:
root@pve01:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                            8:0    0   1.1T  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part  /boot/efi
└─sda3                         8:3    0   1.1T  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm   [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm   /
  ├─pve-data_tmeta           253:2    0    10G  0 lvm
  │ └─pve-data-tpool         253:4    0 976.3G  0 lvm
  │   ├─pve-data             253:5    0 976.3G  1 lvm
  │   ├─pve-vm--100--disk--0 253:6    0     8G  0 lvm
  │   ├─pve-vm--101--disk--0 253:7    0    40G  0 lvm
  │   ├─pve-vm--101--disk--1 253:8    0     4M  0 lvm
  │   └─pve-vm--101--disk--2 253:9    0     4M  0 lvm
  └─pve-data_tdata           253:3    0 976.3G  0 lvm
    └─pve-data-tpool         253:4    0 976.3G  0 lvm
      ├─pve-data             253:5    0 976.3G  1 lvm
      ├─pve-vm--100--disk--0 253:6    0     8G  0 lvm
      ├─pve-vm--101--disk--0 253:7    0    40G  0 lvm
      ├─pve-vm--101--disk--1 253:8    0     4M  0 lvm
      └─pve-vm--101--disk--2 253:9    0     4M  0 lvm
sdb                            8:16   0   2.4T  0 disk
└─mpatha                     253:10   0   2.4T  0 mpath
sdc                            8:32   0   2.4T  0 disk
└─sdc1                         8:33   0   2.4T  0 part
sdd                            8:48   0   2.4T  0 disk
└─sdd1                         8:49   0   2.4T  0 part
sde                            8:64   0   2.4T  0 disk
└─mpatha                     253:10   0   2.4T  0 mpath
sdf                            8:80   0   2.4T  0 disk
└─sdf1                         8:81   0   2.4T  0 part
sdg                            8:96   0   2.4T  0 disk
└─sdg1                         8:97   0   2.4T  0 part

Code:
pveceph osd create /dev/sdb
device '/dev/sdb' is already in use
 
Well, have you tried to create the OSD on the multipath device (/dev/mapper/mpatha)? sdb and sde are already in use because the mpatha device sits on top of them.
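
A minimal sketch of that, assuming this PVE/Ceph version accepts a device-mapper path (the multipath map is exposed under /dev/mapper/, not directly as /dev/mpatha):

Code:
# point pveceph at the multipath map instead of one of its underlying paths (sdb/sde)
pveceph osd create /dev/mapper/mpatha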

But on another note: is that FC storage used by all your cluster nodes? If so, you kind of defeat the purpose of Ceph and end up with another central component in the cluster again, plus additional latency from the nodes to the FC storage.
 
