No such block device - Proxmox 3.2 and Ceph configuration

haemi

New Member
Apr 1, 2014
Hello

I have successfully installed Proxmox 3.2 on a 3-node HP DL380 G5 cluster. I have set up the Ceph installation and monitors on each node. Now I want to create OSDs, but I always receive this error message:

no such block device '/dev/cciss!c0d5' (500)

The servers have a Smart Array 400 RAID controller, and I have configured each disk as a separate RAID0 volume, but it does not work.

Thanks for your help
Regards
 
Do you run the 3.10 kernel? If yes, try 2.6.32.
 
Same situation here. I want to create a new OSD and select a hard drive (/dev/cciss!c0p2) and a journal (/dev/cciss!c0d1), and finally the following error message pops up:

[Screenshot: pveceph_createosd.jpeg]

What is going wrong?


HW: HP ProLiant DL385 G2
OS: Proxmox 3.2

proxmox-ve-2.6.32: 3.2-124 (running kernel: 2.6.32-28-pve)
pve-manager: 3.2-2 (running version: 3.2-2/82599a65)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-28-pve: 2.6.32-124
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-6
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
 
After a couple of tests I found an unclear situation in /usr/share/perl5/PVE/CephTools.pm at line 240:

Code:
dir_glob_foreach('/sys/block', '.*', sub {

A check in /sys/block shows no information about the future /dev/xxx OSD device.

So it is very clear that pveceph createosd /dev/xxx isn't working, because

Code:
my $diskinfo = $disklist->{$devname};
die "unable to get device info for '$devname'\n"

in /usr/share/perl5/PVE/API2/Ceph.pm (line 207) gets an empty $disklist.

At this point I have no idea what the problem could be. This is a task for a PVE Perl specialist.
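For illustration, here is a minimal standalone Perl check (a sketch, not PVE code; the device name is just an example). It shows the naming mismatch: the kernel lists block devices in /sys/block with '/' replaced by '!', so a lookup keyed on 'cciss/c0d5' finds nothing:

Code:
#!/usr/bin/perl
# Minimal sketch, not PVE code: sysfs replaces '/' in device names with '!',
# so 'cciss/c0d5' shows up in /sys/block as 'cciss!c0d5' and a plain hash
# lookup on the /dev-style name fails.
use strict;
use warnings;

opendir(my $dh, '/sys/block') or die "cannot open /sys/block: $!\n";
my %disklist = map { $_ => 1 } grep { !/^\./ } readdir($dh);
closedir($dh);

my $devname = 'cciss/c0d5';    # example name, as passed to pveceph createosd
if (exists $disklist{$devname}) {
    print "found '$devname' in /sys/block\n";
} else {
    print "no entry '$devname' - sysfs lists it as 'cciss!c0d5' instead\n";
}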
 

Same problem here on a DL380 G5. I'm waiting for a solution or will switch back to DRBD :(

Anyway, any news elsewhere?
 
The CLI command also fails:
Code:
root@node01-proxmox:~# pveceph createosd /dev/cciss/c0d1
unable to get device info for 'cciss/c0d1'
root@node01-proxmox:~#
 
If it can help debug, the zap command seems to work:
Code:
root@node02-proxmox:~# ceph-disk zap /dev/cciss/c0d1
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
root@node02-proxmox:~#
 

Hi HuHu,

thanks a lot, you have put me on the right path!

This is what worked for me:

In /usr/share/perl5/PVE/API2/Ceph.pm I added a new line after line 205:

Code:
205: $devname =~ s|/dev/||;
206: $devname =~ s|cciss/|cciss!|;   # <-- new line

because in this case the device name must be "cciss!c0dx", not "cciss/c0dx".

After modifying this file I was able to create OSDs from the command line (but not from the GUI).
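A slightly more general variant of the same idea (an untested sketch, not part of the actual patch): instead of hard-coding the cciss prefix, translate every '/' in the device name to '!', which is how sysfs spells such names in general:

Code:
# untested sketch: generic translation of a /dev path to its /sys/block
# spelling; sysfs replaces '/' inside device names with '!'
$devname =~ s|^/dev/||;
$devname =~ tr|/|!|;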

Best regards
 
Had the same issue today with PVE 3.3, and this fixed it for me too on HP P400 RAID controllers.
Maybe instead of hardcoding the limitation, add a configurable table of device-name patterns with the defaults known so far, and document the possible issues with various hardware :D
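As a rough illustration of that idea (purely a sketch, not existing PVE code; the table entries are only examples), such a mapping could look like this:

Code:
# sketch only: map /dev names to their /sys/block spelling via a table of
# known controller-specific patterns instead of a single hard-coded rule
my @devname_patterns = (
    [ qr|^cciss/| => sub { (my $n = shift) =~ s|/|!|g; return $n } ],  # HP Smart Array (cciss driver)
    [ qr|^ida/|   => sub { (my $n = shift) =~ s|/|!|g; return $n } ],  # old Compaq arrays (cpqarray)
);

sub to_sysfs_name {
    my ($devname) = @_;
    foreach my $entry (@devname_patterns) {
        my ($re, $translate) = @$entry;
        return $translate->($devname) if $devname =~ $re;
    }
    return $devname;    # default: the name already matches /sys/block
}

Calling to_sysfs_name('cciss/c0d5') would then return 'cciss!c0d5', while unknown names pass through unchanged.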
 
Same problem here.
I'm trying to use iSCSI disks as OSDs.
The path looks like /dev/iscsi-vms/ceph-disk, which is symlinked to /dev/dm-15.
Code:
pveceph createosd /dev/iscsi-vms/ceph-disk -journal_dev /dev/iscsi-vms/ceph-diskj
fails with the same error:
Code:
unable to get device info for 'dm-15'
Code:
perl -e 'use PVE::CephTools;use Data::Dumper;my $disklist = PVE::CephTools::list_disks(); print Dumper($disklist);'
shows all local disks (/dev/sd*), but not the iSCSI ones (/dev/dm*), so in
Code:
my $diskinfo = $disklist->{$devname};
die "unable to get device info for '$devname'\n"
the pveceph script fails because $diskinfo is empty.
Is there a workaround for this situation, or should I manually create the OSDs via ceph-disk prepare and ceph-disk activate?
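As a side note on where the 'dm-15' in the error comes from: the given path is only a symlink, and resolving it yields the device-mapper node, which is presumably why the error message shows the kernel name instead of the path. A small standalone sketch (example path from this post, not PVE code):

Code:
#!/usr/bin/perl
# sketch: resolve a /dev symlink to the underlying kernel device name,
# which is the name that appears in the "unable to get device info" error
use strict;
use warnings;
use Cwd qw(abs_path);
use File::Basename qw(basename);

my $path = shift // '/dev/iscsi-vms/ceph-disk';   # example path from this post
my $real = abs_path($path) or die "cannot resolve '$path'\n";
print "$path resolves to $real (kernel name: ", basename($real), ")\n";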
 
pveceph is designed to work with local disks, and I guess it makes no sense to run that over iSCSI.
 
Dietmar, these are definitely NOT decisions that belong to Proxmox development, but to the users. There are many use cases which require something other than raw, direct SATA/SAS devices! Ours included.

We are planning to build a sizeable cluster, and if we cannot use partitions it would mean wasting *half* of all drive bays just for the Proxmox OS! Not very sensible.
We use Dell cloud nodes in a 3-node/2U configuration. That is 4 disks per node with 2 CPUs, 12 disks total in the chassis. There is no way to mount 2x SSD inside the chassis, and USB sticks are not reliable.
This means we build a RAID10 array from 4x 80 GB partitions on the drives, and all of the remaining space we want to use for Ceph consists of partitions.
We will add discrete storage nodes too, but at least at the start we need to utilize what we already have.
We were planning to start with 30 nodes ... that would be 120x 3 TB SATA drives.
Wasting half of the drive bays on RAID1 just for the OS and perhaps journaling makes absolutely no financial sense; every single drive bay and SATA/SAS port is very, very precious.

Now I'm left wondering how much of a PITA it will be to manage without the pveceph tools for OSD creation (what else does pveceph do, what does it save, etc.), and what issues that will create. I managed to activate 4 OSDs from partitions using the usual Ceph tools, but now I'm left with all kinds of timeouts etc. in the Proxmox GUI when trying to manage things ...
 
Hi nucode,

the exact same issue on my site. I would lose a quarter of the disks in my 4-node cloud-server units just for running Proxmox.
I just wanted to use the unused storage from the root devices (LVM), create an LV on each node with the unused space and create an OSD on it, but without success.

Code:
#pveceph createosd /dev/pve/root123-ceph1
#unable to get device info for 'dm-3'

@nucode Did you succeed in creating OSDs with the usual Ceph tools?

Greetings
Dominik
 

Yes, the usual Ceph tools seemed to work - but I have no idea what else pveceph does, so I also have no idea whether it works in practice.
Tbh, it needs to be done somehow, no matter what. If it is not possible, this is a showstopper issue. But we are not yet at the point where a Ceph deployment is practical; we need to acquire more gear first (10G switches etc.).

Another issue for me: do we really need to install full Proxmox on pure storage nodes? That's a lot of unnecessary code running, which increases the potential attack surface for hackers :(
I also haven't figured out yet whether the pveceph tools support jerasure + caching, so for those we might need the standard Ceph tools too. Bottom line: there is a risk that pveceph turns out to be detrimental to Proxmox as a whole :(
 
