pveceph unable to get device info

huky

I have added two Shannon PCIe SSDs for Ceph, but pveceph is unable to get the device info:


# shannon-status -l
82:00:0: /dev/dfb Direct-IO G3S 1200GB 1200GB SS16704K7310005
81:00:0: /dev/dfa Direct-IO G3S 1200GB 1200GB SS16704K7310006

# pveceph createosd /dev/dfa
unable to get device info for 'dfa'

# ls -l /dev/df*
brw-rw---- 1 root disk 252, 0 Jun 26 16:27 /dev/dfa
brw-rw---- 1 root disk 252, 64 Apr 17 21:46 /dev/dfb

Now, how can I use them?
 
Hi,

the device path is the problem.
/dev/dfa is a nonstandard device name.
The Proxmox VE toolset has a whitelist of the following device names.

Code:
    # whitelisting following devices
    # hdX: ide block device
    # sdX: sd block device
    # vdX: virtual block device
    # xvdX: xen virtual block device
    # nvmeXnY: nvme devices
    # cciss!cXnY: cciss devices
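
In practice the check is just a match on the device name, so anything like /dev/dfX gets rejected. Here is a rough shell sketch of the same idea (glob patterns derived from the comment above, not the actual Perl code):

Code:
# rough approximation of the whitelist with shell globs; the real check lives in Perl
dev=dfa
case "$dev" in
    hd[a-z]*|sd[a-z]*|vd[a-z]*|xvd[a-z]*|nvme[0-9]*n[0-9]*|cciss!c[0-9]*n[0-9]*)
        echo "/dev/$dev is whitelisted" ;;
    *)
        echo "/dev/$dev is not whitelisted, pveceph will refuse it" ;;
esac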
 
Thanks, where is the list? Can I add dfX to it?
 
This is in our code, which means it is hardcoded and not a config file.
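
If you want to see where the list lives on your node, you could grep the installed Perl modules for that comment (the path below is just the usual location of the packaged PVE modules, so verify it on your system):

Code:
# search the installed PVE Perl modules for the whitelist comment
grep -rn "whitelisting following devices" /usr/share/perl5/PVE/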
 
Code:
pveceph createosd /dev/mmcblk0
unable to get device info for 'mmcblk0'

I believe it's the same problem with eMMC storage. Is there no way to test this in my lab?
 
Hm... you don't love your hardware? ;) Even if it worked, the hardware would probably die very soon (besides delivering horrible performance).
 
I love my hardware, but I prefer to shrink my electricity bill at home, so I use a low-power Bay Trail mini PC for testing.
 
Well, don't say I didn't warn you. :) You can always use the 'ceph-disk' utility directly to add OSDs.
 
For your info:
Code:
ceph-disk prepare --cluster ceph --cluster-uuid e6a0e1ec-70c7-4a24-aaaa-9127fb59f5f9 --fs-type xfs /dev/mmcblk0

ceph-disk activate /dev/mmcblk0p1

I tested it just now, and it performs not so badly compared with my Ganesha + GlusterFS setup for containers.
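
To double-check that the OSD really came up after the activate step, the usual Ceph status commands should be enough (nothing PVE-specific assumed here):

Code:
# confirm the new OSD is listed, up and in
ceph-disk list
ceph osd tree
ceph -s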
 
So, well, I have to dig up this thread because I stumbled upon the "device path" thing...

We have a 6-node cluster, 4 of which are Ceph nodes, with 8 HDDs and one enterprise NVMe SSD per node. When I set up the Ceph storage, I created a partition on the SSD for every OSD to serve as its WAL device. Since the partitions follow the naming scheme /dev/nvmeXnYpZ, the device path check kicks in and I am unable to create OSDs with the NVMe SSD as WAL device.

How can I handle this?

greetings from the North Sea
 
@Ingo S, as this is an old thread and refers to PVE 5 with ceph-disk (which doesn't exist anymore), can you please open a new thread? Thanks.
 
