[SOLVED] Cannot create OSD for Ceph

elmacus

Cannot create an OSD for Ceph.

Same error in GUI and terminal:

# pveceph osd create /dev/nvme0n1
Error: any valid prefix is expected rather than "".
command '/sbin/ip address show to '' up' failed: exit code 1
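(For context: the empty quotes in that failing ip call are a missing network prefix; with a real prefix the same command simply lists the matching local addresses. A rough illustration, with 10.10.10.0/24 only as a placeholder:

# /sbin/ip address show to 10.10.10.0/24 up
# /sbin/ip address show to '' up
Error: any valid prefix is expected rather than "".)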

The only thing I can think of that has changed since it last worked is that I now have two networks: x.x.0.0, and the new x.x.1.0 for migrations.

The manual does not mention this, but I found it in another thread: https://forum.proxmox.com/threads/w...-when-multiple-pulbic-nets-are-defined.59059/

Obviously wrong, but I tried it anyway:
pveceph osd create /dev/nvme0n1 -mon-address x.x.0.11
Unknown option: mon-address
400 unable to parse option

Ceph 14.2.5, Proxmox 6.1-5, 10 Gbit.
4 nodes, around 25 OSDs.

# fdisk -l
Disk /dev/nvme0n1: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Disk model: INTEL SSDPED1D480GA
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

It has worked before; the disk was recently moved for other reasons.

I have also tried:
ceph-volume lvm zap /dev/nvme0n1 --destroy
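
(A couple of read-only checks that might help confirm the disk is really clean and unclaimed before retrying; ceph-volume inventory should report it as available, and lvm list should no longer reference it:

# lsblk /dev/nvme0n1
# ceph-volume lvm list
# ceph-volume inventory /dev/nvme0n1)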


Has anyone seen a similar problem, or is this a known bug?
 
Is the interface for the Ceph cluster network up?
 
Yes, it's a production Ceph cluster.
I did remove the extra network to test, but that did not help.
I don't know if we talk about the same thing. ;) Are the Ceph cluster & public network working on that node?

Could you post the ip addr output and what is configured in your ceph.conf?
 
Yes, Ceph is working. It's the same network at the moment; I'm trying to split them in the future, which is why I created the new network.

# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
3: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
4: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
inet x.x.0.11/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 xxx:dd8e/64 scope link
valid_lft forever preferred_lft forever
10: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether MAC-SECRET brd ff:ff:ff:ff:ff:ff
inet x.x.1.1/24 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 xxx:dd90/64 scope link
valid_lft forever preferred_lft forever


~# cat /etc/ceph/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
fsid = SECRET
mon allow pool delete = false
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
mon cluster log file level = info
mon_host = x.x.0.11, + all other monitors...
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mds.server18]
host = server18
mds standby for name = pve

[mds.server17]
host = server17
mds standby for name = pve

[mds.server16]
host = server16
mds standby for name = pve

[mon.server16]
host = server16
mon addr = x.x.0.161:6789

[mon.server1]
host = server1
mon addr = x.x.0.11:6789

[mon.server14]
host = server14
mon addr = x.x.0.101:6789

[mon.server17]
host = server17
mon addr = x.x.0.171:6789

[mon.server18]
host = server18
mon addr = x.x.0.181:6789
 
Where do you have your cluster_network and public_network setting? I don't see them in your posted ceph.conf.
 
I installed Ceph with Proxmox 5 and upgraded to 6, always following the wiki and the forum; I don't know where it is supposed to be.
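
(Side note: on Proxmox, /etc/ceph/ceph.conf is normally just a symlink to the cluster-wide /etc/pve/ceph.conf, so there is only one file to edit. Quick check, it should point at /etc/pve/ceph.conf:

# ls -l /etc/ceph/ceph.conf)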

Anyway, it must be on x.x.0.0.

Could you show me your ceph.conf?
 
Code:
:~# cat /etc/pve/ceph.conf
[global]
     auth_client_required = cephx
     auth_cluster_required = cephx
     auth_service_required = cephx
     cluster_network = 10.10.10.151/24
     fsid = <ID>
     mon_allow_pool_delete = true
     mon_host = 10.10.10.151 10.10.10.152 10.10.10.157
     osd_pool_default_min_size = 2
     osd_pool_default_size = 3
     public_network = 10.10.10.151/24

[client]
     keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
     keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.p6c156]
     host = p6c156
     mds_standby_for_name = pve

[mds.p6c154]
     host = p6c154
     mds_standby_for_name = pve

[mds.p6c153]
     host = p6c153
     mds_standby_for_name = pve
Add the public_network and cluster_network to your ceph.conf. Then the OSD creation should work again.
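In your case that would presumably mean something like this in the [global] section (x.x.0.0/24 standing in for your actual public network; keep the cluster network the same as the public one until you really split them):

Code:
[global]
     # ...existing settings stay as they are...
     cluster_network = x.x.0.0/24
     public_network = x.x.0.0/24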
 
Thanks.
And is .151 just any server in the Ceph cluster?

Is there a command to check the network now, before I add the setting, so I don't set it to the wrong networks?
 
And is .151 just any server in the Ceph cluster?
Use the network or the host's address; Ceph understands both.

Is there a command to check the network now, before I add the setting, so I don't set it to the wrong networks?
As long as you don't restart any Ceph service, it shouldn't get picked up.
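
(If you want to double-check before editing, a few read-only commands can help: the monitor addresses sit on the public network, so comparing them with the local interfaces shows which network to use, and the grep shows whether the settings are already present anywhere:

# ceph mon dump
# ip -br addr
# grep -E 'public_network|cluster_network' /etc/pve/ceph.conf)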
 
Yep that was it:
--> ceph-volume lvm create successful for: /dev/nvme0n1
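
(Quick sanity check afterwards, in case it helps anyone later: the new OSD should show up as up/in in the tree, and ceph -s shows the cluster state while it rebalances:

# ceph osd tree
# ceph -s)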

Thanks for the GREAT support. You should add this info about ceph.conf to the wiki.
 
