Cannot create an OSD for Ceph.
Same error in GUI and terminal:
# pveceph osd create /dev/nvme0n1
Error: any valid prefix is expected rather than "".
command '/sbin/ip address show to '' up' failed: exit code 1
The only thing I can think of that has changed since it last worked is that I now have two networks: x.x.0.0, and the new x.x.1.0 for migrations.
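My guess is that pveceph reads the public_network value from /etc/pve/ceph.conf and passes it to /sbin/ip, which would explain the empty '' in the error if that value goes missing or cannot be parsed once two networks are defined. This is how I would check it (the grep is just an example, not the only way to look):
# grep -E '(public|cluster)_network' /etc/pve/ceph.conf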
The manual does not mention this, but there is another thread about it: https://forum.proxmox.com/threads/w...-when-multiple-pulbic-nets-are-defined.59059/
Obviously wrong, but I tried anyway:
pveceph osd create /dev/nvme0n1 -mon-address x.x.0.11
Unknown option: mon-address
400 unable to parse option
Ceph 14.2.5, Proxmox 6.1-5, 10 Gbit network.
4 nodes, 25 OSDs or so.
# fdisk -l
Disk /dev/nvme0n1: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Disk model: INTEL SSDPED1D480GA
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
It has worked before; the disk was recently moved for other reasons.
I have also tried:
ceph-volume lvm zap /dev/nvme0n1 --destroy
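In case it helps anyone comparing, these should show whether any partitions or LVM leftovers remain after the zap (shown as example commands to run, not my actual output):
# lsblk /dev/nvme0n1
# ceph-volume lvm list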
Anyone with a similar problem, or is this a known bug?