We have a 6-node cluster, 4 of which are Ceph nodes, with 8 HDDs and one enterprise NVMe SSD per node. In the last few days several HDDs died and have to be replaced.
Back when I set up the Ceph storage, I created a partition on the SSD for every OSD to serve as its WAL device.
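Just to illustrate the layout, the WAL partitions could be created with something like this (sketch only, sizes taken from the lsblk output further down, not my original commands):
Bash:
# create eight equally sized WAL partitions on the NVMe SSD, one per OSD
for i in $(seq 1 8); do
    sgdisk --new=${i}:0:+43G --change-name=${i}:"ceph-wal-${i}" /dev/nvme0n1
done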
When I try to create a new OSD, I get an error message.
Bash:
root@vm-2:~# pveceph createosd /dev/sde -wal_dev /dev/nvme0n1p5
unable to get device info for '/dev/nvme0n1p5' for type wal_dev
root@vm-2:~#
Since the partitions follow the naming scheme /dev/nvmeXnYpZ, I believe pveceph does not accept this as a valid device path, so I am unable to create OSDs with the NVMe SSD as the WAL device.
Bash:
root@vm-2:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
[...]
sdd 8:48 0 3.7T 0 disk
├─sdd1 8:49 0 100M 0 part /var/lib/ceph/osd/ceph-14
└─sdd2 8:50 0 3.7T 0 part
sde 8:64 0 3.7T 0 disk
sdf 8:80 0 3.7T 0 disk
├─sdf1 8:81 0 100M 0 part /var/lib/ceph/osd/ceph-16
└─sdf2 8:82 0 3.7T 0 part
[...]
nvme0n1 259:0 0 349.3G 0 disk
├─nvme0n1p1 259:1 0 43.7G 0 part
├─nvme0n1p2 259:2 0 43.7G 0 part
├─nvme0n1p3 259:3 0 43.7G 0 part
├─nvme0n1p4 259:4 0 43.7G 0 part
├─nvme0n1p5 259:5 0 43.7G 0 part
├─nvme0n1p6 259:6 0 43.7G 0 part
├─nvme0n1p7 259:7 0 43.7G 0 part
└─nvme0n1p8 259:8 0 43.7G 0 part
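From what I understand, one could probably bypass pveceph and call ceph-volume directly, since it accepts an existing partition for the WAL. Roughly like this (untested sketch, device paths as in the output above):
Bash:
# untested idea: create the OSD with ceph-volume directly,
# using the HDD for data and the NVMe partition as WAL
ceph-volume lvm create --bluestore --data /dev/sde --block.wal /dev/nvme0n1p5
But I would prefer to stay with the pveceph tooling if possible.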
How can I handle this?
Greetings from the North Sea