I'm trying to set up a new 4-node Proxmox/Ceph cluster.
Each node has 6 x NVMe SSDs, as well as an Intel Optane drive (used for WAL/DB).
I have partitioned each NVMe SSD like so:
Code:
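# label the drive as GPT and split it into four equal partitions (repeated for each data NVMe)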
parted /dev/nvme6n1 mklabel gpt
parted -a optimal /dev/nvme6n1 mkpart primary 0% 25%
parted -a optimal /dev/nvme6n1 mkpart primary 25% 50%
parted -a optimal /dev/nvme6n1 mkpart primary 50% 75%
parted -a optimal /dev/nvme6n1 mkpart primary 75% 100%
Rationale: I'm setting up 4 OSDs per NVMe drive in order to better utilise resources.
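To make the intent concrete, the end state I'm after on each node is one OSD per partition, with each OSD's DB/WAL carved out of the shared Optane drive. Roughly like this (device names are just an example from one node; I haven't got past the first command yet):

Code:
# planned layout for one data NVMe drive - one OSD per partition,
# each with a 35 GB DB/WAL slice on the Optane drive (/dev/nvme0n1)
pveceph osd create /dev/nvme2n1p1 -db_dev /dev/nvme0n1 -db_size 35
pveceph osd create /dev/nvme2n1p2 -db_dev /dev/nvme0n1 -db_size 35
pveceph osd create /dev/nvme2n1p3 -db_dev /dev/nvme0n1 -db_size 35
pveceph osd create /dev/nvme2n1p4 -db_dev /dev/nvme0n1 -db_size 35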
I then try to create the first of these OSDs with pveceph, using the Optane drive (/dev/nvme0n1) as the WAL/DB drive. However, it fails, saying it can't get device info for my partition:
Code:
# pveceph osd create /dev/nvme2n1p1 -db_Dev /dev/nvme0n1 -db_size 35
unable to get device info for '/dev/nvme2n1p1'
Is there a reason that pveceph doesn't recognise /dev/nvme2n1p1? Is this the correct way to pass a disk partition to pveceph?