Hi,
I have a 4-node Proxmox cluster
Each node has:
- 1 x 512GB M.2 SSD (for Proxmox)
- 1 x Intel Optane SSD (895 GB) for Ceph WAL/DB
- 6 x Intel SATA SSD (1.75 TiB) for Ceph OSDs
I am trying to set up OSDs on the SATA SSDs, using the Optane as the WAL/DB drive.
However, when I get to the 5th drive, it complains about being out of space:
Code:
root@angussyd-vm01:~# pveceph osd create /dev/sde -db_dev /dev/nvme0n1
create OSD on /dev/sde (bluestore)
creating block.db on '/dev/nvme0n1'
Rounding up size to full physical extent 178.85 GiB
lvcreate 'ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee/osd-db-da591d0f-8a05-42fa-bc62-a093bf98aded' error: Volume group "ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee" has insufficient free space (45784 extents): 45786 required.
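In case it helps, I can check how much of the Optane the first four block.db LVs have already taken with plain LVM commands (VG name taken from the error above):
Code:
# free space left in the Optane's Ceph volume group
vgs ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee
# the block.db LVs already carved out of it
lvs -o lv_name,lv_size ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee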
I see from the docs that the pveceph command seems to default to 10% of the OSD size for the DB - so that would be around 192 GB (the ~179 GiB in the output above), right? And the WAL is 1%, which is about 19 GB.
So I assume it's because these defaults (in particular the 10% DB) have filled up the Optane: 4 x ~179 GiB is already ~715 GiB, which leaves less than the ~179 GiB the 5th OSD needs.
Any advice on what values to pick here? And what's the impact if I use a smaller DB/WAL than the 10%/1% defaults?
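What I'm considering trying (assuming I've read the man page right and -db_size takes GiB) is to cap the DB size explicitly so that six of them fit on the 895 GB Optane, i.e. roughly 895 / 6 = ~149 GB = ~138 GiB each:
Code:
# hypothetical sizing: split the Optane evenly across the 6 OSDs
# 895 GB / 6 = ~149 GB = ~138 GiB per block.db
pveceph osd create /dev/sde -db_dev /dev/nvme0n1 -db_size 138
But that's exactly where I'm unsure whether ~138 GiB per OSD is a sensible DB size, hence the question.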