Hi Adam,
The enumeration is no longer an issue, so the problem now is that the default WAL size is too small. Both the GUI and the "pveceph createosd" command seem to go with the default size and ignore what is specified in the ceph.conf file. Also, no symbolic link is created...
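For reference, the sizes being ignored would be set in ceph.conf with the standard BlueStore size options, roughly like this (a sketch; the values below are examples, not my exact sizes):

```ini
# /etc/ceph/ceph.conf -- example values only
[global]
    # Sizes are in bytes: a 30 GiB DB and a 2 GiB WAL
    bluestore_block_db_size  = 32212254720
    bluestore_block_wal_size = 2147483648
```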
Alwin,
I really appreciate all of the help. I am up and running now with the partitioning exactly how I need it. You have been extremely helpful!!!! I am going to change the title of this post to add [Solved]. For reference, I have opened a Bugzilla ticket, which can be found here...
Hi Alwin,
I have it all figured out now. Both the GUI and the "pveceph createosd" command line are broken, but only when the same device is specified for a separate DB and WAL: they will not add another partition for the WAL. In the documentation you pointed me to, the documentation is...
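For anyone hitting the same limitation, one way to sidestep the tooling (not necessarily what was done here) is to pre-create the partitions yourself and hand them to ceph-volume directly. A sketch only; the device paths and sizes are placeholders for your own disks:

```shell
# Carve DB and WAL partitions out of the NVMe by hand (sizes are examples)
sgdisk --new=1:0:+30G --change-name=1:ceph-db-0  /dev/nvme0n1
sgdisk --new=2:0:+2G  --change-name=2:ceph-wal-0 /dev/nvme0n1

# Create the OSD, pointing BlueStore at both partitions explicitly,
# so the tool cannot fall back to its default single-partition layout
ceph-volume lvm create --bluestore --data /dev/sdb \
    --block.db  /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```

Since both partitions are passed explicitly, the default WAL sizing never comes into play.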
Hi Alwin,
Thanks again for your response. Since I do have two NVMe devices, I was able to work around the issue by putting the DB on one of them and the WAL on the other. This is not ideal, however, since I'd rather have only half of my OSDs affected if one of the NVMe cards goes down. I have rebuilt 2 of...
Alwin,
Thanks for getting back to me. I believe using "disk by-partlabel" was an effort to get around "pveceph createosd" seeming broken, which is why I am posting. I just want to be able to create OSDs that have separate DB and WAL partitions with my size specs on the...
Preface:
I have a hybrid Ceph environment using 16 SATA spinners and 2 Intel Optane NVMe PCIe cards (intended for DB and WAL). Because of enumeration issues on reboot, the NVMe cards can swap their /dev/{names}, and such a flip would trigger a full cluster rebalance. The...
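The enumeration problem is why stable device names matter here: instead of the raw /dev/nvme* names, partitions can be addressed through udev's persistent symlinks, which survive a reordering. A sketch (the partition label is made up for illustration):

```shell
# GPT partition labels are stored on disk, so they survive enumeration changes;
# udev exposes each labeled partition as a stable symlink
sgdisk --change-name=1:ceph-db-0 /dev/nvme0n1
ls -l /dev/disk/by-partlabel/ceph-db-0

# /dev/disk/by-id/ names are also stable, derived from the device model/serial
ls -l /dev/disk/by-id/ | grep nvme
```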