The workload for the OSDs of type HDD is only:
OLTP Database backup / restore
This means that each DB server has a single RBD mapped for backing up / restoring its database.
Would you confirm that for this workload a dedicated SSD for block.db is not required?
OK.
I modified /etc/ceph/ceph.conf by adding this in [global]:
bluestore_block_db_size = 53687091200
This should create the RocksDB with a size of 50GB.
Then I wanted to move the DB to a new device (SSD) that is not formatted:
root@ld5505:~# ceph-bluestore-tool bluefs-bdev-new-db --path...
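For completeness, the full sequence I have in mind looks roughly like this (OSD id and target device are placeholders; as far as I understand the tool needs bluestore_block_db_size set so it knows how large the new DB should be):

systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/sdx
systemctl start ceph-osd@0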
Hi,
I have created OSDs on HDD without putting the DB on a faster drive.
In order to improve performance I now have a single 3.8TB SSD drive.
Questions:
How can I add a DB device on this new SSD drive for every single OSD?
Which parameter in ceph.conf defines the size for the DB?
Can you confirm that...
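To make the first question more concrete, what I have in mind is carving the 3.8TB SSD into one LV per OSD and then attaching each LV as block.db. Roughly like this (VG/LV names and the 100G size are just examples):

vgcreate ceph-db /dev/sdx
lvcreate -L 100G -n db-osd0 ceph-db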
Does this mean I need to create the json-files for LVM OSDs, too?
If yes, how should I do this?
If not, how can I ensure that OSD activation on startup for LVM OSDs is working in case the files in /var/lib/ceph/osd/ceph-<id>/ are lost?
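My assumption is that LVM OSDs keep their metadata in the LV tags, so the tmpfs under /var/lib/ceph/osd/ceph-<id>/ can be rebuilt from those tags without any json-files, e.g.:

ceph-volume lvm list
ceph-volume lvm activate --all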
All right.
I've completed all activities on servers with "simple" ceph-disk(s).
The json-files in /etc/ceph/osd/ are complete now.
I understand this as a precautionary measure in case the files in /var/lib/ceph/osd/ceph-<id>/ are lost (again).
However I don't understand how to fix a comparable issue...
Based on my calculation I need much more SSD disk space.
260x HDD 2TB = 520TB total
5% for DB = 26TB
distributed over 4 nodes = 6.5TB per node
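For reference, the same numbers broken down per OSD (5% is simply the sizing target I'm using):

2TB x 5% = 100GB of DB per OSD
260 OSDs / 4 nodes = 65 OSDs per node
65 x 100GB = 6.5TB of SSD per node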
Once I have the required SSD drives I will create a new DB storage location.
Can you please advise how to proceed for the following 2 scenarios:
1. HDD - Single...
root@ld5507:~# ceph-volume simple scan /dev/sda1
Running command: /sbin/cryptsetup status /dev/sda1
--> OSD 172 got scanned and metadata persisted to file: /etc/ceph/osd/172-a7de0317-05da-4df5-be08-8b4401d76f10.json
--> To take over management of this scanned OSD, and disable ceph-disk and udev...
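If I read that message correctly, the follow-up step would be something like:

ceph-volume simple activate 172 a7de0317-05da-4df5-be08-8b4401d76f10

or, for all scanned OSDs at once:

ceph-volume simple activate --all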
No, I didn't create the json-files.
And directory /etc/ceph/osd/ does not exist.
root@ld5508:~# ls -l /etc/ceph/
insgesamt 16
-rw------- 1 ceph ceph 161 Mai 28 14:33 ceph.client.admin.keyring
lrwxrwxrwx 1 root root 18 Mai 28 14:33 ceph.conf -> /etc/pve/ceph.conf
-rw-r----- 1 root root 704 Aug...
Well, the partitions on SSD are created sequentially.
The design now looks like this:
sdbl 67:240 0 372,6G 0 disk
├─sdbl1 67:241 0 1G 0 part
├─sdbl2 67:242 0 1G 0 part
├─sdbl3 67:243 0 1G 0 part
├─sdbl4 67:244 0 1G 0 part
├─sdbl5 67:245...
Yes. I followed the upgrade guide and executed every single step.
And actually everything was fine.
However since yesterday the issue started.
I cleaned up some packages from Debian 9, upgraded the PVE kernel and rebooted 2 of 4 nodes.
I modified /etc/pve/ceph.conf too in order to troubleshoot...
Hi,
thanks Alwin for the explanation.
However there's one thing that is not mentioned.
With Nautilus all OSDs are now created using LVM when using the command pveceph createosd <device>.
Before, this command created primary partitions with GPT.
Or is this command obsolete now? It is still documented...
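For what it's worth, after creating an OSD this way I can at least verify the resulting LVM layout with:

ceph-volume lvm list
lvs -o +devices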
Hi,
I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster.
On 2 nodes I have identified that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting.
Typically the content of this directory is this:
root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/
insgesamt 60...
Well, my issue is not OSD performance, so tuning was not my request.
The issue is that my setup originated from Proxmox 5 + Ceph Luminous, where every OSD of type HDD has a 1GB journal on SSD.
According to Ceph this is by far too small for block.db (see here):
It is...
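In case it helps, the way I would check whether metadata already spills over to the slow device is roughly this (osd.70 is just an example):

ceph health detail                    # Nautilus reports BLUESTORE_SPILLOVER here
ceph daemon osd.70 perf dump bluefs   # compare db_used_bytes with slow_used_bytes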
Thanks for providing this link.
This means my current Ceph setup is somehow obsolete because the command
pveceph osd create <hdd-device> --journal-dev <ssd-device>
created a partition of size 1G on the SSD.
What is the recommended procedure to correct this?
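What I would tentatively try, based on my reading of the ceph-bluestore-tool man page, is to create a larger partition or LV on the SSD and move the existing block.db onto it (all names below are placeholders):

systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block.db --dev-target /dev/sdy2
systemctl start ceph-osd@0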
THX
Hello!
I'm facing the same issue, the only difference being that 192 OSDs are affected.
When I created the OSD(s) in PVE 5 + Luminous there was a 1GB partition on the SSD created for the DB (metadata).
Question:
How can I determine the amount of spilled metadata?
I run this command...
Hi,
thanks for this input.
After successfully removing node ld4464 from Ceph, the error message is gone.
What would be the next steps?
Do you advise re-adding this node ld4464 to the existing Ceph cluster?
Or should I first fix the OSDs and ensure that they will start...
Hello!
Due to an HD crash I was forced to rebuild a server node from scratch, which means I installed the OS and Proxmox VE (apt install proxmox-ve postfix open-iscsi) fresh on the server.
Then I installed Ceph (pveceph install) on the greenfield system.
Then I ran pvecm add 192.168.10.11 -ring0_addr...
Hi,
I'm aware of the device classes. As far as I understand, Ceph can now identify the device class of a disk automatically.
My intention was this:
In order to ensure that a VM uses a specific disk type, say NVMe, I need to
- define another root nvme
- define a fake hostname, e.g. <hostname>-nvme
-...
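For reference, my current understanding is that with device classes alone this could be done without a fake root/hostname, roughly like this (rule and pool names are made up):

ceph osd crush rule create-replicated replicated_nvme default host nvme
ceph osd pool set <pool> crush_rule replicated_nvme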