Actually, here are a number of links proving it's a problem with the Ubuntu kernel after all.
The problem is present in Ubuntu 19.10 or even earlier:
I'm in the same boat, but I believe my conditions are a bit different.
Maybe this can help:
I have the latest Proxmox v6.1-8 and a very old IBM v7000 Unified storage.
The storage works well with several dozen machines, so there's no problem with it.
But it only supports NFSv2 and NFSv3!
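In case it's useful to anyone else: Proxmox can pin the NFS version through mount options in the storage definition. A minimal sketch, assuming a storage named "v7000" with a made-up server address and export path:

# /etc/pve/storage.cfg - the storage name, server IP, and export path are hypothetical
nfs: v7000
        server 192.0.2.10
        export /exports/pve
        path /mnt/pve/v7000
        content images,iso
        options vers=3

The same can be done from the CLI with: pvesm add nfs v7000 --server 192.0.2.10 --export /exports/pve --options vers=3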
Will I be able to change the default pool to work with the new rule (see the sketch below), or will I have to create a new pool and transfer all the OSDs over?
One more question: in the summary view of a Ceph pool, I see that the TOTAL space sometimes grows a bit (from 3.44TB to 3.55TB, for example) - how...
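On the first question: if I've understood the docs correctly, an existing pool can be pointed at a different CRUSH rule on the fly, so recreating it shouldn't be necessary. A sketch, assuming a pool named "rbd" and a rule named "ssd2-rule" (both names are hypothetical):

# switch the existing pool to the new rule; data will rebalance to match it
ceph osd pool set rbd crush_rule ssd2-rule
# verify which rule the pool uses now
ceph osd pool get rbd crush_rule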
So I tested this, and this is what I got (a rough command sketch follows below):
I created a new device class "ssd2" and set up a rule that targets ssd2 OSDs,
then disabled the automatic assignment of the device class.
Then I created a new pool with this rule.
It was assigned OK; all the newly created OSDs were ssd2.
But then it...
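For reference, a rough sketch of the sequence described above; the OSD IDs, rule name, pool name, and PG count are only examples:

# clear the auto-detected class and assign the custom one
ceph osd crush rm-device-class osd.16 osd.17
ceph osd crush set-device-class ssd2 osd.16 osd.17
# replicated rule that only picks ssd2 OSDs, with host as the failure domain
ceph osd crush rule create-replicated ssd2-rule default host ssd2
# new pool bound to that rule (128 PGs is a placeholder value)
ceph osd pool create ssd2-pool 128 128 replicated ssd2-rule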
So let me just confirm one last time.
I can actually create just one CRUSH rule (for ssd2, for example) and attach the new OSDs to it, and I don't have to create any rules for the old existing pool, right?
Sorry for asking the same thing over and over - I just don't want...
In this order? I would think that I should first create the rule, then the pool, and then the OSDs; otherwise the OSDs get attached to the old pool (see the note below).
I just want to create the new pool without touching the old one - I mean, I don't want to create any CRUSH rules for the old pool and I definitely don't want...
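One detail that seems relevant to the ordering: by default, Ceph re-applies the auto-detected device class every time an OSD starts, so a manually assigned class like ssd2 can get reverted. If I read the docs right, that can be switched off in ceph.conf before the new OSDs are created:

# ceph.conf - stop OSDs from overwriting a manually assigned device class at startup
[osd]
osd_class_update_on_start = false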
I have a 4-node Proxmox cluster with Ceph.
The version is 5.3. I have one pool with 16 OSDs, all SSD.
Now I have added several nodes to the cluster, and I want the new OSDs to be part of a different pool.
I understand that this can only be done via device class settings (some commands for checking the current layout are sketched below):
ceph osd crush rule...
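A few read-only commands that are handy for inspecting the current layout before changing anything (these make no modifications):

# show the OSD tree with each OSD's device class in the CLASS column
ceph osd tree
# list the existing CRUSH rules
ceph osd crush rule ls
# dump the rules to see which device class (if any) each one filters on
ceph osd crush rule dump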
Thanks a lot for such a detailed response.
I'd like to clarify some things though.
I couldn't find the numbers you mentioned anywhere: that the default size of RocksDB is 1GB and the WAL is 500MB. Can you please point me to where these come from? (A quick way to check is sketched below.)
If I have a total of 12 OSDs in the cluster, will I be right to...
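For what it's worth, the values in effect on a running OSD can be read through the admin socket; a sketch assuming an OSD with ID 0, run on the node that hosts it:

# query the running OSD for its BlueStore DB and WAL sizing options
ceph daemon osd.0 config get bluestore_block_db_size
ceph daemon osd.0 config get bluestore_block_wal_size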
I have a new cluster of 4 nodes, 3 of which run Ceph.
root@pve3:~# pveversion -v
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-35 (running version: 5.1-35/722cc488)