Ceph - keep pool settings on reboot

paradox55

Member
May 31, 2019
TIL that rebooting the node reverts pool settings to their defaults. How do I make Ceph persist pool settings, short of setting up an init script?

Code:
root@cl-01:~# rados bench -p primary_volatile 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_cl-01_55168
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        59        43   171.975       172   0.0911558    0.313172
    2      16       126       110   219.975       268    0.100511    0.277244
    3      16       195       179   238.643       276    0.106172    0.261989
    4      16       256       240   239.977       244    0.365402    0.258503
    5      16       311       295   235.973       220   0.0932502    0.263105
    6      16       380       364   242.635       276   0.0913791    0.259185
    7      16       432       416   237.682       208    0.112712     0.25987
    8      16       502       486   242.968       280   0.0873168    0.255562
    9      16       573       557   247.524       284    0.361759    0.254726
   10      16       628       612   244.768       220    0.122259    0.255039

root@cl-01:~# rados bench -p primary_volatile 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        88        72   287.963       288   0.0289035    0.174664
    2      16       183       167   333.963       380    0.292418    0.174838
    3      16       268       252   335.965       340   0.0220428    0.180982
    4      16       351       335   334.965       332    0.999724     0.17736
    5      16       430       414   331.168       316    0.444224    0.185377
    6      16       521       505   336.634       364    0.372936    0.181515
    7      16       614       598   341.683       372    0.142144    0.180472

---------
---------
---------

root@cl-01:~# ceph osd pool set primary_volatile compression_mode aggressive
set pool 74 compression_mode to aggressive
root@cl-01:~# ceph osd pool set primary_volatile compression_algorithm lz4
set pool 74 compression_algorithm to lz4

root@cl-01:~# rados bench -p primary_volatile 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_cl-01_55658
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       147       131   523.974       524     0.09166    0.108584
    2      16       284       268   535.965       548   0.0568118    0.113235
    3      16       425       409   545.296       564   0.0638425    0.115996
    4      16       572       556    555.96       588   0.0450933    0.113053
    5      16       714       698   558.356       568     0.06666    0.113288
    6      16       838       822   547.956       496   0.0745234    0.114872
    7      16       978       962   549.671       560   0.0765595    0.114896
    8      16      1119      1103   551.457       564   0.0584567     0.11494
    9      16      1271      1255   557.734       608    0.160169    0.114047
   10      16      1400      1384   553.557       516   0.0498437    0.114605
Total time run:         10.1147

root@cl-01:~# rados bench -p primary_volatile 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       318       302   1207.86      1208   0.0792652   0.0506832
    2      16       630       614   1227.88      1248   0.0100196   0.0506708
    3      16       937       921   1227.82      1228    0.095927   0.0511167
    4      16      1256      1240   1239.84      1276   0.0735082   0.0508995
 
Set it in ceph.conf or with ceph config set in the MONs' config DB.
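
If it is the compression settings from above that keep reverting, a minimal sketch of both approaches could look like the following. Note the option names bluestore_compression_mode / bluestore_compression_algorithm are my assumption about which cluster-wide defaults you want to persist; adjust to whatever setting is actually reverting on your cluster.

Code:
# Option 1: store it in the monitors' central config DB (ceph config set, Mimic and later)
ceph config set osd bluestore_compression_mode aggressive
ceph config set osd bluestore_compression_algorithm lz4
ceph config get osd bluestore_compression_mode    # verify it stuck

# Option 2: put it in ceph.conf (on Proxmox VE, /etc/ceph/ceph.conf is usually a
# symlink to /etc/pve/ceph.conf), then restart the OSDs:
#   [osd]
#   bluestore_compression_mode = aggressive
#   bluestore_compression_algorithm = lz4
systemctl restart ceph-osd.target

Either way the setting lives on the monitors or in the config file rather than only in the running daemons, so it survives a reboot.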
 
