Install Ceph nodes (not Proxmox)

paradox55

How do I install ceph monitors/osd/nodes in my cluster that don't run proxmox?

Running Ceph on Proxmox is fine, but some of my nodes have 1GB of RAM. Proxmox requires 2GB as a minimum, making it rather resource-intensive.

Yes, 1GB per OSD runs fine in my testing.
 
How do I install ceph monitors/osd/nodes in my cluster that don't run proxmox?

If you do not run Proxmox VE, ask the vendor of your OS for an installation howto. This is the support forum for Proxmox VE.
 
If you do not run Proxmox VE, ask the vendor of your OS for an installation howto. This is the support forum for Proxmox VE.

Yes, I run proxmox VE and it manages my cluster. There is no option to add nodes to the cluster unless they run Proxmox. I'd like to be able to add nodes to the cluster without running proxmox on them.
 
How do I install ceph monitors/osd/nodes in my cluster that don't run proxmox?

Running Ceph on Proxmox is fine, but some of my nodes have 1GB of RAM. Proxmox requires 2GB as a minimum, making it rather resource-intensive.

Yes, 1GB per OSD runs fine in my testing.
Hi,
you can simply install an OS with the same Ceph version and use the same ceph.conf as the PVE cluster...
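
For illustration, a minimal sketch of that approach on a plain Debian node (the release name, repository and "pve-node" hostname below are assumptions - match them to what your PVE cluster actually runs):

Code:
# install the same Ceph release the PVE nodes use (Nautilus/buster assumed here;
# check with `ceph versions` on a PVE node and adjust the repository accordingly)
apt install -y gnupg curl
curl -fsSL https://download.ceph.com/keys/release.asc | apt-key add -
echo "deb https://download.ceph.com/debian-nautilus/ buster main" > /etc/apt/sources.list.d/ceph.list
apt update && apt install -y ceph

# copy the cluster config and the bootstrap-osd keyring over from a PVE node
scp pve-node:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp pve-node:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring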

BUT I would not recommend running any Ceph services with 1GB!!
You will get into a lot of trouble (I don't know your testing, but I'm quite sure that you will run into trouble).

Udo
 
Hi,
you can simply install an OS with the same Ceph version and use the same ceph.conf as the PVE cluster...

BUT I would not recommend running any Ceph services with 1GB!!
You will get into a lot of trouble (I don't know your testing, but I'm quite sure that you will run into trouble).

Udo

No, 1GB is perfectly fine for an OSD in my testing. Then again, the drives are 3TB. The other ceph stuff not so much.

So, just install ceph like normal on debian/ubuntu and then copy the config? Don't need to use ceph-deploy or anything?

Thanks!
 
So, just install ceph like normal on debian/ubuntu and then copy the config? Don't need to use ceph-deploy or anything?

There are many ways to install Ceph packages. On Proxmox VE, it's just a click in the GUI.

Running Ceph with such limited memory will not make you happy. It looks like you are not that experienced with Ceph (as you are asking for an installation how-to), so you should follow the best-practice guides.
 
There are many ways to install Ceph packages. On Proxmox VE, it's just a click in the GUI.

Running Ceph with such limited memory will not make you happy. It looks like you are not that experienced with Ceph (as you are asking for an installation how-to), so you should follow the best-practice guides.

I'm asking for an installation how-to because the Ceph setup on Proxmox does not allow non-Proxmox installations through the GUI. The node has to be part of the Proxmox cluster, with Proxmox installed on it.

If it's just a "click in the GUI", tell me where to add an external node that is not part of the cluster and does not have Proxmox installed on it.

Edit:
This also isn't covered in any of the proxmox guides.

And without knowing how Proxmox manages the cluster (ceph-deploy is absent, so likely salt/deepsea?), it might become a configuration nightmare.

I can just toss Ceph on the nodes and toss on the config, but I'd like Proxmox to, y'know, manage them.
 
Just take a look at our documentation and you will see how Proxmox VE manages the Ceph package installation:

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pveceph
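
For reference, the flow that chapter describes looks roughly like this on a PVE node (a sketch - the subnet and disk below are placeholders):

Code:
pveceph install                       # install the Ceph packages from the Proxmox repository
pveceph init --network 10.10.10.0/24  # write the initial ceph.conf for that cluster network
pveceph mon create                    # create a monitor on this node
pveceph osd create /dev/sdb           # turn a spare disk into an OSD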

Code:
root@central-01:~# pveceph mon create --mon-address 172.16.0.100
Specified monitor IP '172.16.0.100' not configured or up on central-01!
root@central-01:~#

I've also tried transferring the ceph.conf and manually adding in OSD hosts with the key file, with little success.

Basically, while this method might be doable, there is no documentation for it, and I might as well stick with a ceph-deploy install using the dashboard plugin.
 
No, 1GB is perfectly fine for an OSD in my testing. Then again, the drives are 3TB. The other ceph stuff not so much.

So, just install ceph like normal on debian/ubuntu and then copy the config? Don't need to use ceph-deploy or anything?

Thanks!
Hi,
if you look here: https://docs.ceph.com/docs/jewel/start/hardware-recommendations/ you will see that you need a minimum of 3GB of free RAM for one OSD.
And newer versions need more RAM - see also here: https://unix.stackexchange.com/questions/448801/ceph-luminous-osd-memory-usage?rq=1

Second - Linux uses free RAM as cache, so Ceph speeds up a lot by answering requests from RAM instead of reading all the data from the very slow disk.
With such a low-memory system your performance will be a "pain in the ass".
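
As a quick check of what an OSD is actually targeting and using, something like the following can be run on the OSD host (a sketch assuming a BlueStore OSD with id 0 and access to its admin socket):

Code:
# the memory budget the daemon is configured to aim for, in bytes
ceph daemon osd.0 config get osd_memory_target

# what its caches and allocator are actually holding right now
ceph daemon osd.0 dump_mempools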

Udo
 
Hi,
if you look here: https://docs.ceph.com/docs/jewel/start/hardware-recommendations/ you will see that you need a minimum of 3GB of free RAM for one OSD.
And newer versions need more RAM - see also here: https://unix.stackexchange.com/questions/448801/ceph-luminous-osd-memory-usage?rq=1

Second - Linux uses free RAM as cache, so Ceph speeds up a lot by answering requests from RAM instead of reading all the data from the very slow disk.
With such a low-memory system your performance will be a "pain in the ass".

Udo

Hi,

Thank you for your advice, but I've been running Ceph on several test OSDs with 1GB RAM for a while now, and they run perfectly fine with good throughput.

If you install an OSD in a VM, it will even run at 512MB RAM.

The difference between 512MB and 1GB is around 80MB/s-100MB/s throughput per OSD.

Edit: To clarify, it's an 80MB/s-100MB/s difference on a 5-OSD cluster, not per OSD.

For example, if your throughput was 180MB/s for the cluster, it drops down to 40-50MB/s at 512MB RAM.

These OSDs are also not huge - 3TB drives.
 
Code:
root@central-01:~# pveceph mon create --mon-address 172.16.0.100
Specified monitor IP '172.16.0.100' not configured or up on central-01!
root@central-01:~#

I've also tried transferring the ceph.conf and manually adding in OSD hosts with the key file, with little success.

Basically, while this method might be doable, there is no documentation for it, and I might as well stick with a ceph-deploy install using the dashboard plugin.
Hi,
documentation: https://docs.ceph.com/docs/master/install/manual-deployment/#adding-osds
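
On the external node, that page essentially boils down to something like this (a sketch assuming ceph.conf and the bootstrap-osd keyring were already copied over, and /dev/sdb is a placeholder for the data disk):

Code:
# prepare and activate a BlueStore OSD in one step
ceph-volume lvm create --data /dev/sdb

# confirm the new OSD registered and joined the cluster
ceph osd tree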

I would NOT use ceph-mons outside the pve-cluster, only OSD nodes (which is fine, if you have enough resources).
Especially the ceph-mons should all run the same (and the newest) version - OSDs are not so critical.

But again - don't use such a system for important things, and don't blame Ceph/PVE for bad performance/reliability.

Udo
 
Hi,
documentation: https://docs.ceph.com/docs/master/install/manual-deployment/#adding-osds

I would NOT use ceph-mons outside the pve-cluster, only OSD nodes (which is fine, if you have enough resources).
Especially the ceph-mons should all run the same (and the newest) version - OSDs are not so critical.

But again - don't use such a system for important things, and don't blame Ceph/PVE for bad performance/reliability.

Udo

Thanks! I'll give that a shot. And yeah, this is mostly just for testing purposes right now. Apologies if I've come across as rude.
 
...
The difference between 512MB and 1GB is around 80MB/s-100MB/s throughput per OSD.
Hi,
sorry - you must be measuring caching!
With a 5-OSD HDD cluster on a 1GB network you will never ever get 80-100MB/s throughput in a VM / single thread.

OK, with replica 3 and 5 OSDs you could get 100/(5/3) = 60MB/s inside a VM with 100MB/s per OSD - but I don't think you reach such values!

Try
Code:
rados bench -p rbd 60 write --no-cleanup
(-t 1 for single thread - which means one VM...)
Udo
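
For completeness, the single-threaded run Udo mentions, plus cleanup of the benchmark objects afterwards, would look like this (pool name rbd as in his example):

Code:
rados bench -p rbd 60 write -t 1 --no-cleanup   # one writer, roughly what a single VM sees
rados -p rbd cleanup                            # remove the benchmark objects when done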
 
Just take a look at our documentation and you will see how Proxmox VE manages the Ceph package installation:

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pveceph

Alright! I have a cluster set up: 3 OSDs, 1 monitor, 1 MDS.

Works 100% - ceph -s shows every OSD up and running.

...then I try to create a pool or CephFS through Proxmox, and suddenly every OSD loses its connection to the cluster and goes down.

I can fix this by running ceph-deploy config push and restarting the OSD nodes - they then connect to the cluster and come back up.
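
The workaround described above would look roughly like this from the ceph-deploy admin node (a sketch - the hostnames are taken from the osd tree further down and may not match the actual setup):

Code:
# push the current ceph.conf to the OSD hosts, overwriting their stale copies
ceph-deploy --overwrite-conf config push ceph-01 ceph-02 ceph-03

# then restart the OSD daemons on each of those hosts so they pick up the new config
systemctl restart ceph-osd.target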

Then this cycle repeats whenever I try to modify the cluster through the Proxmox dashboard.
 
Running Ceph with such limited memory will not make you happy. It looks like you are not that experienced with Ceph (as you are asking for an installation how-to), so you should follow the best-practice guides.

Perfectly happy! I have each OSD set up with 768MB RAM - they can run on 512MB, but my osd_memory_target is set a little higher for increased throughput.
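
For reference, a memory target like that could be set cluster-wide (or per OSD) from a monitor node roughly as follows - the value is only an example, not the poster's actual setting, and very low targets may be clamped by the OSD:

Code:
# ~896 MiB, in bytes, applied to all OSDs
ceph config set osd osd_memory_target 939524096

# or only for a single memory-constrained OSD
ceph config set osd.4 osd_memory_target 939524096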

Code:
root@central-01:/# rados bench -p main-pool_data 20 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 20 seconds or 0 objects
Object prefix: benchmark_data_central-01_19469
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        54        38   151.991       152    0.105509    0.321269
    2      16       125       109    217.98       284     0.41014    0.278222
    3      16       189       173   230.635       256    0.363403    0.271736
    4      16       237       221    220.97       192    0.349635    0.278367
    5      16       289       273   218.371       208    0.148403    0.283145
    6      16       349       333    221.97       240    0.170854    0.277248
    7      16       415       399    227.97       264    0.413298    0.278011
    8      16       466       450   224.972       204    0.466315    0.280022
    9      16       523       507   225.306       228    0.143453    0.278555
   10      16       582       566   226.373       236    0.312861    0.278628
   11      16       635       619   225.064       212   0.0979489    0.280134
   12      16       695       679   226.307       240    0.361974    0.280023
   13      16       755       739   227.357       240    0.166233    0.276538
   14      16       829       813   232.257       296    0.230743     0.27495
   15      16       885       869   231.705       224    0.313543    0.274029
   16      16       940       924   230.971       220   0.0902356    0.273401
   17      16      1008       992   233.383       272    0.106616    0.272294
   18      16      1064      1048    232.86       224   0.0932419      0.2722

Code:
root@central-01:/mnt/ceph# dd if=/dev/zero of=test.img bs=1M count=10240 conv=fdatasync

10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 35.4724 s, 303 MB/s

root@central-01:/mnt/ceph# sync; echo 3 > /proc/sys/vm/drop_caches

root@central-01:/mnt/ceph# dd of=/dev/zero if=test.img bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 35.1657 s, 305 MB/s

Code:
root@central-01:/# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
 -1       4.54597 root default
 -3       0.90919     host ceph-01
  0   hdd 0.90919         osd.0        up  1.00000 1.00000
 -5       0.90919     host ceph-02
  1   hdd 0.90919         osd.1        up  1.00000 1.00000
 -7       0.90919     host ceph-03
  2   hdd 0.90919         osd.2        up  1.00000 1.00000
 -9       0.90919     host ceph-04
  3   hdd 0.90919         osd.3        up  1.00000 1.00000
-11       0.90919     host ceph-05
  4   hdd 0.90919         osd.4        up  1.00000 1.00000

Code:
ceph-05:/home/paradox # free -m
              total        used        free      shared  buff/cache   available
Mem:            718         252         262           9         203         338
Swap:             0           0           0
 
