Overhaul of Proxmox Ceph storage documentation

Just curious why the minimum requirement is 3 servers, when it can be done easily with 2 servers and no additional tweaking.

Also, after zapping disks with #ceph-deploy disk zap node:vda, it is necessary to prepare the disk with #ceph-deploy osd prepare node:vda. Only after that can you activate it with #ceph-deploy osd activate node:vda1.

Regardless of whether one wants to change the journal path or not, #ceph-deploy osd prepare has to run first. At least in my case that is how it happened; I could not jump straight to activate after the zap.
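To illustrate, the full sequence on my node looked roughly like this (node and vda are just placeholders for the hostname and device in my test setup):

#ceph-deploy disk zap node:vda
#ceph-deploy osd prepare node:vda
#ceph-deploy osd activate node:vda1

Note that activate is pointed at the data partition (vda1) that prepare created, not at the raw disk.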
 
I think the writer assumes an enterprise setup.

Ah, that would make sense. Maybe additional info like "For testing purposes, a minimum of 2 servers can be used" could be added?

Just thinking that for somebody wanting to jump into Ceph for the first time, a strict minimum requirement of 3 servers might be a turn-off.
 
Maybe additional info like "For testing purposes, a minimum of 2 servers can be used" could be added?

AFAIK Ceph needs at least 3 servers, else you lose quorum as soon as one server goes down.
If you do not care about that, you could also test with a single server?
 
The way I see it, you'd still want 3 servers even for testing. If you want to test the architecture, you can do that perfectly fine with 3+ virtualized Ceph nodes. If you want a performance test, you should have 3 nodes, or a significant number of disks (8+) on a 2-node setup, to get even remotely useful test results.
 
Just curious why the minimum requirement is 3 servers, when it can be done easily with 2 servers and no additional tweaking.

Also, after zapping disks with #ceph-deploy disk zap node:vda, it is necessary to prepare the disk with #ceph-deploy osd prepare node:vda. Only after that can you activate it with #ceph-deploy osd activate node:vda1.

Regardless of whether one wants to change the journal path or not, #ceph-deploy osd prepare has to run first. At least in my case that is how it happened; I could not jump straight to activate after the zap.

You are right, I wasn't completely correct in that section. There is actually a newer way to zap/prepare/activate all in one step. I updated that, and also added some customization tips relevant to using Ceph with Proxmox.
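For anyone reading along, the one-step variant I mean is the osd create subcommand, which wraps prepare and activate; if I remember the flag correctly, it can also zap the disk first, along these lines:

#ceph-deploy osd create node:vda
#ceph-deploy osd create --zap-disk node:vda

The node:disk (and optional :journal) syntax is the same as for prepare, so a custom journal path can still be given there.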
 
AFAIK Ceph needs at least 3 servers, else you lose quorum as soon as one server goes down.
If you do not care about that, you could also test with a single server?

I think we need a little bit more clarification here about what "node" we are talking about. To set up a Ceph cluster we need, at the very minimum, 3 Monitor (MON) nodes, 1 Admin node and 1 OSD node with several HDDs in it. That is a total of 5 physical servers. To save cash while learning, the 3 MONs and the Admin node can be virtualized on a Proxmox cluster using local storage, with a separate physical server set up as the OSD node.
When I say one can set up Ceph with a minimum of 1 or 2 servers, in my mind I am only thinking of the OSD nodes. Besides putting the MONs in VMs, it is also possible to set up MON, Admin and OSD roles all on one physical node. In that case the bare minimum requirement "is" 3 servers, because you still need 3 MONs and each node can only run one MON.

So in a way I am wrong and correct at the same time. We need a minimum of 3 servers for a basic production-grade cluster if the MONs are not set up in VMs, and only one extra node if using an existing Proxmox cluster. OSD nodes don't have quorum; it is the MON nodes that badly need quorum. Since the MONs need a majority to form quorum, an odd number is recommended, hence the minimum of 3 MON nodes.

And yes, one physical machine can be set up with Admin, MON and OSD roles purely for testing purposes. In this scenario, obviously, if the machine needs a reboot, everything reboots at the same time. But if somebody is extremely tight on test hardware, a one-node Ceph cluster is very much possible.
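To make the 3-MON point concrete: with ceph-deploy you would normally list all three monitor hosts when creating the cluster, something like this (mon1/mon2/mon3 standing in for whatever your three nodes are called):

#ceph-deploy new mon1 mon2 mon3
#ceph-deploy mon create-initial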
 
I think we need a little bit more clarification here about what "node" we are talking about. To set up a Ceph cluster we need, at the very minimum, 3 Monitor (MON) nodes, 1 Admin node and 1 OSD node with several HDDs in it. That is a total of 5 physical servers. To save cash while learning, the 3 MONs and the Admin node can be virtualized on a Proxmox cluster using local storage, with a separate physical server set up as the OSD node.
When I say one can set up Ceph with a minimum of 1 or 2 servers, in my mind I am only thinking of the OSD nodes. Besides putting the MONs in VMs, it is also possible to set up MON, Admin and OSD roles all on one physical node. In that case the bare minimum requirement "is" 3 servers, because you still need 3 MONs and each node can only run one MON.

So in a way I am wrong and correct at the same time. We need a minimum of 3 servers for a basic production-grade cluster if the MONs are not set up in VMs, and only one extra node if using an existing Proxmox cluster. OSD nodes don't have quorum; it is the MON nodes that badly need quorum. Since the MONs need a majority to form quorum, an odd number is recommended, hence the minimum of 3 MON nodes.

And yes, one physical machine can be set up with Admin, MON and OSD roles purely for testing purposes. In this scenario, obviously, if the machine needs a reboot, everything reboots at the same time. But if somebody is extremely tight on test hardware, a one-node Ceph cluster is very much possible.

How about this: "For a production system you need 3 servers minimum. For testing you can get by with less, although you may be unable to properly test all the features of the cluster."

This informs users that a reliable system will need at least 3 servers, but if you are just testing you can certainly use fewer. I also think it is an accurate statement that you cannot properly test all the features of a cluster with fewer than 3 nodes (for example: resiliency of the cluster after a MON failure).
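For that kind of resiliency test, by the way, stopping one MON and then asking the cluster for its quorum state is usually enough, e.g.:

#ceph mon stat
#ceph quorum_status
#ceph -s

With 3 MONs, the remaining 2 still form a majority and the cluster stays usable; with only 2, the surviving MON cannot form quorum on its own.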
 
How about this: "For a production system you need 3 servers minimum. For testing you can get by with less, although you may be unable to properly test all the features of the cluster."

This informs users that a reliable system will need at least 3 servers, but if you are just testing you can certainly use fewer. I also think it is an accurate statement that you cannot properly test all the features of a cluster with fewer than 3 nodes (for example: resiliency of the cluster after a MON failure).
Yep, I totally agree with that. This should open doors for brand-new users who want to learn.
 
This is also my conclusion ;) The only thing missing in the ZFS universe is a cluster solution for HA, so I will await that moment with joy. If too much time passes I might have to do the job myself :cool:
 
Such a thing already exists: http://docs.oracle.com/cd/E19575-01/820-7359/gbspx/ (ZFS HA)

I also wonder why you consider Ceph to be slow. Inktank has tested a fairly bog-standard installation on a Supermicro chassis with 24 spinners on 4 controllers and 8 SSDs over 10Gb Ethernet, and was easily able to max out the 10Gb Ethernet with 2GB/s reads and ~1.8GB/s writes. Now, this is a 4-thread performance test, so maybe not representative of a regular workday scenario, but still... I wouldn't think you'd need more than 10Gbit unless you're operating something HPC-related, which makes it unlikely you'd run virtualization...
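If anyone wants to reproduce that kind of 4-thread test on their own cluster, rados bench is the usual tool; something along these lines, with the pool name ("rbd" here) adjusted to your setup:

#rados bench -p rbd 60 write -t 4 --no-cleanup
#rados bench -p rbd 60 seq -t 4
#rados -p rbd cleanup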
 
HAStoragePlus comes with a huge license cost.

I did not say Ceph is slow; I simply stated that ZFS is faster and provides more IOPS.
 
I also wonder why you consider Ceph to be slow. Inktank has tested a fairly bog-standard installation on a Supermicro chassis with 24 spinners on 4 controllers and 8 SSDs over 10Gb Ethernet, and was easily able to max out the 10Gb Ethernet with 2GB/s reads and ~1.8GB/s writes. Now, this is a 4-thread performance test, so maybe not representative of a regular workday scenario, but still... I wouldn't think you'd need more than 10Gbit unless you're operating something HPC-related, which makes it unlikely you'd run virtualization...

I do recall reading about that test some time ago when I was doing research on Ceph. I was trying to push for similar speed, but soon realized my hardware was the limitation, and that in day-to-day operations that kind of performance is somewhat unrealistic. On a mega storage cluster where HDDs rule, converting everything to SSD is not going to happen any time soon. Also, a fully decked-out server node built with speed in mind will cost many dollars. In extreme cases that speed gain is possible, but for the most part Ceph is somewhat slower than ZFS. ZFS heavily uses memory as cache but lacks HA (we are only talking about out-of-the-box economical solutions). In the end, all things considered, Ceph leads in the redundancy area and ZFS in performance.
I will add, though, that I live my life with Ceph and am not moving away from it any time soon, despite its "slow" performance.
 
