Discussion in 'Proxmox VE: Installation and configuration' started by Gilberto Ferreira, Sep 12, 2014.
Is that possible? I mean, in the Proxmox way...
It's possible, but it makes no sense. The minimum is 3 nodes.
Thanks for the answer, Tom... Can you point me to some docs? I tried it through the web interface, but no way! I tried creating a pool with size/min like 2/1, but after powering off node 2, Ceph stopped working...
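For reference, the size/min settings described above can also be applied from the command line on a node with a working Ceph admin keyring. This is only a sketch: the pool name "testpool" and the PG count of 128 are placeholder values, not from this thread.

```
# Hypothetical pool "testpool" with 128 placement groups (example values)
ceph osd pool create testpool 128
ceph osd pool set testpool size 2      # keep 2 copies of each object
ceph osd pool set testpool min_size 1  # allow I/O while only 1 copy is available
```

Note that size/min_size only controls data replication; as Tom points out below, losing monitor quorum still stops the cluster regardless of these settings.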
Thanks a lot
If you run 2 Ceph monitors, you will lose quorum if one dies.
Hum... I see... But can I use the cluster quorum as a basis to manage Ceph?
Sorry, I do not understand that question. But it is quite simple - you need 3 nodes to run a ceph server.
Perhaps I did not make myself clear:
Can I use the cluster quorum configuration to provide quorum to a Ceph server, or are cluster quorum and Ceph quorum different things?
No. I suggest you start reading here - http://pve.proxmox.com/wiki/Ceph_Server#Further_readings_about_Ceph
It makes no sense to help with a useless setup and then point others to it. If you want Ceph, use at least 3 nodes.
It is also possible to experiment with virtual Proxmox VE Ceph servers - especially if you do not have 3 nodes for testing.
Would 2 (nodes with) OSDs and 3 (nodes with) MONs work?
Yes, of course. If quorum is present, it will work.
If you don't need HA, even two nodes will work. But in that case, if one of the two nodes goes down, the other node won't work either. Quorum means more than half of the nodes - that is 2 nodes for a 2-node cluster (and also 2 nodes for a 3-node cluster, for example). A 3-node cluster can be HA, but a 2-node cluster cannot.
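The majority rule described above is just integer arithmetic - floor(n/2) + 1 monitors must be up. A quick sketch of why 2 monitors tolerate zero failures while 3 tolerate one:

```shell
#!/bin/sh
# Quorum is a strict majority: floor(n/2) + 1 nodes must be reachable.
for n in 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "nodes=$n quorum_needed=$quorum can_lose=$(( n - quorum ))"
done
# nodes=2 quorum_needed=2 can_lose=0
# nodes=3 quorum_needed=2 can_lose=1
# nodes=4 quorum_needed=3 can_lose=1
# nodes=5 quorum_needed=3 can_lose=2
```

This is also why odd monitor counts are preferred: going from 3 to 4 monitors adds no extra failure tolerance.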
Three MONs and 2 OSD servers will work and give you a bit of HA.
You need to set size "2" on your pool,
so you have a second copy of your data.
It's not perfect and not recommended, but it works well.
You have to check carefully that your OSDs have enough space left to take over copies from other OSDs if some fail.
I used this for a while; I could restart an OSD server without stopping the VMs on the other nodes.
But now we have a three-node setup...
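The free-space caveat above can be checked from the command line. With size 2 on only two OSD nodes, each node must be able to hold a full copy of all data if the other fails, so utilization should stay well under 50%:

```
ceph osd df   # per-OSD usage, weight, and variance
ceph df       # cluster-wide and per-pool usage
```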
Got it working after some fights, trials and errors due to my network config... which pveceph DOES NOT LIKE if you have anything special.
RTFM is correct... but slightly obscure. The secret weapon is to manually edit ceph.conf and explicitly give all the values needed: public network / cluster network in [global] (2 sets of 3 addresses/32) and in the [osd.x] sections (2 sets of 2 addresses).
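A rough sketch of what such a hand-edited ceph.conf can look like. All hostnames and addresses here are placeholders, not the poster's actual values; the exact subnets and /32 entries depend on your own network layout:

```ini
[global]
  public network  = 10.10.10.0/24   # client/monitor traffic (example subnet)
  cluster network = 10.10.20.0/24   # OSD replication traffic (example subnet)

[osd.0]
  host = node1                      # hypothetical node name
  public addr  = 10.10.10.1
  cluster addr = 10.10.20.1

[osd.1]
  host = node2
  public addr  = 10.10.10.2
  cluster addr = 10.10.20.2
```

Pinning `public addr` / `cluster addr` per OSD avoids Ceph guessing the wrong interface on multi-homed hosts.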
Hi Mark, could you please give us more information about why having only two OSD nodes would not give you full HA? If I understand correctly...
If one node goes down, all PG copies will be placed on the remaining node.
This can result in very high load for a long time and possibly (too) full disks.
If a disk then dies, your data could be lost or hard to repair.
(For maintenance you can set your OSDs to "noout".)
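The noout flag mentioned above is set cluster-wide from any node with admin access; it stops Ceph from marking down OSDs as "out", so no rebalancing starts during a planned reboot:

```
ceph osd set noout     # suppress re-replication while OSDs are down
# ... do maintenance, reboot the node ...
ceph osd unset noout   # return to normal recovery behaviour
```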