2-node cluster, Ceph, manual HA - questions?

tytanick

Feb 25, 2013
Hello guys, I have a few questions.

I have a 2-node Proxmox cluster on the newest PVE 3.2, with no external storage, just free available space on each node as /dev/mapper/pve-data.

0. Is Ceph the best way to do shared storage on the drives that are in node1 and node2, with no external NAS at all?
Some told me about GFS2, but why not use Ceph if it is already integrated and I finally know how to use it :P

1. Creating an OSD
Can I create an OSD from the available free space in my /dev/mapper/pve-data, or does it need to be a raw disk like /dev/sdb with no partitions at all?
I would like to use the free space from the current /dev/mapper/pve-data - how can I do that?
Can I alternatively use another LVM partition for the OSD?
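
From what I read in the wiki, pveceph seems to want a whole unused disk, e.g.:

# pveceph createosd /dev/sdb

while what I would like is to carve a logical volume out of the existing pve VG instead - something like this (size/name are just an example, and I am not sure an LV is even supported here):

# lvcreate -L 50G -n ceph-osd0 pve
# pveceph createosd /dev/pve/ceph-osd0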

2. RAID in Ceph?
On each node I made a second disk, /dev/vdb with 10GB of space, to test Ceph.
Now I made an OSD; the journal took 5GB, so only 5GB of space is left - that's normal and OK.
http://scr.hu/2o2c/somli
I added this RBD shared storage to the resources:
http://scr.hu/2o2c/08qzp
And I see that the available space is 10GB; I think it should be 5GB, as this should act like RAID1, right?

So I have 2 OSDs of 5GB and 5GB, and now I have 10GB of space?
I think it should be 5GB, so that if one node burns, my data will persist like in RAID1, right?
Here is a screenshot of my pool - tell me, what am I doing wrong?
I want this Ceph storage to be like RAID1, so that each node holds the FULL data.
http://scr.hu/2o2c/3suqd
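
If I understand the docs right, the GUI shows the raw cluster capacity, and the number of copies is a per-pool setting ("size"), so I guess the RAID1-like behaviour would come from something like this (pool name "rbd" is just the default, mine may differ):

# ceph osd pool get rbd size
# ceph osd pool set rbd size 2
# ceph osd pool set rbd min_size 1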

Another question: should I create LVM on top of the created RBD storage? Why?
How can I access the files created on the RBD resource - in case I want to upload a VM image manually?
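
Is mapping the image locally the right way to get at it? E.g. something like this (image name is just an example):

# rbd ls rbd
# rbd map rbd/vm-100-disk-1

and then the image should show up as a local block device like /dev/rbd0?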

And is this normal? http://scr.hu/2o2c/a3qm7
512 pgs degraded, stuck unclean, 50% of objects degraded?
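
For diagnosing this, I guess these are the standard status commands to look at:

# ceph -s
# ceph health detail
# ceph osd tree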

3. Manual HA
I made a 2-node Proxmox cluster with Ceph shared storage, and as I see it is not recommended to use HA on such a setup, so I just want the possibility of manually starting the VMs on node B if node A goes down.
When all VMs are running on node A and none on node B, and node A goes down, I will see that and will want to do a manual migration - but I can't, because node A is unavailable ("no route") :)
How can I do that?

Thanks for any answers :)
 
I would not recommend Ceph as shared storage on a two-node cluster (Ceph needs at least 3 nodes).
Better try DRBD for shared storage in such a config (search the Proxmox wiki for "two node ha cluster").

You can manually migrate VMs if node A goes down by moving the VM config files from /etc/pve/nodes/proxmoxA/qemu-server to the same path on proxmoxB.
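
A rough example, assuming the VM has VMID 100 (adjust to your IDs):

# mv /etc/pve/nodes/proxmoxA/qemu-server/100.conf /etc/pve/nodes/proxmoxB/qemu-server/
# qm start 100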
 
Actually, when I took node1 down and tried to move a VM config file from /etc/pve/nodes/node1/qemu-server to node2/qemu-server, I got "permission denied".
So how can I start the VM on node2 if I can't copy the config?

How should I do a proper MANUAL migration from the dead node1 to the online node2?

DRBD is working OK.
 
How should I do a proper MANUAL migration from the dead node1 to the online node2?

First, make sure that node1 is really down. After that, you can set the expected votes to 1:

# pvecm expected 1

Then you will regain quorum and /etc/pve will be writable again.
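
Putting it together, a manual failover would then look roughly like this (VMID 100 is just an example):

# pvecm expected 1
# mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/
# qm start 100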
 
