Ceph problem when master node is out

Wasim

"I make all the tests via Oracle VirtualBox."
I too set up my very first Ceph cluster in a virtual environment, so I know it works there.

From all the info you have provided, it is apparent to me that there is something going on with your cluster installation. It is not installing/configuring as it should. The weight should not be 0, and storage.cfg does not have the entry it should.
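For reference, on a working Proxmox/Ceph setup you would see something like the following; the storage name, pool, and monitor IPs below are only examples, not values from your cluster:

    # every OSD should show a non-zero CRUSH weight
    ceph osd tree

    # /etc/pve/storage.cfg should contain an RBD entry roughly like this
    rbd: ceph-rbd
         monhost 10.10.10.1;10.10.10.2;10.10.10.3
         pool rbd
         content images
         username admin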

The only thing I can suggest now is to let one of us remote access your computer to really see what is going on. I am set up with TeamViewer. You can download TeamViewer QuickSupport from www.teamviewer.com, then email me the Partner ID and password given by TeamViewer after installation.

From all the postings in this thread I don't see a valid reason why it is not working for you.
 

OK Wasim,
your turn ;-)

Udo
 
@Konstantinos
Not sure if you sent me that email with the TeamViewer ID or not, but I did not receive anything. If you have not, send it to wahmed@symmcom.com.



@Udo
If I cannot make heads or tails of it, I will hand it over to you. :) You are next in line.
 
@Udo,
As we suspected, the weight of the OSDs was the only cause of this issue. For reasons unknown, all 12 OSDs had a weight of 0, so no PGs were created even though all OSDs showed as up and in. All 256 PGs were stuck and degraded. As soon as I reweighted them, all PGs were created, and shutting down node 1 did not affect the Ceph cluster at all.
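In case anyone wants to check the same state on their own cluster, these standard commands show it (the exact output will of course differ):

    ceph -s              # overall status, shows degraded/stuck PGs
    ceph health detail   # lists which PGs are stuck and why
    ceph osd tree        # up/in state and the CRUSH weight of every OSD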

I suspect it was the combination of VirtualBox and the small 8 GB virtual disk images for the OSDs that caused the weight of 0. I manually set the weights to 0.006. Konstantinos installed the same setup several times, following the Proxmox wiki to the letter, but the weight was 0 every time without any manual intervention.
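For anyone who hits the same thing, a CRUSH weight can be set by hand per OSD, roughly like this for 12 OSDs (the 0.006 value simply matches these small test disks; adjust it to your disk size):

    for i in $(seq 0 11); do
        ceph osd crush reweight osd.$i 0.006
    done
    ceph -s    # watch the PGs peer and become active+clean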

But... it is all working now.
 

Hi Wasim,
we would all have more time if people read the posts: http://forum.proxmox.com/threads/20700-Ceph-problem-when-master-node-is-out?p=105666#post105666

Nevertheless - yes, 8 GB for a disk is too little. You need 10 GB to get a weight of 0.01!
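Just for completeness, the arithmetic behind this, as far as I know: the automatically assigned CRUSH weight is the OSD's usable size expressed in TiB (1 TiB = 1024 GB), and very small values end up cut down to 0.

    8 GB  / 1024 ≈ 0.008 TiB  -> ends up as weight 0
    10 GB / 1024 ≈ 0.010 TiB  -> ends up as weight 0.01

An OSD with weight 0 never gets any PGs mapped to it, which is exactly the stuck/degraded state described above.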

Udo