ceph osd always down

nttec

Renowned Member
Jun 1, 2016
I've recently been trying to learn Ceph clustering with Proxmox, but unfortunately I can't seem to get past an error I keep encountering. I hope someone here can give me a clear answer or a solution to my problem.

Here's the problem I've been encountering for the past few days. Can anyone please point out where I made a mistake?


$ ceph health
HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs stuck inactive

$ ceph osd tree

ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.12778 root default
-2 0.06389     host ceph03
 0 0.06389         osd.0      down        0          1.00000
-3 0.06389     host ceph04
 1 0.06389         osd.1      down        0          1.00000



ceph.conf:


[global]

fsid = 0d20360e-92d3-4ded-8b73-2cbe4ce42ac0

mon_initial_members = ceph02

mon_host = 192.168.0.4

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

public_network = 192.168.0.0/24


[osd]

osd journal size = 10000


[osd.0]

host = ceph03


[osd.1]

host = ceph04
 
$ ceph pg stat

v8: 64 pgs: 64 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
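In case it helps anyone reading along, a typical first step with OSDs stuck in the down state is to check whether the OSD daemons are actually running on their hosts and what they log. A sketch, assuming systemd-managed Ceph on the OSD nodes (the unit numbers match the osd.0/osd.1 IDs above):

```shell
# On each OSD host, check whether the OSD daemon is running.
systemctl status ceph-osd@0.service     # on ceph03
systemctl status ceph-osd@1.service     # on ceph04

# If a daemon failed to start, the journal usually names the cause
# (e.g. a filesystem, journal, or authentication problem).
journalctl -u ceph-osd@0.service --no-pager | tail -n 50

# From the monitor host, confirm what the cluster itself sees.
ceph osd stat
ceph health detail
```
These commands require a live cluster, so treat them as a checklist rather than a script to paste wholesale.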
 
Hi,
you need three hosts with OSDs (and three monitors as well).
The default replica count is 3 (for good reason), and without three OSDs your PGs can't be written...

Weight 0.06389: are you using 64 GB disks for Ceph??

Udo
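For a two-node lab setup like this (not something to rely on for real data), a commonly used workaround is to lower the pool's replica count so PGs can go active with only two OSDs. A sketch, assuming the default "rbd" pool of that Ceph generation; adjust the pool name if yours differs:

```shell
# Allow the test pool to be satisfied by two replicas instead of three.
ceph osd pool set rbd size 2
# Allow I/O to continue even if only one replica is available.
ceph osd pool set rbd min_size 1

# Verify the new settings took effect.
ceph osd pool get rbd size
ceph osd pool get rbd min_size
```
Note this only helps once the OSDs themselves are up; it does nothing for daemons that fail to start.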
 


I was following this video on how to set up a Ceph cluster. Can you recommend a better setup and a how-to guide for it?
 


Is it required that the filesystem be XFS and not ext4?

Some say that I need to change my filesystem to XFS to make it work.

In the error log they found a problem: "File System is too long"

and so they recommended changing my filesystem from ext4 to XFS.
 
Hi,
ext4 works (you need one setting in ceph.conf), but ext4 support was dropped by Ceph a few weeks ago...

Because of this, XFS is highly recommended!

Udo
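If I remember right, the ceph.conf setting Udo is referring to is the reduced object-name length limit: ext4's extended-attribute limits cannot hold the long object names FileStore generates by default, which is what produces the "name too long" errors on OSD start. A sketch of the [osd] section under that assumption (Jewel-era option names):

```
[osd]
# Shorten object names so FileStore fits within ext4's limits.
osd max object name len = 256
osd max object namespace len = 64
```
Restart the OSD daemons after adding these lines. Given that upstream dropped ext4 support, reformatting the OSD disks as XFS is still the safer path.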