Run ceph -s or ceph health detail from node 2 or 3 and see what it shows.
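For reference, both commands can be run from node 2 or node 3 (they are still in the monitor quorum, so they answer even with node 1 down):

ceph -s              # one-shot summary of the whole cluster state
ceph health detail   # same HEALTH_WARN, but listing every affected PG and OSD individually

The output from both nodes is pasted below.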
From node 2:
root@demo2:~# ceph health
HEALTH_WARN 256 pgs degraded; 256 pgs stale; 256 pgs stuck unclean; recovery 3/6 objects degraded (50.000%); 4/12 in osds are down; 1 mons down, quorum 1,2 1,2
root@demo2:~# ceph -s
2015-01-07 22:53:57.799829 7f8219c71700 0 -- :/1015685 >> 192.168.1.201:6789/0 pipe(0x1ddb180 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x1ddb410).fault
  cluster 6bbb954a-8c42-4d70-898d-6e6f8c69c429
   health HEALTH_WARN 256 pgs degraded; 256 pgs stale; 256 pgs stuck unclean; recovery 3/6 objects degraded (50.000%); 4/12 in osds are down; 1 mons down, quorum 1,2 1,2
   monmap e3: 3 mons at {0=192.168.1.201:6789/0,1=192.168.1.202:6789/0,2=192.168.1.203:6789/0}, election epoch 18, quorum 1,2 1,2
   osdmap e64: 12 osds: 8 up, 12 in
    pgmap v185: 256 pgs, 4 pools, 16 bytes data, 3 objects
          405 MB used, 36326 MB / 36731 MB avail
          3/6 objects degraded (50.000%)
               256 stale+active+degraded
/////////////////////////////////////////////////////////////
From node 3:
root@demo3:~# ceph health
HEALTH_WARN 256 pgs degraded; 256 pgs stale; 256 pgs stuck stale; 256 pgs stuck unclean; recovery 3/6 objects degraded (50.000%); 1 mons down, quorum 1,2 1,2
root@demo3:~# ceph -s
2015-01-07 22:57:00.285642 7f69f6ba0700 0 -- :/1012409 >> 192.168.1.201:6789/0 pipe(0x1f08180 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x1f08410).fault
  cluster 6bbb954a-8c42-4d70-898d-6e6f8c69c429
   health HEALTH_WARN 256 pgs degraded; 256 pgs stale; 256 pgs stuck stale; 256 pgs stuck unclean; recovery 3/6 objects degraded (50.000%); 1 mons down, quorum 1,2 1,2
   monmap e3: 3 mons at {0=192.168.1.201:6789/0,1=192.168.1.202:6789/0,2=192.168.1.203:6789/0}, election epoch 18, quorum 1,2 1,2
   osdmap e66: 12 osds: 8 up, 8 in
    pgmap v188: 256 pgs, 4 pools, 16 bytes data, 3 objects
          269 MB used, 24218 MB / 24487 MB avail
          3/6 objects degraded (50.000%)
               256 stale+active+degraded
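Going by the output above, it looks like mon.0 at 192.168.1.201 is unreachable (hence the "pipe ... fault" lines and "1 mons down, quorum 1,2 1,2"), and several OSDs are down as well. If it helps, a few more standard Ceph commands (just a suggestion, output not captured here) would show exactly which daemons are affected:

ceph health detail        # lists each down OSD and each stale/degraded PG by id
ceph osd tree             # up/down state of all 12 OSDs, grouped by host
ceph mon stat             # which monitors are in or out of quorum
ceph pg dump_stuck stale  # the PGs currently stuck in the stale state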
Regards