Has anyone tested a failure with Ceph 0.94.9 using 3 replicas under Proxmox 4.4? I have a test environment set up with 4 Proxmox 4.4 nodes (clustered), each with 2 OSDs (8 total). Each OSD is 256 GB, and there are 3 monitors. A pool named test is configured for 3/3 with 256 PGs. 3 VMs are in use, each on a 100 GB RAW disk.
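For reference, the pool was created with something along these lines (a rough sketch using the standard ceph CLI; the exact invocation may have differed, but the pool name test and the 256 PG count match the setup above):

#!/usr/bin/env python
# Rough sketch of how the 3/3 test pool gets configured (ceph 0.94.x CLI,
# run as root on one of the Proxmox nodes). Pool name and PG count match
# the setup described above.
import subprocess

commands = [
    "ceph osd pool create test 256 256",  # pg_num / pgp_num = 256
    "ceph osd pool set test size 3",      # keep 3 replicas
    "ceph osd pool set test min_size 3",  # require all 3 replicas for I/O
]

for cmd in commands:
    subprocess.check_call(cmd.split())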
After Ceph is reported as healthy, I pull 2 OSDs on any 2 random nodes. After a bit of recovery, 21 PGs are undersized (not enough replicas exist). Just curious if anyone has seen similar results in testing.
OSDs
          In    Out
  Up       6      0
  Down     0      2
  Total: 8

PGs
  active+clean:                 226
  active+remapped:                9
  undersized+degraded+peered:    21
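For what it's worth, after pulling the drives I check the pool settings and the stuck PGs with something along these lines (a rough sketch; it assumes the ceph CLI and the admin keyring are available on the node):

#!/usr/bin/env python
# Rough sketch of the post-failure checks (ceph 0.94.x CLI on one node).
# Assumes the admin keyring is readable; "test" is the pool from the setup above.
import subprocess

def ceph(*args):
    # Run a ceph CLI subcommand and return its text output.
    return subprocess.check_output(("ceph",) + args).decode()

# Confirm the pool's replication settings (expecting size=3, min_size=3).
print(ceph("osd", "pool", "get", "test", "size").strip())
print(ceph("osd", "pool", "get", "test", "min_size").strip())

# Print the health lines for PGs stuck undersized or degraded after the failure.
for line in ceph("health", "detail").splitlines():
    if "undersized" in line or "degraded" in line:
        print(line)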