destroy old CEPH Pool

Ronny
Sep 12, 2017
Hello everybody,

today we want to destroy our old Ceph pool with old SSDs.

First we set all SSDs in this pool (cons_level_3) to OUT, then STOP, then DESTROY.
All these SSDs in this pool have their own replication rule!
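For reference, the OUT → STOP → DESTROY steps roughly correspond to these commands on a Proxmox node. This is a dry-run sketch that only prints the commands instead of executing them; the OSD IDs are examples taken from the "last acting" column in the health output below and will differ on other clusters:

```shell
# Dry run: print the per-OSD teardown commands instead of executing them.
# OSD IDs here are examples from the health output in this thread.
for id in 69 70 71 73; do
    echo "ceph osd out $id"                   # stop placing data on the OSD
    echo "systemctl stop ceph-osd@$id"        # stop the OSD daemon
    echo "pveceph osd destroy $id --cleanup"  # remove the OSD and wipe the disk
done
```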

After that we destroyed the Ceph pool - and now the task has been stuck for about 30 minutes at "checking storage 'cons_level_3' for RBD images.." and nothing happens.

ceph health detail says something about reduced data availability - I think that's OK, since we want to destroy this pool for good anyway.

ceph health detail
HEALTH_WARN Reduced data availability: 512 pgs inactive, 512 pgs stale; Degraded data redundancy: 2/10790277 objects degraded (0.000%), 1 pg degraded, 512 pgs undersized
[WRN] PG_AVAILABILITY: Reduced data availability: 512 pgs inactive, 512 pgs stale
pg 8.c0 is stuck inactive for 30m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.c2 is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.c3 is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.d0 is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.d1 is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.d2 is stuck stale for 30m, current state stale+undersized+remapped+peered+wait, last acting [71]
pg 8.d3 is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.d4 is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.d5 is stuck stale for 30m, current state stale+undersized+remapped+peered, last acting [71]
pg 8.d6 is stuck stale for 30m, current state stale+undersized+remapped+peered, last acting [70]
pg 8.d7 is stuck stale for 30m, current state stale+undersized+remapped+peered, last acting [71]
pg 8.d8 is stuck stale for 30m, current state stale+undersized+remapped+peered+wait, last acting [71]
pg 8.d9 is stuck stale for 30m, current state stale+undersized+remapped+peered, last acting [73]
pg 8.da is stuck stale for 30m, current state stale+undersized+remapped+peered+wait, last acting [71]
pg 8.db is stuck stale for 30m, current state stale+undersized+remapped+peered, last acting [71]
pg 8.dc is stuck stale for 30m, current state stale+undersized+remapped+peered, last acting [70]
pg 8.dd is stuck stale for 29m, current state stale+undersized+peered, last acting [69]
pg 8.de is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
pg 8.df is stuck stale for 29m, current state stale+undersized+remapped+peered, last acting [69]
....


Our other pools are still fine, but a restore job says at the end:

total bytes read 53687091200, sparse bytes 39551176704 (73.7%)
space reduction due to 4K zero blocks 3.57%
rescan volumes...


So what can we do?
Should we just wait, or is something stuck in progress?

Any suggestions?


Thanks a lot for any information on this sunny Saturday :)

regards
Ronny
 
haha - we found it:

# pveceph pool destroy cons_level_3 -remove_storages --force

and instantly the old pool was gone and everything is fine... omg
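For anyone hitting the same hang, the two options used above are documented for `pveceph pool destroy`. A dry-run sketch that only prints the full command (pool name taken from this thread):

```shell
POOL="cons_level_3"  # pool name from this thread; adjust for your cluster
# --remove_storages also removes the matching Proxmox storage definition;
# --force proceeds even if safety checks (e.g. the RBD image scan) object.
echo "pveceph pool destroy $POOL --remove_storages --force"
```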


regards

Ronny
 