Ceph pool created using .mgr

Gilberto Ferreira

Renowned Member
Hi
What happens if someone creates a pool using the reserved .mgr pool in Ceph?
Someone already did it, and now I just can't remove the pool. I want to know whether this is normal or not.
To my understanding, the .mgr pool is reserved for Ceph's internal use, isn't it?
 
Hi,

yes, that is not ideal. The .mgr pool uses only one PG by default, which makes it a poor choice for VM disks. On newer PVE versions it should actually no longer be possible to add it as an RBD storage through the GUI. What error message do you get when trying to remove the pool? Is it in use by some VMs?
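For reference, Ceph disables pool deletion by default, which is a common reason removal fails. A hedged sketch of how to check whether the pool is in use and then remove it (stock Ceph CLI; run on a cluster node):

```shell
# List RBD images in the pool -- any output means VM disks still live there
rbd -p .mgr ls

# Pool deletion is disabled by default; enable it explicitly first
ceph config set mon mon_allow_pool_delete true

# Remove the pool (the name is given twice as a safety measure)
ceph osd pool rm .mgr .mgr --yes-i-really-really-mean-it

# Turn deletion protection back on afterwards
ceph config set mon mon_allow_pool_delete false
```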
 
Hi Stefan
Thanks for the prompt answer.
Well... I did not create the pool or do the installation at all.
I am a little in the dark, since another person set everything up and, I am sorry to say, made a lot of misconfigurations.
Now there are a lot of warnings on the Ceph screen. (I can't send any screenshots at the moment!)
And to be honest, things have started to slow down!
I cannot remove the pool because, as you already said, it is in use!
This .mgr pool was made with SATA HDDs! :rolleyes:
I have created another pool with some NVMe disks that were available, but there is still not enough free space to move the VM disks from .mgr to this new NVMe storage.
But even if the NVMe storage had enough space, I suppose that when I destroy the .mgr pool something really bad will happen.
For now I am trying to move the VM disks to another storage, local or whatever.

Thanks for any light!
 
Ideally you'd first remove the pool as a storage. If there are any VMs using the pool, can you move their disks to another storage, at least temporarily? It does not necessarily need to be a Ceph-backed one. You should be able to do so by selecting the VM and navigating to the Hardware tab. There, select the disk you want to move and click "Disk action" on the top bar. It should have an option to move the disk to another storage. After you've done that there will be an unused disk left; delete that one. Once you've done that for all VMs on the affected storage, remove that RBD storage in the Datacenter configuration.

After that you should be able to create a new RBD pool and add that as a storage. You can move the disks back in place once you've done that.
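The same steps can also be done on the CLI; a hedged sketch, where the VM ID 100, the disk name scsi0, and the storage names are placeholders for your own setup:

```shell
# Move disk scsi0 of VM 100 to the storage "local-lvm";
# --delete removes the source copy so no unused disk is left behind
qm disk move 100 scsi0 local-lvm --delete

# Once no VM uses the pool any more, remove the RBD storage entry
# (replace <storage-id> with the name from the Datacenter config)
pvesm remove <storage-id>
```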

If you want more details, it would be helpful if you could post any error messages you are getting.
 
Yes, I am aware of this procedure. I do it all the time.
As I said before, I have already created another pool with NVMe disks.
My first plan was to move the VM disks from the .mgr pool to this NVMe pool.
After that, I will remove the .mgr pool. But my main concern is: if I do so, i.e. remove the .mgr pool, won't it in some way cause a problem with the NVMe pool?
Won't it break something?
 
Hi,
I have the same question as Gilberto about re-creating the .mgr pool.
My problem is that .mgr is linked to an "inactive" PG:
-> pg 10.0 is stuck inactive for 26h, current state unknown, last acting []

Thanks
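For diagnosing this, the PG state can be inspected before touching the pool; a hedged sketch using standard Ceph commands (PG id 10.0 taken from the message above):

```shell
# Show all PGs stuck in the inactive state
ceph pg dump_stuck inactive

# Query the problematic PG directly for more detail
ceph pg 10.0 query
```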
 
1. Delete the .mgr pool.
2. Delete the active manager under Monitors.
3. Wait for a standby manager to go active and for the recently deleted manager to disappear from the list.
4. Now create the manager again on the host where it was recently deleted.
5. This manager goes standby now, but recreates the .mgr pool.

as mentioned here: https://www.thomas-krenn.com/de/wiki/MGR_Pool_(.mgr)_neu_erstellen_in_Proxmox_VE
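The steps above roughly correspond to the following commands on a PVE node; a hedged sketch, where <node> is a placeholder for the host of the active manager:

```shell
# 1. Delete the .mgr pool (deletion must be enabled first)
ceph config set mon mon_allow_pool_delete true
ceph osd pool rm .mgr .mgr --yes-i-really-really-mean-it

# 2. Destroy the active manager (replace <node> with its host name)
pveceph mgr destroy <node>

# 3. Watch the cluster status until a standby manager takes over
ceph -s

# 4. Recreate the manager on that node; it comes back as a standby,
#    and the now-active manager recreates the .mgr pool automatically
pveceph mgr create <node>
```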
 
