Proxmox doesn't follow ceph.conf on external ceph cluster

ronweister

New Member
Mar 23, 2018
For the rbd pool, we shouldn't have to specify --data-pool on every single image creation. However, Proxmox always uses the default rbd pool and ignores the following setting in my ceph.conf, which OpenStack honors:

[client.admin]
rbd default data pool = rbd-ec112

Ideally this should work as it does with OpenStack. Is there any reason Proxmox doesn't respect this setting?
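For illustration, librbd-based clients (the rbd CLI, OpenStack Cinder/Glance) read this option from ceph.conf, so an image created without an explicit --data-pool still places its data objects in the erasure-coded pool. A sketch, assuming an EC pool named rbd-ec112 already exists (pool and image names here are illustrative):

```
# ceph.conf on the client
[client.admin]
rbd default data pool = rbd-ec112

# create an image without naming the data pool explicitly;
# librbd picks up the default data pool from ceph.conf
$ rbd create rbd/test-img --size 1G

# metadata lives in 'rbd', data objects in 'rbd-ec112';
# 'rbd info' reports the data_pool when one is set
$ rbd info rbd/test-img

# without the ceph.conf default, the equivalent per-image form is:
$ rbd create rbd/test-img --size 1G --data-pool rbd-ec112
```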
The PVE stack (and Proxmox support) doesn't support erasure-coded (EC) pools on hyper-converged setups. The tooling uses the pool 'rbd' by default and doesn't check the ceph.conf.

For an external cluster, you can put a separate config file for your storage (named <storageid>.conf) under '/etc/pve/priv/ceph/'. Try whether it works with that.
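A sketch of what that could look like, assuming an external-RBD storage with the hypothetical ID 'ceph-ext' (the storage ID, monitor addresses, and pool names are assumptions, not taken from the thread):

```
# /etc/pve/priv/ceph/ceph-ext.conf
# per-storage ceph config read by PVE for the external cluster
[client.admin]
rbd default data pool = rbd-ec112

# matching entry in /etc/pve/storage.cfg
rbd: ceph-ext
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        username admin
        content images
        krbd 0
```

The keyring for the same storage would go alongside it as /etc/pve/priv/ceph/ceph-ext.keyring.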
