Ceph Bluestore & Erasure Coding

quartzeye

Member
Nov 29, 2019
Is there a roadmap for Ceph features to be implemented in Proxmox?

Are Bluestore and Erasure Coding planned for implementation in Proxmox? If so, when? If not, why?

I get the 3x pool replication use case, but that is not the only valid one. Erasure coding provides RAID-like resiliency while maximizing raw storage usage in ways that are more cost effective than pool replication in the base Ceph config.
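As a rough illustration of the capacity argument (the k/m values below are only an example, not something discussed in this thread):

Code:
# Usable capacity as a fraction of raw capacity (illustrative profile)
#   3x replication:          usable = raw / 3            -> ~33% of raw
#   erasure coding k=4, m=2: usable = raw * k / (k + m)  -> ~67% of raw
# Both layouts keep data intact after losing two failure domains
# (e.g. two hosts, with crush-failure-domain=host).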
 
Thanks! I see Nautilus is implemented, and Nautilus supports Bluestore and Erasure Coding. However, from what I have gleaned from the forums, Proxmox doesn't support those capabilities, at least not in the interface. If I create EC pools on the hosts, can I use them for image and template storage for both VMs and containers? I have not located any post where someone was able to do that.
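For reference, creating such a pool by hand with the plain Ceph tooling looks roughly like this. This is a minimal sketch: the profile name ec42, the pool names, the image name and the k/m values are made up, and allow_ec_overwrites requires BlueStore OSDs.

Code:
# EC profile and data pool (names and k/m are only examples)
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec42
# RBD needs partial overwrites, which EC pools only support on BlueStore
ceph osd pool set ecpool allow_ec_overwrites true
# RBD keeps its metadata in a replicated pool; the EC pool only holds the data
ceph osd pool create rbdmeta 32 32 replicated
rbd pool init rbdmeta
rbd create --size 32G --data-pool ecpool rbdmeta/vm-100-disk-0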

I am looking for something similar to the capability presented in OpenStack via the Heat interface.

As great as Proxmox is, some of the OpenStack implementations seem more mature when it comes to VM/container templating and provisioning, not to mention software-defined networking. No criticism by me here; I simply would like to see a slightly more OpenShift-like approach vs a VMware-ish one.
 
Proxmox doesn't support those capabilities.

Bluestore is the default since 6.0; see the release notes above.
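If you want to verify which backend an OSD actually runs (osd.0 below is just an example ID):

Code:
ceph osd metadata 0 | grep osd_objectstore
# should report "osd_objectstore": "bluestore" for OSDs created with the defaults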

AFAIR, EC is not suited for VM workloads, so we decided to not present this to users via GUI. Maybe someone else can add comments to this.
 
Since Proxmox supports CephFS and that could be used for non-VM workloads, I could see that as a reasonable argument to support EC.
I understand the reasoning, but currently this is not considered an option. Since we ship all parts of Ceph, you can always configure it directly with the Ceph tooling. Just be aware that the Proxmox VE tooling (+GUI) may not work with it, and that we don't support it.
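As a rough sketch of that manual route for a CephFS use case (the pool name, filesystem name and mount point are placeholders, and none of this is managed by the PVE tooling):

Code:
# allow partial overwrites on the EC pool (BlueStore only), then add it to CephFS
ceph osd pool set ecpool allow_ec_overwrites true
ceph fs add_data_pool cephfs ecpool
# pin a directory to the EC pool via its file layout
setfattr -n ceph.dir.layout.pool -v ecpool /mnt/pve/cephfs/ec-data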
 
AFAIR, EC is not suited for VM workloads, so we decided to not present this to users via GUI. Maybe someone else can add comments to this.

Hello,

Is this statement still valid with PVE 6.3-2 and Ceph RBD?

Thank you
 
I would be interested in this statement too. Do I really have only 33% of my cluster size available, with everything above that being risky? I would consider erasure coding for web hosting.
Data has to be absolutely safe with no downtime, so I think a Ceph hyper-converged cluster is the right way to go for me.
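For anyone checking their own numbers: the MAX AVAIL column of ceph df already accounts for replication or EC overhead (the pool name below is only an example):

Code:
ceph df
# for a replicated pool, check the replica counts (3/2 is the usual default)
ceph osd pool get rbd size
ceph osd pool get rbd min_size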

Another question from me: is there a solution for CPU and RAM redundancy apart from the known Proxmox HA groups?

Greetings from Germany!
 
