Hello,
I'm currently busy installing our first cluster with Proxmox and still have a question or two.
Hardware specs:
6 servers, each with
2x AMD EPYC 9124
768 GB RAM
6x 3.84 TB NVMe SSD
2x dual-port 25G NIC
Workload:
~100-120 Windows/Linux VMs
On the servers I have...
Hi y'all,
I'm thinking about creating a Ceph pool with an EC 2+4 scheme. Despite intensive Google research, I could not find any reports of experience with that.
My idea is this:
The Ceph cluster is spread across two fault domains (latency < 1 ms, 40 disks on each side, all NVMe SSDs, lots of CPU...
Dear Community,
We are currently in the process of building a Proxmox cluster with Ceph as the underlying storage.
We have 4 nodes, with 4 x 100TB OSDs attached to each node (16 OSDs total), and plan to scale this out by adding another 4 nodes with the same number of OSDs attached to each...
So I have some servers where I am trying to set up the SSDs to use erasure coding. The problem is that there are HDDs in them as well, so I needed to restrict the device class to ssd. Using the instructions, I was able to create the profile I wanted with the following command
ceph osd erasure-code-profile...
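The command is cut off above, but it presumably follows the standard `ceph osd erasure-code-profile set` syntax. A sketch of what such a profile might look like, where the profile name, pool name, k/m values, and PG count are illustrative placeholders rather than the poster's actual values:

```shell
# Create an erasure-code profile restricted to SSD OSDs (values are illustrative).
# k=4 data chunks + m=2 coding chunks; crush-device-class=ssd keeps the pool
# off the HDDs that share the same hosts.
ceph osd erasure-code-profile set ec-ssd-profile \
    k=4 m=2 \
    crush-failure-domain=host \
    crush-device-class=ssd

# Verify the profile was stored as intended:
ceph osd erasure-code-profile get ec-ssd-profile

# Create an EC pool from that profile:
ceph osd pool create ecpool-ssd 128 128 erasure ec-ssd-profile
```

Note that `crush-device-class=ssd` generates a CRUSH rule that only selects OSDs tagged with the `ssd` device class, which is what keeps the EC pool away from the HDDs in a mixed-device cluster. These commands need a running Ceph cluster, so they can't be tried standalone.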
I've read posts about Proxmox not supporting Ceph EC. The last one was dated May of last year. I'd like to ask whether, almost one year later, this is still Proxmox's stance, or whether we can expect EC support in the near future (2021).
I'm considering setting up a separate Ceph cluster (storage only...