Hello Proxmox Community,
I'm looking for some guidance on the best way to set up a Ceph File System (CephFS) within my 3-node Proxmox cluster to provide storage for multiple applications: FileCloud, Paperless-ngx, and Jellyfin.
FileCloud in particular needs a file system with a mountable path, similar to a local physical disk, which is why I want to use CephFS.
My goal is to achieve a usable storage capacity of around 4TB while also having some level of redundancy, using my existing hardware. I have 3 nodes in my cluster, and each node has a single 2TB SATA SSD dedicated to Ceph, giving me a raw capacity of 6TB.
From my understanding, simply creating a CephFS pool with replication on this setup would likely only yield about 2TB of usable space (with a replication factor of 3 for good redundancy).
Therefore, I'm exploring the possibility of using an Erasure Coded (EC) pool as the underlying storage for my CephFS. With a 3-OSD setup (one SSD per node), I believe a k=2, m=1 configuration might be suitable. This should theoretically give me approximately 4TB of usable space while still allowing for the failure of one SSD without data loss.
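To make my capacity math explicit (rough back-of-the-envelope numbers, ignoring Ceph overhead and near-full ratios):

```
Replicated, size=3:  6TB raw / 3 copies              = ~2TB usable
EC, k=2 m=1:         6TB raw * k/(k+m) = 6TB * 2/3   = ~4TB usable
```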
My questions are:
- Is it feasible and recommended to run a CephFS on top of an Erasure Coded pool with 3 OSDs (one per node) in a Proxmox environment? Are there any significant performance or stability implications I should be aware of?
- What is the correct procedure within Proxmox (using shell commands like pveceph, or the web GUI) to create a CephFS that uses an existing Erasure Coded pool? I have already created a data pool with a k=2, m=1 erasure coding profile, along with a metadata pool (roughly what I ran is sketched below the questions).
- How can I ensure that the CephFS I create leverages the capacity and redundancy provided by the Erasure Coded pool to achieve the desired 4TB usable space?
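For reference, this is roughly how I created the pools and the file system on one of the nodes, reconstructed from my shell history, so exact pool names, pg_num values, and flags may differ slightly from what I actually ran:

```bash
# EC profile: k=2 data chunks + m=1 coding chunk, one chunk per host
ceph osd erasure-code-profile set ec-k2m1 k=2 m=1 crush-failure-domain=host

# EC data pool for CephFS; EC data pools need overwrites enabled
ceph osd pool create cephfs_data 32 32 erasure ec-k2m1
ceph osd pool set cephfs_data allow_ec_overwrites true

# Metadata pool must be replicated (metadata cannot live on an EC pool)
ceph osd pool create cephfs_metadata 32 32 replicated

# Create the file system; --force is required because the data pool is EC
ceph fs new cephfs cephfs_metadata cephfs_data --force

# After that I would still add the CephFS as storage in Proxmox
# (Datacenter -> Storage -> Add -> CephFS, or an entry in /etc/pve/storage.cfg)
```

I'm mainly unsure whether this is the right sequence for a hyperconverged Proxmox setup, or whether pveceph expects to create the pools itself.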
Thank you in advance for your help!