Hi all,
for our lab project we want to set up a hyperconverged Proxmox environment with Ceph.
We bought three servers with the specs below; in brackets I noted how I planned to use each component:
- 2 x CPU AMD Rome 7452, 32 cores/64 threads @ 2.35 GHz (https://www.amd.com/en/products/cpu/amd-epyc-7452)
- 16 x 64 GB RDIMM (1 TB RAM total)
- 2 x 480 GB M.2 SATA (ZFS mirror for the OS, plus local storage for snippets and ISOs)
- 2 x 3.2 TB NVMe 2.5" high performance (6.4 TB raw; intended for caching)
- 10 x 7.68 TB NVMe 2.5" (76.8 TB raw; intended for storing VM disks)
- 2 x 10 GbE RJ45 (LACP - Proxmox cluster and VM networks, VLAN aware)
- 2 x 10 GbE SFP+ (LACP - Ceph cluster network)
- 2 x 1 GbE RJ45 (LACP - Ceph public network)
- 1200 W redundant power supplies, Titanium level (96%)
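If I understand the rough sizing guidance I've seen (a BlueStore DB device of about 1-4% of each OSD's capacity), the two 3.2 TB NVMe drives might be large enough to hold the DB/WAL for all ten OSDs per node. A quick back-of-the-envelope check, assuming the 4% upper end of that rule of thumb:

```python
# Rough BlueStore DB sizing sketch (assumption: the commonly cited
# 1-4% of OSD capacity rule of thumb; real needs depend on workload).
osd_size_tb = 7.68
num_osds = 10
db_fraction = 0.04                         # upper end of the rule of thumb

db_per_osd_tb = osd_size_tb * db_fraction  # DB space per OSD
total_db_tb = db_per_osd_tb * num_osds     # DB space across all 10 OSDs
fast_nvme_tb = 2 * 3.2                     # the two "high performance" NVMe

print(f"DB per OSD:  {db_per_osd_tb:.2f} TB")
print(f"Total DB:    {total_db_tb:.2f} TB")
print(f"Fast NVMe:   {fast_nvme_tb:.1f} TB")
print(f"Fits: {total_db_tb <= fast_nvme_tb}")
```

If that math is right, roughly 3 TB of DB space would be needed per node, which the 6.4 TB of fast NVMe could cover with headroom, but please correct me if the sizing rule is wrong.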
I've read many threads and watched many videos explaining how to configure a Ceph cluster, and I understand that the best practice is to create one OSD per disk. What I really don't understand is how the DB and WAL devices are used, and whether the 2 x 3.2 TB high-performance NVMe drives can be used for caching.
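For reference, my current (possibly wrong) understanding is that a separate DB device is attached when the OSD is created, with the WAL landing on the DB device by default. Something like the following, where the device names are placeholders for my hardware and the size is just a guess:

```shell
# Hypothetical device names - adjust to the actual hardware.
# Create one OSD on a capacity NVMe, placing its RocksDB (and, implicitly,
# the WAL) on a shared fast NVMe; pveceph carves out an LV of --db_size
# (in GiB, if I read the docs correctly) from the DB device.
pveceph osd create /dev/nvme2n1 --db_dev /dev/nvme0n1 --db_size 300
```

Is that the right way to use the two fast NVMe drives, or should they be set up differently (e.g. as their own pool)?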
I also installed a Proxmox Ceph cluster (still there, powered off) in VirtualBox to try to understand more about the DB/WAL disk configuration, but it didn't help much.
Can anyone offer some suggestions?
Thank you so much.