Hardware - Concept for Ceph Cluster + backup

yena

Renowned Member
Nov 18, 2011
Hello,

We are planning to set up a 4-node cluster with network storage (Ceph), live migration, HA, and snapshot functionality.
I'm planning to buy this hardware:

4 nodes, each with:
2 x Intel® Xeon® E5-2630L six-core processors
64GB RAM
2 x 6TB SATA HDDs
2 x Intel X520-DA2 dual-port 10GbE SFP+ Ethernet adapters
2 x Gigabit Ethernet ports
2 x Samsung MZ-7KE256BW 850 PRO SSDs, 256GB
1 x 16GB SATA DOM (only for the Proxmox OS installation)
2 x SFP+ switches

We don't need to max out the space.
Performance and HA are more important.
My question is how to use the drives most efficiently with respect to performance and durability.
Does it make sense to use separate partitions on the SSDs for the journal and for cache tiering (or are the SSDs too small for that)?
What write speed can I expect inside an LXC container?
Do I have to add a 4-port RAID card (SSD RAID1 + 6TB RAID1)?
I would also like to add an "external" storage for backups; right now I use ZFS replication. What can I use with Ceph?

Any suggestions?

Thanks
Enrico
 
For HA purposes I'd say use a hardware RAID1 for the SSDs and, if possible, add two additional SATA hard drives to each server.
Use two of the hard drives for Ceph and the RAID1 SSD array as the journal for those drives. With a pool replica size of three you will get about 16TB of usable storage that is fully HA and can survive the death of two servers.

The additional two hard drives can be used as software RAID for ISOs and backups.
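To make the capacity math explicit, here is a back-of-the-envelope sketch (assuming 2 x 6TB Ceph OSDs on each of the 4 nodes and a pool replica size of 3, as suggested above; the figures are illustrative only and ignore the headroom you should keep free for rebalancing):

Code:
# Rough usable-capacity estimate for the proposed layout.
# Assumptions: 4 nodes, 2 x 6 TB Ceph OSDs per node, pool size (replica) = 3.
# Real usable space will be lower once you reserve headroom for
# rebalancing after a disk or node failure.
nodes = 4
osds_per_node = 2
osd_size_tb = 6.0
replica = 3

raw_tb = nodes * osds_per_node * osd_size_tb   # 48 TB raw
usable_tb = raw_tb / replica                   # ~16 TB before overhead
print(f"Raw: {raw_tb:.0f} TB, usable at replica {replica}: ~{usable_tb:.0f} TB")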
 
Proxmox writes quite a bit to the root filesystem. An enterprise SSD is recommended for the OS; a normal DOM might hit its end of life quite soon.
Performance-wise it won't be a problem.

Jonas
 
Is it a bad idea to install Proxmox on an additional pair of SATA HDs using a ZFS RAID1 mirror?
 
Is it better to use 2 x 6TB SATA HDs per node as planned in my list, or 4 x 2TB HDs per node? Reading the Ceph docs, it seems that write speed should be better with more HDs...
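To get a feel for why more spindles can help, here is a very rough sketch; the ~150 MB/s sequential figure per SATA disk and the 3x write amplification from replica 3 are assumptions, not benchmarks, and SSD journals, the 10GbE network and small random I/O will change the picture considerably:

Code:
# Crude aggregate-write comparison: 2 x 6 TB vs 4 x 2 TB OSDs per node.
# Assumptions: ~150 MB/s sequential write per SATA spindle, replica 3,
# so every client write lands on three OSDs across the cluster.
per_disk_mb_s = 150
replica = 3
nodes = 4

for osds_per_node in (2, 4):
    raw_mb_s = nodes * osds_per_node * per_disk_mb_s
    client_mb_s = raw_mb_s / replica
    print(f"{osds_per_node} OSDs/node: ~{raw_mb_s} MB/s raw spindle bandwidth, "
          f"~{client_mb_s:.0f} MB/s theoretical client write throughput")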