Hello,
Could you share your working and tested configuration (Dell, anyone?) for use with Ceph?
I need a three-node general-purpose configuration that offers good performance (~10-20 VMs).
Details are welcome (HDD/SSD models, benchmarks, problems encountered, and so on).
In addition I have a few questions:
1) Is 10GbE really mandatory for the storage network? What about bonding 4x1Gb NICs in round-robin with VLAN separation, or maybe InfiniBand? Any experience to share?
2) Is it safe to put the OSD journal on an external SSD? What if it suddenly fails? Will I lose all the backed OSDs as well?
3) Is Ceph like ZFS with regard to RAID controllers? Does it need to manage the disks itself, or can it make proper use of the controller cache? Would it be good practice to mirror the OSDs, or the SSDs used for journaling?
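For question 1, the kind of setup I have in mind would look roughly like this in Debian-style /etc/network/interfaces syntax (interface names, the VLAN tag, and the address are placeholders, not a tested config):

```
# Sketch of a 4x1Gb round-robin bond with a tagged VLAN for Ceph traffic.
# eth0-eth3, VLAN 100 and 10.10.10.0/24 are just example values.
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode balance-rr
    bond-miimon 100

auto bond0.100
iface bond0.100 inet static
    address 10.10.10.11/24
    vlan-raw-device bond0
```

I am mostly wondering whether balance-rr packet reordering hurts Ceph in practice, or whether people prefer LACP (802.3ad) even though a single stream then stays at 1Gb.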
Thanks,
M.