I have a few servers: an R640, an R630, and an R720. All of them have 24+ cores, 192+ GB of RAM, and 2x 600 GB plus 6x 1.8 TB SAS drives. I want to build a highly available system for my VMs. I don't know much about ZFS or Ceph.
I've used ZFS once and ran into a padding overhead issue: my 2 TB of data was taking up 4.4 TB of space.
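From what I've read since, that kind of blow-up seems to come from RAIDZ parity and padding when the volblocksize is small relative to the stripe width. Here's a rough Python sketch of the math as I understand it; the 6-disk RAIDZ2, ashift=12 (4K sectors), and the volblocksize values are just assumptions for illustration, not measured from my old pool:

```python
import math

def raidz_allocation(volblocksize, ashift, ndisks, nparity):
    """Rough estimate of bytes allocated for one block on a RAIDZ vdev."""
    sector = 2 ** ashift
    data_sectors = math.ceil(volblocksize / sector)
    # Each stripe of up to (ndisks - nparity) data sectors carries nparity parity sectors.
    parity_sectors = math.ceil(data_sectors / (ndisks - nparity)) * nparity
    total = data_sectors + parity_sectors
    # Allocations get padded up to a multiple of (nparity + 1) sectors.
    total = math.ceil(total / (nparity + 1)) * (nparity + 1)
    return total * sector

# Example: 6-disk RAIDZ2 with 4K sectors at different volblocksize settings.
for vbs in (8 * 1024, 16 * 1024, 64 * 1024):
    alloc = raidz_allocation(vbs, ashift=12, ndisks=6, nparity=2)
    print(f"volblocksize={vbs // 1024}K -> allocated {alloc // 1024}K ({alloc / vbs:.2f}x)")
```

With those assumed numbers, an 8K volblocksize allocates about 3x the data size, while 16K or larger settles near the expected 1.5x for RAIDZ2 on six disks, which would roughly match the overhead I saw.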
I'm used to hardware RAID; this is my first time getting into software RAID. I'm also not always near the servers, so I've occasionally had to have someone else swap a drive for me when one has failed.
I had thought you could run Ceph on top of hardware RAID, but from what I read today, that's not something that should be done.
Questions...
Is there a way to keep hardware RAID and still have high availability? (Is that stupid?)
If I move forward with ZFS, what are the best settings to avoid the overhead I ran into last time?
Is there a preference for Ceph vs. ZFS when it comes to HA and reliability? (I don't have 10 GbE switches yet.)