Ceph configuration

alex purser

Mar 18, 2018
A bit of a steer from the Ceph experts amongst you please. I have 3 Proxmox nodes in a cluster, all of which are also running Ceph. I'm looking for a (CRUSH map?) configuration that will allow my VMs to run with only a single server powered on (a power-saving test mode, running VMs without fault tolerance), and that will also keep storage use to a minimum when all 3 nodes are running, whilst still tolerating a failed disk or server. This is a small deployment and I'm only using Ceph so I can fail VMs over between cluster nodes (perhaps I should try ZFS instead?) and to accommodate basic failures... nothing mission critical. Thanks for any guidance.
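For anyone landing here with the same question: the knobs involved are the pool's replica count and the minimum replicas Ceph will accept while still serving I/O, rather than the CRUSH map itself (the default CRUSH rule already spreads replicas across hosts). A hedged sketch, assuming a pool named `vmpool` (substitute your own pool name; whether degraded single-replica operation is sane for your data is exactly the trade-off discussed in this thread):

```
# Keep 3 copies (one per host) when the cluster is healthy...
ceph osd pool set vmpool size 3
# ...but keep serving reads/writes even when only 1 copy is reachable.
# min_size 1 is risky: a write acknowledged by a single OSD can be lost.
ceph osd pool set vmpool min_size 1

# Lower-storage variant: 2 copies, still survives one disk/host failure
# while all nodes are up, at the cost of less redundancy overall.
ceph osd pool set vmpool size 2
ceph osd pool set vmpool min_size 1
```

Note that even with `min_size 1`, a single powered-on node also needs a Ceph monitor quorum to be reachable, which is a separate problem with 2 of 3 monitors offline.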

Thanks for your feedback Alwin. I was looking for a shortcut here, i.e. I'm hoping someone can show me a similar working config, or their comments on one. I totally appreciate this system is documented and I will read that end to end if I need to. I think I understand your quorum point, but I believe the cluster can be told to ignore that, and I also totally get that data is at risk with a lack of replication.
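On the "the cluster can be told to ignore that" point: the Proxmox cluster side can indeed be overridden on a surviving node, though this only addresses corosync quorum, not the Ceph monitor quorum, which has no equivalent one-liner. A sketch of the Proxmox override (run on the one node you leave powered on; this is a deliberate foot-gun and should only be used when the other nodes are intentionally down):

```
# Tell corosync/pmxcfs that 1 vote is enough for quorum,
# so /etc/pve becomes writable and VMs can be started.
pvecm expected 1
```

This setting does not persist across a restart of the cluster services, so it has to be reapplied each time the node comes up alone.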

I'm explicitly looking to operate in a non-recommended fashion, at the limits of what the software can do, as opposed to what is recommended, so I don't want to get hung up on best practice here... Is now a good time to mention that the servers are HP DL360s and I'm using the stock RAID controllers, albeit with the disks presented as single-disk RAID0 devices? ;-)