Hi Guys,
We came across Proxmox a couple of months ago as a viable alternative to VMware for a managed solution we sell. It worked amazingly well (minus the cost of VMware on top), so we've decided our ageing VMware production cluster (3 x Dell R510 with EqualLogic PS4000 storage) can be replaced by Proxmox. Having paid for vCenter, vSAN and the associated licensing on a newer VMware lab cluster (2 x R730 plus a quorum node), the cost of Proxmox is obviously much more competitive, and Ceph's feature set mirrors much of what vSAN offers. Add Ansible into the mix and our automation needs are covered too. To that end, I wonder if anyone can offer advice on how best to configure Ceph on the following hardware:
4 x Dell R440, each with:
1 x Xeon Silver 2801
128GB memory
8 x 2TB NL-SAS HDDs
2 x 930GB mixed-use SAS SSDs
Dual-port 10G Broadcom NIC
Dual-port 1G on-board NIC
We've already built the four-node cluster, with the 10G ports in an LACP bond (Open vSwitch) into a Virtual Chassis of two Juniper EX4300-MP switches, so resilience in case of an EX failure is covered.
The two remaining 1G ports are also an LACP (Open vSwitch) bond, carrying the cluster management traffic. The OS is currently installed on a 400GB partition of one of the 930GB mixed-use SSDs.
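For reference, the 10G bond/bridge config on each node looks roughly like this (interface names are placeholders, and I'm quoting Proxmox's Open vSwitch conventions from memory, so treat it as a sketch rather than our literal /etc/network/interfaces):

    auto bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds ens1f0 ens1f1
        ovs_options bond_mode=balance-tcp lacp=active

    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0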
How would you recommend the Ceph storage be set up? We'd like most of the storage for VMs, but we foresee container use in the future too.
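To give you something concrete to critique: our initial thinking was one BlueStore OSD per NL-SAS disk, with each OSD's DB/WAL carved out of the second (currently unused) mixed-use SSD, and a 3/2 replicated pool on top. Assuming current pveceph syntax and example device names, roughly:

    # per node: one OSD per NL-SAS disk, DB/WAL on the spare SSD
    # (/dev/sdb../dev/sdi and /dev/sdj are example names, not our actual layout)
    pveceph osd create /dev/sdb --db_dev /dev/sdj
    # ...repeated for the remaining seven NL-SAS disks

    # once, from any node: replicated pool for VM/CT images
    pveceph pool create vm-pool --size 3 --min_size 2 --add_storages

Does that look sensible, or would you lay it out differently?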
Best regards
Andy