Hi everybody,
I'm planning to replace our 4-node Proxmox 5.1 cluster, as the current HW is suffering from massive I/O problems (due to old and slow 7.2k RPM spinning disks).
Our current workload:
- about 70 VMs in total, mostly Linux
- ~ 12 of these VMs are "essential" for our daily work; they run Jira, Jenkins, GitLab, ...
- the other VMs are for development and testing (including the Jenkins build slaves); they tend to be rather "big" (> 4 cores, > 16 GB RAM) as our software makes heavy use of Java middleware products (Axway, TIBCO, ...).
- Going forward, we plan to move the Jenkins build slaves and some dev VMs to LXC or, more likely, to Docker containers.
With the budget available I was thinking about the following (new) HW:
5x Dell PowerEdge R640, each with
- 2x Intel Xeon Silver 4114 (10 Core)
- 320GB DDR4 ECC RAM
- 6x 960GB Enterprise SSD (SATA)
- 2x 960GB Enterprise SSD (NVMe)
- 1x 1 GbE NIC
- 2x 10 GbE NIC
Storage:
- ZFS: one local pool per host, built from three mirror vdevs (3x 2 SATA SSDs)
- Ceph, 2 OSDs per host (using the NVMe SSDs)
=> Ceph storage should be used (almost) exclusively by the "essential" VMs; all other VMs/containers should use the local ZFS pools (rough setup commands sketched below)
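
Just to illustrate what I have in mind for the per-host storage layout, a rough sketch (pool/storage names and device names like sdb..sdg / nvme0n1 are placeholders; on the real boxes I'd use /dev/disk/by-id paths):

# one local pool per host from the six SATA SSDs, three mirror vdevs
zpool create -o ashift=12 ssdpool \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde \
    mirror /dev/sdf /dev/sdg

# register the pool as a Proxmox storage on that node (storage name is made up)
pvesm add zfspool local-ssd -pool ssdpool -sparse 1 -content images,rootdir

# Ceph OSDs on the two NVMe disks per node
# (Proxmox 5.x syntax; newer releases use "pveceph osd create")
pveceph createosd /dev/nvme0n1
pveceph createosd /dev/nvme1n1
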
Networking:
- 1x 10 GbE NIC for Ceph
- 1x 10 GbE NIC for VM communication; test environments to be separated into different VLANs (see the interfaces sketch after this list)
- 1x GbE uplink
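
Network-wise I'd configure something like this in /etc/network/interfaces (interface names and addresses are placeholders for whatever the R640s actually enumerate; the syntax below is the newer ifupdown2 style, on Proxmox 5.x the bridge options are written with underscores, e.g. bridge_vlan_aware):

# 1 GbE: management / cluster uplink
auto eno1
iface eno1 inet static
        address 192.168.100.11
        netmask 255.255.255.0
        gateway 192.168.100.1

# 10 GbE: dedicated Ceph network
auto enp94s0f0
iface enp94s0f0 inet static
        address 10.10.10.11
        netmask 255.255.255.0

# 10 GbE: VM traffic, VLAN-aware bridge so each test env gets its own VLAN tag
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp94s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
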
Do you think that setup sounds reasonable?