Setup recommendation

Discussion in 'Proxmox VE: Installation and configuration' started by Pontius, Aug 12, 2018.

  1. Pontius

    Pontius New Member

    Aug 12, 2018
    Hi everybody,

    I'm planning to replace our 4-node Proxmox 5.1 cluster, as the current HW is suffering from massive I/O problems (due to old and slow 7k spinning disks).

    Our current workload:
    - about 70 VMs in total, mostly Linux
    - ~ 12 of these VMs are "essential" for our daily work; they run Jira, Jenkins, GitLab, ...
    - the other VMs are for development and testing (including Jenkins build slaves); they tend to be rather "big" (> 4 cores, > 16 GB RAM), as our workloads rely heavily on Java middleware products (Axway, TIBCO, ...) - see the rough capacity check below the list.
    - Going forward, we plan to move the Jenkins build slaves and some of the dev VMs to LXC or, more likely, to Docker containers.
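
    As a rough sanity check, here is a quick back-of-the-envelope comparison of this workload against the planned hardware; the per-VM averages and the n-1 sizing are my own assumptions, not measured values:

        # Rough capacity sanity check for the planned 5-node cluster.
        # The per-VM averages are assumptions, not measured values.
        NODES = 5
        CORES_PER_NODE = 2 * 10        # 2x Xeon Silver 4114, 10 cores each
        RAM_PER_NODE_GB = 320

        VM_COUNT = 70                  # total VMs
        AVG_VCPUS = 4                  # assumed average per VM
        AVG_RAM_GB = 16                # assumed average per VM

        # Size for one node being down (n-1), so the rest can still carry the load.
        usable_cores = (NODES - 1) * CORES_PER_NODE
        usable_ram_gb = (NODES - 1) * RAM_PER_NODE_GB

        print(f"vCPU overcommit (n-1): {VM_COUNT * AVG_VCPUS / usable_cores:.1f}:1")   # ~3.5:1
        print(f"RAM allocation  (n-1): {VM_COUNT * AVG_RAM_GB / usable_ram_gb:.0%}")   # ~88%

    With those assumptions, RAM is the tighter resource once a node is down.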

    With the budget available I was thinking about the following (new) HW:

    5x Dell PowerEdge R640, each with
    - 2x Intel Xeon Silver 4114 (10 Core)
    - 320GB DDR4 ECC RAM
    - 6x 960GB Enterprise SSD (SATA)
    - 2x 960GB Enterprise SSD (NVMe)
    - 1x 1 GbE NIC
    - 2x 10 GbE NIC

    - ZFS pool, using three mirror sets (3x 2 SSDs)
    - Ceph, 2 OSDs per host (using the NVMe SSDs)
    => Ceph storage should be used (almost) exclusively by the "essential" VMs; all other VMs/containers should use the local ZFS pools (rough usable-capacity estimate below)
    - 1x 10 GbE NIC for Ceph
    - 1x 10 GbE NIC for VM communication; (test-) environments to be separated by different VLANs
    - 1x GbE uplink
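
    To get a feeling for the usable space, a quick estimate based on raw SSD sizes (assuming a default size=3 replicated Ceph pool; real usable space will be lower due to ZFS and Ceph overhead):

        # Rough usable-capacity estimate for the planned storage layout.
        NODES = 5

        # Local ZFS: three 2-way mirrors of 960 GB SATA SSDs per node.
        zfs_usable_per_node_gb = 3 * 960            # one usable copy per mirror pair
        print(f"ZFS usable per node : ~{zfs_usable_per_node_gb / 1000:.1f} TB")

        # Ceph: 2x 960 GB NVMe OSDs per node, assuming a size=3 replicated pool.
        REPLICATION = 3
        ceph_raw_gb = NODES * 2 * 960
        print(f"Ceph raw capacity   : ~{ceph_raw_gb / 1000:.1f} TB")
        print(f"Ceph usable (3x)    : ~{ceph_raw_gb / REPLICATION / 1000:.1f} TB")

    So roughly 3.2 TB of replicated Ceph space for the ~12 essential VMs, plus about 2.9 TB of local ZFS per node for everything else.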

    Do you think that setup sounds reasonable?
  2. Pontius

    Pontius New Member

    Aug 12, 2018
    Ok, looks like I didn't make any obvious mistakes here. ;)

    My major concern is the Ceph setup. 5 nodes with 2 OSDs each, connected by a 10 Gb network - is there any way to calculate the expected throughput on this?
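
    My own rough estimate so far, assuming a size=3 replicated pool, ~1 GB/s sustained write per NVMe OSD, and public and cluster traffic sharing the single 10 GbE link per node (these are guesses; rados bench on the real cluster is what will actually tell):

        # Very rough ceiling estimate for aggregate Ceph write throughput.
        # Device figures are assumptions; benchmark the real cluster with
        # "rados bench" for reliable numbers.
        GBIT = 1e9 / 8                       # bytes per second in 1 Gbit/s

        NODES = 5
        OSDS_PER_NODE = 2
        NET_PER_NODE = 10 * GBIT             # single 10 GbE link, shared public/cluster
        NVME_WRITE_PER_OSD = 1.0e9           # ~1 GB/s sustained write (assumed)
        REPLICATION = 3                      # assumed size=3 pool

        # Every client write is stored REPLICATION times; with a shared
        # public/cluster network the replica traffic competes with client I/O.
        net_limit = NODES * NET_PER_NODE / REPLICATION
        disk_limit = NODES * OSDS_PER_NODE * NVME_WRITE_PER_OSD / REPLICATION

        print(f"network-bound write ceiling : ~{net_limit / 1e9:.1f} GB/s")    # ~2.1
        print(f"disk-bound write ceiling    : ~{disk_limit / 1e9:.1f} GB/s")   # ~3.3
        print(f"expected aggregate ceiling  : ~{min(net_limit, disk_limit) / 1e9:.1f} GB/s")

    A single VM will of course see far less than that, since small synchronous writes are dominated by latency rather than bandwidth.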