Hi,
I have read just about every post on this forum on this topic. Some contain useful information, but none answer all of my questions, hence this post.
I need to set up a 3-node highly available cluster to host 14 virtual machines, both Linux and Windows based. This is to replace the old physical server behind each of these VMs.
One of the virtual machines would be a very busy Linux mail server, another a Windows MS SQL Server, and a third a VoIP server.
The target setup should deliver excellent performance and high availability.
The planned config is as follows:
3x SuperMicro 2U, 8 bay servers with:
- Dual 12-core Intel Xeon E5-2650 v4 CPUs
- 128GB ECC RAM
- 2x 1.2TB Intel S3520 SSDs
- 4x 8TB SATA drives
- 4-port 10GBase-T NIC (Intel XL710 + X557)
- Supermicro SMC SATA3 128GB MLC SATA DOM with Hook
- I plan to install Proxmox on the SuperMicro 128GB MLC SATA DOM (Disk On Module) [1], but I don't know whether this MLC device will last very long. The advantage of using it is that I don't waste a drive bay for the OS.
- The two SSDs will be used for the L2ARC and ZIL (SLOG), though I wonder whether an Intel 7310 would be a better choice?
- Then I want to set up ZFS on the SATA drives, either as RAIDZ2 or as two mirrored pairs.
- On top of ZFS I want to run GlusterFS or Ceph for the virtual machine storage, to achieve high availability.
- Lastly, I have been thinking about connecting CAT 6a cables directly between the servers' NICs for the storage network. The 10GbE NICs have 4 ports, so I could run a cable from Server1/Port1 to Server2/Port1, another from Server1/Port2 to Server3/Port1, and a third from Server2/Port2 to Server3/Port2, forming a full mesh. I'm not sure if this is possible, but if it is, I could eliminate two very expensive 10GbE switches.
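On the switchless storage network: a three-node full mesh over direct cables is workable with routed point-to-point links (Proxmox documents a "full mesh" approach for Ceph along these lines). A minimal sketch of an /etc/network/interfaces fragment for node 1, where the interface names (ens1f0/ens1f1) and the 10.15.15.0/24 addressing are my assumptions to adapt:

```
# Node 1, routed full mesh (sketch; interface names and addresses are assumptions).
auto ens1f0
iface ens1f0 inet static
    address 10.15.15.1/24
    # direct cable to node 2
    up   ip route add 10.15.15.2/32 dev ens1f0
    down ip route del 10.15.15.2/32

auto ens1f1
iface ens1f1 inet static
    address 10.15.15.1/24
    # direct cable to node 3
    up   ip route add 10.15.15.3/32 dev ens1f1
    down ip route del 10.15.15.3/32
```

Nodes 2 and 3 get the mirror-image config (their own address on both ports, plus /32 host routes to the other two nodes over the matching cables). The catch is no redundancy per link: if one cable or port fails, that node pair loses its storage path unless you also set up failover routing.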
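To compare the two pool layouts I'm considering, here is the back-of-the-envelope arithmetic (plain Python, not a ZFS tool; real pools lose a little more to metadata and padding):

```python
# Rough usable-capacity comparison for the two ZFS layouts with 4x 8TB drives.

DRIVES = 4
SIZE_TB = 8

# RAIDZ2: 2 parity drives out of 4 leaves 2 data drives.
raidz2_usable = (DRIVES - 2) * SIZE_TB      # 16 TB
# Two mirrored pairs (striped mirrors): half the raw capacity.
mirrors_usable = (DRIVES // 2) * SIZE_TB    # 16 TB

print(f"RAIDZ2 usable:  {raidz2_usable} TB (survives ANY 2 drive failures)")
print(f"Mirrors usable: {mirrors_usable} TB (survives 1 failure per mirror pair)")
```

So at 4 drives both layouts give the same usable space; the trade-off is that RAIDZ2 tolerates any two failures, while striped mirrors generally give better random I/O for VM workloads but only survive one failure per pair.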
Any comments or recommendations would be appreciated.
[1] https://www.supermicro.com/products/nfo/SATADOM.cfm