Hi
I'm going to rebuild our company's server infrastructure and could use some input on which direction to take.
The main goal of this is to remove single points of failure.
What runs on the infrastructure:
Critical:
- Samba domain controller (about 40 users, with AppData on a share)
- Samba shares (1.5TB in use)
- File indexing/search service
- Mail server (Linux, 500GB in use)
- Groupware (Linux)
- Jabber server
- Web servers (company/project websites)
- MySQL server (no heavy load)
- DNS server
- LDAP server
- Firewall (IPFire)
- ArchiCAD BIM Server (Windows, large files, 200GB in use)
- JEE application server (Linux)
- Windows remote login VM
Not so critical (May be installed on the old hardware):
- ownCloud (100GB in use)
- Java build server
- Maven repository proxy (100GB in use)
Weekend/night maintenance downtime is little to no problem.
It is not a big problem if, e.g., the Samba shares or mail servers are offline for half an hour (or even a full hour) twice a year because of a crash.
It would be a huge problem if we had unplanned downtime of several hours or days.
What I'm now thinking of is two of these machines: Supermicro 1028R-WTNRT
Or this one: Supermicro TwinPro 2028TP-DNCTR
And building a Proxmox HA cluster, with one of the old machines (or a small additional one) as the third node for quorum.
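For the two-fat-nodes-plus-small-third-box idea, a rough sketch of the cluster setup could look like this (hostnames and IPs are made up; `pvecm qdevice` requires a newer Proxmox VE release, and needs checking against the version you actually deploy):

```
# on the first node: create the cluster
pvecm create mycluster

# on the second node: join it
pvecm add 192.168.1.11

# option A: join the old machine as a full (diskless) third node, or
# option B: use it only as an external quorum vote (QDevice) so the
#           two big nodes plus one small box still give an odd vote count
pvecm qdevice setup 192.168.1.13

# check vote/quorum status
pvecm status
```

The point of the third vote is that with only two nodes, a single node failure would drop the cluster below quorum and HA could not fence/recover anything automatically.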
Each node with 32 or 64 GB RAM.
Each node with one 6- or 8-core 2.4 GHz CPU.
Each node will have an Intel NVMe SSD, 1.2TB (or 2TB), for the system, VMs, and fast storage (e.g. users' AppData).
For bulk storage, about 4 disks per node: 2TB SAS3 Seagate Enterprise Capacity 2.5" (7200 RPM).
The nodes will be directly connected via one 10 Gb/s port for storage sync, and each is connected to the network by the other 10 Gb/s port. For additional networks (DMZ, WLAN, Internet) I plan to install an additional 4-port 1 Gb/s card.
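To make the plan concrete, here is what the `/etc/network/interfaces` on one node could roughly look like for that layout (all interface names and addresses are made up for illustration):

```
# sketch for one node -- names/IPs are assumptions, not real config
auto lo
iface lo inet loopback

# 10 Gb/s port 1: direct back-to-back link between the nodes, storage sync only
auto eth0
iface eth0 inet static
    address 10.10.10.1
    netmask 255.255.255.0

# 10 Gb/s port 2: bridge for the main LAN and most VMs
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

# one of the 1 Gb/s ports: bridge for DMZ VMs, no IP on the host itself
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
```

Keeping the DMZ bridge without a host address means the hypervisor itself is not reachable from the DMZ, only the VMs attached to that bridge.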
My biggest area of uncertainty is the storage system (and syncing).
My first idea was to go with DRBD over LVM over RAID, but after reading this forum and some other resources I'm not sure whether Proxmox and DRBD will keep working well together in the future.
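For reference, the DRBD-over-LVM idea would boil down to a two-node resource like the sketch below (resource name, hostnames, device paths and IPs are all invented for illustration; the exact syntax depends on the DRBD version):

```
# /etc/drbd.d/vmdata.res -- replicate an LVM volume between the two nodes
resource vmdata {
    protocol C;                       # fully synchronous replication
    on node1 {
        device    /dev/drbd0;
        disk      /dev/vg0/vmdata;    # LV on top of the local RAID array
        meta-disk internal;
        address   10.10.10.1:7788;    # over the direct 10 Gb/s link
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/vg0/vmdata;
        meta-disk internal;
        address   10.10.10.2:7788;
    }
}
```

Protocol C only acknowledges a write once it is on both nodes, which is what you want for VM disks; the trade-off is that write latency includes the replication hop.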
Ceph seems interesting (also as something to learn more about), but I have the feeling it is overkill for my needs, and I would need another node and a 10 Gb/s switch. From what I read, even with 3 nodes Ceph has too much overhead and is better suited to a separate storage network.
I'm starting to read about GlusterFS but don't know much about it yet.
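For comparison with the DRBD route, a minimal replicated GlusterFS setup over the same two nodes would look something like this (hostnames, brick paths and the volume name are made up):

```
# assumes glusterd is already running on both nodes
gluster peer probe node2

# replicated volume: every file exists on both nodes
gluster volume create vmstore replica 2 \
    node1:/bricks/vmstore node2:/bricks/vmstore
gluster volume start vmstore

# then add it in the Proxmox GUI as a GlusterFS storage
# (Datacenter -> Storage -> Add -> GlusterFS)
```

One caveat worth researching: a plain replica-2 volume is prone to split-brain; GlusterFS supports an arbiter brick (`replica 3 arbiter 1`) that could live on the small third machine, similar to the quorum role it would play for Proxmox itself.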
There are also other questions:
- Should I go with an all-SSD approach? Would it be much faster, or just cost much more?
- Is it really a good idea to run the firewall VM, DMZ VMs and main network VMs on the same hosts?
- Is it a good idea to use LXC containers for DMZ services, or should I use KVM VMs?
- Should I fill all CPU sockets, or will the extra cores just sit idle?
- Am I missing something super important?
I'm thankful for any tips and hints.
Sorry, I wanted to post links to the hardware parts, but I'm not allowed to post links yet.