Got a decision to make and I'm looking for public advice. Mostly rubber-ducking this out.
I have been volunteering at an elementary school to fix their network/server, and now it's time to build out their server(s). However, I can't decide between two setups. The driving force is minimal cost (really, I have just enough budget to finish either setup - but not both). First, though, what exactly will be running? Really, just one small thing and a few support VMs:
* Evergreen OpenSource Library (Postgres, Perl, C, Apache)
This software has very small requirements, especially for their small library of only about 5,000 books, a single "client" machine for the librarian, and at most 3 or 4 teachers connecting remotely. The recommended spec for this size is 4 cores and 1GB of RAM.
So, here's the hardware I have available:
* Single "big" server with dual 14C/28T CPUs and 64GB of RAM. Limited to only 2 SATA disks total (motherboard limitation); most likely 2x 1TB disks in RAID1.
* Three "small" servers with 4C/4T and 16GB each (12C and 48GB total as an HA cluster). The limitation is dual 1G NICs (no 10G switch available, though I have 10G cards). Most likely I can only afford 2x 30GB SSDs for a RAID1 OS mirror and a single 120GB SSD for data/Ceph/ZFS per host.
I can only afford to build out one or the other, not both (i.e. purchase the remaining hardware).
The issue I can't seem to decide on is Ceph across 3 nodes (min. 2 available) with just 1x SSD per host over a single 1Gbps LAG. I know corosync will have issues if the bonded connection gets saturated without 10G - but we're only talking about a single user on the cluster at any given time, and almost all of it is just text input into a single DB. Also, I can't afford even a cheap 4x 10G switch like one of those Brocades. I also know ZFS is preferred for small clusters, but I need to leave room to expand easily in the future, since the hosts will only have a single data SSD each to start with.
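One mitigation I'm considering for the corosync concern, sketched below (interface names and addresses are my assumptions, adjust per host): instead of bonding both 1G NICs into one LAG, dedicate one NIC to corosync so cluster membership traffic never competes with Ceph replication on the other link.

```shell
# /etc/network/interfaces fragment on each Proxmox node (sketch only).
# Assumes eno1/eno2 are the two onboard 1G NICs.

auto eno1
iface eno1 inet static
    address 10.0.10.11/24      # dedicated corosync ring0 network, nothing else on it

auto eno2
iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24    # VM bridge + Ceph public/cluster traffic
    gateway 192.168.1.1
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```

Corosync can then be pinned to the quiet network at cluster creation time (e.g. `pvecm create CLUSTER --link0 10.0.10.11`), which trades a bit of LAG bandwidth for predictable cluster-heartbeat latency.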
The library's apps/services will not be load balanced (no 2x Postgres, etc.). Running them all within a single VM on a single host is fine for their needs, as the VM can move to another host if a host/SSD fails. There's really only a single librarian accessing it at any given time. At most, two teachers could hit the web UI simultaneously.
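For that single-VM-with-failover model, the HA side is just a couple of commands with the standard Proxmox tooling (the VM ID here is made up for illustration):

```shell
# Sketch: register VM 100 (the Evergreen all-in-one VM) as an HA resource,
# so Proxmox restarts/relocates it if its host or SSD dies.
ha-manager add vm:100 --state started --max_restart 2 --max_relocate 1

# Check that the resource is tracked and started somewhere in the cluster.
ha-manager status
```

With shared storage (Ceph) or ZFS replication underneath, that is the entire "HA" configuration for this workload.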
I want them to have 3 nodes for Proxmox HA. But at the same time, it's that old argument: "am I introducing more complexity when a single host is all they need?"
---
As a side note: I want to contribute back to the Evergreen community by containerizing their software as a Proxmox container setup/bundle (they are really lacking in current technical setups - there don't seem to be any Docker images available; everything is built and installed the old-school way). I want to design containerized versions of their services (Postgres with HA/primary-replica options, stateless pieces in lightweight containers that can scale, etc.). But I need a real cluster running to design and test this, which is one reason I am leaning towards the 3-node solution.
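The rough shape I have in mind for the bundle, sketched below (all container IDs, names, template versions, and the per-role split are my assumptions - nothing official exists yet):

```shell
# Sketch: split Evergreen into per-role LXC containers on Proxmox.
# One container per role, each independently restartable/replaceable:
#   evergreen-db  (Postgres), evergreen-app (Perl/OpenSRF), evergreen-web (Apache)
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname evergreen-db --cores 2 --memory 2048 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --rootfs local-lvm:8
pct start 200
# ...repeat with IDs 201/202 for the app and web roles, then wire them
# together with config pointing at evergreen-db's address.
```

The idea is that only the Postgres container is stateful; the app/web roles could then be cloned or scaled as cheap stateless containers.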