Need help with some initial setup decisions

kibuyeit

Member
Oct 3, 2022
TL;DR: novice needs help with a setup for a charity hospital in rural Africa

Greetings,
So a bit of a primer: I'm a surgeon, not an IT professional. Unfortunately, I'm also one of the most technically capable people at my hospital in rural Burundi. There is a national push toward electronic health records, and we are slowly putting in place the hardware needed to make that possible. I just finished setting up a UniFi network with 30 access points, with a Netgate box running pfSense and pfBlockerNG for firewalling and filtering. The network is running well. It's currently set up with infrastructure on VLAN 1, admin devices on VLAN 10, staff and medical students on VLAN 20, and an after-hours open network on VLAN 30, all of which are completely isolated from one another. Our 6 MB/s connection (yes, only 6) is now usable despite all of the concurrent users.
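
For reference, if I go the Proxmox route, I picture hanging the server off that same network via a trunk port and a VLAN-aware bridge. A minimal sketch of the /etc/network/interfaces piece (interface names and addresses are placeholders, not our real config):

# hypothetical snippet for the Proxmox host; eno1 stands in for the trunk port
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    # management address and gateway are made-up examples
    address 10.0.1.10/24
    gateway 10.0.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    # restrict the bridge to the VLANs described above
    bridge-vids 10 20 30

Each VM's virtual NIC would then get the appropriate VLAN tag.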

Here's what I have on hand to configure for our server:

Dell PowerEdge R720xd
2x Xeon E5-2670, 8 cores each
24x 8GB RAM (192GB total)
H710P Mini RAID controller w/ 1GB cache
4-port gigabit NIC
4x 2TB Hitachi SAS drives
4x 4TB Seagate EXOS SATA drives
2x 240GB SSDs

Here's what we plan to use (a rough sketch in Proxmox terms follows the list):

1 Ubuntu VM to run the electronic health record - OpenClinic GA
1 Ubuntu VM to run an imaging server for our digital X-ray and ultrasound images - Orthanc
1 Ubuntu VM to run a file server, print server, and other less mission-critical services
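
In Proxmox terms, I'm imagining something along these lines (the VM IDs, names, and sizes are guesses on my part, not settled numbers):

# hypothetical sizing; memory/cores would be adjusted once we see real load
qm create 101 --name openclinic --memory 16384 --cores 4 \
  --net0 virtio,bridge=vmbr0,tag=10 --scsihw virtio-scsi-pci
qm create 102 --name orthanc --memory 16384 --cores 4 \
  --net0 virtio,bridge=vmbr0,tag=10 --scsihw virtio-scsi-pci
qm create 103 --name fileserver --memory 8192 --cores 2 \
  --net0 virtio,bridge=vmbr0,tag=10 --scsihw virtio-scsi-pci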

Theoretical number of concurrent users:
~35 doctors
~62 nurses
~20 ancillary staff
(Caveat: we have nowhere near this many computers available, many if not most of our staff do not have devices of their own to bring, and we are constantly trying to hire more staff to meet the hospital's needs. It's always in flux, but on any given weekday we have around 80 employees on site.)

I'm wondering how to best utilize the above hardware to set up a robust but speedy system. Entropy seems to work a little harder in our context, so data security is important. The integrated backup solution is one of the reasons I prefer Proxmox over something like ESXi. Multiple nodes are way outside our budget (though we are working toward purchasing a backup server to run a prod/backup setup), so from what I've read Ceph is a bad idea despite its data integrity and restoration benefits.

Should I go for hardware RAID using the built-in controller? Or should I head down the ZFS rabbit hole, and if so, should I use one of the SSDs for write caching, or would it be better to set them up in RAID 1?

Any guidance would be most appreciated. We're going to do a progressive build-out (imaging first, so we can start using our digital X-ray panel), and I don't want to mess up the initial configuration and then have to backpedal and fix things while still needing the system to be up.

Many thanks,
Michael

A couple of notes:
Procurement is really difficult here, but if there is something essential missing from the list, we can work toward obtaining it before I attempt an install.
I'm a novice, but hopefully not an idiot. I've been into computers since the Pentium II days and studied biomedical engineering before moving on to medical school and surgical training.
 
For redundancy and continuous operation when a drive fails, I would use 3 of each set of drives as a 3-way mirror with ZFS and keep the fourth as a spare (to give you more time to get failed drives replaced).
Use the 2TB storage for the VMs' virtual data drives and the 4TB as storage for Proxmox Backup Server (PBS) running in a container. I would put the Proxmox VE installation on the 4TB storage as well, since it does not need much performance and won't interfere with the VMs that way.
Use a mirror of the SSDs for the other virtual drives of the VMs. I don't expect much benefit from using them as write (SLOG) or read (L2ARC) cache. Maybe they could help as special devices, but that benefits containers and especially PBS much more than VMs, and it would reduce the redundancy of the other two pools (because you only have two SSDs).
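
Roughly like this, in case it helps (device paths and pool names are placeholders; the 4TB pool would be created by the PVE installer itself, so only the other two are made by hand):

# 2TB SAS drives: 3-way mirror for VM data, fourth disk as hot spare
zpool create -o ashift=12 sas2t \
  mirror /dev/disk/by-id/DISK_A /dev/disk/by-id/DISK_B /dev/disk/by-id/DISK_C \
  spare /dev/disk/by-id/DISK_D

# 240GB SSDs: plain mirror for the VMs' OS disks
zpool create -o ashift=12 ssd240 mirror /dev/disk/by-id/SSD_A /dev/disk/by-id/SSD_B

# register the pools with Proxmox so VM disks can be placed on them
pvesm add zfspool sas2t --pool sas2t
pvesm add zfspool ssd240 --pool ssd240

# inside the PBS container, a datastore could then live on the 4TB pool
proxmox-backup-manager datastore create backups /rpool/pbs-datastore
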
And I would get another such system, so they can be each other's backup. I can't say whether the performance or storage space would be enough for your intended VMs.

PS: I'm just a random stranger on the internet, so please don't assume that I know what I'm talking about. And my best wishes for the good work you're doing.
 
Great to hear, and you're quite tech-savvy as well. I can only say one thing: have a backup ready. Keep a pre-configured Proxmox install on an external SSD so that if something breaks, you can plug it in and be back up and running in seconds.
And best of all: cut internet access and don't apply any updates. Open-source development relies on users hitting errors and reporting back; in your case you just want this to work, and the official release ISO does. No reason to update at all.
On the VMs: I would go with Xubuntu. Xfce is clear and clean and won't cause bugs or slowdowns; it's more stable and more reliable.
(Good work, we need more like you.)
 
I understand that budget is critical, but I don't see how to solve the reliability problem without more servers. Wouldn't it be really bad if, for example, the mainboard died and the server stopped working, so you could no longer access X-rays and so on, preventing you from treating someone?
At the very least I would recommend two identical servers plus a small thin client as a QDevice, so you could run the two servers in a cluster with ZFS replication. That way it only takes a few minutes, not hours or days, to get everything running again if there is a non-disk hardware problem with one of the servers.
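
The moving parts are small; a sketch of the commands involved (names and addresses are placeholders):

# on the first server: create the cluster
pvecm create hospital
# on the second server: join it
pvecm add IP_OF_FIRST_SERVER
# with the corosync-qnetd package installed on the thin client,
# register it as the tie-breaking QDevice from one of the nodes
pvecm qdevice setup IP_OF_THIN_CLIENT
# then schedule ZFS replication of a VM to the other node, e.g. every 15 minutes
pvesr create-local-job 101-0 SECOND_NODE_NAME --schedule '*/15'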
 
Thanks for the reply. I did plan for drive failures but didn't mention that I have 4 spares of the 2TB drives and 2 spares of the 4TB drives. I originally purchased 10 identical 4TB drives and currently have 4 installed in the server and 4 in a Ubiquiti UNVR camera system. My plan is to always keep at least 2 spares of each type on hand.

Knowing that, would you still run a 3-way mirror for both sets of spinning rust, keeping the fourth drive as a spare?

Regarding storage performance: the electronic health record is written to run on pretty basic machines, so the data points being written are tiny snippets of text. Space efficiency would argue for a small block size, but given how large the drives are relative to the database files, I'll probably use a larger block size for better read/write performance (there's an interesting article on Ars Technica about this).
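
If I understand correctly, volblocksize is fixed when each zvol is created, so I'd set a default on the Proxmox storage before creating the VM disks. A sketch, borrowing the placeholder pool name from your example above (the 64k value is a guess on my part, not a tested number):

# set the default block size for new VM disks on a ZFS-backed storage
pvesm set sas2t --blocksize 64k
# or per-zvol, when creating a disk by hand
zfs create -V 100G -o volblocksize=64k sas2t/vm-101-disk-0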

Thanks again,
Michael
 
Agreed completely on needing a second server; we are actively looking for one for exactly this scenario. I'm reading up on the HA features of Proxmox to better understand what that would require.
 
