Looking for pointers on small business setup

dtiKev

Apr 7, 2018
I just spent the last two weeks recovering from a combination of things: my failure to follow through on a proper backup plan, ESXi's free version not allowing live backups, and a two-disk failure in a RAID10 array during a VM migration. Now that I'm back on solid ground, I'm looking to build a system that is easier to back up.

First here's what we virtualize:
  • 4-6 Debian-based servers for web/PHP/Postgres/MySQL type duties. None of these serve any meaningful number of users; even the DBs are there either to back up cloud databases or to serve as backends to web services
  • Three Windows test VMs (7/8.1/10) that we use to put installs and versions of the software we produce through the wringer, then reset back to snapshots and do it all over again
  • Three Windows client machines (currently still XP) that listen to live market data and push results into a SQL database
  • A Windows Server / SQL database to receive the above data, as well as replication from off-site databases so we can back them up locally
  • An additional Windows Server with MSSQL, a Windows desktop with scanners, and a Linux Apache/PHP server as a QA environment.
Of these machines, the scanner/database pairs are the most intensive. The databases get thousands of inserts per minute from the scanners. Our production ones have three SSDs to handle the majority of the inserts, and the tables are split across them to increase throughput. QA wouldn't be as intense, but the datastore it pushes to should probably be isolated so as not to affect the other VMs; in ESXi I have that machine segregated from the rest. The scanners use more compute to crunch the market data, and the network between the two gets hammered. The web server answers QA/test requests by querying the database through ODBC.

For comparison: in ESXi, which ran solid for about 7 years, I had a dual-CPU, 32-core AMD Dell R715 with 128GB of RAM running almost all of this. There was another dual-CPU, 16-core Intel machine with 64GB of RAM to handle the database and a few of the Debian environments.

I don't think we need SAN/Ceph because we can handle downtime so long as it's manageable. Our biggest needs are easy live backups that we can work into our on-site/off-site rotation, and the ability to migrate machines to other nodes as we figure out the balance or add more nodes.

Our current server hardware is in the 7-9 year old range, so we will likely be updating. The machines we had before were probably overkill, but they were re-purposed so I wasn't complaining. I don't know what budget I'll have, but I'm wondering what recommendations I'll get. I'm also wondering what the best way is to get fast speeds out of non-SSDs with some level of fault tolerance. Before, I had a RAID10 array of four 1TB SATA drives, with a couple of SSDs on the side for DBs and other faster storage needs.

Thanks for any pointers...
dtiK
 
Hi,

I would go with ZFS in your case, because it gives you replication and the possibility of making off-site backups.
But a ZFS pool that hosts a DB needs a very fast and durable SSD as the ZIL device, plus the proper amount of memory.
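
To give an idea of what the replication/off-site part looks like in practice, here is a rough sketch of snapshot-based send/receive (the pool and dataset names, snapshot dates and the off-site host are only examples, not anything from your setup):

    # snapshot the dataset that holds the VM disks
    zfs snapshot rpool/data@nightly-2018-04-07

    # first copy to a local backup pool/disk
    zfs send rpool/data@nightly-2018-04-07 | zfs receive backup/data

    # later runs only send the delta to the off-site box over SSH
    zfs send -i rpool/data@nightly-2018-04-06 rpool/data@nightly-2018-04-07 \
        | ssh backup@offsite.example.com zfs receive -F tank/offsite/data

Tools like pve-zsync or sanoid/syncoid can automate exactly this loop so you don't have to script it by hand.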
 
How about:
Crucial MX500 1TB 3D NAND SATA 2.5 Inch Internal SSD - CT1000MX500SSD1
 
If you dig around there are some recommendations, but I'd look at Samsung SM863 drives for an affordable cache disk. Size isn't all that important: ZFS queues up disk writes and syncs them all to disk every 5 seconds or so, and synchronous writes that happen between those 5-second syncs need to be written to the ZFS Intent Log. You're going to put that on a fast, enterprise-grade SSD, and when you partition it you only need it to be around the size of 5 seconds of maximum write throughput to your pool. I think the recommendation I saw in the wiki was 50% of your RAM size, which is probably easier to calculate.

In my case I partitioned a 32G ZIL on the SSD because I've got 64G of RAM, and that's probably larger than it will ever need to be. Leaving the rest of the SSD unpartitioned lets the drive's wear-leveling algorithms work their magic and extend the life of the device.
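
For reference, adding the SLOG looks something like this (device names are just examples; adjust to your hardware):

    # carve a 32G partition on the SSD and leave the rest unpartitioned
    sgdisk -n 1:0:+32G /dev/sdX

    # attach it to the pool as a separate log device
    zpool add rpool log /dev/sdX1

    # optionally a second partition as L2ARC, if you ever need one
    # zpool add rpool cache /dev/sdX2

    zpool status rpool    # should now show a "logs" section with the device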
 
I think the recommendation I saw in the wiki was 50% of your RAM size, which is probably easier to calculate
50% of the memory is used by default as the maximum ARC size. The ZIL can be smaller, but yes, you are correct about the duration.
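
If you ever want to cap the ARC yourself instead of taking the 50% default, it's a single module option (the value is in bytes; 16 GiB here is only an example):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=17179869184

    # with root on ZFS, refresh the initramfs so the setting applies at boot
    update-initramfs -u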
 
Let me share the position I'm in and hope for some answers...

I am at the mercy of the CEO's need to micromanage without a complete understanding of the technologies involved. After a recent double-disk failure in a hardware RAID10, from which we were miraculously able to recover most of the needed data, he is anti-RAID and afraid of ZFS due to a lack of understanding. I've made it clear that my lack of a better backup plan was the true culprit, but nevertheless I am at the mercy of his whims/fears.

He is telling me to throw two 1TB drives in and simply mirror them. We technically can fit the few VMs we need within a 1TB footprint, but I don't want to set this up wrong, so I'm trying to cover all bases. His needs/wants are that if a single disk fails, the machine will still boot up and be able to run VMs while we recover the mirror. I've explained that we can set up a second node that can receive backups/snapshots/etc. and that in a failure it could run up-to-date replicas, but he's not buying it. I will test and show him this in action once the new hard drives arrive.

But first, I will have to set up what is intended to be our primary node, which will run three or so critical machines. If I put two 1TB SSDs into this server and select ZFS Mirror/RAID1 at installation, will his needs above be covered? I would add another drive after install and add it to fstab, just to have a backup location outside of the space used by VMs that I can work into my on-site/off-site backup routine.
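
My rough plan for that, assuming the installer's ZFS RAID1 option gives me a mirrored rpool (device names and the UUID below are placeholders I would substitute):

    zpool status rpool           # both SSDs should show up under mirror-0 as ONLINE

    # extra non-pool disk for backups, mounted via fstab
    mkfs.ext4 /dev/sdc
    blkid /dev/sdc               # note the UUID it reports
    echo 'UUID=<uuid-from-blkid>  /mnt/backup  ext4  defaults  0 2' >> /etc/fstab
    mkdir -p /mnt/backup && mount /mnt/backup

    # register it with Proxmox so vzdump can target it
    pvesm add dir backup --path /mnt/backup --content backup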

And finally, I have my own worries/needs: one of the machines that I'll likely end up running on the second node is a SQL server that can get hammered during the day. What is my best option for spreading that database VM's virtual disks across multiple physical disks, so that if we have a failure on node1 and I have to temporarily move VMs to node2, the database's activity won't cause the critical machines to crawl?
 
His needs/wants are that if a single disk fails, the machine will still boot up and be able to run VMs while we recover the mirror.
So, here's what he needs to know:
  • RAID1 is a mirror. One drive fails and things keep on chugging. Two drives fail and you're screwed.
  • RAID10 is a way to string several RAID1 mirrors together. You get better speed this way, recovery from one failed drive is faster than with the parity options below, and sometimes you can survive multiple drive failures. But two failures can also mean data loss; it depends on which drives fail.
  • RAID5 (RAIDZ1 in ZFS) is parity RAID. For each stripe, data is written to all the drives but one, and that one gets parity calculated from the rest (the parity role rotates across the drives); the parity can be used to rebuild the data if a drive fails. You can lose any one drive without data loss, but losing two drives loses data.
  • RAID6 (RAIDZ2 in ZFS) is also parity RAID, but it's configured so that you can lose any two drives and not lose data.
  • RAIDZ3 is (you guessed it) triple parity RAID, with more calculations, and the ability to lose up to three drives before losing data.
It sounds to me like you want to install onto a RAID6/RAIDZ2 volume. This means you'll lose two drives' worth of capacity, recovery from a drive failure will be much slower than you're used to with RAID1/RAID10, and writes will probably also be worse than with RAID10. To make this perform well you will need to do a couple of things:
  • Set up a Separate ZFS Intent Log like we're talking about. Faster is better here.
  • Have LOTS of RAM. If your VMs are using 32G of RAM and you've got 128G in the machine, then by default Proxmox will let up to half the RAM (64G here) be used by the ARC (Adaptive Replacement Cache, cool tech, google it), which caches the data you read most often from disk. This means your most-used data doesn't need to be read from the disks at all, which increases speed greatly.
  • If you need to, and only if your ARC isn't caching enough, you can set up a Level 2 ARC cache (L2ARC) on a fast SSD (or on a different partition of the same SSD used for your SLOG). So if you've got 5TB of data you access frequently, you could add a 1TB SSD cache to speed up reads of that data. It won't be nearly as fast as the ARC, because nothing is faster than RAM, and it actually shrinks the usable ARC because RAM is needed to index what's on the L2ARC, but there are cases where it's useful.
If it were me I'd install over 4-6 drives configured as RAIDZ2, put a good SSD in there as a SLOG, and max out the RAM in the machine. I think that meets your needs. Just buy a machine that supports way more RAM than you need.
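
For what it's worth, the pool itself would look roughly like this (drive and partition names are examples; the Proxmox installer can also build the RAIDZ2 for you at install time):

    # six-drive RAIDZ2 with a fast SSD partition as the SLOG
    zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf
    zpool add tank log nvme0n1p1

    zfs set compression=lz4 tank
    zpool status tank            # one raidz2-0 vdev plus a separate "logs" device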

Oh, and install a damn backup solution that runs automatically, notifies you in case of failure, and that you actually test so you know you can restore from it. And don't forget to make sure off-site backups happen automatically as well. No, dumping to USB/tape automatically and depending on the boss to take the latest copy home every Friday doesn't count.
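
On Proxmox the scheduled part can be set up from the GUI (Datacenter > Backup); the cron entry it generates is roughly equivalent to something like this (VM IDs, storage name, mail address and off-site host are examples):

    # nightly vzdump of the critical VMs, with a mail notification
    30 1 * * * root vzdump 100 101 102 --storage backup --mode snapshot --compress lzo --mailto admin@example.com

    # push the dumps off-site afterwards
    0 5 * * * root rsync -a /mnt/backup/dump/ backup@offsite.example.com:/srv/pve-dumps/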
 
I appreciate your response, but I think you missed the part where I said I am forced (by the need for a paycheck) to do as the boss asks. I am well versed in RAID levels and know plenty about ZFS. What I want to do and what I am being made to do are two different things.

I've got a test machine and some mechanical drives so I guess I'll just install a ZFS mirrored setup, get a VM running... and pull out a drive to see what happens.
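
If the test goes the way I expect, recovery should be something along the lines of (device names made up for the example):

    zpool status rpool                        # pool shows DEGRADED, VM keeps running
    zpool replace rpool /dev/sdb /dev/sdd     # resilver onto the replacement drive
    zpool status rpool                        # watch the resilver, then back to ONLINE

    # for a bootable rpool the new disk also needs the partition layout and bootloader copied over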

The other question I'd like answered, for my own sake, is how I can set up a separate physical disk (or two, or three) that I can put virtual storage onto in a situation where I have a database that hammers the hard drive. I know how to add disks and get them into fstab... but how can I set up a VM that has a hard drive outside of the pool that PVE creates? Pointers to the correct threads/wiki/docs would be fine... I like to read.
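
To be clear about what I mean, something like this is what I'm picturing (pool name, device names and VM ID are just examples); is this roughly how it's done, or is there a better-supported way?

    # dedicated pool on its own spindle(s) for the DB VM's disks
    zpool create dbpool /dev/sdd              # or: zpool create dbpool mirror /dev/sdd /dev/sde

    # make it available to Proxmox as VM image storage
    pvesm add zfspool dbpool --pool dbpool --content images,rootdir

    # attach a new 200G virtual disk on that storage to the VM
    qm set 105 --scsi1 dbpool:200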
 
