What hardware do you suggest for homelab performance?

Marshalleq

Active Member
Oct 14, 2018
Hello all, for some time now I have been on a journey looking for a way of virtualising the workloads I toy around with at home. Ultimately I decided to shift from the free version of VMware to Proxmox, because of the lack of features in VMware for an unlicensed system. Mostly this has been working quite well, until I decided to rebuild with ZFS.

I have read quite a lot about this topic on this forum and others, though there doesn't seem to be anywhere that bluntly states what you can and can't get away with in different situations.

I have a Ryzen 1700X CPU, 16GB of 3200MHz memory, an Intel 2-port gigabit NIC plus the onboard NIC, 2x 3TB Seagate Barracudas, 2x Samsung consumer SSDs, and an old enterprise 320GB HDD (which I haven't used, because I'm not sure it's any better), and I have tried a few different configurations. Having used a lot of different file systems over the years, I've never come across one that performs so poorly out of the box.

I have been dancing around in my mind whether to get an enterprise-grade SSD in some form or another (I don't care about redundancy - a nightly backup off to my NAS covers my needs perfectly) - I just care about the speed of creating VMs and the speed of copying files around, etc. I don't mind buying new hardware, but I do have a financial limit, given this is just a home lab for learning about stuff. These SSDs are expensive here in New Zealand.

What I'm looking to understand is, in this situation what would you do?
Where would be the best bang for buck?
Should I just forget ZFS altogether and get an Enterprise SSD? Would putting a ZIL on another drive (even if just that old enterprise drive) make a difference just because it's on another SATA channel?
Is 16GB too small for general workloads plus ZFS? I don't think so. I have added an SSD L2ARC; however, I don't think it makes much difference.
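For what it's worth, an L2ARC is cheap to experiment with, since a cache device can be added and removed without touching the pool's data. A minimal sketch, with the device ID below as a placeholder:

root@proxmox:~# zpool add rpool cache /dev/disk/by-id/ata-Samsung_SSD_850_EVO_XXXXXX
root@proxmox:~# arc_summary (reports ARC and L2ARC hit rates; ships with the ZFS tools)
root@proxmox:~# zpool remove rpool /dev/disk/by-id/ata-Samsung_SSD_850_EVO_XXXXXX

One caveat: L2ARC headers live in RAM, so with only 16GB a large cache device eats into the ARC it is supposed to supplement.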

From what I've read, I must have an enterprise SSD for VM workloads. What if I just did a RAID 0 across a few mechanical disks using mdadm? Unsupported, I believe, but the two-disk ZFS RAID 0 I've tried doesn't seem to perform well anyway.

High Level Recommendations?

Thanks.
 
One thing that I didn't mention, which I'm wondering about, is that I chose the smallest block size when setting up the ZFS RAID 0. Good for small files, bad for big ones, obviously.
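If that was the dataset recordsize, it can at least be checked and changed on the fly; zvols instead use a volblocksize that is fixed at creation. A minimal sketch, assuming the pool is rpool:

root@proxmox:~# zfs get recordsize rpool
root@proxmox:~# zfs set recordsize=128K rpool (128K is the default; only newly written data picks it up)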
 
So now I've disabled sync on my rpool, because apparently if this speeds up fsyncs, it means I would benefit from a fast external ZIL (SLOG).

Before
root@proxmox:~# pveperf
CPU BOGOMIPS: 114964.80
REGEX/SECOND: 2476069
HD SIZE: 5109.62 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 91.27
DNS EXT: 314.08 ms
DNS INT: 2.12 ms

After
root@proxmox:~# zfs set sync=disabled rpool
root@proxmox:~# pveperf
CPU BOGOMIPS: 114964.80 (I love Ryzen :D)
REGEX/SECOND: 2459563
HD SIZE: 5109.62 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 36327.89
DNS EXT: 305.99 ms
DNS INT: 2.14 ms
According to this, I would massively benefit.
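Worth noting before leaving it this way: with sync=disabled, ZFS acknowledges sync writes immediately, so a power cut can lose the last few seconds of writes that applications (e.g. databases) believed were safely on disk. Reverting is one command:

root@proxmox:~# zfs get sync rpool
root@proxmox:~# zfs set sync=standard rpool (the default; honours sync requests again)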

I think I'll try setting one up on a 10k enterprise drive first, to see if it works well enough - I don't think using consumer SSDs is a good idea, from what I've read.
 
I found that adding an old Hitachi 320GB 10k RPM drive as a SLOG improved my FSYNCS/SECOND by about 2x. Given how old that drive is, it seems like a flash-based device is warranted. Looking at an Intel S3700 - the only one I can find is 200GB, unfortunately, which is a bit big. Would love it to be M.2 based, but oh well, I guess it will work well enough as is.
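For anyone trying the same thing: a log vdev can be attached and detached without rebuilding the pool, so it's safe to experiment. A sketch, with the device ID below as a placeholder:

root@proxmox:~# zpool add rpool log /dev/disk/by-id/ata-HITACHI_XXXXXXXX
root@proxmox:~# zpool status rpool (the drive should now show under a "logs" section)
root@proxmox:~# zpool remove rpool /dev/disk/by-id/ata-HITACHI_XXXXXXXX (to detach it again)

Note that a SLOG only absorbs sync writes; it does nothing for async throughput.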
 
For completeness: running # zfs set sync=disabled rpool has made an incredible difference. However, I still get IO delay of about 50-60% during a file copy over a gigabit network with a three-disk RAID 0. Restoring a machine from backup still slows the other machines down enough to be very noticeable from a client perspective - i.e. my Roon media server box becomes unresponsive. Can anyone comment on whether this performance issue is to be expected, even without sync and even on RAID-0 Barracudas? Not much input on this thread, is there - I guess everyone runs more expensive setups than me...
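To narrow down where the delay comes from during a copy, per-vdev and per-disk stats are the obvious first look; a sketch (iostat comes from the sysstat package):

root@proxmox:~# zpool iostat -v rpool 2 (per-vdev ops and bandwidth, refreshed every 2 seconds)
root@proxmox:~# iostat -x 2 (per-disk utilisation and wait times; apt install sysstat)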
 
Hi,

First of all, reading about ZFS is one thing; understanding how it works is another story. You only tell us what you did regarding ZFS/storage, but you are missing a very important thing: what load do you put on it (only VMs, or mixed VMs/CTs? How many, and what load is on them? How much memory is allocated to all of your CTs/VMs in total?)
As you mention, sync=disabled improves your performance; this happens because your load (from the VMs/CTs) is using sync writes (databases, and maybe NFS, are the most likely sources).

Another point is that you must know your HDDs' sector size (512 B or 4 K). Maybe your load needs high IOPS from the storage?
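Checking both is quick; a sketch (zdb output varies a bit between ZFS versions):

root@proxmox:~# lsblk -o NAME,PHY-SEC,LOG-SEC (physical and logical sector size per disk)
root@proxmox:~# zdb -C rpool | grep ashift (ashift=12 means the pool writes 4K-aligned blocks)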
 
Hi guletz, thanks for the reply - I appreciate it, since you didn't have to.

Actually, I originally had a different thread title, which I changed to this more clickbait-oriented one, because I realised people love making solutions for others but don't seem to speak to general design principles that much (which is what I was really after).

I will answer your loading question, but ultimately my current loading will change, because it's a home lab. In the meantime I have purchased an enterprise SSD (it hasn't arrived yet) and am back to running my VMs on a Samsung EVO 850 on ext4.

So, to your question: I expect it is an IOPS thing; nevertheless, I didn't expect the whole system to become so unresponsive as a result - I guess I should install the OS on a different drive.

My loading is a mixed bag. I have pretty much everything - MySQL/MariaDB databases, web pages, a music server, automation scripts, a CrashPlan backup server and so on, mostly running in Docker, except for CrashPlan in its own VM.

It all runs fine until a single file is copied. That is what surprised me: there's not a lot of activity on my box really (just home use, not open to the internet or anything, single-user access), and a single file copy brings it to a standstill. This, however, only happens on ZFS, which is how I came to think it was a ZFS thing. Workloads on ext4 can slow down, but the system doesn't grind to a halt.

Now that I've put my consumer SSD back in on ext4, the IO wait is less than 1%. I am aware IOPS on an SSD are much greater than on spinning disks, so that's OK. But if I put ZFS on that SSD, the IO wait gets quite high. I didn't try that with sync=disabled, however, so that could be interesting.
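One way to isolate the sync-write penalty on that SSD would be a small fio run against the dataset, once with sync=standard and once with sync=disabled; a sketch, assuming fio is installed (apt install fio) and the dataset is mounted at /rpool/data:

root@proxmox:~# fio --name=synctest --directory=/rpool/data --rw=randwrite --bs=4k --size=512M --fsync=1

The --fsync=1 flag forces an fsync after every write, which is roughly the pattern that makes consumer SSDs (with no power-loss-protected cache) fall over.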

Anyway, for my use case I think I should stick with ext4. It seems like there's so much messing around required with ZFS that it's not worth the hassle unless you have mega $$ to spend and critical workloads; in a homelab it seems to struggle with anything but the lightest workloads.

Oh, and I had already checked that the 4K alignment in ZFS was good. I do have quite a few NFS mounts over a 1Gb connection to a NAS, so that's an area I haven't looked at yet. It doesn't really do a lot with them (except for the backup system), but that doesn't seem to cause the issue; it's copying within the Proxmox server that causes the issue. Actually, I stopped running backups on my local ZFS storage because it was so much slower and more painful than over NFS, so again it seemed to point to ZFS. I tried this even with a single bare ZFS drive.

Anyway, perhaps that's a bit more informative and there are some words of wisdom you can share. :D

Thanks again.

Marshalleq
 
Hi ... from the antipodes ;)

I run a semi-lab at home myself! If you do not need ZFS, and you do not want to spare the time to learn about it, then the best option is not to use it! In that case, I would use the Hitachi 320GB to install Proxmox, then after installation add the 2x 3TB Seagates as separate storage (LVM over RAID 0) for CTs/VMs, and use the new SSD as storage only for the DB CTs (so you get better IOPS), something like the sketch below.
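A rough sketch of that layout, with /dev/sdb and /dev/sdc as placeholder device names for the two Seagates:

root@proxmox:~# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
root@proxmox:~# pvcreate /dev/md0
root@proxmox:~# vgcreate vmdata /dev/md0
root@proxmox:~# pvesm add lvm vmstore --vgname vmdata --content images,rootdir (registers the VG as Proxmox storage)

No redundancy in RAID 0, of course, but that matches the nightly-backup approach already described.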

Good luck!
 
