[SOLVED] Should I revert my upgrade, or buy more RAM? (I reverted)

DR4GON

Member
Sep 7, 2021
TL;DR: Buy 128GB of RAM for an i7 8700 system, or use two PCs, one for virtualization and one for storage, without buying anything?

I recently upgraded my gaming PC, so I decided to use those components to "upgrade" my server, but I'm experiencing worse performance.

Old Components:
CPU: Dual Intel Xeon X5675
Motherboard: ASUS Z8NA-D6
RAM: 6x 16GB 1333 MHz (96GB)

New Components:
CPU: Intel i7 8700
Motherboard: MPG Z390 GAMING EDGE AC
RAM: 4x 8GB 2666 MHz (32GB)

I was told that the increased performance from the much newer i7 8700 would make up for losing the second Xeon, and that I shouldn't worry about the loss of RAM. My requirements are running a VM with Ubuntu 20.04 hosting Plex, LanCache, Sonarr, Radarr, and PIA. A second VM attempts to run MineOS, but on the i7 8700 it keeps hitting massive lag spikes, kicking the player (only 1), and shutting down (it worked, if laggily, with 2 players on the Xeons). I also have two pools, 16x4TB in RAIDZ2 and 8x2TB in RAIDZ1 (eventually it will be either 4x 6x4TB in RAIDZ2 or 2x 8x4TB in RAIDZ2, depending on needs).

I used to use FreeNAS, which from my understanding is very RAM hungry, so I upgraded my server to 96GB a few years ago. But because FreeNAS doesn't run VMs well, I was convinced to dive into Proxmox. The old system had some issues running everything, but I'm getting more issues with the "faster" PC.

Help, do I fork out for 128GB of RAM (the motherboard's limit), or do I build two PCs and run both? My electricity bill would be mad at me if I started up another computer, but 128GB of DDR4 is a MASSIVE investment, and if it doesn't fix my issues, it's a huge waste of money.

Note: the Xeons ran the same setup better than the i7 does. Is it all down to RAM?
 
I was told that the increased performance from the much newer i7 8700 would make up for losing the second Xeon, and that I shouldn't worry about the loss of RAM.
Who told you that RAM is not very important for virtualization? Memory is the one thing that you can't really overcommit on! Unless you run many, many very similar VMs, maybe.
For my workloads, I use less than 10% for ZFS instead of the default 50% and it works fine. You could try limiting the ZFS ARC size and see how it goes first.
Consider replacing the single Ubuntu VM running multiple Linux services with multiple Proxmox containers, which are more efficient and can use the host's ZFS pools directly (sharing the ARC).
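Capping the ARC on the Proxmox host is a one-line module option. A minimal sketch, assuming a 5 GiB limit (the figure is an example, not a recommendation; size it for your workload):

```shell
# Limit the ZFS ARC to 5 GiB (zfs_arc_max takes bytes): 5 * 1024^3 = 5368709120
echo "options zfs zfs_arc_max=5368709120" > /etc/modprobe.d/zfs.conf
# Proxmox loads ZFS from the initramfs, so regenerate it for the next boot:
update-initramfs -u -k all
# Or apply the limit immediately without rebooting:
echo 5368709120 > /sys/module/zfs/parameters/zfs_arc_max
```

You can watch the ARC shrink to the new ceiling with `arcstat` or by reading `/proc/spl/kstat/zfs/arcstats`.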
 
Who told you that RAM is not very important for virtualization? Memory is the one thing that you can't really overcommit on!
Well, not that it's not important, just that the new hardware would be faster, so I wouldn't notice the difference. I have definitely noticed the difference, but I don't know if I should just get more RAM or use two computers.
Consider replacing the single Ubuntu VM running multiple Linux services with multiple Proxmox containers, which are more efficient and can use the host's ZFS pools directly (sharing the ARC).
I've considered it, but I have been unsuccessful replicating the same functionality on Proxmox. Long story short, virtualizing Ubuntu 20.04 has been the only way to secure my NAS, allow off-site access, and use all the apps I need. There would have to be a very good reason to spend another week trying to figure out containers again.

So do you think adding RAM would solve my issues? Or is it in my best interest to spin up a second computer, one for VMs and one for the NAS? I'm guessing I have to go back to the Xeons (for the ECC memory) anyway.
 
Switching CPU(s) or buying more memory won't automatically solve your latency/responsiveness problems. For less lag, you need the system to not be at 100% at any time.

Try 6/N cores for each of the N VMs and leave the hyper-threads for background processing by Proxmox. Give no more than 24G/N of memory to each VM to stay below 80% memory usage, and limit ZFS memory usage to 5G (instead of the default 16G) if you're using ZFS on Proxmox, or 5G for cache if you don't, which leaves 2G and change for Proxmox itself.
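As a sketch of that split for N=2 VMs on the 6-core/32GB i7 (the VMIDs 100 and 101 are placeholders; substitute your own):

```shell
# 6/N = 3 cores and 24G/N = 12 GiB (qm takes memory in MiB) per VM:
qm set 100 --cores 3 --memory 12288
qm set 101 --cores 3 --memory 12288
```

With the ZFS ARC capped at 5G on top of that, the host stays under the 80% memory ceiling described above.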

I think you can also tune the old server better by using NUMA (also 6/N cores per VM, but 2 sockets per VM), and you'll have more memory per VM (and more threads for Proxmox and more memory for ZFS). Then you can enable IO threads without them starving or getting in the way of other processing, which can reduce latency.
If that improves your lag spikes, you can of course shift resources between VMs and/or ZFS when needed for better performance (but never more than 45% to one VM).
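On the dual-socket Xeon box, that NUMA and IO-thread setup might look like this (VMID, storage, and disk names are placeholders for illustration):

```shell
# Expose both sockets to the guest and enable NUMA so guest memory
# is allocated node-locally on the dual-socket board:
qm set 100 --sockets 2 --cores 3 --numa 1
# IO threads require the single-queue virtio-scsi controller; then
# enable iothread on the disk (here an existing volume scsi0):
qm set 100 --scsihw virtio-scsi-single --scsi0 local-zfs:vm-100-disk-0,iothread=1
```

The iothread flag moves disk IO off the main QEMU thread, which is where the latency benefit comes from.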

If you're more comfortable with Ubuntu and only run a few VMs, maybe you can consider running Ubuntu on the hardware and the VMs inside it. This would save you the overhead of Proxmox (being enterprise server oriented and all that).
 
For MineOS:
- set CPU type to "host" for better CPU performance
- use PaperMC and not the Vanilla MC Server for better parallelization and faster code
- split your worlds (nether/world/end) into their own servers and use a Waterfall cluster. That way each world gets its own CPU core, for even better parallelization
- best would be to run it on the i7, as MC primarily needs single-threaded performance
- run it off good SSDs (best would be enterprise NVMe, then enterprise SATA/SAS, then consumer NVMe... don't use HDDs or QLC SSDs)
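The first point is a single command on the Proxmox side. A sketch, assuming the MineOS VM has VMID 102 (a placeholder):

```shell
# "host" passes the real i7 8700 feature set (AVX2 etc.) through to the
# guest instead of the conservative default CPU model:
qm set 102 --cpu host
```

The change takes effect on the next full VM stop/start, not on a reboot from inside the guest.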

What I would do with your hardware:
Use both machines as servers. The Xeon as the main machine for the NAS and all of the VMs that don't need fast CPU performance. The i7 just for virtualization, and only for VMs that really need the CPU performance, like your MineOS, Windows VMs and Plex.
I would install PVE bare metal on both servers and then virtualize your TrueNAS on the Xeon server. For that you might want to buy enough HBAs to fit your 24 HDDs, or even some additional SSDs that could boost the HDDs' performance, in case you don't already have them. For example a 16-disk HBA + an 8-disk HBA, two 16-disk HBAs (in case you want to use special device SSDs too), or 3x 8-port HBAs (they aren't that expensive; you can get them for 30€ each). Then you could create a TrueNAS VM, give it most of your host's RAM (for example 64GB, so you have 32GB for PVE itself and some other VMs), and pass those HBAs through into your TrueNAS VM so TrueNAS can directly access the physical disks.
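The HBA passthrough itself is two steps, assuming IOMMU (VT-d) is already enabled in the BIOS and kernel. A sketch with placeholder PCI address and VMID:

```shell
# Find the HBA's PCI address (LSI/Broadcom SAS controllers are typical):
lspci | grep -i 'SAS\|LSI'
# Pass the whole device through to the TrueNAS VM
# (0000:01:00.0 and VMID 100 are placeholders):
qm set 100 --hostpci0 0000:01:00.0
```

Once passed through, the host no longer sees the attached disks; TrueNAS owns them directly, which is exactly what ZFS wants.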

Consumer hardware won't make a reliable server, so I wouldn't buy 128GB of RAM for it. It might be cheaper to just get a "new" second-hand server, like a dual Xeon E5 v3/v4 with 64/128 GB of RAM, which you can often get for 400-600€.
 
Switching CPU(s) or buying more memory won't automatically solve your latency/responsiveness problems. For less lag, you need the system to not be at 100% at any time.
As far as I can tell, changing the graph to "Year (max)", the most utilization I've ever had is 38%. I can only hit 40% if I deliberately scrub around a 10GB 4K video on Plex, using the integrated graphics. Using two PCs, I'd have room for a graphics card too, I guess.
Use both machines as servers. The Xeon as the main machine for the NAS and all of the VMs that don't need fast CPU performance. The i7 just for virtualization, and only for VMs that really need the CPU performance, like your MineOS, Windows VMs and Plex.
I would install PVE bare metal on both servers and then virtualize your TrueNAS on the Xeon server.
[...]
Consumer hardware won't make reliable servers so I wouldn't buy 128GB of RAM for it.
Thanks for breaking it down like that. I already have the HBAs, so I could implement that solution without any costly purchases. I'd never thought about reliability being an issue, but I feel stupid for momentarily forgetting my upgrade abandoned ECC. It would have been an expensive lesson to learn after the fact.

You think TrueNAS VM over just running the ZFS array in PVE? Any reason?
 
You think TrueNAS VM over just running the ZFS array in PVE? Any reason?
Running PVE as your NAS is absolutely fine and will even let you do things TrueNAS won't, like using partitions as SLOG/L2ARC/special device/DDT rather than just whole disks, so the same two mirrored SSDs that store your VMs could also be used as special devices for both of your HDD pools to speed them up.
The downside of course is that it is a lot of work to set everything up right, as PVE comes with no NAS functionality, so you would need to set everything up on your own using the CLI.
So a TrueNAS VM is still a valid option, especially when passing the HBAs through via PCI, because it is way less work and more user friendly.
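For the partition trick mentioned above, a sketch of what sharing one mirrored SSD pair across both HDD pools could look like (pool names `tank1`/`tank2` and partitions `/dev/sdx*`/`/dev/sdy*` are placeholders, and this only works when ZFS runs on the PVE host itself):

```shell
# Add one SSD partition pair to each HDD pool as a mirrored special vdev
# (metadata and small blocks land on the SSDs, speeding up the HDDs):
zpool add tank1 special mirror /dev/sdx3 /dev/sdy3
zpool add tank2 special mirror /dev/sdx4 /dev/sdy4
# Another partition pair can likewise serve one pool as a mirrored SLOG:
zpool add tank1 log mirror /dev/sdx1 /dev/sdy1
```

Note that special vdevs cannot be removed from a RAIDZ pool later, so the partition layout is worth planning before committing.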
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!