Newbie - need recommendation on best way to install Proxmox, VMs, containers, etc.

bthoven

New Member
Oct 8, 2021
I've been playing around with Proxmox for a week on an 8GB RAM / 512GB HDD mini PC (i5). I'm about to build a more proper, low-powered desktop server (Xeon E5-2560L, 10 cores/20 threads, 32GB registered ECC RAM, 500GB NVMe SSD, 2x 4TB WD Red).

Applications, for now:
Debian 11 VM with KDE environment (hosting at least Pi-hole, qBittorrent, filebrowser, and Docker containers),
Windows 11 VM,
Home Assistant VM,
OpenMediaVault VM, and
Nextcloud LXC container (a rough creation sketch follows below this list).
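
For the Nextcloud LXC, something like this is roughly what I have in mind (just a sketch; the VMID, template filename, and sizes are placeholders, and `pveam available` lists the actual current Debian template name):

  # download a Debian 11 container template to local storage
  pveam download local debian-11-standard_11.3-1_amd64.tar.zst
  # create the container on the LVM-thin storage
  pct create 110 local:vztmpl/debian-11-standard_11.3-1_amd64.tar.zst \
    --hostname nextcloud --cores 2 --memory 2048 \
    --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp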

What I plan for installation:
1. Install Proxmox on the NVMe drive (32GB for the PVE root), with the rest as LVM-thin storage for all the VM and LXC container guests above
2. Create either a RAID-1 or a ZFS mirror on the 2x WD 4TB with Proxmox (a rough command sketch follows below this list)
3. The OpenMediaVault VM will mount the whole ZFS mirror and will mainly be used for SMB shares for all the VMs, LXC containers, and Docker containers. The permanent data of the VMs/containers, e.g. configs, databases, etc., will be stored on the ZFS drive via the SMB shares from OpenMediaVault.
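
For step 2, a rough sketch of creating the ZFS mirror from the shell (the pool and dataset names are just placeholders, and the by-id paths would be the actual device IDs of the two WD drives; the GUI under Disks > ZFS can do the same):

  zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL1 \
    /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL2
  zfs create tank/shares   # dataset for the shared data
  zpool status tank        # verify both disks are ONLINE in the mirror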

Is this the right way to do it, or would it be better done some other way?

Thank you.
 
Should work. But some notes:
1.) Virtualization wants a lot of RAM. Depending on how big your home server grows, you might even want 128GB. So I would populate as few RAM slots as possible (or only as many as needed to use all memory channels) so you have room to upgrade your RAM later if needed.
2.) The E5 supports a lot of PCIe lanes, and you might want to add a lot of PCIe cards later: multiple GPUs for hardware-accelerated video encoding (for Plex/Emby/Jellyfin) or decoding (so a Windows VM can play back video in a browser without stuttering), maybe a 10Gbit NIC upgrade, maybe a quad-port Gbit NIC if you want to virtualize OPNsense/pfSense, maybe PCIe-to-M.2 adapter cards to add more NVMe drives later, maybe an HBA to pass HDDs directly through to an OMV/TrueNAS VM, maybe a USB controller card for PCI passthrough so you can use USB inside a VM without the problems and restrictions of virtualized USB passthrough, and so on (a rough passthrough setup sketch follows after these notes).
So I would get at least an ATX board, or you are wasting those PCIe lanes; that CPU can handle more lanes than fit on a small ITX/mATX board.
3.) Don't buy plain WD Reds. They aren't suitable for a server or a NAS because they use SMR. They are by design horribly slow at writing, and they can cause a ZFS pool to degrade: the latency gets so bad that ZFS thinks the drive is dead because it can't answer in time. The "WD Red Plus" and "WD Red Pro" lines use CMR instead of SMR, and those should work fine (see the smartctl sketch after these notes to check which model you actually have).
4.) You get a lot of write amplification from all the virtualization, filesystems stacked on top of each other, sync writes, mixed block sizes and so on, so depending on your workload consumer drives might wear out very fast. Especially if you expect a lot of sync writes (running databases or similar), you should consider enterprise-grade SSDs with power-loss protection and a higher TBW/DWPD rating. It might not be that critical with LVM-thin, but as soon as you want to use ZFS there too, enterprise SSDs are highly recommended.
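
Regarding note 2.): a minimal sketch of the usual IOMMU/PCI passthrough preparation on an Intel board, assuming GRUB is the bootloader (the steps differ slightly with systemd-boot):

  # /etc/default/grub (then run update-grub and reboot):
  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
  # /etc/modules (VFIO modules loaded at boot):
  vfio
  vfio_iommu_type1
  vfio_pci
  vfio_virqfd
  # after the reboot, verify IOMMU is active:
  dmesg | grep -e DMAR -e IOMMU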
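Regarding notes 3.) and 4.): smartctl (from the smartmontools package) lets you check both the exact drive model and how worn an SSD already is (device names here are just examples):

  apt install smartmontools
  # exact model of a SATA disk (e.g. WD40EFRX = CMR "Red Plus", WD40EFAX = SMR):
  smartctl -i /dev/sda
  # NVMe health counters ("Percentage Used", "Data Units Written"):
  smartctl -a /dev/nvme0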
 
Thanks for the very useful tips.
I believe my WD Reds are still a CMR type (WD40EFRX).
This server is for my personal use and may not have a lot of traffic. I may consider a further upgrade later on.
Thanks again.
 
I believe my WD Reds are still a CMR type (WD40EFRX).
Yep, these "WD40EFRX" drives are labeled as "WD Red Plus" now and should be fine.
This server is for my personal use and may not have a lot of traffic. I may consider a further upgrade later on.
You never know. My home servers, used only locally by myself, are now at 16 cores/32 threads, 114GB RAM, 7 PCIe cards, 10 HDDs, and 20 SSDs, and I'm already running out of RAM, CPU cores, and HDD storage again. I would like to add three more PCIe cards but have no free slots left, and all RAM slots are fully populated too. And I started with a Raspberry Pi 1B, then it grew and grew^^ ...so it's not a bad idea to think early about upgradeability. An ATX board doesn't cost that much more, but you get 8 instead of 4 RAM slots and 7 instead of 3 PCIe slots.
 
Oh... it's too late for me, because all the parts are already here, ready for assembly ($450). At least I can re-use my RAM, HDDs, etc. when I decide to upgrade, hopefully not in the very near future.
 
