Moving from version 7 to 8 - what should I do differently?

jaytee129

Member
Jun 16, 2022
I'm planning my move from Proxmox 7 to 8 and leaning towards a fresh install so I can do some things better.

I'd like suggestions on what those improvements could be, building on some mental notes I've made over the 18 months since I set the system up (my first Proxmox host).

First, here are the highlights of the system:

Hardware:
SuperMicro H12 board with EPYC 7282 CPU, 128 GB ECC RAM
2 x 1.9TB NVMe M.2 SSD (Samsung PM9A3 enterprise SSD)
2 x 2TB HDD
Nvidia GTX1600 SUPER Graphics card passed through to Windows VM
Nvidia GT710 graphics card passed through to Linux VM
2 x 8TB USB3 external hard drives passed through to Windows VM

Always ON VMs:
3 Windows VMs, including a gaming VM tuned for performance
2 Linux VMs, including Plexserver

Soon to deploy an LXC container for a self-hosted RustDesk server
I also regularly spin up and down various other VMs and containers for testing

Disk pools:
One ZFS RAID1 (mirrored) 1.9TB pool for boot, VMs, and ISO files, currently at about 30% capacity
One LVM-thin pool on one of the 2TB HDDs (for Plexserver media)
The other 2TB HDD is used for testing

Using TrueNAS for backups (soon to be redeployed as a VM on a separate Proxmox box)

Possibly interesting stat: SMART reports 2% wearout on both SSDs after about 18 months of use. (I was a bit surprised, as I thought it would be a few years before any wearout would start to show.)

Here's what I'm thinking (and reading) I should do differently:
1) leave unassigned space on the SSDs (see the sketch after this list)
- as spare area for the SSD controller to use when replacing worn-out cells
- as a buffer for growth and/or something like a swap partition if needed in the future
2) create separate pools for boot, VMs/containers, and ISO files
3) install Proxmox on top of Debian (or another Linux OS) with software RAID at that level, instead of using Proxmox's ZFS mirroring
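
To make item 1 concrete, here's a rough sketch of the sizing math I have in mind, assuming the commonly quoted 10-20% overprovisioning range; the percentages and drive size are just placeholders for discussion, not a vendor recommendation.

Code:
# Rough overprovisioning math for item 1. The 10-20% figures are common rules of
# thumb for write-heavy workloads, not official Samsung or Proxmox guidance.

def overprovision_plan(drive_gb: float, op_fraction: float) -> tuple[int, int]:
    """Split a drive into a usable partition and an unassigned spare area."""
    reserve_gb = drive_gb * op_fraction      # left unpartitioned for the controller
    usable_gb = drive_gb - reserve_gb        # what would actually go to the ZFS pool
    return round(usable_gb), round(reserve_gb)

for fraction in (0.10, 0.15, 0.20):
    usable, reserve = overprovision_plan(1900, fraction)   # the 1.9TB PM9A3 drives
    print(f"{fraction:.0%} OP -> partition {usable} GB, leave {reserve} GB unassigned")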


What are your recommendations/experiences with the three items above? Roughly how much space should I leave unassigned?

For #3, some posts I've read suggest it's tedious to put this layer below Proxmox and not worth doing it this way. If someone does recommend it, what RAID software works best? mdadm?
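
For reference, the kind of setup I'm picturing for #3 is roughly the following. The device names and array name are placeholders, and this is only a sketch of the mdadm approach, not a tested recipe.

Code:
# Hypothetical sketch of building the RAID1 mirror for #3 with mdadm, driven from
# Python so the command can be reviewed before anything is written to disk.
# The array and partition names below are placeholders.
import subprocess

def create_mirror(md_device: str, members: list[str], dry_run: bool = True) -> None:
    """Assemble a two-disk RAID1 (mirror) array with mdadm."""
    cmd = [
        "mdadm", "--create", md_device,
        "--level=1",                          # RAID1 mirror
        f"--raid-devices={len(members)}",
        *members,
    ]
    print("command:", " ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)       # only run on the real host

create_mirror("/dev/md0", ["/dev/nvme0n1p3", "/dev/nvme1n1p3"])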

Any info would be appreciated.
 
Interesting side note: I have a new Proxmox box that I configured with ZFS RAID1 mirrors using new SSDs, more consumer-class Samsung 870 EVOs, and after maybe one month of use SMART reports 1% wearout. That seems soon for any wearout.

I've read that ZFS writes a lot to disk. Would using Debian (or another Linux) with software RAID treat the SSDs a lot more gently?

Still hoping for comments on my original post too...
 
Interesting side note: I have a new Proxmox box that I configured with ZFS RAID1 mirrors using new SSDs, more consumer-class Samsung 870 EVOs, and after maybe one month of use SMART reports 1% wearout. That seems soon for any wearout.
Proxmox can wear out consumer SSDs quickly (also depending on your VMs), but 1% per month will still last you about eight years. Wearout tends to increase faster at the beginning, because the drive is empty and you are writing new VMs to it. Give it some time before drawing conclusions.
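
As a quick back-of-the-envelope check (this just extrapolates the current rate linearly, which real drives won't follow exactly):

Code:
# Linear extrapolation of SMART wearout to 100%. Real wear usually slows down after
# the initial fill, so treat these as pessimistic lower bounds.

def years_until_worn_out(wearout_pct: float, months_elapsed: float) -> float:
    rate_per_month = wearout_pct / months_elapsed
    return 100 / rate_per_month / 12

print(f"870 EVO box (1% after 1 month):   ~{years_until_worn_out(1, 1):.1f} years")
print(f"PM9A3 box (2% after 18 months):   ~{years_until_worn_out(2, 18):.1f} years")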
I've read that ZFS writes a lot to disk. Would using Debian (or another Linux) with software RAID treat the SSDs a lot more gently?
Yes, ZFS has high write amplification because of features like checksums.
Not using a hypervisor (which logs and writes data for graphs a lot) and not running VMs (which might also write a lot) would of course put less wear on the SSDs.
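
If you want hard numbers instead of the wearout percentage, you can compare the host-write counter from SMART against the drive's rated TBW. A minimal sketch, assuming a Samsung-style Total_LBAs_Written attribute counted in 512-byte sectors (other models report writes differently); the device path and TBW rating are placeholders you would replace with your own values:

Code:
# Sketch: read host writes via smartctl and compare them to the rated endurance (TBW).
# Assumes a Samsung-style "Total_LBAs_Written" attribute in 512-byte sectors; the
# device path and TBW rating are placeholders.
import subprocess

RATED_TBW = 1200.0   # placeholder rating in TB written; check your drive's datasheet

def host_writes_tb(device: str) -> float:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            lbas = int(line.split()[-1])     # raw value is the last column
            return lbas * 512 / 1e12         # 512-byte sectors -> terabytes
    raise RuntimeError("write counter not found; this drive reports writes differently")

written = host_writes_tb("/dev/sda")
print(f"{written:.1f} TB written, {written / RATED_TBW:.1%} of rated endurance")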
 
Interesting side note: I have a new Proxmox box that I configured with ZFS RAID1 mirrors using new SSDs, more consumer-class Samsung 870 EVOs, and after maybe one month of use SMART reports 1% wearout. That seems soon for any wearout.

This is - essentially - an intentional anti-feature, but it's much worse in a cluster scenario:
https://forum.proxmox.com/threads/proxmox-and-ssds.153914/#post-700255

I've read that ZFS writes a lot to disk. Would using Debian (or another Linux) with software RAID treat the SSDs a lot more gently?

ZFS is a filesystem that was never designed for SSDs; any copy-on-write filesystem will do poorly. I would use XFS on mdadm anytime, just beware of this PVE issue (and change the config accordingly):
https://bugzilla.proxmox.com/show_bug.cgi?id=5235
 
