Consumer grade NVMe mirrored or definitely not?

w1x5uxx

New Member
Dec 20, 2024
Hi,
I'm looking to replace my RPi4 setup with a miniPC (Fujitsu Futro) that can take 2 NVMe drives and one SATA.
I want to set up Proxmox and have stuff like Nextcloud, Pihole, Emby, HomeAssistant, file server etc. running.
I thought of setting up the NVMe drives as a RAID1 mirror so a drive failure is easily covered.
I have been reading a couple of posts debating wear issues due to the copy-on-write nature of ZFS and such.
Here, others state that turning off atime and other write-intensive features will solve the wear issues.
Unfortunately they don't elaborate on the downsides of such settings.

Now I'm confused to the max how to set up my rig!
The consumer grade hardware is given, what is the best I can do with that?
Thanks ahead for your help.
 
zfs set atime=off rpool # should inherit with new datasets, but you can also do foreach-dataset:

Code:
for ds in $(zfs list -H -o name); do zfs set atime=off "$ds"; done; zfs get atime

(Note the argument order: the property goes before the dataset, and -H -o name skips the header row so the loop doesn't try to set atime on the literal word "NAME".)

Also turn off the cluster services "corosync" and pve-ha-*; you can Stop them in Nodes / (nodename) / System, and also run
' systemctl disable --now corosync ' at the console/SSH so they don't come back after a reboot
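On a standalone (non-cluster) node the whole set can be stopped and disabled in one go; a sketch, with the HA service names as they ship on a stock PVE install (verify with systemctl list-units 'pve-ha-*' on your version):

```shell
# Stop and disable the cluster/HA services on a single-node setup
systemctl disable --now pve-ha-lrm pve-ha-crm corosync

# Confirm they are dead and won't restart after a reboot
systemctl status pve-ha-lrm pve-ha-crm corosync --no-pager
```

If you later join the node to a cluster, just re-enable them the same way with `systemctl enable --now`.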

Install log2ram - and if you have ~16-20GB RAM or more on the host, add zram for compressed in-RAM swap. Allocate 1GB of swap with 16GB of actual RAM, 2GB if you have more. Unless you're running some specific workload, you probably shouldn't need more than that.
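If you'd rather wire up the zram swap by hand instead of using a packaged helper, a minimal sketch with util-linux's zramctl (the 1GB size follows the guideline above; the zstd algorithm choice is my assumption):

```shell
# Load the zram module and grab a free device sized for compressed swap
modprobe zram
DEV=$(zramctl --find --size 1G --algorithm zstd)   # prints e.g. /dev/zram0

# Format it as swap and activate it with a higher priority than disk swap
mkswap "$DEV"
swapon --priority 100 "$DEV"
swapon --show
```

This doesn't persist across reboots on its own; a small systemd unit or the zram-tools package handles that part.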

Keep an eye on wearout: every month or so, check Nodes / (nodename) / Disks in the PVE GUI. If wearout gets above ~80% you'll want to replace the NVMe with a 512GB-or-larger drive that has a high TBW rating.
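The same wear counter can be read from the CLI too; with smartmontools installed, the NVMe health log reports it as "Percentage Used" (a sketch; the device name is an example, yours may differ):

```shell
# NVMe wear indicator -- counts up toward 100% of the drive's rated endurance
smartctl -a /dev/nvme0 | grep -i 'percentage used'

# Total data written so far, handy for comparing against the drive's TBW rating
smartctl -a /dev/nvme0 | grep -i 'data units written'
```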

By following the practices recommended above, the cheapie little 256GB NVMe that shipped with my Qotom firewall appliance is at 2% wearout after running ~24/7 since Feb. It's ext4/LVM and I have most of my VMs on other media tho, and use the lvm-thin on the 256 sparingly.
 
Thank you, @Kingneutron.
Some additional questions:
What is the purpose of corosync and pve-ha-*?
Can they be disabled without worries in every case, or should they run if a specific service/appliance needs them?

How does the wear of this ZFS RAID compare to a non-RAID setup?
If all VMs and containers are running on the 2TB ZFS RAID, the wear might be higher nonetheless?
 
You only need those 3 services running if you're doing a cluster.

Personally I don't do ZFS raidzX on SSD, so I can't say; if it's on spinners then wear shouldn't be an issue unless you're getting write amplification from doing something crazy like ZFS-on-ZFS (use lvm-thin for that)
 
A ZFS RAID1 mirror is a very common choice for Proxmox OS drives. As for actual wear of SSD/NVMe, RAID does not really cause any more wear than non-RAID. If anything, wear might be lower on a wider RAID such as RAIDZ2/RAIDZ3, because bits of data are scattered across multiple drives instead of 1 or 2 drives doing all the writing.

A mirror setup shields you from mass data loss unless you are very, very unlucky and both drives fail at about the same time. A very rare case.
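And if one side of the mirror does die, recovery is only a couple of commands; roughly like this, with the pool and device paths as placeholders:

```shell
# See which device faulted and degraded the pool
zpool status rpool

# Swap in the new NVMe and let ZFS resilver (device paths are examples)
zpool replace rpool /dev/disk/by-id/nvme-OLD /dev/disk/by-id/nvme-NEW

# Watch the resilver progress until the pool is ONLINE again
zpool status -v rpool
```

One caveat: if the failed disk is part of the PVE boot mirror, the official docs also have you replicate the partition layout and re-initialize the bootloader (proxmox-boot-tool) before the replace, so check the admin guide first.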
Cheap NVMe in a mirror works great, especially when you use zram, as @Kingneutron mentioned, to store all your logs, i.e. the /var/log directory. You can use zram to create 2 devices in RAM, one for the log directory and one for swap space. This alone eliminates ~80% of the writes that occur on Proxmox nodes, saving your NVMe/SSD from excess wear.

The one caveat is that zram space is volatile: when you reboot a node, all its data is reset, meaning you lose your logs. Set up a log server such as Wazuh or Graylog to centrally collect all your logs so a reboot won't matter any more.
Reading is not what causes wear on an SSD; it is the writes. So anything you can do to reduce writes helps reduce wear. Cheap SSDs such as the Lexar NS100 or T-Force Vulcan as mirrored Proxmox OS drives provide a heck of a lot more bang for the buck than one might think when zram is leveraged.
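The two in-RAM devices described above could look roughly like this (sizes, the ext4 choice, and the zstd algorithm are my assumptions; remember everything on them vanishes at reboot):

```shell
# Device 1: compressed RAM-backed filesystem for logs
LOGDEV=$(zramctl --find --size 256M --algorithm zstd)
mkfs.ext4 -q "$LOGDEV"
mount "$LOGDEV" /var/log

# Device 2: compressed RAM-backed swap
SWAPDEV=$(zramctl --find --size 1G --algorithm zstd)
mkswap "$SWAPDEV" && swapon --priority 100 "$SWAPDEV"
```

To make this survive reboots you'd put it in a small systemd unit (or use a packaged tool like zram-tools) and ship logs off-box as described above.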
 
