So what for low-grade SSDs?

ororokorebuh

New Member
Nov 16, 2024
Hello, I have a small server with 192GB RAM and an NVMe drive where Proxmox is installed.

I also have 4 SSDs on SATA ports. Two are 512GB and the other two are 1TB. I would like to pool them together like ZFS (RAIDZ) does, but as far as I know, ZFS will wear out low-grade SSDs quite fast. So I would like to use LVM-Thin (to be able to take snapshots), but it seems I can't join those disks or create something like RAID or mirroring between them...

My goal is to run Windows Server in a VM. This server will be migrated from a physical machine which currently has two RAID arrays, so effectively two physical disks. I will create a VHDX and convert it to QCOW2. So I think I will need either one big storage pool from all the disks, or 2+2.

Could someone please point me to the best way to go?

Thank you.
 
With different-sized disks, you could have two ZFS pools: one mirror for the 512GB drives and one mirror for the 1TB drives.

Or you could make it one big "RAID10-equivalent" pool with mirror 2x512GB + mirror 2x1TB, but your I/O won't be balanced.
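To sketch what those two layouts look like as zpool commands (the pool names and the /dev/disk/by-id paths below are placeholders, not your actual devices):

```shell
# Option 1: two separate mirror pools
zpool create tank512 mirror /dev/disk/by-id/ata-SSD512_A /dev/disk/by-id/ata-SSD512_B
zpool create tank1t  mirror /dev/disk/by-id/ata-SSD1T_A  /dev/disk/by-id/ata-SSD1T_B

# Option 2: one "RAID10-equivalent" pool striped across two mirror vdevs
zpool create tank \
  mirror /dev/disk/by-id/ata-SSD512_A /dev/disk/by-id/ata-SSD512_B \
  mirror /dev/disk/by-id/ata-SSD1T_A  /dev/disk/by-id/ata-SSD1T_B
```

Using /dev/disk/by-id paths instead of /dev/sdX keeps the pool stable if the kernel reorders the SATA devices on reboot.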

ZFS wearout is mostly a concern for Proxmox boot/OS SSDs.
 
So you really think ZFS is not a problem for the ADATA SU650 and Patriot P220 if I use them for VMs?
I could also have RAIDZ (for all SSDs together) with ZFS, right? Is that a better or worse option than your recommended mirrors?
 
You will not utilize all available space if you try to raidz different-sized disks together. Stick to mirrors unless you have 6-8x same-sized disks for a raidz2, and even then mirror is better for interactive VM performance.

As long as you set 'atime=off' on the top level and all datasets, keep an eye on the wear indicator in the GUI, and have a spare drive waiting in case one fails, you should be OK.
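For reference, the atime tweak is a single command on the pool root; since datasets inherit the property, setting it at the top level normally covers everything unless a dataset overrides it (the pool name 'tank' is a placeholder):

```shell
zfs set atime=off tank    # disable access-time writes pool-wide
zfs get -r atime tank     # verify it inherited down to all datasets
```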
 
I run three Proxmox servers, all with consumer SSDs. Some are NVMe, some are 2.5 inch SATA drives. All of them except one are Team Group drives from Newegg. I have ZFS mirrors on two machines and a single ZFS drive on the third machine. I disable corosync, pve-ha-crm, and pve-ha-lrm since I am not running a cluster. My mirrored drives are at 0% wearout after 18 months of use. My N100 box, which has a Sabrent Rocket in a single-drive configuration, is at 4% wearout. I have the OS and the VM storage on these drives. I keep all of my data on a separate NAS. I think the idea that ZFS will eat your consumer drives is somewhat overblown, at least based on my setup and experience.
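If you want to check wearout outside the Proxmox GUI, smartmontools reports roughly the same figure (the device paths are examples, and the exact SATA attribute name varies by vendor):

```shell
apt install smartmontools

# SATA SSD: look for a wear-related attribute in the table,
# e.g. Wear_Leveling_Count or Percent_Lifetime_Remain
smartctl -A /dev/sda

# NVMe: "Percentage Used" in the SMART / health information log
smartctl -a /dev/nvme0
```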
 
"I keep all of my data on a separate NAS" - do you mean you run your VMs from the NAS, or do you do backups there?
Disabling atime, corosync, pve-ha-crm, and pve-ha-lrm - is this all done via the CLI, or somewhere in the GUI?
 
My VMs sit on the local machines on the ZFS mirror. I run two NAS devices, a Synology and a virtualized instance of TrueNAS. I use them for ISO storage, VM and CT backups, NFS shares into my various VMs (such as NextCloud) for user data storage, and for docker volumes. So I could in theory lose any or all of my Proxmox nodes and not lose any of my data. All of my data sits on the NAS with a second copy backed up on site, and a third copy out on AWS Glacier. My virtualized instance of TrueNAS has the PCIe SATA controller passed through to it and has full control of the disks it uses.

Yes, I disable those services from the command line:

sudo systemctl stop pve-ha-crm pve-ha-lrm corosync
sudo systemctl disable pve-ha-crm pve-ha-lrm corosync
 
So you really think ZFS is not a problem for the ADATA SU650 and Patriot P220 if I use them for VMs?
Besides the wearout from the PVE OS install (already discussed, and not a problem here), you could run into low write performance, and without PLP (power-loss protection) you could even lose the whole pool on a power outage. Others have reported this; I have never experienced it myself. Having good backups is always a good idea.
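On the backups point, a manual snapshot-mode backup of one guest with vzdump looks like this (the VMID 100 and the storage name 'local' are examples for your setup):

```shell
vzdump 100 --mode snapshot --storage local --compress zstd
```

Scheduled backups of the same kind can be configured in the GUI under Datacenter > Backup.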
 