First Setup: Does this disk configuration look good?

TestInProd

New Member
Feb 15, 2024
I'm about to set up my first Proxmox server and I'm looking for feedback on the disk configuration. I'm IT savvy, but this is my first deep dive into filesystems and my first time using ZFS.

My PVE will be a single node with no plans for an additional one. The server will be on a UPS. VMs and the Proxmox config will be manually backed up to an external drive, though I may consider a PBS down the road (I'd probably start with a PBS VM in PVE, then eventually move to a dedicated server).
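
For reference, here's roughly how I picture the manual backups working (the VM ID and mount path are just placeholders):

# Dump a guest to the mounted external drive (snapshot mode, zstd compression)
vzdump 100 --dumpdir /mnt/external-backup --mode snapshot --compress zstd
# Grab the host config as well
tar czf /mnt/external-backup/pve-config-$(date +%F).tar.gz /etc/pve /etc/network/interfaces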

  • 1x WD SN700 500GB
    • OS/Host drive
    • ext4
    • Motherboard NVMe
  • 2x WD SN700 1TB
    • VM/LXC drives
    • Mirrored LVM-Thin
    • Motherboard NVMe
  • 4x WD Gold 8TB SATA HDDs
    • Storage drives for TrueNAS VM passed via HBA
    • RAIDZ2
    • LSI HBA
 
Just curious if anyone sees any issues or has recommendations on this setup before I slap it together.
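
In case it helps clarify the plan, this is roughly what I have in mind for the HBA passthrough and the pool layout (the PCI address, VM ID, and pool/disk names are made up):

# Pass the LSI HBA through to the TrueNAS VM
qm set 101 -hostpci0 0000:03:00.0
# Inside TrueNAS, the four 8TB Golds become one RAIDZ2 vdev; from a shell it would look like:
zpool create -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde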
 
Those SSDs aren't a great choice. For server workloads and ZFS it's highly recommended to use enterprise-grade SSDs with PLP, or wear and performance will be terrible once you do sync writes.

And how do you want to mirror LVM-Thin? Onboard pseudo-HW RAID?

And the whole point of RAID is to avoid downtime/additional work when a disk fails, so an unmirrored system disk as a single point of failure isn't great either.
 
Those SSDs aren't a great choice. For server workloads and ZFS it's highly recommended to use enterprise-grade SSDs with PLP, or wear and performance will be terrible once you do sync writes.
Yeah, I've been wrestling with that. I haven't found many enterprise SSDs that I can buy through Amazon. The WD one that I picked out is a NAS version and has "Endurance of up to 5100TBW". It's also cheap, so I assumed that was a good middle ground.

It doesn't have PLP, though, which I wish it did. To help compensate, the server is plugged into a beefy UPS (APC Smart-UPS 3000) and I plan to configure NUT.
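
As a rough sketch of the NUT side, this is what I have in mind (the [apc] section name and the password are placeholders, and the driver/port depend on how the UPS is connected):

# /etc/nut/nut.conf
MODE=standalone

# /etc/nut/ups.conf
[apc]
    driver = usbhid-ups
    port = auto
    desc = "APC Smart-UPS 3000"

# /etc/nut/upsmon.conf (needs a matching user in upsd.users; older NUT uses "master" instead of "primary")
MONITOR apc@localhost 1 upsmon <password> primary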

The other thing I was planning to do to help with SSD wear was to not use ZFS on the SSDs. I was planning on something more traditional like ext4, but is this a bad idea? Does LVM-Thin still cause as much wear as ZFS?
 
It doesn't have PLP, though, which I wish it did. To help compensate, the server is plugged into a beefy UPS (APC Smart-UPS 3000) and I plan to configure NUT.
But your UPS won't help with the performance/wear of sync writes. Without built-in PLP, the SSD simply can't cache sync writes in its DRAM, no matter whether the server is running on a UPS or not. So when doing sync writes it performs like a DRAM-less SSD (and those have terrible wear and performance, if you've ever used one ;)). So I would try to avoid sync writes. Async writes should be fine.
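
If you do end up putting ZFS on them, you can at least see how each dataset handles sync writes (the pool/dataset names here are just examples); disabling sync trades crash safety for wear/performance, so be careful with it:

# Show the sync setting for every dataset in the pool
zfs get -r sync rpool
# Example: relax sync only for disposable data (risk of data loss on power cut!)
zfs set sync=disabled rpool/data/scratch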
The other thing I was planning to do to help with SSD wear was to not use ZFS on the SSDs. I was planning on something more traditional like ext4, but is this a bad idea? Does LVM-Thin still cause as much wear as ZFS?
LVM-Thin causes less wear. But PVE only officially supports ZFS or btrfs if you want software RAID. If you want mirrored LVM-Thin you would need to put some hardware RAID underneath it, or be experimental and set up an mdadm RAID 1 or an LVM mirror manually outside of PVE via the CLI.
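
If you really want to go the experimental mdadm route, the rough shape would be something like this (device names and the VG/storage names are only examples, not a recommendation):

# Build a RAID 1 from the two NVMe drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
# Put a thin pool on top and register it with PVE (leave headroom for pool metadata)
pvcreate /dev/md0
vgcreate vmdata /dev/md0
lvcreate -L 900G --thinpool data vmdata
pvesm add lvmthin vm-thin --vgname vmdata --thinpool data

Keep in mind PVE won't monitor the md array for you, so you'd have to keep an eye on /proc/mdstat (or set up mdadm monitoring) yourself.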
 
But your UPS won't help with the performance/wear of sync writes. Without built-in PLP, the SSD simply can't cache sync writes in its DRAM, no matter whether the server is running on a UPS or not. So when doing sync writes it performs like a DRAM-less SSD (and those have terrible wear and performance, if you've ever used one ;)). So I would try to avoid sync writes. Async writes should be fine.

LVM-Thin causes less wear. But PVE only officially supports ZFS or btrfs if you want software RAID. If you want mirrored LVM-Thin you would need to put some hardware RAID underneath it, or be experimental and set up an mdadm RAID 1 or an LVM mirror manually outside of PVE via the CLI.
Well... damn. I'll look at getting something enterprise-grade with PLP then. Happy to take recommendations for 1TB NVMe drives if anyone has them.

In the meantime, the motherboard firmware supports RAID across two NVMe drives, configurable in the UEFI, so I was planning to try that. I'll tinker with it and see if it works until I'm able to get some better SSDs, then switch over to ZFS mirroring in PVE.
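
For when I do switch, I'm assuming the ZFS mirror in PVE boils down to something like this (pool/storage names and device paths are placeholders, and I believe the GUI under Disks > ZFS can do the same):

# Mirror the two NVMe drives and register the pool as VM/LXC storage
zpool create -o ashift=12 nvme-mirror mirror /dev/nvme1n1 /dev/nvme2n1
pvesm add zfspool vm-zfs --pool nvme-mirror --content images,rootdir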

Thanks for the info!
 