Ideas for storage layout

siebo

Member
Jun 3, 2020
Hello all,

I am currently running a homelab server with ESXi, and because this 8-year-old environment is due for an upgrade, I thought about Proxmox when it came to the question of what platform the new server should run on. Why do I even want to migrate away from ESXi? Because I am disappointed with a lot of small things, and in addition, the upcoming standard version 7+ may drop support for my homelab-grade hardware whenever VMware wants to. All in all, after some evaluation, I thought I would give Proxmox a try.

The hardware I will use is the following - as you can see, not exactly professional datacenter style ;-):
  • Mainboard ASRock Z390M-ITX/ac, socket 1151v2/H4
  • CPU Intel Core i7-8700T, 2.4GHz, 6C, 12T, 35W TDP
  • RAM 32 GB Crucial Ballistix 3200 MHz, DDR4 (downsides of non-ECC known)
When it comes to storage, I would like to rely mostly on existing drives for obvious reasons:
  • SSD1 fastest, 500GB Western Digital WD_BLACK SN750 NVMe SSD
  • SSD2 faster, 256GB Samsung SSD 850 Pro (existing device)
  • SSD3 fast, 480GB SanDisk Ultra II SSD (existing device)
  • HDD1-4 slow, 4TB Western Digital WD40EZRX (green line with wdidle3-"hack", existing devices)
Now I am having trouble figuring out which storage layout to use, given the available drives and the possibilities of Proxmox. My first thought was simply "ok, something with ZFS", but the deeper I looked into it, the blurrier the clear picture became. I have the following requirements:
  • Virtual disks ~200GB, thick-provisioned, ~50% actually used
  • Fileshare ~9TB CIFS shares, 400GB of it more important than the rest (currently shared via separate QNAP NAS, to be migrated to Proxmox VM)
  • Redundancy I want to have single redundancy (mirror/raidz1) on all data where possible to keep RPO low
  • Backup I want to use snapshots for backup to keep RTO low (separate disaster offsite backup to a cloud location handled via rclone; see the sketch after this list)
  • Encryption I want to use transparent ZFS encryption on all pools where possible
  • Storage consumption I want to use data compression and/or deduplication where it makes sense
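To make the RTO point concrete, this is roughly the snapshot-based restore workflow I have in mind (dataset and snapshot names are just examples):

Code:
    # Take a snapshot before risky changes (cheap and instant with ZFS)
    zfs snapshot datapool/cifs@2020-06-24_pre-change

    # If something goes wrong, roll the dataset back in seconds
    zfs rollback datapool/cifs@2020-06-24_pre-change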
My first and current idea was/is the following:
  • 256GB/half of SSD1 + full SSD2 as 256GB mirror pool for virtual (OS) disks incl. Proxmox installation
    • The idea behind this is to have a lot of speed (NVMe SSD) plus redundancy (mirror)
    • I don't know whether the slower drive (SSD2) will drag down the performance of the pool
    • Maybe usage of single drive (only SSD1) is better but then there is no redundancy
  • Full HDD1-4 as 12TB raidz1 pool for fileshare, maybe virtual data disks, ...
    • Large pool with redundancy, slow hard drives
    • Could possibly be sped up by adding ZIL/L2ARC on the free SSD drives (I have not yet read much into this topic)?
  • 256GB/other half of SSD1 and full SSD3
    • Currently no idea how this should be used
    • Especially the fast performance of SSD1 could improve the HDD pool performance (ZIL/L2ARC)?
    • Also possible: disks for test systems where redundancy does not matter (rest of SSD1, maybe mirrored with 256GB of SSD3)?
    • Or I just remove SSD3 if there is no use for it anymore...
However, I don't know whether creating pools from partitions (instead of whole disks) conforms to best practices, not least because the Proxmox installer doesn't allow choosing such a setup during installation.
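For what it's worth, this is the kind of command sketch I have in mind for these pools (pool names and the /dev/disk/by-id paths are placeholders, and SSD1 would have to be partitioned into two halves beforehand):

Code:
    # Mirror pool for VM/OS disks: one half of SSD1 plus the whole SSD2
    zpool create -o ashift=12 ssdpool mirror \
        /dev/disk/by-id/nvme-WD_BLACK_SN750_XXXX-part2 \
        /dev/disk/by-id/ata-Samsung_SSD_850_PRO_XXXX

    # RAIDZ1 pool across the four 4TB drives for the fileshare (~12TB net)
    zpool create -o ashift=12 datapool raidz1 \
        /dev/disk/by-id/ata-WDC_WD40EZRX-AAAA \
        /dev/disk/by-id/ata-WDC_WD40EZRX-BBBB \
        /dev/disk/by-id/ata-WDC_WD40EZRX-CCCC \
        /dev/disk/by-id/ata-WDC_WD40EZRX-DDDD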

This is my current situation, and I would be happy to get some feedback on the general idea of using disk partitions for ZFS pools, and maybe also THE idea of how to use the existing drives to get the most use out of them.

Thank you in advance.

Siebo
 
Just a few hints from my side:

HDD RaidZ1: If you can live with the loss of net space, I would opt for a RAID10-like setup with two mirrored vdevs in that pool. You will get better IO and won't run into possible issues when storing VM disks on it (see https://forum.proxmox.com/threads/zfs-shows-42-22tib-available-but-only-gives-the-vm-25.69874/ )
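As a rough sketch, such a pool could be created like this (pool name and device paths are placeholders; with 4x 4TB you end up with ~8TB net):

Code:
    # "RAID10": one pool striped across two mirror vdevs
    zpool create -o ashift=12 datapool \
        mirror /dev/disk/by-id/ata-WDC_WD40EZRX-AAAA /dev/disk/by-id/ata-WDC_WD40EZRX-BBBB \
        mirror /dev/disk/by-id/ata-WDC_WD40EZRX-CCCC /dev/disk/by-id/ata-WDC_WD40EZRX-DDDD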

Splitting SSDs: It's technically possible to split the disks into partitions and use them for different pools. ZFS only wants a block device; it doesn't care whether it is a file, a partition, or a full disk.
I would keep in mind, though, that you might be creating dependencies which are more complicated than they first appear. If you plan to install Proxmox via our installer, be aware that it will create 3 partitions on the boot drives if ZFS is used. If you use different-sized disks, the smaller size is used IIRC, meaning that the bigger SSD should have free space left over that can be used.
But in general I would opt for setups that are simple and straightforward.
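If you later want to use that leftover space, something along these lines should work (partition number 4 assumes the installer created three partitions; verify with lsblk first):

Code:
    # Inspect the partition layout the installer created
    lsblk /dev/nvme0n1

    # Add one more partition filling the remaining free space
    sgdisk --new=4:0:0 /dev/nvme0n1
    partprobe /dev/nvme0n1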


Storage consumption I want to use data compression and/or deduplication where it makes sense
Compression is great if the data can be easily compressed. Save yourself the pain and don't dabble with dedup! You don't have enough RAM, and I know of very few people for whom it runs with good performance and actually saves a lot of space.
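Enabling compression is a one-liner, and you can check afterwards how much it actually saves (rpool is the pool name our installer uses):

Code:
    # lz4 is cheap on CPU; child datasets inherit the property
    zfs set compression=lz4 rpool

    # Check the achieved ratio later
    zfs get compressratio rpool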
 
After some experimenting and consideration I came up with the following "final" layout:

[Attached screenshot: overview of the final storage layout]

I tried to keep it simple while still re-using most of the hardware I had and fulfilling most of the needs explained above.
The thoughts are as follows:
  • 480GB SSD1 only used for Proxmox.
  • In case of reinstallation, no fiddling with existing partitions/datasets/...
  • Currently I also use it for backups, templates, and ISOs, but only because there is spare space.
    A drive failure/data loss would "suck" but would also be acceptable in my home environment.
    Moving these to the more redundant data pool is an option in case of need.

  • 500GB+256GB SSD2+3 partitioned, but not affected during reinstallation.
  • 256GB mirrored for highly available vmfs space. Daily backup to the backup dataset.
  • Most of the rest single/striped for not-so-important disks. Daily backup to the backup dataset.
  • 9GB used as ZIL (separate log device) for the data pool (see the command sketch after this list).

  • 16TB (effective RAIDZ1) HDD1-3 data pool, not affected during reinstallation.
  • Mostly used for (encrypted) CIFS storage, optionally also for backups, templates, and ISOs.
  • Frequent ZFS snapshots for use as volume shadow copies via Samba.
  • Daily offsite backup of the important data via rclone.
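A rough sketch of the commands behind this layout (the dataset names match my setup, the device path and the rclone remote are placeholders):

Code:
    # Native ZFS encryption only on the CIFS dataset
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase datapool/cifs_enc

    # Attach the 9GB SSD partition as separate log device for the HDD pool
    # (note: a SLOG only accelerates synchronous writes)
    zpool add datapool log /dev/disk/by-id/nvme-WD_BLACK_SN750_XXXX-part5

    # Daily offsite copy of the important data (remote name is a placeholder)
    rclone sync /datapool/cifs_enc/important remote:backup/important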
I currently share the CIFS shares via an LXC which bind-mounts directories of the cifs_enc/cifs datasets.
ZFS snapshots are handled by the hypervisor itself (zfs-auto-snapshot); the shares and the rclone offsite backup are handled within the LXC.
This way the hypervisor is kept clean of any configuration/users/software beyond the creation of ZFS snapshots.
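For reference, the bind mount is configured on the host roughly like this (container ID 101 and the paths are examples):

Code:
    # On the Proxmox host: bind-mount the dataset directory into the LXC
    pct set 101 -mp0 /datapool/cifs_enc/share,mp=/srv/share

And inside the container, Samba exposes the snapshots as "Previous Versions" via vfs_shadow_copy2; a minimal share section could look like this (the shadow:format string has to match the names zfs-auto-snapshot actually creates):

Code:
    [share]
        path = /srv/share
        vfs objects = shadow_copy2
        shadow:snapdir = .zfs/snapshot
        shadow:format = zfs-auto-snap_daily-%Y-%m-%d-%H%M
        shadow:sort = desc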

I opted against using deduplication for the reasons stated in this thread and multiple other times in the forum.
I am using compression on all pools/datasets and ZFS based encryption only for the cifs_enc dataset.
I opted against encryption for the vmfs* datasets for performance reasons and because the system drives should normally contain no sensitive data.

Currently I have the feeling that the performance could be better (also subjectively, in comparison to ESXi), but I need to dig into this at a later stage. For now it "runs ok", and I have to do some other stuff before I have time to fiddle around with the homelab again ;-).
 
