PVE 6 ZFS, SLOG, ARC, L2ARC - Disk Configuration

Splise

Hi all,

I am relatively new to Proxmox, so please be gentle :). I have quite a bit of experience with VMware, so the general concepts are not foreign to me. I am currently building a "new" lab workstation and would like some advice specific to the disk configuration. I have provided the general build details below. Essentially, I am looking for the best disk configuration given the disks I currently have available and their intended application. If the recommendation is to get rid of a disk or two, or buy another disk or two, that's not out of scope. However, I would like to use what I have now if possible.

The lab server will be used for just about anything, including hosting multiple Windows and Linux VMs and virtual firewalls. My intent is to build a mini enterprise infrastructure on the system, to include DNS, Active Directory, mail server, miners, web hosting, a file server, virtual firewalls, NextCloud, etc. The workstation will be connected to a 10Gb network segment. I will be using an external NAS for backup purposes. And yes, I realize that building a hyper-converged solution is not redundant.

Current System Build:
Z10PE-D16 WS
2 x Intel Xeon E5-2695 V3
8 x 32GB RAM (256GB DDR4 Reg ECC 2400)
2 x Samsung 860 Evo (500GB) 2.5" SSD
2 x Samsung 970 Evo (500GB) M.2-2280 NVMe SSD
6 x Seagate IronWolf NAS (6TB) 3.5" 7200RPM HDD
1 x Intel Optane 900P (280GB) SSD

I would like to get the absolute best performance out of the hardware available, while still retaining good data integrity.

Below are some initial thoughts on how the disks could be utilized, fully knowing that it's probably not correct (a rough sketch in commands follows the list).
ARC (128GB) = 8 x 32GB RAM (256GB DDR4 Reg ECC 2400)
RAID-1 VDEV "Root" (500GB) = 2 x Samsung 860 Evo (500GB) 2.5" SSD
RAID-1 VDEV "L2ARC" (500GB) = 2 x Samsung 970 Evo (500GB) M.2-2280 NVMe SSD
RAIDZ2 "Data" = 6 x Seagate IronWolf NAS (6TB) 3.5" 7200RPM HDD
SLOG = 1 x Intel Optane 900P (280GB) SSD
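
For illustration, here is roughly how that layout might look as commands. This is only a sketch: the device names are placeholders (a real build should use /dev/disk/by-id paths), and note that L2ARC cache devices cannot actually be mirrored in ZFS - they are added individually, which is fine because losing a cache device is harmless.

# Cap ARC at 128GiB: put this line in /etc/modprobe.d/zfs.conf (value in bytes)
options zfs zfs_arc_max=137438953472

# Root pool: normally created by the PVE installer as a mirror of the two 860 EVOs

# Data pool: RAIDZ2 across the six IronWolf HDDs (placeholder disk names)
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf

# SLOG on the Optane 900P (only accelerates sync writes)
zpool add tank log nvme0n1

# L2ARC on the two 970 EVOs (striped, not mirrored)
zpool add tank cache nvme1n1 nvme2n1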


Please let me know if additional detail is required.
Thanks in advance!
 
Hi,

Maybe you missed telling us whether your goal is to learn about (enterprise infrastructure) or not!? In any case, "the best" setup does not exist if you do not define what "best" means from your point of view.

If I were in your place, I would not be so concerned about ... what is "the best". I would instead try to learn how to do basic things like AD, zfs storage, proxmox and so on. And after you are comfortable with the basics, I would try to make an optimum setup (and not the best setup).

Anyway, your hardware is awesome, for me at least - how lucky you are!


Good luck / Bafta
 
Hi,

Thanks for the reply. I will try to be a little more succinct. I am just looking for a couple of opinions on disk configuration, not a "fully optimized disk config based on your requirements". However, that is the long-term goal. If I start with a particular design, I want it to be somewhat close so that I can fail forward, if possible. If it doesn't work as I thought it would, I will revise it, or start over. Most of the new environment doesn't exist yet, so I can only provide specifics on what I am using now and speculate on the rest.

That said, isn't this how I would become comfortable with Proxmox and ZFS? By asking questions and working through the design, install, and configuration? That's always been my strategy.

Just to add context: I have been working in IT for 20 years at a senior level. I am well versed in networking, security, storage, and MS AD. I am just looking for optimization tips to create a baseline I can work from; I am not looking to boil the ocean.

Thanks.
 
That said, isn't this how I would become comfortable with Proxmox and ZFS? By asking questions and working through the design, install, and configuration? That's always been my strategy.

PVE6 has ZFS 0.8 so you can experiment with the new ZFS allocation classes and get an even faster system out of it by e.g. moving metadata to SSD.
In general, I would not recommend using EVO with ZFS, because they wear out very fast, so keep an eye on the wearout while testing.
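
For example, with the allocation classes in ZFS 0.8 a "special" vdev can take over the pool's metadata. A sketch, assuming a pool named tank and two spare SSDs or SSD partitions (placeholder names) - and note the special vdev becomes pool-critical, so it must be mirrored:

# Add a mirrored special allocation class vdev for metadata
zpool add tank special mirror ssd1 ssd2

# Optionally also store small file blocks (here <= 32K) on the special vdev
zfs set special_small_blocks=32K tank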
 
PVE6 has ZFS 0.8 so you can experiment with the new ZFS allocation classes and get an even faster system out of it by e.g. moving metadata to SSD.
In general, I would not recommend using EVO with ZFS, because they wear out very fast, so keep an eye on the wearout while testing.
Thanks for the info, much appreciated. I was concerned the EVOs may not be that useful, but have read conflicting results. I was intending to over-provision them for wear-leveling purposes to get more out of them. I've also read that noatime can be used to decrease wear.
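
For reference, the noatime part is a one-liner (assuming the PVE default pool name rpool):

# Stop recording file access times; inherited by all child datasets
zfs set atime=off rpool

and one common way to over-provision a consumer SSD is simply to leave part of it unpartitioned after a secure erase, e.g. with sgdisk on a hypothetical 500GB drive:

# Create a single 400GiB partition, leaving ~100GB unallocated for wear-leveling
sgdisk -n 1:0:+400G /dev/sdX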
 
Thanks for the info, much appreciated. I was concerned the EVOs may not be that useful, but have read conflicting results. I was intending to over-provision them for wear-leveling purposes to get more out of them. I've also read that noatime can be used to decrease wear.

Yes, that's all true, but the fact is, they wear out very, very fast in comparison to enterprise-grade SSDs. I've personally never seen an enterprise SSD that wore out. My EVO, however, lost over 50% in 2 months in a ZFS PVE laptop I was working with.
 
Hi @Splise,

If you want to use zfs, and not waste your time, I would start by reading the documentation from zfsonlinux.org and https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/. zfs is very different in many aspects, and the learning curve is not so easy, even for an experienced IT person.

If you search this forum's threads you will see that many users have performance problems with zfs.

Regarding your setup, because you have an external NAS (so backups will not be a problem), I would go with a RAID50-style pool (2 striped raidz1 vdevs), as sketched below. Also, because virtual KVM NICs cannot handle more than about 200k pps, I would not use any virtual firewalls (SYN flooding ...).
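Something like this (a sketch only, disk names are placeholders):

# "RAID50": two RAIDZ1 vdevs striped together, six disks total
zpool create -o ashift=12 tank \
    raidz1 sda sdb sdc \
    raidz1 sdd sde sdf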

Good luck / Bafta.
 
I appreciate all of the comments. I definitely see there are performance issues with, and premature wear of, consumer-grade SSDs (which makes sense). The overhead is worth it considering what ZFS is doing under the covers. However, it's also the reason I was wanting some additional detail surrounding the design, and getting the most out of the disks I have.

The VM firewalls are for labs/testing only. But that is a good point, and I will do some additional research.

Some of the information I have found so far appears to be a bit dated. There are a couple of new features in PVE6 that I thought may help address existing issues...

"ZFS 0.8.1 with native encryption and SSD TRIM support: the new features for ZFS include enhanced security and data protection thanks to the added support for native encryption with comfortable key-handling by integrating the encryption directly into the `zfs` utilities. Encryption is as flexible as volume creation. TRIM support is included. The sub command `zpool trim` notifies devices about unused sectors, thus TRIM can improve the usage of the resources and contribute to longer SSD life. Also checkpoints on pool level are available."

and

"Support for ZFS on UEFI and on NVMe devices in the ISO installer: the installer now supports ZFS root via UEFI, for example you can boot a ZFS mirror on NVMe SSDs. By using `systemd-boot` as bootloader instead of grub all pool-level features can be enabled on the root pool."

Does anyone have any feedback on the above, especially TRIM support?
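
For reference, the new TRIM surface in ZFS 0.8 looks like this (assuming the default rpool):

# Manually TRIM all vdevs in the pool, then check progress
zpool trim rpool
zpool status -t rpool

# Or enable continuous automatic TRIM as a pool property
zpool set autotrim=on rpool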
 
Basically I would always recommend zfs, except when it comes to raw speed... you need to understand what kind of IOPS your workload will produce and then decide if zfs will help you or fight against you.

In your case I guess it will be random IOPS with 50/50 read/write, and I also guess you will have sync I/O, otherwise your SLOG wouldn't do much.
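One quick way to see whether a workload actually benefits from a SLOG is to generate sync writes with fio (an illustration only; the file path is a placeholder):

# Random 4K synchronous writes - with a SLOG these land on the Optane first
fio --name=slogtest --filename=/tank/fio.test --size=1G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=1 --sync=1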

I would not recommend starting with L2ARC... I've explained why in some other posts already, I think.

I also wouldn't recommend RAIDZ2 due to the performance penalty you get - unless you're absolutely sure that you need the fault tolerance of 2 disks. Use RAIDZ1 or, best case, mirrored vdevs (sketch below).
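
For comparison (placeholder disk names again), striped mirrors out of the same six disks would look like this; random IOPS scale with the number of vdevs, so this gives roughly three times the random IOPS of a single RAIDZ vdev:

# "RAID10"-style: three mirrored vdevs striped together
zpool create -o ashift=12 tank \
    mirror sda sdb \
    mirror sdc sdd \
    mirror sde sdf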


other than this I think you're good to go...
 
