Here is how I do it on an EX2200:
ge-0/0/21 {
    description SP1-data;
    unit 0 {
        family ethernet-switching {
            interface-mode trunk;
            vlan {
                members [ Server-Vlan Host-Vlan Voice-Vlan Wifi-Vlan ];
            }
        }
    }
}
I had a similar problem with megaraid_sas: the ZFS (RAID1) boot disks couldn't be written to when the machine load was high. Once I'd shut down the VMs on it, the kernel upgrade or proxmox-boot-tool would run fine. This was on a Supermicro box.
What is the use case for SaunaFS in Proxmox: image storage like CephFS, or do you run VMs on it? Since you are using SATA drives, did you run some fio tests or similar to compare the performance to Ceph?
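For a comparison like that, a small fio job file along these lines could be run against both a SaunaFS mount and a Ceph-backed mount; the mount path, sizes, and job parameters below are just placeholder assumptions to adjust for your setup:

```ini
# Hypothetical fio job: 4k random writes with direct I/O.
# filename points at a file on the storage under test; adjust path and size.
[randwrite-4k]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=32
numjobs=4
size=2G
runtime=60
time_based
group_reporting
filename=/mnt/saunafs/fio.test
```

Running the same job file on each backend (fio randwrite-4k.fio) gives directly comparable IOPS and latency numbers.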
If you are using hardware RAID, you would typically format it as LVM-thin (I usually do), and then create the Splunk volume on top of that. But why RAID5 instead of RAID10? You get much more IOPS out of RAID10.