Hello everyone,
I’m currently working on building a hybrid virtualization environment using Proxmox for internal tooling and Hyper-V for production workloads. I’d love to get the community’s advice on ZFS best practices, disk setup, quorum, and PBS architecture, especially given the hardware I already own.
I manage IT for a mid-sized retail business with multiple remote shops and a central HQ. At HQ, I currently have:
- 4 × HPE DL380 Gen10 servers (each with 2× Xeon Gold 6138, 64–128 GB RAM, 8-bay 2.5" backplane)
- Each came with 4 × 2.4 TB 10K SAS HDDs
- I also have a few Dell R610/R620 servers available
- Networking: planning for 10GbE via Intel X520-DA2 connected to a MikroTik CRS317 and later to a Cisco C9200L
- Hyper-V (on 2 × DL380s): for critical VMs – POS application servers, SAGE, AD/DNS, file servers.
  - I got cheap Windows Datacenter licenses, so this stack is safe and well-supported internally.
- Proxmox VE (on the remaining 2 × DL380s): for SOC & internal tooling
  - VMs include: Zabbix, GLPI, Wazuh, Velociraptor, Grafana, test containers.
  - Nothing critical — just value-added services.
Each Proxmox node will have two small SSDs as a mirrored boot pair. I’m hesitating between the following:
- 2 × Micron 5300 PRO 480 GB SATA (1 DWPD) – preferred for endurance and cost
- 2 × HPE 480 GB SAS RI (P04516-B21) – SAS dual-port, more expensive
- 2 × HPE SATA Read Intensive (P18424-B21) – fallback OEM option (not preferred)
I don’t care about OEM compliance — I just want long-lasting, reliable disks.
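Since endurance is the main criterion, I’d also plan to track wear once the disks are in service. A minimal check, assuming smartmontools is installed and /dev/sda is one of the SSDs (both are placeholders for the real devices):

```
# Overall health plus vendor wear/endurance attributes:
smartctl -a /dev/sda | grep -iE 'wear|percent|health'
```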

- Planning to use an LSI 9300-8i (IT mode) HBA for disk passthrough; I want to avoid putting ZFS pools behind the SmartArray.
- Still unsure about disk type. I want to avoid NVMe unless really necessary, due to:
  - Backplane/PCIe complexity
  - Cost and compatibility concerns
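Once the HBA is in IT mode, I assume sanity-checking that the disks appear as plain block devices (and grabbing their stable by-id paths for ZFS) is just:

```
# Disks should show up directly, not as logical RAID volumes:
lsblk -o NAME,MODEL,SERIAL,SIZE

# Stable device paths to use when building pools:
ls -l /dev/disk/by-id/
```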
Options I’m considering:
If I go NVMe (U.2):
- 4 × Micron 7450 PRO 1.92 TB (U.2 NVMe, 1 DWPD)
- Layout: 2 mirrors (RAID10 ZFS) → ~3.8 TB usable

If I stay with SAS SSD (preferred):
- 4 × HPE 1.92 TB SAS RI (P07922-B21)
- Same layout: 2 × ZFS mirrors
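Either way, my understanding is that creating the striped-mirror pool would look roughly like this (pool name "tank", storage ID, and the by-id paths are placeholders, not my final names):

```
# Striped mirrors (ZFS "RAID10"): two 2-disk mirrors in one pool.
# ashift=12 assumes 4K-sector drives.
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
  mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4

# Commonly suggested baseline tuning for VM storage:
zfs set compression=lz4 atime=off tank

# Register it with Proxmox as VM storage:
pvesm add zfspool tank-vm --pool tank
```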

Questions:
- Between Micron NVMe vs HPE SAS SSD, what’s more reliable and manageable under Proxmox ZFS?
- Are Micron 7450/5300 known to work well on DL380 Gen10 + ZFS?
- Is ZFS on SAS SSDs a solid long-term solution in Proxmox?
- I plan to reuse 2–3 × 2.4 TB SAS HDDs per node for:
  - Logging / long-retention data
  - Passthrough to PBS or NAS VM (ZFS RAIDZ2 maybe); a passthrough sketch follows this list
  - Scratch disks
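For that passthrough idea, a minimal sketch of handing the whole HBA to a PBS/NAS VM (the VM ID 101 and the PCI address are placeholders; IOMMU has to be enabled in BIOS and on the kernel command line):

```
# Find the HBA's PCI address first:
lspci | grep -iE 'lsi|sas'

# Attach the whole controller to VM 101 (every disk on it follows):
qm set 101 -hostpci0 0000:03:00.0
```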
I’ll be clustering the 2 Proxmox nodes (the Hyper-V ones are separate).
I know a 2-node cluster needs a QDevice to keep quorum when one node is down; my rough understanding of the setup is sketched after the questions below.

Questions:
- What’s the best option? Raspberry Pi? Intel NUC?
- Where should the QDevice ideally be placed (separate VLAN? same switch?)?
- Can a VM on another host (e.g., Hyper-V) work as a QDevice?
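For reference, the setup as I understand it from the docs (the IP is a placeholder for wherever the QDevice ends up living):

```
# On the QDevice host (Pi, NUC, or VM):
apt install corosync-qnetd

# On each Proxmox node:
apt install corosync-qdevice

# Then register it once, from either cluster node:
pvecm qdevice setup 192.0.2.10
```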
I still haven’t decided where to place PBS.
Options I’m considering:
- As a VM on one of the Proxmox nodes
- As a standalone server on a spare R610 or R620

- I’m leaning toward PBS on separate hardware, to avoid a circular dependency where the backup server lives on the very host it may need to restore.
- But a VM is easier to manage and back up internally.
What’s safer and more practical? Any pros/cons for long-term setups?
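Whichever placement wins, my understanding is that attaching PBS to PVE afterwards is roughly this (storage ID, address, datastore name, and fingerprint are all placeholders; credentials get set per the docs):

```
# Register the PBS datastore as backup storage on the PVE side:
pvesm add pbs pbs-backup \
  --server 192.0.2.20 \
  --datastore main \
  --username root@pam \
  --fingerprint <PBS-cert-fingerprint>
```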
I’m aiming for a setup that is:
- Stable and clean (no Ceph, no shared SAN)
- ZFS-based, well-tuned for VM storage
- Low-maintenance and easy to recover
- Avoiding NVMe unless truly necessary
- Long-lifespan disks (enterprise SSDs, high DWPD)
- Simple 2-node clustering with proper quorum
- Which disks would you choose in my situation?
- Best practices for ZFS pool layout and tuning in a light workload setup?
- Suggestions on boot disk strategy and HBA passthrough reliability?
- Where/how to deploy PBS for long-term stability?
- Tips on configuring a quorum device in a hybrid Proxmox/Hyper-V network?
Thanks in advance! I’ve done a lot of reading, but would really appreciate feedback based on your real-world experience with DL380 Gen10, ZFS tuning, and Proxmox setups.