Proxmox Deployment – Advice Needed (DL380 Gen10, ZFS Pools, PBS, Disk Strategy, Quorum, No Ceph)

Jolof_IT

New Member
May 23, 2025
Hello everyone,
I’m currently working on building a hybrid virtualization environment using Proxmox for internal tooling and Hyper-V for production workloads. I’d love to get the community’s advice on ZFS best practices, disk setup, quorum, and PBS architecture, especially given the hardware I already own.

I manage IT for a mid-sized retail business with multiple remote shops and a central HQ.


At HQ, I currently have:
  • 4 × HPE DL380 Gen10 servers (each with 2× Xeon Gold 6138, 64–128 GB RAM, 8-bay 2.5" backplane)
  • Each came with 4 × 2.4 TB 10K SAS HDDs
  • I also have a few Dell R610/R620 servers available
  • Networking: planning for 10GbE via Intel X520-DA2 connected to a MikroTik CRS317 and later to a Cisco C9200L

The planned split between the two stacks:
  • Hyper-V (on 2x DL380s): for critical VMs – POS application servers, SAGE, AD/DNS, file servers.
    • I got cheap Windows Datacenter licenses so this stack is safe and well-supported internally.
  • Proxmox VE (on the remaining 2x DL380s): for SOC & internal tooling
    • VMs include: Zabbix, GLPI, Wazuh, Velociraptor, Grafana, test containers.
    • Nothing critical — just value-added services.


Each Proxmox node will have a dedicated pair of small SSDs for boot/OS. I'm hesitating between the following boot-disk options:


  • 2 × Micron 5300 PRO 480 GB SATA (1 DWPD) ✅ preferred for endurance and cost
  • 2 × HPE 480 GB SAS RI (P04516-B21) – SAS dual-port, more expensive
  • 2 × HPE SATA Read Intensive (P18424-B21) – If I must go NVMe (not preferred)

I don’t care about OEM compliance — I just want long-lasting, reliable disks.
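Whichever pair I end up with, my understanding is that I can keep an eye on the wear level from the node itself with smartmontools; a rough sketch, where the device names are just placeholders:

  apt install smartmontools
  # wear/endurance attributes are named differently on SATA vs SAS drives, so grep broadly
  smartctl -a /dev/sda | grep -iE 'wear|endurance|percent'
  smartctl -a /dev/sdb | grep -iE 'wear|endurance|percent'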


⚠️ I want to avoid putting ZFS pools behind the Smart Array controller.


  • Planning to use an LSI 9300-8i (IT mode) HBA for disk passthrough.
  • Still unsure about disk type — I want to avoid NVMe unless really necessary due to:
    • Backplane/PCIe complexity
    • Cost and compatibility concerns
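On the HBA point: assuming the 9300-8i is flashed to IT firmware, my plan is to sanity-check that the disks are presented raw before creating any pool (sas3flash is Broadcom's utility and isn't in the Debian repos; it has to be downloaded separately):

  sas3flash -list                    # should report IT firmware on the SAS3008 controller
  lsblk -o NAME,MODEL,SERIAL,TRAN    # disks should show their real model/serial, not a RAID logical volume
  ls -l /dev/disk/by-id/             # stable IDs to reference later when creating the pool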

Options I’m considering for the VM storage pool:


If I go NVMe (U.2):


  • 4 × Micron 7450 PRO 1.92 TB (U.2 NVMe, 1 DWPD)
    • Layout: 2 mirrors (RAID10 ZFS) → ~3.8 TB usable

⚖️ If I stay with SAS SSD (preferred):


  • 4 × HPE 1.92 TB SAS RI (P07922-B21)
    • Same layout: 2 × ZFS mirrors
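In either case the pool layout would be created the same way. A minimal sketch, assuming the four data SSDs show up under /dev/disk/by-id (pool name, storage ID and disk IDs below are placeholders):

  zpool create -o ashift=12 vmpool \
    mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
    mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
  zfs set compression=lz4 vmpool
  zfs set atime=off vmpool
  pvesm add zfspool vm-zfs --pool vmpool    # register the pool as VM storage in PVE
  zpool status vmpool                       # both mirror vdevs should be ONLINE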

❓Questions:


  • Between Micron NVMe vs HPE SAS SSD, what’s more reliable and manageable under Proxmox ZFS?
  • Are Micron 7450/5300 known to work well on DL380 Gen10 + ZFS?
  • Is ZFS on SAS SSDs a solid long-term solution in Proxmox?


  • I plan to reuse 2–3 x 2.4 TB SAS HDDs per node for:
    • Logging / long retention data
    • Passthrough to PBS or NAS VM (ZFS RAIDZ2 maybe)
    • Scratch disks
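For the PBS/NAS VM idea, as far as I understand the usual approach (short of passing through the whole HBA via PCIe) is to attach individual HDDs to the guest by their /dev/disk/by-id path; the VMID and disk paths below are placeholders:

  qm set 105 -scsi1 /dev/disk/by-id/scsi-HDD1
  qm set 105 -scsi2 /dev/disk/by-id/scsi-HDD2
  qm set 105 -scsi3 /dev/disk/by-id/scsi-HDD3
  # the guest then sees them as plain disks and can build its own pool on top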




I’ll be clustering the 2 Proxmox nodes (the Hyper-V ones are separate).


I know 2-node clusters require a QDevice for quorum.


❓Questions:


  • What’s the best option? Raspberry Pi? Intel NUC?
  • Where should the QDevice ideally be placed (separate VLAN? same switch?)?
  • Can a VM on another host (e.g., Hyper-V) work as a QDevice?


I still haven’t decided where to place PBS.


Options I’m considering:


  1. As a VM on one of the Proxmox nodes
  2. As a standalone server on a spare R610 or R620

❓What’s safer and more practical?


  • I’m leaning toward PBS on separate hardware to avoid recovery dependency.
  • But a VM is easier to manage and backup internally.

Any pros/cons for long-term setups?
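Whichever way I go, my understanding is that hooking PBS up to PVE looks the same in both cases, as a storage entry on the PVE side; a rough sketch with placeholder address, datastore name and credentials:

  pvesm add pbs pbs-backup --server 192.0.2.10 --datastore backup01 \
    --username root@pam --password 'PLACEHOLDER' \
    --fingerprint 'AA:BB:CC:...'
  # a dedicated backup user or API token would be cleaner than root@pam; this is only a sketch
  pvesm status    # the new entry should show up with type "pbs"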


I’m aiming for a setup that is:
  • ✅ Stable and clean (no Ceph, no shared SAN)
  • ✅ ZFS-based, well-tuned for VM storage
  • ✅ Low-maintenance and easy to recover
  • ✅ Avoiding NVMe unless truly necessary
  • ✅ Long lifespan disks (enterprise SSDs, high DWPD)
  • ✅ Simple 2-node clustering with proper quorum

❓Final questions:
  • Which disks would you choose in my situation?
  • Best practices for ZFS pool layout and tuning in a light workload setup?
  • Suggestions on boot disk strategy and HBA passthrough reliability?
  • Where/how to deploy PBS for long-term stability?
  • Tips on configuring a quorum device in a hybrid Proxmox/Hyper-V network?



Thanks in advance! I’ve done a lot of reading, but would really appreciate feedback based on your real-world experience with DL380 Gen10, ZFS tuning, and Proxmox setups.
 
Hi,
I am answering regarding quorum and cluster recommendations in general. According to the documentation [0], the recommendations are:
  • at least three cluster nodes (to get reliable quorum)
  • shared storage for VMs and containers
  • hardware redundancy (everywhere)
  • use reliable “server” components.
Since your goal is to have 2 nodes, it is important to set up a QDevice to provide a third vote [0]. Please read the Corosync External Vote Support section thoroughly to understand the drawbacks and implications of using an additional external vote in a cluster with an even number of nodes.
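A minimal sketch of that setup, assuming the external vote runs on a small always-on Debian-based machine outside the cluster (the IP address is a placeholder):

  # on the external host
  apt install corosync-qnetd
  # on every cluster node
  apt install corosync-qdevice
  # then, from one of the cluster nodes (needs root SSH access to the external host)
  pvecm qdevice setup 192.0.2.50
  pvecm status    # should now show an additional Qdevice vote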

The cluster stack (Corosync) relies on a low-latency, stable network. Therefore, it should have its own dedicated network [1]. It doesn't require much bandwidth; 1 Gbit/s is more than sufficient. What matters most is low latency.

Optimally, Corosync should run over its own network interfaces, and separate switches are recommended. Since Corosync handles the redundancy itself, these can be simple unmanaged switches. The separate networks/interfaces are important because if, for example, replication or VM data traffic saturates a shared network, Corosync latency increases as well, and this can then lead to node failures.
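For example, assuming a subnet such as 10.10.10.0/24 is reserved for Corosync (addresses and cluster name are placeholders), the dedicated link can be pinned when the cluster is created and joined:

  # on the first node
  pvecm create CLUSTERNAME --link0 10.10.10.1
  # a second ring on another network can be added with --link1 for redundancy
  # on the second node, join via the first node's address and state the local link address
  pvecm add 10.10.10.1 --link0 10.10.10.2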

Please check the cluster requirements section [1] and the HA manager requirements [2] for more details, and take a look at [4] for more information about fencing.

[0] https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
[1] https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_cluster_requirements
[2] https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#_requirements
[4] https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#ha_manager_fencing