New PVE 9 node - storage setup help

mikeyo

Member
Oct 24, 2022
Hi

About to build a new Proxmox 9 node with the following hardware -

Motherboard: Asus W880-ACE-SE
CPU: Intel 285K
RAM: 96GB
Storage:
1 x 4TB Gen5 NVMe,
3 x 1-2TB Gen4 NVMe,
1 x 500GB SATA SSD for the Proxmox install.

My plan is to populate all the NVMe slots (1 x Gen5 and 3 x Gen4). I'll be running a mix of VM, Docker and AI workloads.

I want the best I/O performance from the drives; I am not convinced that creating a ZFS pool from the three Gen4 NVMe drives will give me this.

Please can I have some suggestions on how best to lay out the storage for optimum I/O.

Thank you.
 
If you want the 'best' performance from the drives, ZFS is not going to cut it. ZFS is built for data resiliency first, and to this day it has not been fully optimized for NVMe drives.

I would suggest creating a ZFS pool for VM boot disks, using SAS/SATA SSDs as the main storage tier and assigning an NVMe drive as the SLOG/ZIL.
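A minimal sketch of what that could look like, assuming two SATA SSDs mirrored for the pool and a small partition on one NVMe for the SLOG; the pool name, storage ID and device paths below are placeholders:

# Mirrored pool for VM boot disks on two SATA/SAS SSDs (device IDs are placeholders)
zpool create vmpool mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

# Add a small NVMe partition as a dedicated SLOG (only accelerates sync writes)
zpool add vmpool log /dev/disk/by-id/nvme-DRIVE-part1

# Register the pool with Proxmox for VM and container disks
pvesm add zfspool vm-zfs --pool vmpool --content images,rootdir

Keep in mind a SLOG only helps synchronous writes; async-heavy guests won't see much difference from it.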

Where ultimate performance is required for the actual workloads, meaning latency as well as bandwidth, you can do PCIe passthrough of the NVMe drives directly to the guests that actually need it, probably your AI guests.
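For reference, passing an NVMe controller through on Proxmox looks roughly like this; the VMID and PCI address are examples, and IOMMU (VT-d) has to be enabled in the BIOS and on the kernel command line first:

# Find the PCI address of the NVMe controller you want to hand to a guest
lspci -nn | grep -i nvme

# Attach that controller to VM 101 (VMID and address are placeholders; pcie=1 needs the q35 machine type)
qm set 101 -hostpci0 0000:02:00.0,pcie=1

The whole drive then belongs to that guest, so the host and the other VMs can't use it.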

A note: in case you weren't aware, that motherboard uses PCIe switching to provide the multiple M.2 ports, so you will never get full bandwidth simultaneously from the drives connected through the chipset switch.
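If you want to see the effect yourself once the drives are installed, a read-only fio run against the chipset-attached drives in parallel will show the aggregate plateauing below the sum of the individual drives; device names below are placeholders:

# Sequential reads on three drives at once; compare the combined bandwidth
# to each drive benchmarked alone (device names are placeholders)
fio --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting \
    --name=d1 --filename=/dev/nvme1n1 \
    --name=d2 --filename=/dev/nvme2n1 \
    --name=d3 --filename=/dev/nvme3n1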
 
I was actually thinking of just formatting the drives as XFS and using qcow2 volumes. ZFS made me curious about striping the drives for better throughput.
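Roughly what I had in mind (device name, mount point and storage ID are just placeholders):

# Format one NVMe drive as XFS and mount it (add an fstab entry so it survives reboot)
mkfs.xfs /dev/nvme1n1
mkdir -p /mnt/nvme1
mount /dev/nvme1n1 /mnt/nvme1

# Add it as directory storage; qcow2 images then live as files on the XFS filesystem
pvesm add dir nvme1-dir --path /mnt/nvme1 --content images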
 

Qcow2 files will have the additional overhead of another file system layer, so I would go with LVM or ZFS block storage.
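For example, an LVM-thin pool on one of the NVMe drives, as a rough sketch; the device, VG and pool names are placeholders:

# Create an LVM thin pool on one NVMe drive (device and names are placeholders)
pvcreate /dev/nvme1n1
vgcreate nvme1vg /dev/nvme1n1
lvcreate -l 95%FREE --thinpool data nvme1vg   # leave headroom for pool metadata

# Register it with Proxmox; guest disks become thin LVs with no extra filesystem layer
pvesm add lvmthin nvme1-thin --vgname nvme1vg --thinpool data --content images,rootdir

That skips the qcow2-on-filesystem layer entirely while still giving you thin provisioning and snapshots.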