Best practice HDD scenario for a low-budget server?

jhr

Member
Nov 29, 2021
Hello,

I am aware of the advice many people here in the forum give about buying enterprise-grade SSDs for Proxmox VE.
But many people run Proxmox in their own homelab on low-budget hardware.
For example, I have this motherboard from Supermicro; currently it carries one Patriot P300 256GB M.2 SSD (on the board) plus 4x Hitachi/HGST Ultrastar 7K2 drives.
Proxmox is installed on the SSD, and the HDDs are in ZFS RAID10.
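
For reference, a striped-mirror ("RAID10") pool like that consists of two mirror vdevs; a minimal sketch, with placeholder disk names:

# create a striped mirror ("RAID10") pool from four HDDs
# sda-sdd are placeholders; stable /dev/disk/by-id/... paths are preferable in practice
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zpool status tank   # verify: two mirror vdevs, all disks ONLINE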

I have another old server running Proxmox Backup Server for the backups.

But when I start VM backups, I run into high IO and CPU utilization (both on the Proxmox host and inside the VMs). I tried replacing the HDDs in the PBS machine, but it had no effect. Even the network between the two servers does not seem to be the bottleneck.

So I think the bottleneck is the HDDs the VMs are running on (the non-SSD disks).
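
One way to check that assumption is to watch disk latency while a backup runs; a sketch, assuming the pool is named "tank":

# per-disk utilization and average wait times, refreshed every 2s (sysstat package)
iostat -x 2
# per-vdev latency for the ZFS pool
zpool iostat -v tank 2

If the HDDs sit near 100% utilization with high await times while the backup runs, the disks are the limit.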

I am thinking about changing my disk layout: one or two non-SSD HDDs for Proxmox itself and two NVMe SSDs (perhaps in RAID1) for the VMs.

Unfortunately my motherboard has only one M.2 slot, so I bought two PCEM2-1U PCIe NVMe M.2 adapters and two Samsung 990 PRO 1TB SSDs.

I think (but am not sure) I can use any PCIe slot on my X12STH-LN4F, because the adapter is only PCIe 3.0 x4 and the board has 1x PCIe 4.0 x16, 1x PCIe 4.0 x4 (in an x8 slot) and 1x PCIe 3.0 x4 (in an x8 slot). Am I right?
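
Once the adapters are installed, the negotiated link can be confirmed from the OS; a sketch (the PCI address is a placeholder, substitute your own):

# find the NVMe controllers and their PCI addresses
lspci | grep -i nvme
# show the negotiated link speed/width (LnkSta line) for one controller
lspci -vv -s 01:00.0 | grep -i lnksta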

The next question is how to configure those two NVMe drives. I must say I have never used Proxmox without ZFS, so my plan is to use ZFS again.
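
If I stay with ZFS, I imagine it would be something along these lines (pool name, storage ID and device paths are placeholders):

# mirror the two NVMe drives in one pool
zpool create -o ashift=12 nvpool mirror /dev/nvme0n1 /dev/nvme1n1
# register it as VM storage in Proxmox VE
pvesm add zfspool nvme-vm --pool nvpool --content images,rootdir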

I know the Samsung 990 PRO 1TB is not the best enterprise SSD on the market, but it should be OK?
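
Since consumer drives mostly differ in write endurance, I plan to at least keep an eye on the wear indicators, e.g.:

# NVMe health log via smartmontools: "Percentage Used" tracks rated wear,
# "Data Units Written" the total writes
smartctl -a /dev/nvme0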

Maybe my thinking is completely wrong and someone has a better idea of how to build a better server for the same money?

Many thanks.
 
I run my workstation setup on non-enterprise disks without a problem; I have various guests for testing etc. I would replace the HGST drives with SATA SSDs, two pairs in mirrored RAID, and boot PVE off one pair. Then use the NVMe for whichever VM needs the most performance. Use ext4/LVM; why bother with ZFS? I don't see the point for a budget setup, it's more trouble than it's worth.
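
A rough sketch of that ext4/LVM route, assuming one SSD pair is already mirrored (e.g. via mdadm as /dev/md0; all names are placeholders):

# turn the mirrored pair into an LVM thin pool for VM disks
pvcreate /dev/md0
vgcreate vmdata /dev/md0
lvcreate -l 95%FREE --thinpool thin vmdata   # leave headroom for thin-pool metadata
# register it as VM storage in Proxmox VE
pvesm add lvmthin ssd-thin --vgname vmdata --thinpool thin --content images,rootdir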
 
why bother with ZFS?

Because ZFS has a zillion features that LVM plus ext4 does not have. I won't list them again, but I always try hard to use ZFS on all my systems - and especially on PVE.

But yeah, your mileage may vary...
 
I guess LVM with ext4 or xfs would be much faster than ZFS, and it would use less RAM, but as I wrote, I have never used Proxmox without ZFS.
Maybe I should give that a chance.
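
Though if RAM usage is the main worry, I gather the ARC can simply be capped instead of dropping ZFS; e.g. to 4 GiB (an arbitrary value, in bytes):

# limit the ZFS ARC to 4 GiB; apply with update-initramfs -u and a reboot
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf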
 
So I think the bottleneck is the HDDs the VMs are running on (the non-SSD disks).

In some usage scenarios rotating rust becomes much more usable when a "Special Device", such as a mirrored NVMe, is added to that pool.

For newly written data, the metadata will be put there. (And "small blocks" can be configured to be stored there as well.) A "Special Device" can be small, below one percent of the data volume.
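
A sketch of what that looks like ("tank" and the device paths are placeholders):

# add a mirrored NVMe special vdev for metadata
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
# optionally store small records (here: up to 32K) on it as well
zfs set special_small_blocks=32K tank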

I would not recommend this approach for VM storage, but it is one piece of information you possibly didn't know yet.