Advice for new cluster setup

Testani

Member
Oct 22, 2022
Hi all,
I'm setting up my new infrastructure based on 3 nodes. This will be my setup:
3 x HP DL380, each with 256 GB RAM, an HBA, and 2 x 4 TB enterprise SAS SSDs.

I don't want to use Ceph; I prefer local ZFS with HA/replication. I'm quite new to this kind of setup, so I have a few questions:

1 - What about available TB of storage? I mean, if I set up these three hosts with three local ZFS pools with the same name and enable HA, how many TB can I use?
2 - With this setup, is it advisable to use only replication, managing the pools separately?
3 - It's not really on topic for this forum, but I'd like your advice: I currently use Endian as a virtual firewall. Do you recommend switching to pfSense/OPNsense?

Thanks in advance for your answers
 
I mean, if I set up these three hosts with three local ZFS pools with the same name and enable HA, how many TB can I use?
2 x 4 TB in a mirror? Never go above 80% usage --> 3.2 TB max usable. For VM block storage, with its specific random I/O, you will find several articles recommending staying below 50% --> 2 TB. So I would really aim to stay in the lower 2.x TB range.

If you replicate everything to both neighbors you store the same data three times --> so only about 666 GB ... 1066 GB per node is usable.

Do not forget to consider space for snapshots - if you want to use that feature.
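The math above can be sketched quickly in the shell. The figures are the ones from this thread (one 4 TB disk of usable mirror capacity, 3-way replication); your real numbers will differ, and in daily use you would check them with `zfs list -o name,used,avail` instead:

```shell
# Usable-space sketch for a 2x4TB mirror per node, replicated to both
# neighbors (3 copies of the data cluster-wide). Values in GB.
RAW_GB=4000                       # mirror capacity = one 4 TB disk
USABLE80=$((RAW_GB * 80 / 100))   # "never go above 80%" rule of thumb
USABLE50=$((RAW_GB * 50 / 100))   # conservative 50% for VM block storage
PER_NODE80=$((USABLE80 / 3))      # data stored three times across the cluster
PER_NODE50=$((USABLE50 / 3))
echo "80% rule: ${USABLE80} GB per pool, ~${PER_NODE80} GB per node with replication"
echo "50% rule: ${USABLE50} GB per pool, ~${PER_NODE50} GB per node with replication"
```

That is where the "666 GB ... 1066 GB per node" range comes from.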
 
Ouch, not very good. What do you think about also using 2 x 10 TB enterprise 7.2k HDDs per host?
In my mind I want to use them as a separate ZFS mirror for replicas and slow-I/O VMs. Do you think it's better to use the 10 TB HDDs in a mirror with the SSDs as cache for the ZFS volume?
Is it possible to replicate a VM from the SSD ZFS dataset to the HDD ZFS dataset?
 
Ouch, not very good. What do you think about also using 2 x 10 TB enterprise 7.2k HDDs per host?
Well..., technically it will work. Of course it can only deliver the (write) performance of a single disk, shared between all running VMs. Is this acceptable for your expected workload?
You have a lot of RAM. That helps to keep a large ARC, which speeds up normal operation a lot. But this also depends on your actual workload.
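If you want to see how large the ARC actually is, and optionally cap it, a rough sketch (the 128 GiB cap is an example value for a 256 GB host, not a recommendation; `arc_summary` ships with the zfsutils package on Proxmox):

```shell
# Inspect current ARC size and hit rates (read-only):
arc_summary | head -n 30

# Example only: cap the ARC at 128 GiB so VMs keep enough RAM.
# The value is in bytes; takes effect at next module load / reboot.
echo "options zfs zfs_arc_max=137438953472" > /etc/modprobe.d/zfs.conf
```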
In my mind I want to use them as a separate ZFS mirror for replicas and slow-I/O VMs. Do you think it's better to use the 10 TB HDDs in a mirror with the SSDs as cache for the ZFS volume?
Since you are setting up a new system, I would recommend searching for "adding a 'special device'" (as a mirrored enterprise SSD/NVMe) instead of a (read) cache or a SLOG (which only speeds up sync(!) writes).

I have no deep experience with it yet, but it seems that a "special device" is smarter than the classic cache approach. Of course one can have a special device AND a cache AND a SLOG; I am not sure whether all of them together are really useful in every case.

A special device should be added before writing a lot of data to the pool, as only newly written data will make use of it.
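For reference, adding a mirrored special vdev looks roughly like this. This is a sketch only: the pool name "tank" and the disk paths are placeholders for your own setup, and the `special_small_blocks` tuning is optional:

```shell
# Add a mirrored special vdev (metadata, and optionally small blocks)
# to an existing pool. Always mirror it: losing it loses the pool!
zpool add tank special mirror \
  /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B

# Optional: also store small data blocks (here <=16K) on the special vdev.
zfs set special_small_blocks=16K tank

# Verify the new layout:
zpool status tank
```

Remember the point above: only data written after the vdev is added can land on it.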
Is it possible to replicate a VM from the SSD ZFS dataset to the HDD ZFS dataset?
Yes. The integrated replication mechanism requires the same name for a datastore on the relevant nodes. Whether that datastore consists of SSDs/NVMe/HDDs, with or without a cache/SLOG/special device, is not checked. The redundancy level - mirror/RaidZ/RaidZ2 - is also not checked. But please..., stay with mirrors!
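A minimal sketch of that setup, assuming hypothetical names throughout (storage "local-zfs-hdd", pool "tankhdd", nodes "node1"/"node2"/"node3", VM 100 - all placeholders):

```shell
# Define the same-named ZFS storage on all relevant nodes in one step.
# Proxmox only matches by this storage name, not by the underlying disks.
pvesm add zfspool local-zfs-hdd --pool tankhdd --nodes node1,node2,node3

# Create a replication job for VM 100 to node2, every 15 minutes.
# Job IDs have the form <vmid>-<number>.
pvesr create-local-job 100-0 node2 --schedule '*/15'

# Check the replication state:
pvesr status
```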

Good luck. And as usual: just my 2€¢ :-)
 
You are very kind, thank you! So I can consider replicating from an SSD pool to an HDD pool; I think that's the best solution. I need the certainty of being able to start the replica in the event of a failure; I don't need HA for my applications. Obviously I will use both the 2 x 4 TB SSDs and the 2 x 10 TB HDDs in raidz1.
 
