cluster performance degradation

I wonder why all the guides talk about a minimum of 3 nodes. I could get to 4 at most and add an SSD to each node, so I have to assume my solution should be ZFS with HA. But then I couldn't do live migrations, right? Or do they just take more time?
 
1. Yes, 1 minute is the replication interval, but you still need to spin up the machine on a different node when the first one dies, right? (See the rough sketch below this list.)
2. For Ceph, 3 nodes is the minimum and it works great if you have fast disks. With more nodes it's even better :)
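
To put rough numbers on point 1: with asynchronous replication, everything written since the last completed sync is lost, and the guest is only back once HA has noticed the failure and restarted it on another node. A minimal sketch with purely illustrative timings (the function names and values are just placeholders):

```python
# Worst-case data loss (RPO) and downtime (RTO) with asynchronous storage
# replication plus an HA restart. All durations are illustrative assumptions.

def worst_case_rpo_seconds(replication_interval_s: float) -> float:
    """Writes since the last completed sync can be lost."""
    return replication_interval_s

def worst_case_rto_seconds(failure_detection_s: float, vm_boot_s: float) -> float:
    """Time until the guest is serving again on the surviving node."""
    return failure_detection_s + vm_boot_s

print(f"RPO: up to ~{worst_case_rpo_seconds(60):.0f} s of writes lost")
print(f"RTO: ~{worst_case_rto_seconds(120, 60):.0f} s until the VM is back up")
```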
 
Also, please make sure your memory is going to be enough, since ZFS uses a lot of memory for itself (the ARC). If you are running hyperconverged, there can be resource contention between your VMs and what ZFS needs.
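
As a back-of-the-envelope check (the cap fraction and host overhead below are assumptions, not your actual settings): the OpenZFS ARC can by default grow to roughly half of installed RAM unless you cap it with zfs_arc_max, so it's worth working out what is left for guests on a hyperconverged node.

```python
# Rough, hypothetical check of RAM left for VMs on a hyperconverged ZFS node.
# Numbers are illustrative assumptions, not tuning advice.

def ram_left_for_vms(total_ram_gib: float, arc_cap_fraction: float = 0.5,
                     host_overhead_gib: float = 4.0) -> float:
    """Return GiB remaining for guests after ZFS ARC and host overhead."""
    arc_gib = total_ram_gib * arc_cap_fraction  # ARC allowed to grow this big
    return total_ram_gib - arc_gib - host_overhead_gib

for ram in (64, 128, 256):
    print(f"{ram} GiB node -> ~{ram_left_for_vms(ram):.0f} GiB left for VMs "
          f"if the ARC is left at ~50% of RAM")
```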
 
Also, don't set your total node count to 4 in the cluster config; an even split can leave you without quorum (the classic split-brain scenario). Always stick with an odd number of nodes.
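
To make the quorum arithmetic concrete (this is just the standard majority-vote calculation, assuming one vote per node): 4 nodes tolerate no more failures than 3, because the cluster needs a strict majority of votes to stay quorate.

```python
# Majority quorum arithmetic: a cluster keeps quorum only while more than
# half of the expected votes are present (one vote per node assumed).

def quorum(nodes: int) -> int:
    """Minimum number of live nodes needed to keep quorum."""
    return nodes // 2 + 1

def tolerated_failures(nodes: int) -> int:
    """How many nodes can die before quorum is lost."""
    return nodes - quorum(nodes)

for n in range(2, 6):
    print(f"{n} nodes: quorum={quorum(n)}, tolerated failures={tolerated_failures(n)}")

# 2 nodes: quorum=2, tolerated failures=0
# 3 nodes: quorum=2, tolerated failures=1
# 4 nodes: quorum=3, tolerated failures=1   <- no gain over 3 nodes
# 5 nodes: quorum=3, tolerated failures=2
```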
 
My intention is to always use 3 nodes, and then use PBS for backups. Am I doing this right or not?
Yes, that is perfectly fine and a good choice. The mistake, IMO, is using HDDs, and such large ones (16 TB per disk), for your Ceph with only 3 nodes.
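
A rough illustration of why that hurts (the throughput figures below are assumptions): when an OSD or node dies, Ceph has to rewrite its data elsewhere, and with only 3 nodes and the default 3 replicas there is no spare host to rebuild onto until the failed node returns. Even when it can rebuild, a nearly full 16 TB HDD takes a long time:

```python
# Very rough estimate of Ceph backfill time after losing one OSD.
# Throughput figures are assumptions for illustration only.

def backfill_hours(osd_size_tb: float, effective_mb_s: float) -> float:
    """Hours to rewrite the contents of one failed OSD at a given rate."""
    total_mb = osd_size_tb * 1_000_000  # TB -> MB (decimal, as disks are sold)
    return total_mb / effective_mb_s / 3600

# ~100 MB/s sustained is optimistic for HDDs under concurrent client load;
# enterprise NVMe can sustain on the order of 1 GB/s.
print(f"HDD : ~{backfill_hours(16, 100):.0f} h")   # ~44 h
print(f"NVMe: ~{backfill_hours(16, 1000):.0f} h")  # ~4 h
```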
 
This is our setup for a new Ceph production cluster, and once everything tests OK we will purchase support from Proxmox too. We have 4 nodes for compute and 5 Ceph-only nodes, using enterprise NVMe and 2x 100G per node.

[attached screenshot of the setup]
 
Usually now (in the West) you use 3.xTB and 7.xTB SSDs or NVMe drives; that way you get good density and performance.
 
Yes, but I have now bought 12x 16 TB HDDs, so if I use ZFS could I run just 2 nodes? It doesn't seem worth adding the third node, and I would also save electricity; I had only bought the third node to use Ceph.