The maximum failure you can tolerate is probably a node's worth of disks, i.e. a whole node down.
If 2 nodes die, data doesn't get corrupted; you just lose quorum and write access to the cluster.
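To put numbers on the quorum part, here's a quick sketch of the majority math (just illustrative, not a Proxmox/Ceph API):

```python
# Illustrative majority-quorum math for a small cluster (not a real Proxmox/Ceph API).

def quorum_size(nodes: int) -> int:
    """Votes needed for a majority: floor(n/2) + 1."""
    return nodes // 2 + 1

def has_quorum(nodes: int, alive: int) -> bool:
    """The cluster keeps accepting writes only while a majority of voters is up."""
    return alive >= quorum_size(nodes)

for alive in range(3, -1, -1):
    print(f"3-node cluster, {alive} alive -> quorum: {has_quorum(3, alive)}")
# 3 or 2 alive -> True (still writable), 1 or 0 alive -> False (read-only, data intact)
```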
Your cluster is ideal for getting started with Ceph and just testing everything (in production :) ).
But this is not how PBS works. PBS doesn't stream data or write anything inline; chunks are usually scattered across the whole RAID, so you count random IOPS, not maximum sequential throughput.
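Rough sketch of why it ends up as random IOPS: a deduplicating chunk store addresses data by hash, so where a chunk lands on disk has nothing to do with where it sits inside the backup (the layout below is an assumption for illustration, not the exact PBS on-disk format):

```python
# Sketch of a content-addressed chunk store, in the spirit of a deduplicating
# backup server. Directory layout here is an assumed example, not the exact
# PBS implementation.
import hashlib
from pathlib import Path

def chunk_path(datastore: Path, chunk: bytes) -> Path:
    digest = hashlib.sha256(chunk).hexdigest()
    # Chunks are fanned out by hash prefix, so consecutive parts of one backup
    # land in unrelated directories and disk locations -> restores, verifies
    # and GC become random reads, not one long sequential stream.
    return datastore / ".chunks" / digest[:4] / digest

store = Path("/mnt/datastore")
for i in range(3):
    print(chunk_path(store, f"block {i}".encode()))
# The three paths share nothing but the datastore root: the hash decides
# placement, not the position inside the backup.
```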
With RAID5 there aren't any performance gains, whatever you do. I have one PBS on it (since the beta) and this is pretty much the performance you get from it. If you need faster turnaround, get SSDs, or at least RAID10 with a special device.
I've only had one client who split Ceph and Proxmox, and that also worked well. But the reason I push small hyperconverged clusters is that they are easier to maintain, and a crash hurts less, because you usually build several small clusters instead of one big one.
Odd number of nodes, usually fewer than 10 OSDs per node, and that should be it for smallish clusters (<15 nodes). Depending on HDD, SSD or NVMe, 10G networking is the minimum, with 25G or even 40G in mind.
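Back-of-envelope numbers behind the 10/25/40G advice (the per-device throughputs are rough assumptions, and Ceph replication traffic only makes it worse):

```python
# Ballpark NIC sizing math; device throughputs below are rough assumptions.
GBIT = 1e9 / 8  # bytes per second in one gigabit

devices = {
    "HDD (seq)": 200e6,    # ~200 MB/s
    "SATA SSD":  500e6,    # ~500 MB/s
    "NVMe SSD":  3000e6,   # ~3 GB/s
}

for name, bps in devices.items():
    for nic in (10, 25, 40):
        fit = (nic * GBIT) / bps
        print(f"{name:10s} -> {nic:2d}G link carries ~{fit:.1f} devices at full speed")
# A single NVMe already fills most of a 25G link, while a handful of HDDs
# fit comfortably inside 10G.
```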
Network is okay for a small number of VMs; RAM probably is too, if those VMs don't use more than 10 GB RAM per VM. Those SSDs work okay, I have a few of them in some clusters. All in all, a nice 3-node cluster to start with.
You think you have a Honda, but it's a rebranded Trabant. If you have 100 TB of HDDs, they work okay while you have 100 backups. But once you have 5k+ backups, you will feel the pain.
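Quick math on why big HDD datastores with lots of backups hurt (chunk size and IOPS are assumed round numbers, just to show the scale):

```python
# Why GC/verify gets painful on HDD: every chunk file has to be touched.
# Chunk size and IOPS figures are assumptions for illustration.
datastore_tb  = 100
avg_chunk_mib = 4      # assumed average chunk size
hdd_iops      = 150    # random IOPS a single HDD roughly sustains

chunks  = datastore_tb * 1024 * 1024 / avg_chunk_mib   # ~26 million chunks
seconds = chunks / hdd_iops
print(f"~{chunks/1e6:.0f}M chunks; a pass that touches each one "
      f"needs ~{seconds/3600:.0f} h on one HDD at {hdd_iops} IOPS")
# With 100 backups the indexes are small and this is tolerable; with 5k+
# backups every GC and verify run walks this whole tree again and again.
```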
Nobody told you to shut up, only that your logic doesn't make sense. Compression, deduplication and all the other features are costly, and that cost translates into IOPS and CPU. I have installed and maintained PBS nodes from 1 to 140 TB, some pure HDD, some with SSD metadata, and some full SSD.
Every...