First, a general note: you'd normally never risk a split brain in Proxmox VE with Ceph (at least as long as you don't set a Ceph pool's size/min_size to 2/1 or manually tinker with the cluster votes), as that's exactly what quorum is for. That's also why, with both four and three total nodes, only one node can fail: with two nodes down, the basic rule of > 50% of votes isn't achievable for either cluster size.
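To make that vote math concrete, here's a minimal sketch (not actual PVE/corosync code) assuming the simple one-vote-per-node majority rule:

```python
# Minimal sketch of majority quorum: how many node failures a cluster of a
# given size can tolerate while still holding > 50% of the votes.
def tolerable_failures(total_votes: int) -> int:
    majority = total_votes // 2 + 1   # strictly more than half of all votes
    return total_votes - majority     # votes that may drop out

for n in (3, 4, 5):
    print(f"{n} votes: quorum needs {n // 2 + 1}, "
          f"tolerates {tolerable_failures(n)} failure(s)")
# 3 votes -> 1 failure, 4 votes -> 1 failure, 5 votes -> 2 failures
# A QDevice adds one vote, so 4 nodes + QDevice counts as 5 votes here.
```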
To the topic: four storage nodes and a QDevice can work somewhat OK, but I'd still recommend going for 5 over 4. The main advantage is that in the worst still-working situation, when two nodes fail, the load of the two failed nodes gets spread over three remaining nodes, whereas in the four-node + QDevice setup the remaining two nodes need to handle all of the failed nodes' load on their own. So you'd need to keep at least 50% of each node's capacity free if you want to be prepared for a two-node failure; in the five-node case, 40% free capacity would be enough even in the worst case. Even if you only plan for the case where at most one node fully fails, each surviving node has to take over 33% vs. 25% extra storage/compute load, as the failed node's load gets spread over the remaining three or four nodes, respectively.
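The numbers above are plain back-of-the-envelope arithmetic; a quick sketch of where they come from (nothing PVE/Ceph specific assumed):

```python
# Headroom needed so the surviving nodes can absorb the failed nodes' load.
def headroom(total_nodes: int, failed_nodes: int) -> tuple[float, float]:
    survivors = total_nodes - failed_nodes
    extra_load = failed_nodes / survivors        # extra load per survivor, relative to its normal load
    free_capacity = 1 - survivors / total_nodes  # fraction of each node that must stay free
    return extra_load, free_capacity

for total in (4, 5):
    for failed in (1, 2):
        extra, free = headroom(total, failed)
        print(f"{total} nodes, {failed} failed: +{extra:.0%} load per survivor, "
              f"keep {free:.0%} of each node free")
# e.g. 4 nodes / 2 failed -> +100% load, keep 50% free
#      5 nodes / 2 failed -> +67%  load, keep 40% free
#      4 nodes / 1 failed -> +33%  load; 5 nodes / 1 failed -> +25% load
```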
Also, Ceph pools are recommended to be run with replica size/min_size = 3/2, meaning three copies of every object, with two copies successfully written before any write returns OK to the client. As a QDevice does not provide any real (storage) service, you'd have two nodes but want three copies. Ceph does not like this much, as with the default failure-domain settings it tries to spread objects over different OSDs (disks) and hosts to reduce the likelihood of all copies being destroyed if one or two nodes fail completely.
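A very oversimplified stand-in for the host failure domain (this is not actual CRUSH logic, just the counting argument):

```python
# With the default replicated rule, every replica must land on a different
# host, so a pool with size=3 needs at least 3 storage hosts; with fewer,
# placement groups stay degraded/undersized.
def replicas_placeable(pool_size: int, storage_hosts: int) -> bool:
    return storage_hosts >= pool_size

print(replicas_placeable(pool_size=3, storage_hosts=2))  # False: 2 nodes + QDevice
print(replicas_placeable(pool_size=3, storage_hosts=4))  # True:  4 storage nodes
```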
IMO, it's better to run more, slightly smaller nodes than a few huge ones. Similarly with OSD disk size: it's tempting to just use a few huge disks and be done, but using more, smaller ones not only increases performance (higher total IOPS budget) but also means there's less data to re-balance if one OSD fails. Naturally one needs to strike a trade-off, so it would be good to have a rough idea of the initial data usage required for your workload and the expected year-over-year growth in data usage.
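For the sizing estimate, a rough sketch with made-up placeholder numbers (the 70% target fill is just an assumption to leave room for re-balancing, adjust to taste):

```python
# Rough capacity planning: how many years the raw capacity lasts with
# 3 replicas, a fixed yearly growth rate and some fill-level headroom.
def years_until_full(raw_tib: float, used_tib: float, growth: float,
                     replicas: int = 3, target_fill: float = 0.7) -> int:
    usable = raw_tib / replicas * target_fill
    years = 0
    while used_tib <= usable and years < 50:
        used_tib *= 1 + growth
        years += 1
    return years

# e.g. 5 nodes x 4 OSDs x 4 TiB = 80 TiB raw, starting at 6 TiB of data, +30%/year
print(years_until_full(raw_tib=80, used_tib=6, growth=0.30))
```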
A huge advantage of Ceph is how scalable it is: you can start out with a three-node cluster providing just a few TiB of space and end up with 15 nodes and hundreds of TiB, all without any downtime.
But also note that Ceph needs a bit of compute power to handle the data flow and those nice re-balancing features, so if you want to converge compute and storage you need to keep that in mind too; an extra node can really help to take off some steam.
In any case, I'd recommend checking out our Ceph docs and the relatively recent performance paper:
https://pve.proxmox.com/pve-docs/chapter-pveceph.html
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/