I learned over time on the forum that it's my responsibility, as the party voluntarily replying, to assess the skillset, background, etc. of the asking party. They can't really tell us what they do not know (yet), because they do not know it.
+1 for this one. When you go testing things, first do it locally. I would also suggest starting with HA completely OFF. You can check the logs later and see how the cluster was doing (e.g. whether it was losing quorum during migrations and such).
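For instance, on a stock PVE node this is roughly what I would run after the fact (nothing exotic, just the usual suspects):

  # current membership and quorum state as corosync sees it
  pvecm status

  # corosync and pmxcfs messages around the time of your tests,
  # watch for membership changes and "lost quorum" entries
  journalctl -u corosync -u pve-cluster --since "1 hour ago"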
After you get familiar with the non-HA bits and pieces, read up on the HA stack, especially fencing:
https://pve.proxmox.com/wiki/High_Availability#ha_manager_fencing
Then, when you go back over your logs, you will realise that had you been running that setup with HA enabled, you would have been getting reboots every time you lost quorum.
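You can also check whether any of your guests are HA-managed in the first place; with an empty resource list, the watchdog stays inactive and fencing never kicks in:

  # shows the HA manager state and any HA-managed resources
  ha-manager status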
Feel free to ask here, but at this point you are brainstorming a bit of everything, without specific-enough questions for a straightforward answer. People mean well here, but if you ask a question and then re-iterate something the thread had supposedly already answered, confirming we are not on the same page, it is really time to take a step back (before follow-up questions can make sense).
To give you the context, and also where I believe @gfngfn256 is coming from ...
You reminded me I have yet to follow up on another thread here from earlier today:
https://forum.proxmox.com/threads/6-node-ha-cluster-split-brain.152081/#post-689437
That's a 6-node setup with HA, where the two halves (3+3) are connected redundantly with LACP + MLAG over a 100G OS2 fibre link, and ... it's not providing High Availability at all, despite having 2 corosync rings set up:
https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_redundancy
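For reference, the second ring in corosync.conf is just an extra address per node, something like this (names and addresses made up here, with each ring on its own physically separate network):

  nodelist {
    node {
      name: pve1
      nodeid: 1
      quorum_votes: 1
      ring0_addr: 10.10.10.1
      ring1_addr: 10.10.20.1
    }
    node {
      name: pve2
      nodeid: 2
      quorum_votes: 1
      ring0_addr: 10.10.10.2
      ring1_addr: 10.10.20.2
    }
  }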
Once you start getting your head around those things, the questions become easier to answer.
Regarding the use cases you mentioned:
Have you considered simply replicating data to each other (as offsite backups)? It's a good start. Even there you have options, e.g. running a tool like restic or zfs send | receive over that dedicated link. If you cross-connect it, you will still have some routing questions to sort out here, but no problem.
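A minimal zfs send | receive sketch, in case you go that route (pool, dataset and host names are made up here):

  # initial full replication of a snapshot to the peer
  zfs snapshot tank/data@offsite-1
  zfs send tank/data@offsite-1 | ssh peer-node zfs receive -u backup/data

  # subsequent runs only send the delta between two snapshots
  zfs snapshot tank/data@offsite-2
  zfs send -i @offsite-1 tank/data@offsite-2 | ssh peer-node zfs receive -u backup/data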
The other thing, from an architecture point of view: instead of e.g. Nextcloud, maybe you are better off with a solution like Resilio Sync (it used to have BitTorrent in the name):
https://www.resilio.com/individuals/
It literally allows you to go decentralised, and some of the "nodes" can be encrypted, i.e. data replicated there is already sent encrypted, while the keys stay on the nodes you choose. This works fine for environments like a public cloud too.
For really private data, why not simply put it into e.g. a VeraCrypt or LUKS container? Also, instead of having anything open to the internet, consider putting all the VMs behind a WireGuard or IPsec connection.
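The LUKS variant can be as simple as a file-backed container (paths and sizes made up, just to show the shape of it):

  # create and format a 1 GiB file-backed LUKS container
  fallocate -l 1G /root/private.img
  cryptsetup luksFormat /root/private.img
  cryptsetup open /root/private.img private
  mkfs.ext4 /dev/mapper/private
  mkdir -p /mnt/private
  mount /dev/mapper/private /mnt/private

  # ... and lock it away again when done
  umount /mnt/private
  cryptsetup close private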
That's probably enough brainstorming to get you thinking now?