Proxmox cluster

At the top of the forum post edit box there are a few helpful icons you can use to present your data in a readable format:
</> - code block, which should be used when pasting large amounts of log/configuration data
eye with a slash through it - spoiler block, for when an extra large amount of text is pasted

So you have 3 hosts, each on its own 2-IP subnet, communicating with each other over a gateway. And you have a VLAN that they all belong to, where no routing is needed. The cluster traffic should be on that VLAN, so the hosts are in the same L2 domain. Whether that will solve all your issues, I do not know.
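As a minimal sketch of what that looks like (the NIC name, VLAN ID 100, and the 192.168.100.x addresses are placeholders, assuming ifupdown2 as shipped with Proxmox), a tagged sub-interface on each host puts all three nodes in the same L2 domain with no routing involved:

Code:
# /etc/network/interfaces (excerpt) - hypothetical NIC name and addressing
auto eno1.100
iface eno1.100 inet static
    address 192.168.100.4/24
    # no gateway here; the corosync peers are reached directly on the VLAN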


I have no idea why you are using CGNAT addresses, or why they are bridges. Instead of waxing poetic, allow me to create a sample interfaces file for you:

Code:
# /etc/network/interfaces

iface enp33s0f0np0 inet manual

iface enp33s0f1np1 inet manual

# Corosync R1
auto enp33s0f0np0.100
iface enp33s0f0np0.100 inet static
    address 192.168.100.4   #PVE4
    netmask 255.255.255.0

# Corosync R2
auto enp33s0f1np1.101
iface enp33s0f1np1.101 inet static
    address 192.168.101.4   #PVE4
    netmask 255.255.255.0

auto bond0
iface bond0 inet manual
    bond-mode active-backup   # you can use LACP (802.3ad) if your switches support it
    bond-slaves enp33s0f0np0 enp33s0f1np1
    bond-miimon 100           # change as relevant to your link speed

auto vmbr0
iface vmbr0 inet static
    address 100.64.60.6/30
    gateway 100.64.60.5
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0.534
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet static
    address 100.64.60.10/29
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0
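To apply a file like this without rebooting, Proxmox ships ifupdown2, so something along these lines should work (assuming you have console/IPMI access in case a link drops while reloading):

Code:
# reload all interfaces from /etc/network/interfaces
ifreload -a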

RATIONALE:
When establishing your corosync rings, use the R1 and R2 addresses, keeping their traffic segregated from each other both physically and logically. That way you still have both interfaces available in case of link contention. For the rest of your interfaces and bridges, the physical ports are bonded to provide fault tolerance; this assumes each interface is connected to a different switch. If your switches support LACP across switches (or you are on a single switch), you can change the bond mode to 802.3ad (LACP).
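A rough sketch of how those two rings get wired up when building the cluster (PVE 6 or later; the cluster name and the second node's addresses below are placeholders, adjust per node):

Code:
# on the first node - link0/link1 map to corosync rings R1/R2
pvecm create mycluster --link0 192.168.100.4 --link1 192.168.101.4

# on each additional node, point at an existing member and supply
# that joining node's own ring addresses
pvecm add 192.168.100.4 --link0 192.168.100.5 --link1 192.168.101.5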

IDEALLY you would not share your corosync interfaces with your other traffic, but as I gather at least two of these hosts only have two interfaces, this is how I would approach it.
 
I suspect the OP is in a colo location, working with IPs his provider supplied.
I have to assume the same, but those obviously apply to v1 and v10 according to his network plan, and I left them the same. I'm operating under the assumption that addresses on other VLANs would be arbitrary and can/should use normal reserved IP space. In any case, it can be deduced that his issues are due to network configuration.