For my production servers (a 10-node cluster), I use:
2 network cards - active-passive bond - switches: for my VM LANs + Proxmox host communication
2 network cards - LACP bond - other dedicated switches: for storage
But I use separate VLANs + network ranges for the Proxmox hosts and the VM LANs.
It's mostly for security: I don't want my VMs to have network access to my Proxmox hosts.
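As a rough sketch, that layout could look something like this in /etc/network/interfaces (interface names, VLAN IDs, and IP addresses are just example values, not my real ones):

# Active-passive bond for VM LANs + Proxmox host communication
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100

# VLAN 10 on bond0: Proxmox host management network
auto bond0.10
iface bond0.10 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.0.10.11/24
    gateway 10.0.10.1
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0

# VLAN 20 on bond0: VM LAN (bridge only, no host IP here)
auto bond0.20
iface bond0.20 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0.20
    bridge-stp off
    bridge-fd 0

# LACP bond for storage, on dedicated switches
auto bond1
iface bond1 inet static
    address 10.0.30.11/24
    bond-slaves eno3 eno4
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100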
Now, if you have only one VLAN for both the Proxmox hosts and the VMs (even if you use different IP ranges), all the multicast packets will be flooded to all your VMs,
so I'm not sure about the impact on corosync.
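If you want to check how multicast behaves on that shared segment, you can test it with omping (the tool the Proxmox wiki recommends for corosync multicast troubleshooting); the hostnames are placeholders for your nodes:

# run simultaneously on every node, listing all cluster nodes
omping -c 10000 -i 0.001 -F -q node1 node2 node3

If omping reports multicast packet loss there, corosync will likely have problems on that segment.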
Since you have unmanaged switches, I think you can't configure VLANs? So maybe it's better to have dedicated NICs/switches for the Proxmox hosts?