TL;DR:
Given that you have another network for VM traffic, simply add a VLAN to that bridge and use it for Corosync Link1, leaving the dedicated network as Corosync Link0. Corosync uses very little bandwidth (i.e. it will not impact VM traffic at all) and you will have redundant links that will keep your cluster quorate if one network fails.
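As a minimal sketch of that extra link (assuming ifupdown2, a VLAN-aware VM bridge named vmbr1, VLAN 50 and the 10.10.50.0/24 subnet, all of which are example values), each node would get something like this in /etc/network/interfaces:

auto vmbr1.50
iface vmbr1.50 inet static
    address 10.10.50.11/24
    # Corosync Link1 rides this VLAN on top of the existing VM bridge;
    # vmbr1 itself must be VLAN-aware (bridge-vlan-aware yes, with
    # bridge-vids covering VLAN 50)

Give each node its own host address in that subnet and keep the subnet separate from the networks the VMs actually use.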
Long version:
To properly run a Proxmox cluster, you must use redundant links for Corosync, each one on its own NIC and switch, in anything but test clusters, regardless of whether or not you use shared storage or HA.
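For reference, a two-link setup ends up as two addresses per node in the nodelist of /etc/pve/corosync.conf; the node name and addresses below are example values, and if you edit the file on a live cluster remember to increment config_version in the totem section:

node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    # Link0: dedicated Corosync network
    ring0_addr: 10.10.40.11
    # Link1: VLAN on the VM bridge
    ring1_addr: 10.10.50.11
}

When building a cluster from scratch you can get the same result with pvecm create <clustername> --link0 <address> --link1 <address>, and pvecm add accepts the same --link0/--link1 options when joining a node.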
Remember that even without quorum you can still log in to the webUI if you are on PVE >=7.3, afaik (this does not work with earlier versions); if you were already logged in before losing quorum, the webUI keeps working regardless. You won't be able to do much of anything besides stopping VMs and LXCs, though. While you could force quorum with
pvecm expected <votes>
, it's risky if you don't understand what you are doing and its implications.
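Before touching anything, check where you stand; both of these are standard pvecm subcommands:

pvecm status
# the Votequorum section shows "Quorate: Yes" or "Quorate: No"
pvecm expected 1
# only as a last resort: lowers the expected votes so the node becomes
# quorate again and /etc/pve goes back to read-write

That second command is exactly the dangerous one discussed below.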
In the example you described, the VMs will keep running while the switch reboots and there should be no downtime.
But now imagine that your cluster switch breaks. You may or may not have a replacement at hand, and even getting to the location where it is installed might be tricky. Since you have lost quorum, no operations are allowed in the cluster; not even backups can run. So you run
pvecm expected 1
on each of your nodes to regain operation and run some backups, maybe even make a config change. Then you connect a replacement switch and the nodes start seeing each other again, but pmxcfs on each node will try to merge its changes and replication conflicts may arise... sounds like a nightmare to me.
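Spelled out, that failure mode looks like this (node names are examples; this illustrates what not to do, it is not a recommendation):

pvecm expected 1
# run on pve1: pve1 alone is now "quorate" and /etc/pve is writable
pvecm expected 1
# run on pve2: same thing, completely independently
# both nodes now accept uncoordinated writes to /etc/pve; once the switch
# is back and corosync re-forms the cluster, the diverged copies of
# /etc/pve must be reconciled and conflicting edits can be lost

If you really have to force quorum, doing it on a single node at a time and making all changes there keeps the conflict surface as small as possible.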