Hey, trying to set up this Proxmox cluster (currently 2-node) so that clustering/VM migrations run on a dedicated NIC segment.
I do have the ring0_addr settings within /etc/pve/corosync.conf as follows:
Code:
cat /etc/pve/corosync.conf

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.255.255.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.255.255.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: las2
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
I manage the UI via the front eth0 ports, though, and since I set the cluster up through the UI I assume it's somehow using that port for VM migrations. Is there a way to kick off VM migrations from the UI but always use the 10.255.255.X subnet for transport? Right now it appears to SSH-tunnel the VM through eth0 (10.1.2.X/24), which is only a 1Gb link.
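In case it helps frame the question, this is the direction I've been looking at but haven't applied yet (purely my reading of the docs, so treat it as a guess): the migration option in /etc/pve/datacenter.cfg seems to accept a network CIDR, and qm migrate appears to take a per-job override. The 10.255.255.0/24 value is just my dedicated segment plugged into that format, and the VM ID/target node below are placeholders.

Code:
# /etc/pve/datacenter.cfg -- pin migration traffic to the dedicated subnet
# (format per the pve docs: [type=]<secure|insecure>[,network=<CIDR>])
migration: secure,network=10.255.255.0/24

# one-off test from the CLI, overriding the network for a single migration
# (100 and pve2 are placeholders for a real VM ID and target node)
qm migrate 100 pve2 --online --migration_network 10.255.255.0/24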