Proxmox Cluster with dedicated NIC

scline

New Member
Apr 6, 2022
Hey, I'm trying to set up this Proxmox cluster (currently 2-node) so that clustering and VM migrations run on a dedicated NIC segment.

I have the ring0_addr settings in /etc/pve/corosync.conf as follows:
Code:
cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.255.255.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.255.255.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: las2
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

However, I manage the UI via the front eth0 ports, and I assume that since I set this up via the UI, it is somehow using that port for VM migrations. Is there a way to perform VM migrations from the UI but always use the 10.255.255.x subnet for transport? It appears to SSH-tunnel the VM through eth0 (10.1.2.x/24), going over a 1 Gb link.
 

You can explicitly set the migration network in the datacenter configuration (Web UI > Datacenter > Options > Migration Settings).
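
For reference, a minimal sketch of what that ends up looking like in /etc/pve/datacenter.cfg, assuming your dedicated segment is 10.255.255.0/24 (adjust the CIDR and type to your setup):

Code:
# /etc/pve/datacenter.cfg - sketch only, assuming the dedicated NIC subnet is 10.255.255.0/24
# type "secure" tunnels migration traffic over SSH; "insecure" skips the tunnel for more throughput on a trusted link
migration: secure,network=10.255.255.0/24

The Migration Settings dialog in the UI writes to this same file, so setting it in either place should have the same effect.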

Note that high traffic on the corosync network (e.g. through migrations) can lead to nodes fencing if you are using HA. Also, it seems like you have a 2-node cluster without a QDevice. It's strongly recommended to add a QDevice [1] to a 2-node cluster for cases where one of the nodes becomes unavailable - particularly when using HA.

[1] https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
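
If you do add a QDevice, the Proxmox-side setup is roughly the steps described in [1]; the address below is just a placeholder for your external voter:

Code:
# On the external voter (any small machine outside the cluster):
apt install corosync-qnetd

# On all cluster nodes:
apt install corosync-qdevice

# Then, on one cluster node - 10.1.2.50 is a placeholder for the external voter's IP:
pvecm qdevice setup 10.1.2.50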
 
Oh my, thank you so much! I missed this option and plan on adding an external vote to this system. Thanks for the quick reply!
 