I have a pair of PVE installs with a 10G direct physical connection between the two on subnet 169.254.0.0/31. They also have a 1G connection for management and VM egress connectivity. I want all traffic between the nodes themselves to go over the 10G connection. The hosts file for both is:
10.10.24.21 pve1.example.com pve1
10.10.24.41 pve2.example.com pve2
169.254.0.0 lan-pve1
169.254.0.1 lan-pve2
I, as a human, interact with the GUI through https://10.10.24.21:8006/, which is only a 1G connection. When I do a VM migration from pve1 to pve2, the ssh copy process shows that it's connecting to 10.10.24.41, not the 169.254 address, for the copy, and it's slow.
Looking at the GUI, in Datacenter > Cluster, the "Link 0" column lists the 169.254 address for both nodes, not the 10.10.24 address. The Nodename column shows "pve1" and "pve2". I suspect that because the node names resolve to the 10.10 addresses in /etc/hosts, the migration goes over the 1G connection.
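If it matters, my understanding is that the "Link 0" values shown there come from the ring0_addr entries in /etc/pve/corosync.conf, so the nodelist section presumably looks something like this (nodeids and quorum_votes are my guess, the addresses match what the GUI shows):

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 169.254.0.0
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 169.254.0.1
  }
}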
So what steps do I need to take to resolve this? Is it as simple as moving the pve1 and pve2 definitions in /etc/hosts to the 169.254 addresses, as sketched below? Will that break management functionality?
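To make that concrete, the change I have in mind would be something like the following, where mgmt-pve1 and mgmt-pve2 are just placeholder names I'd invent for the 1G addresses:

169.254.0.0 pve1.example.com pve1 lan-pve1
169.254.0.1 pve2.example.com pve2 lan-pve2
10.10.24.21 mgmt-pve1
10.10.24.41 mgmt-pve2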