Migration and two networks [solved]

rmoriz

New Member
Nov 17, 2024
solved: https://bugzilla.proxmox.com/show_bug.cgi?id=1177#c2
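
For reference, and if I read the comment in that bug report correctly, the fix is to declare a dedicated migration network cluster-wide in /etc/pve/datacenter.cfg. A minimal sketch for my setup (assuming 10.0.40.0/24 is the 2.5G network) would be:

Code:
migration: secure,network=10.0.40.0/24

With that in place, migrations should pick the target node's address from 10.0.40.0/24 instead of the 1G address.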


I have the following setup:


- 2 PVE nodes:
pve1 has 2 interfaces:
- 10.0.1.15 1Gbit/s
- 10.0.40.10 2.5 Gbit/s

pve2 has 2 interfaces:
- 10.0.1.16 1Gbit/s
- 10.0.40.11 2.5 Gbit/s

- the 1G interfaces are intended for external services (VMs, CTs), the 2.5G interfaces exclusively for migration (intra-cluster communication).
- The nodes can reach each other on both networks (see the quick link check after the corosync excerpt below).
- Both nodes form a Proxmox cluster; link 0 is the 2.5G interface and link 2 the 1G interface.
- Corosync.conf excerpt:
Code:
nodelist {
  node {
    name: pve1
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.40.10
    ring2_addr: 10.0.1.15
  }
  node {
    name: pve2
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.40.11
    ring2_addr: 10.0.1.16
  }
}
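
To verify that corosync actually sees both links as up, this quick check on either node should help (not part of the fix, just for confirming the setup):

Code:
# print the local node ID and the status of every configured knet link
corosync-cfgtool -s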

Issue:

When I migrate a VM from one node to another, the slow 1Gbit/s interface is used.

For example, when I move VM 104 from pve1 to pve2, this is the tunnel command that gets run:

Code:
/usr/bin/ssh -e none \
-o BatchMode=yes \
-o HostKeyAlias=pve2 \
-o UserKnownHostsFile=/etc/pve/nodes/pve2/ssh_known_hosts \
-o GlobalKnownHostsFile=none root@10.0.1.16 \
-o ExitOnForwardFailure=yes \
-L /run/qemu-server/104_nbd.migrate:/run/qemu-server/104_nbd.migrate \
-L /run/qemu-server/104.migrate:/run/qemu-server/104.migrate \
/usr/sbin/qm mtunnel

I expected the connection to be made to 10.0.40.11, not 10.0.1.16. What did I miss?

Thank you!
Roland
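
Edit: for completeness, a per-migration override also seems possible on the CLI, if I read the qm man page correctly (a sketch, untested here):

Code:
# one-off migration of VM 104 to pve2, forcing traffic over the 2.5G network
qm migrate 104 pve2 --online --migration_network 10.0.40.0/24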
 
Last edited:
