[SOLVED] Live Migration Network

finish06

Is it possible to select the network in which live migration occurs? Is it the default vmbr0? Or is it the same network as corosync, in which case I can change the network via this wiki guide: https://pve.proxmox.com/wiki/Separate_Cluster_Network? Or is it completely random?

I am unable to find any information about this by searching the forum or wiki.

Thanks
 
Yes, it uses the same addresses as the cluster (see /etc/pve/.members).
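
For anyone else checking this, /etc/pve/.members on each node lists the cluster members and the addresses in use. It typically looks roughly like the following (node names and IPs here are only made-up examples, not from this thread):

    {
    "nodename": "pve1",
    "version": 4,
    "cluster": { "name": "mycluster", "version": 3, "nodes": 3, "quorate": 1 },
    "nodelist": {
      "pve1": { "id": 1, "online": 1, "ip": "10.10.0.11"},
      "pve2": { "id": 2, "online": 1, "ip": "10.10.0.12"},
      "pve3": { "id": 3, "online": 1, "ip": "10.10.0.13"}
      }
    }

The "ip" entries are the addresses that migration traffic will target.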

Thank you for confirming that information. It was what I suspected; however, when setting up my new network, I wanted to be sure.
 
As a follow-up to the above question... which interface do disk migrations occur on? From my observations, I believe it is either the default vmbr0 or the corosync network (if segmented off of vmbr0). Same question for NFS mounts: which network interface is responsible for mounting the actual share?

I suspect it is the corosync interface, which worries me because I put corosync on its own VLAN; that now requires VLAN routing via my router, which connects to my 10 Gb switch over a 1 Gb link (it is a layer 2 switch).
 
Hi.

This is the configuration I am working with right now (2 NICs with 4x 1 Gb ports on each server, a cluster of 3 nodes):

3 ports for the vmbr0 bridge, for management and intranet communication (segment 10.10.0.0)
2 independent ports for corosync cluster communication, on separate segments (IPs 192.168.100.50 and 192.168.101.50)
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_cluster_network

2 ports in a Linux bond on another segment for communication between the servers and the NAS (segment 10.2.8.0)
1 port for the external IP address

All VMs on local disks, for replication (Proxmox VE 5.2).
NAS for backups.
HA enabled.
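
For readers who want to picture that layout, it would look roughly like this in /etc/network/interfaces (interface names, bond modes and addresses below are placeholders, not the exact config; the external port is omitted):

    # 3 ports bonded under the management/intranet bridge
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2 eno3
        bond-mode balance-xor
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
        address 10.10.0.50/24
        gateway 10.10.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

    # 2 independent ports for corosync, one per segment
    auto eno4
    iface eno4 inet static
        address 192.168.100.50/24

    auto eno5
    iface eno5 inet static
        address 192.168.101.50/24

    # 2-port bond towards the NAS
    auto bond1
    iface bond1 inet static
        address 10.2.8.50/24
        bond-slaves eno6 eno7
        bond-mode balance-rr
        bond-miimon 100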
 
Not to wake the dead or anything by responding 7 years later, but you can do a lot of things to specify NFS traffic... That being said, any interface that has an IP associated with it will generally permit corosync to communicate in PVE. It will automatically select the lowest latency link, so generally even if you have a flat network, it won't impact your performance so long as at least one interface is not experiencing heavy traffic.
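
For context, with corosync 3 / knet each node can simply carry more than one ringX_addr in /etc/pve/corosync.conf, and knet decides which of those links actually carries traffic. A trimmed, made-up example (node name and addresses are placeholders):

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 192.168.10.11
        ring1_addr: 192.168.20.11
      }
      # ...one node { } entry per cluster member...
    }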

A simple solution for you, even on an L2 switch: run a separate subnet on the same switch. It won't care, so make a 192.168.10.0/24 network with a round-robin bond from your slowest links for corosync, and then make another round-robin bond assigned 192.168.20.0/24 without a gateway using your fastest links. Your NFS server should be bonded and can easily do the same, or reside only on the .20 network until it is time for updates.
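
As a rough sketch of that idea (interface names and host addresses are invented for illustration, not anyone's real setup), the node side of /etc/network/interfaces could be something like:

    # slow links, round-robin bond, corosync subnet
    auto bond0
    iface bond0 inet static
        address 192.168.10.11/24
        bond-slaves eno1 eno2
        bond-mode balance-rr
        bond-miimon 100

    # fast links, round-robin bond, storage/NFS subnet, no gateway
    auto bond1
    iface bond1 inet static
        address 192.168.20.11/24
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode balance-rr
        bond-miimon 100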

In order to separate other network activities, it's logically straightforward (a quick way to check the path is sketched after this list):
  • Disk Move - Source Device to Destination Device (follow the path).
    • If you're moving from local to SAN, it will be on that same NFS link.
    • If you're moving from SAN.a to SAN.b, it will go over whatever links are used by each SAN (SAN link if they're on the same subnet).
    • If you're moving from node.a local storage to node.b local storage, it then becomes a mystery.
  • Backup - Well, naturally it's either local (e.g. /dev/sdb) or it's going to go to your backup destination
    • Local backups take up no network bandwidth
    • Backups to a SAN evidently use the SAN network
    • Backups off-site will use the path to your router, so use a local PBS server to back up off-site...
  • Clone - Identical steps to disk move... Except inter-node.
    • Depends on target storage.
    • Currently you cannot clone to another node's local storage (not even on the CLI).
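
If you ever want to confirm which link a given storage target will use, asking the kernel for the route is a quick sanity check (the target address and the output shown are just examples):

    # which interface / source address will traffic to this NFS/SAN target leave on?
    ip route get 10.2.8.5
    #   e.g. 10.2.8.5 dev bond1 src 10.2.8.11
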
Cheers,


Tmanok
 
