Unable to Migrate VM from one Node to another

Feb 1, 2024
Hey guys,

I have set up a 2-node cluster in Proxmox.

Node 1 has a public IP on one interface and a private IP on another.
Node 2 has a public IP on one interface and a private IP on another.

I have set up a DHCP server on Node 1, and it pushes the IP for Node 2.

Migration of VMs from Node 1 --> Node 2 works smoothly, but migration of VMs from Node 2 to Node 1 isn't working. Below is the error message:


Task viewer: VM 100 - Migrate (proxhetz02 ---> proxhetz01)

2024-02-09 22:09:36 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxhetz01' root@<Public IP> /bin/true
2024-02-09 22:09:36 ssh: connect to host <Public IP> port 22: No route to host
2024-02-09 22:09:36 ERROR: migration aborted (duration 00:00:03): Can't connect to destination address using public key
TASK ERROR: migration aborted

===================================================================================================

Node 1 Network Details

auto lo
iface lo inet loopback

iface enp195s0 inet manual

iface enx9e75f4189f48 inet manual

iface ens15 inet manual

iface enx2a91fdff11e5 inet manual

auto vmbr0
iface vmbr0 inet static
address <Public IP>
gateway <Public IP>
bridge-ports enp195s0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 192.168.100.1/24
bridge-ports ens15
bridge-stp off
bridge-fd 0

===================================================================================================
Node 1 Network Details

auto lo
iface lo inet loopback

iface enp193s0 inet manual

iface enx4a750efc2fba inet manual

iface enp197s0 inet manual

auto vmbr0
iface vmbr0 inet static
address <Public IP>
gateway <Public IP>
bridge-ports enp197s0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 192.168.100.14/24
gateway 192.168.100.1
bridge-ports enp193s0
bridge-stp off
bridge-fd 0
 
"ssh: connect to host <Public IP> port 22: No route to host"

Suggests some network config issue on Node 2.

Unsure as don't see "Node 2 Network Details"
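
To narrow that down, a minimal set of checks run from Node 2 might look like this (a sketch only; <Public IP of Node 1> is a placeholder, and the last line is the exact command the migration task runs):

# Run as root on Node 2 (proxhetz02)
ip route show default                      # more than one default route is a red flag
ip route get <Public IP of Node 1>         # which interface/gateway this traffic would use
ping -c 3 <Public IP of Node 1>            # basic reachability check
/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxhetz01' root@<Public IP of Node 1> /bin/true

In particular, if both vmbr0 and vmbr1 define a gateway, the duplicate default routes can send traffic for the other node's public IP out of the wrong interface, which shows up as exactly this kind of "No route to host" error.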
 
Node 2 Network Details

auto lo
iface lo inet loopback

iface enp193s0 inet manual

iface enx4a750efc2fba inet manual

iface enp197s0 inet manual

auto vmbr0
iface vmbr0 inet static
address <Public IP>
gateway <Public IP>
bridge-ports enp197s0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 192.168.100.14/24
gateway 192.168.100.1
bridge-ports enp193s0
bridge-stp off
bridge-fd 0
 
1. Shut down the VMs
2. Create backups of the VMs
3. Copy the backups to the new node via scp
4. Restore from the backups (see the sketch below)

Why create difficulties for yourself?
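
For reference, a minimal sketch of that manual path for VM 100, assuming local storage and the default dump directory on both nodes (the VM ID, storage name, and archive name are illustrative, not from the thread):

# On the source node: stop the VM and create a full backup
qm shutdown 100
vzdump 100 --mode stop --storage local --compress zstd

# Copy the archive to the other node (the filename includes a timestamp)
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst root@<Public IP of Node 1>:/var/lib/vz/dump/

# On the destination node: restore from the archive
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local

Keep in mind that VM IDs are cluster-wide, so the original VM 100 has to be removed (or a different ID chosen for the restore) before the restored copy can use the same number.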
 
I think the idea here is to be more hands-off rather than taking a manual approach; hence the cluster.

A "cluster" in Proxmox terms, as I know it, requires 3 hosts to form a quorum.
Two boxes can be used for HA, but only at the level of the VMs' directories, roughly speaking.
 
I don't understand how you built a cluster without connections between the nodes.

Try adding entries for both nodes to /etc/hosts:
proxhetz01 - Public IP 1
proxhetz02 - Public IP 2
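
For clarity, /etc/hosts expects the address first and then the name, so with placeholder addresses the entries on both nodes would look like:

# /etc/hosts (add on both nodes; addresses are placeholders)
<Public IP 1>   proxhetz01
<Public IP 2>   proxhetz02

The name just has to resolve to an address the other node is actually reachable on from this node.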


"Cluster" in Prox terms, as I know it to be requires 3 hosts to form a quorum.
# Disable cluster quorum for standalone booot
systemctl stop corosync pve-cluster
pmxcfs -l
nano /etc/pve/corosync.conf

quorum {
provider: corosync_votequorum
two_node: 1
wait_for_all: 0
}

reboot
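
After the reboot, a quick sanity check that the node is quorate again (standard pvecm tooling, not specific to this thread):

# Check cluster membership and quorum state
pvecm status
# Look for "Quorate: Yes"; the Flags line should also reflect two-node mode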
 
