Live migration fails with "online migrate failure - unable to detect remote migration address"

scottre
New Member
Nov 23, 2021
Syslog from node pve:
Nov 23 14:20:59 pve qm[1650800]: <root@pam> starting task UPID:pve:00193071:026F55E8:619D3F1A:qmstart:109:root@pam:
Nov 23 14:20:59 pve qm[1650801]: start VM 109: UPID:pve:00193071:026F55E8:619D3F1A:qmstart:109:root@pam:
Nov 23 14:20:59 pve systemd[1]: Started 109.scope.
Nov 23 14:20:59 pve systemd-udevd[1650818]: Using default interface naming scheme 'v247'.
Nov 23 14:20:59 pve systemd-udevd[1650818]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 23 14:21:00 pve kernel: device tap109i0 entered promiscuous mode
Nov 23 14:21:00 pve systemd-udevd[1650825]: Using default interface naming scheme 'v247'.
Nov 23 14:21:00 pve systemd-udevd[1650825]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 23 14:21:00 pve systemd-udevd[1650818]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 23 14:21:00 pve systemd-udevd[1650837]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 23 14:21:00 pve systemd-udevd[1650837]: Using default interface naming scheme 'v247'.
Nov 23 14:21:00 pve kernel: fwbr109i0: port 1(fwln109i0) entered blocking state
Nov 23 14:21:00 pve kernel: fwbr109i0: port 1(fwln109i0) entered disabled state
Nov 23 14:21:00 pve kernel: device fwln109i0 entered promiscuous mode
Nov 23 14:21:00 pve kernel: fwbr109i0: port 1(fwln109i0) entered blocking state
Nov 23 14:21:00 pve kernel: fwbr109i0: port 1(fwln109i0) entered forwarding state
Nov 23 14:21:00 pve kernel: vmbr1: port 2(fwpr109p0) entered blocking state
Nov 23 14:21:00 pve kernel: vmbr1: port 2(fwpr109p0) entered disabled state
Nov 23 14:21:00 pve kernel: device fwpr109p0 entered promiscuous mode
Nov 23 14:21:00 pve kernel: vmbr1: port 2(fwpr109p0) entered blocking state
Nov 23 14:21:00 pve kernel: vmbr1: port 2(fwpr109p0) entered forwarding state
Nov 23 14:21:00 pve kernel: fwbr109i0: port 2(tap109i0) entered blocking state
Nov 23 14:21:00 pve kernel: fwbr109i0: port 2(tap109i0) entered disabled state
Nov 23 14:21:00 pve kernel: fwbr109i0: port 2(tap109i0) entered blocking state
Nov 23 14:21:00 pve kernel: fwbr109i0: port 2(tap109i0) entered forwarding state
Nov 23 14:21:00 pve qm[1650800]: <root@pam> end task UPID:pve:00193071:026F55E8:619D3F1A:qmstart:109:root@pam: OK
Nov 23 14:21:00 pve sshd[1650794]: Received disconnect from 192.168.1.42 port 57828:11: disconnected by user
Nov 23 14:21:00 pve sshd[1650794]: Disconnected from user root 192.168.1.42 port 57828
Nov 23 14:21:00 pve sshd[1650794]: pam_unix(sshd:session): session closed for user root
Nov 23 14:21:00 pve systemd[1]: session-397.scope: Succeeded.
Nov 23 14:21:00 pve systemd[1]: session-397.scope: Consumed 1.407s CPU time.
Nov 23 14:21:00 pve systemd-logind[622]: Session 397 logged out. Waiting for processes to exit.
Nov 23 14:21:00 pve systemd-logind[622]: Removed session 397.
Nov 23 14:21:00 pve sshd[1650867]: Accepted publickey for root from 192.168.1.42 port 57830 ssh2: RSA SHA256:tGUZ46kaMSSmBKH+vGSsWfP7Jq+TAW7y43FByViRGb8
Nov 23 14:21:00 pve sshd[1650867]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Nov 23 14:21:00 pve systemd-logind[622]: New session 398 of user root.
Nov 23 14:21:00 pve systemd[1]: Started Session 398 of user root.
Nov 23 14:21:01 pve qm[1650873]: <root@pam> starting task UPID:pve:001930BA:026F5706:619D3F1D:qmstop:109:root@pam:
Nov 23 14:21:01 pve qm[1650874]: stop VM 109: UPID:pve:001930BA:026F5706:619D3F1D:qmstop:109:root@pam:
Nov 23 14:21:01 pve QEMU[1650814]: kvm: terminating on signal 15 from pid 1650874 (task UPID:pve:001930BA:026F5706:619D3F1D:qmstop:109:root@pam:)
Nov 23 14:21:01 pve qm[1650873]: <root@pam> end task UPID:pve:001930BA:026F5706:619D3F1D:qmstop:109:root@pam: OK

/etc/pve/corosync.conf

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve
    nodeid: 2
    quorum_votes: 1
    ring0_addr: fd2a:a019:c441:1537::1
    ring1_addr: 192.168.1.45
  }
  node {
    name: pve03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: fd2a:a019:c441:1537::3
    ring1_addr: 192.168.1.4
  }
  node {
    name: pve04
    nodeid: 4
    quorum_votes: 1
    ring0_addr: fd2a:a019:c441:1537::4
    ring1_addr: 192.168.1.42
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: pvecluster
  config_version: 9
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

root@pve:/etc/pve# more /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve
    nodeid: 2
    quorum_votes: 1
    ring0_addr: fd2a:a019:c441:1537::1
    ring1_addr: 192.168.1.45
  }
  node {
    name: pve03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: fd2a:a019:c441:1537::3
    ring1_addr: 192.168.1.4
  }
  node {
    name: pve04
    nodeid: 4
    quorum_votes: 1
    ring0_addr: fd2a:a019:c441:1537::4
    ring1_addr: 192.168.1.42
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: pvecluster
  config_version: 9
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
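For reference, when the target address cannot be detected automatically, the migration network can be pinned cluster-wide in /etc/pve/datacenter.cfg; the general form is migration: <secure|insecure>[,network=<CIDR>]. A minimal sketch, assuming the 192.168.101.0/24 storage network that appears in the hosts files below is reachable from every node (that CIDR is an assumption, not something taken from this cluster's config):

/etc/pve/datacenter.cfg
migration: secure,network=192.168.101.0/24

With that line in place, the target node should pick an address inside that network when the migration starts, instead of the address detection failing.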


pve04:/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.45 pve
192.168.1.4 pve03
192.168.1.42 pve04
192.168.101.1 pve-storage4
192.168.101.101 redsan01 truenas01
192.168.101.3 pve03-storage
192.168.101.4 pve04-storage

fd2a:a019:c441:1537::1 pve-storage6
fd2a:a019:c441:1537::3 pve03-storage6
fd2a:a019:c441:1537::3 pve04-storage6

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts



pve:/etc/hosts

127.0.0.1 localhost.localdomain localhost
192.168.1.42 pve04
192.168.1.45 pve
192.168.1.4 pve03

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

fd2a:a019:c441:1537::1 pve-storage6
fd2a:a019:c441:1537::3 pve03-storage6
fd2a:a019:c441:1537::3 pve04-storage6
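Live migration generally has to resolve the target node's name (or an address on the configured migration network), so a quick sanity check, just a sketch using the names from the hosts files above, is to confirm on both nodes that every node name and every *-storage6 name resolves to the address you expect:

# run on both pve and pve04; names taken from the hosts files above
for h in pve pve03 pve04 pve-storage6 pve03-storage6 pve04-storage6; do
    echo "== $h"
    getent hosts "$h"
done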
 
Please include the migration task log and the output of pveversion -v from both nodes.
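A minimal sketch of how that can be collected, run on both pve and pve04 (the UPID below is only a placeholder; the real one appears in the task list output or in the GUI task history):

# package versions for this node
pveversion -v

# list recent tasks on this node and note the UPID of the failed qmigrate task
pvenode task list

# dump the full log of that task (placeholder UPID; substitute the real one)
pvenode task log 'UPID:pve:00000000:00000000:00000000:qmigrate:109:root@pam:'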