Proxmox migrate BUG

First, please make sure you are always running the latest version!
We constantly release bug fixes and security updates.

Did you set up redundant links for Corosync? [0]
Without them, if the network Corosync runs on fails, your hosts can no longer reach the other cluster members, which leads to them fencing themselves [1] if HA resources are defined on those hosts.
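As a rough sketch (the 10.10.x.y addresses are placeholders, not taken from any real setup), a node entry with a second link would look something like this:

node {
  name: examplenode
  nodeid: 1
  quorum_votes: 1
  ring0_addr: 10.10.10.1
  # second, redundant link on a separate physical network
  ring1_addr: 10.10.20.1
}

Each ringX_addr should be on its own physical network, otherwise the redundancy gains you nothing. When editing /etc/pve/corosync.conf, also increase config_version in the totem section so the change gets applied; afterwards, corosync-cfgtool -s should report both links as connected.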

Can you provide your Corosync config? cat /etc/pve/corosync.conf
The network config as well, please: cat /etc/network/interfaces
And the output of the following command: ha-manager status


[0] https://pve.proxmox.com/pve-docs-7/pve-admin-guide.html#pvecm_redundancy
[1] https://pve.proxmox.com/pve-docs-7/pve-admin-guide.html#ha_manager_fencing
 
root@pvetest1:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pvetest1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.11.11
  }
  node {
    name: pvetest2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.11.12
  }
  node {
    name: pvetest3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.11.13
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: pve
  config_version: 3
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
 
root@pvetest1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface ens33 inet manual

iface ens34 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.123.201/24
    gateway 192.168.123.1
    bridge-ports ens33
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.11.11/24
    bridge-ports ens34
    bridge-stp off
    bridge-fd 0
 
root@pvetest1:~# ha-manager status
quorum OK
master pvetest2 (active, Wed Oct 4 15:32:21 2023)
lrm pvetest1 (active, Wed Oct 4 15:32:16 2023)
lrm pvetest2 (idle, Wed Oct 4 15:32:21 2023)
lrm pvetest3 (idle, Wed Oct 4 15:32:22 2023)
service vm:101 (pvetest1, started)
 
Thank you for your answer. I have posted the configurations you mentioned above.
 
