I have a 3-node cluster. Each node has a static IPv4 and IPv6 address and additionally a layer-2 IPv4 connection via a switch.
Up to one of the latest PVE updates, the cluster information showed the layer-2 IPv4 addresses, but now I see the IPv6 address in the node overview.
In parallel I see replication errors where SSH is attempted via the IPv6 address and then fails, because the further communication expects the IPv4 addresses:
corosync:
Code:
nodelist {
  node {
    name: server4
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.10.0
    ring1_addr: 95.216.100.221
  }
  node {
    name: server5
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.10.1
    ring1_addr: 95.216.38.237
  }
  node {
    name: server6
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.0.10.2
    ring1_addr: 65.21.230.42
  }
}
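For reference, this is how I check which addresses the cluster links actually use (standard corosync/PVE tools, run on one of the nodes):
Code:
# show cluster membership and the addresses corosync resolved for each node
pvecm status
# show per-link (ring0/ring1) connectivity status from this node's view
corosync-cfgtool -s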
example error message for replication:
Code:
command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=server6' root@2a01:4f9:6a:1cca::2 pvecm mtunnel -migration_network 10.0.10.0/30 -get_migration_ip' failed: exit code 255
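When it fails, I compare how the peer hostname resolves against a forced IPv4 connection (server6 and 10.0.10.2 are just my nodes from the config above):
Code:
# check how the peer hostname resolves on this node (IPv4 or IPv6 first?)
getent hosts server6
# force an IPv4 SSH connection to the ring0 address as a comparison
ssh -4 -o BatchMode=yes root@10.0.10.2 true && echo "ipv4 ssh ok"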
What drives me crazy is that this error does not happen all the time, but only 2-5 times per day, even though the replication runs at a 15-minute interval.
Does anyone have an idea why the ring0 IP is not used for the server?
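In case it helps: my current idea for a workaround, assuming PVE picks the target address from normal hostname resolution, would be to pin the node names to the ring0 IPv4 addresses in /etc/hosts on each node. I have not verified yet that this is the intended fix, so treat it as a sketch:
Code:
# /etc/hosts (sketch, on each node): pin cluster hostnames to the ring0 IPv4 addresses
10.0.10.0  server4
10.0.10.1  server5
10.0.10.2  server6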