Update Proxmox cluster 5 to 6, unable to resolve ring0_addr

Inglebard

Renowned Member
May 20, 2016
Hi,

We have been using Proxmox since version 4.0 as a cluster for our VMs.
We have a small infrastructure with 4 nodes, each configured with a single NIC.
We upgraded from 4 to 5 (in-place) without issue.

We would like to do the same for 5 to 6.

However, I have some questions about the docs and the pve5to6 output.

First things first, you will find below all the files related to my questions. Some of them have been modified to hide the original names and IPs.


In the docs, I can see the following:

With Corosync 3 the on-the-wire format has changed. It is now incompatible with Corosync 2.x because it switched out the underlying multicast UDP stack with kronosnet. Configuration files generated by a Proxmox VE with version 5.2 or newer, are already compatible with the new Corosync 3.x (at least enough to process the upgrade without any issues).

Question 1: Since I am running Proxmox 5.4 but upgraded from 4.0, are my configuration files compatible with the new Corosync 3.x?


So I decided to quickly check the output of pve5to6.


= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
WARN: updates for the following packages are available:
linux-libc-dev, pve-cluster, lxc-pve, pve-kernel-4.15, pve-kernel-4.15.18-20-pve

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 5.4-2

Checking running kernel version..
PASS: expected running kernel '4.15.18-19-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

PASS: systemd unit 'pve-cluster.service' is in state 'active'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.

Analzying quorum settings and state..
INFO: configured votes - nodes: 4
INFO: configured votes - qdevice: 0
INFO: current expected votes: 4
INFO: current total votes: 4

Checking nodelist entries..
FAIL: node4: unable to resolve ring0_addr 'node4' to an IP address according to Corosync's resolve strategy - cluster will potentially fail with Corosync 3.x/kronosnet!
WARN: node1: ring0_addr 'node1' resolves to '192.168.0.1'.
Consider replacing it with the currently resolved IP address.
FAIL: node2: unable to resolve ring0_addr 'node2' to an IP address according to Corosync's resolve strategy - cluster will potentially fail with Corosync 3.x/kronosnet!
FAIL: node3: unable to resolve ring0_addr 'node3' to an IP address according to Corosync's resolve strategy - cluster will potentially fail with Corosync 3.x/kronosnet!


Checking totem settings..
PASS: Corosync transport set to implicit default.
PASS: Corosync encryption and authentication enabled.

INFO: run 'pvecm status' to get detailed cluster status..

= CHECKING INSTALLED COROSYNC VERSION =

FAIL: corosync 2.x installed, cluster-wide upgrade to 3.x needed!

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'Storage1' enabled and active.
SKIP: storage 'Storage2' disabled.
PASS: storage 'Storage3' enabled and active.
PASS: storage 'Storage4' enabled and active.
PASS: storage 'Storage5' enabled and active.
SKIP: storage 'Storage6' disabled.
PASS: storage 'Storage7' enabled and active.

= MISCELLANEOUS CHECKS =

FAIL: Unsupported SSH Cipher configured for root in /root/.ssh/config: 3des

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
WARN: 3 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if the local node's hostname 'node1' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.0.1' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters security level for TLS connections (2048 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters security level for TLS connections (2048 >= 2048)
INFO: Checking KVM nesting support, which breaks live migration for VMs using it..
PASS: KVM nested parameter not set.

= SUMMARY =

TOTAL: 30
PASSED: 19
SKIPPED: 3
WARNINGS: 3
FAILURES: 5

ATTENTION: Please check the output for detailed information!
Try to solve the problems one at a time and then run this checklist tool again.

/etc/hosts:
127.0.0.1 localhost.localdomain localhost
192.168.0.1 node1.company.com node1 pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

/root/.ssh/config:
Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc

corosync.conf:
logging {
debug: off
to_syslog: yes
}

nodelist {
node {
name: node2
nodeid: 2
quorum_votes: 1
ring0_addr: node2
}

node {
name: node1
nodeid: 1
quorum_votes: 1
ring0_addr: node1
}

node {
name: node4
nodeid: 4
quorum_votes: 1
ring0_addr: node4
}

node {
name: node3
nodeid: 3
quorum_votes: 1
ring0_addr: node3
}

}

quorum {
provider: corosync_votequorum
}

totem {
cluster_name: PROXMOX-CLS
config_version: 10
ip_version: ipv4
secauth: on
version: 2
interface {
bindnetaddr: 192.168.0.1
ringnumber: 0
}

}

Question 2: About "FAIL: Unsupported SSH Cipher configured for root in /root/.ssh/config: 3des", should I just remove "3des-cbc" from "/root/.ssh/config" on all nodes?
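In other words, on each node I would change the Ciphers line shown above to something like this (the same list with only 3des-cbc dropped; whether the other legacy ciphers should also go is part of my question):
Code:
Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc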

Question 3: About "FAIL: nodeX: unable to resolve ring0_addr 'nodeX' to an IP address according to Corosync's resolve strategy - cluster will potentially fail with Corosync 3.x/kronosnet!", do I just need to list all nodes in "/etc/hosts" on every node?
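For instance, I imagine putting something like this in /etc/hosts on every node (the addresses for node2-node4 are placeholders here, since the real ones are masked in this post):
Code:
127.0.0.1 localhost.localdomain localhost
192.168.0.1 node1.company.com node1
192.168.0.2 node2.company.com node2
192.168.0.3 node3.company.com node3
192.168.0.4 node4.company.com node4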

Question 4: About "WARN: nodeX: ring0_addr 'nodeX' resolves to '192.168.0.1'. Consider replacing it with the currently resolved IP address.", (note that we use one NIC per node and we don't have a separate cluster network; I don't think that was possible when we created the cluster), what should I do since the name is already specified in /etc/hosts?
 
I cannot give you a guarantee, but in general triple-DES should not be used and PVE does not require it. This is working for me:
Code:
root@clusterA:~# cat /root/.ssh/config
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com

For the name resolution problem: Using IP addresses instead of names is recommended for corosync.
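As a rough sketch of what the nodelist could then look like (the addresses of node2-node4 are masked in your post, so these are placeholders; only node1's 192.168.0.1 is taken from your output):
Code:
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.0.1
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.0.2
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.0.3
  }
  node {
    name: node4
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.0.4
  }
}
Remember to increase config_version (currently 10) whenever you change the file.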
 
Hi,

Thanks @Dominic for the answer.

I cannot give you a guarantee, but in general triple-DES should not be used and PVE does not require it. This is working for me
I hope so, because I have no idea what to do if it doesn't work.

For the name resolution problem: Using IP addresses instead of names is recommended for corosync.
So I need to edit it manually as described here: https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_edit_corosync_conf ?
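If I read the wiki correctly, the rough procedure would be something like this (working on a copy, bumping config_version, then moving it into place; this is just my understanding, not a tested recipe):
Code:
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
# edit corosync.conf.new: replace the ring0_addr names with IPs and increase config_version
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf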


I will heavily test these changes before touching production, because I can see a disaster scenario in case of corosync misconfiguration. :eek:
 
The thing with using hostnames in corosync is that it will break as soon as they are not resolvable anymore. This situation is even more likely if you rely on another server in the network (e.g. DNS) instead of /etc/hosts.
I hope so, because I have no idea what to do if it doesn't work.
I've added and removed 3des-cbc in my file, with some rebooting in between just to be sure, and everything is still working.
So I need to edit it manually as described here: https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_edit_corosync_conf ?
I'd recommend doing so.
I will heavily test these changes before production
This is a very good idea.
 
