Cannot upgrade from 6.4 to 7

anton2023

New Member
Jun 20, 2023
Please help. I want to upgrade from Proxmox 6.4 to 7, but I get this warning:

WARN: cluster consists of less than three quorum-providing nodes!



pve6to7
Code:
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages uptodate

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 6.4-1

Checking running kernel version..
PASS: expected running kernel '5.4.106-1-pve'.

= CHECKING CLUSTER HEALTH/SETTINGS =

PASS: systemd unit 'pve-cluster.service' is in state 'active'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.

Analzying quorum settings and state..
INFO: configured votes - nodes: 2
INFO: configured votes - qdevice: 0
INFO: current expected votes: 2
INFO: current total votes: 2
WARN: cluster consists of less than three quorum-providing nodes!

Checking nodelist entries..
PASS: nodelist settings OK

Checking totem settings..
PASS: totem settings OK

INFO: run 'pvecm status' to get detailed cluster status..

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

SKIP: storage 'disk1df' disabled.
SKIP: storage 'disk2df' disabled.
SKIP: storage 'disk3df' disabled.
PASS: storage 'disk4tb' enabled and active.
PASS: storage 'hdd2' enabled and active.
PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.
PASS: storage 'nfs236' enabled and active.
PASS: storage 'nfs243' enabled and active.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if the local node's hostname 'df520' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.112.239' configured and active on single interface.
INFO: Checking backup retention settings..
INFO: storage 'local' - no backup retention settings defined - by default, PVE 7.x will no longer keep only the last backup, but all backups
INFO: storage 'hdd2' - no backup retention settings defined - by default, PVE 7.x will no longer keep only the last backup, but all backups
PASS: no problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking custom roles for pool permissions..
INFO: Checking node and guest description/note legnth..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking storage content type configuration..
PASS: no problems found
INFO: Checking if the suite for the Debian security repository is correct..
INFO: Make sure to change the suite of the Debian security repository from 'buster/updates' to 'bullseye-security' - in /etc/apt/sources.list:6
SKIP: NOTE: Expensive checks, like CT cgroupv2 compat, not performed without '--full' parameter

= SUMMARY =

TOTAL:    31
PASSED:   25
SKIPPED:  5
WARNINGS: 1
FAILURES: 0

ATTENTION: Please check the output for detailed information!
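
For reference, the repository suite change flagged above would look like this, assuming the stock Debian security entry (your mirror URL may differ):

Code:
# /etc/apt/sources.list - before (Buster)
deb http://security.debian.org/debian-security buster/updates main contrib

# after (Bullseye)
deb http://security.debian.org/debian-security bullseye-security main contrib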
 
Hi, before rebooting the node you can change its vote weight.

For example, if you have pve1 (weight=1) and pve2 (weight=1), you can change pve1 to weight=2 and keep pve2 at weight=1 in the corosync.conf file, so that the cluster does not lose quorum while a node reboots during the upgrade.
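
A minimal sketch of that change, assuming the standard corosync.conf layout; the names pve1/pve2, the node IDs, and the second address are placeholders for your real values. Edit the cluster-synced copy at /etc/pve/corosync.conf and increase config_version by one, otherwise the change is not applied:

Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2   # raised from 1 so pve1 keeps quorum on its own
    ring0_addr: 192.168.112.239
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.112.240   # placeholder address
  }
}

totem {
  cluster_name: mycluster   # keep your existing name
  config_version: 3         # increment by one from the current value
  version: 2
  # keep your remaining totem settings (interface, etc.) unchanged
}

With votes 2+1, pve1 alone still reaches the majority (2 of 3 votes) while pve2 is rebooting. Remember to set quorum_votes back to 1 (and bump config_version again) once both nodes are upgraded.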

MM
 