One node and one QDevice permanently shut down

SilicaStorm

OK, I had a total of two nodes plus one QDevice working great, but the second node and the QDevice have now been shut down permanently.
That leaves one node, and it seems I have not done a proper clean-up of the previous cluster configuration. The cluster is no longer required.
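For reference, a minimal way to check what cluster configuration is still in place on the remaining node (the commands and paths below assume a standard PVE 8 install, not anything specific to this setup):

Code:
# Which nodes does the cluster config still know about? (needs corosync running)
pvecm nodes
# Which copies of the cluster config are still present?
ls -l /etc/pve/corosync.conf /etc/corosync/corosync.conf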

PVENAS1 is the primary node that will continue.

In the PVENAS1 system log, the following entry repeats:

Code:
Aug 24 05:50:34 PVENAS1 pmxcfs[3281]: [confdb] crit: cmap_initialize failed: 2

The task "Bulk start VMs and Containers" just spins. If I attempt to manually start a VM or CT, I get the error message: "cluster not ready - no quorum? (500)".
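For context on that 500 error: pmxcfs keeps /etc/pve read-only while it is in cluster mode without quorum, and starting a guest needs a write (a lock), so the start fails. A minimal sketch of the usual first checks (both commands need corosync to be running to give useful output):

Code:
# Show quorum/vote information
pvecm status
# On a healthy single remaining node, this temporarily lowers the expected votes
pvecm expected 1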

Code:
Linux PVENAS1 6.8.12-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-1 (2024-08-05T16:17Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Aug 24 05:25:44 +07 2024 on tty1
root@PVENAS1:~# systemctl status corosync
○ corosync.service - Corosync Cluster Engine
     Loaded: loaded (/lib/systemd/system/corosync.service; disabled; preset: enabled)
     Active: inactive (dead)
  Condition: start condition failed at Sat 2024-08-24 05:24:54 +07; 13min ago
       Docs: man:corosync
             man:corosync.conf
             man:corosync_overview

Aug 24 05:24:53 PVENAS1 systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet condition check (ConditionPathExists=/etc/corosync/corosync.conf).
Aug 24 05:24:53 PVENAS1 systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet condition check (ConditionPathExists=/etc/corosync/corosync.conf).
Aug 24 05:24:54 PVENAS1 systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet condition check (ConditionPathExists=/etc/corosync/corosync.conf).
Aug 24 05:24:54 PVENAS1 systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet condition check (ConditionPathExists=/etc/corosync/corosync.conf).
Aug 24 05:24:54 PVENAS1 systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet condition check (ConditionPathExists=/etc/corosync/corosync.conf).
Aug 24 05:24:54 PVENAS1 systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet condition check (ConditionPathExists=/etc/corosync/corosync.conf).
root@PVENAS1:~# cd /etc/corosync/corosync.conf
-bash: cd: /etc/corosync/corosync.conf: No such file or directory
root@PVENAS1:~#

Code:
root@PVENAS1:/etc/pve# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-14
proxmox-kernel-6.8: 6.8.12-1
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
pve-kernel-5.15.158-1-pve: 5.15.158-1
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.2
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.2-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
root@PVENAS1:/etc/pve#

Thanks for any assistance cleaning up the cluster config...
 
Thanks for the quick response, Falk R.
Here are the results from the CLI command provided:

Code:
root@PVENAS1:~# pvecm expected 1
Cannot initialize CMAP service
root@PVENAS1:~#
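That CMAP error is expected in this state: pvecm talks to corosync's CMAP service, and corosync cannot start while /etc/corosync/corosync.conf is missing (as the systemctl output above already showed). A quick sketch to confirm, assuming standard paths:

Code:
# corosync provides CMAP; with its config file gone it stays inactive
systemctl is-active corosync
ls -l /etc/corosync/corosync.conf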
 
Following the cluster wiki section "Separate a Node Without Reinstalling", the command below allowed the VMs and CTs to bulk start:

Code:
root@PVENAS1:~# pmxcfs -l
[main] notice: resolved node name 'PVENAS1' to '192.168.1.80' for default node IP address
[main] notice: forcing local mode (although corosync.conf exists)
 
Sorry for asking, but I don't want to do any more damage...

Is this correct?

Code:
root@PVENAS1:/etc/pve# rm /etc/pve/corosync.conf
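For completeness, the full sequence from the "Separate a Node Without Reinstalling" section of the cluster documentation looks roughly like this - a sketch rather than a substitute for the wiki, and destructive, so only run it on a node that is meant to leave the cluster for good:

Code:
systemctl stop pve-cluster
systemctl stop corosync
# start the cluster filesystem in local mode so /etc/pve becomes writable
pmxcfs -l
# remove the cluster configuration
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
# stop the local-mode instance and bring the regular service back up
killall pmxcfs
systemctl start pve-cluster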
 
Thanks Falk R.

Everything seems to be running OK now for all VMs and CTs, and the system log is no longer saturated.

After a full bare-metal reboot, all seems OK.

If I have any other questions about the clean-up, can I use this same thread?


 
If I have any other questions about the clean-up, can I use this same thread?

You don't have to ask. :) It's your thread, even. People who were in it will get a notification that there's a new post, if that's what you want. You can also mention people like @esi_y in new threads. Anyone who doesn't like that can limit these notifications on their receiving side.

Generally, if you have a new topic, I think it helps to start with a new title, though. And if you ask whether you can ask, you have already asked, without permission, for permission. ;)

NB: The clean-up, if needed at all, would just be in /etc/pve/nodes ...
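A minimal sketch of that clean-up, with the old node's name as a placeholder (OLDNODE is hypothetical - replace it with the actual name of the removed node, and leave the surviving node's own directory alone):

Code:
# list leftover node directories
ls /etc/pve/nodes
# remove the directory of the node that was permanently shut down
rm -r /etc/pve/nodes/OLDNODE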

PS: What helps others is setting a good title and marking the thread as "solved" - accessible by editing the title in the top right.
 