Good evening,
after a botched cluster configuration, I would like to dissolve this cluster back into standalone nodes.
Current status: both nodes host virtual machines that must be preserved.
Status server06 (version 5.4-13):
Code:
root@server06 ~ # pvecm nodes
Membership information
----------------------
Nodeid Votes Name
1 1 138.201.123.234 (local)
Code:
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =
Checking for package updates..
WARN: updates for the following packages are available:
libcurl3, linux-image-amd64, linux-image-4.9.0-12-amd64, pve-kernel-4.15, pve-kernel-4.15.18-26-pve, curl, libcurl3-gnutls
Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 5.4-2
Checking running kernel version..
PASS: expected running kernel '4.15.18-12-pve'.
= CHECKING CLUSTER HEALTH/SETTINGS =
FAIL: systemd unit 'pve-cluster.service' is in state 'inactive'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.
Analzying quorum settings and state..
INFO: configured votes - nodes: 2
INFO: configured votes - qdevice: 0
INFO: current expected votes: 2
INFO: current total votes: 1
WARN: total votes < expected votes: 1/2!
WARN: cluster consists of less than three nodes!
FAIL: corosync.conf (2) and pmxcfs (0) don't agree about size of nodelist.
Checking nodelist entries..
PASS: server07: ring0_addr is configured to use IP address '138.201.123.235'
PASS: server06: ring0_addr is configured to use IP address '138.201.123.234'
Checking totem settings..
PASS: Corosync transport set to implicit default.
PASS: Corosync encryption and authentication enabled.
INFO: run 'pvecm status' to get detailed cluster status..
= CHECKING INSTALLED COROSYNC VERSION =
FAIL: corosync 2.x installed, cluster-wide upgrade to 3.x needed!
= CHECKING HYPER-CONVERGED CEPH STATUS =
SKIP: no hyper-converged ceph setup detected!
= CHECKING CONFIGURED STORAGES =
PASS: storage 'local' enabled and active.
= MISCELLANEOUS CHECKS =
INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for running guests..
WARN: 30 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if the local node's hostname 'server06' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '2a01:4f8:172:2a1a::2' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters security level for TLS connections (2048 >= 2048)
PASS: Certificate 'pveproxy-ssl.pem' passed Debian Busters security level for TLS connections (2048 >= 2048)
INFO: Checking KVM nesting support, which breaks live migration for VMs using it..
PASS: KVM nested parameter not set.
= SUMMARY =
TOTAL: 25
PASSED: 17
SKIPPED: 1
WARNINGS: 4
FAILURES: 3
ATTENTION: Please check the output for detailed information!
Try to solve the problems one at a time and then run this checklist tool again.
Status server07 (version 5.4-13):
Code:
root@server07 ~ # pvecm nodes
Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?
Cannot initialize CMAP service
How should I proceed? I found this video: https://www.youtube.com/watch?v=GSg-aeQ5gT8, but I am not sure whether that approach preserves the virtual machines. In addition, this fails:
Code:
root@server06 ~ # pmxcfs -l
[main] notice: unable to acquire pmxcfs lock - trying again
[main] crit: unable to acquire pmxcfs lock: Resource temporarily unavailable
[main] notice: exit proxmox configuration filesystem (-1)
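For context, the Proxmox VE 5.x admin guide documents a "separate a node without reinstalling" procedure that keeps the guest configs, since they remain in the node's local copy of /etc/pve. Note that server07 already reports no corosync.conf, so it may effectively be standalone already and only server06 may need this. Below is a hedged sketch of those steps, not a verbatim recipe from this thread: the safety guard and function name are my own additions, and you should verify against the current documentation and take backups first.

```shell
#!/bin/sh
# Hedged sketch of the upstream "separate a node without reinstalling"
# steps for PVE 5.x. Guard and function name are illustrative additions.
set -eu

separate_node() {
    # The "unable to acquire pmxcfs lock" error above usually means a
    # pmxcfs instance is still running, so stop the cluster stack first.
    systemctl stop pve-cluster corosync

    pmxcfs -l                      # restart pmxcfs in local mode

    rm -f /etc/pve/corosync.conf   # drop the cluster config inside pmxcfs
    rm -rf /etc/corosync/*         # drop corosync's own config

    killall pmxcfs                 # stop the local-mode instance again
    systemctl start pve-cluster    # pmxcfs comes back up standalone
}

# Guest configs under /etc/pve/nodes/<node>/ are kept locally by these steps.
if [ -d /etc/pve ]; then
    separate_node
else
    echo "Not a Proxmox VE node (/etc/pve missing) - nothing done."
fi
```

After this, `pvecm nodes` should no longer show cluster membership, and the VMs listed under the local node should still be there; the 30 running guests flagged by the checklist are untouched by the config changes, but a backup beforehand is still prudent.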
Thank you very much for your help!