Hi,
After reinstalling Proxmox VE on node PVE2, it did not reconnect to the cluster. The remaining node, PVE1, is no longer able to make VM backups; the error message is shown in the attached picture.
How can I reset or delete the cluster configuration on PVE1 without stopping the VMs?
When I try to open a shell, I get this message:
TASK ERROR: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
How can I make a VM backup in this situation? That is the priority. If I succeed, I will reinstall PVE1 as well.
When I try to open a shell, I get this message:
TASK ERROR: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
I am not sure where you got this message. Debugging your situation should take place as user "root" at the local console, either physically or via IPMI/DRAC/AMT/serial... Edit: or SSH, of course, if the network is available...
What about the output of "pvecm status" that I asked for?
I'm almost certain you have a lost-quorum situation. I don't know what cluster setup you have (corosync, storage, Ceph, QDevice, etc.), so search the forums as applicable; you'll find what you need.
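As a rough sketch of what I'd check and run first (these are standard Proxmox VE commands; the VM ID and storage name below are placeholders you'd adapt to your setup):

```shell
# Check cluster and quorum state on PVE1 (run as root)
pvecm status

# If quorum is lost on the single surviving node, you can temporarily
# lower the expected vote count so /etc/pve becomes writable again:
pvecm expected 1

# Then take the priority backup manually from the CLI, e.g. for VM 100
# to a storage named "local":
vzdump 100 --mode snapshot --storage local --compress zstd
```

"pvecm expected 1" only papers over the quorum loss until the next corosync restart; it is a way to get your backup done, not a permanent fix.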
If you are really frustrated, see this; it may even help you.
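For reference, the Proxmox VE documentation describes how to separate a node from a cluster without reinstalling. Roughly (verify against the current docs before running; this destroys the cluster configuration on that node):

```shell
# Stop the cluster services on the node you want to make standalone
systemctl stop pve-cluster corosync

# Start pmxcfs in local mode so /etc/pve is writable without quorum
pmxcfs -l

# Remove the corosync configuration
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

# Restart the cluster filesystem normally
killall pmxcfs
systemctl start pve-cluster
```

Running guests are not touched by this; it only removes the cluster configuration, so the VMs keep running throughout.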