Thank you for your post! I have had this exact issue, which presented itself after a power failure knocked out some of my servers and I was rebuilding. I thought I was losing it, as I had already replaced the RAID controller trying to figure this out.
Do you know if there have been any bug...
That is exactly what I did already. The cluster was originally at version 11 and I followed those exact steps to bump it to version 12.
At this point I have simply wiped the node and reinstalled it as I needed it back online ASAP and am no longer having issues. Thank you for your time.
I have tried that already when I bumped the version to 12. It updated the file on all nodes, including the one that is not working, in both the corosync and pve directories. I have rebooted the entire cluster since then and the issue with the pve2 node is still present. That is why I am at such a loss...
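For reference, this is roughly how I have been checking that the versions actually match on each node (paths are the standard Proxmox ones):

# compare the config_version in both copies of the file
grep config_version /etc/pve/corosync.conf /etc/corosync/corosync.conf
# and confirm what the cluster itself reports
pvecm status | grep -i 'config version'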
I did, as well as rebooting the node when simply restarting the services did not work.
Working node:
root@pve1:~# pvecm status
Cluster information
-------------------
Name: Cluster-01
Config Version: 12
Transport: knet
Secure auth: on
Quorum information...
Additionally, on the occasions when I have managed to get the corosync file in the pve directory to match the one in the corosync directory and on the other nodes, Proxmox does not appear to be picking it up, as none of the issues are resolved.
Sorry, I was not clear. I have verified that the corosync files in both the /etc/corosync/ and /etc/pve/ directories match. When the node starts, the one in the pve directory reverts to its previous version, while the one in the corosync directory stays updated. Even after manually updating the...
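For completeness, the procedure I followed to bump the version was along these lines (editing the pve copy so that pmxcfs propagates it, which is my understanding of the documented approach):

# work on a copy rather than the live file
cp /etc/pve/corosync.conf /root/corosync.conf.new
nano /root/corosync.conf.new   # increment config_version by one
cp /root/corosync.conf.new /etc/pve/corosync.conf
# pmxcfs should then sync the change to /etc/corosync/ on every node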
Currently the file cannot even be opened due to the error in the screenshot above. However, I have already copied the corosync file from the working nodes to the non-working node and restarted pve-cluster, as well as doing a full reboot of the node. Additionally, in the GUI the node appears to be fully...
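For anyone else hitting this, one way to get at /etc/pve while pve-cluster is failing is to start the cluster filesystem in local mode, roughly like this (this is my understanding of the recovery approach; use with care, since it bypasses the cluster):

systemctl stop pve-cluster corosync
pmxcfs -l                      # mount /etc/pve in local mode
cp /etc/corosync/corosync.conf /etc/pve/corosync.conf
killall pmxcfs
systemctl start corosync pve-cluster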
Hello, I have a 4-node Proxmox 6 cluster that I am having issues with. I recently had to change the IP addresses and hostnames of two of the nodes in my cluster, and have managed to get that done. However, I have one node (not one that was changed) that is now giving me trouble. The /etc/pve...
Okay, so in an attempt to get some of my services back online I moved some VMs and containers off the VLANs, and they still were not able to connect. With this discovery I did some playing around. The guests are not able to connect if they are going over a bonded network. So in this case I...
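For context, the relevant part of my /etc/network/interfaces looks roughly like this (interface names, bond mode, and addresses here are placeholders, not my exact config):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094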
I have double- and triple-checked the environment already. The firewall has not been modified, and I have disabled automatic updates there. The switches are also unchanged. There are also other devices on the same VLANs that some of the VMs are trying to use, and those devices work...
I recently updated my 4-node Proxmox cluster to version 6.0-11, and after the update machines that are on a VLAN are no longer able to get an IP address, access the internet, or connect to any other machine. VMs and containers not on a VLAN continue to work just fine. There were no other...
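To try to narrow down where the tagged traffic stops, I have been sniffing on the host side, along these lines (VLAN 20 and bond0 are just examples; substitute your own IDs and interfaces):

# watch for a guest's DHCP requests leaving the bridge, with VLAN tags shown
tcpdump -i bond0 -nn -e vlan 20 and port 67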
Since upgrading to 6.0 I have noticed that my 4-node cluster has become unstable. At random times, up to 3 of the 4 nodes will suddenly drop all connections and appear as if they have rebooted. However, inspecting my machines' iDRAC logs shows no reboots or power issues. It would appear that this...
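When a drop happens I have been pulling the corosync and cluster logs from around that window to compare across nodes, e.g. (timestamps are just an example):

journalctl -u corosync -u pve-cluster --since "2019-11-01 14:00" --until "2019-11-01 14:30"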
I have had this issue for a while now, and after upgrading to Proxmox 6 and the new Ceph it is still there.
The problem is that the Ceph display page shows that I have 17 OSDs when I only have 16. It shows the extra one as being down and out. (Side note: I do in fact have one OSD that is down...
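If the extra entry turns out to be a stale leftover from a previously removed disk, my plan is to clean it up roughly like this (osd.16 is a hypothetical ID; confirm the real one with ceph osd tree first):

ceph osd tree                  # identify the phantom entry
ceph osd crush remove osd.16   # remove it from the CRUSH map
ceph auth del osd.16           # remove its authentication key
ceph osd rm osd.16             # remove it from the OSD map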
It looks like it had already done the update. My load balancer had just connected me to a node in the cluster that the update had not yet been performed on.