Well, we disabled HA on all the VMs when this occurred last time so nothing had moved around this time.
c1-h5-i and c1-h9-i reported "table full, dropping packet" but didn't get fenced or reboot.
Could the other nodes have rebooted because either of those nodes told them to? Can you tell what...
I assume this could also be caused by any of the VMs, so any VM with a high connection count could impact the whole node.
We had a far higher limit set than we were ever using with Xen. That said, the conntrack errors only appeared on the two nodes that didn't restart at all.
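Not related to the fencing itself, but for anyone else hitting the "table full, dropping packet" messages, this is roughly how I check and raise the conntrack limit. A minimal sketch, assuming a standard Linux nf_conntrack setup; the 1048576 value is only an example, not what we actually run:

```shell
# Current usage vs. the configured limit (nf_conntrack module loaded):
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# Raise the limit persistently if count is hitting max
# (example value; size it to your RAM and connection workload):
echo 'net.netfilter.nf_conntrack_max = 1048576' > /etc/sysctl.d/99-conntrack.conf
sysctl -p /etc/sysctl.d/99-conntrack.conf
```

If count sits near max during the incident window, the drops are expected; if it never approaches max, the log messages are a symptom of something else.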
It's on its own 10.x network on the second NIC.
Backups are not enabled, and looking at the traffic graphs there is nothing more than usual, a few Mbit/s.
Public traffic is on the first NIC.
Nothing, as far as I am aware. Everything was stable; that was the first node that went down, and some others followed after.
How is it set up? Could you elaborate on what information you want, so I can provide it?
Last week something happened where nodes just started rebooting each other (I assume because they thought other nodes were down), so we removed all VMs from the cluster.
This morning, a similar thing happened.
One node went down, then another and another. Initially, we thought this was due to conntrack...
We've got all our nodes online and added one server to HA, but it remains in the queued status.
I have attached an image showing the cluster, but it just remains in the dead status. I have restarted the cluster process too, without much change.
Is there anything else to check?
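For anyone wanting more detail, these are the standard Proxmox VE commands I'd run to see why an HA resource sits in "queued" and whether the cluster itself is healthy (service names as shipped with PVE; the journal time window is just an example):

```shell
# Quorum and cluster membership:
pvecm status

# Cluster filesystem and corosync health:
systemctl status pve-cluster corosync

# HA manager services (CRM = cluster-wide manager, LRM = local resource manager;
# a resource stays "queued" until a quorate CRM with an active LRM picks it up):
systemctl status pve-ha-crm pve-ha-lrm
ha-manager status

# Recent HA manager logs around the incident:
journalctl -u pve-ha-crm -u pve-ha-lrm --since "1 hour ago"
```

If `pvecm status` shows no quorum, or the LRM on the target node is in lost-agent/wait state, HA resources will not leave "queued" regardless of restarts.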
Looking to move some internal VMs over from OnApp to Proxmox.
The backup image file that OnApp creates appears to be just a file-level backup. Does anyone have experience getting those into Proxmox 5 with local storage (to test)?
Thanks!
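One possible approach, assuming the OnApp backup is a tarball of the guest's root filesystem: unpack it into a fresh raw image and import that into a new VM. This is only a sketch; the VM ID, sizes, storage name (`local-lvm`), and paths are made-up examples:

```shell
# 1. Create an empty raw image with a filesystem (size is an example):
qemu-img create -f raw /tmp/restore.raw 20G
mkfs.ext4 /tmp/restore.raw

# 2. Loop-mount it and unpack the file-level backup into it:
mkdir -p /mnt/restore
mount -o loop /tmp/restore.raw /mnt/restore
tar -xpf /path/to/onapp-backup.tar -C /mnt/restore
umount /mnt/restore

# 3. Create an empty VM and import the disk into local storage:
qm create 100 --name restored-vm --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 100 /tmp/restore.raw local-lvm
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
```

Note the result is a partitionless filesystem image, so the guest still needs a bootloader (e.g. boot a rescue ISO in the VM and install grub) before it will start on its own.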