Apparently the cluster file system is not working as expected with corosync 3. I just had a look at /root/.ssh/authorized_keys (a symbolic link to /etc/pve/priv/authorized_keys) and the same issue shows up there:
^R^C^@^@^@^@^@^@^^^@^@^@^@^@^@^@^@^@^@^@priv/authorized_keys.tmp.1501^@
ssh-rsa AAAAB3N[...]
ssh-rsa AAAAB3N[...]
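For anyone who wants to check their own nodes, the symlink and the raw file contents can be inspected with standard tools (nothing Proxmox-specific assumed here):
ls -l /root/.ssh/authorized_keys
hexdump -C /etc/pve/priv/authorized_keys | head
An intact file should start directly with the ssh-rsa lines; the NUL bytes and the stray priv/authorized_keys.tmp.1501 string are what the corruption above looks like in the dump.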
Maybe a bit off-topic, but just a suggestion:
Judging by the IP range, you are using Amazon AWS [1][2], which is not a good platform to run Proxmox on: you are trying to run a hypervisor (Proxmox/KVM) on virtualized hardware (EC2/Xen), so if CPU load on the host is high, your virtualized network cards...
Like I wrote: we only got it working with IPs from the same /24 subnet. Using a /16 with corosync 2 did not work for us, but maybe things have changed with corosync 3.
Hi,
we have a similar setup up and running: ring0 with a private IP over eth0, ring1 with a public IP over eth1. We only got it working with the public IPs being in the same /24 subnet, so it depends on which public IPs you have assigned.
So e.g.
103.43.75.180
103.43.75.190
103.43.75.232
works for...
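For reference, the relevant part of /etc/pve/corosync.conf for such a two-ring setup looks roughly like this with corosync 2 (node names and the private addresses are just placeholders, the public one is the example range from above):
totem {
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 103.43.75.0
  }
}
nodelist {
  node {
    name: node01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1
    ring1_addr: 103.43.75.180
  }
}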
That did the trick, thanks!
So the solution for now is:
add "deb http://download.proxmox.com/debian/pve stretch pvetest" to your /etc/apt/sources.list
apt-get update ; apt-get install lxc-pve=3.0.2+pve1-1
shutdown and restart all containers (a loop sketch follows below)
remove pvetest from sources.list again ;)
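For the shutdown/restart step, a rough sketch that loops over all containers on a node with the standard pct tool (the awk just skips the header line of pct list; adapt to taste):
pct list | awk 'NR>1 {print $1}' | while read ctid; do pct stop "$ctid" && pct start "$ctid"; done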
Thanks again :)
Not sure how this could help - I am running 200+ containers on this cluster. All of their consoles stopped working after updating to pve-manager 5.2-7.
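If it helps to compare, the installed versions of the relevant packages can be listed with the standard tool (the grep pattern is just a filter):
pveversion -v | grep -E 'pve-manager|lxc'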
Here you go:
cat /etc/pve/nodes/pmx02/lxc/10271.conf
arch: amd64
cores: 2
hostname: dd-apollo13
memory: 4096
nameserver: 8.8.8.8 9.9.9.9
net0...
Which is what he wrote: Host console (">_Shell") works fine, container console does not.
Pressing Enter in the container console does not solve it. Like I wrote: this worked in 5.2-6 and stopped working in 5.2-7.
I had the same issue today; the pointer for me was in https://forum.proxmox.com/threads/corosync-alerts-and-errors-unicast.43101/#post-207172
I had moved a node from an old cluster to a new one, and it turned out that until I removed the node entry from the old cluster, the cluster kept...
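For completeness: the usual way to drop such a stale entry is pvecm, run on a remaining member of the old cluster (<nodename> being whatever the moved node was called there):
pvecm delnode <nodename>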
Hi Fabian,
that sounds exactly like what I am seeing: corosync causing high load and pve-ha-lrm being stuck. I will try the new packages from pvetest and see if they fix it for me. Any further gotchas/tips for the update?
Holger
Hi,
I have been running a cluster of 5 nodes with Proxmox 5.1 for months, and yesterday all nodes suddenly stopped "seeing" each other. I find these error logs in `dmesg -T`, repeated several times on all servers at around the same time.
[Sun Apr 29 15:22:56 2018] INFO: task pvesr:19470...
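For context, quorum and ring state can be checked on each node with the standard Proxmox/corosync tools (nothing cluster-specific assumed):
pvecm status
corosync-cfgtool -s
systemctl status corosync pve-cluster pve-ha-lrm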