Hello,
An incident occurs from time to time on a Proxmox node. We get the following logs for several hours (daemon.log):
Dec 3 19:06:18 ns3058 systemd[1]: Started Proxmox VE replication runner.
Dec 3 19:07:02 ns3058 systemd[1]: Starting Proxmox VE replication runner...
Dec 3 19:07:26...
The systemd-hostnamed issue is fixed by disabling AppArmor on this container:
lxc.apparmor.profile: unconfined
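For reference, this is roughly how I applied it (101 is just an example container ID; the raw lxc key goes at the end of the container's config on the Proxmox host, and the container needs a restart):
# echo 'lxc.apparmor.profile: unconfined' >> /etc/pve/lxc/101.conf
# pct stop 101 && pct start 101
Of course, running unconfined weakens isolation, so I see it more as a workaround than a real fix.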
I can now use hostnamectl to manually set the hostname. But the same problem remains on buster and stretch containers.
I cannot reproduce this problem on pure LXC 3.
Hello,
I would like to change the hostname of a Debian buster container to a long host name.
But the change does not seem to be permanent.
I changed the host name in the Proxmox UI (short name to long name), and the long host name does appear in /etc/pve/nodes/NODE/lxc/CTN.conf.
But after a container...
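(For what it's worth, the same change can also be made from the CLI; CTN stands for the container ID and the long name below is just an example:)
# pct set CTN --hostname ct1.mydomain.example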
Hello,
I have renamed a Proxmox node (/etc/hosts and /etc/hostname changed). The node still appears in the cluster, but:
"proxy loop detected (500)" appears in the web UI for this node;
containers are not shown with pct list (.conf files are in the right folder...
As I mentioned, the node came back into the cluster as soon as I started ssmping, as if an initial exchange of multicast packets had been necessary.
I don't know the reason, but it is now stable.
Right, the cluster is on an OVH vRack.
It seems that the initialization of multicast was the problem. I'll ask OVH whether their multicast is stable.
Thank you for your responses!
You're right, omping has a Debian package; I didn't see that.
IGMP snooping is handled at the switch level by my hosting provider (OVH), so I have no control over it. Maybe a bug on OVH's side?
(I preferred to use ssmping, because I don't want to install git, gcc, make... on the node.)
ssmping shows correct multicast response between nodes.
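For anyone wanting to run the same check, the test was roughly as follows (node names are examples; asmping exercises ASM multicast, which is what corosync uses):
node2# ssmpingd
node1# asmping 224.0.2.1 node2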
In fact, as soon as I started ssmping, the node came back into the cluster. I don't know why, but it seems stable.
Thanks for the clue!
Hello,
I have a cluster whose nodes run version 5.3.1.
I am trying to add a new node running version 5.4.1.
The new node stays in the cluster for only a few minutes, then leaves it for an unknown reason.
On the other nodes, 'pvecm nodes' does not show the new node, but it appears in /etc/pve/corosync.conf...
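For what it's worth, here is what I am checking on the new node while it drops out (nothing exotic, just the standard tools):
# pvecm status
# journalctl -u corosync -u pve-cluster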
I think the problem is due to a full thin pool (the metadata seems to be full, Meta% at 99.91, though I do not know why):
# lvs
LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
data pve twi-cotzM- 1007.17g             12.93  99.91
Does anyone know what I can do?
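One idea I am considering, assuming it is really the metadata that is full (Meta% at 99.91 above) and that the VG still has free space, is to grow the pool metadata, something like:
# vgs pve
# lvextend --poolmetadatasize +1G pve/data
But I would rather have confirmation before touching anything.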
Hello,
I want to remove an old container, but:
# pct start 514
Job for pve-container@514.service failed because the control process exited with error code.
See "systemctl status pve-container@514.service" and "journalctl -xe" for details.
command 'systemctl start pve-container@514' failed...
The advantage is that ephemeral containers use an overlay system, so they are much lighter than conventional containers. It would be nice...
However, clone and destroy do the trick.
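Concretely, the workaround looks like this (the IDs are just examples, with 500 being a base container kept as a model):
# pct clone 500 515
... use the temporary container 515 ...
# pct destroy 515
Not as light as a true overlay-based ephemeral container, but it does the job.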