Proxmox VE 5 cluster: some strange log messages

juniper

Proxmox VE 5, 3-node cluster, up and running.

One test container (CentOS 7).

During migration I found some strange messages in the logs:

Jul 13 08:57:00 *****srv1 systemd[1]: Starting Proxmox VE replication runner...
Jul 13 08:57:00 *****srv1 pvedaemon[1444]: unable to get PID for CT 100 (not running?)
Jul 13 08:57:00 *****srv1 systemd[1]: lxc@100.service: Main process exited, code=exited, status=1/FAILURE
Jul 13 08:57:00 ******srv1 systemd[1]: lxc@100.service: Unit entered failed state.
Jul 13 08:57:00 ******srv1 systemd[1]: lxc@100.service: Failed with result 'exit-code'.

Jul 13 08:57:00 ******srv1 systemd[1]: Started Proxmox VE replication runner.
Jul 13 08:57:07 ******srv1 pve-ha-crm[1448]: service 'ct:100': state changed from 'migrate' to 'started' (node = *****srv2)

On the server receiving the migration:

Jul 13 08:57:09 *****srv2 pve-ha-lrm[1422]: successfully acquired lock 'ha_agent_timlabsrv2_lock'
Jul 13 08:57:09 *****srv2 pve-ha-lrm[1422]: watchdog active
Jul 13 08:57:09 *****srv2 pve-ha-lrm[1422]: status change wait_for_agent_lock => active
Jul 13 08:57:09 *****srv2 pve-ha-lrm[3779]: starting service ct:100
Jul 13 08:57:09 *****srv2 pve-ha-lrm[3780]: starting CT 100: UPID:timlabsrv2:00000EC4:0001DB14:596719C5:vzstart:100:root@pam:
Jul 13 08:57:09 *****srv2 systemd[1]: Starting LXC Container: 100...
Jul 13 08:57:09 *****srv2 systemd-udevd[3815]: Could not generate persistent MAC address for veth9ELIQB: No such file or directory
Jul 13 08:57:10 *****srv2 systemd[1]: Started LXC Container: 100.
Jul 13 08:57:10 ******srv2 pve-ha-lrm[3779]: service status ct:100 started

But everything seems to be working fine...
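To double-check that a migration really left the container healthy on the target node, something like this (CT 100 as in the logs above) should do:
Code:
# on the node that now hosts the container
pct status 100       # should report: status: running
ha-manager status    # HA view: ct:100 should show as 'started' on the new node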
 
Have you figured this one out? I am running into the same problem. Take a look at my logs:

Nov 09 03:06:07 fender systemd-udevd[3879]: Could not generate persistent MAC address for vethBFAGET: No such file or directory
Nov 09 03:06:07 fender kernel: IPv6: ADDRCONF(NETDEV_UP): veth100i0: link is not ready

Nov 09 03:06:07 fender lxc-start[3861]: lxc-start: 100: lxccontainer.c: wait_on_daemonized_start: 760 Received container state "STOPPING" instead of "RU
Nov 09 03:06:07 fender lxc-start[3861]: lxc-start: 100: tools/lxc_start.c: main: 368 The container failed to start.
Nov 09 03:06:07 fender lxc-start[3861]: lxc-start: 100: tools/lxc_start.c: main: 370 To get more details, run the container in foreground mode.
Nov 09 03:06:07 fender lxc-start[3861]: lxc-start: 100: tools/lxc_start.c: main: 372 Additional information can be obtained by setting the --logfile and
Nov 09 03:06:07 fender systemd[1]: pve-container@100.service: Control process exited, code=exited status=1
Nov 09 03:06:07 fender pvedaemon[15634]: unable to get PID for CT 100 (not running?)
Nov 09 03:06:07 fender systemd[1]: pve-container@100.service: Killing process 3863 (lxc-start) with signal SIGKILL.
Nov 09 03:06:07 fender systemd[1]: pve-container@100.service: Killing process 3909 (sh) with signal SIGKILL.
Nov 09 03:06:08 fender systemd[1]: Failed to start PVE LXC Container: 100.
-- Subject: Unit pve-container@100.service has failed
 
The persistent MAC address message is just due to odd systemd-networkd/udevd defaults (https://github.com/systemd/systemd/issues/3374). It's not harmful and can be ignored.
In most cases it's probably fine to disable this entirely:
Code:
# /etc/systemd/network/99-default.link
[Link]
MACAddressPolicy=none
followed by a `systemctl restart systemd-udevd`

Udev sees a new network device and decides it wants to generate a persistent MAC address for it. But these are temporary virtual devices which are about to be moved into the container's namespace, so there's no point in doing anything with them, especially since they're usually gone by the time udev starts acting.
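You can actually watch these short-lived veth devices appear and vanish on the host during a container start; a purely illustrative way to see it:
Code:
# host side: the veth pair shows up briefly, then one end moves into the container's netns
watch -n 0.2 'ip -o link show type veth'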

Edit:
As for the IPv6 message... my general advice is to set net.ipv6.conf.default.disable_ipv6=1 and only enable IPv6 explicitly on interfaces where you want to use it (the same goes for .autoconf and .accept_ra). The container will usually enable/disable it on its own side anyway.
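A minimal sketch of that, assuming it's managed via a sysctl.d drop-in (the file name and the vmbr0 example are just placeholders):
Code:
# /etc/sysctl.d/99-ipv6-defaults.conf  (example file name)
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.accept_ra = 0
# then re-enable explicitly where IPv6 is actually wanted, e.g.:
# net.ipv6.conf.vmbr0.disable_ipv6 = 0
followed by `sysctl --system` to apply.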
 
Thanks for pointing that out, Wolfgang. It helps me understand that my problem isn't related to this thread then. I'm seeing extremely weird behavior while starting containers in the new PVE 5.1.

Basically, it fails when executing the script /usr/share/lxc/lxcnetaddbr while starting the container. I noticed this because I generated a log while running the container in foreground mode.
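For reference, a foreground run with verbose logging can look roughly like this (CT ID 4001 as in the log below; the log path is arbitrary):
Code:
# run the container in the foreground with full LXC logging
lxc-start -n 4001 -F --logfile /tmp/lxc-4001.log --logpriority TRACE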


This is the relevant part of the log, as I understand it:

lxc-start 4001 20171109091041.210 ERROR lxc_network - network.c:lxc_create_network_priv:2401 - Failed to create network device
lxc-start 4001 20171109091041.210 ERROR lxc_start - start.c:lxc_spawn:1185 - Failed to create the network.
lxc-start 4001 20171109091041.210 ERROR lxc_start - start.c:__lxc_start:1469 - Failed to spawn container "4001".
lxc-start 4001 20171109091041.210 TRACE lxc_start - start.c:lxc_serve_state_socket_pair:437 - Sent container state "STOPPING" to 5
lxc-start 4001 20171109091041.210 TRACE lxc_start - start.c:lxc_serve_state_clients:364 - set container state to STOPPING
lxc-start 4001 20171109091041.210 TRACE lxc_start - start.c:lxc_serve_state_clients:367 - no state clients registered
lxc-start 4001 20171109091041.210 TRACE lxc_start - start.c:lxc_serve_state_clients:364 - set container state to STOPPED
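When lxc_create_network_priv fails like this, it is usually worth sanity-checking the container's network config and the bridge it references; a rough check (the bridge name vmbr0 is only an assumption) could be:
Code:
# show the container's configured network devices
pct config 4001 | grep '^net'
# verify the referenced bridge exists and is up on the host
ip -d link show vmbr0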
 
