ERROR lxc_network - network.c:instantiate_veth:130 - Failed to create veth pair

MiniMiner

Member
Jan 20, 2018
Hello Proxmox Forum!

I've got a little problem with my Proxmox instance.
I can't say exactly when the problem arose, but for some time now I've been having trouble starting Proxmox LXC CTs.

First, I noticed the following (Screenshot 1): the node is not in a cluster, but it appears as offline. Proxmox and the CTs were reachable normally, except for one CT.
After some time, I found that this CT's monitor process (lxc monitor) seemed to hang during startup. After I killed that process and ran

service pvedaemon restart
service pveproxy restart
service pvestatd restart

everything was displayed normally again.
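A hung monitor process can be located and killed roughly like this (a sketch; the bracket trick in the grep pattern is just there so grep doesn't match its own process line):

```shell
# Sketch: find a hung "lxc monitor" process for a container.
# The [l] bracket keeps grep from matching the grep command itself.
ps aux | grep '[l]xc monitor'

# Then kill it by the PID from the ps output:
# kill <pid>       # try SIGTERM first
# kill -9 <pid>    # only if it refuses to terminate
```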
However, I was still unable to start this CT. When I tried to start it with the logfile parameter, I got the following messages:

root@pve:~# cat /log.txt
lxc-start 191 20180120144719.581 ERROR lxc_network - network.c:instantiate_veth:130 - Failed to create veth pair "veth191i0" and "vethMJL64R": File exists
lxc-start 191 20180120144719.581 ERROR lxc_network - network.c:lxc_create_network_priv:2401 - Failed to create network device
lxc-start 191 20180120144719.581 ERROR lxc_start - start.c:lxc_spawn:1185 - Failed to create the network.
lxc-start 191 20180120144719.581 ERROR lxc_start - start.c:__lxc_start:1469 - Failed to spawn container "191".
lxc-start 191 20180120144719.581 ERROR lxc_container - lxccontainer.c:wait_on_daemonized_start:760 - Received container state "STOPPING" instead of "RUNNING"
lxc-start 191 20180120144719.581 ERROR lxc_start_ui - tools/lxc_start.c:main:368 - The container failed to start.
lxc-start 191 20180120144719.581 ERROR lxc_start_ui - tools/lxc_start.c:main:370 - To get more details, run the container in foreground mode.
lxc-start 191 20180120144719.581 ERROR lxc_start_ui - tools/lxc_start.c:main:372 - Additional information can be obtained by setting the --logfile and --logpriority options.
lxc-start 191 20180120144858.222 ERROR lxc_cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1335 - Path "/sys/fs/cgroup/cpuset//lxc/191" already existed.
lxc-start 191 20180120144858.222 ERROR lxc_cgfsng - cgroups/cgfsng.c:cgfsng_create:1431 - Failed to create "/sys/fs/cgroup/cpuset//lxc/191"
lxc-start 191 20180120144858.222 ERROR lxc_cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1335 - Path "/sys/fs/cgroup/cpuset//lxc/191-1" already existed.
lxc-start 191 20180120144858.222 ERROR lxc_cgfsng - cgroups/cgfsng.c:cgfsng_create:1431 - Failed to create "/sys/fs/cgroup/cpuset//lxc/191-1"
lxc-start 191 20180120144858.222 ERROR lxc_cgfsng - cgroups/cgfsng.c:create_path_for_hierarchy:1335 - Path "/sys/fs/cgroup/cpuset//lxc/191-2" already existed.
lxc-start 191 20180120144858.222 ERROR lxc_cgfsng - cgroups/cgfsng.c:cgfsng_create:1431 - Failed to create "/sys/fs/cgroup/cpuset//lxc/191-2"
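
The "File exists" and "already existed" errors point at leftovers from the killed monitor process: a stale host-side veth interface and stale cgroup directories for CT 191. A sketch of cleaning those up (the interface name is taken from the log above; check before deleting anything, and rmdir only removes empty directories, so it is safe to attempt):

```shell
# Sketch, not a verified fix: remove leftovers from the killed monitor.

# Is the host-side veth for CT 191 still around? If so, delete it.
ip link show veth191i0 && ip link delete veth191i0

# Remove stale (empty) cpuset cgroup directories for CT 191.
for d in /sys/fs/cgroup/cpuset/lxc/191*; do
    [ -d "$d" ] && rmdir "$d"
done
```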

That's why I tried creating a new CT and restoring the data into it from a backup, without success.

My current status is the following:

I can no longer start any CT, because each time this message

Job for pve-container@257.service failed because the control process exited with error code.
See "systemctl status pve-container@257.service" and "journalctl -xe" for details.
TASK ERROR: command 'systemctl start pve-container@257' failed: exit code 1

appears. I didn't find anything on the internet, so I'm turning directly to the Proxmox community. Here's pveversion -v (if relevant):
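
That systemd message hides the real cause. A sketch for surfacing it, taken together with the hint in the log above about foreground mode (CT id 257 comes from the task error):

```shell
# Sketch: run the CT in the foreground with full LXC debug logging.
lxc-start -n 257 -F -l DEBUG -o /tmp/lxc-257.log

# Then inspect the log for the first real error:
grep -i error /tmp/lxc-257.log

# The journal for the failing unit can also help:
journalctl -u pve-container@257.service -n 50
```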

root@pve:~# pveversion
pve-manager/5.1-35/722cc488 (running kernel: 4.13.4-1-pve)
root@pve:~# pveversion -v
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
pve-kernel-4.13.4-1-pve: 4.13.4-25
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90

I would be very grateful for an answer here, because I have no idea what to do about this, and there are customers on the node who currently can't control their servers.
 

Attachments

  • Screenshot_74.png
After a bit more testing, it seems the problem only occurs with an up-to-date CentOS 7 system. Which OS are you using? Debian is working fine for me. Could be a CentOS-related problem.
 
Fixed it by switching to kernel 4.10.17-5:
root@pve-xxx-yy:~# aptitude install pve-kernel-4.10.17-5-pve
root@pve-xxx-yy:~# vi /boot/grub/grub.cfg
root@pve-xxx-yy:~# reboot
...
root@pve-xxx-yy:~# uname -a
Linux pve-xxx-yy 4.10.17-5-pve #1 SMP PVE 4.10.17-25 (Mon, 6 Nov 2017 12:37:43 +0100) x86_64 GNU/Linux


More information:
https://forum.proxmox.com/threads/lxc-wont-start-stop-failed-to-create-veth-pair.39250/
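
Note that editing /boot/grub/grub.cfg by hand only holds until the next update-grub run regenerates it. A sketch of pinning the boot entry via /etc/default/grub instead (the exact menu-entry title below is an assumption; copy the real one from your generated grub.cfg first):

```shell
# Sketch: pin an older kernel persistently instead of editing grub.cfg.
# The entry title is an ASSUMPTION; take the real string from grub.cfg.
ENTRY='Advanced options for Proxmox Virtual Environment GNU/Linux>Proxmox Virtual Environment GNU/Linux, with Linux 4.10.17-5-pve'
sed -i "s|^GRUB_DEFAULT=.*|GRUB_DEFAULT=\"$ENTRY\"|" /etc/default/grub
update-grub   # regenerates grub.cfg from /etc/default/grub
```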
 
In my case, the container's /etc/pve/lxc/*.conf tried to attach an IP to a vmbr bridge that didn't exist: the host's /etc/network/interfaces had an error, so the bridge was never created.
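
A quick way to cross-check this (a sketch; the CT id and bridge name are examples, substitute your own):

```shell
# Sketch: verify the bridge referenced in the CT config exists on the host.
grep -i bridge /etc/pve/lxc/257.conf    # e.g. net0: ...,bridge=vmbr0,...
ip link show vmbr0                      # fails if the bridge is missing
grep -n 'vmbr' /etc/network/interfaces  # is it defined at all?
```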
 
