Operation not supported - Failed to create veth pair "veth110i0" and "vethnNG5Zy"

johntdyer

New Member
Aug 25, 2022
I have two Proxmox servers. One of them runs my LXC containers, but after some hardware swaps that host no longer seems to have veth support, which causes the containers to fail to start.

Example of a failed container start:

Code:
root@pve:~# pct start 110 --debug
netdev_configure_server_veth: 662 Operation not supported - Failed to create veth pair "veth110i0" and "vethnNG5Zy"
lxc_create_network_priv: 3427 Operation not supported - Failed to create network device
lxc_spawn: 1843 Failed to create the network
__lxc_start: 2074 Failed to spawn container "110"
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
INFO     start - ../src/lxc/start.c:lxc_init:884 - Container "110" is initialized
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1029 - The monitor process uses "lxc.monitor/110" as cgroup
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1137 - The container process uses "lxc/110/ns" as inner and "lxc/110" as limit cgroup
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWNS
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWPID
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWUTS
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWIPC
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWNET
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWCGROUP
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved mnt namespace via fd 18 and stashed path as mnt:/proc/8270/fd/18
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved pid namespace via fd 19 and stashed path as pid:/proc/8270/fd/19
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved uts namespace via fd 20 and stashed path as uts:/proc/8270/fd/20
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved ipc namespace via fd 21 and stashed path as ipc:/proc/8270/fd/21
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved net namespace via fd 22 and stashed path as net:/proc/8270/fd/22
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved cgroup namespace via fd 23 and stashed path as cgroup:/proc/8270/fd/23
WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_setup_limits_legacy:2767 - Invalid argument - Ignoring legacy cgroup limits on pure cgroup2 system
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_setup_limits:2863 - Limits for the unified cgroup hierarchy have been setup
ERROR    network - ../src/lxc/network.c:netdev_configure_server_veth:662 - Operation not supported - Failed to create veth pair "veth110i0" and "vethnNG5Zy"
ERROR    network - ../src/lxc/network.c:lxc_create_network_priv:3427 - Operation not supported - Failed to create network device
ERROR    start - ../src/lxc/start.c:lxc_spawn:1843 - Failed to create the network
DEBUG    network - ../src/lxc/network.c:lxc_delete_network:4173 - Deleted network devices
ERROR    start - ../src/lxc/start.c:__lxc_start:2074 - Failed to spawn container "110"
WARN     start - ../src/lxc/start.c:lxc_abort:1039 - No such process - Failed to send SIGKILL via pidfd 17 for process 8286
startup for container '110' failed
root@pve:~#
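For anyone hitting the same thing: the veth pair is created on the host side, so a quick sanity check (assuming root on the PVE host and the standard module paths) is whether the veth module is even available for the kernel that is actually running:

Code:
uname -r                        # kernel actually running
ls -d /lib/modules/$(uname -r)  # its module tree must exist on disk
modinfo veth                    # resolves the module; errors out if no matching tree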

110.conf (/etc/pve/lxc/110.conf):
Code:
## Zwave-JS-UI LXC
#### https://tteck.github.io/Proxmox/
#<a href='https://ko-fi.com/D1D7EP4GF'><img src='https://img.shields.io/badge/%E2%98%95-Buy me a coffee-red' /></a>
#lxc.mount.entry: /dev/bus/usb/002/017 dev/bus/usb/002/017 none bind,optional,create=file
# lxc.mount.entry: /dev/bus/usb/002/009 dev/bus/usb/002/009 none bind,optional,create=file
arch: amd64
cores: 2
features: nesting=1
hostname: zwave-js-ui
memory: 1024
net0: name=eth0,bridge=vmbr0,gw=192.168.120.1,hwaddr=62:39:5B:9B:4D:1D,ip=192.168.120.5/24,tag=120,type=veth
onboot: 1
ostype: debian
rootfs: storage:vm-110-disk-0,size=4G
startup: order=4
swap: 512
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id  dev/serial/by-id  none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0       dev/ttyUSB0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0       dev/ttyACM0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1       dev/ttyACM1       none bind,optional,create=file
lxc.mount.entry: /dev/zwave dev/zwave none bind,optional,create=file
root@pve:~#
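The net0 line is what makes pct request a veth pair at container start, so the config itself looks sane. To take Proxmox out of the picture entirely, one can try creating a throwaway veth pair directly on the host (the interface names below are arbitrary test names):

Code:
ip link add vethtest0 type veth peer name vethtest1  # on the broken host this should fail with "Operation not supported"
ip link del vethtest0                                # clean up, if it succeeded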


Working instance

Code:
root@proxmox:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-2
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.7-pve1
root@proxmox:~#

Code:
root@proxmox:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
#
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

root@proxmox:~#

Code:
root@proxmox:~#  modprobe veth
root@proxmox:~# lsmod | grep veth
veth                   32768  0
root@proxmox:~#
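As a cross-check on the working host, modinfo should resolve veth to a .ko file inside the running kernel's module tree (the exact path may vary by kernel build):

Code:
modinfo -n veth   # e.g. /lib/modules/5.15.83-1-pve/kernel/drivers/net/veth.ko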


Broken instance


Code:
root@pve:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-2
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.7-pve1
root@pve:~#


Code:
root@pve:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
#
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

root@pve:~#

Code:
root@pve:~#   modprobe veth
modprobe: FATAL: Module veth not found in directory /lib/modules/5.15.74-1-pve
root@pve:~# lsmod | grep veth
root@pve:~#
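That FATAL line is the real problem: the running kernel has no matching module tree on disk, so no module (veth included) can load. The mismatch is easy to confirm (standard Debian/PVE paths assumed):

Code:
uname -r          # 5.15.74-1-pve -- the kernel actually running
ls /lib/modules/  # module trees actually present; 5.15.74-1-pve is not among them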
 
Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1
pve-kernel-helper: 7.3-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.30-2-pve: 5.15.30-3
Code:
modprobe: FATAL: Module veth not found in directory /lib/modules/5.15.74-1-pve

You are running kernel 5.15.74-1-pve, but it is not installed anymore (for whatever reason).
So, I would reboot the PVE host to boot the 5.15.83-1-pve kernel and see if that fixes it.
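If the host keeps coming back up on the old kernel, explicitly pinning the newer one before the reboot might also help (this assumes a proxmox-boot-tool new enough to support pinning):

Code:
proxmox-boot-tool kernel pin 5.15.83-1-pve
reboot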
 
Yeah, it's super strange; I don't even see that kernel installed:
Code:
root@pve:~# apt list --installed | grep kernel

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

pve-kernel-5.15.30-2-pve/stable,now 5.15.30-3 amd64 [installed]
pve-kernel-5.15.39-4-pve/stable,now 5.15.39-4 amd64 [installed,automatic]
pve-kernel-5.15.83-1-pve/stable,now 5.15.83-1 amd64 [installed,automatic]
pve-kernel-5.15/stable,now 7.3-1 all [installed]
pve-kernel-helper/stable,now 7.3-1 all [installed]

I pinned the kernel and rebooted. I also ran update-grub, even though the pin said it was doing that as part of its process.
Code:
root@pve:~# pve-efiboot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
5.15.39-4-pve
5.15.83-1-pve

Pinned kernel:
5.15.83-1-pve
root@pve:~#

Then I rebooted:

Code:
root@pve:~# uname -av
Linux pve 5.15.74-1-pve #1 SMP PVE 5.15.74-1 (Mon, 14 Nov 2022 20:17:15 +0100) x86_64 GNU/Linux
root@pve:~#
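One more data point that might narrow it down: the kernel command line records which image was actually booted, which can show whether the machine boots from somewhere other than the /boot the tools are updating (just a guess at this point):

Code:
cat /proc/cmdline   # on GRUB-booted systems, BOOT_IMAGE= names the loaded kernel image
efibootmgr -v       # on UEFI hosts: which boot entry and partition are in use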

I also tried removing it, but dpkg said 5.15.74 wasn't installed:

Code:
root@pve:~# dpkg -P pve-kernel-5.15.74-1-pve
dpkg: warning: ignoring request to remove pve-kernel-5.15.74-1-pve which isn't installed
root@pve:~#

Also worth mentioning: 5.15.74 isn't even listed when I run update-grub.

Code:
root@pve:~# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.83-1-pve
Found initrd image: /boot/initrd.img-5.15.83-1-pve
Found linux image: /boot/vmlinuz-5.15.39-4-pve
Found initrd image: /boot/initrd.img-5.15.39-4-pve
Found linux image: /boot/vmlinuz-5.15.30-2-pve
Found initrd image: /boot/initrd.img-5.15.30-2-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
root@pve:~#
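Given that update-grub only sees 5.15.83, 5.15.39 and 5.15.30, yet 5.15.74 still comes up, it may be worth checking whether the bootloader actually in use reads from a different partition than the one these tools write to (speculative, but cheap to check):

Code:
proxmox-boot-tool status  # boot mode and which ESPs are configured/synced
findmnt /boot             # what is mounted at /boot
findmnt /boot/efi         # and at /boot/efi, if present
lsblk -o NAME,FSTYPE,MOUNTPOINT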
 
Absolutely weird.

What happens if you install (or reinstall) pve-kernel-5.15.74-1-pve?
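Something like this (untested, just the obvious apt call; it should restore /lib/modules/5.15.74-1-pve, so modprobe veth would work again even without a reboot):

Code:
apt install pve-kernel-5.15.74-1-pve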

Last time I tried, kernel pinning did not work for me. Do you have physical/local or IPMI access to this PVE host, so you can manually select the 5.15.83-1-pve kernel at boot?

But to be honest, I have no clue what is going on here, sorry. :oops:
 
