Network initialization issues after latest update

graemes

After the latest update:
a) LXC containers fail to start;
b) VMs start, but return an error.

I've applied the latest version of the packages (including the pve-container update to version 4.3-5).
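For anyone comparing setups, the quickest way to see exactly what the update pulled in (standard apt/pveversion tooling, nothing specific to this bug):

pveversion -v | grep -E 'pve-container|lxc-pve|qemu-server|ifupdown2|openvswitch'
apt-cache policy pve-container
grep ' upgrade ' /var/log/dpkg.log | tail -n 20   # most recently upgraded packages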

Container error log:

root@server:~# pct start 127 --debug
run_buffer: 321 Script exited with status 255
lxc_create_network_priv: 3427 No such device - Failed to create network device
lxc_spawn: 1843 Failed to create the network
__lxc_start: 2074 Failed to spawn container "127"
m/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO conf - ../src/lxc/conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "127", config section "lxc"
DEBUG conf - ../src/lxc/conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 127 lxc pre-start produced output: unknown ID 'almalinux' in /etc/os-release file, trying fallback detection
DEBUG seccomp - ../src/lxc/seccomp.c:parse_config_v2:656 - Host native arch is [3221225534]
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount # comment this to allow umount -f; not recommended"
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 1"
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327681:errno] arch[0]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741827]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741886]
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 1"
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327681:errno] arch[0]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741827]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741886]
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 1"
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327681:errno] arch[0]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741827]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741886]
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 1"
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327681:errno] arch[0]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741827]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741886]
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "ioctl errno 1 [1,0x9400,SCMP_CMP_MASKED_EQ,0xff00]"
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[16:ioctl] action[327681:errno] arch[0]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741827]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741886]
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "keyctl errno 38"
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[250:keyctl] action[327718:errno] arch[0]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741827]
INFO seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741886]
INFO seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
INFO start - ../src/lxc/start.c:lxc_init:884 - Container "127" is initialized
INFO cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1029 - The monitor process uses "lxc.monitor/127" as cgroup
DEBUG storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
DEBUG storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
INFO cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1137 - The container process uses "lxc/127/ns" as inner and "lxc/127" as limit cgroup
INFO start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWUSER
INFO start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWNS
INFO start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWPID
INFO start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWUTS
INFO start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWIPC
INFO start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWCGROUP
DEBUG start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved user namespace via fd 17 and stashed path as user:/proc/44574/fd/17
DEBUG start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved mnt namespace via fd 18 and stashed path as mnt:/proc/44574/fd/18
DEBUG start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved pid namespace via fd 19 and stashed path as pid:/proc/44574/fd/19
DEBUG start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved uts namespace via fd 20 and stashed path as uts:/proc/44574/fd/20
DEBUG start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved ipc namespace via fd 21 and stashed path as ipc:/proc/44574/fd/21
DEBUG start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved cgroup namespace via fd 22 and stashed path as cgroup:/proc/44574/fd/22
DEBUG conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3520 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3520 - The binary "/usr/bin/newgidmap" does have the setuid bit set
DEBUG conf - ../src/lxc/conf.c:lxc_map_ids:3605 - Functional newuidmap and newgidmap binary found
INFO cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_setup_limits:2863 - Limits for the unified cgroup hierarchy have been setup
DEBUG conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3520 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3520 - The binary "/usr/bin/newgidmap" does have the setuid bit set
INFO conf - ../src/lxc/conf.c:lxc_map_ids:3603 - Caller maps host root. Writing mapping directly
NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
INFO start - ../src/lxc/start.c:do_start:1107 - Unshared CLONE_NEWNET
NOTICE utils - ../src/lxc/utils.c:lxc_drop_groups:1368 - Dropped supplimentary groups
NOTICE utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1344 - Switched to gid 0
NOTICE utils - ../src/lxc/utils.c:lxc_switch_uid_gid:1353 - Switched to uid 0
DEBUG start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved net namespace via fd 5 and stashed path as net:/proc/44574/fd/5
INFO conf - ../src/lxc/conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/lxcnetaddbr" for container "127", config section "net"
DEBUG conf - ../src/lxc/conf.c:run_buffer:310 - Script exec /usr/share/lxc/lxcnetaddbr 127 net up veth veth127i0 produced output: RTNETLINK answers: Operation not supported
DEBUG conf - ../src/lxc/conf.c:run_buffer:310 - Script exec /usr/share/lxc/lxcnetaddbr 127 net up veth veth127i0 produced output: command '/sbin/bridge fdb append 86:F9:16:98:3B:04 dev veth127i0 master static' failed: exit code 255
ERROR conf - ../src/lxc/conf.c:run_buffer:321 - Script exited with status 255
ERROR network - ../src/lxc/network.c:lxc_create_network_priv:3427 - No such device - Failed to create network device
ERROR start - ../src/lxc/start.c:lxc_spawn:1843 - Failed to create the network
DEBUG network - ../src/lxc/network.c:lxc_delete_network:4173 - Deleted network devices
ERROR start - ../src/lxc/start.c:__lxc_start:2074 - Failed to spawn container "127"
WARN start - ../src/lxc/start.c:lxc_abort:1039 - No such process - Failed to send SIGKILL via pidfd 16 for process 44597
startup for container '127' failed

VM error message:

root@server:~# qm start 100
generating cloud-init ISO
RTNETLINK answers: Operation not supported
command '/sbin/bridge fdb append 3A:B3:14:75:27:F4 dev tap100i0 master static' failed: exit code 255
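The failing step is the same in both cases: the 'bridge fdb append ... master static' call. To confirm it by hand you can re-run the exact command from the log while the guest is up (the MAC and tap name are the ones from my output above; substitute your own):

/sbin/bridge fdb append 3A:B3:14:75:27:F4 dev tap100i0 master static

As far as I understand, on a port plugged into an OVS bridge this returns 'RTNETLINK answers: Operation not supported' because 'master' targets the kernel bridge FDB, which an OVS bridge doesn't provide.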

Installed packages:

root@cryptos:~# pveversion --verbose
proxmox-ve: 7.2-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.2-14 (running version: 7.2-14/65898fbc)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 17.2.5-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-7
libpve-guest-common-perl: 4.2-2
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.2
pve-cluster: 7.2-3
pve-container: 4.3-5
pve-docs: 7.2-3
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.1.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-10
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

As per https://forum.proxmox.com/threads/a...i0-master-static-failed-exit-code-255.118222/ this seems to be related to openvswitch (which I am also using).
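In case it helps with triage, checking whether a node actually uses OVS bridges is quick (vmbr0 is just the usual default bridge name here):

ovs-vsctl show                                   # lists OVS bridges and their ports
grep -B1 -A3 'ovs_type' /etc/network/interfaces  # OVS stanzas use ovs_type/ovs_ports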

All workarounds gratefully accepted :-)
 
I second this; I'm seeing exactly the same behaviour after applying the latest updates on a 5-node cluster. I'm using openvswitch as well.
 
Okay, good to see I'm not the only one. So it's better not to restart containers until this is fixed. Really bad issue. Luckily I have a second host that I haven't converted to OpenVSwitch yet, so at least I could start the container there. But it's not an ideal situation.
 
@graemes @sseidel
A workaround that works: enable the 'firewall' option on the CT's network interface (sketch below).
https://forum.proxmox.com/threads/a...code-255-when-ovs-is-used.118222/#post-512104
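For anyone who wants the change spelled out, it's only the firewall flag on the container's network line. A minimal sketch; the CT id and hwaddr are taken from the log earlier in this thread and the other values are examples, so keep whatever is already on your net0 line:

# /etc/pve/lxc/127.conf -- add firewall=1 to the existing net0 entry
net0: name=eth0,bridge=vmbr0,hwaddr=86:F9:16:98:3B:04,ip=dhcp,type=veth,firewall=1

The same checkbox is on the CT's Network tab in the GUI. Presumably this helps because, with the firewall enabled, Proxmox inserts a regular Linux fwbr bridge between the veth and the OVS bridge, so the 'bridge fdb append ... master' call has a kernel bridge to talk to.
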
Another option is to downgrade packages, but I don't know for sure which ones are causing the issue (i.e. whether downgrading the pve-container package alone is enough to solve it).
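If someone does want to try the downgrade route, the usual apt pattern applies; the version string below is only an example of the syntax, not a verified known-good build:

apt-cache policy pve-container    # shows the installed and available versions
apt install pve-container=4.3-4   # example syntax -- pick a version apt actually lists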
 
After upgrading from Proxmox 6 to 7, my LXCs would not start up. Installing ifupdown2 seemed to change the error on starting an LXC to the "Failed to create network device" error. Enabling the firewall did the trick and they start up now, but when I restart the Proxmox server I still get "RTNETLINK answers: Operation not supported", and a "bridge fdb append" command fails for a few of the VMs. However, the LXCs do start and appear to be functioning normally. Can/should I address this error?
 
