[SOLVED] LXC's failing to start (create new LXC fails as well)

jimi

Hello, I've been running a server with some LXCs for a while now, and seemingly without any changes (hardware or updates), LXCs are failing to start as of today after a reboot. I also tried to create a new LXC and that failed too.

Code:
lxc_spawn: 1734 Operation not permitted - Failed to clone a new set of namespaces
__lxc_start: 2074 Failed to spawn container "123"
TASK ERROR: startup for container '123' failed

lxc-start 123 20221124235616.579 ERROR    start - ../src/lxc/start.c:lxc_spawn:1734 - Operation not permitted - Failed to clone a new set of namespaces
lxc-start 123 20221124235616.580 ERROR    start - ../src/lxc/start.c:__lxc_start:2074 - Failed to spawn container "123"
lxc-start 123 20221124235616.701 ERROR    conf - ../src/lxc/conf.c:userns_exec_1:5052 - Failed to clone process in new user namespace
lxc-start 123 20221124235617.506 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 123 20221124235617.506 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options

_______________________

Code:
root@host:~# systemctl status pve-container@123.service
● pve-container@123.service - PVE LXC Container: 123
     Loaded: loaded (/lib/systemd/system/pve-container@.service; static)
     Active: failed (Result: exit-code) since Thu 2022-11-24 16:25:07 PST; 27s ago
       Docs: man:lxc-start
             man:lxc
             man:pct
    Process: 19947 ExecStart=/usr/bin/lxc-start -F -n 123 (code=exited, status=1/FAILURE)
   Main PID: 19947 (code=exited, status=1/FAILURE)
        CPU: 381ms

Nov 24 16:25:06 host systemd[1]: Started PVE LXC Container: 123.
Nov 24 16:25:07 host systemd[1]: pve-container@123.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 16:25:07 host systemd[1]: pve-container@123.service: Failed with result 'exit-code'.

_______________________

And here are the different (but similar) messages from trying to create a new LXC:

Code:
Formatting '/mnt/HOMEPOOL/lxc/images/142/vm-142-disk-0.raw', fmt=raw size=8589934592 preallocation=off
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 3220745f-952c-4450-b285-878023b67b1a
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
extracting archive '/mnt/HOMEPOOL/iso/template/cache/debian-11-standard_11.3-1_amd64.tar.zst'
../src/lxc/cmd/lxc_usernsexec.c: 407: main - Operation not permitted - Failed to unshare mount and user namespace
../src/lxc/cmd/lxc_usernsexec.c: 452: main - Inappropriate ioctl for device - Failed to read from pipe file descriptor 3
TASK ERROR: unable to create CT 142 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --zstd --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/142/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 1

_______________________

lxcfs is running:

Code:
root@host:~# systemctl status lxcfs.service
● lxcfs.service - FUSE filesystem for LXC
     Loaded: loaded (/lib/systemd/system/lxcfs.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-11-24 14:44:29 PST; 1h 3min ago
       Docs: man:lxcfs(1)
   Main PID: 32121 (lxcfs)
      Tasks: 3 (limit: 76998)
     Memory: 728.0K
        CPU: 4ms
     CGroup: /system.slice/lxcfs.service
             └─32121 /usr/bin/lxcfs /var/lib/lxcfs

Nov 24 14:44:29 host lxcfs[32121]: - proc_diskstats
Nov 24 14:44:29 host lxcfs[32121]: - proc_loadavg
Nov 24 14:44:29 host lxcfs[32121]: - proc_meminfo
Nov 24 14:44:29 host lxcfs[32121]: - proc_stat
Nov 24 14:44:29 host lxcfs[32121]: - proc_swaps
Nov 24 14:44:29 host lxcfs[32121]: - proc_uptime
Nov 24 14:44:29 host lxcfs[32121]: - shared_pidns
Nov 24 14:44:29 host lxcfs[32121]: - cpuview_daemon
Nov 24 14:44:29 host lxcfs[32121]: - loadavg_daemon
Nov 24 14:44:29 host lxcfs[32121]: - pidfds
_______________________

Thanks.
 
What's the output of pct start VMID --debug (replace VMID with the actual ID of a container)?

Also, what's your Proxmox VE version? pveversion -v
 
Hello, here you go, thanks.

Code:
root@host:~# pct start 123 --debug
lxc_spawn: 1734 Operation not permitted - Failed to clone a new set of namespaces
__lxc_start: 2074 Failed to spawn container "123"
../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type g nsid 0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     conf - ../src/lxc/conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "123", config section "lxc"
DEBUG    seccomp - ../src/lxc/seccomp.c:parse_config_v2:656 - Host native arch is [3221225534]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "ioctl errno 1 [1,0x9400,SCMP_CMP_MASKED_EQ,0xff00]"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[16:ioctl] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "keyctl errno 38"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[250:keyctl] action[327718:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
INFO     start - ../src/lxc/start.c:lxc_init:884 - Container "123" is initialized
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1029 - The monitor process uses "lxc.monitor/123" as cgroup
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1137 - The container process uses "lxc/123/ns" as inner and "lxc/123" as limit cgroup
ERROR    start - ../src/lxc/start.c:lxc_spawn:1734 - Operation not permitted - Failed to clone a new set of namespaces
DEBUG    network - ../src/lxc/network.c:lxc_delete_network:4173 - Deleted network devices
ERROR    start - ../src/lxc/start.c:__lxc_start:2074 - Failed to spawn container "123"
startup for container '123' failed

_______________________

Code:
root@host:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.64-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-5.15: 7.2-13
pve-kernel-helper: 7.2-13
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-6
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-2-pve: 5.15.39-2
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-3
libpve-guest-common-perl: 4.1-4
libpve-http-server-perl: 4.1-4
libpve-storage-perl: 7.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-3
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-6
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
 
So I guess I'm the only one experiencing this. I see 'pve-container' is one of the packages to be upgraded with 'apt update && apt full-upgrade'; wondering if I should just give that a go and hope it solves the problem.

Code:
The following NEW packages will be installed:
  proxmox-mail-forward pve-kernel-5.15.74-1-pve
The following packages will be upgraded:
  corosync grub-common grub-efi-amd64-bin grub-efi-ia32-bin grub-pc grub-pc-bin grub2-common krb5-locales libcfg7 libcmap4 libcorosync-common4 libcpg4
  libgssapi-krb5-2 libgssrpc4 libk5crypto3 libknet1 libkrad0 libkrb5-3 libkrb5support0 libnozzle1 libpixman-1-0 libpve-access-control libpve-cluster-api-perl
  libpve-cluster-perl libpve-common-perl libpve-guest-common-perl libpve-http-server-perl libpve-rs-perl libpve-storage-perl libquorum5 librados2-perl
  libtpms0 libvotequorum8 proxmox-archive-keyring proxmox-backup-client proxmox-backup-file-restore proxmox-ve proxmox-widget-toolkit pve-cluster
  pve-container pve-docs pve-firewall pve-ha-manager pve-i18n pve-kernel-5.15 pve-kernel-helper pve-manager pve-qemu-kvm qemu-server swtpm swtpm-libs
  swtpm-tools
52 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.

I don't think I mentioned it previously, but I did make some hardware changes, including adding some PCIe cards and shuffling them around to test some things; that is back to the original configuration now. Also, in UEFI I enabled a second NIC on the motherboard, which required that I update the network config files manually since the interface names changed.

I also don't think I mentioned that VMs work fine; it's only LXCs that won't start.

This is the first problem with Proxmox in nearly two years of use that couldn't be explained by my own error and/or resolved with a search here or on Google/DuckDuckGo; hoping to find a solution soon.

Thanks.
 
Glad you solved it!

FYI, your issue was a bit strange and I forgot to reply here at the time, but the following errors:
Operation not permitted - Failed to clone a new set of namespaces
Operation not permitted - Failed to unshare mount and user namespace
are odd, as getting EPERM on namespace creation here would be best explained by booting a kernel that has user namespaces disabled completely, which ours obviously don't. It also definitely isn't resource exhaustion, as that would produce a different error, and the limit on the number of namespaces is in the hundreds of thousands, not easily reached.
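
(For anyone wanting to rule the limits out themselves, they're readable via sysctl; a quick sketch, with the caveat that the defaults scale with available RAM:)

Code:
# per-user namespace creation limits; on typical hosts these are six-figure numbers
sysctl user.max_user_namespaces user.max_mnt_namespaces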

Sorry not to have a more enlightening answer here, but that would require examining the host much more closely, and now that an upgrade fixed it for you, however that might have happened, I think we should write it off as a fluke, at least if there aren't more reports.
 
Oh I have not upgraded yet, and the problem remains. I was just considering that path.

I will make some time soon to do so though, unless there's another option.

I was holding off upgrading for now as I don't want to make matters worse.

Thank you.
 
OK, update. I have run 'apt-get full-upgrade' without issue and rebooted.

Problem is unchanged.

Existing LXCs fail to start with errors as shown previously.

I attempted to create multiple new LXCs and they also failed with the errors shown previously. This time I also tried creating them with two different rootfs storage types:

local-zfs | type: zfspool
lxc | type: dir

Both failed.

But I tried again with 'Unprivileged container' unchecked, using both rootfs types, and in both cases creating the new LXC worked, and both containers start.

All of the existing LXCs that won't start are unprivileged containers.
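
For reference, the equivalent test from the CLI would look something like this (a sketch; the VMIDs are arbitrary and the template path is the one from my earlier task log):

Code:
pct create 150 /mnt/HOMEPOOL/iso/template/cache/debian-11-standard_11.3-1_amd64.tar.zst \
    --storage local-zfs --unprivileged 1   # fails: "Failed to unshare mount and user namespace"
pct create 151 /mnt/HOMEPOOL/iso/template/cache/debian-11-standard_11.3-1_amd64.tar.zst \
    --storage local-zfs --unprivileged 0   # privileged: creation and startup succeed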
 
Also, I have found a recent thread here where installing the 'binutils' package enabled LXCs to start again; I already have that package installed.

Lastly, some other similar threads ask for the LXC config, so here's that:
Code:
root@host:~# pct config 123
arch: amd64
cores: 4
features: nesting=1
hostname: pihole
memory: 2048
nameserver: 192.168.100.37
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.100.1,hwaddr=06:27:9D:23:83:1B,ip=192.168.100.37/24,type=veth
ostype: debian
rootfs: lxc:123/vm-123-disk-0.raw,size=15G
searchdomain: domain
swap: 2048
unprivileged: 1
 
Oh I have not upgraded yet, and the problem remains. I was just considering that path.
Ah ok, misread you.

Look, your error is way off: not having namespace support is a fundamental issue, and it means that your host is either running in a very odd environment (is PVE even running directly on bare metal?) or has some serious misconfiguration that pulled in a non-PVE kernel, or a self-compiled PVE kernel with a changed kconfig.

Also, I have found a recent thread here where installing the 'binutils' package enabled LXCs to start again; I already have that package installed.
That has nothing to do with your situation; whether binutils is available or not doesn't change whether one can create kernel mount/user namespaces.
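
A quick way to rule out the kernel build itself, for anyone following along, is to check the config of the running kernel (a sketch; on a stock PVE kernel all of these should be =y):

Code:
grep -E 'CONFIG_(USER|NET|PID|IPC|UTS)_NS=' /boot/config-$(uname -r)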
 
Hello, yes, it's been running on bare metal (an AORUS Master X570) for 18 months, with many updates/upgrades between then and now.

Code:
root@host:~# uname -a
Linux host 5.15.74-1-pve #1 SMP PVE 5.15.74-1 (Mon, 14 Nov 2022 20:17:15 +0100) x86_64 GNU/Linux

Code:
root@host:~# neofetch 
         .://:`              `://:.            root@host
       `hMMMMMMd/          /dMMMMMMh`          ----------- 
        `sMMMMMMMd:      :mMMMMMMMs`           OS: Proxmox VE 7.3-3 x86_64 
`-/+oo+/:`.yMMMMMMMh-  -hMMMMMMMy.`:/+oo+/-`   Host: X570 AORUS MASTER -CF 
`:oooooooo/`-hMMMMMMMyyMMMMMMMh-`/oooooooo:`   Kernel: 5.15.74-1-pve 
  `/oooooooo:`:mMMMMMMMMMMMMm:`:oooooooo/`     Uptime: 2 mins 
    ./ooooooo+- +NMMMMMMMMN+ -+ooooooo/.       Packages: 1022 (dpkg) 
      .+ooooooo+-`oNMMMMNo`-+ooooooo+.         Shell: bash 5.1.4 
        -+ooooooo/.`sMMs`./ooooooo+-           CPU: AMD Ryzen 7 5800X (16) @ 3.800GHz 
          :oooooooo/`..`/oooooooo:             GPU: AMD ATI Radeon HD 6450/7450/8450 / R5 230 OEM 
          :oooooooo/`..`/oooooooo:             GPU: NVIDIA GeForce RTX 3060 Ti 
        -+ooooooo/.`sMMs`./ooooooo+-           Memory: 2229MiB / 64231MiB 
      .+ooooooo+-`oNMMMMNo`-+ooooooo+.         Disk (/): 17G / 129G (14%) 
    ./ooooooo+- +NMMMMMMMMN+ -+ooooooo/.       Local IP: 192.168.100.19 
  `/oooooooo:`:mMMMMMMMMMMMMm:`:oooooooo/`     Locale: en_US.UTF-8 
`:oooooooo/`-hMMMMMMMyyMMMMMMMh-`/oooooooo:`
`-/+oo+/:`.yMMMMMMMh-  -hMMMMMMMy.`:/+oo+/-`                           
        `sMMMMMMMm:      :dMMMMMMMs`
       `hMMMMMMd/          /dMMMMMMh`
         `://:`              `://:`
 
Here is the journalctl output.

Code:
Dec 01 02:42:06 host pvedaemon[10590]: starting CT 123: UPID:host:0000295E:0000B4B5:638884FE:vzstart:123:root@pam:
Dec 01 02:42:06 host pvedaemon[4212]: <root@pam> starting task UPID:host:0000295E:0000B4B5:638884FE:vzstart:123:root@pam:
Dec 01 02:42:06 host systemd[1]: Created slice PVE LXC Container Slice.
Dec 01 02:42:06 host systemd[1]: Started PVE LXC Container: 123.
Dec 01 02:42:06 host kernel: loop0: detected capacity change from 0 to 31457280
Dec 01 02:42:06 host kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 01 02:42:06 host audit[10618]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-123_</var/lib/lxc>" pid=10618 comm="apparmor_parser"
Dec 01 02:42:06 host kernel: kauditd_printk_skb: 9 callbacks suppressed
Dec 01 02:42:06 host kernel: audit: type=1400 audit(1669891326.613:21): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-123_</var/lib/lxc>" pid=10618 comm="apparmor_parser"
Dec 01 02:42:06 host pvedaemon[10590]: startup for container '123' failed
Dec 01 02:42:06 host pvedaemon[4212]: <root@pam> end task UPID:host:0000295E:0000B4B5:638884FE:vzstart:123:root@pam: startup for container '123' failed
Dec 01 02:42:06 host audit[10620]: AVC apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-123_</var/lib/lxc>" pid=10620 comm="apparmor_parser"
Dec 01 02:42:06 host kernel: audit: type=1400 audit(1669891326.769:22): apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-123_</var/lib/lxc>" pid=10620 comm="apparmor_parser"
Dec 01 02:42:07 host systemd[1]: pve-container@123.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 02:42:07 host systemd[1]: pve-container@123.service: Failed with result 'exit-code'.
Dec 01 02:42:15 host pvestatd[4150]: modified cpu set for lxc/123: 0-3
Dec 01 02:42:15 host pvestatd[4150]: failed to open '/sys/fs/cgroup/lxc/123/cpuset.cpus' - Permission denied

Code:
root@host:~# file /sys/fs/cgroup/lxc/123/cpuset.cpus
/sys/fs/cgroup/lxc/123/cpuset.cpus: cannot open `/sys/fs/cgroup/lxc/123/cpuset.cpus' (No such file or directory)
root@host:~# ls -laFtrh /sys/fs/cgroup/lxc/123/
total 0
drwxr-xr-x 3 root root 0 Dec  1 02:42 ../
drwxr-xr-x 2 root root 0 Dec  1 02:48 ns/
drwxr-xr-x 3 root root 0 Dec  1 02:48 ./
-rw-r--r-- 1 root root 0 Dec  1 02:51 memory.pressure
-rw-r--r-- 1 root root 0 Dec  1 02:51 io.pressure
-r--r--r-- 1 root root 0 Dec  1 02:51 cpu.stat
-rw-r--r-- 1 root root 0 Dec  1 02:51 cpu.pressure
-rw-r--r-- 1 root root 0 Dec  1 02:51 cgroup.type
-rw-r--r-- 1 root root 0 Dec  1 02:51 cgroup.threads
-rw-r--r-- 1 root root 0 Dec  1 02:51 cgroup.subtree_control
-r--r--r-- 1 root root 0 Dec  1 02:51 cgroup.stat
-rw-r--r-- 1 root root 0 Dec  1 02:51 cgroup.procs
-rw-r--r-- 1 root root 0 Dec  1 02:51 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Dec  1 02:51 cgroup.max.depth
--w------- 1 root root 0 Dec  1 02:51 cgroup.kill
-rw-r--r-- 1 root root 0 Dec  1 02:51 cgroup.freeze
-r--r--r-- 1 root root 0 Dec  1 02:51 cgroup.events
-r--r--r-- 1 root root 0 Dec  1 02:51 cgroup.controllers
 
I can't reproduce the colored terminal output inside the <code> tags, but the three lines below marked with '>>>' indicate a problem. How can I best resolve it? Thanks.

Code:
root@host:~# lxc-checkconfig
LXC version 5.0.0
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-5.15.74-1-pve
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroups: enabled
Cgroup namespace: enabled

Cgroup v1 mount points:


Cgroup v2 mount points:
/sys/fs/cgroup

>>> Cgroup v1 systemd controller: missing
>>> Cgroup v1 freezer controller: missing
>>> Cgroup ns_cgroup: required
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, not loaded
Advanced netfilter: enabled, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, loaded
FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
 
'pveversion -v' after 'apt-get update && apt-get full-upgrade' and reboot.

Code:
root@host:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-6
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
 
The error message doesn't make sense for what it's trying to do unless you're running inside another container subsystem or something.

Do you have any additional software installed on your host which might have made unexpected changes?
Does `unshare -mnU` work as root?
Did you add any systemd snippets to the pve-container service? (What's the output of `systemctl cat pve-container@.service`)
After a failed `pct start 123`, can you try, as root in a shell: lxc-start -F -n 123 and see if this fails as well?
Did you make any unusual settings changes?
Did you add any additional kernel command line options? (`cat /proc/cmdline`)
What do you have in `/etc/sysctl.conf`, `/etc/sysctl.d/*`?
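
For comparison, on a healthy host the `unshare -mnU` above should simply succeed rather than return EPERM. A quick sanity check (a sketch):

Code:
# On a healthy host this prints 65534 (the overflow "nobody" UID, since the new
# user namespace has no UID mapping yet); "Operation not permitted" means
# namespace creation itself is broken.
unshare -mnU id -u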
 
One of the lxc developers shared this:

Code:
root@host:~# cat ./gftrace.sh
#!/usr/bin/bash
if [ "$#" -lt 1 ]; then
    echo "usage: $0 <kernel_function> [<kernel_function> ...]"
    echo "kernel_function - functions to trace (separated by spaces), all should be in #available_filter_functions list"
    exit 1
fi

KFUNCS="${@:1}"

for KFUNC in $KFUNCS
do
    # Check that kernel function is traceable
    if ! cat /sys/kernel/debug/tracing/available_filter_functions \
            | grep "\<$KFUNC\>" >/dev/null; then
        echo "There is no traceable kfunc \"$KFUNC\" may be you mean:"
        cat /sys/kernel/debug/tracing/available_filter_functions | grep $KFUNC
        exit 1
    fi
done

# Disable previous tracing
echo 0 > /sys/kernel/debug/tracing/tracing_on
echo nop > /sys/kernel/debug/tracing/current_tracer
echo 0 > /sys/kernel/debug/tracing/max_graph_depth
echo 0 > /sys/kernel/debug/tracing/events/enable

# Setup tracing all call graphs for a KFUNCS kernel functions
echo "$KFUNCS" > /sys/kernel/debug/tracing/set_graph_function

echo "Will graph trace:"
cat /sys/kernel/debug/tracing/set_graph_function

echo function_graph > /sys/kernel/debug/tracing/current_tracer

# Setup some useful tracing options:
echo funcgraph-tail > /sys/kernel/debug/tracing/trace_options 2>/dev/null
echo funcgraph-abstime > /sys/kernel/debug/tracing/trace_options
echo nofuncgraph-irqs > /sys/kernel/debug/tracing/trace_options

# Set max recursion to 5
echo 5 > /sys/kernel/debug/tracing/max_graph_depth

finish_trace() {
    echo 0 > /sys/kernel/debug/tracing/tracing_on
    cat /sys/kernel/debug/tracing/trace > trace
    echo "hint: cat ./trace | less"

    echo nop > /sys/kernel/debug/tracing/current_tracer
    echo 0 > /sys/kernel/debug/tracing/max_graph_depth
    if [ -f "/sys/kernel/debug/tracing/events/probe/enable" ]; then
        echo 0 > /sys/kernel/debug/tracing/events/enable
    fi
    exit 0
}

trap 'finish_trace' SIGINT

if [ -f "/sys/kernel/debug/tracing/events/probe/enable" ]; then
    # Enable probes
    echo 1 > /sys/kernel/debug/tracing/events/probe/enable
fi

# Enable ftrace
echo 1 > /sys/kernel/debug/tracing/tracing_on

echo "Enter something (or ctrl+c) to stop tracing"
read
finish_trace
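
Presumably it is run along these lines while reproducing the failure from a second shell (assumed invocation; the traced function matches the result below):

Code:
chmod +x gftrace.sh
./gftrace.sh ksys_unshare
# ...reproduce the failure in another shell (e.g. pct start 123 or unshare -mnU),
# then press Enter here to stop tracing and write ./trace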

And the result:

Code:
root@host:~# cat trace
# tracer: function_graph
#
#     TIME        CPU  DURATION                  FUNCTION CALLS
#      |          |     |   |                     |   |   |   |
 7819.496279 |    2)               |  ksys_unshare() {
 7819.496280 |    2)   0.120 us    |    unshare_fd();
 7819.496280 |    2)   0.070 us    |    unshare_userns();
 7819.496281 |    2)               |    unshare_nsproxy_namespaces() {
 7819.496281 |    2)               |      ns_capable() {
 7819.496281 |    2)               |        security_capable() {
 7819.496281 |    2)   0.070 us    |          cap_capable();
 7819.496281 |    2)   0.220 us    |          apparmor_capable();
 7819.496281 |    2)   0.500 us    |        } /* security_capable */
 7819.496281 |    2)   0.620 us    |      } /* ns_capable */
 7819.496281 |    2)               |      create_new_namespaces() {
 7819.496281 |    2)               |        kmem_cache_alloc() {
 7819.496281 |    2)   0.090 us    |          __cond_resched();
 7819.496282 |    2)   0.060 us    |          should_failslab();
 7819.496282 |    2)   0.070 us    |          rcu_read_unlock_strict();
 7819.496282 |    2)   0.060 us    |          rcu_read_unlock_strict();
 7819.496282 |    2)   0.070 us    |          obj_cgroup_charge();
 7819.496282 |    2)   0.140 us    |          rcu_read_unlock_strict();
 7819.496282 |    2)   0.130 us    |          mod_objcg_state();
 7819.496283 |    2)   0.060 us    |          rcu_read_unlock_strict();
 7819.496283 |    2)   1.320 us    |        } /* kmem_cache_alloc */
 7819.496283 |    2)               |        copy_mnt_ns() {
 7819.496283 |    2)   2.080 us    |          alloc_mnt_ns();
 7819.496285 |    2)   0.110 us    |          down_write();
 7819.496285 |    2) ! 148.721 us  |          copy_tree();
 7819.496435 |    2)   0.310 us    |          namespace_unlock();
 7819.496435 |    2)   0.080 us    |          mntput_no_expire();
 7819.496435 |    2)   0.090 us    |          mntput_no_expire();
 7819.496435 |    2) ! 152.721 us  |        } /* copy_mnt_ns */
 7819.496436 |    2)   0.060 us    |        copy_utsname();
 7819.496436 |    2)   0.090 us    |        copy_ipcs();
 7819.496436 |    2)   0.090 us    |        copy_pid_ns();
 7819.496436 |    2)   0.050 us    |        copy_cgroup_ns();
 7819.496436 |    2)   0.080 us    |        copy_net_ns();
 7819.496436 |    2)   0.090 us    |        copy_time_ns();
 7819.496437 |    2) ! 155.321 us  |      } /* create_new_namespaces */
 7819.496437 |    2) ! 156.141 us  |    } /* unshare_nsproxy_namespaces */
 7819.496437 |    2)               |    switch_task_namespaces() {
 7819.496437 |    2)               |      __cond_resched() {
 7819.496437 |    2)   0.060 us    |        rcu_all_qs();
 7819.496437 |    2)   0.180 us    |      } /* __cond_resched */
 7819.496437 |    2)   0.060 us    |      _raw_spin_lock();
 7819.496437 |    2)   0.390 us    |    } /* switch_task_namespaces */
 7819.496437 |    2)   0.050 us    |    _raw_spin_lock();
 7819.496437 |    2) ! 158.591 us  |  } /* ksys_unshare */
 
The error message doesn't make sense for what it's trying to do unless you're running inside another container subsystem or something.

Do you have any additional software installed on your host which might have made unexpected changes?
Does `unshare -mnU` work as root?
Did you add any systemd snippets to the pve-container service? (What's the output of `systemctl cat pve-container@.service`)
After a failed `pct start 123`, can you try, as root in a shell: lxc-start -F -n 123 and see if this fails as well?
Did you make any unusual settings changes?
Did you add any additional kernel command line options? (`cat /proc/cmdline`)
What do you have in `/etc/sysctl.conf`, `/etc/sysctl.d/*`?

Hello, I don't believe I have any additional software that could have made changes. No unusual settings changes either.

Code:
root@host:~# unshare -mnU
unshare: unshare failed: Operation not permitted

Code:
root@host:~# systemctl cat pve-container@.service
# /lib/systemd/system/pve-container@.service
# based on lxc@.service, but without an install section because
# starting and stopping should be initiated by PVE code, not
# systemd.
[Unit]
Description=PVE LXC Container: %i
DefaultDependencies=No
After=lxc.service
Wants=lxc.service
Documentation=man:lxc-start man:lxc man:pct

[Service]
Type=simple
Delegate=yes
KillMode=mixed
TimeoutStopSec=120s
ExecStart=/usr/bin/lxc-start -F -n %i
ExecStop=/usr/share/lxc/pve-container-stop-wrapper %i
# Environment=BOOTUP=serial
# Environment=CONSOLETYPE=serial
# Prevent container init from putting all its output into the journal
StandardOutput=null
StandardError=file:/run/pve/ct-%i.stderr

Code:
root@host:~# pct start 123
lxc_spawn: 1734 Operation not permitted - Failed to clone a new set of namespaces
__lxc_start: 2074 Failed to spawn container "123"
startup for container '123' failed
root@host:~# lxc-start -F -n 123
lxc-start: 123: ../src/lxc/start.c: lxc_spawn: 1734 Operation not permitted - Failed to clone a new set of namespaces
lxc-start: 123: ../src/lxc/start.c: __lxc_start: 2074 Failed to spawn container "123"
lxc-start: 123: ../src/lxc/conf.c: userns_exec_1: 5052 Failed to clone process in new user namespace
lxc-start: 123: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: 123: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

Code:
root@host:~# cat /proc/cmdline
initrd=\EFI\proxmox\5.15.74-1-pve\initrd.img-5.15.74-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on

Code:
root@host:~# cat /etc/sysctl.conf
#
# /etc/sysctl.conf - Configuration file for setting system variables
# See /etc/sysctl.d/ for additional system variables.
# See sysctl.conf (5) for information.
#

#kernel.domainname = example.com

# Uncomment the following to stop low-level messages on console
#kernel.printk = 3 4 1 3

###################################################################
# Functions previously found in netbase
#

# Uncomment the next two lines to enable Spoof protection (reverse-path filter)
# Turn on Source Address Verification in all interfaces to
# prevent some spoofing attacks
#net.ipv4.conf.default.rp_filter=1
#net.ipv4.conf.all.rp_filter=1

# Uncomment the next line to enable TCP/IP SYN cookies
# See http://lwn.net/Articles/277146/
# Note: This may impact IPv6 TCP sessions too
#net.ipv4.tcp_syncookies=1

# Uncomment the next line to enable packet forwarding for IPv4
#net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
#  Enabling this option disables Stateless Address Autoconfiguration
#  based on Router Advertisements for this host
#net.ipv6.conf.all.forwarding=1


###################################################################
# Additional settings - these settings can improve the network
# security of the host and prevent against some network attacks
# including spoofing attacks and man in the middle attacks through
# redirection. Some network environments, however, require that these
# settings are disabled so review and enable them as needed.
#
# Do not accept ICMP redirects (prevent MITM attacks)
#net.ipv4.conf.all.accept_redirects = 0
#net.ipv6.conf.all.accept_redirects = 0
# _or_
# Accept ICMP redirects only for gateways listed in our default
# gateway list (enabled by default)
# net.ipv4.conf.all.secure_redirects = 1
#
# Do not send ICMP redirects (we are not a router)
#net.ipv4.conf.all.send_redirects = 0
#
# Do not accept IP source route packets (we are not a router)
#net.ipv4.conf.all.accept_source_route = 0
#net.ipv6.conf.all.accept_source_route = 0
#
# Log Martian Packets
#net.ipv4.conf.all.log_martians = 1
#

###################################################################
# Magic system request Key
# 0=disable, 1=enable all, >1 bitmask of sysrq functions
# See https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html
# for what other values do
#kernel.sysrq=438

Code:
root@host:~# ls -laFtrh /etc/sysctl.d/
total 22K
-rw-r--r--   1 root root 639 May 31  2018 README.sysctl
lrwxrwxrwx   1 root root  14 Aug  7 06:25 99-sysctl.conf -> ../sysctl.conf
drwxr-xr-x   2 root root   4 Oct  1 09:47 ./
drwxr-xr-x 112 root root 217 Dec  1 00:21 ../
 
Code:
root@host:~# unshare -mnU
unshare: unshare failed: Operation not permitted
That's unexpected. Are you really logged in as root and running the PVE kernel? (uname -a)
Are there any special syslog/journal messages while running this command?
Also I wonder which namespace fails exactly - can you try `unshare -m`, `unshare -n` and `unshare -U` separately to see which (or if all) of the 3 namespace types are causing issues?
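
A one-liner to test the three separately could look like this (a sketch):

Code:
for ns in m n U; do unshare -$ns true && echo "-$ns ok" || echo "-$ns failed"; done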

And please provide some more info that could affect this, as root:
cat /proc/self/status
cat /proc/self/attr/current
cat /proc/self/mountinfo
 
That's unexpected. Are you really logged in as root and running the PVE kernel? (uname -a)
Are there any special syslog/journal messages while running this command?
Also I wonder which namespace fails exactly - can you try `unshare -m`, `unshare -n` and `unshare -U` separately to see which (or if all) of the 3 namespace types are causing issues?

And please provide some more info that could affect this, as root:
cat /proc/self/status
cat /proc/self/attr/current
cat /proc/self/mountinfo
Well, I'm a bit embarrassed, and I thank you for the help. The problem was that I had a recursive rpool snapshot, received into a non-root dataset, that was mounted at '/'. As soon as I unmounted it, changed canmount to noauto, and, for good measure, changed the mountpoint to the path where the remote snapshot lives, I tested 3 unprivileged LXCs and they all started fine.

I also reproduced the error by changing the mountpoint back to '/' and mounting it again; the same errors appeared on LXC startup.

Code:
root@host:~# zfs list -ro canmount,mountpoint,mounted,name
CANMOUNT  MOUNTPOINT                                                            MOUNTED  NAME
...
noauto    /                                                                     no       bigpool/pvebupz/20221113v2
...
...
on        /                                                                     yes      exos/rpoolbupz_all_20221121
...
...
on        /homepool/iso                                                         yes      homepool/iso
on        /homepool/lxc                                                         yes      homepool/lxc
on        /homepool/vm                                                          yes      homepool/vm
on        /rpool                                                                yes      rpool
on        /rpool/ROOT                                                           yes      rpool/ROOT
on        /                                                                     yes      rpool/ROOT/pve-1
...
...

When I read your last question, a lightbulb went on.

Thanks for your time; I hope my stumbling onto this at least helps someone else in the future. I had noticed in past send/receives of recursive rpool snapshots that the received datasets default to canmount=on with mountpoint=/, and I had manually fixed this before rebooting, but unfortunately not in this case.

What I mean is creating a recursive snapshot like this:
Code:
root@host:~# zfs snapshot -r rpool/ROOT/pve-1@20221202 && zfs send -Rwv rpool/ROOT/pve-1@20221202 | pv | zfs receive -Fuv exos/rpoolbupz_all_20221202

Results in a received dataset with the following attributes:
Code:
root@host:~# zfs list -ro canmount,mountpoint,mounted,name | head
CANMOUNT  MOUNTPOINT                                                            MOUNTED  NAME
on        /                                                                     no       exos/rpoolbupz_all_20221202

And without changing canmount to noauto or the mountpoint to something other than '/', it will mount there on reboot.
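
The fix, roughly, for anyone who lands here (a sketch using the dataset name from my example above; recent OpenZFS can also override the properties at receive time):

Code:
zfs unmount exos/rpoolbupz_all_20221202                # get it off '/' if mounted
zfs set canmount=noauto exos/rpoolbupz_all_20221202    # don't auto-mount at boot
zfs set mountpoint=/exos/rpoolbupz_all_20221202 exos/rpoolbupz_all_20221202
# or avoid the trap up front:
# ... | zfs receive -Fuv -o canmount=noauto exos/rpoolbupz_all_20221202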

I spent an hour or two in the LXC support IRC channel (#lxc @ libera.chat) on this, and they suggested I create a thread on their forum as well, so I will update that too. Sorry for the hassle.
 
Whoops. Glad this could be cleared up.
Hope you didn't lose important data in this mixup.
 
Well, I'm a bit embarrassed, and I thank you for the help. The problem was that I had a recursive rpool snapshot, received into a non-root dataset, that was mounted at '/'. ...
THANK YOU! I was backing up my pool before doing some destructive changes. I forgot about the mountpoints.
 