Error message when creating or restoring unprivileged LXC container on PVE 6.4

May 11, 2021
Hello!


I had to reinstall PVE 6.4 on my server due to a 6.3 -> 6.4 update mishap. However, I get the following error message when I try to create or restore an unprivileged LXC container:

Code:
extracting archive '/var/lib/vz/template/cache/devuan-3.0-standard_3.0_amd64.tar.gz'
cmd/lxc_usernsexec.c: 417: main - Operation not permitted - Failed to unshare mount and user namespace
cmd/lxc_usernsexec.c: 462: main - Inappropriate ioctl for device - Failed to read from pipe file descriptor 3
TASK ERROR: unable to create CT 101 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - -z --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/101/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 1

I can create or restore a container if it's set to privileged, but AFAIK that is not a good idea for internet-facing servers.
Any ideas or tips will be greatly appreciated :)
 
* was the container originally unprivileged?

* are you having this issue only with this container template or also with others?

* if it's only this container, please post the configuration of it
 
* was the container originally unprivileged?
Yep.

* are you having this issue only with this container template or also with others?
I tried creating new containers from three different templates (Devuan 3, Debian 10 and Ubuntu 21.04) - same error as when trying to restore the backups. I installed PVE on an alternate machine today, and the restore worked fine there.

It seems like any command run via lxc_usernsexec gives the "Operation not permitted (...)" and "Inappropriate ioctl for device (...)" errors. Guess it's time to nuke the drives and reinstall PVE once everything is running on the alternate machine.
 
Hello,

I'm getting the same error
Code:
extracting archive '/var/lib/vz/template/cache/ubuntu-20.04-standard_20.04-1_amd64.tar.gz'
cmd/lxc_usernsexec.c: 417: main - Operation not permitted - Failed to unshare mount and user namespace
cmd/lxc_usernsexec.c: 462: main - Inappropriate ioctl for device - Failed to read from pipe file descriptor 3
TASK ERROR: unable to create CT 102 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - -z --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/102/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 1

I'm not able to create an unprivileged container when trying to do it on encrypted ZFS (I followed the instructions from https://forum.proxmox.com/threads/encrypting-proxmox-ve-best-methods.88191/#post-387731 - clean install on a new NVMe, no VMs / CTs).
Could anyone be so kind as to help me?

EDIT:
The issue is also present with a privileged container - it can be created, but fails to start ->

Code:
run_buffer: 316 Script exited with status 1
lxc_setup: 3686 Failed to run mount hooks
do_start: 1265 Failed to setup container "102"
sync_wait: 36 An error occurred in another process (expected sequence number 5)
__lxc_start: 2073 Failed to spawn container "102"
TASK ERROR: startup for container '102' failed

I was able to create a VM without issues.
 
what's the output of pveversion -v? contents of /etc/pve/storage.cfg?
 
@fabian

I updated the PVE kernel to 5.15 due to a KVM virtualization problem when rebooting / shutting down (but I cannot remove 5.13.19-1 due to dependency problems) ->
Code:
proxmox-ve: 7.1-1 (running kernel: 5.15.7-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.15: 7.1-7
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.15.7-1-pve: 5.15.7-1
pve-kernel-5.13.19-1-pve: 5.13.19-2
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

The command
Code:
/etc/pve/storage.cfg
gives "cannot access, permission denied" (even though I operate as root).

I checked and I have only one root account (getent passwd / getent group), but cat /etc/subgid shows two root entries
Code:
root:100000:65536
root:100000:100001

I also ran
Code:
lxc-checkconfig
and two controllers are missing, plus "Cgroup namespace: required" ->

Code:
LXC version 4.0.9
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-5.15.7-1-pve
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points: 


Cgroup v2 mount points: 
/sys/fs/cgroup

Cgroup v1 systemd controller: missing
Cgroup v1 freezer controller: missing
Cgroup namespace: required
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, not loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: missing
CONFIG_NF_NAT_IPV6: missing
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: 

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
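(A hedged aside, not from the original post: the missing cgroup v1 systemd/freezer controllers are expected on a host running the pure cgroup v2 hierarchy, which PVE 7 uses by default. One way to confirm which hierarchy is active:)

Code:
# prints "cgroup2fs" on a unified cgroup v2 host
stat -fc %T /sys/fs/cgroup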

I also checked, and container 102's subvolume is mounted, but I still get the same error when trying to create it as privileged. zfs list ->
Code:
NAME                                     USED  AVAIL     REFER  MOUNTPOINT
rpool                                   73.5G   826G      104K  /rpool
rpool/ROOT                              3.23G   826G      192K  /rpool/ROOT
rpool/ROOT/pve-1                        3.23G   826G     3.11G  /
rpool/copyroot                          1.15G   826G       96K  /rpool/copyroot
rpool/copyroot/pve-1                    1.15G   826G     1.15G  /
rpool/data                              69.1G   826G       96K  /rpool/data
rpool/data/encrypted                    69.1G   826G      232K  /rpool/data/encrypted
rpool/data/encrypted/base-100-disk-0    35.7G   859G     2.70G  -
rpool/data/encrypted/subvol-102-disk-0   394M  31.6G      394M  /rpool/data/encrypted/subvol-102-disk-0
rpool/data/encrypted/vm-100-cloudinit      6M   826G       88K  -
rpool/data/encrypted/vm-101-cloudinit      6M   826G      108K  -
rpool/data/encrypted/vm-101-disk-0      33.0G   856G     2.74G  -

Sysctl kernel.unprivileged_userns_clone gives 1, so that is okay. I'm thinking it is somehow connected with root privileges, but at least I should be able to create privileged containers...

In the node syslog I only see errors connected with starting User Login Management ->
Code:
Dec 29 10:06:55 alexpiesel systemd[1]: Starting User Login Management...
Dec 29 10:06:55 alexpiesel systemd-logind[265123]: Failed to connect to system bus: No such file or directory
Dec 29 10:06:55 alexpiesel systemd-logind[265123]: Failed to fully start up daemon: No such file or directory
Dec 29 10:06:55 alexpiesel systemd[1]: systemd-logind.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 10:06:55 alexpiesel systemd[1]: systemd-logind.service: Failed with result 'exit-code'.
Dec 29 10:06:55 alexpiesel systemd[1]: Failed to start User Login Management.
Dec 29 10:06:55 alexpiesel systemd[1]: systemd-logind.service: Scheduled restart job, restart counter is at 5.
Dec 29 10:06:55 alexpiesel systemd[1]: Stopped User Login Management.
Dec 29 10:06:55 alexpiesel systemd[1]: modprobe@drm.service: Start request repeated too quickly.
Dec 29 10:06:55 alexpiesel systemd[1]: modprobe@drm.service: Failed with result 'start-limit-hit'.
Dec 29 10:06:55 alexpiesel systemd[1]: Failed to start Load Kernel Module drm.
Dec 29 10:06:55 alexpiesel systemd[1]: systemd-logind.service: Start request repeated too quickly.
Dec 29 10:06:55 alexpiesel systemd[1]: systemd-logind.service: Failed with result 'exit-code'.
Dec 29 10:06:55 alexpiesel systemd[1]: Failed to start User Login Management.
 
I updated the PVE kernel to 5.15 due to a KVM virtualization problem when rebooting / shutting down (but I cannot remove 5.13.19-1 due to dependency problems) ->
that is okay, I just wanted to make sure you are on a PVE kernel and not stock Debian.

The command
Code:
/etc/pve/storage.cfg
gives "cannot access, permission denied" (even though I operate as root).
that's because it's a config file, not a command (I asked for the contents ;))
I checked and I have only one root account (getent passwd / getent group), but cat /etc/subgid shows two root entries
Code:
root:100000:65536
root:100000:100001

that looks fishy and is likely the cause of your problem - did you edit that file manually? while you can specify multiple ranges for a single user, they shouldn't overlap like that. same is true for /etc/subuid, so I'd check that as well.
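For reference, a stock PVE install ships a single range per user; if a second range for root were ever needed, it would have to start past the end of the first one - a hypothetical non-overlapping example:

Code:
root:100000:65536
root:165536:65536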

I'd correct the subuid/subgid range issue, then reboot, and then retry - if creating/starting containers still fails then, please try to provide full logs. thanks!
 
that's because it's a config file, not a command (I asked for the contents ;))

I'm a noob, I'm sorry! :) The content is ->
Code:
root@alexpiesel:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

zfspool: encrypted_zfs
        pool rpool/data/encrypted
        mountpoint /rpool/data/encrypted

I corrected subuid/subgid.

Now after a reboot I have to manually load the key (I don't know why it is not loaded automatically) ->
Code:
TASK ERROR: unable to create CT 102 - zfs error: cannot create 'rpool/data/encrypted/subvol-102-disk-0': encryption root's key is not loaded or provided
I load it with
Code:
zfs load-key rpool/data/encrypted
but I get the same error as before when trying to create an unprivileged container ->
Code:
extracting archive '/var/lib/vz/template/cache/ubuntu-20.04-standard_20.04-1_amd64.tar.gz'
cmd/lxc_usernsexec.c: 417: main - Operation not permitted - Failed to unshare mount and user namespace
cmd/lxc_usernsexec.c: 462: main - Inappropriate ioctl for device - Failed to read from pipe file descriptor 3
TASK ERROR: unable to create CT 102 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - -z --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/102/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 1

I can create a privileged container, but then I cannot start it (same error as before) ->
Code:
run_buffer: 316 Script exited with status 1
lxc_setup: 3686 Failed to run mount hooks
do_start: 1265 Failed to setup container "102"
sync_wait: 36 An error occurred in another process (expected sequence number 5)
__lxc_start: 2073 Failed to spawn container "102"
TASK ERROR: startup for container '102' failed
 
please post the full contents of the /etc/subuid and /etc/subgid files!
 
@fabian

/etc/subuid
Code:
root@alexpiesel:~# cat /etc/subuid
root:100000:65536

/etc/subgid
Code:
root@alexpiesel:~# cat /etc/subgid
root:100000:65536

Additional commands ->
Code:
root@alexpiesel:~# id -u
0
root@alexpiesel:~# cat /proc/1/uid_map
         0          0 4294967295
root@alexpiesel:~#
 
can you try

Code:
lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- whoami

as root?
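For reference, on a host where user namespaces work, this should succeed and simply print the mapped user - e.g. (hypothetical healthy host, output assumed):

Code:
root@pve:~# lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- whoami
root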

is there anything else out of the ordinary in the boot log? could you maybe post that (journalctl -b)?
 
@fabian

Code:
root@alexpiesel:~# lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- whoami
cmd/lxc_usernsexec.c: 417: main - Operation not permitted - Failed to unshare mount and user namespace
cmd/lxc_usernsexec.c: 462: main - Operation not permitted - Failed to read from pipe file descriptor 3

journalctl -b output is in the attachment (it was too long to post).

Btw. thank you very much for trying to help me, much appreciated!!
 

Attachments

  • journalctl.txt (90.2 KB)
that's not the full journal since booting - but it does show some weird stuff going on (e.g. lxcfs not being able to access fuse).

going back through the thread, the following looks suspicious as well:
Code:
rpool                                   73.5G   826G      104K  /rpool
rpool/ROOT                              3.23G   826G      192K  /rpool/ROOT
rpool/ROOT/pve-1                        3.23G   826G     3.11G  /
rpool/copyroot                          1.15G   826G       96K  /rpool/copyroot
rpool/copyroot/pve-1                    1.15G   826G     1.15G  /

is it possible you have two datasets mounted over each other on /? could you post the output of mount?
 
@fabian

mount

Code:
root@alexpiesel:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16307412k,nr_inodes=4076853,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3267812k,mode=755,inode64)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=19875)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
rpool/copyroot/pve-1 on / type zfs (rw,noatime,xattr,noacl)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool/copyroot on /rpool/copyroot type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
rpool/data/encrypted on /rpool/data/encrypted type zfs (rw,noatime,xattr,noacl)
rpool/data/encrypted/subvol-102-disk-0 on /rpool/data/encrypted/subvol-102-disk-0 type zfs (rw,noatime,xattr,posixacl)
 
yeah, it's like I suspected - you have two datasets with their mountpoint set to /, and one is mounted in the initrd stage, then later the second one is mounted over it. at this point I'd likely start over from scratch with a fresh install (and an eye out for the source of this misconfiguration), since you cannot know what ended up where and which parts of the file system(s) are inconsistent.
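A quick way to spot such a collision (a sketch, not part of the original reply):

Code:
# configured mountpoint vs. actual mount state for every dataset
zfs list -o name,mountpoint,mounted
# show what is mounted on / (an over-mounted root will show up here)
findmnt /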
 
OK, thank you!! I will do a secure erase of the NVMe and a clean install again - hopefully this time everything works out. I'll let you know later :)

EDIT: @fabian
Just to be 100% clear -> following this instruction https://gist.github.com/yvesh/ae77a68414484c8c79da03c4a4f6fd55

Correct commands should be ->
Code:
# Copy the files from the copy to the new encrypted zfs root
zfs send -R rpool/copyroot/pve-1@copy | zfs receive -o encryption=on rpool/ROOT/pve-1

# Set the mountpoint
zfs set mountpoint=/ rpool/ROOT/pve-1

Should it work out now? Or should I set the mountpoint differently?
 
that part should be okay - but you then need to either set a different mountpoint for the old unencrypted root dataset (the 'copyroot' one ;)), or destroy it altogether before rebooting.
 
@fabian

Of course you were right! :) Before setting the mountpoint with
Code:
zfs set mountpoint=/ rpool/ROOT/pve-1
I first had to destroy the copyroot dataset ->
Code:
zfs destroy -r rpool/copyroot
and then I have only the one root mountpoint ->
Code:
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 4.07G   895G   104K  /rpool
rpool/ROOT            3.65G   895G   192K  /rpool/ROOT
rpool/ROOT/pve-1      3.65G   895G  3.59G  /
rpool/data             432M   895G    96K  /rpool/data
rpool/data/encrypted   432M   895G   200K  /rpool/data/encrypted

Then I followed the instructions to encrypt the dataset for VMs and CTs ->
  1. Create an encryption key:
Code:
openssl rand -hex -out /root/data-encrypted.key 32
  2. Create a new encrypted dataset:
Code:
zfs create -o encryption=on -o keyformat=hex -o keylocation=file:///root/data-encrypted.key rpool/data/encrypted
  3. Add the encrypted dataset to Proxmox:
Code:
pvesm add zfspool encrypted_zfs -pool rpool/data/encrypted
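As a quick sanity check after these steps (a sketch assuming the dataset and storage names above):

Code:
# keystatus should show "available" once the key is loaded
zfs get encryption,keyformat,keystatus rpool/data/encrypted
# the new storage should be listed as active
pvesm status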
The problem I have now is that I have to manually run
Code:
zfs load-key rpool/data/encrypted
after each reboot (otherwise I cannot create VMs/CTs) - any idea how to make it permanent?
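One common way to load a file-based key automatically at boot (a sketch, not from this thread - dataset, key path and unit name would need adjusting) is a small systemd unit such as /etc/systemd/system/zfs-load-key-encrypted.service:

Code:
[Unit]
Description=Load ZFS encryption key for rpool/data/encrypted
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
# keylocation already points at file:///root/data-encrypted.key, so no stdin is needed
ExecStart=/usr/sbin/zfs load-key rpool/data/encrypted

[Install]
WantedBy=zfs-mount.service

enabled once with systemctl enable zfs-load-key-encrypted.service.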

After loading the key I was able to create an unprivileged container :D
I very much appreciate your help again!! Thank you!!
 
