Hi,
I've got an unprivileged LXC where I modified the host's /etc/subuid and /etc/subgid files with "root:1100:4". The LXC's config contains:
Code:
lxc.idmap: u 0 100000 1103
lxc.idmap: g 0 100000 1103
lxc.idmap: u 1103 1103 1
lxc.idmap: g 1103 1103 1
lxc.idmap: u 1104 101104 64432
lxc.idmap: g 1104 101104 64432
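That idmap maps container IDs 0-1102 to host 100000-101102, keeps 1103 as 1103 on the host, and maps 1104-65535 to host 101104-165535. The matching entries on the PVE host look like this (the 100000 range is the stock default, the 1100 line is the one I added so root may also map ID 1103):
Code:
# /etc/subuid and /etc/subgid on the PVE host
root:100000:65536
root:1100:4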
I needed that user/group remapping because I had to bind-mount an SMB share (owned by UID/GID 1103) from the host to the unprivileged LXC.
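The bind mount itself is just a mount point entry in the container's config; mine looks something like this (host path and target are placeholders):
Code:
# bind mount in /etc/pve/lxc/126.conf; paths are examples
mp0: /mnt/pve/smbshare,mp=/mnt/smbshare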
Now I want to convert that unprivileged LXC into a privileged one so I can enable the LXC's SMB feature and directly mount the SMB share inside the LXC. The LXC is only reachable locally, so security isn't that big of a concern and I would prefer a privileged LXC. It would also make it easier to move the LXC between my PVE nodes (so I don't have to edit the host's subuid/subgid files each time I want to migrate it).
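As far as I can tell, for a privileged container that would just be the mount feature in the container options, something like:
Code:
# in /etc/pve/lxc/126.conf: allow mounting CIFS shares from inside the container
features: mount=cifs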
The problem now is that I don't know how to do that conversion.
As far as I understand, the way to go would be to back up the unprivileged LXC and restore it as a privileged one. I can only back it up while the subuid/subgid files are modified and the LXC's config contains the "lxc.idmap" lines, because otherwise the LXC won't start. But when I back up that unprivileged LXC and restore it as privileged, the restored LXC can't be started, because subuid/subgid and the LXC config are still set up for the modified user remapping. I can then revert the subuid/subgid files to their defaults and remove the "lxc.idmap" lines from the LXC's config. The privileged LXC will then start, but when logging in to the console from the webUI (SSH doesn't work anymore) I see that all UIDs/GIDs are wrong: everything previously owned by UID 0 is now owned by UID 100000, and so on. So it looks like I would need to chown all files/folders from UID/GID 100000-165535 down to 0-65535. Is PVE supposed to do that when restoring an unprivileged LXC as privileged?
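For reference, the backup/restore steps I'm using are roughly these (the archive name is a placeholder):
Code:
# back up the unprivileged LXC, then restore it as a privileged one
vzdump 126 --mode stop --storage local
pct restore 126 /var/lib/vz/dump/vzdump-lxc-126-<timestamp>.tar.zst --unprivileged 0 --force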
If PVE isn't supposed to do that, is there a script or command I can use to chown all UIDs/GIDs to n-100000?
Edit:
Made this script to change all UIDs/GIDs to n-100000:
Code:
#!/bin/bash
# Mount the LXC's filesystem on the PVE host first, then point this at its mountpoint:
lxcmountpoint="/VMpool/VLT/VM/subvol-126-disk-0/"
# Will take some time (hours to months), as it runs a recursive chown 131,072 times
for olduid in {100000..165535}
do
    newuid=$((olduid - 100000))
    # shift UIDs down by 100000 (only touches files currently owned by $olduid)
    chown --from="$olduid" -Rhc "$newuid" "$lxcmountpoint"
    # shift GIDs down by 100000 (only touches files currently group-owned by $olduid)
    chown --from=":$olduid" -Rhc ":$newuid" "$lxcmountpoint"
    echo "Progress: $olduid/165535"
done

It's now running...will see in a couple of hours if the LXC still works.
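A single pass over the filesystem should be a lot faster than 131,072 recursive chown runs, since it visits every file exactly once and shifts its owner and group in one go; untested sketch of that idea:
Code:
#!/bin/bash
# Untested one-pass alternative: walk the tree once, shift any UID/GID >= 100000 down by 100000
lxcmountpoint="/VMpool/VLT/VM/subvol-126-disk-0/"
find "$lxcmountpoint" -print0 | while IFS= read -r -d '' f; do
    uid=$(stat -c '%u' "$f")
    gid=$(stat -c '%g' "$f")
    if [ "$uid" -ge 100000 ] || [ "$gid" -ge 100000 ]; then
        # -h changes symlinks themselves instead of their targets
        chown -h "$((uid >= 100000 ? uid - 100000 : uid)):$((gid >= 100000 ? gid - 100000 : gid))" "$f"
    fi
done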
Edit:
Doesn't work. After changing all UIDs/GIDs to n-100000, the LXC starts but I can't log in anymore: