Hi Oguz, thank you very much for your prompt reply. I found this tutorial[¹], but there is no mention of /etc/fstab, and there are some other steps to do, for example creating device nodes because the CT doesn't support udev... so I fear that the process is not so simple.
To prevent any problems I've commented out each and every line within the guest's /etc/fstab configuration file.
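That tutorial step can be scripted. Below is a minimal sketch using sed on a scratch copy; for the real thing, point the path at the guest's /etc/fstab instead of the temp file:

```shell
# Demo on a scratch file; replace FSTAB with the guest's /etc/fstab.
FSTAB=$(mktemp)
printf '%s\n' 'UUID=abcd / ext4 errors=remount-ro 0 1' '# old comment' > "$FSTAB"
sed -i.bak 's/^[^#]/#&/' "$FSTAB"   # prefix every active line with '#'
cat "$FSTAB"                        # both lines now start with '#'
```

The `-i.bak` keeps a backup of the original fstab next to the edited one, so the change is easy to undo.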
/etc/
/usr/
/home/
/root/
/var/
/etc/network/interfaces and /etc/fstab are not needed (you can simply delete them).

FWIW i usually clone that this way:
rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/usr/tmp/*","/run/*","/mnt/*","/media/*","/var/cache/*","/","/lost+found","/boot/"} /* root@lxcserver:/
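Note that the --exclude={...} form relies on bash brace expansion: the shell turns the single braced list into one --exclude option per pattern before rsync ever runs, and the inner quotes keep the globs from expanding against the local filesystem. A quick way to see what rsync actually receives:

```shell
# Print the words bash hands to rsync after brace expansion.
printf '%s\n' --exclude={"/dev/*","/proc/*","/sys/*"}
# --exclude=/dev/*
# --exclude=/proc/*
# --exclude=/sys/*
```

This is why an unbalanced quote inside the braces breaks the whole command: the expansion never happens and rsync sees one garbled argument.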
I have done this a few times, and decided that it is easier just to migrate the app than to copy everything and then remove what is not used.

You are right if we are talking about cloning a desktop PC, but if you are cloning a server I don't agree.
Are both machines up and running? If I have understood well, you create an LXC container named lxcserver, then from the machine to clone you run the command

# rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/usr/tmp/*","/run/*","/mnt/*","/media/*","/var/cache/*","/","/lost+found","/boot/"} /* root@lxcserver:/

Is it correct?

Piviul

Yes, that should do it.
hi,

i'm not aware of any fully automatic process, but basically you can copy the whole rootfs of your machine and unpack it in a fresh container, remove unneeded stuff (for example /etc/fstab isn't needed), then configure the container using PVE.
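The copy-and-unpack route can also be done with tar. A hedged sketch, demonstrated on a scratch directory; on the real machine you would pack / with --one-file-system and unpack into the container's rootfs:

```shell
# Pack a (scratch) rootfs keeping numeric uids, then unpack it elsewhere.
src=$(mktemp -d); dst=$(mktemp -d); arc=$(mktemp)
mkdir -p "$src/etc"; echo hello > "$src/etc/motd"
tar --numeric-owner -C "$src" -czf "$arc" .   # pack, preserving numeric uid/gid
tar --numeric-owner -C "$dst" -xzf "$arc"     # unpack into the target rootfs
cat "$dst/etc/motd"    # hello
```

--numeric-owner matters here because the user names on the source machine and inside a fresh container may map to different uids.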
The CT seems to start correctly, but when I try to log on it says that the password is incorrect. Furthermore, it doesn't seem that the services on the CT are accessible from the LAN.
when you rsync'd it overwrote /etc/passwd and /etc/shadow, so you may need to use the old credentials. if that doesn't work you can use the

pct enter CTID

command to attach to the container and run commands to change the password, bring services up, and so on.

Yes of course, the credentials I used were from the cloned machine! Doesn't seem to work:
# ls -arilh /etc/passwd /etc/shadow
1520977 -rw-r----- 1 root shadow 1.6K May 28 12:06 /etc/shadow
1514335 -rw-r--r-- 1 root root 2.7K May 28 12:06 /etc/passwd
bash: /root/.bashrc: Permission denied
FWIW i usually clone that this way:

rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/usr/tmp/*","/run/*","/mnt/*","/media/*","/var/cache/*","/","/lost+found","/boot/"} /* root@lxcserver:/

Your method seems to work. I have only made very small changes to it:

rsync -aAXv --exclude={"/dev/*","/proc/","/sys/","/tmp/*","/usr/tmp/*","/run/*","/mnt/*","/media/*","/var/cache/*","/","/lost+found","/boot/"} /* root@lxcserver:/
can't create /var/cache/apt-show-versions/files: No such file or directory at /usr/bin/apt-show-versions line 199.
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'test -x /usr/bin/apt-show-versions || exit 0 ; apt-show-versions -i'
E: Sub-process returned an error code
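The apt-show-versions failure is a plausible side effect of excluding /var/cache/* from the copy: the package's cache directory simply doesn't exist in the new container. A sketch of a fix, run inside the CT as root (the -i flag comes straight from the hook shown in the error above):

```shell
# Recreate the cache directory dropped by the rsync exclude, then let
# apt-show-versions rebuild its cache (-i = initialise).
mkdir -p /var/cache/apt-show-versions
apt-show-versions -i
```

The same applies to any other service that expects its /var/cache subdirectory to exist.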
what are the permissions on /etc/passwd and /etc/shadow?
From the node that hosts the CT I have done a:

# ls -l /zfspool/subvol-105-disk-0/etc/{passwd,shadow}
-rw-r--r-- 1 root root 2168 Jun 6 09:02 /zfspool/subvol-105-disk-0/etc/passwd
-rw-r----- 1 root shadow 1335 Jun 6 09:02 /zfspool/subvol-105-disk-0/etc/shadow
root@pve:~# pct enter 105
bash: /root/.bashrc: Permission denied
root@ct105:~# ls -l
ls: cannot open directory '.': Permission denied
root@ct105:~# whoami
root
[...]
also this is curious:

bash: /root/.bashrc: Permission denied

what are the permissions on that file?
root@pve:~# ls -l /zfspool/subvol-105-disk-0/root/.bashrc
-rw-r--r-- 1 root root 570 Jan 31 2010 /zfspool/subvol-105-disk-0/root/.bashrc
check the uid/gid of the files in the CT rootfs.

I don't know, it seems to be root:
# ls -ld /zfspool/subvol-105-disk-0
drwxr-xr-x 23 root root 27 Jun 16 14:17 /zfspool/subvol-105-disk-0
i think the owners of the files need to be set to the correct uids.

I've tried to learn something about containers. How can I know what the correct uids are?
is the container privileged or unprivileged?

That's a good question; I think unprivileged, but I'm not sure: how can I verify it?
check the output of

pct config CTID

unprivileged: 1 in the output would indicate an unprivileged container.
Done, I've executed the command from the node that hosts the CT 105:

# pct config 105 | grep ^unpriv
unprivileged: 1

...or is there something else to look for?

yes, there's something else to look for.
you checked the uid/gid of the zfs subvol which contains the rootfs; however, what we're interested in is the owners of the files inside the rootfs.

to check this, you will need to mount it first. try with

pct mount CTID

and it should mount the rootfs and tell you where it's mounted.
# pct mount 105
mounted CT 105 in '/var/lib/lxc/105/rootfs'
go to that directory and check the uid/gid. a better way to do this is using the -n flag for ls (numeric uid/gid in output).

This is the result:
# ls -na /var/lib/lxc/105/rootfs
total 185
drwxr-xr-x 23 0 0 27 Jun 16 14:17 .
drwxr-xr-x 3 0 0 4096 Jun 17 07:56 ..
drwxr-xr-x 2 0 0 158 Jun 6 08:59 bin
drwxr-xr-x 3 0 0 7 Jun 8 11:17 boot
drwxr-xr-x 4 0 0 4 Jun 5 15:29 build
drwxr-xr-x 17 0 0 157 Jun 9 15:18 dev
drwxr-xr-x 139 0 0 249 Jun 16 14:17 etc
-rw-r--r-- 1 100000 100000 0 Jun 16 14:17 fastboot
drwxr-xr-x 6 0 0 6 Apr 7 09:28 home
lrwxrwxrwx 1 0 0 30 Jun 6 06:47 initrd.img -> boot/initrd.img-4.9.0-12-amd64
drwxr-xr-x 17 0 0 23 Jun 8 11:18 lib
drwxr-xr-x 2 0 0 3 Jun 6 06:40 lib64
drwx------ 2 0 0 2 Mar 6 16:08 lost+found
drwxr-xr-x 3 0 0 4 Mar 6 16:08 media
drwxr-xr-x 2 0 0 2 Jun 5 09:52 mnt
drwxr-xr-x 4 0 0 4 Jun 8 11:41 opt
dr-xr-xr-x 246 0 0 266 Jun 9 15:18 proc
-rw------- 1 0 0 1024 Mar 6 16:22 .rnd
drwx------ 7 0 0 22 Jun 9 14:56 root
drwxr-xr-x 3 0 0 4 Jun 9 15:24 run
drwxr-xr-x 2 0 0 232 Jun 8 11:18 sbin
drwxr-xr-x 2 0 0 2 Mar 6 16:08 srv
dr-xr-xr-x 2 0 0 2 Jun 9 15:18 sys
drwxrwxrwt 7 0 0 7 Jun 17 07:39 tmp
drwxr-xr-x 10 0 0 10 Mar 6 16:08 usr
drwxr-xr-x 12 0 0 14 Apr 2 12:03 var
lrwxrwxrwx 1 0 0 27 Jun 6 06:47 vmlinuz -> boot/vmlinuz-4.9.0-12-amd64
Except for the file fastboot, all the others seem to be not mapped at all... :?

in unprivileged containers, uids and gids in the container are actually mapped to a different uid/gid on the host, to make it more secure against breakout techniques. so for example even if you are root in the container and the container thinks your uid is 0, in reality your uid is mapped to something else (by default to 100000).
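In other words, with the default map the host uid is simply the container uid plus 100000. A tiny illustration of the arithmetic (the offset is the PVE default; containers with a custom idmap will differ):

```shell
# Container uid -> host uid under the default unprivileged offset.
map_to_host() { echo $(( $1 + 100000 )); }
map_to_host 0    # container root -> 100000
map_to_host 33   # e.g. www-data (uid 33 on Debian) -> 100033
```

This is why a rootfs rsync'd from a physical machine looks all-zeros from the host: the files still carry their original uids instead of the shifted ones.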
chown -R 100000:100000 /var/lib/lxc/105/rootfs

(note that rsync's -a implies -o, i.e. preserve owners, which is why the copied files kept their original uids.)

as root:

pct mount CTID
cd /var/lib/lxc/CTID/rootfs
find . -uid 0 -gid 0 -exec chown 100000:100000 {} \;
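The find above only fixes files owned root:root; anything owned by another user (uid 1000, www-data, and so on) needs the same +100000 shift to keep working inside the container. A dry-run sketch that prints the required chown for every not-yet-shifted file; it assumes the default 100000 offset and GNU stat, and you should review the output before piping it to sh as root:

```shell
# Print one chown per file whose uid or gid is still below the offset.
remap_cmds() {
    find "$1" -print0 | while IFS= read -r -d '' f; do
        uid=$(stat -c %u "$f"); gid=$(stat -c %g "$f")
        new_uid=$uid; new_gid=$gid
        [ "$uid" -lt 100000 ] && new_uid=$((uid + 100000))
        [ "$gid" -lt 100000 ] && new_gid=$((gid + 100000))
        if [ "$new_uid" -ne "$uid" ] || [ "$new_gid" -ne "$gid" ]; then
            printf 'chown -h %s:%s %q\n' "$new_uid" "$new_gid" "$f"
        fi
    done
}
# Usage on the PVE node, after `pct mount 105`:
#   remap_cmds /var/lib/lxc/105/rootfs        # inspect the commands
#   remap_cmds /var/lib/lxc/105/rootfs | sh   # apply them (as root)
```

chown -h is used so symlinks themselves get remapped instead of their targets, and files already above the offset are left alone.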