OpenVZ to LXC migration problems

rcd

I'm trying to migrate a few ancient OpenVZ containers to LXC. After much trial and error I finally managed to install a working version of vzdump on the old OpenVZ host and dumped the containers, and I was able to import them into Proxmox with --ostype unmanaged, as the containers are based on RHEL 4 and I no longer have the original media for it.

Problem is, when I start the container in Proxmox it runs for a moment and then stops again.

I tried running pct start with --logfile=lxc.log --logpriority=debug and it produced a lot of debugging output. If I just grep for ERROR, one of the first errors is that it can't find /bin/init. That suggests to me that either vzdump or the import expected to find system files somewhere else.

Since the dump file is very close to the size of the container data in /vz/private/2529, I think it probably did dump everything in the container, so what could have gone wrong?

I attach the lxc.log file for info.
 

Attachments

  • lxc.log (25.4 KB)
hi,

rhel4 isn't one of our supported containers so you will need to stay with 'unmanaged' ostype, which means you need to do distro-specific setup to get it running again.

lxc is checking for /sbin/init, so for starters you can try to get that working by copying or symlinking the actual init binary.
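That fix can be done from the host against the mounted rootfs. A minimal sketch, run here against a scratch directory; the rootfs path and the idea that init was installed under /bin are assumptions based on this thread:

```shell
# Demo in a scratch directory; on a real host ROOTFS would be
# /var/lib/lxc/<CTID>/rootfs as reported by `pct mount`.
ROOTFS=$(mktemp -d)                 # stand-in for the container rootfs
mkdir -p "$ROOTFS/bin" "$ROOTFS/sbin"
touch "$ROOTFS/bin/init"            # pretend init was found under /bin

# Symlink it to the path lxc actually checks for:
ln -s ../bin/init "$ROOTFS/sbin/init"
ls -l "$ROOTFS/sbin/init"
```

A relative symlink target (../bin/init) is deliberate: it resolves correctly both from the host and from inside the container.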
 
Yes I know that RHEL 4 obviously is no longer supported by anyone in this world, but if I dump a full RHEL 4 openvz container, should I not be able to restore it again, with everything that was there in the working container?

I mean, without having to find a RHEL 4 ISO somewhere?
 
yes you can, that's what vzdump did. just try mounting the CT with pct mount CTID and see if /sbin/init exists
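That check could look like the following sketch; it assumes a Proxmox host with pct available, and the CTID 2529 is taken from this thread:

```shell
# Sketch: mount the CT filesystem on the host and check for init.
# Guarded so it is a no-op on machines without pct.
CTID=2529
if command -v pct >/dev/null 2>&1; then
    pct mount "$CTID"
    test -e "/var/lib/lxc/$CTID/rootfs/sbin/init" && echo "/sbin/init present"
    pct unmount "$CTID"
fi
```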
 
Sorry, but I don't have much experience with LXC. In OpenVZ the container would be under /vz/root, and when mounted you could see the content under /vz/private. With LXC I only see the root.hdd (under /var/lib/vz/images/) -- where can I see the actual filesystem?
 
as i said pct mount CTID will mount the filesystem and tell you the path it's mounted.
 
Yes, it mounts to /var/lib/lxc/CTID/rootfs -- where we find:

Code:
./dev
./proc
./root.hdd
./root.hdd/root.hdd
./root.hdd/DiskDescriptor.xml.lck
./root.hdd/DiskDescriptor.xml
./root.hdd/root.hdd.{0793f2f5-b890-4117-b473-126a412c2f51}
./dump
./Snapshots.xml
./vzpbackup_snapshot
./etc
./etc/vzdump
./etc/vzdump/vps.conf

# ll root.hdd
total 8735170
-rw-r--r-- 1 root root        1135 Feb  9 18:56 DiskDescriptor.xml
-rw------- 1 root root           0 Feb  9 19:27 DiskDescriptor.xml.lck
-rw------- 1 root root 21458059264 Feb  9 19:19 root.hdd
-rw------- 1 root root 14401142784 Jun 11 20:37 root.hdd.{0793f2f5-b890-4117-b473-126a412c2f51}

i.e. the root.hdd I mentioned before. OpenVZ has the same under /vz/root, but unpacked under /vz/private.

Where is root.hdd unpacked in LXC?
 
I already read that thread. It says "mount the root.hdd file" but not how you do it. How do you mount a root.hdd file?

Anyway, I still don't get why /sbin/init is missing when it obviously is there in the openvz container.
 
Anyway, I still don't get why /sbin/init is missing when it obviously is there in the openvz container.

it's probably inside the ploop image (root.hdd)
 
ploop images weren't supported when proxmox used openvz. the process documented on our wiki is meant for old proxmox installations and not arbitrary openvz setups; that's why it wasn't automatically extracted
 
Ok, I understand.

Anyway, I installed ploop and tried to mount, but it failed:

Code:
(pve)
# ploop mount /var/lib/lxc/2529/rootfs/root.hdd/DiskDescriptor.xml
Opening delta /var/lib/lxc/2529/rootfs/root.hdd/root.hdd
Error in ploop_getdevice (ploop.c:870): Can't open /proc/vz/ploop_minor: No such file or directory
# ploop check /var/lib/lxc/2529/rootfs/root.hdd/DiskDescriptor.xml
# ploop info /var/lib/lxc/2529/rootfs/root.hdd/DiskDescriptor.xml
Opening delta /var/lib/lxc/2529/rootfs/root.hdd/root.hdd
Error in ploop_getdevice (ploop.c:870): Can't open /proc/vz/ploop_minor: No such file or directory

Instead I went to the OpenVZ server and did the same there. Now I could mount it, but that just gives me another block device...

Code:
(openvz)
# ploop mount DiskDescriptor.xml
Opening delta /backup/dumpdir/root.hdd/root.hdd
Adding delta dev=/dev/ploop29202 img=/backup/dumpdir/root.hdd/root.hdd (ro)
Adding delta dev=/dev/ploop29202 img=/backup/dumpdir/root.hdd/root.hdd.{0793f2f5-b890-4117-b473-126a412c2f51} (rw)
# ls -alR /dev/ploop29202
brw-rw---- 1 root disk 182, 467232 Jun 15 13:44 /dev/ploop29202

Now what?
 
add the -m option for your command followed by the path you want to mount it on [0] (search for mount_point in the page)

afterwards i'd just make a tar archive of the filesystem and extract it in a fresh container (just transfer the archive to the container and extract it at the right place)

[0]: https://wiki.openvz.org/Man/ploop.8
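Put together, the two steps above could look like this sketch, run on the old OpenVZ host. The DiskDescriptor.xml path and archive name are taken or inferred from this thread, and the ploop calls are guarded so the snippet is a no-op where ploop isn't installed:

```shell
# Sketch, on the OpenVZ host: mount the ploop image at a chosen
# mountpoint, archive the whole filesystem, then unmount.
MNT=$(mktemp -d)
if command -v ploop >/dev/null 2>&1; then
    ploop mount -m "$MNT" /backup/dumpdir/root.hdd/DiskDescriptor.xml
    # --numeric-owner keeps raw uid/gid values instead of names,
    # -p preserves permissions
    tar --numeric-owner -czpf /backup/ct2529-rootfs.tar.gz -C "$MNT" .
    ploop umount -m "$MNT"
fi
```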
 
Ok, that worked fine, and I was able to make a tar backup of the server.

I then created a CT, and mounted it (without starting it), removed the content and restored the tar backup. Unfortunately the container again starts briefly and then stops.

daemon.log has this about it, which isn't much. I don't know if this is the best logfile to get info from? Otherwise, what could go wrong?

Code:
Jun 18 21:41:31 server37 pct[25805]: starting CT 2529: UPID:server37:000064CD:26DFF939:5EEBDF8B:vzstart:2529:root@pam:
Jun 18 21:41:31 server37 systemd[1]: Started PVE LXC Container: 2529.
Jun 18 21:41:31 server37 pct[25782]: <root@pam> end task UPID:server37:000064CD:26DFF939:5EEBDF8B:vzstart:2529:root@pam: OK
Jun 18 21:41:32 server37 pvestatd[2157]: unable to get PID for CT 2529 (not running?)
Jun 18 21:41:33 server37 systemd[1]: pve-container@2529.service: Main process exited, code=exited, status=1/FAILURE
Jun 18 21:41:33 server37 systemd[1]: pve-container@2529.service: Failed with result 'exit-code'.
 
I then created a CT, and mounted it (without starting it), removed the content and restored the tar backup

did you delete the whole container? if you did then it probably won't work...

i'd rather unpack the files from within the container so the permissions/owners match the container userspace.
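That suggestion could be sketched as follows, assuming the container starts (even briefly into a shell) and the archive name from earlier in the thread; the destination path is a hypothetical example:

```shell
# Sketch: copy the archive into the container and unpack it from
# within, so ownership maps to the CT's own userspace.
# Guarded so it is a no-op on machines without pct.
CTID=2529
if command -v pct >/dev/null 2>&1; then
    pct push "$CTID" /backup/ct2529-rootfs.tar.gz /root/rootfs.tar.gz
    pct exec "$CTID" -- tar --numeric-owner -xzpf /root/rootfs.tar.gz -C /
fi
```

Extracting over / of a running system is inherently risky; this only works because the target is a throwaway container being repopulated wholesale.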

daemon.log has this about it, which isn't much. I don't know if this is the best logfile to get info from? Otherwise, what could go wrong?
you can see here[0] but my guess is that you deleted something that the container needs.

[0]: https://pve.proxmox.com/pve-docs/chapter-pct.html#_obtaining_debugging_logs
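The procedure in that chapter boils down to starting the container in the foreground with debug logging; a sketch, with CTID 2529 from this thread and the log path chosen by convention:

```shell
# Sketch: run the container in the foreground (-F) with DEBUG-level
# logging (-l) written to a file (-o), per the linked documentation.
# Guarded so it is a no-op on machines without lxc-start.
CTID=2529
LOG=/tmp/lxc-$CTID.log
if command -v lxc-start >/dev/null 2>&1; then
    lxc-start -n "$CTID" -F -l DEBUG -o "$LOG"
fi
```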
 
Well, I had to delete the content, as the container I need to restore is based on RHEL 4 AS, so any files already in the container would not be compatible.

I can't find any RHEL 4 AS container template. I have an ISO with it, but I don't know how to get from an ISO to a template, nor can I find it explained anywhere.
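For what it's worth, a Proxmox CT template is essentially a compressed rootfs tarball placed in the template cache, so a tarball made from an extracted rootfs can be fed to pct create directly. A sketch; the tarball name, hostname, and storage/size values are hypothetical:

```shell
# Sketch: create an unmanaged CT from a custom rootfs tarball placed
# in the template cache. Guarded so it is a no-op without pct.
TARBALL=/var/lib/vz/template/cache/rhel-4-custom_amd64.tar.gz
if command -v pct >/dev/null 2>&1 && [ -f "$TARBALL" ]; then
    pct create 2529 "$TARBALL" --ostype unmanaged \
        --rootfs local:8 --hostname rhel4-ct
fi
```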
 
you could have cloned the RHEL container :)

anyway, i guess your problem is solved. if so, please mark the thread as [SOLVED] so others know what to expect
 
No, the problem is not solved. The container didn't work the way I did it first. I suppose I need to make a template out of the RHEL 4 ISO I have, but I don't know how?

What do you mean "could have cloned the RHEL container" ?
 
