[SOLVED] Problems with migrate OpenVZ containers to PVE

GKmst

I have been running OpenVZ for many years. Now I have installed Proxmox 5.4 on another machine and would like to migrate several Debian Jessie OpenVZ guest systems to Proxmox.

I made backups of the OpenVZ containers and uploaded them to the Proxmox host machine.
In the storage view I clicked restore, but I get errors:

TASK ERROR: unable to restore CT 401 - unable to parse config line: VE_LAYOUT=ploop

I searched the Proxmox forum but didn't find anything that could be helpful.
Can anybody point me in the right direction on how to restore my OpenVZ containers on Proxmox 5.4?
 
unable to parse config line: VE_LAYOUT=ploop

I guess it has to do with your OpenVZ config file. Maybe remove this VE_LAYOUT line and give it another shot with backup and restore?
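
For example, on the OpenVZ host it could look roughly like this (a minimal sketch, assuming the standard config path /etc/vz/conf/<CTID>.conf and CT 401; adjust to your setup), so the next vzdump run picks up the cleaned config:

Code:
# keep a copy of the original config, then drop the VE_LAYOUT line
cp /etc/vz/conf/401.conf /etc/vz/conf/401.conf.bak
sed -i '/^VE_LAYOUT=/d' /etc/vz/conf/401.conf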
 
OK, that makes a lot of sense ;)
Thank you.
I removed the line; now I get this error:

Formatting '/mnt/Storage_1_500MB/pve-storage/images/601/vm-601-disk-0.raw', fmt=raw size=53687091200
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: 4096/13107200 done
Creating filesystem with 13107200 4k blocks and 3276800 inodes
Filesystem UUID: c94e5ce5-8f26-438d-9e28-6bc2e320634c
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Allocating group tables: 0/400 done
Writing inode tables: 0/400 done
Creating journal (65536 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/400 done

TASK ERROR: unable to restore CT 601 - file does not look like a template archive: /mnt/Storage_1_500MB/pve-storage/dump/vzdump-openvz-601-2019_04_30-01_12_50.tgz
 
After I changed the OpenVZ backup file extension from .tgz to .tar.gz, as suggested in https://forum.proxmox.com/threads/pct-restore-unable-to-parse-config-line.36145/#post-243879, it generated another error.
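The rename itself was just something like this (same basename as in the log below):

Code:
cd /mnt/Storage_1_500MB/pve-storage/dump
mv vzdump-openvz-401-2019_05_01-18_33_48.tgz vzdump-openvz-401-2019_05_01-18_33_48.tar.gz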

Formatting '/mnt/Storage_1_500MB/pve-storage/images/401/vm-401-disk-0.raw', fmt=raw size=5368709120
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: 4096/1310720 done
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: 0c158517-47a4-4325-9a4c-170faf9fc877
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: 0/40 done
Writing inode tables: 0/40 done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/40 done

extracting archive '/mnt/Storage_1_500MB/pve-storage/dump/vzdump-openvz-401-2019_05_01-18_33_48.tar.gz'
Total bytes read: 3114280960 (3.0GiB, 108MiB/s)
Architecture detection failed: open '/bin/sh' failed: No such file or directory

Falling back to amd64.
Use `pct set VMID --arch ARCH` to change.
###########################################################
Converting OpenVZ configuration to LXC.
Please check the configuration and reconfigure the network.
###########################################################
TASK ERROR: unable to restore CT 401 - unable to detect OS distribution
 
Hi,

Architecture detection failed: open '/bin/sh' failed: No such file or directory

This is quite weird. /bin/sh should exist in a Debian system since it's POSIX mandatory.

TASK ERROR: unable to restore CT 401 - unable to detect OS distribution

If your container is Debian, check if the file /etc/debian_version exists on your container's root filesystem. (If this file doesn't exist, check whether the package `base-files` is installed in your container. If not, install it and try the conversion from scratch again.)
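
For example, a quick sanity check against the container's root filesystem (from inside the container or chrooted into its rootfs; purely illustrative):

Code:
cat /etc/debian_version   # should print the Debian release
dpkg -l base-files        # the package that ships /etc/debian_version
ls -l /bin/sh             # on Debian this is normally a symlink to dash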

Both of these errors could also be an indication that something is wrong with the container's root filesystem in general. Maybe it's corrupted?

Can we also see your OpenVZ configuration file?
 
Hello Oguz,

Thank you for your help.

This is quite weird. /bin/sh should exist in a Debian system since it's POSIX mandatory.
Indeed very weird.
However, /bin/sh is present in container 401.
It is a symlink to /bin/dash.

If your container is Debian, check if the file /etc/debian_version exists on your container's root filesystem. (If this file doesn't exist, check whether the package `base-files` is installed in your container. If not, install it and try the conversion from scratch again.)
/etc/debian_version is present and contains "7.11" without the double quotes.

My OpenVZ containers have worked very well for years; they are just old.
They will be replaced, but in the meantime I would like to migrate them to VMware / Proxmox / LXC,
so I can keep them running while configuring the new containers.

I am exploring how to move forward and am testing some options on an old desktop system (HP Elite 8300, i7, 16 GB RAM).
For example, I am testing Proxmox on this system at the moment.
 
Can we also see your OpenVZ configuration file?
Oh yes, of course, I almost forgot.

Code:
PHYSPAGES="0:106125"
SWAPPAGES="0:212250"

KMEMSIZE="unlimited"
DCACHESIZE="unlimited"
LOCKEDPAGES="unlimited"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

DISKSPACE="5242880:5242880"
DISKINODES="7389131:8128045"

VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/$VEID"
OSTEMPLATE="debian-7.0-x86_64"
ORIGIN_SAMPLE="xxxxxx"

ONBOOT="yes"

HOSTNAME="ns.xxxxxx.xxx"
IP_ADDRESS="192.168.166.8 192.168.166.5"
NAMESERVER="192.168.166.1"
NAME="ns"

CAPABILITY="SYS_TIME:on"

VE_LAYOUT="ploop"
 
Well, in that case the only place the error could be occurring is the backup of the VZ container itself (since the files /bin/sh and /etc/debian_version exist in the container rootfs).

How do you back it up? Did you notice any error messages or anything odd during the backup process? Can we see the output of it?
 
On another note, I forgot to tell you something.
I think we never supported `ploop` back when we used OpenVZ anyway, so if your container is not from us, our software wouldn't be able to work with ploop.

Although it might be possible to use something else on the container and then make a backup, or even just mount the rootfs somewhere and make a tar archive out of it to restore on PVE.
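
A rough sketch of that second approach (assuming vzctl is available on the OpenVZ host, the container root gets mounted at your VE_ROOT path, and taking CT 401 as an example):

Code:
# on the OpenVZ host: stop the container and mount its ploop image
vzctl stop 401
vzctl mount 401

# pack the mounted root filesystem into a plain tarball
tar czf /var/lib/vz/dump/ct401-rootfs.tar.gz -C /var/lib/vz/root/401 .

# unmount and start the container again
vzctl umount 401
vzctl start 401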
 
I back up with:
vzdump --compress --dumpdir /var/lib/vz/dump/dump_401_03-05-2019-14:58:13 --stop 401 --bwlimit 25000

The created archive is '/var/lib/vz/dump/dump_401_03-05-2019-14:58:13/vzdump-openvz-401-2019_05_03-14_58_15.tgz'

Code:
mei 03 14:58:15 INFO: Starting Backup of VM 401 (openvz)
mei 03 14:58:15 INFO: CTID 401 exist mounted running
mei 03 14:58:15 INFO: status = CTID 401 exist mounted running
mei 03 14:58:15 INFO: backup mode: stop
mei 03 14:58:15 INFO: bandwidth limit: 25000 KB/s
mei 03 14:58:15 INFO: stopping vm
mei 03 14:58:15 INFO: Stopping container ...
mei 03 14:58:19 INFO: Container was stopped
mei 03 14:58:19 INFO: Unmounting file system at /mnt/data/vz/root/401
mei 03 14:58:20 INFO: Unmounting device /dev/ploop11086
mei 03 14:58:20 INFO: Container is unmounted
mei 03 14:58:20 INFO: creating archive '/var/lib/vz/dump/dump_401_03-05-2019-14:58:13/vzdump-openvz-401-2019_05_03-14_58_15.tgz'
mei 03 15:10:41 INFO: Total bytes written: 3116380160 (3.0GiB, 4.1MiB/s)
mei 03 15:10:41 INFO: archive file size: 991MB
mei 03 15:10:41 INFO: restarting vm
mei 03 15:10:47 INFO: vm is online again after 752 seconds
mei 03 15:10:47 INFO: Finished Backup of VM 401 (00:12:32)

Recovering / restoring works flawlessly.
 
On another note, I forgot to tell you something.
I think we never supported `ploop` back when we used OpenVZ anyway, so if your container is not from us, our software wouldn't be able to work with ploop.

Although it might be possible to use something else on the container and then make a backup, or even just mount the rootfs somewhere and make a tar archive out of it to restore on PVE.

Aha, OK, that's bad news, hmm.
I will try that. If that doesn't work either, then I will have to rethink how to go forward.

Thank you for clarifying.
 
You're very welcome.

Let me know if it works with simfs or just plain tar on the rootfs (it should). If it works, you can mark the thread [SOLVED] by editing your first post.
 
Yesss, it worked!

I mounted the root.hdd of the OpenVZ container. Then I copied the OpenVZ config inside the root.hdd to /etc/vz/vzdump/vps.conf.
Then I created a tarball of the mounted root.hdd and restored the container tarball on PVE with "unprivileged container" disabled.
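
For reference, the CLI equivalent of that restore is roughly this (the archive name and storage are just examples from my setup):

Code:
pct restore 401 /mnt/Storage_1_500MB/pve-storage/dump/ct401-rootfs.tar.gz --storage Storage_1_500GB --unprivileged 0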

However, when I now try to log in on the console, it is not letting me log in.
 
I changed the console mode from tty to shell, and now I am in.
Then I changed the network settings.
After that, the container is fully working as it did before on OpenVZ.
Great!
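
For anyone else doing this, my changes roughly correspond to these pct commands (the bridge and gateway here are assumptions from my network; use your own values):

Code:
pct set 401 --cmode shell
pct set 401 --net0 name=eth0,bridge=vmbr0,ip=192.168.166.8/24,gw=192.168.166.1
pct set 401 --nameserver 192.168.166.1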
 
Awesome. Glad everything went fine.

Cheers.
 