[SOLVED] Container: from Virtuozzo to Proxmox

Morphushka

Well-Known Member
Jun 25, 2019
Syberia
Hello. I want to move containers from a Virtuozzo server to Proxmox 6. Here is what I have done:

I stop the container and try:
Code:
vzdump 0d6c6bf8-ce7f-4be6-a5e0-0ae102e8f323
and get an error:
Code:
ERROR: strange VPS ID '0d6c6bf8-ce7f-4be6-a5e0-0ae102e8f323'
Hmm... OK, I found that I can change the VPS ID, which I do with the command:
Code:
vzmlocal OLD_CTID:NEW_CTID
Now my CT has the ID 1001:
Code:
vzmlocal d6ed0009-38ff-4cc9-bf38-ee804389db84:1001
Moving/copying CT d6ed0009-38ff-4cc9-bf38-ee804389db84 -> CT 1001, [], [] ...
locking d6ed0009-38ff-4cc9-bf38-ee804389db84
locking 1001
Move /vz/private/d6ed0009-38ff-4cc9-bf38-ee804389db84 /vz/private/1001
Copying/modifying config scripts of CT d6ed0009-38ff-4cc9-bf38-ee804389db84 ...
Register CT 1001 uuid=0d6c6bf8-ce7f-4be6-a5e0-0ae102e8f323
Successfully completed
unlocking d6ed0009-38ff-4cc9-bf38-ee804389db84
unlocking 1001
I try vzdump again and everything looks good.
Code:
[root@localhost 1001]# vzdump 1001
INFO: Starting new backup job - vzdump 1001
INFO: Starting Backup of VM 1001 (openvz)
INFO: status = VEID 1001 exist unmounted down
INFO: creating archive '/vz/dump/vzdump-1001.dat' (/vz/private/1001)
INFO: Total bytes written: 1943080960 (1.9GiB, 10MiB/s)
INFO: file size 1.81GB
INFO: Finished Backup of VM 1001 (00:03:06)
Archive structure:
[root@localhost dump]# tar vtf vzdump-1001.tar
drwx------ 0/0 0 2019-07-30 14:20 ./
drwx------ 0/0 0 2019-06-24 13:55 ./scripts/
-rw-r----- 0/0 36 2019-07-30 14:11 ./.owner
drwx------ 0/0 0 2019-06-24 13:55 ./fs/
-rw-r----- 0/0 805 2019-06-24 13:56 ./vzmtmpfile.H3OquZ
-rw-r----- 0/0 27 2019-07-30 12:57 ./.uptime
drwxr-xr-x 0/0 0 2019-07-30 14:20 ./etc/
drwxr-xr-x 0/0 0 2019-07-30 14:20 ./etc/vzdump/
-rw-r--r-- 0/0 773 2019-07-30 14:20 ./etc/vzdump/vps.conf
-rw-r--r-- 0/0 773 2019-07-30 14:11 ./ve.conf
-rw-r----- 0/0 22483 2019-06-24 13:56 ./.ve.xml
drwxr-xr-x 0/0 0 2019-07-30 12:57 ./root.hdd/
-rw-r--r-- 0/0 0 2017-09-01 10:47 ./root.hdd/DiskDescriptor.xml.lck
-rw------- 0/0 1943011328 2019-07-30 12:57 ./root.hdd/root.hds
-rw-r--r-- 0/0 790 2017-09-01 10:47 ./root.hdd/DiskDescriptor.xml
drwxr-xr-x 0/0 0 2017-09-01 10:57 ./root.hdd/templates/
-rw-r--r-- 0/0 15737 2017-09-01 10:57 ./root.hdd/templates/vzpackages
drwxr-xr-x 0/0 0 2017-09-01 10:57 ./root.hdd/templates/debian-9.0-x86_64/
-rw-r--r-- 0/0 20 2017-09-01 10:57 ./root.hdd/templates/debian-9.0-x86_64/timestamp
-rw-r--r-- 0/0 40 2019-07-30 12:57 ./root.hdd/.statfs
lrwxrwxrwx 0/0 0 2019-06-24 13:55 ./.ve.layout -> 5
lrwxrwxrwx 0/0 0 2019-06-24 13:55 ./templates -> root.hdd/templates
drwxr-x--- 0/0 0 2019-06-24 13:55 ./.brand/
-rw-r----- 0/0 8 2019-06-24 13:55 ./.brand/dispatcher-build
-rw-r--r-- 0/0 27 2019-06-24 13:55 ./.brand/system-release
drwx------ 0/0 0 2019-06-24 13:55 ./dump/
Copy the tar file to the Proxmox machine:
Code:
scp vzdump-1001.tar root@10.222.222.133:/var/lib/vz/dump
Next I try to restore it:
root@pve1:/var/lib/vz/dump# pct restore 1001 ./vzdump-1001.tar
400 Parameter verification failed.
storage: storage 'local' does not support container directories
pct restore <vmid> <ostemplate> [OPTIONS]
root@pve1:/var/lib/vz/dump# pct restore 1001 ./vzdump-1001.tar -storage local-lvm
Logical volume "vm-1001-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: 060fe083-95de-4a5f-aa10-76b6569f04a2
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/var/lib/vz/dump/vzdump-1001.tar'
Total bytes read: 1943080960 (1.9GiB, 649MiB/s)
Architecture detection failed: open '/bin/sh' failed: No such file or directory

Falling back to amd64.
Use `pct set VMID --arch ARCH` to change.
###########################################################
Converting OpenVZ configuration to LXC.
Please check the configuration and reconfigure the network.
###########################################################
Logical volume "vm-1001-disk-0" successfully removed
unable to restore CT 1001 - unable to detect OS distribution

So, here I got stuck.
I also read this thread: https://forum.proxmox.com/threads/migratiing-from-legacy-openvz-host-to-proxmox.54616/#post-251563, which concludes that Proxmox doesn't support ploop.

What can I do? How do I check whether my containers use ploop or not? What is wrong with my vzdump approach?
Help please.
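For what it's worth, a quick way to check whether a given CT is ploop-based is to look at its private area: a ploop container carries a root.hdd image (visible in the tar listing above), while a simfs container stores its files directly on the host filesystem. A minimal sketch, assuming the default /vz/private layout (`ct_layout` is my own helper name, not a vz tool):

```shell
# Hedged sketch: a CT is ploop-based when its private area contains a
# root.hdd image; a simfs CT keeps its files in a plain tree instead.
# /vz/private is the usual default prefix -- adjust if yours differs.
ct_layout() {
    if [ -d "$1/root.hdd" ]; then
        echo "ploop"
    else
        echo "simfs"
    fi
}

ct_layout /vz/private/1001   # on the host in this thread this prints "ploop"
```

Checking `VE_LAYOUT` in the CT's ve.conf should give the same answer.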

UPD:
It seems my CT is ploop-based.
[root@localhost 1001]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 7.3T 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
├─sda3 8:3 0 23.6G 0 part
└─sda4 8:4 0 7.3T 0 part
├─openvz-root 253:0 0 24G 0 lvm /
└─openvz-vz 253:1 0 7.2T 0 lvm /vz
ploop11845 182:189520 0 1T 0 disk
└─ploop11845p1 182:189521 0 1024G 0 part /vz/root/6c689cdc-034a-4789-a177-08526fd153d1
ploop13194 182:211104 0 100G 0 disk
└─ploop13194p1 182:211105 0 100G 0 part /vz/root/656f47a7-b86d-42d7-877d-040357067597
ploop19240 182:307840 0 3.4G 0 disk
└─ploop19240p1 182:307841 0 3.4G 0 part /vz/root/9256f6e2-28d9-45ee-a046-76f265b1a74e
ploop23892 182:382272 0 50G 0 disk
└─ploop23892p1 182:382273 0 50G 0 part /vz/root/8db082f9-c013-4a67-8837-afd116b076cd
ploop24280 182:388480 0 50G 0 disk
└─ploop24280p1 182:388481 0 50G 0 part /vz/root/bc85b32a-ee8c-47e2-93d0-36683330cd05
ploop36476 182:583616 0 100G 0 disk
└─ploop36476p1 182:583617 0 100G 0 part /vz/root/53b2a4e4-5a46-4467-9d26-c0f68d9c2981
ploop36695 182:587120 0 20G 0 disk
└─ploop36695p1 182:587121 0 20G 0 part /vz/root/22a970f8-1489-402f-8e41-799cc7e78433
ploop42885 182:686160 0 100G 0 disk
└─ploop42885p1 182:686161 0 100G 0 part /vz/root/19968aa4-0017-413d-af64-6acb462d6d5f
ploop46096 182:737536 0 50G 0 disk
└─ploop46096p1 182:737537 0 50G 0 part /vz/root/9f1ac17c-699d-4950-8ca2-f4f679b276f5
ploop46169 182:738704 0 10G 0 disk
└─ploop46169p1 182:738705 0 10G 0 part /vz/root/1001
ploop47230 182:755680 0 20G 0 disk
└─ploop47230p1 182:755681 0 20G 0 part /vz/root/7807dd8a-44c4-4afe-93fe-aa717bc6c50c
ploop47304 182:756864 0 10G 0 disk
└─ploop47304p1 182:756865 0 10G 0 part /vz/root/6dc3d29e-6887-4e32-b5d2-912bfbed8b30
ploop48872 182:781952 0 100G 0 disk
└─ploop48872p1 182:781953 0 100G 0 part /vz/root/ffca7f48-cd0d-4b88-acfe-683c1a9cb950
ploop49960 182:799360 0 25G 0 disk
└─ploop49960p1 182:799361 0 25G 0 part /vz/root/69423bf6-573a-4fa5-a61b-b0dd55dacbbe
ploop50224 182:803584 0 400G 0 disk
└─ploop50224p1 182:803585 0 400G 0 part /vz/root/4c5509f7-2583-4a38-b4be-0b0499f27240
ploop58666 182:938656 0 50G 0 disk
└─ploop58666p1 182:938657 0 50G 0 part /vz/root/8b183f8e-b5db-4350-93bc-6d4f2716fa59
ploop60770 182:972320 0 500G 0 disk
└─ploop60770p1 182:972321 0 500G 0 part /vz/root/142e6d68-c9c4-42bb-9313-65f48036a431
ploop60976 182:975616 0 10G 0 disk
└─ploop60976p1 182:975617 0 10G 0 part /vz/root/baa4d054-a1f9-4a2a-8b0a-062cbc8d960e
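Given that the CT is ploop-based, one possible workaround (an untested sketch: `ploop mount`/`ploop umount` are the Virtuozzo ploop tools, `export_ploop_ct` is my own helper name, and the paths are examples) is to mount the ploop image and tar the filesystem tree itself rather than the root.hds image, since `pct restore` can usually consume a plain rootfs tar:

```shell
# Untested sketch: export a ploop CT as a plain-filesystem tar, which is
# what `pct restore` expects (the vzdump tar above only contains the raw
# root.hds image). CTID and paths are examples; the CT must be stopped.
# RUN=echo (the default) only prints the commands; set RUN= to run them.
export_ploop_ct() {
    RUN="${RUN-echo}"
    desc="/vz/private/$1/root.hdd/DiskDescriptor.xml"
    mnt="/mnt/ct$1"
    $RUN mkdir -p "$mnt"
    $RUN ploop mount -m "$mnt" "$desc"                    # mount the image
    $RUN tar czf "/vz/dump/ct$1-fs.tar.gz" -C "$mnt" .    # tar the fs tree
    $RUN ploop umount "$desc"
}

export_ploop_ct 1001   # dry run: prints the commands it would execute
```

The resulting ct1001-fs.tar.gz would then be copied to the PVE host and restored with `pct restore` as above.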
 


Hi,

If I remember OpenVZ correctly, my only guess is that you could convert your ploop format to simfs. You could create a new simfs OpenVZ CT and then use rsync to sync the ploop CT's contents into the simfs CT. Then you could migrate the simfs CT to PMX.
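That route could be sketched roughly like this (untested: `ploop_to_simfs` is my own helper, and the new CT ID, template name and `--layout simfs` flag are assumptions based on legacy OpenVZ tooling, so verify them against your Virtuozzo version first):

```shell
# Untested sketch of the rsync route: CT 1002, the template name and the
# --layout flag are assumptions from legacy OpenVZ tooling -- check them
# against your Virtuozzo version before running anything.
# RUN=echo (the default) only prints the commands; set RUN= to run them.
ploop_to_simfs() {
    RUN="${RUN-echo}"
    $RUN vzctl create "$2" --layout simfs --ostemplate debian-9.0-x86_64
    $RUN vzctl mount "$1"
    $RUN vzctl mount "$2"
    # -aHAX keeps hardlinks, ACLs and xattrs; --numeric-ids keeps uids as-is
    $RUN rsync -aHAX --numeric-ids "/vz/root/$1/" "/vz/root/$2/"
    $RUN vzctl umount "$1"
    $RUN vzctl umount "$2"
}

ploop_to_simfs 1001 1002   # dry run: prints the commands it would execute
```

After that, vzdump of the simfs CT should produce a tar that pct restore can handle.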


Good luck!
 
