Moving from ext4 storage (.raw) to thick-provisioned LVM

breakaway9000

Hi,

I thought this would be as simple as creating a backup & restoring - but it looks like it is not. Restoring it to my lvm (thick) storage gives this error.

Code:
TASK ERROR: unable to detect disk size - please specify mp0 (size)

Restoring it back to the ext4 datastore (where this CT was originally stored) also gives the same error.

So then I tried to restore it to my ceph storage. It gave me this error:

Code:
close (rename) atomic file '/var/log/pve/tasks/active' failed: No such file or directory (500)

Is there a pathway to shifting a container from an ext4-based datastore to a thick-provisioned LVM storage?
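
For context, this is roughly what the backup-and-restore route looks like from the CLI (just a sketch; my.lvm is a stand-in for whatever your thick LVM storage ID is actually called):

Code:
# back up the container (CT 136 here) with LZO compression
vzdump 136 --compress lzo
# try to restore the resulting archive as CT 124 onto the thick LVM storage
pct restore 124 vzdump-lxc-136-2018_05_07-15_08_48.tar.lzo --storage my.lvm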
 
Short-term solution... I have managed to restore this container back to its original location.

Code:
pct restore 124 vzdump-lxc-136-2018_05_07-15_08_48.tar.lzo -rootfs my.storage:10 -mp0 my.storage:100,mp=/data

It looks like the built-in backup doesn't like the fact that this container has a second 100 GB mount point (mp0) mounted at /data.

The above command restores it correctly.

But I am still trying to work out whether I can back this CT up and move it to LVM storage or not.
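
For anyone following along, a quick way to sanity-check a restore like this before relying on it (a sketch; adjust the CT ID and mount path to your setup):

Code:
# confirm that both rootfs and mp0 made it into the restored config
pct config 124
# start the container and check the extra mount point from inside it
pct start 124
pct exec 124 -- df -h /data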
 
I thought this would be as simple as creating a backup & restoring - but it looks like it is not. Restoring it to my lvm (thick) storage gives this error.

Code:
TASK ERROR: unable to detect disk size - please specify mp0 (size)

I do not get that error and can restore without problems. Could you please post the corresponding command line that causes that error?
 
Hi Dietmar. First off, I tried doing it through the web GUI, so the command will be whatever Proxmox generates internally.

Secondly, I SSHed into the host and tried it from there:

Code:
# pct restore 124 vzdump-lxc-136-2018_05_07-15_08_48.tar --storage my.storage
unable to detect disk size - please specify mp0 (size)

Finally, I googled and found this thread: https://forum.proxmox.com/threads/pct-restore-doesnt-work.36497/ - it still has the wrong syntax in the original post, but your post (#2) gives the solution.

In the end I came up with this command, which did the job, and the container was up and running shortly afterwards.

Code:
pct restore 124 vzdump-lxc-136-2018_05_07-15_08_48.tar.lzo -rootfs my.storage:10 -mp0 my.storage:100,mp=/data

In case someone has trouble with this, here's what goes into this command:

pct restore <ID you want the CT restored as> <path to the vzdump tar.lzo file> -rootfs <name of storage>:<size in GB> -mp0 <name of storage>:<size in GB>,mp=<path where this volume is mounted inside your CT>

Replace everything in angle brackets to suit your setup. If you have more mount points, I suppose you'd go -mp1, -mp2 and so on and fill in the relevant details (see the sketch below).
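
As a made-up example with one extra mount point (the storage name lvm-thick, the 50 GB size and the /backup path are purely illustrative):

Code:
# restore rootfs (10 GB) plus two mount points onto a thick LVM storage
pct restore 124 vzdump-lxc-136-2018_05_07-15_08_48.tar.lzo \
  -rootfs lvm-thick:10 \
  -mp0 lvm-thick:100,mp=/data \
  -mp1 lvm-thick:50,mp=/backup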
 
Code:
#  pveversion -v
proxmox-ve: 5.1-42 (running kernel: 4.13.16-2-pve)
pve-manager: 5.1-51 (running version: 5.1-51/96be5354)
pve-kernel-4.13: 5.1-44
pve-kernel-4.13.16-2-pve: 4.13.16-47
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.4.98-2-pve: 4.4.98-101
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.76-1-pve: 4.4.76-94
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.35-1-pve: 4.4.35-77
ceph: 12.2.4-pve1
corosync: 2.4.2-pve4
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-18
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-15
pve-cluster: 5.0-25
pve-container: 2.0-21
pve-docs: 5.1-17
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-2
qemu-server: 5.0-25
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9
 
As of the latest update, it is now possible to move containers from one storage to another (but the CT must be powered off).
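
If you prefer the CLI, the equivalent seems to be pct move_volume (a sketch based on my reading of the docs; my.lvm is a placeholder for the target storage, and each volume gets moved individually):

Code:
# stop the container first, then move each volume to the thick LVM storage
pct shutdown 124
pct move_volume 124 rootfs my.lvm
pct move_volume 124 mp0 my.lvm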