[SOLVED] Move virtual disk from VM to CT

vDan
May 15, 2021
Hello all
I tried to move a disk that was originally used in a VM (ID 210) to a CT (ID 170). The disk is a logical volume in the LVM volume group data2.
Now the CT does not start, and Proxmox gives the following error message:
Code:
run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "170"
__lxc_start: 2027 Failed to initialize container "170"
TASK ERROR: startup for container '170' failed
I did the following steps (summarized as one command block below):
  1. Detached the disk by deleting the line scsi1: local-lvm-data2:vm-210-disk-1,size=8G from the VM's config file /etc/pve/qemu-server/210.conf
  2. Renamed the disk: lvrename data2/vm-210-disk-1 data2/vm-170-disk-0
  3. Added the disk to the CT's config file /etc/pve/lxc/170.conf with the following line: mp0: local-lvm-data2:vm-170-disk-0,mp=/mnt/data,size=8G
  4. Updated the disk tag: lvchange --deltag vm-210-disk-1 --addtag vm-170-disk-0 data2/vm-170-disk-0
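For reference, the same sequence collected as one block (the config edits are shown as comments):
Code:
# detach: remove this line from /etc/pve/qemu-server/210.conf:
#   scsi1: local-lvm-data2:vm-210-disk-1,size=8G
lvrename data2/vm-210-disk-1 data2/vm-170-disk-0
# attach: add this line to /etc/pve/lxc/170.conf:
#   mp0: local-lvm-data2:vm-170-disk-0,mp=/mnt/data,size=8G
lvchange --deltag vm-210-disk-1 --addtag vm-170-disk-0 data2/vm-170-disk-0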

Did I miss something?
 
Difficult to say. Note that a virtual disk used as a CT mount point must contain exactly one partition with an ext4 filesystem on it. Maybe the disk was partitioned differently when it was used in the VM.
In a case like this I would rather use the GUI to detach the disk from the VM, create a new disk in the CT (as a mount point), and then transfer the content.
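A rough CLI equivalent of that route could look like this (a sketch; the VM/CT IDs and storage name are taken from this thread, everything else should be checked against your setup):
Code:
# detach the disk from the VM; the volume then shows up as "unused"
qm unlink 210 --idlist scsi1
# create a fresh 8G mount-point volume for the container
pct set 170 -mp0 local-lvm-data2:8,mp=/mnt/data
# then transfer the content, e.g. with rsync between the mounted filesystems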
 
Hello, sorry for the late reply.
Difficult to say. Note that a virtual disk used as a CT mount point must contain exactly one partition with an ext4 filesystem on it. Maybe the disk was partitioned differently when it was used in the VM.
Good point. I checked: the disk contains a single partition covering the whole disk, with an ext4 filesystem. So this should not be the problem.
In a case like this I would rather use the GUI to detach the disk from the VM, create a new disk in the CT (as a mount point), and then transfer the content.
Yes, normally I would do it that way, but the old and the new disk would be on the same physical hard drive, and there is no space to duplicate the files. Also, the copy process would be very slow since it has to read from and write to the same device...

Any other suggestions?
 
Good point. I checked: the disk contains a single partition covering the whole disk, with an ext4 filesystem. So this should not be the problem.
Not only that: for containers it is expected that there is no partition table at all, i.e. a partitioning tool shows the partition-table type as "loop".
Yes, normally I would do it that way, but the old and the new disk would be on the same physical hard drive, and there is no space to duplicate the files. Also, the copy process would be very slow since it has to read from and write to the same device...

Any other suggestions?
It is not necessary to copy file by file; you can copy the whole partition using e.g. `dd`, but, per the above, only the partition itself and not the "overhead" (essentially, the partition table).
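As a sketch (device names are placeholders), copying only the partition's content onto a bare, table-less target would be something like:
Code:
dd if=/dev/sdX1 of=/dev/sdY bs=1M status=progress   # partition in, whole disk out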
 
Not only that: for containers it is expected that there is no partition table at all, i.e. a partitioning tool shows the partition-table type as "loop".
Hello. I moved the (test) disk back to the VM, deleted the partition table there with wipefs -a -f /dev/sdx, and then gave the disk the label "loop" using parted. Was this correct? I'm not an expert when it comes to partition tables. If I move the disk back to the CT, I see the same problem as before: the CT does not start and the error message appears in the Proxmox GUI.
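For reference, the commands were roughly (with /dev/sdx standing in for the actual device):
Code:
wipefs -a -f /dev/sdx          # wipe the partition-table signatures (GPT)
parted /dev/sdx mklabel loop   # set the disk label type to "loop"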
 
I moved the (test) disk back to the VM, deleted the partition table there with wipefs -a -f /dev/sdx, and then gave the disk the label "loop" using parted. Was this correct?

Rather not, but since I do not know the details of your VM disk I cannot say for sure.

Note that partition table type "loop" means "no partition table exists". In other words:

In the case of a filesystem on a disk:
- without a partition table (which is identical to partition-table type "loop"), the filesystem data begins at offset 0x00000000
- with a partition table (usually type "gpt"), and assuming there is just one partition, the filesystem data typically begins at offset 0x00100000, and the partition table itself is located at offset 0x00000000.

If you now use a GPT-partitioned disk as a "loop" disk, the partition-table data is interpreted as filesystem data, which does not succeed.

Example (independent of whether the disks are virtual or physical): let's assume /dev/sda has a "gpt" partition table with one partition and a filesystem in it, while /dev/sdb has no partition table, just a filesystem:
In the first case you can mount /dev/sda1
in the second case you can mount /dev/sdb

Trying to mount /dev/sda itself will fail. This is probably what happened when the VM's disk was assigned to the container.
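One quick way to check which case you have (a sketch; replace the device name):
Code:
parted -s /dev/sdX print   # shows "Partition Table: gpt" or "Partition Table: loop"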
 
@Richard: Thank you so much for this very helpful, detailed information!!
Trying to mount /dev/sda itself will fail. This is probably what happened when the VM's disk was assigned to the container.
This is exactly what happened. I tested the whole procedure with a test disk that has no partition table and an ext4 filesystem starting at block 0, like /dev/sdb in your example. Result: moving the disk from the VM to the CT worked!

Now the only remaining question is how to modify a disk with one partition covering the whole disk so that this also works.
Below is the partition table of the test disk (similar to the disk I want to move). Do you need more information?

Code:
~# fdisk -l /dev/sdb
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8202091F-E45B-4F9B-BFDC-82C6A6E0D6EB

Device     Start     End Sectors  Size Type
/dev/sdb1   2048 2095103 2093056 1022M Linux filesystem
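So the filesystem data starts at sector 2048, i.e. 2048 × 512 bytes = 1 MiB (offset 0x00100000) into the disk; that is exactly the "overhead" described above that must not end up on the container disk.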

It would be very helpful if you could write down the steps I need to take to reach my goal,
like deleting the partition table, changing the start block of the filesystem, etc.

Thank you very much in advance!
 
What I would do:

- create a new "Mountpoint" disk for the container (same size as the original disk)
- connect both the VM (original) disk and the container (new) disk as loop devices (via losetup) on the Proxmox host. Important: neither the container nor the VM may be running!
- let's say the VM disk is now /dev/loop1 and the container disk /dev/loop2
- then run on the Proxmox host:
Code:
dd if=/dev/loop1p1 of=/dev/loop2 bs=1M

When finished, disconnect both devices; then you can start the container.
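A sketch of the full sequence (the LV paths are assumptions based on this thread; verify the actual names with lvs first):
Code:
# attach both LVs as loop devices; -P scans for partitions (creates /dev/loop1p1)
losetup -P /dev/loop1 /dev/data2/vm-210-disk-1   # VM disk (GPT + 1 partition)
losetup -P /dev/loop2 /dev/data2/vm-170-disk-0   # new container mount-point disk
# copy only the partition content onto the bare container disk
dd if=/dev/loop1p1 of=/dev/loop2 bs=1M status=progress
# detach both loop devices, then start the container
losetup -d /dev/loop1 /dev/loop2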
 
Hi @Richard, thanks for the instructions! I freed some space on the disk and tried your way, and of course it worked.
Maybe it's worth mentioning that in my case the dd command had to look as follows, so that only the data and not the partition table is copied:
Code:
dd if=/dev/loop0 of=/dev/loop1 bs=512 skip=2048 status=progress
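(With bs=512, skip=2048 skips the first 2048 sectors of the source, i.e. 1 MiB: the GPT partition table plus the alignment gap, matching the partition start sector 2048 shown by fdisk earlier.)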
Again thx and best regards!
 
