Moving a RAW disk format file from NFS to LVM block device

donty

Hi
OK, I have now finally sorted out my storage plans and have a 1.5TB LVM store and a 500GB NFS share on a 3-node cluster on 2.1.

I have a bunch of VMs on 1.9 and intend to backup/restore them to the new 2.1 cluster. However, we previously used qcow2 as the disk format, so I have to follow what feels like a cumbersome process:
1) Restore a VM backup tgz on 2.1
2) Convert qcow2 to raw format (a rough sketch of this step is below)
3) Create a new disk for the VM in the 2.1 GUI
4) dd the converted raw disk onto the LVM block device, using the new disk as the target, e.g.:

dd if=vm-101-disk-1.raw of=/dev/vlgroup01/vm-101-disk-1 bs=1024k
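
For step 2, the conversion can be done with something like qemu-img; the file names below just follow this example, so adjust them to wherever the restored qcow2 image actually ended up:

Code:
# convert the restored qcow2 image into a plain raw file (file names are illustrative)
qemu-img convert -f qcow2 -O raw vm-101-disk-1.qcow2 vm-101-disk-1.raw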

However, I get through 4.2GB and it bombs out saying there is no space left on the target, even though in this instance the raw file was 32GB and the VM had a new 33GB disk on the new LVM storage (I checked the location and name with lvdisplay just to be sure!):

dd: writing `/dev/vlgroup01/vm-101-disk-1': No space left on device
3971+0 records in
3970+0 records out
4163366912 bytes (4.2 GB) copied, 7.56579 s, 550 MB/s

Any ideas what might cause this? I mounted and booted from the converted raw disk fine, so it doesn't appear to be a source file problem.
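
For reference, a rough way to compare the source file size with the target device size (paths as in the example above; blockdev will complain if the target is not actually an active block device):

Code:
# size of the source raw file in bytes
ls -l vm-101-disk-1.raw
# size of the target block device in bytes
blockdev --getsize64 /dev/vlgroup01/vm-101-disk-1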

I have created, installed, run, live-migrated and backed up VMs on the LVM storage, so I know that works as expected, but I just can't get the raw disk migration to work.

Thanks in advance for any help.
 
Hi,
can you post the output of the following commands?
Code:
ls -lsa vm-101-disk-1.raw
lvdisplay /dev/vlgroup01/vm-101-disk-1
Udo
 
Thanks for replying; it always helps to have an informed response like that. It helped show me what I think was the problem. Note that I had previously deleted the original and recreated the drive as a 32GB raw disk, so this information reflects that drive.

The output of

lvdisplay

was:

--- Logical volume ---
LV Path /dev/vlgroup01/vm-101-disk-1
LV Name vm-101-disk-1
VG Name vlgroup01
LV UUID sbDotb-xIQ8-eABm-gsRa-ZfoS-2TAX-0WU18Y
LV Write Access read/write
LV Creation host, time vp8, 2012-09-06 13:36:46 +0100
LV Status NOT available
LV Size 32.00 GiB
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto

So, reading it more carefully than my earlier cursory glance, I see it is showing as NOT available. I ran

vgchange -a y

to make the volume available, and then I was able to dd successfully.

For whatever reason the GUI didn't do that when the disk was created and added to the VM, but perhaps that is a step I need to do myself when working from the command line?
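
For reference, this is roughly what I mean; lvchange can also activate just the single volume rather than the whole group (treat these as a sketch and check them against your own setup):

Code:
# activate every logical volume in the volume group ...
vgchange -a y vlgroup01
# ... or activate only the one volume
lvchange -a y /dev/vlgroup01/vm-101-disk-1
# confirm it now shows as ACTIVE
lvscan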

NB: When I tried

ls -lsa vm-101-disk-1.raw

it responded with:
ls: cannot access vm-101-disk-1.raw: No such file or directory

so I tried with the path as:

ls -lsa /dev/vlgroup01/vm-101-disk-1

and that now shows:

0 lrwxrwxrwx 1 root root 7 Sep 6 13:45 /dev/vlgroup01/vm-101-disk-1 -> ../dm-4

It seems to be copying now, so it is probably solved, but I will follow up when everything is confirmed.

Thanks!
 
Hi,
in your "dd" example there was also no path - that is the reason why I wrote "ls -lsa vm-101-disk-1.raw". It means you must be in the same folder as the file.
Or you use the full path (for a normal local disk): "ls -lsa /var/lib/vz/images/101/vm-101-disk-1.raw".
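
For example (assuming the raw file ended up under the default local storage path; adjust if yours is different):

Code:
# either change into the folder holding the raw file first ...
cd /var/lib/vz/images/101
ls -lsa vm-101-disk-1.raw
# ... or give dd the full path to the source file
dd if=/var/lib/vz/images/101/vm-101-disk-1.raw of=/dev/vlgroup01/vm-101-disk-1 bs=1024k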

Udo
 
Thanks Udo, it wasn't meant as a slight; it was simply a record of events to make sure it is helpful to others trying something similar. Your help is very much appreciated; it all worked fine.

It would be nice to have a web GUI way to move a disk from one storage type to another, e.g. from local or NFS to a block device. It would certainly help ease the migration process to be able to queue up and throttle a sequence of moves.
 
