'dd' raw drives to lvm is slow. Is there a better way?

rzr
Oct 13, 2016
I have a Proxmox installation.
I'd like to import a VMDK into an LVM volume, and the procedure is:
$ qemu-img convert -p -f vmdk "IE11 - Win7-disk1.vmdk" -O raw "IE11 - Win7-disk1.raw"
and then:
$ dd if="IE11 - Win7-disk1.raw" bs=1M|pv| dd of=/dev/mapper/pve-vm--103--disk--1
And that works fine.

But there are only 8.8 GB of data on the 127 GB virtual drive:
$ qemu-img info "IE11 - Win7-disk1.raw"
image: IE11 - Win7-disk1.raw
file format: raw
virtual size: 127G (136365211648 bytes)
disk size: 8.8G


And when I "dd" the raw image, all 127 GB get copied when only 8.8 GB of them matter.
And that's painfully slow.

Is there a better way?
 
Add the bs parameter to dd, and set it to 64k. Example...

Code:
dd if="IE11 - Win7-disk1.raw" --bs=64k|pv| dd --bs=64k of=/dev/mapper/pve-vm--103--disk--1
 

The problem is not the block size; I actually use bs=1M. I'll update the question.
The problem is that it's copying 127 GB instead of 8.8 GB.
 
You can add 'conv=sparse' to your dd command.
From the manpage:
sparse      try to seek rather than write the output for NUL input blocks
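
Applied to the command from the question, that could look like the sketch below (just a sketch using the paths from the original post; note that conv=sparse only skips writing blocks that are all zeros, so whatever already sits in those areas of the target LV is left untouched):

Code:
dd if="IE11 - Win7-disk1.raw" bs=1M conv=sparse of=/dev/mapper/pve-vm--103--disk--1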
 
Hi,
yes, there is a better way: don't use pv!

Here is a test (the dd itself makes no sense here - it's only to show the data transfer):
Code:
root@pve-c:~# dd if=/var/lib/vz/images/105/vm-105-disk-1.qcow2 bs=1M  of=/dev/sata/vm-105-disk-1
3702+1 records in
3702+1 records out
3882688512 bytes (3.9 GB) copied, 109.594 s, 35.4 MB/s
root@pve-c:~# dd if=/var/lib/vz/images/105/vm-105-disk-1.qcow2 bs=1M | pv | dd of=/dev/sata/vm-105-disk-1
3702+1 records in28.9MiB/s] [  <=>  ]
3702+1 records out
3882688512 bytes (3.9 GB) copied, 183.975 s, 21.1 MB/s
3.62GiB 0:03:03 [20.1MiB/s] [  <=>  ]
7583376+0 records in
7583376+0 records out
3882688512 bytes (3.9 GB) copied, 188.68 s, 20.6 MB/s
Udo
 

You forgot the bs in the second, slower case: the dd after pv falls back to the 512-byte default block size. pv does not limit the throughput significantly.
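
For comparison, a sketch of that second pipeline with bs=1M set on both dd invocations (paths taken from the test above; actual throughput will still depend on the storage):

Code:
dd if=/var/lib/vz/images/105/vm-105-disk-1.qcow2 bs=1M | pv | dd bs=1M of=/dev/sata/vm-105-disk-1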