[SOLVED] Import/convert/export raw images to ZFS volume

TimDrub

Member
Mar 2, 2015
Hi, I just installed PVE 3.4 and since I had a hard disk failure I took the chance to set it up directly using the new ZFS support. Installation went just fine and everything works as expected. Only I am not able to move my old qcow2 images to the new ZFS partition.

The ZFS partition is mounted as /zfs01, but if I create a new VM there are no files visible on that partition that I could substitute.

I found out that a newly created VM's disk will become a ZFS volume that I can see with

Code:
zfs list -t volume

I already converted my images to raw, but now I have no clue how I am supposed to create a ZFS volume from my raw image.
The other way around would also be very interesting: how do I create e.g. a qcow2 from a ZFS volume?

Any help is highly appreciated.
Thanks in advance.

Tim
 
You can simply dd the raw file onto the zvol:

Code:
dd if=your_raw_file.raw of=/dev/zvol/<pool>/<volume> bs=1M

The other way:

Code:
dd if=/dev/zvol/<pool>/<volume> of=file.raw bs=1M
qemu-img convert -f raw -O qcow2 file.raw file.qcow2
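
If you want to skip the intermediate raw file, qemu-img should also be able to read from and write to the zvol block device directly (untested sketch, pool/volume and file names are placeholders, and the zvol must be at least as large as the source image):

Code:
qemu-img convert -f qcow2 -O raw old_image.qcow2 /dev/zvol/<pool>/<volume>
qemu-img convert -f raw -O qcow2 /dev/zvol/<pool>/<volume> new_image.qcow2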
 
Thanks for the quick reply, that got me going.

One thing to add for other ZFS newbies is that you have to create the ZFS volume first, e.g.

Code:
zfs create -V 5gb <Pool>/<Volume>
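
Tip: the volume should be at least as large as the image's virtual size; if in doubt, check that first (the filename here is just an example):

Code:
qemu-img info my_old_image.qcow2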

You can then use the dd command above to convert the image.
To use it in a Proxmox VM you have to edit the conf file in /etc/pve/qemu-server/...
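
For reference, a disk line in /etc/pve/qemu-server/<vmid>.conf looks roughly like this (a sketch; the storage name "local-zfs", VM ID 100 and the virtio bus are just example values, adjust them to your setup):

Code:
virtio0: local-zfs:vm-100-disk-1,size=5G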
 
A ZVOL is a block device, like /dev/sda for example.

It is not a filesystem like ext3 or ext4, so you can dd a raw image directly to the block device just as you would with a normal HDD.
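
You can see the device nodes under /dev/zvol; each zvol shows up as a symlink to a /dev/zd* block device (the pool/dataset path here is just an example):

Code:
ls -l /dev/zvol/rpool/data/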
 
Thanks.
So... I have an ova/ovf/vmdk... I want to put that into Proxmox. I found out how to do it by converting it to a qcow2, creating the VM, then moving the new qcow2 to overwrite the default one... How do I do it with a zvol? I'm guessing convert to raw, then dd it to the zvol?
Any suggestions before I screw it up and get annoyed for screwing it up? :)
 
What I would do after your 'overwrite the qcow2' step is to use the GUI to move the disk to ZFS storage. That can even be done online while the VM is running. Also, for best results, make sure your ZFS dataset has compression on.
 
> What I would do after your 'overwrite the qcow2' step is to use the GUI to move the disk to ZFS storage. That can even be done online while the VM is running. Also, for best results, make sure your ZFS dataset has compression on.

By default the root pool has compression on and it's inherited to all datasets, correct?
 
I use an alternate ZFS pool for my data, so I just had it in my notes to set compression on for it. The root pool is probably configured that way already, as you say.
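
If anyone wants to check or set it by hand, something like this should do (replace "tank" with your pool or dataset name):

Code:
zfs get -r compression tank
zfs set compression=lz4 tank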
 
Note for others in this situation: qcow2 files can be converted more quickly using an NBD device.

Pre-build your VM in the GUI, then overwrite the ZFS device with dd:

Code:
modprobe nbd max_part=63
qemu-nbd -c /dev/nbd0 /mnt/oldstorage/images/100/vm-100-disk-1.qcow2
dd if=/dev/nbd0 of=/dev/zvol/rpool/data/vm-100-disk-1 bs=1M
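
One addition: once the dd has finished, the NBD device can be detached again:

Code:
qemu-nbd -d /dev/nbd0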
 
Not sure if I need to start a separate topic for this, but this is the problem I'm experiencing:
I have 2 machines, Virtual Environment 4.3-12/6894c9d9 (source) and Virtual Environment 4.3-14/3a8c61c7 (destination).

On the source machine some VMs are running and the disks are stored on a ZFS volume in raw format. These raws were earlier converted from qcow2 files.

However, when I transfer the raw files to the destination:
source: cat /tank/images/102/vm-102-disk-1.raw | pv -s 16G | nc 192.168.10.13 1234
destination: nc -l -p 1234 -t2 | pv -s 960G | dd of=/dev/tank/vm-102-disk-1 bs=2048M

I cannot boot the VM: no bootable device.

What is wrong? Does dd somehow mess up the bytes?
Help appreciated!
 
Opening a new thread would be best, do not hijack.

Why do you not use backup & restore to migrate the VM? It takes longer, but it is the supported and proven way to go, including the configuration file etc.

Do I read your output correctly that you sync a 16G disk to a 960G one and expect everything to work magically? This is highly dependent on the guest OS.
 
> Why do you not use backup & restore to migrate the VM?

Because I wanted to use zvols instead of raw image files.

What does work (or at least for the first VM I tried)
- make backup
- rsync backup file to new host
- restore backup
- dd raw file to zvol
- start vm

works so far...

> Do I read your output correctly that you sync a 16G disk to a 960

Apparently I mixed up different lines of my script when posting here.
This is about a VM having 1x 16GB and 3x 960GB disks / raw files.

In the past I very often used dd to move disks between different hosts. It is very fast in combination with nc. This is indeed way faster than using the available backup method.

So far, things seem to work with the backup. However, it bothers me that the dd method does fail...
 
> > Why do you not use backup & restore to migrate the VM?
>
> Because I wanted to use zvols instead of raw image files.
>
> What does work (or at least for the first VM I tried)
> - make backup
> - rsync backup file to new host
> - restore backup
> - dd raw file to zvol
> - start vm
>
> works so far...

You can just restore to ZFS without any problems:

[screenshot: restore.png]
> > Do I read your output correctly that you sync a 16G disk to a 960
>
> Apparently I mixed up different lines of my script when posting here.
> This is about a VM having 1x 16GB and 3x 960GB disks / raw files.
>
> In the past I very often used dd to move disks between different hosts. It is very fast in combination with nc. This is indeed way faster than using the available backup method.
>
> So far, things seem to work with the backup. However, it bothers me that the dd method does fail...

It is of course faster, but having Proxmox VE back up everything and restore it on the other node is one (long) line of code for each, and you're done. Let Proxmox VE do the heavy lifting.
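
For anyone wondering what those lines look like, a rough sketch (VM ID 102, the storage names and the archive timestamp are placeholders, check the actual dump filename on your system):

Code:
# on the source node: create the backup
vzdump 102 --mode stop --compress lzo --storage local

# copy the archive to the destination node
scp /var/lib/vz/dump/vzdump-qemu-102-<timestamp>.vma.lzo root@destination:/var/lib/vz/dump/

# on the destination node: restore onto the ZFS storage
qmrestore /var/lib/vz/dump/vzdump-qemu-102-<timestamp>.vma.lzo 102 --storage <your-zfs-storage>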
 
For those who googled this post (Proxmox VE 5.3-6).

If you need to migrate a physical disk (a real hardware machine) to a VM ZFS volume:

1. Make sure you have enabled ZFS storage in the Proxmox UI
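
On the shell this can be double-checked with pvesm; the ZFS storage should show up as active:

Code:
pvesm status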

2. Create a VM placeholder in the Proxmox UI
- CPU and memory should be chosen approximately the same as on the real hardware
- HDD size should be the same as or slightly bigger than the real HDD/SSD
- check the volume name of the VM hard disk (see below for a CLI way): VM -> Hardware -> vm:vm-VM_ID-disk-DISK_ID,size=DISK_SIZE_GB
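
The same information is also available on the CLI (VM_ID is a placeholder):

Code:
cat /etc/pve/qemu-server/VM_ID.conf   # shows the disk line of the VM
zfs list -t volume                    # lists the zvols backing VM disks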

3. Clone the HDD to a machine which has access to the Proxmox server first, assuming you have a hot-plug USB bay or can connect the disk to an internal SATA/SAS bus. It takes 2-3h over a USB 2.0 clone hub.

Code:
dd if=/dev/HDD_BLOCK_NAME | pv -s HDD_REAL_SIZE_GB | dd of=IMAGE_PATH/IMAGE_NAME bs=1M

- HDD_BLOCK_NAME: name of your HDD
- HDD_REAL_SIZE_GB: real size of your HDD, which can be checked with smartctl -i /dev/HDD_BLOCK_NAME
- IMAGE_PATH: path to your future image
- IMAGE_NAME: any preferable name for your future disk image

4. Sync your IMAGE_NAME to the Proxmox server. It takes 1-2h over Gigabit Ethernet.

Code:
rsync -az --append --progress IMAGE_NAME user@proxmox-ve-hostname-or-ip:

Make sure the user you log in as has write permission.

5. On the Proxmox server, sync IMAGE_NAME to the vm-VM_ID-disk-DISK_ID ZFS volume. It takes 1-1.5h on a Dell R820 with a 12.0TB ZFS pool.

Code:
dd if=IMAGE_NAME | pv -s HDD_REAL_SIZE_GB | dd of=/dev/ZFS_POOL_NAME/vm-VM_ID-disk-DISK_ID bs=1M
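
Alternatively, if you do not need the intermediate image file, steps 3-5 can be combined by streaming the disk straight over SSH onto the zvol (same placeholders as above, plus PROXMOX_HOST for the server's address; just a sketch, and the source disk must not be in use while copying):

Code:
dd if=/dev/HDD_BLOCK_NAME bs=1M | pv -s HDD_REAL_SIZE_GB | ssh root@PROXMOX_HOST 'dd of=/dev/zvol/ZFS_POOL_NAME/vm-VM_ID-disk-DISK_ID bs=1M'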

6. Start up your freshly baked VM and adjust the memory size or number of CPUs if you want.

TOTAL time: 4-6.5h to migrate a 120G HDD to a VM.
 
