Live-converting vmdk to qcow2

virtRoo

Hi all,

Has anyone ever tried live-converting a VM with vmdk disks (moved from ESXi) to qcow2 while the VM is still running?

We currently have a large number of VMs, each with at least one or even multiple large virtual disks (>500GB avg.), running on a production VMware vSphere cluster.

Recently we've been thinking of giving Proxmox a shot to see if there's a better conversion approach that can reduce downtime, without using 'qemu-img convert' or virt-v2v, both of which require bringing the VMs offline.

From what we've found so far:
  • Proxmox allows creating a VM with VMDK (a standalone disk as monolithicSparse)
  • Proxmox seems to be able to successfully boot a VM off a '*.vmdk' descriptor and the associated actual flat '*-flat.vmdk' file transferred straight from a VMFS datastore on ESXi. At least the VM could boot into CentOS 7 without any apparent file system problems (a quick sanity check with 'qemu-img info' is sketched below).
  • Proxmox allows a virtual disk to be moved (with the 'Move disk' feature) between qcow2, raw and vmdk while the VM is running
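
For what it's worth, here's roughly how we sanity-checked a descriptor/flat pair before booting; the paths and file names below are just examples from our test setup, not prescriptions:

    # On the Proxmox host, with both files copied from the VMFS datastore
    # into the same directory (here the default 'local' storage path):
    ls -lsh /var/lib/vz/images/101/
    # -> vm-101-disk-1.vmdk   (small text descriptor)
    # -> testvm1-flat.vmdk    (actual data; name must match the descriptor)

    # qemu-img reads the descriptor and should report the real virtual size:
    qemu-img info /var/lib/vz/images/101/vm-101-disk-1.vmdk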

All of this seems to work in our test environment. That said, we know this would be a bold move, so I was wondering if anyone has had similar experience before?

Also, how reliable is it to run a VM on a VMDK disk created by Proxmox? Are there any limitations?

Thanks in advance.
 
Hello virtRoo,

I'm searching for the same answer, but I'm curious: how did you start the VMs transferred from ESXi in vmdk format on Proxmox?

I tried it too, but never had any success. First I built a VM on Proxmox (with vmdk), then I copied my ESXi *.vmdk over it.

But Proxmox only has one *.vmdk, while ESXi has two (the descriptor is one file and the flat file is the second).

How do I solve this?

Best regards,
Kai
 
Hi Kai,

This assumes you have vCenter with Storage vMotion support and that the VM to be converted (let's call it testvm1) runs Linux.

!!! ALWAYS DO A FULL BACKUP OF EVERYTHING FIRST !!!

Disclaimer: All information here is provided on an as-is basis. Use it at your own discretion!

My steps are:
  1. Set up an NFS share that serves as a common bridge between ESXi and Proxmox. The NFS share is hosted on a separate server (e.g. CentOS 7) as a temporary storage pool (a minimal setup sketch follows after these steps).
  2. On Proxmox, pre-create a new VM on the NFS share/datastore with a vmdk disk (any placeholder size):
    • Virtual disk controller as VirtIO SCSI with vmdk (or VMware PVSCSI, see notes below)
    • Virtual NIC as VirtIO (or VMware vmxnet3, see notes below) (whether to retain the original MAC address is up to you)
  3. On Proxmox, detach the virtual disk from the new VM and delete the blank vmdk.
  4. On vCenter, live migrate the VM's disk(s) to said NFS datastore with Storage vMotion.
  5. Shut down the VM once the virtual disk(s) have been migrated to the NFS datastore, which is also mounted on Proxmox.
  6. On the NFS datastore, locate the VM's folder that was migrated by VMware. Move/rename the vmdk descriptor file into the new VM's folder created by Proxmox, matching Proxmox's virtual disk naming convention (e.g. testvm1.vmdk to vm-101-disk-1.vmdk). Also move the actual flat vmdk file (e.g. testvm1-flat.vmdk) into the same folder; its name can remain intact as long as it matches the name defined in the vmdk descriptor file. These steps shouldn't take much time since everything stays within the same NFS datastore (see the sketch after this list).
  7. On Proxmox, re-attach the previously detached virtual disk as vmdk by clicking 'Edit' and then 'OK' on the VM's unused disk in the GUI. Once re-attached, the actual disk size will be re-detected.
  8. Start the VM and make any adjustments needed in the guest OS (e.g. uninstalling VMware Tools, installing the QEMU guest agent, fixing network configs, etc.).
  9. Then use the 'Move disk' feature in Proxmox to live convert/migrate from vmdk to qcow2, or to any other format/final storage destination (CLI equivalents for steps 6-9 are also sketched below). Maybe do a file system check after the final conversion if you can afford the extra downtime.
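
To flesh out step 1, here's a minimal sketch of the NFS bridge; the server IP (10.0.0.50), the export path and the name 'nfs-bridge' are made-up examples, not from a production config:

    # On the CentOS 7 NFS server: export a directory for the migration.
    yum install -y nfs-utils
    mkdir -p /export/migration
    echo '/export/migration *(rw,sync,no_root_squash)' >> /etc/exports
    systemctl enable --now nfs-server
    exportfs -ra

    # On ESXi: mount the export as an NFS datastore named 'nfs-bridge'.
    esxcli storage nfs add --host=10.0.0.50 --share=/export/migration --volume-name=nfs-bridge

    # On Proxmox: add the same export as a storage pool for disk images.
    pvesm add nfs nfs-bridge --server 10.0.0.50 --export /export/migration --content images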
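
And a rough sketch of the file shuffle in steps 6-7 from the Proxmox shell, carrying over the 'nfs-bridge' name from above and assuming the new VM got VMID 101:

    # Step 6: Storage vMotion drops the VM's folder at the datastore root,
    # while Proxmox keeps its disk images under images/<vmid>/.
    cd /mnt/pve/nfs-bridge
    mv testvm1/testvm1.vmdk      images/101/vm-101-disk-1.vmdk
    mv testvm1/testvm1-flat.vmdk images/101/testvm1-flat.vmdk   # keep the name the descriptor references

    # Step 7: re-attach the disk and let Proxmox re-detect its real size.
    qm set 101 --scsi0 nfs-bridge:101/vm-101-disk-1.vmdk
    qm rescan --vmid 101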
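
Finally, the CLI equivalents of steps 8-9, assuming a CentOS 7 guest and a qcow2-capable target storage called 'local' (both assumptions, adjust to your setup):

    # Step 8, inside the guest: swap VMware Tools for the QEMU guest agent.
    yum remove -y open-vm-tools            # or vmware-uninstall-tools.pl for the bundled tools
    yum install -y qemu-guest-agent
    systemctl enable --now qemu-guest-agent

    # Step 8, on the Proxmox host: tell Proxmox the VM has a guest agent.
    qm set 101 --agent 1

    # Step 9: live-convert the disk to qcow2 on its final storage
    # (the CLI counterpart of 'Move disk'; --delete drops the old vmdk).
    qm move_disk 101 scsi0 local --format qcow2 --delete 1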
Extra notes:
  • Steps for Windows should be similar but with minor differences.
  • If the guest OS runs into a kernel panic upon first boot after the vmdk descriptor file has been mounted/attached to the VM, consider changing the virtual disk controller to VMware PVSCSI. I did experience kernel panics with VirtIO SCSI on a CentOS 7 VM, but CentOS 6 was fine for some reason.
  • Set the vNIC to VMware vmxnet3 if there's a compatibility problem, or if you intend to keep the same MAC address and don't want to redo the network configs in the guest OS. I haven't fully tested this, but I think setting the vNIC to VirtIO will make it appear as a new device to the guest even if the MAC address stays intact.
  • One of the major concerns in my initial post is how reliable VMware PVSCSI and/or VMware vmxnet3 would be in the long run.
  • Depending on your requirements and existing setup, it may be more efficient to make this NFS share the final storage destination on Proxmox's side, instead of setting up a separate server as a temporary pool. And if your source ESXi doesn't support Storage vMotion, transferring the VM files/disks over to Proxmox via NFS is probably more reliable than using SCP from ESXi to Proxmox; SCP doesn't seem to support sparse files anyway.

I've tested this approach quite a few times, but only in a test environment, so I was wondering whether other people have had similar experience before. I'll still have to give it a try sooner or later, but based on prior experience I was quite skeptical about how KVM platforms in general handle live storage migration; so even though things do seem to work with this conversion approach, I just wanted to hear opinions from others.

Hope this helps.
 
I've just re-tested a live conversion on Proxmox 5.3-11 and now it fails, although in-place mounting a vmdk descriptor file along with the flat vmdk file still works fine.

More testing is still in progress.
 
It turns out that vmdk descriptor file version 3 doesn't play well with QEMU, while versions 1 and 2 are fine. Although Proxmox can still in-place mount a version 3 vmdk descriptor, live conversion won't work, and the actual size of the flat vmdk file is detected incorrectly.

As per https://pubs.vmware.com/vsphere-6-5/index.jsp?topic=/com.vmware.vddk.pg.doc/vddkTasks.8.4.html , the workaround is simply to change the vmdk version to 1 (or 2) and comment out any references to Changed Block Tracking (CBT).
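
For illustration, here's roughly what that edit looks like on a descriptor for a ~500GB VMFS disk that had CBT enabled (names and sizes are made up):

    # Disk DescriptorFile
    version=1
    # ^ was: version=3
    CID=fffffffe
    parentCID=ffffffff
    # changeTrackPath="testvm1-ctk.vmdk"  <- CBT reference, commented out
    createType="vmfs"

    # Extent description
    RW 1048576000 VMFS "testvm1-flat.vmdk"

    # The Disk Data Base
    ddb.adapterType = "lsilogic"
    ddb.thinProvisioned = "1"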
 
