[SOLVED] Migrating VMs from ESXi to PVE (ovftool)

linux

New Member
Dec 14, 2020
14
3
3
Australia
G'day from Australia,

We've got a predicament that we'd appreciate some guidance with.

4x managed VMs run on 2x ESXi hosts (2+2), and there's no supported pathway to migrate them (ie. ditch ESXi in favour of PVE).

Across the remainder of our infrastructure, ESXi has been gladly purged and our migrations were successful, though we managed those servers and they were primarily CentOS based rather than Debian. The software stacks on everything else are documented and were rebuilt - these managed servers run proprietary/closed software instead.

Looking further into this, ESXi has some nice caveats which make migrating away quite difficult. Aside from this, we've never hit a VMware freemium limit.
ESXi is running under free licences, with versions in the 6.x branch. We do have spare storage/compute to facilitate a stepping stone in-flight if needed.

The best success we had was disabling the usbarbitrator service on the hosts, then copying out the VMDKs. Despite using a WD USB SSD connected directly to the boxes, VMware rather kindly crippled the speed: 1-2 days per VM to copy them out. At that point we'd have to tolerate some logging discrepancies, though the proprietary systems should be smart enough to stitch it all together, as the 4x VMs operate in a cluster. At this stage, we can afford some hassles if it kills off ESXi. One further problem was the VMFS version on the USB SSD being tricky to read externally.

Use-case of the VMs is quite critical, so we've reconfigured our systems to allow 2 of them to be pulled at a time, though 2 of them will cause headaches which the others won't (ie. 1x VM is the cluster master). Those 2x VMs can be migrated over a weekend to minimise interruptions. We've looked into free software that claims to allow exactly what we want (export of the VMDKs, likely at a machine-code level similar to Drive Snapshot, to allow an external conversion over to qcow2, then import into PVE once the underlying nodes have been restructured and reinstalled) - they were lacklustre, although I wouldn't say that we've attempted every option on the market.

What's the most logical way to approach this? Our love for VMware is nil, so we're not looking to take up support coverage. Management plans over the VMs don't include migrations, and the costing to have them rebuilt is monumental. This is the sum total of our ESXi workloads now, so we're eager to put them to bed and be 100% PVE-based.

If there's a tried and trusted system or approach for this problem, we'd gladly schedule a window and attempt it. PVE on our other hardware has been brilliant to work with, especially in comparison to ESXi. If nothing else, a stable and user-friendly Web UI sets PVE well ahead of its competitors, having used a bunch of HVs over the years. The latest release is exciting, and it's clear that the many years of work are culminating into a diverse and capable hypervisor that's a worthy challenger against the others.

Happy to answer questions if they help to paint a clearer picture of where we are.

Many thanks in advance for giving this some consideration. :)

Cheers,
LinuxOz
 

Dominic

Proxmox Staff Member
Staff member
Mar 18, 2019
1,202
135
68
Hi!

1-2 days per VM to copy them out
How big are those VMs?
How fast is writing to the USB SSD? Is copying via network maybe faster?

Can you get the ESXi and Proxmox VE hosts into a network? If yes then you could take a look at the ovftool of VMware.
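For a standalone host, that can be as simple as the following (the host name and VM name below are placeholders for your environment):

```shell
# Export a VM from a networked ESXi host into a local OVA archive.
# "esxi-host" and "MyVM" are placeholders - adjust to your setup.
ovftool vi://root@esxi-host/MyVM /var/tmp/MyVM.ova
```

The tool prompts for the password and streams the disks over the network, so no datastore-level access on the host is needed.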
 

linux

New Member
Hi Dominic,

Many thanks for your prompt reply!

How big are those VMs?
How fast is writing to the USB SSD? Is copying via network maybe faster?
Approx 250GB disk, 8GB RAM and 2 cores per VM. So average?

Writing to the USB SSD is very slow, which the internet suggests is due to ESXi and not accidental.

ESXi has a limited "busybox" shell which restricts otherwise incredibly useful tools like rsync. There might be a method that we've not tried?
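For instance, with SSH enabled on the host, in theory we could pull a disk over the network rather than via USB - something like (host name and datastore path are illustrative):

```shell
# Copy a flat VMDK straight off the ESXi datastore over SSH instead of USB.
# "esxi-host" and the datastore path are placeholders - adjust to your layout.
scp root@esxi-host:/vmfs/volumes/datastore1/myvm/myvm-flat.vmdk /staging/
```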

Can you get the ESXi and Proxmox VE hosts into a network? If yes then you could take a look at the ovftool of VMware.

We're trying to avoid merging the two hypervisor sets together, and are keeping Proxmox on the "new network".

ovftool looks to be our best option, and we may need to deal with the traffic that's involved. Thanks for the pointer!

Do you happen to know which version (and where to get it) would suit best for ESXi 6.5? One is on 6.7, the other's 6.5.

Cheers,
LinuxOz
 

ness1602

Active Member
Oct 28, 2014
147
20
38
Serbia
These machines are not big - why don't you export them with ovftool directly to your computer? Something like the following (the host and VM names are placeholders):
"C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" vi://root@esxi-host/MachineName c:\ova\machine.ova
This is really the fastest way.
 

Dominic

Proxmox Staff Member
Do you happen to know which version (and where to get it) would suit best for ESXi 6.5?
Sorry, haven't tried that yet.

We're trying to avoid merging the two hypervisor sets together, and are keeping Proxmox on the "new network".
With 250GB you could also try to download it via the Browser: Just click the export button in the GUI as outlined in the Wiki.
 

linux

New Member
Thank you Dominic & ness1602 for your help, it's much appreciated.

We hit some issues, but were able to migrate the VMs over using ovftool and an intermediary device (for storage capacity).

Ended up configuring an NFS share from that intermediary device, configured within PVE to allow the qm importovf jobs to run.

(Note for the wiki: You need to remove any attached DVD/ISO from the VM in ESXi, BEFORE it will allow ovftool to export it elsewhere)
(Further, at least if you're not performing the migration on the PVE host directly, local-lvm needs to be appended to the end of your example)
(As for my previous question about which ovftool version to use for ESXi 6.5, I'm glad to report that we used ovftool 4.4.0 against ESXi 6.5 and 6.7)
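For anyone who finds this later, the rough shape of what we ran was as follows (the server address, share path, VMID, and storage names are examples from our setup - adjust for yours):

```shell
# On the PVE node: add the intermediary device's NFS share as a storage
# (the storage name, server, and export path here are illustrative).
pvesm add nfs migration-nfs --server 192.0.2.10 --export /export/ovas --content images

# Import the exported VM; note the storage argument ("local-lvm") at the end,
# which needs to be there when you aren't using the wiki's exact layout.
qm importovf 101 /mnt/pve/migration-nfs/machine.ovf local-lvm
```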

Another interesting observation: despite the SCSI default on all of our PVE hosts/clusters being LSI 53C895A, the 4x imported VMs have been assigned that default, whereas our other (made-on-PVE, non-imported) instances are all VirtIO SCSI (which is pre-selected when creating a new VM, despite not being the default option). This felt worth mentioning given the potential for it to be a noteworthy bug. PVE 6.3 may have fixed that?

Cheers,
LinuxOz
 

linux

New Member
@Dominic - I've marked this as Solved and will create a new thread for the live migration problem. Thank you kindly!

However, please see the top of reply #6 for some tidbits that would be worth adding into your Wiki (re: ovftool). :)
 

Dominic

Proxmox Staff Member
However, please see the top of reply #6 for some tidbits that would be worth adding into your Wiki (re: ovftool). :)
Thank you :) I added all 3 (compatibility, missing storage, detaching disks) to the Wiki
 

linux

New Member
Thank you :) I added all 3 (compatibility, missing storage, detaching disks) to the Wiki

Fantastic! Thanks for that, Dominic. Really appreciate your help with this and the other thread. :)

My apologies however - I made a mistake and meant to write ESXi 6.5 and 6.7 above (I've edited my reply now).

Also, what are your thoughts on the SCSI default behaviour that I explained in reply #6? I'm not sure if it's a bug or desired.
It might not be worth the time to investigate, though if you can replicate it in PVE 6.3 then perhaps it's worth logging it to triage & fix?
 

Dominic

Proxmox Staff Member
Happy to help :)

The wiki thing is fixed.

About SCSI: If you create a VM on the command line/API without specifying the parameter "scsihw", then the VM configuration file will not contain that property. If the VM configuration does not mention the property "scsihw" then the default value "lsi" is assumed. You can search the qm man page for "scsihw".

Note that "lsi" in the configuration is short for LSI 53C895A in the GUI. Translation happens here in the code by the way.

A quick guess would be that the import function does not set scsihw and then the default "lsi" is used.
I actually don't know why we use a different default in the VM creation wizard than on the API.
But the same happens, for example, for the assigned memory. 2048 is the default in the VM creation wizard but on the API it is 512.
So it should not be a bug. If you need another controller, you can change it in the GUI in the "Hardware" menu of a VM.
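To illustrate (VMID 200 and the names below are just examples):

```shell
# Create a VM on the CLI without specifying scsihw...
qm create 200 --name demo --memory 512 --net0 virtio,bridge=vmbr0

# ...then the property is absent from the config file, so the "lsi"
# default (LSI 53C895A in the GUI) applies:
grep scsihw /etc/pve/qemu-server/200.conf   # no match

# The GUI creation wizard, by contrast, sets it explicitly - equivalent to:
qm set 200 --scsihw virtio-scsi-pci
```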
 

linux

New Member
Thanks for fixing up my mistakes, and for the explanations. That all makes plenty of sense - glad to know it's just a difference.

Appreciate the man and git links too - interesting to see the translations & more of the inner workings (what a comparison to ESXi's closed source!).

It's interesting that there's a difference in defaults between the GUI and the API. Sounds like the GUI might be "ahead" of the API with regard to defaults? If you were to diff the relevant lines of code that specify those defaults, I wonder if you could find a point-in-time where they aligned.

For example, a 512MB memory allocation sounds like something I'd expect on an older hypervisor, with 2GB being more of a modern-day default.
 

Dominic

Proxmox Staff Member
Sounds like the GUI might be "ahead" of the API with regard to defaults?
I think you could say so. We try to keep the API as stable as possible.

I wonder if you could find a point-in-time where they aligned.

For example, a 512MB memory allocation sounds like something I'd expect on an older hypervisor, with 2GB being more of a modern-day default.
This change happened in May 2020
https://git.proxmox.com/?p=pve-manager.git;a=commit;h=954c7dd8cc670074e8bfc0ad90fecdbc0aaeaf0e
 
