G'day from Australia,
We've got a predicament that we'd appreciate some guidance with.
4x managed VMs run on 2x ESXi hosts (2+2), and there's no supported pathway to migrate them (i.e. to ditch ESXi in favour of PVE).
Across the remainder of our infrastructure, ESXi has been gladly purged and our migrations were successful, though those servers were under our own management and were primarily CentOS-based rather than Debian. The software stacks on everything else are documented and were rebuilt; these managed servers run proprietary/closed software instead.
Looking further into this, ESXi has some nice caveats which make migrating away quite difficult. Aside from this, we've never hit a VMware freemium limit.
ESXi is running under free licences, with versions in the 6.x branch. We do have spare storage/compute to facilitate a stepping stone in-flight if needed.
The best success we had was disabling the usbarbitrator service on the hosts, then copying the VMDKs out to a WD USB SSD connected directly to the boxes. Even so, VMware rather kindly crippled the transfer speed: 1-2 days per VM to copy them out. At that point we'd have to tolerate some logging discrepancies, though the proprietary systems should be smart enough to stitch it all together, as the 4x VMs operate as a cluster. At this stage, we can afford some hassles if it kills off ESXi. A further problem was the VMFS version on the USB SSD being tricky to read externally.
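For reference, the copy-out we attempted looks roughly like the sketch below (datastore names, VM paths and the staging hostname are placeholders, and it assumes SSH is enabled on the hosts via the host client). Cloning with vmkfstools keeps the descriptor and flat file consistent, and scp'ing straight to a staging box would sidestep the VMFS-on-USB readability problem entirely:

```shell
# On each ESXi host, with SSH enabled:
/etc/init.d/usbarbitrator stop   # free the USB SSD from passthrough handling

# Clone the disk with its descriptor intact (paths are placeholders):
vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
           -d thin /vmfs/volumes/usb-ssd/vm1.vmdk

# Alternatively, copy straight to a staging box over the network and
# avoid reading VMFS externally at all:
scp /vmfs/volumes/datastore1/vm1/vm1.vmdk \
    /vmfs/volumes/datastore1/vm1/vm1-flat.vmdk root@staging:/tank/exports/
```

The network copy was slow for us too, but it removes the VMFS dependency on the receiving end.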
The VMs' use-case is quite critical, so we've reconfigured our systems to allow 2 of them to be pulled at a time, though 2 of them will cause headaches the others won't (e.g. 1x VM is the cluster master). Those 2x VMs can be migrated over a weekend to minimise interruptions. We've looked into free software that claims to do exactly what we want (a block-level export of the VMDKs, similar to Drive Snapshot, allowing an external conversion to qcow2 and an import into PVE once the underlying nodes have been restructured and reinstalled). The tools we tried were lacklustre, although I wouldn't say we've attempted every option on the market.
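For what it's worth, the conversion path we have in mind is plain qemu-img plus PVE's own import tooling rather than third-party software. A sketch only; the filenames, VMID 100 and the local-lvm storage name are placeholders:

```shell
# On a staging box with qemu-utils installed, convert the copied-out VMDK
# (descriptor and -flat file kept together in one directory) to qcow2:
qemu-img convert -p -f vmdk -O qcow2 vm1.vmdk vm1.qcow2

# On the reinstalled PVE node, attach the disk to a pre-created empty VM:
qm importdisk 100 vm1.qcow2 local-lvm
```

After the import, the disk appears as an unused disk on VMID 100 and can be attached and set as the boot device from the PVE web UI.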
What's the most logical way to approach this? Our love for VMware is nil, so we're not looking to take up support coverage. Our management plans for the VMs don't include migrations, and the cost of having them rebuilt is monumental. These are the sum total of our ESXi workloads now, so we're eager to put ESXi to bed and be 100% PVE-based.
If there's a tried and trusted approach for this problem, we'd gladly schedule a window and attempt it. PVE on our other hardware has been brilliant to work with, especially in comparison to ESXi. Having used a bunch of hypervisors over the years, a stable and user-friendly web UI alone sets PVE well ahead of its competitors. The latest release is exciting, and it's clear that many years of work are culminating in a diverse and capable hypervisor that's a worthy challenger to the others.
Happy to answer questions if they help to paint a clearer picture of where we are.
Many thanks in advance for giving this some consideration.
Cheers,
LinuxOz