Already tested again with ESXi 6.5.0 (Build 4887370): it works when the VM is offline, but if the VM is powered on it fails, even when using live migration.
This is a misunderstanding of the terminology.
You can't migrate a running VM. Live migration means that you import a (powered-off) VM, but it starts immediately and then migrates in the background to Proxmox. I haven't had very good experience with this, since you need a very stable and fast network connection between all the servers and components.
For minimum downtime I suggest using shared storage (like NFS) for the VM on VMware: power off, edit the VMDK, and power on the VM on Proxmox. Then you can migrate to your "normal" Proxmox storage in the background.
(You will find a how-to in the wiki.)
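A rough sketch of that flow, assuming the NFS datastore is also configured as a Proxmox storage (here called `nfs-vmware`); all IDs, storage names and file names are hypothetical, and the wiki how-to is authoritative:

```shell
# All IDs, storage names and file names below are hypothetical examples.
# 1. Create an empty VM shell on the Proxmox side:
qm create 101 --name migrated-vm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
# 2. Make the existing VMDK (on the shared NFS storage) known to Proxmox,
#    attach it, and power the VM back on:
qm disk rescan --vmid 101
qm set 101 --scsi0 nfs-vmware:101/vm-101-disk-0.vmdk --boot order=scsi0
qm start 101
# 3. With the VM running again, move the disk to the "normal" local storage
#    in the background:
qm disk move 101 scsi0 local-zfs --delete 1
```

Note that the rescan only picks up images under the storage's `images/<vmid>/` directory, so the VMDK may need to be moved or renamed to match Proxmox's layout first.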
And I think it's very slow.
This isn't a backup tool, of course. It's meant to be a one-time migration from ESXi to Proxmox. The "live" migration does work if you RTFM: you shut down the source machine first. The importer will start the machine up on Proxmox during the migration, so the machine is running on the new host while the migration completes. In practice it doesn't run very fast, though, because all of the I/O is going across the wire.
I tried another backup tool and it works flawlessly.
I didn't say live migration didn't work for me. I was the one who made the mistake of leaving the VM powered on on the VMware side.
The aim of the live import is to have as little downtime as possible, since the VM is fully functional during the migration process.
After powering off the VM in VMware, live migration works, but as I said, it's very slow compared with the other third-party tool I used.
I have 10Gb networking and it's still bordering on too slow to be usable. The migration is faster if you don't do the live migration. I'm leaning toward just scheduling some downtime for each VM. I have about 100 of them left to do and nothing that I can't have down for an hour or so.
But it doesn't make any sense if you don't have a decent 10Gb network, because with slow networking (a) the migration is very slow and (b) the VM isn't usable during the migration process.
And I have a complicated use case here, with two disks in the VM on the VMware side.
What are you using for live migration?
The live migration started the VM, but since the second disk was not yet available, the system crashed, complaining that a mount point could not be read.
Anyway! I am using another tool, for both live and non-live migration. It works better for me.
VinChin Backup & Recovery.
Yes: migrating/importing a 1TB disk takes around 6-7 hours over a 10G connection. I think the limit is my SAS disks.
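Those numbers are consistent with a disk bottleneck rather than the network; rough arithmetic (assuming 1 TB in about 6.5 hours):

```python
# Rough effective throughput for a 1 TB transfer taking ~6.5 hours.
size_bytes = 1 * 10**12            # 1 TB
seconds = 6.5 * 3600               # ~6.5 hours
mb_per_s = size_bytes / seconds / 10**6
print(f"{mb_per_s:.0f} MB/s")      # ~43 MB/s, far below the ~1250 MB/s a 10G link can carry
```

So the 10G link would be mostly idle; sequential throughput in that range points at the storage (or the importer's single-stream I/O path) as the limit.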
create full clone of drive (vmware9524:ha-datacenter/datastore1/HERO02/HERO02.vmdk)
transferred 0.0 B of 584.0 MiB (0.00%)
transferred 6.0 MiB of 584.0 MiB (1.03%)
transferred 12.0 MiB of 584.0 MiB (2.05%)
qemu-img: error while reading at byte 0: Input/output error
TASK ERROR: unable to create VM 101 - cannot import from 'vmware9524:ha-datacenter/datastore1/HERO02/HERO02.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -t none -f vmdk -O raw /run/pve/import/esxi/vmware9524/mnt/ha-datacenter/datastore1/HERO02/HERO02.vmdk zeroinit:/dev/zvol/rpool/data/vm-101-disk-0' failed: exit code 1
https://forum.proxmox.com/threads/n...sxi-based-virtual-machines.144023/post-648115
New feature is really nice. I was able to import a VM from vSphere, but when I try to boot it up it blue-screens and is unable to boot. Any suggestions?
All I can say is it's not that slow for me.

I have some feedback on the ESXi import feature.
Migrated some VMs already and it's working absolutely perfectly. Sure, you have to adjust the VM afterwards etc., but that's normal and expected.
However, what utterly annoys me is the migration speed itself: 20GB takes around 1.5 hours. It's so slow...
That's the only reason why I'm migrating the old way with the ovftool. It's like 10x faster.
And I have to migrate some really big VMs with 1TB+ drives.
However, it's a very cool and nice feature; the only issue is that it's simply too slow.
Source: vCenter Server 6.0 (3-node cluster) with iSCSI SAN (2x10G + multipath on each node)
Destination: AMD Genoa PVE 8.2 cluster
And there is nothing in the path slower than 10G: the old ESXi servers have 10G, and the new Genoa ones have at least 25G.
It looks to me like the import wizard simply doesn't work in an optimal way, or something like that, since the ovftool is like 10x faster.
Cheers
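For reference, the "old way" mentioned above might look roughly like this (host names, VM name and IDs are hypothetical examples; the VM must be powered off for the export):

```shell
# Hypothetical host names, VM name and IDs throughout.
# 1. Export the VM from ESXi/vCenter with VMware's ovftool:
ovftool 'vi://root@esxi-host/MyVM' /tmp/export/
# 2. On the Proxmox host, import the resulting OVF into a new VM:
qm importovf 101 /tmp/export/MyVM/MyVM.ovf local-zfs
qm start 101
```

After the import you still have to attach a NIC and adjust controller/boot settings, just like with the wizard.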
Yeah, it's all fine here. It's extremely strange anyway; to me it seems like a Windows VM vs. Linux VM thing.
Make sure none of your management interfaces (including both the VMware and Proxmox ones) are at 1Gb. Ideally, have the management interfaces for Proxmox and VMware on the same subnet.
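A quick way to verify the negotiated link speed on the Proxmox side (the interface name here is an example); on ESXi, `esxcli network nic list` shows the same information per NIC:

```shell
# Check what speed the NIC actually negotiated (name 'eno1' is an example):
ethtool eno1 | grep -i speed
```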