How to migrate Proxmox VE VMs to VMware ESXi

manusamir

New Member
Jul 9, 2015
Hi all,
I have a question:
In a clustered Proxmox environment with 4 nodes, where all VMs are stored on different LUNs on two clustered Synology storages connected via iSCSI, how can I retrieve the individual disks of the individual VMs?
The whole system works (the iSCSI targets are logged in correctly, migration works fine, the HA cluster is up, etc.).
Under /dev/mapper/ or /dev/lunx/ I can see the individual disks, e.g. vm-xxx-disk-1 -> ../dm-7, but ONLY IF THE VM IS RUNNING.

I can copy vm-xxx-disk-1 with dd, for example, to a shared disk with an output file like vm.qcow, vm.raw or vm.img. NOTE: in the PVE web interface, under the storage view for lunxxx..., I see that the VM disks are all on LVM and in raw format.
If dd is the only way to copy out a VM disk, which format should I use for the output? When I create a VM I always use qcow2, so I don't understand the raw format I see in the storage view (raw? qcow2? img?).

If that part is correct, is qemu-img convert the right choice to convert from raw (or img, or qcow2) to vmdk? I also have StarWind, if that is better.
I always use qemu-img convert from vmdk to qcow2 and it works fine, but I have never done the opposite.

So my goal is to copy a PVE VM disk out of PVE (in a setup like this) and import the disk into VMware in vmdk format (that part is not a problem).
I think many users don't know how to do this.

Thanks in advance
Emanuele
 
The easiest thing would be to use the built-in 'Move disk' function to move the disk to, e.g., NFS storage and then convert it via qemu-img.

To migrate the servers, it may be even easier to create the VM in VMware, boot a live Linux in both, and then transfer the hard disk bit-by-bit via netcat over the network directly into VMware. If you create a scripted, PXE-enabled live boot, this is a breeze.
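For the first approach, a minimal sketch of the conversion step, assuming the disk was moved to a file-based storage as qcow2 (the storage path and VM ID are placeholders, not taken from this thread):
Code:
# convert the moved disk image to vmdk; -p shows progress
qemu-img convert -p -f qcow2 -O vmdk /mnt/pve/nfs-storage/images/100/vm-100-disk-1.qcow2 /mnt/pve/nfs-storage/vm-100-disk-1.vmdk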
 
Hi,
are your iSCSI LUNs the physical volumes for LVM?

Check with "lvs", "vgs" and "pvs".

With lvs you will see that the disks of offline VMs don't have the 'a' attribute, which means they are not active.

Simply activate them with "lvchange -a y /dev/VGNAME/LVNAME" and you can run your dd ;-)

Udo
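To illustrate that suggestion, a minimal sketch (the VG and LV names are placeholders):
Code:
# list logical volumes; an 'a' in the Attr column means the LV is active
lvs
# activate the disk of a stopped VM so its device node appears
lvchange -a y /dev/lun3/vm-100-disk-1
# ... copy or convert it, then deactivate it again
lvchange -a n /dev/lun3/vm-100-disk-1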
 
Thank you for all the replies!
I tried to move the disk with the built-in "Move disk" feature, but unfortunately the "Destination" combo box is greyed out and I cannot select the destination storage for the copy.

I tried several times with dd and qemu-img convert, but with no result :(

I'm gonna try
 
Hi,
I guess it's greyed out because the disk is too big? Or is the destination storage not marked as a storage type for disk images?

An LV is the raw image of a VM hard disk, so you can copy its content with:
Code:
dd if=/dev/storagevg/vm-100-disk-1 of=/mnt/bigstorage/vm-100-disk-1.raw bs=1M
After that, run qemu-img to convert it to the desired format.
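For example, a sketch of that conversion step (the paths and names are placeholders):
Code:
# convert the raw copy to vmdk for VMware; -p shows progress
qemu-img convert -p -f raw -O vmdk /mnt/bigstorage/vm-100-disk-1.raw /mnt/bigstorage/vm-100-disk-1.vmdk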

Udo
 

You can use qemu-img directly on the LVM volume to convert it. The dd step is not necessary.

One remark: The VM should be stopped to export a consistent state.
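A minimal sketch of that direct conversion, assuming the LV is active and the VM is stopped (names are placeholders):
Code:
# read the raw data straight from the LVM volume and write a vmdk, no intermediate raw file
qemu-img convert -p -f raw -O vmdk /dev/storagevg/vm-100-disk-1 /mnt/bigstorage/vm-100-disk-1.vmdk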
 
IMHO you cannot live-migrate storage yet. You also cannot copy a machine while it is running without a snapshot (technically you can, of course, but the copy will not be consistent).
 
Storage migration works like a charm (if you have disabled iothreads ;-) ).
The question is which version is running. We got no real facts from manusamir about versions or the destination storage, and I assume an old installation without current patches.

The output of the following would be helpful:
Code:
pveversion -v
cat /etc/pve/storage.cfg
Udo
 
pveversion -v
Code:
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-156
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


cat /etc/pve/storage.cfg
Code:
lvm: lun4
    vgname lun4
    content images
    shared

dir: local
    path /var/lib/vz
    content images,iso,vztmpl,rootdir,backup
    maxfiles 0

lvm: lun3
    vgname lun3
    content images
    shared

lvm: lun5
    vgname lun5
    content images
    shared

dir: Snapshots
    path /var/lib/vz/snapshot
    content images,iso,vztmpl,rootdir,backup
    maxfiles 1
    nodes mikppx01,mikppx03,mikppx04,mikppx02

dir: Stock
    path /mnt/stock
    shared
    content images,iso,vztmpl,rootdir,backup
    maxfiles 1
    nodes mikppx01,mikppx03,mikppx04,mikppx02
 
NOTHING. This time I powered down the VM, activated the LV with lvchange, and cloned /dev/lun3/vmxxx-disk-1 with dd to my shared disk in .raw format. The dd finished successfully and the resulting file is 21 GB, which matches my PVE VM.
After that I ran qemu-img convert -f raw -O vmdk Z:\vm-30-09.raw Z:\vm-30-09.vmdk and imported the vmdk into the datastore of an identical VM on VMware 6. The result is almost the same: "no bootable disk found" (the disk in VMware is ide0:0 and the BIOS settings are OK). How can I do it with netcat instead?
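One possible cause, as an assumption rather than something confirmed in this thread: a vmdk written by qemu-img defaults to a 'hosted' sparse format that ESXi datastores often cannot use directly. A common workaround is to write a streamOptimized vmdk and then clone it into a native ESXi disk with vmkfstools on the host (the paths and datastore names below are placeholders):
Code:
# on the conversion side: write an ESXi-friendly vmdk
qemu-img convert -p -f raw -O vmdk -o subformat=streamOptimized vm-30-09.raw vm-30-09.vmdk
# on the ESXi host: clone it into a native thin-provisioned disk inside the datastore
vmkfstools -i /vmfs/volumes/datastore1/vm-30-09.vmdk /vmfs/volumes/datastore1/vm-30-09-esxi.vmdk -d thin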
 
What if I try the netcat approach? I create a VM with a disk of the same size and boot it with an Ubuntu 16 live CD (the only one I have). This is my example:

1) Create a virtual machine with a disk about the same size or larger than your source (not smaller)

Pick an arbitrary port (9001 in this example) and set up your firewall or VSE to allow that port to the target machine.

2) Boot that new VM into a rescue environment or use a live cd.

3) Use the following commands:

On the VM: nc -l -p 9001 | dd of=/dev/sda ---> but /dev/sda fails to open?? (see the sketch after this list)

On your source machine: dd if=/dev/sda | nc <target-IP> 9001

4) Wait a long time… I averaged around 15Mbps from my test machine to my new VM; it ranged from 30Mbps down to 7Mbps. I'm sure that had more to do with my network than anything. Still, this can take a while.
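A guess at the "/dev/sda failed to open" error (not confirmed here): in the live environment the target disk may have a different name, or dd may simply need root. A minimal sketch of the transfer, with placeholder device names and a placeholder IP address; the exact nc flags depend on the netcat variant on the live CD:
Code:
# on the receiving VMware VM, booted from the live CD: check the real disk name first
lsblk
# then listen on the chosen port and write the incoming stream to that disk (as root)
nc -l -p 9001 | sudo dd of=/dev/sda bs=1M
# on the source machine: stream the disk to the receiver's IP address
dd if=/dev/sda bs=1M | nc 192.0.2.10 9001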
 
As a last-ditch effort, you could use the VMware Converter application inside the VMs to accomplish the move. Just migrate them to the vCenter, etc.
 
OK, so install VMware Converter inside the PVE guest and migrate it out to my shared disk?
If you wish, but you can actually migrate it to the vCenter itself. If you aren't using vCenter (you really should), then yes, migrate to a shared storage location. You can also migrate it directly to a particular ESXi host.
 
