Failing to migrate from vSphere to Proxmox

Microkernel

New Member
Mar 31, 2019
So I've spent countless hours now trying to migrate my vSphere VMs to Proxmox, but haven't been successful.

My vSphere VM directory has the following files:
  • VMNAME.nvram (9KB)
  • VMNAME.vmdk (1KB)
  • VMNAME.vmsd (0KB)
  • VMNAME.vmx (3KB)
  • VMNAME.vmxf (1KB)
  • VMNAME.727c2cda.hlog (2KB)
  • VMNAME-aux.xml (1KB)
  • VMNAME-ctk.vmdk (321KB)
  • VMNAME-flat.vmdk (5242880KB)
  • vmware.log
  • vmware-[11-16].log
What I've tried so far:
  1. Copy all of those files to a Windows Server VM.
  2. Install VMWare Workstation on this VM.
  3. Run "C:\Program Files (x86)\VMware\VMware Workstation\vmware-vdiskmanager.exe" -r VMNAME.vmdk -t 0 VMNAME-pve.vmdk
  4. This returns two new files which are VMNAME-pve.ctk.vmdk (321KB) and VMNAME-pve.vmdk (2723776KB).
  5. Create a new Proxmox VM with ID 225, with no media under the OS tab, Type: Linux, a 6GB SCSI hard disk, 2GB of memory, and a VirtIO network interface.
  6. Copy the files returned by vdiskmanager from the Windows Server VM to /var/lib/vz/images/225 on my Proxmox host.
  7. Run qemu-img convert -f vmdk VMNAME-pve.vmdk -O qcow2 VMNAME-pve.qcow2, which successfully creates a 2.6G qcow2 file.
  8. Delete the .vmdk files on the Proxmox host.
  9. Run qm rescan.
  10. Booting the VM now returns: Boot failed: Not a bootable disk.
I've also tried using vdiskmanager to convert to a preallocated virtual disk, and I've tried the Rescue CD method; both result in the same error on boot. I consider myself fairly knowledgeable with vSphere, having used it for years to run dozens of VMs, but I've decided to move my cluster to Proxmox for the sake of open source and to try and learn something new. I'm at a loss here, though, and would greatly appreciate any help.
 
Hi,
try copying
VMNAME-flat.vmdk (5242880KB)
to /var/lib/vz/images/225/vm-225-disk-1.vmdk

then run qm rescan.

If this works, use Move Disk to convert the disk from VMDK to qcow2.
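A rough sketch of those steps, assuming the flat file sits in your current directory and the VM ID is 225:
Code:
cp VMNAME-flat.vmdk /var/lib/vz/images/225/vm-225-disk-1.vmdk
qm rescan --vmid 225
# the copied disk should now show up on VM 225 (attach it, e.g. as scsi0, if it's listed as unused)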
 
Hi,
what is the output of the following?
Code:
file /var/lib/vz/images/225/vm-225-disk-1.vmdk

Your disk was 5GB in size, is this correct?


Can you post the VM config?
Code:
qm config 225
What type of OS is inside the guest? Windows or Linux?

Udo
Thanks for your reply. The VM ID is now 333.
Output of file /var/lib/vz/images/333/vm-333-disk-1.vmdk is
Code:
vm-333-disk-1.vmdk: DOS/MBR boot sector

The disk size was 5GB in vSphere, but I set it to 6GB in Proxmox, just for good measure.

VM config:
Code:
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 512
name: KMS
net0: virtio=16:12:D4:D1:57:4E,bridge=vmbr1,tag=50
numa: 0
ostype: l26
scsi0: BANK_SSD:vm-333-disk-0,size=6G
scsihw: virtio-scsi-pci
smbios1: uuid=254af9ca-f7dc-430e-956d-3bb0bd1d9bcc
sockets: 1
vmgenid: 5030c7c2-efcb-43b0-b5b8-490f507d8bb4
Should I change scsi0 to BANK_SSD:vm-333-disk-1,size=5G maybe?

The guest OS is Debian Linux.
 
OK,
the file output (DOS/MBR boot sector) looks good.

Here is the mistake, in your VM config:
Code:
scsi0: BANK_SSD:vm-333-disk-0,size=6G
BANK_SSD:vm-333-disk-0 is not /var/lib/vz/images/333/vm-333-disk-1.vmdk, so the VM boots from that (still empty) LV instead of your imported image.
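If you want to check where a volume ID points, pvesm can resolve it:
Code:
pvesm path BANK_SSD:vm-333-disk-0
# should print something like /dev/BANK_SSD/vm-333-disk-0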

Please post your storage config:
Code:
cat /etc/pve/storage.cfg
Should I change scsi0 to BANK_SSD:vm-333-disk-1,size=5G maybe?
No, that's not the issue.
The guest OS is Debian Linux.
That should work fine.

Udo
 

My storage config is:

Code:
dir: local
   path /var/lib/vz
   content vztmpl,iso,backup

lvmthin: local-lvm
   thinpool data
   vgname pve
   content images,rootdir

lvm: BANK_SSD
   vgname BANK_SSD
   content images,rootdir
   nodes bank
   shared 0

nfs: HELIUM_ISO
   export /Elliot/ISO
   path /mnt/pve/HELIUM_ISO
   server *hostname*
   content iso
   options vers=3
 
OK,
you can do one of two things:
1. Rename the disk in the VM config (and modify your storage config to allow images on the local storage, though it should work without that):
Code:
scsi0: local:333/vm-333-disk-1.vmdk,size=5G
2. Or, preferred, copy the data into the existing LV:
Code:
dd if=/var/lib/vz/images/333/vm-333-disk-1.vmdk of=/dev/BANK_SSD/vm-333-disk-0 bs=1M
After that, start the VM and enjoy.
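For option 1, note that your local storage currently doesn't allow images; the adjusted /etc/pve/storage.cfg entry would look roughly like this:
Code:
dir: local
   path /var/lib/vz
   content vztmpl,iso,backup,images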

Udo
 

Code:
root@bank:/dev/BANK_SSD# dd if=/var/lib/vz/images/333/vm-333-disk-1.vmdk of=/dev/BANK_SSD/vm-333-disk-0 bs=1M
dd: error writing '/dev/BANK_SSD/vm-333-disk-0': No space left on device
3933+0 records in
3932+0 records out
4123271168 bytes (4.1 GB, 3.8 GiB) copied, 9.56675 s, 431 MB/s

The datastore usage in the UI is at 11%.
 
Hi,
what is the output of the following?
Code:
vgs
lvs
Udo
Code:
root@bank:/dev/BANK_SSD# vgs
  VG       #PV #LV #SN Attr   VSize   VFree  
  BANK_SSD   1   5   0 wz--n- 697.12g 621.12g
  pve        1   3   0 wz--n- 231.87g  16.00g
root@bank:/dev/BANK_SSD# lvs
  LV            VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-101-disk-0 BANK_SSD -wi-ao----  20.00g                                                   
  vm-102-disk-0 BANK_SSD -wi-ao----  15.00g                                                   
  vm-110-disk-0 BANK_SSD -wi-ao----  15.00g                                                   
  vm-150-disk-0 BANK_SSD -wi-ao----  20.00g                                                   
  vm-333-disk-0 BANK_SSD -wi-------   6.00g                                                   
  data          pve      twi-aotz-- 148.10g             0.00   0.05                           
  root          pve      -wi-ao----  57.75g                                                   
  swap          pve      -wi-ao----   7.00g
 
OK,
the device is not active: in your lvs output, vm-333-disk-0 has Attr -wi------- (no 'a' for active), unlike the other LVs. Activate it with
Code:
lvchange -a y /dev/BANK_SSD/vm-333-disk-0
and run dd again.
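To verify, lvs on that LV should now show an 'a' in the attribute bits:
Code:
lvs BANK_SSD/vm-333-disk-0
# Attr should read -wi-a----- once the LV is active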

Udo
 
Hi,
I just realized that you filled your root partition ("No space left on device"): the logical volume wasn't active, so the device node didn't exist and dd wrote into a new plain file at that path instead.

To fix this, stop VM 333, deactivate the LV, and remove the big file:
Code:
qm stop 333
lvchange -a n /dev/BANK_SSD/vm-333-disk-0
ls -lsa /dev/BANK_SSD/vm-333-disk-0

# if this is a plain file of around 4GB rather than a symlink to a dm device, remove it
rm /dev/BANK_SSD/vm-333-disk-0

# then activate the LV again
lvchange -a y /dev/BANK_SSD/vm-333-disk-0
ls -lsa /dev/BANK_SSD/vm-333-disk-0

# and repeat the dd command from above
dd ...
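Afterwards, df should confirm that the root filesystem has free space again:
Code:
df -h /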
Udo
 

Thanks! I'm slightly confused and not exactly sure what I did, but it's working now.
 
