Amazon is attempting to lock its customers into its EC2 offering. I'm going to show you one way to break that lock-in.
Amazon purposely makes it as difficult as possible to migrate virtual machines off of EC2 and into other formats. AWS REQUIRES that you invoke their command-line tools for export, and even then, they place multiple bureaucratic restrictions on the export, such that it is next to impossible to get your disk images off of Amazon.
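For reference, the sanctioned route is the VM export task in the AWS CLI, which looks roughly like this (the instance ID and bucket name below are placeholders):

    # The officially sanctioned export path; this is where the restrictions bite.
    # i-0123456789abcdef0 and my-export-bucket are placeholder values.
    aws ec2 create-instance-export-task \
        --instance-id i-0123456789abcdef0 \
        --target-environment vmware \
        --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-export-bucket,S3Prefix=exports/

It is this export task that AWS refuses to run in the cases described next.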
This is especially true if you launched your EC2 VM from the Amazon Marketplace: the EC2 export commands will complain that such an export is not possible, and Amazon explicitly prevents you from exporting. These restrictions make no sense when the Marketplace images are based on free software, but they are enforced nonetheless.
Well, Amazon is not the boss of me. Here is specifically what worked for me to get my VMs off of Amazon and into Proxmox.
First, install VMware ESXi 6.7.0 Update 3 (the hypervisor component of vSphere 6.7) onto a Proxmox VM. You must install this specific version, as it is the last ESXi release that supports the e1000 network driver, which is required to make this work. The default install will run for 60 days without a license, which is long enough to do this transfer. To set up the nested VM for your ESXi install, give it a large virtual SATA drive (not SCSI) to hold the disk image, and use the e1000 network driver as mentioned. From there, do the ESXi install as normal: boot from the ISO and install onto the SATA virtual drive.
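As a minimal sketch, creating that nested ESXi VM from the Proxmox shell looks something like this (the VM ID, sizes, storage name, and ISO filename are all placeholders; the same settings can be made in the web UI):

    # Placeholder values throughout; assumes nested virtualization is
    # enabled on the Proxmox host and the ESXi ISO is already uploaded.
    qm create 900 --name esxi-bridge --memory 16384 --cores 4 \
        --cpu host --machine q35 \
        --net0 e1000,bridge=vmbr0 \
        --sata0 local-lvm:500 \
        --cdrom local:iso/VMware-VMvisor-Installer-6.7.0.update03.iso \
        --boot order=ide2

The two load-bearing choices are --sata0 (ESXi will not see a VirtIO disk) and --net0 e1000 (the driver this whole method depends on).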
Next, use the latest version of VMware Converter to copy the live virtual machine to the ESXi instance, using "Remote Linux host" as the source type. You'll need root credentials on the AWS instance and on the ESXi instance you just set up.
Then you can export the virtual machine from your ESXi installation. I accessed the ESXi instance at the HTTP address shown on the ESXi virtual machine console, selected the VM to be exported, opened the Actions cog, and chose "Export with images". You will then be able to download an OVF containing the virtual machine.
Lastly, you can now load the exported OVF into Proxmox, run it, and do with it whatever you want.
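Proxmox ships a helper for exactly this step; something along these lines should work (the VM ID, filenames, and storage name are placeholders):

    # 123 and local-lvm are a placeholder VM ID and target storage.
    # If the export arrived as a single .ova, unpack it first:
    tar xf exported-vm.ova
    qm importovf 123 exported-vm.ovf local-lvm
    # importovf does not carry the NIC config over, so re-add one:
    qm set 123 --net0 virtio,bridge=vmbr0

If "Export with images" gave you loose .ovf and .vmdk files rather than a single .ova, skip the tar step.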
I'm sure that this procedure can be improved upon and further automated, but I at least wanted to leave some breadcrumbs for someone else to build a better process in the future. In particular, it should be possible to write a script that attaches to an arbitrary running remote Linux instance, detects the number and size of its local disks, and dd | gzip | dd's those remote drives onto a new local Proxmox instance with freshly created virtual disks. Implementation is left as an exercise for the reader. Such a feature would be a market breaker if someone at Proxmox decided to hack something like this together.
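To make that breadcrumb a little more concrete, the core of such a script would be a pipeline along these lines, run once per disk (the host name, device path, and target zvol are all hypothetical):

    # Hypothetical sketch: stream /dev/xvda from the EC2 instance into a
    # pre-created Proxmox zvol, compressed in flight. Run on the Proxmox
    # host; assumes root SSH access to the instance.
    ssh root@ec2-instance "dd if=/dev/xvda bs=1M | gzip -c" \
        | gunzip -c \
        | dd of=/dev/zvol/rpool/data/vm-123-disk-0 bs=1M

Note that imaging a mounted root filesystem this way risks an inconsistent copy; quiescing the instance as much as possible first would be the safer variant.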