Import from VMWare to Ceph

normic
Apr 17, 2018
Hello,
I've been using Proxmox for a while now, with several VMs and containers on a single machine.

Now I got access to new hardware which is set up with 3 nodes using Ceph. I'd like to import a VM from VMWare to this system.
I learned that I have to:
- create a VM
- copy and convert the vmdk to /var/lib/vz/images/VMID

Using qemu-img like so:
Code:
qemu-img convert -f vmdk ./debian-flat.vmdk -O qcow2 /var/lib/vz/images/100/debian.qcow2

But how do I do this with Ceph, since there is no /var/lib/vz/images... directory there?
And I think I have to use the RAW format.

It would be great if someone could shed some light on this.

Thanks in advance,
normic
 
Just got it working. It's quite simple - if one knows what to do.
Sometimes RTFM really helps ;)

Maybe it helps someone else out, so I'll write down my steps:
- create VM
- copy vmdk files to node
- convert vmdk to raw:
Code:
qemu-img convert -f vmdk myvmfile.vmdk -O raw myvmfile.raw
- copy converted file to Ceph-Storage and existing VM:
Code:
qm importdisk VMID myvmfile.raw NameOfCephpool

Switching from the unneeded empty disk to the imported one can be done via the GUI.
If someone has suggestions to simplify this - I'm willing to learn.
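One possible simplification: qm importdisk also accepts vmdk sources directly and converts them on the fly, so the intermediate raw file may be skippable. A hedged sketch, reusing the example names from the steps above:

```shell
# Import the vmdk straight into the Ceph pool; qemu-img converts to raw on the fly.
# VMID 100 and the pool name 'NameOfCephpool' are the example values from above.
qm importdisk 100 myvmfile.vmdk NameOfCephpool
```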

Linux VMs running, now to the tricky Windows stuff.
 
Old post but very relevant information.

I'd like to expand on this by adding in an NFS share to the process.

I restored my VMware images (backed up with Vembu) to an NFS share on the network. Vembu allows you to export RAW, which saves the conversion step - but the qemu-img command above is correct.

1. Create a placeholder VM to the specs of the backup you want to migrate to Prox. To keep the naming scheme consistent, I like to write down the disk name before detaching the disk that was created with the new VM. ('vm-100-disk-0')

2. Delete the placeholder disk in Ceph. (This step might not be necessary, but I do it just to keep things clean and tidy.)

Bash:
rbd list -p <name of Ceph pool selected when VM was created>
vm-100-disk-0

rbd rm <name of pool>/vm-100-disk-0

3. On the NFS share, rename the RAW image to 'vm-100-disk-0.img'.

4. In Prox, on the node where you created the placeholder VM, navigate to the NFS mount point where the file is located - cd /mnt/pve/<NFS>/..... (assuming here that you already have the NFS share mounted in Prox)

5. Import the image into the target Ceph pool:

Code:
qm importdisk 100 vm-100-disk-0.img <name of Ceph pool>

6. Wait for the copy to complete. Depending on the size of the image, network speed to the NFS share, and disk speeds, this might take some time, so be patient.

7. In the GUI, under the Hardware tab of the VM, you should now see at the bottom an 'Unused Disk' called <name of Ceph pool>:vm-100-disk-0.

8. Edit the disk and assign it as a SCSI device.

9. Under the Options tab, open Boot Order, enable the SCSI disk, and make it first in the boot order.

Now you can finally launch the VM and repeat the above steps for any additional VMs you want to migrate to Prox using Ceph storage.
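The command-line part of the steps above can be condensed into a short sketch. The VMID, pool name, and NFS path here are illustrative assumptions from this example - adjust them for your setup:

```shell
# Assumed example values - adjust for your setup.
VMID=100
POOL=ceph-pool                 # Ceph pool chosen when the placeholder VM was created
NFS=/mnt/pve/nfs-backups       # NFS mount point on the PVE node (hypothetical name)

# Step 2 (optional): remove the placeholder disk from Ceph.
rbd list -p "$POOL"            # note the disk name, e.g. vm-100-disk-0
rbd rm "$POOL/vm-${VMID}-disk-0"

# Steps 3-5: rename the RAW image and import it into the Ceph pool.
mv "$NFS/backup.raw" "$NFS/vm-${VMID}-disk-0.img"
qm importdisk "$VMID" "$NFS/vm-${VMID}-disk-0.img" "$POOL"
```

The final disk attachment and boot-order changes (steps 7-9) still happen in the GUI.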

HTH
 
Current PVE versions allow importing disks as part of VM creation; see man qm:

Code:
qm create <vmid> [OPTIONS]
Create or restore a virtual machine.
<vmid>: <integer> (1 - N)
The (unique) ID of the VM.

...

--sata[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>] [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>] [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>] [,rerror=<ignore|report|stop>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,ssd=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>] [,wwn=<wwn>]
Use volume as SATA hard disk or CD-ROM (n is 0 to 5). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an existing volume.

...

should work with vmdk images (e.g., passing in the absolute path to the image as exported) as well AFAIK
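For example, a one-step sketch (the VMID, storage name, and source path are illustrative assumptions; the import-from parameter needs a reasonably recent PVE, 7.2 or later AFAIK):

```shell
# Create the VM and import the exported vmdk in one step via import-from.
# 'ceph-pool' and the NFS source path are assumed example names.
qm create 100 --name imported-vm --memory 2048 --net0 virtio,bridge=vmbr0 \
  --scsi0 ceph-pool:0,import-from=/mnt/pve/nfs-backups/debian-flat.vmdk \
  --boot order=scsi0
```

The STORAGE_ID:0 syntax tells PVE to size the new volume from the source image instead of allocating a fixed size.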
 
Well, that's certainly an easier method ! Thanks for sharing, I'll give that a try.
 
