Move a VM without vzdump

cwoelkers

I am attempting to replace one of our hypervisors. It is not part of the cluster, though the new one will be. I was able to get all but one of the VMs that was running on it transferred to the other hypervisors via the backup, transfer, and restore method. But one of them is being problematic.
The problem occurs during the restore, with the following output:
Code:
** (process:27485): ERROR **: restore failed - got wrong block address - write bejond end
/bin/bash: line 1: 27484 Broken pipe             lzop -d -c /mnt/backup/dump/vzdump-qemu-252-2018_05_24-00_00_02.vma.lzo
     27485 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp27482.fifo - /var/tmp/vzdumptmp27482
This happens no matter how I run the backup (snapshot or stop mode, via the web UI or the console) or how I transfer it (restoring directly from NFS, or copying the dump to another node via SCP). The storage I am restoring to has plenty of free space for the VM, and the failure occurs on every hypervisor I try to restore to (there are three others).
I am now left with two choices, at least as far as I can see.
The first is to stop the VM, then copy the drive images and config to the new hypervisor. With any luck it will start right back up.
The second, and longer, option is to recreate the VM from scratch on another hypervisor. While the VM's OS does need to be upgraded, this isn't the time to do so.

Everything I have found via Google and other searches uses vzdump or the web interface to back up and then move the VM. I have yet to find a how-to, forum thread, or article on moving a VM without backing it up first.
So has anyone done a transfer without a backup who can help me out?
 
It looks like the restore process creates a target block device with a smaller size than required. What is your target storage? If the source archive is not corrupted, this might be a qmrestore bug. It's also possible to copy VM disks without using vzdump; I've done it many times. But then you need to do some manual work preparing the target storage and VM config.
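Before doing a manual move, it may be worth ruling out archive corruption first. A rough sketch of how to check (the archive path is taken from the error output above; whether your `vma` build accepts `-` for stdin is an assumption worth verifying with `vma help verify`):

```shell
# Test the lzop compressed stream for corruption
lzop -t /mnt/backup/dump/vzdump-qemu-252-2018_05_24-00_00_02.vma.lzo

# Verify the VMA container itself: decompress to stdout and pipe into vma
lzop -d -c /mnt/backup/dump/vzdump-qemu-252-2018_05_24-00_00_02.vma.lzo \
  | vma verify -v -
```

If both checks pass on the source node but the restore still fails everywhere, that points away from a transfer problem and toward the restore side.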
 
The target storage is local LVM with at least 200GB of available space, on different hypervisors. The VM's disk totals no more than 180GB, so the overall storage need should fit comfortably.
I could see the source archive being corrupted. That's why I tried backing up both to NFS (my main transfer storage, available to all hypervisors) and to the local backup area, then using SCP to copy the dump to another hypervisor for the restore. I'll check the bug tracker for issues with qmrestore.
As for the manual work, I'll gladly do it to move this VM. Do you have a basic how-to or a page you can point me to?
 
Manual move: just create a VM on the target node/storage with identical specs. Then you can stop the source VM and dd the block device contents over through SSH.
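For anyone following along, a minimal sketch of that procedure, assuming local LVM on both sides and using hypothetical names (vmid 252, volume group `pve`, target host `target-node`) that will need adjusting to your setup:

```shell
# 1. On the target node: create a VM with identical specs (same vmid, disk
#    sizes, and bus types). The config lives in /etc/pve/qemu-server/252.conf
#    and can be compared line by line against the source node's copy.

# 2. On the source node: stop the VM so the disks are quiescent
qm stop 252

# 3. Stream each disk's block device over SSH to the matching LV on the
#    target. The device paths here are assumptions - confirm the actual
#    LV names with 'lvs' on both nodes before running this.
dd if=/dev/pve/vm-252-disk-1 bs=4M status=progress \
  | ssh root@target-node 'dd of=/dev/pve/vm-252-disk-1 bs=4M'

# 4. On the target node: start the VM and watch the console
qm start 252
```

The key point is that the target LV must be at least as large as the source LV, since dd copies the raw device contents byte for byte.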
 
