[SOLVED] Migrating VMs to different storage

pmontepagano

New Member
Jul 17, 2013
When two nodes in a cluster don't have the same storage name, I cannot migrate from one node to the other using the Proxmox tools.

For example, I have two nodes in the same cluster; both have local ZFS storage, but their ZFS pools are named differently, so the storages have to be named differently too (one storage is named zfslocal, backed by zpool "storage"; the other is named local-zfs, backed by zpool "rpool").

I tried running this command:

qm migrate 159 ares --targetstorage local-zfs --online --with-local-disks

but it failed:

2018-12-05 13:46:45 starting migration of VM 159 to node 'ares' (192.168.0.60)
2018-12-05 13:46:45 found local disk 'zfslocal:vm-159-disk-0' (in current VM config)
2018-12-05 13:46:45 copying disk images
full send of storage/vm-159-disk-0@__migration__ estimated size is 1.72G
total estimated size is 1.72G
TIME SENT SNAPSHOT
cannot open 'storage/vm-159-disk-0': dataset does not exist
cannot receive new filesystem stream: dataset does not exist
cannot open 'storage/vm-159-disk-0': dataset does not exist
command 'zfs recv -F -- storage/vm-159-disk-0' failed: exit code 1
command 'zfs send -Rpv -- storage/vm-159-disk-0@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2018-12-05 13:46:47 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export zfslocal:vm-159-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=ares' root@192.168.0.60 -- pvesm import zfslocal:vm-159-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 1
2018-12-05 13:46:47 aborting phase 1 - cleanup resources
2018-12-05 13:46:47 ERROR: found stale volume copy 'zfslocal:vm-159-disk-0' on node 'ares'
2018-12-05 13:46:47 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export zfslocal:vm-159-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=ares' root@192.168.0.60 -- pvesm import zfslocal:vm-159-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 1
migration aborted
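
Reading the log, the import on the target side seems to be invoked with the *source* storage name (`pvesm import zfslocal:...`), so `zfs recv` tries to write into a pool named "storage", which doesn't exist on ares. A quick pre-flight check of the name-to-pool mapping might look like this (just a sketch; the awk pattern and the target IP are assumptions based on this setup):

```shell
# Which pool does the target storage map to? (parses /etc/pve/storage.cfg)
pool=$(awk '/^zfspool: local-zfs/{f=1} f && /^[[:space:]]*pool /{print $2; exit}' /etc/pve/storage.cfg)
echo "local-zfs maps to pool: $pool"

# List the pools that actually exist on the target node
# (note: "rpool/data" is a dataset inside pool "rpool", so expect "rpool" here).
ssh root@192.168.0.60 zpool list -H -o name
```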


So instead I found that I can do the migration manually with zfs send/recv, but I wanted to ask whether I'm missing any steps.

  1. I take the VM offline
  2. I take a snapshot of the VM through the GUI, named "migration"
  3. I do a zfs send like this:
     zfs send --replicate --large-block --compressed --verbose storage/subvol-106-disk-0@migration | ssh 192.168.0.60 zfs recv -v rpool/data/subvol-106-disk-0
  4. I move the config file like this:
     mv /etc/pve/nodes/poseidon/qemu-server/106.conf /etc/pve/nodes/ares/qemu-server/106.conf
  5. I edit that file and replace all mentions of "zfslocal" with "local-zfs"
Is there anything else to do? Am I missing something? I tried this with a sample VM and everything seems to work smoothly, but I'd rather be sure.
I should also delete the ZFS dataset on the node I migrated from once I finish, but since I'm going to reformat that node anyway, I won't waste time doing that.
 

wolfgang

Proxmox Staff Member
Staff member
Oct 1, 2014
Hi,

the storage migration with a different storage as the target should work.
Can you please post your /etc/pve/storage.cfg and the VM's .conf file?
 

pmontepagano

Sure, here's my storage.cfg:

dir: local
    path /var/lib/vz
    content iso,vztmpl
    maxfiles 1
    shared 0

nfs: vzdump
    export /srv/vzdump
    path /mnt/pve/vzdump
    server orfeo.example.org
    content backup,vztmpl,iso
    maxfiles 3

zfspool: zfslocal
    pool storage
    content rootdir,images
    nodes poseidon
    sparse 1

nfs: proxmox-backups
    export /storage2017/backups-ccc/proxmox
    path /mnt/pve/proxmox-backups
    server zeus.example.org
    content backup,iso
    maxfiles 1
    options vers=3

zfspool: local-zfs
    pool rpool/data
    content images,rootdir
    nodes apolo,ares
    sparse 1


And here's the VM's conf file:

bootdisk: scsi0
cores: 2
ide2: vzdump:iso/debian-9.4.0-amd64-netinst.iso,media=cdrom
memory: 4096
name: demovm
net0: virtio=E6:F8:6C:82:F2:42,bridge=vmbr1,tag=107
numa: 0
ostype: l26
scsi0: zfslocal:vm-159-disk-0,size=62G
scsihw: virtio-scsi-pci
smbios1: uuid=e1c5343c-3760-40b1-ac55-0a2bb08bd50c
sockets: 2
vmgenid: a7c689c9-9691-46c3-a93b-4b5b4022307e

Node poseidon is on Proxmox 5.2-10 and nodes apolo and ares are on Proxmox 5.3-5
 

pmontepagano

Oh! I hadn't found that. Thanks!

Is my workaround OK? I mean, manually doing the zfs send/recv and moving the conf file in /etc from one node's directory to the other's. Or am I missing something? I tried it with a couple of non-critical VMs and they seem to be working fine.
 

wolfgang

Yes, your workaround is fine.
 
