Proxmox Cluster: Migrating non-shared, ZFS-backed storage

Denver

New Member
Apr 25, 2016
Hi all

I have a few questions about migrating VMs within a Proxmox cluster, but first I'd like to introduce myself. I have been using Proxmox since I started working for a local IT company, Technology Wise, in New Zealand. We manage small Linux-based server solutions and really enjoy providing open-source alternatives for clients.

We currently have a Proxmox cluster with three nodes, all running the latest version of Proxmox:

Quorum information
------------------
Date: Wed Apr 27 09:49:13 2016
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000003
Ring ID: 7564
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.222.250
0x00000003 1 192.168.222.251 (local)
0x00000002 1 192.168.222.252

Each node has its own local ZFS pool that we use to store VMs. Is it possible to migrate VMs from one node to another? I'm not fussed about it being an online migration or anything like that.

At the moment, if I create a VM with its virtual drive on the non-shared ZFS storage "zpool" on node 1, it will not let me migrate to node 2, because storage "zpool" does not exist on node 2. On node 2 I named the ZFS storage "zpool2".

The ZVOL is created on the other node but seems to be empty. The migration process appears to copy almost the whole VM across but fails right at the end; other times it fails immediately with the error message below:

cannot open 'ZFSsata/vm-118-disk-1': dataset does not exist
cannot receive new filesystem stream: dataset does not exist
warning: cannot send 'ZFSsata/vm-118-disk-1@__migration__': Broken pipe
Apr 27 09:57:41 ERROR: Failed to sync data - command 'set -o pipefail && zfs send -Rpv ZFSsata/vm-118-disk-1@__migration__ | ssh root@192.168.222.250 zfs recv ZFSsata/vm-118-disk-1' failed: exit code 1
Apr 27 09:57:41 aborting phase 1 - cleanup resources
Apr 27 09:57:41 ERROR: found stale volume copy 'ZFSsata:vm-118-disk-1' on node 'twt250sv'
Apr 27 09:57:41 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && zfs send -Rpv ZFSsata/vm-118-disk-1@__migration__ | ssh root@192.168.222.250 zfs recv ZFSsata/vm-118-disk-1' failed: exit code 1
TASK ERROR: migration aborted
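For reference, the failing step in the log is just a `zfs send | ssh zfs recv` pipeline, and it can be reproduced (or worked around) by hand. A minimal sketch, using the dataset and host names from the log above; the snapshot name `manualcopy` is made up for illustration, and the target pool name `zpool2` assumes the naming described earlier in the thread:

```shell
# Snapshot the VM disk on the source node (VM must be stopped for a consistent copy)
zfs snapshot ZFSsata/vm-118-disk-1@manualcopy

# Send it to the target node, receiving into the pool name that actually
# exists there (zpool2), rather than the source's pool name (ZFSsata)
zfs send ZFSsata/vm-118-disk-1@manualcopy \
  | ssh root@192.168.222.250 zfs recv zpool2/vm-118-disk-1

# Clean up the temporary snapshot on the source afterwards
zfs destroy ZFSsata/vm-118-disk-1@manualcopy
```

The built-in migration uses the same storage ID (and therefore the same dataset path) on both ends of the pipe, which is why it fails with "dataset does not exist" when the pools are named differently.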


This sounds similar to an old bug:
https://forum.proxmox.com/threads/migrating-non-shared-storage-zfs-backed-vm.23395/


Many Thanks,

Denver.
 
My experience is that the migration will only work if the local ZFS storage on both nodes has the same name. The name of the pool doesn't matter so much as what you name the storage within Proxmox.
 
Sadly I tried that; however, because all nodes are in a cluster, it wouldn't let me have two ZFS storage entries with the same name in Proxmox.
 

Why do you want to create two storage definitions? Use a single one, and name the pools the same on all your nodes. What exactly is the problem?
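In practice that means a single, cluster-wide entry in /etc/pve/storage.cfg, with a ZFS pool of the same name created on every node. A sketch, assuming the pool is called `tank` on each node (the storage ID and pool name here are illustrative, not from the thread):

```
# /etc/pve/storage.cfg (shared by all cluster nodes)
zfspool: local-zfs
        pool tank
        content images,rootdir
        sparse
```

With one storage ID visible on every node, the migration's send/receive pipeline resolves to the same dataset path on both source and target.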
 