[SOLVED] VM Migration failed. Incorrect storage detection

chelin

New Member
Oct 20, 2019
Hi Folks!

I'm trying out Proxmox VE (6.0-9) plus the LINSTOR plugin as a datacenter virtualization solution for the company I work for. So far, pretty good. I've found a glitch though, and as I don't really know whether it's about Proxmox or about LINSTOR + ZFS, I'll post this in both communities. My scenario:

- 2 clustered nodes for testing
- Both feature a ZFS pool (named tank, to be original :)
- Installed and configured LINSTOR + DRBD9 + controller on both (node1 as controller)
- Created a DRBD storage pool backed by ZFS (tried both thick & thin)

So, to sum up, I'm using a ZFS pool on both nodes as VM image storage and as the storage pool from which to create my DRBD replicated volumes. I then created a couple of VMs, one backed by a ZFS volume and the other backed by a DRBD replicated resource.

Here's the glitch: the ZFS-backed VM migrates without a problem, but the DRBD-backed VM won't migrate. When I try, Proxmox seems to incorrectly detect its backing storage as a ZFS volume instead of the DRBD resource. The VM's local disk is "drbdstorage:vm-101-disk-1". I'm wondering if using the same ZFS pool for both VMs and LINSTOR is simply not supported in this setup . . .

This is my storage.cfg:

drbd: drbdstorage
        content rootdir,images
        controller 10.17.0.71
        redundancy 2

zfspool: zp1
        pool tank
        content images,rootdir
        sparse 1

Here's my problematic VM's config:

agent: 1
bootdisk: scsi0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: vm-xenial-server
net0: virtio=06:04:B3:E1:75:AF,bridge=vmbr0,tag=1
numa: 0
ostype: l26
scsi0: drbdstorage:vm-101-disk-1,cache=unsafe,size=8G
scsihw: virtio-scsi-pci
smbios1: uuid=9a8e8301-a865-4544-a367-5111cfcbff16
sockets: 2
vmgenid: 2b7a93b0-f808-44a5-818a-cdfb4a70586e


Here's the migration log extract:
2019-10-20 08:52:25 use dedicated network address for sending migration traffic (10.17.0.72)
2019-10-20 08:52:25 starting migration of VM 101 to node 'ich2' (10.17.0.72)
-------> 2019-10-20 08:52:25 found local disk 'zp1:vm-101-disk-1_00000' (via storage) <-------
2019-10-20 08:52:25 copying disk images
full send of tank/vm-101-disk-1_00000@__migration__ estimated size is 8.20G
total estimated size is 8.20G
TIME SENT SNAPSHOT tank/vm-101-disk-1_00000@__migration__
tank/vm-101-disk-1_00000 name tank/vm-101-disk-1_00000 -
volume 'tank/vm-101-disk-1_00000' already exists
command 'zfs send -Rpv -- tank/vm-101-disk-1_00000@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2019-10-20 08:52:27 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export zp1:vm-101-disk-1_00000 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=ich2' root@10.17.0.72 -- pvesm import zp1:vm-101-disk-1_00000 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255
2019-10-20 08:52:27 aborting phase 1 - cleanup resources
2019-10-20 08:52:27 ERROR: found stale volume copy 'zp1:vm-101-disk-1_00000' on node 'ich2' <------- This is no stale volume copy, it's the DRBD replica
2019-10-20 08:52:27 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export zp1:vm-101-disk-1_00000 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=ich2' root@10.17.0.72 -- pvesm import zp1:vm-101-disk-1_00000 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255

TASK ERROR: migration aborted

Anyone tried something of this sort? Any pointers?

Thank you!
marcelo
 
Forgot to mention: if I remove the ZFS pool as a storage option in the Proxmox cluster, the DRBD-backed VM's migration goes through without an issue.
 
Hi,
The problem is that you can't use the same storage backend with two different storage definitions.
Proxmox VE finds the DRBD disk on the ZFS pool and tries to sync it, which does not work.
If you want to use ZFS as local storage too, you must create an extra dataset.
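A minimal sketch of that suggestion (the dataset name tank/pve is just an example, not from this thread): create a child dataset for Proxmox's own zfspool storage, so its volumes no longer sit directly next to the LINSTOR-managed ones under tank:

```
# on each node, create a child dataset for Proxmox-managed volumes
# (dataset name "tank/pve" is an example)
zfs create tank/pve

# then point the zfspool storage at the child dataset in storage.cfg:
# zfspool: zp1
#         pool tank/pve
#         content images,rootdir
#         sparse 1
```

With this layout, the migration scan of zp1 only enumerates volumes under tank/pve and should no longer pick up the DRBD resources LINSTOR creates directly in tank.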
 

Hi Wolfgang, thank you for your answer! I understand what you're saying, BUT . . . I wonder, shouldn't Proxmox see that the storage backend for the VM's disk is DRBD (the config reads: scsi0: drbdstorage:vm-101-disk-1,cache=unsafe,size=8G), which is of type 'shared', and then not even try to sync it in the first place?

That behavior would be great, as it would allow nodes to use ZFS for storage management, with some VMs on local storage and others on shared storage through DRBD.
 
I wonder, shouldn't Proxmox see that the storage backend for the VM's disk is DRBD (the config reads: scsi0: drbdstorage:vm-101-disk-1,cache=unsafe,size=8G), which is of type 'shared', and then not even try to sync it in the first place?
Not if you use the same storage backend twice.
On every migration, Proxmox VE scans all configured storages for disks, including ones that are not referenced in the config.
Because you use the same ZFS pool as storage for both DRBD and Proxmox VE, the same disk shows up twice.
The disk itself has no internal identifier that would tell PVE it is the same disk.
A VM can have one disk named vm-100-disk-0 on each storage, so the name alone cannot disambiguate them.
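To illustrate that scan behavior, here is a toy Python sketch (not the actual PVE code; the storage table, the volume list, and the helper function are all made up for illustration) of why one zvol under tank gets reported by both storage definitions:

```python
# Hypothetical model of the pre-migration disk scan: two storage
# definitions are backed by the same ZFS pool, so both enumerate
# the same underlying zvol and the scan reports it twice.

storages = {
    "zp1": {"type": "zfspool", "pool": "tank"},
    "drbdstorage": {"type": "drbd", "pool": "tank"},
}

# Volumes that actually exist on the shared ZFS pool "tank"
zfs_volumes_on_tank = ["tank/vm-101-disk-1_00000"]

def scan_local_disks(vmid):
    """Report every (storage, volume) pair matching the VM id."""
    found = []
    for name, cfg in storages.items():
        for vol in zfs_volumes_on_tank:
            if vol.startswith(f"{cfg['pool']}/vm-{vmid}-"):
                found.append(f"{name}:{vol.split('/', 1)[1]}")
    return found

print(scan_local_disks(101))
# Both entries refer to the same on-disk zvol:
# ['zp1:vm-101-disk-1_00000', 'drbdstorage:vm-101-disk-1_00000']
```

Since the zvol carries no marker saying which storage definition "owns" it, the scan has no way to know that zp1:vm-101-disk-1_00000 is really the DRBD resource, which is why the migration tries to zfs send it.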
That behavior would be great, as it would allow nodes to use ZFS for storage management, with some VMs on local storage and others on shared storage through DRBD.
I'm not sure this is great; be careful with this kind of setup.
DRBD is not part of Proxmox VE, so I can't tell what is possible or not.
 
Thank you. I'll see if I can work around it, or just pick one of the two storage types.
 