Proxmox VE 6.0.9 with SAN FC Storage

kleberlyra

New Member
Nov 4, 2019
Good morning,

I have a cluster of 3 nodes, all connected to a storage array on an FC SAN via HBAs. The LUNs are presented to each node as local disks. I deployed multipath, created the VGs and LVs manually, and can create VMs on all nodes. However, I cannot migrate VMs between nodes; I get a message saying that local disks cannot be migrated.
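For reference, this is roughly how I prepared the storage on each node (a minimal sketch; the multipath device name below is illustrative, the VG name is the one from my logs):

Code:
multipath -ll                                    # confirm the LUN shows up as a single multipath device
pvcreate /dev/mapper/mpatha                      # illustrative multipath device name
pvcreate /dev/mapper/mpatha: creating a PV on the multipath device
vgcreate DataStore-V7K-00-VG /dev/mapper/mpatha  # VG the VM disks are allocated from

Since every HBA sees the same LUNs, the same VG is visible on all three nodes.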

Code:
task started by HA resource agent
2019-11-04 06:58:02 starting migration of VM 100 to node 'canta' (10.177.150.12)
2019-11-04 06:58:02 found local disk 'VOL00:vm-100-disk-0' (in current VM config)
2019-11-04 06:58:02 can't migrate local disk 'VOL00:vm-100-disk-0': can't migrate attached local disks without with-local-disks option
2019-11-04 06:58:02 ERROR: Failed to sync data - can't migrate VM - check log
2019-11-04 06:58:02 aborting phase 1 - cleanup resources
2019-11-04 06:58:02 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate VM - check log
TASK ERROR: migration aborted

My storage does not provide iSCSI; the LUNs are delivered as SCSI devices over FC.

I have already tried the migration from the command line, but it didn't work:

qm migrate 100 node02 --online --with-local-disks

Can someone help me?
 
qm migrate 100 canta --online --with-local-disks

Code:
task started by HA resource agent
2019-11-04 09:21:21 starting migration of VM 100 to node 'canta' (10.177.150.12)
2019-11-04 09:21:22 found local disk 'VOL00:vm-100-disk-0' (in current VM config)
2019-11-04 09:21:22 copying disk images
volume DataStore-V7K-00-VG/vm-100-disk-0 already exists
command 'dd 'if=/dev/DataStore-V7K-00-VG/vm-100-disk-0' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2019-11-04 09:21:23 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export VOL00:vm-100-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=canta' root@10.177.150.12 -- pvesm import VOL00:vm-100-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
2019-11-04 09:21:23 aborting phase 1 - cleanup resources
2019-11-04 09:21:23 ERROR: found stale volume copy 'VOL00:vm-100-disk-0' on node 'canta'
2019-11-04 09:21:23 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export VOL00:vm-100-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=canta' root@10.177.150.12 -- pvesm import VOL00:vm-100-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted

pveversion --verbose
Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve)
pve-manager: 6.0-9 (running version: 6.0-9/508dcee0)
pve-kernel-5.0: 6.0-9
pve-kernel-helper: 6.0-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-5
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-65
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-7
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-9
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve1
 
It seems that you have not created the volume group properly in PVE. Please check whether /etc/pve/storage.cfg contains the shared flag, like on my SAN:

Code:
lvm: san-dx100
        vgname san-dx100
        content rootdir,images
        shared 1
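
If the volume group is already defined as a storage in PVE, you can also set the flag from the CLI. A minimal sketch, assuming your storage is called VOL00 and sits on the DataStore-V7K-00-VG volume group (both names taken from your log):

Code:
# mark the existing LVM storage as shared - the VG is visible on all nodes
pvesm set VOL00 --shared 1

# or, if the VG was never registered as a PVE storage, add it with the shared flag
pvesm add lvm VOL00 --vgname DataStore-V7K-00-VG --content images,rootdir --shared 1

With shared 1 set, PVE knows the disk is reachable from every node, so a migration only moves the VM state instead of trying to copy the volume (which is why your second attempt failed with "already exists").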