Use of uninitialized value in Perl programs during migration

Mark Tetrode

When migrating, I get "use of uninitialized value" errors from the Perl code:

Code:
task started by HA resource agent
2023-07-03 10:08:40 starting migration of VM 102 to node 'a' (51.91.30.11)
2023-07-03 10:08:40 found local disk 'local-zfs:vm-102-disk-0' (in current VM config)
2023-07-03 10:08:40 copying local disk images
Use of uninitialized value $target_storeid in string eq at /usr/share/perl5/PVE/Storage.pm line 779.
Use of uninitialized value $targetsid in concatenation (.) or string at /usr/share/perl5/PVE/QemuMigrate.pm line 680.
2023-07-03 10:08:40 ERROR: storage migration for 'local-zfs:vm-102-disk-0' to storage '' failed - no storage ID specified
2023-07-03 10:08:40 aborting phase 1 - cleanup resources
2023-07-03 10:08:40 ERROR: migration aborted (duration 00:00:00): storage migration for 'local-zfs:vm-102-disk-0' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted

Running version 7.4-13:
Code:
pve-manager/7.4-13/46c37d9c (running kernel: 5.15.107-2-pve)
 
please post the full output of "pveversion -v", the VM config and the storage config. thanks!
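(For reference, assuming the VM in question is ID 102 as in the log above, those can be gathered with the commands below; adjust the VM ID to your setup.)

Code:
pveversion -v
qm config 102
cat /etc/pve/storage.cfg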
 
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.107-1-pve)
pve-manager: 7.4-13 (running version: 7.4-13/46c37d9c)
pve-kernel-5.15: 7.4-3
pve-kernel-5.4: 6.4-20
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-4
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
please also post the configuration files!
 
Sorry, I overlooked that.

VM config:
Code:
boot: order=scsi0;net0
cores: 2
memory: 2048
name: a-21.smartcall.cc
net0: virtio=02:00:00:d2:a8:a0,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-102-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=47656cfb-fe69-49f3-a90e-0bd0ebd0699d
sockets: 2
vmgenid: 573648f0-fcac-4ca5-ad3b-8a401f21b269

Storage config:
Code:
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl
        prune-backups keep-last=3
        shared 0

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

nfs: backup-t
        export /export/ftpbackup/ns3052658.ip-137-74-203.eu
        path /mnt/pve/backup-t
        server ftpback-rbx7-618.ovh.net
        content vztmpl,snippets,backup,iso,rootdir,images
        prune-backups keep-all=1

nfs: nas
        export /zpool-129404/smartcallHA/
        path /mnt/pve/nas
        server 10.201.9.204
        content images,rootdir,iso,backup,snippets,vztmpl
        prune-backups keep-all=1
 
Hi,
can you post the output of pvesm list local-zfs on the node with the VM?
 
Hi Fiona

Code:
# pvesm list local-zfs
Volid                       Format  Type              Size VMID
local-zfs:subvol-110-disk-0 subvol  rootdir    42949672960 110
local-zfs:subvol-110-disk-1 subvol  rootdir    21474836480 110
local-zfs:subvol-111-disk-0 subvol  rootdir    42949672960 111
local-zfs:subvol-112-disk-2 subvol  rootdir    75161927680 112
local-zfs:subvol-112-disk-3 subvol  rootdir    42949672960 112
local-zfs:subvol-113-disk-0 subvol  rootdir    85899345920 113
local-zfs:subvol-113-disk-1 subvol  rootdir    85899345920 113
local-zfs:subvol-113-disk-2 subvol  rootdir    75161927680 113
local-zfs:subvol-113-disk-3 subvol  rootdir    42949672960 113
local-zfs:subvol-114-disk-4 subvol  rootdir    75161927680 114
local-zfs:subvol-114-disk-5 subvol  rootdir    42949672960 114
local-zfs:subvol-115-disk-0 subvol  rootdir   128849018880 115
local-zfs:subvol-116-disk-0 subvol  rootdir    51539607552 116
local-zfs:subvol-117-disk-0 subvol  rootdir    18253611008 117
local-zfs:subvol-123-disk-0 subvol  rootdir    17179869184 123
local-zfs:subvol-127-disk-0 subvol  rootdir     8589934592 127
local-zfs:subvol-128-disk-0 subvol  rootdir    85899345920 128
local-zfs:subvol-138-disk-0 subvol  rootdir     8589934592 138
local-zfs:subvol-139-disk-0 subvol  rootdir     8589934592 139
local-zfs:vm-100-disk-0     raw     images    322122547200 100
local-zfs:vm-100-disk-1     raw     images     34359738368 100
local-zfs:vm-104-disk-0     raw     images    214748364800 104
local-zfs:vm-131-disk-0     raw     images    322122547200 131
local-zfs:vm-131-disk-1     raw     images     34359738368 131
local-zfs:vm-133-disk-0     raw     images    128849018880 133
 
Thanks for the insight.

I now have a situation where one machine has the disk and another machine has the config.

How can I rectify this, i.e. create the config on the machine that has the disk?
 
You can either manually move the config file between nodes, from/to /etc/pve/nodes/<node>/qemu-server/, replacing <node> with your actual node names. Or you can detach the disk in the configuration, migrate the VM, and then reattach it on the target. A sketch of both options follows below.
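A minimal sketch, assuming the VM is ID 102, the disk is scsi0 on local-zfs, and <source-node>/<target-node> stand in for your actual node names; the VM should be powered off, and since /etc/pve is the shared cluster filesystem the move can be done from any node:

Code:
# Option 1: move the config file to the node that actually has the disk
mv /etc/pve/nodes/<source-node>/qemu-server/102.conf /etc/pve/nodes/<target-node>/qemu-server/102.conf

# Option 2: detach the disk, migrate, then reattach it on the target
qm set 102 --delete scsi0
qm migrate 102 <target-node>
qm set 102 --scsi0 local-zfs:vm-102-disk-0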