no storage ID

SamTzu

Active Member
ERROR: migration aborted (duration 00:00:06): storage migration for 'vdd:subvol-601-disk-1' to storage '' failed - no storage ID specified TASK ERROR: migration aborted

I tried to migrate an LXC container from one (nested) QEMU node to another and got this error.
I remember that when I created the ZFS storage with the Proxmox GUI (on the second node) I got an error saying something like "the drive name is already registered" (in the datacenter storage list), so I just created the drive "locally" on the node. (It should work as long as the NFS mount names are the same.)

Now I'm wondering: how can I migrate LXC containers from one nested node to this nested node without a "proper" ZFS storage ID?
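(For context, what I understand the "proper" setup would look like, roughly: a single datacenter-wide storage entry whose node list covers both migration endpoints, instead of a second, locally created one. The node names below are only placeholders.)

Code:
# Rough sketch: extend the existing 'vdd' storage entry to the new node
# so the same storage ID resolves on both source and target.
# Replace the node names with the real ones.
pvesm set vdd --nodes <node1>,<node2>

# Check that 'vdd' now shows up (and is active) on the new node:
pvesm status --storage vdd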
 

dcsapak

Proxmox Staff Member
Staff member
Can you post the guest config and the storage config?
 

SamTzu

Active Member
Code:
root@vm2401:~# cat /etc/pve/lxc/144.conf
arch: i386
cores: 1
features: nesting=1
hostname: geodns1.ic4.eu
memory: 512
nameserver: 1.1.1.1 8.8.8.8
net0: name=eth0,bridge=vmbr1,firewall=1,gw=79.134.108.129,hwaddr=7A:54:79:4A:4B:C4,ip=79.134.108.144/26,ip6=auto,rate=2,type=veth
onboot: 1
ostype: debian
rootfs: vdd:subvol-144-disk-0,size=10G
searchdomain: ic4.eu
swap: 0
unprivileged: 1
 

SamTzu

Active Member
Code:
root@vm2401:~# cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content backup,iso,vztmpl
        shared 0

zfspool: local-zfs
        disable
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: vdd
        pool vdd
        content rootdir,images
        nodes vm2401,kvm-p1
        sparse 0

nfs: nfs1
        export /mnt/nfs
        path /mnt/pve/nfs1
        server nfs1.ic4.eu
        content snippets,vztmpl,images,iso,rootdir,backup
        prune-backups keep-all=1
 

SamTzu

Active Member
Here is a peek at another failure, for a newer amd64 LXC container.

Code:
2022-03-03 00:34:05 successfully imported 'vdd:subvol-601-disk-2'
2022-03-03 00:34:05 delete previous replication snapshot '__replicate_601-0_1645925586__' on vdd:subvol-601-disk-0
2022-03-03 00:34:07 delete previous replication snapshot '__replicate_601-0_1645925586__' on vdd:subvol-601-disk-2
2022-03-03 00:34:12 (remote_finalize_local_job) delete stale replication snapshot '__replicate_601-0_1645925586__' on vdd:subvol-601-disk-0
2022-03-03 00:34:12 (remote_finalize_local_job) delete stale replication snapshot '__replicate_601-0_1645925586__' on vdd:subvol-601-disk-2
2022-03-03 00:34:12 end replication job
Use of uninitialized value $target_storeid in string eq at /usr/share/perl5/PVE/Storage.pm line 669.
Use of uninitialized value $targetsid in concatenation (.) or string at /usr/share/perl5/PVE/LXC/Migrate.pm line 322.
2022-03-03 00:34:12 ERROR: storage migration for 'vdd:subvol-601-disk-1' to storage '' failed - no storage ID specified
2022-03-03 00:34:12 aborting phase 1 - cleanup resources
2022-03-03 00:34:12 ERROR: found stale volume copy 'vdd:subvol-601-disk-1' on node 'vm2401'
2022-03-03 00:34:12 start final cleanup
2022-03-03 00:34:12 start container on source node
2022-03-03 00:34:16 ERROR: migration aborted (duration 00:13:34): storage migration for 'vdd:subvol-601-disk-1' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted

Now that I look at it more closely, I realize the disk-1 it speaks about was deleted earlier. It no longer exists. Hmm...
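(A rough way to double-check what the migration actually sees on 'vdd', treat this as a sketch rather than a recipe:)

Code:
# Everything storage 'vdd' still holds for CT 601,
# including volumes no longer referenced in 601.conf:
pvesm list vdd --vmid 601

# Cross-check against the actual ZFS datasets:
zfs list -r vdd | grep subvol-601

# If a listed volume really is orphaned it could be removed with
# something like the line below, but only after making sure nothing
# needs it anymore, since this deletes data:
# pvesm free vdd:subvol-601-disk-1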
 

SamTzu

Active Member
Code:
root@kvm-p1:~# cat /etc/pve/lxc/601.conf
arch: amd64
cores: 2
hostname: pk1.ic4.eu
memory: 4096
mp1: vdd:subvol-601-disk-2,mp=/var/backup,size=20G
nameserver: 1.1.1.1 8.8.8.8
net0: name=eth0,bridge=vmbr1,firewall=1,gw=79.134.108.129,gw6=2a00:1190:c003:ffff::1,hwaddr=26:08:82:F4:63:20,ip=79.134.108.138/26,ip6=2a00:1190:c003:ffff::138/64,rate=3,type=veth
onboot: 1
ostype: debian
rootfs: vdd:subvol-601-disk-0,quota=1,size=100G
searchdomain: ic4.eu
startup: order=1
swap: 1024
 

dcsapak

Proxmox Staff Member
Staff member
Can you post the output of 'pveversion -v' from both nodes and the command line (or options) you use to migrate?
Also the output of 'zfs list' from both nodes, and the replication config, if any.
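That is, roughly the following on each node (replication jobs, if there are any, live in the cluster-wide file shown below):

Code:
# On each of the two nodes:
pveversion -v
zfs list

# Replication jobs are defined cluster-wide in:
cat /etc/pve/replication.cfg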
 

fiona

Proxmox Staff Member
Staff member
Hi,
I think I can see the issue. There were some recent changes and now the unreferenced disk (i.e. vdd:subvol-601-disk-1 which is found when scanning the storage) is not handled correctly anymore. Still, please post the information @dcsapak asked for, so we can be sure.
 

SamTzu

Active Member
I just created a new QEMU "node" with a vdd drive (ZFS) that I created using the Proxmox GUI.
I had "Add to Storage" checked when I created the ZFS drive. After that I joined the new node to the cluster and tried to move an LXC container to it.

Code:
()
2022-03-04 17:22:18 starting migration of CT 174 to node 'vm2402' (10.100.10.101)
2022-03-04 17:22:18 found local volume 'vdd:subvol-174-disk-0' (via storage)
2022-03-04 17:22:18 found local volume 'vdd:subvol-174-disk-1' (in current VM config)
2022-03-04 17:22:20 full send of vdd/subvol-174-disk-1@__migration__ estimated size is 3.78G
2022-03-04 17:22:20 total estimated size is 3.78G
2022-03-04 17:22:21 TIME        SENT   SNAPSHOT vdd/subvol-174-disk-1@__migration__
2022-03-04 17:22:21 17:22:21   16.1M   vdd/subvol-174-disk-1@__migration__
...
2022-03-04 17:23:24 17:23:24   3.79G   vdd/subvol-174-disk-1@__migration__
2022-03-04 17:23:26 successfully imported 'vdd:subvol-174-disk-1'
2022-03-04 17:23:27 volume 'vdd:subvol-174-disk-1' is 'vdd:subvol-174-disk-1' on the target
Use of uninitialized value $target_storeid in string eq at /usr/share/perl5/PVE/Storage.pm line 669.
Use of uninitialized value $targetsid in concatenation (.) or string at /usr/share/perl5/PVE/LXC/Migrate.pm line 322.
2022-03-04 17:23:27 ERROR: storage migration for 'vdd:subvol-174-disk-0' to storage '' failed - no storage ID specified
2022-03-04 17:23:27 aborting phase 1 - cleanup resources
2022-03-04 17:23:27 ERROR: found stale volume copy 'vdd:subvol-174-disk-1' on node 'vm2402'
2022-03-04 17:23:27 ERROR: found stale volume copy 'vdd:subvol-174-disk-0' on node 'vm2402'
2022-03-04 17:23:27 start final cleanup
2022-03-04 17:23:27 ERROR: migration aborted (duration 00:01:10): storage migration for 'vdd:subvol-174-disk-0' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted
 

SamTzu

Active Member
I don't know where this 'subvol-174-disk-0' (via storage) came from.
The container only had disk-1 active (or visible anywhere).
So I just backed up the container and restored it on the new node.
But now the container has this drive: subvol-174-disk-2.
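(Roughly what that backup-and-restore workaround looks like on the CLI; the storage names and the dump filename below are just examples, and it assumes the original container is removed first or a new VMID is used for the restore:)

Code:
# On the source node: stop the container and back it up to shared storage
vzdump 174 --storage nfs1 --mode stop --compress zstd

# On the target node: restore it onto the local ZFS storage
# (filename is an example, pick the archive vzdump actually produced)
pct restore 174 /mnt/pve/nfs1/dump/vzdump-lxc-174-<timestamp>.tar.zst --storage vdd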
 

SamTzu

Active Member
My guess is that when you move a drive from storage to storage it is registered somewhere (via storage), and that info is used when migrating the container. When the old drive is missing, the migration is aborted.
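(If I read the log right, 'in current VM config' comes from the container config itself, while '(via storage)' comes from scanning the storage for anything with that VMID, so comparing the two should reveal the extra disk. A rough sketch:)

Code:
# Volumes referenced by the container config:
pct config 174 | grep vdd:

# Everything the storage itself holds for VMID 174, referenced or not:
pvesm list vdd --vmid 174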
 

fiona

Proxmox Staff Member
Staff member
Container migration currently does not clean up migrated disks on the target node when migration fails, so that's where the left-over disks might come from. You should see the unreferenced volumes (in fact all volumes) when you check the storage content. The problem you ran into has been fixed in git now, and will be in pve-container >= 4.1-5 (but it's not packaged yet).
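Once the package lands, checking the installed version and pulling in the update is the usual routine (assuming the repositories are already configured):

Code:
# Check which pve-container version is currently installed:
pveversion -v | grep pve-container

# Once 4.1-5 (or later) is available in the repository:
apt update
apt full-upgrade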
 

meichthys

Member
Just to follow up on this: pve-container 4.1-5 is now available for upgrade via the Proxmox web UI.
I can confirm that upgrading to this version indeed fixed this `no storage ID specified` container migration issue.
 
