[SOLVED] Cannot migrate guests

Proximate

Member
Feb 13, 2022
Solved: in the end I gave up and rebuilt the entire cluster.



I wanted to upgrade all of my nodes from 7.1.7 to 7.2, but one node is not allowing migration.
I tried while it was still on 7.1.7 and got the errors below, so I upgraded it to 7.2 with the guests still on it; no difference after a reboot.

2022-08-29 18:33:59 starting migration of VM 152 to node 'pro02' (10.0.0.71)
2022-08-29 18:33:59 found local disk 'local-zfs:vm-152-disk-0' (in current VM config)
2022-08-29 18:33:59 copying local disk images
Use of uninitialized value $target_storeid in string eq at /usr/share/perl5/PVE/Storage.pm line 776.
Use of uninitialized value $targetsid in concatenation (.) or string at /usr/share/perl5/PVE/QemuMigrate.pm line 565.
2022-08-29 18:33:59 ERROR: storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
2022-08-29 18:33:59 aborting phase 1 - cleanup resources
2022-08-29 18:33:59 ERROR: migration aborted (duration 00:00:00): storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted

2022-08-29 17:51:41 starting migration of VM 323 to node 'pro03' (10.0.0.72)
2022-08-29 17:51:42 found local disk 'local-zfs:vm-323-disk-0' (in current VM config)
2022-08-29 17:51:42 found local disk 'local-zfs:vm-323-disk-1' (in current VM config)
2022-08-29 17:51:42 copying local disk images
Use of uninitialized value $target_storeid in string eq at /usr/share/perl5/PVE/Storage.pm line 776.
Use of uninitialized value $targetsid in concatenation (.) or string at /usr/share/perl5/PVE/QemuMigrate.pm line 565.
2022-08-29 17:51:42 ERROR: storage migration for 'local-zfs:vm-323-disk-0' to storage '' failed - no storage ID specified
2022-08-29 17:51:42 aborting phase 1 - cleanup resources
2022-08-29 17:51:42 ERROR: migration aborted (duration 00:00:01): storage migration for 'local-zfs:vm-323-disk-0' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted

After reboot, they won't start either. No zvol device?

[attachment: 1661825703830.png]

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  3.62T  48.7G  3.58T        -         -     2%     1%  1.00x    ONLINE  -

# zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
rpool             35.4G  2.52T      151K  /rpool
rpool/ROOT        35.3G  2.52T      140K  /rpool/ROOT
rpool/ROOT/pve-1  35.3G  2.52T     35.3G  /
rpool/data         140K  2.52T      140K  /rpool/data

# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:16:48 with 0 errors on Sun Aug 14 00:40:50 2022
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          raidz1-0                        ONLINE       0     0     0
            scsi-35000c50059222917-part3  ONLINE       0     0     0
            scsi-35000c500628f656f-part3  ONLINE       0     0     0
            scsi-35000c500565777eb-part3  ONLINE       0     0     0
            scsi-35000c500565353db-part3  ONLINE       0     0     0

errors: No known data errors
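
For reference, the zvols (VM disk volumes) and their device links can also be listed directly; a quick check along these lines should work, assuming the default local-zfs layout under rpool/data:

Code:
# List any zvols ZFS knows about on this pool
zfs list -t volume -r rpool

# The block-device links udev should have created for them
ls -l /dev/zvol/rpool/data/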

I've found and read lots of posts, but I don't want to break this setup, so can anyone provide a correct solution or a link to an article? Something more than 'try this'.
 
Hi,
Please post the output of pveversion -v and qm config 152 on the source node. Does migration work if you select an explicit target storage?

After reboot, they won't start either. No zvol device?

Regarding the zvol link error, can you try the workaround mentioned here?
 
Hi, thanks for helping.

You want me to try this in the other post?

>for i in $(ls -1 /dev/zd* |grep -v '/dev/zd[0-9]*p[0-9]*'); do echo $i; /lib/udev/zvol_id $i ; done |grep -B1 <name of missing zvol>
>(replace <name of missing zvol> by the name of a zvol which has the problem (in your previous outputs: R1_1.6TB_SSD_EVO860/vm-1002-disk-0)

I'm new to ZFS; how do I find the zvol name? Is it raidz1-0, which I showed above?

>Does migration work if you select an explicit target storage?

I don't see any options.
[attachment: 1661888105121.png]

Code:
# qm config 152
agent: 1
bios: ovmf
boot: order=scsi0
cores: 4
memory: 8194
name: c8-fox-003
net0: virtio=F6:3F:4F:BE:7B:53,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-152-disk-0,format=raw,size=50G
scsihw: pvscsi
smbios1: uuid=c6daef31-1346-4d8b-ac69-3dd42641cc43
sockets: 2
vmgenid: b8c49011-ad93-46be-b535-a42d726e9e3f

Code:
~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.39-4-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-9
pve-kernel-helper: 7.2-9
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
 
Hi, thanks for helping.

You want me to try this in the other post?

>for i in $(ls -1 /dev/zd* |grep -v '/dev/zd[0-9]*p[0-9]*'); do echo $i; /lib/udev/zvol_id $i ; done |grep -B1 <name of missing zvol>
>(replace <name of missing zvol> by the name of a zvol which has the problem (in your previous outputs: R1_1.6TB_SSD_EVO860/vm-1002-disk-0)

I'm new to ZFS; how do I find the zvol name? Is it raidz1-0, which I showed above?
Yes, for example
Code:
root@pve701 ~ # for i in $(ls -1 /dev/zd* |grep -v '/dev/zd[0-9]*p[0-9]*'); do echo $i; /lib/udev/zvol_id $i ; done |grep -B1 myzpool/vm-110-disk-0
/dev/zd0
myzpool/vm-110-disk-0
root@pve701 ~ # udevadm trigger /dev/zd0 --settle
In the second command, you need to use the output of the first. For the default local-zfs pool, the names should be of the form rpool/data/vm-<ID>-disk-<N>
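Put together, a small loop along these lines should locate the zdX node for a given volume and retrigger udev for it; this is only a sketch, and the volume name is an example (adjust it to the disk that is actually missing):

Code:
# Sketch: locate the /dev/zdX node backing one zvol and re-create its links
ZVOL="rpool/data/vm-152-disk-0"                 # example volume, adjust as needed
for dev in /dev/zd*; do
    case "$dev" in *p[0-9]*) continue ;; esac   # skip partition nodes
    if [ "$(/lib/udev/zvol_id "$dev")" = "$ZVOL" ]; then
        echo "found $ZVOL at $dev"
        udevadm trigger "$dev" --settle
    fi
done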

>Does migration work if you select an explicit target storage?

I don't see any options.
View attachment 40540
It's only exposed in the UI for online migration currently. For qm migrate in the CLI there is a --targetstorage option.
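A rough example of the CLI form (the names are placeholders; the chosen target storage has to exist on the destination node):

Code:
# Offline migration of VM 152 to pro02, sending its disk to the target's local-zfs
qm migrate 152 pro02 --targetstorage local-zfs

# For a running guest, local disks additionally need --online and --with-local-disks
qm migrate 152 pro02 --online --with-local-disks --targetstorage local-zfs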
 
I think I'm not seeing the full command you want me to run. 152 is one of the guests.

~# for i in $(ls -1 /dev/zd* |grep -v '/dev/zd[0-9]*p[0-9]*'); do echo $i; /lib/udev/zvol_id $i ; done |grep -B1 myzpool/vm-152-disk-0 /dev/zd0 myzpool/vm-152-disk-0
grep: /dev/zd0: No such file or directory
grep: myzpool/vm-152-disk-0: No such file or directory
ls: cannot access '/dev/zd*': No such file or directory

# ls -la /dev/zd0
ls: cannot access '/dev/zd0': No such file or directory

Does this mean the pool is gone?

>It's only exposed in the UI for online migration currently. For qm migrate in the CLI there is a --targetstorage option.

I've never run migrate from the command line so not sure what the targetstorage would be. Guess I have to do more reading.
Something that should have taken hours has turned into days now :).
 
I think I'm not seeing the full command you want me to run. 152 is one of the guests.

~# for i in $(ls -1 /dev/zd* |grep -v '/dev/zd[0-9]*p[0-9]*'); do echo $i; /lib/udev/zvol_id $i ; done |grep -B1 myzpool/vm-152-disk-0 /dev/zd0 myzpool/vm-152-disk-0
grep: /dev/zd0: No such file or directory
grep: myzpool/vm-152-disk-0: No such file or directory
ls: cannot access '/dev/zd*': No such file or directory
You copied too much. The /dev/zd0 and myzpool/vm-110-disk-0 are the output of the command in my example. And for you, the volume should be rpool/data/vm-<ID>-disk-<N>, not prefixed with myzpool/.
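So for VM 152 the corrected command should look roughly like this; note that the volume name is just the grep pattern at the end, with nothing pasted after it:

Code:
for i in $(ls -1 /dev/zd* | grep -v '/dev/zd[0-9]*p[0-9]*'); do echo $i; /lib/udev/zvol_id $i; done | grep -B1 rpool/data/vm-152-disk-0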

# ls -la /dev/zd0
ls: cannot access '/dev/zd0': No such file or directory

Does this mean the pool is gone?
For you, it's likely not zd0 but some other zdX. But the ls: cannot access '/dev/zd*': No such file or directory is strange. What is the output of zfs list and zpool status -v?

>It's only exposed in the UI for online migration currently. For qm migrate in the CLI there is a --targetstorage option.

I've never run migrate from the command line so not sure what the targetstorage would be. Guess I have to do more reading.
Something that should have taken hours has turned into days now :).
You can check the man page (man qm), but unfortunately I don't think migration will work if the zvol links don't show up.
 
I tried moving the storage to another device but it shows 'no content'.

# find /dev | grep 152
/dev/shm/qb-2589-3152932-30-EgKM9V
/dev/shm/qb-2589-3152932-30-EgKM9V/qb-event-pve2-data
/dev/shm/qb-2589-3152932-30-EgKM9V/qb-event-pve2-header
/dev/shm/qb-2589-3152932-30-EgKM9V/qb-response-pve2-data
/dev/shm/qb-2589-3152932-30-EgKM9V/qb-response-pve2-header
/dev/shm/qb-2589-3152932-30-EgKM9V/qb-request-pve2-data
/dev/shm/qb-2589-3152932-30-EgKM9V/qb-request-pve2-header

So it seems that the guests/storage are gone?
How is it possible that there are no drive errors and no problems at all that I can see, yet the guests are no longer on the system?

It shows over 30 GB being used, no less, but there is nothing there to see.

# zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
rpool             35.4G  2.52T      151K  /rpool
rpool/ROOT        35.3G  2.52T      140K  /rpool/ROOT
rpool/ROOT/pve-1  35.3G  2.52T     35.3G  /
rpool/data         140K  2.52T      140K  /rpool/data

# zpool iostat
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       48.7G  3.58T      0     36    722   320K

This doesn't match the GUI, which states that local-zfs has 142 KB used of 2.77 TB.

It seems something is not working right in Proxmox and it has ruined the guests.
This node is part of a cluster; does that mean other guests could simply vanish?
I've never seen that kind of thing happen with VMware ESXi.
 
I tried moving the storage to another device but it shows 'no content'.
What do you mean by this? Which commands did you use?

It shows over 30 GB being used, no less, but there is nothing there to see.
# zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
rpool             35.4G  2.52T      151K  /rpool
rpool/ROOT        35.3G  2.52T      140K  /rpool/ROOT
rpool/ROOT/pve-1  35.3G  2.52T     35.3G  /
Yes, your root partition is there using that space.

rpool/data 140K 2.52T 140K /rpool/data
# zpool iostat
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       48.7G  3.58T      0     36    722   320K

This doesn't match the GUI, which states that local-zfs has 142 KB used of 2.77 TB.
local-zfs is rpool/data, not rpool.
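The mapping from a storage ID to its backing dataset can be checked in the storage configuration; an illustrative excerpt of what a default setup typically looks like (your entries may differ):

Code:
# cat /etc/pve/storage.cfg   (excerpt, illustrative)
zfspool: local-zfs
        pool rpool/data
        content images,rootdir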

It seems something is not working right in Proxmox and it has ruined the guests.
This node is part of a cluster; does that mean other guests could simply vanish?
I've never seen that kind of thing happen with VMware ESXi.
Of course that shouldn't happen and it won't under normal circumstances. Please check your zpool history and /var/log/syslog.
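For example, something along these lines; this is only a sketch, so adjust the pool name and search terms as needed:

Code:
# Any create/destroy/rollback events recorded against the pool
zpool history rpool | grep -Ei 'create|destroy|rollback'

# Anything in syslog mentioning one of the missing disks
grep -i 'vm-152-disk' /var/log/syslog*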
 
# ls -la /rpool/
drwxr-xr-x 2 root root 2 May 9 18:41 data
drwxr-xr-x 3 root root 3 May 9 18:41 ROOT

# ls -la /rpool/data/
drwxr-xr-x 2 root root 2 May 9 18:41 .
drwxr-xr-x 4 root root 4 May 9 18:41 ..

I mean I went to move the storage to NFS, for example, but as I already know, there is nothing there to move.

# ls -la /dev/z*
crw-rw-rw- 1 root root 1, 5 Aug 29 18:52 /dev/zero
crw-rw-rw- 1 root root 10, 249 Aug 29 18:52 /dev/zfs

I truly have no idea how this happened. The node and guests had been working fine for months, but I wanted to upgrade from 7.1 to 7.2 and finally started doing that. When I got to this node, I migrated guests to other nodes, but three of them would not move.

Those three had been shut down at least two months ago. Now their content is completely gone, which of course is why they won't migrate. They had been running for months without problems before being shut down.

What happened, though, is what I'm trying to understand before rebuilding this node.
The oldest syslog I have is from Aug 7, and whatever happened to these guests seems to have happened before then, because there is no further mention of them in the logs after that.

[attachment: 1662125651883.png]

[attachment: 1662126196972.png]


I searched for the name of one of the lost guests:

Code:
# grep -r "c8-fox" /etc/
/etc/pve/.rrd:pve2.3-vm/151:0:c8-fox-002:stopped:0:1662127428:8:U:8592031744:U:53687091200:0:U:U:U:U
/etc/pve/.rrd:pve2.3-vm/152:0:c8-fox-003:stopped:0:1662127424:8:U:8592031744:U:53687091200:0:U:U:U:U
/etc/pve/.rrd:pve2.3-vm/150:389465:c8-fox-001:running:0:1662127428:8:0.149404501825584:8592031744:7754248192:53687091200:0:889533482705:282164047711:40865792:1561741824
/etc/pve/nodes/pro04/qemu-server/152.conf:name: c8-fox-003
/etc/pve/nodes/pro03/qemu-server/151.conf:name: c8-fox-002
/etc/pve/nodes/pro02/qemu-server/150.conf:name: c8-fox-001

# cat /etc/pve/nodes/pro04/qemu-server/152.conf
agent: 1
bios: ovmf
boot: order=scsi0
cores: 4
memory: 8194
name: c8-fox-003
net0: virtio=F6:3F:4F:BE:7B:53,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-152-disk-0,format=raw,size=50G
scsihw: pvscsi
smbios1: uuid=c6daef31-1346-4d8b-ac69-3dd42641cc43
sockets: 2
vmgenid: b8c49011-ad93-46be-b535-a42d726e9e3f
And if it helps at all, these are the log remnants I can find relating to one of those guests. I can remove the following if it is not useful.

Code:
~# grep -r "vm-152" /var/log/
/var/log/daemon.log:Aug 29 11:23:32 pro04 pvedaemon[4150482]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Aug 29 11:23:32 pro04 pvedaemon[4050818]: <root@pam> end task UPID:pro04:003F54D2:01A247BF:630D02F9:qmstart:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Aug 29 13:05:48 pro04 pve-guests[3614]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Aug 29 13:05:49 pro04 pvesh[3612]: Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Aug 29 13:22:08 pro04 pvedaemon[53288]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Aug 29 13:22:08 pro04 pvedaemon[3437]: <root@pam> end task UPID:pro04:0000D028:000196AC:630D1EC5:qmstart:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Aug 29 18:57:59 pro04 pve-guests[3089]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Aug 29 18:58:00 pro04 pvesh[3077]: Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Sep  1 15:23:48 pro04 pvedaemon[3186335]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Sep  1 15:23:48 pro04 pvedaemon[867581]: <root@pam> end task UPID:pro04:00309E9F:0177FCB6:63112FC8:qmmove:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Sep  2 06:41:15 pro04 pvedaemon[4021457]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Sep  2 06:41:15 pro04 pvedaemon[3902831]: <root@pam> end task UPID:pro04:003D5CD1:01CBFBC9:631206D0:qmmove:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log:Sep  2 06:48:10 pro04 pvedaemon[4027072]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/active:UPID:pro04:003D72C0:01CC9D93:6312086F:qmmove:152:root@pam: 1 6312099E timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/active:UPID:pro04:003D5CD1:01CBFBC9:631206D0:qmmove:152:root@pam: 1 631207FB timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/active:UPID:pro04:00309E9F:0177FCB6:63112FC8:qmmove:152:root@pam: 1 631130F4 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/active:UPID:pro04:00000C11:0000165A:630D6D7B:qmstart:152:root@pam: 1 630D6EA8 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/active:UPID:pro04:0000D028:000196AC:630D1EC5:qmstart:152:root@pam: 1 630D1FF0 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/5/UPID:pro04:0000D028:000196AC:630D1EC5:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/8/UPID:pro04:00309E9F:0177FCB6:63112FC8:qmmove:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/2/UPID:pro04:0005DDA3:010AF440:627D8A02:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/C/UPID:pro04:00050A76:0102B11B:627D74DC:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/9/UPID:pro04:003D3287:0197D4DD:630CE839:qmigrate:152:root@pam::2022-08-29 09:24:26 found local disk 'local-zfs:vm-152-disk-0' (in current VM config)
/var/log/pve/tasks/9/UPID:pro04:003D3287:0197D4DD:630CE839:qmigrate:152:root@pam::2022-08-29 09:24:26 ERROR: storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
/var/log/pve/tasks/9/UPID:pro04:003D3287:0197D4DD:630CE839:qmigrate:152:root@pam::2022-08-29 09:24:26 ERROR: migration aborted (duration 00:00:01): storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
/var/log/pve/tasks/9/UPID:pro04:003F54D2:01A247BF:630D02F9:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/F/UPID:pro04:00057FEB:01073CD4:627D807F:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/F/UPID:pro04:003D72C0:01CC9D93:6312086F:qmmove:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:000009CC:00001292:627ADECB:qmstart:152:root@pam: 627ADFF7 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:00050A76:0102B11B:627D74DC:qmstart:152:root@pam: 627D7607 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:00057FEB:01073CD4:627D807F:qmstart:152:root@pam: 627D81AA timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:0005DDA3:010AF440:627D8A02:qmstart:152:root@pam: 627D8B2E timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:00000D3C:000016B9:6308D467:qmstart:152:root@pam: 6308D593 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:003F54D2:01A247BF:630D02F9:qmstart:152:root@pam: 630D0424 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:00000E1E:00001782:630D1AF0:qmstart:152:root@pam: 630D1C1D timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:0000D028:000196AC:630D1EC5:qmstart:152:root@pam: 630D1FF0 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:00000C11:0000165A:630D6D7B:qmstart:152:root@pam: 630D6EA8 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:00309E9F:0177FCB6:63112FC8:qmmove:152:root@pam: 631130F4 timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:003D5CD1:01CBFBC9:631206D0:qmmove:152:root@pam: 631207FB timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/index:UPID:pro04:003D72C0:01CC9D93:6312086F:qmmove:152:root@pam: 6312099E timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/B/UPID:pro04:00000C0F:00001650:630D6D7B:startall::root@pam::Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/B/UPID:pro04:000009CB:00001287:627ADECB:startall::root@pam::Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/B/UPID:pro04:00000C11:0000165A:630D6D7B:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/B/UPID:pro04:000009CC:00001292:627ADECB:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/0/UPID:pro04:00000E1E:00001782:630D1AF0:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/0/UPID:pro04:00000C08:00001458:6308D460:startall::root@pam::Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/0/UPID:pro04:00000E1D:00001778:630D1AF0:startall::root@pam::Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/0/UPID:pro04:003D5CD1:01CBFBC9:631206D0:qmmove:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/pve/tasks/0/UPID:pro04:003DDEB2:019B1D5B:630CF0A0:qmigrate:152:root@pam::2022-08-29 10:00:17 found local disk 'local-zfs:vm-152-disk-0' (in current VM config)
/var/log/pve/tasks/0/UPID:pro04:003DDEB2:019B1D5B:630CF0A0:qmigrate:152:root@pam::2022-08-29 10:00:17 ERROR: storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
/var/log/pve/tasks/0/UPID:pro04:003DDEB2:019B1D5B:630CF0A0:qmigrate:152:root@pam::2022-08-29 10:00:17 ERROR: migration aborted (duration 00:00:01): storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
/var/log/pve/tasks/7/UPID:pro04:00249C34:001E9885:630D6907:qmigrate:152:root@pam::2022-08-29 18:33:59 found local disk 'local-zfs:vm-152-disk-0' (in current VM config)
/var/log/pve/tasks/7/UPID:pro04:00249C34:001E9885:630D6907:qmigrate:152:root@pam::2022-08-29 18:33:59 ERROR: storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
/var/log/pve/tasks/7/UPID:pro04:00249C34:001E9885:630D6907:qmigrate:152:root@pam::2022-08-29 18:33:59 ERROR: migration aborted (duration 00:00:00): storage migration for 'local-zfs:vm-152-disk-0' to storage '' failed - no storage ID specified
/var/log/pve/tasks/7/UPID:pro04:00000D3C:000016B9:6308D467:qmstart:152:root@pam::TASK ERROR: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog.1:Aug 26 07:15:46 pro04 pve-guests[3388]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog.1:Aug 26 07:15:47 pro04 pvesh[3015]: Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
grep: /var/log/journal/817a034536314e10b7b6118398f5c580/system@0005df7a2a73c839-050cb2477103dd44.journal~: binary file matches
grep: /var/log/journal/817a034536314e10b7b6118398f5c580/system.journal: binary file matches
/var/log/syslog:Aug 29 11:23:32 pro04 pvedaemon[4150482]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Aug 29 11:23:32 pro04 pvedaemon[4050818]: <root@pam> end task UPID:pro04:003F54D2:01A247BF:630D02F9:qmstart:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Aug 29 13:05:48 pro04 pve-guests[3614]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Aug 29 13:05:49 pro04 pvesh[3612]: Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Aug 29 13:22:08 pro04 pvedaemon[53288]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Aug 29 13:22:08 pro04 pvedaemon[3437]: <root@pam> end task UPID:pro04:0000D028:000196AC:630D1EC5:qmstart:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Aug 29 18:57:59 pro04 pve-guests[3089]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Aug 29 18:58:00 pro04 pvesh[3077]: Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Sep  1 15:23:48 pro04 pvedaemon[3186335]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Sep  1 15:23:48 pro04 pvedaemon[867581]: <root@pam> end task UPID:pro04:00309E9F:0177FCB6:63112FC8:qmmove:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Sep  2 06:41:15 pro04 pvedaemon[4021457]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Sep  2 06:41:15 pro04 pvedaemon[3902831]: <root@pam> end task UPID:pro04:003D5CD1:01CBFBC9:631206D0:qmmove:152:root@pam: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/syslog:Sep  2 06:48:10 pro04 pvedaemon[4027072]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log.1:Aug 26 07:15:46 pro04 pve-guests[3388]: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
/var/log/daemon.log.1:Aug 26 07:15:47 pro04 pvesh[3015]: Starting VM 152 failed: timeout: no zvol device link for 'vm-152-disk-0' found after 300 sec found.
 
What does zpool history tell you about the volumes?
 
Thanks for offering to help, but after things got even stranger, I decided to rebuild the entire cluster.
I backed all guests up onto NFS storage, then rebuilt the hosts one at a time to keep everything running while doing so.

I never found out exactly what happened, but it was SSH related.
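
For anyone taking the same route, a per-guest backup to an NFS-backed storage before a rebuild can be done with vzdump; a minimal sketch, where the storage ID nfs-backup is a placeholder:

Code:
# Back up VM 150 to an NFS storage, stopping it briefly for a consistent copy
vzdump 150 --storage nfs-backup --mode stop --compress zstd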
 
