[SOLVED] BUGREPORT: TASK ERROR: timeout: no zvol device link for 'xxx' found after 300 sec found.

Swfty

Hi Proxmox Team!

I have already seen similar threads, but so far nothing suggested there has helped in my case.
It started a few days ago, when the backup job failed:

Code:
2023-04-24T00:04:59.884946+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 100 failed - timeout: no zvol device link for 'vm-100-disk-3' found after 300 sec found.
2023-04-24T00:09:58.947103+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 101 failed - timeout: no zvol device link for 'vm-101-disk-0' found after 300 sec found.
2023-04-24T00:14:58.318009+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 110 failed - timeout: no zvol device link for 'vm-110-disk-0' found after 300 sec found.
2023-04-24T00:19:57.382491+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 500 failed - timeout: no zvol device link for 'vm-500-disk-0' found after 300 sec found.
2023-04-24T00:24:56.454325+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 510 failed - timeout: no zvol device link for 'vm-510-disk-0' found after 300 sec found.
2023-04-24T00:29:55.528290+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 600 failed - timeout: no zvol device link for 'vm-600-disk-0' found after 300 sec found.
2023-04-24T00:34:54.605200+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 700 failed - timeout: no zvol device link for 'vm-700-disk-0' found after 300 sec found.
2023-04-24T00:39:53.685649+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 701 failed - timeout: no zvol device link for 'vm-701-disk-0' found after 300 sec found.
2023-04-24T00:44:52.763089+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 702 failed - timeout: no zvol device link for 'vm-702-disk-0' found after 300 sec found.
2023-04-24T00:49:51.841008+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 703 failed - timeout: no zvol device link for 'vm-703-disk-0' found after 300 sec found.
2023-04-24T00:54:50.917260+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 704 failed - timeout: no zvol device link for 'vm-704-disk-0' found after 300 sec found.
2023-04-24T00:59:50.003090+02:00 gyor pvescheduler[3087873]: ERROR: Backup of VM 800 failed - timeout: no zvol device link for 'vm-800-disk-0' found after 300 sec found.
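
For context: as far as I can tell, this timeout is raised when PVE gives up waiting for the udev-created device link under /dev/zvol/<pool>/<volname> for the storage the disk lives on. Whether such a link exists can be checked by hand, e.g. for the first failing disk:

Code:
# the path PVE waits for, assuming the storage points at the DATA-SAS pool
ls -l /dev/zvol/DATA-SAS/vm-100-disk-3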


Code:
# zfs list
NAME                              USED  AVAIL     REFER  MOUNTPOINT
DATA-5TB                         3.11T  1.31T       96K  /DATA-5TB
DATA-5TB/trash                   3.11T  1.31T     3.11T  /DATA-5TB/trash
DATA-SAS                          497G  1.27T      112K  /DATA-SAS
DATA-SAS/nextcloud                 96K  1.27T       96K  /DATA-SAS/nextcloud
DATA-SAS/pve                     3.28G  1.27T      112K  /DATA-SAS/pve
DATA-SAS/pve/basevol-903-disk-0   306M  7.70G      306M  /DATA-SAS/pve/basevol-903-disk-0
DATA-SAS/pve/subvol-980-disk-0    306M  7.70G      306M  /DATA-SAS/pve/subvol-980-disk-0
DATA-SAS/pve/subvol-990-disk-0   2.68G  5.32G     2.68G  /DATA-SAS/pve/subvol-990-disk-0
DATA-SAS/vm-100-disk-0             84K  1.27T       84K  -
DATA-SAS/vm-100-disk-1           28.1G  1.27T     28.1G  -
DATA-SAS/vm-100-disk-2             76K  1.27T       76K  -
DATA-SAS/vm-100-disk-3           26.7G  1.27T     26.7G  -
DATA-SAS/vm-101-disk-0           99.0G  1.34T     24.5G  -
DATA-SAS/vm-110-disk-0             56K  1.27T       56K  -
DATA-SAS/vm-500-disk-0           16.5G  1.28T     6.13G  -
DATA-SAS/vm-510-disk-0           1.88G  1.27T     1.88G  -
DATA-SAS/vm-510-disk-1           7.21G  1.27T     7.21G  -
DATA-SAS/vm-600-disk-0           16.5G  1.28T     2.31G  -
DATA-SAS/vm-700-disk-0           16.5G  1.28T     6.30G  -
DATA-SAS/vm-701-disk-0           16.5G  1.29T     1.13G  -
DATA-SAS/vm-701-disk-1           15.9G  1.27T     15.9G  -
DATA-SAS/vm-702-disk-0           16.5G  1.28T     8.95G  -
DATA-SAS/vm-702-disk-1            139G  1.27T      139G  -
DATA-SAS/vm-703-disk-0           66.0G  1.33T     9.70G  -
DATA-SAS/vm-704-disk-0           2.96G  1.27T     2.96G  -
DATA-SAS/vm-800-disk-0           16.5G  1.29T     1.09G  -
DATA-SAS/vm-800-disk-1           3.69G  1.27T     3.69G  -
DATA-SAS/vm-800-disk-2           3.62G  1.27T     3.62G  -
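
(Side note: the CT subvols live under DATA-SAS/pve, while the VM zvols sit directly under DATA-SAS. Filtering the listing by type, and including volmode, makes the zvols easier to audit, since volmode=none would suppress the device links entirely:)

Code:
# show only zvols, with the property that controls device-link creation
zfs list -t volume -r -o name,volsize,volmode DATA-SAS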

Code:
# systemctl | grep service | grep -iE '(proxmox|pve)'
  pve-cluster.service                                                                                              loaded active     running   The Proxmox VE cluster filesystem
  pve-firewall.service                                                                                             loaded active     running   Proxmox VE firewall
  pve-guests.service                                                                                               loaded active     exited    PVE guests
  pve-ha-crm.service                                                                                               loaded active     running   PVE Cluster HA Resource Manager Daemon
  pve-ha-lrm.service                                                                                               loaded active     running   PVE Local HA Resource Manager Daemon
  pve-lxc-syscalld.service                                                                                         loaded active     running   Proxmox VE LXC Syscall Daemon
  pvebanner.service                                                                                                loaded active     exited    Proxmox VE Login Banner
  pvedaemon.service                                                                                                loaded active     running   PVE API Daemon
  pvefw-logger.service                                                                                             loaded active     running   Proxmox VE firewall logger
  pvenetcommit.service                                                                                             loaded active     exited    Commit Proxmox VE network changes
  pveproxy.service                                                                                                 loaded active     running   PVE API Proxy Server
  pvescheduler.service                                                                                             loaded active     running   Proxmox VE scheduler
  pvestatd.service                                                                                                 loaded active     running   PVE Status Daemon
  qmeventd.service                                                                                                 loaded active     running   PVE Qemu Event Daemon
  spiceproxy.service                                                                                               loaded active     running   PVE SPICE Proxy Server
  watchdog-mux.service                                                                                             loaded active     running   Proxmox VE watchdog multiplexer

Code:
# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.11-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-6.2: 7.4-2
pve-kernel-5.15: 7.3-3
pve-kernel-6.2.11-1-pve: 6.2.11-1
pve-kernel-6.2.6-1-pve: 6.2.6-1
pve-kernel-libc-dev: 5.19.17-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve1
criu: 3.17.1-2
glusterfs-client: 10.3-4
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.25-1
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-network-perl: 0.7.3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: not correctly installed
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.3-1+b1
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

Code:
2023-04-28T12:51:09.349736+02:00 gyor pve-guests[3620]: timeout: no zvol device link for 'vm-800-disk-0' found after 300 sec found.
2023-04-28T12:51:09.431201+02:00 gyor pvesh[2498]: Starting VM 800 failed: timeout: no zvol device link for 'vm-800-disk-0' found after 300 sec found.
2023-04-28T12:56:08.641225+02:00 gyor pve-guests[12693]: timeout: no zvol device link for 'vm-700-disk-0' found after 300 sec found.
2023-04-28T12:56:09.586175+02:00 gyor pvesh[2498]: Starting VM 700 failed: timeout: no zvol device link for 'vm-700-disk-0' found after 300 sec found.
2023-04-28T13:01:08.799027+02:00 gyor pve-guests[18777]: timeout: no zvol device link for 'vm-701-disk-0' found after 300 sec found.
2023-04-28T13:01:09.750802+02:00 gyor pvesh[2498]: Starting VM 701 failed: timeout: no zvol device link for 'vm-701-disk-0' found after 300 sec found.
2023-04-28T13:06:08.951672+02:00 gyor pve-guests[24839]: timeout: no zvol device link for 'vm-702-disk-0' found after 300 sec found.
2023-04-28T13:06:09.900874+02:00 gyor pvesh[2498]: Starting VM 702 failed: timeout: no zvol device link for 'vm-702-disk-0' found after 300 sec found.
2023-04-28T13:11:09.114886+02:00 gyor pve-guests[30958]: timeout: no zvol device link for 'vm-703-disk-0' found after 300 sec found.
2023-04-28T13:11:10.060158+02:00 gyor pvesh[2498]: Starting VM 703 failed: timeout: no zvol device link for 'vm-703-disk-0' found after 300 sec found.
2023-04-28T13:16:09.268311+02:00 gyor pve-guests[37014]: timeout: no zvol device link for 'vm-704-disk-0' found after 300 sec found.
2023-04-28T13:16:10.216598+02:00 gyor pvesh[2498]: Starting VM 704 failed: timeout: no zvol device link for 'vm-704-disk-0' found after 300 sec found.
2023-04-28T13:21:09.430377+02:00 gyor pve-guests[43125]: timeout: no zvol device link for 'vm-100-disk-3' found after 300 sec found.
2023-04-28T13:21:10.377793+02:00 gyor pvesh[2498]: Starting VM 100 failed: timeout: no zvol device link for 'vm-100-disk-3' found after 300 sec found.

Code:
# zpool status DATA-SAS
  pool: DATA-SAS
 state: ONLINE
  scan: scrub canceled on Fri Apr 28 12:28:44 2023
config:

    NAME                                        STATE     READ WRITE CKSUM
    DATA-SAS                                    ONLINE       0     0     0
      mirror-0                                  ONLINE       0     0     0
        scsi-3690b11c03d88db0027c86fd30ccb801e  ONLINE       0     0     0
        scsi-3690b11c03d88db0027c872780cbe654d  ONLINE       0     0     0
      mirror-1                                  ONLINE       0     0     0
        scsi-3690b11c03d88db0027c87bb6090993ce  ONLINE       0     0     0
        scsi-3690b11c03d88db0027c87ead08ef0827  ONLINE       0     0     0

errors: No known data errors

Code:
# qm config 100
affinity: 48-55
agent: 1,fstrim_cloned_disks=1
balloon: 0
boot: order=virtio0
cores: 4
cpu: qemu64
description: xxx
efidisk0: DATA-SAS:vm-100-disk-2,efitype=4m,pre-enrolled-keys=1,size=1M
hotplug: disk,network,usb
kvm: 1
machine: pc-q35-5.1
memory: 8192
name: xxx
net0: virtio=EE:70:59:B8:8B:22,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-single
smbios1: uuid=0c0cf276-5a5f-43f3-ba34-562f896334ad
sockets: 2
startup: order=30,up=10
tpmstate0: DATA-SAS:vm-100-disk-0,size=4M,version=v2.0
unused0: DATA-SAS:vm-100-disk-1
usb0: host=148f:5370
vga: std,memory=32
virtio0: DATA-SAS:vm-100-disk-3,aio=native,cache=directsync,discard=on,iothread=1,size=50G
vmgenid: 455ef932-680b-4218-beb4-6730886b40a1
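
virtio0 references the volume as DATA-SAS:vm-100-disk-3, i.e. <storage ID>:<volname>; PVE resolves that to a device path via the pool property of the storage definition. The resolved path can be printed with a stock PVE command:

Code:
# prints the device path PVE expects for this volume
pvesm path DATA-SAS:vm-100-disk-3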

Code:
# zvol_wait
Testing 20 zvol links
All zvol links are now present.
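
So zvol_wait is happy and the links do exist somewhere; listing the whole /dev/zvol tree shows where udev actually created them:

Code:
# every zvol device link udev has created, with its target
find /dev/zvol -type l -exec ls -l {} +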

Code:
# for i in $(ls -1  /dev/zd* |grep -v '/dev/zd[0-9]*p[0-9]*'); do echo $i;  /lib/udev/zvol_id $i ; done
/dev/zd0
DATA-SAS/vm-600-disk-0
/dev/zd112
DATA-SAS/vm-800-disk-0
/dev/zd128
DATA-SAS/vm-100-disk-3
/dev/zd144
DATA-SAS/vm-702-disk-0
/dev/zd16
DATA-SAS/vm-704-disk-0
/dev/zd160
DATA-SAS/vm-701-disk-0
/dev/zd176
DATA-SAS/vm-100-disk-1
/dev/zd192
DATA-SAS/vm-510-disk-0
/dev/zd208
DATA-SAS/vm-703-disk-0
/dev/zd224
DATA-SAS/vm-800-disk-2
/dev/zd240
DATA-SAS/vm-702-disk-1
/dev/zd256
DATA-SAS/vm-100-disk-2
/dev/zd272
DATA-SAS/vm-800-disk-1
/dev/zd288
DATA-SAS/vm-101-disk-0
/dev/zd304
DATA-SAS/vm-500-disk-0
/dev/zd32
DATA-SAS/vm-100-disk-0
/dev/zd48
DATA-SAS/vm-510-disk-1
/dev/zd64
DATA-SAS/vm-701-disk-1
/dev/zd80
DATA-SAS/vm-110-disk-0
/dev/zd96
DATA-SAS/vm-700-disk-0

I have already tried exporting and re-importing the pool, to no avail.
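
For completeness, this is roughly what I ran (standard zpool commands, with all guests on the pool stopped first):

Code:
# export and re-import the pool so udev recreates the device links
zpool export DATA-SAS
zpool import DATA-SAS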

Any help would be deeply appreciated.

Thank you!
 
I have found the problem, and that is why I am changing this topic to a bug report.

Here is the story:
First, when I installed Proxmox on this server, I added the whole ZFS pool (DATA-SAS) as a storage with the ID DATA-SAS, and created only VMs on it.
After a while I started to create CTs in this pool too. Shortly after, I realized that I needed some extra storage as a mount point, so I created a ZFS dataset (DATA-SAS/nextcloud).
I did not like the configuration of that storage, so I decided to rearrange it. I migrated the CTs to another storage (local) and created yet another ZFS dataset (DATA-SAS/pve).
Here comes the bug... I removed the old storage configuration from Proxmox and added it back with the same ID, but now pointing at the DATA-SAS/pve dataset.
Proxmox allowed me to remove the storage from the system while a lot of VMs were still using the old pool. As a result, PVE started waiting for device links under /dev/zvol/DATA-SAS/pve/ while the zvols actually live directly under DATA-SAS, which is exactly the timeout above.
I believe we need a check there...
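
To illustrate, the zfspool entry in /etc/pve/storage.cfg must have gone from roughly this (reconstructed from memory, not a verbatim copy):

Code:
zfspool: DATA-SAS
        pool DATA-SAS
        content images,rootdir

to this:

Code:
zfspool: DATA-SAS
        pool DATA-SAS/pve
        content images,rootdir

With the second definition, PVE waits for /dev/zvol/DATA-SAS/pve/vm-100-disk-3 and friends, which never appear, because the zvols and their device links live directly under DATA-SAS.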
 
