"timeout: no zvol device link" when I try to start a VM

erer2001

Member
May 25, 2021
Every time I try to start this VM after rebooting the PVE host, I get: TASK ERROR: timeout: no zvol device link for 'vm-205-disk-0' found after 300 sec found.

Everything is up to date.
I have another Win10 VM and that one starts without any issue (its disk is on rpool).

I suspect it has something to do with the nvme-pool zpool, but I scrubbed it and found nothing wrong.

Each time I have to restore the VM from backup and then it will start (restoring to the same storage pool).

zvol_wait loops forever, saying it's waiting for a ZFS link.
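For reference, this is roughly how I check by hand after a reboot whether the link is actually missing:

Code:
# check whether udev created the named symlink for the disk
ls -l /dev/zvol/nvme-pool/vm-205-disk-0
# the raw zvol block devices should still show up even when the links are missing
ls -l /dev/zd*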

I searched everywhere but I couldn't find any answer. I'll attach as much info as possible here:

Code:
root@proxmox:~# cat /etc/pve/qemu-server/205.conf
agent: 1,fstrim_cloned_disks=1
balloon: 1024
boot: order=scsi0;net0
cores: 4
cpu: host
machine: q35
memory: 4096
name: Ubuntu
net0: virtio=<REMOVED>,bridge=vmbr0,tag=30
numa: 0
ostype: l26
scsi0: nvme-pool:vm-205-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=<REMOVED>
sockets: 1
vga: qxl
vmgenid: <REMOVED>

Code:
root@proxmox:~# zfs get all nvme-pool
NAME       PROPERTY               VALUE                  SOURCE
nvme-pool  type                   filesystem             -
nvme-pool  creation               Thu Mar  3  0:10 2022  -
nvme-pool  used                   320G                   -
nvme-pool  available              130G                   -
nvme-pool  referenced             287G                   -
nvme-pool  compressratio          1.07x                  -
nvme-pool  mounted                yes                    -
nvme-pool  quota                  none                   default
nvme-pool  reservation            none                   default
nvme-pool  recordsize             128K                   default
nvme-pool  mountpoint             /nvme-pool             default
nvme-pool  sharenfs               off                    default
nvme-pool  checksum               on                     default
nvme-pool  compression            zstd                   local
nvme-pool  atime                  off                    local
nvme-pool  devices                on                     default
nvme-pool  exec                   on                     default
nvme-pool  setuid                 on                     default
nvme-pool  readonly               off                    default
nvme-pool  zoned                  off                    default
nvme-pool  snapdir                hidden                 default
nvme-pool  aclmode                discard                default
nvme-pool  aclinherit             restricted             default
nvme-pool  createtxg              1                      -
nvme-pool  canmount               on                     default
nvme-pool  xattr                  on                     default
nvme-pool  copies                 1                      default
nvme-pool  version                5                      -
nvme-pool  utf8only               off                    -
nvme-pool  normalization          none                   -
nvme-pool  casesensitivity        sensitive              -
nvme-pool  vscan                  off                    default
nvme-pool  nbmand                 off                    default
nvme-pool  sharesmb               off                    default
nvme-pool  refquota               none                   default
nvme-pool  refreservation         none                   default
nvme-pool  guid                   <REMOVED>    -
nvme-pool  primarycache           all                    default
nvme-pool  secondarycache         all                    default
nvme-pool  usedbysnapshots        0B                     -
nvme-pool  usedbydataset          287G                   -
nvme-pool  usedbychildren         33.1G                  -
nvme-pool  usedbyrefreservation   0B                     -
nvme-pool  logbias                latency                default
nvme-pool  objsetid               54                     -
nvme-pool  dedup                  off                    default
nvme-pool  mlslabel               none                   default
nvme-pool  sync                   standard               default
nvme-pool  dnodesize              legacy                 default
nvme-pool  refcompressratio       1.05x                  -
nvme-pool  written                287G                   -
nvme-pool  logicalused            333G                   -
nvme-pool  logicalreferenced      303G                   -
nvme-pool  volmode                default                default
nvme-pool  filesystem_limit       none                   default
nvme-pool  snapshot_limit         none                   default
nvme-pool  filesystem_count       none                   default
nvme-pool  snapshot_count         none                   default
nvme-pool  snapdev                hidden                 default
nvme-pool  acltype                off                    default
nvme-pool  context                none                   default
nvme-pool  fscontext              none                   default
nvme-pool  defcontext             none                   default
nvme-pool  rootcontext            none                   default
nvme-pool  relatime               off                    default
nvme-pool  redundant_metadata     all                    default
nvme-pool  overlay                on                     default
nvme-pool  encryption             off                    default
nvme-pool  keylocation            none                   default
nvme-pool  keyformat              none                   default
nvme-pool  pbkdf2iters            0                      default
nvme-pool  special_small_blocks   0                      default
nvme-pool  com.sun:auto-snapshot  false                  local

Code:
root@proxmox:~# journalctl -b
-- Journal begins at Wed 2022-03-02 23:34:40 CET, ends at Mon 2022-08-08 00:59:40 CEST. --
Aug 08 00:32:20 proxmox kernel: Linux version 5.15.39-3-pve (build@proxmox) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for >
Aug 08 00:32:20 proxmox kernel: Command line: initrd=\EFI\proxmox\5.15.39-3-pve\initrd.img-5.15.39-3-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
Aug 08 00:32:20 proxmox kernel: KERNEL supported cpus:
Aug 08 00:32:20 proxmox kernel:   Intel GenuineIntel
Aug 08 00:32:20 proxmox kernel:   AMD AuthenticAMD
Aug 08 00:32:20 proxmox kernel:   Hygon HygonGenuine
Aug 08 00:32:20 proxmox kernel:   Centaur CentaurHauls
Aug 08 00:32:20 proxmox kernel:   zhaoxin   Shanghai
Aug 08 00:32:20 proxmox kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 08 00:32:20 proxmox kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 08 00:32:20 proxmox kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 08 00:32:20 proxmox kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 08 00:32:20 proxmox kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 08 00:32:20 proxmox kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Aug 08 00:32:20 proxmox kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Aug 08 00:32:20 proxmox kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Aug 08 00:32:20 proxmox kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Aug 08 00:32:20 proxmox kernel: signal: max sigframe size: 2032
Aug 08 00:32:20 proxmox kernel: BIOS-provided physical RAM map:
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x000000000009e000-0x000000000009efff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] usable
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000885fefff] usable
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x00000000885ff000-0x0000000088bfefff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x0000000088bff000-0x0000000088cfefff] ACPI data
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x0000000088cff000-0x0000000088efefff] ACPI NVS
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x0000000088eff000-0x0000000089bfefff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x0000000089bff000-0x0000000089bfffff] usable
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x0000000089c00000-0x000000008f7fffff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Aug 08 00:32:20 proxmox kernel: BIOS-e820: [mem 0x00000000fed00000-0x00000000fed00fff] reserved


Code:
root@proxmox:~# zpool status
  pool: BACKUP
 state: ONLINE
  scan: scrub repaired 0B in 01:20:22 with 0 errors on Sun Jul 10 01:44:25 2022
config:

        NAME                                STATE     READ WRITE CKSUM
        BACKUP                              ONLINE       0     0     0
          ata-TOSHIBA_MQ<REMOVED>_<REMOVED>  ONLINE       0     0     0

errors: No known data errors

  pool: RAIDZ12TB
 state: ONLINE
  scan: scrub repaired 0B in 04:01:52 with 0 errors on Sun Jul 10 04:25:59 2022
config:

        NAME                                 STATE     READ WRITE CKSUM
        RAIDZ12TB                            ONLINE       0     0     0
          raidz1-0                           ONLINE       0     0     0
            ata-ST4000VN008-<REMOVED>  ONLINE       0     0     0
            ata-ST4000VN008-<REMOVED>  ONLINE       0     0     0
            ata-ST4000VN008-<REMOVED>  ONLINE       0     0     0
            ata-ST4000VN008-<REMOVED>  ONLINE       0     0     0

errors: No known data errors

  pool: SINGLE1TB
 state: ONLINE
  scan: scrub repaired 0B in 02:51:01 with 0 errors on Sun Jul 10 03:15:14 2022
config:

        NAME                                       STATE     READ WRITE CKSUM
        SINGLE1TB                                  ONLINE       0     0     0
          ata-HGST_HTS54<REMOVED>_JA<REMOVED>  ONLINE       0     0     0

errors: No known data errors

  pool: SINGLE4TB
 state: ONLINE
  scan: scrub repaired 0B in 06:27:10 with 0 errors on Sun Jul 10 06:51:26 2022
config:

        NAME                                        STATE     READ WRITE CKSUM
        SINGLE4TB                                   ONLINE       0     0     0
          ata-WDC_WD40EZAZ-<REMOVED>_WD-WX32<REMOVED>  ONLINE       0     0     0

errors: No known data errors

  pool: nvme-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:02:59 with 0 errors on Tue Jul 26 20:48:31 2022
config:

        NAME                                        STATE     READ WRITE CKSUM
        nvme-pool                                   ONLINE       0     0     0
          nvme-WDC_WDS500G2B0C-00<REMOVED>  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:04:24 with 0 errors on Tue Jul 26 20:49:45 2022
config:

        NAME                                               STATE     READ WRITE CKSUM
        rpool                                              ONLINE       0     0     0
          nvme-eui.1911<REMOVED>3f00-part3  ONLINE       0     0     0

errors: No known data errors

Code:
root@proxmox:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.39-3-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-8
pve-kernel-helper: 7.2-8
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-2-pve: 5.15.39-2
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-11
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
 
First question:

Does zfs list | grep nvme-pool show a dataset for the 205 VM? Do you have any snapshots on that pool you could roll back to?
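For example, something like this (just an illustration of what I mean):

Code:
# list datasets and snapshots on the pool and filter for the VM's disks
zfs list -t all -r nvme-pool | grep vm-205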

Second, why are you using ZFS with single devices? Wouldn't you be better off using LVM-thin?
 
1) Yes, it shows the zvol of VM 205. I typically solve it by restoring from backup. 2) ZFS is so much more than just redundancy.
 
Hi,
can you check whether the workaround suggested here helps?
Hi, today I had to reboot the server, so I tested it, and after running
Code:
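# restart the udev daemon, replay the device events, and wait for the event queue to drain
# so that the missing /dev/zvol/... symlinks get (re)created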
systemctl restart systemd-udevd; udevadm trigger; udevadm settle
it worked. But it looks like a temporary fix.


Funny enough, I remember stumbling on that thread a while ago, but I missed that line and only tried
Code:
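# trigger a udev event for every whole-disk zvol node (/dev/zdN), skipping partition nodes (/dev/zdNpM)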
for i in $(ls -1 /dev/zd* |grep -v '/dev/zd[0-9]*p[0-9]*'); do udevadm trigger $i; done
which did not fix the issue.

Still, it's strange that the VMs on rpool have no issue starting. I guess as a temporary fix I might add it to cron and have it run on every reboot.
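Something along these lines is what I have in mind (completely untested, the file name is just a placeholder, and the sleep is only there to give ZFS time to import the pools first):

Code:
# /etc/cron.d/zvol-link-workaround -- re-run the udev workaround once after every boot
@reboot root sleep 60 && systemctl restart systemd-udevd && udevadm trigger && udevadm settle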
 
