"storage migration failed" moving a local disk to lvmthin

larsen

Active Member
Feb 28, 2020
I have a VM using a local qcow2 disk (virtual 465 GB, physical 55 GB) stored on a local directory (100 GB). The VM was recently migrated from another node to the new, second node in a cluster. It's running fine; only the disk needs to be moved to an lvmthin storage (7 TB).

The disk move seems to run fine up until 100%, but then I get this error message:

Code:
create full clone of drive scsi0 (local:108/vm-108-disk-1.qcow2)
  Rounding up size to full physical extent <465.77 GiB
  Logical volume "vm-108-disk-0" created.
transferred: 0 bytes remaining: 500109934592 bytes total: 500109934592 bytes progression: 0.00 %
transferred: 5001099345 bytes remaining: 495108835247 bytes total: 500109934592 bytes progression: 1.00 %
...
transferred: 495959022134 bytes remaining: 4150912458 bytes total: 500109934592 bytes progression: 99.17 %
transferred: 500109934592 bytes remaining: 0 bytes total: 500109934592 bytes progression: 100.00 %
transferred: 500109934592 bytes remaining: 0 bytes total: 500109934592 bytes progression: 100.00 %
  WARNING: Device /dev/dm-6 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-6 not initialized in udev database even after waiting 10000000 microseconds.
  Logical volume "vm-108-disk-0" successfully removed
  WARNING: Device /dev/dm-6 not initialized in udev database even after waiting 10000000 microseconds.
TASK ERROR: storage migration failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-108-disk-0' failed: got timeout

Some info:
Code:
Virtual Environment 6.1-8

atl-vm03:~# pveperf
CPU BOGOMIPS:      128004.60
REGEX/SECOND:      1688019
HD SIZE:           93.99 GB (/dev/mapper/pve-root)
BUFFERED READS:    246.93 MB/sec
AVERAGE SEEK TIME: 6.09 ms
FSYNCS/SECOND:     102.35
DNS EXT:           144.12 ms
DNS INT:           0.76 ms (atl.local)

atl-vm03:~# cat /etc/pve/storage.cfg
dir: Backup
    path /var/lib/vz/dump
    content backup
    maxfiles 2

dir: local
    path /var/lib/vz
    content rootdir,images,vztmpl,iso
    maxfiles 0

lvmthin: lvmthin
    thinpool data
    vgname pve
    content images,rootdir
    nodes atl-vm03

atl-vm03:~# ll /dev/dm*
brw-rw---- 1 root disk 253, 0 2020-04-14 16:59 /dev/dm-0
brw-rw---- 1 root disk 253, 1 2020-04-14 16:59 /dev/dm-1
brw-rw---- 1 root disk 253, 2 2020-04-15 15:05 /dev/dm-2
brw-rw---- 1 root disk 253, 3 2020-04-15 15:05 /dev/dm-3
brw-rw---- 1 root disk 253, 4 2020-04-15 15:05 /dev/dm-4
brw-rw---- 1 root disk 253, 5 2020-04-15 15:05 /dev/dm-5

How can I fix this?
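While the migration task is still running (the target LV is removed again once it fails), the device the warnings complain about can be inspected by hand. This is just a diagnostic sketch; the device and LV names are taken from the task log above:

```shell
# Re-run the command that later times out, against the LV named in the log
/sbin/lvs --separator : --noheadings --units b --nosuffix \
    --options lv_size /dev/pve/vm-108-disk-0

# Check whether udev ever recorded the new device-mapper node
udevadm info /dev/dm-6

# Compare what device-mapper knows against what exists under /dev
dmsetup ls
ls -l /dev/dm-*
```

If `udevadm info` reports the device as unknown while `dmsetup ls` lists it, that would match the "not initialized in udev database" warning: the kernel created the device, but udev never processed the corresponding event.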
 
Could you try running `udevadm settle` in a shell while the migration is running?
 
Did this while the migration was at 7% (still running right now). Didn't produce any output. Should it?

Edit: Still getting the same error:
Code:
  WARNING: Device /dev/dm-6 not initialized in udev database even after waiting 10000000 microseconds.
  Logical volume "vm-108-disk-0" successfully removed
  WARNING: Device /dev/dm-6 not initialized in udev database even after waiting 10000000 microseconds.
TASK ERROR: storage migration failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-108-disk-0' failed: got timeout
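Since `udevadm settle` on its own made no difference, a more forceful variant worth trying (an assumption, not a verified fix) is to re-trigger the udev event for the device named in the warning while the migration is running. `settle` only waits for queued events, whereas the warning suggests the event for `dm-6` was never processed at all:

```shell
# Replay the "add" event for the device the warning names,
# then wait for udev to finish processing it
udevadm trigger --action=add /dev/dm-6
udevadm settle
```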
 
As a workaround, I have now added a second disk on lvm-thin and copied the old disk's contents over with PartedMagic/Clonezilla.
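For anyone hitting the same timeout: the same workaround can also be done from the node's shell instead of a rescue ISO, with the VM shut down. This is a sketch under assumptions; the storage name, VM ID, and volume names are taken from the logs above, and the qcow2 path assumes the standard `dir` storage layout under `/var/lib/vz`:

```shell
# Allocate a raw volume on the thin pool (size rounded up from the ~465.77 GiB in the log)
pvesm alloc lvmthin 108 vm-108-disk-0 466G

# Copy the qcow2 image onto the new thin LV (only with the VM powered off)
qemu-img convert -p -f qcow2 -O raw \
    /var/lib/vz/images/108/vm-108-disk-1.qcow2 /dev/pve/vm-108-disk-0

# Point the VM at the new volume and let Proxmox rescan its storages
qm set 108 --scsi0 lvmthin:vm-108-disk-0
qm rescan --vmid 108
```

After verifying the VM boots from the new disk, the old qcow2 file can be detached and deleted from the `local` storage.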
 
