Issues when cloning VMs

aljaxus

Hello, I've just installed Proxmox VE 6 over an old installation of Proxmox VE 5.

The installation was successful without any problems whatsoever, but now when I try to clone an existing VM template I get the following errors:
Code:
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
TASK ERROR: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/vmdata/vm-300-disk-0' failed: got timeout
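For reference, the same lvs query that the clone task runs can also be issued by hand; I'd expect it to reproduce the hang on its own, but that's an assumption on my part, not something I've confirmed:
Code:
# the exact command from the task error, run manually to see whether
# LVM itself hangs or only the PVE task layer times out
/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/vmdata/vm-300-disk-0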

I've looked at some other threads on this forum and found the following [link].

The following spoilers include the output of the commands mentioned there:
Code:
root@apollo:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,backup,iso

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

lvmthin: local-vmstore
    thinpool vmstore
    vgname vmdata
    content rootdir,images
Code:
root@apollo:~# pvs -a
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
  PV                                           VG     Fmt  Attr PSize    PFree 
  /dev/dm-28                                               ---        0       0
  /dev/sda2                                                ---        0       0
  /dev/sda3                                    pve    lvm2 a--  <127.50g <15.88g
  /dev/sdb1                                    vmdata lvm2 a--    <1.64t  26.43g
  /dev/vmdata/vm-100-state-snapshot_2019_07_28             ---        0       0
  /dev/vmdata/vm-151-state-a_2019_08_01                    ---        0       0
  /dev/vmdata/vm-151-state-snap                            ---        0       0
  /dev/vmdata/vm-152-state-updated                         ---        0       0
root@apollo:~#
 
Code:
root@apollo:~# vgs -a
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
  VG     #PV #LV #SN Attr   VSize    VFree 
  pve      1   3   0 wz--n- <127.50g <15.88g
  vmdata   1  26   0 wz--n-   <1.64t  26.43g
Code:
root@apollo:~# lvs -a
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/dm-28 not initialized in udev database even after waiting 10000000 microseconds.
  LV                                     VG     Attr       LSize   Pool    Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                                   pve    twi-a-tz--  69.87g                       0.00   1.60                           
  [data_tdata]                           pve    Twi-ao----  69.87g                                                             
  [data_tmeta]                           pve    ewi-ao----   1.00g                                                             
  [lvol0_pmspare]                        pve    ewi-------   1.00g                                                             
  root                                   pve    -wi-ao----  31.75g                                                             
  swap                                   pve    -wi-ao----   8.00g                                                             
  base-200-disk-0                        vmdata Vri---tz-k  32.00g vmstore                                                     
  base-201-disk-0                        vmdata Vri---tz-k  50.00g vmstore                                                     
  [lvol0_pmspare]                        vmdata ewi------- 104.00m                                                             
  snap_vm-100-disk-0_snapshot_2019_07_28 vmdata Vri---tz-k 105.00g vmstore vm-100-disk-0                                       
  snap_vm-151-disk-0_a_2019_08_01        vmdata Vri---tz-k  25.00g vmstore vm-151-disk-0                                       
  snap_vm-151-disk-0_snap                vmdata Vri---tz-k  25.00g vmstore vm-151-disk-0                                       
  snap_vm-152-disk-1_updated             vmdata Vri---tz-k  50.00g vmstore vm-152-disk-1                                       
  vm-100-disk-0                          vmdata Vwi-a-tz-- 105.00g vmstore               20.01                                 
  vm-100-state-snapshot_2019_07_28       vmdata Vwi-a-tz-- <16.18g vmstore               27.01                                 
  vm-101-disk-0                          vmdata Vwi-a-tz--  50.00g vmstore               30.38                                 
  vm-102-disk-0                          vmdata Vwi-a-tz--  64.00g vmstore               3.57                                  
  vm-150-disk-0                          vmdata Vwi-a-tz--  15.00g vmstore               12.72                                 
  vm-151-disk-0                          vmdata Vwi-a-tz--  25.00g vmstore               55.71                                 
  vm-151-state-a_2019_08_01              vmdata Vwi-a-tz--  13.11g vmstore               27.14                                 
  vm-151-state-snap                      vmdata Vwi-a-tz--  13.11g vmstore               24.29                                 
  vm-152-disk-0                          vmdata Vwi-a-tz-- 300.00g vmstore               43.67                                 
  vm-152-disk-1                          vmdata Vwi-a-tz-- 150.00g vmstore               52.01                                 
  vm-152-state-updated                   vmdata Vwi-a-tz--  16.80g vmstore               6.54                                  
  vm-153-disk-0                          vmdata Vwi-a-tz--  32.00g vmstore               51.70                                 
  vm-154-disk-0                          vmdata Vwi-a-tz-- 150.00g vmstore               5.22                                  
  vm-156-disk-0                          vmdata Vwi-a-tz-- 150.00g vmstore               52.08                                 
  vm-300-disk-0                          vmdata Vwi-a-tz--  32.00g vmstore               0.00                                  
  vm-301-disk-0                          vmdata Vwi-a-tz--  32.00g vmstore               43.75                                 
  vm-301-disk-1                          vmdata Vwi-a-tz--  32.00g vmstore               43.75                                 
  vm-303-disk-0                          vmdata Vwi-a-tz--  32.00g vmstore               43.75                                 
  vm-303-disk-1                          vmdata Vwi-a-tz--  32.00g vmstore               43.75                                 
  vmstore                                vmdata twi-aotz--   1.61t                       29.73  25.03                          
  [vmstore_tdata]                        vmdata Twi-ao----   1.61t                                                             
  [vmstore_tmeta]                        vmdata ewi-ao---- 104.00m


I also got a few error messages about the "systemd-udevd" and "qemu-img" tasks being blocked for more than 120 seconds.
The following spoiler includes some output from the journalctl log:
Code:
Aug 31 22:19:02 apollo systemd[1]: Started Proxmox VE replication runner.
Aug 31 22:19:09 apollo pvestatd[1446]: status update time (5.735 seconds)
Aug 31 22:19:18 apollo pvestatd[1446]: status update time (5.743 seconds)
Aug 31 22:19:20 apollo systemd-udevd[714]: dm-29: Worker [1993] processing SEQNUM=7519 killed
Aug 31 22:19:29 apollo pvestatd[1446]: status update time (5.915 seconds)
Aug 31 22:19:39 apollo pvestatd[1446]: status update time (5.539 seconds)
Aug 31 22:19:44 apollo systemd-udevd[714]: dm-30: Worker [2034] processing SEQNUM=7524 killed
Aug 31 22:19:48 apollo pvestatd[1446]: status update time (5.571 seconds)
Aug 31 22:19:59 apollo pvestatd[1446]: status update time (5.519 seconds)
Aug 31 22:20:00 apollo systemd[1]: Starting Proxmox VE replication runner...
Aug 31 22:20:02 apollo systemd[1]: pvesr.service: Succeeded.
Aug 31 22:20:02 apollo systemd[1]: Started Proxmox VE replication runner.
Aug 31 22:20:08 apollo pvestatd[1446]: status update time (5.435 seconds)
Aug 31 22:20:10 apollo kernel: INFO: task systemd-udevd:1993 blocked for more than 120 seconds.
Aug 31 22:20:10 apollo kernel:       Tainted: P          IO      5.0.21-1-pve #1
Aug 31 22:20:10 apollo kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 31 22:20:10 apollo kernel: systemd-udevd   D    0  1993    714 0x00000324
Aug 31 22:20:10 apollo kernel: Call Trace:
Aug 31 22:20:10 apollo kernel:  __schedule+0x2d4/0x870
Aug 31 22:20:10 apollo kernel:  ? proc_destroy_inode+0x1c/0x20
Aug 31 22:20:10 apollo kernel:  schedule+0x2c/0x70
Aug 31 22:20:10 apollo kernel:  schedule_preempt_disabled+0xe/0x10
Aug 31 22:20:10 apollo kernel:  __mutex_lock.isra.10+0x2e4/0x4c0
Aug 31 22:20:10 apollo kernel:  ? exact_lock+0x11/0x20
Aug 31 22:20:10 apollo kernel:  ? disk_map_sector_rcu+0x70/0x70
Aug 31 22:20:10 apollo kernel:  __mutex_lock_slowpath+0x13/0x20
Aug 31 22:20:10 apollo kernel:  mutex_lock+0x2c/0x30
Aug 31 22:20:10 apollo kernel:  __blkdev_get+0x7b/0x550
Aug 31 22:20:10 apollo kernel:  ? bd_acquire+0xd0/0xd0
Aug 31 22:20:10 apollo kernel:  blkdev_get+0x10c/0x330
Aug 31 22:20:10 apollo kernel:  ? bd_acquire+0xd0/0xd0
Aug 31 22:20:10 apollo kernel:  blkdev_open+0x92/0x100
Aug 31 22:20:10 apollo kernel:  do_dentry_open+0x143/0x3a0
Aug 31 22:20:10 apollo kernel:  vfs_open+0x2d/0x30
Aug 31 22:20:10 apollo kernel:  path_openat+0x2d4/0x16d0
Aug 31 22:20:10 apollo kernel:  ? page_add_file_rmap+0x5f/0x220
Aug 31 22:20:10 apollo kernel:  ? alloc_set_pte+0x104/0x5b0
Aug 31 22:20:10 apollo kernel:  do_filp_open+0x93/0x100
Aug 31 22:20:10 apollo kernel:  ? strncpy_from_user+0x57/0x1c0
Aug 31 22:20:10 apollo kernel:  ? __alloc_fd+0x46/0x150
Aug 31 22:20:10 apollo kernel:  do_sys_open+0x177/0x280
Aug 31 22:20:10 apollo kernel:  __x64_sys_openat+0x20/0x30
Aug 31 22:20:10 apollo kernel:  do_syscall_64+0x5a/0x110
Aug 31 22:20:10 apollo kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Aug 31 22:20:10 apollo kernel: RIP: 0033:0x7fde830441ae
Aug 31 22:20:10 apollo kernel: Code: Bad RIP value.
Aug 31 22:20:10 apollo kernel: RSP: 002b:00007ffe314b1140 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Aug 31 22:20:10 apollo kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fde830441ae
Aug 31 22:20:10 apollo kernel: RDX: 0000000000080000 RSI: 0000559e9a116880 RDI: 00000000ffffff9c
Aug 31 22:20:10 apollo kernel: RBP: 00007fde82863c60 R08: 0000559e98c6c270 R09: 000000000000000f
Aug 31 22:20:10 apollo kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
Aug 31 22:20:10 apollo kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000559e9a104a00
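(If it helps, these hung-task traces can be pulled straight out of the kernel log with something like the following; the exact grep pattern is just my guess:)
Code:
journalctl -k -b | grep -A 5 "blocked for more than"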

I would really appreciate any help or hints.


//edit 1
What I also observed is that this happens when I trigger two actions at once (e.g. cloning two new VMs from templates at the same time).
 
To be perfectly honest, I do not even remember. I had these issues about two months ago, and I think I just reinstalled the whole OS again (and wiped everything from PVE 5).
I decided to do that because all my VM images are stored on another disk array, which was not wiped. So I just created a new VM with some random ID (so it did not overwrite an existing VM image), copied its config multiple times, modified each copy to match the requirements of the corresponding VM, and went on with my life; I haven't had a single issue with Proxmox since.
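Roughly like this, as far as I remember (the VM IDs here are made up and the paths are from memory, so treat them as assumptions):
Code:
# VM configs live under /etc/pve/qemu-server/; after creating a fresh VM
# with an unused ID in the GUI, copy its config once per VM to recover
cp /etc/pve/qemu-server/900.conf /etc/pve/qemu-server/901.conf
# then edit each copy so its disk lines (scsi0/virtio0/...) point at the
# existing vm-<id>-disk-N volumes on the surviving storage
nano /etc/pve/qemu-server/901.conf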
 
I'm experiencing this issue too. I have two virtual disks: one is 134217728 bytes and it moved properly; the other is 125861888 bytes and I cannot get it to migrate to LVM-thin, even using "udevadm trigger". I'm on pve-manager/6.0-4/2a719255 (running kernel: 5.0.15-1-pve).

EDIT: It finally worked after waiting a minute or so and issuing some "udevadm trigger" commands in the console while the task was hung at 100%.
 
@Kurgan I am having the same problem. Can you please tell me the commands that got it working?
 
Some time has passed, so I don't remember exactly, but I believe I just waited for the migration task to reach 100% (in the web interface) and then entered "udevadm trigger" in the console over and over again until the task somehow completed successfully. Which is basically an idiotic thing to do, but it worked.
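Something along these lines, if anyone wants to script it instead of mashing enter (the loop itself is just my guess at automating what I did by hand):
Code:
# crude workaround: keep re-triggering udev while the move/clone task is
# stuck at 100%; "udevadm settle" waits for the event queue to drain
for i in 1 2 3 4 5; do
    udevadm trigger
    udevadm settle --timeout=10
done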
 
Yep, I did the same after your comment and it worked for me too. Thank you :)
 
