I get an error when cloning a VM. This is the output of the clone task:
create full clone of drive virtio0 (datos:vm-1001-disk-0)
Logical volume "vm-11002-disk-0" created.
transferred: 0 bytes remaining: 42949672960 bytes total: 42949672960 bytes progression: 0.00 %
transferred: 429496729 bytes remaining: 42520176231 bytes total: 42949672960 bytes progression: 1.00 %
transferred: 858993459 bytes remaining: 42090679501 bytes total: 42949672960 bytes progression: 2.00 %
.....
transferred: 42133629173 bytes remaining: 816043787 bytes total: 42949672960 bytes progression: 98.10 %
transferred: 42563125903 bytes remaining: 386547057 bytes total: 42949672960 bytes progression: 99.10 %
transferred: 42949672960 bytes remaining: 0 bytes total: 42949672960 bytes progression: 100.00 %
transferred: 42949672960 bytes remaining: 0 bytes total: 42949672960 bytes progression: 100.00 %
WARNING: Device /dev/dm-13 not initialized in udev database even after waiting 10000000 microseconds.
WARNING: Device /dev/dm-13 not initialized in udev database even after waiting 10000000 microseconds.
Logical volume "vm-11002-disk-0" successfully removed
WARNING: Device /dev/dm-13 not initialized in udev database even after waiting 10000000 microseconds.
TASK ERROR: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/hdd1/vm-11002-disk-0' failed: got timeout
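If it is useful, I suppose the LVM/udev state can also be checked by hand outside of the clone task. This is only a sketch; the VG name hdd1 and the device dm-13 are taken from the log above, so adjust them if they differ on your side:

# does a plain lvs on the volume group hang as well, or only when the clone task runs it?
/sbin/lvs hdd1

# what does udev currently know about the device it warned about?
udevadm info --name=/dev/dm-13

# does the udev event queue ever settle?
udevadm settle --timeout=10 && echo settled || echo "still busy"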
I also see this error in the syslog:
Oct 06 21:03:49 pve220 kernel: INFO: task systemd-udevd:5718 blocked for more than 120 seconds.
Oct 06 21:03:49 pve220 kernel: Tainted: P O 5.4.65-1-pve #1
Oct 06 21:03:49 pve220 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 06 21:03:49 pve220 kernel: systemd-udevd D 0 5718 983 0x00000324
Oct 06 21:03:49 pve220 kernel: Call Trace:
Oct 06 21:03:49 pve220 kernel: __schedule+0x2e6/0x6f0
Oct 06 21:03:49 pve220 kernel: ? fsnotify_grab_connector+0x4e/0x90
Oct 06 21:03:49 pve220 kernel: schedule+0x33/0xa0
Oct 06 21:03:49 pve220 kernel: schedule_preempt_disabled+0xe/0x10
Oct 06 21:03:49 pve220 kernel: __mutex_lock.isra.10+0x2c9/0x4c0
Oct 06 21:03:49 pve220 kernel: ? exact_lock+0x11/0x20
Oct 06 21:03:49 pve220 kernel: ? disk_map_sector_rcu+0x70/0x70
Oct 06 21:03:49 pve220 kernel: __mutex_lock_slowpath+0x13/0x20
Oct 06 21:03:49 pve220 kernel: mutex_lock+0x2c/0x30
Oct 06 21:03:49 pve220 kernel: __blkdev_get+0x7a/0x560
Oct 06 21:03:49 pve220 kernel: blkdev_get+0xef/0x150
Oct 06 21:03:49 pve220 kernel: ? blkdev_get_by_dev+0x50/0x50
Oct 06 21:03:49 pve220 kernel: blkdev_open+0x87/0xa0
Oct 06 21:03:49 pve220 kernel: do_dentry_open+0x143/0x3a0
Oct 06 21:03:49 pve220 kernel: vfs_open+0x2d/0x30
Oct 06 21:03:49 pve220 kernel: path_openat+0x2e9/0x16f0
Oct 06 21:03:49 pve220 kernel: ? unlock_page_memcg+0x12/0x20
Oct 06 21:03:49 pve220 kernel: ? page_add_file_rmap+0x131/0x190
Oct 06 21:03:49 pve220 kernel: ? wp_page_copy+0x37b/0x750
Oct 06 21:03:49 pve220 kernel: do_filp_open+0x93/0x100
Oct 06 21:03:49 pve220 kernel: ? __alloc_fd+0x46/0x150
Oct 06 21:03:49 pve220 kernel: do_sys_open+0x177/0x280
Oct 06 21:03:49 pve220 kernel: __x64_sys_openat+0x20/0x30
Oct 06 21:03:49 pve220 kernel: do_syscall_64+0x57/0x190
Oct 06 21:03:49 pve220 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
Oct 06 21:03:49 pve220 kernel: RIP: 0033:0x7feaefdac1ae
Oct 06 21:03:49 pve220 kernel: Code: Bad RIP value.
Oct 06 21:03:49 pve220 kernel: RSP: 002b:00007ffee3a2c980 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Oct 06 21:03:49 pve220 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007feaefdac1ae
Oct 06 21:03:49 pve220 kernel: RDX: 0000000000080000 RSI: 0000562316fc9b60 RDI: 00000000ffffff9c
Oct 06 21:03:49 pve220 kernel: RBP: 00007feaef5cbc60 R08: 000056231533e270 R09: 000000000000000f
Oct 06 21:03:49 pve220 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
Oct 06 21:03:49 pve220 kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000562316fd09b0
Oct 06 21:04:00 pve220 systemd[1]: Starting Proxmox VE replication runner...
Oct 06 21:04:01 pve220 systemd[1]: pvesr.service: Succeeded.
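For what it is worth, the trace above shows a systemd-udevd worker stuck opening a block device. I assume something like the following, run on the node (here pve220), would show whether that worker is still blocked and whether more hung-task reports keep appearing:

# list udev workers with their state and kernel wait channel (a stuck one usually shows state D)
ps -eo pid,stat,wchan:30,cmd | grep '[s]ystemd-udevd'

# any further hung-task reports since the one above?
dmesg | grep 'blocked for more than'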
The server is new and has not run any VM yet; the disks (LVM) are OK. It is running version 6.2-12:
proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-12 (running version: 6.2-12/b287dd27)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: not correctly installed
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 0.9.0-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-12
pve-cluster: 6.1-8
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-2
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
I need a solution as soon as possible, thanks.