[error] clone failed: command '/sbin/lvs --separator : --noheadings ...... got timeout

Mar 17, 2019
Hello,

Recently (after upgrading), I very often get the error below when trying to create a full clone from a template:


clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/vmdata/vm-11151-disk-0' failed: got timeout

Any idea please?
 
Hi,

Same issue for us when cloning from file-based storage to LVM-based storage.

Did you find a solution?

Thanks
 
check the logs of the system (`journalctl -b`, `dmesg`).
Is this a shared LVM?
 
Hi,

The LVM volume is not a shared volume.
Today I had another error cloning a QEMU VM, but perhaps the problem was trying to make two clones of different snapshots of the same VM at the same time.
Days ago the errors occurred while cloning only one VM at a time.


journalctl:

Code:
Jul 03 13:36:11 proxmox smartd[3420]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 76 to 78
Jul 03 13:36:11 proxmox smartd[3420]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 73 to 72
Jul 03 13:36:11 proxmox smartd[3420]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 27 to 28
...
Jul 03 15:07:53 proxmox pvedaemon[69760]: VM 153 qmp command failed - VM 153 not running
Jul 03 15:08:00 proxmox systemd[1]: Starting Proxmox VE replication runner...
Jul 03 15:08:01 proxmox pvestatd[4287]: status update time (18.666 seconds)
Jul 03 15:08:02 proxmox pvedaemon[69760]: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/lvm-iscsi-hp/vm-133-disk-0' failed: got timeout
Jul 03 15:08:02 proxmox pvedaemon[63624]: <root@pam> end task UPID:proxmox:00011080:019BF8CF:5D1C92FD:qmclone:153:root@pam: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/lvm-iscsi-hp/vm-133-disk-0' failed: g

SMART test:

Code:
# smartctl -l selftest /dev/sda
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.18-16-pve] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke,

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      7185         -

dmesg:

Code:
[   11.811662] ipmi device interface
[   11.823414] ACPI Error: No handler for Region [IPMI] (00000000ece7b71a) [IPMI] (20170831/evregion-166)
[   11.823517] ACPI Error: Region IPMI (ID=7) has no handler (20170831/exfldio-299)
[   11.823615] No Local Variables are initialized for Method [_GHL]
[   11.823617] No Arguments are initialized for method [_GHL]
[   11.823620] ACPI Error: Method parse/execution failed \_SB.PMI0._GHL, AE_NOT_EXIST (20170831/psparse-550)
[   11.823724] ACPI Error: Method parse/execution failed \_SB.PMI0._PMC, AE_NOT_EXIST (20170831/psparse-550)
[   11.823827] ACPI Exception: AE_NOT_EXIST, Evaluating _PMC (20170831/power_meter-755)
[   11.836469] IPMI System Interface driver.
[   11.836488] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
[   11.836491] ipmi_si: SMBIOS: io 0xca8 regsize 1 spacing 4 irq 0
[   11.836492] ipmi_si: Adding SMBIOS-specified kcs state machine
[   11.836636] ipmi_si: Trying SMBIOS-specified kcs state machine at i/o address 0xca8, slave address 0x20, irq 0
[   12.064730] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[   12.157350] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2)
[   12.164349] ipmi_si dmi-ipmi-si.0: Found new BMC (man_id: 0x0002a2, prod_id: 0x0100, dev_id: 0x20)
[   12.170917] gpio_ich: GPIO from 451 to 511 on gpio_ich
[   12.180911] input: PC Speaker as /devices/platform/pcspkr/input/input6
[   12.191362] ipmi_si dmi-ipmi-si.0: IPMI kcs interface initialized
[   12.247189] IPMI SSIF Interface driver
[   12.921064] SSE version of gcm_enc/dec engaged.
[   12.939210] Error: Driver 'pcspkr' is already registered, aborting...
[   13.431102] [TTM] Zone  kernel: Available graphics memory: 132061666 kiB
[   13.431105] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
 
hmm - the parts of the logs you posted do not point to any particular problem - maybe the complete log would show a different picture.

else - what is the concrete setup of your storage?
* `cat /etc/pve/storage.cfg`
* `pvs -a`
* `vgs -a`
* what kind of disks are you using?

how long does `lvs -a` take until it completes?
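for example (the second command reuses the device path from the error message above):

Code:
time lvs -a
time /sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/lvm-iscsi-hp/vm-133-disk-0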
 
Hi,
I've been deleting almost 40 VMs, so now I don't have so many VMs.

Here is the output:
Code:
PV                                         VG           Fmt  Attr PSize   PFree
  /dev/raid0/base-135-disk-1                                   ---       0     0
  /dev/raid0/base-138-disk-0                                   ---       0     0
  /dev/raid0/base-139-disk-1                                   ---       0     0
  /dev/raid0/base-143-disk-1                                   ---       0     0
  /dev/raid0/base-151-disk-1                                   ---       0     0
  /dev/raid0/vm-100-disk-0                                     ---       0     0
  /dev/raid0/vm-101-disk-1                                     ---       0     0
  /dev/raid0/vm-103-disk-1                                     ---       0     0
  /dev/raid0/vm-109-disk-0                                     ---       0     0
  /dev/raid0/vm-109-disk-1                                     ---       0     0
  /dev/raid0/vm-118-state-ParaPruebasAndared                   ---       0     0
  /dev/raid0/vm-120-disk-0                                     ---       0     0
  /dev/raid0/vm-121-disk-1                                     ---       0     0
  /dev/raid0/vm-122-disk-0                                     ---       0     0
  /dev/raid0/vm-122-disk-1                                     ---       0     0
  /dev/raid0/vm-127-disk-1                                     ---       0     0
  /dev/raid0/vm-128-disk-1                                     ---       0     0
  /dev/raid0/vm-129-disk-1                                     ---       0     0
  /dev/raid0/vm-690-state-Inicial                              ---       0     0
  /dev/raid0/vm-700-disk-1                                     ---       0     0
  /dev/sda2                                                    ---       0     0
  /dev/sda3                                  pve          lvm2 a--  931.26g    0
  /dev/sdb1                                  raid0        lvm2 a--    3.27t    0
  /dev/sdd2                                                    ---       0     0
  /dev/sdd3                                                    ---       0     0
  /dev/sdf1                                                    ---       0     0
  /dev/sdg                                   lvm-iscsi-hp lvm2 a--    3.64t 3.22t

Code:
VG           #PV #LV #SN Attr   VSize   VFree
  lvm-iscsi-hp   1   5   0 wz--n-   3.64t 3.22t
  pve            1   3   0 wz--n- 931.26g    0
  raid0          1  71   0 wz--n-   3.27t    0

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso
        shared 0

dir: Backups
        path /var/lib/vz/vz-copia
        content backup
        maxfiles 1
        shared 0

lvmthin: local-raid0
        thinpool raid0-lvthin
        vgname raid0
        content images,rootdir

iscsi: CabinaHP
        portal 10.0.1.1
        target iqn.2019-03.aso.xxx.com:proxmox.cabina01
        content none

lvm: Disco-iSCSI-HP
        vgname lvm-iscsi-hp
        base CabinaHP:0.0.1.scsi-360000000000000000e00000000010001
        content images,rootdir
        shared 0

iscsi: CabinaIMG-HP
        portal 10.0.1.1
        target iqn.2019-03.aso.xxx.com:proxmox.cabina02
        content none

The `lvs -a` command does not take more than 0.3 seconds.

I use:
- RAID 0 with 6x600GB SAS 15,000 rpm disks to store VMs
- SATA III 1 TB to store the OS and important backups
- iSCSI 4TB (4x1TB SATA disks) to store backups
 
hmm - the one occasion I know of where lvs (and most other LVM utilities) takes a very long time to complete is when you have two volume groups with the same name.
This is usually not possible; however, in the context of a hypervisor, where the guests themselves also use LVM, it can happen that 2 guests use the same volume group name.

* did you maybe delete 2 VMs which had an LVM and used the same volume group name? (see the check below)
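listing VG names together with their UUIDs makes duplicates visible - the same name showing up twice with different UUIDs is a duplicate (LVM usually also prints a warning in that case):

Code:
vgs -o vg_name,vg_uuid
# pvs shows which device each copy lives on:
pvs -o pv_name,vg_name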

in any case it is good practice to add the device links of the volume groups you use for guest images to the global filter in lvm.conf:
* just put a fitting regex for your VGs into the 'global_filter' line in '/etc/lvm/lvm.conf'
* in your case it should be something like: 'r|/dev/raid0.*|, r|/dev/mapper/raid0-.*|' - for example:
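a sketch of the resulting line in the devices { } section of '/etc/lvm/lvm.conf' - keep any reject rules that are already there, such as the default one for ZFS zvols:

Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/raid0.*|", "r|/dev/mapper/raid0-.*|" ]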

I hope this helps!
 
I'm having the exact same problem.

Code:
create full clone of drive ide0 (local-lvm:vm-118-disk-0)
Using default stripesize 64.00 KiB.
Logical volume "vm-121-disk-0" created.
transferred: 0 bytes remaining: 42949672960 bytes total: 42949672960 bytes progression: 0.00 %
transferred: 429496729 bytes remaining: 42520176231 bytes total: 42949672960 bytes progression: 1.00 %
...
transferred: 42949672960 bytes remaining: 0 bytes total: 42949672960 bytes progression: 100.00 %
transferred: 42949672960 bytes remaining: 0 bytes total: 42949672960 bytes progression: 100.00 %
Logical volume "vm-121-disk-0" successfully removed
TASK ERROR: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-121-disk-0' failed: got timeout

Is there any way to do this by hand without jacking things up? I'm cloning vm-118-disk-0.
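Would something like the following be a safe manual equivalent? (a rough, untested sketch using the thin pool and disk names from the log above)

Code:
# create a 40G thin volume for the clone in the pve/data thin pool
lvcreate -V 40G --thinpool pve/data -n vm-121-disk-0
# copy the source disk block for block (with VM 118 powered off)
dd if=/dev/pve/vm-118-disk-0 of=/dev/pve/vm-121-disk-0 bs=4M status=progress
# then copy /etc/pve/qemu-server/118.conf to 121.conf and point its ide0 line at vm-121-disk-0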
 
I have the same issue.
It's on a fresh Proxmox 6.0, up to date (6.0.5). The HDD is a hardware RAID 1.
I've created only one VM and converted it to a template. No other VMs exist, only the single template, and I can't clone it.

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

Code:
pvs -a
  PV        VG  Fmt  Attr PSize PFree
  /dev/sda2          ---      0       0
  /dev/sda3 pve lvm2 a--  1.86t <16.38g

Code:
vgs -a
  VG  #PV #LV #SN Attr   VSize VFree
  pve   1   4   0 wz--n- 1.86t <16.38g

Code:
lvs -a
  LV              VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-150-disk-0 pve Vri-a-tz-k 50.00g data        35.42
  data            pve twi-aotz--  1.74t             0.99   0.20
  [data_tdata]    pve Twi-ao----  1.74t
  [data_tmeta]    pve ewi-ao---- 15.81g
  [lvol0_pmspare] pve ewi------- 15.81g
  root            pve -wi-ao---- 50.00g
  swap            pve -wi-ao---- 25.00g


The last lines of the GUI task log:

Code:
transferred: 53687091200 bytes remaining: 0 bytes total: 53687091200 bytes progression: 100.00 %
transferred: 53687091200 bytes remaining: 0 bytes total: 53687091200 bytes progression: 100.00 %
WARNING: Device /dev/dm-7 not initialized in udev database even after waiting 10000000 microseconds.
WARNING: Device /dev/dm-7 not initialized in udev database even after waiting 10000000 microseconds.
Logical volume "vm-100-disk-0" successfully removed
WARNING: Device /dev/dm-7 not initialized in udev database even after waiting 10000000 microseconds.
TASK ERROR: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-100-disk-0' failed: got timeout

in journalctl:
Code:
systemd-udevd[651]: dm-7: Worker [4707] processing SEQNUM=9082 killed
kernel: INFO: task kworker/u32:9:359 blocked for more than 120 seconds.
kernel: Tainted: P O 5.0.18-1-pve #1
kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: kworker/u32:9 D 0 359 2 0x80000000
kernel: Workqueue: writeback wb_workfn (flush-253:7)
kernel: Call Trace:
kernel: __schedule+0x2d4/0x870
kernel: schedule+0x2c/0x70
kernel: io_schedule+0x16/0x40
kernel: __lock_page+0x122/0x220
kernel: ? file_check_and_advance_wb_err+0xe0/0xe0
kernel: write_cache_pages+0x23e/0x4d0
kernel: ? update_dl_rq_load_avg+0x1a8/0x220
kernel: ? __wb_calc_thresh+0x130/0x130
kernel: ? update_nohz_stats+0x46/0x60
kernel: generic_writepages+0x56/0x90
kernel: blkdev_writepages+0xe/0x10
kernel: do_writepages+0x41/0xd0
kernel: __writeback_single_inode+0x40/0x350
kernel: writeback_sb_inodes+0x211/0x500
kernel: __writeback_inodes_wb+0x67/0xb0
kernel: wb_writeback+0x25f/0x2f0
kernel: wb_workfn+0x32c/0x3f0
kernel: process_one_work+0x20f/0x410
kernel: worker_thread+0x34/0x400
kernel: kthread+0x120/0x140
kernel: ? process_one_work+0x410/0x410
kernel: ? __kthread_parkme+0x70/0x70
kernel: ret_from_fork+0x35/0x40
kernel: INFO: task qemu-img:3971 blocked for more than 120 seconds.
kernel: Tainted: P O 5.0.18-1-pve #1
kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: qemu-img D 0 3971 3943 0x00000000
kernel: Call Trace:
kernel: __schedule+0x2d4/0x870
kernel: schedule+0x2c/0x70
kernel: io_schedule+0x16/0x40
kernel: wait_on_page_bit+0x141/0x210
kernel: ? file_check_and_advance_wb_err+0xe0/0xe0
kernel: write_cache_pages+0x381/0x4d0
kernel: ? __wb_calc_thresh+0x130/0x130
kernel: generic_writepages+0x56/0x90
kernel: blkdev_writepages+0xe/0x10
kernel: do_writepages+0x41/0xd0
kernel: ? __wake_up_common_lock+0x8e/0xc0
kernel: __filemap_fdatawrite_range+0xc5/0x100
kernel: filemap_write_and_wait+0x31/0x90
kernel: __blkdev_put+0x72/0x1e0
kernel: ? fsnotify+0x28b/0x3c0
kernel: ? fsnotify+0x2ef/0x3c0
kernel: blkdev_put+0x4c/0xd0
kernel: blkdev_close+0x34/0x70
kernel: __fput+0xbc/0x230
kernel: ____fput+0xe/0x10
kernel: task_work_run+0x9d/0xc0
kernel: exit_to_usermode_loop+0xf2/0x100
kernel: do_syscall_64+0xf0/0x110
kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
kernel: RIP: 0033:0x7f8578a525d7
kernel: Code: Bad RIP value.
kernel: RSP: 002b:00007fffe82fbfd0 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
kernel: RAX: 0000000000000000 RBX: 000000000000000a RCX: 00007f8578a525d7
kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000000a
kernel: RBP: 0000000000000000 R08: 0000000000000001 R09: 00007f856c736960
kernel: R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000
kernel: R13: 0000000000000000 R14: ffffffffffffffff R15: 0000000000000000
kernel: INFO: task systemd-udevd:4707 blocked for more than 120 seconds.
kernel: Tainted: P O 5.0.18-1-pve #1
kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: systemd-udevd D 0 4707 651 0x00000324
systemd-udevd[651]: Worker [4707] terminated by signal 9 (KILL)
systemd-udevd[651]: dm-7: Worker [4707] failed
pvedaemon[3943]: VM 150 qmp command failed - VM 150 not running
pmxcfs[1427]: [status] notice: received log
pmxcfs[1427]: [status] notice: received log
pmxcfs[1427]: [dcdb] notice: members: 1/1427
pmxcfs[1427]: [status] notice: members: 1/1427
corosync[1721]: [TOTEM ] A new membership (1:40) was formed. Members left: 2
corosync[1721]: [CPG ] downlist left_list: 1 received
corosync[1721]: [QUORUM] This node is within the non-primary component and will NOT provide any services.
corosync[1721]: [QUORUM] Members[1]: 1
corosync[1721]: [MAIN ] Completed service synchronization, ready to provide service.
pmxcfs[1427]: [status] notice: node lost quorum
corosync[1721]: [KNET ] link: host: 2 link: 0 is down
corosync[1721]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
corosync[1721]: [KNET ] host: host: 2 has no active links
snmpd[1320]: error on subcontainer 'ia_addr' insert (-1)
pvestatd[1940]: status update time (20.322 seconds)
pvedaemon[3943]: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size
pvedaemon[2016]: <root@pam> end task UPID:cheli-pve01:00000F67:00008458:5D4D6B85:qmclone:150:root@pam: clone failed: command '/sb
 
* does `lvs -a`, `vgs` take a long time to complete?
* any other messages in the journal while the problem is observed?
* what's the status of the udev services and lvm services? (see the sketch below)
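for example (a quick sketch; unit names taken from a standard PVE 6 install):

Code:
time lvs -a && time vgs
journalctl -b --no-pager | tail -n 100
systemctl status systemd-udevd.service lvm2-monitor.service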

Thanks!
 
`lvs -a`, `vgs -a` and `pvs -a` are fast, no delay.
No other messages in journalctl.

Code:
systemctl status udev.service
● systemd-udevd.service - udev Kernel Device Manager
Loaded: loaded (/lib/systemd/system/systemd-udevd.service; static; vendor preset: enabled)
Active: active (running) since Fri 2019-08-09 14:42:39 CEST; 2h 23min ago
Docs: man:systemd-udevd.service(8)
man:udev(7)
Main PID: 651 (systemd-udevd)
Status: "Processing with 136 children at max"
Tasks: 1
Memory: 37.0M
CGroup: /system.slice/systemd-udevd.service
└─651 /lib/systemd/systemd-udevd

Aug 09 14:42:39 pve01 systemd-udevd[719]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 09 14:42:41 pve01 systemd-udevd[683]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 09 14:42:41 pve01 systemd-udevd[683]: Could not generate persistent MAC address for ovs-system: No such file or directory
Aug 09 14:42:41 pve01 systemd-udevd[683]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 09 14:42:41 pve01 systemd-udevd[683]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 09 14:42:41 pve01 systemd-udevd[683]: Could not generate persistent MAC address for admin: No such file or directory
Aug 09 14:50:26 pve01 systemd-udevd[651]: dm-7: Worker [4707] processing SEQNUM=9082 is taking a long time
Aug 09 14:52:26 pve01 systemd-udevd[651]: dm-7: Worker [4707] processing SEQNUM=9082 killed
Aug 09 14:59:21 pve01 systemd-udevd[651]: Worker [4707] terminated by signal 9 (KILL)
Aug 09 14:59:21 pve01 systemd-udevd[651]: dm-7: Worker [4707] failed

Code:
systemctl status lvm*
● lvm2-pvscan@8:3.service - LVM event activation on device 8:3
Loaded: loaded (/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: enabled)
Active: active (exited) since Fri 2019-08-09 14:42:41 CEST; 2h 24min ago
Docs: man:pvscan(8)
Process: 851 ExecStart=/sbin/lvm pvscan --cache --activate ay 8:3 (code=exited, status=0/SUCCESS)
Main PID: 851 (code=exited, status=0/SUCCESS)

Aug 09 14:42:40 pve01 systemd[1]: Starting LVM event activation on device 8:3...
Aug 09 14:42:41 pve01 lvm[851]: 3 logical volume(s) in volume group "pve" now active
Aug 09 14:42:41 pve01 systemd[1]: Started LVM event activation on device 8:3.

● lvm2-lvmpolld.socket - LVM2 poll daemon socket
Loaded: loaded (/lib/systemd/system/lvm2-lvmpolld.socket; enabled; vendor preset: enabled)
Active: active (listening) since Fri 2019-08-09 14:42:39 CEST; 2h 24min ago
Docs: man:lvmpolld(8)
Listen: /run/lvm/lvmpolld.socket (Stream)
CGroup: /system.slice/lvm2-lvmpolld.socket

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
Loaded: loaded (/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2019-08-09 14:42:40 CEST; 2h 24min ago
Docs: man:dmeventd(8)
man:lvcreate(8)
man:lvchange(8)
man:vgchange(8)
Process: 621 ExecStart=/sbin/lvm vgchange --monitor y (code=exited, status=0/SUCCESS)
Main PID: 621 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
Memory: 0B
CGroup: /system.slice/lvm2-monitor.service

Aug 09 14:42:39 pve01 lvm[621]: 1 logical volume(s) in volume group "pve" monitored
Aug 09 14:42:40 pve01 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.

14:42 is the time of the reboot.
 
Hi,
I encountered the same issue. I tried the `lvs -a`, `vgs -a`, and `pvs -a` commands from above and found that they take a long time to respond.

Are there any suggestions about this symptom?

Thanks.

Code:
root@pve:~# time lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
vm-102-disk-0 extspace -wi-a----- 32.00g
vm-102-disk-1 extspace -wi-a----- 4.00m
vm-106-disk-0 extspace -wi-a----- 8.00g
vm-108-disk-0 extspace -wi-a----- 32.00g
vm-109-disk-0 extspace -wi-a----- 72.00g
vm-110-disk-0 extspace -wi-a----- 4.00m
vm-110-disk-1 extspace -wi-ao---- 32.00g
data pve twi-aotz-- 140.45g 85.98 4.91
[data_tdata] pve Twi-ao---- 140.45g
[data_tmeta] pve ewi-ao---- 1.43g
[lvol0_pmspare] pve ewi------- 1.43g
root pve -wi-ao---- 55.75g
snap_vm-100-disk-0_FaceIDservice_nodeJSUI pve Vri---tz-k 32.00g data
snap_vm-100-disk-0_GPU_cudnn pve Vri---tz-k 32.00g data
snap_vm-100-disk-0_faceID_fix1 pve Vri---tz-k 32.00g data
snap_vm-100-disk-0_faceID_fix2 pve Vri---tz-k 32.00g data
snap_vm-100-disk-0_faceID_ready pve Vri---tz-k 32.00g data
snap_vm-100-disk-0_mosquitto pve Vri---tz-k 32.00g data
snap_vm-100-disk-0_origional pve Vri---tz-k 32.00g data
snap_vm-100-disk-1_FaceIDservice_nodeJSUI pve Vri---tz-k 4.00m data
snap_vm-100-disk-1_GPU_cudnn pve Vri---tz-k 4.00m data
snap_vm-100-disk-1_faceID_fix1 pve Vri---tz-k 4.00m data
snap_vm-100-disk-1_faceID_fix2 pve Vri---tz-k 4.00m data
snap_vm-100-disk-1_faceID_ready pve Vri---tz-k 4.00m data
snap_vm-100-disk-1_mosquitto pve Vri---tz-k 4.00m data
snap_vm-100-disk-1_origional pve Vri---tz-k 4.00m data
snap_vm-101-disk-0_dockercenvidiadocker pve Vri---tz-k 32.00g data vm-101-disk-0
snap_vm-101-disk-1_dockercenvidiadocker pve Vri---tz-k 4.00m data vm-101-disk-1
snap_vm-103-disk-0_base pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base10 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base2 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base3 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base4 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base5 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base6 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base7 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base8 pve Vri---tz-k 32.00g data
snap_vm-103-disk-0_base9 pve Vri---tz-k 32.00g data
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi-a-tz-- 32.00g data snap_vm-100-disk-0_faceID_fix2 96.92
vm-100-disk-1 pve Vwi-a-tz-- 4.00m data snap_vm-100-disk-1_faceID_fix2 3.12
vm-101-disk-0 pve Vwi-a-tz-- 32.00g data 97.13
vm-101-disk-1 pve Vwi-a-tz-- 4.00m data 3.12
vm-103-disk-0 pve Vwi-a-tz-- 32.00g data snap_vm-103-disk-0_base10 65.03
[lvol0_pmspare] thindata ewi------- 120.00m
thindata thindata twi-aotz-- 465.52g 1.22 0.41
[thindata_tdata] thindata Twi-ao---- 465.52g
[thindata_tmeta] thindata ewi-ao---- 120.00m

real 0m7.602s
user 0m0.003s
sys 0m0.010s

root@pve:~# time vgs -a
VG #PV #LV #SN Attr VSize VFree
extspace 1 7 0 wz--n- 223.57g 47.56g
pve 1 34 0 wz--n- 223.07g 16.00g
thindata 1 1 0 wz--n- 465.76g 0

real 0m7.588s
user 0m0.007s
sys 0m0.007s

root@pve:~# time pvs -a
PV VG Fmt Attr PSize PFree
/dev/extspace/vm-102-disk-1 --- 0 0
/dev/extspace/vm-106-disk-0 --- 0 0
/dev/extspace/vm-110-disk-0 --- 0 0
/dev/sda extspace lvm2 a-- 223.57g 47.56g
/dev/sdb thindata lvm2 a-- 465.76g 0
/dev/sdc2 --- 0 0
/dev/sdc3 pve lvm2 a-- 223.07g 16.00g

real 0m10.569s
user 0m0.013s
sys 0m0.000s
 
I have the same problem. Nothing from the above posts helps.
I found that the issue occurs only when I try to move a ".raw" image to LVM.
The ".raw" disk image was made by restoring a backup.
 
I can report that I, too, see this same issue on a fresh install of 6.0 updated to the latest packages in the no-subscription repo. I'm fairly unfamiliar with PVE, as I'm transitioning over from ESXi, but this is preventing me from using it in my lab environment for testing. An interesting note: this is not a problem on another box in my lab that is also an up-to-date 6.0 install (I used the same installer, both updated today).

Because of this difference, I'd suspect that there's something different about the hardware that impacts this. Here are some notes on each host:
  • vs01 (cannot clone):
    • HP ProLiant DL360 G7
    • 32GB RAM
    • HP P410i RAID controller
      • 2 x 500GB SSD, each as its own RAID0 volume (a hack to get JBOD on the P410i controllers).
  • vs02 (clones fine):
    • Intel DH61R Motherboard (custom build)
    • 8GB RAM
    • Intel 6 series/C200 SATA AHCI controller (built-in on the board)
      • 1 x 1TB HDD
I'm familiar with troubleshooting Linux systems in general so I'll see if I can get to the bottom of the problem.
 
Hi, I had exactly the same problem.
I was moving hard disks with the GUI.
- I moved (a Win8.1 machine - with LVM raw) --> to a filew81.raw (directory storage) --> OK.
- I moved that filew81.raw --> to another (Win8.1' LVM raw') on a different hard disk --> OK.
- Finally, when I tried to move that last (Win8.1' LVM raw') --> to another (Win8.1''' LVM raw''') --> FAIL,
with TASK ERROR: ... failed: got timeout, exactly like TuttiPazzo.
(It fails TWICE.)

My solution:
I started W8.1 --> chkdsk c: /f --> rebooted Win8.1 --> chkdsk OK ==> then the 3rd attempt to move it --> OK.

I hope this helps you.
 
Dear friends,

I faced the same issue and posted a question here, but to no avail. I initially installed version 6.1-7 and later upgraded to version 6.2-6. I found a workaround for this issue with a little trial-and-error session.

My findings:

1st situation - I am able to clone the source VM successfully while the VM is in the running state; otherwise I get an error message like "VM 100 qmp command failed - VM 100 not running".

2nd situation - I am able to clone the source VM successfully while the VM is in the stopped state; to do this I had left the "Qemu Agent" checkbox unticked under the "System" tab when I spun up the new VM.

LVM configuration:
LVM - 500GB
LVM-Thin - 1.7TB
 