Moving Disk: Kernel crash & qmmove fails

marsian

Well-Known Member
Sep 27, 2016
Hi there,

we're currently reorganizing the locally stored VMs on some of our servers and moving virtual disks from sda to sdb, etc.

While doing so, we received a kernel crash along with a failed qmmove task (we tried to do 3 moves in parallel). Once the crash happened, the whole server went into the "question mark" state, where all VM names had vanished, etc. Looking at the monitoring for that time, there is clearly traffic on the disks and the storage controller (HPE P440ar), but not so much that we would expect a crash. PVE is 5.4.x with kernel "Linux 4.15.18-25-pve #1 SMP PVE 4.15.18-53".
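For reference, what we triggered via the GUI should correspond roughly to the following CLI calls (VM IDs and the target storage name are from our setup; the qm move_disk syntax is from memory, so please double-check it against the man page):

Code:
# move the named disk of each VM to the local-lvm2 storage
# (if I remember correctly, the source disk is kept unless --delete 1 is passed)
qm move_disk 112 ide0 local-lvm2
qm move_disk 134 scsi0 local-lvm2
qm move_disk 133 scsi0 local-lvm2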

As the log is too long for the post, please find the full extract attached and some excerpts here:

Code:
Mar 20 20:26:29 pve03 pvedaemon[26262]: <root@pam> move disk VM 112: move --disk ide0 --storage local-lvm2
Mar 20 20:26:29 pve03 pvedaemon[26262]: <root@pam> starting task UPID:pve03:000041D3:09CC4050:5E7518E5:qmmove:112:root@pam:
Mar 20 20:26:50 pve03 pvedaemon[26877]: <root@pam> move disk VM 134: move --disk scsi0 --storage local-lvm2
Mar 20 20:26:51 pve03 pvedaemon[26877]: <root@pam> starting task UPID:pve03:0000421C:09CC48CE:5E7518FB:qmmove:134:root@pam:
Mar 20 20:27:00 pve03 systemd[1]: Starting Proxmox VE replication runner...
Mar 20 20:27:01 pve03 systemd[1]: Started Proxmox VE replication runner.
Mar 20 20:27:24 pve03 pvedaemon[40988]: <root@pam> move disk VM 133: move --disk scsi0 --storage local-lvm2
Mar 20 20:27:25 pve03 pvedaemon[40988]: <root@pam> starting task UPID:pve03:0000431D:09CC565D:5E75191D:qmmove:133:root@pam:
Mar 20 20:28:00 pve03 systemd[1]: Starting Proxmox VE replication runner...
Mar 20 20:28:01 pve03 systemd[1]: Started Proxmox VE replication runner.
Mar 20 20:28:59 pve03 pvestatd[4845]: status update time (7.113 seconds)
Mar 20 20:29:00 pve03 systemd[1]: Starting Proxmox VE replication runner...
Mar 20 20:29:04 pve03 systemd[1]: Started Proxmox VE replication runner.
Mar 20 20:29:17 pve03 pvestatd[4845]: status update time (5.163 seconds)
Mar 20 20:29:39 pve03 pvestatd[4845]: status update time (6.957 seconds)
Mar 20 20:30:00 pve03 systemd[1]: Starting Proxmox VE replication runner...
Mar 20 20:30:01 pve03 systemd[1]: Started Proxmox VE replication runner.
Mar 20 20:30:42 pve03 systemd-udevd[674]: seq 887811 '/devices/virtual/block/dm-26' is taking a long time
Mar 20 20:30:51 pve03 systemd-udevd[674]: seq 887812 '/devices/virtual/block/dm-20' is taking a long time
Mar 20 20:31:00 pve03 systemd[1]: Starting Proxmox VE replication runner...
Mar 20 20:31:01 pve03 systemd[1]: Started Proxmox VE replication runner.
Mar 20 20:32:00 pve03 systemd[1]: Starting Proxmox VE replication runner...
Mar 20 20:32:01 pve03 systemd[1]: Started Proxmox VE replication runner.
Mar 20 20:32:23 pve03 kernel: INFO: task qemu-img:16871 blocked for more than 120 seconds.
Mar 20 20:32:23 pve03 kernel:       Tainted: P           O     4.15.18-25-pve #1
Mar 20 20:32:23 pve03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 20 20:32:23 pve03 kernel: qemu-img        D    0 16871  16851 0x00000000
Mar 20 20:32:23 pve03 kernel: Call Trace:
Mar 20 20:32:23 pve03 kernel:  __schedule+0x3e0/0x870
Mar 20 20:32:23 pve03 kernel:  schedule+0x36/0x80
Mar 20 20:32:23 pve03 kernel:  io_schedule+0x16/0x40
Mar 20 20:32:23 pve03 kernel:  wait_on_page_bit_common+0xf3/0x190
Mar 20 20:32:23 pve03 kernel:  ? page_cache_tree_insert+0xe0/0xe0
Mar 20 20:32:23 pve03 kernel:  __filemap_fdatawait_range+0xfa/0x160
Mar 20 20:32:23 pve03 kernel:  filemap_write_and_wait+0x4d/0x90
Mar 20 20:32:23 pve03 kernel:  __blkdev_put+0x7a/0x210
Mar 20 20:32:23 pve03 kernel:  ? fsnotify+0x266/0x460
Mar 20 20:32:23 pve03 kernel:  blkdev_put+0x4c/0xd0
Mar 20 20:32:23 pve03 kernel:  blkdev_close+0x34/0x70
Mar 20 20:32:23 pve03 kernel:  __fput+0xea/0x220
Mar 20 20:32:23 pve03 kernel:  ____fput+0xe/0x10
Mar 20 20:32:23 pve03 kernel:  task_work_run+0x9d/0xc0
Mar 20 20:32:23 pve03 kernel:  exit_to_usermode_loop+0xc4/0xd0
Mar 20 20:32:23 pve03 kernel:  do_syscall_64+0x100/0x130
Mar 20 20:32:23 pve03 kernel:  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Mar 20 20:32:23 pve03 kernel: RIP: 0033:0x7fce8720c28d
Mar 20 20:32:23 pve03 kernel: RSP: 002b:00007ffc27af0cf0 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
Mar 20 20:32:23 pve03 kernel: RAX: 0000000000000000 RBX: 00007fce80821500 RCX: 00007fce8720c28d
Mar 20 20:32:23 pve03 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000000a
Mar 20 20:32:23 pve03 kernel: RBP: 00007fce808b1640 R08: 0000000000000008 R09: 0000000000000000
Mar 20 20:32:23 pve03 kernel: R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000
Mar 20 20:32:23 pve03 kernel: R13: 00007ffc27af0e70 R14: 0000000000000000 R15: 0000000000000000
Mar 20 20:32:23 pve03 kernel: INFO: task qemu-img:16944 blocked for more than 120 seconds.
Mar 20 20:32:23 pve03 kernel:       Tainted: P           O     4.15.18-25-pve #1
Mar 20 20:32:23 pve03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 20 20:32:23 pve03 kernel: qemu-img        D    0 16944  16924 0x00000000
Mar 20 20:32:23 pve03 kernel: Call Trace:
Mar 20 20:32:23 pve03 kernel:  __schedule+0x3e0/0x870
Mar 20 20:32:23 pve03 kernel:  schedule+0x36/0x80
Mar 20 20:32:23 pve03 kernel:  io_schedule+0x16/0x40
Mar 20 20:32:23 pve03 kernel:  wait_on_page_bit+0xf6/0x130
Mar 20 20:32:23 pve03 kernel:  ? page_cache_tree_insert+0xe0/0xe0
Mar 20 20:32:23 pve03 kernel:  write_cache_pages+0x303/0x470
Mar 20 20:32:23 pve03 kernel:  ? try_to_wake_up+0x59/0x4b0
Mar 20 20:32:23 pve03 kernel:  ? __wb_calc_thresh+0x140/0x140
Mar 20 20:32:23 pve03 kernel:  generic_writepages+0x61/0xa0
Mar 20 20:32:23 pve03 kernel:  blkdev_writepages+0x2f/0x40
Mar 20 20:32:23 pve03 kernel:  ? blkdev_writepages+0x2f/0x40
Mar 20 20:32:23 pve03 kernel:  do_writepages+0x1f/0x70
Mar 20 20:32:23 pve03 kernel:  __filemap_fdatawrite_range+0xd4/0x110
Mar 20 20:32:23 pve03 kernel:  filemap_write_and_wait+0x31/0x90
Mar 20 20:32:23 pve03 kernel:  __blkdev_put+0x7a/0x210
Mar 20 20:32:23 pve03 kernel:  ? fsnotify+0x266/0x460
Mar 20 20:32:23 pve03 kernel:  blkdev_put+0x4c/0xd0
Mar 20 20:32:23 pve03 kernel:  blkdev_close+0x34/0x70
Mar 20 20:32:23 pve03 kernel:  __fput+0xea/0x220
Mar 20 20:32:23 pve03 kernel:  ____fput+0xe/0x10
Mar 20 20:32:23 pve03 kernel:  task_work_run+0x9d/0xc0
Mar 20 20:32:23 pve03 kernel:  exit_to_usermode_loop+0xc4/0xd0
Mar 20 20:32:23 pve03 kernel:  do_syscall_64+0x100/0x130
Mar 20 20:32:23 pve03 kernel:  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Mar 20 20:32:23 pve03 kernel: RIP: 0033:0x7efdb532028d
Mar 20 20:32:23 pve03 kernel: RSP: 002b:00007ffe27d93af0 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
Mar 20 20:32:23 pve03 kernel: RAX: 0000000000000000 RBX: 00007efdae821500 RCX: 00007efdb532028d
Mar 20 20:32:23 pve03 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000000a
Mar 20 20:32:23 pve03 kernel: RBP: 00007efdae8b1640 R08: 0000000000000008 R09: 0000000000000000
Mar 20 20:32:23 pve03 kernel: R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000
Mar 20 20:32:23 pve03 kernel: R13: 00007ffe27d93c70 R14: 0000000000000000 R15: 0000000000000000

Code:
Mar 20 20:52:39 pve03 pvedaemon[16851]: storage migration failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve2/vm-112-disk-0' failed: got timeout
Mar 20 20:52:41 pve03 pvedaemon[16924]: storage migration failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve2/vm-134-disk-0' failed: got timeout
Mar 20 20:52:41 pve03 pvedaemon[26877]: <root@pam> end task UPID:pve03:0000421C:09CC48CE:5E7518FB:qmmove:134:root@pam: storage migration failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve2/vm-134-disk-0' failed: got timeout
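In case it is useful for debugging: once the node settles again, the lvs query from the failed task can be re-run by hand (copied verbatim from the log above) to check whether LVM responds at all:

Code:
# same query the migration task ran before it timed out
/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve2/vm-112-disk-0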

Any ideas, or is this simply to be expected?

Thanks!
 

Attachments

  • move_disk_20200320_pve03.txt
    32.2 KB
Hi,

This is not a kernel crash; the message says a process has been hung for more than 120 seconds.
So the kernel itself is fine, but it cannot work properly because the I/O is hanging.
I guess an interrupt got lost. This can happen if the system is under very high load or the storage is too fast for the system.
Try moving just 2 disks in parallel.
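If it happens again, it might also help to watch the I/O while the moves are running, for example with standard tools like these (iostat comes from the sysstat package):

Code:
# any hung-task warnings so far, with readable timestamps
dmesg -T | grep -i "blocked for more than"
# the warning threshold mentioned in the log (120 seconds by default)
cat /proc/sys/kernel/hung_task_timeout_secs
# per-device utilisation and wait times, refreshed every 5 seconds
iostat -x 5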
 
