LVM tools still buggy

ccube

Active Member
Apr 5, 2011
Passau, Germany
Hi,
we discussed this before, and I am wondering why no one else is having these problems.
Using PVE + Squeeze, there are some issues with LVM.

lvremove hangs as a defunct process, and other processes are left waiting:
Code:
INFO: task lvremove:203495 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
lvremove      D ffff880126786f80     0 203495  93941    0 0x00000000
 ffff88011ec01b18 0000000000000082 0000000000000000 ffffffff81404e6c
 0000000000000008 0000000000001000 0000000000000000 000000000000000c
 ffff88011ec01b08 ffff880126787520 ffff88011ec01fd8 ffff88011ec01fd8
Call Trace:
 [<ffffffff81404e6c>] ? dm_table_unplug_all+0x5c/0x100
 [<ffffffff8150adb3>] io_schedule+0x73/0xc0
 [<ffffffff811c856e>] __blockdev_direct_IO_newtrunc+0x6ee/0xb80
 [<ffffffff811c8a5e>] __blockdev_direct_IO+0x5e/0xd0
 [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
 [<ffffffff811c6187>] blkdev_direct_IO+0x57/0x60
 [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
 [<ffffffff811223db>] generic_file_aio_read+0x70b/0x780
 [<ffffffff811c6c61>] ? blkdev_open+0x71/0xc0
 [<ffffffff811889d3>] ? __dentry_open+0x113/0x330
 [<ffffffff81226338>] ? devcgroup_inode_permission+0x48/0x50
 [<ffffffff8118aeba>] do_sync_read+0xfa/0x140
 [<ffffffff8119c102>] ? user_path_at+0x62/0xa0
 [<ffffffff81094280>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff811c576c>] ? block_ioctl+0x3c/0x40
 [<ffffffff8119e712>] ? vfs_ioctl+0x22/0xa0
 [<ffffffff8119e8ba>] ? do_vfs_ioctl+0x8a/0x5d0
 [<ffffffff8118b895>] vfs_read+0xb5/0x1a0
 [<ffffffff8118b9d1>] sys_read+0x51/0x90
 [<ffffffff8100b182>] system_call_fastpath+0x16/0x1b
INFO: task vgs:203515 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
vgs           D ffff88042b1e7300     0 203515   2121    0 0x00000000
 ffff880103ea5b18 0000000000000086 0000000000000000 ffffffff81404e6c
 0000000000000008 0000000000001000 0000000000000000 000000000000000c
 ffff880103ea5b08 ffff88042b1e78a0 ffff880103ea5fd8 ffff880103ea5fd8
Call Trace:
 [<ffffffff81404e6c>] ? dm_table_unplug_all+0x5c/0x100
 [<ffffffff8150adb3>] io_schedule+0x73/0xc0
 [<ffffffff811c856e>] __blockdev_direct_IO_newtrunc+0x6ee/0xb80
 [<ffffffff811c8a5e>] __blockdev_direct_IO+0x5e/0xd0
 [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
 [<ffffffff811c6187>] blkdev_direct_IO+0x57/0x60
 [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
 [<ffffffff811223db>] generic_file_aio_read+0x70b/0x780
 [<ffffffff811c6c61>] ? blkdev_open+0x71/0xc0
 [<ffffffff811889d3>] ? __dentry_open+0x113/0x330
 [<ffffffff81226338>] ? devcgroup_inode_permission+0x48/0x50
 [<ffffffff8118aeba>] do_sync_read+0xfa/0x140
 [<ffffffff8119c102>] ? user_path_at+0x62/0xa0
 [<ffffffff81094280>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff811c576c>] ? block_ioctl+0x3c/0x40
 [<ffffffff8119e712>] ? vfs_ioctl+0x22/0xa0
 [<ffffffff8119e8ba>] ? do_vfs_ioctl+0x8a/0x5d0
 [<ffffffff8118b895>] vfs_read+0xb5/0x1a0
 [<ffffffff8118b9d1>] sys_read+0x51/0x90
 [<ffffffff8100b182>] system_call_fastpath+0x16/0x1b
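
For anyone who wants to confirm the same symptom: the blocked tasks are in uninterruptible sleep (state D). A minimal sketch of how to check this from userspace (the PID is just the lvremove from the trace above; /proc/<pid>/stack is only there if the kernel was built with stack tracing):
Code:
# list tasks stuck in uninterruptible sleep (state D) and what they wait on
ps -eo pid,stat,wchan:32,args | awk '$2 ~ /D/'

# dump the kernel stack of one stuck task, e.g. the lvremove above
cat /proc/203495/stack

# list all device-mapper devices; a suspended entry points to a wedged dm table
dmsetup info -c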

Who exactly is responsible for this? Do we need to file an upstream bug? We have to fix this problem NOW, because it has been around for far too long. :(
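
One thing worth knowing in the meantime: tasks in D state ignore signals, so kill -9 does not help, and once the dm table is wedged a reboot is usually the only way out. Before it gets that far, removing a leftover snapshot mapping by hand sometimes works; a sketch, assuming a stale vzdump snapshot device is involved (the device name here is made up):
Code:
# list device-mapper devices; a leftover vzdump snapshot would show up here
dmsetup ls

# try to remove the stale mapping (this may itself block if the table is already wedged)
dmsetup remove pve-vzsnap--example--0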
 
Hi,
are you also using an LSI RAID controller? I'm a little afraid of the answer, because I plan to deploy two new machines that ship with an LSI card...

Udo
 
This is also not limited to Proxmox. I have seen it on Ubuntu 10.10 + KVM, using dd as a backup tool.
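
The access pattern that triggers it in my case is the usual snapshot-then-copy backup. Roughly (VG/LV names are examples, not my real ones):
Code:
# snapshot the guest volume, copy it away, then drop the snapshot
lvcreate -L 1G -s -n backup-snap /dev/vg0/vm-101-disk-1
dd if=/dev/vg0/backup-snap of=/backup/vm-101-disk-1.raw bs=1M
lvremove -f /dev/vg0/backup-snap   # <- this is the step that hangs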