
lvm tools still buggy

Discussion in 'Proxmox VE: Installation and configuration' started by ccube, Mar 31, 2012.

  1. ccube

    ccube Member

    Joined:
    Apr 5, 2011
    Messages:
    39
    Likes Received:
    0
    hi,
    we discussed this before, and I am wondering why no one else is having these problems.
    Using pve+squeeze, there are some issues regarding lvm.

    lvremove is hanging as a defunct process, and other processes are left waiting:
    Code:
    INFO: task lvremove:203495 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    lvremove      D ffff880126786f80     0 203495  93941    0 0x00000000
     ffff88011ec01b18 0000000000000082 0000000000000000 ffffffff81404e6c
     0000000000000008 0000000000001000 0000000000000000 000000000000000c
     ffff88011ec01b08 ffff880126787520 ffff88011ec01fd8 ffff88011ec01fd8
    Call Trace:
     [<ffffffff81404e6c>] ? dm_table_unplug_all+0x5c/0x100
     [<ffffffff8150adb3>] io_schedule+0x73/0xc0
     [<ffffffff811c856e>] __blockdev_direct_IO_newtrunc+0x6ee/0xb80
     [<ffffffff811c8a5e>] __blockdev_direct_IO+0x5e/0xd0
     [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
     [<ffffffff811c6187>] blkdev_direct_IO+0x57/0x60
     [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
     [<ffffffff811223db>] generic_file_aio_read+0x70b/0x780
     [<ffffffff811c6c61>] ? blkdev_open+0x71/0xc0
     [<ffffffff811889d3>] ? __dentry_open+0x113/0x330
     [<ffffffff81226338>] ? devcgroup_inode_permission+0x48/0x50
     [<ffffffff8118aeba>] do_sync_read+0xfa/0x140
     [<ffffffff8119c102>] ? user_path_at+0x62/0xa0
     [<ffffffff81094280>] ? autoremove_wake_function+0x0/0x40
     [<ffffffff811c576c>] ? block_ioctl+0x3c/0x40
     [<ffffffff8119e712>] ? vfs_ioctl+0x22/0xa0
     [<ffffffff8119e8ba>] ? do_vfs_ioctl+0x8a/0x5d0
     [<ffffffff8118b895>] vfs_read+0xb5/0x1a0
     [<ffffffff8118b9d1>] sys_read+0x51/0x90
     [<ffffffff8100b182>] system_call_fastpath+0x16/0x1b
    INFO: task vgs:203515 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    vgs           D ffff88042b1e7300     0 203515   2121    0 0x00000000
     ffff880103ea5b18 0000000000000086 0000000000000000 ffffffff81404e6c
     0000000000000008 0000000000001000 0000000000000000 000000000000000c
     ffff880103ea5b08 ffff88042b1e78a0 ffff880103ea5fd8 ffff880103ea5fd8
    Call Trace:
     [<ffffffff81404e6c>] ? dm_table_unplug_all+0x5c/0x100
     [<ffffffff8150adb3>] io_schedule+0x73/0xc0
     [<ffffffff811c856e>] __blockdev_direct_IO_newtrunc+0x6ee/0xb80
     [<ffffffff811c8a5e>] __blockdev_direct_IO+0x5e/0xd0
     [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
     [<ffffffff811c6187>] blkdev_direct_IO+0x57/0x60
     [<ffffffff811c5330>] ? blkdev_get_blocks+0x0/0xc0
     [<ffffffff811223db>] generic_file_aio_read+0x70b/0x780
     [<ffffffff811c6c61>] ? blkdev_open+0x71/0xc0
     [<ffffffff811889d3>] ? __dentry_open+0x113/0x330
     [<ffffffff81226338>] ? devcgroup_inode_permission+0x48/0x50
     [<ffffffff8118aeba>] do_sync_read+0xfa/0x140
     [<ffffffff8119c102>] ? user_path_at+0x62/0xa0
     [<ffffffff81094280>] ? autoremove_wake_function+0x0/0x40
     [<ffffffff811c576c>] ? block_ioctl+0x3c/0x40
     [<ffffffff8119e712>] ? vfs_ioctl+0x22/0xa0
     [<ffffffff8119e8ba>] ? do_vfs_ioctl+0x8a/0x5d0
     [<ffffffff8118b895>] vfs_read+0xb5/0x1a0
     [<ffffffff8118b9d1>] sys_read+0x51/0x90
     [<ffffffff8100b182>] system_call_fastpath+0x16/0x1b
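
    As a first diagnostic step (a sketch, not a fix), you can inspect which processes are stuck in uninterruptible sleep and what device-mapper state sits behind them. These are standard tools; only the 120-second threshold comes from the log above:
    Code:
    # list processes in uninterruptible sleep (state D) -- the state the
    # hung lvremove and vgs tasks above are reported in
    ps axo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

    # show all device-mapper devices with their open counts; a leftover
    # snapshot that is still open often explains a blocked lvremove
    dmsetup info -c

    # the watchdog threshold the kernel log refers to (120 s by default)
    cat /proc/sys/kernel/hung_task_timeout_secs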
    
    Who exactly is responsible for this? Do we need to file an upstream bug? We have to fix this problem NOW, because it has been around for far too long. :(
     
  2. e100

    e100 Active Member
    Proxmox VE Subscriber

    Joined:
    Nov 6, 2010
    Messages:
    1,154
    Likes Received:
    8
    I run vzdump with snapshot daily on multiple 2.0 servers.

    Have not seen a single issue related to LVM.

    Whatever is causing this, it does not seem to be an issue for everyone.
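
    For reference, a quick check for stale snapshot volumes after a vzdump run (the vzsnap- name prefix is vzdump's default; the pve volume group name is an assumption based on a stock install):
    Code:
    # list logical volumes with snapshot attributes and fill level
    lvs -o lv_name,vg_name,lv_attr,origin,snap_percent

    # remove a stale snapshot left behind by an aborted backup
    # (hypothetical volume name -- substitute the actual leftover LV)
    lvremove /dev/pve/vzsnap-myhost-0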
     
  3. udo

    udo Active Member

    Joined:
    Apr 22, 2009
    Messages:
    4,452
    Likes Received:
    19
    Hi,
    do you also use an LSI RAID controller? I'm a little afraid of the answer, because I plan to bring up two new machines that ship with an LSI card...

    Udo
     
  4. udi

    udi Member

    Joined:
    Apr 1, 2011
    Messages:
    73
    Likes Received:
    0
    this happens on various hardware; it is not LSI-specific.
     
  5. half_life

    half_life Member

    Joined:
    Feb 16, 2012
    Messages:
    35
    Likes Received:
    0
    This is also not limited to Proxmox. I have seen it on Ubuntu 10.10 + KVM when using dd as a backup tool.
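
    Worth noting: the call traces earlier in the thread are blocked in the block-device direct-I/O read path (blkdev_direct_IO), which is exactly what a dd-style backup exercises when it reads an LV device node with O_DIRECT. A minimal example of that access pattern (device and output paths are placeholders):
    Code:
    # read a logical volume with O_DIRECT, bypassing the page cache --
    # this goes through the same blkdev_direct_IO path shown in the traces
    dd if=/dev/vg0/vm-100-disk-1 of=/backup/vm-100-disk-1.raw bs=1M iflag=direct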
     
