LXC fails to migrate; "volume deactivation failed" ???

tycoonbob

Hi all.

Doing a little weekend maintenance on a 3-node cluster, but I'm unable to migrate one LXC container off a node.

Code:
2018-01-12 16:26:11 starting migration of CT 190 to node 'pve01' (172.16.1.201)
2018-01-12 16:26:11 volume 'ceph_vm:vm-190-disk-1' is on shared storage 'ceph_vm'
2018-01-12 16:26:11 volume 'ceph_vm:vm-190-disk-2' is on shared storage 'ceph_vm'
rbd: sysfs write failed
can't unmap rbd volume vm-190-disk-1: rbd: sysfs write failed
rbd: sysfs write failed
can't unmap rbd volume vm-190-disk-2: rbd: sysfs write failed
2018-01-12 16:26:11 ERROR: volume deactivation failed: ceph_vm:vm-190-disk-1 ceph_vm:vm-190-disk-2 at /usr/share/perl5/PVE/Storage.pm line 999.
2018-01-12 16:26:11 aborting phase 1 - cleanup resources
2018-01-12 16:26:11 start final cleanup
2018-01-12 16:26:11 ERROR: migration aborted (duration 00:00:00): volume deactivation failed: ceph_vm:vm-190-disk-1 ceph_vm:vm-190-disk-2 at /usr/share/perl5/PVE/Storage.pm line 999.
TASK ERROR: migration aborted

It's a CentOS 7 LXC with two disks on Ceph storage. vm-190-disk-2 is large (~1TB, with about 650GB used). I have manual rbd snapshots set up on this container, so I'm wondering if that's my problem here.
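
If it helps narrow things down, these are the usual checks (as far as I know) for whether an image is still mapped or watched on a node; I can post the output if useful:

Code:
rbd showmapped                             # krbd devices mapped on this node
rbd --pool ceph_vm status vm-190-disk-1    # watchers on the image
findmnt | grep -w 190                      # leftover mounts from CT 190

And here are the current rbd snapshots on vm-190-disk-2: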

Code:
root@pve02:~# rbd --pool ceph_vm snap ls vm-190-disk-2
SNAPID NAME                                SIZE TIMESTAMP
    12 first                            1024 GB Wed Dec 27 21:23:00 2017
    13 initial-copy                     1024 GB Thu Dec 28 09:08:42 2017
   681 snap-weekly_2017-12-31_0200      1024 GB Sun Dec 31 02:00:01 2017
   947 snap-monthly_2018-01-01_0300     1024 GB Mon Jan  1 03:00:02 2018
  2200 snap-daily_2018-01-06_0100       1024 GB Sat Jan  6 01:00:02 2018
  2454 snap-daily_2018-01-07_0100       1024 GB Sun Jan  7 01:00:02 2018
  2468 snap-weekly_2018-01-07_0200      1024 GB Sun Jan  7 02:00:02 2018
  2713 snap-daily_2018-01-08_0100       1024 GB Mon Jan  8 01:00:01 2018
  2974 snap-daily_2018-01-09_0100       1024 GB Tue Jan  9 01:00:01 2018
  3228 snap-daily_2018-01-10_0100       1024 GB Wed Jan 10 01:00:02 2018
  3485 snap-daily_2018-01-11_0100       1024 GB Thu Jan 11 01:00:02 2018
  3739 snap-daily_2018-01-12_0100       1024 GB Fri Jan 12 01:00:01 2018
  3848 snap-hourly_2018-01-12_1100      1024 GB Fri Jan 12 11:00:02 2018
  3858 snap-hourly_2018-01-12_1200      1024 GB Fri Jan 12 12:00:02 2018
  3867 snap-hourly_2018-01-12_1300      1024 GB Fri Jan 12 13:00:02 2018
  3878 snap-hourly_2018-01-12_1400      1024 GB Fri Jan 12 14:00:02 2018
  3889 snap-hourly_2018-01-12_1500      1024 GB Fri Jan 12 15:00:01 2018
  3896 snap-quarterhour_2018-01-12_1545 1024 GB Fri Jan 12 15:45:02 2018
  3899 snap-hourly_2018-01-12_1600      1024 GB Fri Jan 12 16:00:02 2018
  3900 snap-quarterhour_2018-01-12_1600 1024 GB Fri Jan 12 16:00:04 2018
  3903 snap-quarterhour_2018-01-12_1615 1024 GB Fri Jan 12 16:15:02 2018
  3905 snap-quarterhour_2018-01-12_1630 1024 GB Fri Jan 12 16:30:01 2018

This LXC stores user-drive data, shared out via NFS. I take quarter-hourly, hourly, daily, weekly, and monthly snapshots of it with some scripts I wrote. This is the only container I have with manual rbd snapshots (and it's a relatively new container), so I suspect they're related to the issue. Is there something I can do to fix this? If I can't migrate because of the snapshots, are there any workarounds, or do I need to find another solution?
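
For reference, the scripts are just thin cron-driven wrappers around rbd snap create / ls / rm; a trimmed-down sketch (script name, hard-coded pool/image, and the pruning logic are illustrative, not my exact code):

Code:
#!/bin/bash
# rotate-rbd-snap.sh <tier> <keep>  -- e.g. "rotate-rbd-snap.sh hourly 24" from cron
set -eu
POOL=ceph_vm
IMAGE=vm-190-disk-2
TIER=$1     # quarterhour | hourly | daily | weekly | monthly
KEEP=$2     # how many snapshots of this tier to retain

# create a new timestamped snapshot for this tier
rbd --pool "$POOL" snap create "${IMAGE}@snap-${TIER}_$(date +%Y-%m-%d_%H%M)"

# prune: snap ls prints oldest first, so drop everything but the newest $KEEP
rbd --pool "$POOL" snap ls "$IMAGE" \
  | awk -v p="^snap-${TIER}_" '$2 ~ p { print $2 }' \
  | head -n -"$KEEP" \
  | while read -r snap; do
      rbd --pool "$POOL" snap rm "${IMAGE}@${snap}"
    done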

Thanks!
 
please post your pveversion -v output, and upgrade to the latest packages if you haven't already. there has been a related bug fix somewhat recently.
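
e.g. (assuming the usual package repositories are configured):

Code:
apt update
apt dist-upgrade
# reboot afterwards if a new pve-kernel package was pulled in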
 
Hi @fabian. I forgot that I had updated the day I posted this, but never rebooted.

Code:
root@pve02:~# uptime
 07:47:36 up 42 days, 19:56,  1 user,  load average: 0.91, 0.68, 0.47
root@pve02:~# pveversion -V
proxmox-ve: 5.1-35 (running kernel: 4.13.8-3-pve)
pve-manager: 5.1-42 (running version: 5.1-42/724a6cb3)
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.10.17-3-pve: 4.10.17-23
pve-kernel-4.10.17-1-pve: 4.10.17-18
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-19
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
openvswitch-switch: 2.7.0-2
ceph: 12.2.2-pve1

After a reboot, the container migrated fine. This is embarrassing. :/
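
In case anyone else trips over the same thing: the clue is right there in the pveversion output above, the running kernel (4.13.8-3-pve) is older than the newest installed pve-kernel (4.13.13-4-pve). A quick way to spot the mismatch before wondering why an already-fixed bug is still biting:

Code:
uname -r                             # kernel actually running
dpkg -l 'pve-kernel-*' | grep ^ii    # kernels installed on disk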
 
We have a similar case:

Code:
pveversion -v
proxmox-ve: 5.1-41 (running kernel: 4.13.13-6-pve)
pve-manager: 5.1-46 (running version: 5.1-46/ae8241d4)
pve-kernel-4.13.13-6-pve: 4.13.13-41
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-3-pve: 4.13.13-34
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
ceph: 12.2.2-pve1
corosync: 2.4.2-pve3
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-common-perl: 5.0-28
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-17
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-2
novnc-pve: 0.6-4
openvswitch-switch: 2.7.0-2
proxmox-widget-toolkit: 1.0-11
pve-cluster: 5.0-20
pve-container: 2.0-19
pve-docs: 5.1-16
pve-firewall: 3.0-5
pve-firmware: 2.0-3
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.9.1-9
pve-xtermjs: 1.0-2
qemu-server: 5.0-22
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.6-pve1~bpo9

CT offline migration:
Code:
2018-03-06 17:55:56 shutdown CT 143
2018-03-06 17:55:56 starting migration of CT 143 to node 'node-test' (x.x.x.x)
2018-03-06 17:55:56 volume 'ceph-lxc:vm-143-disk-1' is on shared storage 'ceph-lxc'
rbd: sysfs write failed
can't unmap rbd volume vm-143-disk-1: rbd: sysfs write failed
2018-03-06 17:55:56 ERROR: volume deactivation failed: ceph-lxc:vm-143-disk-1 at /usr/share/perl5/PVE/Storage.pm line 999.
2018-03-06 17:55:56 aborting phase 1 - cleanup resources
2018-03-06 17:55:56 start final cleanup
2018-03-06 17:55:56 ERROR: migration aborted (duration 00:00:00): volume deactivation failed: ceph-lxc:vm-143-disk-1 at /usr/share/perl5/PVE/Storage.pm line 999.
TASK ERROR: migration aborted

Start/stop doesn't work:
Code:
root@node-small:~# pct status 143
unable to get PID for CT 143 (not running?)
status: stopped
root@node-small:~# pct start 143
CT 143 already running

dmesg:
Code:
[759393.857569] INFO: task umount:12703 blocked for more than 120 seconds.
[759393.859662] Tainted: P O 4.13.13-6-pve #1
[759393.860761] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[759393.861851] umount D 0 12703 12697 0x00000000
[759393.861854] Call Trace:
[759393.861859] __schedule+0x3e0/0x870
[759393.861860] schedule+0x36/0x80
[759393.861863] io_schedule+0x16/0x40
[759393.861873] __lock_page+0xff/0x140
[759393.861875] ? page_cache_tree_insert+0xc0/0xc0
[759393.861877] truncate_inode_pages_range+0x495/0x830
[759393.861879] truncate_inode_pages_final+0x4d/0x60
[759393.861881] ext4_evict_inode+0x9e/0x5d0
[759393.861883] evict+0xca/0x1a0
[759393.861884] dispose_list+0x39/0x50
[759393.861885] evict_inodes+0x171/0x1a0
[759393.861887] generic_shutdown_super+0x44/0x120
[759393.861888] kill_block_super+0x2c/0x80
[759393.861889] deactivate_locked_super+0x48/0x80
[759393.861890] deactivate_super+0x4e/0x60
[759393.861892] cleanup_mnt+0x3f/0x80
[759393.861893] __cleanup_mnt+0x12/0x20
[759393.861895] task_work_run+0x85/0xb0
[759393.861897] exit_to_usermode_loop+0xc4/0xd0
[759393.861899] syscall_return_slowpath+0x59/0x60
[759393.861901] entry_SYSCALL_64_fastpath+0xa9/0xab
[759393.861902] RIP: 0033:0x7fa0b360db67
[759393.861903] RSP: 002b:00007ffdbfcaa5f8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[759393.861904] RAX: 0000000000000000 RBX: 000055a32fa80060 RCX: 00007fa0b360db67
[759393.861905] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 000055a32fa885a0
[759393.861905] RBP: 000055a32fa885a0 R08: 000055a32fa87a60 R09: 0000000000000014
[759393.861906] R10: 00000000000006b4 R11: 0000000000000246 R12: 00007fa0b3b0fe64
[759393.861906] R13: 0000000000000000 R14: 0000000000000000 R15: 00007ffdbfcaa910
[... the same hung-task trace for umount:12703 repeats, identical apart from the timestamps, every ~120 seconds up to 760239.643994 ...]

Processes:
Code:
root@node-small:~# ps aux | grep 143
root      5948  0.0  0.0  50260  4016 ?        Ss   Feb25   0:30 [lxc monitor] /var/lib/lxc 143
root      7247  0.0  0.0  14688  1564 ?        Ss   Mar05   0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole143 -r winch -z lxc-console -n 143 -e -1
root      7248  0.0  0.0  41768  4136 pts/6    Ss+  Mar05   0:00 lxc-console -n 143 -e -1
root     12696  0.0  0.0   4292   808 ?        S    17:41   0:00 sh -c /usr/share/lxc/hooks/lxc-pve-poststop-hook 143 lxc post-stop
root     12697  0.0  0.2 299076 68972 ?        S    17:41   0:00 /usr/bin/perl /usr/share/lxc/hooks/lxc-pve-poststop-hook 143 lxc post-stop
root     12703  0.0  0.0  21984  1196 ?        D    17:41   0:00 umount --recursive /var/lib/lxc/143/rootfs
root     25651  0.0  0.0  12788  1000 pts/0    S+   18:06   0:00 grep 143
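
PID 12703 is the umount in D state (uninterruptible sleep), so it cannot be killed; as far as we know only a node reboot clears it. For the record, these are the kind of checks that show where it is stuck and what is still held (PID and path taken from the ps output above):

Code:
cat /proc/12703/stack                   # kernel-side stack of the stuck umount
ps -o pid,stat,wchan:32 -p 12703        # confirm D state and the wait channel
rbd showmapped                          # rbd devices still mapped on this node
findmnt -R /var/lib/lxc/143/rootfs      # whatever the poststop hook could not unmount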