KVM guests freeze (hung tasks) during backup/restore/migrate

tom

Proxmox Staff Member
Staff member
Please upgrade to the latest kernel and zfs packages; there have been quite a few improvements in many places that help in these cases.

https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.2
Hi Tom,
Thanks for your message. Just to confirm: can I migrate the VMs off the host I'd like to upgrade to another one, upgrade and restart it, move the VMs back from the not-yet-updated host, and repeat this for all three nodes in the cluster? There is no HA.
Yann
 

tom

Proxmox Staff Member
Staff member
Yes, upgrade step by step.
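That rolling procedure can be sketched with the PVE CLI (the node names and VM ID 101 here are hypothetical; `qm migrate` is run on the node currently hosting the VM):

```
# On the node to be upgraded: move each VM to another node first
qm migrate 101 node2 --online
# Upgrade the now-empty node, then reboot it
apt update && apt dist-upgrade
# Finally, from node2, move the VM back
qm migrate 101 node1 --online
```

Without HA this is safe to do node by node, as long as each remaining node has capacity for the migrated guests.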
 
@tom Well, I updated to the latest packages a few days ago and tried again to restore a VM backup, with the same results: freezes of guest VMs that make them unusable and unreachable from outside.
 


Morbious

New Member
Any good news? I updated to the newest version of Proxmox and the behaviour is worse than before 5.2.
My VMs remount read-only after a few hours of copying data...
 

gkovacs

Active Member
Unfortunately the Proxmox developers @tom @fabian @dcsapak never acknowledged this as a problem or bug on its own; rather, they try to blame hardware or software configuration, and they even killed the bug report I filed that had started to accumulate many user reports:
https://bugzilla.proxmox.com/show_bug.cgi?id=1453
(The bug has status RESOLVED INVALID, which is funny, as the issue is neither resolved nor invalid.)

There are many more threads discussing this issue, here is another one:
https://forum.proxmox.com/threads/virtio-task-xxx-blocked-for-more-than-120-seconds.25835

Problem description
In reality, this is most likely a serious and overlooked kernel issue, probably related to memory management or scheduling, as it happens on many entirely different hardware configurations (single HDD, HDD HW or SW RAID, single SSD, SSD-cached HW or SW HDD RAID, SSD HW or SW RAID), on entirely different filesystem setups (LVM vg+lv, LVM+ext4+qcow2, ZFS+qcow2, ZFS+zvol, etc.), and on all the processor architectures from the past 10+ years.

What seems to be happening is: when there is significant load on host local storage or guest IO devices (not limited to VirtIO disks, as it happens on IDE disks and even VirtIO NICs as well), the memory access requests of some VMs slow down or time out; this is what you see in the guest logs as hung tasks, CPU soft lockups, CPU / rcu_scheduler stalls, blocked applications in syslog, or even stack dumps.

I have experienced this problem during backups, restores and migrations involving local storage on the host, but even a VM doing very heavy network IO (a speedtest.net run from a browser, for example) can cause the same effect in other guests on the same physical system.

Mitigations
I don't see any real solution to this problem on the horizon, apart from the tweaks that I have posted many times before:
- bandwidth-limit backups, restores and migrations
- put swap on an NVMe SSD, and also the ZIL+L2ARC if you use ZFS
- use the recommended swap settings from the ZFS wiki
- use vm.swappiness=1 on the host and in guests
- increase vm.min_free_kbytes on both hosts and guests
- decrease vm.dirty_ratio to 2 and vm.dirty_background_ratio to 1 on both hosts and guests

These hacks lessen the severity of this kernel issue (lockups happen less frequently), but they do not solve it completely.
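For reference, the sysctl-based tweaks in the list above can be collected into one drop-in file. A sketch (the file name and the min_free_kbytes value are examples; size the latter to your RAM):

```
# /etc/sysctl.d/85-vm-tweaks.conf
vm.swappiness = 1
vm.min_free_kbytes = 262144        # example value (256 MB); size to your RAM
vm.dirty_ratio = 2
vm.dirty_background_ratio = 1
```

Apply with `sysctl --system` (or a reboot), on both the host and the guests.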
 

Morbious

New Member
It's very sad that support doesn't care about this.
I have seen a lot of similar problems, and the only answer was "update/upgrade", which doesn't help much.
The strangest thing is that I have two identical machines with 1.5 TB RAM and NVMe disks: one runs an old Proxmox version (5.1) and is fine, the other runs the newest (5.3) and I get errors.
 

gkovacs

Active Member
Hi Morbious, can you please post some more info?
- host HW configuration (CPU and memory type and amount, how many NVMe disks)
- host SW configuration (kernel versions on both machines, storage type and filesystem used), how many VMs
- VM configs of the VMs you get errors on (cpu number, memory, virtio devices, OS type, OS version)
- are there similar VMs on the host you don't get errors on?
- errors and context (what kind of errors you get in the VMs and during what, backups or restores?)
- do you use any of the mitigations I posted above?
 

Morbious

New Member
HW:
4× Intel(R) Xeon(R) CPU E7-8860 v3 @ 2.20GHz
1.5 TB RAM
6 NVMe disks (ZFS)
8× 1.6 TB HDD (hardware RAID 5)

SW:

Proxmox 5.1 (working)
100 GB system
1 GB LVM /mnt/backup
rest thin-provisioned for guest machines
5 VMs with heavy load (4× Solr cluster on NVMe/HDD, 1 database on NVMe/HDD)
example_machine: 8 CPUs, 32 GB RAM, 32 GB system (HDD), 750 GB data (HDD) [previously 500 GB NVMe]

Proxmox 5.2/5.3 (not working)
100 GB system
1 GB LVM /mnt/backup
rest thin-provisioned for guest machines
1 VM (example_machine, 32 GB RAM, 8 CPUs) moved from the Proxmox 5.1 host without its data; then I created a new 750 GB data volume (HDD) and fed it with scp. During this operation I got:

[23370.114508] scsi_io_completion: 13 callbacks suppressed
[23370.114561] sd 2:0:0:1: [sdb] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[23370.114566] sd 2:0:0:1: [sdb] tag#1 Sense Key : Aborted Command [current]
[23370.114569] sd 2:0:0:1: [sdb] tag#1 Add. Sense: I/O process terminated
[23370.114574] sd 2:0:0:1: [sdb] tag#1 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[23370.114578] blk_update_request: 13 callbacks suppressed
[23370.114580] blk_update_request: I/O error, dev sdb, sector 784633688
[23370.115664] Aborting journal on device sdb-8.
[23370.154477] sd 2:0:0:1: [sdb] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[23370.154505] sd 2:0:0:1: [sdb] tag#1 Sense Key : Aborted Command [current]
[23370.154513] sd 2:0:0:1: [sdb] tag#1 Add. Sense: I/O process terminated
[23370.154521] sd 2:0:0:1: [sdb] tag#1 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[23370.154530] blk_update_request: I/O error, dev sdb, sector 784596992
[23370.155075] Buffer I/O error on dev sdb, logical block 98074624, lost sync page write
[23370.155597] JBD2: Error -5 detected when updating journal superblock for sdb-8.
[23372.010446] sd 2:0:0:1: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[23372.010461] sd 2:0:0:1: [sdb] tag#0 Sense Key : Aborted Command [current]
[23372.010464] sd 2:0:0:1: [sdb] tag#0 Add. Sense: I/O process terminated
[23372.010468] sd 2:0:0:1: [sdb] tag#0 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[23372.010472] blk_update_request: I/O error, dev sdb, sector 0
[23372.010995] Buffer I/O error on dev sdb, logical block 0, lost sync page write
[23372.011532] EXT4-fs error (device sdb): ext4_journal_check_start:56: Detected aborted journal
[23372.012121] EXT4-fs (sdb): Remounting filesystem read-only
[23372.012646] EXT4-fs (sdb): previous I/O error to superblock detected
[23372.050557] sd 2:0:0:1: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[23372.050586] sd 2:0:0:1: [sdb] tag#0 Sense Key : Aborted Command [current]
[23372.050589] sd 2:0:0:1: [sdb] tag#0 Add. Sense: I/O process terminated
[23372.050592] sd 2:0:0:1: [sdb] tag#0 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[23372.050595] blk_update_request: I/O error, dev sdb, sector 0
[23372.051131] Buffer I/O error on dev sdb, logical block 0, lost sync page write

And on Dom0:

[Wed Jan 2 22:36:52 2019] INFO: task kvm:11230 blocked for more than 120 seconds.
[Wed Jan 2 22:36:52 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 22:36:52 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 22:36:52 2019] kvm D 0 11230 1 0x00000000
[Wed Jan 2 22:36:52 2019] Call Trace:
[Wed Jan 2 22:36:52 2019] __schedule+0x3e0/0x870
[Wed Jan 2 22:36:52 2019] schedule+0x36/0x80
[Wed Jan 2 22:36:52 2019] schedule_timeout+0x1d4/0x360
[Wed Jan 2 22:36:52 2019] io_schedule_timeout+0x1e/0x50
[Wed Jan 2 22:36:52 2019] wait_for_completion_io+0xb4/0x140
[Wed Jan 2 22:36:52 2019] ? wake_up_q+0x80/0x80
[Wed Jan 2 22:36:52 2019] submit_bio_wait+0x61/0x90
[Wed Jan 2 22:36:52 2019] blkdev_issue_flush+0x85/0xb0
[Wed Jan 2 22:36:52 2019] blkdev_fsync+0x35/0x50
[Wed Jan 2 22:36:52 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 22:36:52 2019] do_fsync+0x3d/0x70
[Wed Jan 2 22:36:52 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 22:36:52 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 22:36:52 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 22:36:52 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 22:36:52 2019] RSP: 002b:00007fb6e4ffc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 22:36:52 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 22:36:52 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000010
[Wed Jan 2 22:36:52 2019] RBP: 00007fb717308ac0 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 22:36:52 2019] R10: 00007fb6e4ffc620 R11: 0000000000000293 R12: 00007fbf23dbfd40
[Wed Jan 2 22:36:52 2019] R13: 00007fbf23cc1ef8 R14: 00007fb718f3d890 R15: 00007fbf4a531040
[Wed Jan 2 22:38:53 2019] INFO: task kvm:10826 blocked for more than 120 seconds.
[Wed Jan 2 22:38:53 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 22:38:53 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 22:38:53 2019] kvm D 0 10826 1 0x00000002
[Wed Jan 2 22:38:53 2019] Call Trace:
[Wed Jan 2 22:38:53 2019] __schedule+0x3e0/0x870
[Wed Jan 2 22:38:53 2019] schedule+0x36/0x80
[Wed Jan 2 22:38:53 2019] io_schedule+0x16/0x40
[Wed Jan 2 22:38:53 2019] wait_on_page_bit_common+0xf3/0x190
[Wed Jan 2 22:38:53 2019] ? page_cache_tree_insert+0xe0/0xe0
[Wed Jan 2 22:38:53 2019] __filemap_fdatawait_range+0xfa/0x160
[Wed Jan 2 22:38:53 2019] file_write_and_wait_range+0x70/0xb0
[Wed Jan 2 22:38:53 2019] blkdev_fsync+0x1b/0x50
[Wed Jan 2 22:38:53 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 22:38:53 2019] do_fsync+0x3d/0x70
[Wed Jan 2 22:38:53 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 22:38:53 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 22:38:53 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 22:38:53 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 22:38:53 2019] RSP: 002b:00007fb6e7ffc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 22:38:53 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 22:38:53 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000011
[Wed Jan 2 22:38:53 2019] RBP: 00007fb718b6a200 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 22:38:53 2019] R10: 00007fb6e7ffc620 R11: 0000000000000293 R12: 00007fbf23dbfe00
[Wed Jan 2 22:38:53 2019] R13: 00007fbf23cc1ef8 R14: 00007fb717372160 R15: 00007fbf4a531040
[Wed Jan 2 22:38:53 2019] INFO: task kvm:11230 blocked for more than 120 seconds.
[Wed Jan 2 22:38:53 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 22:38:53 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 22:38:53 2019] kvm D 0 11230 1 0x00000000
[Wed Jan 2 22:38:53 2019] Call Trace:
[Wed Jan 2 22:38:53 2019] __schedule+0x3e0/0x870
[Wed Jan 2 22:38:53 2019] schedule+0x36/0x80
[Wed Jan 2 22:38:53 2019] schedule_timeout+0x1d4/0x360
[Wed Jan 2 22:38:53 2019] io_schedule_timeout+0x1e/0x50
[Wed Jan 2 22:38:53 2019] wait_for_completion_io+0xb4/0x140
[Wed Jan 2 22:38:53 2019] ? wake_up_q+0x80/0x80
[Wed Jan 2 22:38:53 2019] submit_bio_wait+0x61/0x90
[Wed Jan 2 22:38:53 2019] blkdev_issue_flush+0x85/0xb0
[Wed Jan 2 22:38:53 2019] blkdev_fsync+0x35/0x50
[Wed Jan 2 22:38:53 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 22:38:53 2019] do_fsync+0x3d/0x70
[Wed Jan 2 22:38:53 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 22:38:53 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 22:38:53 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 22:38:53 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 22:38:53 2019] RSP: 002b:00007fb6e4ffc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 22:38:53 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 22:38:53 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000010
[Wed Jan 2 22:38:53 2019] RBP: 00007fb717308ac0 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 22:38:53 2019] R10: 00007fb6e4ffc620 R11: 0000000000000293 R12: 00007fbf23dbfd40
[Wed Jan 2 22:38:53 2019] R13: 00007fbf23cc1ef8 R14: 00007fb718f3d890 R15: 00007fbf4a531040
[Wed Jan 2 22:42:55 2019] INFO: task kvm:11108 blocked for more than 120 seconds.
[Wed Jan 2 22:42:55 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 22:42:55 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 22:42:55 2019] kvm D 0 11108 1 0x00000002
[Wed Jan 2 22:42:55 2019] Call Trace:
[Wed Jan 2 22:42:55 2019] __schedule+0x3e0/0x870
[Wed Jan 2 22:42:55 2019] schedule+0x36/0x80
[Wed Jan 2 22:42:55 2019] io_schedule+0x16/0x40
[Wed Jan 2 22:42:55 2019] wait_on_page_bit_common+0xf3/0x190
[Wed Jan 2 22:42:55 2019] ? page_cache_tree_insert+0xe0/0xe0
[Wed Jan 2 22:42:55 2019] __filemap_fdatawait_range+0xfa/0x160
[Wed Jan 2 22:42:55 2019] file_write_and_wait_range+0x70/0xb0
[Wed Jan 2 22:42:55 2019] blkdev_fsync+0x1b/0x50
[Wed Jan 2 22:42:55 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 22:42:55 2019] do_fsync+0x3d/0x70
[Wed Jan 2 22:42:55 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 22:42:55 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 22:42:55 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 22:42:55 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 22:42:55 2019] RSP: 002b:00007fb702ffb5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 22:42:55 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 22:42:55 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000011
[Wed Jan 2 22:42:55 2019] RBP: 00007fb7172bbac0 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 22:42:55 2019] R10: 00007fb702ffb620 R11: 0000000000000293 R12: 00007fbf23dbfe00
[Wed Jan 2 22:42:55 2019] R13: 00007fbf23cc1ef8 R14: 00007fb717372080 R15: 00007fbf4a531040
[Wed Jan 2 22:42:55 2019] INFO: task kvm:12220 blocked for more than 120 seconds.
[Wed Jan 2 22:42:55 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 22:42:55 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 22:42:55 2019] kvm D 0 12220 1 0x00000000
[Wed Jan 2 22:42:55 2019] Call Trace:
[Wed Jan 2 22:42:55 2019] __schedule+0x3e0/0x870
[Wed Jan 2 22:42:55 2019] schedule+0x36/0x80
[Wed Jan 2 22:42:55 2019] io_schedule+0x16/0x40
[Wed Jan 2 22:42:55 2019] wait_on_page_bit_common+0xf3/0x190
[Wed Jan 2 22:42:55 2019] ? page_cache_tree_insert+0xe0/0xe0
[Wed Jan 2 22:42:55 2019] __filemap_fdatawait_range+0xfa/0x160
[Wed Jan 2 22:42:55 2019] file_write_and_wait_range+0x70/0xb0
[Wed Jan 2 22:42:55 2019] blkdev_fsync+0x1b/0x50
[Wed Jan 2 22:42:55 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 22:42:55 2019] do_fsync+0x3d/0x70
[Wed Jan 2 22:42:55 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 22:42:55 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 22:42:55 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 22:42:55 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 22:42:55 2019] RSP: 002b:00007fb6deffc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 22:42:55 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 22:42:55 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000010
[Wed Jan 2 22:42:55 2019] RBP: 00007fb716086980 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 22:42:55 2019] R10: 00007fb6deffc620 R11: 0000000000000293 R12: 00007fbf23dbfd40
[Wed Jan 2 22:42:55 2019] R13: 00007fbf23cc1ef8 R14: 00007fb718f3c550 R15: 00007fbf4a531040
[Wed Jan 2 22:44:56 2019] INFO: task kvm:11108 blocked for more than 120 seconds.
[Wed Jan 2 22:44:56 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 22:44:56 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 22:44:56 2019] kvm D 0 11108 1 0x00000002
[Wed Jan 2 22:44:56 2019] Call Trace:
[Wed Jan 2 22:44:56 2019] __schedule+0x3e0/0x870
[Wed Jan 2 22:44:56 2019] schedule+0x36/0x80
[Wed Jan 2 22:44:56 2019] io_schedule+0x16/0x40
[Wed Jan 2 22:44:56 2019] wait_on_page_bit_common+0xf3/0x190
[Wed Jan 2 22:44:56 2019] ? page_cache_tree_insert+0xe0/0xe0
[Wed Jan 2 22:44:56 2019] __filemap_fdatawait_range+0xfa/0x160
[Wed Jan 2 22:44:56 2019] file_write_and_wait_range+0x70/0xb0
[Wed Jan 2 22:44:56 2019] blkdev_fsync+0x1b/0x50
[Wed Jan 2 22:44:56 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 22:44:56 2019] do_fsync+0x3d/0x70
[Wed Jan 2 22:44:56 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 22:44:56 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 22:44:56 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 22:44:56 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 22:44:56 2019] RSP: 002b:00007fb702ffb5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 22:44:56 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 22:44:56 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000011
[Wed Jan 2 22:44:56 2019] RBP: 00007fb7172bbac0 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 22:44:56 2019] R10: 00007fb702ffb620 R11: 0000000000000293 R12: 00007fbf23dbfe00
[Wed Jan 2 22:44:56 2019] R13: 00007fbf23cc1ef8 R14: 00007fb717372080 R15: 00007fbf4a531040
[Wed Jan 2 22:44:56 2019] INFO: task kvm:12220 blocked for more than 120 seconds.
[Wed Jan 2 22:44:56 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 22:44:56 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 22:44:56 2019] kvm D 0 12220 1 0x00000000
[Wed Jan 2 22:44:56 2019] Call Trace:
[Wed Jan 2 22:44:56 2019] __schedule+0x3e0/0x870
[Wed Jan 2 22:44:56 2019] schedule+0x36/0x80
[Wed Jan 2 22:44:56 2019] io_schedule+0x16/0x40
[Wed Jan 2 22:44:56 2019] wait_on_page_bit_common+0xf3/0x190
[Wed Jan 2 22:44:56 2019] ? page_cache_tree_insert+0xe0/0xe0
[Wed Jan 2 22:44:56 2019] __filemap_fdatawait_range+0xfa/0x160
[Wed Jan 2 22:44:56 2019] file_write_and_wait_range+0x70/0xb0
[Wed Jan 2 22:44:56 2019] blkdev_fsync+0x1b/0x50
[Wed Jan 2 22:44:56 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 22:44:56 2019] do_fsync+0x3d/0x70
[Wed Jan 2 22:44:56 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 22:44:56 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 22:44:56 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 22:44:56 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 22:44:56 2019] RSP: 002b:00007fb6deffc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 22:44:56 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 22:44:56 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000010
[Wed Jan 2 22:44:56 2019] RBP: 00007fb716086980 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 22:44:56 2019] R10: 00007fb6deffc620 R11: 0000000000000293 R12: 00007fbf23dbfd40
[Wed Jan 2 22:44:56 2019] R13: 00007fbf23cc1ef8 R14: 00007fb718f3c550 R15: 00007fbf4a531040
[Wed Jan 2 23:17:09 2019] INFO: task kvm:16152 blocked for more than 120 seconds.
[Wed Jan 2 23:17:09 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 23:17:09 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 23:17:09 2019] kvm D 0 16152 1 0x00000000
[Wed Jan 2 23:17:09 2019] Call Trace:
[Wed Jan 2 23:17:09 2019] __schedule+0x3e0/0x870
[Wed Jan 2 23:17:09 2019] schedule+0x36/0x80
[Wed Jan 2 23:17:09 2019] schedule_timeout+0x1d4/0x360
[Wed Jan 2 23:17:09 2019] io_schedule_timeout+0x1e/0x50
[Wed Jan 2 23:17:09 2019] wait_for_completion_io+0xb4/0x140
[Wed Jan 2 23:17:09 2019] ? wake_up_q+0x80/0x80
[Wed Jan 2 23:17:09 2019] submit_bio_wait+0x61/0x90
[Wed Jan 2 23:17:09 2019] blkdev_issue_flush+0x85/0xb0
[Wed Jan 2 23:17:09 2019] blkdev_fsync+0x35/0x50
[Wed Jan 2 23:17:09 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 23:17:09 2019] do_fsync+0x3d/0x70
[Wed Jan 2 23:17:09 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 23:17:09 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 23:17:09 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 23:17:09 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 23:17:09 2019] RSP: 002b:00007fb6e2ffc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 23:17:09 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 23:17:09 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000010
[Wed Jan 2 23:17:09 2019] RBP: 00007fb716087b00 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 23:17:09 2019] R10: 00007fb6e2ffc620 R11: 0000000000000293 R12: 00007fbf23dbfd40
[Wed Jan 2 23:17:09 2019] R13: 00007fbf23cc1ef8 R14: 00007fb7173733c0 R15: 00007fbf4a531040
[Wed Jan 2 23:23:11 2019] INFO: task kvm:16934 blocked for more than 120 seconds.
[Wed Jan 2 23:23:11 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 23:23:11 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 23:23:11 2019] kvm D 0 16934 1 0x00000000
[Wed Jan 2 23:23:11 2019] Call Trace:
[Wed Jan 2 23:23:11 2019] __schedule+0x3e0/0x870
[Wed Jan 2 23:23:11 2019] schedule+0x36/0x80
[Wed Jan 2 23:23:11 2019] schedule_timeout+0x1d4/0x360
[Wed Jan 2 23:23:11 2019] io_schedule_timeout+0x1e/0x50
[Wed Jan 2 23:23:11 2019] wait_for_completion_io+0xb4/0x140
[Wed Jan 2 23:23:11 2019] ? wake_up_q+0x80/0x80
[Wed Jan 2 23:23:11 2019] submit_bio_wait+0x61/0x90
[Wed Jan 2 23:23:11 2019] blkdev_issue_flush+0x85/0xb0
[Wed Jan 2 23:23:11 2019] blkdev_fsync+0x35/0x50
[Wed Jan 2 23:23:11 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 23:23:11 2019] do_fsync+0x3d/0x70
[Wed Jan 2 23:23:11 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 23:23:11 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 23:23:11 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 23:23:11 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 23:23:11 2019] RSP: 002b:00007fb6efffc5f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 23:23:11 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 23:23:11 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000010
[Wed Jan 2 23:23:11 2019] RBP: 00007fb719ba1600 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 23:23:11 2019] R10: 00007fb6efffc620 R11: 0000000000000293 R12: 00007fbf23dbfd40
[Wed Jan 2 23:23:11 2019] R13: 00007fbf23cc1ef8 R14: 00007fb7173732e0 R15: 00007fbf4a531040
[Wed Jan 2 23:29:14 2019] INFO: task kvm:17669 blocked for more than 120 seconds.
[Wed Jan 2 23:29:14 2019] Tainted: P O 4.15.18-9-pve #1
[Wed Jan 2 23:29:14 2019] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Wed Jan 2 23:29:14 2019] kvm D 0 17669 1 0x00000000
[Wed Jan 2 23:29:14 2019] Call Trace:
[Wed Jan 2 23:29:14 2019] __schedule+0x3e0/0x870
[Wed Jan 2 23:29:14 2019] schedule+0x36/0x80
[Wed Jan 2 23:29:14 2019] schedule_timeout+0x1d4/0x360
[Wed Jan 2 23:29:14 2019] io_schedule_timeout+0x1e/0x50
[Wed Jan 2 23:29:14 2019] wait_for_completion_io+0xb4/0x140
[Wed Jan 2 23:29:14 2019] ? wake_up_q+0x80/0x80
[Wed Jan 2 23:29:14 2019] submit_bio_wait+0x61/0x90
[Wed Jan 2 23:29:14 2019] blkdev_issue_flush+0x85/0xb0
[Wed Jan 2 23:29:14 2019] blkdev_fsync+0x35/0x50
[Wed Jan 2 23:29:14 2019] vfs_fsync_range+0x51/0xb0
[Wed Jan 2 23:29:14 2019] do_fsync+0x3d/0x70
[Wed Jan 2 23:29:14 2019] SyS_fdatasync+0x13/0x20
[Wed Jan 2 23:29:14 2019] do_syscall_64+0x73/0x130
[Wed Jan 2 23:29:14 2019] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[Wed Jan 2 23:29:14 2019] RIP: 0033:0x7fbf31a1e60d
[Wed Jan 2 23:29:14 2019] RSP: 002b:00007fb70bef75f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
[Wed Jan 2 23:29:14 2019] RAX: ffffffffffffffda RBX: 00000000fffffffb RCX: 00007fbf31a1e60d
[Wed Jan 2 23:29:14 2019] RDX: 00007fbf23cc1e90 RSI: 000055bdd2f006e0 RDI: 0000000000000010
[Wed Jan 2 23:29:14 2019] RBP: 00007fb717383640 R08: 0000000000000000 R09: 00000000ffffffff
[Wed Jan 2 23:29:14 2019] R10: 00007fb70bef7620 R11: 0000000000000293 R12: 00007fbf23dbfd40
[Wed Jan 2 23:29:14 2019] R13: 00007fbf23cc1ef8 R14: 00007fb7173734a0 R15: 00007fbf4a531040
[Wed Jan 2 23:44:32 2019] perf: interrupt took too long (2504 > 2500), lowering kernel.perf_event_max_sample_rate to 79750
[Thu Jan 3 00:54:41 2019] perf: interrupt took too long (3150 > 3130), lowering kernel.perf_event_max_sample_rate to 63500
[Thu Jan 3 02:48:07 2019] perf: interrupt took too long (3939 > 3937), lowering kernel.perf_event_max_sample_rate to 50750
[Thu Jan 3 03:47:49 2019] device-mapper: thin: 253:5: reached low water mark for metadata device: sending event.
[cut]
[Thu Jan 3 04:24:45 2019] device-mapper: space map metadata: unable to allocate new metadata block
[Thu Jan 3 04:24:45 2019] device-mapper: thin: 253:5: metadata operation 'dm_thin_insert_block' failed: error = -28
[Thu Jan 3 04:24:45 2019] device-mapper: thin: 253:5: aborting current metadata transaction
[Thu Jan 3 04:24:45 2019] device-mapper: thin: 253:5: switching pool to read-only mode
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061824, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061825, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061826, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061827, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061828, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061829, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061830, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061831, lost async page write
[Thu Jan 3 04:24:45 2019] Buffer I/O error on dev dm-21, logical block 95061832, lost async page write
[Thu Jan 3 04:24:46 2019] Buffer I/O error on dev dm-21, logical block 95061833, lost async page write

Even when I just copied content from one partition to another on Proxmox 5.2/5.3, I got errors and freezing.
The same job on Proxmox 5.1 ran to the end without any errors.
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
[Thu Jan 3 03:47:49 2019] device-mapper: thin: 253:5: reached low water mark for metadata device: sending event.
[cut]
[Thu Jan 3 04:24:45 2019] device-mapper: space map metadata: unable to allocate new metadata block
[Thu Jan 3 04:24:45 2019] device-mapper: thin: 253:5: metadata operation 'dm_thin_insert_block' failed: error = -28
[Thu Jan 3 04:24:45 2019] device-mapper: thin: 253:5: aborting current metadata transaction
[Thu Jan 3 04:24:45 2019] device-mapper: thin: 253:5: switching pool to read-only mode
It seems you are using lvm-thin as a storage backend, and that your thin pool is full.
Check the output of:
* vgs
* lvs -a
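The Data% and Meta% columns of `lvs` are the ones to watch here. A minimal sketch of checking them automatically (the sample line stands in for real `lvs --noheadings -o lv_name,data_percent,metadata_percent` output on a live host; the pool name "data" and the 90% threshold are assumptions):

```shell
# One thin-pool line as `lvs --noheadings -o lv_name,data_percent,metadata_percent`
# would print it; on a real host, capture the actual command output instead.
sample="data 71.50 94.12"
set -- $sample
pool=$1; data_pct=$2; meta_pct=$3
# Compare the integer part of Meta% against a 90% warning threshold
if [ "${meta_pct%.*}" -ge 90 ]; then
  echo "WARNING: thin pool $pool metadata is $meta_pct% full"
fi
```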
 

Morbious

New Member
Yes, I use a thin pool, but it wasn't full.
Today I reinstalled the machine with a fresh 5.1-41 and got the same errors.
During the restore I got a spike in IOPS. Don't you think that's strange on a completely fresh machine with no VMs on board?
 


Stoiko Ivanov

Proxmox Staff Member
Staff member
The error message I quoted above does indicate that the thin pool (most likely its metadata volume) is full.

Please post the output of `lvs -a` when the condition happens.
 

Morbious

New Member
You're probably right - now I see on the other machine that the metadata volume is only 80m. Is that normal?
How does a thin pool reserve space for metadata?
I have a large ~10 TB data partition and it only reserves 80m.
Why does one machine with 10 VMs have 10% of its metadata in use, while the other, which has only 4 VMs, has 15%?
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
The automatic sizing of a thin pool's metadata is done by the LVM utilities and lands between 80m and 200m, aiming to have enough space to reference each chunk in the thin pool twice. Since the needed metadata size depends on many factors (e.g. number of snapshots, number of linked clones), there cannot be a one-size-fits-all approach. However, 80m sounds a bit on the low end.

If you resize a thin pool, make sure to extend the metadata size as well.
The PVE installer sets the metadata size to at least 1G for this reason.

Check the `lvmthin` manpage - it describes the technology rather well.

If your metadata got full, you most likely ended up with lost writes, and thus data corruption in the images - if in doubt, restore from a backup.
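The resize advice translates to `lvextend`; a sketch, assuming the default `pve/data` pool and example sizes (`--test` makes LVM only simulate the change):

```
# Grow the thin data LV and its metadata LV together
lvextend --test -L +100G pve/data
lvextend --test --poolmetadatasize +1G pve/data
```

Drop `--test` once the sizes look right for your pool.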
 

Morbious

New Member
Thank you for the answer.
 

Marcin Kubica

New Member
I'm new to Proxmox in general, but I'm having a similar problem on Proxmox 5.2-5. Upon restore, all guests froze after 2 minutes and reported SATA failures. One guest was hit pretty badly and lost the partition tables on two disks. There is no indication of any issue in the host logs. What about the cache setting on the guest disks - could "no cache" be a cause?

Code:
Feb 12 09:07:45 guest kernel: [9997833.895507] ata4.00: exception Emask 0x0 SAct 0x400 SErr 0x0 action 0x6 frozen
Feb 12 09:07:45 guest kernel: [9997833.897401] ata4.00: failed command: WRITE FPDMA QUEUED
Feb 12 09:07:45 guest kernel: [9997833.899192] ata4.00: cmd 61/08:50:f0:21:44/00:00:00:00:00/40 tag 10 ncq dma 4096 out
Feb 12 09:07:45 guest kernel: [9997833.899192]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:07:45 guest kernel: [9997833.902651] ata4.00: status: { DRDY }
Feb 12 09:07:45 guest kernel: [9997833.904449] ata4: hard resetting link
Feb 12 09:08:06 guest kernel: [9997854.918748] INFO: rcu_sched detected stalls on CPUs/tasks:
Feb 12 09:08:06 guest kernel: [9997854.920656]     1-...: (1 GPs behind) idle=f91/140000000000000/0 softirq=97807910/97807911 fqs=2620
Feb 12 09:08:06 guest kernel: [9997854.922454]     (detected by 4, t=5252 jiffies, g=91836373, c=91836372, q=546)
Feb 12 09:08:06 guest kernel: [9997854.924226] Task dump for CPU 1:
Feb 12 09:08:06 guest kernel: [9997854.925918] scsi_eh_3       R  running task        0   148      2 0x00000008
Feb 12 09:08:06 guest kernel: [9997854.927684]  ffff95036b4f04c0 ffff95053fc58980 ffff95051e08c3c0 0000000000000000
Feb 12 09:08:06 guest kernel: [9997854.929487]  ffff95051e2672c0 ffffa9b5c39239a8 ffffffffa9a10ef1 ffffffffa94e9f74
Feb 12 09:08:06 guest kernel: [9997854.931239]  0000000000000040 ffff95053fc58980 00ffffffa94d04f1 ffff95051e08c3c0
Feb 12 09:08:06 guest kernel: [9997854.933049] Call Trace:
Feb 12 09:08:06 guest kernel: [9997854.934932]  [<ffffffffa9a10ef1>] ? __schedule+0x241/0x6f0
Feb 12 09:08:06 guest kernel: [9997854.936751]  [<ffffffffa94e9f74>] ? hrtimer_start_range_ns+0x194/0x360
Feb 12 09:08:06 guest kernel: [9997854.938572]  [<ffffffffa94e9997>] ? hrtimer_try_to_cancel+0x27/0x110
Feb 12 09:08:06 guest kernel: [9997854.940379]  [<ffffffffa9a14c15>] ? schedule_hrtimeout_range_clock+0xc5/0x1a0
Feb 12 09:08:06 guest kernel: [9997854.942230]  [<ffffffffa94e9760>] ? __hrtimer_init+0xa0/0xa0
Feb 12 09:08:06 guest kernel: [9997854.944105]  [<ffffffffc04cb060>] ? ahci_scr_read+0x60/0x60 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.946058]  [<ffffffffc053a431>] ? sata_link_resume+0x61/0x160 [libata]
Feb 12 09:08:06 guest kernel: [9997854.947908]  [<ffffffffc054026c>] ? sata_link_hardreset+0x12c/0x210 [libata]
Feb 12 09:08:06 guest kernel: [9997854.949737]  [<ffffffffc04cc707>] ? ahci_hardreset+0xe7/0x140 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.951542]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.953300]  [<ffffffffc0549f41>] ? ata_eh_reset+0x3a1/0xd30 [libata]
Feb 12 09:08:06 guest kernel: [9997854.955060]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.956893]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.958606]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.960322]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.961997]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.963587]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.965222]  [<ffffffffc054ac95>] ? ata_eh_recover+0x275/0x1360 [libata]
Feb 12 09:08:06 guest kernel: [9997854.966749]  [<ffffffffc053fed0>] ? ata_phys_link_offline+0x60/0x60 [libata]
Feb 12 09:08:06 guest kernel: [9997854.968264]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.969790]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.971246]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.972766]  [<ffffffffc0551ce7>] ? sata_pmp_error_handler+0xc7/0x9b0 [libata]
Feb 12 09:08:06 guest kernel: [9997854.974250]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:08:06 guest kernel: [9997854.975737]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:08:06 guest kernel: [9997854.977157]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:08:06 guest kernel: [9997854.978535]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:08:06 guest kernel: [9997854.979952]  [<ffffffffa94e64c4>] ? lock_timer_base+0x74/0x90
Feb 12 09:08:06 guest kernel: [9997854.981287]  [<ffffffffc04ccf98>] ? ahci_error_handler+0x18/0x60 [libahci]
Feb 12 09:08:06 guest kernel: [9997854.982670]  [<ffffffffc054c297>] ? ata_scsi_port_error_handler+0x437/0x780 [libata]
Feb 12 09:08:06 guest kernel: [9997854.984261]  [<ffffffffc03e6bc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:08:06 guest kernel: [9997854.985601]  [<ffffffffc03e4210>] ? scsi_try_target_reset+0x80/0x80 [scsi_mod]
Feb 12 09:08:06 guest kernel: [9997854.986900]  [<ffffffffc054c683>] ? ata_scsi_error+0xa3/0xe0 [libata]
Feb 12 09:08:06 guest kernel: [9997854.988187]  [<ffffffffc03e6c9e>] ? scsi_error_handler+0xde/0x890 [scsi_mod]
Feb 12 09:08:06 guest kernel: [9997854.989511]  [<ffffffffa9a10ef1>] ? __schedule+0x241/0x6f0
Feb 12 09:08:06 guest kernel: [9997854.990777]  [<ffffffffa94bbaaf>] ? __wake_up_common+0x4f/0x90
Feb 12 09:08:06 guest kernel: [9997854.992021]  [<ffffffffc03e6bc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:08:06 guest kernel: [9997854.993283]  [<ffffffffa94993e9>] ? kthread+0xd9/0xf0
Feb 12 09:08:06 guest kernel: [9997854.994521]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:08:06 guest kernel: [9997854.995803]  [<ffffffffa9499310>] ? kthread_park+0x60/0x60
Feb 12 09:08:06 guest kernel: [9997854.997023]  [<ffffffffa9a15df7>] ? ret_from_fork+0x57/0x70
Feb 12 09:08:15 guest kernel: [9997863.627530] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 12 09:08:15 guest kernel: [9997863.629601] ata4.00: configured for UDMA/100
Feb 12 09:08:15 guest kernel: [9997863.630874] ata4.00: device reported invalid CHS sector 0
Feb 12 09:08:15 guest kernel: [9997863.632086] ata4: EH complete
Feb 12 09:08:47 guest kernel: [9997895.342348] ata1.00: exception Emask 0x0 SAct 0x60 SErr 0x0 action 0x6 frozen
Feb 12 09:08:47 guest kernel: [9997895.343619] ata1.00: failed command: WRITE FPDMA QUEUED
Feb 12 09:08:47 guest kernel: [9997895.344816] ata1.00: cmd 61/08:28:78:6f:d8/00:00:01:00:00/40 tag 5 ncq dma 4096 out
Feb 12 09:08:47 guest kernel: [9997895.344816]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:08:47 guest kernel: [9997895.347160] ata1.00: status: { DRDY }
Feb 12 09:08:47 guest kernel: [9997895.348428] ata1.00: failed command: WRITE FPDMA QUEUED
Feb 12 09:08:47 guest kernel: [9997895.349565] ata1.00: cmd 61/08:30:90:7e:d4/00:00:01:00:00/40 tag 6 ncq dma 4096 out
Feb 12 09:08:47 guest kernel: [9997895.349565]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:08:47 guest kernel: [9997895.351912] ata1.00: status: { DRDY }
Feb 12 09:08:47 guest kernel: [9997895.353218] ata1: hard resetting link
Feb 12 09:09:08 guest kernel: [9997916.605495] INFO: rcu_sched detected stalls on CPUs/tasks:
Feb 12 09:09:08 guest kernel: [9997916.606790]     3-...: (1 ticks this GP) idle=275/140000000000000/0 softirq=91251630/91251630 fqs=299
Feb 12 09:09:08 guest kernel: [9997916.608033]     (detected by 6, t=5314 jiffies, g=91836501, c=91836500, q=32)
Feb 12 09:09:08 guest kernel: [9997916.609244] Task dump for CPU 3:
Feb 12 09:09:08 guest kernel: [9997916.610438] scsi_eh_0       R  running task        0   142      2 0x00000008
Feb 12 09:09:08 guest kernel: [9997916.611762]  ffff9505238401c0 ffff95053fcd8980 ffff95051f1a8e80 0000000000000000
Feb 12 09:09:08 guest kernel: [9997916.613005]  ffff95051e2672c0 ffffa9b5c37ff9a8 ffffffffa9a10ef1 ffffffffa94e9f74
Feb 12 09:09:08 guest kernel: [9997916.614264]  0000000000000040 ffff95053fcd8980 0000000000000283 ffff95051f1a8e80
Feb 12 09:09:08 guest kernel: [9997916.615533] Call Trace:
Feb 12 09:09:08 guest kernel: [9997916.616816]  [<ffffffffa9a10ef1>] ? __schedule+0x241/0x6f0
Feb 12 09:09:08 guest kernel: [9997916.618124]  [<ffffffffa94e9f74>] ? hrtimer_start_range_ns+0x194/0x360
Feb 12 09:09:08 guest kernel: [9997916.619402]  [<ffffffffa94e9997>] ? hrtimer_try_to_cancel+0x27/0x110
Feb 12 09:09:08 guest kernel: [9997916.620690]  [<ffffffffa9a14c15>] ? schedule_hrtimeout_range_clock+0xc5/0x1a0
Feb 12 09:09:08 guest kernel: [9997916.621969]  [<ffffffffa94e9760>] ? __hrtimer_init+0xa0/0xa0
Feb 12 09:09:08 guest kernel: [9997916.623251]  [<ffffffffc04cb060>] ? ahci_scr_read+0x60/0x60 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.624564]  [<ffffffffc053a431>] ? sata_link_resume+0x61/0x160 [libata]
Feb 12 09:09:08 guest kernel: [9997916.625865]  [<ffffffffc054026c>] ? sata_link_hardreset+0x12c/0x210 [libata]
Feb 12 09:09:08 guest kernel: [9997916.627168]  [<ffffffffc04cc707>] ? ahci_hardreset+0xe7/0x140 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.628501]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.629841]  [<ffffffffc0549f41>] ? ata_eh_reset+0x3a1/0xd30 [libata]
Feb 12 09:09:08 guest kernel: [9997916.631176]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.632530]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.633854]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.635174]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.636489]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.637783]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.639097]  [<ffffffffc054ac95>] ? ata_eh_recover+0x275/0x1360 [libata]
Feb 12 09:09:08 guest kernel: [9997916.640408]  [<ffffffffc053fed0>] ? ata_phys_link_offline+0x60/0x60 [libata]
Feb 12 09:09:08 guest kernel: [9997916.641698]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.642985]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.644279]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.645570]  [<ffffffffc0551ce7>] ? sata_pmp_error_handler+0xc7/0x9b0 [libata]
Feb 12 09:09:08 guest kernel: [9997916.646885]  [<ffffffffa9424694>] ? __switch_to+0x1f4/0x690
Feb 12 09:09:08 guest kernel: [9997916.648195]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:09:08 guest kernel: [9997916.649468]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:09:08 guest kernel: [9997916.650722]  [<ffffffffa94e64c4>] ? lock_timer_base+0x74/0x90
Feb 12 09:09:08 guest kernel: [9997916.651984]  [<ffffffffc04ccf98>] ? ahci_error_handler+0x18/0x60 [libahci]
Feb 12 09:09:08 guest kernel: [9997916.653249]  [<ffffffffc054c297>] ? ata_scsi_port_error_handler+0x437/0x780 [libata]
Feb 12 09:09:08 guest kernel: [9997916.654514]  [<ffffffffc03e6bc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:09:08 guest kernel: [9997916.655789]  [<ffffffffc03e4210>] ? scsi_try_target_reset+0x80/0x80 [scsi_mod]
Feb 12 09:09:08 guest kernel: [9997916.657041]  [<ffffffffc054c683>] ? ata_scsi_error+0xa3/0xe0 [libata]
Feb 12 09:09:08 guest kernel: [9997916.658285]  [<ffffffffc03e6c9e>] ? scsi_error_handler+0xde/0x890 [scsi_mod]
Feb 12 09:09:08 guest kernel: [9997916.659514]  [<ffffffffa9a10ef1>] ? __schedule+0x241/0x6f0
Feb 12 09:09:08 guest kernel: [9997916.660761]  [<ffffffffa94bbaaf>] ? __wake_up_common+0x4f/0x90
Feb 12 09:09:08 guest kernel: [9997916.661985]  [<ffffffffc03e6bc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:09:08 guest kernel: [9997916.663212]  [<ffffffffa94993e9>] ? kthread+0xd9/0xf0
Feb 12 09:09:08 guest kernel: [9997916.664434]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:09:08 guest kernel: [9997916.665649]  [<ffffffffa9499310>] ? kthread_park+0x60/0x60
Feb 12 09:09:08 guest kernel: [9997916.666851]  [<ffffffffa9499310>] ? kthread_park+0x60/0x60
Feb 12 09:09:08 guest kernel: [9997916.668027]  [<ffffffffa9a15df7>] ? ret_from_fork+0x57/0x70
Feb 12 09:09:15 guest kernel: [9997923.993524] INFO: task drbd_r_r1:2094140 blocked for more than 120 seconds.
Feb 12 09:09:15 guest kernel: [9997923.994917]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.110-3+deb9u6
Feb 12 09:09:15 guest kernel: [9997923.996161] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:09:15 guest kernel: [9997923.997441] drbd_r_r1       D    0 2094140      2 0x00000004
Feb 12 09:09:15 guest kernel: [9997923.998679]  ffff95051e2672c0 0000000000000000 ffff9502ca666000 ffff95053fc18980
Feb 12 09:09:15 guest kernel: [9997923.999996]  ffff950523fe0e00 ffffa9b5d13d7c60 ffffffffa9a10ee9 0000000000000008
Feb 12 09:09:15 guest kernel: [9997924.001231]  0000000000000000 ffff95053fc18980 0000000000000206 ffff9502ca666000
Feb 12 09:09:15 guest kernel: [9997924.002528] Call Trace:
Feb 12 09:09:15 guest kernel: [9997924.003782]  [<ffffffffa9a10ee9>] ? __schedule+0x239/0x6f0
Feb 12 09:09:15 guest kernel: [9997924.005154]  [<ffffffffc06e5c80>] ? drbd_bump_write_ordering+0x340/0x340 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.006485]  [<ffffffffa9a113d2>] ? schedule+0x32/0x80
Feb 12 09:09:15 guest kernel: [9997924.007774]  [<ffffffffa9a1475d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:09:15 guest kernel: [9997924.009020]  [<ffffffffa94efd1e>] ? ktime_get+0x3e/0xb0
Feb 12 09:09:15 guest kernel: [9997924.010310]  [<ffffffffc06e5c80>] ? drbd_bump_write_ordering+0x340/0x340 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.011649]  [<ffffffffa9a10c4d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:09:15 guest kernel: [9997924.012907]  [<ffffffffc06dde8c>] ? _drbd_wait_ee_list_empty+0x9c/0xe0 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.014218]  [<ffffffffa94bc070>] ? prepare_to_wait_event+0xf0/0xf0
Feb 12 09:09:15 guest kernel: [9997924.015510]  [<ffffffffc06f14b0>] ? drbd_destroy_connection+0x100/0x100 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.016842]  [<ffffffffc06ddf22>] ? conn_wait_active_ee_empty+0x52/0x110 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.018169]  [<ffffffffc06e5cf4>] ? receive_Barrier+0x74/0x4b0 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.019480]  [<ffffffffc06e5c80>] ? drbd_bump_write_ordering+0x340/0x340 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.020819]  [<ffffffffc06e8690>] ? drbd_receiver+0x1a0/0x310 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.022229]  [<ffffffffc06f17a3>] ? drbd_thread_setup+0x53/0x120 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.023561]  [<ffffffffc06f1750>] ? drbd_destroy_device+0x2a0/0x2a0 [drbd]
Feb 12 09:09:15 guest kernel: [9997924.024867]  [<ffffffffa94993e9>] ? kthread+0xd9/0xf0
Feb 12 09:09:15 guest kernel: [9997924.026243]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:09:15 guest kernel: [9997924.027647]  [<ffffffffa9499310>] ? kthread_park+0x60/0x60
Feb 12 09:09:15 guest kernel: [9997924.028956]  [<ffffffffa947e580>] ? SyS_exit_group+0x10/0x10
Feb 12 09:09:15 guest kernel: [9997924.030298]  [<ffffffffa9a15df7>] ? ret_from_fork+0x57/0x70
Feb 12 09:09:15 guest kernel: [9997924.031683] INFO: task kworker/u16:0:2917747 blocked for more than 120 seconds.
Feb 12 09:09:15 guest kernel: [9997924.033022]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.110-3+deb9u6
Feb 12 09:09:15 guest kernel: [9997924.034382] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:09:15 guest kernel: [9997924.035803] kworker/u16:0   D    0 2917747      2 0x00000000
Feb 12 09:09:15 guest kernel: [9997924.037214] Workqueue: writeback wb_workfn (flush-254:4)
Feb 12 09:09:15 guest kernel: [9997924.038626]  ffff95051e2672c0 0000000000000000 ffff9505238a7140 ffff95053fc58980
Feb 12 09:09:15 guest kernel: [9997924.040106]  ffff95051e0d2e40 ffffa9b5c51d7610 ffffffffa9a10ee9 0000000000000000
Feb 12 09:09:15 guest kernel: [9997924.041538]  00ffa9b5c51d7658 ffff95053fc58980 0000000000000046 ffff9505238a7140
Feb 12 09:09:15 guest kernel: [9997924.042943] Call Trace:
Feb 12 09:09:15 guest kernel: [9997924.044391]  [<ffffffffa9a10ee9>] ? __schedule+0x239/0x6f0
Feb 12 09:09:15 guest kernel: [9997924.045822]  [<ffffffffa9a11b90>] ? bit_wait+0x50/0x50
Feb 12 09:09:15 guest kernel: [9997924.047228]  [<ffffffffa9a113d2>] ? schedule+0x32/0x80
Feb 12 09:09:15 guest kernel: [9997924.048643]  [<ffffffffa9a1475d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:09:15 guest kernel: [9997924.050119]  [<ffffffffa9459f4a>] ? kvm_clock_get_cycles+0x1a/0x20
Feb 12 09:09:15 guest kernel: [9997924.051611]  [<ffffffffa9a11b90>] ? bit_wait+0x50/0x50
Feb 12 09:09:15 guest kernel: [9997924.052968]  [<ffffffffa9a10c4d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:09:15 guest kernel: [9997924.054378]  [<ffffffffa94bbd67>] ? prepare_to_wait_exclusive+0x57/0x80
Feb 12 09:09:15 guest kernel: [9997924.055832]  [<ffffffffa9a11ba7>] ? bit_wait_io+0x17/0x60
Feb 12 09:09:15 guest kernel: [9997924.057221]  [<ffffffffa9a117e6>] ? __wait_on_bit_lock+0x56/0xc0
Feb 12 09:09:15 guest kernel: [9997924.058632]  [<ffffffffa9a11b90>] ? bit_wait+0x50/0x50
Feb 12 09:09:15 guest kernel: [9997924.060049]  [<ffffffffa9a11a2e>] ? out_of_line_wait_on_bit_lock+0x7e/0xa0
Feb 12 09:09:15 guest kernel: [9997924.061503]  [<ffffffffa94bc110>] ? wake_atomic_t_function+0x60/0x60
Feb 12 09:09:15 guest kernel: [9997924.062898]  [<ffffffffc05ecc57>] ? do_get_write_access+0x267/0x450 [jbd2]
Feb 12 09:09:15 guest kernel: [9997924.064298]  [<ffffffffc05ece8c>] ? jbd2_journal_get_write_access+0x4c/0x60 [jbd2]
Feb 12 09:09:15 guest kernel: [9997924.065823]  [<ffffffffc064f476>] ? __ext4_journal_get_write_access+0x36/0x70 [ext4]
Feb 12 09:09:15 guest kernel: [9997924.067219]  [<ffffffffc0648a8e>] ? ext4_ext_insert_extent+0x3ee/0x1320 [ext4]
Feb 12 09:09:15 guest kernel: [9997924.068610]  [<ffffffffc0647a55>] ? ext4_find_extent+0x145/0x310 [ext4]
Feb 12 09:09:15 guest kernel: [9997924.069991]  [<ffffffffc064c2b9>] ? ext4_ext_map_blocks+0x1c9/0xd70 [ext4]
Feb 12 09:09:15 guest kernel: [9997924.071312]  [<ffffffffc061a914>] ? ext4_map_blocks+0x164/0x5d0 [ext4]
Feb 12 09:09:15 guest kernel: [9997924.072699]  [<ffffffffc061e64f>] ? ext4_writepages+0x70f/0xd60 [ext4]
Feb 12 09:09:15 guest kernel: [9997924.074063]  [<ffffffffa990f9b3>] ? dev_hard_start_xmit+0xa3/0x1e0
Feb 12 09:09:15 guest kernel: [9997924.075361]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:09:15 guest kernel: [9997924.076661]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:09:15 guest kernel: [9997924.078019]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:09:15 guest kernel: [9997924.079257]  [<ffffffffa963832d>] ? __writeback_single_inode+0x3d/0x320
Feb 12 09:09:15 guest kernel: [9997924.080511]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:09:15 guest kernel: [9997924.081781]  [<ffffffffa9638ad1>] ? writeback_sb_inodes+0x221/0x4f0
Feb 12 09:09:15 guest kernel: [9997924.083017]  [<ffffffffa9638e27>] ? __writeback_inodes_wb+0x87/0xb0
Feb 12 09:09:15 guest kernel: [9997924.084293]  [<ffffffffa963919e>] ? wb_writeback+0x27e/0x310
Feb 12 09:09:15 guest kernel: [9997924.085557]  [<ffffffffa9624cdc>] ? get_nr_inodes+0x3c/0x60
Feb 12 09:09:15 guest kernel: [9997924.086767]  [<ffffffffa9639b04>] ? wb_workfn+0x2b4/0x380
Feb 12 09:09:15 guest kernel: [9997924.087982]  [<ffffffffa94930aa>] ? process_one_work+0x18a/0x420
Feb 12 09:09:15 guest kernel: [9997924.089179]  [<ffffffffa949338d>] ? worker_thread+0x4d/0x490
Feb 12 09:09:15 guest kernel: [9997924.090420]  [<ffffffffa9493340>] ? process_one_work+0x420/0x420
Feb 12 09:09:15 guest kernel: [9997924.091662]  [<ffffffffa94993e9>] ? kthread+0xd9/0xf0
Feb 12 09:09:15 guest kernel: [9997924.092845]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:09:15 guest kernel: [9997924.094082]  [<ffffffffa9499310>] ? kthread_park+0x60/0x60
Feb 12 09:09:15 guest kernel: [9997924.095284]  [<ffffffffa947e4c3>] ? SyS_exit+0x13/0x20
Feb 12 09:09:15 guest kernel: [9997924.096475]  [<ffffffffa9a15df7>] ? ret_from_fork+0x57/0x70
Feb 12 09:09:24 guest kernel: [9997931.978574] ata8: lost interrupt (Status 0x58)
Feb 12 09:09:24 guest kernel: [9997932.281938] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 12 09:09:24 guest kernel: [9997932.284971] ata1.00: configured for UDMA/100
Feb 12 09:09:24 guest kernel: [9997932.287106] ata1.00: device reported invalid CHS sector 0
Feb 12 09:09:24 guest kernel: [9997932.289038] ata1.00: device reported invalid CHS sector 0
Feb 12 09:09:24 guest kernel: [9997932.290824] ata1: EH complete
Feb 12 09:09:24 guest kernel: [9997932.916878] ata8: drained 65536 bytes to clear DRQ
Feb 12 09:09:24 guest kernel: [9997932.917006] ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:09:24 guest kernel: [9997932.918284] ata8.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
Feb 12 09:09:24 guest kernel: [9997932.918284]          Get event status notification 4a 01 00 00 10 00 00 00 08 00res 40/00:02:00:08:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
Feb 12 09:09:24 guest kernel: [9997932.921643] ata8.00: status: { DRDY }
Feb 12 09:09:24 guest kernel: [9997932.923136] ata8: soft resetting link
Feb 12 09:09:24 guest kernel: [9997933.082732] ata8.00: configured for MWDMA2
Feb 12 09:09:24 guest kernel: [9997933.083814] ata8: EH complete
Feb 12 09:10:25 guest kernel: [9997993.636029] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:10:25 guest kernel: [9997993.636624] ata4.00: failed command: FLUSH CACHE EXT
Feb 12 09:10:25 guest kernel: [9997993.637116] ata4.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 13
Feb 12 09:10:25 guest kernel: [9997993.637116]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:10:25 guest kernel: [9997993.638117] ata4.00: status: { DRDY }
Feb 12 09:10:25 guest kernel: [9997993.640354] ata4: hard resetting link
Feb 12 09:10:25 guest kernel: [9997993.640910] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:10:25 guest kernel: [9997993.641391] ata2.00: failed command: FLUSH CACHE EXT
Feb 12 09:10:25 guest kernel: [9997993.641864] ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 29
Feb 12 09:10:25 guest kernel: [9997993.641864]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:10:25 guest kernel: [9997993.642808] ata2.00: status: { DRDY }
Feb 12 09:10:25 guest kernel: [9997993.643286] ata2: hard resetting link
Feb 12 09:10:25 guest kernel: [9997993.643812] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:10:25 guest kernel: [9997993.644305] ata1.00: failed command: FLUSH CACHE
Feb 12 09:10:25 guest kernel: [9997993.644783] ata1.00: cmd e7/00:00:00:00:00/00:00:00:00:00/a0 tag 17
Feb 12 09:10:25 guest kernel: [9997993.644783]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:10:25 guest kernel: [9997993.645736] ata1.00: status: { DRDY }
Feb 12 09:10:25 guest kernel: [9997993.646241] ata1: hard resetting link
Feb 12 09:10:46 guest kernel: [9998014.743562] INFO: rcu_sched detected stalls on CPUs/tasks:
Feb 12 09:10:46 guest kernel: [9998014.744163]     3-...: (1 GPs behind) idle=a89/140000000000000/0 softirq=91251703/91251704 fqs=2361
Feb 12 09:10:46 guest kernel: [9998014.744817]     (detected by 2, t=5275 jiffies, g=91836708, c=91836707, q=65)
Feb 12 09:10:46 guest kernel: [9998014.745570] Task dump for CPU 3:
Feb 12 09:10:46 guest kernel: [9998014.746099] scsi_eh_3       R  running task        0   148      2 0x00000008
Feb 12 09:10:46 guest kernel: [9998014.746751]  ffff95051f20f668 0000000000000000 0000000000000003 0000000000000000
Feb 12 09:10:46 guest kernel: [9998014.747452]  0000000000000000 0023851f4b7ecdef 0000000000000000 ffff95053fc99440
Feb 12 09:10:46 guest kernel: [9998014.748138]  ffff95051e08c3c0 ffff95051f1a8e80 ffff95051f20f668 ffffffffffffff10
Feb 12 09:10:46 guest kernel: [9998014.748785] Call Trace:
Feb 12 09:10:46 guest kernel: [9998014.749540]  [<ffffffffa94c3773>] ? mutex_spin_on_owner.isra.1+0x23/0x40
Feb 12 09:10:46 guest kernel: [9998014.750257]  [<ffffffffa94c38cf>] ? mutex_optimistic_spin+0x13f/0x1b0
Feb 12 09:10:46 guest kernel: [9998014.750846]  [<ffffffffc04cb280>] ? ahci_fill_cmd_slot+0x40/0x40 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.751500]  [<ffffffffa9a1305f>] ? __mutex_lock_slowpath+0x3f/0x130
Feb 12 09:10:46 guest kernel: [9998014.752129]  [<ffffffffc04cb060>] ? ahci_scr_read+0x60/0x60 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.752822]  [<ffffffffc053a431>] ? sata_link_resume+0x61/0x160 [libata]
Feb 12 09:10:46 guest kernel: [9998014.753554]  [<ffffffffc054026c>] ? sata_link_hardreset+0x12c/0x210 [libata]
Feb 12 09:10:46 guest kernel: [9998014.754164]  [<ffffffffc04cc707>] ? ahci_hardreset+0xe7/0x140 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.754826]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.755524]  [<ffffffffc0549f41>] ? ata_eh_reset+0x3a1/0xd30 [libata]
Feb 12 09:10:46 guest kernel: [9998014.756235]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.756911]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.757610]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.758428]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.759196]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.759976]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.760643]  [<ffffffffc054ac95>] ? ata_eh_recover+0x275/0x1360 [libata]
Feb 12 09:10:46 guest kernel: [9998014.761352]  [<ffffffffc053fed0>] ? ata_phys_link_offline+0x60/0x60 [libata]
Feb 12 09:10:46 guest kernel: [9998014.762055]  [<ffffffffc04ce350>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.762624]  [<ffffffffc04cc620>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.763297]  [<ffffffffc04cc490>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.763983]  [<ffffffffc0551ce7>] ? sata_pmp_error_handler+0xc7/0x9b0 [libata]
Feb 12 09:10:46 guest kernel: [9998014.764657]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:10:46 guest kernel: [9998014.765344]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:10:46 guest kernel: [9998014.765980]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:10:46 guest kernel: [9998014.766634]  [<ffffffffa9a15d70>] ? __switch_to_asm+0x40/0x70
Feb 12 09:10:46 guest kernel: [9998014.767208]  [<ffffffffa94e64c4>] ? lock_timer_base+0x74/0x90
Feb 12 09:10:46 guest kernel: [9998014.767858]  [<ffffffffc04ccf98>] ? ahci_error_handler+0x18/0x60 [libahci]
Feb 12 09:10:46 guest kernel: [9998014.768571]  [<ffffffffc054c297>] ? ata_scsi_port_error_handler+0x437/0x780 [libata]
Feb 12 09:10:46 guest kernel: [9998014.769272]  [<ffffffffc03e6bc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:10:46 guest kernel: [9998014.770080]  [<ffffffffc03e4210>] ? scsi_try_target_reset+0x80/0x80 [scsi_mod]
Feb 12 09:10:46 guest kernel: [9998014.770743]  [<ffffffffc054c683>] ? ata_scsi_error+0xa3/0xe0 [libata]
Feb 12 09:10:46 guest kernel: [9998014.771402]  [<ffffffffc03e6c9e>] ? scsi_error_handler+0xde/0x890 [scsi_mod]
Feb 12 09:10:46 guest kernel: [9998014.772058]  [<ffffffffa9a10ef1>] ? __schedule+0x241/0x6f0
Feb 12 09:10:46 guest kernel: [9998014.772566]  [<ffffffffa94bbaaf>] ? __wake_up_common+0x4f/0x90
Feb 12 09:10:46 guest kernel: [9998014.773061]  [<ffffffffc03e6bc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:10:46 guest kernel: [9998014.773560]  [<ffffffffa94993e9>] ? kthread+0xd9/0xf0
Feb 12 09:10:46 guest kernel: [9998014.774059]  [<ffffffffa9a15d64>] ? __switch_to_asm+0x34/0x70
Feb 12 09:10:46 guest kernel: [9998014.774550]  [<ffffffffa9499310>] ? kthread_park+0x60/0x60
Feb 12 09:10:46 guest kernel: [9998014.775038]  [<ffffffffa9a15df7>] ? ret_from_fork+0x57/0x70
 

guletz

Active Member
Apr 19, 2017
998
140
43
Brasov, Romania
Hi Marcin,

In this guest do you use DRBD?

Code:
Feb 12 09:09:15 guest kernel: [9997923.993524] INFO: task drbd_r_r1:2094140 blocked for more than 120 seconds.
Feb 12 09:09:15 guest kernel: [9997923.994917]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.110-3+deb9u6
Feb 12 09:09:15 guest kernel: [9997923.996161] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:09:15 guest kernel: [9997923.997441] drbd_r_r1       D    0 2094140      2 0x00000004
 

Marcin Kubica

New Member
Feb 13, 2019
5
0
1
42
Hi Marcin,
In this guest do you use DRBD?
Yes, on this particular one. But several other non-DRBD guests were also affected.
Code:
Feb 12 09:20:29 nondrbd kernel: [482459.869361] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 12 09:20:29 nondrbd kernel: [482459.870326] ata3.00: configured for UDMA/100
Feb 12 09:20:29 nondrbd kernel: [482459.870655] ata3.00: retrying FLUSH 0xe7 Emask 0x4
Feb 12 09:20:29 nondrbd kernel: [482459.871165] ata3.00: device reported invalid CHS sector 0
Feb 12 09:20:29 nondrbd kernel: [482459.871444] ata3: EH complete
Feb 12 09:20:29 nondrbd kernel: [482520.903282] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:20:29 nondrbd kernel: [482520.903343] ata3.00: failed command: FLUSH CACHE
Feb 12 09:20:29 nondrbd kernel: [482520.913172] ata3.00: cmd e7/00:00:00:00:00/00:00:00:00:00/a0 tag 8
Feb 12 09:20:29 nondrbd kernel: [482520.913172]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:20:29 nondrbd kernel: [482520.918157] ata3.00: status: { DRDY }
Feb 12 09:20:29 nondrbd kernel: [482520.922220] ata3: hard resetting link
Feb 12 09:20:29 nondrbd kernel: [482528.491670] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 12 09:20:29 nondrbd kernel: [482528.496169] ata3.00: configured for UDMA/100
Feb 12 09:20:29 nondrbd kernel: [482528.499135] ata3.00: retrying FLUSH 0xe7 Emask 0x4
Feb 12 09:20:29 nondrbd kernel: [482528.502835] ata3.00: device reported invalid CHS sector 0
Feb 12 09:20:29 nondrbd kernel: [482528.505826] ata3: EH complete
Feb 12 09:20:29 nondrbd kernel: [482559.814508] ata3.00: exception Emask 0x0 SAct 0x1800 SErr 0x0 action 0x6 frozen
Feb 12 09:20:29 nondrbd kernel: [482559.816390] ata3.00: failed command: WRITE FPDMA QUEUED
Feb 12 09:20:29 nondrbd kernel: [482559.818094] ata3.00: cmd 61/08:58:a0:97:b1/00:00:00:00:00/40 tag 11 ncq dma 4096 out
Feb 12 09:20:29 nondrbd kernel: [482559.818094]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:20:29 nondrbd kernel: [482559.821628] ata3.00: status: { DRDY }
Feb 12 09:20:29 nondrbd kernel: [482559.823373] ata3.00: failed command: WRITE FPDMA QUEUED
Feb 12 09:20:29 nondrbd kernel: [482559.825067] ata3.00: cmd 61/08:60:a8:97:b1/00:00:00:00:00/40 tag 12 ncq dma 4096 out
Feb 12 09:20:29 nondrbd kernel: [482559.825067]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:20:29 nondrbd kernel: [482559.828478] ata3.00: status: { DRDY }
Feb 12 09:20:29 nondrbd kernel: [482559.830196] ata3: hard resetting link
Feb 12 09:20:29 nondrbd kernel: [482621.763088] INFO: rcu_sched self-detected stall on CPU
Feb 12 09:20:29 nondrbd kernel: [482621.764420]     0-...: (1 ticks this GP) idle=f87/140000000000001/0 softirq=17039734/17039734 fqs=0
Feb 12 09:20:29 nondrbd kernel: [482621.765589]      (t=15482 jiffies g=11612976 c=11612975 q=32)
Feb 12 09:20:29 nondrbd kernel: [482621.766719] rcu_sched kthread starved for 15482 jiffies! g11612976 c11612975 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
Feb 12 09:20:29 nondrbd kernel: [482621.767989] rcu_sched       S    0     7      2 0x00000000
Feb 12 09:20:29 nondrbd kernel: [482621.769249]  ffff9defb6eb1ac0 ffff9defb6eb1ac0 ffff9deffc9d8100 ffff9defffc18980
Feb 12 09:20:29 nondrbd kernel: [482621.770531]  ffff9defb6820480 ffffb0c84034fdb0 ffffffffbce14219 ffffffffbce190b0
Feb 12 09:20:29 nondrbd kernel: [482621.771759]  00ffffffbce190a4 ffff9defffc18980 0000000000000000 ffff9deffc9d8100
Feb 12 09:20:29 nondrbd kernel: [482621.772992] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482621.774279]  [<ffffffffbce14219>] ? __schedule+0x239/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482621.775500]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.776682]  [<ffffffffbce14702>] ? schedule+0x32/0x80
Feb 12 09:20:29 nondrbd kernel: [482621.777854]  [<ffffffffbce17a17>] ? schedule_timeout+0x167/0x380
Feb 12 09:20:29 nondrbd kernel: [482621.779040]  [<ffffffffbc8e8780>] ? del_timer_sync+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482621.780161]  [<ffffffffbc8e1de5>] ? rcu_gp_kthread+0x505/0x850
Feb 12 09:20:29 nondrbd kernel: [482621.781262]  [<ffffffffbc8bcc4f>] ? __wake_up_common+0x4f/0x90
Feb 12 09:20:29 nondrbd kernel: [482621.782341]  [<ffffffffbc8e18e0>] ? get_state_synchronize_rcu+0x10/0x10
Feb 12 09:20:29 nondrbd kernel: [482621.783404]  [<ffffffffbc89a569>] ? kthread+0xd9/0xf0
Feb 12 09:20:29 nondrbd kernel: [482621.784455]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.785513]  [<ffffffffbc89a490>] ? kthread_park+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482621.786566]  [<ffffffffbce19137>] ? ret_from_fork+0x57/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.787620] Task dump for CPU 0:
Feb 12 09:20:29 nondrbd kernel: [482621.788653] scsi_eh_2       R  running task        0    98      2 0x00000008
Feb 12 09:20:29 nondrbd kernel: [482621.789708]  ffffffffbd518ec0 ffffffffbc8a7dcb 0000000000000000 ffffffffbd518ec0
Feb 12 09:20:29 nondrbd kernel: [482621.790776]  ffffffffbc98112b ffff9defffc196c0 ffffffffbd44fd00 0000000000000000
Feb 12 09:20:29 nondrbd kernel: [482621.791771]  ffffffffbd518ec0 00000000ffffffff ffffffffbc8e36fa 00000000003d0900
Feb 12 09:20:29 nondrbd kernel: [482621.792737] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482621.793739]  <IRQ>
Feb 12 09:20:29 nondrbd kernel: [482621.793763]  [<ffffffffbc8a7dcb>] ? sched_show_task+0xcb/0x130
Feb 12 09:20:29 nondrbd kernel: [482621.794832]  [<ffffffffbc98112b>] ? rcu_dump_cpu_stacks+0x92/0xb2
Feb 12 09:20:29 nondrbd kernel: [482621.795890]  [<ffffffffbc8e36fa>] ? rcu_check_callbacks+0x75a/0x8b0
Feb 12 09:20:29 nondrbd kernel: [482621.796949]  [<ffffffffbc8f1dc8>] ? update_wall_time+0x498/0x7b0
Feb 12 09:20:29 nondrbd kernel: [482621.797912]  [<ffffffffbc8f9c30>] ? tick_sched_do_timer+0x30/0x30
Feb 12 09:20:29 nondrbd kernel: [482621.798813]  [<ffffffffbc8ea2d8>] ? update_process_times+0x28/0x50
Feb 12 09:20:29 nondrbd kernel: [482621.799764]  [<ffffffffbc8f9630>] ? tick_sched_handle.isra.12+0x20/0x50
Feb 12 09:20:29 nondrbd kernel: [482621.800756]  [<ffffffffbc8f9c68>] ? tick_sched_timer+0x38/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.801723]  [<ffffffffbc8eadae>] ? __hrtimer_run_queues+0xde/0x250
Feb 12 09:20:29 nondrbd kernel: [482621.802688]  [<ffffffffbc8eb48c>] ? hrtimer_interrupt+0x9c/0x1a0
Feb 12 09:20:29 nondrbd kernel: [482621.803636]  [<ffffffffc02a9280>] ? ahci_fill_cmd_slot+0x40/0x40 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.804540]  [<ffffffffbce1c507>] ? smp_apic_timer_interrupt+0x47/0x60
Feb 12 09:20:29 nondrbd kernel: [482621.805443]  [<ffffffffbce1ada6>] ? apic_timer_interrupt+0x96/0xa0
Feb 12 09:20:29 nondrbd kernel: [482621.806337]  <EOI>
Feb 12 09:20:29 nondrbd kernel: [482621.806353]  [<ffffffffc02a9280>] ? ahci_fill_cmd_slot+0x40/0x40 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.807244]  [<ffffffffc02a90a8>] ? ahci_scr_write+0x48/0x60 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.808208]  [<ffffffffc0211431>] ? sata_link_resume+0x61/0x160 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.809147]  [<ffffffffc021728c>] ? sata_link_hardreset+0x12c/0x210 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.810143]  [<ffffffffc02aa727>] ? ahci_hardreset+0xe7/0x140 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.811153]  [<ffffffffc02aa640>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.812056]  [<ffffffffc0220f41>] ? ata_eh_reset+0x3a1/0xd30 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.813045]  [<ffffffffc02aa640>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.814048]  [<ffffffffc02aa4b0>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.815006]  [<ffffffffc02ac370>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.815999]  [<ffffffffc02ac370>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.816852]  [<ffffffffc02aa640>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.817767]  [<ffffffffc02aa4b0>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.818688]  [<ffffffffc0221c95>] ? ata_eh_recover+0x275/0x1360 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.819581]  [<ffffffffc0216ef0>] ? ata_phys_link_offline+0x60/0x60 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.820451]  [<ffffffffc02ac370>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.821326]  [<ffffffffc02aa640>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.822180]  [<ffffffffc02aa4b0>] ? ahci_pmp_attach+0x60/0x60 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.823052]  [<ffffffffc02aa640>] ? ahci_dev_classify+0x80/0x80 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.823925]  [<ffffffffc02ac370>] ? ahci_do_softreset+0x240/0x240 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.824852]  [<ffffffffc0216ef0>] ? ata_phys_link_offline+0x60/0x60 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.825784]  [<ffffffffc0228ce7>] ? sata_pmp_error_handler+0xc7/0x9b0 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.826679]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.827546]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.828405]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.829265]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.830102]  [<ffffffffbc8e7664>] ? lock_timer_base+0x74/0x90
Feb 12 09:20:29 nondrbd kernel: [482621.830916]  [<ffffffffc02aafb8>] ? ahci_error_handler+0x18/0x60 [libahci]
Feb 12 09:20:29 nondrbd kernel: [482621.831765]  [<ffffffffc0223297>] ? ata_scsi_port_error_handler+0x437/0x780 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.832682]  [<ffffffffc016dbc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:20:29 nondrbd kernel: [482621.833534]  [<ffffffffc016b210>] ? scsi_try_target_reset+0x80/0x80 [scsi_mod]
Feb 12 09:20:29 nondrbd kernel: [482621.834394]  [<ffffffffc0223683>] ? ata_scsi_error+0xa3/0xe0 [libata]
Feb 12 09:20:29 nondrbd kernel: [482621.835207]  [<ffffffffc016dc9e>] ? scsi_error_handler+0xde/0x890 [scsi_mod]
Feb 12 09:20:29 nondrbd kernel: [482621.836084]  [<ffffffffbce14221>] ? __schedule+0x241/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482621.836942]  [<ffffffffbc8bcc4f>] ? __wake_up_common+0x4f/0x90
Feb 12 09:20:29 nondrbd kernel: [482621.837755]  [<ffffffffc016dbc0>] ? scsi_eh_get_sense+0x210/0x210 [scsi_mod]
Feb 12 09:20:29 nondrbd kernel: [482621.838585]  [<ffffffffbc89a569>] ? kthread+0xd9/0xf0
Feb 12 09:20:29 nondrbd kernel: [482621.839432]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.840255]  [<ffffffffbc89a490>] ? kthread_park+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482621.841089]  [<ffffffffbce19137>] ? ret_from_fork+0x57/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.844042] INFO: task exim4:387584 blocked for more than 120 seconds.
Feb 12 09:20:29 nondrbd kernel: [482621.844935]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.130-2
Feb 12 09:20:29 nondrbd kernel: [482621.845780] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:20:29 nondrbd kernel: [482621.846632] exim4           D    0 387584    665 0x00000000
Feb 12 09:20:29 nondrbd kernel: [482621.847488]  ffff9deffaaf3980 0000000000000000 ffff9deffc3a6d80 ffff9defffc18980
Feb 12 09:20:29 nondrbd kernel: [482621.848362]  ffffffffbd411500 ffffb0c840f6fc50 ffffffffbce14219 0000000000000000
Feb 12 09:20:29 nondrbd kernel: [482621.849250]  00ffb0c840f6fc68 ffff9defffc18980 0000000000000001 ffff9deffc3a6d80
Feb 12 09:20:29 nondrbd kernel: [482621.850178] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482621.851032]  [<ffffffffbce14219>] ? __schedule+0x239/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482621.851871]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482621.852737]  [<ffffffffbce14702>] ? schedule+0x32/0x80
Feb 12 09:20:29 nondrbd kernel: [482621.853596]  [<ffffffffbce17a8d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:20:29 nondrbd kernel: [482621.854449]  [<ffffffffbc8f0ece>] ? ktime_get+0x3e/0xb0
Feb 12 09:20:29 nondrbd kernel: [482621.855306]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482621.856135]  [<ffffffffbce13f7d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:20:29 nondrbd kernel: [482621.856966]  [<ffffffffbc8bce47>] ? prepare_to_wait+0x57/0x80
Feb 12 09:20:29 nondrbd kernel: [482621.857782]  [<ffffffffbce14ed7>] ? bit_wait_io+0x17/0x60
Feb 12 09:20:29 nondrbd kernel: [482621.858662]  [<ffffffffbce14a95>] ? __wait_on_bit+0x55/0x80
Feb 12 09:20:29 nondrbd kernel: [482621.859543]  [<ffffffffbc982598>] ? find_get_pages_tag+0x158/0x2e0
Feb 12 09:20:29 nondrbd kernel: [482621.860397]  [<ffffffffbc9817ff>] ? wait_on_page_bit+0x7f/0xa0
Feb 12 09:20:29 nondrbd kernel: [482621.861285]  [<ffffffffbc8bd2b0>] ? wake_atomic_t_function+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482621.862135]  [<ffffffffbc981900>] ? __filemap_fdatawait_range+0xe0/0x140
Feb 12 09:20:29 nondrbd kernel: [482621.862953]  [<ffffffffbc984274>] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 12 09:20:29 nondrbd kernel: [482621.863787]  [<ffffffffbc98196f>] ? filemap_fdatawait_range+0xf/0x30
Feb 12 09:20:29 nondrbd kernel: [482621.864613]  [<ffffffffbc9843b9>] ? filemap_write_and_wait_range+0x49/0x70
Feb 12 09:20:29 nondrbd kernel: [482621.865613]  [<ffffffffc0395c94>] ? ext4_sync_file+0xe4/0x380 [ext4]
Feb 12 09:20:29 nondrbd kernel: [482621.866487]  [<ffffffffbca3f608>] ? do_fsync+0x38/0x60
Feb 12 09:20:29 nondrbd kernel: [482621.867313]  [<ffffffffbca3f83c>] ? SyS_fsync+0xc/0x10
Feb 12 09:20:29 nondrbd kernel: [482621.868149]  [<ffffffffbc803b7d>] ? do_syscall_64+0x8d/0xf0
Feb 12 09:20:29 nondrbd kernel: [482621.868951]  [<ffffffffbce18f8e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
Feb 12 09:20:29 nondrbd kernel: [482622.156234] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 12 09:20:29 nondrbd kernel: [482622.157116] ata3.00: configured for UDMA/100
Feb 12 09:20:29 nondrbd kernel: [482622.157607] ata3.00: device reported invalid CHS sector 0
Feb 12 09:20:29 nondrbd kernel: [482622.158103] ata3.00: device reported invalid CHS sector 0
Feb 12 09:20:29 nondrbd kernel: [482622.158585] ata3: EH complete
Feb 12 09:20:29 nondrbd kernel: [482682.691807] ata3.00: NCQ disabled due to excessive errors
Feb 12 09:20:29 nondrbd kernel: [482682.693160] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:20:29 nondrbd kernel: [482682.694332] ata3.00: failed command: FLUSH CACHE
Feb 12 09:20:29 nondrbd kernel: [482682.695493] ata3.00: cmd e7/00:00:00:00:00/00:00:00:00:00/a0 tag 26
Feb 12 09:20:29 nondrbd kernel: [482682.695493]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:20:29 nondrbd kernel: [482682.697870] ata3.00: status: { DRDY }
Feb 12 09:20:29 nondrbd kernel: [482682.699099] ata3: hard resetting link
Feb 12 09:20:29 nondrbd kernel: [482710.130132] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 12 09:20:29 nondrbd kernel: [482710.131019] ata3.00: configured for UDMA/100
Feb 12 09:20:29 nondrbd kernel: [482710.131538] ata3.00: retrying FLUSH 0xe7 Emask 0x4
Feb 12 09:20:29 nondrbd kernel: [482710.132190] ata3.00: device reported invalid CHS sector 0
Feb 12 09:20:29 nondrbd kernel: [482710.132682] ata3: EH complete
Feb 12 09:20:29 nondrbd kernel: [482742.070745] INFO: task jbd2/dm-1-8:403 blocked for more than 120 seconds.
Feb 12 09:20:29 nondrbd kernel: [482742.075200]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.130-2
Feb 12 09:20:29 nondrbd kernel: [482742.076411] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:20:29 nondrbd kernel: [482742.077634] jbd2/dm-1-8     D    0   403      2 0x00000000
Feb 12 09:20:29 nondrbd kernel: [482742.078859]  ffff9deffb806880 ffff9deffb806880 ffff9deff8dc1040 ffff9defffc18980
Feb 12 09:20:29 nondrbd kernel: [482742.080078]  ffff9deffba3a3c0 ffffb0c84089bb30 ffffffffbce14219 ffff9defb6bfda10
Feb 12 09:20:29 nondrbd kernel: [482742.081301]  00ffffffbcb09cbe ffff9defffc18980 ffff9defb6801730 ffff9deff8dc1040
Feb 12 09:20:29 nondrbd kernel: [482742.082507] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482742.083685]  [<ffffffffbce14219>] ? __schedule+0x239/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482742.084879]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482742.086038]  [<ffffffffbce14702>] ? schedule+0x32/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.087227]  [<ffffffffbce17a8d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:20:29 nondrbd kernel: [482742.088459]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.089647]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.090840]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.091991]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.093143]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.094246]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.095371]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.096478]  [<ffffffffbc8f0ece>] ? ktime_get+0x3e/0xb0
Feb 12 09:20:29 nondrbd kernel: [482742.097588]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482742.098685]  [<ffffffffbce13f7d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:20:29 nondrbd kernel: [482742.099773]  [<ffffffffbc8bce47>] ? prepare_to_wait+0x57/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.100881]  [<ffffffffbce14ed7>] ? bit_wait_io+0x17/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.101948]  [<ffffffffbce14a95>] ? __wait_on_bit+0x55/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.103047]  [<ffffffffbc8a1924>] ? finish_task_switch+0x74/0x200
Feb 12 09:20:29 nondrbd kernel: [482742.104145]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482742.105235]  [<ffffffffbce14bfe>] ? out_of_line_wait_on_bit+0x7e/0xa0
Feb 12 09:20:29 nondrbd kernel: [482742.106342]  [<ffffffffbc8bd2b0>] ? wake_atomic_t_function+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.107534]  [<ffffffffc036ffe5>] ? jbd2_journal_commit_transaction+0xf55/0x17b0 [jbd2]
Feb 12 09:20:29 nondrbd kernel: [482742.108681]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.109754]  [<ffffffffbce190b0>] ? __switch_to_asm+0x40/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.110843]  [<ffffffffc0374c62>] ? kjournald2+0xc2/0x260 [jbd2]
Feb 12 09:20:29 nondrbd kernel: [482742.111924]  [<ffffffffbc8bd210>] ? prepare_to_wait_event+0xf0/0xf0
Feb 12 09:20:29 nondrbd kernel: [482742.113047]  [<ffffffffc0374ba0>] ? commit_timeout+0x10/0x10 [jbd2]
Feb 12 09:20:29 nondrbd kernel: [482742.114130]  [<ffffffffbc89a569>] ? kthread+0xd9/0xf0
Feb 12 09:20:29 nondrbd kernel: [482742.115348]  [<ffffffffbce190a4>] ? __switch_to_asm+0x34/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.116434]  [<ffffffffbc89a490>] ? kthread_park+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.117521]  [<ffffffffbce19137>] ? ret_from_fork+0x57/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.118760] INFO: task samba:810 blocked for more than 120 seconds.
Feb 12 09:20:29 nondrbd kernel: [482742.119835]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.130-2
Feb 12 09:20:29 nondrbd kernel: [482742.120916] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:20:29 nondrbd kernel: [482742.121998] samba           D    0   810    657 0x00000000
Feb 12 09:20:29 nondrbd kernel: [482742.123125]  ffff9deffbbd08c0 ffff9deffb083980 ffff9deffba3b140 ffff9defffc18980
Feb 12 09:20:29 nondrbd kernel: [482742.124262]  ffff9defb65f6380 ffffb0c8407ebc50 ffffffffbce14219 0000000000000000
Feb 12 09:20:29 nondrbd kernel: [482742.125412]  00ffb0c8407ebc68 ffff9defffc18980 0000000000000000 ffff9deffba3b140
Feb 12 09:20:29 nondrbd kernel: [482742.126585] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482742.127713]  [<ffffffffbce14219>] ? __schedule+0x239/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482742.128865]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482742.129960]  [<ffffffffbce14702>] ? schedule+0x32/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.131060]  [<ffffffffbce17a8d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:20:29 nondrbd kernel: [482742.132137]  [<ffffffffbc8f0ece>] ? ktime_get+0x3e/0xb0
Feb 12 09:20:29 nondrbd kernel: [482742.133201]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482742.134222]  [<ffffffffbce13f7d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:20:29 nondrbd kernel: [482742.135275]  [<ffffffffbc8bce47>] ? prepare_to_wait+0x57/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.136335]  [<ffffffffbce14ed7>] ? bit_wait_io+0x17/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.137373]  [<ffffffffbce14a95>] ? __wait_on_bit+0x55/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.138377]  [<ffffffffbc982598>] ? find_get_pages_tag+0x158/0x2e0
Feb 12 09:20:29 nondrbd kernel: [482742.139418]  [<ffffffffbc9817ff>] ? wait_on_page_bit+0x7f/0xa0
Feb 12 09:20:29 nondrbd kernel: [482742.140451]  [<ffffffffbc8bd2b0>] ? wake_atomic_t_function+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.141503]  [<ffffffffbc981900>] ? __filemap_fdatawait_range+0xe0/0x140
Feb 12 09:20:29 nondrbd kernel: [482742.142569]  [<ffffffffbc984274>] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 12 09:20:29 nondrbd kernel: [482742.143654]  [<ffffffffbc98196f>] ? filemap_fdatawait_range+0xf/0x30
Feb 12 09:20:29 nondrbd kernel: [482742.144751]  [<ffffffffbc9843b9>] ? filemap_write_and_wait_range+0x49/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.145881]  [<ffffffffc0395c94>] ? ext4_sync_file+0xe4/0x380 [ext4]
Feb 12 09:20:29 nondrbd kernel: [482742.147015]  [<ffffffffbca3f608>] ? do_fsync+0x38/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.148141]  [<ffffffffbca3f84f>] ? SyS_fdatasync+0xf/0x20
Feb 12 09:20:29 nondrbd kernel: [482742.149284]  [<ffffffffbc803b7d>] ? do_syscall_64+0x8d/0xf0
Feb 12 09:20:29 nondrbd kernel: [482742.150390]  [<ffffffffbce18f8e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
Feb 12 09:20:29 nondrbd kernel: [482742.151616] INFO: task smbd:387536 blocked for more than 120 seconds.
Feb 12 09:20:29 nondrbd kernel: [482742.152785]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.130-2
Feb 12 09:20:29 nondrbd kernel: [482742.153920] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:20:29 nondrbd kernel: [482742.155116] smbd            D    0 387536    799 0x00000000
Feb 12 09:20:29 nondrbd kernel: [482742.156318]  ffff9deffb083980 ffff9defca6dce40 ffff9defb65f6380 ffff9defffc18980
Feb 12 09:20:29 nondrbd kernel: [482742.157560]  ffff9defca63e080 ffffb0c840bdbc50 ffffffffbce14219 0000000000000000
Feb 12 09:20:29 nondrbd kernel: [482742.158805]  00ffb0c840bdbc68 ffff9defffc18980 0000000000000000 ffff9defb65f6380
Feb 12 09:20:29 nondrbd kernel: [482742.160053] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482742.161287]  [<ffffffffbce14219>] ? __schedule+0x239/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482742.162538]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482742.163776]  [<ffffffffbce14702>] ? schedule+0x32/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.165018]  [<ffffffffbce17a8d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:20:29 nondrbd kernel: [482742.166218]  [<ffffffffbc8f0ece>] ? ktime_get+0x3e/0xb0
Feb 12 09:20:29 nondrbd kernel: [482742.167456]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482742.168679]  [<ffffffffbce13f7d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:20:29 nondrbd kernel: [482742.169877]  [<ffffffffbc8bce47>] ? prepare_to_wait+0x57/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.171101]  [<ffffffffbce14ed7>] ? bit_wait_io+0x17/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.172314]  [<ffffffffbce14a95>] ? __wait_on_bit+0x55/0x80
Feb 12 09:20:29 nondrbd kernel: [482742.173527]  [<ffffffffbc982598>] ? find_get_pages_tag+0x158/0x2e0
Feb 12 09:20:29 nondrbd kernel: [482742.174734]  [<ffffffffbc9817ff>] ? wait_on_page_bit+0x7f/0xa0
Feb 12 09:20:29 nondrbd kernel: [482742.175922]  [<ffffffffbc8bd2b0>] ? wake_atomic_t_function+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.177099]  [<ffffffffbc981900>] ? __filemap_fdatawait_range+0xe0/0x140
Feb 12 09:20:29 nondrbd kernel: [482742.178228]  [<ffffffffbc984274>] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 12 09:20:29 nondrbd kernel: [482742.179479]  [<ffffffffbc98196f>] ? filemap_fdatawait_range+0xf/0x30
Feb 12 09:20:29 nondrbd kernel: [482742.180619]  [<ffffffffbc9843b9>] ? filemap_write_and_wait_range+0x49/0x70
Feb 12 09:20:29 nondrbd kernel: [482742.181760]  [<ffffffffc0395c94>] ? ext4_sync_file+0xe4/0x380 [ext4]
Feb 12 09:20:29 nondrbd kernel: [482742.182886]  [<ffffffffbca3f608>] ? do_fsync+0x38/0x60
Feb 12 09:20:29 nondrbd kernel: [482742.184000]  [<ffffffffbca3f84f>] ? SyS_fdatasync+0xf/0x20
Feb 12 09:20:29 nondrbd kernel: [482742.185140]  [<ffffffffbc803b7d>] ? do_syscall_64+0x8d/0xf0
Feb 12 09:20:29 nondrbd kernel: [482742.186239]  [<ffffffffbce18f8e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
Feb 12 09:20:29 nondrbd kernel: [482770.754182] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:20:29 nondrbd kernel: [482770.755499] ata3.00: failed command: FLUSH CACHE
Feb 12 09:20:29 nondrbd kernel: [482770.756695] ata3.00: cmd e7/00:00:00:00:00/00:00:00:00:00/a0 tag 27
Feb 12 09:20:29 nondrbd kernel: [482770.756695]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:20:29 nondrbd kernel: [482770.759048] ata3.00: status: { DRDY }
Feb 12 09:20:29 nondrbd kernel: [482770.760374] ata3: hard resetting link
Feb 12 09:20:29 nondrbd kernel: [482782.990522] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 12 09:20:29 nondrbd kernel: [482782.992889] ata3.00: configured for UDMA/100
Feb 12 09:20:29 nondrbd kernel: [482782.995200] ata3.00: retrying FLUSH 0xe7 Emask 0x4
Feb 12 09:20:29 nondrbd kernel: [482782.998244] ata3.00: device reported invalid CHS sector 0
Feb 12 09:20:29 nondrbd kernel: [482782.999561] ata3: EH complete
Feb 12 09:20:29 nondrbd kernel: [482844.480631] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 12 09:20:29 nondrbd kernel: [482844.482026] ata3.00: failed command: FLUSH CACHE
Feb 12 09:20:29 nondrbd kernel: [482844.483226] ata3.00: cmd e7/00:00:00:00:00/00:00:00:00:00/a0 tag 19
Feb 12 09:20:29 nondrbd kernel: [482844.483226]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 12 09:20:29 nondrbd kernel: [482844.485731] ata3.00: status: { DRDY }
Feb 12 09:20:29 nondrbd kernel: [482844.486977] ata3: hard resetting link
Feb 12 09:20:29 nondrbd kernel: [482890.511637] INFO: task samba:810 blocked for more than 120 seconds.
Feb 12 09:20:29 nondrbd kernel: [482890.513107]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.130-2
Feb 12 09:20:29 nondrbd kernel: [482890.514433] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:20:29 nondrbd kernel: [482890.515755] samba           D    0   810    657 0x00000000
Feb 12 09:20:29 nondrbd kernel: [482890.517085]  ffff9deffbbd08c0 ffff9deffb083980 ffff9deffba3b140 ffff9defffc18980
Feb 12 09:20:29 nondrbd kernel: [482890.518454]  ffff9defb65f6380 ffffb0c8407ebc50 ffffffffbce14219 0000000000000000
Feb 12 09:20:29 nondrbd kernel: [482890.519764]  00ffb0c8407ebc68 ffff9defffc18980 0000000000000000 ffff9deffba3b140
Feb 12 09:20:29 nondrbd kernel: [482890.521057] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482890.522425]  [<ffffffffbce14219>] ? __schedule+0x239/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482890.523660]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482890.524877]  [<ffffffffbce14702>] ? schedule+0x32/0x80
Feb 12 09:20:29 nondrbd kernel: [482890.526192]  [<ffffffffbce17a8d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:20:29 nondrbd kernel: [482890.527734]  [<ffffffffbc8f0ece>] ? ktime_get+0x3e/0xb0
Feb 12 09:20:29 nondrbd kernel: [482890.529217]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482890.530669]  [<ffffffffbce13f7d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:20:29 nondrbd kernel: [482890.532205]  [<ffffffffbc8bce47>] ? prepare_to_wait+0x57/0x80
Feb 12 09:20:29 nondrbd kernel: [482890.533700]  [<ffffffffbce14ed7>] ? bit_wait_io+0x17/0x60
Feb 12 09:20:29 nondrbd kernel: [482890.535157]  [<ffffffffbce14a95>] ? __wait_on_bit+0x55/0x80
Feb 12 09:20:29 nondrbd kernel: [482890.536693]  [<ffffffffbc982598>] ? find_get_pages_tag+0x158/0x2e0
Feb 12 09:20:29 nondrbd kernel: [482890.538166]  [<ffffffffbc9817ff>] ? wait_on_page_bit+0x7f/0xa0
Feb 12 09:20:29 nondrbd kernel: [482890.539659]  [<ffffffffbc8bd2b0>] ? wake_atomic_t_function+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482890.541136]  [<ffffffffbc981900>] ? __filemap_fdatawait_range+0xe0/0x140
Feb 12 09:20:29 nondrbd kernel: [482890.542589]  [<ffffffffbc984274>] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 12 09:20:29 nondrbd kernel: [482890.543828]  [<ffffffffbc98196f>] ? filemap_fdatawait_range+0xf/0x30
Feb 12 09:20:29 nondrbd kernel: [482890.545046]  [<ffffffffbc9843b9>] ? filemap_write_and_wait_range+0x49/0x70
Feb 12 09:20:29 nondrbd kernel: [482890.546305]  [<ffffffffc0395c94>] ? ext4_sync_file+0xe4/0x380 [ext4]
Feb 12 09:20:29 nondrbd kernel: [482890.547601]  [<ffffffffbca3f608>] ? do_fsync+0x38/0x60
Feb 12 09:20:29 nondrbd kernel: [482890.548817]  [<ffffffffbca3f84f>] ? SyS_fdatasync+0xf/0x20
Feb 12 09:20:29 nondrbd kernel: [482890.550174]  [<ffffffffbc803b7d>] ? do_syscall_64+0x8d/0xf0
Feb 12 09:20:29 nondrbd kernel: [482890.551453]  [<ffffffffbce18f8e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
Feb 12 09:20:29 nondrbd kernel: [482890.552719] INFO: task smbd:387536 blocked for more than 120 seconds.
Feb 12 09:20:29 nondrbd kernel: [482890.553917]       Not tainted 4.9.0-8-amd64 #1 Debian 4.9.130-2
Feb 12 09:20:29 nondrbd kernel: [482890.555144] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 12 09:20:29 nondrbd kernel: [482890.556397] smbd            D    0 387536    799 0x00000000
Feb 12 09:20:29 nondrbd kernel: [482890.557648]  ffff9deffb083980 ffff9defca6dce40 ffff9defb65f6380 ffff9defffc18980
Feb 12 09:20:29 nondrbd kernel: [482890.558927]  ffff9defca63e080 ffffb0c840bdbc50 ffffffffbce14219 0000000000000000
Feb 12 09:20:29 nondrbd kernel: [482890.560216]  00ffb0c840bdbc68 ffff9defffc18980 0000000000000000 ffff9defb65f6380
Feb 12 09:20:29 nondrbd kernel: [482890.561495] Call Trace:
Feb 12 09:20:29 nondrbd kernel: [482890.562771]  [<ffffffffbce14219>] ? __schedule+0x239/0x6f0
Feb 12 09:20:29 nondrbd kernel: [482890.564065]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482890.565337]  [<ffffffffbce14702>] ? schedule+0x32/0x80
Feb 12 09:20:29 nondrbd kernel: [482890.566612]  [<ffffffffbce17a8d>] ? schedule_timeout+0x1dd/0x380
Feb 12 09:20:29 nondrbd kernel: [482890.567876]  [<ffffffffbc8f0ece>] ? ktime_get+0x3e/0xb0
Feb 12 09:20:29 nondrbd kernel: [482890.569108]  [<ffffffffbce14ec0>] ? bit_wait+0x50/0x50
Feb 12 09:20:29 nondrbd kernel: [482890.570324]  [<ffffffffbce13f7d>] ? io_schedule_timeout+0x9d/0x100
Feb 12 09:20:29 nondrbd kernel: [482890.571595]  [<ffffffffbc8bce47>] ? prepare_to_wait+0x57/0x80
Feb 12 09:20:29 nondrbd kernel: [482890.572811]  [<ffffffffbce14ed7>] ? bit_wait_io+0x17/0x60
Feb 12 09:20:29 nondrbd kernel: [482890.574007]  [<ffffffffbce14a95>] ? __wait_on_bit+0x55/0x80
Feb 12 09:20:29 nondrbd kernel: [482890.575228]  [<ffffffffbc982598>] ? find_get_pages_tag+0x158/0x2e0
Feb 12 09:20:29 nondrbd kernel: [482890.576441]  [<ffffffffbc9817ff>] ? wait_on_page_bit+0x7f/0xa0
Feb 12 09:20:29 nondrbd kernel: [482890.577658]  [<ffffffffbc8bd2b0>] ? wake_atomic_t_function+0x60/0x60
Feb 12 09:20:29 nondrbd kernel: [482890.578840]  [<ffffffffbc981900>] ? __filemap_fdatawait_range+0xe0/0x140
Feb 12 09:20:29 nondrbd kernel: [482890.580072]  [<ffffffffbc984274>] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 12 09:20:29 nondrbd kernel: [482890.581312]  [<ffffffffbc98196f>] ? filemap_fdatawait_range+0xf/0x30
Feb 12 09:20:29 nondrbd kernel: [482890.582523]  [<ffffffffbc9843b9>] ? filemap_write_and_wait_range+0x49/0x70
Feb 12 09:20:29 nondrbd kernel: [482890.583686]  [<ffffffffc0395c94>] ? ext4_sync_file+0xe4/0x380 [ext4]
Feb 12 09:20:29 nondrbd kernel: [482890.584812]  [<ffffffffbca3f608>] ? do_fsync+0x38/0x60
Feb 12 09:20:29 nondrbd kernel: [482890.585946]  [<ffffffffbca3f84f>] ? SyS_fdatasync+0xf/0x20
Feb 12 09:20:29 nondrbd kernel: [482890.587087]  [<ffffffffbc803b7d>] ? do_syscall_64+0x8d/0xf0
Feb 12 09:20:29 nondrbd kernel: [482890.588335]  [<ffffffffbce18f8e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
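The watchdog messages above point at their own knob: `/proc/sys/kernel/hung_task_timeout_secs`. As a stopgap inside the guest (this is a tuning/config fragment, not a fix for the underlying I/O stall during host backups), the threshold can be raised or the warnings silenced. The values below are examples, and writing these sysctls requires root:

```shell
# Inspect the current hung-task watchdog threshold (default is usually 120 s,
# matching the "blocked for more than 120 seconds" messages above).
cat /proc/sys/kernel/hung_task_timeout_secs

# Raise the threshold so transient I/O stalls during host backup/migration
# are less likely to trip the watchdog. Note this only suppresses the
# warnings; the underlying virtio/ATA stall still has to be addressed.
echo 300 > /proc/sys/kernel/hung_task_timeout_secs

# Or disable the messages entirely, as the kernel log itself suggests:
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```

To make the setting survive a reboot, the equivalent line (e.g. `kernel.hung_task_timeout_secs = 300`) can go into `/etc/sysctl.d/`.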