[SOLVED] ZFS on HDD massive performance drop after update from Proxmox 6.2 to 6.3

One additional piece of information:
The syslog shows the following messages at more or less irregular intervals:
Code:
Jan 20 20:26:17 proxmox02 kernel: Call Trace:
Jan 20 20:26:17 proxmox02 kernel:  __schedule+0x2e6/0x6f0
Jan 20 20:26:17 proxmox02 kernel:  schedule+0x33/0xa0
Jan 20 20:26:17 proxmox02 kernel:  cv_wait_common+0x104/0x130 [spl]
Jan 20 20:26:17 proxmox02 kernel:  ? wait_woken+0x80/0x80
Jan 20 20:26:17 proxmox02 kernel:  __cv_wait+0x15/0x20 [spl]
Jan 20 20:26:17 proxmox02 kernel:  dmu_tx_wait+0xb6/0x380 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  ? txg_kick+0x8d/0xc0 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  dmu_tx_assign+0x161/0x480 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  zfs_write+0x469/0xf20 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  zpl_write_common_iovec+0xa3/0xf0 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  zpl_iter_write+0xee/0x130 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  new_sync_write+0x125/0x1c0
Jan 20 20:26:17 proxmox02 kernel:  __vfs_write+0x29/0x40
Jan 20 20:26:17 proxmox02 kernel:  vfs_write+0xab/0x1b0
Jan 20 20:26:17 proxmox02 kernel:  ksys_pwrite64+0x66/0xa0
Jan 20 20:26:17 proxmox02 kernel:  __x64_sys_pwrite64+0x1e/0x20
Jan 20 20:26:17 proxmox02 kernel:  do_syscall_64+0x57/0x190
Jan 20 20:26:17 proxmox02 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 20 20:26:17 proxmox02 kernel: RIP: 0033:0x7f1a110ffedf
Jan 20 20:26:17 proxmox02 kernel: Code: Bad RIP value.
Jan 20 20:26:17 proxmox02 kernel: RSP: 002b:00007f1a0d5bab50 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
Jan 20 20:26:17 proxmox02 kernel: RAX: ffffffffffffffda RBX: 0000000000000032 RCX: 00007f1a110ffedf
Jan 20 20:26:17 proxmox02 kernel: RDX: 0000000000100000 RSI: 0000556612c99d20 RDI: 0000000000000032
Jan 20 20:26:17 proxmox02 kernel: RBP: 0000556612c99d20 R08: 0000000000000000 R09: 00007f1a0d5babe0
Jan 20 20:26:17 proxmox02 kernel: R10: 000000000eb00000 R11: 0000000000000293 R12: 0000000000100000
Jan 20 20:26:17 proxmox02 kernel: R13: 000000000eb00000 R14: 00007f1a10265c00 R15: 000055660ffa7320
Jan 20 20:26:17 proxmox02 kernel: INFO: task smbd:20175 blocked for more than 120 seconds.
Jan 20 20:26:17 proxmox02 kernel:       Tainted: P           O      5.4.78-2-pve #1
Jan 20 20:26:17 proxmox02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 20 20:26:17 proxmox02 kernel: smbd            D    0 20175   8703 0x00004320
Jan 20 20:26:17 proxmox02 kernel: Call Trace:
Jan 20 20:26:17 proxmox02 kernel:  __schedule+0x2e6/0x6f0
Jan 20 20:26:17 proxmox02 kernel:  schedule+0x33/0xa0
Jan 20 20:26:17 proxmox02 kernel:  cv_wait_common+0x104/0x130 [spl]
Jan 20 20:26:17 proxmox02 kernel:  ? wait_woken+0x80/0x80
Jan 20 20:26:17 proxmox02 kernel:  __cv_wait+0x15/0x20 [spl]
Jan 20 20:26:17 proxmox02 kernel:  dmu_tx_wait+0xb6/0x380 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  ? txg_kick+0x8d/0xc0 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  dmu_tx_assign+0x161/0x480 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  zfs_write+0x469/0xf20 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  zpl_write_common_iovec+0xa3/0xf0 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  zpl_iter_write+0xee/0x130 [zfs]
Jan 20 20:26:17 proxmox02 kernel:  new_sync_write+0x125/0x1c0
Jan 20 20:26:17 proxmox02 kernel:  __vfs_write+0x29/0x40
Jan 20 20:26:17 proxmox02 kernel:  vfs_write+0xab/0x1b0
Jan 20 20:26:17 proxmox02 kernel:  ksys_pwrite64+0x66/0xa0
Jan 20 20:26:17 proxmox02 kernel:  __x64_sys_pwrite64+0x1e/0x20
Jan 20 20:26:17 proxmox02 kernel:  do_syscall_64+0x57/0x190
Jan 20 20:26:17 proxmox02 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 20 20:26:17 proxmox02 kernel: RIP: 0033:0x7f1a110ffedf
Jan 20 20:26:17 proxmox02 kernel: Code: Bad RIP value.
Jan 20 20:26:17 proxmox02 kernel: RSP: 002b:00007f1a0cdb9b50 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
Jan 20 20:26:17 proxmox02 kernel: RAX: ffffffffffffffda RBX: 0000000000000038 RCX: 00007f1a110ffedf
Jan 20 20:26:17 proxmox02 kernel: RDX: 0000000000100000 RSI: 00005566194b7260 RDI: 0000000000000038
Jan 20 20:26:17 proxmox02 kernel: RBP: 00005566194b7260 R08: 0000000000000000 R09: 00007f1a0cdb9be0
Jan 20 20:26:17 proxmox02 kernel: R10: 000000000c800000 R11: 0000000000000293 R12: 0000000000100000
Jan 20 20:26:17 proxmox02 kernel: R13: 000000000c800000 R14: 00007f1a10265c00 R15: 000055660ffa7320
 
anything before that in the logs that looks interesting?
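For example, to pull the kernel messages just before and around that trace (assuming the journal reaches back that far; otherwise grep /var/log/syslog for the timestamp):
Code:
journalctl -k --since "2021-01-20 20:20:00" --until "2021-01-20 20:27:00"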
 
found this one and wanted to share:
https://github.com/openzfs/zfs/issues/10253

It describes strange behavior when a scrub runs in parallel with other IO. Disabling NCQ (queue depth = 1) seems to have had some effect.
Maybe that is something you'd like to try as well.


just have built that for myself:
Code:
#!/bin/bash
# Set the queue depth to 1 (effectively disabling NCQ) on every sd* disk.
queuedepth=1

for i in a b c d e f g h i j k l m n o p q r s t u v w x y z ; do
    if [ -d /sys/block/sd$i ]; then
        echo "working on sd$i"
        echo "    before: $(cat /sys/block/sd$i/device/queue_depth)"
        echo $queuedepth > /sys/block/sd$i/device/queue_depth
        echo "    after:  $(cat /sys/block/sd$i/device/queue_depth)"
        sleep 1
    fi
done
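Note that the echo into sysfs only lasts until the next reboot. If queue depth 1 turns out to help, a udev rule should make it persistent; a rough sketch (the rule file name is arbitrary, adjust the sd[a-z] match to your disks):
Code:
# /etc/udev/rules.d/99-queue-depth.rules (example file name)
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/queue_depth}="1"

# reload and apply without rebooting
udevadm control --reload-rules
udevadm trigger --subsystem-match=block --action=change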
 
Interesting - but scrub was not running. Thanks!
 
Hi,

here is the result for zpool list -v:
Code:
root@proxmox02:~# zpool list -v
NAME                         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd_zfs_guests              7.25T  4.24T  3.01T        -         -    25%    58%  1.19x    ONLINE  -
  mirror                    3.62T  2.12T  1.50T        -         -    24%  58.6%      -  ONLINE
    wwn-0x5000c500b3a2d8c4      -      -      -        -         -      -      -      -  ONLINE
    wwn-0x5000c500b3a2edef      -      -      -        -         -      -      -      -  ONLINE
  mirror                    3.62T  2.12T  1.50T        -         -    26%  58.5%      -  ONLINE
    wwn-0x5000c500b38ee3ed      -      -      -        -         -      -      -      -  ONLINE
    wwn-0x5000c500b3a2e636      -      -      -        -         -      -      -      -  ONLINE
hdd_zfs_ssd                 1.81T   237G  1.58T        -         -     6%    12%  1.00x    ONLINE  -
  mirror                    1.81T   237G  1.58T        -         -     6%  12.8%      -  ONLINE
    wwn-0x500a0751e4a94a86      -      -      -        -         -      -      -      -  ONLINE
    wwn-0x500a0751e4a94af8      -      -      -        -         -      -      -      -  ONLINE



The interesting part is that the first four tests went through more or less flawlessly, which drove me crazy. So I started a fifth test and now the problems started.
Notably, the IO load didn't go up that much.
Maybe it has something to do with the reboot I did for some tests with other kernels. I have the feeling that the problems accumulate over time.

Chris


Hi,

After reading this whole thread again and seeing your syslog errors regarding ZFS, I think this error could be the problem in your case. It could be a ZFS bug (even a corner case) or a non-ZFS bug (kernel, drivers, hardware, whatever). I have occasionally seen errors like yours in my own syslog/dmesg (not on a PMX install), and in some cases I could "avoid" this kind of error by changing some ZFS settings.

Reading your files, I would try this (it is a must-have anyway):

zfs set atime=off any-of-your-zpool
zfs set xattr=sa any-of-your-zpool
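To check what is currently set (these properties are inherited by the child datasets unless overridden locally):
Code:
zfs get atime,xattr any-of-your-zpool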

And because the prefetch hit rate is low anyway (it is not wise to keep it enabled):

Demand prefetch data: 0.5 % 79.5k
Demand prefetch metadata: 0.1 % 18.3k


# disable prefetch
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
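Keep in mind the sysfs write is not persistent; to survive a reboot, the usual Debian/Proxmox way would be a module option (sketch, assuming nothing else already manages /etc/modprobe.d/zfs.conf):
Code:
echo "options zfs zfs_prefetch_disable=1" >> /etc/modprobe.d/zfs.conf
# with ZFS on root the module is loaded from the initramfs, so refresh it
update-initramfs -u -k all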


Then you could reboot your server and see if your problem is gone, or maybe you will see some improvements!


Sorry for my late reply and for my bad English ;)

Good luck / Bafta !
 
Hi,
no worries. I'm glad about every bit of help I can get. I have to apologize for my late response. Sometimes other things are more important than computers, and I needed some time to test.

I tested the suggested settings. I think it eased it a little bit, but no significant improvement.

I have now ordered some Ultrastar disks, an additional SATA controller and new cables, just for testing and to make sure that the drives or other parts of the system aren't the problem. Even though the SeaChest tools from Seagate say everything is good, I want to be sure.

I did a lot of testing over the last weekend and the past few days. Sometimes the system runs flawlessly for one or two days. Then suddenly the hiccups start, and even a reboot won't make them go away. The number of running containers or VMs doesn't matter either. Even with just the bare host running, I see the issues while copying data from one pool to the other.

It is just a gut feeling, but either I am triggering a corner case or race condition with my setup, or the HDDs have some kind of problem.
Sometimes after a reboot I have no access to the volumes on the HDD pool, even though the root of the pool itself is accessible. When I wait 10 to 15 minutes, I can mount the volumes.
And this is what I see in the syslog while it hangs:
Code:
Jan 27 17:22:42 proxmox02 pvesh[1591]: Starting CT 302
Jan 27 17:22:42 proxmox02 pve-guests[1592]: <root@pam> starting task UPID:proxmox02:00000629:00004C15:60119352:vzstart:302:root@pam:
Jan 27 17:22:42 proxmox02 pve-guests[1577]: starting CT 302: UPID:proxmox02:00000629:00004C15:60119352:vzstart:302:root@pam:
Jan 27 17:23:00 proxmox02 systemd[1]: Starting Proxmox VE replication runner...
Jan 27 17:23:00 proxmox02 systemd[1]: pvesr.service: Succeeded.
Jan 27 17:23:00 proxmox02 systemd[1]: Started Proxmox VE replication runner.
Jan 27 17:23:30 proxmox02 kernel: INFO: task zpool:3524 blocked for more than 120 seconds.
Jan 27 17:23:30 proxmox02 kernel:       Tainted: P           O      5.4.78-2-pve #1
Jan 27 17:23:30 proxmox02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 27 17:23:30 proxmox02 kernel: zpool           D    0  3524   1546 0x80004006
Jan 27 17:23:30 proxmox02 kernel: Call Trace:
Jan 27 17:23:30 proxmox02 kernel:  __schedule+0x2e6/0x6f0
Jan 27 17:23:30 proxmox02 kernel:  schedule+0x33/0xa0
Jan 27 17:23:30 proxmox02 kernel:  io_schedule+0x16/0x40
Jan 27 17:23:30 proxmox02 kernel:  cv_wait_common+0xb5/0x130 [spl]
Jan 27 17:23:30 proxmox02 kernel:  ? wait_woken+0x80/0x80
Jan 27 17:23:30 proxmox02 kernel:  __cv_wait_io+0x18/0x20 [spl]
Jan 27 17:23:30 proxmox02 kernel:  txg_wait_synced_impl+0xc9/0x110 [zfs]
Jan 27 17:23:30 proxmox02 kernel:  txg_wait_synced+0x10/0x40 [zfs]
Jan 27 17:23:30 proxmox02 kernel:  spa_config_update+0x139/0x180 [zfs]
Jan 27 17:23:30 proxmox02 kernel:  spa_import+0x5ed/0x7f0 [zfs]
Jan 27 17:23:30 proxmox02 kernel:  ? nvpair_value_common.part.13+0x14d/0x170 [znvpair]
Jan 27 17:23:30 proxmox02 kernel:  zfs_ioc_pool_import+0x12d/0x150 [zfs]
Jan 27 17:23:30 proxmox02 kernel:  zfsdev_ioctl+0x6db/0x8f0 [zfs]
Jan 27 17:23:30 proxmox02 kernel:  ? lru_cache_add_active_or_unevictable+0x39/0xb0
Jan 27 17:23:30 proxmox02 kernel:  do_vfs_ioctl+0xa9/0x640
Jan 27 17:23:30 proxmox02 kernel:  ? handle_mm_fault+0xc9/0x1f0
Jan 27 17:23:30 proxmox02 kernel:  ksys_ioctl+0x67/0x90
Jan 27 17:23:30 proxmox02 kernel:  __x64_sys_ioctl+0x1a/0x20
Jan 27 17:23:30 proxmox02 kernel:  do_syscall_64+0x57/0x190
Jan 27 17:23:30 proxmox02 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 27 17:23:30 proxmox02 kernel: RIP: 0033:0x7f6138f19427
Jan 27 17:23:30 proxmox02 kernel: Code: Bad RIP value.
Jan 27 17:23:30 proxmox02 kernel: RSP: 002b:00007ffd7d737c18 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 27 17:23:30 proxmox02 kernel: RAX: ffffffffffffffda RBX: 00007ffd7d737c90 RCX: 00007f6138f19427
Jan 27 17:23:30 proxmox02 kernel: RDX: 00007ffd7d737c90 RSI: 0000000000005a02 RDI: 0000000000000003
Jan 27 17:23:30 proxmox02 kernel: RBP: 00007ffd7d73bb80 R08: 000055db03e51430 R09: 000000000000007a
Jan 27 17:23:30 proxmox02 kernel: R10: 000055db03e16010 R11: 0000000000000246 R12: 000055db03e162e0
Jan 27 17:23:30 proxmox02 kernel: R13: 000055db03e31078 R14: 0000000000000000 R15: 0000000000000000
Jan 27 17:24:00 proxmox02 systemd[1]: Starting Proxmox VE replication runner...
Jan 27 17:24:00 proxmox02 systemd[1]: pvesr.service: Succeeded.
Jan 27 17:24:00 proxmox02 systemd[1]: Started Proxmox VE replication runner.
Jan 27 17:24:49 proxmox02 pvedaemon[1574]: <root@pam> successful auth for user 'root@pam'
Jan 27 17:25:00 proxmox02 systemd[1]: Starting Proxmox VE replication runner...
Jan 27 17:25:00 proxmox02 systemd[1]: pvesr.service: Succeeded.
Jan 27 17:25:00 proxmox02 systemd[1]: Started Proxmox VE replication runner.
Jan 27 17:25:31 proxmox02 kernel: INFO: task zpool:3524 blocked for more than 241 seconds.
Jan 27 17:25:31 proxmox02 kernel:       Tainted: P           O      5.4.78-2-pve #1
Jan 27 17:25:31 proxmox02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 27 17:25:31 proxmox02 kernel: zpool           D    0  3524   1546 0x80004006
Jan 27 17:25:31 proxmox02 kernel: Call Trace:
Jan 27 17:25:31 proxmox02 kernel:  __schedule+0x2e6/0x6f0
Jan 27 17:25:31 proxmox02 kernel:  schedule+0x33/0xa0
Jan 27 17:25:31 proxmox02 kernel:  io_schedule+0x16/0x40
Jan 27 17:25:31 proxmox02 kernel:  cv_wait_common+0xb5/0x130 [spl]
Jan 27 17:25:31 proxmox02 kernel:  ? wait_woken+0x80/0x80
Jan 27 17:25:31 proxmox02 kernel:  __cv_wait_io+0x18/0x20 [spl]
Jan 27 17:25:31 proxmox02 kernel:  txg_wait_synced_impl+0xc9/0x110 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  txg_wait_synced+0x10/0x40 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  spa_config_update+0x139/0x180 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  spa_import+0x5ed/0x7f0 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  ? nvpair_value_common.part.13+0x14d/0x170 [znvpair]
Jan 27 17:25:31 proxmox02 kernel:  zfs_ioc_pool_import+0x12d/0x150 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  zfsdev_ioctl+0x6db/0x8f0 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  ? lru_cache_add_active_or_unevictable+0x39/0xb0
Jan 27 17:25:31 proxmox02 kernel:  do_vfs_ioctl+0xa9/0x640
Jan 27 17:25:31 proxmox02 kernel:  ? handle_mm_fault+0xc9/0x1f0
Jan 27 17:25:31 proxmox02 kernel:  ksys_ioctl+0x67/0x90
Jan 27 17:25:31 proxmox02 kernel:  __x64_sys_ioctl+0x1a/0x20
Jan 27 17:25:31 proxmox02 kernel:  do_syscall_64+0x57/0x190
Jan 27 17:25:31 proxmox02 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 27 17:25:31 proxmox02 kernel: RIP: 0033:0x7f6138f19427
Jan 27 17:25:31 proxmox02 kernel: Code: Bad RIP value.
Jan 27 17:25:31 proxmox02 kernel: RSP: 002b:00007ffd7d737c18 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 27 17:25:31 proxmox02 kernel: RAX: ffffffffffffffda RBX: 00007ffd7d737c90 RCX: 00007f6138f19427
Jan 27 17:25:31 proxmox02 kernel: RDX: 00007ffd7d737c90 RSI: 0000000000005a02 RDI: 0000000000000003
Jan 27 17:25:31 proxmox02 kernel: RBP: 00007ffd7d73bb80 R08: 000055db03e51430 R09: 000000000000007a
Jan 27 17:25:31 proxmox02 kernel: R10: 000055db03e16010 R11: 0000000000000246 R12: 000055db03e162e0
Jan 27 17:25:31 proxmox02 kernel: R13: 000055db03e31078 R14: 0000000000000000 R15: 0000000000000000
Jan 27 17:25:31 proxmox02 kernel: INFO: task zpool:1586 blocked for more than 120 seconds.
Jan 27 17:25:31 proxmox02 kernel:       Tainted: P           O      5.4.78-2-pve #1
Jan 27 17:25:31 proxmox02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 27 17:25:31 proxmox02 kernel: zpool           D    0  1586   1577 0x00000000
Jan 27 17:25:31 proxmox02 kernel: Call Trace:
Jan 27 17:25:31 proxmox02 kernel:  __schedule+0x2e6/0x6f0
Jan 27 17:25:31 proxmox02 kernel:  schedule+0x33/0xa0
Jan 27 17:25:31 proxmox02 kernel:  schedule_preempt_disabled+0xe/0x10
Jan 27 17:25:31 proxmox02 kernel:  __mutex_lock.isra.10+0x2c9/0x4c0
Jan 27 17:25:31 proxmox02 kernel:  __mutex_lock_slowpath+0x13/0x20
Jan 27 17:25:31 proxmox02 kernel:  mutex_lock+0x2c/0x30
Jan 27 17:25:31 proxmox02 kernel:  spa_open_common+0x62/0x4d0 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  spa_get_stats+0x57/0x570 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  ? kmalloc_large_node+0x86/0x90
Jan 27 17:25:31 proxmox02 kernel:  ? __kmalloc_node+0x267/0x330
Jan 27 17:25:31 proxmox02 kernel:  zfs_ioc_pool_stats+0x39/0x90 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  zfsdev_ioctl+0x6db/0x8f0 [zfs]
Jan 27 17:25:31 proxmox02 kernel:  ? lru_cache_add_active_or_unevictable+0x39/0xb0
Jan 27 17:25:31 proxmox02 kernel:  do_vfs_ioctl+0xa9/0x640
Jan 27 17:25:31 proxmox02 kernel:  ? handle_mm_fault+0xc9/0x1f0
Jan 27 17:25:31 proxmox02 kernel:  ksys_ioctl+0x67/0x90
Jan 27 17:25:31 proxmox02 kernel:  __x64_sys_ioctl+0x1a/0x20
Jan 27 17:25:31 proxmox02 kernel:  do_syscall_64+0x57/0x190
Jan 27 17:25:31 proxmox02 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 27 17:25:31 proxmox02 kernel: RIP: 0033:0x7f1a2d89f427
Jan 27 17:25:31 proxmox02 kernel: Code: Bad RIP value.
Jan 27 17:25:31 proxmox02 kernel: RSP: 002b:00007fff9dd2e5e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 27 17:25:31 proxmox02 kernel: RAX: ffffffffffffffda RBX: 00007fff9dd2e610 RCX: 00007f1a2d89f427
Jan 27 17:25:31 proxmox02 kernel: RDX: 00007fff9dd2e610 RSI: 0000000000005a05 RDI: 0000000000000003
Jan 27 17:25:31 proxmox02 kernel: RBP: 00007fff9dd31bf0 R08: 000055ac0421c680 R09: 00007f1a2d96ade0
Jan 27 17:25:31 proxmox02 kernel: R10: 000055ac0421a010 R11: 0000000000000246 R12: 000055ac0421c530
Jan 27 17:25:31 proxmox02 kernel: R13: 000055ac0421a2e0 R14: 0000000000000000 R15: 00007fff9dd31c04
Jan 27 17:26:00 proxmox02 systemd[1]: Starting Proxmox VE replication runner...
Jan 27 17:26:00 proxmox02 systemd[1]: pvesr.service: Succeeded.
Jan 27 17:26:00 proxmox02 systemd[1]: Started Proxmox VE replication runner.
Jan 27 17:27:00 proxmox02 systemd[1]: Starting Proxmox VE replication runner...
Jan 27 17:27:00 proxmox02 systemd[1]: pvesr.service: Succeeded.
Jan 27 17:27:00 proxmox02 systemd[1]: Started Proxmox VE replication runner.
Jan 27 17:27:32 proxmox02 kernel: INFO: task zpool:3524 blocked for more than 362 seconds.
Jan 27 17:27:32 proxmox02 kernel:       Tainted: P           O      5.4.78-2-pve #1
Jan 27 17:27:32 proxmox02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 27 17:27:32 proxmox02 kernel: zpool           D    0  3524   1546 0x80004006
Jan 27 17:27:32 proxmox02 kernel: Call Trace:
Jan 27 17:27:32 proxmox02 kernel:  __schedule+0x2e6/0x6f0
Jan 27 17:27:32 proxmox02 kernel:  schedule+0x33/0xa0
Jan 27 17:27:32 proxmox02 kernel:  io_schedule+0x16/0x40
Jan 27 17:27:32 proxmox02 kernel:  cv_wait_common+0xb5/0x130 [spl]
Jan 27 17:27:32 proxmox02 kernel:  ? wait_woken+0x80/0x80
Jan 27 17:27:32 proxmox02 kernel:  __cv_wait_io+0x18/0x20 [spl]
Jan 27 17:27:32 proxmox02 kernel:  txg_wait_synced_impl+0xc9/0x110 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  txg_wait_synced+0x10/0x40 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  spa_config_update+0x139/0x180 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  spa_import+0x5ed/0x7f0 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  ? nvpair_value_common.part.13+0x14d/0x170 [znvpair]
Jan 27 17:27:32 proxmox02 kernel:  zfs_ioc_pool_import+0x12d/0x150 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  zfsdev_ioctl+0x6db/0x8f0 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  ? lru_cache_add_active_or_unevictable+0x39/0xb0
Jan 27 17:27:32 proxmox02 kernel:  do_vfs_ioctl+0xa9/0x640
Jan 27 17:27:32 proxmox02 kernel:  ? handle_mm_fault+0xc9/0x1f0
Jan 27 17:27:32 proxmox02 kernel:  ksys_ioctl+0x67/0x90
Jan 27 17:27:32 proxmox02 kernel:  __x64_sys_ioctl+0x1a/0x20
Jan 27 17:27:32 proxmox02 kernel:  do_syscall_64+0x57/0x190
Jan 27 17:27:32 proxmox02 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 27 17:27:32 proxmox02 kernel: RIP: 0033:0x7f6138f19427
Jan 27 17:27:32 proxmox02 kernel: Code: Bad RIP value.
Jan 27 17:27:32 proxmox02 kernel: RSP: 002b:00007ffd7d737c18 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 27 17:27:32 proxmox02 kernel: RAX: ffffffffffffffda RBX: 00007ffd7d737c90 RCX: 00007f6138f19427
Jan 27 17:27:32 proxmox02 kernel: RDX: 00007ffd7d737c90 RSI: 0000000000005a02 RDI: 0000000000000003
Jan 27 17:27:32 proxmox02 kernel: RBP: 00007ffd7d73bb80 R08: 000055db03e51430 R09: 000000000000007a
Jan 27 17:27:32 proxmox02 kernel: R10: 000055db03e16010 R11: 0000000000000246 R12: 000055db03e162e0
Jan 27 17:27:32 proxmox02 kernel: R13: 000055db03e31078 R14: 0000000000000000 R15: 0000000000000000
Jan 27 17:27:32 proxmox02 kernel: INFO: task zpool:1586 blocked for more than 241 seconds.
Jan 27 17:27:32 proxmox02 kernel:       Tainted: P           O      5.4.78-2-pve #1
Jan 27 17:27:32 proxmox02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 27 17:27:32 proxmox02 kernel: zpool           D    0  1586   1577 0x00000000
Jan 27 17:27:32 proxmox02 kernel: Call Trace:
Jan 27 17:27:32 proxmox02 kernel:  __schedule+0x2e6/0x6f0
Jan 27 17:27:32 proxmox02 kernel:  schedule+0x33/0xa0
Jan 27 17:27:32 proxmox02 kernel:  schedule_preempt_disabled+0xe/0x10
Jan 27 17:27:32 proxmox02 kernel:  __mutex_lock.isra.10+0x2c9/0x4c0
Jan 27 17:27:32 proxmox02 kernel:  __mutex_lock_slowpath+0x13/0x20
Jan 27 17:27:32 proxmox02 kernel:  mutex_lock+0x2c/0x30
Jan 27 17:27:32 proxmox02 kernel:  spa_open_common+0x62/0x4d0 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  spa_get_stats+0x57/0x570 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  ? kmalloc_large_node+0x86/0x90
Jan 27 17:27:32 proxmox02 kernel:  ? __kmalloc_node+0x267/0x330
Jan 27 17:27:32 proxmox02 kernel:  zfs_ioc_pool_stats+0x39/0x90 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  zfsdev_ioctl+0x6db/0x8f0 [zfs]
Jan 27 17:27:32 proxmox02 kernel:  ? lru_cache_add_active_or_unevictable+0x39/0xb0
Jan 27 17:27:32 proxmox02 kernel:  do_vfs_ioctl+0xa9/0x640
Jan 27 17:27:32 proxmox02 kernel:  ? handle_mm_fault+0xc9/0x1f0
Jan 27 17:27:32 proxmox02 kernel:  ksys_ioctl+0x67/0x90
Jan 27 17:27:32 proxmox02 kernel:  __x64_sys_ioctl+0x1a/0x20
Jan 27 17:27:32 proxmox02 kernel:  do_syscall_64+0x57/0x190
Jan 27 17:27:32 proxmox02 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 27 17:27:32 proxmox02 kernel: RIP: 0033:0x7f1a2d89f427
Jan 27 17:27:32 proxmox02 kernel: Code: Bad RIP value.
Jan 27 17:27:32 proxmox02 kernel: RSP: 002b:00007fff9dd2e5e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 27 17:27:32 proxmox02 kernel: RAX: ffffffffffffffda RBX: 00007fff9dd2e610 RCX: 00007f1a2d89f427
Jan 27 17:27:32 proxmox02 kernel: RDX: 00007fff9dd2e610 RSI: 0000000000005a05 RDI: 0000000000000003
Jan 27 17:27:32 proxmox02 kernel: RBP: 00007fff9dd31bf0 R08: 000055ac0421c680 R09: 00007f1a2d96ade0
Jan 27 17:27:32 proxmox02 kernel: R10: 000055ac0421a010 R11: 0000000000000246 R12: 000055ac0421c530
Jan 27 17:27:32 proxmox02 kernel: R13: 000055ac0421a2e0 R14: 0000000000000000 R15: 00007fff9dd31c04



Thanks,
Chris
 
Just out of curiosity: when was your last scrub? Was it successful or did it repair anything?

Also are you able to use a different PCIe slot for your controller?
I have rebuilt my NAS with ZFS. All of a sudden I experienced checksum errors, while previously everything worked fine, including the scrub.
I realized that I had moved the SAS HBA to a different slot to provide some "air" between the two plug-in cards.
The slot I had placed the HBA into is electrically only an x4 slot (previously an x8).
After moving the controller back, everything returned to normal.
I need to monitor the situation, but so far no issues...
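By the way, the negotiated link width of an HBA can be checked like this (the PCI address 01:00.0 is just an example, take the real one from the first command):
Code:
lspci | grep -i -E 'sas|raid'
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'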
 
Thanks for your question.

Last scrub was on 10th of January. And it was successful without any issues needed to be repaired.

A different slot will be difficult, as I use the onboard SATA ports. ;-) Waiting for the controller to arrive.

Cheers,

Chris
 
I have the EXACT same symptoms caused by upgrading to 6.3-3 from 6.2-4.

[Screenshot: IO delay graph (Screenshot 2021-04-21 192320.png)]

That sharp rise in IO delay happened when I upgraded, with the same load before and after. I also started having LONG hangs when doing replication between hosts, almost a minute long. I have an extremely fast pool of NVMe drives:

Code:
pool: tank
 state: ONLINE
  scan: resilvered 196G in 0 days 00:10:27 with 0 errors on Sat Apr 17 00:32:39 2021
config:

        NAME                                             STATE     READ WRITE CKSUM
        tank                                             ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502XJ4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502W44P0IGN  ONLINE       0     0     0
          mirror-1                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502XE4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502V64P0IGN  ONLINE       0     0     0
          mirror-2                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF813200244P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF813200AY4P0IGN  ONLINE       0     0     0
          mirror-3                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502XN4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810503114P0IGN  ONLINE       0     0     0
          mirror-4                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502VL4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF813102KU4P0IGN  ONLINE       0     0     0
          mirror-5                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502W94P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810503F84P0IGN  ONLINE       0     0     0
          mirror-6                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF8132002L4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502VD4P0IGN  ONLINE       0     0     0
          mirror-7                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF813102JC4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF8105047S4P0IGN  ONLINE       0     0     0
          mirror-8                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810503F14P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810502V54P0IGN  ONLINE       0     0     0
          mirror-9                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF8105031A4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810504W44P0IGN  ONLINE       0     0     0
          mirror-10                                      ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF810503FC4P0IGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE2KX040T7_PHLF727500J34P0IGN  ONLINE       0     0     0
        logs
          nvme-INTEL_SSDPE21K375GA_PHKE751000AW375AGN    ONLINE       0     0     0
          nvme-INTEL_SSDPE21K375GA_PHKE75100081375AGN    ONLINE       0     0     0

still have plenty of space left:

Code:
#zfs list

NAME                          USED  AVAIL     REFER  MOUNTPOINT
rpool                        91.2G   124G      104K  /rpool
rpool/ROOT                   82.5G   124G       96K  /rpool/ROOT
rpool/ROOT/pve-1             82.5G   124G     82.5G  /
rpool/data                     96K   124G       96K  /rpool/data
rpool/swap                   8.50G   131G     1.24G  -
tank                         2.01T  36.6T      104K  /tank
tank/config                  14.0M  36.6T     7.35M  /tank/config
tank/data                    1.99T  36.6T      192K  /tank/data
tank/data/migrate              96K  36.6T       96K  /tank/data/migrate
tank/data/subvol-100-disk-1  1.33G  6.74G     1.26G  /tank/data/subvol-100-disk-1
tank/data/subvol-100-disk-2   509G  36.6T      457G  /tank/data/subvol-100-disk-2
tank/data/subvol-100-disk-3  5.22G  7.13G      890M  /tank/data/subvol-100-disk-3
tank/data/subvol-104-disk-1  29.1G  36.6T     22.4G  /tank/data/subvol-104-disk-1
tank/data/subvol-117-disk-1  1.47G  6.53G     1.47G  /tank/data/subvol-117-disk-1
tank/data/subvol-132-disk-0   930M  7.11G      914M  /tank/data/subvol-132-disk-0
tank/data/subvol-149-disk-0  1.96G  6.04G     1.96G  /tank/data/subvol-149-disk-0
tank/data/subvol-172-disk-0  35.2G  71.5G     28.5G  /tank/data/subvol-172-disk-0
tank/data/subvol-174-disk-0  53.8G  47.2G     52.8G  /tank/data/subvol-174-disk-0
tank/data/subvol-176-disk-0  8.10G  42.2G     7.78G  /tank/data/subvol-176-disk-0
tank/data/subvol-196-disk-0  1.66G  6.51G     1.49G  /tank/data/subvol-196-disk-0
tank/data/subvol-197-disk-0  2.21G  6.02G     1.98G  /tank/data/subvol-197-disk-0
tank/data/vm-105-disk-0      4.68G  36.6T     4.68G  -
tank/data/vm-107-disk-0      54.7G  36.6T     53.0G  -
tank/data/vm-113-disk-0      10.8G  36.6T     10.4G  -
tank/data/vm-118-disk-0      17.0G  36.6T     17.0G  -
tank/data/vm-127-disk-0      52.5G  36.6T     42.2G  -
tank/data/vm-136-disk-0      12.2G  36.6T     12.2G  -
tank/data/vm-139-disk-0      26.5G  36.6T     25.2G  -
tank/data/vm-146-disk-0      2.53G  36.6T     2.41G  -
tank/data/vm-150-disk-0      18.2G  36.6T     17.1G  -
tank/data/vm-150-disk-1      2.30G  36.6T     2.30G  -
tank/data/vm-159-disk-0      29.6G  36.6T     28.6G  -
tank/data/vm-164-disk-0      5.01G  36.6T     4.20G  -
tank/data/vm-168-disk-0      29.1G  36.6T     28.3G  -
tank/data/vm-169-disk-0      16.8G  36.6T     16.8G  -
tank/data/vm-170-disk-0      30.7G  36.6T     30.7G  -
tank/data/vm-171-disk-0      35.2G  36.6T     35.2G  -
tank/data/vm-175-disk-0      24.4G  36.6T     18.3G  -
tank/data/vm-181-disk-0      16.3G  36.6T     16.3G  -
tank/data/vm-215-disk-0      26.0G  36.6T     25.8G  -
tank/data/vm-215-disk-1      6.91M  36.6T     6.91M  -
tank/data/vm-230-disk-0      24.1G  36.6T     19.3G  -
tank/data/vm-280-disk-0       111G  36.6T     92.8G  -
tank/data/vm-280-disk-1       209G  36.6T      172G  -
tank/data/vm-282-disk-0       184G  36.6T      150G  -
tank/data/vm-714-disk-0      42.8G  36.6T     41.7G  -
tank/data/vm-715-disk-0      43.5G  36.6T     40.4G  -
tank/data/vm-715-disk-1      32.0G  36.6T     32.0G  -
tank/data/vm-905-disk-0       241G  36.6T      202G  -
tank/data/vm-905-disk-1      83.2G  36.6T     31.6G  -
tank/data/vm-999-disk-1        56K  36.6T       56K  -
tank/encrypted_data           192K  36.6T      192K  /tank/encrypted_data

plenty of memory:

Code:
top - 19:30:42 up 4 days, 19:24,  2 users,  load average: 21.70, 15.44, 15.97
Tasks: 2536 total,   9 running, 2526 sleeping,   0 stopped,   1 zombie
%Cpu(s):  1.3 us,  6.5 sy,  0.0 ni, 85.6 id,  6.4 wa,  0.0 hi,  0.3 si,  0.0 st
GiB Mem :   1511.8 total,    952.9 free,    516.3 used,     42.6 buff/cache
GiB Swap:      8.0 total,      8.0 free,      0.0 used.    953.6 avail Mem

The only difference is the software versions.

If anyone has any clue on this issue, I'm sure there will be more of us that run into this. I'll gladly gather any info needed.

Thanks,
Elliot
 
After upgrading/changing the hard drives, the issue persisted.

With the ZFS version zfs-2.0.4-pve1 the issues disappeared.
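For anyone comparing: the userland and kernel module versions can be checked with (the zfs version subcommand exists since OpenZFS 0.8):
Code:
zfs version
modinfo zfs | grep -iw version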

I made the same tests over the past few days. Large file transfers are now back in the 100 MByte/s area and the direct write tests from SSD ZFS to the HDD ZFS work now with an average over 160 MByte/s.

I mark this issue as solved.

Thanks to everyone for helping on this.
 
I have a very similar problem.
Model Family: Seagate IronWolf
Device Model: ST4000VN008-2DR166

6 disks

Code:
root@pve-sas:~# zpool status
  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 03:59:00 with 0 errors on Sun Aug  8 04:23:02 2021
config:


        NAME                                 STATE     READ WRITE CKSUM
        zpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY97JTE  ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY99LKC  ONLINE       0     0     0
          mirror-1                           ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY97552  ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY9DEL3  ONLINE       0     0     0
          mirror-2                           ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY8WAR1  ONLINE       0     0     0
            ata-ST4000VN008-2DR166_ZGY9A9KY  ONLINE       0     0     0


errors: No known data errors


The drives are brand new (6 pcs).



Code:
pveperf
CPU BOGOMIPS:      118181.92
REGEX/SECOND:      2842557
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    2745.76 MB/sec
AVERAGE SEEK TIME: 0.03 ms
FSYNCS/SECOND:     362.48
DNS EXT:           69.63 ms
DNS INT:           35.37 ms (local)
root@pve-sas:~#
root@pve-sas:~#
root@pve-sas:~#
root@pve-sas:~# pveperf /zpool
CPU BOGOMIPS:      118181.92
REGEX/SECOND:      2936893
HD SIZE:           8108.14 GB (zpool)
FSYNCS/SECOND:     21.52
DNS EXT:           122.23 ms
DNS INT:           45.03 ms (local)
root@pve-sas:~#
root@pve-sas:~#
root@pve-sas:~# pveperf /zpool
CPU BOGOMIPS:      118181.92
REGEX/SECOND:      2921993
HD SIZE:           8108.14 GB (zpool)
FSYNCS/SECOND:     19.17
DNS EXT:           88.62 ms
DNS INT:           37.94 ms (local)
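To cross-check the fsync rate outside of pveperf, a synchronous 4k write test with fio could look like this (sketch; the fio package must be installed and the test file path is just an example):
Code:
fio --name=synctest --filename=/zpool/fio-test.bin --ioengine=psync \
    --rw=write --bs=4k --size=1G --fsync=1 --runtime=60 --time_based
rm /zpool/fio-test.bin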



Code:
uname -a
Linux pve-sas 5.11.22-3-pve #1 SMP PVE 5.11.22-6 (Wed, 28 Jul 2021 10:51:12 +0200) x86_64 GNU/Linux







The problem shows up while a recording is being pulled from an external source:
every 3-5 minutes there are freezes.
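To see which vdev stalls during those freezes, per-device latency can be watched with something like this (the -l flag needs OpenZFS 0.7 or newer; "zpool" is the pool name here):
Code:
zpool iostat -v -l zpool 5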
 
