Excessive HDD activity: sdb and sdc constantly reading and writing

Oanh
Member · Apr 12, 2020
I am using the most current version of Proxmox, and everything was fine until last night.
I am using ZFS on two 8TB HDDs (mirrored).
I woke up this morning and the HDD activity light on my computer is solid.
I checked netdata and this is what I see.
No VMs are even running.

I am not sure what happened overnight.

Any help would be greatly appreciated. Someone else set this up for me, and I have no idea how to find out what is reading from/writing to the disks so much.

PS: I am using netdata for monitoring (the screenshot).
 

Attachments
  • Screen Shot 2020-04-12 at 10.53.52 AM.png (58.2 KB)
Try with "iotop".
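For example (standard iotop flags, shown just as a starting point):
Code:
# only show processes actually doing I/O, per process, with accumulated totals
iotop -o -P -a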
Thank you for the recommendation.
This is what I saw, and I do not understand what "z_rd_int" is.
I am assuming that it has something to do with ZFS and Read Disk and Initialize, but I do not know how to stop it or what caused it.

I looked at a couple of threads and ran zpool status with the following result:

Code:
root@myserver:~# zpool status
  pool: pool1
 state: ONLINE
  scan: scrub in progress since Sun Apr 12 00:24:01 2020
        3.00T scanned at 8.18G/s, 57.4G issued at 156M/s, 5.39T total
        0B repaired, 1.04% done, 0 days 09:56:21 to go
config:

        NAME                                       STATE     READ WRITE CKSUM
        pool1                                      ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            ata-WDC_WD80EMAZ-00M9AA0_VAGDUWBL      ONLINE       0     0     0
            ata-WDC_WD80EMAZ-00M9AA0_VAGLGALL      ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:13:36 with 0 errors on Sun Apr 12 00:37:38 2020
config:

        NAME                                             STATE     READ WRITE CKSUM
        rpool                                            ONLINE       0     0     0
          ata-WDC_WDS100T2B0A-00SM50_20036E808189-part3  ONLINE       0     0     0

What is "scrub" and why is it happening now?
 
You probably had your periodic zfs scrub.
Your install has a default cronjob for that, /etc/cron.d/zfsutils-linux, with the following content:
Code:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Scrub the second Sunday of every month.
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
You can check the last scrub status with zpool status
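If the scrub I/O is too disruptive, you can also stop it and start it again manually later (using pool1 from your output):
Code:
# cancel the running scrub; it will not resume by itself
zpool scrub -s pool1
# kick off a new scrub later, e.g. during off-hours
zpool scrub pool1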
 
Thx for the reply.
How do I set the schedule for the ZFS scrub (e.g. monthly, every 2 months, etc.)?
 
I would leave the file as is; it's installed by default with a general setting that works for most people.
But you can of course change the file if you want.
That would be a standard cron adjustment, as described in the crontab man page:
https://manpages.debian.org/buster/cron/crontab.5.en.html
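For example, a sketch of the same job running only every second month (untested, adjust the month list to taste):
Code:
# Scrub the second Sunday of every second month.
24 0 8-14 2,4,6,8,10,12 * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub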

If you haven't configured it yet, you can get an email message on scrub and other ZFS events with zed, as described in the documentation:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_zfs
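As a minimal sketch, the relevant zed settings live in /etc/zfs/zed.d/zed.rc, something like:
Code:
# /etc/zfs/zed.d/zed.rc (excerpt)
ZED_EMAIL_ADDR="root"     # where event mails are sent
ZED_NOTIFY_VERBOSE=1      # also notify on successful scrub finishes, not only problems
Restart the service afterwards (systemctl restart zfs-zed) for the changes to take effect.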
 
Thank you so much for this info.
I will look into it.