I'm using Proxmox 6.2.4 and have a ZFS pool on the server with two 8 TB disks (a WD Red and a Seagate). It was previously used for Proxmox VMs, but now it mainly holds the data of a Nextcloud instance running in Docker on the same host. Since the server runs 24/7, I want the disks to go into standby, because they're not always needed.
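For context, the end state I'm aiming for (once the wake-ups are solved) is roughly this hd-idle call, spinning both pool disks down after 30 minutes of inactivity. The 1800 s timeout is just an example value; the device names match the pool members mentioned below, and the flags mirror the debug call I use further down:

Code:
# default idle time 0 = never spin down, then 1800 s idle time for both ZFS pool members
hd-idle -i 0 -a /dev/sda -i 1800 -a /dev/sdd -i 1800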
It seems that the disks never go into standby on their own. When I set one of the ZFS pool disks (sda) to standby manually, it got woken up just a few seconds later:
Code:
# hdparm -y /dev/sda
/dev/sda:
issuing standby command
# hdparm -C /dev/sda
/dev/sda:
drive state is: standby
# hdparm -C /dev/sda
/dev/sda:
drive state is: standby
# hdparm -C /dev/sda
/dev/sda:
drive state is: active/idle
Each command was run with roughly 1 s of waiting time in between. Since I can't figure out why this happens, I stopped everything (including the complete Docker daemon) except Proxmox. Now only Proxmox is running, with one VM that is not located on the ZFS pool (so there should be no activity on sda). I tried hd-idle in debug mode:

Code:
hd-idle -i 0 -a /dev/sda -i 120 -l /var/log/hd-idle.log -d
and it shows the I/O activity every few seconds. The ZFS pool disks show read activity on EVERY check:
Code:
probing sda: reads: 12760648, writes: 3174104
probing sda: reads: 12763720, writes: 3174104
probing sda: reads: 12765256, writes: 3174104
probing sda: reads: 12766792, writes: 3174104
The same happens on the other ZFS member, sdd. It seems that those reads prevent the disks from going into standby. Why does this happen?

I can't explain where the reads come from. Does Proxmox run some kind of FS checks?
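For reference, these are the kinds of checks I'd expect to show whether ZFS itself sees the reads and which process issues them. The pool name zfs-storage is an assumption based on my storage config below, and iotop has to be installed separately:

Code:
# per-device I/O statistics for the pool, refreshed every 5 seconds (pool name assumed)
zpool iostat -v zfs-storage 5
# accumulated I/O per process, only showing processes that actually do I/O
iotop -aoP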
As I said, there is nothing on Proxmox that could explain those reads; no VM is running on the ZFS pool. I only have it registered in /etc/pve/storage.cfg because there are ISO images that I don't want to place on the SSDs:
Code:
dir: zfs-storage
path /zfs-storage/proxmox
content images,iso
But they're not really in use currently. I also tried commenting out those ZFS lines in /etc/pve/storage.cfg, and I also tried commenting them back in and disabling the storage with pvesm set zfs-storage --disable 1 instead (see below).
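For reference, the disable/re-enable via pvesm looks roughly like this; the --disable 0 step is just the counterpart, and pvesm status should list the storage state:

Code:
pvesm set zfs-storage --disable 1   # take the storage offline for Proxmox
pvesm status                        # check the storage state
pvesm set zfs-storage --disable 0   # re-enable it afterwards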
In both cases I still got a lot of reads on every probe.

Other things I tried:
- Set use_lvmetad = 1 in the global section of /etc/lvm/lvm.conf and rebooted the entire server (I read in a post that this could cause the reads: https://forum.proxmox.com/threads/pvestatd-awakes-hdd-immediately.15344/)
- Ran systemctl stop pvestatd because I found this thread where the statistics collection seems to be the reason HDDs don't go to sleep: https://forum.proxmox.com/threads/pvestatd-doesnt-let-hdds-go-to-sleep.29727/ After the service was stopped, I saw no more data in the web UI, so stopping it worked. (Both changes are sketched below.)
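Roughly, those two changes looked like this (the lvm.conf line goes inside the existing global { ... } section):

Code:
# /etc/lvm/lvm.conf -- inside the global section
global {
    use_lvmetad = 1
}

# stop the Proxmox statistics daemon that polls all storages
systemctl stop pvestatd
systemctl status pvestatd   # confirm it is no longer running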
None of this works: I still see increasing read counts in the hd-idle probes... Since they show up on all disks (also on SSDs without ZFS), it seems that something else in Proxmox or Debian is causing this problem.