Hi Everyone
Since upgrading to PVE 7.2, the 4 x Seagate HDDs in my ZFS raidz1 pool keep spinning down into STANDBY mode.
I want to stop this from happening, for a number of reasons; the disks should stay permanently in IDLE/ACTIVE while the Proxmox host is running.
I do not know what is causing the spindown to standby, so I cannot turn it off. Is it a new ZFS pool setting in version 2.1.6, or has some other default changed from keeping the disks active to shutting them down after a short period?
The host exposes the zpool to a VM via Samba shares, which the VM uses for its storage. If the VM is inactive on the shares for around 5 minutes, the host's pool disks spin down into standby; when the VM next accesses the shares, the disks perform a staggered spinup, which delays access and freezes the VM for 30 seconds to a minute or more.
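For reference, this is how I have been confirming the spindown from the host (a quick check, assuming the four pool disks map to /dev/sda through /dev/sdd; hdparm -C reads the power state without waking the drive):
Code:
# report the power state of each pool disk without spinning it up
for d in /dev/sd{a..d}; do
    echo -n "$d: "
    hdparm -C "$d" | grep 'drive state'
done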
I have tried using /etc/hdparm.conf (apm = 255 and spindown_time = 0 for each disk; full stanza below), but Proxmox does not seem to respect the settings and the drives still go into standby.
I assume smartmontools may be the way to go, but after reading the documentation I cannot make heads or tails of how to actually keep the drives permanently active, or how to make such settings persist across reboots.
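The closest thing to an answer I found in the smartd man page: smartd cannot change a drive's power settings, but with '-n never' it will poll a disk even when it is in a low-power state, which spins it up, so a poll interval shorter than the ~5-minute spindown might work as a crude keep-alive. A sketch of what I think that would look like on Debian/Proxmox (untested on my side, and a workaround rather than a fix):
Code:
# /etc/smartd.conf -- monitor all disks and poll them even when in standby
DEVICESCAN -a -n never

# /etc/default/smartmontools -- shorten the poll interval from the 1800 s default to 240 s
smartd_opts="--interval=240"
followed by restarting the smartd service. But I would much rather find and disable whatever is now spinning the disks down in the first place.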
I also do not understand why, with the same configuration, the disks never entered standby on previous Proxmox versions (which is what I want), yet on the latest version they go into standby within about 5 minutes (which I don't want). If anyone can explain that change, I expect I would have my solution.
pveversion -v
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.64-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-5.15: 7.2-13
pve-kernel-helper: 7.2-13
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
ceph-fuse: 15.2.17-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-3
libpve-guest-common-perl: 4.1-4
libpve-http-server-perl: 4.1-4
libpve-storage-perl: 7.2-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-3
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-6
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
/etc/hdparm.conf settings that did not work (same stanza for all four disks)
Code:
/dev/disk/by-id/ata-ST12000NM0007-2A1101_ZCH06778 {
    # advanced power management:
    #   255 = disabled
    #   1-127 = spindown allowed
    #   128-254 = high performance, no spindown
    apm = 255
    # standby timer in 5-second units; min 0 (disabled), max 255
    spindown_time = 0
    # acoustic management
    acoustic_management = 254
}
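For completeness, my reading of the hdparm man page is that the one-shot runtime equivalent of that stanza would be:
Code:
# -B 255 disables Advanced Power Management; -S 0 disables the standby (spindown) timer
hdparm -B 255 -S 0 /dev/disk/by-id/ata-ST12000NM0007-2A1101_ZCH06778
(If a drive rejects -B with "APM_level = not supported", I assume the timer lives somewhere else, e.g. in the drive's own power-management firmware.)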
smartctl -i -n standby /dev/sda (same for all disks)
Code:
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.64-1-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
Device is in STANDBY mode, exit(2)
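(To wake a drive for further testing, any read that actually reaches the platters works; iflag=direct bypasses the page cache so the read hits the disk:)
Code:
dd if=/dev/sda of=/dev/null bs=4096 count=1 iflag=direct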
zpool status
Code:
  pool: zp0
 state: ONLINE
config:

        NAME                                   STATE     READ WRITE CKSUM
        zp0                                    ONLINE       0     0     0
          raidz1-0                             ONLINE       0     0     0
            ata-ST12000VN0007-2GS116_ZCH029ST  ONLINE       0     0     0
            ata-ST12000VN0007-2GS116_ZCH087RV  ONLINE       0     0     0
            ata-ST12000VN0007-2GS116_ZCH0886L  ONLINE       0     0     0
            ata-ST12000VN0007-2GS116_ZJV01MMP  ONLINE       0     0     0

errors: No known data errors
Any help would be greatly appreciated.
AJ