Quick wear-out on raid1 - Looking for suggestions

> But still would appreciate some guidance on how to disable it globally? I can see a lot of entries for "atime"

For ZFS, note that you can set 'atime=off' at the top level when creating the pool and child datasets will inherit it. Otherwise the setting is per-dataset.


zpool create -o ashift=12 -o autoreplace=off -o autoexpand=on -O atime=off -O compression=lz4 \
  $zpoolname [topology] [devices]
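
To confirm the setting took and that child datasets inherit it, something like this should work once the pool exists (reusing the $zpoolname placeholder from above):

Code:
zfs get -r atime $zpoolname
# children should show SOURCE as "inherited from <pool>"; the pool root itself shows "local"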

For virtual filesystems you don't have to worry, but for ext4 / xfs and the like you would put it in the fstab options (defaults,noatime,rw), then remount the filesystem in place with ' mount -o remount,rw / ' or reboot.
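
As a rough sketch, the fstab entry for an ext4 root might look like this (the device path is illustrative):

Code:
/dev/mapper/pve-root  /  ext4  defaults,noatime  0  1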

For Windows guests - NOTE: run this as Administrator; it applies to ALL Windows disks:

BEGIN noatime.cmd

fsutil behavior set disablelastaccess 1
pause
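
To verify it afterwards, the current value can be queried; a result of 1 (or 3 on newer Windows builds, where it is system-managed) means last-access updates are disabled:

Code:
fsutil behavior query disablelastaccess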
 
Well, it's already running, so this is for future reference. Thoughts on adding it to GRUB as a global setting?

GRUB_CMDLINE_LINUX_DEFAULT="quiet noatime"
 

I've never done it that way, and it wouldn't be reliable if something replaced GRUB... YMMV; I would do it the way I described.
 
@Kingneutron I apologize for my ignorance, but I'm not sure how. You're describing doing this while creating the zpool, but the RAID pool where the OS sits is already created, so I can't go that route (I think?!).

Some KB articles talk about modifying fstab and remounting those mounts, but I don't know if this is a valid way, i.e.:
Code:
/dev/mapper/pve-root / ext4 defaults,noatime 0 1
Code:
mount -o remount,noatime /

EDIT: So it seems like fstab only applies to non-ZFS filesystems.

I think I know now what you meant by the datasets (again, I'm new to this).
zfs list will show all the datasets, and from there we could disable atime like this?

Code:
sudo zfs set atime=off rpool/ROOT/pve-1

Would that be the correct approach, @Kingneutron?
 
Also, checking the mount I could only see "relatime" - assuming atime is not enabled then?

Code:
mount | grep ' / '
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,posixacl,casesensitive)
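
One way to check what ZFS itself has set, rather than inferring it from the mount options (dataset name taken from the output above):

Code:
zfs get atime,relatime rpool/ROOT/pve-1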
 
> Would that be the correct approach, @Kingneutron?

Yep

relatime still updates atime, just not on every access - roughly once a day, or when the file has been modified since it was last read, from what I know. I just do atime=off everywhere and haven't had any issues.
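
A minimal sketch of that "atime=off everywhere" approach on an existing pool, assuming the pool is named rpool as above: set it once at the pool root so datasets inherit it (unless they have a local override), then verify:

Code:
zfs set atime=off rpool
zfs get -r -o name,value,source atime rpool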
Thank you! I'll be taking a snapshot of that RAID's performance today, before and after the migration/improvements, and I'll share it here to provide artifacts.
 
Migrated the vFW to NVMe and there is nothing left on raid1, so can someone explain why zpool iostat -v still shows writes at ~10M, but when I run it at a 5-second interval it shows only small amounts?

[attached screenshot: zpool iostat -v output]


Either way, creating a timestamp:
  1. 8/29 - Disk at 35% usage (77 days uptime) with vpfSense on raid1 + cluster services ON, with 0.5% degradation daily (see the SMART sketch below)
  2. 8/30 - TBD
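
If the 35% above refers to SMART wear rather than capacity, a hedged sketch of reading it from the CLI; the device paths are placeholders and the exact attribute names vary by drive model:

Code:
smartctl -a /dev/sda | grep -iE 'wear|percent|lifetime'
smartctl -a /dev/nvme0 | grep -i 'percentage used'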
 
> Migrated the vFW to NVMe and there is nothing left on raid1, so can someone explain why zpool iostat -v still shows writes at ~10M?

Read the man page for zpool-iostat and check out the -y flag, and leave zpool iostat -v 5 running ;-)


This is what I use:

BEGIN ziostat.sh
#!/bin/bash
# -y: skip the since-boot summary; -T d: timestamp each report; -v: per-vdev detail; 5-second interval; $1: optional pool name
zpool iostat $1 -y -T d -v 5
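
Usage would presumably be something like the following; the pool name is a placeholder, and with no argument it reports on all pools:

Code:
chmod +x ziostat.sh
./ziostat.sh rpool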
 
This definitely looks better; I wish I had taken that capture before the move.
[attached screenshot: zpool iostat output after the move]
 
Hi, quick update.

Moving vpfSense did the trick. The degradation is no longer there, and the drives are still at 35%. I also removed the cluster services, but I think the vFW was the root cause. Thank you everyone for chipping in!

Perhaps a pfSense deployment should not use the default ZFS setup, or should have a separate SSD for it alone.
 
