[ZFS] Proxmox rpool Root Mount Point on NVMe Mirror: Cannot Disable "relatime" Property?

Sep 1, 2022
tl;dr: is it normal that I can't disable relatime on the rpool/ROOT/pve-1 root (/) mountpoint (NVMe mirror)?

I'm trying to disable things that cause needless writes, to prolong drive endurance, and I've never run into this behavior before. I ran zfs set relatime=off rpool and now see the following, even after a reboot and a full shutdown/power-on.

My semi-educated guess is that the dataset/mountpoint needs relatime and it's hardcoded to stay that way?


# zpool status rpool
  pool: rpool
 state: ONLINE
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            nvme-CT1000P3PSSD8_2403467A9FBE-part3  ONLINE       0     0     0
            nvme-CT1000P3PSSD8_240446AC86A9-part3  ONLINE       0     0     0

errors: No known data errors

So that all looks good.

# zfs get relatime
NAME              PROPERTY  VALUE     SOURCE
rpool             relatime  off       local
rpool/ROOT        relatime  off       inherited from rpool
rpool/ROOT/pve-1  relatime  on        temporary
rpool/data        relatime  off       inherited from rpool
rpool/var-lib-vz  relatime  off       inherited from rpool
 
When I try it, I get this:
Code:
~# zfs  get all  rpool/ROOT/pve-1  | grep time
rpool/ROOT/pve-1  atime                 off                    inherited from rpool
rpool/ROOT/pve-1  relatime              on                     inherited from rpool

~# zfs  set relatime=off rpool
~# zfs  get all  rpool/ROOT/pve-1  | grep time
rpool/ROOT/pve-1  atime                 off                    inherited from rpool
rpool/ROOT/pve-1  relatime              on                     temporary

Perhaps "temporary" means I have to reboot. I will not do this...
 
> Perhaps "temporary" means I have to reboot. I will not do this...
That's what it is supposed to mean.

Like I said, the oddity is that the "temporary" state survives both a reboot and a shutdown and power on.
 
Maybe interesting: https://docs.oracle.com/cd/E36784_01/html/E36835/gamnt.html

Using Temporary Mount Properties

If any of the mount options described in the preceding section are set explicitly by using the –o option with the zfs mount command, the associated property value is temporarily overridden. These property values are reported as temporary by the zfs get command and revert back to their original values when the file system is unmounted. If a property value is changed while the file system is mounted, the change takes effect immediately, overriding any temporary setting.
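If that's right, a temporary source should clear on the next unmount/remount. A quick way to check, as a sketch, on a dataset that can actually be unmounted while the system is running (rpool/data is just an example here and must have no open files):

```shell
# Unmounting and remounting a dataset drops any temporary (-o) mount
# overrides, so the stored property value takes effect again:
zfs umount rpool/data
zfs mount rpool/data
zfs get -o name,property,value,source relatime rpool/data

# The root dataset is the odd one out: it is mounted very early in the
# boot process (by the initramfs, before the normal zfs mount service
# runs), so whatever mount options are passed at that stage would come
# back as "temporary" on every boot.
```

That last point is my reading of the situation, not something the Oracle doc states; it would at least be consistent with "temporary" surviving a reboot.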
 
...What a short article. -_- I'm going to have to dig deeper into the man pages. Thanks for pointing me in that direction.

But that makes sense and matches the behavior I was seeing before I restarted, as I didn't try to remount it.

This is what it looked like after a reboot, and then still after a complete shutdown and restart. Before the reboot, after I enabled relatime, everything was temporary.
Code:
# zfs get relatime
NAME              PROPERTY  VALUE     SOURCE
rpool             relatime  off       local
rpool/ROOT        relatime  off       inherited from rpool
rpool/ROOT/pve-1  relatime  on        temporary
rpool/data        relatime  off       inherited from rpool
rpool/var-lib-vz  relatime  off       inherited from rpool

So, the initial reboot remounted the rpool and ... most ... of the child datasets inherited properly, but even on a remount, the child dataset that holds the linux root partition is like, nah.

Meanwhile, atime turned off on everything without a hitch. One thing I'm not clear on yet: If atime is off, does relatime even do anything, or does it inherit ... being off, no matter what its settings are? I should probably ask that one on the ZFS reddit or something.
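For what it's worth, my understanding from zfsprops(7) is that relatime only modifies *how* access times are updated when atime=on; with atime=off, no access-time updates happen at all, so the relatime value is cosmetic. A quick way to confirm nothing still has atime enabled (dataset names as in this thread):

```shell
# With atime=off everywhere, relatime has no effect regardless of its
# value. List both properties recursively for the whole pool:
zfs get -r -o name,property,value,source atime,relatime rpool
```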
 
Oh, good. That's what I was hoping.

I wish it didn't look so confusing, but at least it's set up properly ... or at least as I intended.

EDIT: I set relatime back to ON on rpool. It doesn't have any effect, and looks cleaner and less confusing that way, which will be beneficial to me the next time I'm futzing around at 2 am trying to understand something and getting confused by random ZFS things again.

:p
 
I am building my first PVE with ZFS with a somewhat personalised structure and would like to be sure about the ZFS flags recommended for each type of data.

I am particularly concerned about:
Code:
atime
relatime
dnodesize
overlay

In this thread you end up with the following, which I understand is correct as long as atime is off everywhere:

Code:
rpool             relatime  off       local
rpool/ROOT        relatime  off       inherited from rpool
rpool/ROOT/pve-1  relatime  on        temporary
rpool/data        relatime  off       inherited from rpool
rpool/var-lib-vz  relatime  off       inherited from rpool

So I am guessing that if the data dataset will be holding the thin-provisioned CT and KVM disks, it is fine to have relatime off there, in whatever dataset I end up using for that.

As far as dnodesize and overlay go, the self-installed PVE rpool was created with:
dnodesize: legacy
overlay: on

Just wondering about the first, particularly as I've generally read "auto" is recommended.
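My own hedged understanding, worth double-checking against zfsprops(7): dnodesize only affects newly created files, and "legacy" is usually kept on the dataset the bootloader has to read (older GRUB could not handle pools using large dnodes), while "auto" is the common recommendation for pure data datasets, particularly together with xattr=sa. So something like this, where rpool/data is just an example target:

```shell
# dnodesize applies to new files only; existing files are not rewritten.
# Keep "legacy" on the boot/root dataset, switch data datasets to "auto":
zfs set dnodesize=auto rpool/data
zfs get -r dnodesize rpool
```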

I am trying to optimize a proposed structure like (just 2 disks available)

Code:
rpool
  ├── ROOT
  │    └── pve-1        # root
  ├── coredata
  │    ├── home      
  │    ├── data         #host important data
  │    └── vmthin    
  └── tempdata
      └── backups   

quick                         #striped
  ├── cache        
  ├── tmp           
  └── logs
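If it helps to make the proposal concrete, the tree above would translate into roughly these commands. This is only a sketch: the disk IDs are placeholders, and per-dataset options are deliberately left out since that is exactly what is being asked about:

```shell
# Datasets under the existing mirrored rpool:
zfs create rpool/coredata
zfs create rpool/coredata/home
zfs create rpool/coredata/data        # host important data
zfs create rpool/coredata/vmthin      # parent for thin CT/VM disks
zfs create rpool/tempdata
zfs create rpool/tempdata/backups

# "quick" as a separate striped pool (placeholder disk IDs):
zpool create quick /dev/disk/by-id/DISK-A /dev/disk/by-id/DISK-B
zfs create quick/cache
zfs create quick/tmp
zfs create quick/logs
```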
 
