As a bit of background: the original commit 21800a71a79c7cf49108e22781d2f34be87b1efd, which introduced (but did not yet activate) the change, refers to the bug that tracked it: #2079.

The rationale is missing there as well, but I'm happy to give it here (and copy it over to the bug for easier future archeology).
The authkey is used for signing (and verifying) the authentication tickets used for the API (and thus the web UI, noVNC sessions, ...). If you know the authkey, you can create arbitrary tickets for arbitrary users, valid for arbitrary points in time. That's the reason only root can read it, and only pvedaemon can create new tickets.

The problem is that if this key gets lost or falls into the wrong hands, the 'finder' can create arbitrary tickets for any user at arbitrary points in the future, and there was no built-in way to expire the key. Since there is no reason to keep a static key at all, we implemented automatic, always-on rotation. Now a lost/stolen authkey can be misused for at most 24h+2h (key lifetime plus ticket lifetime), instead of basically forever. This means that if an attacker finds a backup of /etc/pve from last week, they can't take over the cluster anymore by simply generating a valid access ticket. The same applies to other, similar scenarios (a let-go, disgruntled ex-employee, for example).
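To illustrate why rotation bounds the exposure window, here is a minimal sketch of the scheme. It is not PVE's actual implementation (PVE's authkey is an RSA keypair, and all names and constants here are illustrative); it uses HMAC instead, but the rotation logic is analogous: the verifier only ever trusts the current key and its immediate predecessor, so a key older than one rotation period can no longer produce acceptable tickets.

```python
import hashlib
import hmac
import os
import time

TICKET_LIFETIME = 2 * 3600   # a ticket is valid for 2 hours after signing
KEY_LIFETIME = 24 * 3600     # the signing key is rotated every 24 hours


class AuthKeyRing:
    """Holds the current signing key and its predecessor.

    The predecessor must still be accepted for verification, because
    tickets signed shortly before a rotation are still within their
    own 2-hour lifetime.
    """

    def __init__(self):
        self.current = os.urandom(32)
        self.previous = None
        self.rotated_at = time.time()

    def maybe_rotate(self, now=None):
        """Rotate the key once it is older than KEY_LIFETIME."""
        now = time.time() if now is None else now
        if now - self.rotated_at >= KEY_LIFETIME:
            self.previous = self.current
            self.current = os.urandom(32)
            self.rotated_at = now

    def sign(self, user, now=None):
        """Issue a ticket: 'user:timestamp:signature'."""
        now = int(time.time() if now is None else now)
        payload = f"{user}:{now}".encode()
        sig = hmac.new(self.current, payload, hashlib.sha256).hexdigest()
        return f"{user}:{now}:{sig}"

    def verify(self, ticket, now=None):
        """Accept a ticket if it is fresh and signed by a trusted key."""
        now = time.time() if now is None else now
        user, ts, sig = ticket.rsplit(":", 2)
        if now - int(ts) > TICKET_LIFETIME:
            return False  # the ticket itself has expired
        payload = f"{user}:{ts}".encode()
        for key in (self.current, self.previous):
            if key is not None and hmac.compare_digest(
                hmac.new(key, payload, hashlib.sha256).hexdigest(), sig
            ):
                return True
        return False
```

The bound follows directly: a stolen key stops being `current` or `previous` after at most one rotation period (24h), and any ticket it signed before that expires at most 2h later, hence the 24h+2h worst case.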
The reason the change was activated with PVE 6.0 is simply that it was a point in time at which we knew that all nodes in a cluster would have support for verifying rotated tickets, so we could avoid complicated logic such as checking other cluster nodes' package versions.
Kudos to you for monitoring key changes like that (I think most users/admins are oblivious to which keys to their kingdoms are floating around). Maybe it helps you to know that pvedaemon logs when it rotates a key, so you can correlate that with the changed file to sort "benign" from "irregular" changes. It's probably a bit involved, since the rotation happens on one node but is visible on the whole cluster (if your systems are clustered).
Making the lifetime settable via datacenter.cfg would of course be possible, but it would effectively mean providing a 'turn off this security mechanism' switch, which I'd only do if there were a more pressing reason than 'we might have to adapt our monitoring to filter out false positives'. E.g., for migration we have a switch to use a plain-text tunnel instead of SSH, since there are very real performance benefits, and there are network setups where not requiring encryption is a valid choice (e.g., a mesh network in a small cluster where there is no equipment between the nodes that can snoop, or an already existing wireguard-protected interconnect, or ...).