What happens if you simply try to replace the device with the /dev/disk/by-id path?
zpool replace rpool /dev/nvme7n1p3 /dev/disk/by-id/nvme-eui.<rest-of-ID-here>-part3
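If ZFS accepts that, the pool should start resilvering onto the new device, and you can watch the progress with (rpool being the pool name from your command):

zpool status rpool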
I suspect the discrepancy you see is related to the refreservation (the space reserved for each zvol). That space isn't actually used, so the PVE UI shows ~120 GB as "free," but it is reserved, which is why you get an error when you try to run replication. That is my guess, anyway.
Your local-zfs1 pool is full:
I'm not sure why the Disks -> ZFS UI shows 120 GB free, though. Maybe it isn't accounting for snapshot usage or something like that. Someone a bit more knowledgeable than I am about the nuances there will have to weigh in.
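If you want to see where the space is actually going, these should break down usage and reservations per dataset (local-zfs1 being the pool in question):

zfs list -o space -r local-zfs1
zfs get -r refreservation local-zfs1

The USEDREFRESERV column from the first command is the part the UI may be glossing over.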
My guess is that a fresh install of PVE on the new drives, followed by reimporting the config and guests, would be the better approach, but you could also replace one NVMe at a time, failing and replacing each drive in sequence. You'd need to copy...
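Roughly, per drive, it would look something like this (all device names below are placeholders, the pool is assumed to be rpool as in the replace command above, and the partition numbers assume the standard PVE boot/ZFS layout, so double-check against the ZFS chapter of the admin guide first):

# copy the partition table from a still-healthy drive, then give the new disk fresh GUIDs
sgdisk /dev/disk/by-id/nvme-HEALTHY -R /dev/disk/by-id/nvme-NEW
sgdisk -G /dev/disk/by-id/nvme-NEW
# swap the new drive's ZFS partition into the pool and watch the resilver
zpool replace rpool /dev/disk/by-id/nvme-OLD-part3 /dev/disk/by-id/nvme-NEW-part3
zpool status rpool
# make the new drive bootable as well (systemd-boot/UEFI case)
proxmox-boot-tool format /dev/disk/by-id/nvme-NEW-part2
proxmox-boot-tool init /dev/disk/by-id/nvme-NEW-part2

Wait for each resilver to finish before pulling the next drive.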
I don't think I would do it that way. I would install Tailscale into a container and use that "node" to get remote access to your PVE host as needed. I wouldn't mess with the PVE host's DNS.
I don't use Tailscale myself, but it could potentially be as simple as using Tailscale to connect to the LXC guest, then accessing the PVE web UI via its usual IP and port (assuming you don't have any firewall restrictions in place).
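I haven't done this myself, but from what I understand the main wrinkle is that the container needs access to /dev/net/tun. For a container with (hypothetical) ID 101, that usually means adding these two lines to /etc/pve/lxc/101.conf and restarting it:

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

After that, Tailscale installs inside the container like on any Debian/Ubuntu machine. To reach the PVE host's LAN address from outside, you'd probably also want the container to advertise your local subnet (tailscale up --advertise-routes=<your LAN CIDR>) and approve that route in the Tailscale admin console.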
Known issue that's being addressed currently. You can try an experimental kernel build if you're brave:
https://forum.proxmox.com/threads/slow-memory-leak-in-6-8-12-13-pve.168961/post-803023