Recent content by gridiron

  1.

    CPU0 BANK8 CMCI storm

    Sounds like it might be a hardware issue on the platform: https://access.redhat.com/solutions/2710451
  2.

    Proxmox crashing seemingly at random

    Are you running on a UPS? It's possible you are experiencing power fluctuations/momentary outages causing reboots.
  3.

    zfs pool problem?

    What happens if you simply try to replace the device with the /dev/disk/by-id path? zpool replace rpool /dev/nvme7n1p3 /dev/disk/by-id/nvme-eui.<rest-of-ID-here>-part3
  4.

    How do I Prevent OOMKill errors in Proxmox 9.0.11 caused by ZFS 2.3.4

    Did you also set the min value (in case it is larger than the max value set) and run update-initramfs -u?
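    For context, a minimal sketch of what setting both limits might look like (the values are illustrative placeholders, not a recommendation; size them to your own RAM, and keep zfs_arc_min at or below zfs_arc_max or the min will not take effect):

    ```
    # Sketch only: example ARC limits in /etc/modprobe.d/zfs.conf
    cat > /etc/modprobe.d/zfs.conf <<'EOF'
    options zfs zfs_arc_max=8589934592   # example 8 GiB cap
    options zfs zfs_arc_min=2147483648   # example 2 GiB floor (must be <= max)
    EOF
    update-initramfs -u   # bake the new module options into the initramfs, then reboot
    ```

    The min matters because if the existing zfs_arc_min is higher than your new zfs_arc_max, the max is silently ignored.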
  5.

    [SOLVED] PVE 9 Single Server => New Hardware. Is there a hardware migration guide?

    Copying over relevant configs (/etc/pve and whichever others you've modified and want to retain), reinstalling any packages added manually, etc.
  6.

    ZFS Replication Error: out of space

    I suspect the discrepancy you see is related to the refreservation (the space reserved for each zvol). It isn't actually used, so the PVE UI shows ~120 GB as "free," but it is reserved, and therefore you get an error when replication tries to allocate space. That is my guess but again it would be good to have someone...
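    To check this directly, something along these lines shows how much space each dataset's refreservation holds back (the pool name local-zfs1 is from the thread; the vm-100-disk-0 dataset name below is a hypothetical example):

    ```
    # Sketch, assuming ZFS tools on the PVE host:
    # show reserved space per dataset
    zfs get -r refreservation,usedbyrefreservation local-zfs1

    # if a zvol can safely be thin-provisioned, dropping its reservation
    # frees that space (understand the overcommit trade-off first)
    zfs set refreservation=none local-zfs1/vm-100-disk-0
    ```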
  7.

    ZFS Replication Error: out of space

    Can you post the output of this command? zfs list -r -o name,used,avail,refer,usedbysnapshots,usedbychildren,usedbyrefreservation local-zfs1
  8.

    ZFS Replication Error: out of space

    Your local-zfs1 pool is full. I'm not sure why the Disks -> ZFS UI shows 120 GB free, though; maybe it is not accounting for snapshot usage or something like that. Someone a bit more knowledgeable than I am about the nuances there will have to comment.
  9.

    [SOLVED] PVE 9 Single Server => New Hardware. Is there a hardware migration guide?

    My guess is that doing a fresh install of PVE on the new drives and reimporting the config and guests would be a better approach, but you could also replace one NVMe at a time by failing and replacing each drive sequentially. You'd need to copy the partition table, let the ZFS mirror rebuild...
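    A hedged sketch of the drive-at-a-time route on a ZFS-mirrored PVE boot pool (device names, partition numbers, and the pool name rpool are placeholders; verify against your own layout, and note the ESP steps apply to systems managed by proxmox-boot-tool):

    ```
    # 1. Clone the partition table from the healthy disk to the new disk,
    #    then randomize the new disk's GUIDs so they don't collide
    sgdisk /dev/nvme0n1 -R /dev/nvme1n1
    sgdisk -G /dev/nvme1n1

    # 2. Swap the new ZFS partition into the mirror and let it resilver
    zpool replace -f rpool /dev/nvme1n1p3 /dev/nvme1n1p3   # old-dev new-dev; adjust paths
    zpool status rpool                                     # watch the resilver

    # 3. Make the new disk bootable (ESP partition number is a placeholder)
    proxmox-boot-tool format /dev/nvme1n1p2
    proxmox-boot-tool init /dev/nvme1n1p2
    ```

    Repeat for the second drive once the first resilver completes.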
  10.

    ZFS Replication Error: out of space

    Can you post the results of zfs get all?
  11.

    Tailscale on PVE host

    I don't think I would do it that way. I would install Tailscale into a container and use that "node" to get remote access to your PVE host as needed. I wouldn't mess with the PVE host's DNS.
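    One way to sketch that setup (assumes a small Debian-based LXC with working outbound networking and access to /dev/net/tun; the PVE address below is a placeholder):

    ```
    # Inside the dedicated LXC, install and bring up Tailscale
    # using Tailscale's official install script:
    curl -fsSL https://tailscale.com/install.sh | sh
    tailscale up   # prints an auth URL to register the node

    # Then, from any device on your tailnet, reach the PVE web UI
    # at its normal LAN address and port:
    #   https://<pve-host-lan-ip>:8006
    ```

    This keeps the PVE host's own networking and DNS untouched; the container is just a jump point.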
  12.

    Tailscale on PVE host

    I use WireGuard but, like you, host it on the router (a separate box). I like to keep my networking separate in general.
  13.

    Tailscale on PVE host

    I don't use Tailscale myself, but it could potentially be as simple as using Tailscale to connect to the LXC guest, then accessing the PVE web UI via its usual IP and port (assuming you don't have any firewall restrictions in place).
  14.

    Tailscale on PVE host

    It would probably be better to install Tailscale into a VM or LXC and access the PVE host via that guest.
  15.

    Fresh Proxmox 9.0.10 install no renderd128 device

    Bummer. That does indeed sound like faulty hardware, then.