I recently added Nginx Proxy Manager to rewrite some of the server name labels for my cluster. I had originally planned on using it only for the "inside the bubble" workload servers/services, but thought adding the hosts themselves would be nice too. It was all fairly non-controversial, up to a point: initial navigation to those hosts works fine, but when I spawn a shell instance or try to run "Upgrade" I get errors like:
Bash:
TASK ERROR: command '/usr/bin/termproxy 5900 --path /nodes/spkez1 --perm Sys.Console -- /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=spkez1' -o 'UserKnownHostsFile=/etc/pve/nodes/spkez1/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' -t root@10.0.0.101 -- /bin/login -f root' failed: exit code 1
SSH into the original IP from a terminal works fine. Also, if I go to the IPv4 address in the browser and spawn a shell or upgrade from there, it's *also* fine. But I feel like there's a piece of the puzzle missing in my own understanding of how this is supposed to work - so I thought I'd ask here. Thanks for the advice!
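For reference, here's roughly what I assume the proxy host that Nginx Proxy Manager generates for the node looks like (the hostname is a placeholder; the IP is the node from the error above, and 8006 is the standard PVE web port). I'm including it in case the missing piece is in one of these directives:

```nginx
# Sketch only - my guess at the generated proxy host, not a dump of it.
# pve.example.lan stands in for the rewritten server name label.
server {
    listen 443 ssl;
    server_name pve.example.lan;

    location / {
        proxy_pass https://10.0.0.101:8006;
        # The shell console runs over a websocket, so the Upgrade
        # handshake would need to be forwarded for it to connect.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```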