I'm currently working out the process of hardening a two-node Proxmox cluster for internet-facing deployment.
As part of that I'm moving all ports (other than ssh) to internal network interfaces that aren't publicly accessible. ssh will have its own security configuration, not covered here.
Getting the SPICE console for VMs working turned out to be a bit of a pain.
What ended up working on my Linux desktop is an ssh tunnel to each of the two nodes (pnode1, pnode2), plus a small script that remaps the SPICE proxy port from 3128 to whichever local port is forwarded to the given node.
My local (user level) ssh config file has the port mappings:
Code:
Host pnode1
    LocalForward 9001 10.50.1.1:8006
    LocalForward 3000 10.50.1.1:3128

Host pnode2
    LocalForward 9002 10.50.1.2:8006
    LocalForward 3001 10.50.1.2:3128
The 10.50.1.x addresses there are on a private network interface on the hosts, used for communication between the cluster members. To access the webUI for a node I need to ensure the (above) ssh tunnel is open for the host, then open a web browser to https://localhost:9001 (for pnode1) or https://localhost:9002 (for pnode2).
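Opening the tunnel itself is just a background ssh session to the host, since the LocalForward lines above do the rest. A minimal sketch, assuming the Host entries also carry the real HostName and User settings:

Bash:
# -N = don't run a remote command, -f = drop to the background after authenticating
ssh -fN pnode1
ssh -fN pnode2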
The SPICE viewer on my local desktop is /usr/bin/remote-viewer, and I've named the wrapper script below /usr/bin/remote-viewer-switch.sh:
Bash:
#!/usr/bin/env bash
# Wrapper around remote-viewer: rewrites the SPICE session (.vv) file so the
# proxy points at the appropriate localhost tunnel port instead of 3128.
SESSION_FILE="$1"

# Use port 3000 for pnode1, and port 3001 for pnode2
NEW_PORT=3000
if ! grep "^proxy=" "${SESSION_FILE}" | grep -q pnode1; then
    NEW_PORT=3001
fi

sed -i "s/localhost:3128/localhost:${NEW_PORT}/" "${SESSION_FILE}"

# Hand the (now rewritten) session file on to the real viewer
/usr/bin/remote-viewer "$@"
When I click the Console button in the webUI now, the script runs automatically, fixes the port number, then launches the SPICE remote-viewer and pipes its traffic through the ssh tunnel to the VM.
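In case anyone wants the "runs automatically" part spelled out: one way to wire it up (a sketch, assuming the distro's virt-viewer package registers the downloaded .vv session files as application/x-virt-viewer; check what yours uses) is to make the wrapper the default handler for that MIME type:

Bash:
# Hypothetical desktop entry pointing at the wrapper script
cat > ~/.local/share/applications/remote-viewer-switch.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=SPICE viewer (ssh tunnel)
Exec=/usr/bin/remote-viewer-switch.sh %f
MimeType=application/x-virt-viewer;
NoDisplay=true
EOF

# Make it the default handler for .vv files, so the browser hands them to the wrapper
xdg-mime default remote-viewer-switch.desktop application/x-virt-viewer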
This approach has been working fine in actual use for a few months now. I had to make the above a bit more complex to deal with more hosts, but the concept is the same.
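For what it's worth, a sketch of what that more general version can look like (node names and extra ports here are illustrative, not my actual config): a lookup table from node name to local tunnel port replaces the single if.

Bash:
#!/usr/bin/env bash
# Table-driven variant: map each node name found in the session file's
# proxy= line to its local ssh tunnel port.
declare -A PORT_FOR_NODE=(
    [pnode1]=3000
    [pnode2]=3001
    [pnode3]=3002   # hypothetical additional node
)

SESSION_FILE="$1"

for node in "${!PORT_FOR_NODE[@]}"; do
    if grep "^proxy=" "${SESSION_FILE}" | grep -q "${node}"; then
        sed -i "s/localhost:3128/localhost:${PORT_FOR_NODE[$node]}/" "${SESSION_FILE}"
        break
    fi
done

/usr/bin/remote-viewer "$@"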
Note - For working out the initial hardening process I'm doing all of this experimentation safely in my local network rather than actually live on the net. Just saying.