Web GUI and SSH Inaccessible

Fireseed

New Member
Mar 18, 2025
Hey folks! I am relatively new to this, but I recently installed Proxmox on a MOGINSOK 2.5GbE firewall appliance. My goal was to install pfSense in a VM and support it with multiple LXC containers running things like the UniFi Controller, Pi-hole, etc. When I installed everything, it worked great and I could access the web GUI, but for whatever reason, after the first night on BOTH of my attempts at this, the web GUI and SSH became inaccessible.

To briefly overview my physical setup: the system has four physical NICs (enp1s0, enp2s0, enp3s0, and enp4s0). enp1s0 is my management interface (Linux bridge vmbr0) and is connected to port 8 on my managed switch. enp3s0 (Linux bridge pfLan) is attached to pfSense as its LAN and is plugged into switch port 1. enp4s0 (Linux bridge pfWan) is pfSense's WAN and is connected to my modem.

When I try to connect to the web GUI through Proxmox's IP, https://192.168.2.9 (from a PC that is wired to port 3 on the switch), I get the error "ERR_CONNECTION_REFUSED". I've followed every forum thread that even vaguely sounded like this issue, but to no avail.

Strangely enough, the services I installed via LXC and VM on Proxmox are still accessible: pfSense (VM, 192.168.2.2), Pi-hole (LXC, 192.168.2.7), and the UniFi Controller (LXC, 192.168.2.8).
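One way I've been narrowing this down from the client side (a sketch, assuming a Linux client and the addresses above) is to separate "host unreachable" from "service not answering":

```shell
# From the client PC: is it the network path or just the service?
# 192.168.2.9:8006 is the PVE web UI address from this post.
curl -k --connect-timeout 3 https://192.168.2.9:8006 -o /dev/null -s
echo "curl exit: $?"          # typically 7 = connection refused, 28 = timed out
ip neigh show 192.168.2.9     # a MAC entry here means ARP (layer 2) still works
```

A "connection refused" with a valid ARP entry would point at the host itself rather than the switch or cabling.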

I've read other threads and will provide some of the most commonly requested information.

ip r
Code:
default via 192.168.2.2 dev vmbr0 proto kernel onlink
192.168.2.0/24 dev vmbr0 proto kernel scope link src 192.168.2.9

ip a
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a8:b8:e0:06:e0:57 brd ff:ff:ff:ff:ff:ff
3: enp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a8:b8:e0:06:e0:58 brd ff:ff:ff:ff:ff:ff
4: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master pfLan state UP group default qlen 1000
    link/ether a8:b8:e0:06:e0:59 brd ff:ff:ff:ff:ff:ff
5: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master pfWan state UP group default qlen 1000
    link/ether a8:b8:e0:06:e0:5a brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a8:b8:e0:06:e0:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.9/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::aab8:e0ff:fe06:e057/64 scope link
       valid_lft forever preferred_lft forever
7: pfLan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a8:b8:e0:06:e0:59 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::aab8:e0ff:fe06:e059/64 scope link
       valid_lft forever preferred_lft forever
8: pfWan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a8:b8:e0:06:e0:5a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::aab8:e0ff:fe06:e05a/64 scope link
       valid_lft forever preferred_lft forever
9: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master pfLan state UNKNOWN group default qlen 1000
    link/ether de:0f:e3:57:2a:6d brd ff:ff:ff:ff:ff:ff
10: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master pfWan state UNKNOWN group default qlen 1000
    link/ether f2:da:75:9f:60:17 brd ff:ff:ff:ff:ff:ff
16: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master pfLan state UP group default qlen 1000
    link/ether fe:d5:81:ca:6c:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
21: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master pfLan state UP group default qlen 1000
    link/ether fe:6f:74:50:87:e4 brd ff:ff:ff:ff:ff:ff link-netnsid 1
23: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master pfLan state UP group default qlen 1000
    link/ether fe:21:e1:dc:b2:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
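Incidentally, iproute2's brief mode gives a much more compact view of the same information, which is easier to read and paste cleanly:

```shell
# One line per interface instead of the full ip a dump
ip -br link    # name, state, MAC
ip -br addr    # name, state, addresses
bridge link    # which physical port / tap / veth is enslaved to which bridge
```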
 
Sorry for missing that detail: I am connecting to https://192.168.2.9:8006
It happens, nevertheless details are important.

Start with the basics:
Does SSH still work?
What is the output of "curl -k https://127.0.0.1:8006" when run directly on PVE (over SSH or the physical console)?
What is the output of "curl -k https://192.168.2.9:8006" when run directly on PVE (over SSH or the physical console)?

If you get good HTTP output, then the PVE services are working and the issue is in your networking setup. Since you have quite a bit going on there, continue by removing as many layers as possible, then build them back up.
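A sketch of the service-side checks on the PVE host itself, assuming a standard install where pveproxy serves the web UI on port 8006:

```shell
# On the PVE host (console or SSH): is the web UI service actually up and listening?
systemctl --no-pager status pveproxy pvedaemon    # both should be "active (running)"
ss -tln | grep ':8006' || echo "nothing listening on 8006"
```

If nothing is listening on 8006, the problem is the service; if it is listening but the GUI is refused remotely, the problem is the network path or a firewall.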


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yeah, both of those curl commands return what looks to be HTML code.
 
That means PVE is working fine. The next step is to check connectivity between your "gateway" and PVE. Since your gateway is the firewall VM/device: can you ping/ssh/curl between 192.168.2.2 and 192.168.2.9?
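Something like this, run from the PVE host, probes the gateway layer by layer (a sketch, using the addresses from this thread):

```shell
# From the PVE host: check ICMP, HTTPS, and ARP toward the pfSense gateway
GW=192.168.2.2   # pfSense LAN address as posted above
ping -c 2 -W 2 "$GW" >/dev/null 2>&1 && echo "ICMP to $GW: ok" || echo "ICMP to $GW: failed"
curl -ks --connect-timeout 3 -o /dev/null "https://$GW" && echo "HTTPS to $GW: ok" || echo "HTTPS to $GW: failed"
ip neigh show "$GW"   # no entry at all suggests a layer-2/bridge problem
```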

