Solved: I was able to access the web GUI using the vmbr0 IP and port 8006 (https://172.22.22.200:8006). Credit to Moayad.
Hello,
I'm having trouble accessing the web GUI.
The initial installation went fine. All seemed well after I set up a pfSense VM to route my home internet traffic, and I could reach the Proxmox web GUI without problems. Afterwards I set up another VM to run Windows 10 programs (TurboTax etc.), and the Proxmox web GUI was still accessible.
After a while, though, the Proxmox web GUI stopped responding when I tried to connect.
Pinging the Proxmox host's IP gets no response. However, I have no problem reaching the pfSense VM, and it is still handling internet traffic from the Xfinity modem fine.
My Proxmox setup has two subnets: 172.x.x.x/16 (vmbr0) for the VMs, and 192.x.x.x/24 (vmbr1) for the Proxmox web GUI (port 8006 etc.), NATed out through vmbr0. My interfaces file is below:
Code:
auto lo
iface lo inet loopback

iface enp3s0 inet manual

iface enp1s0 inet manual

iface wlo1 inet manual

iface enx00e04c6805d5 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.22.22.200/16
        gateway 172.22.22.1
        bridge-ports enx00e04c6805d5
        bridge-stp off
        bridge-fd 0
#USB NIC

auto vmbr1
#private sub network
iface vmbr1 inet static
        address 192.168.1.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j MASQUERADE

source /etc/network/interfaces.d/*
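If it helps with debugging, something like this should show whether the post-up lines actually took effect after a reboot (assuming plain iptables is in use rather than an nftables-only setup):
Code:
# forwarding should be 1 if the post-up echo ran
sysctl net.ipv4.ip_forward
# the MASQUERADE rule from the post-up line should show up here
iptables -t nat -S POSTROUTING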
The ip address output follows:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
4: enx00e04c6805d5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 00:e0:4c:68:05:d5 brd ff:ff:ff:ff:ff:ff
5: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 50:84:92:ce:df:42 brd ff:ff:ff:ff:ff:ff
    altname wlp0s20f3
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:e0:4c:68:05:d5 brd ff:ff:ff:ff:ff:ff
    inet 172.22.22.200/16 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::2e0:4cff:fe68:5d5/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 92:e5:60:65:e0:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::90e5:60ff:fe65:e0bc/64 scope link
       valid_lft forever preferred_lft forever
The hosts file follows:
Code:
127.0.0.1 localhost.localdomain localhost
192.168.1.202 cookoo.for.somenamehere
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
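As a sanity check, this should show what that hosts entry resolves to versus the addresses actually configured on the box:
Code:
# what the /etc/hosts entry resolves to
getent hosts cookoo.for.somenamehere
# the addresses actually configured on the host
hostname -I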
When I curl localhost on port 8006, I get HTML output from the web interface. When I curl 192.168.1.202 on port 8006, crickets. Pinging 192.168.1.202 also fails. Pinging 172.22.22.200 works fine, as does pinging the pfSense VM.
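To see which addresses the GUI is actually bound to, something like this should work (I believe pveproxy is the service behind port 8006):
Code:
# which sockets are listening on 8006, and on which addresses
ss -tlnp | grep 8006
# does the host have 192.168.1.202 configured on any interface?
ip -4 addr show | grep 192.168.1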
It's been a month or two since I last accessed the web GUI after setting up the VMs, so I'm not sure what changes may have affected access.
From my research online: is it necessary to add rules to vmbr0 to forward traffic to vmbr1? If so, why the change? It seems odd, since I did the initial setup through the GUI. I'm not sure whether the interfaces file was changed somehow by an update; I'm guessing there may have been rules originally that were hidden from me, since I set everything up through the web interface.
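If it helps, I can post the full ruleset; this is what I was planning to run to see what is actually loaded (again assuming iptables rather than nftables is in charge):
Code:
# dump all tables in iptables-save format
iptables-save
# or just the chains that matter for traffic between the bridges
iptables -t nat -L POSTROUTING -n -v
iptables -L FORWARD -n -v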
Makes me wish IP KVMs were a lot cheaper.
Any help or advice would be appreciated; I'm not very well versed with iptables, and now I'm forced to dig through these files since the web interface is gone. Bummer. Thanks for any help.