[SOLVED] Cannot Access Web Interface

gvolponi

Member
Jan 17, 2022
I installed Proxmox on 3 new servers and the whole procedure from the ISO went fine. I configured the cluster and tested it; everything worked like a charm.
Now, after a reboot, I cannot access the web interface from any server: logging in over SSH is fine, but the web interface (tested in several browsers) always returns connection refused.
If I try to connect with curl or wget from the SSH console, I get this:
root@rvs1:/etc/pve# curl https://localhost:8006 -k | grep title
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to localhost port 8006: Connection refused
root@rvs1:/etc/pve# wget --no-check-certificate https://localhost:8006
--2022-01-17 18:13:36-- https://localhost:8006/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8006... failed: Connection refused.

ss -tulpn
tcp LISTEN 0 4096 [::]:8006 [::]:* users:(("pveproxy worker",pid=18053,fd=6),("pveproxy worker",pid=18052,fd=6),("pveproxy worker",pid=18051,fd=6),("pveproxy",pid=18050,fd=6))


I tried disabling the firewall.
I updated from the repository, but nothing changed.
All services are active, and the test VM also works.

What can I check to solve the problem?
Thanks
 
Hi,

Have you checked the syslog? You can also run `journalctl -f` over SSH to see if there is an error message.
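
For example, to look at the proxy service specifically (a sketch, assuming the standard pveproxy systemd unit of a stock install):

Bash:
# show whether the web-interface service is running and why it may have failed
systemctl status pveproxy
# follow only pveproxy's messages from the current boot
journalctl -u pveproxy -b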
 
Hi,
I have checked the syslog and there are no errors: it seems that the pveproxy service is binding only to the IPv6 address.
I tried to disable IPv6, but nothing changed.
tcp LISTEN 0 4096 [::]:8006 [::]:* users:(("pveproxy worker",pid=3282,fd=6),("pveproxy worker",pid=3281,fd=6),("pveproxy worker",pid=3280,fd=6),("pveproxy",pid=3279,fd=6))
tcp LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=2053,fd=6),("systemd",pid=1,fd=57))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=2619,fd=4))
tcp LISTEN 0 4096 [::]:3128 [::]:* users:(("spiceproxy work",pid=3286,fd=6),("spiceproxy",pid=3285,fd=6))
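
A quick way to confirm that only the IPv6 socket is answering (a sketch; curl's `-4`/`-6` flags force the address family):

Bash:
# force IPv4 - this is the path that fails here
curl -4 -k https://localhost:8006
# force IPv6 - tests whether the [::]:8006 listener answers at all
curl -6 -k https://localhost:8006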
 
Please post the output of the following commands:

Bash:
ip a
cat /etc/network/interfaces
nmap -p 8006 <IP Address>
cat /etc/hosts
 
Hi,
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enp23s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
link/ether 7c:10:c9:c2:69:44 brd ff:ff:ff:ff:ff:ff
3: enp23s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr3 state UP group default qlen 1000
link/ether 7c:10:c9:c2:69:45 brd ff:ff:ff:ff:ff:ff
4: enp101s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 0c:9d:92:20:4f:f7 brd ff:ff:ff:ff:ff:ff
5: enp101s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr2 state UP group default qlen 1000
link/ether 0c:9d:92:20:4f:f8 brd ff:ff:ff:ff:ff:ff
6: enp179s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:1b:21:23:00:68 brd ff:ff:ff:ff:ff:ff
7: enp179s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:1b:21:23:00:69 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:9d:92:20:4f:f7 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.22/24 scope global vmbr0
valid_lft forever preferred_lft forever
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 7c:10:c9:c2:69:44 brd ff:ff:ff:ff:ff:ff
inet 192.168.135.22/24 scope global vmbr1
valid_lft forever preferred_lft forever
10: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:9d:92:20:4f:f8 brd ff:ff:ff:ff:ff:ff
inet 192.168.145.22/24 scope global vmbr2
valid_lft forever preferred_lft forever
11: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 7c:10:c9:c2:69:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.155.22/24 scope global vmbr3
valid_lft forever preferred_lft forever

cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp101s0f0 inet manual

iface enp23s0f0 inet manual

iface enp23s0f1 inet manual

iface enp179s0f0 inet manual

iface enp179s0f1 inet manual

iface enp101s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.0.22/24
gateway 192.168.0.244
bridge-ports enp101s0f0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 192.168.135.22/24
bridge-ports enp23s0f0
bridge-stp off
bridge-fd 0

auto vmbr2
iface vmbr2 inet static
address 192.168.145.22/24
bridge-ports enp101s0f1
bridge-stp off
bridge-fd 0

auto vmbr3
iface vmbr3 inet static
address 192.168.155.22/24
bridge-ports enp23s0f1
bridge-stp off
bridge-fd 0

nmap -p 8006 192.168.0.22
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-18 12:38 CET
Nmap scan report for rvs1.ratti.local (192.168.0.22)
Host is up (0.000023s latency).

PORT STATE SERVICE
8006/tcp closed wpl-analytics

Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds

cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.0.22 rvs1.ratti.local rvs1
192.168.0.23 rvs2.ratti.local rvs2
192.168.0.24 rvs3.ratti.local rvs3

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


THANKS!
 
nmap -p 8006 192.168.0.22
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-18 12:38 CET
Nmap scan report for rvs1.ratti.local (192.168.0.22)
Host is up (0.000023s latency).

PORT STATE SERVICE
8006/tcp closed wpl-analytics
Port 8006 is not open on 192.168.0.22.

Are you sure that all services are up and running?

Can you please attach the syslog (`/var/log/syslog`) as well?
 
Yes, it's not open, but why? What is blocking the port? Until yesterday, before the update, all servers worked correctly.

ss -tulpn
Netid State Recv-Q Send-Q Local Address:port Peer Address:port Process
udp UNCONN 0 0 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=2056,fd=5),("systemd",pid=1,fd=59))
udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* users:(("chronyd",pid=2637,fd=5))
udp UNCONN 0 0 192.168.135.22:5405 0.0.0.0:* users:(("corosync",pid=3016,fd=28))
udp UNCONN 0 0 [::]:111 [::]:* users:(("rpcbind",pid=2056,fd=7),("systemd",pid=1,fd=64))
udp UNCONN 0 0 [::1]:323 [::]:* users:(("chronyd",pid=2637,fd=6))
tcp LISTEN 0 512 192.168.145.22:6810 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=25))
tcp LISTEN 0 512 192.168.155.22:6810 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=23))
tcp LISTEN 0 512 192.168.145.22:6811 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=25))
tcp LISTEN 0 512 192.168.155.22:6811 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=23))
tcp LISTEN 0 512 192.168.145.22:6812 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=20))
tcp LISTEN 0 512 192.168.155.22:6812 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=18))
tcp LISTEN 0 512 192.168.145.22:6813 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=21))
tcp LISTEN 0 512 192.168.155.22:6813 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=19))
tcp LISTEN 0 512 192.168.145.22:6814 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=24))
tcp LISTEN 0 512 192.168.155.22:6814 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=22))
tcp LISTEN 0 512 192.168.145.22:6815 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=25))
tcp LISTEN 0 512 192.168.155.22:6815 0.0.0.0:* users:(("ceph-osd",pid=3037,fd=23))
tcp LISTEN 0 512 192.168.155.22:3300 0.0.0.0:* users:(("ceph-mon",pid=3010,fd=28))
tcp LISTEN 0 512 192.168.155.22:6789 0.0.0.0:* users:(("ceph-mon",pid=3010,fd=29))
tcp LISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=2056,fd=4),("systemd",pid=1,fd=58))
tcp LISTEN 0 512 192.168.145.22:6800 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=20))
tcp LISTEN 0 512 192.168.155.22:6800 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=18))
tcp LISTEN 0 512 192.168.145.22:6801 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=21))
tcp LISTEN 0 512 192.168.155.22:6801 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=19))
tcp LISTEN 0 512 192.168.145.22:6802 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=24))
tcp LISTEN 0 512 192.168.155.22:6802 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=22))
tcp LISTEN 0 512 192.168.145.22:6803 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=25))
tcp LISTEN 0 512 192.168.155.22:6803 0.0.0.0:* users:(("ceph-osd",pid=3038,fd=23))
tcp LISTEN 0 512 192.168.145.22:6804 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=20))
tcp LISTEN 0 512 192.168.155.22:6804 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=18))
tcp LISTEN 0 4096 127.0.0.1:85 0.0.0.0:* users:(("pvedaemon worke",pid=13228,fd=6),("pvedaemon worke",pid=13227,fd=6),("pvedaemon worke",pid=13226,fd=6),("pvedaemon",pid=13225,fd=6))
tcp LISTEN 0 512 192.168.145.22:6805 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=20))
tcp LISTEN 0 512 192.168.155.22:6805 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=18))
tcp LISTEN 0 512 192.168.145.22:6806 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=21))
tcp LISTEN 0 512 192.168.155.22:6806 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=19))
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=2600,fd=3))
tcp LISTEN 0 512 192.168.145.22:6807 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=21))
tcp LISTEN 0 512 192.168.155.22:6807 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=19))
tcp LISTEN 0 512 192.168.145.22:6808 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=24))
tcp LISTEN 0 512 192.168.155.22:6808 0.0.0.0:* users:(("ceph-osd",pid=3039,fd=22))
tcp LISTEN 0 512 192.168.145.22:6809 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=24))
tcp LISTEN 0 512 192.168.155.22:6809 0.0.0.0:* users:(("ceph-osd",pid=3034,fd=22))
tcp LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=2799,fd=13))
tcp LISTEN 0 4096 [::]:8006 [::]:* users:(("pveproxy worker",pid=13235,fd=6),("pveproxy worker",pid=13234,fd=6),("pveproxy worker",pid=13233,fd=6),("pveproxy",pid=13232,fd=6))
tcp LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=2056,fd=6),("systemd",pid=1,fd=60))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=2600,fd=4))
tcp LISTEN 0 4096 [::]:3128 [::]:* users:(("spiceproxy work",pid=3233,fd=6),("spiceproxy",pid=3232,fd=6))
 
Hi,

Thank you for the Syslog!

Since you have 3 nodes, can you access the rvs1 node from the other ones, or can none of the nodes be accessed through HTTP?
 
Hi Moayad,
I can't access it from any of the nodes. Very strange...

sysctl -p
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
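
A quick cross-check (a sketch; this re-enables IPv6 at runtime only, without touching any config files):

Bash:
# temporarily re-enable IPv6, restart the proxy, then retest port 8006
sysctl -w net.ipv6.conf.all.disable_ipv6=0
sysctl -w net.ipv6.conf.default.disable_ipv6=0
systemctl restart pveproxy
curl -k -I https://localhost:8006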

This is from node 3, but it's similar on the other nodes.
pvecm status
Cluster information
-------------------
Name: ClusterRatti
Config Version: 3
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Tue Jan 18 16:50:02 2022
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000002
Ring ID: 1.100
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.135.22
0x00000002 1 192.168.135.24 (local)
0x00000003 1 192.168.135.23
 
I solved it:
nano /etc/default/pveproxy
Insert:
LISTEN_IP=0.0.0.0
Save, then restart:
systemctl restart pve-cluster.service
systemctl restart pveproxy.service
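
Note: `/etc/default/pveproxy` does not exist on a stock install, so nano creates it here. By default pveproxy listens on the dual-stack wildcard `[::]:8006`; with IPv6 disabled via sysctl, as shown above, that socket apparently no longer accepted IPv4 connections, and `LISTEN_IP=0.0.0.0` forces a plain IPv4 listener instead. A quick verification (a sketch, using rvs1's address from earlier in the thread):

Bash:
# the listener should now be on 0.0.0.0:8006 instead of [::]:8006
ss -tlnp | grep 8006
# an HTTPS request should now connect instead of being refused
curl -k -I https://192.168.0.22:8006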

Now everything works correctly.
Thanks a lot!
 
Hi, sorry for reopening this, but I ran into a very similar issue and, thanks to this thread, managed to resolve it. I am still going to post this just in case someone makes the same mistakes as me (or, more likely, in case I make the same mistake again :) )

I had just installed Proxmox for the first time, fiddled with it, and all worked great. I had been uploading ISOs through the web interface until it stopped working; I didn't think much of it, so I continued uploading ISOs through SFTP and creating VMs, and all was great, until at one point, while shutting down a VM, I got a message along the lines of "error closing file /var/log/........' failed - No space left on device (500)".
I connected over SSH, saw that /var/log was taking up over 1.5 GB, so I deleted everything in it :D

Then I realized I couldn't access the web UI anymore. Thanks to the advice in this thread I ran `journalctl -f` and saw:

Apr 12 23:21:41 pve pveproxy[2209]: unable to open log file '/var/log/pveproxy/access.log' - File does not exist

So I created the directory and the file (as root):

mkdir -p /var/log/pveproxy && touch /var/log/pveproxy/access.log

From then on the error changed to:

Apr 12 23:37:24 pve pveproxy[1600]: unable to open log file '/var/log/pveproxy/access.log' - Permission denied

so at first I opened the logs up to everyone (`chmod -R 0777 /var/log`) and I could immediately access the web interface again. I then tightened the rights to:

root@pve:~# find /var/log -type f -exec chmod 644 {} \;
root@pve:~# find /var/log -type d -exec chmod 755 {} \;


and then

chown -R www-data /var/log/pveproxy/

The WebGUI still works; who knows what else I have borked, but this will do for now. I hope this can help someone else (or, more likely, future me when I decide to delete logs again).

BTW, eventually I realized that the disk was full because of those ISO uploads from the web UI: they filled the disk at `/var/tmp/pveupload......`, so I torched them all too. I only gave Proxmox 20 GB on the SSD to work with, thinking that would be enough, and created another partition where I store the ISOs, VM disks, .... I might have to grow that partition or reinstall Proxmox again, I guess.
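
To summarise the recovery in one block (a sketch; ownership and mode are taken from the fresh-install listing in the edit below):

Bash:
# recreate the pveproxy log directory with the stock ownership and permissions
mkdir -p /var/log/pveproxy
chown www-data:www-data /var/log/pveproxy
chmod 700 /var/log/pveproxy
# pveproxy recreates access.log itself once it can write to the directory
systemctl restart pveproxy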

Edit:
I ended up reinstalling Proxmox anyway (I wanted to try out ZFS), so I'm pasting the /var/log ownership and rights after a fresh install for posterity:

Code:
root@pve:~# ls -Al /var/log
total 351
-rw-r--r-- 1 root     root               16812 Apr 13 15:40 alternatives.log
drwxr-xr-x 2 root     root                   2 Jun 10  2021 apt
-rw-r----- 1 root     adm                 2686 Apr 13 15:45 auth.log
-rw-rw---- 1 root     utmp                   0 Dec  6 13:10 btmp
drwxrws--T 2 ceph     ceph                   2 Oct 21 13:27 ceph
drwxr-x--- 2 _chrony  _chrony                2 Apr 13 15:36 chrony
drwxr-xr-x 2 root     root                   2 Nov  9 11:50 corosync
-rw-r----- 1 root     adm                56167 Apr 13 15:45 daemon.log
-rw-r----- 1 root     adm                 7610 Apr 13 15:40 debug
-rw-r--r-- 1 root     root              323487 Apr 13 15:35 dpkg.log
-rw-r--r-- 1 root     root             2049472 Apr 13 15:35 faillog
-rw-r--r-- 1 root     root                1098 Apr 13 15:34 fontconfig.log
drwxr-xr-x 2 root     root                   2 May 18  2021 glusterfs
drwxr-sr-x 3 root     systemd-journal        3 Apr 13 15:36 journal
-rw-r----- 1 root     adm               171499 Apr 13 15:40 kern.log
-rw-rw-r-- 1 root     utmp            18701432 Apr 13 15:45 lastlog
drwxr-xr-x 2 root     root                   3 Apr 13 15:36 lxc
-rw-r----- 1 root     adm                  746 Apr 13 15:40 mail.info
-rw-r----- 1 root     adm                  746 Apr 13 15:40 mail.log
-rw-r----- 1 root     adm                  226 Apr 13 15:40 mail.warn
-rw-r----- 1 root     adm               163739 Apr 13 15:40 messages
drwx------ 2 root     root                   2 Apr 13 15:35 private
drwxr-xr-x 3 root     root                   3 Apr 13 15:35 pve
-rw-r--r-- 1 root     root                 234 Apr 13 15:40 pve-firewall.log
drwx------ 2 www-data www-data               3 Apr 13 15:36 pveproxy
drwxr-xr-x 3 root     root                   3 Apr 13 15:34 runit
drwxr-x--- 2 root     adm                    2 Nov  4 23:20 samba
-rw-r----- 1 root     adm               229180 Apr 13 15:45 syslog
-rw-rw-r-- 1 root     utmp                4224 Apr 13 15:45 wtmp
 
I solved it:
nano /etc/default/pveproxy
Insert:
LISTEN_IP=0.0.0.0
Save, then restart:
systemctl restart pve-cluster.service
systemctl restart pveproxy.service

Now everything works correctly.
Thanks a lot!
What did you do in this step? On my server the pveproxy file is not available.
 
FWIW, I had the same problem as described in this post, and the solution didn't help me either. Since this was a new install, I re-installed from scratch and was then able both to SSH into the Proxmox server and to access the web management interface after a reboot.

Here are a couple of odd things that showed up the first time I installed:
* Instead of the network info being all IPv4 addresses or all IPv6 addresses, I had a strange mix of both (the node address came up as an IPv6 address, but the DNS and gateway IPs were IPv4 addresses).
* I corrected the host IP address to be an IPv4 address and fixed the DNS resolver address from 127.0.0.1 to the correct DNS server address (which should have been determined from the DHCP-assigned network info).

However, when this install finished, I was neither able to SSH into the system (SSH connected, but the connection was dropped after the following debug messages from the local ssh client:
Code:
debug1: Local version string SSH-2.0-OpenSSH_8.6
kex_exchange_identification: read: Connection reset by peer
Connection reset by 192.168.1.19 port 22
), nor able to access the web interface (pveproxy was running, listening on *:8006, but would drop connections from outside systems).

I re-installed (NOTE: this time using the debug mode of the installer); the network was correctly detected this time (same IPv4 address/gateway/DNS server as handed out via DHCP), and once installed, everything looked great. It may be that the issues with the first install were somehow timing related and that using the debug installer slowed things down enough to make it work, but that's just my conjecture.

Hopefully this helps some newcomers and maybe also the devs.
 
