[SOLVED] Port 8006 doesn't answer anymore after upgrade

kamzata

Renowned Member
Jan 21, 2011
I just upgraded to pve-manager/6.4-4/337d6701 (running kernel: 5.4.106-1-pve) from the previous release (Enterprise repository), and I'm no longer able to connect to the web interface on port 8006. I rebooted the whole server, but that didn't help. I also checked `journalctl -b`, but I can't see any related errors.

What happened?

Solved: by removing net.ipv6.bindv6only=1 from /etc/sysctl.conf

 
Is the networking working?
Everything seems to be 100% working, except that I cannot connect to the web GUI after the upgrade. I tried connecting with different browsers, different clients, different networks... same result.

Running `curl -s -k https://localhost:8006 | grep title` on the host over SSH returns nothing.

Bash:
$ netstat -tulnp | grep 8006
tcp6       0      0 :::8006                 :::*                    LISTEN      2367/pveproxy

Bash:
$ iptables -L
Chain INPUT (policy DROP)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             limit: avg 400/min burst 1600
ACCEPT     icmp --  anywhere             anywhere             limit: avg 400/min burst 1600
REJECT     tcp  --  anywhere             anywhere             tcp flags:FIN,SYN,RST,ACK/SYN #conn src/32 > 100 reject-with tcp-reset
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED limit: avg 2000/sec burst 2010
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:8006
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:68
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
 
Running `curl -s -k https://localhost:8006 | grep title` on the host over SSH returns nothing.
on a hunch - could you try connecting to https://127.0.0.1:8006 ?
additionally please try to connect to the ipv4-address of your node instead of the hostname

if this does not help - post the journal of pveproxy:
`journalctl -u pveproxy -b`

as a next debugging step - I'd take a look at tcpdump:
* `tcpdump -envi vmbr0 port 8006` - once this is running, try to connect from the outside (replace vmbr0 with the interface you connect through)

I hope this helps!
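To run the same kind of check family by family in one go, a small probe like the following can help tell a firewall drop from a socket that only listens on one address family. This is my own sketch, not a Proxmox tool; the host addresses and port 8006 below are just the values from this thread.

```python
# Probe TCP connectivity separately over IPv4 and IPv6.
import socket

def can_connect(host: str, port: int, family: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((host, port))
            return True
    except OSError:
        # Covers connection refused, timeouts, and unavailable families.
        return False

if __name__ == "__main__":
    print("IPv4:", can_connect("127.0.0.1", 8006, socket.AF_INET))
    print("IPv6:", can_connect("::1", 8006, socket.AF_INET6))
```

If IPv6 succeeds while IPv4 is refused, the daemon is reachable but not listening for IPv4, which points at the socket setup rather than the firewall.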
 
Bash:
$ curl -s -k https://127.0.0.1:8006 | grep title

Nothing returned.

Yes, I tried to connect using IP address as well.

Bash:
$ journalctl -u pveproxy -b
-- Logs begin at Thu 2021-04-29 16:34:25 CEST, end at Thu 2021-04-29 17:54:01 CEST. --
Apr 29 16:34:44 srv001 systemd[1]: Starting PVE API Proxy Server...
Apr 29 16:34:45 srv001 pveproxy[2361]: Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface.
Apr 29 16:34:45 srv001 pveproxy[2367]: starting server
Apr 29 16:34:45 srv001 pveproxy[2367]: starting 3 worker(s)
Apr 29 16:34:45 srv001 pveproxy[2367]: worker 2368 started
Apr 29 16:34:45 srv001 pveproxy[2367]: worker 2369 started
Apr 29 16:34:45 srv001 pveproxy[2367]: worker 2370 started
Apr 29 16:34:45 srv001 systemd[1]: Started PVE API Proxy Server.

Bash:
$ tcpdump -envi vmbr0 port 8006
tcpdump: listening on vmbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:55:40.633741 50:2f:a8:af:71:c1 > d0:50:99:d7:55:10, ethertype IPv4 (0x0800), length 78: (tos 0x0, ttl 45, id 0, offset 0, flags [DF], proto TCP (6), length 64)
    95.232.48.136.50923 > 51.77.66.65.8006: Flags [S], cksum 0xc423 (correct), seq 623439480, win 65535, options [mss 1452,nop,wscale 6,nop,nop,TS val 2885076760 ecr 0,sackOK,eol], length 0
17:55:40.633770 d0:50:99:d7:55:10 > 00:00:0c:9f:f0:01, ethertype IPv4 (0x0800), length 54: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    51.77.66.65.8006 > 95.232.48.136.50923: Flags [R.], cksum 0xaffe (correct), seq 0, ack 623439481, win 0, length 0

It seems to receive the packets but, as I said, the browser returns ERR_CONNECTION_REFUSED.
 
hmm - could you please post:
* `cat /etc/default/pveproxy`
* `ip a`
* `ip -6 r`
* `ip r`
* `ss -tlnp`
* `cat /etc/hosts`

do you have any particular custom settings? (kernel command line (/proc/cmdline), sysctl (usually /etc/sysctl.conf and /etc/sysctl.d/*.conf)) Is any other firewalling technology active on the node (nftables, ip6tables)?

Thanks
 
Bash:
$ cat /etc/default/pveproxy
cat: /etc/default/pveproxy: No such file or directory

Bash:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether d0:50:99:d7:55:10 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d0:50:99:d7:55:0f brd ff:ff:ff:ff:ff:ff
4: enp0s20f0u8u3c2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 72:fa:9b:4f:d5:85 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:50:99:d7:55:10 brd ff:ff:ff:ff:ff:ff
    inet 51.77.xxx.65/24 brd 51.77.66.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet 51.89.xxx.206/32 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet 51.89.xxx.215/32 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet 192.168.1.1/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet 192.168.2.1/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 2001:41d0:xxxx:2441::ffff/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::d250:99ff:fed7:5510/64 scope link
       valid_lft forever preferred_lft forever
6: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:88:23:83:47:40 brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:95:f6:88:d9:c6 brd ff:ff:ff:ff:ff:ff link-netnsid 1
8: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:2c:45:cc:80:47 brd ff:ff:ff:ff:ff:ff link-netnsid 2
9: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:f3:0d:46:a9:68 brd ff:ff:ff:ff:ff:ff link-netnsid 3
10: veth110i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:1a:0c:b8:a5:56 brd ff:ff:ff:ff:ff:ff link-netnsid 4
11: veth111i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:80:ef:c6:d7:65 brd ff:ff:ff:ff:ff:ff link-netnsid 5
12: veth112i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:16:e5:ce:7c:1a brd ff:ff:ff:ff:ff:ff link-netnsid 6
13: veth120i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:7b:71:e8:7b:15 brd ff:ff:ff:ff:ff:ff link-netnsid 7
14: veth121i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:bd:97:40:d4:25 brd ff:ff:ff:ff:ff:ff link-netnsid 8
15: veth122i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:35:e0:95:e9:07 brd ff:ff:ff:ff:ff:ff link-netnsid 9
16: veth123i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:0b:1d:88:09:cb brd ff:ff:ff:ff:ff:ff link-netnsid 10
17: veth124i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:1d:cf:2b:04:4a brd ff:ff:ff:ff:ff:ff link-netnsid 11
18: veth125i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:08:c0:5b:ce:fb brd ff:ff:ff:ff:ff:ff link-netnsid 12
19: veth126i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:d3:34:41:42:8a brd ff:ff:ff:ff:ff:ff link-netnsid 13
20: veth127i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master vmbr0 state UP group default qlen 1000
    link/ether fe:c2:68:0b:2b:ea brd ff:ff:ff:ff:ff:ff link-netnsid 14
21: veth128i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:02:df:72:f5:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 15
22: veth200i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:89:81:8c:bd:12 brd ff:ff:ff:ff:ff:ff link-netnsid 16
23: veth210i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:41:36:f3:68:48 brd ff:ff:ff:ff:ff:ff link-netnsid 17

Bash:
$ ip -6 r
::1 dev lo proto kernel metric 256 pref medium
2001:41d0:700:xxxx::ffff dev vmbr0 proto kernel metric 256 pref medium
2001:41d0:700:xxxx:ff:ff:ff:ff dev vmbr0 metric 1024 pref medium
fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
default via 2001:41d0:700:xxxx:ff:ff:ff:ff dev vmbr0 metric 1024 pref medium

Bash:
$ ip r
default via 51.77.xxx.254 dev vmbr0 onlink
51.77.66.0/24 dev vmbr0 proto kernel scope link src 51.77.66.65
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.1
192.168.2.0/24 dev vmbr0 proto kernel scope link src 192.168.2.1

Bash:
$ ss -tlnp
State                Recv-Q               Send-Q                               Local Address:Port                                Peer Address:Port
LISTEN               0                    128                                        0.0.0.0:111                                      0.0.0.0:*                   users:(("rpcbind",pid=1360,fd=4),("systemd",pid=1,fd=31))
LISTEN               0                    128                                      127.0.0.1:85                                       0.0.0.0:*                   users:(("pvedaemon worke",pid=2342,fd=6),("pvedaemon worke",pid=2341,fd=6),("pvedaemon worke",pid=2340,fd=6),("pvedaemon",pid=2339,fd=6))
LISTEN               0                    128                                        0.0.0.0:10679                                    0.0.0.0:*                   users:(("sshd",pid=2053,fd=3))
LISTEN               0                    100                                      127.0.0.1:25                                       0.0.0.0:*                   users:(("master",pid=2298,fd=13))
LISTEN               0                    128                                           [::]:8006                                        [::]:*                   users:(("pveproxy worker",pid=2370,fd=6),("pveproxy worker",pid=2369,fd=6),("pveproxy worker",pid=2368,fd=6),("pveproxy",pid=2367,fd=6))
LISTEN               0                    128                                           [::]:111                                         [::]:*                   users:(("rpcbind",pid=1360,fd=6),("systemd",pid=1,fd=33))
LISTEN               0                    128                                           [::]:10679                                       [::]:*                   users:(("sshd",pid=2053,fd=4))
LISTEN               0                    128                                           [::]:3128                                        [::]:*                   users:(("spiceproxy work",pid=2375,fd=6),("spiceproxy",pid=2374,fd=6))
LISTEN               0                    100                                          [::1]:25                                          [::]:*                   users:(("master",pid=2298,fd=14))

Bash:
$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
51.77.xxx.65 srv001.xxx.com srv001
51.178.xxx.61   srvlive001.xxx.com srvlive001
51.178.xxx.70   srvdev001.xxx.com srvdev001

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

No, I didn't. I only installed these packages: ncdu, htop, curlftpfs, npd6, fail2ban. I originally installed 6.2 and have updated regularly since.
 
That's odd - maybe npd6 plays into this...
In any case, I could reproduce this by setting:
Code:
sysctl -w net.ipv6.bindv6only=1

what's the value on your system? `sysctl -a |grep bindv6only`
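For scripting the same check, the value can also be read straight from procfs (equivalent to `sysctl -n net.ipv6.bindv6only`). A minimal sketch; the `procfs` parameter is a hypothetical convenience I added so the lookup root can be overridden, it is not part of any standard API.

```python
# Read a sysctl value directly from /proc/sys, e.g. net.ipv6.bindv6only.
import os

def read_sysctl(name: str, procfs: str = "/proc/sys") -> str:
    """Return the value of a sysctl such as 'net.ipv6.bindv6only'."""
    # Dots in the sysctl name map to path components under /proc/sys.
    path = os.path.join(procfs, *name.split("."))
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    try:
        print("net.ipv6.bindv6only =", read_sysctl("net.ipv6.bindv6only"))
    except OSError:
        print("sysctl not available (IPv6 disabled or non-Linux system)")
```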
 
Bash:
$ sysctl -a |grep bindv6only
net.ipv6.bindv6only = 1

Bash:
vm.max_map_count=262144
fs.protected_hardlinks=1
fs.protected_symlinks=1


### IPv4
net.ipv4.conf.all.rp_filter=1
net.ipv4.icmp_echo_ignore_broadcasts=1
net.ipv4.conf.default.forwarding=1
net.ipv4.conf.default.proxy_arp=0
net.ipv4.ip_forward=1
kernel.sysrq=1
net.ipv4.conf.default.send_redirects=1
net.ipv4.conf.all.send_redirects=0

### IPv6
net.ipv6.conf.eno1.autoconf=0
net.ipv6.conf.eno1.accept_ra=0
net.ipv6.conf.all.accept_redirects=0
net.ipv6.conf.all.router_solicitations=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.default.proxy_ndp=1
net.ipv6.bindv6only=1
 
Bash:
$ sysctl -a |grep bindv6only
net.ipv6.bindv6only = 1
guess we have the culprit :)

do you have any particular reason for setting this?
Asking because we're currently working on making the listening code as robust as possible - and that's one of the settings we did not consider (since in my experience quite a few things don't work as expected when it is set):
https://lists.proxmox.com/pipermail/pve-devel/2021-April/047988.html

IOW: if possible - disable this setting, if it's not possible please explain why you need it (so we can take that into consideration)
 
guess we have the culprit :)

do you have any particular reason for setting this?
Bingo! You guessed well! Now it works like a charm.

To be honest I don't remember why I set net.ipv6.bindv6only = 1 (I'm on OVH's network). Anyway, now it seems to work all great. What does net.ipv6.bindv6only = 1 do exactly?
 
What does net.ipv6.bindv6only = 1 do exactly?
It tells the system that it must not accept IPv4 connections when a program listens on the general (but IPv6-stemming) wildcard :: address.
Most daemons, like pveproxy now, use that address as a single listening point to accept both IPv4 and IPv6.
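The sysctl only changes the default for the per-socket IPV6_V6ONLY option. A small self-contained sketch (my own illustration, not pveproxy code) shows the effect: a listener bound to :: with v6only off accepts an IPv4 client via an IPv4-mapped address, while with v6only on, the IPv4 connect is refused - exactly the symptom in this thread.

```python
# Demonstrate IPV6_V6ONLY, the per-socket form of net.ipv6.bindv6only.
import socket

def ipv4_reaches_v6_listener(v6only: bool) -> bool:
    """Bind an IPv6 wildcard listener and try to reach it from IPv4."""
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # Explicitly set the option so the system-wide sysctl default is overridden.
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1 if v6only else 0)
    srv.bind(("::", 0))          # wildcard, ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.settimeout(2.0)
    try:
        cli.connect(("127.0.0.1", port))   # pure IPv4 client
        return True
    except OSError:
        return False
    finally:
        cli.close()
        srv.close()

if __name__ == "__main__":
    print("v6only=0, IPv4 client connects:", ipv4_reaches_v6_listener(False))
    print("v6only=1, IPv4 client connects:", ipv4_reaches_v6_listener(True))
```

On a Linux host with a working IPv6 stack, the first call succeeds and the second is refused, mirroring the tcpdump RST seen earlier.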
 
I'm having a similar "8006 does not answer" issue today after a fresh install of Proxmox 7.2 on an older/slower thin client system.

The browser just loads and loads and loads and hangs.

pveproxy is listening on 8006.

telnet pve-server-ip 8006 connects, but issuing any command simply hangs, too...

`journalctl -u pveproxy -b` shows:


Oct 08 20:15:14 hpt620 pveproxy[1484]: worker 1486 finished
Oct 08 20:15:14 hpt620 pveproxy[1484]: starting 1 worker(s)
Oct 08 20:15:14 hpt620 pveproxy[1484]: worker 1580 started
Oct 08 20:15:14 hpt620 pveproxy[1484]: worker 1487 finished
Oct 08 20:15:14 hpt620 pveproxy[1484]: starting 1 worker(s)
Oct 08 20:15:14 hpt620 pveproxy[1484]: worker 1581 started
Oct 08 20:15:14 hpt620 pveproxy[1579]: /etc/pve/local/pve-ssl.pem: failed to use local certificate chain (cert_file or cert) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1917.
Oct 08 20:15:14 hpt620 pveproxy[1580]: /etc/pve/local/pve-ssl.pem: failed to use local certificate chain (cert_file or cert) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1917.
Oct 08 20:15:14 hpt620 pveproxy[1581]: /etc/pve/local/pve-ssl.pem: failed to use local certificate chain (cert_file or cert) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1917.
Oct 08 20:15:19 hpt620 pveproxy[1579]: worker exit
Oct 08 20:15:19 hpt620 pveproxy[1580]: worker exit


and at the beginning:
Oct 08 20:15:04 hpt620 pvecm[1447]: 140583960537984:error:0909006C:PEM routines:get_name:no start line:../crypto/pem/pem_lib.c:745:Expecting: ANY PRIVATE KEY
Oct 08 20:15:04 hpt620 pvecm[1446]: generating pve root certificate failed:
Oct 08 20:15:04 hpt620 pvecm[1446]: command 'faketime yesterday openssl req -batch -days 3650 -new -x509 -nodes -key /etc/pve/priv/pve-root-ca.key -out /etc/pve/pve-root-ca.pem -subj '/CN=Proxmox Virtual Environment/OU=680af574-9861-483d-af0c-b25021dabae1/O=PVE Cluster Manager CA/'' failed: exit code 1

Apparently, the pve-root-ca.key file is empty:

root@hpt620:/etc/pve/priv# ls -la
total 1
drwx------ 2 root www-data 0 Oct 8 10:48 .
drwxr-xr-x 2 root www-data 0 Jan 1 1970 ..
-rw------- 1 root www-data 1679 Oct 8 10:48 authkey.key
-rw------- 1 root www-data 0 Oct 8 10:48 authorized_keys
drwx------ 2 root www-data 0 Oct 8 10:48 lock
-rw------- 1 root www-data 0 Oct 8 10:48 pve-root-ca.key

The question is how this can happen, and why pveproxy behaves in such a weird way that it accepts connections on 8006 at all.
Besides finding out why pve-root-ca.key ended up empty, I think that when there is a major problem with the certificate, pveproxy should fail properly and stop listening on port 8006, as it does not make any sense to serve requests.
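As a stop-gap sanity check along those lines, one could scan the certificate directory for zero-byte key/cert files before trusting the service. This is purely a hypothetical helper I sketched for illustration, not part of PVE; the path is just the one from this post.

```python
# Find zero-byte .key/.pem files, like the empty pve-root-ca.key above.
import os

def find_empty_keys(directory: str) -> list[str]:
    """Return sorted paths of zero-byte .key/.pem files under `directory`."""
    empty = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if name.endswith((".key", ".pem")):
                path = os.path.join(root, name)
                if os.path.getsize(path) == 0:
                    empty.append(path)
    return sorted(empty)

if __name__ == "__main__":
    for path in find_empty_keys("/etc/pve/priv"):
        print("EMPTY:", path)
```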

I copied pve-root-ca.key from another machine, restarted pveproxy, and that fixed the problem.

The weird thing is that it's a freshly installed 7.2 system.
 
I'm having a similar "8006 does not answer" issue today after a fresh install of Proxmox 7.2 on an older/slower thin client system.

The browser just loads and loads and loads and hangs.
The issue looks like quite a different one (not related to bindv6only):
* check the logs for messages from pve-cluster/pmxcfs/corosync - and if this does not resolve your issue, create a new thread and post the relevant logs.
Thanks