Fresh Installation on Intel NUC

luk1

New Member
Aug 13, 2022
Hi there,

I installed Proxmox for the first time today on my Intel NUC, and from the Proxmox host I cannot ping my router or any other IPs (local or WAN; nslookup also fails).
I am able to access Proxmox from my laptop (web GUI and SSH) without any issues.

ip addr show
Code:
root@pve:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 94:c6:91:1a:b6:46 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 94:c6:91:1a:b6:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.20/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::96c6:91ff:fe1a:b646/64 scope link
       valid_lft forever preferred_lft forever

ip link show
Code:
root@pve:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 94:c6:91:1a:b6:46 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 94:c6:91:1a:b6:46 brd ff:ff:ff:ff:ff:ff

ip route
Code:
root@pve:~# ip route
default via 192.168.0.1 dev vmbr0 proto kernel onlink
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.20

/etc/network/interfaces
Code:
root@pve:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.20
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

/etc/resolv.conf
Code:
root@pve:~# cat /etc/resolv.conf
search local
nameserver 192.168.0.1

ping router
Code:
root@pve:~# ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
From 192.168.0.20 icmp_seq=1 Destination Host Unreachable
From 192.168.0.20 icmp_seq=2 Destination Host Unreachable
From 192.168.0.20 icmp_seq=3 Destination Host Unreachable
From 192.168.0.20 icmp_seq=4 Destination Host Unreachable
From 192.168.0.20 icmp_seq=5 Destination Host Unreachable
From 192.168.0.20 icmp_seq=6 Destination Host Unreachable

ping proxmox (itself)
Code:
root@pve:~# ping 192.168.0.20
PING 192.168.0.20 (192.168.0.20) 56(84) bytes of data.
64 bytes from 192.168.0.20: icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from 192.168.0.20: icmp_seq=2 ttl=64 time=0.023 ms
64 bytes from 192.168.0.20: icmp_seq=3 ttl=64 time=0.023 ms

What is wrong? I did three fresh installs of Proxmox on my NUC and also tried different static IPs, but it is always the same issue.
WiFi is deactivated in BIOS.
My Router is a FRITZ!Box 6660 Cable.

I want to test Proxmox before I buy a subscription.

Thank you and Regards,
Luk
 
Found the issue: I enabled the "VLAN aware" option on the bridge, because I have VLANs configured on my switch.
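For reference, turning that on in /etc/network/interfaces amounts to roughly the following (a sketch using the standard Proxmox bridge options; the VID range is an assumption, adjust it to your switch):
Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.20/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # make the bridge VLAN aware and allow the usual VID range (assumption)
        bridge-vlan-aware yes
        bridge-vids 2-4094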
 
My issue is similar but slightly different.
This is a new installation of PVE7 on top of Debian 11 and initially all was sweet.
However, I now find I can no longer ping google.com or 8.8.4.4, but I can ping the router. A restart of the networking service temporarily restores the internet connection. See below.

Code:
root@proxmox7node01:~# ping google.com
PING google.com (142.250.70.238) 56(84) bytes of data.
From proxmox7node01.local (169.254.8.230) icmp_seq=1 Destination Host Unreachable
From proxmox7node01.local (169.254.8.230) icmp_seq=2 Destination Host Unreachable
From proxmox7node01.local (169.254.8.230) icmp_seq=3 Destination Host Unreachable
From proxmox7node01.local (169.254.8.230) icmp_seq=4 Destination Host Unreachable
^C
--- google.com ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3052ms
pipe 4
root@proxmox7node01:~# ping 8.8.4.4
PING 8.8.4.4 (8.8.4.4) 56(84) bytes of data.
From 169.254.8.230 icmp_seq=1 Destination Host Unreachable
From 169.254.8.230 icmp_seq=2 Destination Host Unreachable
From 169.254.8.230 icmp_seq=3 Destination Host Unreachable
^C
--- 8.8.4.4 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3072ms
pipe 4
root@proxmox7node01:~# systemctl restart networking
root@proxmox7node01:~# ping google.com
PING google.com (142.250.70.238) 56(84) bytes of data.
64 bytes from mel05s02-in-f14.1e100.net (142.250.70.238): icmp_seq=1 ttl=59 time=10.3 ms
64 bytes from mel05s02-in-f14.1e100.net (142.250.70.238): icmp_seq=2 ttl=59 time=9.07 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 9.073/9.678/10.283/0.605 ms


A few minutes later it is unreachable again.
I have searched but can find no log messages that point me to the issue. As it seems to be a timeout issue, I have my suspicions about ModemManager and also nfs-kernel-server. I use NFS to share a partition with two of the containers.
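One way to test the ModemManager suspicion (a hedged suggestion; the unit name assumes a standard Debian install) is to check whether it is active and temporarily disable it:
Code:
# is ModemManager running at all?
systemctl status ModemManager
# if so, stop and disable it and watch whether the dropouts stop
systemctl disable --now ModemManager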

syslog does show a message:

Code:
kernel: [ 5308.940711] nfs: server 192.168.1.77 not responding, timed out

/etc/exports:

Code:
/mnt/sda1 192.168.1.62/24(rw,sync,no_subtree_check) 192.168.1.105/24(rw,sync,no_subtree_check)
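
(A side note on the syntax above: an entry like 192.168.1.62/24 is parsed by exports(5) as the whole 192.168.1.0/24 network, not as a single host. If only those two clients should mount the share, the lines would presumably be:)
Code:
/mnt/sda1 192.168.1.62(rw,sync,no_subtree_check) 192.168.1.105(rw,sync,no_subtree_check)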

A longer syslog is here
Would appreciate some clues as to how to troubleshoot this one.
TIA.
 
Thanks for the reply. My networking knowledge is limited, but I want a static IP rather than using DHCP.
I had my interfaces file like this

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual
        dns-nameservers 8.8.4.4 8.8.8.8

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.77/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
so there was an extra auto eno1 line, but commenting it out and rebooting did not help.
Does this mean I need something like this from the docs in my situation?
Code:
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address  198.51.100.5/29
        gateway  198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address  203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
Or do I need to go with the "Masquerading (NAT) with iptables" option?
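For reference, the masquerading variant in those docs looks roughly like this (a sketch using the docs' example addresses; eno0 there stands for the physical NIC, which would be eno1 here):
Code:
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # forward and NAT guest traffic out through the physical interface
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno0 -j MASQUERADE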
When I had previously installed Proxmox from the image file on a smaller SSD, networking gave me no problems, and the interfaces file looked the same as what I have now.

After network restart I see this
Code:
ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 94:c6:91:17:f9:72 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
    inet6 fe80::96c6:91ff:fe17:f972/64 scope link
       valid_lft forever preferred_lft forever
3: wlp58s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d4:25:8b:91:06:ef brd ff:ff:ff:ff:ff:ff
5: veth102i0@if2: <BROADCAST,MULTICAST,DYNAMIC> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether fe:4b:14:20:6e:32 brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: tap103i0: <BROADCAST,MULTICAST,PROMISC,DYNAMIC> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 1e:b9:0b:e2:af:bd brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 94:c6:91:17:f9:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.77/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::96c6:91ff:fe17:f972/64 scope link
       valid_lft forever preferred_lft forever

but after a few minutes the extra line
inet 169.254.216.179/16 brd 169.254.255.255 scope global eno1
appears.
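To catch what is adding that address in real time, one option (a hedged suggestion) would be to watch address changes while following the logs in a second shell:
Code:
# print every address change on the host as it happens
ip monitor address
# in another shell: follow the journal for mentions of the link-local range
journalctl -f | grep -i '169\.254'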
 
Thanks for your reply. The docs I was referring to were at https://pve.proxmox.com/wiki/Network_Configuration but they give the same config as section 3.3.4 in your link.
So with the dns-nameservers 8.8.4.4 8.8.8.8 line removed
and the auto eno1 line removed as mentioned earlier, my /etc/network/interfaces looks the same as the docs.

cat /etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.77/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

and the problem persists.
 
Okay, that's looking good to me. Back to "small steps first": please post the output of
Code:
ip address show
ip route show
ping -c 3 192.168.1.1 
ping -c 3 8.8.8.8
 
Code:
ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 94:c6:91:17:f9:72 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
    inet 169.254.199.227/16 brd 169.254.255.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::96c6:91ff:fe17:f972/64 scope link
       valid_lft forever preferred_lft forever
3: wlp58s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d4:25:8b:91:06:ef brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 94:c6:91:17:f9:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.77/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::96c6:91ff:fe17:f972/64 scope link
       valid_lft forever preferred_lft forever
5: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:05:16:24:e2:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.247.126/16 brd 169.254.255.255 scope global veth102i0
       valid_lft forever preferred_lft forever
    inet6 fe80::fc05:16ff:fe24:e247/64 scope link
       valid_lft forever preferred_lft forever
6: tap103i0: <BROADCAST,MULTICAST,PROMISC,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 1e:b9:0b:e2:af:bd brd ff:ff:ff:ff:ff:ff
    inet 169.254.76.37/16 brd 169.254.255.255 scope global tap103i0
       valid_lft forever preferred_lft forever
      
ip route show
default dev eno1 scope link
default via 192.168.1.1 dev vmbr0 proto kernel onlink
169.254.0.0/16 dev eno1 proto kernel scope link src 169.254.199.227
169.254.0.0/16 dev veth102i0 proto kernel scope link src 169.254.247.126
169.254.0.0/16 dev tap103i0 proto kernel scope link src 169.254.76.37
169.254.0.0/16 dev vmbr0 scope link metric 1000
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.77

ping -c 3 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.667 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.485 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.470 ms

--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2028ms
rtt min/avg/max/mdev = 0.470/0.540/0.667/0.089 ms

ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 169.254.199.227 icmp_seq=1 Destination Host Unreachable
From 169.254.199.227 icmp_seq=2 Destination Host Unreachable
From 169.254.199.227 icmp_seq=3 Destination Host Unreachable

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2046ms
pipe 3
 
default dev eno1 scope link
That looks suspicious to me. And since it is the first entry, it wins. The second line (via 192...) never comes into play! You can see that "ping 192..." works, yet "ping 8.8.8.8" is still routed via the 169.254.x.x address.

For comparison, my PVE:
Code:
~# ip route show
default via 10.3.12.254 dev vmbr3 proto kernel onlink
10.3.0.0/16 dev vmbr3 proto kernel scope link src 10.3.16.2
...
I cannot test it here, but you should be able to remove it manually with ip route del default dev eno1. Not persistently yet, since it is unclear where the entry comes from in the first place - but without it, at least "ping google" should work.
 
Thanks again for your help.
Code:
ip route del default dev eno1
root@proxmox7node01:~# ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 169.254.199.227 icmp_seq=1 Destination Host Unreachable
From 169.254.199.227 icmp_seq=2 Destination Host Unreachable
From 169.254.199.227 icmp_seq=3 Destination Host Unreachable

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2035ms
pipe 3
root@proxmox7node01:~# ping -c 3 google.com
PING google.com (142.250.70.174) 56(84) bytes of data.
From proxmox7node01.local (169.254.199.227) icmp_seq=1 Destination Host Unreachable
From proxmox7node01.local (169.254.199.227) icmp_seq=2 Destination Host Unreachable
From proxmox7node01.local (169.254.199.227) icmp_seq=3 Destination Host Unreachable

--- google.com ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2028ms
pipe 3

Pinging the router is ok.
 
Thanks. Yes, I get that. My networking knowledge doesn't get me to a solution, but thanks again for giving it a try.
After a systemctl restart networking I can always ping google.com for a few minutes, and then it fails again.
 
Do I need to add a line like this example:
post-up ip route add 192.168.0.0/24 via 172.30.141.5
?
 
More info
Successful routing after a networking restart
Code:
route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    0      0        0 vmbr0
link-local      0.0.0.0         255.255.0.0     U     1000   0        0 vmbr0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr0



Failing routing

Code:
route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         0.0.0.0         0.0.0.0         U     0      0        0 eno1
default         h268a           0.0.0.0         UG    0      0        0 vmbr0
link-local      0.0.0.0         255.255.0.0     U     0      0        0 eno1
link-local      0.0.0.0         255.255.0.0     U     1000   0        0 vmbr0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
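
A quick way to confirm which of those two default routes the kernel actually picks (a hedged suggestion, not output from this box):
Code:
ip route get 8.8.8.8
# while broken, this should show "dev eno1 src 169.254.x.x" rather than vmbr0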


syslog when connection failure starts

Code:
Nov 10 12:21:27 proxmox7node01.rb.test avahi-daemon[538]: Joining mDNS multicast group on interface eno1.IPv4 with address 169.254.152.32.
Nov 10 12:21:27 proxmox7node01.rb.test avahi-daemon[538]: New relevant interface eno1.IPv4 for mDNS.
Nov 10 12:21:27 proxmox7node01.rb.test avahi-daemon[538]: Registering new address record for 169.254.152.32 on eno1.IPv4.
Nov 10 12:21:27 proxmox7node01.rb.test connmand[541]: eno1 {add} address 169.254.152.32/16 label eno1 family 2
Nov 10 12:21:27 proxmox7node01.rb.test connmand[541]: eno1 {add} route 169.254.0.0 gw 0.0.0.0 scope 253 <LINK>
Nov 10 12:21:27 proxmox7node01.rb.test connmand[541]: eno1 {add} route 0.0.0.0 gw 0.0.0.0 scope 253 <LINK>

So what causes the joining to mDNS with that 169.254.x.x address?
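The connmand[541] lines above suggest it is ConnMan assigning the 169.254.x.x address, with Avahi merely announcing it. A hedged way to check (assuming standard Debian package and unit names):
Code:
dpkg -l | grep -E 'connman|avahi'
systemctl status connman.service avahi-daemon.service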
 
avahi-daemon[538]: Registering new address record for 169.254.152.32 on eno1.IPv4.
Where does this come from? On my nodes there is no Avahi installed. Try to remove it - it does things that are welcome in a living-room multimedia setup, but not on a server.
On my nodes some libraries are installed, but no daemon:
Code:
~# dpkg -l |grep avahi
ii  libavahi-client3:amd64               0.8-5+deb11u1                  amd64        Avahi client library
ii  libavahi-common-data:amd64           0.8-5+deb11u1                  amd64        Avahi common data files
ii  libavahi-common3:amd64               0.8-5+deb11u1                  amd64        Avahi common library
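
If the daemon itself shows up (not just the libraries), removing it on Debian would look something like this (a hedged sketch; check first that nothing you rely on needs mDNS):
Code:
systemctl disable --now avahi-daemon.service avahi-daemon.socket
apt purge avahi-daemon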
 
Thanks @UdoB .
There is definitely an avahi-daemon installed, I guess due to the Debian Bullseye origins, so it looks like I should remove it. This may be the difference between the installation from the ISO file and the Debian installation that I needed to find.
Just a short while ago I deleted the vmbr0 bridge definition and re-created it, and this seems to have solved it. I say this with some caution and will check it for the next day or so.
At the moment I won't make another change until I know things are stable.
Your assistance has been much appreciated. Thank you.

[EDIT] Does NFS require avahi-daemon? This may be the reason why it is installed.
 
Last edited:
So it seems my bridge definition got corrupted somehow, maybe because I restored a backup with out-of-date information.
 
