Host can ping default gateway but not beyond

tabre

New Member
Jul 9, 2024
Hi all,

I've had a PVE server running on my network for about a year and it worked perfectly until I recently reconfigured the network, changing from 10.0.0.0/24 to 10.100.0.0/23. Every other device has full LAN and internet access; the Proxmox host is the only machine that cannot reach the internet. I've read every forum post I can find on the subject, and nothing I've tried has resolved the issue, so I'm finally resorting to making this post. Any help would be appreciated.


What I have done:
  • Configured the machine to the best of my knowledge
  • Rebooted
  • Disabled the Proxmox firewalls
  • Flushed iptables (roughly the commands sketched below)
  • Numerous configuration tweaks referencing other forum posts (all reverted because they didn't work)
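
Not exact, but the firewall/iptables steps were along these lines (the precise invocations may have differed):
Code:
# stop the PVE firewall service on the host
pve-firewall stop

# flush all iptables rules and reset the default policies to ACCEPT
iptables -F
iptables -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT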


I can ping the gateway:
Code:
root@logicworx-pve:~# ping 10.100.1.253
PING 10.100.1.253 (10.100.1.253) 56(84) bytes of data.
64 bytes from 10.100.1.253: icmp_seq=1 ttl=255 time=0.537 ms
64 bytes from 10.100.1.253: icmp_seq=2 ttl=255 time=0.740 ms
64 bytes from 10.100.1.253: icmp_seq=3 ttl=255 time=0.661 ms
64 bytes from 10.100.1.253: icmp_seq=4 ttl=255 time=0.662 ms
64 bytes from 10.100.1.253: icmp_seq=5 ttl=255 time=0.728 ms
^C
--- 10.100.1.253 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4089ms
rtt min/avg/max/mdev = 0.537/0.665/0.740/0.072 ms


I can't ping Google DNS (or anything else on the internet):
Code:
root@logicworx-pve:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4104ms


Configuration Information:
Code:
root@logicworx-pve:~# cat /etc/hosts
127.0.0.1 localhost
10.100.1.1 logicworx-pve.local logicworx-pve

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Code:
root@logicworx-pve:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.100.1.1/23
        gateway 10.100.1.253
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0

iface wlp4s0 inet manual

Code:
root@logicworx-pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 44:8a:5b:42:00:56 brd ff:ff:ff:ff:ff:ff
3: wlp4s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 80:56:f2:a5:b7:85 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 44:8a:5b:42:00:56 brd ff:ff:ff:ff:ff:ff
    inet 10.100.1.1/23 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::468a:5bff:fe42:56/64 scope link
       valid_lft forever preferred_lft forever
5: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 1a:e7:a3:95:6c:5e brd ff:ff:ff:ff:ff:ff
6: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
    link/ether 52:55:43:f9:36:55 brd ff:ff:ff:ff:ff:ff
7: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:8f:eb:82:9e:91 brd ff:ff:ff:ff:ff:ff
8: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 02:34:4b:90:81:b0 brd ff:ff:ff:ff:ff:ff
9: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
    link/ether 4e:8f:eb:82:9e:91 brd ff:ff:ff:ff:ff:ff
10: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 7e:c5:51:4e:c4:8d brd ff:ff:ff:ff:ff:ff

Code:
root@logicworx-pve:~# ip route
default via 10.100.1.253 dev vmbr0 proto kernel onlink
10.100.0.0/23 dev vmbr0 proto kernel scope link src 10.100.1.1

Code:
root@logicworx-pve:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.100.1.253    0.0.0.0         UG    0      0        0 vmbr0
10.100.0.0      0.0.0.0         255.255.254.0   U     0      0        0 vmbr0
 
Everything looks good at first glance. Have you tried a traceroute instead of a ping? I also recommend running a tcpdump on the Proxmox host: tcpdump host 1.1.1.1
Sanity check: what kind of network configuration is on your machine?
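
For example, something along these lines (vmbr0 is the bridge from your config; 8.8.8.8 and 1.1.1.1 are just arbitrary public test addresses):
Code:
# trace the path to a public address to see at which hop traffic stops
traceroute -n 8.8.8.8

# watch what actually leaves and returns on the bridge while pinging 1.1.1.1 from another shell
tcpdump -ni vmbr0 host 1.1.1.1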
 
Thank you for suggesting this. After analyzing the traceroute I could see that traffic stopped completely after the first hop, which got me looking more closely at my router.

I found that I forgot to update my access control list on my router (whoops). The problem had nothing to do with the configuration on my Proxmox server.

Because I doubled the size of my network but didn't update the ACL's inverse (wildcard) mask, it was only letting the first half of the network (10.100.0.x) out to the internet and rejecting the second half (10.100.1.x).

Because my DHCP pool allocates addresses in the 10.100.0.x range, everything appeared fine for the other devices on the network, but none of my statically addressed devices in the 10.100.1.x range could get past the gateway. That didn't visibly break anything, because every device in that range is a local server. The only reason I noticed the issue on the Proxmox host is that it runs a VM hosting my local DNS server, which stopped resolving internet addresses.

Updating the ACL inverse mask from 0.0.0.255 to 0.0.1.255 fixed the problem.
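
For anyone finding this later: with a /23 the inverse mask has to cover both halves of the host range (10.100.0.0 through 10.100.1.255). In Cisco-style ACL syntax the change boils down to something like this (the ACL number here is just a placeholder):
Code:
! old: 0.0.0.255 only matches 10.100.0.0 - 10.100.0.255
access-list 10 permit 10.100.0.0 0.0.0.255

! new: 0.0.1.255 matches the whole /23, 10.100.0.0 - 10.100.1.255
access-list 10 permit 10.100.0.0 0.0.1.255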

Stupid mistake on my part, but thank you for taking the time to read and help me work through it.
 