Is my provider limiting me?

FilipSK

New Member
Jun 24, 2019
Hello everyone, new proxmox user here.

I am really lost on this one. I recently migrated from vmware to proxmox, but I have problems setting up my network correctly.

I have 62.XX.XX.160/29 assigned to me by my hosting provider, with 62.XX.XX.161 being the default route out.
.162 is used by the server management card, .163 is proxmox, and .164-.166 can be used by VMs (.167 being broadcast).
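For reference, that /29 layout can be double-checked with Python's `ipaddress` module (62.0.0.160 is just a placeholder for the redacted 62.XX.XX.160 prefix):

```python
import ipaddress

# Placeholder prefix: 62.0.0.160/29 stands in for the redacted 62.XX.XX.160/29.
net = ipaddress.ip_network("62.0.0.160/29")

print(net.network_address)            # 62.0.0.160 - the network address
print(net.broadcast_address)          # 62.0.0.167 - the broadcast address
print([str(h) for h in net.hosts()])  # .161-.166 are the six usable hosts
```

Of the six usable addresses, .161 (gateway), .162 (management card) and .163 (proxmox host) are taken, leaving .164-.166 for VMs, exactly as laid out above.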

I am using a bridged setup, bridging my physical interface through vmbr0. I was using a similar setup on vmware as well. The problem is that in proxmox this only works on one VM, and not on the others. I can't reach any VM other than 100, in either direction. Am I doing something wrong, or did my provider start filtering MACs? (I have no idea how to test that, though :( ). If so, what options do I have?

HOST configuration
Code:
# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

iface enp8s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  62.X.X.163
        netmask  255.255.255.248
        gateway  62.X.X.161
        bridge-ports enp8s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#local
Code:
# ip route show
default via 62.X.X.161 dev vmbr0 onlink 
62.X.X.160/29 dev vmbr0 proto kernel scope link src 62.X.X.163
Code:
# ip address show enp8s0f0
2: enp8s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 00:19:99:98:bb:2e brd ff:ff:ff:ff:ff:ff

# ip address show vmbr0
25: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:19:99:98:bb:2e brd ff:ff:ff:ff:ff:ff
    inet 62.X.X.163/29 brd 62.X.X.167 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::219:99ff:fe98:bb2e/64 scope link 
       valid_lft forever preferred_lft forever

VM 100 configuration (ubuntu) - works perfectly

Code:
$ cat /etc/netplan/50-cloud-init.yaml 
network:
    ethernets:
        ens18:
            addresses:
            - 62.X.X.164/29
            gateway4: 62.X.X.161
            nameservers:
                addresses:
                - 8.8.8.8
    version: 2
Code:
$ ip route show
default via 62.X.X.161 dev ens18 proto static 
62.X.X.160/29 dev ens18 proto kernel scope link src 62.X.X.164
Code:
$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether fe:f8:90:40:4a:84 brd ff:ff:ff:ff:ff:ff
    inet 62.X.X.164/29 brd 62.XX.XX.167 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::fcf8:90ff:fe40:4a84/64 scope link 
       valid_lft forever preferred_lft forever

Testing it all:
Code:
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=20.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=20.9 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 20.896/20.902/20.909/0.144 ms


VM 101 configuration (ubuntu) - no network

Code:
$ cat /etc/netplan/50-cloud-init.yaml 
network:
    ethernets:
        ens18:
            addresses:
            - 62.X.X.165/29
            gateway4: 62.X.X.161
            nameservers:
                addresses:
                - 8.8.8.8
    version: 2

Code:
$ ip route show
default via 62.X.X.161 dev ens18 proto static 
62.XX.XX.160/29 dev ens18 proto kernel scope link src 62.X.X.165
Code:
$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 16:dd:e4:41:46:63 brd ff:ff:ff:ff:ff:ff
    inet 62.X.X.165/29 brd 62.X.X.167 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::14dd:e4ff:fe41:3663/64 scope link 
       valid_lft forever preferred_lft forever

Code:
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 62.XX.XX.165 icmp_seq=1 Destination Host Unreachable
From 62.XX.XX.165 icmp_seq=2 Destination Host Unreachable
From 62.XX.XX.165 icmp_seq=3 Destination Host Unreachable
From 62.XX.XX.165 icmp_seq=4 Destination Host Unreachable
...
 
From a quick glance at your config, this should work.

Maybe the provider is indeed filtering MAC addresses - you could ask their support.

* can you ping the default-gateway from VM101?
* what does the neighbor table/arp table show on VM101 after trying to ping the gateway - `ip neigh` ?
 
Hello,

it is not possible to ping the default GW; I get the same error as above.

Code:
$ ip neigh
62.X.X.161 dev ens18   FAILED

I will write to their support and see what they say. As far as I understand, if they do filter MAC addresses, I should switch to a routed setup.
 
Hmm - it seems there is a problem on layer 2.
Just to be sure: I'm assuming that both VMs have one interface which is a member of vmbr0 directly (and not of vmbr1).

In that case please contact your provider for hints/assistance/documentation.

hope this helps!
 
Yes, they are only connected to vmbr0; vmbr1 was created for future use, but nothing is assigned to it.

Thank you very much for the help!
 
It seems that MAC filtering was indeed implemented at some point by my provider.
To make things easier, I've decided to switch to a routed setup. In case someone has the same problem, here is how I did it:

HOST configuration
The host IP config is done on the physical device; the bridge is disconnected from the physical device and has no IP defined. Proxy ARP does the magic - we just need IP forwarding activated, and every guest behind the bridge needs a route so the host knows where it is (this works for IPs from the same subnet the host uses, as well as for completely different ranges). The proxmox wiki recommends assigning an IP to the bridge, but that seems like a waste of an IP address.
Code:
# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto enp8s0f0
iface enp8s0f0 inet static
        address  62.X.X.163
        netmask  255.255.255.248
        gateway  62.X.X.161
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/enp8s0f0/proxy_arp
auto vmbr0
iface vmbr0 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up ip route add 85.Y.Y.65/32 dev vmbr0
        post-up ip route add 62.X.X.164/32 dev vmbr0
        pre-down ip route del 85.Y.Y.65/32 dev vmbr0
        pre-down ip route del 62.X.X.164/32 dev vmbr0
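To illustrate why each guest needs its own /32 route here: the kernel picks the most specific matching route, so without the `post-up ip route add ... dev vmbr0` entries, traffic for a guest IP from the host's own /29 would match the on-link /29 route on the physical NIC and never reach the bridge. A minimal longest-prefix-match sketch (placeholder 62.0.0.x addresses, not a real routing implementation):

```python
import ipaddress

# Simplified view of the host's routing table after the config above.
routes = [
    ("0.0.0.0/0",     "enp8s0f0"),  # default route via the provider gateway
    ("62.0.0.160/29", "enp8s0f0"),  # host's on-link subnet on the physical NIC
    ("62.0.0.164/32", "vmbr0"),     # per-guest route added by post-up
]

def lookup(dst: str) -> str:
    """Return the outgoing device via longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(n), dev) for n, dev in routes
               if addr in ipaddress.ip_network(n)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("62.0.0.164"))  # vmbr0    - the /32 beats the /29, traffic goes to the guest
print(lookup("62.0.0.161"))  # enp8s0f0 - the gateway stays on the physical NIC
print(lookup("8.8.8.8"))     # enp8s0f0 - everything else takes the default route
```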

VM 101
The host IP on the physical interface is used as the gateway; it is not possible to use the provider's default GW directly as in my original post.
Code:
$ cat /etc/netplan/50-cloud-init.yaml
network:
    ethernets:
        ens18:
            addresses:
            - 62.X.X.164/29
            gateway4: 62.X.X.163
            nameservers:
                addresses:
                - 8.8.8.8
    version: 2

VM 102
A little example of how to use a default route that is not part of the assigned IP range. If we had assigned an IP to the host on the bridge, we would use that one instead (that's how the proxmox wiki shows it).
Code:
network:
    ethernets:
        ens18:
            addresses:
            - 85.Y.Y.66/29
            routes:
            - to: 0.0.0.0/0
              via: 62.X.X.163
              on-link: true
            nameservers:
                addresses:
                - 8.8.8.8
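The `on-link: true` flag matters here because the gateway lies outside the VM's own subnet, so the kernel would otherwise reject the route as having an unreachable next hop. A quick check with `ipaddress` (62.0.0.163 and 85.0.0.64/29 are placeholders for the redacted 62.X.X.163 and 85.Y.Y range):

```python
import ipaddress

vm_net = ipaddress.ip_network("85.0.0.64/29")  # VM 102's subnet (placeholder)
gateway = ipaddress.ip_address("62.0.0.163")   # host IP used as gateway (placeholder)

# The gateway is NOT inside the VM's subnet, so a plain "via" route would be
# refused; "on-link: true" tells the kernel the next hop is reachable
# directly on ens18 anyway.
print(gateway in vm_net)  # False -> on-link is required
```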
That's about it, hope it helps someone in the future.
 
