Proxmox 5.2 bridge is not working (OVH vRack)

Francisco FreeMEM
New Member
Sep 13, 2018
I have two LXC (CT) machines configured with public IPs from a RIPE block in an OVH vRack. They can ping each other, but not their gateway. I have other machines at SoYouStart that need a /32 netmask configuration plus a virtual MAC created in the SoYouStart panel, and those work correctly. I've tried the same configuration on my Proxmox machine at OVH, but it doesn't work; neither does the vRack setup, which shouldn't need a restricted virtual MAC at all.
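For context, the working SoYouStart setup mentioned above looks roughly like this inside a guest, following OVH's documented failover-IP pattern (a sketch only; all addresses are placeholders, and the .254 address is the host's gateway):

# /etc/network/interfaces inside the guest (sketch, placeholder addresses)
auto eth0
iface eth0 inet static
    address 46.xx.xx.xx
    netmask 255.255.255.255
    # the gateway is not on the guest's /32, so add it as an on-link route first
    post-up ip route add 5.xx.xx.254 dev eth0
    post-up ip route add default via 5.xx.xx.254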
I'm using this kernel:
uname -a
Linux cluster1 4.15.18-7-pve #1 SMP PVE 4.15.18-27 (Wed, 10 Oct 2018 10:50:11 +0200) x86_64 GNU/Linux


The Proxmox machine has this network configuration:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto eno2
iface eno2 inet static
address 10.0.0.1
netmask 255.255.255.0

auto vmbr0
iface vmbr0 inet static
address 5.xx.xx.xx/24
gateway 5.xx.xx.254
bridge_ports eno1
bridge_stp off
bridge_fd 0

The eno2 device is used to sync with another machine in a Proxmox cluster.

brctl show
bridge name bridge id STP enabled interfaces
vmbr0 8000.a4bf012f52a5 no eno1
veth100i0
veth101i0

The CT machines are bridged to vmbr0.
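For reference, that attachment lives in the container config under /etc/pve/lxc/; going by the veth100i0 name above, the first CT is VMID 100, and its network line would look something like this (a sketch built from the values shown below, not copied from the actual config):

# /etc/pve/lxc/100.conf (relevant line only, sketch)
net0: name=eth0,bridge=vmbr0,hwaddr=02:00:00:1c:8b:af,ip=51.xx.xx.82/28,gw=51.xx.xx.94,type=veth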

- The first CT machine's configuration:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:00:00:1c:8b:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 51.xx.xx.82/28 brd 51.xx.xx.95 scope global eth0
valid_lft forever preferred_lft forever

ip ro
default via 51.xx.xx.94 dev eth0 proto static
51.xx.xx.80/28 dev eth0 proto kernel scope link src 51.xx.xx.82


- The second CT machine:
ip ro
default via 51.xx.xx.94 dev eth0 proto static
51.xx.xx.80/28 dev eth0 proto kernel scope link src 51.xx.xx.83

They can ping each other, but not their gateway.
From the Proxmox machine it is possible to ping the gateway.

The ping from a CT to the gateway:
ping 51.xx.xx.94
PING 51.xx.xx.94 (51.xx.xx.94) 56(84) bytes of data.
From 51.xx.xx.83 icmp_seq=1 Destination Host Unreachable
From 51.xx.xx.83 icmp_seq=2 Destination Host Unreachable

A tcpdump on the Proxmox machine shows the following ARP traffic:
tcpdump -n host 51.xx.xx.83
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vmbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:14:47.608612 ARP, Request who-has 51.xx.xx.94 tell 51.xx.xx.83, length 28
21:14:48.632747 ARP, Request who-has 51.xx.xx.94 tell 51.xx.xx.83, length 28
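A capture on the physical port itself, not just the bridge, would show whether these requests ever reach the wire; something like the following, with the interface and address as used above:

# check whether the ARP requests actually leave the physical NIC
tcpdump -eni eno1 arp and host 51.xx.xx.83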


I've also set

echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp

but it makes no difference.
Could you kindly tell me why this doesn't work?
Best regards,
Francisco
 
With OVH, make sure you use a VMware-compatible MAC address on the virtual machine's NIC, and that it matches the one created by their system for the IP.
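If the MAC was generated as a VMware-type virtual MAC in the panel, it can be applied to the VM's NIC with something like this (the MAC itself is a placeholder; 00:50:56 is the VMware prefix OVH uses for that type):

# set the NIC MAC on VMID 100 to match the panel-generated virtual MAC (sketch)
qm set 100 --net0 virtio=00:50:56:xx:xx:xx,bridge=vmbr0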

I have two weeks left on a SoYouStart (OVH) server that is spare until then, so I will test Proxmox on it to see whether it works OK.
 
The problem was that the vRack is assigned to only one of the network devices: the RIPE block was routed to the vRack on the eno2 device, so I had to create a vmbr1 bridged on eno2.
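For anyone with the same setup, a minimal sketch of that fix in /etc/network/interfaces on the host (assuming the 10.0.0.1 cluster address moves onto the new bridge; the standalone eno2 stanza above gets replaced by this):

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bridge_ports eno2
    bridge_stp off
    bridge_fd 0

The containers then attach to vmbr1 instead of vmbr0 and keep their 51.xx.xx.x/28 addresses and the 51.xx.xx.94 gateway.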
 
