bridge and forwarding / routing

MartinP

New Member
Feb 25, 2009
Hello,

I have an issue with forwarding/routing over an internal bridge, and I cannot find the solution. Perhaps you can give me a hint.

Description of the situation:

On a host named "proxmox", there are several bridges:

Code:
proxmox:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.001d603f92cc       yes             eth0
                                                        veth105.0
vmbr1           8000.000854534a5a       yes             eth1
                                                        veth104.1
vmbr2           8000.000854534622       yes             eth2
                                                        veth104.2
vmbr3           8000.001851e12078       no              veth104.0
                                                        veth105.1
For the moment I have two containers, and both are connected to the bridge vmbr3.
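For reference, vmbr3 is a bridge on the host without any physical interface; its definition in /etc/network/interfaces would look roughly like this (only a sketch, not a copy of my actual config; the veth ports are added automatically when the containers start):

Code:
# host /etc/network/interfaces (sketch): internal bridge with no physical NIC
auto vmbr3
iface vmbr3 inet static
        address 10.0.0.100
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0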

The container named "serveur" (container ID 105) has two interfaces:
Code:
root@serveur:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:18:51:8c:f7:1d
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::218:51ff:fe8c:f71d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:294 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:13337 (13.0 KB)  TX bytes:384 (384.0 B)

eth1      Link encap:Ethernet  HWaddr 00:18:51:d3:83:eb
          inet addr:10.0.0.105  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::218:51ff:fed3:83eb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:972 (972.0 B)  TX bytes:2872 (2.8 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The default route is set to the interface of the other container:

Code:
root@serveur:/# ip route show
10.0.0.0/24 dev eth1  proto kernel  scope link  src 10.0.0.105
192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.1
default via 10.0.0.104 dev eth1  metric 100
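For reference, such a route can be added by hand with iproute2 along these lines (a sketch; the addresses are those from my setup above):

Code:
# inside CT105 ("serveur"): send all non-local traffic via the balancer
ip route add default via 10.0.0.104 dev eth1 metric 100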
And this route seems to be working, because I can ping the other container:

Code:
root@serveur:/# ping 10.0.0.104
PING 10.0.0.104 (10.0.0.104) 56(84) bytes of data.
64 bytes from 10.0.0.104: icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from 10.0.0.104: icmp_seq=2 ttl=64 time=0.010 ms
64 bytes from 10.0.0.104: icmp_seq=3 ttl=64 time=0.030 ms
The other container is named "balancer" (container ID 104), and has three interfaces:
Code:
root@balancer:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:18:51:cb:b3:2b
          inet addr:10.0.0.104  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::218:51ff:fecb:b32b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:172 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11892 (11.6 KB)  TX bytes:2036 (1.9 KB)

eth1      Link encap:Ethernet  HWaddr 00:18:51:fa:fa:6d
          inet addr:192.168.200.104  Bcast:192.168.200.255  Mask:255.255.255.0
          inet6 addr: fe80::218:51ff:fefa:fa6d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1044 errors:0 dropped:0 overruns:0 frame:0
          TX packets:121 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:41015 (40.0 KB)  TX bytes:7820 (7.6 KB)

eth2      Link encap:Ethernet  HWaddr 00:18:51:69:4e:90
          inet addr:10.193.96.132  Bcast:10.193.96.159  Mask:255.255.255.224
          inet6 addr: fe80::218:51ff:fe69:4e90/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1022 errors:0 dropped:0 overruns:0 frame:0
          TX packets:70 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:39853 (38.9 KB)  TX bytes:4556 (4.4 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
In this container, I set up a default route with two nexthops, because I want to balance the traffic between two Internet connections:
Code:
root@balancer:/# ip route show
10.193.96.128 dev eth2  scope link  src 10.193.96.132
192.168.200.0 dev eth1  scope link  src 192.168.200.104
10.193.96.128/27 dev eth2  proto kernel  scope link  src 10.193.96.132
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.104
192.168.200.0/24 dev eth1  proto kernel  scope link  src 192.168.200.104
default
        nexthop via 192.168.200.1  dev eth1 weight 3
        nexthop via 10.193.96.129  dev eth2 weight 1
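Such a multipath default route can be created roughly like this (a sketch; the gateways are the two ISP routers of my setup):

Code:
# inside CT104 ("balancer"): one default route with two weighted nexthops
ip route replace default \
        nexthop via 192.168.200.1 dev eth1 weight 3 \
        nexthop via 10.193.96.129 dev eth2 weight 1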
Now, when I go back to the other container ("serveur", ID 105) and try to reach the outgoing interfaces of my "balancer" (ID 104), there is no response:

Code:
root@serveur:/# ping 192.168.200.104
PING 192.168.200.104 (192.168.200.104) 56(84) bytes of data.

--- 192.168.200.104 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 8999ms

root@serveur:/# ping 10.193.96.132
PING 10.193.96.132 (10.193.96.132) 56(84) bytes of data.

--- 10.193.96.132 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10009ms
Both containers are based on the template ubuntu-8.0-standard_8.04-1_i386.tar.gz. The Proxmox Virtual Environment is version 1.1.

I have probably misunderstood something fundamental; I have been trying for some days now and have not managed to find a solution.

Can you give me a hint?

Martin
 
From container ID 105 ("serveur") I ran "ping 192.168.200.104". With tcpdump I can see that this interface (which sits in CT 104) is answering, but the replies do not make it back, so the ping command never gets an echo.

So I don't know what to do. The routing seems to work in one direction, but the echo reply does not find its way back.
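To see where the replies get lost, the traffic can be compared inside the container and on the corresponding ports on the host (a sketch of the commands; the interface names are those of my setup):

Code:
# inside CT105: do the echo replies ever arrive?
tcpdump -ni eth1 icmp

# on the host: the same traffic on CT105's bridge port and on the bridge itself
tcpdump -ni veth105.1 icmp
tcpdump -ni vmbr3 icmp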

Until now I have always worked with real cables, switches and so on, and I think of the bridge as a kind of switch into which the vethX interfaces are plugged. But perhaps there are more settings to configure.

Could there be a problem with the MAC addresses?
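One thing I could check is which MAC addresses the bridge has learned on its ports (run on the host):

Code:
# on the host: MAC addresses learned per port of vmbr3
brctl showmacs vmbr3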

Martin
 
In the meantime, I have enabled STP on vmbr3. But there was no change.
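For reference, STP on an existing bridge can be toggled like this (a sketch; it can also be set permanently with bridge_stp in /etc/network/interfaces):

Code:
# on the host: turn on spanning tree for vmbr3
brctl stp vmbr3 on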

I checked ARP:

arp on the freshly started system:

Code:
HOST
192.168.1.25 on vmbr0 (my testing machine with PuTTY)
10.0.0.104 on vmbr3 (the default gateway for the host)
192.168.200.1 on vmbr1 (the default gateway for CT104)
I have disconnected eth2 for the moment to keep the setup less confusing.

CT105
10.0.0.104 on eth1 (i.e. veth105.1) (the default gateway for CT105)

CT104
192.168.200.1 on eth1 (i.e. veth104.1) (the default gateway for CT104)
10.0.0.100 on eth0 (i.e. veth104.0) (the IP address of vmbr3)
After trying some pings to different destinations from inside CT105, the ARP tables changed to:
Code:
CT105
192.168.200.100 eth1
10.0.0.104 eth1

CT104
192.168.100.1 eth1
192.168.200.100 eth1
10.0.0.105 eth0
10.0.0.100 eth0

Host
192.168.1.25 vmbr0
10.0.0.105 vmbr3
10.0.0.104 vmbr3
10.0.0.104 vmbr3
My setup:
Code:
       Host  CT104      CT105
vmbr0  eth0  -          veth105.0
vmbr1  eth1  veth104.1  -
vmbr2  eth2  veth104.2  -
vmbr3  -     veth104.0  veth105.1

eth0 = office network 192.168.1.0/24
eth1 = ISP 1  network with router 192.168.200.1/24
eth2 = ISP 2  network with router 10.193.96.129/27

vmbr0 connects containers with services to the office network.
vmbr3 connects these containers (if necessary) to CT104.
vmbr1 connects the load-balancing CT104 to ISP 1.
vmbr2 connects the load-balancing CT104 to ISP 2.

Now, to understand more, I made up a table showing where pings get an answer and where they do not:

[Attached image: pingb.jpg — table of which pings get an answer]


That's where I am at the moment.

Martin
 
Be sure to run, on the hardware node:

ifconfig veth105.1 up

where veth105.1 is the network interface of the container with problems.
:D

Also, the hardware interface should be in promiscuous mode.

Sorry to wake up an old thread, but I had the same problem a while ago and solved it this way.
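In practice that means something along these lines on the hardware node (a sketch; substitute the veth and physical interface names of your own setup):

Code:
# bring the container's host-side veth interface up
ifconfig veth105.1 up

# put the physical NIC behind the bridge into promiscuous mode
ifconfig eth1 promisc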
 
@llazzaro: Thank you for your answer. In the meantime, I have switched to another approach, without Proxmox. Perhaps I will give it another try later on, and then I will come back and check out your suggestion.

Martin