bridged network

Yesterday I updated my servers to beta2. Everything worked fine until I noticed that all virtual servers with a bridged network (vmbr0) are affected: they are not pingable from outside, and pings from inside the VM to the outside world fail as well. The only thing that works is a ping from the host to the VM.
Is it necessary to type a command manually on the host? This behaviour also existed in the past when I migrated a VM to another server: the VM was only reachable again after I migrated it back to the server where I originally created it.

Some additional information:
ifconfig on the host:
veth107.0 Link encap:Ethernet HWaddr 00:18:51:6E:8B:AB
inet6 addr: fe80::218:51ff:fe6e:8bab/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:196 (196.0 b) TX bytes:334 (334.0 b)
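
For reference, whether that veth device is actually enslaved to the bridge can be checked with bridge-utils (a sketch; brctl comes with the bridge-utils package):

Code:
# List the bridges and their ports; veth107.0 should show up next to eth0
brctl show

# If it is missing, enslave it by hand and retest the ping
brctl addif vmbr0 veth107.0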

interface config:
proxmox:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 194.xxx.xxx.21
        netmask 255.255.255.0
        gateway 194.xxx.xxx.1
        bridge_ports eth0
        bridge_stp on
        bridge_fd 0
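
One detail in this stanza worth a second look: with bridge_stp on, the bridge runs spanning tree, and a port can sit in the listening/learning states before it forwards traffic. A sketch of the variant usually used for simple host bridges, with STP disabled so ports forward immediately:

Code:
auto vmbr0
iface vmbr0 inet static
        address 194.xxx.xxx.21
        netmask 255.255.255.0
        gateway 194.xxx.xxx.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0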



route (I inserted the 194.xxx.xxx.16 entry by hand):
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
radius-00.nanet * 255.255.255.255 UH 0 0 0 venet0
194.xxx.xxx.16 * 255.255.255.255 UH 0 0 0 venet0
194.xxx.xxx.0 * 255.255.255.0 U 0 0 0 vmbr0
default at-mib-wi-r01.n 0.0.0.0 UG 0 0 0 vmbr0
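
A side note on that hand-added route: it sends traffic for 194.xxx.xxx.16 into venet0, so the host will never use the bridge for that address. With a bridged veth, the VM's IP is already covered by the connected 194.xxx.xxx.0/24 route on vmbr0, so removing the manual entry is worth a try (a sketch):

Code:
route del -host 194.xxx.xxx.16 dev venet0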

VM:
ifconfig in the VM:

eth0 Link encap:Ethernet HWaddr 00:18:51:ED:6C:52
inet addr:194.xxx.xxx.16 Bcast:194.xxx.xxx.255 Mask:255.255.255.0
inet6 addr: fe80::218:51ff:feed:6c52/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1685 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:75909 (74.1 KiB) TX bytes:664 (664.0 b)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:672 (672.0 b) TX bytes:672 (672.0 b)

route:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
194.xxx.xxx.0 * 255.255.255.0 U 0 0 0 eth0
default 194.xxx.xxx.1 0.0.0.0 UG 0 0 0 eth0


Any ideas?

thanks a lot,
Patrick
 
beta2 adds fixed MAC addresses to the VMs, so it is possible that the MAC addresses changed. Please check if your VMs use persistent network device names based on MAC addresses. If so, reconfigure them to use the new MAC addresses.
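
For example, on Debian-based guests the persistent names usually live in a udev rules file (a sketch; the exact filename differs per distribution and release):

Code:
# Inside the VM: show which MAC udev has pinned eth0 to
cat /etc/udev/rules.d/70-persistent-net.rules

# Compare it with the MAC the interface actually has
ip link show eth0

# If they differ, correct the MAC in the rules file (or delete the file
# and restart the VM so udev regenerates it), then bring eth0 up again.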

- Dietmar
 
UPDATE

I created a new Debian VM. I managed to ping from the host to the VM and from the VM to the host, but not to the outside world.

Host:
veth108.0 Link encap:Ethernet HWaddr 00:18:51:02:FC:83
inet6 addr: fe80::218:51ff:fe02:fc83/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:245 errors:0 dropped:0 overruns:0 frame:0
TX packets:3448 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9540 (9.3 KiB) TX bytes:156591 (152.9 KiB)

cat /etc/vz/conf/108.conf
NETIF="ifname=eth0,mac=00:18:51:AE:00:26,host_ifname=veth108.0,host_mac=00:18:51:02:FC:83"

VM ifconfig:
eth0 Link encap:Ethernet HWaddr 00:18:51:AE:00:26
inet addr:194.112.145.17 Bcast:194.112.145.255 Mask:255.255.255.0
inet6 addr: fe80::218:51ff:feae:26/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3700 errors:0 dropped:0 overruns:0 frame:0
TX packets:245 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:167887 (163.9 KiB) TX bytes:9540 (9.3 KiB)


What other information should I provide so that you have everything you need?
 
They were generated automatically, except for "194.xxx.xxx.16 * 255.255.255.255 UH 0 0 0 venet0".

Again: Why do you use those custom routing tables?

 
What do you do when you create a (debian-standard) VM with bridged networking?
While creating it I chose "Bridged Ethernet". After creating and starting the VM, I logged into it and configured the ethernet device as if it were a real server (/etc/network/interfaces):

cat /etc/network/interfaces
# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 194.xxx.xxx.16
        netmask 255.255.255.0
        gateway 194.xxx.xxx.1


Is that how it should be? I googled a bit, but with no positive results. It looks like using a bridged network device in Proxmox/OpenVZ is a little bit difficult ;)
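
A quick way to see whether frames from the VM reach anything at all is to watch ARP: if the gateway's MAC never resolves, the traffic is not leaving the bridge (a sketch; assumes the iputils arping is installed in the VM):

Code:
# Inside the VM: force ARP resolution of the gateway
arping -c 3 -I eth0 194.xxx.xxx.1

# Then inspect the ARP cache; an "incomplete" entry means no reply came back
arp -an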
 
I use venet on the other VMs, and that works great. But on this one machine there is a purchased web application running that checks the MAC address of the network device. Since venet has no MAC, I had to use veth. Now I'm proverbially caught in a bind (a "Zwickmühle") ;)
It looks like the host can't forward the packets to the outside. Host-side networking is no problem.
The server is connected to an HP ProCurve 1800G switch with VLAN support (VLAN 1 is configured by default for the administration console). Maybe I have to look for the solution to the problem there as well. But before the upgrade it worked on the same switch, too.

As I mentioned, migrating the VM to another server and bringing the network up didn't work in the past either; back on the source host server it worked again. Very strange behaviour. I will continue searching for the solution. There must be one! :) I only have to find it ;)
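
To narrow down where the packets die, tcpdump on both the veth port and the physical uplink shows whether the bridge forwards the VM's traffic at all (a sketch, run on the host):

Code:
# Does the VM's traffic show up on the veth port?
tcpdump -ni veth108.0 icmp or arp

# ...and does it make it out through the physical NIC?
tcpdump -ni eth0 icmp or arp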

Regards,
Patrick
 
The server is connected to an HP ProCurve 1800G switch with VLAN support (VLAN 1 is configured by default for the administration console). Maybe I have to look for the solution to the problem there as well. But before the upgrade it worked on the same switch, too.

The configuration you posted does not include or mention any VLAN setup - I am confused now.

- Dietmar
 
If you give me root access to the machine, I will test tomorrow. My RSA key is:

Code:
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAywdGDogQRufTWDQd2fLkROfDzZvYgv1MhQlJnwYxkWE7k+Z8oEcJ4JPAkpyMxg1WCr0GJAzcJ3720hSjuAjiKslaUiqnc6YX4JnwZ3YLK8kbe+iGjByEGUuFKvJA7yJQ4yeZqOwY8BE6YIcfnhtpsU4fwzvkz2oOvMxcFnaGRFYTBtEkxti5Yk35W9i1JhybP6CWkRlAVB+5DylGTMLCAovd1mCEOUOvc6AcP0KLCw5VMd+2ib4Xs8L3/nKYNrEkJVBnhUGEgRC2JowtP9Ua07cg1K2YkwYF6YaOYz1GtEzh565ZEoCzP934rnMOf6Nn2sq6QizrCKgVZ9HGUVtj3w== root@tequila
 
