Network issue: VM Talking to a Physical Server

Jul 14, 2011
Situation

A few VMs have two NICs: one for the public network and one for the LAN, which uses the vmbr10 bridge. A database server currently hosted on the Proxmox host needs to be moved to a physical server outside of the Proxmox host.

The Proxmox hosts (nodes) are configured as a cluster. We have 4 nodes, each with a public IP address on eth0; the secondary NIC (eth1) is connected to an unmanaged switch, which forms the LAN. The nodes, along with the VMs, talk to each other over this network.

Question

How can I make the VMs talk to the physical server (the database server) outside of the hosts (nodes) using eth1?

Hosts (nodes) Network Configuration

iface eth0 inet manual

auto vmbr0 # WAN
iface vmbr0 inet static
address 184.x.x.x
netmask 255.255.255.224
gateway 184.x.x.x
broadcast 184.x.x.x
bridge_ports eth0
bridge_stp off
bridge_fd 0

auto eth1 # LAN -> Unmanaged Switch
iface eth1 inet static
address 10.10.1.1
netmask 255.0.0.0

auto vmbr10 # Bridge for the VMs
iface vmbr10 inet manual
bridge_ports eth1.10
bridge_stp off
bridge_fd 0


VM Network Configuration (needs to talk to the database server)

auto eth0 # WAN
iface eth0 inet static
address 209.x.x.x
netmask 255.255.255.224
broadcast 209.x.x.x
gateway 209.x.x.x

auto eth1 # LAN connected to vmbr10
iface eth1 inet static
address 10.0.2.1
netmask 255.255.255.0

Database Server (Physical Server outside of the cluster)

auto eth0 # WAN
iface eth0 inet static
address 184.x.x.x
netmask 255.255.255.224
network 184.x.x.x
broadcast 184.x.x.x
gateway 184.x.x.x

auto eth1 # LAN -> Unmanaged Switch which can ping all the nodes
iface eth1 inet static
address 10.10.1.4
netmask 255.0.0.0

auto eth1:0 # Database IP address which needs to talk to the VM (10.0.2.1)
iface eth1:0 inet static
address 10.0.2.98
netmask 255.255.255.0
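As a sanity check on the addressing: the VM's LAN address (10.0.2.1/24) and the database alias (10.0.2.98/24) fall in the same /24, so the two machines expect to reach each other directly at layer 2, with no router in between. A quick way to confirm that two addresses share a subnet, in plain shell arithmetic (the `same_subnet` helper is just an illustration, not an existing tool):

```shell
#!/bin/bash
# same_subnet IP1 IP2 NETMASK
# Succeeds (exit 0) when both addresses fall in the same subnet.
same_subnet() {
    local ip1=$1 ip2=$2 mask=$3
    local a b m IFS=.
    # Split each dotted quad on '.' and pack it into a 32-bit integer.
    set -- $ip1;  a=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    set -- $ip2;  b=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    set -- $mask; m=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    # Same masked network part <=> same subnet.
    [ $(( a & m )) -eq $(( b & m )) ]
}

same_subnet 10.0.2.1 10.0.2.98 255.255.255.0 && echo "same subnet"   # VM <-> DB alias
same_subnet 10.0.2.1 10.10.1.4 255.255.255.0 || echo "different"     # VM <-> DB primary
```

Since the addressing checks out, any failure has to be below IP, at layer 2.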


Troubleshooting

They can see each other's ARP requests:
Code:
root@database:~# tcpdump -ann -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
12:26:11.843817 ARP, Request who-has 10.0.2.98 tell 10.0.2.1, length 42
12:26:12.003217 ARP, Request who-has 10.0.2.1 tell 10.0.2.98, length 28
12:26:12.853775 ARP, Request who-has 10.0.2.98 tell 10.0.2.1, length 42
12:26:13.003217 ARP, Request who-has 10.0.2.1 tell 10.0.2.98, length 28
12:26:14.845013 ARP, Request who-has 10.0.2.98 tell 10.0.2.1, length 42
12:26:15.003221 ARP, Request who-has 10.0.2.1 tell 10.0.2.98, length 28
12:26:15.843994 ARP, Request who-has 10.0.2.98 tell 10.0.2.1, length 42
12:26:16.003199 ARP, Request who-has 10.0.2.1 tell 10.0.2.98, length 28
And the database server's ICMP echo requests to the VM do go out on eth1:
Code:
root@database:~# tcpdump -ann -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
12:27:36.073255 IP 10.0.2.98 > 10.0.2.1: ICMP echo request, id 26858, seq 6, length 64
12:27:37.073246 IP 10.0.2.98 > 10.0.2.1: ICMP echo request, id 26858, seq 7, length 64
12:27:38.073243 IP 10.0.2.98 > 10.0.2.1: ICMP echo request, id 26858, seq 8, length 64
12:27:39.073247 IP 10.0.2.98 > 10.0.2.1: ICMP echo request, id 26858, seq 9, length 64
12:27:40.073243 IP 10.0.2.98 > 10.0.2.1: ICMP echo request, id 26858, seq 10, length 64

Pinging from the database to the VM (and vice versa) fails:

Code:
root@database:~# ping 10.0.2.1
PING 10.0.2.1 (10.0.2.1) 56(84) bytes of data.
From 10.0.2.98 icmp_seq=1 Destination Host Unreachable
From 10.0.2.98 icmp_seq=2 Destination Host Unreachable
From 10.0.2.98 icmp_seq=3 Destination Host Unreachable
From 10.0.2.98 icmp_seq=5 Destination Host Unreachable

I don't get it: they see each other, but they don't talk.
 
Hi,
your network config on eth1/vmbr10 is wrong! You can't use tagged traffic with an unmanaged switch!

Use something like this:
Code:
auto eth1 # LAN -> Unmanaged Switch
iface eth1 inet static
    address  0.0.0.0
    netmask  0.0.0.0

auto vmbr10
iface vmbr10 inet static
    address  10.10.1.1
    netmask  255.0.0.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

Udo
 

You mean on the host?

Keep in mind, I need all the hosts (nodes) to be able to talk to each other, because I have VMs that talk among themselves while located on different nodes. Say I have a VM cluster with 2 HAProxy and 2 web servers: I deploy the web servers on nodes 1 and 2, and they communicate via the LAN, which uses a private bridge on both nodes. That means I need to replicate the bridges on both nodes.

That's why I need all the nodes to be able to communicate over a private LAN for this to work.
 
Hi Simon,
are you sure that your "private" communication works well with an unmanaged switch? I ask because you use VLAN tagging and the switch isn't able to handle this traffic - also, all ports get this traffic - not very private!

You could also use another IP range (like 192.168.10.0/24) for all "private" traffic, but that is also clumsy.

And yes - I mean the PVE host.

Udo
 

Hey Udo, thanks for your time, BTW!

Well, right now with the current configuration on the nodes (we have 4 nodes), I can migrate a VM (configured as part of a cluster, so 2 NICs: 1 public and 1 private) to another node, and the VM is still able to talk with the other VMs in the virtual cluster.

Say I want 4 VMs configured as a cluster:

For all VMs
VM NIC1: public network using vmbr0 (eth0, replicated on all physical nodes. This NIC is plugged into a managed Cisco switch.)
VM NIC2: private network using vmbr10 (eth1.10, replicated on all physical nodes. This NIC is plugged into an unmanaged Linksys switch.)

For all nodes
Each node has its eth1 configured with a private IP address (right now it's 10.10.1.1 to 10.10.1.4 for the 4 nodes). So each physical node can ping the others, and I use this network to let the VMs talk to each other even if they are not hosted on the same physical node.

So yes, right now it works. My concern now: I need to make the VMs talk, over the private network, with a physical server outside of the Proxmox cluster. The physical server is connected to the unmanaged switch only.

The problem could be solved by installing Proxmox VE on the physical server and deploying the database server as a KVM guest using the vmbr10 network. But I don't want to do that; I want the DB on bare metal.
 
Hi Simon,
what I wanted to say with my earlier posts: you can do the same without VLAN tagging, with a "real" vmbr10 -> eth1.

Udo

Oh!... I tried your suggestion and it doesn't work; I can't bring the interfaces up. Hey, I'm short on time now - can you help me with that tomorrow?
 
OK, I see, it's late for you now! Do you prefer to talk on Skype or something? Then, if we can fix it, I'll write a debrief about the problem -> solution.
Hi Simon,
sorry, I don't use Skype. But there is an IRC channel, ##proxmox ...

But I think I have said everything: don't use 802.1q (on any of the hosts) with an unmanaged switch. Then your VMs will be able to communicate with your hosts on the LAN.

Udo
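Put concretely, that advice amounts to replicating an untagged variant of the config on every node, with only the vmbr10 address changing per node. A sketch (interface names taken from the thread; the `inet manual` idiom matches what the hosts already use for eth0):

```
auto eth1                 # LAN -> unmanaged switch; no address of its own
iface eth1 inet manual

auto vmbr10               # carries the node's LAN address
iface vmbr10 inet static
    address  10.10.1.1    # 10.10.1.2 / .3 / .4 on the other nodes
    netmask  255.0.0.0
    bridge_ports eth1     # plain eth1, not eth1.10 -- no 802.1q tags on the wire
    bridge_stp off
    bridge_fd 0
```

With this in place, the VMs on vmbr10, the nodes, and the database server on the unmanaged switch all exchange plain untagged Ethernet frames.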
 

Hi Udo, it's always a pleasure to talk to you.

I very much appreciate your generous help, which is very important for me. I want to ask a few questions:

I want to configure a bond and a bridge over two NICs for use by my VMs (KVM) on my LAN with one unmanaged switch.
Please see this link (image), and ignore the part that shows vmbr0, because I don't have 3 NICs, only two:
http://forum.proxmox.com/attachment.php?attachmentid=1043&d=1341778945

Then the questions, according to this image:
1- eth0 and eth1 must have Autostart = no, right?
2- bond0 must have Autostart = yes, right?
3- vmbr1 must have Autostart = yes, right?
4- If my two NICs are different brands but the same speed, will a bond (rr, alb, or tlb) be a problem?
(Note: I think that "rr" is only for use with a crossover cable, for example with DRBD.)

And a last question:
5- In your example above, is it necessary to autostart eth1? Because in a clean installation from the Proxmox VE 2.x ISO installer, I see that eth0 is not set to autostart, and it works well for me.
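For reference, the bond-plus-bridge arrangement that image describes would look roughly like the stanza below in /etc/network/interfaces (the interface names, the address, and the choice of balance-alb are placeholders for illustration; balance-alb and balance-tlb need no cooperation from the switch, which matters with unmanaged gear):

```
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode balance-alb

auto vmbr1                 # bridge the VMs attach to
iface vmbr1 inet static
    address  192.168.1.2
    netmask  255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```

The physical NICs and the bond carry no address of their own; the bridge holds the IP, as with vmbr10 earlier in the thread.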
 
