Network setup

Hamclock

New Member
Jul 26, 2014
I'm trying to put together a setup where I have a pfSense firewall as a VM on a Proxmox node, and all other VMs on that same node communicate with the outside world through that firewall VM. The firewall should be binding to all the IP addresses available to the hypervisor (that part is mostly irrelevant to my problem).

I've got pfSense installed, and it's able to distribute local IPs to other VMs via DHCP. However, I'm unable to get the firewall to bind its WAN interface to any IP addresses provided by the data center. I've mostly ruled out a firewall misconfiguration, and I'm pretty sure I've followed all the data center's instructions for using an IP range, but I still can't ping ANYTHING from the pfSense VM, which leads me to believe I've configured something network-related in Proxmox incorrectly.

Here's the hypervisor's network tab:
[screenshot: JvS6u0B.png]


And the firewall VM's hardware config:

[screenshot: AtBuVYR.png]


According to the data center, in order to use an IP in the block I'm trying to use, the machine's MAC address needs to be 02:00:00:FF:4C:0E. I've got that set in Proxmox and in the firewall. Here's the firewall's WAN configuration:
[screenshot: VrkgOji.png]


And the relevant portion of ifconfig on the firewall:
[screenshot: cYLUMLa.png]


According to the data center, these are the network settings I need to use:
IP: Fail Over IP
Netmask: 255.255.255.255
Broadcast: Fail Over IP
Gateway: Main IP of the server ending in 254.
"Fail Over IP" is an address in the IP block I was allocated (192.99.198.148/30), and "Main IP of the server" is 192.99.10.135, meaning the gateway should be 192.99.10.254.
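On a Linux guest, this style of setup (a /32 address with a gateway outside its subnet) normally only works if you first add an explicit on-link route to the gateway. A minimal sketch of the idea using the addresses above, assuming the WAN interface is called eth0 (pfSense is FreeBSD-based and expresses this differently in its GUI, but the routing logic is the same):

```
# Bring up the failover IP as a /32 (netmask 255.255.255.255)
ip addr add 192.99.198.148/32 dev eth0

# Tell the kernel the gateway is reachable directly on this link,
# even though it's outside the address's subnet
ip route add 192.99.10.254 dev eth0

# Then route everything through it
ip route add default via 192.99.10.254
```

Without that middle route, the kernel rejects the default gateway as unreachable, which is one common failure mode with this kind of data-center allocation.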

All of these settings look correct to me (although I'm certainly no expert). However, the firewall VM can't ping the gateway IP or any Internet IPs. The hypervisor can ping both. Is there something I'm misunderstanding about how network devices work in Proxmox?
 
According to the data center, these are the network settings I need to use:

"Fail Over IP" is an address in the IP block I was allocated (192.99.198.148/30), and "Main IP of the server" is 192.99.10.135, meaning the gateway should be 192.99.10.254.

All of these settings look correct to me (although I'm certainly no expert). However, the firewall VM can't ping the gateway IP or any Internet IPs. The hypervisor can ping both. Is there something I'm misunderstanding about how network devices work in Proxmox?

Nope.
I'd guess the gateway should be 192.99.198.150, as that's the only usable address left.

192.99.198.151 will be the broadcast

I hope it helps.

If that doesn't work, try running a brand-new VM with a simple network setup and test from there.
 
I've been told explicitly that I should be using .10.254 as my gateway, not .198.254:
the actual gateway that you should be using would be : 192.99.10.254
It's a little strange that the gateway would be outside the address' subnet, but that's what they told me.

Anyway, I tried setting up a new, clean VM running Ubuntu 14.04 Desktop and I seem to be having the same problem:
[screenshot: Ce8Enqq.png]

I tried with .148 and .149, with a netmask of both 255.255.255.252 and 255.255.255.255, and with gateways of .10.254, .198.250, and .198.254, and just about every combination of those variables, all with the same symptoms.
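For reference, the combination the data center describes can be expressed in the Ubuntu 14.04 test VM's /etc/network/interfaces like this. This is a sketch under the assumption that the VM's interface is eth0; the post-up routes are what make the out-of-subnet gateway reachable at all:

```
auto eth0
iface eth0 inet static
    address 192.99.198.148
    netmask 255.255.255.255
    broadcast 192.99.198.148
    # Gateway is outside the /32, so it must be added as an
    # on-link route before the default route will be accepted
    post-up ip route add 192.99.10.254 dev eth0
    post-up ip route add default via 192.99.10.254
    pre-down ip route del default via 192.99.10.254
    pre-down ip route del 192.99.10.254 dev eth0
```

If the plain `gateway 192.99.10.254` line was used instead, ifup would silently fail to install the default route, which would produce exactly these symptoms.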
 
I've been told explicitly that I should be using .10.254 as my gateway, not .198.254:
It's a little strange that the gateway would be outside the address' subnet, but that's what they told me.
Not just strange, it's not working. It might just be a typo. I think you should contact them to clarify.

I tried with .148 and .149, with a netmask of both 255.255.255.252 and 255.255.255.255, and with gateways of .10.254, .198.250, and .198.254, and just about every combination of those variables, all with the same symptoms.

If you don't mind one more try: set up a laptop with your public IP and try to connect.

In my experience, the most complicated-looking problems were often caused by nothing more than a cable not sitting properly in its socket. But when several factors interfere with one another, they can form a "Gordian Knot" with no apparent way to untangle it. The only way out was to go back to basics.
 
It might just be a typo. I think you should contact them to clarify.

They're quite sure that the gateway I should be using with 192.99.198.148/30 is 192.99.10.254.

set up a laptop with your public IP and try to connect.

This is a rented server in a data center I don't have physical access to, so that's not an option. However, I'm using vmbr0 which is bridged to eth0, and on the hypervisor I can use that interface to ping out properly, so I don't think it's a simple connectivity issue.

Since I'm new to Proxmox and wasn't 100% sure if it's possible to put multiple IPs on a single bridge, I tried creating a "vmbr2" that connects using the settings I was given by the datacenter. However, I can't get it to become "active," and when I try booting a VM associated with it I get the error "bridge 'vmbr2' does not exist":
[screenshot: SWHMTRd.png]


Is this an approach I should continue to pursue? vmbr0 is already bound to my server's "main" IP (which I don't want VMs using). Should I be able to bind additional IPs on the same interface, or should I need to create a new bridge for each IP I want to add? That doesn't seem right to me, but I've just about exhausted all the "sane" options I can think of.
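For what it's worth, an extra bridge is usually defined in /etc/network/interfaces on the hypervisor, and the "bridge 'vmbr2' does not exist" error often just means the bridge was defined but never brought up. A sketch of what that stanza might look like, assuming vmbr2 should have no physical port of its own (eth0 is already enslaved to vmbr0 and can't be enslaved twice):

```
# /etc/network/interfaces on the Proxmox host
auto vmbr2
iface vmbr2 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
```

After saving, `ifup vmbr2` (or a reboot) should bring it up and make it selectable for VMs.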
 
They're quite sure that the gateway I should be using with 192.99.198.148/30 is 192.99.10.254.

This is a rented server in a data center I don't have physical access to, so that's not an option. However, I'm using vmbr0 which is bridged to eth0, and on the hypervisor I can use that interface to ping out properly, so I don't think it's a simple connectivity issue.


You can try bridge aliases instead:

~# ifconfig vmbr0:0 192.99.198.149 netmask 255.255.255.252 && ping 192.99.198.150

or

~# ifconfig vmbr0:0 192.99.198.150 netmask 255.255.255.252 && ping 192.99.198.149

Also ping and traceroute 192.99.198.149 and 192.99.198.150 from outside the datacenter. You can use tcpdump to check traffic at your end.
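If it helps, the tcpdump check might look something like this on the hypervisor, watching ARP and ping traffic cross the bridge while you ping from inside the VM:

```
# On the Proxmox host: show link-level headers (-e) for ARP and ICMP
# crossing the bridge, without resolving names (-n)
tcpdump -eni vmbr0 arp or icmp

# If the VM's ARP requests appear here but no replies ever come back,
# the problem is upstream of the host, not inside Proxmox.
```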

From the screenshot I can see you've got the public address 192.99.10.135 with its gateway 192.99.10.254, plus those 192.99.198.148/30 addresses. AFAIK a host inside 192.99.198.148/30 will never know how to reach 192.99.10.254 unless you specify a default gateway (or an explicit route) reachable from that address range.

If that doesn't work, I really have no idea what's going on there. And I'm sure it's not Proxmox-related: Proxmox is just based on the Debian GNU/Linux distro, and TCP/IP networking works the same across all Unix-based systems. Please contact your datacenter support and quote all your test results.
 
Here's another problem: I can't even use the server's original IP. I spun up a stock Ubuntu 14.04 live image, pointed its network interface at vmbr0 (which was already configured when I first fired up Proxmox), and used the same settings as vmbr0 uses in the screenshot I posted above, and that VM still can't ping anything. It also can't get a DHCP lease, not that I expected that to work. These exact IP settings work on the hypervisor, just not on VMs. Unless there's something that prevents the main management IP from being used by VMs, I assume this should work.

Basically, I can't get any VMs to connect to the Internet, period. I could set up a plain headless Debian system with dual-stack IPv4/v6 static IPs using nothing but echo and cat, but no matter what I try I can't get a VM running under Proxmox to talk to anything outside its box.
 
Code:
~# pveversion -v
proxmox-ve-2.6.32: 3.2-129 (running kernel: 2.6.32-30-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Code:
~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).


# The loopback network interface
auto lo
iface lo inet loopback


# for Routing
auto vmbr1
iface vmbr1 inet manual
    post-up /etc/pve/kvm-networking.sh
    bridge_ports dummy0
    bridge_stp off
    bridge_fd 0




# vmbr0: Bridging. Make sure to use only MAC adresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
    address 192.99.10.135
    netmask 255.255.255.0
    network 192.99.10.0
    broadcast 192.99.10.255
    gateway 192.99.10.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0


iface vmbr0 inet6 static
    address 2607:5300:60:3B87::
    netmask 64
    post-up /sbin/ip -f inet6 route add 2607:5300:60:3Bff:ff:ff:ff:ff dev vmbr0
    post-up /sbin/ip -f inet6 route add default via 2607:5300:60:3Bff:ff:ff:ff:ff
    pre-down /sbin/ip -f inet6 route del default via 2607:5300:60:3Bff:ff:ff:ff:ff
    pre-down /sbin/ip -f inet6 route del 2607:5300:60:3Bff:ff:ff:ff:ff dev vmbr0

Code:
~# head -n -0 /etc/pve/qemu-server/*.conf
 ==> /etc/pve/qemu-server/100.conf <==
bootdisk: ide0
cores: 1
ide0: local:100/vm-100-disk-1.qcow2,format=qcow2,size=32G
ide2: none,media=cdrom
memory: 512
name: pfs.sys.ununoctium.net
net0: e1000=02:00:00:FF:4C:0E,bridge=vmbr0
net1: e1000=AA:CB:AE:34:7D:E9,bridge=vmbr1
ostype: l26
sockets: 1


==> /etc/pve/qemu-server/101.conf <==
bootdisk: ide0
cores: 2
ide0: local:101/vm-101-disk-1.qcow2,size=500G
ide2: local:iso/ubuntu-14.04-server-amd64.iso,media=cdrom
memory: 1024
name: gwn.sys.ununoctium.net
net0: e1000=D2:9C:38:A2:CB:ED,bridge=vmbr1
ostype: l26
sockets: 1


==> /etc/pve/qemu-server/102.conf <==
bootdisk: ide0
cores: 1
ide0: local:102/vm-102-disk-1.qcow2,format=qcow2,size=32G
ide2: local:iso/ubuntu-14.04.1-desktop-amd64.iso,media=cdrom
memory: 2048
name: dsads
net0: e1000=26:0F:60:14:FD:9C,bridge=vmbr1
ostype: l26
sockets: 1


==> /etc/pve/qemu-server/103.conf <==
balloon: 2
bootdisk: ide0
cores: 1
ide0: local:103/vm-103-disk-1.qcow2,size=32G
ide2: local:iso/ubuntu-14.04.1-desktop-amd64.iso,media=cdrom
memory: 2048
name: asdasd
net0: e1000=02:00:00:FF:4C:0E,bridge=vmbr0
ostype: l26
sockets: 1
 
Code:
/etc/pve/kvm-networking.sh: No such file or directory

And on vm 103 (it's a pain to get plain text off this machine):
[screenshot: eHwVWUu.png]
 
Thanks for all of that.

Let me summarize:

1) You've got the public IP 192.99.10.135 and its gateway 192.99.10.254
2) It works from Proxmox (the hypervisor level), which confirms the IP, gateway, and networking config are correct
3) It doesn't work if you assign 192.99.10.135 to a VM; the VM can't get out to the Internet
4) However, you can ping the VMs, and each VM can ping the others inside the vmbr0 intranet, which confirms the bridge is working

Is that correct?

If that's the case, my guess is a network switch issue: somehow it's blocking the traffic from getting through. Check MAC filtering, the ARP table, MTU, etc.

Do you have access to the switch? Is it managed or dumb?
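One concrete check along those lines, run from inside a VM: probe the gateway at layer 2 and, if the datacenter permits it, announce your own address so any stale ARP entry upstream gets refreshed. A sketch assuming the iputils version of arping is installed and the VM's interface is eth0:

```
# Ask who owns the gateway IP -- tests L2 reachability without routing
arping -c 3 -I eth0 192.99.10.254

# Send unsolicited (gratuitous) ARP announcing our own IP/MAC,
# to nudge any stale ARP entry in the upstream router
arping -c 3 -U -I eth0 192.99.198.148
```

Note there are two common arping implementations with different flags; `-U` (unsolicited ARP) is the iputils one.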
 
After much back-and-forth, it turns out the problem was... ARP tables in a router I don't control!

It turns out this problem was completely not Proxmox-related, but thanks for helping me troubleshoot and convince myself that that was really the case and I needed to lean on the data center a little harder.
 
After much back-and-forth, it turns out the problem was... ARP tables in a router I don't control!

It turns out this problem was completely not Proxmox-related, but thanks for helping me troubleshoot and convince myself that that was really the case and I needed to lean on the data center a little harder.

No worries. Nice to know that you won in the end. :-) The community can't fix all your problems, but it can help you guess right. Cheers.