Hello fellow Proxmoxers,
I've been using PVE for a while now, and I have to say I have been pretty pleased with it through the years.
Alas, I am now facing a weird issue that I cannot seem to find a solution to.
PVEVERSION: pve-manager/6.2-10/a20769ed (running kernel: 5.4.44-2-pve)
CURRENT /etc/network/interfaces OF THE PVE HOST:
auto lo
iface lo inet loopback

iface eno1 inet manual

# PVE VLAN
auto eno2
iface eno2 inet static
        address 192.168.12.2/24

auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# Virtual VMs LAN
auto vmbr1
iface vmbr1 inet static
        address 192.168.33.1/24
        gateway 192.168.33.1
        bridge-ports none
        bridge-stp off
        bridge-fd 0
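For completeness: the MASQUERADE rule itself is not persisted in this file. If it were, the usual Proxmox-wiki style would be post-up/post-down hooks on the vmbr1 stanza, roughly like this (a sketch of the standard wiki example, not my literal config):

```
auto vmbr1
iface vmbr1 inet static
        address 192.168.33.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '192.168.33.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.33.0/24' -o vmbr0 -j MASQUERADE
```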
With this setup, the current situation is:
1. eno1 [bridged into vmbr0] is the NIC carrying the public IP of the PVE host itself, for network access
2. eno2 is a dedicated NIC bridged across multiple PVE instances [dedicated ZFS snapshot channel]
3. the vmbr0 bridge is the network that provides internet access, and is also given to those VMs that require a public IP of their own [for IP failovers]
4. vmbr1 is the VM network [for inter-communication among the various virtual machines] and should also be the way some VMs [e.g. database instances] talk to the outside world, via MASQUERADING of connections with source vmbr1 that go out through vmbr0
Points 1 to 3 work fine (and half of point 4): VMs can talk with each other, VMs can ping the vmbr1 address of the PVE host, ZFS replication via eno2 works fine, and eno1 network access is good as well.
What is unusual is that I cannot seem to give the VMs internet access via MASQUERADE out of vmbr0 [or eno1], using this rule in the nat table's POSTROUTING chain:
-A POSTROUTING -s 192.168.33.0/24 -o vmbr0 -j MASQUERADE
iptables -P FORWARD is set to ACCEPT for the sake of testing
net.ipv4.ip_forward in sysctl is set to 1
rp_filter = 0 (all of them)
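To be explicit about what "set" means here, those knobs can be read back straight from procfs (standard paths, no extra tooling needed):

```shell
#!/bin/sh
# ip_forward must be 1 for the host to route between vmbr1 and vmbr0
cat /proc/sys/net/ipv4/ip_forward

# rp_filter should be 0 everywhere; note the kernel effectively applies
# max(conf/all/rp_filter, conf/<iface>/rp_filter) per interface
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
```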
The routing table is [route -n]:

Kernel IP routing table
Destination         Gateway             Genmask          Flags Metric Ref  Use Iface
0.0.0.0             [redacted-gateway]  0.0.0.0          UG    0      0      0 vmbr0
[redacted-wan-net]  0.0.0.0             255.255.255.0    U     0      0      0 vmbr0
192.168.12.0        0.0.0.0             255.255.255.0    U     0      0      0 eno2
192.168.33.0        0.0.0.0             255.255.255.0    U     0      0      0 vmbr1
With ip route:
default via [redacted-gateway] dev vmbr0
[redacted-wan-network] dev vmbr0 proto kernel scope link src [redacted-wan-ip]
192.168.12.0/24 dev eno2 proto kernel scope link src 192.168.12.2
192.168.33.0/24 dev vmbr1 proto kernel scope link src 192.168.33.1
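One cheap sanity check on the routing side (no traffic needed) is to ask the kernel directly for the forwarding decision it would make for the VM's packets; a sketch with my addresses filled in (adjust IFACE/VMIP if reproducing elsewhere):

```shell
#!/bin/sh
IFACE=vmbr1
VMIP=192.168.33.5

# reverse-path + forward lookup as the kernel would perform it for a
# packet from the VM arriving on vmbr1 (this errors if rp_filter or
# routing would reject the flow)
ip route get 8.8.8.8 from "$VMIP" iif "$IFACE" || echo "lookup failed (no $IFACE here?)"

# plain outbound lookup from the host itself, for comparison
ip route get 8.8.8.8 || echo "no route to 8.8.8.8"
```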
A PING from a VM with IP 192.168.33.5 to 8.8.8.8 does not seem to work. During this ping, tcpdump -i vmbr0 [or eno1] gives the following output:
11:14:42.958458 IP proxmox6 > dns.google: ICMP echo request, id 19562, seq 93, length 64
11:14:42.959951 IP dns.google > proxmox6: ICMP echo reply, id 19562, seq 93, length 64
With iptables LOG rules I currently only see this line repeated:
proxmox6 kernel: [73972.733861] IN=vmbr1 OUT=vmbr0 [redacted MACs] SRC=192.168.33.5 DST=8.8.8.8 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=44844 DF PROTO=ICMP TYPE=8 CODE=0 ID=19562 SEQ=179
The iptables -j LOG rules in place are:
-A PREROUTING -p icmp -j LOG
-A POSTROUTING -j LOG
-A FORWARD -j LOG
-A OUTPUT -p icmp -j LOG
So... from tcpdump it seems the packets do come back from the Google DNS server to the vmbr0 interface, and they are being MASQUERADED correctly on the way out... but from there, poof, gone. Also, the iptables -j LOG output never shows a returning packet...
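One suspect I have not fully ruled out is bridge netfilter: with br_netfilter loaded, bridged packets traverse iptables too, which is known to interact with NAT on the host. A quick check, roughly (the sysctl only exists while the module is loaded; the conntrack tool is optional and may not be installed):

```shell
#!/bin/sh
# bridge-nf sysctls exist only while br_netfilter is loaded
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    echo "bridge-nf-call-iptables = $(cat /proc/sys/net/bridge/bridge-nf-call-iptables)"
else
    echo "br_netfilter not loaded"
fi

# if conntrack is installed, the ping should show up with its SNAT
# mapping while it runs; no entry would mean NAT never tracked the flow
conntrack -L -p icmp 2>/dev/null | grep 8.8.8.8 || true
```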
Last but not least, the same thing happens if I ping from the PVE host itself via ping -I vmbr1 (i.e. from 192.168.33.1, the gateway address for the VMs, which is assigned to the PVE host itself).
I am open to any idea, really. I have been working on this for two days and cannot figure out what is going on. Do I need an extra routing table for the subnet? But the kernel already seems to know that packets for 192.168.33.0/24 have to go via vmbr1...
Thanks in advance for any responses.