Hi,
I have 3 hardware hosts, all configured identically (IPs are .11, .12, .13).
Each host has 1 physical NIC (enp35s0).
Additionally, that NIC carries several subnets via VLANs:
VLAN 4000 -> local private LAN between the Proxmox hosts AND VMs
VLAN 4001 -> a public subnet, also used by VMs
VLAN 4002 -> host Ceph communication
VLAN 4003 -> more VM communication
The MTU on all VLANs is set to 1400 due to provider restrictions.
The setup is bridged, not routed (so the Proxmox iptables rules should not affect it, right?).
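Since the provider caps the MTU at 1400, I verified that the VLAN links actually pass full-size frames with a don't-fragment ping. This is just a sketch of how I checked it; the target .12 is simply the neighbouring node:

```shell
# Max ICMP payload that fits into MTU 1400:
# 1400 - 20 (IPv4 header) - 8 (ICMP header) = 1372 bytes
mtu=1400
payload=$((mtu - 20 - 8))
echo "$payload"   # 1372

# With the DF bit set, this size must go through, while payload+1
# must fail with "Frag needed" (run between hosts, e.g. from .11):
# ping -M do -c 3 -s "$payload" 192.168.128.12
```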
Here is my /etc/network/interfaces:
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface enp35s0 inet manual

auto enp35s0.4000
iface enp35s0.4000 inet manual
        mtu 1400
        vlan-id 4000

auto enp35s0.4001
iface enp35s0.4001 inet manual
        mtu 1400
        vlan-id 4001

auto enp35s0.4002
iface enp35s0.4002 inet static
        address 192.168.0.11/24
        mtu 1400
        vlan-id 4002
#ceph

auto enp35s0.4003
iface enp35s0.4003 inet manual
        mtu 1400
        vlan-id 4003

auto vmbr0
iface vmbr0 inet static
        address 135.X.X.34/26
        gateway 135.X.X.1
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
# Proxmox Host

iface vmbr0 inet6 static
        address 2a01:4f9:4b:3d1d::2/64
        gateway fe80::1

auto vmbr4000
iface vmbr4000 inet static
        address 192.168.128.11/24
        bridge-ports enp35s0.4000
        bridge-stp off
        bridge-fd 0
        mtu 1400
        up route add -net 172.16.10.0 netmask 255.255.255.0 gw 10.0.0.1
        down route del -net 172.16.10.0 netmask 255.255.255.0 gw 10.0.0.1
#Private vSwitch

auto vmbr4001
iface vmbr4001 inet manual
        bridge-ports enp35s0.4001
        bridge-stp off
        bridge-fd 0
        mtu 1400
#Public vSwitch

auto vmbr4003
iface vmbr4003 inet manual
        bridge-ports enp35s0.4003
        bridge-stp off
        bridge-fd 0
        mtu 1400
#pfsync
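One note on the "bridged, so iptables should not apply" assumption above: if the br_netfilter module is loaded and the bridge-nf sysctls are set to 1 (which Proxmox setups commonly have when the PVE firewall is in use), bridged frames ARE passed through the host's iptables/nftables chains. A quick check I ran on each host, as a sketch:

```shell
# If these sysctls report 1, host firewall rules CAN filter
# bridged VM-to-VM traffic even on a pure layer-2 bridge.
for k in net.bridge.bridge-nf-call-iptables \
         net.bridge.bridge-nf-call-ip6tables \
         net.bridge.bridge-nf-call-arptables; do
  printf '%s = %s\n' "$k" \
    "$(sysctl -n "$k" 2>/dev/null || echo 'unset (br_netfilter not loaded)')"
done
```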
Now, in general the stuff works. The Proxmox cluster is up and running, corosync is doing fine, Ceph works.
I have a clustered firewall running as VMs, with a working public and private interface.
I can create VMs that use the virtual firewall as a gateway.
The problem is the following:
VM A and VM B (192.168.128.21) - both configured with bridge=vmbr4000.
Both are fresh CentOS 8 installs.
Both can reach the internet via the virtual firewall (192.168.128.1).
Both can ping each other.
Both can ping the Proxmox hosts (192.168.128.11, .12, .13).
They can SSH into each other.
But if VM B starts a webserver, no one can reach it:
Code:
curl http://192.168.128.21
curl: (7) Failed to connect to 192.168.128.21 port 80: No route to host
But SSH from the same VM to VM B works!
A tcpdump shows me only TCP retransmissions and then ICMP Destination Unreachable (communication administratively filtered).
The VMs do not run any firewall / iptables.
The VMs run on the same Proxmox host.
The result is exactly the same if I use one of the Proxmox hosts as the source.
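For completeness, this is roughly how I checked that no firewall runs inside the VMs. I mention it explicitly because CentOS 8 ships with firewalld enabled by default, and a firewalld REJECT produces exactly this "administratively filtered" ICMP while still permitting ssh:

```shell
# Run inside VM A and VM B (sketch; firewalld is CentOS 8's default
# firewall, backed by nftables):
systemctl is-active firewalld   # expect "inactive" if truly off
firewall-cmd --list-all         # allowed services/ports, if running
nft list ruleset                # any nftables rules present?
iptables -L -n -v               # any legacy iptables rules present?
```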
What the heck is blocking this?