Hi there,
No doubt this has been asked loads of times, but I can't find any answers via the search function. I have paid for support and will use it if I need to, but since the forum is open and pretty helpful, I was hoping someone here could help, and the answer might benefit others.
I have two servers at OVH, both of which have a vRack interface. Each physical machine has two interfaces: a primary one which is public, and a second one which is connected through the vRack.
When I push my IP block through the vRack (by enabling it in the vRack services), the running VPSs are unable to receive IP traffic on their public IPs. These are running on OpenVZ.
When I remove the IP block from the vRack, they work. For example: ve 100 has an external IP, and while the block is outside the vRack that IP is routable. As soon as I put it into the vRack, it isn't routable, despite running an arping.
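For clarity, the arping I'm referring to is a gratuitous ARP announcement along these lines (a sketch only; the interface name and the announced IP are placeholders for my actual values):
Code:
# announce the failover IP with unsolicited ARP out of the
# vRack-facing NIC (iputils arping; eth1 and the IP are assumptions)
arping -U -c 3 -I eth1 <failover-IP>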
OVH, in their infinite wisdom, tell me this is because the vmbrX interface isn't going via the vRack, so I have made some changes.
Interfaces:
Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr1
iface vmbr1 inet manual
        bridge_ports dummy0
        bridge_stp off
        bridge_fd 0
        post-up /etc/pve/kvm-networking.sh

auto vmbr0
iface vmbr0 inet static
        address 5.x.x.x
        netmask 255.255.255.0
        gateway 5.x.x.254
        broadcast 5.x.x.255
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        network 5.x.x.0

auto vmbr2
iface vmbr2 inet static
        address 192.168.100.1
        netmask 255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
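If what OVH mean is that the bridge the VEs sit behind has to be attached to the vRack NIC rather than a dummy device, then my best guess at the change would be something like this (untested sketch; it assumes eth1 is the vRack port, and eth1 would have to come out of vmbr2 first, since a NIC can only be enslaved to one bridge):
Code:
# untested sketch: attach the VE-facing bridge to the vRack NIC
# (eth1) instead of dummy0; eth1 must be removed from vmbr2 first
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
        post-up /etc/pve/kvm-networking.sh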
I have also ensured that /etc/vz/vz.conf has:
Code:
# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=all
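Since venet relies on the host answering ARP for the container's IP, I've also been wondering whether proxy ARP needs to be enabled on the vRack-facing NIC. This is a guess rather than a confirmed fix, and eth1 being the vRack port is an assumption:
Code:
# guess, not a confirmed fix: let the host answer ARP for the
# VE's IP on the vRack-facing NIC (assumes eth1 is the vRack port)
sysctl -w net.ipv4.conf.eth1.proxy_arp=1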
Naturally, the node hosting the VE can ping and SSH to it, but the outside world cannot SSH to it or ping it. Am I missing something drastic here? Or do I need to tweak the VZ to listen on vmbr1?
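In case it helps anyone narrow this down, this is how I've been checking whether ARP requests for the VE's IP even arrive on the vRack side, and whether the node answers them (again assuming eth1 is the vRack NIC):
Code:
# watch ARP on the vRack port while pinging the VE's public IP
# from outside; no replies from the node = the IP is unannounced
tcpdump -n -e -i eth1 arp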
Any advice is most appreciated,
Carl