Changes in OVH IP Routing to VMs

Derzeroth

Oct 3, 2017
In the past, I configured virtual machines on Proxmox using the gateway ending in .254 of the failover IP.

Currently, the tutorial says that I should use the gateway ending in .254 of the dedicated server's (host machine's) IP.

However, I am not able to get internet access to the virtual machines either way.

The machines I created in the past all have internet access. However, I cannot get internet on any new machine created after this OVH change. What should I do?
Thank you.
 
The gateway address needs to be a device that can route network traffic - usually your ISP router if the VMs are on the same subnet as the router. Can the Proxmox host reach the internet?
 
Sorry, just realised you are using cloud hosting.

According to the guides, the gateway is the penultimate address in the IP block allocated to you.

So if you were allocated, say, 100.100.100.9/29, this would give you a network range of 100.100.100.8 to 100.100.100.15.

Within this range, .8 would be the reserved network address, .15 the reserved broadcast address, and .14 the network gateway.
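To make the arithmetic concrete, the network, broadcast, and gateway for that example block can be computed with plain shell arithmetic (a sketch; the addresses are the hypothetical ones above):

```shell
# Derive network/broadcast/gateway for 100.100.100.9/29.
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }
int_to_ip() { echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"; }

addr=$(ip_to_int 100.100.100.9)
prefix=29
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
network=$(( addr & mask ))                      # lowest address: reserved
broadcast=$(( network | (~mask & 0xFFFFFFFF) )) # highest address: reserved
gateway=$(( broadcast - 1 ))                    # penultimate address

echo "network:   $(int_to_ip $network)"     # 100.100.100.8
echo "broadcast: $(int_to_ip $broadcast)"   # 100.100.100.15
echo "gateway:   $(int_to_ip $gateway)"     # 100.100.100.14
```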
 

Hello brother, thanks for the reply!

Yes, the Proxmox host can reach the internet!

Here are the steps I took:

1- Created a virtual MAC on failover IP 144.217.158.190
2- The virtual MAC is 02:00:00:70:f0:32
3- Created an LXC container using CentOS; the template is from TurnKey, provided natively by Proxmox: centos-7-default_20171212_amd64.tar.xz
4- On Proxmox, the container has these settings under Network Options:
Mac address: 02:00:00:70:f0:32
IPv4: 144.217.158.190/32 (CIDR is required)
Gateway: 149.56.16.254 (the host machine's IP is 149.56.16.78)
Set as static (the other option is DHCP)
DNS domain and servers: use host settings

5- After this, when I run "ping google.com", the message I receive is: "ping: google.com: Name or service not known"
6- When I run "ifconfig -a", the result is: "-bash: ifconfig: command not found"
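On the "ifconfig: command not found" point: minimal CentOS 7 LXC templates usually ship without the net-tools package, so its absence is unrelated to the routing problem. The preinstalled iproute2 equivalents work instead:

```shell
# iproute2 replacements for the old net-tools commands:
ip addr show     # replaces: ifconfig -a
ip route show    # replaces: route -n
ip -s link       # replaces: ifconfig (interface statistics)
# Once networking works, net-tools can be reinstalled if preferred:
# yum install -y net-tools
```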


Now I will follow the tutorial from this page: https://docs.ovh.com/gb/en/dedicated/network-bridging/

I'm following the CentOS 7 tutorial, since I'm using the CentOS 7 LXC container.

7- The network adapter in the container is eth0, so I will change the eth0 network settings in /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/sysconfig/network-scripts/route-eth0

8-
ifcfg-eth0:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes
TYPE=Ethernet
NETMASK=255.255.255.255
IPADDR=144.217.158.190
GATEWAY=149.56.16.254
ARP=yes
HWADDR=02:00:00:70:F0:32


9 -
route-eth0
149.56.16.254 - 255.255.255.255 eth0
NETWORK_GW_VM - 255.255.255.0 eth0
default 149.56.16.254




After this, still no internet, and ifconfig -a still doesn't work.

Then I tried:
# service network stop
# service network start
# service network restart


Still no internet.
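When a static setup like this fails silently, a few read-only commands help narrow down which layer is broken. A sketch, run inside the container, using the addresses from this thread (the pings are commented out since they need the network to cooperate):

```shell
# Layer-by-layer checks from inside the container:
ip addr show             # is 144.217.158.190/32 actually assigned to eth0?
ip route show            # is there a default route via 149.56.16.254?
ip neigh show            # did ARP resolve the gateway's MAC address?
# ping -c 3 149.56.16.254   # layer 3: can we reach the gateway at all?
# ping -c 3 8.8.8.8         # internet routing, without DNS involved
# ping -c 3 google.com      # adds DNS resolution on top
```

If the gateway never answers ARP, the problem is on the host/bridge side, not inside the container.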

----------------------------------------------------------
ip addr command:

[root@centosvmtest network-scripts]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 02:00:00:70:f0:32 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 144.217.158.190/32 brd 144.217.158.190 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ff:fe70:f032/64 scope link
valid_lft forever preferred_lft forever

----------------------------------------------------------------
ip -s link command:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
RX: bytes packets errors dropped overrun mcast
2450 31 0 0 0 0
TX: bytes packets errors dropped carrier collsns
2450 31 0 0 0 0
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
link/ether 02:00:00:70:f0:32 brd ff:ff:ff:ff:ff:ff link-netnsid 0
RX: bytes packets errors dropped overrun mcast
839504 10268 0 0 0 0
TX: bytes packets errors dropped carrier collsns
3984 66 0 0 0 0



---------------------------------------------------------------------

On Proxmox:
ifconfig -a command:




root@dzhostserver01:~# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether a4:bf:01:08:41:52 txqueuelen 1000 (Ethernet)
RX packets 3179549 bytes 317456378 (302.7 MiB)
RX errors 0 dropped 0 overruns 3 frame 0
TX packets 106467 bytes 53271927 (50.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether a4:bf:01:08:41:53 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 65666 bytes 20638266 (19.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 65666 bytes 20638266 (19.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth100i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether fe:c4:b3:bc:84:45 txqueuelen 1000 (Ethernet)
RX packets 9 bytes 518 (518.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 51124 bytes 4203154 (4.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth100i1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether fe:c5:27:e9:46:fe txqueuelen 1000 (Ethernet)
RX packets 20936 bytes 8208636 (7.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 71598 bytes 7787777 (7.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth101i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether fe:33:cb:bc:05:26 txqueuelen 1000 (Ethernet)
RX packets 4013 bytes 215779 (210.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 55500 bytes 4484366 (4.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth103i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether fe:b4:5b:a3:16:5c txqueuelen 1000 (Ethernet)
RX packets 67 bytes 4054 (3.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10609 bytes 867786 (847.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 149.56.16.78 netmask 255.255.255.0 broadcast 149.56.16.255
inet6 2607:5300:61:14e:: prefixlen 64 scopeid 0x0<global>
inet6 fe80::a6bf:1ff:fe08:4152 prefixlen 64 scopeid 0x20<link>
ether a4:bf:01:08:41:52 txqueuelen 1000 (Ethernet)
RX packets 3148832 bytes 269041458 (256.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 72108 bytes 44297596 (42.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vmbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::789a:95ff:fea7:d93d prefixlen 64 scopeid 0x20<link>
ether 7a:9a:95:a7:d9:3d txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 746 (746.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


------------------------------------------------------------

Proxmox Info:
Proxmox Virtual Environment 5.2-10

It's updated, but the original template was from OVH



--------------------------------------------------------

If you can help me with that, I'll be grateful.
 

I'm using a dedicated server - the SP-32 plan on OVH.
 
Ah, OK - I think I understand.

OVH are providing a point-to-point link for your public IP - hence the /32 subnet mask.

In this setup, only one device on your server can reach the internet - as you've found.

To make this work so your VMs can reach the internet, you have to enable routing on the Proxmox host and make the Proxmox host the default gateway for the VMs. This is discussed here:
https://pve.proxmox.com/wiki/Network_Configuration

Your VMs will not be individually reachable from the internet unless you set up NAT rules.
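To illustrate the NAT point: a service running inside a VM behind a private subnet can still be exposed with a DNAT rule on the host. This is only a sketch - the private subnet (10.10.10.0/24), the VM address, and the port are made-up examples, and the rule must run as root on the Proxmox host:

```shell
# Hypothetical example: forward inbound TCP 8080 on the host's public
# bridge (vmbr0 here) to a web server in a VM at 10.10.10.10:80.
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 \
         -j DNAT --to-destination 10.10.10.10:80
# The return path still needs MASQUERADE/SNAT and net.ipv4.ip_forward=1.
```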
 

Thanks for the reply!

I have two virtual machines with internet access within Proxmox. However, they were created before OVH changed the routing.
I will try to perform the procedures in the link you sent.

Thanks!
 

On Network Configuration, we have two options:

Default Configuration using a Bridge

and

Routed Configuration


Can you tell me which option would be the most appropriate for me?


This is my interfaces file from Proxmox host:

[screenshot of the /etc/network/interfaces file]


Thanks!
 
I think you'd want the routed configuration, with masquerading (NAT) for the VMs:

Code:
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address  149.56.16.78
        netmask  255.255.255.0
        network  149.56.16.0
        broadcast 149.56.16.255     
        gateway  149.56.16.254

auto eth1
#failover IP
iface eth1 inet static
        address 144.217.158.190
        netmask 255.255.255.255

auto vmbr0
#private sub network - eg 10.10.10.xx
iface vmbr0 inet static
        address  10.10.10.254
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE

Then assign 10.10.10.xx addresses to your VMs and use 10.10.10.254 as the gateway.
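For the containers themselves, the static config mirrors the bridge above. A minimal sketch of /etc/sysconfig/network-scripts/ifcfg-eth0 inside a CentOS container, assuming it gets 10.10.10.10 (any free 10.10.10.xx works; the DNS server is just an example public resolver):

```shell
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.10.10.10        # example address in the private subnet
NETMASK=255.255.255.0
GATEWAY=10.10.10.254      # the vmbr0 address on the Proxmox host
DNS1=8.8.8.8              # example resolver; any reachable one works
```

With Proxmox LXC containers, the same values can simply be entered in the GUI's Network options instead of editing the file by hand.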
 
