[SOLVED] systemctl status networking.service down / network works

Hello,

I'm getting a FAILED status on my networking service, and I'm not sure what's causing the problem. All networks are working correctly and are pingable, and the configuration looks correct.

[Screenshot: systemctl status networking.service showing the failed state]

Here is my /etc/network/interfaces:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.48.20
netmask 255.255.255.0
gateway 192.168.48.1
bridge_ports eno1
bridge_stp off
bridge_fd 0

iface enp8s0f0 inet manual

iface enp8s0f1 inet manual

auto eno2
iface eno2 inet static
address 192.168.49.20
netmask 255.255.255.0
gateway 192.168.49.1

auto enp10s0f0
iface enp10s0f0 inet static
address 192.168.50.20
netmask 255.255.255.0
gateway 192.168.50.1


auto enp10s0f1
iface enp10s0f1 inet static
address 192.168.51.20
netmask 255.255.255.0
gateway 192.168.51.1
 
I'm also new to Proxmox and might be wrong, but maybe you can't have multiple gateways on different interfaces, or multiple gateways at all, without policy-based routing?

Delete all the gateways except the one on vmbr0.
 
I'm able to ping the other servers.

root@vmhost01:~# ping 192.168.51.30
PING 192.168.51.30 (192.168.51.30) 56(84) bytes of data.
64 bytes from 192.168.51.30: icmp_seq=1 ttl=64 time=0.192 ms
64 bytes from 192.168.51.30: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 192.168.51.30: icmp_seq=3 ttl=64 time=0.079 ms
--- 192.168.51.30 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 32ms
rtt min/avg/max/mdev = 0.079/0.117/0.192/0.053 ms
root@vmhost01:~# ping 192.168.50.30
PING 192.168.50.30 (192.168.50.30) 56(84) bytes of data.
64 bytes from 192.168.50.30: icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from 192.168.50.30: icmp_seq=2 ttl=64 time=0.171 ms
64 bytes from 192.168.50.30: icmp_seq=3 ttl=64 time=0.111 ms
64 bytes from 192.168.50.30: icmp_seq=4 ttl=64 time=0.078 ms
--- 192.168.50.30 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 57ms
rtt min/avg/max/mdev = 0.078/0.118/0.171/0.035 ms
root@vmhost01:~# ping 192.168.49.30
PING 192.168.49.30 (192.168.49.30) 56(84) bytes of data.
64 bytes from 192.168.49.30: icmp_seq=1 ttl=64 time=0.145 ms
64 bytes from 192.168.49.30: icmp_seq=2 ttl=64 time=0.147 ms
64 bytes from 192.168.49.30: icmp_seq=3 ttl=64 time=0.155 ms
64 bytes from 192.168.49.30: icmp_seq=4 ttl=64 time=0.145 ms
 
You can't have multiple default gateways in /etc/network/interfaces.

For each interface with a gateway it'll try to run "ip route add default via x.x.x.x", and you can only have one route like that in the kernel, so it fails on the other interfaces.
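To illustrate, this is roughly what happens under the hood (my own sketch, assuming ifupdown issues plain "ip route add" calls; the error text is what iproute2 typically prints):

# the first default route installs fine
ip route add default via 192.168.48.1 dev vmbr0

# any further plain default route is rejected by the kernel
ip route add default via 192.168.49.1 dev eno2
# -> RTNETLINK answers: File exists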

It's possible to have a multipath (ECMP) default gateway by manually adding "post-up ip route add default proto static scope global nexthop via 192.168.49.1 dev eno2 weight 1 nexthop via 192.168.50.1 dev enp10s0f0 weight 1 ...." on one interface.

(If the weights are the same you'll get load balancing.)
But your router needs to handle this too for the return packets.
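Written out as a full stanza it would look something like this (just a sketch using the addresses from your config, untested):

auto eno2
iface eno2 inet static
address 192.168.49.20
netmask 255.255.255.0
# no "gateway" lines anywhere; install one multipath default route instead
post-up ip route add default proto static scope global nexthop via 192.168.49.1 dev eno2 weight 1 nexthop via 192.168.50.1 dev enp10s0f0 weight 1 nexthop via 192.168.51.1 dev enp10s0f1 weight 1
post-down ip route del default proto static scope global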
 
I have hardware switches and a firewall handling the VLANs between the 3 servers. This is configured correctly.

Would configuring Linux VLANs within my network configuration file solve the failed start?

I added the VLAN tags to my configuration file, but I'm unfamiliar with how to do this correctly or whether it will solve my problem. I'm also unfamiliar with post-up and policy routing.

I was recommended the link below, but I'm unsure whether it's what I'll need, as this is a private network.

http://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.rpdb.multiple-links.html
 
What do you want to achieve with your 4 network interfaces? Aggregate them for more bandwidth? If yes, you need to create a bond.


About VLANs: does your hardware switch force a specific VLAN for all servers? (On Cisco this is an "access" port.) In that case you don't need to set up any VLANs on Proxmox.

If your hardware switch is configured to allow multiple VLANs through to the Proxmox nodes (on Cisco, a "trunk"), then you need to configure VLANs on the VMs, for example.
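For the trunk case, a tagged interface in /etc/network/interfaces would look roughly like this (a sketch only; VLAN ID 50 is just an example, and it requires the vlan package / 8021q module):

auto eno1.50
iface eno1.50 inet static
address 192.168.50.20
netmask 255.255.255.0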
 
I'm trying to do the below. I'm using Fortinet firewalls/switches. Each subnet has its own VLAN on the switch, assigned to the ports as a native VLAN.

Why is networking.service failing at boot? Is it routing all traffic through one interface?

auto eno2
iface eno2 inet static
address 192.168.49.20    # Corosync cluster network
netmask 255.255.255.0
gateway 192.168.49.1

auto enp10s0f0
iface enp10s0f0 inet static
address 192.168.50.20    # Ceph public network
netmask 255.255.255.0
gateway 192.168.50.1

auto enp10s0f1
iface enp10s0f1 inet static
address 192.168.51.20    # Ceph cluster network
netmask 255.255.255.0
gateway 192.168.51.1
 
One thing which is problematic is the 3 gateways...
The gateway is considered for the default route, and you cannot (simply) have 3 default routes.

If all your Ceph communication happens within 192.168.50.0/24 and 192.168.51.0/24 respectively, just delete those gateway lines (then all your default traffic will go via 192.168.49.1).

I hope this helps!
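Applied to your file, those two stanzas would then simply be (only the gateway lines removed, everything else unchanged):

auto enp10s0f0
iface enp10s0f0 inet static
address 192.168.50.20
netmask 255.255.255.0

auto enp10s0f1
iface enp10s0f1 inet static
address 192.168.51.20
netmask 255.255.255.0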
 
Would configuring additional routing tables for my 3 additional default gateways solve my problem? See the link below.

https://www.thomas-krenn.com/en/wiki/Two_Default_Gateways_on_One_System
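If I understand that page correctly, it would mean something like this per interface (a sketch with my addresses; "ceph_pub" is a table name I'd have to add to /etc/iproute2/rt_tables first):

# route Ceph-public traffic via its own table
ip route add 192.168.50.0/24 dev enp10s0f0 src 192.168.50.20 table ceph_pub
ip route add default via 192.168.50.1 table ceph_pub
# use that table for traffic from/to this address
ip rule add from 192.168.50.20/32 table ceph_pub
ip rule add to 192.168.50.20/32 table ceph_pub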
 
Why would you want to do that?
Which traffic comes into your Ceph network from the outside and needs to leave via the same route?

IMHO this makes the setup quite a bit more complicated, and until now I have seldom seen an actual use case where it makes more sense than simply specifying which networks are to be reached on which interface.
 
I don't want traffic into the Ceph network from the outside. I just want each node to have 3 private networks used for Corosync and the Ceph public/cluster traffic. I'd like to solve the networking.service error at boot (from my original post). My configuration appears to work, and I can successfully set up a cluster and Ceph, but I'm not sure whether my network configuration file is correct or whether I need to specify routes for the multiple gateways. I'm only trying to fix that error.
 
I would simply decide which of the networks should be used for outbound traffic and delete the gateway line from the other 2 interfaces.

I hope this helps!
 
This solved my problem. networking.service starts successfully once I removed the 3 extra default gateways.
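For anyone finding this later, the routing table now shows a single default route, roughly like this (output trimmed and from memory):

root@vmhost01:~# ip route
default via 192.168.48.1 dev vmbr0
192.168.48.0/24 dev vmbr0 proto kernel scope link src 192.168.48.20
192.168.49.0/24 dev eno2 proto kernel scope link src 192.168.49.20
192.168.50.0/24 dev enp10s0f0 proto kernel scope link src 192.168.50.20
192.168.51.0/24 dev enp10s0f1 proto kernel scope link src 192.168.51.20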
 
