[SOLVED] Guests see network interfaces as disabled

Taylor Chien

New Member
Jan 15, 2018
We have a four-node Proxmox cluster running the latest patches for 5.1. Each server is a DL380 G8 with two gigabit and two ten-gigabit links. The ten-gig links are bonded together for access to the storage network (no gateway). One of the gigabit links is dedicated to management, with a static IP assigned directly to the interface. The last gigabit link is a trunk port for VM traffic, but no matter what configuration I use, the guest sees its interface as down.

We know it's not the switch, because we've connected the new servers to ports used by known-working servers. The only differences from our known-working servers are the ten-gig additions, a couple of interface names, and the fact that the new servers run a Ceph cluster, none of which should affect the trunk.

Both the storage and management networks are operational, but any attempt to use networking in a VM leads to the interface being marked as down in Linux guests.

I'm at a loss. It's probably something super simple to do with bridging, but I can't figure out what.

Code:
# Interface configuration:

# Loopback
auto lo
iface lo inet loopback

# Management interface
auto eno1
iface eno1 inet static
        address 10.xxx.xxx.5$
        netmask 255.255.255.0
        gateway 10.xxx.xxx.1
        dns-nameservers x.x.x.x x.x.x.x
        dns-domain pve.example.com
        dns-search pve.example.com

# Trunk Interface
auto eno2
iface eno2 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno2
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes

# Storage Network Interface - Local Only
iface ens2f0 inet manual
iface ens2f1 inet manual

auto bond0
iface bond0 inet static
        address 192.168.xxx.5$
        netmask 255.255.255.0
        slaves ens2f0 ens2f1
        bond_miimon 100
        bond_mode 4
        pre-up ifup ens2f0 ens2f1
        post-down ifdown ens2f0 ens2f1
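
As a sanity check on the VLAN-aware bridge itself, iproute2's bridge tool can show which VLANs each port actually carries (a quick check, assuming a recent iproute2; the device names match the config above):

Code:
# VLAN membership per bridge port; eno2 should carry the guest tags
bridge vlan show

# Confirm the physical trunk link has carrier
ip link show eno2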

The interface state reported inside VM 500 (Linux guest):

Code:
eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

VM 500's network config:

Code:
net0: virtio=00:11:22:33:44:55,bridge=vmbr0,tag=123
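
While VM 500 is running, the same thing can be checked from the host side. A minimal sketch, assuming the tap device for net0 is named tap500i0 as in the brctl output below:

Code:
# Tap device state and its master bridge
ip -d link show tap500i0

# VLAN tag programmed on the tap port (should include 123)
bridge vlan show dev tap500i0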

"ip add" on the server:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:4c brd ff:ff:ff:ff:ff:ff
    inet 10.xxx.xxx.52/24 brd 10.xxx.xxx.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx:964c/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:4d brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:4e brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:4f brd ff:ff:ff:ff:ff:ff
6: ens2f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:e7 brd ff:ff:ff:ff:ff:ff
7: ens2f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:e8 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.xxx.5$/24 brd 192.168.xxx.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx:4e8/64 scope link
       valid_lft forever preferred_lft forever
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::xxxx:xxxx:xxxx:964d/64 scope link
       valid_lft forever preferred_lft forever

The output of "brctl show" with VM 500 running:

Code:
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.xxxxxxxxxx4d       no              eno2
                                                        tap500i0
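
Both ports show up under vmbr0, so the host-side plumbing looks sane. If it helps anyone, the per-port forwarding state can also be read from sysfs (3 means forwarding, 0 disabled):

Code:
cat /sys/class/net/eno2/brport/state
cat /sys/class/net/tap500i0/brport/state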

The output of "pveversion -v":

Code:
proxmox-ve: 5.1-35 (running kernel: 4.13.13-4-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-2-pve: 4.13.13-33
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-15
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
ceph: 12.2.2-pve1

Note: eno3 and eno4 are unused on these servers at the moment while we try to find the issue.
 

As a first step, verify the network configuration inside the VM. What happens if you simply try

Code:
ifconfig eth0 up

in the virtual machine?
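
On guests without net-tools, the iproute2 equivalent would be (same idea, different tool):

Code:
ip link set eth0 up
ip addr show eth0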
 
It had nothing to do with any of this. I had assigned a MAC prefix to the VMs that turned out to be invalid.

Whoops.

Removing that bad prefix made everything work.
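
For anyone who hits the same thing: Linux will refuse a MAC whose multicast bit (the least-significant bit of the first octet) is set, and a locally administered prefix (second hex digit 2, 6, A, or E) is the safe choice for hand-assigned addresses. A rough shell check, assuming a colon-separated MAC:

Code:
# Inspect the first octet of a MAC address
mac="de:ad:be:ef:00:01"
first=$((0x$(echo "$mac" | cut -d: -f1)))

if [ $((first & 1)) -eq 0 ]; then
    echo "unicast - usable for a VM NIC"
else
    echo "multicast bit set - invalid for a VM NIC"
fi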