[SOLVED] Problems with my Ethernet link on my first VM

Feb 25, 2019
Hello,
After reading several topics, most of them very helpful, I'm stuck...
I installed Proxmox 5.3-9. I have this configuration:

Two servers, each with 4 Ethernet links:
  • eno1 and eno2 for my LAN
  • eno3 and eno4 to link to the other Proxmox server.

In the future they will be in a cluster, but for now I'm only interested in one server.
I need to install Windows Server 2016.
The installation went fine. I had a problem with the VirtIO network driver and fixed it with virtio-win-0.1.164.iso.

Windows sees my Ethernet adapter, but it is not connected to my LAN...
At the moment I have this configuration on my hypervisor (see attached file).

I don't understand what's wrong with my configuration...
I also tried installing Debian and had problems there too... What's wrong with my configuration?
 

Attachments: Immagine.png
Please post the complete /etc/network/interfaces file (redact public IPs or any other sensitive information).
In addition please post the config of your Windows VM. (Output of 'qm conf <vmid>')
 
Ok. Sorry:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
              bond-slaves eno1 eno2
              bond-miimon 100
              bond-balance balance-rr

auto bond1
              address 192.168.128.1
              netmask 255.255.255.0
              bond-slaves eno1 eno2
              bond-miimon 100
              bond-balance balance-rr

auto vmbr0
iface vmbr0 inet static
        address  172.16.1.8
        netmask  255.255.255.0
        gateway  172.16.1.254
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
 
agent: 1
bootdisk: virtio0
cores: 2
ide0: local:iso/SW_DVD9_Win_Server_STD_CORE_2016_64Bit_Italian_-4_DC_STD_MLF_X21-70532.ISO,media=cdrom,size=5813242K
ide2: local:iso/virtio-win-0.1.141.iso,media=cdrom,size=309208K
memory: 8112
name: WSUS
net0: virtio=9A:15:D1:D6:BD:55,bridge=vmbr0
numa: 0
ostype: win10
scsi0: tank1:vm-101-disk-0,size=40G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=0e61636e-562b-4562-8697-9690c5190031
sockets: 1
vmgenid: 9e24fc43-8f64-49c5-825d-da2280481eb2
Here is my VM configuration (also attached).
 

Attachments: Immagine.png
Windows IP Configuration

Host Name . . . . . . . . . . . . : WIN-R06A0EG4BJT
Primary DNS Suffix  . . . . . . . :
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No

Ethernet adapter:

Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Red Hat VirtIO Ethernet Adapter
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 172.16.1.13 (Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 172.16.1.254
DNS Servers . . . . . . . . . . . : 172.16.0.2
                                    172.16.14.10
NetBIOS over Tcpip. . . . . . . . : Enabled

Tunnel adapter isatap.{E1FE5C0D-D253-4738-8F04-6F067D44C30}:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes

I don't understand if the problem is in the VM or in the hypervisor configuration...
Thanks
 
Your bond1 is missing the 'iface bond1 inet static' line and it has the same slaves defined as bond0 instead of 'eno3' and 'eno4'
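
For reference, the corrected bond1 stanza would look something like this (just a sketch, assuming eno3 and eno4 are the intended slaves as described in the first post):

Code:
auto bond1
iface bond1 inet static
        address  192.168.128.1
        netmask  255.255.255.0
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode balance-rr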
 
Your bond1 is missing the 'iface bond1 inet static' line and it has the same slaves defined as bond0 instead of 'eno3' and 'eno4'
I'm sorry, I don't know why, but my copy/paste didn't work.
Here is my /etc/network/interfaces again:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode balance-rr

auto bond1
iface bond1 inet static
        address  192.168.128.1
        netmask  255.255.255.0
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address  172.16.1.8
        netmask  255.255.255.0
        gateway  172.16.1.254
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

With this configuration my Windows Server 2016 VM doesn't connect to my LAN.
 
What have you tried? Can you ping the VM? Can you ping the gateway from the VM? Can you ping '8.8.8.8'?
Is any firewall active (host, VM)?
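
For reference, the checks would look something like this (a sketch; the addresses are taken from the configuration posted above):

Code:
# From a client on the LAN (or from the Proxmox host): can the VM be reached?
ping 172.16.1.13

# From inside the Windows VM: can the gateway and an outside host be reached?
ping 172.16.1.254
ping 8.8.8.8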
 
My VM is isolated... A ping from a client on the LAN to the VM doesn't work, and from the VM, pinging Google or any other IP doesn't work either.
I also tried the tracert command, but the result is the same.
 
So you can neither ping the gateway nor the host?
 
Please post the output of 'ip a', 'ip r' and 'brctl show' from your host.
 
So... from the host. Command 'ip a':
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 18:66:da:97:54:a9 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 18:66:da:97:54:a9 brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether 18:66:da:97:54:ab brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether 18:66:da:97:54:ab brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 18:66:da:97:54:a9 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 18:66:da:97:54:a9 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.8/24 brd 172.16.1.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::1a66:daff:fe97:54a9/64 scope link
valid_lft forever preferred_lft forever
8: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 18:66:da:97:54:ab brd ff:ff:ff:ff:ff:ff
inet 192.168.128.1/24 brd 192.168.128.255 scope global bond1
valid_lft forever preferred_lft forever
inet6 fe80::1a66:daff:fe97:54ab/64 scope link
valid_lft forever preferred_lft forever
9: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
link/ether da:3f:b0:14:9a:44 brd ff:ff:ff:ff:ff:ff

Command 'ip r':

default via 172.16.1.254 dev vmbr0 onlink
172.16.1.0/24 dev vmbr0 proto kernel scope link src 172.16.1.8
192.168.128.0/24 dev bond1 proto kernel scope link src 192.168.128.1

Command 'brctl show':

bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.1866da9754a9       no              bond0
                                                        tap101i0
 
Can you try it with bond mode 'active-backup' instead of 'balance-rr'? The output of 'dmesg' could also be helpful.
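
For reference, that change would only touch the bond-mode line of the bond0 stanza, something like this sketch (bond1 could be changed the same way, and the network would need to be reloaded or the host rebooted for it to take effect):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup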
 
Bingo!

Thanks a lot Mira! I changed the bond type to active-backup and my VM is back on the network!!!
Sorry for the trouble.
Just one last question... Why didn't round-robin work?
I also tried putting the address on bond0 and adding bond0 as the slave of vmbr0, but that didn't work either...

For now, thanks very much!
 
