Hi!
I have a small cluster of Proxmox machines, and I am in the process of upgrading them from 4.4 to 5.0. The two that I have converted have a problem where, every few reboots, the network simply doesn't work. I can log in via the console and run /etc/init.d/networking restart, which brings it back up, but that's not a good long-term solution.
I have a fairly standard bonded, VLAN'd setup:
Code:
# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 4

## live
auto bond0.2
iface bond0.2 inet manual
        vlan-raw-device bond0

## private
auto bond0.4
iface bond0.4 inet manual
        vlan-raw-device bond0

## live
auto vmbr0
iface vmbr0 inet manual
        bridge_ports bond0.2
        bridge_stp off
        bridge_fd 0

## private
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.18
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0.4
        bridge_stp off
        bridge_fd 0
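One thing I'm not sure about: I've seen newer Debian/ifupdown examples spell the bonding options as bond-slaves / bond-miimon / bond-mode 802.3ad rather than the form I'm using above. I'm only guessing that this is relevant, but is this the spelling that 5.0 expects?
Code:
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad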
When it boots up, all the interfaces are "UP", but bond0 comes up in round-robin mode rather than 802.3ad, and I don't see how that can happen:
Code:
no-net# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:xx:xx:xx
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:xx:xx:xy
Slave queue ID: 0
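When this happens I could presumably bounce just the bond stack instead of restarting all of networking - a rough sketch with plain ifupdown, using the interface names from my config above (I haven't relied on this yet):
Code:
# take down the bridges, VLAN interfaces and bond, then bring them back up in order
ifdown vmbr0 vmbr1 bond0.2 bond0.4 bond0
ifup bond0 bond0.2 bond0.4 vmbr0 vmbr1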
Here is part of what it looks like when set up correctly:
Code:
good-net# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:25:90:08:58:82
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 2
        Actor Key: 9
        Partner Key: 19
        Partner Mac Address: f8:c0:01:cb:a1:80

Slave Interface: eth0
...etc...
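To spot a bad boot quickly, checking the mode straight from sysfs is enough (this is the standard bonding sysfs attribute, so I'm assuming it behaves the same here):
Code:
# prints "802.3ad 4" on a good boot, "balance-rr 0" on a bad one
cat /sys/class/net/bond0/bonding/mode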
And the pair of Juniper switches I have show the same thing - both ports are up at 1 Gbps, but the 802.3ad link aggregation is not up.
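On the Juniper side I'm checking it roughly like this (ae0 is just what I happen to call my aggregated interface; yours may differ):
Code:
> show lacp interfaces ae0
> show interfaces ae0 terse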
These machines had been working just fine before the upgrade to 5.0 - is there anything that may have changed?
Thanks in advance!