After upgrading the NIC firmware from version 19.0.12 to 19.5.12, no LACP bond interface would come up.
This is just a reference for anyone who faces a similar issue. I rolled back to the previous version on the Dell servers and everything came back to normal.
Please excuse my lack of knowledge; I am new to Proxmox and to this industry as a whole. The server I run has 4 gigabit NICs, and I had configured Proxmox to use just one. For redundancy and improved throughput I went poking around the network config. I made a...
We have 2x switches available, each with 4x 10G ports and 48x 1G ports. They're stackable, though we'd rather avoid it.
At the moment, all hypervisors route through one of the two switches, via the untagged handoff VLAN with tagged VLANs on top.
Those tagged VLANs...
I'm fighting with my network setup and am not sure this config can work, so it would be nice if you could share info or your own config from a similar setup.
System: HPE ML350 Gen9
Network card: HPE 546SFP+ (MLX312B) - 2-port SFP+ (part number: 779793-B21)
Bridging: using OVS...
I am trying to set up our PVE node on the network and am having an issue I've so far been unsuccessful in resolving.
The node has two NICs with multiple ports (1-4 and 5-8), which I've set up for LACP, resulting in two bonds (bond0 and bond1). The goal is that bond1 (ports 5-8 / the...
my next problem has come up. I have now created a Linux bond from 4 network interfaces. The bond itself works, but I cannot select the bond as a network device for a VM. What is still missing? I had already tried creating a vmbr1, and as the bridge port I set the bond...
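The missing piece in setups like this is usually a bridge on top of the bond: VMs attach to a vmbr, not to the bond directly. A minimal sketch for /etc/network/interfaces, assuming an 802.3ad bond (the interface names are placeholders):

```
# Bond the four NICs (names are assumptions; adjust to your hardware)
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0 enp2s0 enp3s0 enp4s0
    bond-mode 802.3ad
    bond-miimon 100

# The bridge is what the VM's network device actually uses
auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With this in place, vmbr1 appears in the VM's network device dropdown, and traffic flows VM -> vmbr1 -> bond0 -> physical NICs.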
My network has some issues. When traffic increases, connections become very slow even though it's a 10 GbE network. I'm not sure whether it's Proxmox-related or not.
VMx = virtual machine x
VHx = proxmox virtual host x
VM1 = 192.168.0.51 (E2:A9:CC:75:79:AF)...
I thought I would post this to maybe assist anybody else who has an HP ProCurve 2910al-24G network switch and uses the 'bridge-vlan-aware' parameter in their server interfaces.
I have a 3-server cluster consisting of 2x Dell R420 and 1x R710. I recently upgraded them all from Proxmox 5.4 to 6.1...
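For reference, a typical VLAN-aware bridge stanza on PVE 6.x looks like the sketch below (the bond name, address, and VLAN range are assumptions, not taken from the poster's setup):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    # Make the bridge itself VLAN-aware so guests can tag their own traffic
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

The switch-side ports then carry those VLANs as tagged members of the LACP trunk.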
Participants: two PVEs (6.1-8) and an old HP 2810-24G (J9021A).
I want both failover and doubled bandwidth on these two freshly installed PVEs.
In practice only failover works; the bandwidth is limited to 1 Gbps.
I am sure I am missing something, which is why I am posting here.
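One common gotcha here: under 802.3ad, a single TCP stream always stays on one slave, so a one-stream iperf test tops out at 1 Gbps no matter how many links are in the bond. Aggregate throughput only shows up across multiple flows, and the hash policy decides how flows are spread. A hedged sketch of the bond stanza (interface names are placeholders):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    # Hash on IP + port so different flows can land on different slaves
    bond-xmit-hash-policy layer3+4
```

The switch side needs a matching LACP trunk on both ports; testing with several parallel streams (e.g. iperf -P 4) is the way to see the extra bandwidth.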
I'm running a 4-node PVE cluster (6.1-3) with OVS installed. All networks are configured using OVS and all interfaces are LACP-bonded.
OVS bond (LACP) -> OVS bridge -> VMs
Recently I noticed Windows VMs using intel E1000 drivers disconnecting after some...
I'm trying to set up LACP on my Proxmox host using Open vSwitch, but it's not working, and I've run out of both ideas and search results.
I have a Zyxel GS1900, enabled LACP, and created a LAG group with the 2 ports connected to my server. In Proxmox I set up the network like this:
iface bond0 inet...
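For comparison, a working OVS LACP setup in /etc/network/interfaces usually follows the pattern below (a sketch using the standard Proxmox OVS keywords; the NIC names are assumptions):

```
auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eno1 eno2
    # balance-tcp requires LACP; the switch LAG must be in active LACP mode too
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
```

After applying, `ovs-appctl bond/show bond0` reports whether LACP actually negotiated; if it shows `lacp_status: configured` rather than `negotiated`, the switch side is the usual suspect.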
Hello. I'm looking to set up my Proxmox 5.4 cluster with 2 bonded ports for VMs/management and 2 bonded ports for integrated Ceph storage. Looking over the Network Config docs, there is a reference to running the cluster network over a bonded link requiring active-passive mode. Which one...
Building a small home-lab HA Proxmox/Ceph cluster with 3 Dell 210ii boxes. I'm trying to do it on the cheap and have decided to forego a 10 Gb ring in favor of an LACP'd 1 Gb network, with quad-port Intel cards in each of the three boxes and an HP 1810G-24 switch. Due to ignorance I was...
Dear colleagues,
I currently have a demo setup in semi-productive use, to identify and contain problems before rolling it out to 500 users. At the moment the company uses no logically separated systems at all. I had previously built setups of this scale with VMware...
Since I upgraded my PVE 3.4 to 5.2, backup, clone, and restore are very slow and block access to my virtual machines.
Here is the full setup:
PVE root, swap, and data are on ext3, same as used with PVE 3.4.
The Synology NAS uses ext4 (connected via NFS)...
I have a question about bonding and HA. I want to create an HA PVE cluster, but I am confused by bonding and its modes. See this simplified picture: I have two switches (MikroTik CRS317, not stackable) and multiple PVE nodes (only one is shown).
What should I configure to create a HA...
I'm currently virtualizing my FreeNAS as a VM on my Proxmox host. However, I have a problem.
These are the tests I did:
For Debian, LACP works properly (VirtIO network, bonding inside the VM).
For FreeNAS, LACP does not work (VirtIO network, lagg inside the VM).
I am trying to get bonding working with my Cisco WS-C3850-24T switch, but I keep getting "Suspended: LACP currently not enabled on the remote port" when I check the logs on the switch. Here's my configuration on the switch side:
description Access PROXMOX-SRV...
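That switch message usually means the server side is not actually speaking LACP (e.g. the bond is in balance-rr or the wrong mode) while the switch port-channel expects 802.3ad. A hedged sketch of what the matching switch-side config typically looks like on a 3850 (the port-channel number and port range are placeholders):

```
interface Port-channel1
 description PROXMOX-SRV
 switchport mode trunk
!
interface range GigabitEthernet1/0/1 - 2
 description Access PROXMOX-SRV
 switchport mode trunk
 ! "mode active" = initiate LACP; "mode on" disables LACP and will trigger the suspend
 channel-group 1 mode active
```

On the Proxmox side, the bond must be `bond-mode 802.3ad`; any static (non-LACP) mode paired with `mode active` on the switch produces exactly this suspended state.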
I have a 4-NIC PowerEdge R710. Current interfaces file:
iface lo inet loopback
iface eno2 inet manual
iface eno1 inet manual
iface eno3 inet manual
iface eno4 inet manual
iface bond0 inet manual
slaves eno2 eno3 eno4...
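For comparison, a complete 802.3ad bond plus bridge for a box like this might look like the sketch below. Note that `slaves` is the older ifupdown keyword; on current Proxmox (ifupdown2) it is `bond-slaves`. The address, gateway, and slave selection are assumptions:

```
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno2 eno3 eno4
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The management IP lives on the bridge, not on the bond or the physical NICs, so both the host and its VMs share the aggregated links.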