My network has some issues. When traffic increases, connections become very slow even though it's a 10Gb network. I'm not sure whether it's Proxmox-related or not.
VMx = virtual machine x
VHx = proxmox virtual host x
VM1 = 192.168.0.51 (E2:A9:CC:75:79:AF)...
I thought I would post this to help anybody else who has an HP ProCurve 2910al-24G network switch and is using the 'bridge-vlan-aware' parameter on their server interfaces.
I have a 3-server cluster consisting of 2x Dell R420 and 1x R710. I recently upgraded them all from Proxmox 5.4 to 6.1...
Participants: two PVEs (6.1-8) and an old HP 2810-24G (J9021A)
I want both failover and doubled bandwidth on these two freshly installed PVEs.
But only failover actually works; the bandwidth is limited to 1 Gbps.
I am sure I am missing something, which is why I am posting here.
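For reference, a minimal sketch of what an 802.3ad (LACP) bond plus bridge can look like in /etc/network/interfaces; the NIC names eno1/eno2 and the addresses are assumptions, not taken from the poster's setup. One important caveat: LACP balances traffic per flow, so any single TCP connection is still capped at one member link's 1 Gbps; the doubled bandwidth only shows up across multiple simultaneous flows.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/24
    gateway 192.168.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The layer3+4 hash policy spreads flows by IP and port, which usually distributes traffic better than the default layer2 policy when many clients talk to one server.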
I'm running a 4-node PVE cluster (6-1.3) with OVS installed. All networks are configured using OVS and all interfaces are LACP-bonded.
OVS BOND (LACP) -> OVS BRIDGE -> VMS
Recently I noticed Windows VMs using the Intel E1000 driver disconnecting after some...
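For comparison, a sketch of what the OVS side of such a setup commonly looks like in /etc/network/interfaces; interface names and bond options are assumptions, not the poster's actual config. (Independently of the bond, switching Windows guests from the emulated E1000 NIC to VirtIO, with the virtio drivers installed in the guest, is a commonly suggested remedy for E1000 disconnects.)

```
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
```

With lacp=active the bond negotiates LACP with the switch, and balance-tcp hashes flows across the member links.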
I'm trying to set up LACP on my Proxmox host using Open vSwitch, but it's not working and I've run out of ideas and Googling.
I have a Zyxel GS1900, enabled LACP, and created a LAG group with the 2 ports connected to my server. In Proxmox I set up the network like this:
iface bond0 inet...
Hello. I'm looking to set up my Proxmox 5.4 cluster with 2 bonded ports for VMs/management and 2 bonded ports for integrated Ceph storage. Looking over the Network Config docs, there is a reference to running the cluster network over a bonded link requiring active-passive mode. Which one...
Building a small home lab HA Proxmox/Ceph cluster with 3 Dell R210 II boxes. I'm trying to do it on the cheap and have decided to forgo a 10Gb ring in favor of an LACP'd 1Gb network, with quad-port Intel cards in each of the three boxes and an HP 1810G-24 switch. Due to ignorance I was...
Dear colleagues,
I currently have a demo setup in semi-production use to identify and contain problems before I roll it out to 500 users. At the moment the company uses no logically separated systems at all. I had previously built setups of this scale with VMware...
Since I upgraded my PVE 3.4 to 5.2, backup, clone, and restore are very slow and block access to my virtual machines.
Here you can find the whole setup:
PVE root, swap, and data are on ext3, the same as used with PVE 3.4.
The Synology NAS uses ext4 (connected via NFS)...
I have a question about bonding and HA. I want to create an HA PVE cluster, but I am confused about bonding and its modes. See this simplified picture: I have two switches (MikroTik CRS317, not stackable) and multiple PVE nodes (only one is shown).
What should I configure to create a HA...
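One point worth knowing for this topology: 802.3ad/LACP requires both bond members to terminate on the same logical switch, so with two switches that cannot stack or do MLAG, the commonly recommended mode is active-backup, with one NIC to each switch. A sketch, assuming NIC names eno1/eno2 (not the poster's actual interfaces):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-primary eno1
    bond-miimon 100
```

This gives switch-level failover but no bandwidth aggregation: only one link carries traffic at a time.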
I'm currently virtualizing my FreeNAS as a VM on my Proxmox host. However, I have a problem.
These are the tests I did.
For Debian, LACP works properly (VirtIO network, bonding inside the VM).
For FreeNAS, LACP does not work (VirtIO network, lagg inside the VM).
I am trying to get bonding working with my Cisco WS-C3850-24T switch, but I keep getting "Suspended: LACP currently not enabled on the remote port" when I check the logs on the switch. Here's my configuration on the switch side:
description Access PROXMOX-SRV...
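That particular message usually means the switch is sending LACPDUs but the remote side (the Proxmox bond) is not answering, so the first thing to verify is that the bond is really in 802.3ad mode on the server. On the switch side, the ports also need `channel-group ... mode active` (or `passive`); `mode on` creates a static channel that never negotiates LACP. A hedged sketch of the IOS side, with interface range and channel number as assumptions:

```
interface range GigabitEthernet1/0/1 - 2
 description Access PROXMOX-SRV
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```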
I have a 4-NIC PowerEdge R710. Current interfaces file:
iface lo inet loopback
iface eno2 inet manual
iface eno1 inet manual
iface eno3 inet manual
iface eno4 inet manual
iface bond0 inet manual
slaves eno2 eno3 eno4...
Bonding problems. LACP is selected and the host rebooted for it to take effect. The web GUI shows LACP, but cat /proc/net/bonding/bond0 shows "round robin", and I am getting errors on the switch and in Proxmox. I have no idea how to fix this without a reinstall. I have a Cisco Catalyst 3560G, and I have the switch set...
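A quick way to confirm which mode the kernel actually negotiated, independent of what the GUI shows (assuming the bond is named bond0):

```shell
# A working LACP bond reports "IEEE 802.3ad Dynamic link aggregation";
# "load balancing (round-robin)" means the 802.3ad setting never took effect.
if [ -r /proc/net/bonding/bond0 ]; then
    grep -E 'Bonding Mode|LACP rate|Aggregator ID' /proc/net/bonding/bond0
else
    echo "bond0 not present (bonding module not loaded or interface down)"
fi
```

If the mode disagrees with /etc/network/interfaces, the usual cause is that the bond was never fully torn down and re-created; `ifdown bond0 && ifup bond0` (or a reboot) after correcting the config applies the new mode without a reinstall.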
I plan to replace a stack of 2 switches. A 3-node cluster (with dedicated LACP interfaces for vmbr0 and Ceph) is connected to this stack. I guess I have to shut down the entire cluster to do this. What are your recommendations for doing this properly? (Stop all VMs, then what just...
I'm currently building a "proof-of-concept" for work using Proxmox.
I have a 4x1Gb LACP config (see below), but I get slow performance from my VMs.
I'm using NAS storage capable of 400 MB/s that performs perfectly when tested from a 10Gb non-VM client.
On Proxmox I have 3 Windows 2012 R2 VMs with...
This configuration (perhaps you'll call it a work-around) took me a while to sort out, so hopefully it will save you some time.
When you bond 2 interfaces and then want to create a "vmbr" (for example vmbr0) on top of them, you'll find that the moment a VM starts with the same VLAN tag as...
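For anyone hitting the same thing: a sketch of a VLAN-aware bridge over a bond, as it commonly looks on PVE 6 with ifupdown2 (interface names are assumptions; `bridge-vids` limits which tags the bridge carries):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With a VLAN-aware bridge, the VM's VLAN tag is set on the guest NIC in the VM config instead of creating one bridge per VLAN, which is where the tag clash described above tends to surface.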