I'm new to Proxmox as of a few weeks ago and am trying to replace VMware, since I'd rather avoid paying vCenter pricing and save the department money. All advice is welcome during our trial period.
Our servers have dual-NIC cards. After some time, I was able to get a continuous...
Hello. I'm looking to set up my Proxmox 5.4 cluster with 2 bonded ports for VMs/management and 2 bonded ports for integrated Ceph storage. Looking over the network configuration docs, there is a reference to running the cluster network over a bonded link requiring active-passive mode. Which one...
Hi, I've been using Proxmox for just 2 or 3 months now. I love it: it's open source and uses ZFS. I used ESXi for about 7 years before I moved to PVE.
Everything was running fine. Then I wanted to look at networking performance and activate bonding, which I used all the time under VMware.
I was testing bonding with 4-port Dell Broadcom 5719 NICs. Something weird happens when I try to bond the 4-port NICs in round-robin mode, and I want to share it.
I'm using 2 NICs connected directly port-to-port with patch cables.
I tested on 3 different server hardware platforms and these kernels.
I have a question about bonding and HA. I want to create an HA PVE cluster, but I am confused about bonding and its modes. See this simplified picture: I have two switches (MikroTik CRS317, not stackable) and multiple PVE nodes (only one is shown).
What should I configure to create a HA...
I ran into an interesting problem while configuring OVS, but first, let me describe the network:
1) Two LAN interfaces, bonded properly on the switch
2) A single VLAN, #519
3) Proxmox 5.2, free edition :)
OK, first I configured it with Linux bridges; the config follows:
iface lo inet loopback
I've got a server with 4 network interfaces, bonded together with 802.3ad.
To make that setup work, I had to remove the bridge; otherwise every configuration I tried failed. My network configuration now looks like this.
With this setup, I've reached the internet by updating...
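For reference, a minimal sketch of an 802.3ad bond carrying the host IP directly (no bridge); the eth0–eth3 interface names and the addresses are assumptions, not taken from the post:

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
```

Note that the switch-side port channel must also be configured for LACP, or the bond will not negotiate.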
I have a question about creating a cluster.
We use two servers, which each have four 1 Gb NICs. We work with local storage. The most important VMs are Active Directory and the file server. These VMs are redundant: AD1 is on hypervisor1 and AD2 is on hypervisor2.
Now I just wanted to make a...
I've configured my server with a bonded and bridged connection and seem to have gotten in a little over my head. Our networking group's switch configuration may also be involved. I have kernel logs going back to July 16th and since August 14th I keep periodically getting the following error...
So I got a new server for my Docker Plex environment, a beefy dual-Xeon Dell Precision. I thought I would run a virtual lab environment on the same hardware, so I installed Proxmox, but I am struggling, coming from more of a VMware background. I really like Plex on Docker for its community and...
I'm new to VMs, bridges, and bonding. The network configuration page suggests some settings but leaves many things unclear to me.
I want to bond two NICs, a 1G and a 10G, in active-backup mode with the 10G as primary. Then I want to use a bridge so that all my VMs use the 10G and fail over to...
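As a sketch of the setup described above (the interface names eno1 for the 1G and enp1s0 for the 10G, plus the addresses, are assumptions), an active-backup bond with the 10G as primary, bridged for the VMs, could look like:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 enp1s0
    bond-mode active-backup
    bond-primary enp1s0
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With bond-primary set, traffic should stay on the 10G port whenever its link is up and fall back to the 1G port only on failure.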
I'm currently setting up a Proxmox lab at home and I'd like some guidance/recommendations.
Here is my hardware:
2 HP ProLiant MicroServer Gen8 servers:
- 16 GB of RAM
- 220 SSD
- 2 Gigabit interfaces on each server
- MicroSD port for Proxmox installation
I would like to have an...
Bonding problems. LACP is selected, and I rebooted for it to take effect. The web GUI shows LACP, but cat /proc/net/bonding/bond0 shows "round robin", and I am getting errors on the switch and in Proxmox. I have no idea how to fix this without a reinstall. I have a Cisco Catalyst 3560G, and I have the switch set...
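A mode mismatch like this usually means the bond-mode setting never took effect, since round-robin (balance-rr) is the kernel's default. A sketch of what both sides should look like (the eno1/eno2 names and the Gi0/1-2 ports are assumptions):

```
# Proxmox side, /etc/network/interfaces
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

# Cisco Catalyst side (IOS), shown here as comments:
#   interface range GigabitEthernet0/1 - 2
#    channel-group 1 mode active
```

After a reboot, cat /proc/net/bonding/bond0 should report "IEEE 802.3ad Dynamic link aggregation" rather than round-robin.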
I'm currently building a "proof-of-concept" for work using Proxmox.
I have a 4x1Gb LACP config (see below), but I get slow performance from my VMs.
I'm using 400 MBps NAS storage that works perfectly when tested from a 10Gb non-VM client.
On Proxmox I have 3 Win 2012 R2 VMs with...
I'm creating an environment; here are the details.
HP blade servers 3 node HA cluster
SAN iSCSI multipath shared storage
2x 10Gb NICs making up a bond
My question is:
Is it enough to create the cluster bond (2x 10Gb NICs), or are there any recommendations to avoid bottleneck/latency issues?
I have a small cluster of Proxmox machines, and I am in the process of upgrading them from 4.4 to 5.0. The two that I have converted have this problem: every few reboots, the network simply doesn't work. I can log in via the console and run /etc/init.d/networking restart, and that makes...
I am installing Proxmox on an HP DL360p Gen9 with a 2-port 10G NIC that has a trunked LACP config on the switch side, and I have the below as my /etc/network/interfaces file:
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
iface bond0 inet manual...
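For comparison, a complete LACP bond plus bridge for a 2-port NIC often looks like the following sketch (the addresses and hash policy are assumptions; the truncated file above may differ):

```
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.5
    netmask 255.255.255.0
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```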
I'm testing Proxmox and bumped into a small problem.
I have 3 servers with 4 cards each, connected to a single switch.
I created bond0 and bond1 interfaces, with 2 interfaces in each bond.
Then I assigned vmbr0 to bond0 and vmbr1 to bond1.
vmbr1 has an IP in the same network:
This configuration (perhaps you'll call it a workaround) took me a while to sort out, so hopefully it will save you some time.
When you bond 2 interfaces and then want to make a "vmbr" (for example vmbr0) over them, you'll find that the moment a VM starts with the same VLAN tag as...
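One common way to handle guest VLAN tags over a bonded bridge (a sketch, not necessarily this poster's exact workaround; the eno1/eno2 names are assumptions) is a VLAN-aware bridge, so VM tags don't collide with host VLAN interfaces:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With bridge-vlan-aware set, the VLAN tag is assigned per VM NIC in the GUI instead of via separate bond0.XXX/vmbrXXX pairs.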