vlan stability?

mrballcb

New Member
Sep 30, 2011
We have used Proxmox 1.6, 1.8, 1.9, 2.0, and 2.1. In all prior installs, we simply assigned a switch port to a single VLAN (access mode) and assigned an IP to a vmbrX (bridged to the appropriate ethX). We are now testing the switch port in trunk mode, bridging vmbrX to the appropriate ethX.VLAN (and pruning the trunk on the switch to only the VLANs that should be visible to guests running on these Proxmox servers). We are not using any kind of bonding.
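For concreteness, our test setup looks roughly like this in /etc/network/interfaces (interface names, the VLAN ID, and the address are illustrative, not our real values):

```
# Sketch only -- eth0 is the trunked NIC, VLAN 100 is an example tag
auto eth0
iface eth0 inet manual

# 802.1Q subinterface carrying only VLAN 100 off the trunk
auto eth0.100
iface eth0.100 inet manual

# Bridge for guests on VLAN 100; the host's IP lives on the bridge
auto vmbr100
iface vmbr100 inet static
        address 192.168.100.10
        netmask 255.255.255.0
        bridge_ports eth0.100
        bridge_stp off
        bridge_fd 0
```

One bridge per VLAN like this lets each VM attach to whichever vmbrX matches the network it should see, without touching the switch.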

To state it simply, it works. It simplified our configuration because it removed the manual vlan mapping from the switch ports and moved it into the proxmox configuration. So far I like it, but I'm looking for possible pitfalls and gotchas. Has anybody had any issues with stability or throughput using trunked interfaces versus regular? Anything else that is worth considering before we adopt this configuration across all our proxmox servers?

At first we did have an issue that looked like a severe throughput problem (20 minutes to copy a 1.8 GB file across NFS), but it turned out to be a disk I/O problem. Breaking the RAID 1 mirror so that only one drive was functional allowed the copy to complete in about a minute, so either the controller has an issue or one of the drives is bad. It was merely a red herring that muddied the waters for us for a little while.
 
mrballcb said: We are now testing the configuration of the switch port in trunk mode and bridging vmbrX to the appropriate ethX.VLAN ... We are not using any kind of bonding.
Hi,
why do you use a trunk (bonding) on the switch side and then not use it?
mrballcb said: So far I like it, but I'm looking for possible pitfalls and gotchas. Has anybody had any issues with stability or throughput using trunked interfaces versus regular?
I have used VLAN tagging (802.1Q) for years without trouble. The important thing is to use only tagged traffic on an interface (mixed mode, with untagged default-VLAN traffic alongside tagged traffic, seems to be problematic on some systems).

Depending on the driver/hardware (NIC/switch), you may see a (small) performance impact or need special settings (Solarflare 10G cards need an "ethtool -K eth0 tso off" to reach full performance).
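To see whether offloads are in play, something like this works (assuming the trunked NIC is eth0 and ethtool is installed; run as root):

```
# Show the current offload settings for the trunked NIC
ethtool -k eth0

# Disable TCP segmentation offload if VLAN throughput is poor
# (the Solarflare workaround mentioned above)
ethtool -K eth0 tso off
```

Note that -k (lowercase) only queries, while -K (uppercase) changes the setting; the change does not survive a reboot unless you add it to the interface configuration.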

Udo
 
udo said: why you use trunk (bonding) on the switch side and than not??? ... I use vlan-tagging (802.1Q) since years without trouble. ... Depends on the driver/hardware (NIC/switch) you can have a (little) performance impact or need special settings ...

Sorry, that was unclear. In my verbiage above:
1. By "bonding" I meant port trunking (using multiple network connections in parallel to increase the link speed beyond the limits of any single cable or port, aka link aggregation).
2. When I said "trunking", I meant Ethernet trunking (carrying multiple VLANs through a single network link by means of a trunking protocol).

We have used VLAN tagging for a while too, but only between switches. We've only just started to consider using trunk ports to Linux hosts, in this case to Proxmox servers. On the switch, we set the relevant port to either access mode or trunk mode, never mixed. We also prune the VLANs to allow only the specific VLANs that should be visible to the Proxmox server, which cuts down on broadcast traffic from VLANs that will never be used.
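As an illustration, the pruned trunk port described above might look like this on a Cisco-style switch (the port number and VLAN IDs are invented for the example):

```
interface GigabitEthernet0/10
 description trunk to proxmox-node1
 switchport mode trunk
 switchport trunk allowed vlan 100,200,300
 ! park the native VLAN on an unused ID so no untagged traffic
 ! mixes with the tagged VLANs (per udo's advice above)
 switchport trunk native vlan 999
```

The allowed-vlan list is what does the pruning; anything not listed never reaches the Proxmox host.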

Question: I'm not using link aggregation, so is there any harm in connecting the second ethernet device to a switch port and setting that switch port to be a trunk port, and then just not using it unless the first ethernet device fails in some way? The OSI layers involved are a little fuzzy to me, and I'm not sure whether the Linux kernel will do funky things with the VLAN-tagged traffic it sees on both ethernet ports (I'm trying to avoid any STP issues). We're looking for the quickest possible manual failover in the case of an unanticipated outage. We have had negative experiences with link aggregation paired with Ethernet trunking (though not recently), so if you have other experiences, we would like to hear them. In our experience, Ethernet link aggregation just caused more problems than it solved.
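One way to sketch the manual failover I have in mind (interface and bridge names are examples matching the config above; keeping the spare administratively down means the kernel never sees tagged traffic on it until it's needed, which should sidestep the duplicate-traffic/STP worry):

```
# Normal operation: keep the spare link down
ip link set eth1 down

# --- manual failover after eth0 dies ---
ip link set eth0 down
ip link set eth1 up

# Recreate the VLAN subinterface on the spare NIC
# (vconfig is the era-appropriate tool; newer kernels use
#  "ip link add link eth1 name eth1.100 type vlan id 100")
vconfig add eth1 100
ip link set eth1.100 up

# Repoint the bridge at the spare NIC's tagged interface
brctl addif vmbr100 eth1.100
brctl delif vmbr100 eth0.100
```

This is a handful of commands and could be wrapped in a script for speed, but it assumes the spare switch port was kept configured identically to the primary.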
 
mrballcb said: ... I'm not using link aggregation, so is there any harm in connecting the second ethernet device to a switch port and setting the switch port to be a trunk port? (and just not using it unless the first ethernet device fails in some way?) ...
Hi,
if the interface simply has no IP, there can't be any impact. But I don't see the benefit. If you have network trouble two years later, will you remember which network port has the "spare" cable? And will the switch port still be configured correctly? Normally VLANs get added over time (first for testing and then in production), and it's easy to forget to configure the spare port too.

Of course, a switch port can die, but that doesn't happen very often; wrongly configured ports are IMHO a bigger problem.

Udo
 