I've installed a test server with Proxmox VE 2.3 and all the latest updates on a Dell R910 (32 CPU cores, 512GB RAM) in order to better understand network throughput when using VMs. My first test is to establish a performance baseline by running against the Proxmox host itself, reasoning that VM performance won't be any better than what bare-metal Linux can manage. I'm using the Ixia IxChariot test suite (http://www.ixchariot.com/products/datasheets/ixchariot.html) with anywhere from 2 to 16 systems sending traffic to the Proxmox host.
After tuning the adapter as I usually do when testing Linux performance on the Intel 82599 NIC (changing TCP transmit/receive window size, enabling TCP window scaling, etc.) and setting interrupt affinity as Intel suggests for the ixgbe driver, I saw unexpectedly low network throughput (~4Gb/s) and unexpectedly high CPU utilization in the test application (70-99% on each CPU core receiving traffic, according to atop). Adding more network streams (going from 2 to 16 remote traffic sources/sinks) drops throughput as low as 1.2Gb/s.
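For concreteness, the tuning was along these lines (illustrative values and a placeholder ethX, not necessarily the exact numbers from my runs):

    # Raise socket buffer ceilings and TCP autotuning limits;
    # make sure window scaling is on
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
    sysctl -w net.ipv4.tcp_window_scaling=1

    # Stop irqbalance so it doesn't fight the manual pinning, then spread
    # the ixgbe queue interrupts across cores with the set_irq_affinity
    # script that ships in the ixgbe driver source
    service irqbalance stop
    ./set_irq_affinity.sh ethX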
My question is: how do I peel the onion here? I disabled bridging and used the ethX interface directly, with no difference. I also unloaded the kvm/kvm_intel modules in case they were interfering, but again no difference. What else can I peel away?
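Concretely, those two steps looked roughly like this (ethX, vmbr0, and the address are placeholders for my config):

    # Take the bridge out of the data path and address the NIC directly
    ifdown vmbr0
    ip addr add 192.0.2.10/24 dev ethX    # placeholder test address
    ip link set ethX up

    # Unload KVM in case it was getting in the way
    modprobe -r kvm_intel kvm
    lsmod | grep kvm    # confirm nothing KVM-related is still loaded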
I notice a number of vz* kernel modules (vzethdev, vznetdev, vzcpt, vzrst, etc.). Is there a reasonable way to strip OpenVZ support out while I test the network? What about eliminating rate limiting and the other netfilter match modules (xt_limit, xt_dscp, xt_hl, etc.)? How do I do that? Any other areas I should look at as I compare against a stock Debian/Ubuntu system?
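The best I've come up with on my own is something like the following, though I don't know if it's the sanctioned way on Proxmox (module list abbreviated; modprobe -r refuses to unload anything still in use):

    # Stop the OpenVZ service so the vz* modules are no longer busy,
    # then unload them
    service vz stop
    modprobe -r vzethdev vznetdev vzcpt vzrst

    # Keep them from loading on the next boot
    cat > /etc/modprobe.d/blacklist-vz.conf <<'EOF'
    blacklist vzethdev
    blacklist vznetdev
    blacklist vzcpt
    blacklist vzrst
    EOF

    # Flush filter rules; once nothing references them,
    # the xt_* match modules should unload
    iptables -F
    modprobe -r xt_limit xt_dscp xt_hl

Is that the right approach, or is there a cleaner knob for this?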
Dave