Looking at the commands used, they are all core iproute2 commands for bringing up the virtual function ... so it should work with any vendor that implements the right driver hooks, since you're never calling a vendor-supplied command directly. I think it would be worthwhile if it is intended to be...
Some NICs, like Mellanox, allow you to create "virtual functions" via SR-IOV (basically hardware-assisted virtual NICs). These are meant to be passed directly through to VMs instead of generating "tap" interfaces tied to a bridge.
Supposedly this sort of thing completely hardware-offloads all...
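To illustrate, this is roughly what that looks like with only sysfs and core iproute2 — just a generic sketch, not the exact commands from that thread; enp3s0f0 is a placeholder name and the sriov_numvfs path assumes the driver exposes SR-IOV:

    # create 4 virtual functions on the physical function
    echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

    # pin a MAC and a VLAN tag on VF 0, using plain ip link (no vendor tools)
    ip link set enp3s0f0 vf 0 mac 52:54:00:12:34:56
    ip link set enp3s0f0 vf 0 vlan 100

    # each VF shows up as its own PCI device, which is what gets passed through to a VM
    lspci | grep -i "virtual function"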
Restarting networking on Proxmox never works for me; I always have to reboot to test new configurations.
It's very odd to have two interfaces on the same subnet; that will probably cause strange issues. You also really shouldn't have both a Linux bridge and an OVS bridge on the same host.
@spirit Wow, that's cool that standard Linux bridges support that now. Any idea what gets written to /etc/network/interfaces for this (I never use the GUI for configuring the interfaces)? I saw Cumulus Linux supported something like this without needing OVS, but figured it might be something...
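Not authoritative, but roughly what I'd expect a VLAN-aware bridge stanza to look like in /etc/network/interfaces — vmbr0, eno1 and the VID range are placeholders, and older ifupdown may want the underscore forms (bridge_ports, etc.):

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094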
The Mellanox ConnectX-4 Lx cards worked great with the mlx5 driver.
Everyone: avoid the Intel i40e driver, even with their newer 10GbE X710 cards (i40e isn't specific to 40GbE). The older 10GbE generation that uses the ixgbe driver appears to be fine, so it looks like the i40e driver itself is borked.
Those that have it working, what network cards are you using?
In my test lab I have it running on 2x igb and 2x ixgbe ports, and it seems to work well.
However, I just set up a new cluster that uses 2x igb plus 2x of the newer Intel X710/XL710 (which use the i40e driver), and it clearly doesn't...
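If it helps anyone compare notes, a quick way to confirm which kernel driver and firmware a given port is actually using (eth0 is a placeholder):

    # reports driver name, driver version and firmware version for the port
    ethtool -i eth0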
Firewalls can't do anything to prevent DDoS. You'll still consume bandwidth from your ISP; your firewall can drop the traffic, but you'll still be billed for everything that hits it... and under a true DDoS attack the traffic will often far exceed your port speed.
The only real way to...
@manu: "the standard linux bridges have the same features as open-vswitch"
Really? So a single Linux bridge these days can carry multiple VLANs, and you can just assign a VM to one of those VLANs without needing a bridge per VLAN? When did this happen?
Also, Rapid Spanning Tree is supported on...
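For reference, the runtime equivalent with a VLAN-aware bridge looks something like this — just a sketch with stock iproute2; vmbr0, eno1, tap100i0 and the VLAN IDs are placeholder names, not taken from anyone's actual config:

    # turn on VLAN filtering for an existing bridge
    ip link set vmbr0 type bridge vlan_filtering 1

    # put a VM's tap port into VLAN 100 (untagged/PVID on the VM side)
    bridge vlan add dev tap100i0 vid 100 pvid untagged

    # allow VLAN 100 tagged on the uplink port
    bridge vlan add dev eno1 vid 100

    # inspect the per-port VLAN table
    bridge vlan show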
My guess is Proxmox prefers LTS kernels. 4.9, which was just released Monday, was supposed to be announced as LTS, but I haven't seen any confirmation of its LTS status, so that may have been pushed to 4.10.
Typically you would just close the console and restart it, and it will size the screen properly. That said, I typically use the actual IPMIView utility Supermicro provides rather than the one built into the web interface.
If that doesn't work, you'll need to pass a command line option to the boot...
Interesting ... good that you got a stack trace; when mine panicked, I didn't get one.
I did notice that the trace shows NAPI and GRO. The 4.5 release notes pretty much say the NAPI system was overhauled:
https://kernelnewbies.org/Linux_4.5#head-5558c630ad32cc1b2c85fb8ab6a4e4f5c0bb64de
It...
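If it is GRO-related, one cheap thing to try — not something I've verified actually fixes it — is disabling GRO on the affected ports and seeing whether the panic still happens (eth2 is a placeholder):

    # check the current GRO setting
    ethtool -k eth2 | grep generic-receive-offload

    # turn GRO off for testing
    ethtool -K eth2 gro off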
@gardar, I tried 4.4.30 (not .3) lowlatency ... it died hard on me. In fact, it took out one of my NICs completely; it failed hardware initialization on reboot. The NIC finally came back up on the 4.7 kernel after I unloaded the ixgbe driver and reloaded it, and it survived a reboot after that.
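By "unloading and reloading" I mean roughly the usual modprobe dance (ixgbe here because that's my driver; this assumes no extra module options are needed):

    # unload and reload the NIC driver to force re-initialization
    modprobe -r ixgbe
    modprobe ixgbe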
I have also confirmed that http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.7/linux-image-4.7.0-040700-lowlatency_4.7.0-040700.201608021801_amd64.deb resolves my issue, or at least appears to. I was able to get at least one node to lock up before they were all interconnected, and it does not occur...
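In case anyone wants to try the same mainline build, installing it is just the usual dpkg step (grab the matching linux-headers package too if you build out-of-tree modules):

    # download and install the Ubuntu mainline 4.7 lowlatency kernel image, then reboot into it
    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.7/linux-image-4.7.0-040700-lowlatency_4.7.0-040700.201608021801_amd64.deb
    dpkg -i linux-image-4.7.0-040700-lowlatency_4.7.0-040700.201608021801_amd64.deb
    reboot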
I think I've reproduced the same behavior with topology changes. Using Intel 10GBase-T NICs here. I didn't have console access due to an unrelated IPMI issue, so I couldn't see whether it was a kernel panic, but connecting or disconnecting ports can sometimes cause all networking to cease on at...
The raw read IOPS are impressive ... that said, I see higher write IOPS on my current cluster, but it's also spread across more OSDs. I need to re-evaluate that, I guess. It's possible we wouldn't see any real improvement due to Ceph overhead.