Search results

  1. Feature Request: Virtual Function NIC pass-through?

    Looking at the commands used, they are all core iproute2 commands to bring up the virtual function ... so it would work with any vendor implementing the right hooks, since you're never calling a vendor-supplied command directly (see the iproute2 sketch after this list). I think it would be worthwhile if it is intended to be...
  2. Feature Request: Virtual Function NIC pass-through?

    changed topic to more properly indicate this is a feature request...
  3. Feature Request: Virtual Function NIC pass-through?

    Some NICs like Mellanox allow you to create "virtual functions" (basically a hardware-assisted virtual NIC). These are meant to be passed directly through to VMs instead of generating "tap" interfaces tied to a bridge (a pass-through sketch follows the list). Supposedly this sort of thing completely hardware-offloads all...
  4. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    @fabian Looks like 4.9 is officially LTS now; any chance the PVE kernel will be updated to it in the near-ish future?
  5. Question about Ceph and partitioning host disks

    ceph-deploy will allow partitions, or at least it used to, as we used to deploy that way; pveceph does not (see the sketch after this list).
  6. Open Vswitch Bridge not starting.

    Restarting networking on Proxmox never works for me; I've always got to reboot to test new configurations. It's very odd to have 2 interfaces on the same subnet, and that will probably cause odd issues. You also really shouldn't have both a Linux bridge and an OVS bridge on the same host (a minimal OVS stanza follows the list).
  7. Open vSwitch updates

    @spirit Wow, that's cool that standard Linux bridges support that now. Any idea what gets written to /etc/network/interfaces for this (I never use the GUI for configuring the interfaces)? A likely stanza is sketched after this list. I saw Cumulus Linux supported something like this without needing OVS, but figured it might be something...
  8. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    The Mellanox ConnectX-4 Lx cards worked great with the mlx5 driver. Everyone: avoid Intel i40e, even with their newer 10GbE X710 cards (i40e isn't specific to 40GbE). The older 10GbE generation that used ixgbe appears to be fine, so it's the i40e driver that's borked. (A quick way to check which driver a port uses is sketched after this list.)
  9. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    Those of you that have it working: what network cards are you using? In my test lab, I have it working with 2x igb and 2x ixgbe ports and it seems to work well. However, I just set up a new cluster that uses 2x igb plus 2x of the newer Intel X710/XL710, which use the i40e driver, and it clearly doesn't...
  10. Open vSwitch updates

    I can confirm this happened to me as well; the 'needrestart' package did not fix the situation.
  11. Antiddos Firewall

    Firewalls can't do anything to prevent DDoS; you'll still consume bandwidth from your ISP. Your firewall can drop the traffic, but you'll still be billed for the traffic hitting your firewall... and often the traffic will far exceed your port speed under a true DDoS attack. The only real way to...
  12. Open vSwitch updates

    @manu: "the standard linux bridges have the same features as open-vswitch" Really? So a single linux bridge these days can support multiple vlans then just assign a vm to one of those vlans without requiring a bridge per vlan? When did this happen? Also, Rapid Spanning Tree is supported on...
  13. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    My guess is Proxmox prefers LTS kernels. 4.9, which was just released Monday, was supposed to be announced as LTS, but I haven't seen any confirmation of its LTS status, so the designation might have been pushed to 4.10.
  14. Problem installing via IPMI Supermicro

    Typically you would just close the console and restart it and it will size the screen properly. That said, I typically use the actual IPMIView utility Supermicro provides, not the one built into the web interface. If that doesn't work, you'll need to pass a command line option to the boot...
  15. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    Interesting ... good that you got a stack trace; when mine panic'd I didn't get one. I did notice that the trace shows NAPI and GRO (a quick GRO-off test is sketched after this list). In the 4.5 notes, it pretty much says the NAPI system was overhauled: https://kernelnewbies.org/Linux_4.5#head-5558c630ad32cc1b2c85fb8ab6a4e4f5c0bb64de It...
  16. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    @gardar, tried 4.4.30 (not .3) lowlatency ... it died hard on me; in fact, it took out one of my NICs completely, which then failed hardware initialization on reboot. It finally came back up on the 4.7 kernel after unloading and reloading the ixgbe driver (see the reload sketch after this list), then survived a reboot after that.
  17. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    And .... 4.4.30 does not work. First node upgraded ... and kernel panic.
  18. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    I have also confirmed http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.7/linux-image-4.7.0-040700-lowlatency_4.7.0-040700.201608021801_amd64.deb resolves my issue, or at least appears to (install steps are sketched after this list). I was able to get at least 1 node to lock up before they were all inter-connected, and it does not occur...
  19. Open vSwitch and incorrect RSTP (+ crash on topology change involving Mellanox 10GbE adapter)

    I think I've reproduced the same behavior with topology changes. Using Intel 10GBase-T NICs here. I didn't have a console due to an unrelated IPMI issue, so I couldn't see whether it was a kernel panic, but connecting or disconnecting ports can sometimes cause all networking to cease on at...
  20. NVMe support/experiences

    The raw read IOPS are impressive ... that said, I see higher write IOPS on my current cluster, but it's also spread across more OSDs. I need to re-evaluate that, I guess (a rados bench sketch follows the list). It's possible we wouldn't see any real improvement due to Ceph overhead.
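
Sketches referenced above

A minimal sketch of the kind of core iproute2 flow item 1 describes. The interface name eth0 and the VF count are placeholders, and the sysfs path requires an SR-IOV-capable NIC and driver:

    # Create 4 virtual functions on the physical function
    echo 4 > /sys/class/net/eth0/device/sriov_numvfs

    # Configure VF 0 entirely with core iproute2 -- no vendor-supplied tool involved
    ip link set eth0 vf 0 mac 52:54:00:12:34:56
    ip link set eth0 vf 0 vlan 100
    ip link set eth0 vf 0 spoofchk on
    ip link set eth0 up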
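
For item 3, handing a VF straight to a VM instead of a tap on a bridge would look roughly like this on Proxmox; VM ID 100 and the PCI address are placeholders, and IOMMU support (e.g. intel_iommu=on) must already be enabled:

    # Find the PCI address of a virtual function
    lspci | grep -i "virtual function"

    # Attach that PCI function to VM 100 as a pass-through device
    qm set 100 -hostpci0 0000:03:00.1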
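
The distinction in item 5, sketched with era-appropriate commands; the host and device names are placeholders, and ceph-deploy syntax varies by release:

    # ceph-deploy historically accepted a partition as the OSD device
    ceph-deploy osd prepare node1:/dev/sdb1

    # pveceph expects a whole disk
    pveceph createosd /dev/sdb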
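
A minimal /etc/network/interfaces OVS stanza of the kind item 6's thread deals with, assuming the openvswitch-switch ifupdown integration; names and addresses are placeholders:

    allow-vmbr0 eth0
    iface eth0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSPort

    allow-ovs vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        ovs_type OVSBridge
        ovs_ports eth0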
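
What item 7 asks about, a VLAN-aware standard Linux bridge, would land in /etc/network/interfaces roughly as follows (ifupdown2-style keywords; older ifupdown spells them with underscores):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        bridge-ports eth0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Each guest then gets its VLAN tag set on its own tap port instead of needing one bridge per VLAN.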
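
To tell which of the drivers items 8 and 9 discuss a given port is actually bound to (eth0 is a placeholder):

    # Driver, version, and firmware for one interface
    ethtool -i eth0

    # Kernel driver in use, per PCI device
    lspci -k | grep -A 3 -i ethernet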
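
On the RSTP question in item 12: the in-kernel bridge only speaks classic STP, so Rapid Spanning Tree on a Linux bridge usually means handing the bridge to user-space mstpd. A sketch, assuming mstpd is installed and vmbr0 exists:

    # With mstpd's /sbin/bridge-stp present, enabling STP hands control to mstpd
    brctl stp vmbr0 on

    # Force the bridge into Rapid Spanning Tree mode and inspect it
    mstpctl setforcevers vmbr0 rstp
    mstpctl showbridge vmbr0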
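
Since item 15's trace implicates the NAPI/GRO path, one quick way to test that theory is to switch GRO off on the suspect ports and retry the topology change (a diagnostic step, not a fix; eth0 is a placeholder):

    # Show the current offload state
    ethtool -k eth0 | grep generic-receive-offload

    # Disable GRO on this port
    ethtool -K eth0 gro off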
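
The driver reload described in item 16 amounts to the following; note it drops every interface bound to the driver, so run it from the console or IPMI rather than over the links themselves:

    # Unload and reload the Intel 10GbE driver to reinitialize the NIC
    modprobe -r ixgbe
    modprobe ixgbe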
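
Installing the Ubuntu mainline build linked in item 18 is a plain dpkg install of the .deb:

    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.7/linux-image-4.7.0-040700-lowlatency_4.7.0-040700.201608021801_amd64.deb
    dpkg -i linux-image-4.7.0-040700-lowlatency_4.7.0-040700.201608021801_amd64.deb
    reboot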
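
One way to re-evaluate the read/write IOPS split from item 20 is Ceph's own benchmark; the pool name, runtime, and concurrency are placeholders:

    # 4 KiB writes for 60 seconds with 16 concurrent ops, keeping the objects
    rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup

    # Random reads against the objects just written
    rados bench -p testpool 60 rand -t 16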
