Recent content by robertb

  1.

    corosync udp flood

    Hello everyone, we have seen a similar problem with corosync. Once a cluster member is rebooted and rejoins the cluster, the NICs (ixgbe) on the other nodes reset and shut down their links with messages like so: [4692542.687464] ixgbe 0000:43:00.0 eth2: Reset adapter [4692547.806838] ixgbe...
  2.

    Guest-agent fs-freeze command breaks the system on backup

    Hey guys, did anyone manage to find a cause for this? It has been happening to me too for a few weeks now. Some virtual machines will randomly switch to read-only mode and present exactly the same problem after rebooting. It seems to happen after taking a snapshot backup to PBS. However, after running fsck...
  3.

    30% Performance Regression after upgrading Proxmox 5.0 to 5.4

    Hello, it is a two-socket mainboard with both CPUs populated (https://www.supermicro.com/products/motherboard/Xeon/C600/X9DRi-LN4F_.cfm). We have udev rules that pin the interface names to eth0, eth1 and so on by MAC address.
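
    A minimal sketch of such a pinning rule, assuming a file like /etc/udev/rules.d/70-persistent-net.rules (the file name and the MAC addresses are placeholders):

        # /etc/udev/rules.d/70-persistent-net.rules (example path, placeholder MACs)
        # Pin the NIC with this MAC address to the name eth0
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"
        # Repeat with the next MAC to get eth1, and so on
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="eth1"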
  4.

    30% Performance Regression after upgrading Proxmox 5.0 to 5.4

    Hi, no, there is no special setup involved; it's a regular (VLAN-aware) Linux bridge that the virtual machines are connected to. Could it be related to some of the CPU bug fixes that happened in the recent kernel changes? Maybe v2 is somehow affected differently than v3, just a guess...
  5.

    30% Performance Regression after upgrading Proxmox 5.0 to 5.4

    Hello, in our case the setup is: 2x Xeon E5-2690v2 per server on a Supermicro mainboard, with two onboard NICs (Intel i350) connected via LACP to the switches. All machines have the same specs. However, the freshly booted ones with the newer kernel struggle massively. There are sometimes...
  6.

    30% Performance Regression after upgrading Proxmox 5.0 to 5.4

    Hello, I am experiencing these issues too: packet processing performance has dropped significantly. Newer kernel versions seem to be affected somehow; the same workload is just fine on a 4.15.18-3-pve #1 SMP PVE 4.15.18-22 machine. However, 4.15.18-13-pve #1 SMP PVE 4.15.18-37 causes massive...
  7.

    Mellanox Drivers for PVE 5.4

    update your initramfs: update-initramfs -k all -u -v
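
    As a rough sketch, including a check that the Mellanox module actually ended up in the rebuilt initramfs (mlx5_core and the initrd path are assumptions, adjust for your card and kernel):

        # Rebuild the initramfs for all installed kernels, verbosely
        update-initramfs -k all -u -v
        # List the new initramfs contents and look for the Mellanox module
        lsinitramfs /boot/initrd.img-$(uname -r) | grep mlx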
  8.

    4.15 based test kernel for PVE 5.x available

    Hello, I also have the same problem with 2x Xeon E5-2680v2 on a Supermicro X9DRW-iF, latest BIOS and PVE. It crashes with the same message; additionally, one of the 10G links (X520-DA) has been flapping since then. However, removing the intel-microcode package seems to have stabilized it a bit.
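
    For anyone trying the same workaround, a rough sketch of removing the microcode package (whether dropping microcode updates is acceptable depends on your security requirements):

        # Remove the early-loading Intel microcode package
        apt remove intel-microcode
        # Rebuild the initramfs so the microcode is no longer applied at boot
        update-initramfs -u -k all
        # A reboot is required for the change to take effect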
  9.

    Jumbo Frames not work after upgrade pve-kernel to 4.13.8-2

    Good morning, I just confirmed that by manually compiling and inserting igb driver 5.3.5.15, the queues become functional again. In case someone relies on this: igb driver modules can be replaced live with minimal impact (a short packet loss), no reboot required.
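
    A rough sketch of swapping the module live, assuming the newly built igb.ko has already been installed for the running kernel (expect a brief interruption on all igb ports):

        # Unload the currently loaded igb module (igb links go down briefly)
        rmmod igb
        # Load the freshly installed replacement module
        modprobe igb
        # Confirm which driver version the interface is now using (eth0 is a placeholder)
        ethtool -i eth0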
  10.

    Jumbo Frames not work after upgrade pve-kernel to 4.13.8-2

    Hello, sorry for bringing up this old topic. I just ran across the same Intel i350 issue: MSI-X stopped working and the driver falls back to MSI, which also doesn't allow configuring more than one queue. PVE 4.13.13-36, driver: igb, version: 5.3.5.10
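
    To check whether an adapter is stuck on MSI and limited to a single queue, something along these lines can help (the PCI address and eth0 are placeholders):

        # Look for "MSI: Enable" vs. "MSI-X: Enable" in the device capabilities
        lspci -vv -s 0000:03:00.0 | grep -i msi
        # Show how many combined queues/channels the driver currently exposes
        ethtool -l eth0
        # Interrupt lines actually in use by the interface
        grep eth0 /proc/interrupts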
  11.

    Network Hotplug - Error "What is ":1"?"

    Thanks for updating the thread.
  12.

    Network Hotplug - Error "What is ":1"?"

    Hello, thanks for the suggestion. apt install iproute2=4.10.0-1 solved it for me too. Best regards
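
    If the downgrade works for you as well, it might make sense to keep apt from upgrading the package again until a fixed version is out (a sketch, not an official recommendation):

        # Downgrade to the known-good version
        apt install iproute2=4.10.0-1
        # Keep apt from pulling the broken version back in on the next upgrade
        apt-mark hold iproute2
        # Later, once a fixed version is available:
        # apt-mark unhold iproute2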
  13.

    Upgrade Path from 4.4 to 5.1

    I just did the same procedure. Watch out for network interfaces being auto-renamed. The upgrade process failed once on every node; a simple apt-get install -f fixed it. However, since the upgrade I have recently been getting a "501 too many open files" error from the web UI (no LXC containers in use). This might...
  14.

    VM Boot failure: RAM Balloon driver cant get pages

    Yeah, I'd assume that the RAM in use is quite standard server RAM (Kingston, Samsung). I don't think there is any bandwidth limit. I could imagine some condition where pvestatd tries to put pressure on the guests to reduce memory using the balloon driver before they are completely...
  15.

    VM Boot failure: RAM Balloon driver cant get pages

    Hello, the hypervisors' memory ranges from 96 up to 256 GB of DDR3 registered ECC RAM, depending on the cluster node. All systems are dual Xeon E5 servers on Supermicro mainboards. Yes, all VMs have ballooning configured. Sometimes this happens on hosts with only one virtual machine at that point of...
