Search results

  1. xcdr

    Jumbo Frames not working after upgrading pve-kernel to 4.13.8-2

    ixgbe (2 x 10Gbit) works for me too (without bonding); I only have the problem on igb (4 x 1Gbit and 2 x 1Gbit) NICs. ethtool info on both NICs shows: firmware-version: 3.19, 0x00013bdb expansion-rom-version: bus-info: 0000:05:00.0 supports-statistics: yes supports-test: yes supports-eeprom-access...
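
    Driver and firmware details like those quoted can be read with ethtool (a sketch; the interface name is a placeholder):

      # Show driver, firmware version and PCI bus info for a NIC
      ethtool -i enp5s0f0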
  2. xcdr

    Jumbo Frames not working after upgrading pve-kernel to 4.13.8-2

    This might be a bug in the igb driver: version: 5.3.5.12 srcversion: D735972E4CD19103A136184 vermagic: 4.13.8-2-pve SMP mod_unload modversions. The working one: version: 5.3.5.10 srcversion: B708B1E560AB2E8BECA698C vermagic: 4.13.4-1-pve SMP mod_unload modversions
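
    Module details like the ones quoted can be compared with modinfo (a sketch; only the module name comes from the post):

      # Version of the igb driver shipped with the running kernel
      modinfo -F version igb
      # Kernel the module was built against
      modinfo -F vermagic igb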
  3. xcdr

    Jumbo Frames not working after upgrading pve-kernel to 4.13.8-2

    Hi. On my hardware, CEPH fails after upgrading pve-kernel from 4.13.4-1 to 4.13.8-2 because MTU 9000 stops working. "ip link" shows the proper MTU, but "ping -M do" with packets larger than 1500 bytes doesn't work; after rolling back the kernel version, all works fine! I have bonded interfaces on Intel 82580 cards...
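
    A minimal check along the lines described (interface name and target address are placeholders; 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers fills a 9000-byte MTU exactly):

      # Verify the interface reports MTU 9000
      ip link show bond0
      # Send non-fragmentable packets that only fit through a jumbo-frame path
      ping -M do -s 8972 -c 3 10.0.0.2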
  4. xcdr

    temporarily disable out-of-the-box self-fencing

    Yes, the config is done. The watchdog appears to hang randomly, and it's not a new cluster; I want to try iTCO in the next service window.
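
    (For reference, the HA stack's watchdog module is, as far as I know, selected in /etc/default/pve-ha-manager; the module name below is the iTCO driver mentioned above, and the exact setting is an assumption:)

      # Assumed entry in /etc/default/pve-ha-manager to use the Intel TCO watchdog
      WATCHDOG_MODULE=iTCO_wdt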
  5. xcdr

    temporarily disable out-of-the-box self-fencing

    I always migrate all VMs to another node before an upgrade, so the HA manager has nothing to do and the load is nearly 0. Unfortunately, by default Proxmox/Debian keeps no persistent log via journalctl, so I have no logs from before the current boot (I have now turned it on according to the Debian manual). Last log on...
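
    Persistent journald logging on Debian can be enabled roughly as follows (a sketch of the standard systemd method; run as root):

      # Create the persistent journal directory; journald uses it after a restart
      mkdir -p /var/log/journal
      # Fix ownership and ACLs on the new directory
      systemd-tmpfiles --create --prefix /var/log/journal
      systemctl restart systemd-journald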
  6. xcdr

    temporarily disable out-of-the-box self-fencing

    I have exactly the same problem on two different clusters: one of them has separate cluster network interfaces and a fast system drive; the second has shared cluster network interfaces (but usage is very low) and a slow SATA DOM - it fences during dist-upgrade. I also found another strange problem on...
  7. xcdr

    Cluster going down randomly

    I have a similar problem: nodes sometimes go down/restart, with many corosync retransmits. At first, all nodes on one cluster went down at once on Saturday, and today another cluster went down when I restarted one of its nodes. The corosync retransmit problem is confirmed at four different installations with PVE...