Recent content by fhloston

  1. Random 6.8.4-2-pve kernel crashes

    Logs are in this post further back. No, not yet.
  2. Random 6.8.4-2-pve kernel crashes

    Yes, we use encryption on our OSD and I remember seeing storage/dm-crypt related traces!
  3. Random 6.8.4-2-pve kernel crashes

    It only crashes on the 3 Ceph nodes with additional NVMe devices and Mellanox Connect-X3; the compute nodes do not crash. Same boards, same CPU generation, all X10DRU-i+ and E5-26xx v4.
  4. Random 6.8.4-2-pve kernel crashes

    Actually not true. It still crashes here with that. Supermicro X10DRU-i+ and Intel E5-26xx v4.
  5. Random 6.8.4-2-pve kernel crashes

    My X540-AT2-based X10DRU-i+ boards run fine when they have no NVMe storage, no Connect-X3, and no ceph-osd. So I do not see the Intel NIC as the culprit.
  6. Problems after the kernel update to 6.8.4-2-pve

    I have 11 boxes here with Supermicro X10DRU-i+ and BIOS 3.5. The CPUs are E5-2620 v4 in the 3 Ceph nodes, and E5-2667 v4 and E5-2683 v4 in the compute nodes. The Ceph nodes have NVMe storage and Connect-X3 cards, the compute nodes only the onboard X540-AT2. 6.8.4 works on the compute nodes without...
  7. Freezes with Proxmox 8.2

    Try kernel 6.5.13.
  8. Random 6.8.4-2-pve kernel crashes

    Still crashes with intel_iommu=off and BIOS updated to 3.5. Back to 6.5.13 for now.
  9. Random 6.8.4-2-pve kernel crashes

    Also crashes here with 6.8.4 and not with 6.5.13: Supermicro X10DRU-i+, BIOS 3.4, E5-2620 v4
    01:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
    01:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
    88:00.0...
  10. pverados segfault

    Same here, sometimes the opcode bytes are listed though:
    [Mon Jul 17 05:04:28 2023] pverados[740828]: segfault at 55a4a3d5d030 ip 000055a4a3d5d030 sp 00007ffecd408178 error 14 in perl[55a4a3d31000+195000] likely on CPU 1 (core 2, socket 0)
    [Mon Jul 17 05:04:28 2023] Code: Unable to access...
  11. VM hangs on reboot

    I am seeing the same issue here after upgrading OPNsense to 22.1.
  12. [SOLVED] Issues with SRIOV-based NIC-passthrough to firewall

    Could you please elaborate a bit? How do you configure the VLAN filter? I am trying something similar but with Intel X553, ixgbe (linux/proxmox) and iavf (opnsense). So far I have not succeeded in getting a trunk port to work in opnsense. The multicast issue for carp can be solved with...
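    As a sketch of what I am attempting (the interface name eth0, VF index, and VLAN ID are placeholders, not taken from the thread):

    ```shell
    # Create two VFs on the ixgbe PF (hypothetical interface name eth0)
    echo 2 > /sys/class/net/eth0/device/sriov_numvfs

    # Pin VF 0 to a single VLAN (port VLAN). The PF tags/untags in hardware,
    # so the guest sees untagged frames only - this is not a trunk port.
    ip link set dev eth0 vf 0 vlan 100

    # Mark the VF as trusted so modes like multicast promiscuity are allowed,
    # which is relevant for CARP traffic
    ip link set dev eth0 vf 0 trust on
    ```

    These commands require root and real SR-IOV hardware; the open question is how to pass multiple tagged VLANs through to the iavf side instead of a single port VLAN.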
  13. proxmox 7.0 sdn beta test

    Actually, to my understanding you don't. You can run all VXLAN IDs in the same multicast group; it is counterproductive to spawn multicast groups per VXLAN. The default for Linux is a maximum of 20 IGMP memberships: net.ipv4.igmp_max_memberships = 20. Yes, I had a look at BGP-EVPN. Currently I am...
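    The IGMP membership cap mentioned above can be inspected and raised like this (a sketch; the value 64 is arbitrary):

    ```shell
    # Show the current per-socket IGMP membership limit (Linux default: 20)
    sysctl net.ipv4.igmp_max_memberships

    # Raise it for the running system
    sysctl -w net.ipv4.igmp_max_memberships=64

    # Persist the setting across reboots
    echo 'net.ipv4.igmp_max_memberships = 64' > /etc/sysctl.d/90-igmp.conf
    ```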
  14. proxmox 7.0 sdn beta test

    I have some thoughts regarding the SDN VXLAN implementation. The VXLAN interface is configured with vxlan_remoteip <peerips>. To my understanding this means that BUM traffic is replicated from one node to all others. This has implications for scaling. Another approach would be to use a...
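    The contrast between the two replication approaches can be sketched with plain iproute2 (interface names, VNIs, peer addresses, and the multicast group are illustrative, not from the SDN code):

    ```shell
    # Head-end replication, as the SDN vxlan_remoteip approach works:
    # every BUM frame is unicast separately to each listed peer
    ip link add vxlan42 type vxlan id 42 dstport 4789 dev eth0
    bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 192.0.2.11
    bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 192.0.2.12

    # Multicast mode: BUM traffic is sent once to a multicast group and
    # the network replicates it, which scales better with many nodes
    ip link add vxlan43 type vxlan id 43 dstport 4789 group 239.1.1.1 dev eth0
    ```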
  15. proxmox 7.0 sdn beta test

    Did you actually apply that config?
    pvesh set /cluster/sdn
    ifreload -a
