Recent content by sirebral

  1. [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Here was my ultimate fix, in case anyone else runs into this: in the Linux terminal, after the first failed update, run sudo ./nvmupdate64e -u -c nvmupdate.cfg -sv. Reboot, and you're good to go; everything matches properly now.
  2. [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Same issue here, any luck getting the second update to work?
  3. [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Thank you. I tried all the links and none of them worked; I really appreciate the uploads, and the Mega link should stick around.
  4. [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Does anyone still have these? I can't find them anywhere, and all the links here are dead. Thanks!
  5. NVIDIA vGPU Software 18 Support for Proxmox VE

    Such a bummer on vGPU: buy an enterprise card and then pay us to use it. I really despise their licensing model. Also, this splits the GPU, so you aren't really sharing the load; unless things have changed, you're assigning VRAM to specific machines, so in this case, LXC is the better solution...
  6. VM Fails to Start with vGPU - vfio-pci Input/Output Error

    Pinning kernel 6.8.12-11-pve seems to fix my passthrough of an X550 NIC; however, others are reporting that pinning does not work for some devices. -12 has a passthrough regression.
  7. Display problems in Chromium Based Browsers

    Hey all, For the past year or so I've noticed that the terminal in my LXCs renders strangely in Chromium-based browsers. I've tried a few, and the display is inconsistent at best. While I know I could just use SSH, I prefer the browser's convenience. Sometimes, for example, if I do an ls it will start...
  8. Help Needed: Networking Issue with Linux Bridges on Cluster, hard down.

    Bump, does anyone have any ideas? I have a Reddit thread as well and am getting a few responses, but nothing solid as of yet. I'm really hurting being hard down. https://www.reddit.com/r/Proxmox/comments/1cp66xh/help_needed_networking_issue_with_linux_bridges/ Thanks!
  9. Help Needed: Networking Issue with Linux Bridges on Cluster, hard down.

    This is probably a clue as to the problem, yet I don't know how to resolve it. The links are showing up with ip link show.

    Address       HWtype  HWaddress     Flags Mask  Iface
    10.20.40.95           (incomplete)              enp94s0d1...
  10. Help Needed: Networking Issue with Linux Bridges on Cluster, hard down.

    Hi All, I'm seeking assistance with an odd networking issue on version 8.2 (non-production repos). We are currently operating on kernel 6.5, despite the availability of kernel 6.8, due to specific compatibility and stability requirements. Our setup involves using Linux bridges for LAN and...
  11. Random freezes, maybe ZFS related

    I am also having issues with the Xeon Scalable Gen 1. I don't have any good traces at this time, but when mine dies, the server doesn't freeze. It seems like the NICs lose carrier (the Intel 500 Series). Then my whole dashboard just shows blank machines. Back on the 6.5 kernel, and the...
  12. Proxmox freeze after kernel update to 6.8.4-2-pve

    It's not only Hetzner; I run my own boxes, and they're newer than what you guys are running. They are Supermicro, yet they also have issues.
  13. Proxmox freeze after kernel update to 6.8.4-2-pve

    I am not at Hetzner; I am running at my own datacenter with the same issues.
  14. Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    Thanks; I'm hoping it's just out-of-tree drivers. These boxes have run for 2 years; I've messed with them quite a bit, and they need a refresh. I'm adding a cluster node and converting from ZFS to Ceph, so a full overhaul. I don't use the onboard NICs; rather, I have Intel 500 Series in all...
  15. Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    Same here, running a 2-node cluster on Supermicro. Both boxes have Intel Gen 1 Scalable processors. I may just hold off, as I'm about to rebuild the cluster regardless. However, I can't presently run on the 6.8 branch for more than a few hours. I have yet to isolate the errors, yet quite a few...