Search results

  1.

    [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Here was my ultimate fix, if anyone runs into this: in the Linux terminal, after the first failed update, run `sudo ./nvmupdate64e -u -c nvmupdate.cfg -sv`. Reboot, and good to go; everything matches properly now.
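    For anyone following along, a rough sketch of the full sequence around that fix. Only `-u -c nvmupdate.cfg -sv` comes from the post; the other steps and the `-i` inventory flag are assumptions based on the stock Intel NVM update package, so adapt to your bundle.

    ```shell
    # Run from the unpacked Intel NVM update package directory.
    chmod +x nvmupdate64e
    sudo ./nvmupdate64e -i                       # inventory mode: list NICs and current NVM versions
    sudo ./nvmupdate64e -u -c nvmupdate.cfg -sv  # update using the bundled config (the fix above)
    sudo reboot                                  # versions should match after reboot
    ```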
  2.

    [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Same issue here, any luck getting the second update to work?
  3.

    [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Thank you, I tried all the links and none of them worked; really appreciate the uploads, and Mega should stick.
  4.

    [SOLVED] Supermicro X11DPi-NT NIC X722 Firmware NVM Update

    Anyone still have these? Can't find them anywhere, and all the links here are dead. Thanks!
  5.

    NVIDIA vGPU Software 18 Support for Proxmox VE

    Such a bummer on vGPU: buy an enterprise card and then pay us to use it. I really despise their licensing model. Also, this splits the GPU, so you aren't really sharing the load; unless things have changed, you're giving VRAM to specific machines, so in this case, LXC is the better solution...
  6.

    VM Fails to Start with vGPU - vfio-pci Input/Output Error

    Pinning 6.8.12-11-pve seems to fix my passthrough of an X550 NIC; however, others are reporting that pinning is not working for some devices. -12 has a passthrough regression.
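    For reference, kernel pinning on Proxmox is done with `proxmox-boot-tool`; a minimal sketch, assuming the version from the post above:

    ```shell
    # Pin the known-good kernel so upgrades don't change the default boot entry.
    proxmox-boot-tool kernel list               # show installed kernels
    proxmox-boot-tool kernel pin 6.8.12-11-pve  # boot this version by default
    # ...reboot and re-test passthrough...
    # To go back to booting the newest installed kernel:
    proxmox-boot-tool kernel unpin
    ```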
  7.

    Display problems in Chromium Based Browsers

    Hey all, for the past year or so I've noticed that the terminal in my LXCs is strange in Chromium-based browsers. I've tried a few, and the display is inconsistent at best. While I know I could just use SSH, I prefer the browser's convenience. Sometimes, for example, if I do an ls it will start...
  8.

    Help Needed: Networking Issue with Linux Bridges on Cluster, hard down.

    Bump, anyone have any idea? I have a reddit thread as well, and am getting a few responses, yet nothing solid as of yet. I'm really hurting being hard down. https://www.reddit.com/r/Proxmox/comments/1cp66xh/help_needed_networking_issue_with_linux_bridges/ Thanks!
  9.

    Help Needed: Networking Issue with Linux Bridges on Cluster, hard down.

    This is probably a clue as to the problem, yet I don't know how to resolve it. The links are showing up with `ip link show`, but the ARP table has incomplete entries: `Address HWtype HWaddress Flags Mask Iface 10.20.40.95 (incomplete) enp94s0d1`...
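    A few standard commands that can help narrow down `(incomplete)` ARP entries like the one quoted; the interface name `enp94s0d1` is taken from the post, substitute your own.

    ```shell
    ip neigh show dev enp94s0d1    # kernel neighbor table; INCOMPLETE/FAILED = no ARP reply
    ip -s link show enp94s0d1      # carrier state plus RX/TX and error counters
    bridge link show               # confirm the port is attached to the expected bridge
    tcpdump -ni enp94s0d1 arp      # check whether ARP requests go out and replies come back
    ```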
  10.

    Help Needed: Networking Issue with Linux Bridges on Cluster, hard down.

    Hi all, I'm seeking assistance with an odd networking issue on version 8.2 (non-production repos). We are currently operating on kernel 6.5, despite the availability of kernel 6.8, due to specific compatibility and stability requirements. Our setup involves using Linux bridges for LAN and...
  11.

    Random freezes, maybe ZFS related

    I am also having issues with the Xeon Scalable Gen 1. I don't have any good traces at this time, yet when mine dies, the server doesn't freeze. It seems like the NICs lose carrier (the Intel 500 Series). Then my whole dashboard just shows blank machines. Back on the 6.5 kernel, and the...
  12.

    Proxmox freeze after kernel update to 6.8.4-2-pve

    It's not only Hetzner; I run my own boxes, and they're newer than what you guys are running. They are Supermicro, yet they also have issues.
  13.

    Proxmox freeze after kernel update to 6.8.4-2-pve

    I am not at Hetzner; I am running at my own datacenter with the same issues.
  14.

    Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    Thanks; I'm hoping it's just out-of-branch drivers. These boxes have run for 2 years; I've messed with them quite a bit, and they need a refresh. I'm adding a cluster node and converting from ZFS to Ceph, so a full overhaul. I don't use the onboard NICs; rather, I have Intel 500 series in all...
  15.

    Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    Same here, running a 2-node cluster on Supermicro. Boxes are both Intel Gen 1 Scalable processors. I may just hold off, as I'm about to rebuild the cluster regardless. However, I can't presently run on the 6.8 branch for more than a few hours. I have yet to isolate the errors, but there are quite a few...
  16.

    HA, can I have some migrate?

    If there is no remediation, it would be great if this feature could be considered for the roadmap, as it seems like it would be quite useful. For now, I've had to remove the less critical clients from HA, but this isn't a perfect solution since that means they will fail to come back up on the...
  17.

    Persistent HBA Error Logs on Two Nodes - Seeking Assistance

    Hello everyone, I'm encountering an issue with two of our nodes that both use the same Host Bus Adapter (HBA). Recently, they have begun to consistently log error messages, which are cluttering the journal. Here are examples of the repeated error messages: `Apr 04 02:28:05 pve kernel`...
  18.

    HA, can I have some migrate?

    Hey everyone, I'm in the midst of setting up my HA (High Availability) configuration and aiming to get it just right for my needs. I've got a couple of key guests that need to fail over automatically in case of issues, while there are others that aren't as crucial, which I'd prefer to shut down...
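    A hedged sketch of that split using `ha-manager` (the VMIDs are placeholders): critical guests are added as HA resources, while less critical ones are simply left out of HA, so they follow normal onboot behavior instead of HA recovery.

    ```shell
    # Critical guests: HA-managed, restarted/relocated automatically on failure.
    ha-manager add vm:101 --state started --max_restart 2 --max_relocate 2
    ha-manager add vm:102 --state started
    # Non-critical guests are not added to HA at all.
    ha-manager status   # verify resource states
    ```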
  19.

    docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /var/log/apt: invalid argument.

    I think this is supposed to work now. Interestingly enough, I still get some complaints about unsupported configs if I use overlay2; it doesn't SEEM to cause issues, it just makes me curious as to why it's still throwing errors.
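    If anyone wants to check what they're actually running, a small sketch (assumes Docker inside the container, with the stock config paths):

    ```shell
    docker info --format '{{.Driver}}'   # prints the storage driver in use, e.g. overlay2
    # To pin overlay2 explicitly:
    cat >/etc/docker/daemon.json <<'EOF'
    { "storage-driver": "overlay2" }
    EOF
    systemctl restart docker
    ```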
  20.

    Hardware IDs Changing, Causing LXCs to Have the Wrong Passthrough Configuration

    Hi all, I've noticed that every time I upgrade my kernel to the latest production PVE version, the IDs on my NVIDIA cards change. I have to manually alter the configuration file for the LXCs that use video cards every time, after an `ls /dev/nvidia*` and `ls /dev/dri*`, to match the changes. Is there a...
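    One common workaround sketch (hedged: the device majors and node names below are the usual NVIDIA/DRM values, not taken from the post, so verify them against your own `ls` output): allow the whole character major instead of a specific minor, so renumbering across kernel upgrades doesn't break the config.

    ```shell
    ls -l /dev/nvidia* /dev/dri/*   # note the "c MAJOR, MINOR" column
    # Example /etc/pve/lxc/<vmid>.conf entries (195 = NVIDIA char major,
    # 226 = DRM char major on most systems):
    #   lxc.cgroup2.devices.allow: c 195:* rwm
    #   lxc.cgroup2.devices.allow: c 226:* rwm
    #   lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
    #   lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
    ```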