Search results

  1. proxmox on arm64

    https://www.ipi.wiki/products/ampere-altra-developer-platform?variant=42970872086690 https://www.ipi.wiki/products/com-hpc-ampere-altra https://buy.hpe.com/us/en/compute/proliant-rl-servers/proliant-rl300-servers/proliant-rl300-server/hpe-proliant-rl300-gen11/p/1014682063 How prepared are you? :)
  2. Disk smart status no longer working

    Following. Same issue, same kind of system (HP DL380p, P420i in HBA mode). Works with "-d", errors out without it.
  3. Failed to destroy vGPU device.

    Thanks @dcsapak. That's the "if" condition I was looking for. I merged your code into my instance and I'll report back if anything breaks, although I don't see why it would. Thanks again! --- For the record, here is what the function now looks like on my end: sub cleanup_pci_devices { my...
  4. Failed to destroy vGPU device.

    @aslopez_irontec You mention "sleep(5)" in your comment but show "sleep(15)" in the code; did you go extra-cautious with 15, or is it a typo in your comment? Thanks for sharing your results!
  5. Failed to destroy vGPU device.

    I noticed the same on my end. Rebooting the VM does indeed clear the error. I was puzzled for a bit when cloning VMs, trying to figure out why they would all BSOD, until I realized that stopping and restarting them solved the issue. Back on the original topic, though, I'm still not sure what's the best path...
  6. Failed to destroy vGPU device.

    https://www.youtube.com/watch?v=_36yNWw_07g&t=6s I'm wondering if we could simply add an "if" to check whether the device is an NVIDIA card and the linked VM is already stopped: if so, execute the code we commented out; otherwise, skip that bit. Would that be a good alternative to dealing with NVIDIA?
  7. Failed to destroy vGPU device.

    Hopefully. It's a fresh install on my end as well. I'm in no way, shape, or form associated with the Proxmox team. Maybe @dcsapak can enlighten us with a suggestion for a temporary "if" condition we could "safely" add to our systems for the time being. I know it would be a temporary fix, but as-is, the...
  8. Failed to destroy vGPU device.

    If I'm getting this right, adding this to your system would cause other PCI devices to not clear properly. I believe we should instead add an "if" condition to skip the two lines if the device is our GPU. I'm not sure which variable to check, though.
  9. Failed to destroy vGPU device.

    Following closely. Having the same issue on an HP DL380p - Xeon E-2640 - Tesla P4 - NVIDIA GRID 525.85.07 (same version as OP). It looks like some profiles trigger fewer issues (A69, which gave 4 GB per VM, was pretty stable, while A67 = 1 GB per VM crashes most of the time on VM shutdown).
  10. [TUTORIAL] Compile Proxmox VE with patched intel-iommu driver to remove RMRR check

    Thanks for the highlight! I'm a bit behind on this aspect. I thought/hoped we could load the module (as opposed to recompiling the kernel to have it baked in). I haven't fiddled with kernel recompilation since the early 2000s, when I had so much time on my hands that I would bootstrap Gentoo on a weekly...
  11. [TUTORIAL] Compile Proxmox VE with patched intel-iommu driver to remove RMRR check

    @killer129 Thanks for your work. After hours of head-bashing around obscure forum threads, I finally stumbled upon a post that led me to your GitHub. A reboot later, I was able to successfully pass my PCI HBA SAS card (LSI) through to a TrueNAS VM in Proxmox. Now I wonder: what does it take for a path...
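The "-d" flag mentioned in the SMART thread above (result 2) refers to smartctl's device-type option: on HP Smart Array controllers such as the P420i, smartctl often cannot autodetect the controller, so the type has to be spelled out. A minimal sketch, assuming smartmontools is installed; the device path and drive index are placeholder examples, not values taken from the thread:

```shell
# Query SMART data for the first physical drive behind an HP Smart Array
# controller (cciss device type); /dev/sda and index 0 are placeholders.
smartctl -a -d cciss,0 /dev/sda

# In true HBA mode the disks may instead show up as plain SCSI devices,
# in which case a generic type hint can be enough:
smartctl -a -d scsi /dev/sda
```

Without the `-d` option, smartctl falls back to autodetection, which is what appears to error out on this controller.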
