I stumbled upon this thread a few weeks ago while running into similar issues. Figured I'd share with you both that I've had some success using NVIDIA's Mellanox repo and version 23.04 of the MLNX_OFED drivers with PVE 7.4 and kernel 5.15.107-2. DKMS builds perfectly for me - a first...
Set up a Proxmox Backup Server today and I'm trying to do an ACME registration against a PowerDNS server.
I'm getting an error back from PowerDNS on certificate creation, saying the domain isn't valid:
[Fri Dec 23 10:40:22 PST 2022] Please refer to https://curl.haxx.se/libcurl/c/libcurl-errors.html...
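For anyone hitting the same wall: one way to check the PowerDNS API credentials independently of PBS is acme.sh's `dns_pdns` hook, which talks to the same API. A rough sketch - the URL, server id, and token below are placeholders, not values from this thread:

```shell
# Placeholder values; substitute your own PowerDNS API endpoint and key.
export PDNS_Url="http://ns1.example.com:8081"
export PDNS_ServerId="localhost"
export PDNS_Token="0123456789abcdef"
export PDNS_Ttl=60

# Then issue via the DNS-01 challenge:
# acme.sh --issue --dns dns_pdns -d pbs.example.com
```

If acme.sh can complete the DNS-01 challenge with these variables, the API side is fine and the problem is in the PBS plugin configuration.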
As the title asks, was the vfio_mdev kernel module removed in the 5.15 release?
Just did an update on a host that used it, and it appears to be missing from the modules directory for the new kernel. If this was intentional, what's the easiest way to get it back? Not familiar with...
To all who might find it useful: I had a PVE use case with two Smart Array controllers, each with disks attached. A quick modification to @joanandk's script - forcing the SA_ID output into an array and iterating through it - appeared to fix that issue for me.
#!/bin/bash
#...
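The modification described above (capturing the SA_ID output as an array and looping over it) can be sketched roughly like this; `list_controller_ids` is a stand-in for whatever command the script actually uses to enumerate the controllers:

```shell
#!/bin/bash
# Sketch only: list_controller_ids is a placeholder for the real invocation
# that prints one Smart Array controller slot number per line.
list_controller_ids() {
  printf '%s\n' 1 3   # pretend controllers were found in slots 1 and 3
}

# Force the output into an array, one element per line...
readarray -t SA_IDS < <(list_controller_ids)

# ...then iterate, handling each controller separately.
for SA_ID in "${SA_IDS[@]}"; do
  echo "processing Smart Array controller in slot ${SA_ID}"
done
```

With a single controller the original script behaved the same; the array just makes the multi-controller case fall out of the loop naturally.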
Thanks for the reply, @dcsapak. Appreciate knowing where things stand. My understanding is that the NVIDIA cards, at least the enterprise ones, do support some form of mediated-device live migration. Both VMware and XenServer carry some form of live migration support, though certainly that...
Apologies for the delay here, I missed this one.
I had to make two changes here; the first was to modify /etc/default/grub:
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rl-swap rd.lvm.lv=rl/root rd.lvm.lv=rl/swap pci=realloc rd.driver.blacklist=nouveau"
The second was, as described...
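For completeness, the kernel command line above only takes effect after the GRUB config is regenerated. On a RHEL-family install (the `rl/root` and `rl-swap` LVM names suggest Rocky Linux) that is roughly the following; exact paths vary by distro and firmware type:

```shell
# BIOS boot:
grub2-mkconfig -o /boot/grub2/grub.cfg
# or UEFI boot:
grub2-mkconfig -o /boot/efi/EFI/rocky/grub.cfg
# ...then reboot so the new cmdline is active.
```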
Long time Proxmox user, but first time graphics card virtualizer.
I've got a couple of NVIDIA V100S 32G cards split between two servers. The NVIDIA drivers are installed on the host per the instructions, and the vGPU itself works fine in the VM with the GRID drivers. Licensing also works, as expected...
I registered here specifically to give future users a resolution to a problem that escaped my initial searches.
The issue when passing multiple GPUs through to a Q35 KVM machine with PCI passthrough using OVMF appears to be a lack of addressable PCI memory space...
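Assuming the shortage really is in the 64-bit MMIO window, the usual workaround is to enlarge OVMF's aperture via fw_cfg. In a Proxmox VM config that looks roughly like the sketch below; the 65536 MiB (64 GiB) value is only an example, and should be sized to cover the BARs of all passed-through GPUs:

```
# /etc/pve/qemu-server/<vmid>.conf  (sketch; <vmid> is your VM's ID)
# OVMF reads opt/ovmf/X-PciMmio64Mb to size its 64-bit PCI MMIO window, in MiB.
args: -fw_cfg opt/ovmf/X-PciMmio64Mb,string=65536
```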