Having been playing around with the Proxmox API for a bit now, I've been working on a way to programmatically manage cluster-wide Security Groups and rules. From the API documentation, it appears that a positional argument is accepted as part of a POST request to this endpoint...
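For anyone following along, the shape of the call I mean is roughly this (the /cluster/firewall/groups path, the group name and the rule parameters are my own examples here, not taken from the docs excerpt above):
# create a cluster-wide Security Group via pvesh (a POST on the API side)
pvesh create /cluster/firewall/groups --group webservers --comment "managed via API"
# add a rule to that group; the rule parameters are only illustrative
pvesh create /cluster/firewall/groups/webservers --type in --action ACCEPT --proto tcp --dport 443 --enable 1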
Sure thing.
/etc/network/interfaces:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives...
Recently upgraded a Proxmox cluster to 8.2.7 and am seeing some odd behaviour with the firewall even when all traffic is allowed. When the firewall is on, pings to the firewalled VM stop getting responses after 30 to 60 seconds. Disabling the firewall allows pings to flow again, as does a live...
Hey Shanreich,
This works well on machines that are started/booted, but doesn't (for obvious reasons) trigger on an offline migration, for instance.
When building a system that integrates with the Proxmox API to gather status details on the VMs themselves, maintaining the Proxmox node...
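For reference, the kind of status polling I mean looks roughly like this (node name, VMID and the API token are placeholders):
pvesh get /nodes/pve01/qemu/100/status/current
# same call over HTTPS with an API token
curl -sk -H "Authorization: PVEAPIToken=monitor@pam!status=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
    "https://pve01:8006/api2/json/nodes/pve01/qemu/100/status/current"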
Looks like the v2 patch worked for me; the line numbers in the v1 patch for the API and Cloudinit endpoints appear to be slightly different.
In my testing, both the IP/DNS setting and the password application for the Administrator work well even without running one of the development versions of...
I'm also running into issues applying these patches against PVE 8.2.4 from the no-subscription repository. The qemu API patch seems to think it has already been applied, and the cloud-init patch fails on the first hunk. Looking at the code in each of these files at the areas where the patch should...
As vGPUs seem to have become more popular, I figured I'd ask here first to see if anyone else has run into this.
The latest release of the NVIDIA vGPU driver does appear to build on the latest 6.8 kernel available for Proxmox; however, that success still leads to an empty mdevctl list. After speaking...
Looking through the forums here, it seems some commit work took place in late 2022 to enable pre/post-migration hookscripts. On a recent PVE 8.2 installation, however, these hooks still don't appear to be present in QemuMigration.pm or the CLI qm source files on...
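For contrast, a stock hookscript today only ever sees the start/stop phases; something like the sketch below (snippet path and VMID are just examples), with nothing equivalent firing around a migration:
#!/bin/bash
# attach with: qm set 100 --hookscript local:snippets/guest-hook.sh
vmid="$1"
phase="$2"
case "$phase" in
    pre-start|post-start|pre-stop|post-stop)
        echo "guest $vmid entering phase $phase"
        ;;
    *)
        echo "guest $vmid: unhandled phase $phase" >&2
        ;;
esac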
I stumbled upon this thread a few weeks ago while running into similar issues, and figured I'd share with you both that I've had some success using NVIDIA's Mellanox repo and version 23.04 of the MLNX_OFED drivers with PVE 7.4 and kernel 5.15.107-2. DKMS builds perfectly for me - a first...
Set up a Proxmox Backup Server today and am trying to do an ACME registration against a PowerDNS server.
On creation of the certificate, I'm getting an error back from PowerDNS saying the domain isn't valid:
[Fri Dec 23 10:40:22 PST 2022] Please refer to https://curl.haxx.se/libcurl/c/libcurl-errors.html...
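For completeness, the DNS plugin data I'm feeding in follows the acme.sh dns_pdns variables, roughly like this (URL, server ID and token are placeholders):
PDNS_Url=http://ns1.example.com:8081
PDNS_ServerId=localhost
PDNS_Token=0123456789abcdef
PDNS_Ttl=60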
As the title asks, was the VFIO vfio_mdev kernel module removed from the 5.15 release?
Just did an update on a host that used it, and it appears to be missing from the modules directory for the new kernel. If this was intentional, what's the easiest way to get it back? I'm not familiar with...
For anyone who might find it useful: I had a PVE use case with two Smart Array controllers with disks attached. A quick modification to @joanandk's script, forcing the output of SA_ID into an array and iterating through it, appeared to fix that issue for me.
#!/bin/bash
#...
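The gist of the change, stripped down to the array-and-loop part (the controller-discovery command itself is whatever the original script already runs; sa_id_discovery below is only a stand-in):
# collect every controller ID instead of just the first one
mapfile -t SA_IDS < <(sa_id_discovery)   # stand-in for the original SA_ID command
for SA_ID in "${SA_IDS[@]}"; do
    # ... original per-controller logic from the script, unchanged ...
    :
done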
Thanks for the reply, @dcsapak. I appreciate knowing where things stand. My understanding is that the NVIDIA cards, at least the enterprise ones, do support some form of mediated-device live migration. Both VMware and XenServer carry some form of live migration support, though certainly that...
Apologies for the delay here, I missed this one.
I had to make two changes here; the first was to modify /etc/default/grub:
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rl-swap rd.lvm.lv=rl/root rd.lvm.lv=rl/swap pci=realloc rd.driver.blacklist=nouveau"
The second was, as described...
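For completeness, the cmdline change only takes effect once the GRUB config is regenerated and the guest rebooted; assuming the Rocky-style layout the rl-* LVM names suggest, that's:
grub2-mkconfig -o /boot/grub2/grub.cfg   # use the EFI grub.cfg path instead if the guest boots via UEFI
reboot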
Long time Proxmox user, but first time graphics card virtualizer.
I've got a couple of NVIDIA V100S 32G cards split between two servers. NVIDIA drivers are installed per the instructions on the host, and the vGPU itself works fine on the VM with the GRID drivers. Licensing also works, as expected...
I registered here specifically to provide future users with a resolution to a problem that escaped me on some simple searches.
The issue when passing multiple GPUs through to a Q35 KVM machine with PCI passthrough using OVMF appears to be a lack of addressable PCI memory space...
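One common way to widen that space is OVMF's X-PciMmio64Mb fw_cfg option; a sketch of setting it through the VM's extra arguments (the VMID and the 64 GiB value are examples, size it to the cards' BARs):
qm set 100 --args '-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536'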