I have been using an Nvidia GPU for months with great success; then I updated to 7.2 and now the VM says "No devices were found".
I do see it using lspci, and I can add it as a PCI Device (hostpci0) in the hardware section.
This is the conf file for the guest OS.
agent: 1
args: -cpu...
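For comparison, a working GPU passthrough entry in a VM config usually looks something like the fragment below; the PCI address, flags, and machine type are assumptions for illustration, not taken from the post:

```
# Illustrative /etc/pve/qemu-server/<vmid>.conf fragment (PCI address is hypothetical):
# agent: 1
# hostpci0: 0000:42:00,pcie=1,x-vga=1
# machine: q35
```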
I have a Dell PE R720 server and an Nvidia Quadro M2000 GPU, running Proxmox 7.1.x.
The server discovered the GPU when rebooting after install and did some initializing.
I do see it using lspci.
I have read the Proxmox PCI passthrough documentation, and when I come to the GPU Passthrough part, it is suggested...
@chchia Yes and no... in the grub file I had an entry that I did not recognize; I removed it and regenerated the grub config.
I also fixed my /etc/kernel/cmdline file: the Proxmox documentation states it must be a single line, which mine was not before.
So after these fixes the VM started. :)
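For reference, on a systemd-boot (ZFS) install the kernel options live in /etc/kernel/cmdline and must all sit on one line; a sketch, where the root dataset and IOMMU flags are assumptions for a typical Intel host:

```
# /etc/kernel/cmdline — everything on a single line:
# root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt
# After editing, apply it with:
# proxmox-boot-tool refresh
```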
I have followed TechnoTim's guide to add an Nvidia card to a VM. I have done all the steps, but when I add the PCIe card to the VM and try to start it I get:
When I check for IOMMU:
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[ 0.015676] ACPI: DMAR 0x000000007D3346F4 000160 (v01 DELL PE_SC3...
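Besides grepping dmesg, a quick way to confirm the IOMMU is actually active is to look for IOMMU groups in sysfs (a standard kernel interface; the count varies per host):

```shell
# Count IOMMU groups; a non-zero count means the IOMMU is enabled and grouping devices.
ls /sys/kernel/iommu_groups/ 2>/dev/null | wc -l
```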
I had to replace a drive in my storage pool. I did it via the CLI (zpool replace tank old-disk new-disk); the only "issue" I am having is how this looks...
Will the old disk disappear when the resilvering is done?
I have only used ZFS 0.8.6 when replacing disks, and that looks different. So...
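To answer the question above: yes. While the replace runs, the old and new disks sit together under a temporary "replacing" vdev, and the old disk drops out of the pool automatically once resilvering completes. An illustrative `zpool status tank` excerpt mid-replace (device names taken from the post's command):

```
#   replacing-0           ONLINE
#     old-disk            ONLINE
#     new-disk            ONLINE  (resilvering)
# When the resilver finishes, replacing-0 collapses to just new-disk.
```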
So I am moving 6.9 TB of data to my new Proxmox server; the "from" server states that it is 6.9 TB, and the CLI on the Proxmox server says the same.
The GUI though says 7.45 TB.
How come?
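One likely culprit is units: ZFS tooling works in binary TiB while GUIs often display figures labelled "TB", so the same data yields two different numbers; ZFS metadata and raidz padding can also add to the gap, so this conversion is a plausible explanation rather than a definitive one. A quick conversion check:

```shell
# 6.9 TiB expressed in decimal TB (2^40 bytes per TiB, 10^12 bytes per TB):
awk 'BEGIN{printf "6.9 TiB = %.2f TB\n", 6.9 * 2^40 / 10^12}'
```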
I am setting up my new Proxmox server. I have a 14 TB zpool and I want the coming VMs and CTs to use it.
But what is the best way to give the VMs and CTs access to the zpool?
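One common approach, assuming you want Proxmox to manage the volumes itself, is to register the pool as a ZFS storage backend so VM disks become zvols and CT volumes become datasets on it. The storage ID below is made up:

```
# Register the pool as storage for VM images and CT root filesystems:
# pvesm add zfspool tank-storage --pool tank --content images,rootdir
```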
I did, yes. Then I remembered that this was a golden moment to test whether my backup solution is as good as I think it is. So far I have installed Proxmox again and added two zpools; now it is time to import the backed-up VMs.
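For the import step, VM backups can be restored from the CLI; the archive path, VM ID, and storage ID below are hypothetical:

```
# Restore a vzdump backup to a VM ID on a given storage:
# qmrestore /mnt/backups/vzdump-qemu-100.vma.zst 100 --storage tank-storage
# (for containers the equivalent is `pct restore`)
```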
I decided to start the Monday by updating one of my Proxmox servers to a new kernel version. The update went well, but the boot process did not.
Pilot manages to start, and in there I can see the services that failed to load; quite a few, and some of them bloody important:
apparmor, kmod...
I had the same setup apart from the layer and the IP addresses... still I did not get any network whatsoever. I have "sadly" deleted that config.
I will steal yours and adapt it to my network and try again, thanks!
So I did my setup as you described, and now it seems to be working. Only used 2...
I have a Dell PE720 with 4 NICs. Just today 4 ports on my Unifi Switch 8p opened up, so I got the idea to bond all four ports and give the bond one IP that would be load balanced!
So I deleted all the bridges and configured my mighty bond with 4 10Gb NICs. Added a bridge with an IP and pointed it to...
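For a load-balanced bond of all four ports, an /etc/network/interfaces layout along these lines is typical; the interface names, bond mode, and addresses are assumptions (802.3ad/LACP also needs a matching aggregation group on the Unifi switch):

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```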
My Proxmox VE syslog is filled with these two lines; LXC 101 is my Unifi controller container. How do I fix this?
Jan 16 11:45:43 pve audit[2261527]: AVC apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-101_</var/lib/lxc>" name="/run/systemd/unit-root/proc/"...
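A common cause of this particular denial is systemd inside the container trying to mount /proc under /run/systemd/unit-root, which the default AppArmor profile blocks; enabling nesting for the container usually resolves it (verify that nesting is acceptable for your security posture first):

```
# In /etc/pve/lxc/101.conf add:
# features: nesting=1
# then restart the container:
# pct stop 101 && pct start 101
```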