I'm about to make some big changes to my Proxmox installation (I'm going to try VGPU Unlock), and I would like to be able to revert back to my present state if something goes wrong.
Since I've installed Proxmox on ZFS, is there any way to do what I want?
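What I have in mind is something along these lines, assuming the default Proxmox layout where the root dataset is `rpool/ROOT/pve-1` (I'm not sure this is the right approach, so corrections welcome):

> zfs snapshot -r rpool@pre-vgpu-unlock          # recursive snapshot of the whole pool before the change
> zfs list -t snapshot                           # confirm the snapshots exist
> zfs rollback rpool/ROOT/pve-1@pre-vgpu-unlock  # revert the root dataset if things go wrong

I'm also not sure whether rolling back the root dataset can be done from the running system or whether it needs a rescue boot.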
Yes, I actually did this in ESXi.
You need to add the following configuration parameters to the VM that the A100 is passed through to:
`pciPassthru.use64bitMMIO TRUE`
`pciPassthru.64bitMMIOSizeGB 128`
After this it worked fine for me.
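For reference, if you add them directly to the VM's .vmx file instead of through the Advanced Configuration UI, I believe the same two entries look like this (values are quoted in the file):

pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "128"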
I'm also getting this error, but only on one of my GPUs.
Might this be a new error introduced in the latest version of Proxmox? That same GPU was working fine with an older version (although on a different motherboard).
Right, you can do that, but you don't need to. This is for the cases where you want to create a MIG device, and then divide it into further VGPU device(s).
But are you saying that would be the workaround, though? I tried to use `mdevctl` directly on the MIG device (which didn't work), so maybe I'm missing...
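For context, this is roughly what I tried (the PCI address is from my setup and the type name is just a placeholder, since `mdevctl types` came back empty for me):

> mdevctl types                                                        # list the mdev types the card exposes
> mdevctl start -u "$(uuidgen)" -p 0000:41:00.0 --type <nvidia-type>   # try to create a vGPU mdev on the card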
No, I don't see it anywhere in the system under Proxmox.
It's `nvidia-smi` that shows it, with the command `nvidia-smi -L`.
For instance:
> nvidia-smi -L
GPU 0: NVIDIA A100-PCIE-40GB (UUID: GPU-ee14e29d-dd5b-2e8e-eeaf-9d3debd10788)
MIG 4g.20gb Device 0: (UUID...
I can pass it with full passthrough, yes, and that works, but it really defeats the purpose since the GPU can then only be assigned to a single VM instead of several.
VGPU is supposed to work, and I tried with the subscription NVIDIA drivers, but `mdevctl` complains it can't find any devices, so I end up...
Sorry for taking so long, but I've been trying a lot of different things to get this to work, and nothing works except using Ubuntu or SUSE on bare metal.
What do you mean by bind-mounting the /dev device into containers? I was under the impression that bind mounts in Proxmox were just for LXC. Can I use this in a VM?
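Is it something like the following in /etc/pve/lxc/<id>.conf? (Just guessing from what I've seen in other threads; the 195 major number and device paths are what the NVIDIA driver creates on my host, and on older cgroup v1 setups the first line would be `lxc.cgroup.devices.allow` instead.)

lxc.cgroup2.devices.allow: c 195:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file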
Thank you @dcapak
This is not VGPU though, it's MIG, a new-ish (from 2020) technology from NVIDIA.
And I can confirm that creating a virtual GPU with MIG does work correctly from the Proxmox command line. But after that, nothing else can be done because (unlike with VGPU instances that Proxmox...
Does Proxmox support the relatively new NVIDIA MIG functionality for GPU virtualization? https://docs.nvidia.com/datacenter/tesla/mig-user-guide/index.html
I did manage to install the required drivers and set up the different GPU contexts from the Proxmox command line, but then there is no way to...
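For reference, the part that does work for me from the Proxmox shell looks roughly like this (GPU index 0 is my A100; the profile name will differ on other cards):

> nvidia-smi -i 0 -mig 1             # enable MIG mode on the card (may need a GPU reset or reboot to take effect)
> nvidia-smi mig -lgip               # list the available GPU instance profiles
> nvidia-smi mig -cgi 4g.20gb -C     # create a 4g.20gb GPU instance plus its compute instance
> nvidia-smi -L                      # the new MIG device now shows up here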
I am having a strange problem and I don't even know where to start debugging it.
I followed the cloud-init instructions in the Proxmox documentation (https://pve.proxmox.com/wiki/Cloud-Init_Support) to create a cloud-init template out of an Ubuntu LTS cloud-init image...
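For context, the template was created roughly like this, following the wiki (VM ID 9000, the focal image, and local-lvm storage are just what I happened to use):

> qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0 --name ubuntu-cloudinit
> qm importdisk 9000 focal-server-cloudimg-amd64.img local-lvm          # import the cloud image as an unused disk
> qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
> qm set 9000 --ide2 local-lvm:cloudinit                                # add the cloud-init drive
> qm set 9000 --boot c --bootdisk scsi0
> qm set 9000 --serial0 socket --vga serial0
> qm template 9000                                                      # convert it into a template to clone from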
I am trying to create a K3OS kubernetes cluster on Proxmox.
The whole K3OS install process can be easily automated by simply booting from the install ISO and pointing the install command to a `config.yaml` file.
Also, K3OS accepts a series of kernel cmdline parameters in order to start the install...
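To give an idea, this is the kind of thing I mean, appended to the installer's kernel command line (parameter names are from the k3os README as I remember them, and the URL is just an example, so double-check them):

k3os.mode=install k3os.install.silent=true k3os.install.device=/dev/sda k3os.install.config_url=http://192.168.10.50/config.yaml

with a minimal `config.yaml` along these lines:

ssh_authorized_keys:
- ssh-ed25519 AAAA... me@laptop
hostname: k3os-node1
k3os:
  token: myclustertoken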
Hi and thank you.
I was trying LXC because it's much easier to configure several LXC containers than several VMs (i.e. I can just change the hostname and IP addresses directly in the Proxmox interface and deploy a few of them, instead of going through the install process of a full VM).
There is no...
I am trying to get a kubernetes node to run in an LXC container (tried with Ubuntu and Alpine so far), but I can't get it to work due to a problem with the cgroups.
I am trying with a privileged LXC container, and I already added LXC config for that container in /etc/pve/lxc/200.conf with...
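These are the kinds of lines I have there (collected from various k3s-in-LXC guides, so treat them as a starting point rather than a known-good config; on newer Proxmox with cgroup v2 the devices line would be `lxc.cgroup2.devices.allow`):

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw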
I am having a strange bug showing up.
I get the message "Guest Agent not running" in Proxmox, although on Windows the Guest Agent is installed and confirmed running with `Get-Service QEMU-GA`.
On Proxmox itself the output of `qm agent <vmid> ping` is `QEMU guest agent is not running`. But...
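What I've checked so far (as far as I understand, the agent option also has to be enabled on the VM itself, and the VM fully stopped and started again so the virtio-serial device gets added; a reboot from inside the guest isn't enough):

> qm config <vmid> | grep agent       # confirm the agent option is enabled for the VM
> qm set <vmid> --agent enabled=1     # enable it if missing, then stop/start the VM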
I suppose this must be something trivial but I've been struggling with it.
My host has two physical network devices, and I want to make a VM use specifically one of them. So I created `vmbr1`, which uses the `enp2s0f1` physical interface.
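This is roughly what my /etc/network/interfaces looks like now (addresses and the second bridge simplified; 192.168.10.100 is the address I normally manage Proxmox on):

auto lo
iface lo inet loopback

iface enp2s0f0 inet manual
iface enp2s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.100/24
        gateway 192.168.10.1
        bridge-ports enp2s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0f1
        bridge-stp off
        bridge-fd 0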
Thing is:
1 - I can't access my Proxmox on 192.168.10.100...