How do I see the GPU utilization?

For CPU: scroll down a little bit and look under CPU (which can already be seen at the lower end of your screenshot). Or click on the Summary of the node (instead of the Datacenter, where you took the screenshot).
For GPU: Proxmox itself does not use a GPU except for the host console terminal, so it will always sit at around 0% usage.
Or did I not understand your question correctly?
 
Just reviving this one. I am using LXC containers with a GPU that is not passed through; since they share the same kernel, it can be shared. Does that mean there is no way of viewing usage on the host?
 
Just reviving this one. I am using LXC containers with a GPU that is not passed through; since they share the same kernel, it can be shared. Does that mean there is no way of viewing usage on the host?
Depends on the GPU; NVIDIA, for example, has nvidia-smi, and there are tools like nvtop and such.
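If you want to poll that from a script on the host, nvidia-smi has a machine-readable CSV query mode. A minimal sketch (assuming the NVIDIA driver and nvidia-smi are installed on the host; the query flags are standard nvidia-smi options, but the parsing helper is just an illustration):

```python
import csv
import io
import subprocess

def parse_smi_csv(text):
    """Parse the output of `nvidia-smi --format=csv,noheader,nounits`."""
    rows = []
    for line in csv.reader(io.StringIO(text)):
        util, mem_used, mem_total = (field.strip() for field in line)
        rows.append({
            "util_pct": int(util),        # GPU utilization in percent
            "mem_used_mib": int(mem_used),
            "mem_total_mib": int(mem_total),
        })
    return rows

def gpu_stats():
    """Query utilization and memory for every NVIDIA GPU on the host."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_smi_csv(out)
```

You could run something like this from cron or a metrics exporter instead of watching nvtop interactively.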
 
Depends on the GPU; NVIDIA, for example, has nvidia-smi, and there are tools like nvtop and such.
But can these be installed on the host? I have an Intel Arc that the host uses and some LXCs share, and a passed-through AMD Radeon in a Linux VM.
 
But can these be installed on the host? I have an Intel Arc that the host uses and some LXCs share, and a passed-through AMD Radeon in a Linux VM.
LXC means that you have to use the host's kernel, so the GPU can be monitored from the host, and the driver also has to come from the host. Anything passed through via PCIe into a QEMU VM can only be monitored from the inside.
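For the Intel Arc shared with the containers, Intel's own tool should work from the host. A sketch, assuming a Debian-based PVE host with the Intel kernel driver loaded:

```
apt install intel-gpu-tools   # provides intel_gpu_top
intel_gpu_top                 # live per-engine utilization of the Intel GPU
```

The AMD Radeon won't show up there while it is passed through to the VM, for the reason above.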
 
LXC means that you have to use the host's kernel, so the GPU can be monitored from the host, and the driver also has to come from the host. Anything passed through via PCIe into a QEMU VM can only be monitored from the inside.
I understand that. Hence my question about installing it on the host.
 
I understand that. Hence my question about installing it on the host.
The driver, yes, sure. Where to monitor depends on which machine uses the device. A device can normally not be used by different containers and/or the PVE host at the same time, so I assume the same is true for monitoring. Therefore I would monitor it where I use it.

I've done this in the past for various AI-toolchain-related stuff. I bind-mounted my NVIDIA devices into a couple of containers and just started the one I needed and worked from there. nvidia-smi and nvtop worked flawlessly.
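For reference, such a bind-mount looks roughly like this in the container's config (e.g. /etc/pve/lxc/&lt;vmid&gt;.conf). The device paths and major number 195 are the usual NVIDIA ones; check `ls -l /dev/nvidia*` on your host, and note that nvidia-uvm may need its own devices.allow line for its (different) major number:

```
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

The container then also needs the same driver userland version as the host.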
 
For people arriving here with the same question as the OP (how to monitor the GPU usage of a VM from the outside), I found this: https://github.com/fgaim/gpuview.

It runs a lightweight webserver on the VM showing the GPU load. Because it's a webserver, it is visible from the outside (with the pros and cons of that).

Worked for me, even if I would prefer a graph alongside the others (but I understood there is the problem of "who runs the GPU" in a pass-through configuration).