there are no artificial limits in PDM itself, so the only limit is what the system can provide.
mostly this comes down to:
* network latency/throughput (e.g. for remotes with many vms, the metric updates etc. can be quite large)
* memory: from what i can...
while migration tasks should show up on PDM, i can confirm that the maintenance status of nodes is currently not shown there
Would you mind opening a feature request/bug on https://bugzilla.proxmox.com/ ?
sadly that path alone does not really tell us where it comes from or anything else about it.
can you see which device (e.g. which pci id) it belongs to?
what does it show?
you can print it by doing, e.g.
cat /sys/class/hwmon/hwmon1/temp1_input
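a quick way to see which device a hwmon entry actually belongs to is to follow its sysfs links; a minimal sketch (hwmon1 and the pci id below are just examples):

```
# name of the sensor chip as reported by the driver
cat /sys/class/hwmon/hwmon1/name

# follow the symlink to the underlying device; for pci devices this ends in the
# pci id (note: not every hwmon entry has a 'device' link, e.g. virtual sensors)
readlink -f /sys/class/hwmon/hwmon1/device

# if it turns out to be a pci device, look it up (the id here is just an example)
lspci -s 0000:00:18.3
```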
yes, it should be the same as passing through any other gpu
note that passing through integrated gpus is usually a bit more complicated, and not as well supported as passing through "proper" pci devices.
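for reference, a rough sketch of what the pve side of such a passthrough config could look like (assuming iommu/vfio is already set up; the vmid and pci address are placeholders):

```
# find the gpu's pci address
lspci -nn | grep -iE 'vga|3d'

# attach it to the vm as a pcie device (pcie=1 requires a q35 machine type,
# x-vga=1 only if it should act as the primary gpu)
qm set 100 -hostpci0 0000:01:00.0,pcie=1,x-vga=1
```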
i guess the sensors that are available in linux via drivers etc. are sometimes not the same ones that are available via IPMI, so it might be a different sensor.
do you know how beszel retrieves the sensor info?
if the package 'lm-sensors' is...
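to see whether it really is a different sensor, you could compare what the kernel drivers report with what the BMC reports over IPMI; a quick sketch (assuming ipmitool is installed, beszel might use something else entirely):

```
# readings exposed by the kernel's hwmon drivers (lm-sensors package)
sensors

# readings as reported by the BMC over IPMI (ipmitool package)
ipmitool sensor
```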
well, sr-iov is the 'pci standard' way to do multiple vgpus on a single pci card. nvidia takes a slightly different approach (you have to "activate" a profile on the virtual function and need a specialized driver)
for normal passthrough, nothing...
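to illustrate the difference: plain sr-iov virtual functions are created via the standard kernel sysfs interface, while the nvidia vgpu driver brings its own tooling on top; a sketch (pci id and vf count are just examples, the nvidia helper depends on the driver version):

```
# standard sr-iov: ask the kernel to create 4 virtual functions on the card
echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs

# the vfs then show up as their own pci devices (10de = nvidia vendor id)
lspci -d 10de:

# nvidia's vgpu driver ships its own helper to enable the vfs instead, e.g.:
# /usr/lib/nvidia/sriov-manage -e 0000:01:00.0
```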
just to clarify things a bit (the docs page can certainly still be improved here!):
basically, the listed versions are compatible with each other because they have been explicitly tested, i.e. when explicitly testing new NVIDIA driver...
i can't really speak to the performance of each card (maybe there is some relevant benchmark for you out there) but theoretically the b50 can do sr-iov (so "vGPU") while the a2000 can't do that. if that's relevant to you it might be an upgrade...
ok, so i also ran into a similar issue with an RTX Pro 6000 Blackwell card.
if the host has one region with X GiB + several others, the MMIO size must be the next power of 2 above X
so in your case, the host reports:
which would...
Hi, you say increasing the mmio size did not do anything:
how did you increase the size?
did you see the different solutions to increasing the mmio size here...
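for reference, one way this is often done for OVMF/UEFI guests is via the fw_cfg knob that controls the firmware's 64-bit pci window; a sketch (vmid 100 is a placeholder, and 65536 MiB = 64 GiB is just an example value, it has to be large enough for your card):

```
# set the 64-bit mmio window of the OVMF firmware for vm 100
qm set 100 -args '-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536'

# check that the extra arguments ended up in the vm config
qm config 100 | grep args
```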
well, 'proper' vgpus with qemu will only ever be time-sliced and not directly the mig devices (since vgpus are just 'virtual' hardware partitioning)
i currently have access to an rtx pro 6000 blackwell card, which supports time-sliced vgpus on top...
if you test it and it works, and you want it to be officially supported, it wouldn't hurt to tell nvidia that you want proxmox as a supported platform :)
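if you want to see what the driver offers on a given card, something like this can help (requires the nvidia vgpu host driver, output varies with the driver version):

```
# list the vgpu types the driver considers creatable on this host
nvidia-smi vgpu -c

# list the mig gpu-instance profiles the card supports (mig-capable cards only)
nvidia-smi mig -lgip
```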
if you want to use 'vgpu'-style gpus with an H200 you have to use nvidia ai enterprise https://www.nvidia.com/en-us/data-center/products/ai-enterprise/
which incurs an extra licensing fee to nvidia (idk the exact details currently, but in the past...
well, you can give a pdm user privileges to specific things on the pve remotes, so they'll only see (and can interact with) those. this does not give automatic privileges on the pve cluster itself, since the pdm user is not automatically logged in to pve
hi,
currently the users/privileges/groups/etc are completely separate between pve and pdm
so if you want to manage the PVE roles you have to do it there (see the example below),
on PDM there are separate users/privs for PDM, and PDM itself has access with the...
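on the pve side, managing those roles/privileges would look roughly like this (user, path and role are just examples):

```
# create a user on the pve cluster (set a password afterwards with 'pveum passwd')
pveum user add monitoring@pve

# grant it a role on a specific path, e.g. read-only auditing of one vm
pveum acl modify /vms/100 --users monitoring@pve --roles PVEAuditor
```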
hi,
sadly the error does not really tell us what might actually be wrong.
Are you sure pdm can resolve and reach that remote with the data you put in? (e.g. that there is no firewall blocking access, that the ip/hostname/port are correct, etc.)
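a few generic checks you could run on the pdm host to rule out the basics (the hostname is a placeholder, 8006 is the default pve api port):

```
# can the pdm host resolve the remote's name?
getent hosts pve-remote.example.com

# is the api port reachable at all?
curl -vk --connect-timeout 5 https://pve-remote.example.com:8006/
```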