i tested your script (with my token entered) and got the same error.
i noticed you use 'http://' instead of 'https://'; after i fixed that, it works.
maybe the redirect from http -> https causes the python lib to drop the auth header?
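for reference, roughly what that looks like, assuming the script uses the 'requests' library; the host, port and token format below are only placeholders, adjust them for your api:

```python
import requests

# placeholders only; adjust host, port and token to your setup.
# note the https:// scheme: talking to the https endpoint directly means no
# http -> https redirect is followed that could drop the Authorization header.
API_URL = "https://your-host:8006/api2/json/version"
AUTH_HEADER = "PVEAPIToken=user@pam!mytoken=<secret>"  # pbs uses PBSAPIToken=... instead

# if the host uses a self-signed certificate, pass verify='/path/to/ca.pem'
response = requests.get(API_URL, headers={"Authorization": AUTH_HEADER})
response.raise_for_status()
print(response.json())
```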
ah ok, yes, the live-migration patches will probably interfere with / not work on top of that series
for now i focused on getting it to work again; afterwards i'll send a new version of the live-migration patches adapted to the changes
i tested with 6.8.8-3-pve (but i think all 6.8 kernels should work) and the driver from here:
https://github.com/intel-gpu/intel-gpu-i915-backports/tree/backport/main
use the 'backport/main' branch, install the dependencies (dkms and a few others i can't remember right now) and use 'make...
a bit of general developer info: https://pve.proxmox.com/wiki/Developer_Documentation
the correct way to test those changes is to check out the source repositories, apply the patches with e.g. `git am`, and then rebuild and install the packages
i'm not expecting anybody here to do that...
not yet, but we're working on it; no timeframe yet though. installing the driver is more or less straightforward (from https://github.com/intel-gpu/intel-gpu-i915-backports) but the activation of the virtual functions needs a bit of configuration (on ubuntu it's possible with the xpu-smi tool from...
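for reference, and not the xpu-smi way: a minimal sketch of what activating the virtual functions can look like through the kernel's generic sr-iov sysfs interface, assuming the driver build exposes it; the pci address and vf count below are just example values:

```python
from pathlib import Path

# example values only; check 'lspci' for the real pci address of the igpu,
# and pick how many vfs you actually want (and the device supports)
IGPU_PCI_ADDR = "0000:00:02.0"
NUM_VFS = 4

device = Path(f"/sys/bus/pci/devices/{IGPU_PCI_ADDR}")
total_vfs = int((device / "sriov_totalvfs").read_text())

if NUM_VFS > total_vfs:
    raise SystemExit(f"device only supports {total_vfs} virtual functions")

# writing the count to sriov_numvfs (as root) asks the driver to create the vfs
(device / "sriov_numvfs").write_text(str(NUM_VFS))
```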
Hi,
maybe a dumb question: did you enable iommu in the bios? (you did not explicitly say so)
it may have a different name, e.g. VT-d, depending on the bios/motherboard
also, could you post the complete output of the 'dmesg' command? maybe we can see there what might go wrong
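as a sketch, these are the lines i'd look for first; the match strings are the usual intel/amd iommu markers, your dmesg may differ a bit:

```python
import subprocess

# print only the dmesg lines that usually show whether the iommu came up
# (DMAR / "IOMMU enabled" on intel with VT-d, AMD-Vi on amd); run as root
dmesg = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout

for line in dmesg.splitlines():
    if any(marker in line for marker in ("DMAR", "IOMMU", "AMD-Vi")):
        print(line)
```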
thx for reporting. yes, that is simply a gui bug; i sent a patch to our devel list: https://lists.proxmox.com/pipermail/pve-devel/2024-August/065056.html
fyi: i posted a patch series to our devel mailing list, so feel free to test it (if you want/can):
https://lists.proxmox.com/pipermail/pve-devel/2024-August/065046.html
I think the hw error logs indicate that either the memory or the cpu is the problem here... since you already changed the memory, i'd lean towards the cpu
usually, if it's a software problem, you at least get a bit of logs/a dump/etc., not a straight reboot
another thing that's possible is the PSU, if...
nice that you found the problem... well, for a changer to work properly you need those, since otherwise neither we nor the changer can know which tape is where...
see https://pbs.proxmox.com/docs/backup-client.html#backup-pruning and https://pbs.proxmox.com/docs/prune-simulator/index.html for details, but it does the right thing:
you specified keep-last 7, which keeps the following:
keep-daily 14:
then one is removed
then keep-weekly 4
and then keep...
i don't know how to configure zabbix exactly, but the state should be as follows:
no 'last-run-state' and no 'last-run-upid' => it never ran
no 'last-run-state' and a 'last-run-upid' => it's currently running
'last-run-state' (regardless of upid or not) => it ran in the past with that result...
the 'last-run-state' is the state of the last run, so if there was no run yet it's empty, and if a run is currently in progress it's still empty but 'last-run-upid' is set
is there any state missing that can't be derived from the above information?
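to make that concrete, a small python sketch of the mapping; the returned labels are just examples for a monitoring template:

```python
from typing import Optional

def job_status(last_run_state: Optional[str], last_run_upid: Optional[str]) -> str:
    """map the two api fields to a human-readable job status (example labels)."""
    if last_run_state:
        # it ran in the past, with that result
        return f"finished: {last_run_state}"
    if last_run_upid:
        # no final state yet, but a upid exists -> currently running
        return "running"
    # neither field set -> the job never ran
    return "never ran"

print(job_status(None, None))        # never ran
print(job_status(None, "UPID:..."))  # running
print(job_status("OK", "UPID:..."))  # finished: OK
```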
ok, i probably left too much out in my answer. the vnc part cannot access the vgpu, only the emulated gpu. though you generally won't have much fun with such a setup, since mouse input via vnc is a bit weird AFAIR, and windows probably will not use the nvidia gpu on the display of the emulated...
no, VNC cannot access the display of the virtual gpu at the moment (there is an additional property for qemu's pci device, 'display=on', that could work, but in my last tests this was very unstable and not really usable...)
hi,
the message itself says when this normally happens, so: which kernel are you running, and which kernel headers do you have installed?
e.g. what is the output of 'pveversion -v' and 'dpkg -l | grep proxmox-headers'
one part is already applied and should already be in proxmox-backup-server 3.2.3-1
(this fix: https://lists.proxmox.com/pipermail/pbs-devel/2024-May/009192.html)
but the multithreading part was held up because of some concerns regarding how we want to decide e.g. the number of threads (to not...
that's a question you have to ask nvidia. i'm not sure what their current policy regarding lts + newer kernels is, but only they have the ability to update the older branch of the driver for newer kernels...
please don't hijack an old thread; instead, open your own thread with a bit more information (e.g. the full error message and a more detailed description of the setup/issue/etc.)