mhmm does it progress if you change the ostype from windows 11 to linux? this only sets a few specific parameters, but maybe one of them is the culprit
Hi,
There are only 2 ways to share a (physical) GPU resource with guests AFAIK:
* use of virtio-gl (though that is currently limited to opengl, vulkan is in the works)
* use of vgpus, though there are multiple ways:
- for intel, there is the intel...
hi, yep it's currently not implemented, would you mind opening a bug report on https://bugzilla.proxmox.com (maybe check if there already is one) so we can keep better track of it?
one other issue that could be happening is that allocating the memory of the windows vm just takes an (absurd) amount of time. we had such behavior in the past, especially if the memory is fragmented. Could you try to reduce the amount of memory...
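if memory fragmentation is the suspect, a quick way to look at it on the host is `/proc/buddyinfo` (a sketch, assuming any recent Linux kernel; low counts in the right-hand, higher-order columns indicate fragmented memory):

```shell
# each column is the number of free blocks of a given order (power-of-two
# pages); mostly-zero columns on the right mean large allocations are hard
cat /proc/buddyinfo
# as root you can also ask the kernel to compact memory before starting the vm:
#   echo 1 > /proc/sys/vm/compact_memory
```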
this is the problem, the tape job sees a tape that is already part of the media set and writable, and still available (for a standalone drive, 'offline' means not in the drive, but on-site)
i'd bet that if you mark this too as vaulted, it would...
can you try with the following changes to the config:
instead of
hostpci0: 0000:e1:00.0,pcie=1
hostpci1: 0000:e1:00.1,pcie=1
please use
hostpci0: 0000:e1:00,pcie=1
this will pass through both functions as one device, like it is visible on...
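the same change can also be applied via the CLI instead of editing the config file by hand; a sketch, assuming a hypothetical VMID of 100:

```shell
# pass through the whole device (both functions) as hostpci0
qm set 100 --hostpci0 '0000:e1:00,pcie=1'
# remove the now-redundant second entry
qm set 100 --delete hostpci1
```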
mhmm, indeed it did not allocate a new media set; that would look like this (what it did in my tests):
2026-02-18T10:39:19+01:00: Starting tape backup job '...'
2026-02-18T10:39:19+01:00: update media online status
2026-02-18T10:39:19+01:00...
hi,
can you post the vm config?
does the windows vm boot without the passed through devices?
also the journal from the host during that time would be interesting.
aside from the install notes maybe needing an update, what problem do you have with secure boot and our shim/grub/kernel? they should be signed and bootable with secure boot enabled.
i can't reproduce this here:
i have a standalone drive:
* started a backup via a job with media pool policies 'keep' (retention) and 'continue' (allocation)
* marked the last 'writable' medium as vaulted (so no member of the media-set is...
yeah, this has to be fixed in pdm. in PVE we currently only count the 'non-shared' storages multiple times (or use the list the user can configure), but in pdm we simply sum everything up. would you mind creating a bug for that on...
ah ok, i only saw that just the package version was bumped and assumed it didn't contain any relevant changes for this, but yeah go ahead and try, the worst that can happen is that the module won't compile and you have to remove the package again...
not super sure about the 'no installation candidate' issue, what does
apt-cache policy nvidia-driver
show?
here it's like this:
nvidia-driver:
Installed: (none)
Candidate: 550.163.01-2
Version table:
550.163.01-2 500
500...
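since the Debian `nvidia-driver` package lives in the 'contrib'/'non-free' components, a 'no installation candidate' often just means those components are not enabled; a quick check (a sketch, assuming a Debian-based host, the example sources line is for bookworm):

```shell
# look for contrib/non-free in the configured apt sources
grep -rE "contrib|non-free" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null \
    || echo "contrib/non-free not enabled -- add them, e.g.:" \
            "deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware"
```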
the patch is already applied, so it will be included in the next bump for qemu-server
when you manually apply patches you have to reload/restart pveproxy & pvedaemon, so they load the perl libraries again
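concretely, that would be something like (standard systemd units on a PVE host):

```shell
# reload (or restart, if reload is not supported) both daemons so the
# patched perl modules are loaded again
systemctl reload-or-restart pveproxy pvedaemon
```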
sadly, not really
to access the display of a vm via vnc, qemu itself has to have access to it. and that only works for the built-in (virtual) gpus and some select vgpus (e.g. nvidia vgpu). but even there we don't utilize this, since in our testing it...
yes, thanks, i can see the following messages:
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: migration_block_inactivate: bdrv_inactivate_all() failed: -1
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: Error in migration...
actually i didn't mean the task log, but the whole journal/syslog from both nodes. you can obtain that with
journalctl
(this will print the *whole* journal, use '--until' and '--since' to limit it to the correct timeframe)
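a sketch with placeholder timestamps (adjust them to the window around the failed migration, and run it on both nodes):

```shell
# limit the journal to the relevant timeframe; the timestamps here are
# placeholders, not the actual incident times
journalctl --since "2026-02-06 13:30" --until "2026-02-06 14:00"
```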
not quite sure what you mean; each pci card should be listed in e.g. 'lspci'. see an excerpt from the lspci output here:
...
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-S GT1 [UHD Graphics 770] (rev 0c)
...
03:00.0 VGA compatible...
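to narrow the output down to display devices only, a grep over the lspci output usually suffices (a sketch; `-nn` adds the numeric vendor/device ids):

```shell
# show only display-class devices (vga / 3d / display controllers)
lspci -nn | grep -iE "vga|3d controller|display"
```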