mhmm, indeed it did not allocate a new media-set. this is what that looks like (and what it did in my tests):
2026-02-18T10:39:19+01:00: Starting tape backup job '...'
2026-02-18T10:39:19+01:00: update media online status
2026-02-18T10:39:19+01:00...
hi,
can you post the vm config?
does the windows vm boot without the passed-through devices?
also the journal from the host during that time would be interesting.
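in case it helps, the vm config can be dumped on the host like this ('100' is just a placeholder vmid, use the id of your vm):
qm config 100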
aside from the install notes possibly needing an update, what problem do you have with secure boot and our shim/grub/kernel? They should be signed and bootable with secure boot.
i can't reproduce this here:
i have a standalone drive:
* started a backup via a job with a media pool using the policies 'keep' (retention) and 'continue' (allocation)
* marked the last 'writable' medium as vaulted (so no member of the media-set is...
yeah, this has to be fixed in PDM. In PVE we currently only count the 'non-shared' storages multiple times (or use the list the user can configure), but in PDM we simply sum everything up. would you mind creating a bug for that on...
ah ok, i only saw that the package version was bumped and assumed it didn't contain any relevant changes for this. but yeah, go ahead and try; the worst that can happen is that the module won't compile and you have to remove the package again...
not super sure about the 'no installation candidate' issue, what does
apt-cache policy nvidia-driver
show?
here it's like this:
nvidia-driver:
  Installed: (none)
  Candidate: 550.163.01-2
  Version table:
     550.163.01-2 500
        500...
the patch is already applied, so it will be included in the next bump for qemu-server
when you manually apply patches, you have to reload/restart pveproxy & pvedaemon so that they load the perl libraries again
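for example (pveproxy and pvedaemon are the standard pve service names):
systemctl reload-or-restart pveproxy pvedaemon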
sadly, not really
to access the display of a vm via vnc, qemu has to have access to it, and that only works for the built-in (virtual) gpus and some select vgpus (e.g. nvidia vgpu). But even there we don't utilize this, since in our testing it...
yes, thanks, i can see the following messages:
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: migration_block_inactivate: bdrv_inactivate_all() failed: -1
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: Error in migration...
actually i didn't mean the task log, but the whole journal/syslog from both nodes. you can obtain that with
journalctl
(this will print the *whole* journal; use '--since' and '--until' to limit it to the relevant timeframe)
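e.g. something like this (the timestamps are placeholders, adjust them to your timeframe):
journalctl --since "2026-02-06 13:00" --until "2026-02-06 14:00"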
not quite sure what you mean; each pci card should be listed in e.g. 'lspci'. see an excerpt from lspci here:
...
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-S GT1 [UHD Graphics 770] (rev 0c)
...
03:00.0 VGA compatible...
yeah, i have one right here, running with proxmox ve 9.1, and it is detected and works fine
as already said, check the bios for settings, and try with e.g. an ubuntu live iso to see if it shows up there at all
rebar refers to 'resizable bar', e.g. see...
you should be able to attach a file; if that's not possible, you can split it up into multiple posts, or use a text sharing website and share the link here
Hi,
could you post the vm config and 'pveversion -v' from both sides, please?
also the journal from the time of the migration from both sides could be relevant
are you sure the card is fully inserted in the pci slot? if yes, does it show up anywhere in 'dmesg'? (you can post/attach the output here)
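for example, to filter the output (just a sketch; adjust the pattern to your card's vendor):
dmesg | grep -i -e pci -e vga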
if it's not in dmesg or lspci, and it is fully inserted, then the only possibilities i see are:
there...
You are missing some directives in the vmbr0_vlan100 and vmbr0_vlan48 sections, respectively. When configuring an IPv4 address you need to add 'inet static'; when configuring no IP address, 'inet manual' is required. The same applies for IPv6, but with inet6...
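As a sketch of how the relevant lines could look (interface names taken from your config; the address is a placeholder to replace with your own, and only the directives discussed here are shown):

auto vmbr0_vlan100
iface vmbr0_vlan100 inet static
        address 192.0.2.10/24

auto vmbr0_vlan48
iface vmbr0_vlan48 inet manual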
is this a cluster?
this sounds like one of the nodes is on a newer version that includes the gui changes, but you have selected a vm on a node where that api parameter does not exist yet.
can you check (and post) the output of 'pveversion -v'...