i can't reproduce this here:
i have a standalone drive:
* started a backup via a job using a media pool with the policies 'keep' (retention) and 'continue' (allocation)
* marked the last 'writable' medium as vaulted (so no member of the media-set is...
yeah, this has to be fixed in pdm. In PVE we currently only count the 'non-shared' storages multiple times (or use the list the user can configure), but in pdm we simply sum everything up. would you mind creating a bug for that on...
ah ok, i only saw that the package version was bumped and assumed it didn't contain any relevant changes for this. but yeah, go ahead and try, the worst that can happen is that the module won't compile and you have to remove the package again...
not super sure about the 'no installation candidate' issue, what does
apt-cache policy nvidia-driver
show?
here it's like this:
nvidia-driver:
  Installed: (none)
  Candidate: 550.163.01-2
  Version table:
     550.163.01-2 500
        500...
the patch is already applied, so it will be included in the next bump for qemu-server
when you manually apply patches you have to reload/restart pveproxy & pvedaemon, so they load the perl libraries again
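for example (assuming the standard service names):

```shell
# restart the PVE API daemons so the patched perl modules are loaded again
systemctl restart pveproxy pvedaemon
```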
sadly, not really
to access the display of a vm via vnc, qemu has to have access to that, and that only works for the built-in (virtual) gpus and some select vgpus (e.g. nvidia vgpu). but even there we don't utilize this, since in our testing it...
yes, thanks, i can see the following messages:
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: migration_block_inactivate: bdrv_inactivate_all() failed: -1
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: Error in migration...
actually i didn't mean the task log, but the whole journal/syslog from both nodes. you can obtain that with
journalctl
(this will print the *whole* journal, use '--until' and '--since' to limit it to the correct timeframe)
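a sketch of how that could look (the timestamps are just placeholders, adjust them to the migration window):

```shell
# only the journal around the migration, written to a file you can attach here
journalctl --since "2026-02-06 13:30" --until "2026-02-06 14:00" > journal.txt
```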
not quite sure what you mean, each pci card should be listed in e.g. 'lspci'. see an excerpt from lspci here:
...
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-S GT1 [UHD Graphics 770] (rev 0c)
...
03:00.0 VGA compatible...
yeah, i have one right here, running with proxmox ve 9.1, and it gets detected and works fine
as already said, check the bios for settings, and try with e.g. an ubuntu live iso to see if it's there at all
rebar refers to 'resizable bar', e.g. see...
you should be able to attach a file, or if that's not possible, you can split it up in multiple posts, or use a text sharing website and share the link here
Hi,
could you post the vm config and 'pveversion -v' from both sides please
also the journal from the time of the migration from both sides could be relevant
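a minimal sketch for gathering that (the VMID 100 is just a placeholder for your vm):

```shell
# vm config and package versions, the output can be attached here
qm config 100
pveversion -v
```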
are you sure the card is fully inserted in the pci slot? if yes, does it show up anywhere in 'dmesg' ? (you can post/attach the output here)
if it's not in dmesg and lspci, and it is fully inserted then the only possibilities i see are:
there...
You are missing some directives in the vmbr0_vlan100 and vmbr0_vlan48 sections respectively. When configuring an IPv4 address you need to add inet static - when configuring no IP address inet manual is required. Analogous for IPv6, but with inet6...
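a sketch of how those sections could look in /etc/network/interfaces (the interface names are taken from your config, the address is just an example):

```
auto vmbr0_vlan100
iface vmbr0_vlan100 inet static
    address 192.0.2.10/24

auto vmbr0_vlan48
iface vmbr0_vlan48 inet manual
```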
is this a cluster?
this sounds like one of the nodes is on a newer version that includes the gui changes, but you have a vm selected on a node where that api parameter does not exist yet.
can you check (and post) the output of 'pveversion -v'...
ah ok sorry, the output wasn't actually necessary to see the issue^^
when you look at your config, you can see that the zpool01 entry does not have the 'sparse 1' option, so the space for each vm volume is reserved upfront:
so while the vm...
so we take the info from 'zfs list':
which shows ~82% used (USED/(USED+AVAIL))
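to illustrate the calculation with made-up numbers (not your actual pool values):

```python
# hypothetical USED/AVAIL values as reported by 'zfs list' (in bytes)
used = 8.2e12
avail = 1.8e12

# the usage percentage is derived as USED/(USED+AVAIL)
usage = used / (used + avail)
print(f"{usage:.0%}")  # → 82%
```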
ok, one additional command output would be helpful to see the geometry of the zpool (which might explain the big difference in output)
zpool status