yes, thanks, i can see the following messages:
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: migration_block_inactivate: bdrv_inactivate_all() failed: -1
2026-02-06T13:41:26+01:00 agorapverssi1 QEMU[13459]: kvm: Error in migration completion: Bad address
and...
actually i didn't mean the task log, but the whole journal/syslog from both nodes. you can obtain that with
journalctl
(this will print the *whole* journal, use '--until' and '--since' to limit it to the correct timeframe)
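for example (the timestamps here are just placeholders, adjust them to the window around the migration; redirecting to a file makes it easier to attach):
journalctl --since "2026-02-06 13:30" --until "2026-02-06 14:00" > journal-$(hostname).txt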
not quite sure what you mean, each pci card should be listed in e.g. 'lspci'. see an excerpt from lspci here:
...
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-S GT1 [UHD Graphics 770] (rev 0c)
...
03:00.0 VGA compatible controller: Intel Corporation Battlemage G21 [Arc...
yeah, i have one right here, running with proxmox ve 9.1, and it is detected and works fine
as already said, check the bios for settings, and try with e.g. an ubuntu live iso to see if it's there at all
rebar refers to 'resizable bar', e.g. see here...
you should be able to attach a file, or if that's not possible, you can split it up into multiple posts, or use a text sharing website and share the link here
Hi,
could you post the vm config and 'pveversion -v' from both sides please
also the journal from the time of the migration from both sides could be relevant
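e.g. on both nodes (123 is just a placeholder for your vm id):
qm config 123
pveversion -v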
are you sure the card is fully inserted in the pci slot? if yes, does it show up anywhere in 'dmesg' ? (you can post/attach the output here)
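e.g. something like this (the grep pattern is just a suggestion) should show whether the kernel sees the device at all:
lspci -nnk
dmesg | grep -i -e pci -e vga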
if it's not in dmesg and lspci, and it is fully inserted, then the only possibilities i see are:
there is some bios setting to disable that pci slot
the...
is this a cluster?
this sounds like one of the nodes is on a newer version that includes the gui changes, but the vm you have selected is on a node where that api parameter is not available yet.
can you check (and post) the output of 'pveversion -v' on all nodes ?
if it's not a cluster, are you in...
ah ok sorry, the output wasn't actually necessary to see the issue^^
when you look at your config, you can see that the zpool01 entry does not have the 'sparse 1' option, so the space for each vm volume is reserved upfront:
so while the vm disk reserves the space from the point of view of...
so we take the info from 'zfs list' which shows:
which shows ~82% used (USED/(USED+AVAIL))
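for illustration with made-up numbers: if 'zfs list' reported USED 8.2T and AVAIL 1.8T for the pool, that would be 8.2 / (8.2 + 1.8) = 0.82, i.e. ~82% used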
ok, one additional command output would be helpful to see the geometry of the zpool (which might explain the big difference in output)
zpool status
this signal (11) is a "SIGSEGV", which means there was a segmentation fault somewhere. I'd guess it's either some corrupted file (you could check this with the 'debsums' package+command) or faulty hardware (e.g. faulty memory)
since it happened after an upgrade i'd assume the first option
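e.g. a minimal sketch (the '-s' flag makes debsums report only files whose checksums differ from the packaged ones):
apt install debsums
debsums -s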
setting the '+i' (immutable) flag is only a further protective layer. It's set to prevent accidental modification of the template base file (since it could have linked clones).
everything should continue to work even without that flag, but now an admin could modify that image without first removing...
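for reference, a quick sketch of checking and toggling the flag ('chattr -i' removes it, 'chattr +i' sets it again; the path is just a placeholder for the actual base image):
lsattr /path/to/base-image.raw
chattr -i /path/to/base-image.raw
chattr +i /path/to/base-image.raw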
i agree it's not optimal. If you have an improvement suggestion (ideally a concrete one), please open an enhancement request on https://bugzilla.proxmox.com . that way we can better keep track of these requests :)
this is meant as a bar over time (left is older, right is newer) where the color indicates the usage (100% red, 0% green). I agree it's not the best visualization, but nothing better has come up yet.
e.g. see the example screenshot in the docs...
mhmm? not sure i get what you mean. which documentation is "behind fake walled gardens"? all of the software and docs are out in the open. Yes, we don't have a detailed guide ourselves on e.g. how to run jellyfin with a gpu, because there is already the guide i linked to?