yeah, i have one right here running Proxmox VE 9.1, and it gets detected and works fine
as already said, check the bios for relevant settings, and try with e.g. an ubuntu live iso to see if it shows up there at all
rebar refers to 'resizable bar', e.g. see...
you should be able to attach a file, or if that's not possible, you can split it up into multiple posts, or use a text sharing website and share the link here
Hi,
could you post the vm config and 'pveversion -v' from both sides please
also the journal from the time of the migration from both sides could be relevant
are you sure the card is fully inserted in the pci slot? if yes, does it show up anywhere in 'dmesg' ? (you can post/attach the output here)
if it's not in dmesg and lspci, and it is fully inserted then the only possibilities i see are:
there...
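for reference, checking whether the card is visible at the PCI level looks like this (the grep pattern is only an example, adjust it to your card's vendor):

```shell
# list all PCI devices with numeric IDs and the bound kernel driver;
# a GPU normally shows up as a 'VGA compatible controller' line
lspci -nnk

# search the kernel log for PCI/GPU related messages
dmesg | grep -iE 'pci|vga'
```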
You are missing some directives in the vmbr0_vlan100 and vmbr0_vlan48 sections respectively. When configuring an IPv4 address you need to add 'inet static'; when configuring no IP address, 'inet manual' is required. The same applies to IPv6, but with inet6...
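for illustration, assuming vmbr0_vlan100 should carry a static IPv4 address and vmbr0_vlan48 no address at all, the stanzas in /etc/network/interfaces would look something like this (addresses are placeholders):

```
auto vmbr0_vlan100
iface vmbr0_vlan100 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1

auto vmbr0_vlan48
iface vmbr0_vlan48 inet manual
```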
is this a cluster?
this sounds like one of the nodes is on a newer version that includes the gui changes, but you have a vm selected on a node where that api parameter is not available yet.
can you check (and post) the output of 'pveversion -v'...
ah ok sorry, the output wasn't actually necessary to see the issue^^
when you look at your config, you can see that the zpool01 entry does not have the 'sparse 1' option, so the space for each vm volume is reserved upfront:
so while the vm...
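for reference, a zfspool storage definition with thin provisioning enabled would look like this in /etc/pve/storage.cfg (the content line is just an example):

```
zfspool: zpool01
        pool zpool01
        sparse 1
        content images,rootdir
```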
so we take the info from 'zfs list' which shows:
which shows ~82% used (USED/(USED+AVAIL))
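the calculation is just that ratio; a quick sketch with made-up numbers (values as 'zfs list' would report them, here simplified to GiB):

```shell
# hypothetical values from 'zfs list': 820 GiB used, 180 GiB available
used=820
avail=180

# percentage shown: USED / (USED + AVAIL)
awk -v u="$used" -v a="$avail" 'BEGIN { printf "%.0f%%\n", u / (u + a) * 100 }'
# prints "82%"
```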
ok, one additional command output would be helpful to see the geometry of the zpool (which might explain the big difference in output)
zpool status
this signal (11) is a "SIGSEGV", which means there was a segmentation fault somewhere. I'd guess it's either a corrupted file (you could check this with the 'debsums' package+command) or faulty hardware (e.g. faulty memory)
since it...
setting the '+i' (immutable) flag is only a further protective layer. It's set to prevent accidental modification of the template base file (since it could have linked clones).
everything should continue to work even without that flag, but now an...
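to illustrate, the flag can be inspected and set with lsattr/chattr (demonstrated here on a scratch file; on the real system the target would be the template's base volume, and chattr needs root):

```shell
f=$(mktemp)

chattr +i "$f"   # set the immutable flag: the file can no longer be modified or deleted
lsattr "$f"      # an 'i' in the flags column confirms it

chattr -i "$f"   # remove the flag again so the scratch file can be cleaned up
rm -f "$f"
```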
i agree it's not optimal. If you have an improvement suggestion (ideally a concrete one), please open an enhancement request on https://bugzilla.proxmox.com . that way we can better keep track of these requests :)
this is meant as a bar over time (left is older, right is newer) where the color indicates the usage (100% red, 0% green). I agree it's not the best visualization, but nothing better has come up yet.
e.g. see the example screenshot in the docs...
mhmm? not sure i get what you mean. which documentation is "behind fake walled gardens"? every piece of the software and docs is out in the open. Yes, we don't have a detailed guide ourselves on e.g. how to run jellyfin with a gpu...
there are no artificial limits in pdm itself, so the only limit is what the system can give.
mostly this is
* network latency/throughput (e.g. for remotes with many vms, updating the metrics etc. can generate quite a bit of traffic)
* memory: from what i can...
while migration tasks should show up in PDM, i can confirm that the maintenance status of nodes is currently not shown in PDM
Would you mind opening a feature request/bug on https://bugzilla.proxmox.com/ ?