Any specific errors in the host or guest kernel/system logs?
Please also note that the new 6.8 kernel will become the default relatively soon, so watch out for updates pulling it in again.
What's the exact package version, as output by, e.g., apt show nvidia-driver (check the Version field)?
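For reference, a quick way to pull out just that field on a Debian-based host (assuming the package is indeed named `nvidia-driver`, as on Bookworm):

```shell
# Show the package record; the "Version:" field is the one of interest:
apt show nvidia-driver 2>/dev/null | grep '^Version:'

# Or print only the version string of the installed package:
dpkg-query -W -f='${Version}\n' nvidia-driver
```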
The changelog of the slightly newer 525.147.05-7~deb12u1 from the bookworm-updates repo mentions fixing the build with the 6.8 kernel, a fix that would not be included in version 525.147.05-4~deb12u1...
We slowed down moving that version along after we saw an odd report about OSDs not coming up in this thread: https://forum.proxmox.com/threads/after-updating-ceph-18-2-2-each-osds-never-start.144621/
But I did not get any further feedback there yet, and we could not reproduce this in our test and...
Back then, 6.8.1 was available on both the test and no-subscription repos, while 6.8.4-2 was only available on the test repo.
Like all package updates, the flow is [internal repo] -> [test] -> [no-subscription] -> [enterprise].
It seems like King Tiger had the test repo enabled and thus already got...
The log from the last reply sounds a bit like a slightly older issue that seems to only trigger under certain circumstances:
https://tracker.ceph.com/issues/61948
@Nexsol do you see a similar error in your system logs?
I checked this out a bit more closely, and it seems that the source code of the latest upstream version 9.013.02 has some explicit checks for the 6.8 kernel (or newer), so I'm not sure if just updating the str-compare method to make it compile is enough to actually make it fully...
I had just a simple checkbox in mind that, if ticked, would make the installer create a systemd .link file for each interface, simply enumerating some prefix; the rest would then be handled by systemd/udev – and yeah, if there's no match for an interface, it will fall back to the naming from...
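As a rough sketch of what such a generated file could look like (the installer option does not exist yet; the interface name, prefix, and MAC address below are made up):

```shell
# Hypothetical example: pin the NIC with this MAC to the stable name "net0".
# systemd-udevd applies *.link files from /etc/systemd/network at boot.
cat > /etc/systemd/network/10-net0.link <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=net0
EOF
```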
Thanks for your feedback, and yeah, changes in the kernel release, the systemd version, or moving HW around can unfortunately result in such name changes. IME the ones from kernel updates stabilize once all features of the HW are supported correctly and no new issues come up.
One thing to avoid such...
This model is not available from us or Debian, so it's best to see if upstream has a fix.
From a quick look there, it seems this has already been reported: https://github.com/google/gasket-driver/issues/23
That report includes a diff for a fix you could try to apply locally (open a new thread if you...
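Applying such a diff locally could look roughly like this (the patch file name is an assumption; save the diff from the issue under whatever name you like):

```shell
# Fetch the driver source and apply the fix from the upstream issue.
git clone https://github.com/google/gasket-driver.git
cd gasket-driver
# Save the diff from https://github.com/google/gasket-driver/issues/23
# as fix.patch in this directory, then apply it:
patch -p1 < fix.patch
# Afterwards, rebuild/install the module the way you did originally
# (e.g., via dkms or the project's packaging).
```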
If they could just be bothered to create actually good upstream drivers... But oh well, as this package is available from the Debian repos and has a significant number of users, we will look into this and provide an update soonish – thanks for the report in any case!
Thanks for your feedback, that's what we hoped for.
We even found a few other edge cases that got fixed in version 0.7.0 which just got added to the repos.
In that version we also expose a new virtual .version file in the user-space filesystem. This can be used to ensure that one is indeed...
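As an illustration, a version check could look like the following; note that the mount path here is an assumption, adjust it to wherever your ESXi import storage is actually mounted:

```shell
# Read the virtual .version file exposed at the top of the FUSE mount.
mnt="/run/pve/import/esxi/my-esxi"   # placeholder path, adapt to your setup
if [ -r "$mnt/.version" ]; then
    cat "$mnt/.version"
fi
```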
We recently uploaded a 6.8 kernel to our repositories; it will be used as the new default kernel in the next Proxmox VE 8.2 point release (Q2'2024).
This follows our tradition of upgrading the Proxmox VE kernel to match the current Ubuntu version until we reach an (Ubuntu) LTS release. This kernel...
This would require an extra flag that probably should be exposed per backup job.
As the backup client cannot actually distinguish a "normal" user-triggered backup from one that is triggered by the Proxmox VE backup stack ("vzdump"), and while I can still see that for some users this might be...
One thing worth a try might be updating pve-esxi-import-tools to the latest version 0.6.1, which got uploaded yesterday evening; after the update you need to disable the ESXi storage and then re-enable it to ensure the new version is used.
That version includes another improvement we found...
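In shell terms, the update-and-cycle sequence could look like this (the storage ID `esxi1` is a placeholder for your actual ESXi storage name):

```shell
apt update
apt install pve-esxi-import-tools   # should pull in 0.6.1 or newer

# Disable and re-enable the ESXi storage so the new FUSE helper is used:
pvesm set esxi1 --disable 1
pvesm set esxi1 --disable 0
```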
If it's the same problem, then the documented workaround that helped everyone having this exact problem in this thread will also help you.
If not, I recommend opening an enterprise support ticket with Dell's and/or our support, as Dell's HW engineers have full access to their proprietary system docs...
If this were easy we certainly would have done it, but we had to resort to whatever is actually available for Proxmox VE from the outside (i.e., no SDKs riddled with terms-of-service restrictions or the like).
Note that our storage migration with replication works exactly like that, so we're...
Yeah, we cannot hedge against all possible configurations users can create, and we certainly do not want to limit power users who understand their systems by restricting their ability to create NFS servers or the like (which is rather impossible to do for certain anyway).
Anyhow, if you add custom...