You should uninstall pve-kernel-6.2, e.g. apt-get purge pve-kernel-6.2*, then simply reboot; afterwards uname -a should display something like this:
Linux my-pve-host 5.19.17-2-pve #1 SMP PREEMPT_DYNAMIC PVE 5.19.17-2 (Sat, 28 Jan 2023 16:40:25 x86_64 GNU/Linux
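As a concrete sketch (assuming a Debian-based PVE host; the purge and reboot steps are left commented out because they are destructive):

```shell
# List any installed 6.2 kernel packages first (Debian/PVE host assumed)
dpkg -l 'pve-kernel-6.2*' 2>/dev/null | grep '^ii' || echo "no pve-kernel-6.2 packages installed"

# Remove them and reboot (destructive, so commented out here):
# apt-get purge 'pve-kernel-6.2*'
# reboot

# After the reboot, confirm which kernel is running:
uname -r
```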
I also have this issue when upgrading to the optional 6.2 kernel. After downgrading back to 5.15 or 5.19, all LXCs start again.
I have also observed this behavior on PVE 7.3 and 7.2 with the optional 6.1 kernel.
PS: The LXC container is a Proxmox Mail Gateway instance.
Using LVM
OK, apparently the mdevs are also included during a backup. For example, I have a VM that never actually runs, but it has a vGPU assigned to it.
I now have the problem that I have ghost mdevs from VMs that are not running.
This is probably because I commented out the cleanup...
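For cleaning those up by hand, something like this should work (a sketch using the kernel's generic mdev sysfs interface; the actual removal is left commented out because it is destructive):

```shell
# Mediated devices are registered under /sys/bus/mdev/devices,
# one directory per mdev UUID.
MDEV_DIR=/sys/bus/mdev/devices
msg="no mdevs found"
if [ -d "$MDEV_DIR" ]; then
  for dev in "$MDEV_DIR"/*; do
    [ -e "$dev" ] || continue
    msg="found mdev: $(basename "$dev")"
    # To delete a ghost mdev whose VM is not running, write 1 to
    # its remove node (root required):
    # echo 1 > "$dev/remove"
  done
fi
echo "$msg"
```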
It's not that simple, unfortunately. First you have to check whether it is an NVIDIA GPU, and then whether the NVIDIA driver is >= 15.0. In addition, it should also check for < 16.0, in case NVIDIA changes something again in the future.
This is just a temporary fix and may break your system/mdevs. Only use it if you're using NVIDIA mdevs and no other mdevs (network cards, ...).
Additionally, this will be overwritten by Proxmox updates.
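That version gate could be sketched in shell like this. Assumption to verify against NVIDIA's release notes: the GRID/vGPU 15.x releases ship the 525 host driver branch, and 16.x ships 535.

```shell
# Decide whether the cleanup workaround is even relevant on this host.
msg=""
if command -v nvidia-smi >/dev/null 2>&1; then
  ver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)
  branch=${ver%%.*}   # e.g. "525" from "525.85.07"
  if [ "$branch" -ge 525 ] && [ "$branch" -lt 535 ]; then
    msg="driver $ver is likely a GRID 15.x build, workaround applies"
  else
    msg="driver $ver is probably outside the affected 15.x range"
  fi
else
  msg="no NVIDIA driver found, nothing to patch"
fi
echo "$msg"
```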
If you DM me I can give you a hint. I'm writing this here because I got an error saying I cannot message you.
Hmm, I don't understand why there is an error, and why there is no error when these lines are commented out. Maybe the error is somewhere else? How do I print to the log in Perl? Maybe I can figure it out...
I've been running the 14.4 drivers for a few weeks now, and 14.3 before that; both were really stable and did not clean up automatically. This behavior is also new to me on drivers 15.0 and 15.1.
I have no urgent reason to upgrade to the 15.0/15.1 drivers (since 14.x works great with Proxmox), but I'm...
OK, after commenting out lines 6127 and 6128 (which are 6099 and 6100 in my local /usr/share/perl5/PVE/QemuServer.pm), the function reads:

sub cleanup_pci_devices {
    my ($vmid, $conf) = @_;

    foreach my $key (keys %$conf) {
        next if $key !~ m/^hostpci(\d+)$/;
        my $hostpciindex = $1...
I can reproduce this issue using the NVIDIA GRID driver >= 15.0.
To get around this, use the 14.x drivers.
Anyway, here is my environment:
root@myhost:~# uname -a
Linux hostname 5.15.83-1-pve #1 SMP PVE 5.15.83-1 (2022-12-15T00:00Z) x86_64 GNU/Linux
As you can see in nvidia-smi vgpu, the VM...
Thanks a lot! I'll keep an eye on the ticket then.
I was able to solve my specific problem temporarily by simply replacing the "prl_nettool" binary with an empty Bash script. But that really only fixes the symptom, not the underlying problem.
Depending on...
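The replacement itself can be as small as this (a sketch; the real binary lives wherever your guest installs it, e.g. somewhere under /usr/bin, so adjust TOOL accordingly; a local path is used here so the sketch is safe to run):

```shell
# Replace the tool with a script that does nothing and reports success.
TOOL=./prl_nettool          # placeholder path, NOT the real install location
printf '#!/bin/sh\nexit 0\n' > "$TOOL"
chmod +x "$TOOL"
"$TOOL" && echo "no-op replacement in place"
```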
Hello,
let me try to present my problem in a somewhat structured way.
The situation is as follows:

File / Variable: /etc/hostname
Expected (per documentation): a / b / ...
Actual: a.pmg.example.com / b.pmg.example.com / ...
Note: wrong, see the following text ... (overwritten on reboot)...
Sorry to bring up this post again.
I can't get this to work. I now have a mainboard with SR-IOV support and a new CPU, so I don't think the MB / CPU is the problem.
I have installed and compiled the latest MxGPU drivers from kasperlewau/MxGPU-Virtualization. I can set the virtual functions and get...
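For reference, setting virtual functions usually goes through the kernel's generic SR-IOV sysfs interface (the PCI address 0000:03:00.0 is a placeholder for your GPU, and the write needs root; shown commented out since it reconfigures a live device):

```shell
DEV=/sys/bus/pci/devices/0000:03:00.0
msg=""
if [ -e "$DEV/sriov_totalvfs" ]; then
  msg="device supports up to $(cat "$DEV/sriov_totalvfs") VFs"
  # Create 4 virtual functions (root required):
  # echo 4 > "$DEV/sriov_numvfs"
  # lspci should then list the new virtual functions.
else
  msg="placeholder device not present or no SR-IOV capability"
fi
echo "$msg"
```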
Hi @aracno, I have the same issue on an ASUS KGPED16 with the latest BIOS. Did you solve your problem?
The server is booted with quiet reboot=cold mem=256G rcu_nocbs=0-31 amd_iommu=on iommu=pt pci=realloc enable_mtrr_cleanup=1 video=efifb:off
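For reference, on a GRUB-booted host those parameters would typically be set like this (this assumes GRUB rather than systemd-boot; run update-grub afterwards):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet reboot=cold mem=256G rcu_nocbs=0-31 amd_iommu=on iommu=pt pci=realloc enable_mtrr_cleanup=1 video=efifb:off"
```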