Proxmox VE 8.0 (beta) released!

I have been trying the 6.2 opt-in kernel (pve-kernel-6.2.11-2-pve). It is hanging and I haven't been able to find a reason. Will 8 run with the pve-kernel-5.15.107-2-pve kernel? Is the beta kernel the same?
Thanks
 
I have been trying the 6.2 opt-in kernel (pve-kernel-6.2.11-2-pve). It is hanging and I haven't been able to find a reason.
I did not see any thread from you about this; maybe open one with HW details and more specifics about what hangs and where. Ideally we get that sorted out so that staying on older kernels isn't the only option.

Will 8 run with the pve-kernel-5.15.107-2-pve kernel?
No, 6.2 is the new default, and running an older kernel permanently can lead to weird issues (e.g., it is built with an older compiler (from PVE 7) than the rest of the system (from PVE 8)).
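For anyone double-checking which kernel they are actually booting while testing, a minimal sketch (the proxmox-boot-tool line only applies to systems managed by it):
Code:
# currently running kernel
uname -r
# PVE kernel packages installed on the system
dpkg -l 'pve-kernel-*' | grep ^ii
# kernels known to the boot loader (proxmox-boot-tool managed systems only)
proxmox-boot-tool kernel list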

Is the beta kernel the same?
What do you mean here?
 
I did not see any thread from you about this; maybe open one with HW details and more specifics about what hangs and where. Ideally we get that sorted out so that staying on older kernels isn't the only option.


No, 6.2 is the new default, and running an older kernel permanently can lead to weird issues (e.g., it is built with an older compiler (from PVE 7) than the rest of the system (from PVE 8)).


What do you mean here?
I have had a few posts here:
https://forum.proxmox.com/threads/opt-in-kernel-panics.122589/#post-541798. The latest 6.2 just hangs. Nothing on the screen or in the logs that I have seen.

I will be having an operation tomorrow so will have time to experiment over the next week.

I was just wondering if the beta release kernel is the same as the opt-in 6.2 kernel?

Any hints welcome.
Thanks
 
At boot, my node stops at the networking.service start job (running, 17min 42 sec / no limit).
Same here ...
I had to boot with init=/bin/bash and remove the whole network config to be able to boot.

Afterwards I copied the network config back and did a "systemctl restart networking" to bring all bonding/bridges online again ...
Not sure why it hangs at boot time ... is there any way to get to the boot logs? (If there is any "auto" stanza in the config it hangs at boot time ... like auto enp46s0f0.)
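One way to get at the logs of the failed boot, assuming persistent journaling is enabled (Storage=persistent in /etc/systemd/journald.conf, or an existing /var/log/journal directory) - a sketch:
Code:
# list the boots the journal knows about
journalctl --list-boots
# everything from the previous boot
journalctl -b -1
# only networking.service from the previous boot
journalctl -b -1 -u networking.service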
 

Attachments: interfaces.txt (1.3 KB)
This is a timely release. I just finished building the hardware for another node. In this machine, I’m using an Intel 13th gen Raptor Lake CPU. I’m focused on power optimization for this node and adjusted the governors and low energy bias to power/save instead of performance.

I was scratching my head over why powertop wasn't showing the package getting deeper than the C3 package state. Turns out powertop from Debian's bullseye repositories is version 2.11, while bookworm's is 2.14.

Would you suggest I just go straight to the pve8 beta rather than try and pull just that one package from bookworm? I already had to hack away at the installer to get it to run (nomodeset and had to explicitly identify my iGPU for X11), so maybe best to just run on the bleeding edge vs. making pve7 play nice with the hardware.

Either way, I’m running the 6.2 kernel.
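For reference, a quick way to verify the governor and energy/performance bias actually took effect via sysfs - a sketch; the energy_perf_bias file assumes an Intel CPU on a reasonably recent kernel:
Code:
# active frequency scaling governor per policy
cat /sys/devices/system/cpu/cpufreq/policy*/scaling_governor
# energy/performance bias: 0 = performance ... 15 = power save
cat /sys/devices/system/cpu/cpu*/power/energy_perf_bias
# idle states exposed by the cpuidle driver
cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name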
 
This is a timely release. I just finished building the hardware for another node. In this machine, I’m using an Intel 13th gen Raptor Lake CPU. I’m focused on power optimization for this node and adjusted the governors and low energy bias to power/save instead of performance.

I was scratching my head over why powertop wasn't showing the package getting deeper than the C3 package state. Turns out powertop from Debian's bullseye repositories is version 2.11, while bookworm's is 2.14.

Would you suggest I just go straight to the pve8 beta rather than try and pull just that one package from bookworm? I already had to hack away at the installer to get it to run (nomodeset and had to explicitly identify my iGPU for X11), so maybe best to just run on the bleeding edge vs. making pve7 play nice with the hardware.

Either way, I’m running the 6.2 kernel.
So I tried upgrading to PVE 8 beta in-place using the pve7to8 instructions. It went fine initially, but then died when I tried to spin up my Windows VM. I tried that a few times without luck and then tried spinning up my Linux VM instead. That didn't work either. In both cases the kernel would go into a soft hang. I could initially ssh into it and issue a reboot, but then it would hang, eventually complaining that some helper threads were not responding.

I should note that I have my on-board SATA controller passed through as a PCIe device with all functions. This worked fine with 7.4, so I'll be reinstalling 7.4 on the node.

Edit to add: Reinstalled 7.4 and the VMs came up without issue, passing through my SATA controller.
 
I set up a test box for pve8, and initial reports look promising - but then I haven't done anything of actual "real world" value on it yet. This is on a single socket, 6 core machine. All looks well, but here's the weird part:

[screenshot: node summary showing the high load and ~50% IO delay]

This machine is idle. There is basically nothing running:

[screenshot: guest overview - nothing running]

It doesn't manifest any actual problems; VMs start and run with expected performance, containers too. I just don't understand where this load is coming from, and why it's reporting 50% IO delay.

thoughts?
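A quick check worth doing on such a box: look for tasks stuck in uninterruptible sleep (state D), since those drive both the load average and the IO delay figure. A sketch using standard tooling:
Code:
# tasks currently in uninterruptible sleep, with what they are waiting on
ps -eo state,pid,comm,wchan | awk '$1 == "D"'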
 
Same here ...
I had to boot with init=/bin/bash and remove the whole network config to be able to boot.

Afterwards I copied the network config back and did a "systemctl restart networking" to bring all bonding/bridges online again ...
Not sure why it hangs at boot time ... is there any way to get to the boot logs? (If there is any "auto" stanza in the config it hangs at boot time ... like auto enp46s0f0.)

I am also affected - having the same problem.

PVE hangs when booting and waits for networking.service

A workaround for this is:
  1. boot via Grub into Recovery Mode

  2. Code:
    systemctl disable networking

  3. reboot normal kernel

  4. Code:
    systemctl start networking

  5. start all virtual machines and containers manually (they are all shut down, because the interfaces were not present during/after boot).
This way, "networking" stays switched off across reboots and only has to be started manually after each reboot.
As soon as the issue is resolved, you can set this back to "enable".
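A possibly less invasive variant, if the goal is just to get through one boot without disabling the unit permanently: mask networking.service for that single boot from the boot loader (press e in GRUB and append to the linux line), then start it by hand once logged in. A sketch:
Code:
# appended to the kernel command line for a single boot
systemd.mask=networking.service
# then, after logging in
systemctl start networking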

I also got the following from the log:
Code:
ifupdown2: main.py:85:main(): error: main exception: name 'traceback' is not defined

However, this may just be a coincidence and have nothing to do with the issue. I hope this will be fixed soon, as I have a headless unit running.
 
I set up a test box for pve8, and initial reports look promising - but then I haven't done anything of actual "real world" value on it yet. This is on a single socket, 6 core machine. All looks well, but here's the weird part:

View attachment 51510

This machine is idle. There is basically nothing running:

View attachment 51511

It doesn't manifest any actual problems; VMs start and run with expected performance, containers too. I just don't understand where this load is coming from, and why it's reporting 50% IO delay.

thoughts?
Can you send the result of the following?

Code:
cat /proc/pressure/cpu
cat /proc/pressure/io
cat /proc/pressure/memory
 
I am also affected - having the same problem.

PVE hangs when booting and waits for networking.service

A workaround for this is:
  1. boot via Grub into Recovery Mode

  2. Code:
    systemctl disable networking

  3. reboot normal kernel

  4. Code:
    systemctl start networking

  5. start all virtual machines and containers manually (they are all shut down, because the interfaces were not present during/after boot).
This way, "networking" stays switched off across reboots and only has to be started manually after each reboot.
As soon as the issue is resolved, you can set this back to "enable".

I also got the following from the log:
Code:
ifupdown2: main.py:85:main(): error: main exception: name 'traceback' is not defined

However, this may just be a coincidence and have nothing to do with the issue. I hope this will be fixed soon, as I have a headless unit running.
You can enable debugging in /etc/default/networking:

Code:
DEBUG="yes"
VERBOSE="yes"
 
I also got the following from the log:
Code:
ifupdown2: main.py:85:main(): error: main exception: name 'traceback' is not defined

However, this may just be a coincidence and have nothing to do with the issue. I hope this will be fixed soon, as I have a headless unit running.
That's strange - line 85 in main.py is commented out by default ("# import traceback").
Did you make any manual changes to the ifupdown2 files?


But the main error seems to be that ifupdown2 thinks it's already running, or that a lock file still exists.

Does this error occur only once? Do later reboots work fine?
 

Attachments: Capture d’écran du 2023-06-13 07-14-36.png (76.6 KB)
Just wondering if an update to 6.3 will happen? 6.2 is already EOL as of today, so there are no updates to it at all from the official kernel side.
That's not how it works in the enterprise world. In this case, Ubuntu maintains the 6.2 kernel up to some point in the future.
 
I am also affected - having the same problem.

PVE hangs when booting and waits for networking.service
Same here ...
I had to boot with init=/bin/bash and remove the whole network config to be able to boot.
At boot, my node stops at the networking.service start job (running, 17min 42 sec / no limit).

Can those with network issues after the upgrade (e.g., through network device renames) please post some more details about their setup, at least:
  • NIC models, drivers and PCI addresses (e.g., lspci -k lists all PCI(e) devices and the kernel driver in use; something like lspci -k | grep -A3 -i ethernet might spare you from searching manually)
  • Motherboard or server vendor + model and CPU
  • the output of ip link
  • pveversion -v
That would be great! One way to collect all of this in one go is sketched below.
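A sketch that gathers the requested details in one pass (dmidecode and lscpu are just one way of getting the board and CPU info):
Code:
# NIC models, kernel drivers and PCI addresses
lspci -k | grep -A3 -i ethernet
# board vendor/model and CPU
dmidecode -s baseboard-manufacturer; dmidecode -s baseboard-product-name
lscpu | grep 'Model name'
# interface names and link state
ip link
# package versions
pveversion -v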
 
Just wondering if an update to 6.3 will happen? 6.2 is already EOL as of today, so there are no updates to it at all from the official kernel side.
6.2 is not EOL for us or for our kernel upstream Ubuntu; only the kernel.org stable x.y series no longer gets backported patches, but there are plenty of patches to pick from other stable releases and the stable Linux kernel mailing list.
 
So I tried upgrading to PVE 8 beta in-place using the pve7to8 instructions. It went fine initially, but then died when I tried to spin up my Windows VM. I tried that a few times without luck and then tried spinning up my Linux VM instead. That didn't work either. In both cases the kernel would go into a soft hang. I could initially ssh into it and issue a reboot, but then it would hang, eventually complaining that some helper threads were not responding.

I should note that I have my on-board SATA controller passed through as a PCIe device with all functions. This worked fine with 7.4, so I'll be reinstalling 7.4 on the node.

Edit to add: Reinstalled 7.4 and the VMs came up without issue, passing through my SATA controller.
If you have time, could you maybe check whether the IOMMU groups and the PCI addresses stayed the same across the upgrade?
It may be enough to opt in to the 6.2 kernel on 7.4 (to rule out weird kernel changes).
Or do you maybe still have the journal/syslogs?
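For comparing across the upgrade, a common way to dump the IOMMU groups together with the PCI addresses (a sketch; run it once on 7.4 and once on the 8 beta and diff the output):
Code:
# list every device per IOMMU group with its PCI address and IDs
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -n "  "; lspci -nns "${d##*/}"
  done
done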
 
