Proxmox VE 8.0 (beta) released!

That's a rather huge number of processes not being scheduled for some time because they're waiting on IO.
And you really don't notice any hangs or the like? Anything in the logs (journal)?
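If it helps with narrowing that down, a minimal sketch for listing the processes stuck in uninterruptible sleep (D state), which is what pushes the load average up while waiting for IO:
Code:
# list processes currently in uninterruptible sleep (state D), i.e. blocked on IO
ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'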
System is operating completely normally. At the time of writing this, there are 3 CTs and one VM running without any apparent issue.

There are no messages in journalctl after 17:40 yesterday (I still can't get used to having no rsyslog; maybe there are entries elsewhere?)

What is also interesting: when I posted the original report there were no guests running at all, yet the load average was 3+ and IO delay was averaging 50%. Today it's 2.3 with guests running, and IO delay is averaging 30%.

Most peculiar.
 
Upgraded my test node and can confirm an encrypted rpool with dropbear-initramfs is still working. But for some reason PuTTY won't allow me to log in as root, while WinSCP is still working. Both use the same pub/priv keys, and before the upgrade it was working fine. I chose to keep the existing /etc/ssh/sshd_config while upgrading, but even when editing the new default config file it won't allow me to log in with PuTTY.
Using PuTTY to unlock the rpool while booting still works fine.

Edit:

Pageant still has the private key whose public key is in "/root/.ssh/authorized_keys". "/etc/ssh/sshd_config" still has the lines PermitRootLogin yes and PasswordAuthentication no.

If I remember right, when the config differences were shown during the upgrade, only the...
Code:
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
KbdInteractiveAuthentication no
...block changed.
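For reference, these are the sshd_config settings I'd expect to matter for key-based root login (standard OpenSSH options; the values below are just what this setup should need, not a confirmed fix):
Code:
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin yes          # or prohibit-password; both allow pubkey root login
PubkeyAuthentication yes
PasswordAuthentication no
Restarting with systemctl restart ssh and watching journalctl -u ssh during a PuTTY login attempt should show whether the key is offered at all and why it gets rejected.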
 
Hi,

Yes, using apt autoremove will get rid of packages that were installed as dependencies but are no longer needed/depended upon.
I tried, but it found 0 packages. Should I remove them manually one by one?

Code:
root@pve:~# apt autoremove --purge
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
 
Hmm, odd, at what stage does it hang up?

As there's no systemd running in the installer, the hang of networking.service in the installed system must be a side effect. Any errors or warnings in the journal?

Just to be sure, what was the fix for your installed system to make the network work again?
At exactly the same moment as in the installed system: when detecting / bringing up the network interfaces.

I don't have a fix for this, but I do have a workaround: leave the networking service off until the system has booted. After that, a "systemctl start networking.service" works without further problems.

But this also means that all containers have to be started manually, because the vmbr was not present during boot.
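In case it helps with reproducing, the workaround boils down to something like this (a sketch; the container ID is just an example):
Code:
# keep networking out of the boot sequence for now
systemctl disable networking.service

# after the system is up, bring the network (and vmbr0) up manually
systemctl start networking.service

# then start the containers that depend on the bridge, e.g.
pct start 101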

@t.lamprecht
Anything else I can provide on the problem or ideas?
 
System is operating completely normally. At the time of writing this, there are 3 CTs and one VM running without any apparent issue.

There are no messages in journalctl after 17:40 yesterday (I still can't get used to having no rsyslog; maybe there are entries elsewhere?)

What is also interesting: when I posted the original report there were no guests running at all, yet the load average was 3+ and IO delay was averaging 50%. Today it's 2.3 with guests running, and IO delay is averaging 30%.

Most peculiar.
Mystery solved: a broken autofs mount.
 
I tried, but it found 0 packages. Should I remove them manually one by one?

Code:
root@pve:~# apt autoremove --purge
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

apt list '~c' shows leftover configurations, which you can then remove with apt purge '~c'.

Further, apt list '~o' shows obsolete packages; uninstall them with apt purge '~o'.
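Put together as a copy-pasteable sequence (same commands as above):
Code:
# packages that were removed but left config files behind
apt list '~c'
apt purge '~c'

# packages no longer available in any configured repository
apt list '~o'
apt purge '~o'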
 
I tried, but it found 0 packages. Should I remove them manually one by one?

Code:
root@pve:~# apt autoremove --purge
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Okay, it might be that the packages were manually installed or are dependencies of such, e.g., on my system:
Code:
root@enia ~log/apt # aptitude why libpython3.7-stdlib 
i   libpython3.7 Depends libpython3.7-stdlib (= 3.7.3-2+deb10u3)
root@enia ~log/apt # aptitude why libpython3.7       
Manually installed, current version 3.7.3-2+deb10u3, priority optional
No dependencies require to install libpython3.7

But there also seems to be a second category:
Code:
root@enia ~log/apt # aptitude why gcc-9-base
Automatically installed, current version 9.3.0-22, priority required
No dependencies require to install gcc-9-base
I haven't been able to find it from a quick search, but it might be that it's not autoremoved because of priority required.

While these packages are only local after the upgrade and so shouldn't be needed, I'd still be a bit careful with removing things you're not sure about.
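If one of those leftovers really is only kept because it's marked as manually installed, a sketch of how to hand it back to autoremove (the package name is just the example from above; check with aptitude why first):
Code:
# mark the package as automatically installed, then let autoremove decide
apt-mark auto libpython3.7
apt autoremove --purge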
 
After installing the 8 beta, my Windows 10 VM with GPU passthrough no longer runs, and always crashes the entire node immediately after starting.

Same with a Win 11 VM. Both do work without PCI (GPU) passthrough.

Both journalctl -xe and dmesg only show logs from after the crash. Any tips? Thanks.

This is the config of the Win10 guest:
Code:
agent: 1
args: -device vfio-pci,host=00:02.0,romfile=/etc/pve/qemu-server/i915ovmf.rom,x-igd-opregion=on
balloon: 0
bios: ovmf
boot: order=sata0;ide2;net0
cores: 2
cpu: host,hidden=1
efidisk0: storage:vm-102-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:00:1f.3
ide2: none,media=cdrom
machine: pc-q35-7.2
memory: 4096
meta: creation-qemu=7.2.0,ctime=1684587351
name: win10
net0: e1000=96:B3:46:FA:0C:EA,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
parent: all_working
sata0: storage:vm-102-disk-2,backup=0,discard=on,replicate=0,size=128G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=98aed783-45fe-4ff4-a508-e417b4d3108c
sockets: 1
spice_enhancements: foldersharing=1
tablet: 0
tags: windows
vga: none
vmgenid: 33ad5816-0574-4e7e-82f1-16aa982055ef
vmstatestorage: storage

Using opt-in kernel 6.2.11-2-pve. What is the PVE 8 default kernel?

This is my GRUB config:
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt initcall_blacklist=sysfb_init"
GRUB_CMDLINE_LINUX=""

My /etc/modprobe.d/vfio.conf (GPU and audio; they are in separate IOMMU groups, the GPU is in its own group):
Code:
options vfio-pci ids=8086:5916,8086:9d71
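For reference, a small sketch to re-check how the devices end up grouped on the new kernel (plain shell, nothing Proxmox-specific):
Code:
# print each IOMMU group with the devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done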
 
And where do I get the pve7to8 utility?

# pveversion
pve-manager/7.4-3/9002ab8a (running kernel: 5.15.107-1-pve)

# pve7to8
-bash: pve7to8: command not found

# apt show pve7to8
N: Unable to locate package pve7to8
N: Unable to locate package pve7to8
E: No packages found

# dpkg -S pve7to8
dpkg-query: no path found matching pattern *pve7to8*
 
And where do I get the pve7to8 utility?

# pveversion
pve-manager/7.4-3/9002ab8a (running kernel: 5.15.107-1-pve)

# pve7to8
-bash: pve7to8: command not found

# apt show pve7to8
N: Unable to locate package pve7to8
N: Unable to locate package pve7to8
E: No packages found

# dpkg -S pve7to8
dpkg-query: no path found matching pattern *pve7to8*
I think the latest 7.4 has the pve7to8 utility.
 
And where to get the utility pve7to8?
It's shipped by the pve-manager package; you need to first upgrade to the latest 7.4 to get it. As mentioned various times in the upgrade docs, upgrading to the latest 7.x is a requirement for a pain-free upgrade.
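In practice that boils down to something like this on the Proxmox VE 7 node (a sketch, assuming the standard repositories are configured):
Code:
apt update
apt dist-upgrade    # pulls in the latest pve-manager 7.4, which ships pve7to8
pve7to8 --full      # then run the upgrade checklist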
 
Code:
pve7to8 --full
complains
Code:
WARN: Found at least one CT (160) which does not support running in a unified cgroup v2 layout
    Consider upgrading the Containers distro or set systemd.unified_cgroup_hierarchy=0 in the Proxmox VE hosts' kernel cmdline! Skipping further CT compat checks.

This is a Debian 12 CT, upgraded from the 11.7.1 CT-Template.

Does the pve7to8 script detect the systemd version of the Debian 12 CT wrongly?
https://forum.proxmox.com/threads/u...ade-warning-pve-6-4-to-7-0.92459/#post-403019
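For what it's worth, a quick way to double-check which systemd version the container actually reports (CT 160 taken from the warning above):
Code:
# query the systemd version inside the container from the host
pct exec 160 -- systemctl --version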
 
Hi,
Code:
pve7to8 --full
complains
Code:
WARN: Found at least one CT (160) which does not support running in a unified cgroup v2 layout
    Consider upgrading the Containers distro or set systemd.unified_cgroup_hierarchy=0 in the Proxmox VE hosts' kernel cmdline! Skipping further CT compat checks.

This is a Debian 12 CT, upgraded from the 11.7.1 CT-Template.

Does the pve7to8 script detect the systemd version of the Debian 12 CT wrongly?
https://forum.proxmox.com/threads/u...ade-warning-pve-6-4-to-7-0.92459/#post-403019
Should be fixed in git: https://git.proxmox.com/?p=pve-manager.git;a=commit;h=591f411f729e2f9c2f1fd45540a4d777c16b3245 and will be in pve-manager 7.4-14 once it's released.
 
Can you explain why specific CT templates like Ubuntu 20.04, Debian 10 and Alpine 3.17 are not available?
 
Can you explain why specific CT templates like Ubuntu 20.04, Debian 10 and Alpine 3.17 are not available?
They are already quite old and have newer supported releases available. Also, we don't want to promote the use of distribution releases when newer ones are already available, like 22.04 LTS for Ubuntu, Debian 11 and now also Debian 12 (image coming soon), or Alpine Linux 3.18.

Debian 10 is even EOL for standard support; starting out with releases that already have two newer major releases should be avoided, and if really required one can download them manually from the archive and check their hash sum against an older index, e.g., the one from Proxmox VE 7.
Existing containers with those older releases will naturally continue to work, as will templates already downloaded in Proxmox VE 7.

Ubuntu 20.04 would be the only one I could imagine still having some use, due to some software targeting that version, and it's in standard support until 2025, so if there's popular demand we can add it again to the Proxmox VE 8 container template index.
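For anyone who really needs one of the dropped templates, a rough sketch of the manual route described above (the URL layout and file name are assumptions based on how the template repository is usually structured; the checksum has to come from the older Proxmox VE 7 index):
Code:
cd /var/lib/vz/template/cache
# example file name, adjust to the template actually needed
wget http://download.proxmox.com/images/system/ubuntu-20.04-standard_20.04-1_amd64.tar.gz
# compare against the SHA-512 sum listed in the Proxmox VE 7 appliance index
sha512sum ubuntu-20.04-standard_20.04-1_amd64.tar.gz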
 
They are already quite old and have newer supported releases available. Also, we don't want to promote the use of distribution releases when newer ones are already available, like 22.04 LTS for Ubuntu, Debian 11 and now also Debian 12 (image coming soon), or Alpine Linux 3.18.

Debian 10 is even EOL for standard support; starting out with releases that already have two newer major releases should be avoided, and if really required one can download them manually from the archive and check their hash sum against an older index, e.g., the one from Proxmox VE 7.
Existing containers with those older releases will naturally continue to work, as will templates already downloaded in Proxmox VE 7.

Ubuntu 20.04 would be the only one I could imagine still having some use, due to some software targeting that version, and it's in standard support until 2025, so if there's popular demand we can add it again to the Proxmox VE 8 container template index.

Yes, please add Ubuntu 20.04 back! I have software that still depends on that version.
 
After installing the 8.0 beta from the ISO, on the first update run from pvetest I get:

<snip>
Setting up libzpool5linux (2.1.12-pve1) ...
Setting up zfsutils-linux (2.1.12-pve1) ...
Setting up zfs-initramfs (2.1.12-pve1) ...
Setting up zfs-zed (2.1.12-pve1) ...
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.2.16-2-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/F7B6-1F9B
Copying kernel and creating boot-entry for 6.2.16-1-pve
Copying kernel and creating boot-entry for 6.2.16-2-pve
Couldn't find EFI system partition. It is recommended to mount it to /boot or /efi. <--- !
Alternatively, use --esp-path= to specify path to mount point.
Processing triggers for libc-bin (2.36-9) ...
Processing triggers for pve-manager (8.0.0~8) ...
Processing triggers for man-db (2.11.2-2) ...
<snip>

The system has an EFI partition, EFI boot is active (the system has no legacy boot), efibootmgr shows EFI entries, and EFI is active, as the /sys/firmware/efi/efivars dir exists.
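A quick sketch of the checks behind that statement (commands only, output omitted):
Code:
# confirm the system booted via UEFI and list the boot entries
ls /sys/firmware/efi/efivars | head
efibootmgr -v

# see how proxmox-boot-tool currently handles the ESP(s)
proxmox-boot-tool status

# check whether an ESP is actually mounted at /boot/efi (or /efi)
findmnt /boot/efi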

Should I worry about this message? Maybe @Stoiko Ivanov will know!?
 
Installed proxmox-ve_8.0-BETA-1.iso
I have a problem installing the vGPU driver.

I install the driver with the following commands:
Code:
chmod +x ./NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run
./NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run --dkms

And here are the error messages it gives me:

ERROR: An error occurred while performing the step: "Building kernel modules". See /var/log/nvidia-installer.log
for details.

ERROR: An error occurred while performing the step: "Checking to see whether the nvidia-vgpu-vfio kernel module
was successfully built". See /var/log/nvidia-installer.log for details.

ERROR: The nvidia-vgpu-vfio kernel module was not created.

ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find
suggestions on fixing installation problems in the README available on the Linux driver download page at
www.nvidia.com.

What could be the problem?

Everything works fine on PVE 7.4.13
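A common reason for a DKMS build failing right after a major upgrade is missing headers for the running kernel; a sketch of what to check first (the pve-headers package name is the usual one on PVE, so treat it as an assumption for the 8.0 beta):
Code:
# headers for the currently running kernel must be present for DKMS builds
apt install pve-headers-$(uname -r)

# then inspect what the installer actually tripped over
less /var/log/nvidia-installer.log
dkms status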
 
@t.lamprecht
Thank you very much for updating the kernel to 6.2.16-2 !!!
My Intel Arc A380 works now! It couldn't load the firmware with 6.2.16-1, on top of a lot of other bugs I had with that card!

But I wasn't sure where my PCIe bugs came from, so I didn't report anything, because I didn't think it was related to the kernel drivers.
I mean, I wasn't sure; it could have been my motherboard, the card itself, or the M.2 to PCIe 4.0 x4 adapter... (I actually ordered another M.2 adapter to test with.)

However, the update today to 6.2.16-2 fixed all my issues, lol.
Not sure what you changed and whether you actually merged some fixes related to that or not, but to be sure I rebooted 5x into both 6.2.16-1 and 6.2.16-2, and it's definitely a kernel issue, lol.

Code:
[    6.495233] i915 0000:03:00.0: [drm] VT-d active for gfx access
[    6.497358] i915 0000:03:00.0: [drm] Local memory IO size: 0x000000017c800000
[    6.497575] i915 0000:03:00.0: [drm] Local memory available: 0x000000017c800000
[    6.701447] i915 0000:03:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
[    6.727179] i915 0000:03:00.0: [drm] Finished loading DMC firmware i915/dg2_dmc_ver2_08.bin (v2.8)
[    7.018038] i915 0000:03:00.0: [drm] GuC firmware i915/dg2_guc_70.bin version 70.5.1
[    7.018215] i915 0000:03:00.0: [drm] HuC firmware i915/dg2_huc_gsc.bin version 7.10.3
[    7.051274] i915 0000:03:00.0: [drm] GuC submission enabled
[    7.051448] i915 0000:03:00.0: [drm] GuC SLPC enabled
[    7.056404] i915 0000:03:00.0: [drm] GuC RC: enabled
[    7.230234] [drm] Initialized i915 1.6.0 20201103 for 0000:03:00.0 on minor 1
[    7.270417] snd_hda_intel 0000:04:00.0: bound 0000:03:00.0 (ops i915_audio_component_bind_ops [i915])
[    7.272674] i915 0000:03:00.0: [drm] Cannot find any crtc or sizes
[    7.273141] i915 0000:03:00.0: [drm] Cannot find any crtc or sizes
[    7.312641] mei_gsc i915.mei-gscfi.768: FW not ready: resetting: dev_state = 1 pxp = 0
[    7.314770] mei_gsc i915.mei-gsc.768: FW not ready: resetting: dev_state = 2 pxp = 2
[    7.314796] mei_gsc i915.mei-gscfi.768: FW not ready: resetting: dev_state = 1 pxp = 0
[    7.315025] mei_gsc i915.mei-gsc.768: unexpected reset: dev_state = ENABLED fw status = 00000345 84670000 00000000 00000000 E0020002 00000000
[    7.324076] mei_gsc i915.mei-gscfi.768: unexpected reset: dev_state = INIT_CLIENTS fw status = 00000345 00000000 00000000 00000000 00000000 00000000
[    7.709747] i915 0000:03:00.0: [drm] HuC authenticated
[    7.710161] mei_pxp i915.mei-gsc.768-fbf6fcf1-96cf-4e2e-a6a6-1bab8cbe36b1: bound 0000:03:00.0 (ops i915_pxp_tee_component_ops [i915])

That's the working boot with 6.2.16-2. If you're interested in the 6.2.16-1 logs (I doubt it), I can post them here as well.

Thanks & Cheers!

EDIT:
I just tested my Jellyfin & Plex Docker containers inside my LXC container (with device passthrough, etc.; roughly the config sketched below).
It works the first time, absolutely fluid and perfect with hardware transcoding!
intel_gpu_top shows that the GPU renders properly as well!
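For reference, the device-passthrough part of the LXC config looks roughly like this (a sketch with the usual /dev/dri device numbers; the exact major/minor numbers and paths may differ on other setups):
Code:
# /etc/pve/lxc/<ctid>.conf (excerpt)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir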

I didn't think I would be able to get it running, lol, but whatever you changed/updated in the kernel, THANK YOU!
 
