Proxmox VE 8.0 released!

Upgraded my 3 nodes one at a time with no issues, just followed the guide!

I do however have a Warning in Ceph.
Code:
Module 'restful' has failed dependency: PyO3 modules may only be initialized once per interpreter process

I'm new to Ceph and have only recently installed it and set up a pool. I currently have nothing actually installed on or using the Ceph storage, so I'm not sure whether this is a result of the upgrade or of my original install.
I have the same issue. Have you found a resolution yet?
 
No, but curiously it happens on only one of my two S740s. I have some more of these and will do further testing. They have different BIOS versions; maybe there is a relation...
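For what it's worth, the PyO3 warning quoted above comes from the Ceph manager's optional restful module. A hedged sketch of things to check follows; whether disabling the module is acceptable depends on your setup, and the node name is a placeholder:

```shell
# List mgr modules and see whether 'restful' is among the enabled ones
ceph mgr module ls

# If the RESTful API is not used, disabling the module clears the warning
ceph mgr module disable restful

# Restarting the active manager daemon may also silence the warning,
# though possibly only until the module is re-initialized
# (replace <node> with the node's actual name)
systemctl restart ceph-mgr@<node>.service
```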
 
Severe issues on a Futro S740 mini PC under moderate network load: I see freezes/latency spikes after upgrading to PVE 8.

Bash:
64 bytes from 192.168.179.9: icmp_seq=390 ttl=64 time=25.749 ms
64 bytes from 192.168.179.9: icmp_seq=391 ttl=64 time=2.216 ms
64 bytes from 192.168.179.9: icmp_seq=392 ttl=64 time=2.242 ms
64 bytes from 192.168.179.9: icmp_seq=393 ttl=64 time=12.196 ms
64 bytes from 192.168.179.9: icmp_seq=394 ttl=64 time=13.812 ms
64 bytes from 192.168.179.9: icmp_seq=395 ttl=64 time=2.054 ms
64 bytes from 192.168.179.9: icmp_seq=396 ttl=64 time=3.772 ms
Request timeout for icmp_seq 397
Request timeout for icmp_seq 398
Request timeout for icmp_seq 399
Request timeout for icmp_seq 400
Request timeout for icmp_seq 401
64 bytes from 192.168.179.9: icmp_seq=397 ttl=64 time=5657.127 ms
64 bytes from 192.168.179.9: icmp_seq=398 ttl=64 time=4652.875 ms
64 bytes from 192.168.179.9: icmp_seq=399 ttl=64 time=3648.646 ms
64 bytes from 192.168.179.9: icmp_seq=400 ttl=64 time=2645.291 ms
64 bytes from 192.168.179.9: icmp_seq=401 ttl=64 time=1643.965 ms
64 bytes from 192.168.179.9: icmp_seq=402 ttl=64 time=643.095 ms
64 bytes from 192.168.179.9: icmp_seq=403 ttl=64 time=5402.795 ms
64 bytes from 192.168.179.9: icmp_seq=404 ttl=64 time=4400.167 ms
64 bytes from 192.168.179.9: icmp_seq=405 ttl=64 time=3397.164 ms
64 bytes from 192.168.179.9: icmp_seq=406 ttl=64 time=2394.939 ms
64 bytes from 192.168.179.9: icmp_seq=407 ttl=64 time=1394.247 ms
64 bytes from 192.168.179.9: icmp_seq=408 ttl=64 time=396.327 ms
64 bytes from 192.168.179.9: icmp_seq=409 ttl=64 time=4107.552 ms
64 bytes from 192.168.179.9: icmp_seq=410 ttl=64 time=3109.809 ms
64 bytes from 192.168.179.9: icmp_seq=411 ttl=64 time=2105.314 ms
64 bytes from 192.168.179.9: icmp_seq=412 ttl=64 time=1102.421 ms
64 bytes from 192.168.179.9: icmp_seq=413 ttl=64 time=97.707 ms
64 bytes from 192.168.179.9: icmp_seq=414 ttl=64 time=3215.873 ms
64 bytes from 192.168.179.9: icmp_seq=415 ttl=64 time=2215.231 ms
64 bytes from 192.168.179.9: icmp_seq=416 ttl=64 time=1212.028 ms
64 bytes from 192.168.179.9: icmp_seq=417 ttl=64 time=209.033 ms
64 bytes from 192.168.179.9: icmp_seq=418 ttl=64 time=611.635 ms
Request timeout for icmp_seq 424
Request timeout for icmp_seq 425
64 bytes from 192.168.179.9: icmp_seq=419 ttl=64 time=7429.334 ms
64 bytes from 192.168.179.9: icmp_seq=420 ttl=64 time=6426.092 ms
64 bytes from 192.168.179.9: icmp_seq=421 ttl=64 time=5424.907 ms
64 bytes from 192.168.179.9: icmp_seq=422 ttl=64 time=4424.393 ms
64 bytes from 192.168.179.9: icmp_seq=423 ttl=64 time=3420.850 ms
64 bytes from 192.168.179.9: icmp_seq=424 ttl=64 time=2443.929 ms
64 bytes from 192.168.179.9: icmp_seq=425 ttl=64 time=1439.509 ms
64 bytes from 192.168.179.9: icmp_seq=426 ttl=64 time=440.033 ms
Request timeout for icmp_seq 434
Request timeout for icmp_seq 435
Request timeout for icmp_seq 436
Request timeout for icmp_seq 437
Request timeout for icmp_seq 438
64 bytes from 192.168.179.9: icmp_seq=427 ttl=64 time=12052.972 ms
64 bytes from 192.168.179.9: icmp_seq=428 ttl=64 time=11052.822 ms
64 bytes from 192.168.179.9: icmp_seq=429 ttl=64 time=10048.715 ms
64 bytes from 192.168.179.9: icmp_seq=430 ttl=64 time=9047.991 ms
64 bytes from 192.168.179.9: icmp_seq=431 ttl=64 time=8046.758 ms
64 bytes from 192.168.179.9: icmp_seq=432 ttl=64 time=7043.676 ms


Bash:
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
    DeviceName: Onboard - RTK Ethernet
    Subsystem: Fujitsu Technology Solutions RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
    Flags: bus master, fast devsel, latency 0, IRQ 20
    I/O ports at e000 [size=256]
    Memory at a1104000 (64-bit, non-prefetchable) [size=4K]
    Memory at a1100000 (64-bit, prefetchable) [size=16K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [70] Express Endpoint, MSI 01
    Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
    Capabilities: [d0] Vital Product Data
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Virtual Channel
    Capabilities: [160] Device Serial Number 00-00-00-00-00-00-00-00
    Capabilities: [170] Latency Tolerance Reporting
    Kernel driver in use: r8169
    Kernel modules: r8169
Many users have reported bugs with the current r8169 driver. (It seems to work with the DKMS driver, so it really is a driver bug.) We need to wait for the Proxmox devs to backport a fix.
 
I need to withdraw my findings. It's NOT caused by a system/kernel/driver issue: the FRITZ!Box port was accidentally in 100 Mbit "green mode" instead of GBit (the cable was plugged into the wrong port). That seems to lead to weird behaviour in packet priority handling.
 
Hello!!
I've followed the guide to update to 8.0, but it has failed. These are the warnings:

Code:
Job for pvestatd.service failed.
See "systemctl status pvestatd.service" and "journalctl -xeu pvestatd.service" for details.
dpkg: error processing package pve-manager (--configure):
 installed pve-manager package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of proxmox-ve:
 proxmox-ve depends on pve-manager; however:
  Package pve-manager is not configured yet.


dpkg: error processing package proxmox-ve (--configure):
 dependency problems - leaving unconfigured
Processing triggers for debianutils (5.7-0.4) ...
Processing triggers for libc-bin (2.36-9) ...
Processing triggers for proxmox-backup-file-restore (3.0.1-1) ...
Updating file-restore initramfs...
12101 blocks
Processing triggers for ca-certificates (20230311) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Processing triggers for pve-ha-manager (4.0.2) ...
Errors were encountered while processing:
 pve-manager
 proxmox-ve
Removing subscription nag from UI...
E: Sub-process /usr/bin/dpkg returned an error code (1)

I don't know what to do now…
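Not from the thread, but the generic Debian first aid for packages left unconfigured is usually worth trying before anything else. These are standard dpkg/apt/systemd commands, not a guaranteed fix for this particular failure:

```shell
# Retry configuring packages that dpkg left half-installed
dpkg --configure -a

# Let apt try to resolve remaining dependency problems
apt -f install

# Inspect why the failing service would not start, as the error suggests
systemctl status pvestatd.service
journalctl -xeu pvestatd.service
```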
 
Hello, this is the journalctl output:
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@mrbyte:~# journalctl -xe
Code:
Jul 02 15:32:00 mrbyte pvestatd[1043]: ipcc_send_rec[4] failed: Connection refused
Jul 02 15:32:00 mrbyte pvestatd[1043]: status update error: Connection refused
Jul 02 15:32:00 mrbyte pveproxy[1947823]: worker exit
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947823 finished
Jul 02 15:32:00 mrbyte pveproxy[1073]: starting 1 worker(s)
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947828 started
Jul 02 15:32:00 mrbyte pveproxy[1947828]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_fil>
Jul 02 15:32:00 mrbyte pveproxy[1947824]: worker exit
Jul 02 15:32:00 mrbyte pveproxy[1947825]: worker exit
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947824 finished
Jul 02 15:32:00 mrbyte pveproxy[1073]: starting 1 worker(s)
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947829 started
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947825 finished
Jul 02 15:32:00 mrbyte pveproxy[1073]: starting 1 worker(s)
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947830 started
Jul 02 15:32:00 mrbyte pveproxy[1947829]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_fil>
Jul 02 15:32:00 mrbyte pveproxy[1947830]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_fil>
Jul 02 15:32:01 mrbyte cron[1918606]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)
Jul 02 15:32:02 mrbyte pve-firewall[1036]: status update error: Connection refused
Jul 02 15:32:02 mrbyte pvescheduler[1947831]: replication: Connection refused
Jul 02 15:32:02 mrbyte pvescheduler[1947832]: jobs: cfs-lock 'file-jobs_cfg' error: no quorum!

It's only a part of it; I wouldn't like to spam the thread with log messages.

Best regards!!
 
Hello!! I'm a newbie with this kind of stuff... I don't know what this means, I apologize...
Maybe buy a subscription with support tickets and ask the Proxmox staff very politely to fix "Removing subscription nag from UI..." for you so it does not break the upgrade? At least that's how I interpret it...
 
Maybe buy a subscription with support tickets and ask the Proxmox staff very politely to fix "Removing subscription nag from UI..." for you so it does not break the upgrade? At least that's how I interpret it...
Hello, thank you very much for your response. :)

As far as I can remember, when I installed Proxmox months ago I followed one of the thousands of blog posts that explain how to remove the subscription popup on login. Do you think that this broke my update?


Thank you very much.
 
Hello!!

Finally, I've managed to fix the errors:
  1. Deleted the Enterprise repository and the files related to pve-manager in /var/lib/dpkg/info/.
  2. Then ran a dist-upgrade.
  3. Rebooted.
However, I can't reach the web UI...
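The recovery steps described above correspond roughly to the following; treat this as a sketch of that post, not a recommended procedure (deleting dpkg state files is a last resort):

```shell
# Remove the Enterprise repository list
rm /etc/apt/sources.list.d/pve-enterprise.list

# Remove the stale dpkg maintainer scripts for pve-manager
rm /var/lib/dpkg/info/pve-manager.*

# Re-run the upgrade, then reboot
apt update
apt dist-upgrade
reboot
```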



Best regards!! :)
 
Sure, no problem

  • Proxmox Shell
  • sudo nano /etc/apt/sources.list
  • Add non-free at the end of the first bookworm main contrib line
  • Control + X, save exit
  • sudo apt update
  • sudo apt install dkms
  • sudo apt install r8168-dkms
  • reboot
  • ethtool -i %interfacename% (e.g. ethtool -i enp1s0) to check the loaded driver, should show r8168
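The bullet steps above as a shell sketch; the interface name enp1s0 is an example, and the exact sources.list line depends on your install:

```shell
# Edit /etc/apt/sources.list and append "non-free" to the first
# "bookworm main contrib" line, then refresh the package index
apt update

# Build and install Realtek's vendor driver via DKMS
apt install dkms
apt install r8168-dkms

reboot

# After the reboot, confirm which driver the NIC is using
# (substitute your own interface name)
ethtool -i enp1s0    # should report "driver: r8168"
```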
Hi,
Thanks a lot for this procedure; it saved me a lot of time.
I had network disconnection issues. And now my system is working fine.
Thanks again for your help.
 
Hi!
I'm testing Proxmox PVE 8; I found the following:
"pve-kernel-6.2.16-3-pve"
"mdadm - v4.2 - 2021-12-30 - Debian 4.2-5"


Results:
At first I thought it was a systemd bug, but after further digging I found it is a kernel bug (mdadm + udev).
Linux software RAID (mdadm) is totally broken:
- The Proxmox host cannot be shut down/restarted due to the "MD loop messages"; only a power cut works.

Same error (solution was kernel upgrade):
https://forums.rockylinux.org/t/rocky-9-1-system-with-md-devices-never-shuts-down/8534
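To see whether a host uses mdadm arrays that could hit this shutdown hang, the array state can be inspected beforehand (the device name is an example):

```shell
# Show software-RAID arrays known to the kernel
cat /proc/mdstat

# Detailed state of one array; substitute your own md device
mdadm --detail /dev/md0
```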
 
Another user hit by the r8169 driver issue, I think
Had the issue with an upgraded system, though it went away when the node wasn't part of a cluster.

Did a full delete and install, still had the issue.
Took the drive out of the chassis and put it in an older computer, it worked for about 12 hours and then the bug came back.
 

Current Release Family: NVIDIA vGPU Software 15​

vGPU Software | Linux vGPU Manager | Windows vGPU Manager | Linux Driver | Windows Driver | Release Date
15.3 | 525.125.03 | 529.06 | 525.125.06 | 529.11 | June 2023

NVIDIA-Linux-x86_64-525.125.03-vgpu-kvm.run
Has anyone tried installing this driver on PVE 8?

Everything works fine on 7.4-15
I have tried; it doesn't work. And as far as I can tell, NVIDIA doesn't plan on supporting 6.x kernels anytime soon.
 
Is it possible to upgrade if your internet connection relies on a VM (OPNsense as router/firewall)?
 
Please help a new Proxmox user.
Can I have a cluster with VE 7.4 and VE 8?
I have not set up a cluster before, but I read the available documentation and it seems doable.
Why am I asking this? Because uptime is critical for me, and I thought of the following plan.


I have a VE 7.4 node that runs a dozen machines and I want to upgrade it to 8.

Create a cluster with a spare laptop I have available (it has a 1 TB NVMe and a 1 TB external SSD, so more space than I really need).
This new laptop will have VE 8; then migrate all the CTs/VMs to this laptop.
If everything goes well and without issues:

Shut down the old NUC with 7.4, format it, and install the new version 8.
(The NUC has a 512 GB NVMe and a 512 GB SSD.)

Join the cluster again and migrate all the machines back from the laptop to the NUC.
If everything goes well and without issues, remove the laptop
and have a cluster of only one VE.
(In the near future I will buy a 2nd NUC and have peace of mind.)

Am I missing something?
Is there a smarter, faster, more efficient way?

Any help greatly appreciated
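For reference, the plan above maps onto the standard pvecm commands; the cluster name and IP below are placeholders, and mixing 7.4 and 8 nodes is intended only as a transient state during an upgrade, not as a long-term configuration:

```shell
# On the existing node: create the cluster
pvecm create mycluster

# On the laptop: join it, pointing at the existing node's IP
pvecm add 192.168.1.10

# Check quorum and membership at any point
pvecm status
```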
 
