I have the same issue - have you found a fix yet?

Upgraded my 3 nodes one at a time with no issues, just followed the guide!
I do however have a Warning in Ceph.
Code:
Module 'restful' has failed dependency: PyO3 modules may only be initialized once per interpreter process
I'm new to Ceph and have only recently installed it and set up a pool. Nothing is actually installed on, or using, the Ceph storage yet, so I'm not sure whether this warning is a result of the upgrade or of my original install.
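For what it's worth, if you don't actually use the manager's REST API, disabling the 'restful' module should make the health warning go away. This is only a generic sketch with standard Ceph commands, not a confirmed fix for the PyO3 issue:

Bash:
# list the enabled manager modules, then disable 'restful' if you don't need its REST API
ceph mgr module ls
ceph mgr module disable restful
# the warning should clear from the health output afterwards
ceph health detail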
Many users have reported bugs with the current r8169 driver. (It seems to work with the dkms driver, so it's really a driver bug.) We need to wait for the Proxmox devs to backport a fix.

Severe issues on a Futro S740 mini PC with moderate network load - I see freezes/latency issues after upgrading to PVE 8.
Bash:
64 bytes from 192.168.179.9: icmp_seq=390 ttl=64 time=25.749 ms
64 bytes from 192.168.179.9: icmp_seq=391 ttl=64 time=2.216 ms
64 bytes from 192.168.179.9: icmp_seq=392 ttl=64 time=2.242 ms
64 bytes from 192.168.179.9: icmp_seq=393 ttl=64 time=12.196 ms
64 bytes from 192.168.179.9: icmp_seq=394 ttl=64 time=13.812 ms
64 bytes from 192.168.179.9: icmp_seq=395 ttl=64 time=2.054 ms
64 bytes from 192.168.179.9: icmp_seq=396 ttl=64 time=3.772 ms
Request timeout for icmp_seq 397
Request timeout for icmp_seq 398
Request timeout for icmp_seq 399
Request timeout for icmp_seq 400
Request timeout for icmp_seq 401
64 bytes from 192.168.179.9: icmp_seq=397 ttl=64 time=5657.127 ms
64 bytes from 192.168.179.9: icmp_seq=398 ttl=64 time=4652.875 ms
64 bytes from 192.168.179.9: icmp_seq=399 ttl=64 time=3648.646 ms
64 bytes from 192.168.179.9: icmp_seq=400 ttl=64 time=2645.291 ms
64 bytes from 192.168.179.9: icmp_seq=401 ttl=64 time=1643.965 ms
64 bytes from 192.168.179.9: icmp_seq=402 ttl=64 time=643.095 ms
64 bytes from 192.168.179.9: icmp_seq=403 ttl=64 time=5402.795 ms
64 bytes from 192.168.179.9: icmp_seq=404 ttl=64 time=4400.167 ms
64 bytes from 192.168.179.9: icmp_seq=405 ttl=64 time=3397.164 ms
64 bytes from 192.168.179.9: icmp_seq=406 ttl=64 time=2394.939 ms
64 bytes from 192.168.179.9: icmp_seq=407 ttl=64 time=1394.247 ms
64 bytes from 192.168.179.9: icmp_seq=408 ttl=64 time=396.327 ms
64 bytes from 192.168.179.9: icmp_seq=409 ttl=64 time=4107.552 ms
64 bytes from 192.168.179.9: icmp_seq=410 ttl=64 time=3109.809 ms
64 bytes from 192.168.179.9: icmp_seq=411 ttl=64 time=2105.314 ms
64 bytes from 192.168.179.9: icmp_seq=412 ttl=64 time=1102.421 ms
64 bytes from 192.168.179.9: icmp_seq=413 ttl=64 time=97.707 ms
64 bytes from 192.168.179.9: icmp_seq=414 ttl=64 time=3215.873 ms
64 bytes from 192.168.179.9: icmp_seq=415 ttl=64 time=2215.231 ms
64 bytes from 192.168.179.9: icmp_seq=416 ttl=64 time=1212.028 ms
64 bytes from 192.168.179.9: icmp_seq=417 ttl=64 time=209.033 ms
64 bytes from 192.168.179.9: icmp_seq=418 ttl=64 time=611.635 ms
Request timeout for icmp_seq 424
Request timeout for icmp_seq 425
64 bytes from 192.168.179.9: icmp_seq=419 ttl=64 time=7429.334 ms
64 bytes from 192.168.179.9: icmp_seq=420 ttl=64 time=6426.092 ms
64 bytes from 192.168.179.9: icmp_seq=421 ttl=64 time=5424.907 ms
64 bytes from 192.168.179.9: icmp_seq=422 ttl=64 time=4424.393 ms
64 bytes from 192.168.179.9: icmp_seq=423 ttl=64 time=3420.850 ms
64 bytes from 192.168.179.9: icmp_seq=424 ttl=64 time=2443.929 ms
64 bytes from 192.168.179.9: icmp_seq=425 ttl=64 time=1439.509 ms
64 bytes from 192.168.179.9: icmp_seq=426 ttl=64 time=440.033 ms
Request timeout for icmp_seq 434
Request timeout for icmp_seq 435
Request timeout for icmp_seq 436
Request timeout for icmp_seq 437
Request timeout for icmp_seq 438
64 bytes from 192.168.179.9: icmp_seq=427 ttl=64 time=12052.972 ms
64 bytes from 192.168.179.9: icmp_seq=428 ttl=64 time=11052.822 ms
64 bytes from 192.168.179.9: icmp_seq=429 ttl=64 time=10048.715 ms
64 bytes from 192.168.179.9: icmp_seq=430 ttl=64 time=9047.991 ms
64 bytes from 192.168.179.9: icmp_seq=431 ttl=64 time=8046.758 ms
64 bytes from 192.168.179.9: icmp_seq=432 ttl=64 time=7043.676 ms
Bash:
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
	DeviceName: Onboard - RTK Ethernet
	Subsystem: Fujitsu Technology Solutions RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
	Flags: bus master, fast devsel, latency 0, IRQ 20
	I/O ports at e000 [size=256]
	Memory at a1104000 (64-bit, non-prefetchable) [size=4K]
	Memory at a1100000 (64-bit, prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [70] Express Endpoint, MSI 01
	Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
	Capabilities: [d0] Vital Product Data
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [140] Virtual Channel
	Capabilities: [160] Device Serial Number 00-00-00-00-00-00-00-00
	Capabilities: [170] Latency Tolerance Reporting
	Kernel driver in use: r8169
	Kernel modules: r8169
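In case it helps others compare, this is the kind of quick check I'd run to confirm which driver is bound and to watch the kernel log while the latency spikes happen; enp2s0 is only an example interface name:

Bash:
# confirm the bound driver for the Realtek NIC and follow kernel messages during the spikes
lspci -nnk | grep -iA3 ethernet
ethtool -i enp2s0            # replace enp2s0 with your interface name
dmesg -wT | grep -i -e r8169 -e "link down" -e "link up"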
Job for pvestatd.service failed.
See "systemctl status pvestatd.service" and "journalctl -xeu pvestatd.service" for details.
dpkg: error processing package pve-manager (--configure):
installed pve-manager package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of proxmox-ve:
proxmox-ve depends on pve-manager; however:
Package pve-manager is not configured yet.
dpkg: error processing package proxmox-ve (--configure):
dependency problems - leaving unconfigured
Processing triggers for debianutils (5.7-0.4) ...
Processing triggers for libc-bin (2.36-9) ...
Processing triggers for proxmox-backup-file-restore (3.0.1-1) ...
Updating file-restore initramfs...
12101 blocks
Processing triggers for ca-certificates (20230311) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Processing triggers for pve-ha-manager (4.0.2) ...
Errors were encountered while processing:
pve-manager
proxmox-ve
Removing subscription nag from UI...
E: Sub-process /usr/bin/dpkg returned an error code (1)
Code:
Removing subscription nag from UI...
Jul 02 15:32:00 mrbyte pvestatd[1043]: ipcc_send_rec[4] failed: Connection refused
Jul 02 15:32:00 mrbyte pvestatd[1043]: status update error: Connection refused
Jul 02 15:32:00 mrbyte pveproxy[1947823]: worker exit
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947823 finished
Jul 02 15:32:00 mrbyte pveproxy[1073]: starting 1 worker(s)
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947828 started
Jul 02 15:32:00 mrbyte pveproxy[1947828]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_fil>
Jul 02 15:32:00 mrbyte pveproxy[1947824]: worker exit
Jul 02 15:32:00 mrbyte pveproxy[1947825]: worker exit
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947824 finished
Jul 02 15:32:00 mrbyte pveproxy[1073]: starting 1 worker(s)
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947829 started
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947825 finished
Jul 02 15:32:00 mrbyte pveproxy[1073]: starting 1 worker(s)
Jul 02 15:32:00 mrbyte pveproxy[1073]: worker 1947830 started
Jul 02 15:32:00 mrbyte pveproxy[1947829]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_fil>
Jul 02 15:32:00 mrbyte pveproxy[1947830]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_fil>
Jul 02 15:32:01 mrbyte cron[1918606]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)
Jul 02 15:32:02 mrbyte pve-firewall[1036]: status update error: Connection refused
Jul 02 15:32:02 mrbyte pvescheduler[1947831]: replication: Connection refused
Jul 02 15:32:02 mrbyte pvescheduler[1947832]: jobs: cfs-lock 'file-jobs_cfg' error: no quorum!
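Not a confirmed fix, but the "Connection refused" and "no quorum" lines usually mean pve-cluster (pmxcfs) isn't running, which is also why the postinst scripts fail. The rough order I'd try, sketched with standard commands:

Bash:
# check the cluster filesystem and status daemons, then let dpkg retry the failed configure steps
systemctl status pve-cluster pvestatd pveproxy
systemctl restart pve-cluster
dpkg --configure -a      # re-runs the pve-manager / proxmox-ve postinst scripts
apt -f install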
Hello!! I'm a newbie with this kind of stuff... I don't know what this means, I apologize...

Maybe buy a subscription with support tickets and ask the Proxmox staff very politely to fix
Removing subscription nag from UI...
for you, so it does not break the upgrade? At least that's how I interpret it...

Hello, thank you very much for your response.

Maybe buy a subscription with support tickets and ask the Proxmox staff very politely to fix
Removing subscription nag from UI...
for you, so it does not break the upgrade? At least that's how I interpret it...
Hi, sure, no problem. Here are the steps (a consolidated sketch follows the list):
- Proxmox Shell
- sudo nano /etc/apt/sources.list
- Add non-free at the end of the first bookworm main contrib line
- Ctrl+X, confirm saving, exit
- sudo apt update
- sudo apt install dkms
- sudo apt install r8168-dkms
- reboot
- ethtool -i %interfacename% (e.g. ethtool -i enp1s0) to check the loaded driver, should show r8168
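Roughly the same steps in one go, as a sketch; the Debian mirror in the example line and the interface name enp1s0 will likely differ on your system:

Bash:
# what the first bookworm line in /etc/apt/sources.list should end up looking like:
#   deb http://deb.debian.org/debian bookworm main contrib non-free
apt update
apt install dkms r8168-dkms
reboot
# after the reboot, "driver: r8168" in this output confirms the DKMS driver is in use
ethtool -i enp1s0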
I have tried, it doesn't work. And as far as I can tell, Nvidia doesn't plan on supporting 6.x kernels anytime soon.

Current Release Family: NVIDIA vGPU Software 15
vGPU Software | Linux vGPU Manager | Windows vGPU Manager | Linux Driver | Windows Driver | Release Date
15.3          | 525.125.03         | 529.06               | 525.125.06   | 529.11         | June 2023
NVIDIA-Linux-x86_64-525.125.03-vgpu-kvm.run
Has anyone tried installing this driver on PVE 8?
Everything works fine on 7.4-15
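For reference, the attempt looks roughly like this on the PVE host; the pve-headers package and the --dkms flag come from the usual NVIDIA .run installer procedure, so treat it as a sketch rather than a working recipe:

Bash:
# usual vGPU manager install attempt on a Proxmox VE host (did not succeed on the 6.2 kernel here)
apt install pve-headers-$(uname -r) build-essential dkms
chmod +x NVIDIA-Linux-x86_64-525.125.03-vgpu-kvm.run
./NVIDIA-Linux-x86_64-525.125.03-vgpu-kvm.run --dkms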
Is it possible to upgrade if your internet connection relies on a VM (OPNsense as router/firewall)?

Hi,
another user did it with success by downloading the packages first: https://forum.proxmox.com/threads/download-pve8-packages-and-continue-upgrade-offline.129804/

Thanks! I'll give it a shot.
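Another option, if you want to avoid depending on the router VM mid-upgrade: apt can pre-fetch everything while OPNsense is still routing, so the actual upgrade then runs from the local package cache. A sketch only; the repository changes to bookworm from the official upgrade guide still have to be made first:

Bash:
# with the router VM still up: refresh the (already bookworm) sources and download everything
apt update
apt dist-upgrade --download-only
# the real upgrade afterwards uses the cached .deb files and needs no connectivity
apt dist-upgrade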