Proxmox VE 8.0 (beta) released!

Can those with network issues after upgrade (e.g., through network device renames) please post some more details about their setup, at least:
  • NIC models, drivers, and PCI addresses (e.g., lspci -k lists all PCI(e) devices and the kernel driver in use; something like lspci -k | grep -A3 -i ethernet might spare you from searching manually)
  • Motherboard or server Vendor + Model and CPU
  • the output of ip link
  • pveversion -v
That would be great!
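For convenience, something like the following might gather everything in one go (an untested sketch; dmidecode may need to be installed first):

Code:
# collect NIC/driver info, board and CPU model, link state, and package versions
lspci -k | grep -A3 -i ethernet
dmidecode -s baseboard-manufacturer; dmidecode -s baseboard-product-name
grep -m1 'model name' /proc/cpuinfo
ip link
pveversion -v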

Hi,

thanks for your feedback!

Code:
lspci -k | grep -A3 -i ethernet

02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
        DeviceName: Onboard - RTK Ethernet
        Subsystem: Fujitsu Technology Solutions RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
        Kernel driver in use: r8169
        Kernel modules: r8169

Server / Vendor: Fujitsu Futro S740

Code:
ip link (after applying my above-mentioned workaround)

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 4c:52:62:ba:ed:9c brd ff:ff:ff:ff:ff:ff
    altname enp2s0
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:0a:68:4d:d6:ad brd ff:ff:ff:ff:ff:ff
4: veth104i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:18:b9:30:3a:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 9a:a6:bc:69:d0:89 brd ff:ff:ff:ff:ff:ff link-netnsid 1
6: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:86:a2:58:f4:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
7: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:55:73:e7:d3:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 3
8: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 16:2b:ac:40:ef:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4
9: veth106i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:a2:40:ce:94:2e brd ff:ff:ff:ff:ff:ff link-netnsid 5

Code:
pveversion -v

proxmox-ve: 8.0.1 (running kernel: 6.2.16-1-pve)
pve-manager: 8.0.0~8 (running version: 8.0.0~8/61cf3e3d9af500c2)
pve-kernel-6.2: 8.0.0
pve-kernel-6.2.16-1-pve: 6.2.16-1
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-6.2.6-1-pve: 6.2.6-1
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.5
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.3
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-kernel-helper: 8.0.0
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.4
pve-cluster: 8.0.1
pve-container: 5.0.1
pve-docs: 8.0.1
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.1
pve-firmware: 3.7-1
pve-ha-manager: 4.0.1
pve-i18n: 3.0.2
pve-qemu-kvm: 8.0.2-2
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.3
smartmontools: 7.3-1+b1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.11-pve2

Thanks!!
 
Hello,
I upgraded to 8 beta.
Everything went without issues.
So far so good.

Can I get rid of leftover packages?
Code:
root@pve:/etc/apt/sources.list.d# apt list | grep ,local

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

gcc-10-base/now 10.2.1-6 amd64 [installed,local]
gcc-9-base/now 9.3.0-22 amd64 [installed,local]
libbpf0/now 1:0.3-2 amd64 [installed,local]
libcbor0/now 0.5.0+dfsg-2 amd64 [installed,local]
libdns-export1110/now 1:9.11.19+dfsg-2.1 amd64 [installed,local]
libffi7/now 3.3-6 amd64 [installed,local]
libflac8/now 1.3.3-2+deb11u1 amd64 [installed,local]
libicu67/now 67.1-7 amd64 [installed,local]
libisc-export1105/now 1:9.11.19+dfsg-2.1 amd64 [installed,local]
libldap-2.4-2/now 2.4.57+dfsg-3+deb11u1 amd64 [installed,local]
liblttng-ust-ctl4/now 2.12.1-1 amd64 [installed,local]
liblttng-ust0/now 2.12.1-1 amd64 [installed,local]
libmpdec3/now 2.5.1-1 amd64 [installed,local]
libperl5.32/now 5.32.1-4+deb11u2 amd64 [installed,local]
libprocps8/now 2:3.3.17-5 amd64 [installed,local]
libprotobuf23/now 3.12.4-1 amd64 [installed,local]
libpython3.9-minimal/now 3.9.2-1 amd64 [installed,local]
libpython3.9-stdlib/now 3.9.2-1 amd64 [installed,local]
libpython3.9/now 3.9.2-1 amd64 [installed,local]
libruby2.7/now 2.7.4-1+deb11u1 amd64 [installed,local]
libsepol1/now 3.1-1 amd64 [installed,local]
libssl1.1/now 1.1.1n-0+deb11u5 amd64 [installed,local]
liburcu6/now 0.12.2-1 amd64 [installed,local]
liburing1/now 0.7-3 amd64 [installed,local]
perl-modules-5.32/now 5.32.1-4+deb11u2 all [installed,local]
python3.9-minimal/now 3.9.2-1 amd64 [installed,local]
python3.9/now 3.9.2-1 amd64 [installed,local]
 
Many thanks for reporting the info! One thing that would help us better understand whether it really was systemd naming: boot the Proxmox VE 7.4-1 ISO, select Advanced -> Debug Mode in the initial GRUB boot menu, continue to the second prompt (just before the installer UI would start), and check the ip link output for differently named interfaces. You can then simply abort the installer.
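As a generic stop-gap for rename issues (just a sketch, not necessarily the right fix for this case): a systemd .link file can pin a NIC name to its MAC address, e.g. for the eno1 above:

Code:
# /etc/systemd/network/10-eno1.link  (hypothetical file name; adjust MAC and Name to your NIC)
[Match]
MACAddress=4c:52:62:ba:ed:9c

[Link]
Name=eno1

A reboot is needed for the new name to take effect, and /etc/network/interfaces must reference the chosen name.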
 
Hi,
Can I get rid of leftover packages?
Yes, using apt autoremove will get rid of packages that were installed as dependencies but are no longer needed/depended upon.
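For example (plain apt usage, nothing Proxmox-specific):

Code:
# simulate first to review what would be removed
apt-get -s autoremove
# then actually remove the orphaned packages
apt autoremove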
 
Hello,

Upgrade to Proxmox 8 with hyper-converged ceph worked without an issues.

But Ceph shows a Warning:

Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process
 

I have one more thing:
about the modules that we need to add:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

I just checked the kmod log and found out:
Code:
Failed to find module 'vfio_virqfd'

Could it be that this module is deprecated as well and the documentation needs an update?
 
Failed to find module 'vfio_virqfd'

Could it be that this module is deprecated as well and the documentation needs an update?

It seems the vfio_virqfd module has been integrated into the vfio module, at least according to the Arch wiki [1].

[1] https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#mkinitcpio
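If that is indeed the case, the list in /etc/modules would presumably shrink to something like this on the 6.2 kernel (a sketch based on the assumption above):

Code:
# /etc/modules -- VFIO modules for PCI passthrough
# vfio_virqfd is assumed to be merged into vfio on kernel 6.2+ and therefore dropped
vfio
vfio_iommu_type1
vfio_pci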
 
root@micromaster ~ # lspci -k | grep -A3 -i ethernet
26:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
Subsystem: ASRock Incorporation I210 Gigabit Network Connection
Kernel driver in use: igb
Kernel modules: igb
27:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
Subsystem: ASRock Incorporation I210 Gigabit Network Connection
Kernel driver in use: igb
Kernel modules: igb
--
2e:00.0 Ethernet controller: Intel Corporation Ethernet Controller X550 (rev 01)
Subsystem: Intel Corporation Ethernet 10G 2P X550-t Adapter
Kernel driver in use: ixgbe
Kernel modules: ixgbe
2e:00.1 Ethernet controller: Intel Corporation Ethernet Controller X550 (rev 01)
Subsystem: Intel Corporation Ethernet 10G 2P X550-t Adapter
Kernel driver in use: ixgb
Kernel modules: ixgbe

Mainboard: Asrock X570D4U
CPU: AMD Ryzen 9 5950X

root@micromaster ~ # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp38s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether d0:50:99:ff:c2:67 brd ff:ff:ff:ff:ff:ff
3: enp39s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether d0:50:99:ff:c2:66 brd ff:ff:ff:ff:ff:ff
4: enp46s0f0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN mode DEFAULT group default qlen 1000
link/ether b4:96:91:3d:81:b4 brd ff:ff:ff:ff:ff:ff
5: enx42b2594d06c0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 42:b2:59:4d:06:c0 brd ff:ff:ff:ff:ff:ff
6: enp46s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether b4:96:91:3d:81:b4 brd ff:ff:ff:ff:ff:ff permaddr b4:96:91:3d:81:b6
20: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether b4:96:91:3d:81:b4 brd ff:ff:ff:ff:ff:ff
21: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 76:19:ab:c0:01:6d brd ff:ff:ff:ff:ff:ff
22: bond0.4@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr4 state UP mode DEFAULT group default qlen 1000
link/ether b4:96:91:3d:81:b4 brd ff:ff:ff:ff:ff:ff
23: vmbr4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 82:d3:79:1f:23:b1 brd ff:ff:ff:ff:ff:ff
24: bond0.5@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr5 state UP mode DEFAULT group default qlen 1000
link/ether b4:96:91:3d:81:b4 brd ff:ff:ff:ff:ff:ff
25: vmbr5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d6:34:1e:9b:7d:93 brd ff:ff:ff:ff:ff:ff
26: bond0.31@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr31 state UP mode DEFAULT group default qlen 1000
link/ether b4:96:91:3d:81:b4 brd ff:ff:ff:ff:ff:ff
27: vmbr31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether b2:d1:a7:aa:36:45 brd ff:ff:ff:ff:ff:ff
28: tap105i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 3a:67:69:9b:5b:26 brd ff:ff:ff:ff:ff:ff
29: tap105i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr31 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 7a:3a:d1:7e:40:a4 brd ff:ff:ff:ff:ff:ff
30: tap105i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr4 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 3e:5a:62:30:3d:e5 brd ff:ff:ff:ff:ff:ff
31: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether c2:f3:d5:ad:c3:54 brd ff:ff:ff:ff:ff:ff
32: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether e2:73:a4:a7:6d:19 brd ff:ff:ff:ff:ff:ff
33: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether f2:68:e1:bb:08:95 brd ff:ff:ff:ff:ff:ff

root@micromaster ~ # pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-1-pve)
pve-manager: 8.0.0~8 (running version: 8.0.0~8/61cf3e3d9af500c2)
pve-kernel-6.2: 8.0.0
pve-kernel-5.15: 7.4-3
pve-kernel-6.2.16-1-pve: 6.2.16-1
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-5.15.107-2-pve: 5.15.107-2
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.5
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.3
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-kernel-helper: 8.0.0
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.0
proxmox-widget-toolkit: 4.0.4
pve-cluster: 8.0.1
pve-container: 5.0.1
pve-docs: 8.0.1
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.1
pve-firmware: 3.7-1
pve-ha-manager: 4.0.1
pve-i18n: 3.0.2
pve-qemu-kvm: 8.0.2-2
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.3
smartmontools: 7.3-1+b1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.11-pve2


... the network also fails on the i210 NICs
 
Everything went fine with Proxmox + Ceph HCI (all NVMe) v2, but we are using the Ceph dashboard and this happens at the moment:

## Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process
OR
## Module 'restful' has failed dependency: PyO3 modules may only be initialized once per interpreter process

Not that important, but something that might interest some users, as the dashboard is kinda popular (though, to be honest, not that useful with PVE). I'm not sure if that restful module is a default module that PVE is using, or if it's one that came from messing around with the dashboard. Might be worth checking on your side.
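If it helps narrowing that down: the enabled manager modules can be listed, and a failing one temporarily disabled, roughly like this (a sketch; module names as reported above):

Code:
# show which Ceph manager modules are enabled (e.g. dashboard, restful)
ceph mgr module ls
# temporarily disable a failing module; re-enable later with 'ceph mgr module enable restful'
ceph mgr module disable restful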
 
If you have time, could you maybe check whether the IOMMU groups and the PCI addresses stayed the same across the upgrade?
It may be enough to opt in to the 6.2 kernel on 7.4 (to rule out weird kernel changes).
Or do you maybe still have the journal/syslogs?
I did check the PCI addresses and they stayed the same, but I hadn't checked the IOMMU groups. I am running with the 6.2 opt-in kernel on 7.4. I can always try upgrading again and gather some documentation if you like.
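For a before/after comparison, a quick way to dump every device together with its IOMMU group could look like this (a shell sketch reading sysfs, not specific to Proxmox VE):

Code:
# print "group <n>: <pci device>" for every device assigned to an IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
done | sort -V

Saving that output on 7.4 and again after the upgrade makes any difference easy to spot.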
 
Just read all the comments in this post. Great work being done here with the community.

I have installed the 8 beta, and the statistics are not loading. Maybe this is because it is a new machine without the correct time set (the clock was in the future when it first started, before synchronizing the time to UTC).

I can report the same issue with the module 'vfio_virqfd', tracking the possibility that it has been integrated into the 'vfio' module.

Regarding the new 'ceph.list' compared with 7.4: is this a new repository to split the Ceph packages from the main PVE repository?

Is it enough during the beta to disable the enterprise repositories for PVE and Ceph and, after the release, to enable the no-subscription version (disabling the beta repository, of course)?
 
Hi,

Hope I can ask this here: is there any plan for an official Terraform provider made by Proxmox for PVE 8.0?
I would like to push Proxmox a bit more in my company, but it's hard without a proper Terraform provider made by the company.

Regards!
 
Many thanks for reporting the info! One thing that would help us better understand whether it really was systemd naming: boot the Proxmox VE 7.4-1 ISO, select Advanced -> Debug Mode in the initial GRUB boot menu, continue to the second prompt (just before the installer UI would start), and check the ip link output for differently named interfaces. You can then simply abort the installer.
Thank you for investigating the issue further.

Here is the output

7.4-1 ISO -> Debug Mode:
Code:
root@proxmox:/# ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 4c:52:62:ba:ed:9c brd ff:ff:ff:ff:ff:ff
    altname enp2s0

8.0-BETA ISO -> Debug Mode:
Code:
Hangs up on boot like my production system.
 
Quick question, what exactly were the dark mode theme improvements mentioned in the changelog? I just installed the 8.0 beta 1 and the dark theme looks exactly the same at first glance.
  • Improved Dark color theme:
The Dark color theme, introduced in Proxmox VE 7.4, received a lot of feedback from our community, which resulted in further improvements​
 
Can you send the result of:

cat /proc/pressure/cpu
cat /proc/pressure/io
cat /proc/pressure/memory

[root@pve ~]$ cat /proc/pressure/cpu
some avg10=0.35 avg60=0.27 avg300=0.27 total=344642428
full avg10=0.00 avg60=0.00 avg300=0.00 total=0
[root@pve ~]$ cat /proc/pressure/io
some avg10=86.45 avg60=88.19 avg300=88.93 total=56068152074
full avg10=78.31 avg60=80.71 avg300=81.77 total=52892725380
[root@pve ~]$ cat /proc/pressure/memory
some avg10=0.00 avg60=0.00 avg300=0.00 total=4217530
full avg10=0.00 avg60=0.00 avg300=0.00 total=4162179
 
I have installed the 8 beta, and the statistics are not loading. Maybe this is because it is a new machine without the correct time set (the clock was in the future when it first started, before synchronizing the time to UTC).
Yes, that's very likely the issue. You might fix it by clearing the /var/lib/rrdcached/db/pve2-* files; possibly a restart of rrdcached.service and pve-cluster.service is also needed.
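Spelled out, that hint might look roughly like this (an untested sketch; the path and service names are the ones mentioned above):

Code:
# clear the stale RRD statistics data, then restart the affected services
rm -rf /var/lib/rrdcached/db/pve2-*
systemctl restart rrdcached.service pve-cluster.service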

Regarding the new 'ceph.list' compared with 7.4: is this a new repository to split the Ceph packages from the main PVE repository?
Yes, from the announcement:
Ceph Server:
  • Ceph Quincy 17.2 is the default and comes with continued support.
  • There is now an enterprise repository for Ceph which can be accessed via any Proxmox VE subscription, providing the best stability for production systems.
Is it enough during the beta to disable the enterprise repositories for PVE and Ceph and, after the release, to enable the no-subscription version (disabling the beta repository, of course)?
Yes, although we naturally recommend using the enterprise repositories for production systems after the beta ends.
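After the final release, the no-subscription setup could then look roughly like this (a sketch only; please double-check the exact lines against the Package Repositories section of the documentation):

Code:
# /etc/apt/sources.list.d/pve-no-subscription.list   (hypothetical file name)
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription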
 
Quick question, what exactly were the dark mode theme improvements mentioned in the changelog? I just installed the 8.0 beta 1 and the dark theme looks exactly the same at first glance.
It's a lot of minor details, e.g., bar chart colors. But we backported most of it to 7.4 already, so you'd only see a (slightly) bigger difference when comparing a fresh 7.4 installation to an 8.0 beta installation; once the latest 7.4 updates are installed, they are pretty much the same.
We still mention them in the release notes as they happened after "snapshotting" the 7.4 point release.
 
cat /proc/pressure/io
some avg10=86.45 avg60=88.19 avg300=88.93 total=56068152074
full avg10=78.31 avg60=80.71 avg300=81.77 total=52892725380
That's a rather huge amount of processes not being scheduled for some time because they are waiting for IO.
And you really don't notice any hangs or the like? Anything in the logs (journal)?
 
Hangs up on boot like my production system.
Hmm, odd, at what stage does it hang?

As there is no systemd running in the installer, the hang of networking.service in the installed system must be a side effect. Any errors or warnings in the journal?

Just to be sure: what was the fix to make the network work again on your installed system?
 
Yes, that's very likely the issue. You might fix it by clearing the /var/lib/rrdcached/db/pve2-* files; possibly a restart of rrdcached.service and pve-cluster.service is also needed.
I will do a clean install. I took the opportunity to test specific things I couldn't do in VirtualBox. I will report if I experience it again.

Yes, although we naturally recommend using the enterprise repositories for production systems after the beta ends.
For now I have to stick with the free repositories; however, I will consider a subscription. I'm just starting out with Proxmox. :)
 
