Proxmox VE 8.0 (beta) released!

Amazing work! Thanks, guys!
I'm on the 8.0 beta already, but I reinstalled Proxmox from scratch with the 8.0 ISO.
I had to migrate my storage completely, so I had to reinstall Proxmox anyway, and this came at the perfect moment!

That said, I never had issues on any older Proxmox version either; I just always needed the newest kernel...
Initially for X570 ECC support (that was the 5.11 opt-in kernel)
Later for the Intel Arc A380 (that was the 6.2 opt-in kernel)

Now Proxmox 8.0 defaults to the 6.2 kernel, which is amazing for me.
I don't have any issues with PVE 8.0 Beta 1 so far, everything works, no complaints!
 
Hello, I tried to update, and apt update gave some weird warnings because the "pve-no-subscription" repository was active; then the upgrade failed with an error message about a failed proxmox-ve removal.

I found the reason was that I had pve-no-subscription active instead of pvetest.

This could perhaps be handled in a better/more convenient way by adding a check to the pve7to8 check script.
 
This is mentioned in the FAQ already. The "issue" will disappear as soon as the final release is available.
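For anyone who hits the same warnings before the final release: the beta expects the pvetest repository for Bookworm instead of pve-no-subscription. A minimal sketch of checking and switching the repositories (the file names under sources.list.d are examples and may differ on your host):

Code:
# see which Proxmox repositories are currently configured
grep -rn 'proxmox' /etc/apt/sources.list /etc/apt/sources.list.d/
# comment out the old no-subscription entry
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-no-subscription.list
# enable the test repository used during the beta
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update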
 
Currently I have a dedicated Proxmox 7.4 mirrored boot drive and a separate mirrored drive for VM images only. Do I still need to back up all VMs to external storage as instructed in the guide? Or can I simply keep the existing dedicated mirrored drive for VM images and somehow connect/import it to the new Proxmox 8 system?
Well, two points:
  • an in-place upgrade to the current Proxmox VE has been possible since 4.x, at least if done one major version at a time (4.4 to 5.x, 5.4 to 6.x, and so on).
  • having working (i.e., restore-tested) backups is always a good idea; see the sketch below.
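As a hedged illustration of the second point; the VMID 100, the storage name backup-nas, the archive path, and the spare restore VMID 999 are all made-up examples (paths depend on your storage type):

Code:
# back up one VM to a dedicated backup storage
vzdump 100 --storage backup-nas --mode snapshot
# test-restore to a spare VMID so you know the backup actually works
qmrestore /mnt/pve/backup-nas/dump/vzdump-qemu-100-*.vma.zst 999 --storage local-lvm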
 
Is it safe to assume that Proxmox VE 8.0 will also be released in 12 days, or is the above just a coincidence and it can take much longer than that? I'm asking because I just did a fresh install of Proxmox VE 7.4 and don't know whether I should migrate to 8 now or wait a little for the stable version, since in July I won't have the time to do this.
This is mostly a coincidence; it's not like we plan to run the beta for months, so a final version will likely follow within a few weeks at most. But we cannot give a hard date, as we decide that mostly based on feedback about the new version and the upgrades, on critical bugs being fixed, and on finishing some features that were already near the finish line.

As the upgrade from 7.4 is, in my experience, rather painless, I don't think you have to wait, especially as Proxmox VE 7.x will still receive security and grave bug fixes for about another year.
 
At boot, my node hangs at: a start job is running for networking.service (17min 42sec / no limit)
I'd recommend checking the journal for more errors or warnings, rechecking the network configuration, and otherwise opening a new thread with more details (hardware used, network config, network setup, ...).
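A minimal sketch of where to start looking, using standard tools:

Code:
# messages of the hanging unit from the current boot
journalctl -b -u networking.service --no-pager
# all boot messages of priority error or worse
journalctl -b -p err --no-pager
# compare the configured interface names against what the kernel sees
cat /etc/network/interfaces
ip -br link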
 
I also did an upgrade to the beta; everything works, thank you.

One problem: I have two Linux bridges, vmbr0 and vmbr1, and I use vmbr0 for the Internet connection. After the update, Proxmox automatically used vmbr1 for the Internet, so after the update I had no Internet.

After deleting vmbr1 and rebooting, everything works fine... Maybe a bug?
 

Physical NIC name change? (Again...)

I have already seen users on Twitter reporting a NIC changing its name again. (I still don't get why the systemd devs want this predictable naming scheme; it's predictable only until you upgrade the kernel or systemd...)
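If a renamed NIC is indeed the cause, one way to take future renames out of the equation is to pin the name with a systemd .link file; a sketch where the MAC address and the name lan0 are made-up examples:

Code:
# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0

After creating it, reference the new name in /etc/network/interfaces (including the bridge-ports lines) and run update-initramfs -u so the rename is also applied by the udev copy in the initramfs during early boot.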
 
@t.lamprecht
Do you enable IOMMU by default somehow?

I'm asking because I reinstalled Proxmox from scratch, without adding VFIO modules or the "amd_iommu=on iommu=pt" command-line parameters...
Meaning, I didn't change anything.
It's weird, but I can pass through devices...

Code:
find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/17/devices/0000:01:00.0
/sys/kernel/iommu_groups/35/devices/0000:31:00.0
/sys/kernel/iommu_groups/7/devices/0000:00:03.3
/sys/kernel/iommu_groups/25/devices/0000:2a:00.3
/sys/kernel/iommu_groups/25/devices/0000:2a:00.1
/sys/kernel/iommu_groups/25/devices/0000:21:08.0
/sys/kernel/iommu_groups/25/devices/0000:2a:00.0
/sys/kernel/iommu_groups/15/devices/0000:00:14.3
/sys/kernel/iommu_groups/15/devices/0000:00:14.0
/sys/kernel/iommu_groups/33/devices/0000:2f:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:03.1
/sys/kernel/iommu_groups/23/devices/0000:21:01.0
/sys/kernel/iommu_groups/13/devices/0000:00:08.0
/sys/kernel/iommu_groups/31/devices/0000:2d:00.0
/sys/kernel/iommu_groups/3/devices/0000:00:02.0
/sys/kernel/iommu_groups/21/devices/0000:04:00.0
/sys/kernel/iommu_groups/11/devices/0000:00:07.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.1
/sys/kernel/iommu_groups/38/devices/0000:32:00.3
/sys/kernel/iommu_groups/28/devices/0000:23:00.0
/sys/kernel/iommu_groups/18/devices/0000:02:01.0
/sys/kernel/iommu_groups/36/devices/0000:32:00.0
/sys/kernel/iommu_groups/8/devices/0000:00:03.4
/sys/kernel/iommu_groups/26/devices/0000:21:09.0
/sys/kernel/iommu_groups/26/devices/0000:2b:00.0
/sys/kernel/iommu_groups/16/devices/0000:00:18.3
/sys/kernel/iommu_groups/16/devices/0000:00:18.1
/sys/kernel/iommu_groups/16/devices/0000:00:18.6
/sys/kernel/iommu_groups/16/devices/0000:00:18.4
/sys/kernel/iommu_groups/16/devices/0000:00:18.2
/sys/kernel/iommu_groups/16/devices/0000:00:18.0
/sys/kernel/iommu_groups/16/devices/0000:00:18.7
/sys/kernel/iommu_groups/16/devices/0000:00:18.5
/sys/kernel/iommu_groups/34/devices/0000:30:00.0
/sys/kernel/iommu_groups/6/devices/0000:00:03.2
/sys/kernel/iommu_groups/24/devices/0000:21:05.0
/sys/kernel/iommu_groups/14/devices/0000:00:08.1
/sys/kernel/iommu_groups/32/devices/0000:2e:00.0
/sys/kernel/iommu_groups/4/devices/0000:00:03.0
/sys/kernel/iommu_groups/22/devices/0000:20:00.0
/sys/kernel/iommu_groups/12/devices/0000:00:07.1
/sys/kernel/iommu_groups/30/devices/0000:28:00.0
/sys/kernel/iommu_groups/30/devices/0000:27:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:01.2
/sys/kernel/iommu_groups/20/devices/0000:03:00.0
/sys/kernel/iommu_groups/10/devices/0000:00:05.0
/sys/kernel/iommu_groups/39/devices/0000:32:00.4
/sys/kernel/iommu_groups/29/devices/0000:23:00.1
/sys/kernel/iommu_groups/0/devices/0000:00:01.0
/sys/kernel/iommu_groups/19/devices/0000:02:04.0
/sys/kernel/iommu_groups/37/devices/0000:32:00.1
/sys/kernel/iommu_groups/9/devices/0000:00:04.0
/sys/kernel/iommu_groups/27/devices/0000:21:0a.0
/sys/kernel/iommu_groups/27/devices/0000:2c:00.0

Code:
dmesg | grep 'remapping'
[    0.948104] x2apic: IRQ remapping doesn't support X2APIC mode
[    1.223037] AMD-Vi: Interrupt remapping enabled

[Attachment: Screenshot 2023-06-11 223724.png]
 
@t.lamprecht
Do you enable iommu by default somehow?
AFAIK, AMD IOMMU has always been enabled by default.

Only Intel was disabled by default. (It was enabled early in 5.15, but some hardware didn't like it, so it was disabled again; I'm not sure about the new 6.2 kernel.)
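A quick way to verify what the running kernel actually decided, without changing the command line:

Code:
dmesg | grep -i -e 'AMD-Vi' -e 'DMAR'   # driver and interrupt-remapping messages
cat /proc/cmdline                       # confirm no iommu parameters were set manually
ls /sys/class/iommu/                    # non-empty when an IOMMU driver is active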
 
Thanks for another great (pre-)release!
For BTRFS I found only the "@" thing in the release notes. Is there anything else, and is BTRFS still a technology preview with PVE 8?
 
Hi,

During the upgrade from PVE 7 to 8 on my test system, I am asked whether to accept changes to /etc/lvm/lvm.conf, with lots of changes. Most of them seem to just comment out a setting that matches the default value:

Code:
--- /etc/lvm/lvm.conf   2021-07-19 16:01:37.893450177 +0200
+++ /etc/lvm/lvm.conf.dpkg-new  2022-10-19 21:37:31.000000000 +0200
@@ -33,15 +33,18 @@
        # any configuration mismatch is ignored and the default value is used
        # without any warning (a message about the configuration key not being
        # found is issued in verbose mode only).
-       checks = 1
+       # This configuration option has an automatic default value.
+       # checks = 1

But the last one seems Proxmox-specific:

Code:
-devices {
-        # added by pve-manager to avoid scanning ZFS zvols
-        global_filter=["r|/dev/zd.*|"]
-}

Then dpkg prompts for: Configuration file '/etc/lvm/lvm.conf'

For now I've kept the existing lvm.conf; any advice?
 

If you did not make any changes to lvm.conf yourself, then it is suggested to install the package maintainer's version, as the PVE-specific changes will be added again automatically. This is also documented in our upgrade guide [1].

[1] https://pve.proxmox.com/wiki/Upgrad..._system_to_Debian_Bookworm_and_Proxmox_VE_8.0
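To double-check that the PVE-specific filter came back after installing the maintainer's version, something like:

Code:
grep -n 'global_filter' /etc/lvm/lvm.conf
# expected to be re-added by pve-manager:
#   global_filter=["r|/dev/zd.*|"]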
 
I get this warning when executing pve7to8 --full
Code:
WARN: Found at least one CT (240) which does not support running in a unified cgroup v2 layout
    Consider upgrading the Containers distro or set systemd.unified_cgroup_hierarchy=0 in the Proxmox VE hosts' kernel cmdline! Skipping further CT compat checks.
All my containers run Debian 12, therefore their systemd should be new enough, and they are all unprivileged. All of them run Docker, though; as far as I read, this may be a problem. Setting systemd.unified_cgroup_hierarchy=0 is a temporary solution which I would like to avoid. Can I ignore this warning, or can I do something to get rid of it?
 
Hi, yes, if your systemd version is recent enough you can safely ignore this error; a patch for this has been submitted to the mailing list [0].

You can double-check the systemd version in the container by running systemctl --version. Version 232 or above is fine.

Regarding the Docker containers I am not sure; I suggest making a backup and trying it on a test system to be on the safe side.

[0] https://lists.proxmox.com/pipermail/pve-devel/2023-June/057425.html
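To run that check across all containers in one go, a small sketch (pct exec only works on running containers):

Code:
for ct in $(pct list | awk 'NR>1 {print $1}'); do
    printf 'CT %s: ' "$ct"
    pct exec "$ct" -- systemctl --version | head -n1
done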
 