Proxmox VE 7.0 (beta) released!

As EFI-backed VMs start just fine here, it'd probably be better to open a new forum thread, which is a bit friendlier for initial investigation. In general it'd be good to ensure that you can navigate into the OVMF EFI menu and start newly created VMs with EFI (or try some Linux live ISO for existing ones) to rule some things out and help narrow down a possible reproducer.
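For a quick sanity check, creating a throwaway EFI VM from the CLI could look roughly like this (a sketch; the VMID, storage name, and ISO are placeholders):

Code:
# create a minimal EFI-backed test VM (VMID, storage, and ISO name are placeholders)
qm create 9000 --name efi-test --memory 2048 --bios ovmf --net0 virtio,bridge=vmbr0
qm set 9000 --efidisk0 local-lvm:1
qm set 9000 --cdrom local:iso/some-live.iso
qm start 9000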

New thread started.
 
Which ID was it and with which ISO did you install Proxmox VE? Also, what's your current pveversion -v?
The ID was a0ee88c29b764c46a579dd89c86c2d84

I think it was some 4.x version; the year of installation was 2015.

Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-4 (running version: 7.0-4/ef340e15)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.4: 6.4-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.11.22-1-pve: 5.11.22-1
pve-kernel-5.11.21-1-pve: 5.11.21-1
pve-kernel-5.11.7-1-pve: 5.11.7-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
ceph: 15.2.13-pve1
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 7.0-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 7.0-3
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-6
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-1
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.1-3
pve-cluster: 7.0-2
pve-container: 4.0-3
pve-docs: 7.0-3
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.2-2
pve-i18n: 2.3-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-5
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1
 
I think it was some 4.x version; the year of installation was 2015.
Much thanks! Could it be that it was already installed with a 4.0 beta then?

What older timestamps are on the directories in the / root? Can you please post the output of ls -lt / | tail?
 
Much thanks! Could it be that it was already installed with a 4.0 beta then?

What older timestamps are on the directories in the / root? Can you please post the output of ls -lt / | tail?
yes that could be

Code:
ls -lt / | tail
drwxr-xr-x 18 root root  4096 Jun 28 20:11 lib
drwxr-xr-x 11 root root  4096 Jun 28 20:09 usr
drwxr-xr-x  2 root root  4096 Jun 28 20:07 lib64
drwxr-xr-x  3 root root  4096 Feb  9  2016 opt
drwxr-xr-x 12 root root  4096 Feb  9  2016 var
drwx------  2 root root 16384 Feb  9  2016 lost+found
drwxr-xr-x  3 root root  4096 Feb  9  2016 mnt
drwxr-xr-x  2 root root  4096 Oct  6  2015 media
drwxr-xr-x  2 root root  4096 Oct  6  2015 srv
drwxr-xr-x  2 root root  4096 Aug 26  2015 home
 
Newly created Ceph Pacific clusters already use sharding for RocksDB automatically, so I assume this was an upgraded cluster?
Some more information would be interesting to know, ideally in a new thread:
  • Was the cluster 100% HEALTH_OK before that?
  • How was the cluster upgraded (i.e., closely following our upgrade how-to)?
  • What was the Ceph cluster initially created with, and how was it upgraded over time?
  • What type of OSDs are in use: FileStore, BlueStore, or both?
  • Was the RocksDB on its own (separate) device?
  • What was the exact command used to trigger the reshard? (a typical invocation is sketched below for reference)
Thanks!
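For reference, such a reshard is typically triggered on a stopped OSD with something like the following (a sketch; the OSD id is a placeholder and the sharding spec shown is just the Pacific default):

Code:
systemctl stop ceph-osd@<id>
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-<id> \
    --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
    reshard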

New thread started.
 
drwxr-xr-x 2 root root 4096 Aug 26 2015 home
Pretty sure it was with a beta from above, as the final release happened after that (05.10.2015).
That would also explain why the auto-regeneration was not done in your case: the check for "bad" machine IDs was really only missing the IDs from the two 4.0 betas. I asked around and got hold of those now too, so the check should be more complete for the final release - much thanks!
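For anyone who wants to regenerate a duplicated machine ID by hand in the meantime, a rough sketch using the standard systemd tooling (double-check against the upgrade docs before running this, and reboot afterwards):

Code:
# remove the duplicated IDs, then let systemd generate a fresh one
rm /etc/machine-id /var/lib/dbus/machine-id
systemd-machine-id-setup
# on Debian, /var/lib/dbus/machine-id is normally a symlink to /etc/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id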
 
My upgrade went fine; here are some things I noticed.
Note: I only updated PVE, not Ceph, to the newest version.


########################################################################

Problem: error on apt update
Cause: you might still have the pve-no-subscription or pve-enterprise repository active

Code:
629 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: Skipping acquire of configured file 'pve-no-subscription/binary-amd64/Packages' as repository 'http://download.proxmox.com/debian/pve bullseye InRelease' doesn't have the component 'pve-no-subscription' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'pve-no-subscription/i18n/Translation-en' as repository 'http://download.proxmox.com/debian/pve bullseye InRelease' doesn't have the component 'pve-no-subscription' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'pve-no-subscription/i18n/Translation-en_US' as repository 'http://download.proxmox.com/debian/pve bullseye InRelease' doesn't have the component 'pve-no-subscription' (component misspelt in sources.list?)

Solution: comment out the non-subscription repo in /etc/apt/sources.list and the enterprise repo in /etc/apt/sources.list.d/pve-enterprise.list, then run apt update again ;-)
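For reference, the relevant entries would look roughly like this during the beta (a sketch; file locations per the standard PVE setup):

Code:
# /etc/apt/sources.list.d/pve-enterprise.list -- disabled during the beta:
# deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise

# /etc/apt/sources.list -- disabled until the final release:
# deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

# the only PVE repo available during the beta:
deb http://download.proxmox.com/debian/pve bullseye pvetest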

########################################################################

Problem: no storage migration after upgrading the first host (Mellanox 25GBit ConnectX-5)
Cause: Ceph network down (ssh: connect to host 10.100.100.231 port 22: No route to host)
Reason: the naming of the interfaces changed
Details:

ens5f0 changed to ens5f0np0, resulting in no Ceph network after the upgrade
ens5f1 changed to ens5f1np1

Solution: change the slaves in bond1 to the new names (see the sketch below)
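The relevant part of /etc/network/interfaces would then look roughly like this (a sketch; the bond mode and options are placeholders, only the slave names actually changed):

Code:
auto bond1
iface bond1 inet manual
    bond-slaves ens5f0np0 ens5f1np1
    bond-miimon 100
    bond-mode 802.3ad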

########################################################################

Problem: error installing GRUB on devices
Cause: you might be using systemd-boot (UEFI); check if you don't know (see below)
Solution: skip the error and reboot after the upgrade; my machines booted fine
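A quick way to check the boot mode (assuming the proxmox-boot-tool shipped since PVE 6.4 is available):

Code:
# this directory only exists when the system was booted via UEFI
ls /sys/firmware/efi
# reports the boot mode and any ESPs managed by proxmox-boot-tool
proxmox-boot-tool status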

########################################################################
 
Solution: comment out the non-subscription repo in /etc/apt/sources.list

There is no "pve-no-subscription" repo for the beta.

Read the first post of this thread for the details.
 
There is no "pve-no-subscription" repo for the beta.

Read the first post of this thread for the details.
Yep, it was the one from the testing cluster we used on 6.4.x; it needs to be commented out. Because of the sed commands it also gets updated to bullseye etc. It's not a problem, just one thing I noticed that some might run into as well.

Edit: ah I see, there's only the repo mentioned in this post.
 
Yep, it was the one from the testing cluster we used on 6.4.x; it needs to be commented out. Because of the sed commands it also gets updated to bullseye etc. It's not a problem, just one thing I noticed that some might run into as well.

Edit: ah I see, there's only the repo mentioned in this post.

The usual three repos (pve-no-subscription, pvetest, and pve-enterprise) will all be available after the release. During the beta period, only the pvetest repo exists. As stated in the docs, it should be possible to transition from pvetest to a subscription or no-subscription repo after the release.
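After the final release, that switch could then be as simple as something like this (a sketch; adjust the path to wherever the entry actually lives):

Code:
# replace the beta repo with the no-subscription one after the release
sed -i 's/pvetest/pve-no-subscription/' /etc/apt/sources.list
apt update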
 
PVE 6.2 -> 6.4 -> 7.0 beta test upgrade
openvswitch-switch 12.5.0

My config is based on https://pve.proxmox.com/wiki/Open_vSwitch, part "Example 2: Bond + Bridge + Internal Ports".

My Open vSwitch configuration stopped working after the reboot into PVE 7. The openvswitch-switch service starts successfully without errors, but there are no bridges (as seen in "ovs-vsctl show" or "ip link"). All physical links are down (which doesn't matter while the bridges don't exist). If I use ovs-vsctl to manually add bridges/interfaces, it works ("ip link" shows them). Any hint?

Edit: Fixed (for now). I set "auto vmbrX" and removed "auto interface_name" from the rest; a sketch of the resulting pattern is below.
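For anyone hitting the same, the pattern that works here, loosely following the wiki's Example 2 (a sketch; interface names and bond options are placeholders) - only the bridge stanza carries "auto", the OVS member interfaces do not:

Code:
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0

iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds enp65s0f0 enp65s0f1
    ovs_options bond_mode=balance-slb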
 
Given the major version number change, will there be any benefits to doing a clean install of 7.0 rather than an upgrade from 6.4 to 7.0?
Most of the time there'd be no significant benefit, especially as the last point release is quite up to date in terms of how the installation is done and how the system is set up. Naturally, if you would like to try BTRFS as the root filesystem, that is only possible with the PVE 7.0 ISO, where it got introduced as a tech preview.

The upgrade is quite compatible and should handle any "breaking" change already, at least if done by following our upgrade how-to closely.

You really should have a periodically restore-tested backup strategy anyway, so that should not change anything either way.

New package defaults, like ifupdown2 over ifupdown for the network or chrony over systemd-timesyncd for NTP, can be switched so easily on any running system that those really do not justify a reinstallation on their own.
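For example, switching an upgraded node to the new defaults is a one-liner each (installing chrony automatically replaces systemd-timesyncd):

Code:
apt install ifupdown2
apt install chrony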

In clusters with three or more nodes one can do a "no downtime" upgrade with both approaches, reinstallation or in-place upgrade.

So in summary, no big reason for or against it in general, and specifics are very setup dependent.
 
Given the major version number change, will there be any benefits to doing a clean install of 7.0 rather than an upgrade from 6.4 to 7.0?
My oldest cluster has some nodes that were installed with Proxmox 2 ten years ago and upgraded through 3, 4, 5, and 6 ;)
 
I also have a problem with an existing CentOS 7 LXC container. It apparently starts fine, but none of the systemd processes actually start, and any systemctl command fails with "Failed to get D-Bus connection: No such file or directory".

The behavior repeats whether the container is privileged or not. I looked into /etc/default/grub as suggested earlier in the thread, but I don't see anything suspicious.

The behavior is reproducible with a fresh LXC template (CentOS 7).
 
I also have a problem with an existing CentOS 7 LXC container. It apparently starts fine, but none of the systemd processes actually start, and any systemctl command fails with "Failed to get D-Bus connection: No such file or directory".

The behavior repeats whether the container is privileged or not. I looked into /etc/default/grub as suggested earlier in the thread, but I don't see anything suspicious.

The behavior is reproducible with a fresh LXC template (CentOS 7).

This is an issue with the new cgroupv2 feature affecting "old" distros like CentOS 7 and Ubuntu 16.04. We're working on improving the handling there.
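Until then, a possible workaround is to boot the host with the hybrid cgroup layout again; a sketch for GRUB-booted systems (the kernel parameter is the standard systemd one, apply at your own risk and reboot afterwards):

Code:
# /etc/default/grub -- add the parameter, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"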
 
Most of the time there'd be no significant benefit, especially as the last point release is quite up to date in terms of how the installation is done and how the system is set up. Naturally, if you would like to try BTRFS as the root filesystem, that is only possible with the PVE 7.0 ISO, where it got introduced as a tech preview.

The upgrade is quite compatible and should handle any "breaking" change already, at least if done by following our upgrade how-to closely.

You really should have a periodically restore-tested backup strategy anyway, so that should not change anything either way.

New package defaults, like ifupdown2 over ifupdown for the network or chrony over systemd-timesyncd for NTP, can be switched so easily on any running system that those really do not justify a reinstallation on their own.

In clusters with three or more nodes one can do a "no downtime" upgrade with both approaches, reinstallation or in-place upgrade.

So in summary, no big reason for or against it in general, and specifics are very setup dependent.
As suggested in post n.18 by t.lamprecht, I would like to try the upgrade from version 6.4 to version 7.0 on a test cluster. After that I would like to know: for a new cluster with a native 7.0 installation, do you recommend continuing to use ZFS as the root file system, or using the new BTRFS, always in RAID 1?
 
