Proxmox VE 7.0 released!

What exactly am I looking for? I have read it several times.


“The release of Proxmox VE 7.0 this time comes ahead of the stable release of Debian Bullseye. Although almost all packages were ready, the Debian project team postponed the release originally planned for May, due to an unresolved issue in the Debian installer. Since we maintain our own Proxmox installer, and are not affected by this particular issue, we decided to release despite the delay of Debian. The core packages of Proxmox VE are either already subject to the very strict Debian freeze policy for essential packages (for example, libc or compiler) or are maintained by our Proxmox developers (for example, QEMU, Kernel, Ceph, LXC, Rust compiler).”
Thomas Lamprecht, lead developer at Proxmox
 
Same issue here, no renaming of the network device. It works fine with the 'eth0' config, but when switching to 'vmbr0' it isn't working. I tried setting the MAC address on the bridge to that of 'eth0', but haven't gotten it to work yet
How did you set up the host, with our ISO or some other ISO? Are you also on Hetzner?

Can you try setting the MAC address to that of the physical interface (check with ip link) by adding a line like the following to the bridge (normally vmbr0) section in /etc/network/interfaces?

Code:
hwaddress ab:cd:ef:12:34:56
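
For example, the bridge section could then look something like this (the MAC and interface name below are just placeholders - take the real MAC from ip link and keep your existing address/gateway lines as they are):

Code:
auto vmbr0
iface vmbr0 inet static
        # keep your existing address/gateway lines here
        hwaddress ab:cd:ef:12:34:56
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0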

Maybe your hosting provider restricts outgoing traffic by MAC address and thus you'd need to change it back or tell your hoster about the new bridge MAC so that they'll allow traffic from it.
 
I have run the pve6to7 script and get warnings like these for all my VMs and LXCs:

Code:
INFO: Checking storage content type configuration..
WARN: VM 104 - volume 'glusterfs-container:104/vm-104-disk-0.qcow2' (unreferenced) - storage does not have content type 'images' configured.
WARN: CT 312 - volume 'glusterfs2:312/vm-312-disk-0.raw' (unreferenced) - storage does not have content type 'rootdir' configured.
WARN: CT 314 - volume 'glusterfs2:314/vm-314-disk-0.raw' (unreferenced) - storage does not have content type 'rootdir' configured.
WARN: CT 305 - volume 'glusterfs:305/vm-305-disk-0.raw' (unreferenced) - storage does not have content type 'rootdir' configured.
WARN: CT 305 - volume 'glusterfs:305/vm-305-disk-1.raw' (unreferenced) - storage does not have content type 'rootdir' configured.
WARN: When migrating, Proxmox VE 7.0 only scans storages with the appropriate content types for unreferenced guest volumes.

Are there instructions on how to fix that?

The VMs are set up on a "normal" glusterfs storage with all relevant content types, and the LXCs are on a directory storage on top of the glusterfs mount dir, where the storage also has all the relevant details.

As an example, from 305.conf:
Code:
mp0: glusterfs-container:305/vm-305-disk-0.raw,mp=/var/lib/redis,backup=1,replicate=0,size=2G

And from storage.cfg
Code:
dir: glusterfs-container
        path /mnt/pve/glusterfs
        content rootdir
        is_mountpoint yes
        shared 1

EDIT: AAaahhhhhhhhh ... could it be "just" false positives, because both storage entries point to the same backing storage, so all images show up on both and the warnings show them "the other way around"? It warns about the *-container storage missing 'images' for the VM volumes (which will never be used there) and about the "non-container" storages missing 'rootdir' for the container volumes ... Can you confirm this?

EDIT2: Or does PVE7 also support "rootdir" for Glusterfs storage now? ;-)
 
Are there instructions on how to fix that?

It depends a bit. Most of the time you want to add the content type the script suggests (see below). Sometimes, however, this can also be a hint that the same backing storage (e.g., the exact same directory, or the exact same NFS share) is configured more than once, which is generally not supported in Proxmox VE, at least not if the content types are not strictly separated, as it may break locking and other assumptions.

glusterfs-container
For that one the images content type is missing (you can add it over the web-interface: Datacenter -> Storage -> Edit, or on the CLI as sketched below).
glusterfs2
For that one the rootdir content type seems to be missing.
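
If you prefer the CLI, something along these lines should work (a sketch only - adjust the content list so each storage keeps the types it already uses and just gains the missing one; storage names taken from the warnings above):

Code:
pvesm set glusterfs-container --content images,rootdir
pvesm set glusterfs2 --content images,rootdir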
The VMs are set up on a "normal" glusterfs storage with all relevant content types, and the LXCs are on a directory storage on top of the glusterfs mount dir, where the storage also has all the relevant details.
Are they on the exact same glusterfs share? If so, you may actually want to ignore those warnings, as it seems that you have cleanly separated the content types.
 
I reinstalled a testing machine yesterday with the new 7.0 ISO, but the interface still shows 7.0-5 BETA (with the Bugzilla link). Apt is up to date and I've done Ctrl-F5 and Ctrl-Shift-F5.
We only uploaded the 7.0 ISO in the early afternoon CET, so you may have still used the BETA one?

Ensure you have valid Proxmox VE package repos set up and do an update afterwards:
https://pve.proxmox.com/wiki/Package_Repositories
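
For reference, a minimal no-subscription setup for Proxmox VE 7 on Debian Bullseye looks roughly like this (see the wiki page above for the enterprise repository and the exact recommended entries):

Code:
# /etc/apt/sources.list
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

Then update:

Bash:
apt update
apt dist-upgrade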
 
How did you set up the host, with our ISO or some other ISO? Are you also on Hetzner?

Can you try setting the MAC address to that of the physical interface (check with ip link) by adding a line like the following to the bridge section in /etc/network/interfaces?

Code:
hwaddress ab:cd:ef:12:34:56

Maybe your hoster restricts outgoing traffic by MAC address and thus you'd need to change it back or tell your hoster about the new bridge MAC so that they'll allow traffic from it.

That did the trick - thanks! I had tried
Code:
bridge_hw ab:cd:ef:12:34:56
as I found it in the documentation that is linked from the release notes: https://sources.debian.org/src/bridge-utils/1.7-1/debian/NEWS/#L3-L23 - maybe this could be clarified?
 
Hi,
EDIT: AAaahhhhhhhhh ... could it be "just" false positives, because both storage entries point to the same backing storage, so all images show up on both and the warnings show them "the other way around"? It warns about the *-container storage missing 'images' for the VM volumes (which will never be used there) and about the "non-container" storages missing 'rootdir' for the container volumes ... Can you confirm this?
Yes, if it's the same backing storage, this can lead to false positives. We'll try and improve the pve6to7 script in that regard.
 
Thanks, this worked like a charm on an initially Debian-based installation.

For those who have only one node, running infrastructure required to access the internet (such as a firewall, DNS, etc.), it might be helpful to fetch the required packages in advance so that the VMs/containers can be shut down during the upgrade. It can be done with:

Bash:
sudo apt --download-only dist-upgrade
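
The packages then end up in apt's cache, so a quick (rough) sanity check that they were actually fetched could be:

Bash:
ls /var/cache/apt/archives/*.deb | wc -l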
 
Upgraded a cluster node running PVE 6.4-11 to PVE7 and upon reboot I can no longer access the host via the network.

Here's my /etc/network/interfaces

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

iface enp0s26f0u2 inet manual
iface enp21s0f0 inet manual
iface enp21s0f1 inet manual
iface enp26s0f0 inet manual
iface enp26s0f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode balance-tlb

auto bond0.301
iface bond0.301 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

auto vmbr0v301
iface vmbr0v301 inet static
        address 10.10.10.113/24
        gateway 10.10.10.1
        bridge-ports bond0.301
        bridge-stp off
        bridge-fd 0

I noticed that the vmbr0 and vmbr0v301 MAC addresses have changed, but it should still work.

Please help!

Regards,

Dennis
 
Upgraded a cluster node running PVE 6.4-11 to PVE7 and upon reboot I can no longer access the host via the network.


Dennis - see reply #62 above - it is probably due to the changed MAC address; you have to set it manually on the bridge interface now.
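
For example, something like this in the vmbr0v301 stanza (the MAC below is just a placeholder - take the original one, e.g. the bond's/physical NIC's MAC as shown by ip link), then reboot or, if ifupdown2 is installed, run ifreload -a:

Code:
auto vmbr0v301
iface vmbr0v301 inet static
        address 10.10.10.113/24
        gateway 10.10.10.1
        hwaddress ab:cd:ef:12:34:56
        bridge-ports bond0.301
        bridge-stp off
        bridge-fd 0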
 
Upgraded a cluster node running PVE 6.4-11 to PVE7 and upon reboot I can no longer access the host via the network.

See https://forum.proxmox.com/threads/proxmox-ve-7-0-beta-released.91388/post-399699
 
3x 6.4 -> 7 upgrades, all failed with the Open vSwitch bridge somehow not liking, or not adding, the physical interface it is "bound" to.

My colleague found that the "solution" appears to be simply installing `ifupdown2` and rebooting :shrug:

Also, on the one where I was playing around / testing NetData in weird setups, I found that the NetData /usr/local/lib/libbpf8 breaks `ip` from iproute2. I just whacked the NetData /usr/local/lib/libbpf* and was on my way to a 7.0 installation.
 
After the update 6.4 -> 7.0 beta -> 7.0, the web GUI logs me out every 5 minutes. How can I change or disable this?
 
3x 6.4 -> 7 upgrades, all failed with the Open vSwitch bridge somehow not liking, or not adding, the physical interface it is "bound" to.

My colleague found that the "solution" appears to be simply installing `ifupdown2` and rebooting :shrug:
If this happened deterministically with ifupdown - could you please create a new thread and post your /etc/network/interfaces which caused the problem - thanks!

Also, on the one where I was playing around / testing NetData in weird setups, I found that the NetData /usr/local/lib/libbpf8 breaks `ip` from iproute2. I just whacked the NetData /usr/local/lib/libbpf* and was on my way to a 7.0 installation.
Well, /sbin/ip does link to libbpf, so this might happen - check with `ldd /bin/ip`.
However, a broken /bin/ip could also cause quite a few problems with the network config ... (so maybe the ifupdown/ifupdown2 problem is just a result of this issue?)
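
For reference, roughly the check and cleanup described above (double-check the ldd output before removing anything from /usr/local/lib):

Bash:
# does ip pick up a stray libbpf from /usr/local/lib?
ldd /bin/ip | grep -i libbpf
# if so, remove the stray copies and refresh the linker cache
rm /usr/local/lib/libbpf*
ldconfig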
 
After the update 6.4 -> 7.0 beta -> 7.0, the web GUI logs me out every 5 minutes. How can I change or disable this?
First please ensure you cleared your browser cache, or force reload the web-interface with CTRL + SHIFT + R.

If it still happens, check the server's journal for any errors logged when it logs you out, for example with journalctl -b. You can open a new thread so that it does not get overlooked here in this big thread (there could be some back and forth about details/logs).
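
For example (pveproxy serves the web-interface and pvedaemon the API, so their units are a reasonable place to start; adjust as needed):

Bash:
# full journal for the current boot
journalctl -b
# or narrowed down to the GUI/API services
journalctl -b -u pveproxy -u pvedaemon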
 
Updated from PVE 6.4-11 to PVE 7. In principle everything is fine, thanks for the work.
I have a few small remarks about the metrics.
In the PVE 7 interface, memory usage is shown as used + buff/cache. This can be seen from the output of the "free -m" command.
In the PVE 6.4 interface, memory usage was shown as used only; I checked the output of "free -m" on the old installation.
The same happens when using external metrics - InfluxDB v2.
In addition, in InfluxDB the system/diskwrite metric shows only 0 values after the update.
 
Hi,

I need some help with upgrading from PVE 6.4-11 to PVE 7.x...

When I start the upgrade, I get stuck at the step:
Upgrade the system to Debian Bullseye and Proxmox VE 7.0
Code:
apt dist-upgrade

I got the status:
apt(8) now waits for the lock indefinitely if connected to a tty, or for 120 seconds if not.

(screenshot attached: image.png)

And yes, after 120 seconds nothing happens; I also tried waiting 20 minutes, same status...
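
I guess something might still be holding the apt/dpkg lock; would a check like the following show it (I'm not sure if this is the right approach)?

Bash:
# see which processes hold the apt/dpkg locks
fuser -v /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock /var/lib/apt/lists/lock
ps aux | grep -E 'apt|dpkg'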


Any idea how to fix this so I can continue the installation?

Running "pve6to7 --full" I did not get any errors or warnings...

Thank you for your support...
 
