Proxmox VE 7.0 released!

the only situation I know of where that one could be used is when using Docker in CTs on ZFS.
Yeah, that is exactly what we were doing. We used ZFS to do replication of the docker-in-LXC services between servers. I guess we need to rebuild that in QEMU now, but docker-in-LXC on ZFS, even with AUFS, was just super easy in terms of configuration, maintenance, and replication. It even enabled a pretty useful sort-of-HA, because */15 replication only takes 3 seconds and LXC startup time is 5-10 seconds.
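For context, the replication side was just the built-in PVE storage replication on a */15 schedule, roughly like this (a sketch; the CT ID 105 and target node name pve2 are made up):

# create a replication job for CT 105 towards node pve2, running every 15 minutes
pvesr create-local-job 105-0 pve2 --schedule '*/15'
# check last sync time and duration
pvesr status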
 
Just for clarity: you manage to install PVE 7.0 just fine, but afterwards, once you boot and get to the normal console login it breaks?
Correct. The blue-screen boot selection is fine and text during boot-up is just fine, but the video mode change before the login screen switches to a bad video mode.

Can ssh with no problems, can get to the web console with no problems, can join a cluster with no problems.

Thanks!
 
Yeah, that is exactly what we were doing. We used ZFS to do replication of the docker-in-LXC services between servers. I guess we need to rebuild that in QEMU now, but docker-in-LXC on ZFS, even with AUFS, was just super easy in terms of configuration, maintenance, and replication. It even enabled a pretty useful sort-of-HA, because */15 replication only takes 3 seconds and LXC startup time is 5-10 seconds.
I just wonder if the Proxmox staff will someday implement native, web-interface-controlled Docker support in PVE (in addition to LXC and QEMU)?
 
Correct. The blue-screen boot selection is fine and text during boot-up is just fine, but the video mode change before the login screen switches to a bad video mode.
Ok, thanks for confirmation.
Can ssh with no problems, can get to the web console with no problems, can join a cluster with no problems.
Great that at least this all is working, so it's really only the graphic driver that breaks.

Can you please open a new thread, tag my username with @t.lamprecht, and provide some more info: for now, the output of lspci -knn. It would also be great if you could check the journal (or dmesg) for any errors regarding display/drm/gpu or the like and post those too.
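Something along these lines should gather the relevant bits (a sketch; adjust the filter pattern to your GPU/driver):

lspci -knn                                   # PCI devices incl. the kernel driver in use
journalctl -b | grep -iE 'drm|gpu|display'   # current boot's journal, display-related lines
dmesg --level=err,warn | grep -iE 'drm|i915|amdgpu|nouveau'   # kernel errors/warnings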
 
I just wonder if the Proxmox staff will someday implement native, web-interface-controlled Docker support in PVE (in addition to LXC and QEMU)?
No plans there, I'm afraid. Also, this is the thread for the Proxmox VE 7.0 release :)
 
Yeah, that is exactly what we were doing. We used ZFS to do replication of the docker-in-LXC services between servers. I guess we need to rebuild that in QEMU now, but docker-in-LXC on ZFS, even with AUFS, was just super easy in terms of configuration, maintenance, and replication. It even enabled a pretty useful sort-of-HA, because */15 replication only takes 3 seconds and LXC startup time is 5-10 seconds.
FYI: there's some progress going on with overlayfs support on ZFS, which would be great in general, but one cannot really guess yet when it'll actually be shipped. PVE 6.4 will still be supported for almost a year, so you could wait at least a few months to see how that plays out before investing in a change to VMs; in the worst case, you still have the same amount of work as you'd have now.

FWIW, you could also build the module yourself: https://github.com/sfjro/aufs5-standalone/tree/aufs5.11 but that is definitely more involved.
 
Upgraded a 4 node production cluster running Ceph with no downtime or issues. I did do the needful and ran the check script, ensured Prox 6.4 was fully updated, and reviewed my systems for any of the known issues/extra steps. For example, 3/4 nodes in this cluster run old boards and therefore can only BIOS boot, not EFI boot, so I made sure their ZFS boot pools were properly switched to the proxmox boot method. I have not yet upgraded Ceph to Pacific due to time constraints.

10/10 major version release. Lots of nice little UI improvements, some apparent performance improvements on my hardware, and continued support for hardware that should have been recycled years ago.
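For reference, the pre-upgrade routine boiled down to roughly the following on each node (a sketch; adapt to your own setup):

pve6to7 --full                   # built-in 6.x -> 7.x checklist script
apt update && apt dist-upgrade   # make sure 6.4 is fully up to date first
proxmox-boot-tool status         # confirm how the ZFS boot pools are set up to boot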
 
Got this error.
I have 6.4.13 installed; it looks like the upgrade thinks it's only 6.4.1.

W: (pve-apt-hook) !! ATTENTION !!
W: (pve-apt-hook) You are attempting to upgrade from proxmox-ve '6.4-1' to proxmox-ve '7.0-2'. Please make sure to read the Upgrade notes at
W: (pve-apt-hook) https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
W: (pve-apt-hook) before proceeding with this operation.
W: (pve-apt-hook)
W: (pve-apt-hook) Press enter to continue, or C^c to abort.
 
Hi,
Got this error.
I have 6.4.13 installed; it looks like the upgrade thinks it's only 6.4.1.
note that this is not the pve-manager package, but the proxmox-ve package. Also note that this is not an error but a warning urging you to follow the guide. If you just blindly continue, things might go wrong; there is a reason the guide exists.
 
Got this error.
This is not an error but a warning that you're about to upgrade to a new major release.

It's there to ensure people are aware of the upgrade how-to and that a major upgrade is about to start.

I have 6.4.13 installed; it looks like the upgrade thinks it's only 6.4.1.
The pve-manager package (currently 6.4-13) and the proxmox-ve meta-package (currently indeed version 6.4-1) are two different packages; here the latter is used.
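If you want to see both version numbers side by side, something like this shows them (a sketch; the exact output will of course differ per node):

pveversion -v | grep -E 'proxmox-ve|pve-manager'
# proxmox-ve: 6.4-1 (running kernel: ...)
# pve-manager: 6.4-13 (...)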

If you're aware of the upgrade how-to and ready to upgrade, you can press enter to continue.
 
Hi,

Congratulations on the new release!

I have a fresh PVE 7 installation for testing, and manually adding a VLAN interface (vmbr0.1) on the bridge (vmbr0) like I did on PVE 6.3 does not work. Adding the "hwaddress" option to them helped (I have not tested whether it's needed for both or just one of them), even though our network is not MAC-restricted; see the sketch below the list. Hint: a reboot was needed in my case (after testing different network settings).

It looks to me that:
- showing MAC in the GUI makes sense
- having the installer add hwaddress would be nice
- it would be great if PVE could offer VLAN settings at installation time to avoid error-prone manual setup.
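For reference, the working config ended up looking roughly like this (a sketch; the physical NIC name, addresses and MAC are made up, substitute your own):

# /etc/network/interfaces (relevant part)
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        hwaddress aa:bb:cc:dd:ee:ff

auto vmbr0.1
iface vmbr0.1 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        hwaddress aa:bb:cc:dd:ee:ff

Applied with ifreload -a (ifupdown2 is the default on a fresh PVE 7 install) or, as noted, a reboot.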
 
Dunno if this is a Safari or Proxmox 7 bug, but the GUI isn't playing well with Safari Version 15.0 (17612.1.18.1.3) (macOS Monterey Beta 12).
Some menus don't show up, and creating a VM generates a white window overlay without any text.
Working A1 with Chrome.

(screenshot attached)
 
Hi
Apparently btrfs RAID1 (2 disks) does not work when configured during installation. I tested in VirtualBox, and when removing the first disk the system won't boot.
ZFS is OK.
 
Hi
Apparently btrfs RAID1 (2 disks) does not work when configured during installation. I tested in VirtualBox, and when removing the first disk the system won't boot.
That is expected, you need to manually fix booting in this case.
 
Hi
Apparently btrfs RAID1 (2 disks) does not work when configured during installation. I tested in VirtualBox, and when removing the first disk the system won't boot.
ZFS is OK.
In that case, you can add rootflags=degraded to the kernel command line on boot.
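For example, roughly like this (a sketch, assuming a GRUB-booted install):

# one-off: at the GRUB menu press 'e', append rootflags=degraded to the line
# starting with "linux", then boot with Ctrl-x
#
# persistent: add the flag to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootflags=degraded"
# then regenerate the boot config
update-grub    # or proxmox-boot-tool refresh, if your ESPs are managed by it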
 
Hm, shouldn't the point of RAID be that it works even if it fails (up to a point)?
Yes, and there are many places with issues. That is why we do not recommend btrfs for now; it's just a preview technology in Proxmox VE.
 
