Proxmox VE 7.0 released!

I got the status:
apt(8) now waits for the lock indefinitely if connected to a tty, or for 120 seconds if not.
I understand the confusion ...
As far as I can see this is not a message/instruction from apt, but rather the changelogs of all upgraded packages being displayed in a pager (less(1)).

Try typing 'q' to exit less; the upgrade should then proceed.

However, since I'm not sure whether this is the virtual console, make sure to run the upgrade via ssh (or a direct console/IPMI); see the upgrade guide:
Perform the actions via console or ssh; preferably via console to avoid interrupted ssh connections. Do not carry out the upgrade when connected via the virtual console offered by the GUI, as this will get interrupted during the upgrade.

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Actions_step-by-step
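If you do go over ssh, one common precaution is to run the upgrade inside a terminal multiplexer, so a dropped connection does not kill the upgrade. A minimal sketch, assuming screen is available:

Bash:
apt install screen
screen -S upgrade
apt update && apt dist-upgrade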

I hope this helps!
 
Any reason for 'chrony' still showing up as an 'Unknown' and 'Dead' service in v7.0.8?
Bug? Special feature or easter-egg? Or legacy that still needs to be removed?
Can I do something about it or will it be solved in a new release?

 
Any reason for 'chrony' still showing up as an 'Unknown' and 'Dead' service in v7.0.8?
Bug? Special feature or easter-egg? Or legacy that still needs to be removed?
Can I do something about it or will it be solved in a new release?
Is the chrony package installed?
Check the release notes:
Due to the design limitations of systemd-timesync, which make it problematic for server use, new installations will install chrony as the default NTP daemon.
If you upgrade from a system using systemd-timesyncd, it's recommended that you manually install either chrony, ntp or openntpd.
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.0
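A quick way to check both, as a minimal sketch (the package and service are simply called chrony on Debian):

Bash:
dpkg -s chrony | grep ^Status
systemctl status chrony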

I hope this helps!
 
Any reason for 'chrony' still showing up as an 'Unknown' and 'Dead' service in v7.0.8?
Bug? Special feature or easter-egg? Or legacy that still needs to be removed?
Can I do something about it or will it be solved in a new release?

chrony is the new default, if I am not mistaken. Disable timesyncd and make sure chrony autostarts.
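A minimal sketch of that switch, assuming chrony is already installed:

Bash:
systemctl disable --now systemd-timesyncd
systemctl enable --now chrony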

Update: @Stoiko Ivanov was faster. :D
 
Any reason for 'chrony' still showing up as an 'Unknown' and 'Dead' service in v7.0.8?
Bug? Special feature or easter-egg? Or legacy that still needs to be removed?
Can I do something about it or will it be solved in a new release?
Just for completeness' sake, another answer to this: while chrony is the new default for Proxmox VE 7, it only gets automatically installed for new installations through the ISO. If you upgrade, we do not automatically pull it in, as quite a few admins have already set up other NTP daemons like openntpd or ntpd, and we do not want to mess with those automatically.

The GUI could definitely improve a bit here, as in, override the "warning" for any NTP-providing daemon if one of the available ones runs OK.

If you want to switch over to chrony, which I'd recommend if systemd-timesyncd is in use, then do:
Bash:
apt update
apt install chrony

You may want to check out the chrony docs (manpages) available also online:
https://chrony.tuxfamily.org/doc/4.0/chrony.conf.html

In most cases you will probably only want to adapt the NTP servers it polls, at least if you have known high-quality ones with stable latency in your vicinity. Beyond that, it works pretty much out of the box for simpler setups.
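For example, swapping in your own servers is a small change in /etc/chrony/chrony.conf (ntp1/ntp2.example.com are placeholders):

Bash:
# in /etc/chrony/chrony.conf, replace the default "pool" line with e.g.:
#   server ntp1.example.com iburst
#   server ntp2.example.com iburst
# then apply the change:
systemctl restart chrony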
 
I was happy upgrading my systems... one went OK, two failed massively :(

The failing ones did not come back with networking, so I tried a fresh Debian Bullseye install and then
Code:
apt install proxmox-ve

It all looked OK up to this point...
Code:
Preconfiguring packages ...
(Reading database ... 35134 files and directories currently installed.)
Removing firmware-bnx2x (20210315-2) ...
Removing firmware-linux-free (20200122-1) ...
Removing firmware-realtek (20210315-2) ...
Removing ifupdown (0.8.36) ...

Network error: Software caused connection abort.

aaaand it's gone.
 
Removing firmware-bnx2x (20210315-2) ...
Removing firmware-linux-free (20200122-1) ...
Removing firmware-realtek (20210315-2) ...
That wasn't a healthy Proxmox VE installation previously, was it? Those firmware packages are all provided directly by our pve-firmware package, which conflicts with all of those quoted above. So, if those were installed, it really looks like the packaging state was a bit off, to say the least.

Was this installed on top of Debian? And were all PVE packages installed (pveversion -v), or were some repos missing for the upgrade?

What did the pve6to7 checker-script output before the upgrade?
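For reference, the checker can be re-run at any time; the --full flag enables all checks:

Bash:
pve6to7 --full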
 
I didn't find any info in the release notes.

After the upgrade, swap in LXC is gone? The config has 512 MB (unprivileged container). This happens on all containers.
Code:
              total        used        free      shared  buff/cache   available
Mem:        1048576       37960      906600       16500      104016     1010616
Swap:             0           0           0
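For comparison, the configured swap can be read from the host; a minimal sketch, assuming container ID 101:

Bash:
pct config 101 | grep -i swap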

Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.4: 6.4-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-7
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-7
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1
 
Hey man, no question here... but I just want to say how happy I am with Proxmox on my home server. I was running 6.4 and today I upgraded to v7. So:
  • Stopped all virtual machines
  • Triggered one last backup (to my nfs volume, pointing to my Synology NAS)
  • Re-installed v7 on the host (a clean install)
  • Re-added the nfs volume
  • Imported the virtual machines
  • ... be happy :)
Up and running within 30 minutes! This is really awesome and that's why I'm a PAYING user. I'd encourage everyone here to do the same!
 
How do you measure that? And is it RSS (resident set size) or virtual memory? And what version of Ubuntu/Debian is used there (so that we can see if we can reproduce anything).

Note that due to the switch from control groups v1 to pure v2, the accounting may be slightly different, especially if the tool cannot cope well with cgroup v2 systems. But it should not allow any more usage; we can actually limit swap and memory much more cleanly in v2 (it was quite weird in cgroup v1).
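For reference, a minimal sketch of where cgroup v2 exposes those values on the host; the exact container path is an assumption here and may differ on your setup:

Bash:
# usage and limits for container 101 (path may vary)
cat /sys/fs/cgroup/lxc/101/memory.current
cat /sys/fs/cgroup/lxc/101/memory.max
cat /sys/fs/cgroup/lxc/101/memory.swap.max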
FYI:
the Proxmox host (v7.0) itself displays the correct memory value in the Dashboard.
The containers and the virtual machines do not display correct memory values.
I did a pveupdate/pveupgrade every hour (until now) and got some updates, but still no correct memory values.
 
After upgrade, swap in lxc is gone? In config is 512MB (unpriv. container). This happens on all containers.
You're right: while the accounting itself was correctly set up from the outside, there was a "display bug" in upstream LXCFS when running with cgroup v2, so the swap values were shown as zero to tools inside the container.

Will be fixed with lxcfs version 4.0.8-pve2, which is currently making its way through the repositories.
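Once it lands, something like this should show the fixed version becoming available and installed:

Bash:
apt update
apt list --upgradable | grep lxcfs
pveversion -v | grep lxcfs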
 
Hello,
I updated my Proxmox to version 7. The OpenMediaVault VM has an HDD directly attached via SCSI (passthrough). But the VM failed and crashed my whole Proxmox machine.
It seems to be an error during HDD read/write. I did not have this problem on Proxmox 6.
Is there a bug in directly attaching a drive to a VM?
Thanks
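For context, such a direct attachment is usually set up with qm set; a minimal sketch with a placeholder VM ID and disk ID:

Bash:
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL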
 
The following packages have been kept back:
libpve-access-control pve-manager

apt install libpve-access-control pve-manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
libpve-access-control : Depends: libpve-rs-perl but it is not going to be installed
pve-manager : Depends: libpve-rs-perl (>= 0.2.2) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
 
@Stoiko Ivanov Please add the possibility to set the 'hwaddress' on bridges in the UI, in the 'advanced' section. This is (now) a necessary step on many setups and also seems required for bonds and bridges to work properly. It should be quite simple to add to the UI in PVE 6/7.

Also, it isn't really clear from the manual if and how to configure MAC address(es) when bonds are in use, which MAC to configure there, etc.

Thanks!
 
The following packages have been kept back:
libpve-access-control pve-manager

apt install libpve-access-control pve-manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
libpve-access-control : Depends: libpve-rs-perl but it is not going to be installed
pve-manager : Depends: libpve-rs-perl (>= 0.2.2) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Seems like a repo misconfiguration. Possibly you only updated the Proxmox VE repo to bullseye and missed the Debian ones?
https://pve.proxmox.com/wiki/Package_Repositories#_repositories_in_proxmox_ve
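For reference, a complete Bullseye set could look like this (pve-no-subscription shown as an example; use the enterprise repo if you have a subscription):

Code:
deb http://deb.debian.org/debian bullseye main contrib
deb http://deb.debian.org/debian bullseye-updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription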

If that does not help, please open a new thread and post the repositories configured there:
head -n -0 /etc/apt/sources.list /etc/apt/sources.list.d/*.list
 
@Stoiko Ivanov Please add the possibility to set the 'hwaddress' on bridges in the UI, in the 'advanced' section. This is a necessary step on many setups and also seems required for bonds and bridges to work properly. It should be quite simple to add to the UI in PVE 7.
Please do not tag people directly for requests, but open an enhancement request over at https://bugzilla.proxmox.com/ instead.

You need the CLI for upgrading to a new major version anyway, and the MAC address only needs to change for the interface your host communicates over, and only if you're in a restricted network (e.g., a rented server at a hosting provider). So if you configure the node IP on a bond, change it on the bond; if you configure it on a Linux bridge, like most do, then change it on the bridge, as documented in the upgrade how-to:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Check_Linux_Network_Bridge_MAC
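A minimal /etc/network/interfaces sketch of that bridge change (addresses and MAC are placeholders; use the MAC of the physical port the bridge was using):

Code:
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        hwaddress aa:bb:cc:dd:ee:ff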
 
