Proxmox VE 8.3 released!

Please run debsums -s.
If it doesn't show any changed packages, then you may want to run a memtest on your host.

Did you upgrade the kernel recently? Does it also happen with the previous kernel?
Code:
debsums -s
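To compare against the previous kernel, one option is to simply pick it from the boot menu on the next reboot; on hosts that boot via proxmox-boot-tool, the registered kernels can be listed first (a minimal sketch, only applicable if proxmox-boot-tool is in use):
Code:
# list the kernels registered with the bootloader
proxmox-boot-tool kernel list
# then select the previous kernel (e.g. 6.8.8-4-pve) from the boot menu at the next reboot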

The host is running ECC memory:
Code:
ras-mc-ctl --errors
No Memory errors.

No PCIe AER errors.

No Extlog errors.

No MCE errors.

Kernel upgraded to 6.8.12-4-pve from 6.8.8-4-pve
I have not witnessed this problem before. Is it possible that it is a bug in vncproxy?
I was doing upgrades on a few Ubuntu VMs; as usual, there is very intensive screen updating happening during that...

Rgds
 
I'd suggest moving this to a separate thread. Could you open one and @ me there?
Please also provide the journal covering a few minutes (~10) before and after the segfaults.
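For example, assuming the segfault happened around 14:00 on 2024-12-05 (hypothetical timestamps), something along these lines captures the requested window:
Code:
# export the journal from ~10 minutes before to ~10 minutes after the segfault
journalctl --since "2024-12-05 13:50" --until "2024-12-05 14:10" > journal-around-segfault.txt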
 
I recently performed a fresh installation of PVE 8.3. However, after starting the host I noticed it was using about 7GB of memory out of the available 96GB. Is this normal behavior?

For the boot disk, I chose Btrfs instead of ZFS, similar to another host I set up previously, which started with around 1.8GB of memory usage.

Any insights or advice would be appreciated!
 
However, after starting the host I noticed it was using about 7GB of memory out of the available 96GB. Is this normal behavior?
How did you measure that? It can be fine if it's used for some caches, as unused memory is basically wasted memory and those caches can be made available if more memory is actually required by programs/VMs. But if an idle PVE installation uses 7 GB of memory that cannot be made available it would be rather odd – here a fresh installation uses roughly 1.3 GB of memory.
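A quick way to see how much of that is reclaimable cache versus memory actually claimed by processes (plain Linux tooling, nothing PVE-specific):
Code:
# "available" is what can still be handed out to programs/VMs; buff/cache is reclaimable
free -h
# if ZFS is used anywhere, its ARC also shows up as used memory but shrinks under pressure
arc_summary | head -n 20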
 
Hi Thomas,

I tried again with a fresh install but got the same result, with only 1 SSD disk (the boot drive is partitioned).
On another system with an Intel Xeon it uses 1.2 GB, so I don't know why it uses this much on this server.
1733433882271.png
 
what's the status on nftables now?

Is it recommended to enable it for an existing server without much trouble (just ticking it in the DC > server > firewall tab)?

I noticed that when enabling it, all rules were migrated correctly, but:

Code:
pve-firewall restart
pve-firewall status

Status: enabled/running (pending changes)
 
Hi,
what's the status on nftables now?
while a lot of work was/is being done to shape it up, the nftables-based firewall is still in "tech preview" status; see the docs: https://pve.proxmox.com/pve-docs/chapter-pve-firewall.html#pve_firewall_nft
Is it recommended to enable it for an existing server without much trouble (just ticking it in the DC > server > firewall tab)?
Again, it is still in tech preview, so you might run into bugs/incompatibilities with certain edge cases. I would not recommend it yet for production use in sensitive environments; it will be announced when it's ready for that ;)
I noticed that when enabling it, all rules were migrated correctly, but:

Code:
pve-firewall restart
pve-firewall status

Status: enabled/running (pending changes)
How long did you wait after the restart? Anything in the system logs/journal?
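For example, the firewall service's messages since the current boot can be pulled up like this (a generic check):
Code:
journalctl -b -u pve-firewall.service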
 
How long did you wait after the restart? Anything in the system logs/journal?
I ran the restart command because pve-firewall status showed (pending changes)

journalctl -f doesn't show anything new when enabling/disabling it. When enabling it, pve-firewall.log starts to get entries; it never gets any with the iptables backend.

Code:
0 5 - 09/Dec/2024:00:00:05 +0100 starting pvefw logger
735 7 guest-735-in 09/Dec/2024:10:28:56 +0100 ACCEPT: OUT=fwbr735i1 PHYSOUT=tap735i1
..
 
I ran the restart command because pve-firewall status showed (pending changes)

journalctl -f doesn't show anything new when enabling/disabling it. When enabling it, pve-firewall.log starts to get entries; it never gets any with the iptables backend.

Code:
0 5 - 09/Dec/2024:00:00:05 +0100 starting pvefw logger
735 7 guest-735-in 09/Dec/2024:10:28:56 +0100 ACCEPT: OUT=fwbr735i1 PHYSOUT=tap735i1
..
Please open a new thread (to avoid making the announcement thread here less readable) and provide the details of your configuration (i.e. firewall configuration files, package versions, etc.).
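Typically, the relevant details can be collected like this (replace <nodename> and <vmid> accordingly; not every file exists on every setup):
Code:
pveversion -v                           # package versions
cat /etc/pve/firewall/cluster.fw        # cluster-wide firewall configuration
cat /etc/pve/nodes/<nodename>/host.fw   # host firewall configuration
cat /etc/pve/firewall/<vmid>.fw         # per-guest firewall configuration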
 
Hi! New Proxmox user here (and LOVING IT). In my home environment I just have one Proxmox server with a few VMs and containers on it. I ran the update and it looks like it took me from 8.2 to 8.3.

Everything seems fine. Nothing skipped a beat. Do I have to restart? It didn't say so. I think it restarted the web UI, because my console got disconnected right at the tail end of it, but I reconnected, checked for updates again, and it says I'm all up to date. So that's it? It's a simple non-event?

This is coming from a user who's used VMware professionally for 14+ years and is used to a maintenance-mode-then-reboot workflow.
How did you update?
 
How did you update?
@kjstech Re: rebooting on update: Proxmox won't force you to reboot by default. A lot of things (support libraries, tools, etc.) don't require an immediate reboot.

If you install an updated kernel, the new kernel won't activate until you restart. If you're updating from the console via apt dist-upgrade, it'll warn you to reboot for the new kernel to kick in. I guess that includes going from 8.2 --> 8.3.

There's a potential caveat to this that I don't fully understand yet (new-ish user here, as well):
I think that if updates are installed for QEMU/associated packages that are used to run VMs and LXCs, those updates won't kick in for already-running VMs and LXC containers until those are power-cycled. Not sure what happens with VMs and LXCs that are started after the update but before a reboot.

I think, if you can do it, a reboot is the safest way to make sure everything's using the latest version of QEMU/KVM, but you don't have to until you're ready.
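One way to double-check whether a reboot is still pending for a new kernel (package naming may differ depending on when the system was installed):
Code:
# kernel currently running
uname -r
# newest Proxmox kernels installed on the host (proxmox-kernel-* or pve-kernel-*)
dpkg -l | grep -E '^ii +(proxmox|pve)-kernel'
# if the newest installed version is newer than uname -r, a reboot will switch to it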
 
Hi,
There's a potential caveat to this that I don't fully understand yet (new-ish user here, as well):
I think that if updates are installed for QEMU/associated packages that are used to run VMs and LXCs, those updates won't kick in for already-running VMs and LXC containers until those are power-cycled. Not sure what happens with VMs and LXCs that are started after the update but before a reboot.
newly started guests always use the currently installed versions of QEMU and the lxc-* commands.

Note that for VMs, it is true that currently running instances will still execute with the QEMU version they were started with originally (there is no automatic live-upgrade happening), but the management code that is used (i.e. qemu-server package) will be the currently installed one.

For LXCs it is a bit more complicated. Many things are dependent on the kernel, so it will still be running on the booted kernel in that sense, but e.g. the lxcfs.service is reloaded upon upgrade.

A host reboot is generally recommended after kernel upgrades.
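To illustrate the VM part: you can compare the installed QEMU package with the version a running VM was started with (VMID 100 below is just an example; recent qemu-server versions report this as running-qemu in the verbose status):
Code:
# QEMU package currently installed on the host
pveversion -v | grep pve-qemu-kvm
# QEMU version a running VM is still executing with (replace 100 with an actual VMID)
qm status 100 --verbose | grep running-qemu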
 
Just reporting a successful upgrade from 8.1.3 to 8.3.2 (at least that's the PVE version shown in the GUI). I know I always appreciate positive reports when I'm looking to upgrade.

Thanks to the team for all the hard work!
 
TL;DR: no more IO delay after the update on an (SMR?) HDD

Thanks! This update really did it for me. I am running Proxmox off an HDD on a single-core thin client, after Proxmox destroyed a USB stick and an SSD. Since switching to the HDD, the CPU IO delay was always huge (60%+) when idle and only went down during intensive HDD activity. I thought it might be due to the SMR HDD. Something changed in the last update though: no more IO delay. Thanks a lot!

If anyone knows what changed, please share!
My guess is that Proxmox finally stopped thrashing disks with permanent byte-wise write hammering (which also wore an SSD down by 20% within just a few months). That's just intuition though, because tools like iotop always failed to show any activity.
1736073985142.png
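If anyone wants to verify actual write activity rather than guess, watching the block devices directly tends to work better than iotop for small, frequent writes (iostat is part of the sysstat package):
Code:
apt install sysstat
# extended per-device statistics in MB, refreshed every 5 seconds
iostat -dmx 5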
 
If anyone knows what changed, please share!
My guess is that Proxmox finally stopped thrashing disks with permanent byte-wise write hammering (which also wore an SSD down by 20% within just a few months). That's just intuition though, because tools like iotop always failed to show any activity.


I highly doubt this, since these permanent writes are by design. Proxmox VE needs to ensure that the configuration data in /etc/pve is consistent for all nodes of a cluster. For that reason, the data in /etc/pve is synced between the nodes and to an SQLite file:
https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)

Nothing has changed in that regard, since Proxmox VE's "thrashing of SSDs" isn't a bug but working as intended.

HDDs just handle that better than consumer SSDs, USB sticks or SD cards.

That's the reason the manual recommends enterprise SSDs, not because Proxmox Server Solutions GmbH has an endorsement deal with BigEnterpriseSSD.
 
I highly doubt this, since these permanent writes are by design. Proxmox VE needs to ensure that the configuration data in /etc/pve is consistent for all nodes of a cluster. For that reason, the data in /etc/pve is synced between the nodes and to an SQLite file:
https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)

Nothing has changed in that regard, since Proxmox VE's "thrashing of SSDs" isn't a bug but working as intended.

HDDs just handle that better than consumer SSDs, USB sticks or SD cards.

That's the reason the manual recommends enterprise SSDs, not because Proxmox Server Solutions GmbH has an endorsement deal with BigEnterpriseSSD.
I only run a single node, I have all the multi-node services (pve-ha-lrm.service, pve-ha-crm.service, corosync) disabled, and I even used to put the logs and the PVE graph DB into tmpfs, yet my storage still degraded very fast. Anyway, I just noticed that the server load also went down considerably. Something must have changed, as I haven't fiddled with settings or services for months.

edit: might also be because I updated the 2 VMs and 2 LXCs (one PBS VM, the rest Debian).

I guess "shut up and be happy about it" is a viable option here :-)

1736111268370.png
 
The service for the cluster file system is pve-cluster.service (and you don't want to stop it, since otherwise your configuration changes won't be saved to the SQLite file in case of a reboot or shutdown!), and it runs even on a single-node cluster.
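A quick way to see the service and the backing database file (standard locations on a default install):
Code:
systemctl status pve-cluster.service
# the SQLite database that /etc/pve is persisted to
ls -lh /var/lib/pve-cluster/config.db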
 
Hi,
Nothing has changed in that regard, since Proxmox VE's "thrashing of SSDs" isn't a bug but working as intended.
no, there was a change; quoting the release notes:
Code:
Reduce amplification when writing to the cluster filesystem (pmxcfs), by adapting the fuse setup and using a lower-level write method (issue 5728).
and changelog:
Code:
pve-cluster (8.0.8) bookworm; urgency=medium

  * fix #5728: pmxcfs: allow bigger writes than 4k by using the fuse
    big_writes option. This reduces write-amplification in the sqlite DB for
    writes bigger than 4 KiB, especially if paired with a recent change to the
    file_set_contents helper from pve-common to write as much as possible in
    one go.
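So to benefit from that change, the host needs pve-cluster 8.0.8 or newer; a quick check of the installed version (plain dpkg usage):
Code:
dpkg -s pve-cluster | grep '^Version'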