Proxmox Virtual Environment 9.0 released!

Does anyone have an answer regarding my situation? Thank you very much!
Did you run apt update after changing all the sources? It should usually find a few hundred packages to update then; you might need to run it again. But yeah, in that case it should be fine. In the upgrade guide, we don't call pve8to9 anymore after changing the sources :)
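For reference, a minimal sketch of what the classic one-line sources could look like after switching to Debian 13 "trixie" with the no-subscription repository (the official upgrade guide is authoritative; filenames and components may differ on your setup):
Code:
# /etc/apt/sources.list
deb http://deb.debian.org/debian trixie main contrib
deb http://deb.debian.org/debian trixie-updates main contrib
deb http://security.debian.org/debian-security trixie-security main contrib

# /etc/apt/sources.list.d/pve-no-subscription.list (assumed filename)
deb http://download.proxmox.com/debian/pve trixie pve-no-subscription
After that, apt update should report several hundred upgradable packages, as mentioned above.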
 
Did you run apt update after changing all the sources? It should usually find a few hundred packages to update then; you might need to run it again. But yeah, in that case it should be fine. In the upgrade guide, we don't call pve8to9 anymore after changing the sources :)

Yes, I ran “apt update” afterwards, and this is what I got in the console: "579 packages can be upgraded. Run 'apt list --upgradable' to see them".

However, I haven't run “apt upgrade” yet; I'll wait for you to confirm whether it's preferable before launching the upgrade to PVE 9 (with Debian 13) via "dist-upgrade" ;)

Thanks !
 
Hi,
Yes, I ran “apt update” afterwards, and this is what I got in the console: "579 packages can be upgraded. Run 'apt list --upgradable' to see them".

However, I haven't run “apt upgrade” yet; I'll wait for you to confirm whether it's preferable before launching the upgrade to PVE 9 (with Debian 13) via "dist-upgrade"
Running apt upgrade when you have already switched the repositories to the new Debian codename is a no-go! And actually, you should always use apt dist-upgrade or apt full-upgrade on a Proxmox installation: https://lore.proxmox.com/pve-devel/20240909102050.40220-1-f.ebner@proxmox.com/
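For the record, a minimal sketch of the intended sequence once the sources point at trixie (follow the official upgrade guide for the full procedure):
Code:
apt update
apt dist-upgrade    # or: apt full-upgrade -- but never plain 'apt upgrade' at this point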
 
Do you have the "ballooning device" enabled in the advanced memory options? If not, there is no way to get the detailed guest-view info and you are in the same boat as with the *BSDs ;)
Indeed, I enabled the ballooning option, specifying the same value for both the minimum and maximum memory, and the memory usage information in the VM summary is now correct. Thanks for the info.
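In case someone wants to do the same from the CLI, a sketch with a hypothetical VM ID 100 and 4096 MiB (both values are placeholders):
Code:
# enable the balloon device but keep min == max, so memory reporting works without actual ballooning
qm set 100 --memory 4096 --balloon 4096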
 
Very detailed upgrade instructions. Smooth upgrade of the whole cluster.

The only thing I did not see in the instructions was to run
Code:
ceph osd set noout
before upgrading a node when using a Ceph cluster. But the warning message from pve8to9 should point the user to this command.
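For completeness, a sketch of the usual pattern around such an upgrade (standard Ceph flags; adapt to your own procedure):
Code:
ceph osd set noout      # before upgrading/rebooting a node
# ... upgrade and reboot the node ...
ceph osd unset noout    # once the node and its OSDs are back up and healthy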

Congrats on the release Proxmox team :)
 
Again for the people affected by the LVM thin pool issue ( @Jedis @Kevo @Damon1974 @thomascgh ), could you also post the output of
Code:
grep -e thin_check_options -e auto-repair /etc/lvm/lvm.conf
? Thanks!
 
Is it just me, or does anyone else have a problem with snapshots not working on LVM over iSCSI for some VMs?

I've created a test setup: the newest Proxmox 8.x with a Windows VM created on it, then fully upgraded it to Proxmox 9 (and enabled the Allow Snapshots as Volume-Chain option on the iSCSI storage) and rebooted the host. After the reboot I still cannot take a snapshot of the existing VM. I've created another Windows-based VM, but the option is still not available ("The current guest configuration does not support taking new snapshots").

Then I created an empty Linux VM, and that one works (at least I have the option to take a snapshot).

Is there something I've done wrong with the Windows VMs? Or is it a bug?
 
Hi,
Is it just me, or does anyone else have a problem with snapshots not working on LVM over iSCSI for some VMs?

I've created a test setup: the newest Proxmox 8.x with a Windows VM created on it, then fully upgraded it to Proxmox 9 (and enabled the Allow Snapshots as Volume-Chain option on the iSCSI storage) and rebooted the host. After the reboot I still cannot take a snapshot of the existing VM. I've created another Windows-based VM, but the option is still not available ("The current guest configuration does not support taking new snapshots").
If you have a TPM disk, snapshots are not yet supported, because the TPM state file can currently only be raw, and qcow2 would be required for snapshots: https://bugzilla.proxmox.com/show_bug.cgi?id=4693

If you get a runtime error rather than the option just being grayed out, make sure to use a QEMU machine version >= 10.0. For more information about that, see: https://lore.proxmox.com/pve-devel/.../T/#mc06dfe1364b603213eb1ceff653cdb462bdaa3bc
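If it helps with narrowing things down, a sketch for checking both conditions from the CLI (hypothetical VM ID 100; the exact machine type string depends on the installed QEMU version):
Code:
# does the VM have a TPM state disk (not yet snapshottable), and which machine version is pinned?
qm config 100 | grep -e tpmstate -e machine

# pin a QEMU 10.0 machine version if needed (example value, adjust to your setup)
qm set 100 --machine pc-q35-10.0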

EDIT: improve wording
 
Again for the people affected by the LVM thin pool issue ( @Jedis @Kevo @Damon1974 @thomascgh ), could you also post the output of
Code:
grep -e thin_check_options -e auto-repair /etc/lvm/lvm.conf
? Thanks!
Code:
$ grep -e thin_check_options -e auto-repair /etc/lvm/lvm.conf
        # (Not recommended.) Also see thin_check_options.
        # Configuration option global/thin_check_options.
        # thin_check_options = [ "-q", "--clear-needs-check-flag" ]
 
Hello,
I have three Proxmox nodes running on three separate mini PCs. After reviewing the process to upgrade Proxmox from version 8 to 9, I have a clear idea of how to do it. I just have a few questions about some of the prerequisites:
- Does the Proxmox Backup Server have to be a machine independent of the Proxmox node, or can it be a virtualized machine in Proxmox itself?
- If the Proxmox Backup Server were a virtual machine running inside the very Proxmox node being upgraded, I understand it should be the only machine running at that point; is that correct?
- I understand that the availability of a Proxmox Backup Server is requested because it will be used during the upgrade process to save some kind of configuration; is that right?

Best regards and thank you very much!! :)

PS: I have already upgraded my other three Proxmox Backup Server nodes to version 4 without any problems.
 
- If the Proxmox Backup Server were a virtual machine running inside the very Proxmox node being upgraded, I understand it should be the only machine running at that point; is that correct?
- I understand that the availability of a Proxmox Backup Server is requested because it will be used during the upgrade process to save some kind of configuration; is that right?

No and no - no backups are done during the upgrade; you are supposed to have backups before starting the upgrade ;)
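So the only expectation is that recent backups exist somewhere before you start. A sketch of taking one manually with vzdump, using a hypothetical VM ID and storage name:
Code:
vzdump 100 --storage my-backup-storage --mode snapshot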
 
Here they are. Send me a PM for the password to the 7-Zip archive.
Unfortunately, I can't find any LVM-related errors in the logs.

Prior to reboot, when it shuts down, it can still see all volumes and the thin pool:
Code:
Aug 05 17:40:49 zerovector lvm[115769]:   29 logical volume(s) in volume group "pve" unmonitored
But after reboot, it can only see swap and the root file system (judging from the surrounding logs), but not the thin pool:
Code:
Aug 05 17:41:06 zerovector lvm[498]:   2 logical volume(s) in volume group "pve" monitored
And I can't see any errors before or after until
Code:
Aug 05 17:41:15 zerovector pve-guests[1549]: activating LV 'pve/data' failed:   Check of pool pve/data failed (status:64). Manual repair required

Could you share the output of the following commands?
Code:
pvdisplay
vgdisplay
lvdisplay pve/data
lvdisplay pve/name-of-a-volume-inside-the-thin-pool
 
Unfortunately, I can't find any LVM-related errors in the logs.

Prior to reboot, when it shuts down, it can still see all volumes and the thin pool:
Code:
Aug 05 17:40:49 zerovector lvm[115769]:   29 logical volume(s) in volume group "pve" unmonitored
But after reboot, it can only see swap and the root file system (judging from the surrounding logs), but not the thin pool:
Code:
Aug 05 17:41:06 zerovector lvm[498]:   2 logical volume(s) in volume group "pve" monitored
And I can't see any errors before or after until
Code:
Aug 05 17:41:15 zerovector pve-guests[1549]: activating LV 'pve/data' failed:   Check of pool pve/data failed (status:64). Manual repair required

Could you share the output of the following commands?
Code:
pvdisplay
vgdisplay
lvdisplay pve/data
lvdisplay pve/name-of-a-volume-inside-the-thin-pool
The only thing that stood out from the upgrade regarding LVM was that, during the update, it asked whether I wanted to keep my lvm.conf or use the one shipped with the upgrade. I had it show the diff (it's included in the output in the zip file -- term.log), but because it echoed all of the control characters, it was hard to read. I chose to use the one included in the upgrade, so if there was some kind of unique setup there, it may have broken things.

Code:
Configuration file '/etc/lvm/lvm.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** lvm.conf (Y/I/N/O/D/Z) [default=N] ? D
--- /etc/lvm/lvm.conf   2024-09-23 11:05:49.577383395 -0400
+++ /etc/lvm/lvm.conf.dpkg-new  2025-05-05 14:14:18.000000000 -0400
@@ -36,6 +36,19 @@
        # This configuration option has an automatic default value.
        # checks = 1
 
+       # Configuration option config/validate_metadata.
+       # Allows to select the level of validation after metadata transformation.
+       # Validation takes extra CPU time to verify internal consistency.
+       # Accepted values:
+       #   full
+       #     Do a full metadata validation before disk write.
+       #   none
+       #     Skip any checks (unrecommended, slightly faster).
+       #
+       # This configuration option is advanced.
+       # This configuration option has an automatic default value.
+       # validate_metadata = "full"
+
        # Configuration option config/abort_on_errors.
        # Abort the LVM process if a configuration mismatch is found.
        # This configuration option has an automatic default value.
@@ -122,7 +135,7 @@
        # Configuration option devices/use_devicesfile.
        # Enable or disable the use of a devices file.
        # When enabled, lvm will only use devices that
-       # are lised in the devices file. A devices file will
+       # are listed in the devices file. A devices file will
        # be used, regardless of this setting, when the --devicesfile
        # option is set to a specific file name.
        # This configuration option has an automatic default value.
@@ -135,6 +148,16 @@
        # This configuration option has an automatic default value.
        # devicesfile = "system.devices"
 
+       # Configuration option devices/devicesfile_backup_limit.
+       # The max number of backup files to keep in /etc/lvm/devices/backup.
+       # LVM creates a backup of the devices file each time a new
+       # version is created, or each time a modification is detected.
+       # When the max number of backups is reached, the oldest are
+       # removed to remain at the limit. Set to 0 to disable backups.
+       # Only the system devices file is backed up.
+       # This configuration option has an automatic default value.
+       # devicesfile_backup_limit = 50
+
        # Configuration option devices/search_for_devnames.
        # Look outside of the devices file for missing devname entries.
        # A devname entry is used for a device that does not have a stable
<snip -- the full output is in the term.log file>

Code:
Configuration file '/etc/lvm/lvm.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** lvm.conf (Y/I/N/O/D/Z) [default=N] ? Y
Installing new version of config file /etc/lvm/lvm.conf ...
Installing new version of config file /etc/lvm/lvmlocal.conf ...
Installing new version of config file /etc/lvm/profile/vdo-small.profile ...
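As an aside, when the maintainer's version is installed, dpkg normally keeps the previous file with a .dpkg-old suffix, so the old settings can still be compared afterwards (assuming that backup file is present on the system):
Code:
ls -l /etc/lvm/lvm.conf.dpkg-old
diff -u /etc/lvm/lvm.conf.dpkg-old /etc/lvm/lvm.conf | less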

Code:
root@myhost:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/nvme0n1p3
  VG Name               pve
  PV Size               464.76 GiB / not usable <3.01 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              118978
  Free PE               4096
  Allocated PE          114882
  PV UUID               fUxWE0-SKKU-lapb-8kwS-0e3v-Byqg-0p3Pm2

  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               ext
  PV Size               <1.82 TiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476932
  Free PE               128
  Allocated PE          476804
  PV UUID               5Y7yOB-EilR-ZbtB-RJ7P-L14V-Gt3b-W5G3Gi

root@myhost:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  292
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                27
  Open LV               15
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <464.76 GiB
  PE Size               4.00 MiB
  Total PE              118978
  Alloc PE / Size       114882 / <448.76 GiB
  Free  PE / Size       4096 / 16.00 GiB
  VG UUID               etj2MO-CLOA-Xhc7-hGGv-8M8W-Sp3x-zUhPq6

  --- Volume group ---
  VG Name               ext
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  61
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476932
  Alloc PE / Size       476804 / <1.82 TiB
  Free  PE / Size       128 / 512.00 MiB
  VG UUID               mKCbWq-DAfB-NojE-Ofo6-F9vW-TIfh-dKoyxm

root@myhost:~# lvdisplay pve/data
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                d6qMKa-zkQ0-2cTP-7JMg-a0C9-OwXp-2QliuY
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2023-04-24 17:46:38 -0400
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <369.22 GiB
  Allocated pool data    34.04%
  Allocated metadata     1.29%
  Current LE             94520
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:7

root@myhost:~# lvdisplay pve/vm-103-disk-0
  --- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-0
  LV Name                vm-103-disk-0
  VG Name                pve
  LV UUID                29GJaS-mXHz-cbnW-SSoz-VsXi-1AON-2oweh7
  LV Write Access        read/write
  LV Creation host, time myhost, 2023-04-28 23:57:41 -0400
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                32.00 GiB
  Mapped size            62.59%
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:9
 
I made some backups of VMs and containers on Proxmox 8, installed a fresh Proxmox 9, and none of my backups will start. Any help?
 
The only thing that stood out from the upgrade regarding LVM was that, during the update, it asked whether I wanted to keep my lvm.conf or use the one shipped with the upgrade. I had it show the diff (it's included in the output in the zip file -- term.log), but because it echoed all of the control characters, it was hard to read. I chose to use the one included in the upgrade, so if there was some kind of unique setup there, it may have broken things.
Yes, I saw that and it should be fine (there is another report from a user who didn't replace the config during the upgrade either). There was a suspicion it could be related to the thin check options, but that might not be the only problem. Since you replaced it, the thin check options are the default ones anyway, as the output of the grep command earlier also shows.
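For anyone who kept their own lvm.conf and wants to see how far it strays from the packaged defaults, lvmconfig can print only the settings that differ (a sketch; the output varies per system):
Code:
lvmconfig --typeconfig diff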
 
It should; did you force-reload the documentation site to make sure you aren't just still seeing the previous version? HA affinities have a dedicated section, and snapshots as volume chains on LVM are also documented; e.g., on our public mirrored docs these topics are visible here:

https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#ha_manager_rules
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_lvm
Ah, thanks. I thought there would be a new entry in the overview table at the beginning of the storage page and was looking for the word "affinity" in the index.