Proxmox VE 9.0 BETA released!

I upgraded two of my home servers to Debian Trixie + Proxmox 9.0 Beta. I have the following issues and questions:
1) vGPU unlock patching does not work. The patch program gives the following error when attempting to patch the NVIDIA drivers: "/usr/bin/patch: ** patch line 9 contains NUL byte." I have worked around this issue by patching the drivers on a Debian 12 installation.

2) How would I go about adding the AMD (Ryzen) and Intel microcode updates to these two systems? Is it a separate repo I have to add / configure with APT? (My current guess is sketched below this list.)

3) On my main server rig, I have two VMs that run all the time, Win Server 2025 and Win11, both updated to the same base kernel / build. On Proxmox 8.x, KSM sharing would kick in within about 5 minutes of them starting and share about 7 GB of RAM. With Proxmox 9 on Trixie, KSM sharing never kicks in. Is there a reason for this, or is this broken / bugged? (The quick check I'm using is shown below the screenshot.)
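
Regarding 2): from what I can tell, on plain Debian it would be something like the following, but I'm not sure whether this is the recommended way on a PVE system (repo line and package names are my assumption based on standard Debian Trixie):
Code:
# enable the non-free-firmware component, e.g. in /etc/apt/sources.list:
#   deb http://deb.debian.org/debian trixie main contrib non-free-firmware
apt update
apt install amd64-microcode intel-microcode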

[Screenshot: KSM-Broken.png]
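
Regarding 3): this is what I'm looking at to check KSM activity (plain kernel sysfs interface plus the ksmtuned service shipped with PVE, as far as I know):
Code:
# 1 means KSM is enabled
cat /sys/kernel/mm/ksm/run
# pages currently shared / sharing
cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
# the KSM tuning daemon
systemctl status ksmtuned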

Thanks.
 
I built two new servers (planning a third to test clustering) and did a fresh ISO install, then ran an update to ensure they were at the latest releases.

Then, when selecting to enable Ceph, it complained about dependency conflicts between the Ceph packages. I don't have the exact logs on me right now, but it was a conflict between 19.2.2-pve2 and 19.2.2-pve5.

I tried through both the WebUI and the CLI; neither installed Ceph. No other changes were made to the system; it was a fresh install.
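
For reference, the CLI attempt was the standard Ceph installer, roughly along these lines (exact repository flag may differ):
Code:
# install Ceph from the test repository (flag value assumed)
pveceph install --repository test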
 
Backporting is actually not even an option here: the fix would apply on the server side, not in Proxmox VE. So we need to work around the error on our side, i.e. the version of targetcli-fb in Debian 13 throws that error, and Debian 13 is/will be very popular. I'll prepare a patch for that.

EDIT: for now, I created a merge request adding the fix for Trixie in Debian Salsa: https://salsa.debian.org/linux-blocks-team/targetcli-fb/-/merge_requests/12
We can hope that this still makes it into the final Debian 13 release; if it does not, we'll go for the workaround on our side.
@Danfossi the fix did make it into Trixie, with many thanks to @fabian :)
Code:
# apt changelog targetcli-fb
targetcli-fb (1:2.1.53-1.3) unstable; urgency=medium

  * Non-maintainer upload.

  [Fiona Ebner]
  * fix Python exception when creating LUNs with ACLs present (Closes: #1109887)

 -- Fabian Grünbichler <debian@fabian.gruenbichler.email>  Sat, 26 Jul 2025 09:27:03 +0200
 
I think that's what fixed it for me.
I'm still not clear on what I can do to stop this from popping up everywhere. It doesn't stop anything from working, and I know the 9.0 release notes mentioned something about it, but they weren't clear on how to fix it.

Clearing the Web UI cache doesn't help, and that's not where I see it most anyway; it shows up most often on the CLI.

Code:
root@pve:~# pct enter 104
user config - ignore invalid privilege 'VM.Monitor'
 

You have a role using that privilege, and that privilege no longer exists. Update the role, and the warning will be gone.
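
Something along these lines should do it (the role name and remaining privilege list below are just examples, check your own setup):
Code:
# find which role(s) still reference the dropped privilege
grep VM.Monitor /etc/pve/user.cfg
pveum role list

# update that role, listing its privileges again without 'VM.Monitor'
pveum role modify MyCustomRole -privs "VM.Audit VM.Console"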
 
Well, the parser drops it (see the warning), but it is still there in the config unless you modify it.
 
Great work! We will start incorporating this into our CI/CD.



We're pleased to share that PVE 9 Beta has successfully passed our storage regression tests, aligned with the feature set of PVE 8.

Along the way, we identified a regression related to sg3-utils, which was promptly incorporated/resolved by the Proxmox team. A corrected package was made available through the PVE repository, and we've confirmed that the issue is fully resolved with the updated version.

We did have to make a few adjustments to our test suite, specifically around leak detection, to accommodate changes in the behavior of showcmd. We'll follow up on the dev list about this. Importantly, all core functionality remains intact and behaves as expected.

A big thank-you to the Proxmox team for their prompt support. Great work as always!


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
thanks a lot, applying that patch to /usr/share/perl5/PVE/LXC.pm did the trick...

Code:
$ podman run hello-world
Resolved "hello-world" as an alias (/etc/containers/registries.conf.d/shortnames.conf)
Trying to pull docker.io/library/hello-world:latest...
Getting image source signatures
Copying blob e6590344b1a5 done  
Copying config 74cc54e27d done  
Writing manifest to image destination
Storing signatures

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
 
Yes, it can also be used for those storages, but there it must be enabled at storage creation time (i.e., when adding it as a new storage in PVE). This is because these storages already supported the qcow2 format before and were rather flexible with the names one could choose manually, so we cannot simply allow turning it on for existing storage config entries without ambiguity.

But when it is enabled (the checkbox is already there in the UI), the snapshots will be created as separate volumes, which avoids some disadvantages of in-format qcow2 snapshots, like the temporary performance drop on e.g. NFS.
FWIW, we focused a bit more of our testing on the LVM side, so there might be a few more edge cases left for the directory-based plugins, and it's definitely less of a pain point compared to the lack of simple vendor-agnostic snapshot support for volumes on SANs.
Hi!
Regarding the QCOW2 external snapshots:
I can't get them to work...

I updated our test cluster to PVE 9 Beta and added an NFS volume as storage.
The "Allow Snapshots as Volume-Chain" checkbox was visible and I activated it.
It's also visible in storage.cfg:

Code:
nfs: lnx_test_01_rz0
        export /vol_proxmox_lnx_test_01
        path /mnt/pve/lnx_test_01_rz0
        server 10..X.X.X
        content images
        prune-backups keep-all=1
        snapshot-as-volume-chain 1

I cloned a VM to that storage and took a snapshot.
Sadly it was still an internal QCOW2 snapshot.

Then I tried it with a newly created VM, but sadly the same: no external qcow2 snapshots (checked as shown below).
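
How I checked (the path and VMID here are assumed from the storage config above, adjust as needed):
Code:
# list snapshots stored inside the qcow2 image itself
qemu-img snapshot -l /mnt/pve/lnx_test_01_rz0/images/100/vm-100-disk-0.qcow2
# with volume-chain snapshots I would expect additional snapshot volumes next to the disk
ls -l /mnt/pve/lnx_test_01_rz0/images/100/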

Am I missing something?

I have been eagerly awaiting this feature, because snapshot creation and deletion on NFS shares is sometimes very painful with large disks and some disk pressure... :)
 
Latest update gave me an error on the two systems I updated
Code:
dpkg: error processing package systemd-boot (--configure):
 installed systemd-boot package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 systemd-boot
E: Sub-process /usr/bin/dpkg returned an error code (1)
Not sure if this is widespread or my systems are an edge case.
Is any other information required?
Systems tested
  • MiniPC 2020 Topton i5-8265U
  • MiniPC 2023 Topton i5-1235U
Full update listing was
Code:
Starting system upgrade: apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  libpve-rs-perl pve-container shim-signed shim-signed-common
4 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 3483 kB of archives.
After this operation, 6144 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve trixie/pve-test amd64 libpve-rs-perl amd64 0.10.6 [2995 kB]
Get:2 http://download.proxmox.com/debian/pve trixie/pve-test amd64 pve-container all 6.0.5 [146 kB]
Get:3 http://download.proxmox.com/debian/pve trixie/pve-test amd64 shim-signed-common all 1.47+pmx1+15.8-1+pmx1 [15.6 kB]
Get:4 http://download.proxmox.com/debian/pve trixie/pve-test amd64 shim-signed amd64 1.47+pmx1+15.8-1+pmx1 [326 kB]
Fetched 3483 kB in 1s (4827 kB/s)   
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 58749 files and directories currently installed.)
Preparing to unpack .../libpve-rs-perl_0.10.6_amd64.deb ...
Unpacking libpve-rs-perl (0.10.6) over (0.10.5) ...
Preparing to unpack .../pve-container_6.0.5_all.deb ...
Unpacking pve-container (6.0.5) over (6.0.3) ...
Preparing to unpack .../shim-signed-common_1.47+pmx1+15.8-1+pmx1_all.deb ...
Unpacking shim-signed-common (1.47+pmx1+15.8-1+pmx1) over (1.46+pmx2+15.8-1+pmx1) ...
Preparing to unpack .../shim-signed_1.47+pmx1+15.8-1+pmx1_amd64.deb ...
Unpacking shim-signed:amd64 (1.47+pmx1+15.8-1+pmx1) over (1.46+pmx2+15.8-1+pmx1) ...
Setting up pve-container (6.0.5) ...
Setting up libpve-rs-perl (0.10.6) ...
Setting up shim-signed-common (1.47+pmx1+15.8-1+pmx1) ...
No DKMS packages installed: not changing Secure Boot validation state.
Setting up shim-signed:amd64 (1.47+pmx1+15.8-1+pmx1) ...
No DKMS packages installed: not changing Secure Boot validation state.
Processing triggers for proxmox-kernel-helper (9.0.2) ...
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="8591-DA94" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Mounting '/dev/disk/by-uuid/8591-DA94' on '/var/tmp/espmounts/8591-DA94'.
Installing systemd-boot..
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/8591-DA94/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/8591-DA94/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/8591-DA94/loader/random-seed successfully refreshed (32 bytes).
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/disk/by-uuid/8591-DA94'.
Adding '/dev/disk/by-uuid/8591-DA94' to list of synced ESPs..
Processing triggers for procps (2:4.0.4-8) ...
Processing triggers for pve-ha-manager (5.0.1) ...
Processing triggers for pve-manager (9.0.0~12) ...
Processing triggers for systemd (257.7-1) ...
Processing triggers for man-db (2.13.1-1) ...
Processing triggers for systemd-boot (257.7-1) ...
dpkg: error processing package systemd-boot (--configure):
 installed systemd-boot package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 systemd-boot
E: Sub-process /usr/bin/dpkg returned an error code (1)

Your System is up-to-date

starting shell
root@pve2:/#
 
thanks a lot, applying that patch to /usr/share/perl5/PVE/LXC.pm did the trick...
This should already be packaged in pve-container version 6.0.4, which was uploaded this (well, now yesterday's) afternoon.
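
A quick way to check which version is actually installed (standard apt, nothing PVE-specific):
Code:
apt policy pve-container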
 
Latest update gave me an error on the two systems I updated
Are you sure you have enough space left on the boot partitions?

What is the output of the following?

Code:
mkdir /run/tmp-mnt
mount /dev/disk/by-uuid/8591-DA94 /run/tmp-mnt

df -h /run/tmp-mnt

# cleanup
umount /run/tmp-mnt
rmdir /run/tmp-mnt
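
Possibly also worth a look, proxmox-boot-tool's view of the synced ESPs (just an additional sanity check):
Code:
proxmox-boot-tool status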
 
Are you sure you have enough space left on the boot partitions?

What is the output of the following?

Code:
root@pve2:/dev/disk/by-uuid# ls
0B8B-BE9F  1396322700503545336  3e9f448c-56f9-4c42-8247-f253581c592a  8591-DA94
root@pve2:/dev/disk/by-uuid# mkdir /run/tmp-mnt
root@pve2:/dev/disk/by-uuid# mount /dev/disk/by-uuid/8591-DA94 /run/tmp-mnt
root@pve2:/dev/disk/by-uuid# df -h /run/tmp-mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2      1022M  181M  842M  18% /run/tmp-mnt



There should be enough space, as the partitions were created automatically during the Proxmox installation. The ZFS partitions on both systems are below 30% used.
[Screenshot: 01 update error.jpg]

However, I suspect a subsequent update on your end has fixed it, as running the update again today cleared the error:
Code:
Starting system upgrade: apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  conntrack libclass-methodmaker-perl
The following packages will be upgraded:
  ifupdown2 libpve-access-control libpve-storage-perl proxmox-firewall pve-firewall
  pve-lxc-syscalld pve-manager qemu-server
8 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 2767 kB of archives.
After this operation, 22.0 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve trixie/pve-test amd64 ifupdown2 all 3.3.0-1+pmx8 [256 kB]
Get:2 http://download.proxmox.com/debian/pve trixie/pve-test amd64 libpve-access-control all 9.0.3 [75.2 kB]
Get:3 http://download.proxmox.com/debian/pve trixie/pve-test amd64 libpve-storage-perl all 9.0.9 [167 kB]
Get:4 http://download.proxmox.com/debian/pve trixie/pve-test amd64 pve-firewall amd64 6.0.3 [78.0 kB]
Get:5 http://download.proxmox.com/debian/pve trixie/pve-test amd64 proxmox-firewall amd64 1.1.1 [745 kB]
Get:6 http://deb.debian.org/debian trixie/main amd64 conntrack amd64 1:1.4.8-2 [35.6 kB]
Get:7 http://download.proxmox.com/debian/pve trixie/pve-test amd64 pve-lxc-syscalld amd64 2.0.1 [307 kB]
Get:8 http://download.proxmox.com/debian/pve trixie/pve-test amd64 qemu-server amd64 9.0.10 [329 kB]
Get:9 http://download.proxmox.com/debian/pve trixie/pve-test amd64 pve-manager all 9.0.0~14 [591 kB]
Get:10 http://deb.debian.org/debian trixie/main amd64 libclass-methodmaker-perl amd64 2.25-1 [181 kB]
Fetched 2767 kB in 1s (4362 kB/s)          
Reading changelogs... Done
Selecting previously unselected package conntrack.
(Reading database ... 58749 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.8-2_amd64.deb ...
Unpacking conntrack (1:1.4.8-2) ...
Preparing to unpack .../1-ifupdown2_3.3.0-1+pmx8_all.deb ...
Unpacking ifupdown2 (3.3.0-1+pmx8) over (3.3.0-1+pmx7) ...
Selecting previously unselected package libclass-methodmaker-perl:amd64.
Preparing to unpack .../2-libclass-methodmaker-perl_2.25-1_amd64.deb ...
Unpacking libclass-methodmaker-perl:amd64 (2.25-1) ...
Preparing to unpack .../3-libpve-access-control_9.0.3_all.deb ...
Unpacking libpve-access-control (9.0.3) over (9.0.2) ...
Preparing to unpack .../4-libpve-storage-perl_9.0.9_all.deb ...
Unpacking libpve-storage-perl (9.0.9) over (9.0.8) ...
Preparing to unpack .../5-pve-firewall_6.0.3_amd64.deb ...
Unpacking pve-firewall (6.0.3) over (6.0.2) ...
Preparing to unpack .../6-proxmox-firewall_1.1.1_amd64.deb ...
Unpacking proxmox-firewall (1.1.1) over (1.1.0) ...
Preparing to unpack .../7-pve-lxc-syscalld_2.0.1_amd64.deb ...
Unpacking pve-lxc-syscalld (2.0.1) over (2.0.0) ...
Preparing to unpack .../8-qemu-server_9.0.10_amd64.deb ...
Unpacking qemu-server (9.0.10) over (9.0.9) ...
Preparing to unpack .../9-pve-manager_9.0.0~14_all.deb ...
Unpacking pve-manager (9.0.0~14) over (9.0.0~12) ...
Setting up libclass-methodmaker-perl:amd64 (2.25-1) ...
Setting up conntrack (1:1.4.8-2) ...
Setting up ifupdown2 (3.3.0-1+pmx8) ...
find: '/var/lib/dhcp/': No such file or directory
Setting up systemd-boot (257.7-1) ...
Setting up libpve-access-control (9.0.3) ...
Setting up pve-lxc-syscalld (2.0.1) ...
Setting up libpve-storage-perl (9.0.9) ...
Setting up pve-firewall (6.0.3) ...
Setting up qemu-server (9.0.10) ...
Setting up proxmox-firewall (1.1.1) ...
Setting up pve-manager (9.0.0~14) ...
Processing triggers for man-db (2.13.1-1) ...
Processing triggers for dbus (1.16.2-2) ...
Processing triggers for procps (2:4.0.4-8) ...
Processing triggers for pve-ha-manager (5.0.1) ...
Processing triggers for systemd (257.7-1) ...

Your System is up-to-date

starting shell
root@pve2:/# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@pve2:/#

Given that it resolved itself with subsequent updates, I suspect there is limited value in further debugging, but I'm happy to pursue it if you see value in doing so.
 
Hi all,

I am trying the thick-LVM snapshot function and found two weird things. I took snapshots 0 ~ 19 (ss0 ~ ss19), and then:

1. When I try to remove snapshot ss10, it prompts:
Code:
vm-100-disk-0.qcow2: deleting snapshot 'ss10' by rebasing 'ss11' on top of 'ss9'
running 'qemu-img rebase -b /dev/vg1/snap_vm-100-disk-0_ss9.qcow2 -F qcow -f qcow2 /dev/vg1/snap_vm-100-disk-0_ss11.qcow2'
TASK ERROR: error rebase ss11 from ss9; command '/usr/bin/qemu-img rebase -b /dev/vg1/snap_vm-100-disk-0_ss9.qcow2 -F qcow2 -f qcow2 /dev/vg1/snap_vm-100-disk-0_ss11.qcow2' failed: Insecure dependency in exec while running with -T switch at /usr/lib/x86_64-linux-gnu/perl-base/IPC/Open3.pm line 176.
As a result:
- the VM is locked
- the snapshot cannot be deleted
- the Date/Status column in the GUI marks ss10 as "delete"
- lvs still shows the LV: snap_vm-100-disk-0_ss10.qcow2 vg1 -wi-a----- <32.01g


2. When I try to remove the latest snapshot ss19, it prompts:
Code:
vm-100-disk-0.qcow2: deleting snapshot 'ss19' by rebasing 'current' on top of 'ss18'
running 'qemu-img rebase -b /dev/vg1/snap_vm-100-disk-0_ss18.qcow2 -F qcow -f qcow2 /dev/vg1/vm-100-disk-0.qcow2'
TASK ERROR: error rebase current from ss18; command '/usr/bin/qemu-img rebase -b /dev/vg1/snap_vm-100-disk-0_ss18.qcow2 -F qcow2 -f qcow2 /dev/vg1/vm-100-disk-0.qcow2' failed: Insecure dependency in exec while running with -T switch at /usr/lib/x86_64-linux-gnu/perl-base/IPC/Open3.pm line 176.
As a result:
- the VM is locked
- the snapshot cannot be deleted
- the Date/Status column in the GUI marks ss19 as "delete"
- lvs still shows the LV: snap_vm-100-disk-0_ss19.qcow2 vg1 -wi-a----- <32.01g


Both attempts produce the same result, and it seems none of the snapshots can be removed anymore (I tried several others as well).
Is there any limitation for thick-LVM snapshots?
The Proxmox version is 9.0.0~8, on a brand-new install.

 
Did you upgrade to the latest pve-test repo state? What's your pveversion -v output? There were quite a few fixes for these things included already.
 
Did you upgrade to the latest pve-test repo state? What's your pveversion -v output? There were quite a few fixes for these things included already.
Yes, I retried with version 9.0.0~14


pveversion -v
Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.8-1-pve)
pve-manager: 9.0.0~14 (running version: 9.0.0~14/193b0f8ec8447b1d)
proxmox-kernel-helper: 9.0.2
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
proxmox-kernel-6.14.8-1-pve-signed: 6.14.8-1
ceph-fuse: 19.2.2-pve5
corosync: 3.1.9-pve2
criu: 4.1-1
frr-pythontools: 10.3.1-1+pve3
ifupdown2: 3.3.0-1+pmx8
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.2
libpve-cluster-perl: 9.0.2
libpve-common-perl: 9.0.8
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.3
libpve-network-perl: 1.1.2
libpve-rs-perl: 0.10.6
libpve-storage-perl: 9.0.9
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.7-1
proxmox-backup-file-restore: 4.0.7-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.2
proxmox-mail-forward: 1.0.1
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.2
pve-cluster: 9.0.2
pve-container: 6.0.5
pve-docs: 9.0.4
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.1
pve-i18n: 3.5.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.10
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
 
Hi, thanks for the report, I can reproduce the bug and will send a patch.
EDIT: Sent a patch here: https://lore.proxmox.com/pve-devel/20250731071306.11777-1-f.weber@proxmox.com/T/
EDIT 2: Should be fixed in libpve-storage-perl >= 9.0.11.
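
In the meantime, a VM left locked by the failed snapshot removal can be unlocked manually (VMID 100 assumed from the report above):
Code:
qm unlock 100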
 