Resize Proxmox LVM to allow snapshot backups to remote storage

seb2010

New Member
May 8, 2025
Hello everyone,
I'm currently struggling with a situation on my Proxmox 8.3.1 server, which hosts two Debian LXC containers running ioBroker.
The problem I'm trying to solve is that, in this configuration, it has not been possible to create vzdump backups in snapshot mode for the containers for the past two weeks.
The backup fails with the following error:
Code:
ERROR: Backup of VM 100 failed - no lock found trying to remove 'backup' lock
I've tried a lot with and around the locks, and it seems the issue is actually related to vzdump and its storage calculation; the misleading error messages only make it look like a lock problem. Even with no lock files, no entries in the config, and no orphaned dumps, a backup still can't be created. (What I already checked is listed right below.)
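For reference, this is roughly what I already checked; the lock directory is just my understanding of where PVE keeps its container locks:
Code:
# clear any guest lock (reports that none is set)
pct unlock 100
# the container config has no 'lock:' line
grep -i lock /etc/pve/lxc/100.conf
# look for stale flock files
ls -l /run/lock/lxc/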
My suspicion is that there is not enough free storage space to create the dump; the commands below should be able to confirm or rule that out.
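If it helps, I could post the output of these to verify the space situation:
Code:
# fill level of the thin pool and its metadata
lvs -a -o +data_percent,metadata_percent pve
# free space on the root filesystem (vzdump's temp dir is under /var/tmp)
df -h / /var/tmp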

Here is the setup:
  • Server with a 500GB SSD
  • One LXC container with 128GB and one with 64GB of storage
    I can't say exactly how the thin pools were configured; I set this up as a complete newbie. Maybe you can see it in the outputs below, and I can post more (see the commands right after this list) if needed.
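For example, I could also provide the output of these:
Code:
# storage definitions known to Proxmox
cat /etc/pve/storage.cfg
# installed package versions
pveversion -v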
I believe that I need to increase pve-root from 96GB to at least 128GB so that it can create the snapshot of VM 100. To do that, I would somehow need to resize the LVM, the thin pool, or whatever is involved. But I have absolutely no idea how to do that.
I might also be completely wrong, or maybe this needs to be solved through creating a new thin pool. I don’t really understand the underlying concept well, and I feel like I’m hitting a mental block when trying to read up on it.
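From what I've read so far, growing pve-root might look roughly like the sketch below, but this is only my guess. I haven't run any of it, and I don't even know whether the volume group still has enough free extents:
Code:
# check free space (VFree) in the volume group first
vgs pve
# grow the root LV by 32 GiB (only if VFree is large enough)
lvextend -L +32G /dev/pve/root
# grow the ext4 filesystem online to match the new LV size
resize2fs /dev/mapper/pve-root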

Here are some outputs I can already provide:
lsblk:
Code:
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1                      259:0    0 476.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 475.9G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.6G  0 lvm
  │ └─pve-data-tpool         252:4    0 348.8G  0 lvm
  │   ├─pve-data             252:5    0 348.8G  1 lvm
  │   ├─pve-vm--100--disk--0 252:6    0   128G  0 lvm
  │   └─pve-vm--101--disk--0 252:7    0    64G  0 lvm
  └─pve-data_tdata           252:3    0 348.8G  0 lvm
    └─pve-data-tpool         252:4    0 348.8G  0 lvm
      ├─pve-data             252:5    0 348.8G  1 lvm
      ├─pve-vm--100--disk--0 252:6    0   128G  0 lvm
      └─pve-vm--101--disk--0 252:7    0    64G  0 lvm

lvdisplay:
Code:
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                X9mSND-ir1z-rEg8-nYc5-W0rR-jzt5-CDkXku
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2023-11-30 18:48:03 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <348.82 GiB
  Allocated pool data    54.37%
  Allocated metadata     2.22%
  Current LE             89297
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:5
 
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                wqzvbV-TwHh-MNT8-7ood-BPwz-Td2d-53r6dF
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-11-30 18:48:02 +0100
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
 
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                FZlOpt-BCsO-x10L-hnDt-eLK6-P6yY-buYtQt
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-11-30 18:48:02 +0100
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
 
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                R7iCI2-zrIb-lWi9-eoUw-Mii3-wJie-Dfy7xZ
  LV Write Access        read/write
  LV Creation host, time SJSERVER, 2023-11-30 19:19:09 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                128.00 GiB
  Mapped size            99.95%
  Current LE             32768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6
 
  --- Logical volume ---
  LV Path                /dev/pve/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                pve
  LV UUID                6hxVC1-o1ri-DuIz-Ke7P-e3Ni-fdMe-zN35JA
  LV Write Access        read/write
  LV Creation host, time SJSERVER, 2025-03-18 20:40:00 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                64.00 GiB
  Mapped size            96.44%
  Current LE             16384
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:7

vzdump error text (the GDrive storage is an rclone Google Drive target, but the same error appears when backing up to local storage too):
Code:
INFO: starting new backup job: vzdump 101 100 --mode snapshot --fleecing 0 --node SJSERVER --prune-backups 'keep-last=3' --notes-template '{{guestname}}' --mailnotification always --quiet 1 --compress zstd --storage GDrive
INFO: filesystem type on dumpdir is 'fuse.rclone' -using /var/tmp/vzdumptmp73646_100 for temporary files
INFO: Starting Backup of VM 100 (lxc)
INFO: Backup started at 2025-05-06 00:15:01
INFO: status = running
INFO: CT Name: sjserver01
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/media/NAS') from backup (not a volume)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: no lock found trying to remove any lock
INFO: create storage snapshot 'vzdump'
no lock found trying to remove 'backup'  lock
ERROR: Backup of VM 100 failed - no lock found trying to remove 'backup'  lock
INFO: Failed at 2025-05-06 00:15:02
INFO: filesystem type on dumpdir is 'fuse.rclone' -using /var/tmp/vzdumptmp73646_101 for temporary files
INFO: Starting Backup of VM 101 (lxc)
INFO: Backup started at 2025-05-06 00:15:03
INFO: status = running
INFO: CT Name: wschi
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: no lock found trying to remove any lock
INFO: create storage snapshot 'vzdump'
no lock found trying to remove 'backup'  lock
ERROR: Backup of VM 101 failed - no lock found trying to remove 'backup'  lock
INFO: Failed at 2025-05-06 00:15:05
INFO: Backup job finished with errors
TASK ERROR: job errors
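For completeness, the local-storage test mentioned above was simply the same job pointed at local storage; from memory, roughly:
Code:
vzdump 100 --mode snapshot --storage local --compress zstd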

Can anyone here help me
a) figure out what exactly is causing the issue, and
b) solve it step by step?

A backup in stop mode does work, but the containers need to run around the clock. I can't afford to pause them for hours...
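In case it matters: I have not yet tried suspend mode, which as I understand it only pauses the container briefly during the final sync, e.g.:
Code:
vzdump 100 --mode suspend --storage GDrive --compress zstd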

Best regards and thanks in advance,
SEB