Backup restore failure and subsequent LVM-thin metadata issue

martind

Member
Woke up to a dead VM this morning. Its XFS filesystem had gone bad and no matter what I did to repair it, it wasn't happy, so I decided to restore from backup.

The restore was going fine until it reached 100%, at which point it failed:

Code:
vma: restore failed - vma blk_flush drive-virtio0 failed
/bin/bash: line 1: 123310 Done                    zstd -q -d -c /mnt/pve/VM_Backup/dump/vzdump-qemu-104-2021_04_09-01_02_40.vma.zst
     123311 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp123308.fifo - /var/tmp/vzdumptmp123308
  device-mapper: message ioctl on  (253:7) failed: Operation not supported
  Failed to process message "delete 26".
  Failed to suspend cm_vg0/data with queued messages.
unable to cleanup 'cm_data:vm-106-disk-0' - lvremove 'cm_vg0/vm-106-disk-0' error:   Failed to update pool cm_vg0/data.
  device-mapper: message ioctl on  (253:7) failed: Operation not supported
  Failed to process message "delete 26".
  Failed to suspend cm_vg0/data with queued messages.
unable to cleanup 'cm_data:vm-106-disk-1' - lvremove 'cm_vg0/vm-106-disk-1' error:   Failed to update pool cm_vg0/data.
no lock found trying to remove 'create'  lock
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/VM_Backup/dump/vzdump-qemu-104-2021_04_09-01_02_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp123308.fifo - /var/tmp/vzdumptmp123308' failed: exit code 133
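
Exit code 133 is 128 + 5, i.e. the vma extract process died on SIGTRAP, which lines up with the "Trace/breakpoint trap" line above. To rule out the archive itself, I assume the same pipeline from the task log can be re-run by hand and pointed at plain files instead of the thin pool (this is just my reading of the failed command, not something from the docs, and the target directory would need enough free space to hold the raw disk images):

Code:
# Same archive as in the failed task, but extracted to flat files rather than
# restored onto the LVM-thin storage:
zstd -q -d -c /mnt/pve/VM_Backup/dump/vzdump-qemu-104-2021_04_09-01_02_40.vma.zst \
  | vma extract -v - /var/tmp/vma-test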

Looking at the storage for this node, the LVM-Thin pool shows Usage at 87% but Metadata Usage at 100%.
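
For what it's worth, those figures are from the GUI storage view; I assume the same numbers can be pulled on the CLI with something like the below (the pool name cm_vg0/data is taken from the error output above):

Code:
# Show Data% and Meta% for the thin pool and its volumes
lvs -a -o lv_name,lv_size,data_percent,metadata_percent cm_vg0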

Can anyone advise how I get out of this pickle? The hypervisor has other VMs running at the moment that are live and mission critical, though I do, of course, have backups.
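
From what I've read so far, the usual way out is to grow the thin pool's metadata LV with something along these lines, but I'd like a sanity check before touching a pool that has live guests on it (the +1G is only an example figure, and it assumes the VG still has free extents):

Code:
# Grow the metadata LV of the cm_vg0/data thin pool (example size, needs free space in the VG)
lvextend --poolmetadatasize +1G cm_vg0/data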