pct rollback fails with error "lvremove 'xyz-vg/vm-123-disk-0' error: Logical volume xyz-vg/vm-123-disk-0 contains a filesystem in use."

h0a

New Member
Sep 28, 2021
I am trying to rollback a snapshot on a container and get the error:
"lvremove 'xyz-vg/vm-123-disk-0' error: Logical volume xyz-vg/vm-123-disk-0 contains a filesystem in use."
After that, the container is left locked (lock: rollback).
The lock can be removed with pct unlock 123.
lsof does not show anything related to the container or vm-123-disk-0.
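A possible explanation for lsof coming up empty: lsof only lists open file descriptors, while a filesystem mounted inside another container's mount namespace keeps the LV busy without holding any fd. A sketch of how the holder could be tracked down instead (the device-mapper name is an assumption derived from the VG/LV names in the error; dashes in LVM names are doubled in dm names):

```shell
# Sketch, assuming the LV maps to device-mapper name xyz--vg-vm--123--disk--0.
# A mount in another container's namespace shows up in that container's
# per-process mount tables, not in lsof:
grep -l 'vm--123--disk--0' /proc/[0-9]*/mountinfo 2>/dev/null
# An open count above 0 from device-mapper also confirms the LV is busy:
dmsetup info -c --noheadings -o open xyz--vg-vm--123--disk--0
```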

There is another container, with a rootfs and a second logical disk attached, where I have the same issue.
Snapshots and rollbacks work just fine with all the other containers. Strange!

I have a feeling it might be related to not having enough space, but I don't get any error about space issues. Could that be the reason?

Here are some technical details on the system and the container:

container config:
(the container used to have a mount point to an external USB drive mounted on the host; that mount point was removed from the container config before the snapshots were taken)
Code:
root@system:~# cat /etc/pve/lxc/123.conf
arch: amd64
cores: 8
hostname: hostname
lock: rollback
memory: 24576
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.0.0.1,hwaddr=46:84:FA:24:66:D8,ip=10.0.0.123/24,type=veth
onboot: 0
ostype: ubuntu
parent: afterbugfix
rootfs: data-thin:vm-123-disk-0,size=777G
swap: 24576
unprivileged: 1

[afterbugfix]
arch: amd64
cores: 8
hostname: hostname
memory: 24576
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.0.0.1,hwaddr=46:84:FA:24:66:D8,ip=10.0.0.123/24,type=veth
onboot: 1
ostype: ubuntu
parent: beforebugfix
rootfs: data-thin:vm-123-disk-0,size=777G
snaptime: 1643558392
swap: 24576
unprivileged: 1

[beforebugfix]
#BEFORE "bugfix"
arch: amd64
cores: 8
hostname: hostname
memory: 24576
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.0.0.1,hwaddr=46:84:FA:24:66:D8,ip=10.0.0.123/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: data-thin:vm-123-disk-0,size=777G
snaptime: 1643520670
swap: 24576
unprivileged: 1

pveversion:
pve-manager/7.1-10/6ddebafe (running kernel: 5.13.19-3-pve)

vgs and lvs output:
Code:
root@system:~# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  xyz-vg   1  19   0 wz--n- 905,42g 5,69g
root@system:~# lvs
  LV                                            VG     Attr       LSize   Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                                          xyz-vg twi-aotz-- 840,00g                    81,65  42,85                      
  home                                          xyz-vg -wi-ao----   6,00g                                                      
  root                                          xyz-vg -wi-ao----  14,27g                                                      
  snap_vm-123-disk-0_afterbugfix                xyz-vg Vri---tz-k 777,00g data vm-123-disk-0
  snap_vm-123-disk-0_beforebugfix               xyz-vg Vri---tz-k 777,00g data vm-123-disk-0
  swap_1                                        xyz-vg -wi-ao----  31,82g                                                      
  tmp                                           xyz-vg -wi-ao---- 948,00m                                                      
  var                                           xyz-vg -wi-ao----  <5,07g                                                      
(…)            
  vm-123-disk-0                                 xyz-vg Vwi-aotz-- 777,00g data               67,72                            
(…)
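A quick back-of-the-envelope check on the lvs output above suggests space is not the blocker: the thin pool holds 840 GiB with Data% at 81.65 and Meta% at 42.85, neither near 100. The remaining pool capacity can be computed directly (the numbers are copied from the listing above):

```shell
# Sketch: estimate free space in the thin pool from its size and Data%.
pool_gib=840
data_pct=81.65
free_gib=$(awk -v s="$pool_gib" -v p="$data_pct" 'BEGIN { printf "%.1f", s * (100 - p) / 100 }')
echo "thin pool data free: ${free_gib} GiB"
# → thin pool data free: 154.1 GiB
```

With ~154 GiB of data space and more than half of the metadata space free, a full thin pool would in any case have produced a different error than "contains a filesystem in use".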

Any suggestions on how to proceed?

I currently don't have enough space on the (remote) system to clone the container from a backup and re-apply my software-related bugfixes inside the container and/or snapshots.

Currently, the only way I see out of this is to remove the container and restore it from a backup (remote connection, very slow).
 

h0a

OK, solved it myself:
The filesystem of container 123 was also mounted in another container.
Simple solution: remove the mount point from the other container.
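The offending mount point could have been found systematically by grepping every container config for the volume ID. A runnable sketch, using a throwaway directory with made-up config contents in place of the real /etc/pve/lxc (the 124.conf mp0 line is hypothetical, for illustration only):

```shell
# Sketch: on a real node the configs live in /etc/pve/lxc/; a temp directory
# with invented contents stands in here so the example is self-contained.
confdir=$(mktemp -d)
printf 'rootfs: data-thin:vm-123-disk-0,size=777G\n'  > "$confdir/123.conf"
printf 'mp0: data-thin:vm-123-disk-0,mp=/mnt/shared\n' > "$confdir/124.conf"
printf 'rootfs: data-thin:vm-200-disk-0,size=8G\n'     > "$confdir/200.conf"
# Every config referencing the volume is listed; any match besides 123.conf
# is a container that also mounts the disk and would block the rollback.
grep -l 'vm-123-disk-0' "$confdir"/*.conf
rm -rf "$confdir"
```

On the real system the equivalent would be `grep -l 'vm-123-disk-0' /etc/pve/lxc/*.conf`.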
 
