Error when rolling back ct container snapshot

liuhonglu

New Member
Aug 5, 2023
When I tried to roll back a CT container snapshot, this error occurred:
Code:
lvremove 'pve/vm-101-disk-0' error: Logical volume pve/vm-101-disk-0 contains a filesystem in use.
I searched Google, but there is very little relevant content, and what exists is mostly on the Proxmox forum. Since my native language is Chinese, I could not find a solution even with browser translation.
 
Do you have any nested mounts in your CT configuration?

Please post the content of /etc/pve/lxc/101.conf and the output of mount inside the CT.
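For reference, both pieces of information can be gathered from the PVE host in one go. This is just a sketch: pct exec assumes CT 101 is running, and the guard only keeps the snippet from failing on a machine that is not a PVE host.

```shell
# On the PVE host: show the CT config and the mounts inside the CT
if command -v pct >/dev/null 2>&1; then
    cat /etc/pve/lxc/101.conf
    pct exec 101 -- mount      # runs 'mount' inside the container
else
    echo "pct not found: run this on the Proxmox VE host"
fi
```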
 
arch: amd64
cores: 2
features: nesting=1
hostname: ubuntu-CT
lock: rollback
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.10.1,hwaddr=66:56:72:79:B9:7D,ip=10.10.10.11/32,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
parent: Portainer
rootfs: local-lvm:vm-101-disk-0,size=15G
swap: 0
unprivileged: 1

[Portainer]
arch: amd64
cores: 2
features: nesting=1
hostname: ubuntu-CT
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.10.1,hwaddr=66:56:72:79:B9:7D,ip=10.10.10.10/32,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
parent: docker
rootfs: local-lvm:vm-101-disk-0,size=15G
snaptime: 1697365991
swap: 0
unprivileged: 1

[docker]
arch: amd64
cores: 2
features: nesting=1
hostname: ubuntu-CT
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.10.1,hwaddr=66:56:72:79:B9:7D,ip=10.10.10.10/32,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
parent: wordpress
rootfs: local-lvm:vm-101-disk-0,size=15G
snaptime: 1697365911
swap: 0
unprivileged: 1

[lnmp]
arch: amd64
cores: 4
features: nesting=1
hostname: ubuntu-CT
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.10.1,hwaddr=66:56:72:79:B9:7D,ip=10.10.10.10/32,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-101-disk-0,size=15G
snaptime: 1697292911
swap: 0
unprivileged: 1

[wordpress]
arch: amd64
cores: 2
features: nesting=1
hostname: ubuntu-CT
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.10.1,hwaddr=66:56:72:79:B9:7D,ip=10.10.10.10/32,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
parent: lnmp
rootfs: local-lvm:vm-101-disk-0,size=15G
snaptime: 1697332099
swap: 0
unprivileged: 1
 
Are you running Docker inside the unprivileged container, and are there any Docker containers running?

Please also post the output of the failing task log. You can click on the task log in the GUI and download the log.
Also, your container is still in the locked state (lock: rollback in the config). That by itself produces a different error, but if you need to manually unlock the container, run pct unlock 101
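As a sketch, clearing the stale lock and checking the container state before retrying could look like this (101 is the CT ID from this thread; the guard only keeps the snippet from failing on a non-PVE machine):

```shell
# Clear the stale 'lock: rollback' left by the failed task, then check state
if command -v pct >/dev/null 2>&1; then
    pct unlock 101
    pct status 101        # confirm whether the CT is running or stopped
else
    echo "pct not found: run this on the Proxmox VE host"
fi
```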
 
Yes, when I created the container, the unprivileged-container option was checked by default. I installed Docker inside it and am running one Docker container, Portainer.
Code:
TASK ERROR: lvremove 'pve/vm-101-disk-0' error: Logical volume pve/vm-101-disk-0 contains a filesystem in use.
 
Hi,
might not be related to the issue at hand, but just mentioning that it's better to run Docker in a VM. From the documentation:
If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers.
In the past, users also reported issues with incompatibilities after certain upgrades when running Docker in a container.
 
However, I also hit this error, where the snapshot could not be rolled back, when using rclone to mount a Google Drive remote. Apparently it's not just Docker that can cause it. Finally, I want to ask: can this problem be solved?
 
Now I cannot destroy the LXC container either; it also shows:
Code:
lvremove 'pve/vm-101-disk-0' error: Logical volume pve/vm-101-disk-0 contains a filesystem in use.
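Something that may help narrow this down: an lvremove "contains a filesystem in use" error usually means some process still has the filesystem mounted, possibly in a private mount namespace. This generic Linux sketch (not Proxmox-specific; the device name is the mapper name LVM uses for this LV) lists candidate holders:

```shell
# Find processes whose mount namespace still references the volume
grep -l 'vm--101--disk--0' /proc/[0-9]*/mountinfo 2>/dev/null |
while IFS= read -r f; do
    pid=${f#/proc/}; pid=${pid%/mountinfo}
    # print the PID and command name of each holder
    printf '%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
done
```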
 
Can you rollback the container when the container is stopped?

Please post the output of the following from within the container to see what mounts you have enabled

Code:
mount
 
I can't roll back the snapshot even when the container is stopped, and now destroying the container doesn't work either!
Code:
root@ubuntu-CT:~# mount
/dev/mapper/pve-vm--101--disk--0 on / type ext4 (rw,relatime,stripe=16)
none on /dev type tmpfs (rw,relatime,size=492k,mode=755,uid=100000,gid=100000,inode64)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
lxcfs on /proc/cpuinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/diskstats type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/loadavg type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/meminfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/slabinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/stat type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/swaps type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/uptime type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/devices/system/cpu type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
udev on /dev/full type devtmpfs (rw,nosuid,relatime,size=8009624k,nr_inodes=2002406,mode=755,inode64)
udev on /dev/null type devtmpfs (rw,nosuid,relatime,size=8009624k,nr_inodes=2002406,mode=755,inode64)
udev on /dev/random type devtmpfs (rw,nosuid,relatime,size=8009624k,nr_inodes=2002406,mode=755,inode64)
udev on /dev/tty type devtmpfs (rw,nosuid,relatime,size=8009624k,nr_inodes=2002406,mode=755,inode64)
udev on /dev/urandom type devtmpfs (rw,nosuid,relatime,size=8009624k,nr_inodes=2002406,mode=755,inode64)
udev on /dev/zero type devtmpfs (rw,nosuid,relatime,size=8009624k,nr_inodes=2002406,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=100005,mode=620,ptmxmode=666,max=1026)
devpts on /dev/ptmx type devpts (rw,nosuid,noexec,relatime,gid=100005,mode=620,ptmxmode=666,max=1026)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,gid=100005,mode=620,ptmxmode=666,max=1026)
devpts on /dev/tty1 type devpts (rw,nosuid,noexec,relatime,gid=100005,mode=620,ptmxmode=666,max=1026)
devpts on /dev/tty2 type devpts (rw,nosuid,noexec,relatime,gid=100005,mode=620,ptmxmode=666,max=1026)
none on /proc/sys/kernel/random/boot_id type tmpfs (ro,nosuid,nodev,noexec,relatime,size=492k,mode=755,uid=100000,gid=100000,inode64)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,uid=100000,gid=100000,inode64)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=1608828k,mode=755,uid=100000,gid=100000,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,uid=100000,gid=100000,inode64)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=1608828k,mode=700,uid=100000,gid=100000,inode64)
 
You mentioned that you're mounting Google Drive from within the container. I don't see this here in your mountpoints.

Using a FUSE mount inside a container is not recommended [1].

- Disable the onboot option on the CT.
- Try to roll back / remove the CT after a reboot, so that the mountpoints are not active.

- Consider using a VM for Docker and FUSE mounts instead of a CT.

[1]: https://pve.proxmox.com/wiki/Linux_Container#pct_container_storage
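The steps above could look roughly like this (a sketch using CT ID 101 and the snapshot name Portainer from the config in this thread; the guard only keeps the snippet from failing on a non-PVE machine):

```shell
if command -v pct >/dev/null 2>&1; then
    pct set 101 --onboot 0        # 1. disable start-on-boot
    # 2. after rebooting the host, retry before anything remounts the LV:
    pct unlock 101
    pct rollback 101 Portainer    # or, to remove the CT: pct destroy 101
else
    echo "pct not found: run this on the Proxmox VE host"
fi
```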
 
That was a long time ago: back then I used rclone to mount Google Drive and the snapshot could not be rolled back. Because I knew it would fail, I did not use rclone this time. Auto-start has already been disabled. The current situation is: even after restarting PVE, I still cannot destroy the LXC container, delete the snapshot, or roll back the snapshot. The only option left is to reinstall PVE. In other words, this part of PVE and LXC is not mature.
 
