[SOLVED] Cannot roll back VMs

pixelpoint

Member
Mar 25, 2021
Dear Proxmox Users and Maintainers / Developers,

I cannot roll back some of my VMs (it may only affect certain ones).

The full rollback log is:
Code:
Logical volume "vm-196-disk-0" successfully removed.
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "vm-196-disk-0" created.
WARNING: Sum of all thin volume sizes (<14.15 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (<10.48 TiB).
qemu: qemu_mutex_unlock_impl: Operation not permitted
TASK ERROR: start failed: QEMU exited with code 1
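(Side note: the thin-pool warnings above are about over-provisioning and are probably unrelated to the rollback failure itself. For completeness, a rough way to check the fill level of the pve/data pool and the autoextend setting the warning refers to would be something like this:)
Code:
# show how full the thin pool and its metadata are
lvs -o lv_name,data_percent,metadata_percent pve/data

# show the current autoextend threshold (100 means autoextension is disabled)
lvmconfig activation/thin_pool_autoextend_threshold

# to enable autoextension, set for example in /etc/lvm/lvm.conf:
#   activation {
#       thin_pool_autoextend_threshold = 80
#       thin_pool_autoextend_percent = 20
#   }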

Here is the config file for this VM, 196.conf (there are quite a few snapshots, as this VM is actively used for testing):
Code:
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: before_fix_deploy_goe_user_build
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
sockets: 1
vmgenid: 050f9705-dfd8-43bf-94be-9be7f17c3c30

[before_cronjob]
#Before testing cronjob to delete backups etc
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: safety_snapshot_permission_folder_delete
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-8.0+pve0
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
snaptime: 1691486322
sockets: 1
vmgenid: 9a2a2ea9-6f39-49e4-bc56-68536b7103b2
vmstate: local-lvm:vm-196-state-before_cronjob

[before_fix_deploy_goe_user_build]
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: before_cronjob
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-8.0+pve0
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
snaptime: 1691934666
sockets: 1
vmgenid: 9a2a2ea9-6f39-49e4-bc56-68536b7103b2
vmstate: local-lvm:vm-196-state-before_fix_deploy_goe_user_build

[before_git_ci_tests]
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-7.2+pve0
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
snaptime: 1684139757
sockets: 1
vmgenid: 643e5e6e-ef55-4cec-a1cb-bbbe9bd06798
vmstate: local-lvm:vm-196-state-before_git_ci_tests

[before_goe_deploy_change]
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: before_fix_deploy_goe_user_build
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-8.0+pve0
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
snaptime: 1692025993
sockets: 1
vmgenid: 9a2a2ea9-6f39-49e4-bc56-68536b7103b2
vmstate: local-lvm:vm-196-state-before_goe_deploy_change

[safety_snapshot_deploy_change]
#change go-e deploy to match gstaad / bow deploy changes
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: before_goe_deploy_change
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-8.0+pve0
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
snaptime: 1692693925
sockets: 1
vmgenid: 9a2a2ea9-6f39-49e4-bc56-68536b7103b2
vmstate: local-lvm:vm-196-state-safety_snapshot_deploy_change

[safety_snapshot_permiss]
#Permissions in ${DEPLOY_DATA} bind mount tests
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: before_git_ci_tests
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-7.2+pve0
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
snaptime: 1690463240
sockets: 1
vmgenid: 643e5e6e-ef55-4cec-a1cb-bbbe9bd06798
vmstate: local-lvm:vm-196-state-safety_snapshot_permiss

[safety_snapshot_permission_folder_delete]
#redis cache folder will be deleted and docker-compose.yml redis user will be removed (test)
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894549
name: deploytest
net0: virtio=1A:B0:B6:84:AD:11,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: safety_snapshot_permiss
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-8.0+pve0
scsi0: local-lvm:vm-196-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=0e97ca1c-2d26-456c-8f3e-904cd615431f
snaptime: 1691058391
sockets: 1
vmgenid: 9a2a2ea9-6f39-49e4-bc56-68536b7103b2
vmstate: local-lvm:vm-196-state-safety_snapshot_permission_folder_delete

I also cannot roll back another VM (VM 197; I don't remember exactly, but it might have been cloned from 196). It fails with the same error:
Code:
Logical volume "vm-197-disk-0" successfully removed.
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "vm-197-disk-0" created.
WARNING: Sum of all thin volume sizes (<14.15 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (<10.48 TiB).
qemu: qemu_mutex_unlock_impl: Operation not permitted
TASK ERROR: start failed: QEMU exited with code 1

Config file 197.conf:
Code:
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894722
name: deploytest2
net0: virtio=B6:42:CA:6B:7F:18,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: added_goe_deploykey
scsi0: local-lvm:vm-197-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=47323edb-31f2-4b93-a437-efae22db653b
sockets: 1
vmgenid: 94619df8-a66b-498b-b8d8-f7e87d2b5c9b

[added_goe_deploykey]
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894722
name: deploytest2
net0: virtio=B6:42:CA:6B:7F:18,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: initdb_deployed
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-7.2+pve0
scsi0: local-lvm:vm-197-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=47323edb-31f2-4b93-a437-efae22db653b
snaptime: 1690201330
sockets: 1
vmgenid: b5085799-39f6-4293-a50f-85382f68c7f7
vmstate: local-lvm:vm-197-state-added_goe_deploykey

[before_deploy_go-e]
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894722
name: deploytest2
net0: virtio=B6:42:CA:6B:7F:18,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-7.2+pve0
scsi0: local-lvm:vm-197-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=47323edb-31f2-4b93-a437-efae22db653b
snaptime: 1690195007
sockets: 1
vmgenid: 93970061-7d3a-463e-aec5-82c897e5d448
vmstate: local-lvm:vm-197-state-before_deploy_go-e

[initdb_deployed]
agent: 1
boot: order=scsi1;scsi0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1681894722
name: deploytest2
net0: virtio=B6:42:CA:6B:7F:18,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: before_deploy_go-e
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-7.2+pve0
scsi0: local-lvm:vm-197-disk-0,discard=on,iothread=1,size=40G,ssd=1
scsi1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=47323edb-31f2-4b93-a437-efae22db653b
snaptime: 1690196241
sockets: 1
vmgenid: 93970061-7d3a-463e-aec5-82c897e5d448
vmstate: local-lvm:vm-197-state-initdb_deployed

These two VMs are relatively new. VM 199 existed before both 196 and 197, and I can roll back VM 199 as often as I like:
Code:
agent: 1
bios: ovmf
boot: order=sata0;scsi0
cores: 4
efidisk0: local-lvm:vm-199-disk-1,size=4M
memory: 4096
name: testbuntu
net0: virtio=3E:79:BF:B2:B7:A8,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: after_iothread_change
sata0: none,media=cdrom
scsi0: local-lvm:vm-199-disk-0,discard=on,iothread=1,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=0bb7d658-bd5d-4662-b1cf-e101ab3ef537
sockets: 1
startup: order=199,up=10
vmgenid: 08e26999-eedd-44c4-8ac9-1003b498c59a

[after_iothread_change]
agent: 1
bios: ovmf
boot: order=sata0;scsi0
cores: 4
efidisk0: local-lvm:vm-199-disk-1,size=4M
memory: 4096
name: testbuntu
net0: virtio=3E:79:BF:B2:B7:A8,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: before_iothread_change
sata0: none,media=cdrom
scsi0: local-lvm:vm-199-disk-0,discard=on,iothread=1,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=0bb7d658-bd5d-4662-b1cf-e101ab3ef537
snaptime: 1692702039
sockets: 1
startup: order=199,up=10
vmgenid: 7aefbbec-5f50-42a9-9598-49c5a5d247d2

[ansible_init_done]
agent: 1
bios: ovmf
boot: order=sata0;scsi0
cores: 4
efidisk0: local-lvm:vm-199-disk-1,size=4M
memory: 4096
name: testbuntu
net0: virtio=3E:79:BF:B2:B7:A8,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: installed
sata0: none,media=cdrom
scsi0: local-lvm:vm-199-disk-0,discard=on,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=0bb7d658-bd5d-4662-b1cf-e101ab3ef537
snaptime: 1689691877
sockets: 1
startup: order=199,up=10
vmgenid: 26a5df21-9526-4f47-b4e6-8c01bef79905

[before_iothread_change]
agent: 1
bios: ovmf
boot: order=sata0;scsi0
cores: 4
efidisk0: local-lvm:vm-199-disk-1,size=4M
memory: 4096
name: testbuntu
net0: virtio=3E:79:BF:B2:B7:A8,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: ansible_init_done
sata0: none,media=cdrom
scsi0: local-lvm:vm-199-disk-0,discard=on,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=0bb7d658-bd5d-4662-b1cf-e101ab3ef537
snaptime: 1692702003
sockets: 1
startup: order=199,up=10
vmgenid: 275c05e0-91ed-4240-89c9-32ce2ed56147

[created]
#SATA DVD + Boot Order
agent: 1
bios: ovmf
boot: order=sata0;scsi0
cores: 4
efidisk0: local-lvm:vm-199-disk-1,size=4M
memory: 4096
name: testbuntu
net0: virtio=3E:79:BF:B2:B7:A8,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: none,media=cdrom
scsi0: local-lvm:vm-199-disk-0,discard=on,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=0bb7d658-bd5d-4662-b1cf-e101ab3ef537
snaptime: 1658148110
sockets: 1
startup: order=199,up=10
vmgenid: d20b33e6-8dbf-43d5-9a9f-ffa734de6fc7

[installed]
agent: 1
bios: ovmf
boot: order=sata0;scsi0
cores: 4
efidisk0: local-lvm:vm-199-disk-1,size=4M
memory: 4096
name: testbuntu
net0: virtio=3E:79:BF:B2:B7:A8,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: created
sata0: none,media=cdrom
scsi0: local-lvm:vm-199-disk-0,discard=on,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=0bb7d658-bd5d-4662-b1cf-e101ab3ef537
snaptime: 1689691333
sockets: 1
startup: order=199,up=10
vmgenid: 26a5df21-9526-4f47-b4e6-8c01bef79905

Here is my PVE version:
Code:
pveversion --verbose
proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
pve-kernel-5.4: 6.4-4
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
proxmox-kernel-6.2: 6.2.16-7
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: 0.8.41
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.6
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.4
libpve-storage-perl: 8.0.2
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.2
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

I found another thread on here with the same error message, though it seems to describe a different problem (a VM failing to resume after hibernation):
https://forum.proxmox.com/threads/vm-cannot-resume-after-upgrade-to-8-0.129899/

In the last comment, @fiona mentions that having iothread enabled and having made a PBS backup are the prerequisites for this problem. That may explain why 196/197 (daily backup schedule) cannot roll back, while 199 (no backup schedule at all) rolls back just fine, even after testing with iothread.
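(For anyone wanting to compare their setup: a quick way to check whether iothread is enabled on a VM's disks and which pve-qemu-kvm build the host is running would be roughly:)
Code:
# show the disk lines of the current VM config (iothread=1 marks affected disks)
qm config 196 | grep scsi

# show the installed QEMU package version
pveversion -v | grep pve-qemu-kvm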

Does anybody else have this kind of problem or error message?
Is the linked thread's problem related to mine, or are these two different things?

Thank you for reading and best regards,
pixelpoint
 
Hi,
it looks very much like the other issue. It is fixed in pve-qemu-kvm >= 8.0.2-4, which is currently available in the no-subscription repository.
 
Thank you for clearing things up. I will wait until pve-qemu-kvm 8.0.2-4 reaches the enterprise repository and report back after that :)
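(For anyone else following along: once the fixed package is available in the configured repository, checking and applying it should be roughly as follows. Note that running VMs keep using the old QEMU binary until they are fully stopped and started again.)
Code:
# check the currently installed version
pveversion -v | grep pve-qemu-kvm

# pull in the updated package once the repository provides >= 8.0.2-4
apt update && apt full-upgrade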

Best regards,
pixelpoint
 
