Snapshots in 'delete' status

micush

Renowned Member
Jul 18, 2015
I have a script that automatically creates and deletes snapshots for all VMs in a FIFO fashion. For almost all of my VMs this works as expected. For some of my larger VMs I end up with something like this:

[Screenshot: snapshot list showing several snapshots stuck in 'delete' status]


I can then go in and remove the snapshots stuck in 'delete' status manually by clicking the 'Remove' button, but this is obviously not the desired behavior.

Any thoughts or insight into this?

Thanks much.
 
  • How does your script create/delete snapshots? Via qm/pct, pvesh, or the API?
  • The "delete" status indicates that those snapshots could not be completely deleted. Can you post the complete configuration file of such a VM? (Not the output of "pct/qm config ID", but the actual content of the config file in /etc/pve.)
  • Are you running an up-to-date installation? Please post the output of "pveversion -v".
 
Hi,

The answers to your inquiries are as follows.

- The script executes '/usr/sbin/qm delsnapshot $vm $snap'
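
In context, the rotation logic looks roughly like this (a simplified sketch; the "daily" prefix and the retention count are illustrative assumptions, not the actual script):

Code:
```shell
#!/bin/sh
# Simplified FIFO rotation sketch; the "daily" prefix and KEEP count
# are illustrative assumptions, not the actual script.
vm=101
KEEP=7

qm snapshot "$vm" "daily$(date +%Y%m%d%H%M)"

# List existing daily snapshots oldest-first and delete everything
# beyond the retention window (FIFO).
qm listsnapshot "$vm" \
  | grep -o 'daily[0-9]\{12\}' \
  | sort \
  | head -n -"$KEEP" \
  | while read -r snap; do
      /usr/sbin/qm delsnapshot "$vm" "$snap"
    done
```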

- A config file for one of the affected VMs is:
Code:
#VM 101
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201606031845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[PENDING]
delete: agent

[daily201605191845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snapstate: delete
snaptime: 1463708755
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201605201845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201605191845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snapstate: delete
snaptime: 1463795144
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201605241845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201605201845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snapstate: delete
snaptime: 1464140745
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201605251845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201605241845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snapstate: delete
snaptime: 1464227165
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201605261845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201605251845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snapstate: delete
snaptime: 1464313553
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201605271845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201605261845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snapstate: delete
snaptime: 1464399941
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201605311845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201605271845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snapstate: delete
snaptime: 1464745540
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201606011845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201605311845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snaptime: 1464831964
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201606021845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201606011845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snaptime: 1464918345
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G

[daily201606031845]
agent: 1
boot: c
bootdisk: virtio0
cores: 5
hotplug: disk,network,usb
localtime: 1
memory: 16384
name: vm101
net0: bridge=vmbr0,firewall=1,virtio=DE:32:50:F2:86:D8,queues=4,tag=15
numa: 0
onboot: 1
ostype: win7
parent: daily201606021845
scsi2: ds01:iso/virtio-win-0.1.117.iso,media=cdrom,size=55664K
scsihw: virtio-scsi-single
snaptime: 1465004738
sockets: 1
virtio0: ds01:108/vm-108-disk-1.qcow2,cache=writeback,size=750G
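
(Side note: the 'snaptime' values in the sections above are plain Unix timestamps, so they can be cross-checked against the snapshot names with date:)

Code:
```shell
# snaptime fields in the config are Unix epochs; GNU date converts them:
date -u -d @1463708755
```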

- Output of 'pveversion -v' is:
Code:
# pveversion -v
proxmox-ve: 4.2-49 (running kernel: 4.4.8-1-pve)
pve-manager: 4.2-4 (running version: 4.2-4/2660193c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.8-1-pve: 4.4.8-49
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-74
pve-firmware: 1.1-8
libpve-common-perl: 4.0-60
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-16
pve-container: 1.0-63
pve-firewall: 2.0-26
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
openvswitch-switch: 2.5.0-1

Thanks for the reply. Much appreciated.
 
A little more investigation reveals some sort of timeout when running the script interactively:

Code:
vm108: Removing snapshot daily201605311845
VM 108 qmp command 'delete-drive-snapshot' failed - unable to connect to VM 108 qmp socket - timeout after 5974 retries
vm108: Removing snapshot daily201605271845
VM 108 qmp command 'delete-drive-snapshot' failed - unable to connect to VM 108 qmp socket - timeout after 5976 retries
 
Could you run "netstat -x | grep 108.qmp" and post the output (if any)? If there is no output, please try to delete one of the snapshots directly afterwards with a manual "qm delsnapshot 108 snapshotname". For both commands, you can replace the "108" with a different affected VM ID.
 
There is no output on the 'netstat' command.

A manual 'qm delsnapshot' gives a timeout error:

~# qm delsnapshot 108 daily201606031845
VM 108 qmp command 'delete-drive-snapshot' failed - got timeout
 
Hi,
I noticed that this thread was never resolved, and I actually had the same issue as micush. I was wondering if anyone has given this more thought.
I would post a link to the script I wrote to automate daily snapshots, but I can't due to a spam warning; it is available on GitHub in my repository (user: LouisOuellet, title: Linux Server Automation).

The script is executed like this in my crontab
Code:
0 18 * * * /root/proxmox.sh --no-colors --no-banner --debug --create-snapshot -a

This server only runs one VM for now, but it has seven qcow2 disks attached:
1x 120 GB
6x 2000 GB, used in a software RAID inside the VM (running Windows Server 2008 R2)

Netstat results
Code:
root@VirtMan-02:~# netstat -x | grep 100.qmp
unix  3      [ ]         STREAM     CONNECTED     49055876 /var/run/qemu-server/100.qmp
unix  2      [ ]         STREAM     CONNECTING    0        /var/run/qemu-server/100.qmp
unix  2      [ ]         STREAM     CONNECTING    0        /var/run/qemu-server/100.qmp

A manual 'qm delsnapshot' gives a timeout error:
Code:
root@VirtMan-02:~# qm unlock 100
root@VirtMan-02:~# qm delsnapshot 100 Snap_2018_04_28_180003
VM 100 qmp command 'delete-drive-snapshot' failed - got timeout

pveversion -v
Code:
root@VirtMan-02:~# pveversion -v
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.13-2-pve: 4.13.13-32
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9

My config file:
Code:
root@VirtMan-02:~# cat /etc/pve/qemu-server/100.conf
bootdisk: sata0
cores: 4
lock: snapshot-delete
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_10_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
sockets: 1

[Snap_2018_04_21_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Y2018M04D14
sata0: local-lvm:vm-100-disk-1,size=80G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snapstate: delete
snaptime: 1524348067
sockets: 1
unused0: sdb1:100/vm-100-disk-1.qcow2
unused1: sdc1:100/vm-100-disk-1.qcow2

[Snap_2018_04_27_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_04_21_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snapstate: delete
snaptime: 1524866470
sockets: 1

[Snap_2018_04_28_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_04_27_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snapstate: delete
snaptime: 1524952870
sockets: 1
unused0: sdb1:100/vm-100-disk-1.qcow2
unused1: sdc1:100/vm-100-disk-1.qcow2

[Snap_2018_04_29_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_04_28_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snapstate: delete
snaptime: 1525039270
sockets: 1

[Snap_2018_05_01_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_04_29_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snapstate: delete
snaptime: 1525212070
sockets: 1

[Snap_2018_05_02_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_01_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snapstate: delete
snaptime: 1525298468
sockets: 1

[Snap_2018_05_03_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_02_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525384869
sockets: 1

[Snap_2018_05_04_180008]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_03_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525471274
sockets: 1

[Snap_2018_05_05_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_04_180008
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525557670
sockets: 1

[Snap_2018_05_06_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_05_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525644069
sockets: 1

[Snap_2018_05_07_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_06_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525730470
sockets: 1

[Snap_2018_05_08_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_07_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525816869
sockets: 1

[Snap_2018_05_09_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_08_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525903269
sockets: 1

[Snap_2018_05_10_180003]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: Snap_2018_05_09_180003
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525989670
sockets: 1

[Y2018M04D14]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1523768704
sockets: 1


Is there a way to extend the timeout of 'qm delsnapshot'? Most of the snapshots have about 100 GB per disk to delete, so 15 minutes is definitely not enough.
 
Could you run "netstat -x | grep 108.qmp" and post the output (if any)? If there is no output, please try to delete one of the snapshots directly afterwards with a manual "qm delsnapshot 108 snapshotname". For both commands, you can replace the "108" with a different affected VM ID.

I know this thread is getting old now, but I can't seem to find a solution to this exact problem. I posted my own results a few days ago. Did you ever manage to find a solution?
 
I did not find a solution.

I ditched qcow2 in favor of raw because the speed penalty was too much for me.

No more snapshots for me.
 
I did not find a solution.

I ditched qcow2 in favor of raw because the speed penalty was too much for me.

No more snapshots for me.

Sadly, I am not willing to give up on snapshots, as they offer the best protection against ransomware. Our network was attacked a few months ago, and since the previous IT staff had not implemented much security on the network, the ransomware did considerable damage. I had to rethink the whole storage and authentication strategy, so I decided to virtualize all the servers to allow more control and easier remote maintenance.

I might migrate the current filesystem (ext4) to lvm-thin to do the snapshots at the filesystem level. But for now I am trying another approach, which consists of shrinking the qcow2 disks. The main downside is the long downtime I have to plan for the procedure on each virtual disk, but after shrinking I expect to free up about 70% of each disk, since the actual data is about 400 GB per disk.
 
I personally found that the more qcow2 snapshots I took and deleted, the slower the VM performance got. I have read that qcow2 can suffer from internal fragmentation; perhaps that was the case for me. Switching to raw alleviated the slowdown for me. YMMV.
 
That's an interesting note. May I ask if you were shutting down the virtual machines before taking the snapshots? I have only experienced this when the VM was not shut down before the snapshot was created, which is why I adjusted my script. So I do this:

Code:
qm unlock $VMID
qm shutdown $VMID --forceStop 1 && qm wait $VMID --timeout 300
qm snapshot $VMID $SnapName --vmstate 1
qm start $VMID

Once I adjusted it this way, I stopped seeing any issues related to data fragmentation.
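
For the '-a' (all VMs) case, the same sequence can be wrapped in a loop over 'qm list'; this is just a sketch with naive parsing of the output (the vmstate flag is left out here since the VM is shut down first):

Code:
```shell
#!/bin/sh
# Sketch only: applies the stop/snapshot/start sequence to every VM id
# reported by "qm list" (header line skipped, naive column parsing).
SnapName="Snap_$(date +%Y_%m_%d_%H%M%S)"

for VMID in $(qm list | awk 'NR > 1 { print $1 }'); do
  qm unlock "$VMID"
  qm shutdown "$VMID" --forceStop 1 && qm wait "$VMID" --timeout 300
  qm snapshot "$VMID" "$SnapName"
  qm start "$VMID"
done
```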

On another note, I was successful in shrinking the disks using this:

Code:
cd "$qcow2_dir"
# Back up the image, remove the original, then rewrite it from the
# backup; re-converting drops internal fragmentation and unused clusters.
cp -v "$original" "/mnt/backup/$original_backup"
rm -v "$original"
qemu-img convert -O qcow2 "/mnt/backup/$original_backup" "$original"

Note that I shut down the VM before proceeding. I managed to recover about 75% of the space.

Before applying this solution to all the virtual disks, I tested it with a qcow2 disk that is part of a software RAID 5 in Windows. Once completed, the drive appeared as healthy in Windows Disk Management.
 
I have noticed this problem as well.

When I clone a container that has snapshots, the references to the parent's snapshots get copied into the clone's config without the underlying lvm-thin volumes being cloned into new volumes.

So when I delete one of these snapshots, it orphans the snapshot references in the other containers and locks those containers when I try to remove them.

The solution has been to simply remove the text of the offending snapshot sections from the container's config file in /etc/pve/lxc/.
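
Roughly, that edit can be scripted like this (example snapshot name and path; keep a backup, and note that 'parent:' lines in neighbouring snapshot sections may also need adjusting):

Code:
```shell
#!/bin/sh
# Sketch: strip one stale snapshot section from a guest config file.
# Snapshot name and path are examples; keep a backup of the original.
SNAP="Snap_2018_04_28_180003"
CONF="/etc/pve/lxc/100.conf"
cp "$CONF" "$CONF.bak"

# Drop everything from the "[$SNAP]" header up to (but not including)
# the next "[...]" section header or end of file.
awk -v snap="[$SNAP]" '
  $0 == snap { skip = 1; next }
  /^\[/      { skip = 0 }
  !skip
' "$CONF.bak" > "$CONF"
```

If I remember correctly, 'pct delsnapshot' (and 'qm delsnapshot') also accept a '--force' flag that removes the snapshot from the config even when deleting the disk data fails, which may achieve the same thing with less hand-editing.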

I keep backups of the original conf files (using etckeeper of course).

Could this also lead to problems with the new replication feature in Proxmox 5.2?
 
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1525989670
sockets: 1

[Y2018M04D14]
bootdisk: sata0
cores: 4
memory: 6144
name: StorMan-02
net0: e1000=0E:74:1D:1A:25:EF,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: local-lvm:vm-100-disk-1,size=80G
scsi0: sdb1:100/vm-100-disk-1.qcow2,size=2000G
scsi1: sdc1:100/vm-100-disk-1.qcow2,size=2000G
scsi2: sdd1:100/vm-100-disk-1.qcow2,size=2000G
scsi3: sde1:100/vm-100-disk-2.qcow2,size=2000G
scsi4: sdf1:100/vm-100-disk-2.qcow2,size=2000G
scsi5: sdg1:100/vm-100-disk-2.qcow2,size=2000G
scsihw: virtio-scsi-pci
smbios1: uuid=9a853372-1f74-4ae0-8018-6ce9bc1b2522
snaptime: 1523768704
sockets: 1


Is there a way to extend the timeout of qm delsnapshot? Most of my snapshots are about 100 GB per disk to delete, so 15 minutes is definitely not enough.
I had the same problem, and I believe I found the timeout here:
Code:
/usr/share/perl5/PVE/QMPClient.pm
I added another elsif for my specific case, a timeout for blockdev-snapshot-delete-internal-sync, and raised it to 20 minutes (instead of the stock 10 minutes):
Perl:
# around line 119
} elsif ($cmd->{execute} eq 'savevm-start' ||
         $cmd->{execute} eq 'savevm-end' ||
         $cmd->{execute} eq 'query-backup' ||
         $cmd->{execute} eq 'query-block-jobs' ||
         $cmd->{execute} eq 'block-job-cancel' ||
         $cmd->{execute} eq 'block-job-complete' ||
         $cmd->{execute} eq 'backup-cancel' ||
         $cmd->{execute} eq 'query-savevm' ||
         $cmd->{execute} eq 'delete-drive-snapshot' ||
         $cmd->{execute} eq 'guest-shutdown' ||
         $cmd->{execute} eq 'blockdev-snapshot-internal-sync' ||
         $cmd->{execute} eq 'snapshot-drive') {
    $timeout = 10*60; # 10 minutes
} elsif ($cmd->{execute} eq 'blockdev-snapshot-delete-internal-sync') {
    $timeout = 20*60; # 20 minutes, needed for 100+ GB files <-- my modification
} else {
    $timeout = 3; # default
}
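On top of raising the timeout, snapshots already stuck in 'delete' status can be found by scanning the VM config for sections marked snapstate: delete and retrying each one. A minimal sketch, assuming a standalone node; the stuck_snapshots helper is my own, and the VMID/path are examples:

```shell
#!/bin/sh
# Sketch: list snapshots left in "delete" state by parsing the VM config.
# The stuck_snapshots helper and VMID 100 are illustrative, not PVE tools.
conf="/etc/pve/qemu-server/100.conf"

stuck_snapshots() {
    # Remember the current [section] header; print it whenever that
    # section contains a "snapstate: delete" line.
    awk '/^\[.*\]$/ { sec = substr($0, 2, length($0) - 2) }
         /^snapstate: delete/ { print sec }' "$1"
}

# Each stuck snapshot can then be retried; --force removes the config
# entry even if cleaning up the disk snapshot fails (last resort):
#   stuck_snapshots "$conf" | while read -r s; do
#       qm delsnapshot 100 "$s" --force
#   done
```

The retry loop is left commented out so the helper can be inspected against a config file before anything is actually deleted.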
 