Cannot clone VM or move disk with more than 13 snapshots

Jan 11, 2021
Bern
We have an issue in our Proxmox cluster. It seems it is not possible to clone a VM or move its disk to another storage if the VM has more than 13 snapshots plus the current state. Up to 13 snapshots plus current it works fine. The error message I get is:
Bash:
moving disk with snapshots, snapshots will not be moved!

create full clone of drive sata0 (qnap01.nfs.hdd.vm:249/vm-249-disk-0.qcow2)

TASK ERROR: storage migration failed: lvcreate 'NVMe/vm-249-disk-0' error: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.28/IPC/Open3.pm line 178.

The source of the disk is on an NFS storage. I cannot move it to another NFS storage nor to a local disk.

It seems the problem only exists in the web GUI; cloning with qm works.
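For reference, the CLI clone that does work looks roughly like this (the target VM ID and storage name are examples, and the `echo` makes it a dry run):

```shell
# Dry run of the qm clone that succeeds from the CLI; remove the
# 'echo' to actually execute it. DST and STORAGE are examples.
SRC=249; DST=250; STORAGE=qnap01.nfs.ssd.vm
echo qm clone "$SRC" "$DST" --full --storage "$STORAGE"
```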

Code:
# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.4-5 (running version: 6.4-5/6c7bf5de)
pve-kernel-5.4: 6.4-1
pve-kernel-helper: 6.4-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-2
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-1
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-3
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 

oguz

Proxmox Staff Member
Nov 19, 2018
hi,

my VM with 15+ snapshots clones correctly here, using both the GUI and the qm tool on the latest versions.

"The source of the disk is on a NFS storage."
also no issue with this (using a qcow2 disk).

"I cannot move to another NFS storage nor to a local disk."
you mean you don't have anywhere to move it, or you try to move it and get an error?

for reference here is my VM config where everything works:
Code:
agent: 1
boot: order=scsi0;ide2
cores: 1
ide2: NFS:iso/alpine-standard-3.12.0-x86_64.iso,media=cdrom
memory: 2048
name: alpine-fuuu
net0: virtio=52:FA:56:4D:E1:C1,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: snap3
scsi0: NFS:6666/vm-6666-disk-0.qcow2,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=508d575e-eee8-4797-9637-fcc636251af4
sockets: 2
vmgenid: 1fe039e1-c5eb-4c00-9631-7a2215a9f7a3


can you post your VM configuration (qm config VMID) so we can take a look?
 
Hi Oguz

Thanks for your reply!


"you mean you don't have anywhere to move it, or you try to move it and get an error?"
I tried to move it and got the same error as well.

Here is my config
Code:
qm config 249
bios: ovmf
boot: order=sata0;ide2;net0
cores: 8
cpu: host
efidisk0: qnap01.nfs.hdd.vm:249/vm-249-disk-1.qcow2,size=128K
ide2: qnap01.nfs.hdd.generic:iso/clonezilla-live-2.7.1-22-amd64.iso,media=cdrom,size=306M
memory: 16348
name: Copy-of-VM-ErsaIMFS-Base-OS-Template
net0: virtio=B6:8C:FA:0F:9A:C9,bridge=vmbr0
net1: virtio=7E:A8:D0:84:E7:5D,bridge=vmbr10
net10: virtio=E2:6B:EB:BD:26:22,bridge=vmbr19
net11: virtio=DA:76:CD:0F:CB:CC,bridge=vmbr20
net2: virtio=3E:A6:98:B5:E5:7A,bridge=vmbr11
net3: virtio=5A:58:14:F1:14:19,bridge=vmbr12
net4: virtio=6E:69:89:78:AD:7B,bridge=vmbr13
net5: virtio=A6:5D:5D:53:E0:32,bridge=vmbr14
net6: virtio=6A:41:96:8E:43:12,bridge=vmbr15
net7: virtio=8E:DF:4F:C1:5F:C2,bridge=vmbr16
net8: virtio=A2:D6:F2:D1:2C:8A,bridge=vmbr17
net9: virtio=72:B4:3A:68:7B:54,bridge=vmbr18
numa: 0
ostype: l26
parent: asdfasdfasfdasf
sata0: qnap01.nfs.hdd.vm:249/vm-249-disk-0.qcow2,size=120G
scsihw: virtio-scsi-pci
smbios1: uuid=f849adc5-3e4c-4016-8a23-a63e61cd517d
sockets: 1
vga: virtio
vmgenid: 941a29ac-5971-4ca1-ab94-ba62da49f48a

Best regards
Mathias
 

oguz

thanks for the config. could you also check the journal for related errors and post them here?
 
Here is the output of the journal.

Code:
Jun 10 14:47:33 proxmox05 pvedaemon[61405]: <user@domain> move disk VM 249: move --disk sata0 --storage qnap01.nfs.ssd.vm
Jun 10 14:47:33 proxmox05 pvedaemon[10206]: moving disk with snapshots, snapshots will not be moved!
Jun 10 14:47:33 proxmox05 pvedaemon[61405]: <user@domain> starting task UPID:proxmox05:000027DE:127F1EF4:60C209E5:qmmove:249:user@domain:
Jun 10 14:47:33 proxmox05 pvedaemon[10206]: storage migration failed: unable to create image: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.28/IPC/Open3.pm line 178.
Jun 10 14:47:33 proxmox05 pvedaemon[61405]: <user@domain> end task UPID:proxmox05:000027DE:127F1EF4:60C209E5:qmmove:249:user@domain: storage migration failed: unable to create image: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.28/IPC/Open3.pm line 178.

Best regards
Mathias
 

oguz

i made this config:

Code:
bios: ovmf
boot: order=sata0;ide2;net0
cores: 8
cpu: host
efidisk0: NFS:156/vm-156-disk-1.qcow2,size=128K
ide2: NFS:iso/debian-10.6.0-amd64-netinst.iso,media=cdrom
memory: 16348
name: testbug
net0: virtio=F6:66:BF:EF:09:90,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: snap_15
sata0: NFS:156/vm-156-disk-0.qcow2,size=120G
scsihw: virtio-scsi-pci
smbios1: uuid=bec6e5ac-28a7-49db-8d76-da424bfbf7fc
sockets: 1
vmgenid: 54463c24-b8e9-469f-bcb4-18c427775799

the guest network interfaces should be irrelevant for cloning purposes; otherwise it matches your config. this vm has 15 snapshots.

then i tried cloning the vm with qm clone 156 157 --storage NFS, and it worked.
cloning to local storage also worked.
it was slow, but i cannot reproduce your error.
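a loop like the following can be used to build up a similar set of snapshots quickly (the VM ID matches the test config above; the `echo` makes it a dry run):

```shell
# Take 15 numbered snapshots of a test VM; remove the 'echo' to
# actually create them. VM ID 156 matches the config above.
for i in $(seq 1 15); do
    echo qm snapshot 156 "snap_$i"
done
```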

"Here the output of the journal."
could you also post the journal from 10 minutes before this error? maybe there's something else we're missing.

you should try upgrading all packages to the latest available versions by running apt update && apt dist-upgrade, just to be sure.

your storage configuration would also be interesting to see: cat /etc/pve/storage.cfg
 
"then i tried cloning the vm with qm clone 156 157 --storage NFS, it worked."
Please keep in mind that cloning with qm works, as mentioned in my first post. The problem only exists in the GUI.

Here is the journal log. I don't see anything useful before the error.

Around 11:12:04 I started a clone using qm and interrupted it as soon as it began copying. At 11:12:18 I started the clone from the GUI and got the failure.

Code:
Jun 11 11:00:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:00:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:00:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:00:10 proxmox05 pmxcfs[1752]: [status] notice: received log
Jun 11 11:00:40 proxmox05 pmxcfs[1752]: [status] notice: received log
Jun 11 11:01:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:01:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:01:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:02:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:02:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:02:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:03:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:03:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:03:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:04:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:04:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:04:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:05:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:05:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:05:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:06:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:06:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:06:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:07:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:07:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:07:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:08:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:08:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:08:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:08:28 proxmox05 pmxcfs[1752]: [status] notice: received log
Jun 11 11:09:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:09:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:09:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:10:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:10:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:10:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:10:39 proxmox05 qm[30597]: <root@pam> starting task UPID:proxmox05:00007787:12EF196F:60C3288F:qmclone:249:root@pam:
Jun 11 11:10:59 proxmox05 qm[30599]: VM 249 qmp command failed - VM 249 not running
Jun 11 11:11:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:11:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:11:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:12:00 proxmox05 systemd[1]: Starting Proxmox VE replication runner...
Jun 11 11:12:00 proxmox05 systemd[1]: pvesr.service: Succeeded.
Jun 11 11:12:00 proxmox05 systemd[1]: Started Proxmox VE replication runner.
Jun 11 11:12:04 proxmox05 qm[30599]: clone failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f qcow2 -O qcow2 /mnt/pve/qnap01.nfs.hdd.vm/images/249/vm-249-disk-0.qcow2 zeroinit:/mnt/pve/qnap01.nfs.hdd.vm/images/247/vm-247-disk-0.qcow2' failed: interrupted by signal
Jun 11 11:12:04 proxmox05 qm[30597]: <root@pam> end task UPID:proxmox05:00007787:12EF196F:60C3288F:qmclone:249:root@pam: clone failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f qcow2 -O qcow2 /mnt/pve/qnap01.nfs.hdd.vm/images/249/vm-249-disk-0.qcow2 zeroinit:/mnt/pve/qnap01.nfs.hdd.vm/images/247/vm-247-disk-0.qcow2' failed: interrupted by signal
Jun 11 11:12:18 proxmox05 pvedaemon[25653]: <xxxx@xxxx.xxxx> starting task UPID:proxmox05:000079F5:12EF3FBF:60C328F2:qmclone:249:xxxx@xxxx.xxxx:
Jun 11 11:12:18 proxmox05 pvedaemon[31221]: VM 249 qmp command failed - VM 249 not running
Jun 11 11:12:19 proxmox05 pvedaemon[31221]: clone failed: unable to create image: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.28/IPC/Open3.pm line 178.
Jun 11 11:12:19 proxmox05 pvedaemon[25653]: <xxxx@xxxx.xxxx> end task UPID:proxmox05:000079F5:12EF3FBF:60C328F2:qmclone:249:xxxx@xxxx.xxxx: clone failed: unable to create image: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.28/IPC/Open3.pm line 178.

And here is the storage config:
Code:
dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

nfs: vmbackup
    export /volume2/vmcluster-backup
    path /mnt/pve/vmbackup
    server 10.40.0.10
    content backup
    options vers=4.1
    prune-backups keep-last=2

nfs: qnap01.nfs.ssd.vm
    export /nfsexport.ssd.vm
    path /mnt/pve/qnap01.nfs.ssd.vm
    server 192.168.101.16
    content images,rootdir
    options vers=4.2
    prune-backups keep-all=1

nfs: qnap01.nfs.hdd.vm
    export /nfsexport.hdd.vm
    path /mnt/pve/qnap01.nfs.hdd.vm
    server 192.168.101.16
    content rootdir,images
    options vers=4.2
    prune-backups keep-all=1

nfs: qnap01.nfs.hdd.generic
    export /nfsexport.hdd.generic
    path /mnt/pve/qnap01.nfs.hdd.generic
    server 192.168.101.16
    content vztmpl,iso
    options vers=4.2
    prune-backups keep-all=1

nfs: vmbackup-daily
    export /volume2/vmcluster-backup-daily
    path /mnt/pve/vmbackup-daily
    server 10.40.0.10
    content backup
    prune-backups keep-last=6

lvm: NVMe
    vgname NVMe
    content rootdir,images
    nodes proxmox05
    shared 0

nfs: nfsserver.xxxx.ch
    export /home/xxxx/proxmox-backup
    path /mnt/pve/nfsserver.xxxx.ch
    server nfs-server.xxxx.ch
    content backup
    nodes proxmox05
    options vers=3
    prune-backups keep-all=1
 
