Disk Storage migration - Insecure dependency in exec while running...

avladulescu

Hi Guys,

I just reinstalled a server with Proxmox after some disk changes, updated it to the latest version, rejoined the cluster, and then tried to migrate some VM disks from an external Ceph cluster to the new server, which is also configured to share a RAID volume over NFS to all other servers.

I received an error, though not on all running VMs, which I couldn't find on Google or through the forum's search. I also have to mention that I already tried the following points:

1. Enable/disable KRBD on the Ceph storage in the datacenter storage configuration (a CLI equivalent is sketched below the list).
2. Shut down the VM (all QEMU here, no LXC) and give the VM's disk migration to the other storage another try.
3. Repeat step #2 on the freshly reinstalled node as well as on the other nodes, with no success.
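For point #1, I toggled KRBD from the GUI; if I'm not mistaken, the CLI equivalent would be something along these lines (Store-CEPH being the storage name from my setup):
Code:
# disable / re-enable the krbd option on the external RBD storage
pvesm set Store-CEPH --krbd 0
pvesm set Store-CEPH --krbd 1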


So, the error looks like this:

create full clone of drive virtio0 (Store-CEPH:vm-110-disk-1)
TASK ERROR: storage migration failed: error with cfs lock 'storage-Storage-SSD': unable to create image: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.24/IPC/Open3.pm line 178.

Below is some more info (pveversion -v and pvecm status output):

proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-8 (running version: 5.2-8/fdf39912)
pve-kernel-4.15: 5.2-7
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-26
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-2
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-30
pve-container: 2.0-26
pve-docs: 5.2-8
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-33
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9

Quorum information
------------------
Date: Tue Sep 11 01:15:23 2018
Quorum provider: corosync_votequorum
Nodes: 6
Node ID: 0x00000003
Ring ID: 5/164
Quorate: Yes

Votequorum information
----------------------
Expected votes: 8
Highest expected: 8
Total votes: 6
Quorum: 5
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000005 1 192.168.221.11
0x00000006 1 192.168.221.12
0x00000007 1 192.168.221.13
0x00000008 1 192.168.221.14
0x00000003 1 192.168.221.17 (local)
0x00000004 1 192.168.221.18

I must also add that the network is not an issue, as no IP addresses or cables were changed during the pve07 server reinstall.

Any advice would be greatly appreciated.

Kind regards,
Alex
 
Same issue here. I migrated disks to Ceph in order to do an online migration in a cluster, and now I can't get the disks out of Ceph.
 
So, the error looks like this:
create full clone of drive virtio0 (Store-CEPH:vm-110-disk-1)
TASK ERROR: storage migration failed: error with cfs lock 'storage-Storage-SSD': unable to create image: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.24/IPC/Open3.pm line 178.
Was this the exact error? What can you see in the syslog/journal?
  • What ceph packages are installed on your systems?
    Code:
    dpkg -l | grep -i ceph
  • Is your system up-to-date?
    Code:
    apt update && apt dist-upgrade
  • Which kernels are you running on the nodes in the cluster?
  • What does the config of a VM that produces the error look like?
    Code:
    qm config <vmid>
  • How is the ceph storage configured on the nodes?
    Code:
    cat /etc/pve/storage.cfg
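Regarding the syslog/journal: something along these lines should surface the relevant entries around the time of the failed migration (a sketch, adjust the services and the time range as needed):
Code:
journalctl -u pvedaemon -u pveproxy --since "2018-09-11" | grep -i "insecure dependency"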
 
Hello Alwin,

Please pay attention to my post: I use an external Ceph cluster which is up to date, and it is the same Ceph cluster I used (on a different pool) with the PVE 4.x branch before the 5.x upgrade.

The ceph version on the cluster is 10.2.10-0ubuntu0.16.04.1.

Regarding PVE, the system was up to date when I wrote the post; in fact all nodes in the cluster have since been updated, upgraded and rebooted. The kernel used on those nodes is 4.15.18-4-pve.

Regarding storage, I use the external Ceph storage and two NFS storages from other external servers; the PVE nodes are only used to virtualize VDS VMs, no LXC containers.
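To give an idea of the layout (this is only a sketch, not my literal config; the monitor IPs, pool and export paths are placeholders), the relevant /etc/pve/storage.cfg entries look roughly like this:
Code:
rbd: Store-CEPH
        monhost 192.168.x.1 192.168.x.2 192.168.x.3
        pool rbd
        content images
        krbd 0
        username admin

nfs: Storage-SSD
        server 192.168.x.10
        export /export/ssd
        path /mnt/pve/Storage-SSD
        content images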

The VM configuration is irrelevant, as it was not just one VM that had this problem but around 30 VMs, Windows and Linux, with no major configuration differences; all of them have CPUs, RAM, 1-2 Ceph storage drives and network cards.

The disks on each VM are always attached with the virtio driver; a typical config is sketched below.
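For illustration only (the values are placeholders, not an actual qm config dump), an affected VM looks roughly like this:
Code:
bootdisk: virtio0
cores: 2
memory: 4096
name: example-vm
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
ostype: l26
virtio0: Store-CEPH:vm-110-disk-1,size=32G
virtio1: Store-CEPH:vm-110-disk-2,size=100G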


So, since I had no response on this problem after making the post, I had to find a way to migrate the source storage drives of those VMs from Ceph back to a newly built NFS storage.

I can tell you that nothing worked until I manually ran the following command on each PVE node (where the respective VM resides):

qm move_disk 3000 virtio2 Storage-SSD --format qcow2
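If it helps anyone with many affected VMs, the same call can simply be looped over the VM IDs, roughly like this (the VM IDs, disk name and target storage are only examples):
Code:
# move virtio2 of each listed VM to Storage-SSD as qcow2
for vmid in 3000 3001 3002; do
    qm move_disk "$vmid" virtio2 Storage-SSD --format qcow2
done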

Afterwards, the disk would move successfully, but another issue came out of this: I couldn't remove the Ceph VM image, as it still "has watchers". As far as I know this bug first manifested on the 4.x branch but was fixed really quickly, so it was very surprising to see PVE 5.x still showing it.

In order to release the watchers from the images, I had to run the rbd unmap command from the PVE node for each drive involved, and afterwards remove the image from the Proxmox GUI.
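For reference, the sequence looked roughly like this (the pool name here is a placeholder; with an external cluster you may need to pass -m <monitor> and --keyring explicitly):
Code:
# check whether the image still has watchers
rbd status rbd/vm-110-disk-1
# unmap the KRBD device that is holding the watch
rbd unmap /dev/rbd/rbd/vm-110-disk-1
# afterwards the image can be removed from the GUI, or directly with:
rbd rm rbd/vm-110-disk-1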

Feel free to ask for other details if needed, I will surely reply back.
Alex
 
I just resolved it from the CLI (no error there):
# qm move_disk VMID disk_to_move target_storage
 
Anyway, judging by the error from the GUI path (Perl's taint mode, -T, refusing an exec with unvalidated input), there seems to be a condition in the script that isn't handled and should be checked.

Regards,
 
Anyway, judging by the error from the GUI path (Perl's taint mode, -T, refusing an exec with unvalidated input), there seems to be a condition in the script that isn't handled and should be checked.
Are you connected to the PVE 4.x or PVE 5.x install while doing the move disk?
 
Same issue. Not able to do a full clone of a VM on Ceph storage ("linked clone" is working). Also not able to move a VM disk to another storage.

But this works fine:
Code:
root@pve1:~# qm clone 100 101

Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-8 (running version: 5.2-8/fdf39912)
pve-kernel-4.15: 5.2-7
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-27
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-2
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-30
pve-container: 2.0-26
pve-docs: 5.2-8
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-33
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9


ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)
 
Same issue. Not able to do a full clone of a VM on Ceph storage ("linked clone" is working). Also not able to move a VM disk to another storage.
Are the VMs connected to external ceph storage? Is any other storage working with clone/move disk?
 
Are the VMs connected to external ceph storage? Is any other storage working with clone/move disk?
Yes, I use remote Ceph storage.
Yes, if I create a VM on local storage, then cloning and moving the disk works fine (I can even move a disk from local storage to Ceph).
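In other words, roughly this kind of sequence goes through without the error (the VM IDs and storage names here are just examples):
Code:
# full clone onto local storage, then move the new disk to the remote Ceph storage
qm clone 100 105 --full --storage local
qm move_disk 105 virtio0 ceph_backup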
 
More info: the same error also shows up when resizing a disk and when moving a disk.
Code:
rbd resize 'vm-102-disk-0' error: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.24/IPC/Open3.pm line 178. (500)

create full clone of drive virtio0 (ceph_backup:vm-102-disk-0)
TASK ERROR: storage migration failed: lvcreate 'pve/vm-102-disk-0' error: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.24/IPC/Open3.pm line 178.
 
