Hi Guys,
I just reinstalled a server (after some disk changes) with Proxmox, updated it to the latest version, rejoined the cluster, and tried to migrate some VM disks from an external Ceph cluster to the new server, which is also configured to share a RAID volume over NFS with all the other servers.
I received an error, though not on all running VMs, and I couldn't find it on Google or through the forum's search. I should also mention that I already tried the following steps:
1. Enabled/disabled KRBD on the Ceph storage in the datacenter storage configuration.
2. Shut down the VM (all QEMU here, no LXC) and retried the disk migration to the other storage (see the CLI equivalent after this list).
3. Repeated step 2 on the freshly reinstalled node as well as on the other nodes, with no success.
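For reference, this is the CLI equivalent of the GUI "Move disk" action I was retrying (as far as I know the syntax; VM 110 and virtio0 are taken from the task log below, Storage-SSD is my NFS target):

qm move_disk 110 virtio0 Storage-SSD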
The error reads:
create full clone of drive virtio0 (Store-CEPH:vm-110-disk-1)
TASK ERROR: storage migration failed: error with cfs lock 'storage-Storage-SSD': unable to create image: Insecure dependency in exec while running with -T switch at /usr/share/perl/5.24/IPC/Open3.pm line 178.
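In case it helps with diagnosing: as far as I understand, the PVE daemons run Perl with taint checks (-T), and that message means some value that came from outside the program (a config file, the environment, command output) reached exec() inside IPC::Open3 without being untainted first. A minimal standalone sketch of my own (not PVE code) that triggers the same fatal error:

#!/usr/bin/perl -T
# Minimal taint-mode demo (my own illustration, not PVE code):
# under -T, data from outside the program may not reach exec().
use strict;
use warnings;

$ENV{PATH} = '/bin:/usr/bin';             # untaint PATH the canonical way
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};

my $cmd = $ENV{MY_CMD};                   # anything read from %ENV is tainted
die "set MY_CMD first\n" unless defined $cmd;

exec $cmd;  # dies: "Insecure dependency in exec while running with -T switch"

# Untainting is done by extracting through a regex capture, e.g.:
#   my ($safe) = $cmd =~ m{^([\w/.-]+)\z};
#   exec $safe if defined $safe;

So my guess is that, on this node, some storage-related value reaches the command line still tainted, but I can't see what is different about it compared to the other nodes.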
Below is more info (pveversion -v and pvecm status output):
proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-8 (running version: 5.2-8/fdf39912)
pve-kernel-4.15: 5.2-7
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-26
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-2
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-30
pve-container: 2.0-26
pve-docs: 5.2-8
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-33
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
Quorum information
------------------
Date: Tue Sep 11 01:15:23 2018
Quorum provider: corosync_votequorum
Nodes: 6
Node ID: 0x00000003
Ring ID: 5/164
Quorate: Yes
Votequorum information
----------------------
Expected votes: 8
Highest expected: 8
Total votes: 6
Quorum: 5
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000005 1 192.168.221.11
0x00000006 1 192.168.221.12
0x00000007 1 192.168.221.13
0x00000008 1 192.168.221.14
0x00000003 1 192.168.221.17 (local)
0x00000004 1 192.168.221.18
I must also add that the network is not an issue, as no IP addresses or cables were changed during the pve07 server reinstall.
Any advice would be greatly appreciated.
Kind regards,
Alex