Cannot Clone VM - Operation not supported

ant0nwax

Jan 19, 2017
Hi Forum Readers and Scientists

I would like to ask for some help, since Proxmox is still fairly new to me.
I have actually been running it for about a year, but whenever I read up on cloning and restoring I find tons of threads where people have problems, so I am starting a simple thread for ONE single issue.

I would like to clone a CentOS 7 VM

What I did:
- Logged on to the web GUI as root
- Searched for the VM
- Right-clicked the VM name and selected Clone from the menu
- VM ID: a new unique ID that never existed and that I never tried before
- Name: a unique new name that never existed and that I never tried before
- Snapshot: NOT "current" but the snapshot that I would like to clone (a backup of my machine)
- Pressed Clone
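The GUI steps above can also be sketched on the CLI with `qm clone`. This is a hedged sketch, not a confirmed reproduction: 444 is the target ID that appears later in the thread, the clone name is a placeholder, and antux14 is the snapshot in question:

```shell
# Full clone of VM 222 from snapshot "antux14" into a new VM 444.
# The name "antux14-clone" is a placeholder, not from the thread.
qm clone 222 444 --name antux14-clone --snapname antux14 --full
```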

Result:
Task Viewer:
TASK ERROR: clone failed: activate_volume 'pve/snap_vm-222-disk-1_antux14' error: device-mapper: message ioctl on failed: Operation not supported
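A first diagnostic step (an assumption, not a confirmed fix for this error) is to try activating the snapshot volume by hand and to check how full the thin pool is; thin snapshots carry an activation-skip flag, so `-K` is needed:

```shell
# Try to activate the thin snapshot manually; -K ignores the
# activation-skip flag that thin snapshots carry by default.
lvchange -ay -K pve/snap_vm-222-disk-1_antux14

# Check the pool's usage; a full metadata LV can make thin-pool
# operations fail with "Operation not supported".
lvs -o lv_name,data_percent,metadata_percent pve
```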

Environment:
RAM usage: 57.47% (8.69 GiB of 15.13 GiB)
KSM sharing: 48.34 MiB
HD space (root): 16.52% (2.22 GiB of 13.41 GiB)
SWAP usage: 0.00% (0 B of 4.00 GiB)
CPU(s): 4 x Intel(R) Core(TM) i5-3427U CPU @ 1.80GHz (1 Socket)
Kernel Version: Linux 4.4.98-2-pve #1 SMP PVE 4.4.98-101 (Mon, 18 Dec 2017 13:36:02 +0100)
PVE Manager Version: pve-manager/4.4-20/2650b7b5

Thanks for your help and guidance.
Have a great day.
 
Could you please also post the storage config, and maybe the VM config? I would also suggest updating the system to version 5.
 
Storage config: one internal M.2 SATA and one external USB SATA,
both 64 GB, and now hold your breath: RAID 0 stripe :)

root@antnuc:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 55.77g 0
/dev/sdb1 pve lvm2 a-- 55.89g 6.81g
root@antnuc:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 2 12 0 wz--n- 111.67g 6.81g
root@antnuc:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotzM- 87.05g 72.84 99.83
root pve -wi-ao---- 13.75g
snap_vm-222-disk-1_antux14 pve Vri-i-tz-k 5.00g data vm-222-disk-1 100.00 100.00
snap_vm-223-disk-1_antux15 pve Vri---tz-k 7.00g data vm-223-disk-1
swap pve -wi-ao---- 4.00g
vm-111-disk-1 pve Vwi-aotz-- 32.00g data 99.78
vm-222-disk-1 pve Vwi-aotz-- 5.00g data 64.79
vm-222-disk-2 pve Vwi-a-tz-- 2.00g data 0.00
vm-222-disk-3 pve Vwi-a-tz-- 1.00g data 0.00
vm-222-state-antux14 pve Vwi-a-tz-- 4.49g data 7.63
vm-223-disk-1 pve Vwi---tz-- 7.00g data
vm-333-disk-1 pve Vwi-a-tz-- 32.00g data 66.62



VM config: I think I'll just post the whole thing, since there was no instruction about what exactly to post... ID 222 is the machine in question.

root@antnuc:~# cat /etc/pve/nodes/antnuc/qemu-server/222.conf
#DNS
#FTP
boot: cdn
bootdisk: ide0
cores: 1
ide0: local-lvm:vm-222-disk-1,size=5G
ide2: none,media=cdrom
memory: 2048
name: antux14
net0: bridge=vmbr0,e1000=66:34:33:62:64:63
numa: 0
ostype: l26
parent: antux14
smbios1: uuid=bb68a57d-7824-43e6-b0d8-2b4ab6b8ceef
sockets: 1
unused0: local-lvm:vm-222-disk-2
unused1: local-lvm:vm-222-disk-3

[antux14]
#Proxmox CentOS VM with DNS, FTP, SSH, rootsh 2017 01 22
boot: cdn
bootdisk: ide0
cores: 1
ide0: local-lvm:vm-222-disk-1,size=5G
ide2: none,media=cdrom
machine: pc-i440fx-2.5
memory: 2048
name: antux14
net0: bridge=vmbr0,e1000=66:34:33:62:64:63
numa: 0
ostype: l26
smbios1: uuid=bb68a57d-7824-43e6-b0d8-2b4ab6b8ceef
snaptime: 1485060053
sockets: 1
vmstate: local-lvm:vm-222-state-antux14
 
Upgrading is always a bigger issue. How can I be sure that my VMs will still be working after the upgrade?
Am I able to roll back the upgrade?
Do you have an official upgrade guide from my version to version 5?
Thanks
 
Thanks, I will try to back up the virtual machines to another disk (external storage) and then upgrade via the command line. To me that seems simpler than reinstalling from scratch, even if a reinstall is the cleaner and better choice for a professional upgrade. If the VMs are not able to start, I will try a reinstall. Wish me luck :)
 
Upgrading via the command line was successful. The clone still does not work, but now it shows me a more meaningful error message:

create full clone of drive ide0 (local-lvm:vm-222-disk-1)
Using default stripesize 64.00 KiB.
WARNING: Remaining free space in metadata of thin pool pve/data is too low (99.83% >= 87.50%). Resize is recommended.
TASK ERROR: clone failed: lvcreate 'pve/vm-444-disk-1' error: Cannot create new thin volume, free space in thin pool pve/data reached threshold.


I have an idea of what to do, but I would like to ask which is correct:

- INCREASE the logical volume pve/data?
- SHRINK the logical volume pve/data?


root@antnuc:~# vgs -a pve
VG #PV #LV #SN Attr VSize VFree
pve 2 12 0 wz--n- 111.67g 6.81g
root@antnuc:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 87.05g 72.84 99.83
root pve -wi-ao---- 13.75g
snap_vm-222-disk-1_antux14 pve Vri-a-tz-k 5.00g data vm-222-disk-1 48.08

snap_vm-223-disk-1_antux15 pve Vri---tz-k 7.00g data vm-223-disk-1
swap pve -wi-ao---- 4.00g
vm-111-disk-1 pve Vwi-a-tz-- 32.00g data 99.78
vm-222-disk-1 pve Vwi-a-tz-- 5.00g data 64.79
vm-222-disk-2 pve Vwi-a-tz-- 2.00g data 0.00
vm-222-disk-3 pve Vwi-a-tz-- 1.00g data 0.00
vm-222-state-antux14 pve Vwi-a-tz-- 4.49g data 7.63
vm-223-disk-1 pve Vwi-a-tz-- 7.00g data 58.80
vm-333-disk-1 pve Vwi-a-tz-- 32.00g data 66.62


I could also remove the green disks/snapshots, since they are not needed anymore, but I run into an error again :)

root@antnuc:~# lvremove pve/vm-223-disk-1
Do you really want to remove and DISCARD active logical volume pve/vm-223-disk-1? [y/n]: y
device-mapper: message ioctl on (253:4) failed: Operation not supported
Failed to process thin pool message "set_transaction_id 20 21".
Failed to suspend pve/data with queued messages.
Failed to update pool pve/data.
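Failed thin-pool messages like these line up with the full metadata shown earlier (Meta% at 99.83). One known recovery path, sketched here as an assumption and not a confirmed fix for this system, is repairing the pool's metadata; the pool must be inactive, so every VM on it has to be stopped, and a backup should come first:

```shell
# Deactivate the thin pool (stop all VMs using it first).
lvchange -an pve/data

# Rebuild the pool metadata into a fresh metadata LV from a scan.
lvconvert --repair pve/data

# Reactivate the pool afterwards.
lvchange -ay pve/data
```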


Then suddenly this virtual disk is gone?


root@antnuc:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-cotzM- 87.05g 72.84 99.80
root pve -wi-ao---- 13.75g
snap_vm-222-disk-1_antux14 pve Vri-a-tz-k 5.00g data vm-222-disk-1 48.08
snap_vm-223-disk-1_antux15 pve Vri---tz-k 7.00g data
swap pve -wi-ao---- 4.00g
vm-111-disk-1 pve Vwi-aotz-- 32.00g data 99.78
vm-222-disk-1 pve Vwi-aotz-- 5.00g data 64.79
vm-222-state-antux14 pve Vwi-a-tz-- 4.49g data 7.63
vm-333-disk-1 pve Vwi-a-tz-- 32.00g data 66.62


And I try to remove the other one:


root@antnuc:~# lvremove pve/snap_vm-223-disk-1_antux15
Do you really want to remove and DISCARD logical volume pve/snap_vm-223-disk-1_antux15? [y/n]: y
device-mapper: message ioctl on (253:4) failed: Operation not supported
Failed to process thin pool message "delete 3".
Failed to suspend pve/data with queued messages.
Failed to update pool pve/data.
root@antnuc:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-cotzM- 87.05g 72.84 99.80
root pve -wi-ao---- 13.75g
snap_vm-222-disk-1_antux14 pve Vri-a-tz-k 5.00g data vm-222-disk-1 48.08
snap_vm-223-disk-1_antux15 pve Vri---tz-k 7.00g data
swap pve -wi-ao---- 4.00g
vm-111-disk-1 pve Vwi-aotz-- 32.00g data 99.78
vm-222-disk-1 pve Vwi-aotz-- 5.00g data 64.79
vm-222-state-antux14 pve Vwi-a-tz-- 4.49g data 7.63
vm-333-disk-1 pve Vwi-a-tz-- 32.00g data 66.62


And the other disk is not gone?
Could you please tell me why these things are happening? I do not understand this.


The size did not change?

root@antnuc:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 2 9 0 wz--n- 111.67g 6.81g


I think this update is enough for the day :) THANK YOU
 
Sounds like a space issue. I would probably make a full backup of your VM, install new/larger drive(s) and install Proxmox, then restore the VM to your host. I would leave the original hard drive intact, just in case.

If you go to the Proxmox main menu, highlight your host, go to "Task History", and look for the clone task that failed; it should be "VM 222 - Clone". Double-click it and check the output. In my example below, I tried to make a clone without enough space on pve/data, and about the 5th line of the task output gave me this message:

"WARNING: Sum of all thin volume sizes (104.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (15.82 GiB)!"

If you have anything important within that VM, please consider the 3-2-1 backup rule (3 copies of your data, on 2 different media, with 1 copy off-site).
 
3-2-1? Never heard of this :)
And thanks, I am aware that there is no space. But since it is LVM, I should be able to change the size. My question again:

Shall I
- INCREASE the logical volume pve/data?
- SHRINK the logical volume pve/data?


There is some free space, I am just not familiar with Linux LVM commands :)

Thanks
 
create full clone of drive ide0 (local-lvm:vm-222-disk-1)
Using default stripesize 64.00 KiB.
WARNING: Remaining free space in metadata of thin pool pve/data is too low (99.84% >= 87.50%). Resize is recommended.
TASK ERROR: clone failed: lvcreate 'pve/vm-444-disk-1' error: Cannot create new thin volume, free space in thin pool pve/data reached threshold.
 
I added some space to the logical volume /dev/pve/data
and got the same error message (above).
How can I resize?

root@antnuc:/var/log# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-cotzM- 93.86g 72.87 99.84
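Extending the data part of the pool does not clear this warning, because it is the pool's metadata LV (Meta% at 99.84) that is full. A minimal sketch, assuming the volume group still has free extents; the +512M figure is only an example, not a sizing recommendation:

```shell
# Grow the thin pool's *metadata* LV, not its data LV.
lvextend --poolmetadatasize +512M pve/data

# Verify that Meta% has dropped afterwards.
lvs -o lv_name,lv_size,data_percent,metadata_percent pve/data
```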
 
root@antnuc:/var/log# lvremove /dev/pve/snap_vm-223-disk-1_antux15
Do you really want to remove and DISCARD logical volume pve/snap_vm-223-disk-1_antux15? [y/n]: y
device-mapper: message ioctl on (253:4) failed: Operation not supported
Failed to process thin pool message "delete 3".
Failed to suspend pve/data with queued messages.
Failed to update pool pve/data.
 
"Storage Config, one internal M2 SATA and one external USB SATA
both 64 GB, and now hold your breath: RAID 0 Stripe"

That's a very non-standard setup for RAID 0. It is already dangerous with RAID 0, compounded by having half the stripe on an external USB drive.
Honestly, I think you're better off with a different setup. If you have at least 16 GB of memory, you could do a ZFS RAID 1 root, then add the M.2 SATA as cache (ZIL/L2ARC). I would ditch the external 64 GB USB drive, though.

You can repurpose the 64 GB USB flash drive on a separate PC as a FreeNAS boot disk, set up RAID 1 or RAID 10 storage, and add the FreeNAS box as an NFS (containers and VMs) or SMB (VMs only) storage mount on your Proxmox setup.
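If you went the ZFS route suggested above, attaching the M.2 device as an L2ARC cache would look roughly like this; the pool name rpool and the device path are assumptions for illustration, not taken from this thread:

```shell
# "cache" adds the device as L2ARC read cache; a ZIL/SLOG device
# would use "log" instead. Pool name and device path are placeholders.
zpool add rpool cache /dev/disk/by-id/ata-EXAMPLE-M2-DEVICE
zpool status rpool
```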
 
Isn't it funny, this LVM stretched between a USB and an internal disk on RAID 0? I think it's funny :)
My FreeNAS is way too slow to give good performance for a VM. One day I will buy a 128 GB M.2 disk, but until then I would like to solve this riddle.
 
I have the same error again. Now I have the 128 GB disk, and it is full too.

What are the rules for LVM? How much data may the LVs in the VG use? Not 100%?

How can I clean up the metadata?
root@antnuc:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-cotzM- 88.00g 70.12 99.83
... (99.83% >= 87.50%). Resize is recommended.
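On the question of how full the pool may get: the 87.50% threshold in the warning appears to be tied to lvm2's thin-pool autoextend settings, which can let the pool grow itself before it fills up. A hedged lvm.conf sketch with example values, not a recommendation for this system:

```
# /etc/lvm/lvm.conf (excerpt)
activation {
    # When pool usage crosses this percentage, lvm2 can autoextend it...
    thin_pool_autoextend_threshold = 80
    # ...growing the pool by this much each time (needs dmeventd monitoring).
    thin_pool_autoextend_percent = 20
}
```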
 
