Hi,
this is from:
pve-manager/8.1.10/4b06efb5db453f29 (running kernel: 6.5.13-3-pve)
A qcow2-based template on an NFS server is cloned as a linked clone onto the same NFS server.
Doing this via the API results in a task with this output:
Code:
create full clone of drive ide0 (nfs-server:321/vm-321-cloudinit.qcow2)
Formatting '/mnt/pve/nfs-server/images/457/vm-457-cloudinit.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=4194304 lazy_refcounts=off refcount_bits=16
create linked clone of drive virtio0 (nfs-server:321/base-321-disk-0.qcow2)
clone 321/base-321-disk-0.qcow2: images, vm-457-disk-0.qcow2, 457 to vm-457-disk-0.qcow2 (base=../321/base-321-disk-0.qcow2)
Formatting '/mnt/pve/nfs-server/images/457/vm-457-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2147483648 backing_file=../321/base-321-disk-0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
TASK OK
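For context, the clone itself is issued roughly like this (sketched here with pvesh as a stand-in for our actual API client; the node name and exact parameters are placeholders):
Code:
# Hypothetical stand-in for our API client: linked clone of template 321 to VMID 457
pvesh create /nodes/<node>/qemu/321/clone --newid 457 --full 0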
49 seconds later, changing the I/O limits:
Code:
update VM 457: -virtio0 nfs-server:321/base-321-disk-0.qcow2/457/vm-457-disk-0.qcow2,iops_rd=50000,iops_wr=50000,mbps_rd=244,mbps_wr=244,discard=on
TASK ERROR: volume 'nfs-server:321/base-321-disk-0.qcow2/457/vm-457-disk-0.qcow2' does not exist
This fails...
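For reference, the failing update corresponds roughly to this call (again sketched with pvesh as a stand-in for our actual API client; the volume string is taken verbatim from the task log above):
Code:
# Hypothetical stand-in for our API client: the same config update via pvesh
pvesh set /nodes/<node>/qemu/457/config \
  --virtio0 "nfs-server:321/base-321-disk-0.qcow2/457/vm-457-disk-0.qcow2,iops_rd=50000,iops_wr=50000,mbps_rd=244,mbps_wr=244,discard=on"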
The essentially identical process, however, does not fail when executed manually through the Proxmox UI.
Via the Proxmox UI:
Code:
create full clone of drive ide0 (nfs-server:321/vm-321-cloudinit.qcow2)
Formatting '/mnt/pve/nfs-server/images/457/vm-457-cloudinit.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=4194304 lazy_refcounts=off refcount_bits=16
create linked clone of drive virtio0 (nfs-server:321/base-321-disk-0.qcow2)
clone 321/base-321-disk-0.qcow2: images, vm-457-disk-0.qcow2, 457 to vm-457-disk-0.qcow2 (base=../321/base-321-disk-0.qcow2)
Formatting '/mnt/pve/nfs-server/images/457/vm-457-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2147483648 backing_file=../321/base-321-disk-0.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16
TASK OK
23 seconds later:
Code:
update VM 457: -virtio0 nfs-server:321/base-321-disk-0.qcow2/457/vm-457-disk-0.qcow2,discard=on,size=2G,mbps_rd=244,mbps_wr=244,iops_rd=50000,iops_wr=50000
TASK OK
So the question now is: what could cause the API to behave differently from the same actions triggered by the Proxmox UI?
Any suggestions for further debugging are welcome. We have already ruled out performance issues and similar root causes.
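If it helps the discussion, one check we could still run right after a failing update is whether the node resolves the linked-clone volume ID at that moment (hypothetical commands, not output from our setup):
Code:
# Does the storage layer resolve the volume ID to a path on this node?
pvesm path 'nfs-server:321/base-321-disk-0.qcow2/457/vm-457-disk-0.qcow2'
# What does the storage plugin currently list for VMID 457?
pvesm list nfs-server --vmid 457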
Thank you!