[SOLVED] Delete Parameter Ignored on Move Disk API Endpoint

utkonos

Apr 11, 2022
According to the documentation for the move_disk endpoint linked below, the "delete" parameter defaults to "0", which should keep the original as an unused disk. In my testing, however, the original is never kept: whether the parameter is set explicitly to "0", set to "1", or omitted entirely, the behavior is the same. The disk is moved and the original is deleted.

Is this a bug, user error, or a documentation problem?

https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/move_disk
 
It works here... Can you post the VM config before and after the move, as well as the task log of the move itself, and also how you call the API (e.g. via GUI/CLI/API client/etc.)?

EDIT: also important: the output of pveversion -v
 
Of course. I ran a full update and rebooted right before performing the following test. I have zeroed out some parts of the config.

Here is the config of the destination VM before the move /api2/json/nodes/example/qemu/100/config

JSON:
{
  "data": {
    "boot": "order=virtio0;ide2",
    "cores": 8,
    "digest": "0000000000000000000000000000000000000000",
    "ide2": "local:iso/OPNsense-22.1.2-OpenSSL-dvd-amd64.iso,media=cdrom",
    "memory": 8192,
    "meta": "creation-qemu=6.2.0,ctime=0000000000",
    "name": "destination",
    "net0": "virtio=00:00:00:00:00:00,bridge=vmbr0",
    "onboot": 0,
    "ostype": "other",
    "scsihw": "virtio-scsi-pci",
    "smbios1": "uuid=00000000-0000-0000-0000-000000000000",
    "virtio0": "local-lvm:vm-100-disk-0,backup=0,size=120G",
    "vmgenid": "00000000-0000-0000-0000-000000000000"
  }
}

Here is the config of the source VM before the move /api2/json/nodes/example/qemu/101/config

JSON:
{
  "data": {
    "boot": "order=ide2",
    "cores": 4,
    "digest": "0000000000000000000000000000000000000000",
    "ide2": "local:iso/lubuntu-21.10-desktop-amd64.iso,media=cdrom",
    "memory": 4096,
    "meta": "creation-qemu=6.2.0,ctime=0000000000",
    "name": "source",
    "net0": "virtio=00:00:00:00:00:00,bridge=vmbr0",
    "ostype": "l26",
    "scsihw": "virtio-scsi-pci",
    "smbios1": "uuid=00000000-0000-0000-0000-000000000000",
    "virtio0": "local-lvm:vm-101-disk-0,backup=0,size=1G",
    "vmgenid": "00000000-0000-0000-0000-000000000000"
  }
}

Here is the request as made using the Paw API client:

HTTP:
POST /api2/json/nodes/example/qemu/101/move_disk HTTP/1.1
Authorization: ***** Hidden credentials *****
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Host: 192.168.1.1:8006
Connection: close
User-Agent: Paw/3.3.6 (Macintosh; OS X/11.6.5) GCDHTTPRequest
Content-Length: 54

disk=virtio0&target-disk=ide3&target-vmid=100&delete=0

Here is the config of the destination after the move:

JSON:
{
  "data": {
    "boot": "order=virtio0;ide2",
    "cores": 8,
    "digest": "0000000000000000000000000000000000000000",
    "ide2": "local:iso/OPNsense-22.1.2-OpenSSL-dvd-amd64.iso,media=cdrom",
    "ide3": "local-lvm:vm-100-disk-1,backup=0,size=1G",
    "memory": 8192,
    "meta": "creation-qemu=6.2.0,ctime=0000000000",
    "name": "destination",
    "net0": "virtio=00:00:00:00:00:00,bridge=vmbr0",
    "onboot": 0,
    "ostype": "other",
    "scsihw": "virtio-scsi-pci",
    "smbios1": "uuid=00000000-0000-0000-0000-000000000000",
    "virtio0": "local-lvm:vm-100-disk-0,backup=0,size=120G",
    "vmgenid": "00000000-0000-0000-0000-000000000000"
  }
}

Here is the config of the source after the move:

JSON:
{
  "data": {
    "boot": "order=ide2",
    "cores": 4,
    "digest": "0000000000000000000000000000000000000000",
    "ide2": "local:iso/lubuntu-21.10-desktop-amd64.iso,media=cdrom",
    "memory": 4096,
    "meta": "creation-qemu=6.2.0,ctime=0000000000",
    "name": "source",
    "net0": "virtio=00:00:00:00:00:00,bridge=vmbr0",
    "ostype": "l26",
    "scsihw": "virtio-scsi-pci",
    "smbios1": "uuid=00000000-0000-0000-0000-000000000000",
    "vmgenid": "00000000-0000-0000-0000-000000000000"
  }
}

This is the output of the task log:

Code:
moving disk 'virtio0' from VM '101' to '100'
  Renamed "vm-101-disk-0" to "vm-100-disk-1" in volume group "pve"
removing disk 'virtio0' from VM '101' config
update VM 100: -ide3 local-lvm:vm-100-disk-1,backup=0,size=1G
TASK OK

Here is the output from pveversion -v

Code:
root@example:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.1-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-7
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.6-1
proxmox-backup-file-restore: 2.1.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-9
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-6
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.2.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
 
The "delete" parameter only takes effect when you are moving disk between different storage backends.
In your case you are only moving disk between VMs. Since the disk is physically staying on the same backend, then only operation that happens is a "rename". Obviously this disk cannot be present in both VMs at the same time.
You will be left with an unused disk on your source if you :
- move disk from backend1 to backend2 inside the VM
- move disk from backend1 to backend2 in a different VM
In that case a full block copy will be created on target and source will be detached but not removed, unless you specify delete=1

P.S. I suspect a disk format change can allow for the source to be kept around as well.
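
As a rough sketch (the second storage name and API token below are placeholders, not taken from this thread), a cross-backend move on the same VM that keeps the source around could look like:

Code:
# Hypothetical example: "storage" points at a *different* backend, so a full
# copy is made there; with delete=0 the original stays behind as an unused disk.
curl -k -X POST \
  -H 'Authorization: PVEAPIToken=USER@REALM!TOKENID=SECRET' \
  -d 'disk=virtio0' \
  -d 'storage=other-storage' \
  -d 'delete=0' \
  https://192.168.1.1:8006/api2/json/nodes/example/qemu/101/move_disk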


Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17 That makes perfect sense. My use case is to make a fake USB that I "plug into" a VM so that the VM installer receives a custom config from a file on the fake USB. I want to be able to deploy many VMs simultaneously, so being able to copy rather than move is ideal.

I have just tested it, and I now have a way to do what I need. Thanks! I can "store" the disk image on a placeholder VM with just that disk. To distribute the disk to other VMs for deployment, I make one call to move_disk that creates a copy of the disk on that placeholder VM, then a second call to move_disk to move that new unused disk to the VM I am deploying.
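
Roughly, the two calls look like this (a sketch only: the second storage name and the unused0 slot are assumptions, so check the VM config between the two steps). The first call is the same cross-backend move sketched earlier; the second is the hand-off:

Code:
AUTH='Authorization: PVEAPIToken=USER@REALM!TOKENID=SECRET'

# 1) Copy the master disk to a different backend; with delete=0 the original
#    is kept as an unused disk on the placeholder VM (101 here).
curl -k -X POST -H "$AUTH" \
  -d 'disk=virtio0' -d 'storage=second-storage' -d 'delete=0' \
  https://192.168.1.1:8006/api2/json/nodes/example/qemu/101/move_disk

# 2) Hand the unused copy over to the VM being deployed (100 here).
curl -k -X POST -H "$AUTH" \
  -d 'disk=unused0' -d 'target-vmid=100' -d 'target-disk=ide3' \
  https://192.168.1.1:8006/api2/json/nodes/example/qemu/101/move_disk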

Is there a way to copy a disk to and from a particular storage backend independently of a VM? Basically, can I get rid of the placeholder VM somehow? The idea would be to create a new VM with the fake USB disk attached. Then populate the disk with the configuration data. Then move or copy that disk to a location on a storage backend that is not referenced in a VM config. From that point, I would copy this unreferenced disk to VMs as they are deployed. Is any of this possible?

Lastly, can a note about delete only working when moving across storage backends be added to the API documentation? I can eventually look at the contribution process, but the repos on GitHub appear to be read-only, so I'd need to figure out your process rather than submitting a documentation PR on GitHub right now.
 
Is there a way to copy a disk to and from a particular storage backend independently of a VM? Basically, can I get rid of the placeholder VM somehow?
Not if you want to use the PVE API/tools. Pretty much everything is tied to a VMID, and while you can allocate a disk for a non-existing VMID (pvesm), snapshots/moves/clones/etc. are done via "qm", which requires the VM config to be present. So does the API.
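
For completeness, pvesm can allocate and free a volume for an arbitrary VMID directly; something like this (storage name, VMID and size are just examples):

Code:
# Allocate a 1 GiB volume for VMID 999 on local-lvm, even if no VM 999 exists.
pvesm alloc local-lvm 999 vm-999-disk-0 1G
# See what PVE knows about on that storage, and free the volume again.
pvesm list local-lvm
pvesm free local-lvm:vm-999-disk-0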

The idea would be to create a new VM with the fake USB disk attached. Then populate the disk with the configuration data. Then move or copy that disk to a location on a storage backend that is not referenced in a VM config. From that point, I would copy this unreferenced disk to VMs as they are deployed. Is any of this possible?
In this thread I've shown an example of using both the PVE and Blockbridge APIs to speed up cloning of VMs on a mass scale:
https://forum.proxmox.com/threads/concurrent-cloning-of-vm.97549/
Depending on your storage capabilities, you could just do a thin clone of a "golden image" disk, add the config you want, and give it the name that PVE expects. No placeholder VM would be needed.

Lastly, can a note about delete only working when moving across storage backends be added to the API documentation?
You can open a bug/request here with your proposal: https://bugzilla.proxmox.com/


 
Not if you want to use the PVE API/tools. Pretty much everything is tied to a VMID
I understand. Alternatively, can I use Ceph to get where I want to go? Is there a way to attach a Ceph block device to a VM? Is there a way to create a Ceph object that is not tied to a VMID?

I'm thinking about the same functionality as Volumes on DigitalOcean. They can be created separately from a VM and then can be arbitrarily attached, detached, and duplicated.

Thanks for all your help, btw!
 
I understand. Alternatively, can I use Ceph to get where I want to go? Is there a way to attach a Ceph block device to a VM? Is there a way to create a Ceph object that is not tied to a VMID?
Not via the PVE interface; the storage plugins are abstracted behind a common storage layer, and the higher-level storage functions hide each backend's complexity and capabilities. You can, of course, create it via the Ceph CLI. As long as you name it properly, PVE will be happy to use it.
I'm thinking about the same functionality as Volumes on DigitalOcean. They can be created separately from a VM and then can be arbitrarily attached, detached, and duplicated.
You can do it out of band of PVE, as shown in my example above. Proper naming is the key, since PVE does not keep track of volumes anywhere other than the VM config.
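
For example, with plain RBD the out-of-band flow could look roughly like this (pool, image and storage names are assumptions; the important part is the vm-<vmid>-disk-<n> naming):

Code:
# Thin-clone a "golden image" RBD under the name PVE expects for VM 100.
rbd snap create rbd/golden-image@base
rbd snap protect rbd/golden-image@base
rbd clone rbd/golden-image@base rbd/vm-100-disk-1
# Then attach it through PVE, e.g.:
qm set 100 --scsi1 my-rbd-storage:vm-100-disk-1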


 
There is 'qm importdisk', which can copy a volume (managed by PVE or an arbitrary path) into a new volume and reference it in a VM config. It's CLI only; git already has the API support though, where you can do qm set VMID -scsiX TARGET_STORAGE:0,import-from=VOLID_OR_PATH, which will copy VOLID_OR_PATH into a new volume on TARGET_STORAGE and assign it to scsiX in VMID. qm create also supports it ;)
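
A quick sketch of both forms (paths and storage names are placeholders):

Code:
# CLI import: copies the image into a new volume on local-lvm and adds it to
# VM 100 as an unused disk.
qm importdisk 100 /path/to/config-disk.raw local-lvm
# import-from syntax described above: copy and attach in one step.
qm set 100 --scsi1 local-lvm:0,import-from=/path/to/config-disk.raw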
 
I have a set of API calls that now do exactly what I needed:

  1. Copy the disk on the same VM from the original storage backend to a different one.
  2. Move the disk from that VM to the target VM.
  3. Start the install process on the destination VM.
  4. Import the config from the disk.
This works for me. I now need to solve the problem of how to get the config file from outside onto Proxmox. I can do this by connecting via SSH to the VM that is the source of the disk. My end goal is to generate the config file dynamically in my script, upload it to the intermediate VM via scp, and write it to the disk. From there, it hands off to the Proxmox API call steps above.
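
The glue around those API calls would be roughly this (the generator script, hostname and paths are placeholders of mine, not anything Proxmox provides):

Code:
# Generate the config locally and push it onto the fake-USB disk via the
# intermediate VM (hypothetical script, hostname and mount point).
./generate-config.sh > config.xml
scp config.xml root@intermediate-vm:/mnt/fakeusb/
# ...then the two move_disk calls sketched earlier, followed by starting
# the destination VM so the installer can pick the config up.
curl -k -X POST -H 'Authorization: PVEAPIToken=USER@REALM!TOKENID=SECRET' \
  https://192.168.1.1:8006/api2/json/nodes/example/qemu/100/status/start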

This may all be moot very soon, because a new feature may be on the way in the software I'm configuring. Rather than using disks, it will be able to read its config from an ISO attached as a CDROM. That means I would generate the config, build an ISO containing it, and upload the ISO to Proxmox storage. From there, it can be attached to any number of VMs during the deployment process. I feel this approach is cleaner overall.
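
If that lands, the ISO route would be roughly (tool choice, storage name and paths are assumptions):

Code:
# Build a small ISO containing just the config (genisoimage as one option).
genisoimage -o config.iso -J -R -V CONFIG ./config-dir/
# Upload it to an ISO-capable storage via the API...
curl -k -X POST -H 'Authorization: PVEAPIToken=USER@REALM!TOKENID=SECRET' \
  -F 'content=iso' -F 'filename=@config.iso' \
  https://192.168.1.1:8006/api2/json/nodes/example/storage/local/upload
# ...and attach it to each VM being deployed.
qm set 100 --ide2 local:iso/config.iso,media=cdrom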
 