Can't remove source disk after move.

Jan 2, 2021
PVE 7.0-10. I was upgrading one of the RAID arrays on my single node. I dropped in a secondary drive, migrated my VMs to that "Temp" drive, upgraded the RAID, then migrated the VMs to the new array. I used the "delete source" checkbox to avoid manual cleanup of unused disks afterwards. However, two disks moved successfully but were not deleted from the source location. I can't delete them: the error says the VM exists and instructs me to remove them from the hardware page, but the disks don't show on the hardware page. Any ideas what's going on, or how to remove them? I don't need the "Temp" drive, so I'm close to just deleting the VG and yanking the disk, but I would rather do this cleanly.
 
you can do a 'qm rescan --vmid <VMID>' to scan the storages for leftover disks (they should then show under the hardware view as 'unused' disks).
from there it should be possible to delete them
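For reference, the suggested flow looks roughly like this on the CLI (VMID 106 is taken from the task log later in the thread; the exact sequence is a sketch, not an official procedure):

```shell
# Rescan all storages for volumes belonging to VM 106;
# orphaned volumes are re-added to the VM config as "unusedN" entries
qm rescan --vmid 106

# Inspect the config - leftover disks should now appear as unused0, unused1, ...
qm config 106

# Remove an unused disk entry; the volume can then be deleted
# (in the GUI this is the "Remove" button on the hardware page)
qm set 106 --delete unused0
```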

However two disks moved successfully but weren't deleted from the source location.
can you post the 'move disk' task log ?
 
you can do a 'qm rescan --vmid <VMID>' to scan the storages for leftover disks (they should show then under the hardware view as 'unused' disks)
from there it should be possible to delete them


can you post the 'move disk' task log ?
the 'qm rescan --vmid <VMID>' thankfully discovered the disks and allowed me to remove them. Here is one of the move outputs for reference; I omitted all the percentage ticks in the middle. Not sure why it thought it removed the disk but didn't, or why it didn't automatically show it as an unused disk.

Code:
create full clone of drive scsi1 (Temp:vm-106-disk-2)
  Logical volume "vm-106-disk-1" created.
drive mirror is starting for drive-scsi1
drive-scsi1: transferred 0.0 B of 1.0 TiB (0.00%) in 0s
drive-scsi1: transferred 77.0 MiB of 1.0 TiB (0.01%) in 1s
~~~~
drive-scsi1: transferred 1.0 TiB of 1.0 TiB (100.00%) in 3h 4m 39s
drive-scsi1: transferred 1.0 TiB of 1.0 TiB (100.00%) in 3h 4m 40s, ready
all 'mirror' jobs are ready
drive-scsi1: Completing block job_id...
drive-scsi1: Completed successfully.
drive-scsi1: mirror-job finished
  Logical volume "vm-106-disk-2" successfully removed
TASK OK
 
A similar problem: I have checked several times that when moving a disk to another storage, the original disk is not deleted, even with the "delete source" checkbox ticked. There is not a word about deletion in the log.
Virtual Environment 7.0-13

create full clone of drive sata0 (xxx:160/vm-160-disk-0.qcow2)
Logical volume "vm-160-disk-0" created.
drive mirror is starting for drive-sata0
drive-sata0: transferred 507.0 MiB of 60.0 GiB (0.83%) in 5s
drive-sata0: transferred 1020.0 MiB of 60.0 GiB (1.66%) in 9s
~~
drive-sata0: transferred 60.0 GiB of 60.0 GiB (100.00%) in 9m 22s
drive-sata0: transferred 60.0 GiB of 60.0 GiB (100.00%) in 9m 23s, ready
all 'mirror' jobs are ready
drive-sata0: Completing block job_id...
drive-sata0: Completed successfully.
drive-sata0: mirror-job finished
TASK OK
 
up, the problem is urgent. The source deletion doesn't even start according to the log:

create full clone of drive scsi0 (local:180/vm-180-disk-0.qcow2)
Formatting '/mnt/pve/srv4store_raid1/images/180/vm-180-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=4294967296 lazy_refcounts=off refcount_bits=16
transferred 0.0 B of 4.0 GiB (0.00%)
transferred 4.0 GiB of 4.0 GiB (100.00%)
TASK OK
 
up, the problem is urgent. Delete source don't start in log
just tested here, while the deletion is not in the log, the source actually got deleted. did you check if the source is still there in your case?
 
just tested here, while the deletion is not in the log, the source actually got deleted. did you check if the source is still there in your case?
The source has not been deleted, even though the delete-source checkbox was enabled. If you uncheck the delete box, the old disk shows up as detached after copying and can be deleted by the standard means. That is, the delete-source operation does not start at all; the web interface does not form the command correctly.
 
can you post the source storage content before you move the disk (with delete source enabled) , then the task log and then the source storage content again after the move ?
 
The task hangs at 100% and never completes, and the VM stays locked. Further work with this VM is only possible after a server reboot.
 
The task hangs 100% and is not executed, the VM is locked. Further work with this VM only through server reboot
in that case i would expect the source image not to be deleted, how long did you wait? maybe it just takes a while to write to the storage (or it is full, or it hangs, etc.)
 
the task did complete, but the original disk was not deleted. For an image of 5 GB it took very long (1 Gb/s network, RAID 10 SAS 6G 10k rpm).
 
anything in the journal/dmesg/syslog?
 
anything in the journal/dmesg/syslog?
Oct 19 14:16:44 srv5pve pvestatd[1325]: status update time (5.637 seconds)
Oct 19 14:16:44 srv5pve pvedaemon[392196]: <root@pam> end task UPID:srv5pve:00062507:00E0B535:616EA508:qmmove:180:root@pam: OK
Oct 19 14:17:00 srv5pve systemd[1]: Starting Proxmox VE replication runner...
Oct 19 14:17:01 srv5pve systemd[1]: pvesr.service: Succeeded.
Oct 19 14:17:01 srv5pve systemd[1]: Finished Proxmox VE replication runner.
Oct 19 14:17:01 srv5pve systemd[1]: pvesr.service: Consumed 1.231s CPU time.
 
up, the problem is urgent.
please do not bump without providing more information; we cannot help further without it...

you still have not provided the following information I previously asked for:
can you post the source storage content before you move the disk (with delete source enabled) , then the task log and then the source storage content again after the move ?
in addition, please post the vm config (qm config ID) before and after, then do a 'qm rescan --vmid VMID' and the config again
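For completeness, the requested information could be gathered along these lines (the storage ID 'local' and VMID 180 are examples matching the log above):

```shell
# Source storage content and VM config before the move
pvesm list local --vmid 180
qm config 180

# ... perform the move with "delete source" enabled, save the task log ...

# Source storage content and config after the move
pvesm list local --vmid 180
qm config 180

# Rescan and check the config again for re-discovered unused disks
qm rescan --vmid 180
qm config 180
```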
 
Was this ever sorted? I have the same problem.
I installed Proxmox 7.2-7 on (slow) spinning rust, added NVMe storage, and set it up as ZFS.
I moved the VMs to ZFS, and I'm 95% sure I did check the "delete source" box.
They moved OK, but the old files are still on the HDD. I can't delete them any way I have tried: I am either told the VM exists, or told to delete the file from the hardware page (which shows the new file only).
I then destroyed the VM, and that deleted both the old and the new disks.
The reinstall is going as I write.

No great urgency: the only remaining VM is running Pi-hole, and that does little to no IO.
 
Found a workaround :)
Clone the VM.
Use the clone.
Delete the original, which removes both of its disk files.
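The workaround above, sketched on the CLI (IDs and the storage name are examples; 106 is the original VM, 206 a free VMID for the clone):

```shell
# Full clone of VM 106 to a new VMID on the target storage
qm clone 106 206 --full --storage local-zfs --name myvm-clone

# After verifying the clone boots, destroy the original;
# this removes all of its disks, including the stuck source copy
qm destroy 106 --purge
```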
 
