Moving unused disks locks VM?

fiveangle

I am testing a production workflow we are attempting to build using PVE and am a bit confused...

Part of the workflow takes advantage of PVE to manage migration of virtual disks between storage tiers. However, it appears our workflow is dead in the water because PVE will not allow us to snapshot a VM while unattached VDs are being migrated. Can anyone explain why this is the case? PVE doesn't even offer the ability to snapshot unattached VDs, so why is the VM locked for this operation?

I suspect this is simply an oversight in the design, and that I could submit a ticket on Bugzilla for it, since migration of an unattached VD should be able to occur asynchronously and entirely independently of the VM state. But I wanted to check here first in case there is something I am missing.
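
For reference, here is roughly the sequence we are running, sketched as CLI commands (VMID 100, the unused0 slot, and the storage names tier1/tier2 are placeholders for our actual setup, and the exact qm syntax may vary by PVE version):

    # detach a disk from the VM; the volume is kept and becomes an "unused" entry
    qm set 100 --delete scsi1

    # start migrating the now-unattached disk to a slower storage tier
    qm move_disk 100 unused0 tier2

    # while that move is running, the VM is locked, so this is refused:
    qm snapshot 100 before-maintenance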

-=dave
 
Just to understand this correctly:

You are moving an 'unused' disk from one storage to another, and while this move is running you want to take a snapshot?

If yes, this is by design. A snapshot of a guest captures not only the disks but also the configuration. And while a disk is being moved, its new storage location/name still has to land in the config.

So if this were possible, you could create a snapshot that you cannot reliably roll back to, since the volume it references would not exist anymore.
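
The concern, roughly, with a simplified (made-up) config excerpt, if a snapshot captured the config mid-move:

    # /etc/pve/qemu-server/100.conf - illustrative only
    unused0: tier1:vm-100-disk-1      # live entry, currently being moved to tier2

    [snap1]                           # snapshot taken during the move
    unused0: tier1:vm-100-disk-1      # once the move finishes and the source volume
                                      # is deleted, this reference points at nothing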
 
The move will either succeed or fail, and the move process will update the config only if it succeeds. In neither case does the result affect the active state of the VM (or CT, for that matter) or the activity of its attached VDs. There is no technical reason to force the VM/CT to be offline for the duration of the move. The unattached VDs are not snapshotted during the snapshot process, so again, I'm struggling to see the value.

I did a bit more testing, and not only is the snapshot blocked, but even starting the VM is blocked [insert wide-eye emoji here]. Reviewing the snapshot state saved within the VM .conf, the unattached VDs do not even appear in the snapshot sections, so it's clear that only the primary state section contains the entries, and they are independent of the snapshot function (a sketch of what I saw is below). VDs can be attached and detached at will during runtime, but an unattached VD being moved is a blocker? It boggles the mind.
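
An anonymized sketch of the config I was looking at (not a verbatim dump; VMID, storage, and snapshot names changed):

    # /etc/pve/qemu-server/100.conf
    scsi0: tier1:vm-100-disk-0,size=32G
    unused0: tier1:vm-100-disk-1          # appears only here, in the primary section
    parent: snap1

    [snap1]
    scsi0: tier1:vm-100-disk-0,size=32G   # attached disks are recorded in the snapshot...
    # ...but no unused0 entry appears here at all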

I certainly hear you saying "this is by design" (in the SW dev industry we tongue-in-cheek call that B.A.D., Broken As Designed: behavior that could easily be changed but isn't a priority to change), but if you give it some thought, there is no technical reason to force the VM/CT to be down for the duration of the VD move. There may be other operations that could legitimately be blocked by the VD move, but starting the VM/CT and snapshotting it and its attached VDs are not among them.

I was surprised to find that changes to unattached VDs aren't treated the way resource changes to running VMs/LXCs are: the new VD information (path, size, mount options, etc.) could be highlighted in RED within the Hardware/Resources UI to indicate pending information that becomes active on the next functional "restart" of the VM/CT, just as RAM size, VD attach/detach, mount options, etc. are handled today.

I have not done a code review of the underlying architecture, but if there are "blockers" under the hood, they are surely an oversight and could be mitigated with relatively minor changes. The only "approved" option is to fully clone the VM/CT and then move the unattached VD, which throws the baby out with the bathwater.

[a few moments later]

I have found a (very kludgey) workaround: keep a pre-created sibling VM, reassign the unattached VD to it, initiate the move from that VM, then reassign the moved VD back to the original VM after the move completes (sketched below). I don't appreciate the extra potential for human/scripting error, but it will have to do for now.
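
For anyone who wants to script the same dance, a rough sketch (VMIDs 100/999 are placeholders, and the --target-vmid reassignment option only exists on recent PVE versions, so check the qm man page for your release first):

    # 999 is a pre-created, never-started "parking" VM
    qm move_disk 100 unused0 --target-vmid 999 --target-disk unused0   # hand the disk off
    qm move_disk 999 unused0 tier2       # run the storage move from the sibling;
                                         # VM 100 stays unlocked the whole time
    qm move_disk 999 unused0 --target-vmid 100 --target-disk unused0   # hand it back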

@dcsapak, if you believe this to be a worthwhile change, would you consider submitting a PR on our behalf? If that is frowned upon, I will go ahead and submit one myself, but since our plea for this simple change has been ignored for going on 5 years now, I'm not super confident there will be action without internal interest.

Thank you for coming to my Ted Talk : )

-=dave
 
So if this were possible, you could create a snapshot that you cannot reliably roll back to, since the volume it references would not exist anymore.
Just to be clear, a snapshot taken prior to initiating a move of a detached VD does not contain the detached VD's entries. Once the move completes, the move operation updates only the primary section of the VM config, and it would still do so even if the VM had been started, or snapshotted, after the move was initiated.

Please feel free to check for yourself (a quick way to do so is sketched below). I don't presume to be an expert on PVE internals, but this seems like a feature slip whose fix would greatly benefit many of us. There are likely plenty of users who have hit the limitation, shaken their fist at the sky, and moved on. But I am a huge fan of PVE and do my best to promote it among all my clients. I will be with PVE for the long haul, and I simply want the same thing you fine folks building it want: for it to be as efficient for users as it can be, within reason.
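
A quick way to verify, sketched (VMID 100 and snapshot name snap1 are placeholders):

    # with a detached VD present, take a snapshot, then compare configs
    qm snapshot 100 snap1
    qm config 100 | grep -i unused                    # unused entry is in the live config
    qm config 100 --snapshot snap1 | grep -i unused   # ...but not in the snapshot's config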

-=dave
 