[Feature request] Cloud-Init move drive option

hac3ru

Member
Mar 6, 2021
Hello,

I saw that Move disk is unavailable for the cloud-init drive. After doing some reading, I found out that the CI disk is regenerated at VM startup, so it can be removed and then recreated on the new storage. The only issue is that if you've got a running VM in a production environment, you might want to avoid this. I was doing some storage maintenance today and it was a real pain to move all the VMs off the storage, shut some of them down so I could move the cloud-init drive, remember to disable protection mode (when protection mode is on, you can't delete drives), etc.
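For the record, the manual dance I ended up with looks roughly like this (a sketch with a hypothetical VMID 100, the cloud-init drive on ide2, and a target storage "fast-lvm"):

Code:
qm set 100 --protection 0                # protection blocks drive removal, so lift it first
qm set 100 --delete ide2                 # remove the old cloud-init drive
qm set 100 --ide2 fast-lvm:cloudinit     # recreate it on the new storage
qm set 100 --protection 1                # re-enable protection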

Also, it would be amazing to have an option to put a storage into maintenance mode. On enabling it, all VM disks would be moved to a different storage. To make this even more useful, if no single target storage is large enough, the user would get the option to pick which disk goes to which storage.

Another thing I just hit: if a VM has multiple large disks, I couldn't find an option to "schedule" the migration of all the disks and then go grab a coffee. It would be cool to have a button, like the Migrate button, that migrates all disks of that specific VM one after the other; a rough CLI stand-in is sketched below.
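Something like this loop is what I mean (a sketch with a hypothetical VMID 100 and target storage "new-san"; it skips CD-ROM entries such as the cloud-init drive):

Code:
for disk in $(qm config 100 | grep -oP '^(scsi|virtio|sata|ide)\d+(?=:)'); do
    qm config 100 | grep -q "^$disk:.*media=cdrom" && continue   # skip CD-ROM/cloud-init entries
    qm disk move 100 "$disk" new-san --delete 1                  # move disks one after the other
done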

And another (I'm on a roll today); this one is cosmetic: if a storage is shared among ALL the PVE hosts, we should only see it once in Folder View -> Storage (maybe have a subsection called Shared, where the shared storages would be listed, and one called Host-Specific, or something like that). Also, why are iSCSI storages that are not flagged as "Use LUNs directly" shown in the storage folder? There's no usable info about them; we can't see disk usage or anything. I would hide those, since they bring no real value to the user.

In my case, I have 3 PVE hosts with 4 shared storages (2 of them are iSCSI storages that are not used directly, so I have LVM on top of them). When I expand the storage section I get (4 storages + 2 iSCSIs) x 3 PVE hosts = 18 entries, for something that could be done in 4 entries: the storages themselves, shown only once since they're shared, with the iSCSIs hidden.

Thank you.
 
The only issue is that if you've got a running VM in a production environment, you might want to avoid this. I was doing some storage maintenance today and it was a real pain to move all the VMs off the storage, shut some of them down so I could move the cloud-init drive, remember to disable protection mode (when protection mode is on, you can't delete drives), etc.
Cloud-init in its generated form is just an ISO attached as a CD-ROM. On a running VM it has no value because it's only used during boot. You should have been able to eject/remove/reinit it on a running VM.
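For instance (a minimal sketch with a hypothetical VMID 100; qm cloudinit update is available in recent qm versions):

Code:
qm cloudinit update 100      # regenerate the cloud-init ISO in place, VM can stay running
qm cloudinit dump 100 user   # inspect the generated user-data to confirm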
Also, it would be amazing to have an option to put a storage into maintenance mode. On enabling it, all VM disks would be moved to a different storage. To make this even more useful, if no single target storage is large enough, the user would get the option to pick which disk goes to which storage.
Sounds like a very specific corner case that doesn't necessarily need a dedicated CLI option, since it can be done with a bit of scripting using existing interfaces.
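For example, something along these lines (a rough sketch with hypothetical storage names "old-nas" and "new-nas"; it assumes all affected VMs are on the current node):

Code:
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    # find every disk of this VM that lives on old-nas and move it
    for disk in $(qm config $vmid | grep -oP '^(scsi|virtio|sata|ide)\d+(?=: old-nas:)'); do
        qm disk move $vmid $disk new-nas --delete 1
    done
done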

PS: if you want any of your ideas to be considered, they must be filed in the PVE Bugzilla. Ideas in the forum are generally not taken into consideration.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Cloud-init in its generated form is just an ISO attached as a CD-ROM. On a running VM it has no value because it's only used during boot. You should have been able to eject/remove/reinit it on a running VM.
I was definitely not. I'm running Proxmox 8.1.3 on all the nodes, no updates available...
Sounds like a very specific corner case that doesn't necessarily need a dedicated CLI option, since it can be done with a bit of scripting using existing interfaces.
Well, I don't feel like that's an edge case. I mean, how do you perform maintenance on shared storages? When updating the NAS, for example, reboots are required, so I need to move all VMs off that storage, perform the maintenance, and then move the VMs back, which takes quite a while. While I do agree that it can be handled with a script, I feel like this would be a huge plus for PVE.
PS: if you want any of your ideas to be considered, they must be filed in the PVE Bugzilla. Ideas in the forum are generally not taken into consideration.
Thank you. I was thinking that maybe these options exist somewhere and I had missed them. I'll post them to the Bugzilla and hopefully we'll see them implemented sometime.
 
I was definitely not. I'm running Proxmox 8.1.3 on all the nodes, no updates available...
I am not sure where you ran into an issue:

create VM with cloud-init:
Code:
VMID=3002 ./vm_create.sh
variables used:
STORAGE == bb-nvme
VMID == 3002
DEVICE == scsi0
NAME == vm3002
OSUSER == debian
CONSOLE == vga
CLOUDINIT == local
==============
update VM 3002: -scsi0 bb-nvme:vm-3002-disk-0 -scsihw virtio-scsi-single
update VM 3002: -net0 virtio,bridge=vmbr0,firewall=1
update VM 3002: -scsi1 bb-nvme:cloudinit
scsi1: successfully created disk 'bb-nvme:vm-3002-cloudinit,media=cdrom'
generating cloud-init ISO
update VM 3002: -boot c -bootdisk scsi0
update VM 3002: -serial0 socket -vga virtio
update VM 3002: -ipconfig0 ip=dhcp
update VM 3002: -cipassword <hidden> -ciuser debian

start VM:
Code:
qm start 3002
generating cloud-init ISO

confirm VM is running:
Code:
qm list|grep 3002
      3002 vm3002               running    8192               8.00 819572

find the ID of the cloud-init disk and move it to a different storage. I'll report the error to the PVE devs, but it doesn't affect the end result:
Code:
 qm config 3002|grep cloud
scsi1: bb-nvme:vm-3002-cloudinit,media=cdrom
root@pve7demo1:~# qm disk move 3002 scsi1 bb-iscsi
create full clone of drive scsi1 (bb-nvme:vm-3002-cloudinit)
Use of uninitialized value $completion in string eq at /usr/share/perl5/PVE/QemuServer.pm line 8159.
found unused cloudinit disk 'bb-nvme:vm-3002-cloudinit', removing it

confirm disk has been moved:
Code:
qm config 3002|grep cloud
scsi1: bb-iscsi:vm-3002-cloudinit,media=cdrom,size=4M

delete cloud-init disk (VM is still running):
Code:
qm unlink 3002 --idlist scsi1
update VM 3002: -delete scsi1

create disk on the first storage again:
Code:
qm set 3002 --scsi1 bb-nvme:cloudinit
update VM 3002: -scsi1 bb-nvme:cloudinit
scsi1: successfully created disk 'bb-nvme:vm-3002-cloudinit,media=cdrom'
generating cloud-init ISO


 
how do you perform maintenance on shared storages? When updating the NAS, for example, reboots are required, so I need to move all VMs off that storage, perform the maintenance, and then move the VMs back
Sounds like you are running a non-redundant consumer NAS? Blockbridge customers have the advantage of running dual-controller redundant shared storage. No VM reboots or migrations are needed when storage maintenance is done. In fact, our upgrades are completely transparent to the clients and no failovers are needed.


 
1. I was using the GUI, so maybe using the CLI works... can't say for sure, but I'll try to test it.

2. Nope, I never said I'm using Blockbridge. I'm using EMC, IBM, and Pure Storage. None of them, at least as far as I know, can be updated without a reboot. Besides this, we also have NFS and other storage that doesn't really support HA. So having a software option to help with this would be great.
 
1. I was using the GUI, so maybe using the CLI works... can't say for sure, but I'll try to test it.
They use the same backend API; if you get different results, it's possibly a bug. You would need to test thoroughly to ensure it's not user error.

Nope, I never said I'm using Blockbridge. I'm using EMC, IBM, and Pure Storage. None of them, at least as far as I know, can be updated without a reboot. Besides this, we also have NFS and other storage that doesn't really support HA. So having a software option to help with this would be great.
I know you are not running Blockbridge. Having worked for one of the members of the storage cartel mentioned in your list (in the NAS division), I can say we would have been laughed out of every Wall Street bank if an NFS failover between controllers caused VM/host reboots. You generally update one controller at a time with them, failing services over to the other. In a properly configured environment there should be no outages (regardless of the protocol), beyond small I/O hiccups which should be retried by the client.
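On the PVE side, a hard NFS mount is what lets clients ride through such a failover. An illustrative /etc/pve/storage.cfg entry (hypothetical names and addresses):

Code:
nfs: shared-nas
        server 192.168.1.10
        export /export/vmdata
        path /mnt/pve/shared-nas
        content images
        options vers=4.1,hard,timeo=600

With "hard", stalled I/O blocks and retries until the surviving controller takes over, instead of returning errors to the guest.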

Good luck


 
Ermmm....

I don't know... when I'm trying to update, it says that the SPs will update one at a time, but that it requires a reboot at the end of the process?
 
find the ID of the cloud-init disk and move it to a different storage [...]

It works from CLI, not working from GUI even version 8.2.4
 
It works from CLI, not working from GUI even version 8.2.4
Perhaps I misunderstand the challenge.

The cloud-init drive is a generated ISO attached as a CD-ROM. It is essentially a static set of data. If you are trying to move it from the GUI, that would imply it's not a mass change that requires automation. Couldn't you:

a) remove CloudInit from the VM configuration
b) add CloudInit to the VM configuration on the desired storage

Yes, the GUI appears to disallow the "Disk Action" menu on the cloud-init disk. The 3 standard options in that menu are:
1) Reassign the owner: not applicable to cloud-init
2) Resize: not applicable to cloud-init
3) Move storage: this can be done via other methods, and can be argued to be inapplicable to cloud-init

It is simply easier to block the entire menu when 2 out of 3 options are not applicable and the 3rd is 95% not applicable.


 
