Hi there,
While Proxmox provides a great feature set, I still have the feeling that it lacks several features that would make life easier for VM administrators, at least in typical QA environments.
From my perspective this includes (but is not limited to):
VM Migration:
- Support for VM migration via a network interface other than the main external interface.
It would be nice if the Proxmox frontend allowed migrating VMs not only via the external/management interface. For several
reasons a dedicated storage network makes sense. To my understanding this requires little modification to Proxmox itself: only
a cluster-replicated list of all configured bridges on the cluster nodes (given you have set up your storage network from within
Proxmox) plus an option to select the network interface over which to migrate the VM. Given the right control methods are chosen,
there is little to no I/O impact on either node (see below).
- Batch migration of multiple VMs. For maintenance it might be required to move all VMs from one node to another, for example
if you have local storage and need to rebuild your RAID. It would be nice if the Proxmox frontend let one select multiple machines
to be migrated in a queued manner. It might also be nice to have an additional menu entry that simply lets you select one server node,
define a target node, and automatically migrates all VMs from one to the other (maintenance migration!). Resource verification
on the target node (e.g. sufficient RAM / HDD space) plus a warning would be nice but IMO is not mandatory. Administrators should
always keep an eye on their host resources, plus monitoring tools.
A "du -hs /var/lib/vz/images" quickly gives you the size of all KVM images on the source node.
A simple
Code:
df -h /var/lib/vz/images | awk 'NR==2 {print $4}'
shows how much space is left on the target node for KVM images.
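Putting the two commands together, the resource verification mentioned above could be sketched roughly like this (the helper name and the KB-based comparison are my own; the path is the Proxmox default):

```shell
# Rough sketch of a pre-migration space check (helper name is mine, not Proxmox).
# On the source node, measure the VM's image size in KB:
#   du -sk /var/lib/vz/images/<vmid> | awk '{print $1}'
# Then, on the target node, verify that at least that much is free:
enough_space() {
  # usage: enough_space <needed_kb> [image_dir]
  avail_kb=$(df -Pk "${2:-/var/lib/vz/images}" | awk 'NR==2 {print $4}')
  [ "$avail_kb" -gt "$1" ]
}
```

df -P forces the POSIX single-line output format, so the awk field index stays stable even for long device names.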
VM Management:
- Support for snapshots from the Proxmox frontend. Making a snapshot of a current installation set is essential for QA activities.
It would be nice if the Proxmox frontend allowed managing snapshots for qcow2 images via the GUI. This would include saving, naming,
restoring, and deleting multiple snapshots per VM.
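Under the hood, qemu-img already provides everything such a GUI would need; a minimal sketch of the four operations (the wrapper names are mine):

```shell
# Thin wrappers (names are mine) around qemu-img's built-in qcow2 snapshot support.
# Note: only run these on a stopped VM's image; for a running VM the QEMU
# monitor commands savevm/loadvm would be the live equivalent.
snap_save()    { qemu-img snapshot -c "$2" "$1"; }  # create a snapshot named $2 in image $1
snap_restore() { qemu-img snapshot -a "$2" "$1"; }  # roll the image back to snapshot $2
snap_delete()  { qemu-img snapshot -d "$2" "$1"; }  # delete snapshot $2
snap_list()    { qemu-img snapshot -l "$1"; }       # list all snapshots in the image
```

For example: snap_save /var/lib/vz/images/123/vm-123-disk-1.qcow2 before-upgrade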
- Cloning of VMs. Similar to snapshotting, it is often required in QA environments to duplicate machines for quick server provisioning.
To my best understanding the Proxmox frontend could perform the same steps that we currently do manually: first generate an identical
conf with a new MAC address, then copy the HDD image with ionice at idle priority from the "source VM" to the "target VM" directory.
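A hedged sketch of those two manual steps (the function name, the conf/image layout, and the naive textual VMID replace are all my own assumptions, not a Proxmox API):

```shell
# Sketch of the manual clone described above; assumes the classic layout of
# /etc/qemu-server/<vmid>.conf plus /var/lib/vz/images/<vmid>/ with qcow2 disks.
clone_vm() {
  # usage: clone_vm <src_vmid> <dst_vmid> [conf_dir] [img_dir]
  src=$1; dst=$2
  conf_dir=${3:-/etc/qemu-server}; img_dir=${4:-/var/lib/vz/images}
  # 1. Identical conf with the new VMID and a fresh, locally administered MAC
  #    (naive textual replace of the VMID -- fine for a sketch, not bulletproof).
  mac="02:$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n' | sed 's/../&:/g;s/:$//')"
  sed -e "s/macaddr=[0-9A-Fa-f:]*/macaddr=$mac/" -e "s/$src/$dst/g" \
    "$conf_dir/$src.conf" > "$conf_dir/$dst.conf"
  # 2. Copy each image at idle I/O priority so running VMs stay responsive,
  #    renaming it to the new VMID; fall back to plain cp if ionice is missing.
  command -v ionice >/dev/null 2>&1 && ion="ionice -c3" || ion=""
  mkdir -p "$img_dir/$dst"
  for f in "$img_dir/$src/"*.qcow2; do
    $ion cp "$f" "$img_dir/$dst/$(basename "$f" | sed "s/$src/$dst/g")"
  done
}
```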
- User ACLs: It would be very cool if Proxmox supported different user profiles that limit access to certain functions for certain
VMs. Example: assign two test VMs to a user and let him access snapshot management plus the VM boot/shutdown/reset functions.
Somewhat related to this:
I solved two of these requirements (VM migration via an internal storage network plus batch migration) with a simple bash script you can find here:
http://pastebin.com/qjkqFFcG
!!! Use at own risk !!!
We have 3 servers connected to a dedicated storage network, each with a 4 GBit bond device. Using the script I see up to 50 MB/s when
migrating a VM. The impact on overall host performance is moderate: I see 5-10% I/O delay on the target host, which is no more
than I usually see when migrating a VM via the Proxmox frontend. Our systems have two quad-core Xeons at 2 GHz, 32 GB RAM, and a RAID5
with 4 SATA HDDs. Using ionice helps to keep the I/O load on the source host down. If you are afraid of too high an impact, there
is the option to limit rsync bandwidth (similar to what older Proxmox versions do during migration). The script itself is
pretty basic and one might be interested in adding more features (e.g. limiting I/O with cgroups, adding a log file, adding mail support).
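The bandwidth cap mentioned above is a single rsync flag; a sketch (the function name and the 30000 KB/s value are examples):

```shell
# rsync's --bwlimit takes KB/s, so 30000 caps the transfer at roughly 30 MB/s.
copy_limited() {
  # usage: copy_limited <src_dir/> <dst>  (dst may be a remote host:path)
  rsync -a --bwlimit=30000 "$1" "$2"
}
```

For example: copy_limited /var/lib/vz/images/123/ root@proxmox-002.storage.local:/var/lib/vz/images/123/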
Batch migration can easily be done by chaining multiple commands via &&:
vm_migrate proxmox-001.storage.local 123 && vm_migrate proxmox-001.storage.local 124 && vm_migrate proxmox-001.storage.local 125 ....
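The same chain is less error-prone as a loop; vm_migrate below is the script from the pastebin link, the wrapper around it is just a sketch:

```shell
# Queue several migrations, stopping at the first failure (like && does).
migrate_all() {
  # usage: migrate_all <target_host> <vmid> [vmid ...]
  target=$1; shift
  for vmid in "$@"; do
    vm_migrate "$target" "$vmid" || { echo "migration of $vmid failed, aborting" >&2; return 1; }
  done
}
```

For example: migrate_all proxmox-001.storage.local 123 124 125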
This worked perfectly for me during the last few days when I needed to migrate a whole bunch of test servers from one cluster node to another.
Best regards,
Holger