Feature suggestions: Better VM administration features

holgerb

Member
Aug 3, 2009
Hi there,

While Proxmox provides a great feature set, I still have the feeling that it lacks several features which would make life easier for VM administrators, at least in typical QA environments.
From my perspective this includes (but is not limited to ;)):

VM Migration:
- Support for VM migration via a network interface other than the main external interface.
It would be nice if the Proxmox frontend allowed migrating VMs not only via the external / management interface. For several reasons a dedicated storage network makes sense. To my understanding this means little modification to Proxmox itself: only a cluster-replicated list of all configured bridges on the cluster nodes (given you have set up your storage network from within Proxmox), plus an option to select the network interface over which to migrate the VM. Given the right control methods are chosen, there is little to no I/O impact on either node (see below).

- Batch migration of multiple VMs. For maintenance it might be required to move all VMs from one node to another, for example if you have local storage and need to rebuild your RAID. It would be nice if the Proxmox frontend let one select multiple machines which are then migrated in a queued manner. It might also be nice to have an additional menu item which simply lets you select one server node, define a target node and automatically migrates all VMs from one to the other (maintenance migration!). Resource verification on the target node (e.g. sufficient RAM / HDD space) plus a warning would be nice but IMO is not mandatory. Administrators should always keep an eye on their host resources, plus monitoring tools.

A "du -hs /var/lib/vz/images" quickly gives you the size of all KVM images on the source node.
A simple
Code:
df -h /var/lib/vz/images | awk 'NR==2 {print $4}'
shows how much space is left for KVM images on the target node (awk is more reliable here than cut, since df pads its columns with a variable number of spaces).
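
The same pre-flight check can be wrapped in a small reusable function. This is only a sketch; the Proxmox paths in the comments are the defaults, and the function itself just parses POSIX df output:

```shell
# avail_kib DIR — print the free space of DIR's filesystem in KiB.
# df -P forces POSIX single-line output; $4 is the "Available" column.
avail_kib() {
    df -kP "$1" | awk 'NR==2 {print $4}'
}

# On the source node:  need=$(du -sk /var/lib/vz/images | awk '{print $1}')
# On the target node:  free=$(avail_kib /var/lib/vz/images)
# ...and only migrate if [ "$need" -lt "$free" ].
avail_kib /
```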

VM Management:
- Support for snapshots from the Proxmox frontend. Making a snapshot of a current installation set is essential for QA activities.
It would be nice if the Proxmox frontend allowed managing snapshots for qcow2 images via the GUI. This would include saving, naming,
restoring and deleting multiple snapshots per VM.
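
For reference, the manual qcow2 workflow such a GUI would wrap can be done with qemu-img's snapshot subcommand. The sketch below runs against a freshly created scratch image so it can be tried anywhere; on a real node the image would live under /var/lib/vz/images/<vmid>/ and the VM should be stopped first:

```shell
# Scratch qcow2 image for the demonstration (path is a placeholder)
IMG=/tmp/snapshot-demo-$$.qcow2
qemu-img create -f qcow2 "$IMG" 64M

qemu-img snapshot -c before-upgrade "$IMG"   # create a named snapshot
qemu-img snapshot -l "$IMG"                  # list all snapshots
qemu-img snapshot -a before-upgrade "$IMG"   # revert to the snapshot
qemu-img snapshot -d before-upgrade "$IMG"   # delete the snapshot
```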

- Cloning of VMs. Similar to snapshotting, it is often required in QA environments to duplicate machines for quick server provisioning.
To my best understanding the Proxmox frontend could perform the same steps that we currently do manually: first generate an identical
conf with a new MAC, then copy the HDD image with ionice at idle priority from the "source VM" to the "target VM" directory.
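
A minimal sketch of those two manual steps, assuming the Proxmox 1.x layout (configs in /etc/qemu-server/<vmid>.conf, disks in /var/lib/vz/images/<vmid>/). The VM IDs and MAC address are made up, and the paths point at a scratch directory with a faked source VM so the sketch can run anywhere:

```shell
SRC=123; DST=300; NEWMAC=DE:AD:BE:EF:00:01
BASE=$(mktemp -d)   # stands in for /etc/qemu-server + /var/lib/vz/images

# Fake a minimal source VM (config + empty disk image) for the demo
mkdir -p "$BASE/images/$SRC" "$BASE/images/$DST"
printf 'name: testvm\nvlan0: rtl8139=AA:BB:CC:00:11:22\nide0: local:%s/vm-%s-disk-1.qcow2\n' \
    "$SRC" "$SRC" > "$BASE/$SRC.conf"
: > "$BASE/images/$SRC/vm-$SRC-disk-1.qcow2"

# Step 1: generate an identical conf with the new VM ID and a fresh MAC
sed "s|$SRC/vm-$SRC-|$DST/vm-$DST-|; s|=[0-9A-Fa-f:]\{17\}|=$NEWMAC|" \
    "$BASE/$SRC.conf" > "$BASE/$DST.conf"

# Step 2: copy the disk image at idle I/O priority (ionice -c3) so the
# clone operation does not starve running VMs of I/O bandwidth
ionice -c3 cp "$BASE/images/$SRC/vm-$SRC-disk-1.qcow2" \
              "$BASE/images/$DST/vm-$DST-disk-1.qcow2"
```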

- User ACLs: It would be very cool if Proxmox supported different user profiles which limit access to certain functions for certain
VMs. Example: assign two test VMs to a user and let him access the snapshot management plus the VM boot / shutdown / reset functions.

Somewhat related to this:
I solved two of these requirements (VM migration via the internal storage network plus batch migration) with a simple bash script you can find here:
http://pastebin.com/qjkqFFcG
!!! Use at own risk !!!

We have 3 servers connected to a dedicated storage network, each with a 4 GBit bond device. Using the script I see up to 50 MB/s when
migrating a VM. The impact on overall host performance is moderate: I see 5-10% I/O delay on the target host, which is not more
than I usually see when migrating a VM via the Proxmox frontend. Our systems have two quad-core Xeons at 2 GHz, 32 GB RAM and a RAID5
with 4 SATA HDDs. Using ionice helps to keep the I/O load on the source host down. If you are afraid of too high an impact, there
is the option to limit the rsync bandwidth (similar to what older Proxmox versions do during migration). The script itself is
pretty basic and one might be interested in adding more features (e.g. limiting I/O with cgroups, adding a log file, adding mail support).

Batch migration can easily be done by chaining multiple commands via &&:
Code:
vm_migrate proxmox-001.storage.local 123 && vm_migrate proxmox-001.storage.local 124 && vm_migrate proxmox-001.storage.local 125 ....
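
The same chain can also be written as a small loop. migrate_all below is a hypothetical wrapper around the vm_migrate script linked above; like the && chain, it stops at the first failed migration so a half-migrated VM does not go unnoticed:

```shell
# migrate_all TARGET ID...  — run vm_migrate (the linked script) once
# per VM ID, aborting the batch on the first failure
migrate_all() {
    target=$1; shift
    for id in "$@"; do
        vm_migrate "$target" "$id" || { echo "migration of $id failed" >&2; return 1; }
    done
}

# Usage (same effect as the && chain above):
# migrate_all proxmox-001.storage.local 123 124 125
```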

This worked perfectly for me during the last few days when I needed to migrate a whole bunch of test servers from one cluster node to another.

Best regards,
Holger
 
Thanks for your suggestions. Nothing really new, but I would like to have such a feature set.
 
Thanks for your quick reply! Yes, I know... I didn't request new features.
It is more that the "rest of the pack" has many of them, and hopefully Proxmox will have some of them soon too :D
 
The migration takes place over the NIC that is allowed to access the storage; whether you set that up to be the management NIC or not is up to you. In my setup I have a NIC for the storage network, a NIC for VM internet access and a NIC for management. This is not hard to do with the current setup.
 
This is not correct for Proxmox 1.8 if you migrate via the web interface. We have one "external" NIC / bridge connected to our test network plus one NIC / bond device connected to an internal storage network (4x1 GBit) with a dedicated switch. Proxmox only allows migration via the external NIC / bridge (unless I am missing something).

Or do you mean using the storage network for cluster synchronisation as well?

Anyway, such functionality would be nice directly in the web interface. As stated above, we have solved this via a bash script.
 
I am using 1.8 and do not have this problem. The only IP allowed to connect to my storage is the "storage NIC" on my node, so the node cannot use the other NICs to connect. Furthermore, the "storage NIC" is physically on a different network. Works well for me. Here is an example of what my network config looks like (I use the management NIC for cluster sync):

Code:
# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static  ## Management
        address 192.168.20.85
        netmask 255.255.255.0
        gateway 192.168.20.1

auto vmbr0  ## Internet for VMs
iface vmbr0 inet static
        address 192.168.3.104
        netmask 255.255.255.0
        gateway 192.168.3.1
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

auto eth2
iface eth2 inet static  ## Storage
        address 192.168.7.86
        netmask 255.255.255.0
        gateway 192.168.7.1
 