Updates for Proxmox VE 3.0 - including storage migration

martin

Proxmox Staff Member
We just moved a bunch of packages from pvetest to our stable repository, including a lot of bug fixes, code cleanups, qemu 1.4.2, and a quite cool new feature - storage migration.

A big thank-you to our active community for all the feedback, testing, bug reports and patch submissions.

Release Notes

- ceph (0.61.3-1~bpo70+1)

  • update of ceph-common, librbd1 and librados2
- libpve-storage-perl (3.0-8)

  • rbd: --format is deprecated, use --image-format instead
  • be more verbose on rbd commands to get progress
  • various fixes for nexenta plugin
- vncterm (1.1-4)

  • Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users previously used apache option SSLCertificateChainFile for that)
- pve-qemu-kvm (1.4-13)

  • update to qemu 1.4.2
  • remove rbd-add-an-asynchronous-flush.patch (upstream now)
- qemu-server (3.0-20)

  • new API to update VM config: this one is fully asynchronous.
  • snapshot rollback: use pc-i440fx-1.4 as default
  • config: implement new 'machine' configuration
  • migrate: pass --machine parameter to remote 'qm start' command
  • snapshot: store/use 'machine' configuration
  • implement delete flag for move_disk
  • API: rename move to move_disk
  • implement storage migration ("qm move")
  • fix bug 395: correctly handle unused disk with storage alias
  • fix unused disk handling (do not hide unused disks when used with snapshot).
- pve-manager (3.0-23)

  • fix bug #368: use vtype 'DnsName' to verify host names
  • fix bug #401: disable connection timeout during API call processing
  • add support for new qemu-server async configuration API
  • support 'delete' flag for 'Move disk'
  • add 'Move disk' button for storage migration
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
First, thanks for this release!

I have a question about storage live migration...

Can I use it to move a VM's disks from one LVM volume group to another?
We plan to replace the storage array which hosts our Proxmox KVM VMs using shared LVM.
The idea is to mount a new LUN from the new storage array, create a new volume group in Proxmox, and live-migrate the storage from the old array.

Do you think this is a valid scenario for the new storage migration feature?

Regards
 
Any KVM storage can be migrated to any other storage. This includes both shared and unshared storage, both online and offline. One limitation of my initial implementation, though, is that an HA-enabled VM can only be migrated to shared storage.
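For the LVM-to-LVM scenario above, a rough sketch from the CLI could look like this. The VM ID (101), disk name (virtio0), device node (/dev/sdX) and storage names are placeholders for your setup - check "qm help move_disk" and "pvesm" on your own node before running anything:

```shell
# Make the new LUN usable for LVM (replace /dev/sdX with the real device):
pvcreate /dev/sdX
vgcreate newvg /dev/sdX

# Register the volume group as a PVE storage (writes /etc/pve/storage.cfg):
pvesm add lvm newlvm --vgname newvg --content images

# Move the disk to the new storage; --delete removes the source volume
# after a successful move (works online with qemu 1.4.2):
qm move_disk 101 virtio0 newlvm --delete
```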
 
If you update the qemu/kvm package, you need to stop/start your KVM-based VMs to use the new version.

If you update the kernel, you need to reboot the host.
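The update sequence on a node could be sketched like this (the VM ID 101 is just a placeholder; live-migrating the VM to another node and back is an alternative to stop/start):

```shell
# Pull in the new packages from the stable repository:
apt-get update
apt-get dist-upgrade

# If pve-qemu-kvm was upgraded, restart each KVM VM so it
# picks up the new qemu binary:
qm stop 101 && qm start 101

# If a new pve-kernel package was installed, reboot the host:
reboot
```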
 
Probably a stupid question, but I don't see any storage migration option thru the web interface. Is it command-line only?
 
Thanks! I kept looking for it by right-clicking over the VM on the left-side menu where the "Migrate" option is.
 
During the upgrade from 3.0 to the latest 3.x as described here:
http://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_3.x_to_latest_3.0
I got a question.
What is the right answer?

Code:
Processing triggers for man-db ...
Processing triggers for install-info ...
Setting up base-files (7.1wheezy1) ...
Installing new version of config file /etc/debian_version ...

Configuration file `/etc/issue'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** issue (Y/I/N/O/D/Z) [default=N] ?
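That is dpkg's standard conffile prompt: base-files ships a new /etc/issue, and dpkg asks whether to keep your edited copy. If you never customized /etc/issue, either answer is harmless; pressing Enter takes the safe default (N, keep your current version). For unattended upgrades, dpkg can be told how to answer up front - a sketch, assuming a standard Debian/PVE host:

```shell
# --force-confdef: take the maintainer's version automatically where the
#                  file was never locally modified
# --force-confold: where it WAS modified, keep the currently-installed copy
apt-get -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold" \
        dist-upgrade
```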
 
excellent news re migration, does this mean that i could create a cluster of PVE nodes without shared storage AND be able to do migrations between the nodes? I'm not looking for HA options here, just central management and easy manual migration between pve nodes
 
Yes, you can migrate, but of course not "live" migrate - you have to shut down the VM first, then migrate. By the way, since the migrate UI does not ask for a destination storage, I guess you can only migrate to the target's "local" storage.
 
excellent news re migration, does this mean that i could create a cluster of PVE nodes without shared storage AND be able to do migrations between the nodes? I'm not looking for HA options here, just central management and easy manual migration between pve nodes

Not yet, I'll try to work on this for the next Proxmox release (live migration + storage migration of local storage).
 
It's no problem.
 
I just downloaded the official 3.0 ISO to install on my box. To pick up the latest changes, what should I do? Just sudo apt-get update && sudo apt-get dist-upgrade?
Thank you in advance.
 
What is the best way to upgrade a cluster?
I have three nodes, two for KVM and one just for OpenVZ (with the possibility of migrating to the other two nodes).

Can I upgrade the OpenVZ-only node without any problems?
And then migrate the KVM VMs to the other node --> update the empty node --> migrate all VMs to the updated node --> update the other KVM node?
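The rolling upgrade you describe could be sketched roughly like this (node names and the VM ID 101 are placeholders; the exact OpenVZ migration command depends on your version, so verify it on your nodes first):

```shell
# On the node being emptied: move each KVM VM to the other node.
# --online keeps the guest running during migration (shared storage):
qm migrate 101 node2 --online

# On the now-empty node: upgrade and, if a new kernel arrived, reboot:
apt-get update && apt-get dist-upgrade
reboot

# Once the node is back, migrate the VMs home and repeat on the next node:
qm migrate 101 node1 --online
```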
 
