Updates for Proxmox VE 3.0 - including storage migration ("qm move")

martin

Proxmox Staff Member

We just uploaded a bunch of packages to our pvetest repository, including a lot of bug fixes, code cleanups, qemu 1.4.2 and also a quite cool new feature - storage migration ("qm move").

A big thank-you to our active community for all the feedback, testing, bug reporting and patch submissions.
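For those who want to try the new feature right away, here is a rough sketch of the call, assuming the usual qm argument order - the subcommand may also be spelled 'move_disk' depending on the build, and the VM ID 101, the disk 'virtio0' and the target storage 'local' below are just placeholders:

# qm move 101 virtio0 local --delete

Without the optional --delete flag the source image should be kept and show up as an 'unused disk' entry in the VM config.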

Release Notes

- ceph (0.61.3-1~bpo70+1)

  • update of ceph-common, librbd1 and librados2
- libpve-storage-perl (3.0-8)

  • rbd: --format is deprecated, use --image-format instead
  • be more verbose on rbd commands to get progress
  • various fixes for nexenta plugin
- vncterm (1.1-4)

  • Allow adding intermediate certificates to /etc/pve/local/pve-ssl.pem (users previously used the apache option SSLCertificateChainFile for that); see the sketch after these release notes
- pve-qemu-kvm (1.4-13)

  • update to qemu 1.4.2
  • remove rbd-add-an-asynchronous-flush.patch (upstream now)
- qemu-server (3.0-20)

  • new API to update VM config: this one is fully asynchronous.
  • snapshot rollback: use pc-i440fx-1.4 as default
  • config: implement new 'machine' configuration
  • migrate: pass --machine parameter to remote 'qm start' command
  • snapshot: store/use 'machine' configuration
  • implement delete flag for move_disk
  • API: rename move to move_disk
  • implement storage migration ("qm move")
  • fix bug 395: correctly handle unused disk with storage alias
  • fix unused disk handling (do not hide unused disks when used with snapshot).
- pve-manager (3.0-23)

  • fix bug #368: use vtype 'DnsName' to verify host names
  • fix bug #401: disable connection timeout during API call processing
  • add support for new qemu-server async configuration API
  • support 'delete' flag for 'Move disk'
  • add 'Move disk' button for storage migration
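Regarding the vncterm change above, a minimal sketch of appending an intermediate certificate to the node certificate, assuming your intermediate is in a file called intermediate.pem (placeholder name):

# cat intermediate.pem >> /etc/pve/local/pve-ssl.pem

Make sure the concatenated file still starts with the server certificate, followed by the chain.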
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Great news: "a quite cool new feature - storage migration ("qm move")."
Could you specify under which conditions storage migration can be used? (e.g. only when the VM disk is a raw or qcow2 file? LVM/iSCSI raw?)

Thanks to all for the excellent product and work!
Best Regards
SergioF
 
storage migration - great feature, thanks!

just testing a bit - two findings so far:

- if the target storage is too small to physically hold the migrated image, the migration process stalls and times out with some message about "probable bad sectors"

- a qcow2 image (preallocation:metadata) gets "expanded" during migration, meaning a "thin provisioned" platform may run into disk space problems:
before migration:
# qemu-img info ./101/vm-101-disk-1.qcow2
image: ./101/vm-101-disk-1.qcow2
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 6.5G
cluster_size: 65536

after migration:
# qemu-img info ./101/vm-101-disk-1.qcow2
image: ./101/vm-101-disk-1.qcow2
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 100G
cluster_size: 65536

(a manual 'qemu-img convert' afterwards re-creates the sparse image)
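For reference, a sketch of that manual step with the VM stopped, reusing the file names from the output above (the temporary target name is a placeholder):

# qemu-img convert -O qcow2 vm-101-disk-1.qcow2 vm-101-disk-1-sparse.qcow2
# mv vm-101-disk-1-sparse.qcow2 vm-101-disk-1.qcow2

qemu-img convert skips zeroed clusters by default, so the resulting qcow2 is sparse again.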
 
