How to best migrate to new host?

Zyg0te

Hi!

I currently have Proxmox running on two hosts: one is an old gaming PC and the other a small server. I want to replace both of them with a proper rack server (Supermicro 5019S-M).

However, my question is how I can most easily migrate all my VMs and CTs from the two clustered servers to the new one. Do I have to add the new server to the cluster, migrate everything over, and then remove it from the cluster again, or is there an even simpler way?

Thanks!
 
Without downtime, the only possibility is to add the new server to the cluster, migrate, and then remove the old servers from the cluster.
With downtime, the easiest method is backup & restore.
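The backup & restore route can be sketched roughly like this, using the stock vzdump/qmrestore tools (VM ID, paths, and host/storage names are placeholders):

```shell
# On the old host: create a full backup of VM 100
# (--dumpdir chooses where the archive lands)
vzdump 100 --mode stop --compress lzo --dumpdir /mnt/backups

# Copy the archive over to the new host, e.g. via scp
scp /mnt/backups/vzdump-qemu-100-*.vma.lzo root@new-host:/var/tmp/

# On the new host: restore the archive as VM 100 onto local-lvm
qmrestore /var/tmp/vzdump-qemu-100-*.vma.lzo 100 --storage local-lvm
```

For containers the equivalent pair is `vzdump <ctid>` plus `pct restore`.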
 
Without downtime, the only possibility is to add the new server to the cluster, migrate, and then remove the old servers from the cluster.
With downtime, the easiest method is backup & restore.

Thanks for the info. With backup/restore I'd need enough storage space somewhere to store all the backups, which might be a challenge, so I guess the cluster method is the only viable option.
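If local disk space is the blocker, the dumps don't have to live on the node itself — an NFS export or a USB disk can hold them temporarily (device and mount paths below are hypothetical):

```shell
# Mount an external disk (or an NFS share) for the dump archives
mount /dev/sdb1 /mnt/dump          # or: mount -t nfs nas:/export /mnt/dump

# Back up the guests straight to the mounted space
vzdump 100 101 102 --dumpdir /mnt/dump --compress lzo
```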
 
Hi,
don't forget: "without downtime" is only true with shared storage (or similar, like Ceph).

Udo

Hi Udo,
I have sent a patch series for RFC today on the pve-devel mailing list for qemu live migration + storage migration across different Proxmox clusters (without shared storage between hosts).

BTW,
online local storage migration within the same cluster has already been implemented on the command line for the last 1-2 months.
 
Hi Udo,
I have sent a patch series for RFC today on the pve-devel mailing list for qemu live migration + storage migration across different Proxmox clusters (without shared storage between hosts).
Hi Spirit,
that sounds good.
BTW,
online local storage migration within the same cluster has already been implemented on the command line for the last 1-2 months.
I didn't know that! I had only read the announcement that you were working on it.

I just gave it a try and it works:
Code:
root@pve-a:~# qm migrate 100 pve-b --online --with-local-disks
Feb 22 21:46:50 starting migration of VM 100 to node 'pve-b' (192.168.200.12)
Feb 22 21:46:51 found local disk 'local-lvm:vm-100-disk-1' (in current VM config)
Feb 22 21:46:51 copying disk images
Feb 22 21:46:51 starting VM 100 on remote node 'pve-b'
Feb 22 21:47:01 start remote tunnel
Feb 22 21:47:04 starting storage migration
Feb 22 21:47:04 virtio0: start migration to to nbd:192.168.200.12:60000:exportname=drive-virtio0
drive mirror is starting for drive-virtio0
drive-virtio0: transferred: 0 bytes remaining: 3221225472 bytes total: 3221225472 bytes progression: 0.00 % busy: true ready: false
drive-virtio0: transferred: 82837504 bytes remaining: 3138387968 bytes total: 3221225472 bytes progression: 2.57 % busy: true ready: false
drive-virtio0: transferred: 170917888 bytes remaining: 3050307584 bytes total: 3221225472 bytes progression: 5.31 % busy: true ready: false
...
drive-virtio0: transferred: 3020947456 bytes remaining: 200409088 bytes total: 3221356544 bytes progression: 93.78 % busy: true ready: false
drive-virtio0: transferred: 3110076416 bytes remaining: 111280128 bytes total: 3221356544 bytes progression: 96.55 % busy: true ready: false
drive-virtio0: transferred: 3197108224 bytes remaining: 24248320 bytes total: 3221356544 bytes progression: 99.25 % busy: true ready: false
drive-virtio0: transferred: 3221356544 bytes remaining: 0 bytes total: 3221356544 bytes progression: 100.00 % busy: false ready: true
all mirroring jobs are ready
Feb 22 21:47:44 starting online/live migration on unix:/run/qemu-server/100.migrate
Feb 22 21:47:44 migrate_set_speed: 8589934592
Feb 22 21:47:44 migrate_set_downtime: 0.1
Feb 22 21:47:44 set migration_caps
Feb 22 21:47:44 set cachesize: 80530636
Feb 22 21:47:44 start migrate command to unix:/run/qemu-server/100.migrate
Feb 22 21:47:46 migration status: active (transferred 39519963, remaining 278339584), total 814555136)
Feb 22 21:47:46 migration xbzrle cachesize: 67108864 transferred 0 pages 0 cachemiss 0 overflow 0
Feb 22 21:47:48 migration status: active (transferred 86607903, remaining 228352000), total 814555136)
...
Feb 22 21:47:57 migration status: active (transferred 296233536, remaining 389120), total 814555136)
Feb 22 21:47:57 migration xbzrle cachesize: 67108864 transferred 0 pages 0 cachemiss 0 overflow 0
Feb 22 21:47:57 migration speed: 14.49 MB/s - downtime 76 ms
Feb 22 21:47:57 migration status: completed
drive-virtio0: transferred: 3221422080 bytes remaining: 0 bytes total: 3221422080 bytes progression: 100.00 % busy: false ready: true
all mirroring jobs are ready
drive-virtio0: Completing block job...
drive-virtio0: Completed successfully.
drive-virtio0 : finished
  Logical volume "vm-100-disk-1" successfully removed
Feb 22 21:48:24 migration finished successfully (duration 00:01:36)
Very helpful!

Thanks
Udo
 
Hi spirit,

I am researching the best method for live migrating VMs from an old cluster to a new cluster. My plan is to add the old cluster nodes as Ceph clients of the new cluster, move the storage to the new Ceph cluster, and then figure out a way to live migrate the running VMs to the new hosts.

I am interested in your patch and if it would help me accomplish this task. How would I go about getting the patch and trying it out?
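Once the old nodes can reach the new Ceph cluster as clients (i.e. an RBD storage for it is defined in /etc/pve/storage.cfg), the storage step can be sketched with the stock move-disk command — the storage name here is an assumption:

```shell
# Live-move the running VM's disk from local storage to the new
# Ceph RBD storage; --delete drops the old copy once the mirror is done
qm move_disk 100 virtio0 new-ceph-rbd --delete 1
```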
 
Hi spirit,

I am researching the best method for live migrating VMs from an old cluster to a new cluster. My plan is to add the old cluster nodes as Ceph clients of the new cluster, move the storage to the new Ceph cluster, and then figure out a way to live migrate the running VMs to the new hosts.

I am interested in your patch and if it would help me accomplish this task. How would I go about getting the patch and trying it out?

I'll try to rebase it onto the latest Proxmox 5.x next month.
 
Without downtime, the only possibility is to add the new server to the cluster, migrate, and then remove the old servers from the cluster.
With downtime, the easiest method is backup & restore.

Great. I've been Googling for 30 minutes. Where are the instructions? I have the cluster created with the 2 hosts in it, but I have no idea how to move a VM to another node.
 
Great. I've been Googling for 30 minutes. Where are the instructions? I have the cluster created with the 2 hosts in it, but I have no idea how to move a VM to another node.
Hi,
perhaps you should not use Google but use the context help, or run
Code:
man qm
Especially the section on qm migrate tells you how to migrate VMs:
Code:
qm migrate <vmid> <target> [OPTIONS]

       Migrate virtual machine. Creates a new migration task.

       <vmid>: <integer> (1 - N)
           The (unique) ID of the VM.

       <target>: <string>
           Target node.

       --force <boolean>
           Allow to migrate VMs which use local devices. Only root may use this option.

       --migration_network <string>
           CIDR of the (sub) network that is used for migration.

       --migration_type <insecure | secure>
           Migration traffic is encrypted using an SSH tunnel by default. On secure, completely private networks this can be disabled to increase performance.

       --online <boolean>
           Use online/live migration.

       --targetstorage <string>
           Default target storage.

       --with-local-disks <boolean>
Enable live storage migration for local disk.
But perhaps it's better to use the backup/restore way, because de-clustering is not as easy as VM migration!

Udo
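Putting the man page excerpt into practice, a minimal invocation for the two-node setup in this thread might look like this (node name taken from a later post; VM 102 is the test VM):

```shell
# Offline migration (VM stopped); all disks must be on a storage
# that is defined on both nodes
qm migrate 102 pveHP350Gen9

# Live migration of a running VM, copying its local disks along
qm migrate 102 pveHP350Gen9 --online --with-local-disks
```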
 
Ok, so I created a VM and installed Linux on it just to play around with migration before trying it with a production server.

I have a desktop and an actual server. The desktop has an SSD and a slow HDD. The server has a 3-disk RAID 5 array on a P440ar controller. Originally the server was physical only running Server 2016, a DC along with file server and database application. I know that's not the best way to do things but my clients generally have either 0 or 1 server, only a few of them have multiple servers.

I'm not very versed in hypervisors, but I have a bad taste in my mouth for Microsoft's Hyper-V, and to get good backup features on ESXi I'd have to get the customer to spend more money, which isn't happening at this point.

So I installed Proxmox on the desktop, putting Proxmox itself and one of the VMs on the SSD. I separated the DC from the other roles, so the VM on the SSD is just a DC, nothing else. On the slow HDD I installed another Server 2016 VM and migrated all their data and applications to that VM.

Once I did that and had a backup of each VM to a USB drive, I wiped the physical server that I was having trouble with and installed ProxMox on it and created a cluster. So I'm trying to move the linux VM just for practice/learning. Here's what I've run into so far:

command: qm migrate 102 pveHP350Gen9

result: Storage 'local-SlowDisks' is not available on node 'pveHP350Gen9'

At that point the storage called 'local-SlowDisks' was not shared. So, I tried sharing it. When I share it, it shows up under the other host, but with a question mark by it, and trying the above command yields:
ERROR: found stale volume copy 'local-SlowDisks:vm-102-disk-1' on node 'pveHP350Gen9'

I unshared the storage.

Then I tried: qm migrate 102 pveHP350Gen9 -targetstorage local-lvm

which gives me:
400 Parameter verification failed.
targetstorage: Live storage migration can only be done online.

I have no idea what that means. I get the same error whether the VM is running or shut down.

Then I read this in 'man qm'
Offline Migration
If you have local resources, you can still offline migrate your VMs, as long as all disk are on storages, which are defined on both hosts.
Then the migration will copy the disk over the network to the target host.

What!? How would I define storage that is on host A on host B when the storage is completely different on the 2 hosts? I'm obviously not understanding the meaning of any of this.
 
What!? How would I define storage that is on host A on host B when the storage is completely different on the 2 hosts? I'm obviously not understanding the meaning of any of this.
the storage definitions are cluster-wide by default, so each server has the same view of the storages; choosing a different storage when migrating is not yet implemented for offline migration
you can, however, temporarily move the disk to a storage which exists on all servers (you can restrict storages to specific nodes, btw)
or you could change the target server so that the same storage endpoint exists there as well (e.g. create a VG for 'SlowDisks' too)

marking a storage as shared is exactly that: marking it.
It does not share it automatically, but tells PVE that the storage is shared (which makes migrating easier, as no disks have to be copied/moved)
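The two workarounds above can be sketched like this — storage and node names are taken from the thread, except the desktop node name, which is a placeholder; `pvesm set --nodes` restricts on which nodes a storage definition is considered available:

```shell
# Restrict the desktop-only storage to the node that actually has it,
# so the '?' entry disappears on the other node
pvesm set local-SlowDisks --nodes pve-desktop

# Temporarily move the disk to a storage that exists on both nodes,
# then migrate offline
qm move_disk 102 virtio0 local-lvm --delete 1
qm migrate 102 pveHP350Gen9
```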
 
aderumier,

I look forward to it. Thanks!

Hi, I had time to backport my patch to the latest Proxmox 5.

Here is a qemu-server deb build:

http://odisoweb1.odiso.net/qemu-server_5.0-21_amd64.deb

(need to be installed on source and destination host)

then:
#qm migrateexternal <vmid> [<targetip>] --targetstorage <string>

(you need to copy root's id_rsa.pub to the target cluster's root authorized_keys)



It's working fine for me, but I need to clean up the code before trying to push it upstream.
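The key exchange mentioned above is the usual one-liner (the target IP is a placeholder; run this as root on each source node against the target node):

```shell
# Append the source node's root public key to the target's
# authorized_keys, so the migration tunnel can open without a password
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.200.12
```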
 