VM live migration with local storage

Any chance this has changed? Can Proxmox do live migration of local storage now? When I try, I get the following error:

ERROR: migration aborted (duration 00:00:01): Failed to sync data - can't live migrate attached local disks without with-local-disks option.

Given the error, I am hopeful that there is a "with-local-disks" option that can be enabled to facilitate this.

Currently I have the following package versions:

proxmox-ve: 4.4-82 (running kernel: 4.4.40-1-pve)
pve-manager: 4.4-12 (running version: 4.4-12/e71b7a74)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.40-1-pve: 4.4.40-82
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-109
pve-firmware: 1.1-10
libpve-common-perl: 4.0-92
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-1
pve-docs: 4.4-3
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-94
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-3
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
 
Sure, you can do it, command line only for now:

Code:
qm migrate .... --with-local-disks
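For reference, the full syntax with a VMID and target node filled in would look something like the following ("100" and "node2" are just placeholders here); --online keeps the guest running while the disks are copied:

Code:
qm migrate 100 node2 --online --with-local-disks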
 
Thanks for the quick reply! I just tested it and it works great. Any idea if/when this might be implemented in the GUI?
 
Thanks for the quick reply! I just tested it and it works great. Any idea if/when this might be implemented in the GUI?
It would be great to have this implemented in the GUI. I'm using ZFS local storage and it would be awesome to live migrate VMs from one node to another. Perhaps this could be added to PVE 5.0?
 
It would be great to have this implemented in the GUI. I'm using ZFS local storage and it would be awesome to live migrate VMs from one node to another. Perhaps this could be added to PVE 5.0?
May I ask how you configured your setup?

Is it a 2-node cluster?
What are the stats?

Thanks.
 
May I ask how you configured your setup?

Is it a 2-node cluster?
What are the stats?

Sure, right now I have a two-node setup of PVE 5.0 beta2, both using ZFS (raidz2) for local storage. There's a dedicated gigabit network for corosync traffic. One node is a Dell PowerEdge R710 and the other an HP DL360 G5. The install went smoothly on the R710, but I had to add "acpi=off" as a kernel parameter for both the installer and the final installation on the HP DL360 (a sketch of how to persist that is below the hardware list). Hardware stats are:

Dell PowerEdge R710
2x Intel Xeon E5540 @ 2.53GHz
128GB ECC RAM
8x 300GB 10K 2.5" SAS HDDs
Dell PERC 6/iR flashed to IT mode

HP DL360 G5
2x Intel Xeon 5140 @ 2.33GHz
32GB ECC RAM
6x 146GB 2.5" SAS HDDs
IBM ServeRAID M1015 flashed to IT mode

The server specs are clearly mismatched, but the setup is mostly for testing. If all goes well, the HP will be used for dev and I'll add another R710 to the cluster for production use. I'm very interested in testing PVE-zsync as a sort of failover setup (without shared storage).
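In case anyone wants to replicate the "acpi=off" setting: on an installed system it is normally persisted through GRUB. A minimal sketch, assuming the stock Debian/PVE GRUB setup (your existing GRUB_CMDLINE_LINUX_DEFAULT may already contain other values):

Code:
# /etc/default/grub -- append acpi=off to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi=off"
# then regenerate the GRUB config and reboot
update-grub

For the installer itself, the parameter has to be added to the kernel command line at the installer's boot menu before starting the install.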

I tried a live migration in the GUI but got the same error as the original poster:

Jun 27 10:18:42 starting migration of VM 101 to node 'dev-pve-01'
Jun 27 10:18:42 ERROR: Failed to sync data - can't live migrate attached local disks without with-local-disks option
Jun 27 10:18:42 aborting phase 1 - cleanup resources
Jun 27 10:18:42 ERROR: migration aborted (duration 00:00:01): Failed to sync data - can't live migrate attached local disks without with-local-disks option
TASK ERROR: migration aborted
 
Thanks, very impressive.

This, however, is a bit too much for my needs.
I simply do not have this kind of hardware, nor do I need it for my home setup.
It was interesting reading though, and it may help someone else someday.
 
...... I had to add "acpi=off" as a kernel parameter for both the installer and the final installation on the HP DL360.
Sorry for going off-topic, but I also have a DL360 G5 that I would like to use for testing 5.x.
Can you explain in detail how to add "acpi=off" during the installation of PVE 5.0.x?
 
I just upgraded my 2-node cluster to v5.0-32. I've been planning a cheeky workaround for the 2-node cluster for a while, so I finally spun up a VM on my PC and added a 3rd node just in case one goes down (so I don't get quorum errors and can't start VMs!).

I used:
Code:
qm migrate 250 --with-local-disks --online
to move my local storage VM (actually my pfSense VM) to the other node.
It did work in that everything copied over, but then the VM stalled; a quick VM stop/start sorted it. The live migration part didn't quite work for me, but the local disk migration works, which is what I need.
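For reference, the stop/start workaround is just the standard qm commands against the VMID used above:

Code:
qm stop 250
qm start 250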

EDIT: Just to add, I have my main host with my normal Plex etc. VMs running on an HP GT1610 Gen8 Microserver (with the CPU upgraded to an E3-1265L v2), and my second host, which is essentially just my pfSense edge router/firewall, on a Zotac ZBOX CI323 NANO, a small mini-PC with 2x Gigabit Ethernet ports. I had to install Debian and then add Proxmox on top, as the installer didn't work (roughly the steps sketched below). I mainly added Proxmox to give me super easy backups and a way to use the rest of the resources on the box for other VMs.
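The Debian-first route is the documented fallback when the bare-metal ISO won't install. For PVE 5.x on Debian Stretch it comes down to roughly the following; the repository line and key file name here are quoted from memory, so double-check them against the current install wiki before using them:

Code:
# add the Proxmox VE 5.x (stretch) repository and its release key
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt dist-upgrade
# pull in Proxmox VE on top of the Debian base, then reboot into the PVE kernel
apt install proxmox-ve postfix open-iscsi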
 
Tested here with local storage. This is working fine with only one disk, without SPICE and without storage replication.
 
Tested here with local storage. This is working fine with only one disk, without SPICE and without storage replication.
What type of local storage are you using (ZFS, LVM, etc.)? Also, are you using the VirtIO drivers in your guest, and what is the guest OS? In my experience, Linux guests with VirtIO drivers seem to work the most consistently when using ZFS as the local storage.
 
Tested here on ZFS with VirtIO, and with QCOW2 on LVM, also with VirtIO. Also tested with Windows 10.
 
Before adding it to the GUI,

I think we need to add a check that denies it with iothread and multiple disks, as it's buggy there.

We also need to verify how it works with ZFS local replication (maybe simply forbid it).
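A quick way to see whether a given VM uses iothread before attempting this is to grep its config; "100" below is a hypothetical VMID:

Code:
qm config 100 | grep -i iothread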
 
