Hi
Earlier this week, I provisioned two new PVE 5.1 hosts with OVH, using ZFS for the local storage pools.
This works fine and the VMs run really fast.
I then wanted to try the live migration with local storage that was introduced in PVE 5.0, but it does not seem to work, even though the same storage (local-zfs) exists on both nodes of my two-node cluster.
I have checked that I can move the local VM disks to other storage types without shutting down the VM, but moving the VM to another host does not seem to work.
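For reference, a disk move like that can also be done from the CLI, roughly like this (the target storage name below is just a placeholder, not one of my actual pools):
Code:
# online move of the VM's scsi0 disk to another storage defined on the node
# "other-storage" is a placeholder storage ID
qm move_disk 202 scsi0 other-storage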
Here is the error output when trying to live migrate between my two hosts:
Code:
2017-10-29 17:19:29 starting migration of VM 202 to node 'ns6735811' (172.16.0.1)
2017-10-29 17:19:29 found local disk 'local-zfs:vm-202-disk-1' (in current VM config)
2017-10-29 17:19:29 can't migrate local disk 'local-zfs:vm-202-disk-1': can't live migrate attached local disks without with-local-disks option
2017-10-29 17:19:29 ERROR: Failed to sync data - can't migrate VM - check log
2017-10-29 17:19:29 aborting phase 1 - cleanup resources
2017-10-29 17:19:29 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate VM - check log
TASK ERROR: migration aborted
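Based on that output, it looks like the migration wants the with-local-disks option, which as far as I can tell is not exposed in the GUI, so from the CLI I would expect something along these lines:
Code:
# live migrate VM 202 to the other node, copying the attached local disks as well
# (with-local-disks is the option named in the error above)
qm migrate 202 ns6735811 --online --with-local-disks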
And here is the config file of one of the VMs that I cannot move:
Code:
agent: 1
bootdisk: scsi0
cores: 1
hotplug: disk,network,usb,memory,cpu
ide2: none,media=cdrom
memory: 2048
name: ns2.mydomain.com
net0: virtio=03:00:10:fb:a5:e6,bridge=vmbr0
numa: 1
onboot: 1
ostype: l26
scsi0: local-zfs:vm-202-disk-1,cache=writethrough,size=40G
scsihw: virtio-scsi-pci
smbios1: uuid=19630614-03a5-4998-ac00-1826c68416ba
sockets: 1
This specific VM is running CentOS 7, but I also tried with a Server 2016 VM and got the same result.
Are there any specific requirements for live migration using local storage?
I also tried this on our production on-premises 4-node cluster, which was upgraded from PVE 4.4 to 5.0 and then to 5.1, with the same result. That cluster does not use ZFS, but file-based (qcow2) storage instead.
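In case it is relevant, this is roughly how I have been comparing the storage setup across the nodes (standard tools only):
Code:
# storage definitions live in the cluster-wide config, identical on every node
cat /etc/pve/storage.cfg
# shows which storages are actually active on the node it is run on
pvesm status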