Live migration on local shared storage

Leo David

Well-Known Member
Apr 25, 2017
Hi,
I've configured a cluster of 3 bare-metal nodes; regarding storage, each of them has a separate SSD-backed LVM volume group named "data".
I then added a Datacenter LVM storage "ssd-data", based on the volume group "data", marked shared, enabled, no restrictions.
I created a VM and allocated a disk on the "ssd-data" storage on node1. So far, so good.
The problem comes when I try to migrate the VM to node2; it fails with:
"TASK ERROR: can't activate LV '/dev/data/vm-100-disk-2': Failed to find logical volume "data/vm-100-disk-2"
Is it possible to live migrate VMs or LXC containers (manually or in HA mode) between nodes on the nodes' local storages marked as shared?
 
The 'shared' flag only indicates that the storage is already accessible from all nodes; it does not make Proxmox share it. It makes no sense to set it on a local storage.
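For illustration, the misconfiguration described above would look roughly like this in /etc/pve/storage.cfg (the storage and VG names come from the thread; the exact file contents are an assumption):

```
# /etc/pve/storage.cfg -- sketch, not a verified config
# Each node has its own local VG "data", so marking the storage
# "shared" makes Proxmox assume every node sees the same LVs,
# which they don't -- hence the "Failed to find logical volume" error.
lvm: ssd-data
        vgname data
        content images
        shared 1        # wrong for node-local VGs; remove this line
```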
 
Ok.
So un-sharing the local storage will permit migration?
The problem is that with the local storage unshared, it seems I cannot clone VMs between nodes anymore.
Is there a best practice for migration / cloning regarding storage configuration and type (shared/unshared)?
 
Thank you,
So the best way to have full cloning & migration capabilities is to attach a "shared" network storage to the Proxmox cluster: NFS, Gluster, Ceph, and so on...
 
I don't necessarily need to stick to local storage. I just need to know the recommended storage type and configuration that permits HA, migration and cloning. I have to build a Proxmox-based project for production needs (considering subscriptions, of course).
Meanwhile, I've configured an external Ceph storage, and it seems to work pretty well for the existing VMs.
I just need to know whether local storage (shared or not) is suitable for HA, migration and cloning.
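For reference, an external Ceph (RBD) storage is typically defined along these lines in /etc/pve/storage.cfg; the storage name, monitor addresses, pool and user below are placeholders, not values from the thread:

```
# /etc/pve/storage.cfg -- hypothetical RBD entry for an external Ceph cluster
rbd: ceph-vm
        monhost 10.0.0.1 10.0.0.2 10.0.0.3   # placeholder monitor IPs
        pool rbd
        content images
        username admin
```

The matching client keyring also has to be placed under /etc/pve/priv/ceph/ (named after the storage ID) so all nodes can authenticate against the external cluster.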
 
From everything I have read so far, using local storage for HA, while possible with some extra steps, is not recommended and has its issues. It's definitely not recommended for production use if you can avoid it.
I am running a windows Hyper-V cluster on 2 nodes with local storage but I really had no choice in the matter.
 
Thank you. Yes, it makes sense to have non-local shared storage (including Ceph on Proxmox) for these features.
I'll stick with Ceph for a while though... :)
 
Hi,
Using an external Ceph storage, live migration works like a charm.
However, I don't really understand why the HA feature requires a VM restart instead of acting like an automatic live migration, without any downtime of the VM's services.
In my case, it takes about 2-3 minutes from the moment I disconnect one PVE node from the cluster until the auto-migrated VM starts.
Is it possible to have HA perform an automatic live migration?
Thank you,
Have a nice weekend !
 

That would require some kind of continuous memory replication between hosts to achieve fault-tolerant HA: when a node fails, its guests' RAM contents are lost, so HA can only restart them elsewhere unless the memory was being replicated all along.
QEMU added support for this recently (2.9); some features are still pending for QEMU 2.10.
It's not yet implemented in Proxmox.


https://github.com/qemu/qemu/blob/master/docs/COLO-FT.txt
 
