How to use live migrations without shared storage

dg_

New Member
Sep 28, 2016
Hello guys,

I have to implement a new environment, but I am not sure whether Proxmox can meet my requirements:
  • 2 servers (Server A and Server B).
  • SSD and SATA disks in every server.
  • All VMs have to run from the SSD disks (I/O is critical).
  • Every night, Server A takes a snapshot of all its VMs on the local SSD and copies it to the SATA disks of Server B.
  • Every night, Server B takes a snapshot of all its VMs on the local SSD and copies it to the SATA disks of Server A.
  • If I have to upgrade either server, I need to live-migrate its VMs to the other server, without downtime, using local SSD. I can use a 10G network.
  • VMs must use thin provisioning, especially when migrated to another node.
I know that shared storage is the best option, but this environment cannot use it, so I will not have redundant storage, just a nightly backup.

I have been reading about ZFS with deduplication, the ZFS-over-iSCSI plugin, LVM-thin... but I am not sure which is the best option, or whether Proxmox can meet all the requirements.

Can anybody help me, please?

Thanks.
 

fireon

Famous Member
Oct 25, 2010
Austria/Graz
iteas.at

dg_

New Member
Sep 28, 2016
For your backup situation, the built-in Proxmox backup is fine: https://pve.proxmox.com/wiki/Backup_and_Restore. Backups are managed in the web GUI.
With ZFS you can also use pve-zsync.
But you can't live-migrate without downtime. That is only possible with shared storage. Choose one from the list.
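To make the pve-zsync suggestion concrete, a nightly replication job might look roughly like this (a sketch only; the VMID, target IP, and ZFS dataset name are made up, and pve-zsync requires ZFS on both sides):

```shell
# Create a recurring sync job for VM 100, replicating its disks to the
# other node's SATA pool. --maxsnap keeps the last 7 snapshots there.
pve-zsync create --source 100 --dest 192.168.1.2:tank/backup \
    --name nightly --maxsnap 7 --verbose

# Trigger a one-off sync manually to verify the job works:
pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup \
    --name nightly --verbose
```

The `create` variant installs a cron entry, so the nightly schedule in the requirements above is handled automatically.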
  1. Does pve-zsync need ZFS on the source storage, or will it work if I use LVM-thin, for example?
  2. If I don't use pve-zsync, can I dump a VM to the local storage of another host in the cluster?

If I cannot migrate without downtime... I have some questions:
  1. Can I migrate from LVM-thin to LVM-thin on another host? Will the data be transferred over SSH?
  2. If I use LVM-thin or raw on the source server, will I lose thin provisioning after migrating to the new server? With a live migration to a different storage, KVM does not preserve thin provisioning, but it does when the VM is stopped. Is it the same when migrating to another server?
  3. Which storage type has the best performance on local SSD: LVM-thin, qcow2, raw...?
Thanks for your help.
 

fortechitsolutions

Active Member
Jun 4, 2008
Hi, just a few brief followup comments on this thread,

- From my tests a while back, I am quite certain that VM migration *without* shared storage is in fact possible, and not at all a big deal; it has been a standard upstream KVM feature for a fair while now (more than a year at least, I forget). Similarly, you can do 'storage migrations' (i.e., move the VM disk image from one storage pool to another). The catch is that it is not instant or fast: the blocks are moved over the wire (Gig-ether presumably, or maybe 10gig, depending on how your Proxmox nodes are connected in the cluster), so if you have big VM images a "live migration" might take hours. But it will happen, with patience, and with no perceptible downtime: the VM keeps running on the origin machine for the duration of the block copy; an iterative sync process then does the final cut-over tidy-up; and the VM is lit up on the target host milliseconds after it is paused on the source host. You just have a waiting-patiently period while blocks are copied from one Proxmox host to the other.
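On the command line this is roughly the following (VMID and node name are hypothetical; `--with-local-disks` is the flag that triggers the block copy described above, on Proxmox releases that support it):

```shell
# Live-migrate VM 100 to node "pve2", copying its local disks over
# the cluster network while the VM keeps running.
qm migrate 100 pve2 --online --with-local-disks
```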

I believe local storage is going to provide thin provisioning inherently. LVM-backed stores used to never allow thin, but more recently this is an added feature, I believe. I haven't used LVM-thin yet so can't comment. Generally I think you won't want to use LVM with local storage; the main use case for LVM is SAN-backed storage where you want live migration against shared storage attached to multiple Proxmox nodes concurrently. Which is not your use case.

For what it is worth, if bandwidth and performance are sufficient, using NFS as shared storage is very easy and effective. It solves the problem of 'no budget for SAN': an inexpensive NFS target (e.g. a Synology, or a home-built Supermicro with a 10gig NIC running stock Debian, acting more or less as a 'dumb NFS target') will suffice. Then you have plenty of shared storage and good, fast live migrations. Of course, you also have a single point of failure in your Proxmox cluster as well :)
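As a sketch, such an NFS target is declared once, cluster-wide, in /etc/pve/storage.cfg (or via Datacenter > Storage in the GUI); the server address and export path below are invented examples:

```
nfs: shared-nfs
    server 10.0.0.50
    export /volume1/proxmox
    path /mnt/pve/shared-nfs
    content images,backup
```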

I haven't gotten into the habit yet of using ZFS in my stock deployments of Proxmox. EXT4 seems more solid and less fuss, and the 'cool features' of ZFS (dedup, compression, etc.) increase the hardware burden (RAM, CPU) on your nodes in ways that have yet to make sense for any client projects I've done. My current go-to build is a bcache SSD-accelerated bulk-SATA software-RAID config: a minimal Debian install done first for the software RAID, then bcache is added, and then Proxmox is installed after the fact via the repo ("install on top of Debian" method). Really quite straightforward, and bcache is a nice compromise between all-flash performance and bulk SATA capacity.
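For reference, the bcache part of such a build boils down to something like this (device names are examples; this must be done before any data is placed on the disks):

```shell
# Format the SSD as the cache device and the SATA md-RAID array as the
# backing device; bcache then exposes a combined /dev/bcache0 device.
make-bcache -C /dev/sdb
make-bcache -B /dev/md0
# Attach the backing device to the cache set, using the cache-set UUID
# reported by 'bcache-super-show /dev/sdb':
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# Optional: enable write-back caching for better write latency.
echo writeback > /sys/block/bcache0/bcache/cache_mode
```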

If you don't mind springing for all-SSD storage, though, I am sure pure-SSD-backed local disk will be nice and fast for local VM storage. Then maybe have local SATA (in some form of RAID, of course) as your 'backup storage tank'.

As mentioned earlier in the thread, I have the feeling the built-in Proxmox backup features will suffice for your backup requirement. Also worth mentioning: NFS targets make good backup destinations. :)
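A scheduled backup to such an NFS (or local SATA) backup storage can be as simple as the following vzdump invocation (the storage name is an example; the same job can be defined in the GUI scheduler):

```shell
# Snapshot-mode backup of all VMs on this node to the storage named
# "backup-tank", LZO-compressed; suitable for a nightly cron entry.
vzdump --all --storage backup-tank --mode snapshot --compress lzo
```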


Tim Chipman
Fortech IT Solutions
http://FortechITSolutions.ca
 

dg_

New Member
Sep 28, 2016
fortechitsolutions said:
(full reply quoted above)
Thanks for your reply.

I cannot use shared storage because I delegate all hardware management to other companies, and they do not support it. I could use extra servers for NFS, but that would be a single point of failure. Running two NFS servers in HA is not a good solution either, because I would need RAID 1 over the network; the two servers are not directly connected to the same disk array.

As you said, migrating from local storage to local storage on another host can take a long time. That's not a big problem for me because:
  1. I use 1G/10G network cards.
  2. The local disks are SSD.
  3. I do live migrations only during planned interventions.
  4. Failover of a failed Proxmox node will be handled from the nightly copy of the data on the other node, with pve-zsync or similar.
How do you propose to do a 'live migration' from local storage to another node's local storage? Proxmox does not show remote local storages as a destination when moving VM disks.

Another possible, but not very professional, solution is to configure an NFS server on every Proxmox node. That would allow migrating VMs, but it limits IOPS from 90k to 10-15k (for SSD), and I lose thin provisioning when I move a running VM from one NFS storage to another.

Thanks.
 

fortechitsolutions

Active Member
Jun 4, 2008
Hi DG, thanks for the extra clarification on your environment. I agree that shared storage is not always possible or desirable, depending on the situation. For what it is worth, as an FYI, there are non-free NFS filer "appliances" (i.e., Linux under the hood, built on standards-based toolkits) which permit a pair of servers to act as an HA NFS target: two vanilla x86 servers with local disks (presumably RAID) exported via NFS, data sync between the two servers, and a virtual IP / HA failover for the NFS target that your client systems access. If one node "goes away" (power failure, explosion, etc. :) the other node just carries on and your client systems see no negative side effect. I believe it is also possible to build such a thing by hand; it is not trivial, or even easy, but it is doable, if you need such a thing and don't wish to pay a third party for the ease of using their tool.

Anyhow. That stuff aside.

I just had a look, as a sanity test, to make sure I'm not dreaming. Context: I've got an environment I set up for a client: 2 x Proxmox 4.x hosts, shared-nothing in terms of storage. They are both members of a 2-node Proxmox cluster. I don't have HA configured and don't wish it to be used. In the cluster tree on the left edge of the web admin UI, looking at the "single pane of glass", I see both node1 and node2 listed as cluster members, and I am easily able to locate a VM on the first node, right-click, choose "Migrate", and designate the second node as the target. I just spun up a test VM with a 32-gig disk on node1, local storage, and then migrated it to node2. It took about 2 minutes to copy the blocks from node1 to node2 (the hosts have 10gig Ethernet cluster connectivity). But it "just works", so you do achieve a live migration, just not 'instant fast'.

Possibly, if you don't have this option, your Proxmox nodes are not joined together as a Proxmox cluster but are stand-alone nodes? Or you are running an older version of Proxmox which does not yet have this feature?
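Two quick checks, run on either host, to verify the nodes really are clustered before blaming the version:

```shell
pvecm status   # quorum information; a healthy 2-node cluster shows both members
pvecm nodes    # membership list; both hosts should appear here
```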

Note that this is not done by looking at a single VM's hardware resources tab and "moving a VM disk". We can't do a storage migration here because, with a non-shared storage config, the VM disk must always be on the physical node where the VM resides. So you cannot migrate just a disk; you must migrate the VM, entirely as a unit, from one node to another.

Hope this is clear, and maybe from the sound of it, a feature that will be of some use to you.

Tim
 

dg_

New Member
Sep 28, 2016
fortechitsolutions said:
(full reply quoted above)
Hello,

Thanks so much; it was a stupid mistake on my side when doing the migration...

It works as you said: migrate from the VM itself, not from the VM's disk resources. Of course, you need local storage enabled for images on both nodes, mounted at the same path.
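For the record, the matching storage definition in /etc/pve/storage.cfg is cluster-wide, so one entry covers both nodes as long as the path exists on each (the name and path below are examples):

```
dir: local-ssd
    path /mnt/ssd
    content images
```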

Your comment about the NFS solution without shared storage is interesting. Can you name some of those solutions that are software-based (not appliances)?

Now I can migrate between nodes using local storage, but is there any solution for keeping thin provisioning when doing an online migration?

Thanks.
 
