Three hosts, each w/ local storage. Why can't I migrate offline?

starkruzr

Well-Known Member
So, I have three hosts, and their storage arrangement looks like this:

http://imgur.com/a/w5fub

I want to move "Vulpeculae" from cirrus to cumulonimbus.

(Ignore Aerilon, that is a FreeNAS box with iSCSI that currently doesn't work.)

When I try to hit "migrate" -- which, apparently, doesn't do what I think it does -- it tells me "storage local2 is not available on cumulonimbus." Yeah, no shit, that's sort of the point. When I go to the "hardware" listing for the VM and try to first move the disk, cumulonimbus's local storage is not listed in the list of options.

All I want to do is move my (offline!) VM from one host to another. Why is this so hard?
 
Hi, I don't think you've given quite enough information. Can you confirm:
- you have installed the latest Proxmox (the UI screenshot snips look like it, but I'm not 100% sure)
- you have followed the standard (wiki) process to set up a Proxmox cluster
- the cluster is happy: you can connect from host1 >> host2 via ssh with no password, for example, and the output of pvecm status looks as expected? For example, from a 3-node cluster I have to play with right now that works happily, I can see:

Code:
root@dpve1:~# pvecm status
Quorum information
------------------
Date:             Mon Jan  9 15:43:43 2017
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          3/17596
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 192.168.3.80
0x00000001          1 192.168.3.88 (local)
0x00000002          1 192.168.3.194
root@dpve1:~#
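
To check the passwordless-ssh part of that list, something like this from one node to another should work without a prompt (the IP here is just one of the members from my cluster above):

Code:
# from dpve1, run a command on another cluster member; it should not ask for a password
root@dpve1:~# ssh 192.168.3.80 hostname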


So, to comment from my experience:
- if your cluster is set up and working
- then you can migrate VMs with shared-nothing storage,
- migration can be offline (i.e., the VM is powered off)
or
- migration can be online (i.e., the VM is powered on)
* Caveat: if RAM / VM activity is too busy, then an online migrate never finishes :-)

But assuming your node-to-node bandwidth is sufficient (1-gig Ethernet at least), then with patient waiting the disk image blocks are happily sent, then the RAM is sent, then deltas are sent in progressively smaller cycles over and over, until eventually the migration is done and the cutover happens.
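
For reference, the CLI equivalent of the GUI "Migrate" button is roughly this (VM ID and node name are placeholders, not from this thread):

Code:
# offline migration (VM is powered off)
qm migrate <vmid> <targetnode>

# online / live migration (VM is powered on)
qm migrate <vmid> <targetnode> --online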

And yes, it really does work when it works.

So what you describe suggests to me that the cluster is not yet working happily?

Tim
 
Hi Tim,

Thanks for replying. In order of your questions:
  • Is 4.3-14/3a8c61c7 the latest? I believe so; that's what I have installed.
  • Yes
  • Code:
    root@cirrus:~# pvecm status
    Quorum information
    ------------------
    Date:             Mon Jan  9 15:05:59 2017
    Quorum provider:  corosync_votequorum
    Nodes:            3
    Node ID:          0x00000001
    Ring ID:          1/296
    Quorate:          Yes
    
    Votequorum information
    ----------------------
    Expected votes:   3
    Highest expected: 3
    Total votes:      3
    Quorum:           2 
    Flags:            Quorate
    
    Membership information
    ----------------------
        Nodeid      Votes Name
    0x00000001          1 192.168.9.10 (local)
    0x00000002          1 192.168.9.11
    0x00000003          1 192.168.9.12
    root@cirrus:~#
Looks like I'm good there. The VM I am trying to migrate is offline. It can't be online because I don't have shared storage for the three nodes yet. I'm trying to:
  • Bring a VM to the offline state
  • Migrate both compute and storage to another node
  • Bring it back up
That's all. I have migrated VMs before, with 2 nodes, under a different version of the software, and it didn't give me these problems. What I think is going on is that there's something different about how this version of Proxmox handles storage, among other things. As I said, when I shut down a VM and try to move its disk to storage on another node, I can't see any of those nodes to move it to in the "migrate here" list.
 
As an experiment, I added the other nodes to the local storage definition on each node and marked both as shared. (Those appear to be the only circumstances under which it will let me move disks.) Then I did a "move disk": I tried moving the disk from cirrus to cumulonimbus before migrating the VM. If I look in the local storage on cumulonimbus, the moved .raw is not there. Instead, it has *created* the same directory structure on cirrus that exists on cumulonimbus, and moved the file to the location on cirrus that I wanted to move it to on cumulonimbus.

How do I tell it, "No, stupid, move the damn file to the datastore on the OTHER node?!" *sigh*
 
Can you post the content of the file /etc/pve/storage.cfg?

In general, when offline migrating, the disks will be moved to the same storage on the target node (we're working on improving this).

So in order to move a VM from node1 to node2, all disks of the VM have to be on storage definitions which are available on both nodes (but not marked as shared).

For more details see: https://pve.proxmox.com/wiki/Storage_Model
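
For illustration, a storage definition that satisfies this might look like the following (a hypothetical entry, not taken from this thread): no "nodes" restriction, so it is available everywhere, and not marked shared.

Code:
# /etc/pve/storage.cfg -- hypothetical example entry
dir: migratable
    path /mnt/migratable
    content images,rootdir
    shared 0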
 
As an experiment, I added the other nodes to the local storage definition on each node and marked both as shared. (Those appear to be the only circumstances under which it will let me move disks.) Then I did a "move disk": I tried moving the disk from cirrus to cumulonimbus before migrating the VM. If I look in the local storage on cumulonimbus, the moved .raw is not there. Instead, it has *created* the same directory structure on cirrus that exists on cumulonimbus, and moved the file to the location on cirrus that I wanted to move it to on cumulonimbus.

How do I tell it, "No, stupid, move the damn file to the datastore on the OTHER node?!" *sigh*
Hi,
your local storage must simply be named the same and exist (and not be marked shared) on all nodes (where you want to offline migrate).

Udo
 
Can you post the content of the file /etc/pve/storage.cfg?

In general, when offline migrating, the disks will be moved to the same storage on the target node (we're working on improving this).

So in order to move a VM from node1 to node2, all disks of the VM have to be on storage definitions which are available on both nodes (but not marked as shared).

For more details see: https://pve.proxmox.com/wiki/Storage_Model
Hi, thanks for replying -- on which node would you like this file? Or is it shared among all so they're all the same?
 
Code:
root@cirrus:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content images,vztmpl,iso,rootdir
    maxfiles 0

dir: local2
    path /media/usb0
    nodes cirrus
    shared 1
    content images
    maxfiles 1

iscsi: aerilon
    portal 192.168.9.34
    target iscsi
    content images

dir: cumulusdatadir
    path /mnt/data
    nodes cumulus
    shared 0
    content images,iso,rootdir
    maxfiles 1

dir: cumulonimbuslocal
    path /mnt/data
    nodes cumulonimbus
    shared 1
    content images,iso,rootdir
    maxfiles 1

root@cirrus:~#
 
Okay, for example you could make the following configuration:

you make a storage "local2" (or some other name)
which points to "/mnt/local2";
this you mark as available on all nodes and not shared

and now you make /mnt/local2 a symlink to the "real" directory on each node

now all nodes have a "local2" storage available (each on a different directory), and you should be able to offline migrate VMs which live on this storage
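
A concrete sketch of that setup, using the paths from the storage.cfg posted above as an assumption about where the "real" directories live (adjust names and paths to taste; note you would replace the existing "local2" and "cumulonimbuslocal" definitions rather than keep them alongside):

Code:
# on cirrus, the real storage is the USB disk
root@cirrus:~# ln -s /media/usb0 /mnt/local2

# on cumulonimbus, the real storage is /mnt/data
root@cumulonimbus:~# ln -s /mnt/data /mnt/local2

# single entry in /etc/pve/storage.cfg, valid on all nodes, NOT shared
dir: local2
    path /mnt/local2
    content images
    shared 0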
 
Will Proxmox allow zsync, snapshots, and other ZFS-related benefits for CTs when using directories like the above?
 
Okay, for example you could make the following configuration:

you make a storage "local2" (or some other name)
which points to "/mnt/local2";
this you mark as available on all nodes and not shared

and now you make /mnt/local2 a symlink to the "real" directory on each node

now all nodes have a "local2" storage available (each on a different directory), and you should be able to offline migrate VMs which live on this storage
Okay, I think I see how this works now. Thanks.

Is there a development effort underway to make storage addressable by host instead of this "everything has to use the same path" arrangement? This becomes more and more important as users use FUSE plugins to access new and different kinds of storage more quickly than the Proxmox dev team can necessarily keep up with.
 
Okay, I think I see how this works now. Thanks.

Is there a development effort underway to make storage addressable by host instead of this "everything has to use the same path" arrangement? This becomes more and more important as users use FUSE plugins to access new and different kinds of storage more quickly than the Proxmox dev team can necessarily keep up with.

We already implemented online local storage migration last week for qemu machines (it's like a move_disk to the remote node + VM migration).

Offline disk migration still needs to be improved to be able to convert from/to any storage, keep snapshots if possible, ...
But it's on the roadmap.
 
Okay, for example you could make the following configuration:

you make a storage "local2" (or some other name)
which points to "/mnt/local2";
this you mark as available on all nodes and not shared

and now you make /mnt/local2 a symlink to the "real" directory on each node

now all nodes have a "local2" storage available (each on a different directory), and you should be able to offline migrate VMs which live on this storage

There is one problem with this arrangement: these storage volumes are all different sizes. How does it know how much storage is available on this new symlinked volume?
 
There is one problem with this arrangement: these storage volumes are all different sizes. How does it know how much storage is available on this new symlinked volume?

each node calculates the usage (there is a pvestatd running on each of them).
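
If you want to see what each node reports for that storage yourself, something along these lines should do it (standard PVE / Linux tools; the exact output format depends on your version):

Code:
# what PVE reports for the storages on this node
root@cirrus:~# pvesm status

# and the filesystem actually behind the symlink
root@cirrus:~# df -h /mnt/local2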
 
