[SOLVED] How to migrate a container from one node to another, each with its local storage

Chicken76

Jun 26, 2017
So here's the setup in a nutshell:
  • two nodes: node1, node2 in a cluster
  • cluster is healthy and running the latest version of PVE, all updates applied
  • node1 storages: local, local-zfs, pool1
  • node2 storages: local, local-zfs, pool2
The question is: how do I move a container that runs on node1 and resides on pool1 to node2 and have it on pool2?

I've thought of moving the container's storage from pool1 to local-zfs on node1, migrating the container to node2, and then moving its storage from local-zfs to pool2 (sketched below). Unfortunately, the container is larger than the free space on local-zfs on either node.
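A sketch of that intermediate hop, assuming a hypothetical container ID 100 (note that pct move-volume was called pct move_volume on older PVE releases):
Bash:
# on node1, with container 100 stopped: move the rootfs from pool1 to local-zfs
pct move-volume 100 rootfs local-zfs
# offline-migrate the container to node2
pct migrate 100 node2
# on node2: move the rootfs from local-zfs to pool2
pct move-volume 100 rootfs pool2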
 
Hi,
what kind of storages are pool1 and pool2? Assuming the volumes are raw, you can use
Code:
pvesm export pool1:<ID>/vm-<ID>-disk-<N>.raw raw+size - | ssh <IP of NODE2> pvesm import pool2:<ID>/vm-<ID>-disk-<N>.raw raw+size -
While the container is shut down, you can then move the config from /etc/pve/nodes/node1/lxc/<ID>.conf to /etc/pve/nodes/node2/lxc/<ID>.conf and update the storage ID from pool1 to pool2 for the rootfs and mount points. Then it should be possible to start it on the second node. After making sure everything works, you can remove the leftover volumes on node1.
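For concreteness, a sketch of the config move with a hypothetical container ID 100 (the sed is just one way to swap the storage ID; /etc/pve is the cluster filesystem, so this can be done from either node):
Bash:
# with container 100 shut down
mv /etc/pve/nodes/node1/lxc/100.conf /etc/pve/nodes/node2/lxc/100.conf
# point the rootfs and mount points at pool2 instead of pool1
sed -i 's/pool1:/pool2:/g' /etc/pve/nodes/node2/lxc/100.conf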
 
Thank you Fabian_E for the information.

All storages are ZFS pools.

I was thinking of trying to do zfs send | zfs recv with that dataset like this:
Bash:
root@node1:/# zfs send pool1/subvol-ID-disk-0 | ssh node2 zfs recv pool2/subvol-ID-disk-0
Do you think this would work?
 
Yes, that should work. With the zfs format instead of raw+size, pvesm export/import is basically just a wrapper around zfs send/receive, so it makes little difference which of the two you use.
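For reference, the zfs-format equivalent of the earlier command would presumably look like this (container ID 100 is hypothetical):
Bash:
pvesm export pool1:subvol-100-disk-0 zfs - | ssh node2 pvesm import pool2:subvol-100-disk-0 zfs -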
 
It worked! Thank you Fabian_E!

Marking the thread as solved and posting the outcome here so it may be useful to someone looking to do the same thing.

The only "problem" I encountered was that the zfs send command would not run because the dataset was mounted (even though the container was not running). After I unmounted the dataset, it all worked fine.
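For anyone following along: taking an explicit snapshot first and sending that should sidestep the mounted-dataset issue, since the send then operates on the snapshot rather than the live filesystem (container ID 100 and the snapshot name migrate are hypothetical):
Bash:
# on node1, with the container stopped
zfs snapshot pool1/subvol-100-disk-0@migrate
zfs send pool1/subvol-100-disk-0@migrate | ssh node2 zfs recv pool2/subvol-100-disk-0
# once everything checks out, drop the snapshots on both sides
zfs destroy pool1/subvol-100-disk-0@migrate
ssh node2 zfs destroy pool2/subvol-100-disk-0@migrate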
 
And how to do it without ZFS?
I have only local storage, which is LVM-thin. Proxmox refuses to migrate the container even in the shutdown/offline state! It says:
ERROR: migration aborted (duration 00:00:00): storage 'ssdpool2' is not available on node 'proxmox1'
Is this possible in the GUI if I do not have shared container storage?
The container is not visible inside the raw LVM-thin volume, because there is no filesystem on that drive.
What is the point of a cluster configuration if it cannot move/migrate containers between storages?
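The raw+size pvesm export/import shown earlier in the thread should in principle work for LVM-thin as well, since the volumes are raw block devices. A sketch, assuming a hypothetical container ID 100 and a target storage named ssdpool1 on proxmox1:
Bash:
pvesm export ssdpool2:vm-100-disk-0 raw+size - | ssh proxmox1 pvesm import ssdpool1:vm-100-disk-0 raw+size -
The container config would then be moved and its storage ID updated as described earlier in the thread.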
 
@promok please keep your questions to a single thread.
 
