Offline migration of LXC to another node

fossxplorer
Mar 6, 2019
Hi,
IIRC, I've successfully migrated a VM with local storage before, but trying to do the same with an LXC container gives me an error:

```
root@pve01:~# pct migrate 105 pve3
2019-03-06 13:51:24 starting migration of CT 105 to node 'pve3' (192.168.1.12)
2019-03-06 13:51:25 ERROR: could not activate storage 'emphase01', zfs error: cannot import 'emphase01': no such pool available
2019-03-06 13:51:25 aborting phase 1 - cleanup resources
2019-03-06 13:51:25 start final cleanup
2019-03-06 13:51:25 ERROR: migration aborted (duration 00:00:01): could not activate storage 'emphase01', zfs error: cannot import 'emphase01': no such pool available
migration aborted
root@pve01:~#
```

The container is shut down. Any idea how I can get it, along with its local storage, to migrate over to node pve3?

Thanks.
 
Do all of your cluster nodes have the same ZFS pools available?
Please post the output of:
* `zpool status`
* `zfs list`
* `cat /etc/pve/storage.cfg`
 
No, not the pool I'm trying to migrate to on node pve3. How does `pct migrate` choose which storage on the destination node to migrate to?

Details:

Node 192.168.1.10 (pve01)

`zpool status`:
```
pool: hgstzfs
state: ONLINE
scan: scrub repaired 0B in 3h11m with 0 errors on Sun Feb 10 03:35:27 2019
config:

NAME STATE READ WRITE CKSUM
hgstzfs ONLINE 0 0 0
sdb ONLINE 0 0 0

errors: No known data errors

pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h10m with 0 errors on Sun Feb 10 00:34:29 2019
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda2 ONLINE 0 0 0

errors: No known data errors

pool: s3500
state: ONLINE
scan: scrub repaired 0B in 0h1m with 0 errors on Sun Feb 10 00:25:54 2019
config:

NAME STATE READ WRITE CKSUM
s3500 ONLINE 0 0 0
wwn-0x55cd2e404b4e8262 ONLINE 0 0 0

errors: No known data errors
```

`zfs list`:
```
NAME USED AVAIL REFER MOUNTPOINT
hgstzfs 1.03T 1.60T 1.50G /hgstzfs
hgstzfs/backupUSB 868G 1.60T 868G /hgstzfs/backupUSB
hgstzfs/db 86.1G 1.60T 85.3G /hgstzfs/db
hgstzfs/dbuser1 85.6G 1.60T 84.8G /hgstzfs/dbuser1
hgstzfs/intel320 17.7G 1.60T 17.7G /hgstzfs/intel320
rpool 75.5G 14.6G 104K /rpool
rpool/ROOT 6.18G 14.6G 96K /rpool/ROOT
rpool/ROOT/pve-1 6.18G 14.6G 6.18G /
rpool/data 61.7G 14.6G 16.3G /rpool/data
rpool/data/subvol-101-disk-1 169M 3.84G 169M /rpool/data/subvol-101-disk-1
rpool/data/subvol-105-disk-0 20.1G 12.9G 20.1G /rpool/data/subvol-105-disk-0
rpool/data/subvol-107-disk-0 1.44G 6.58G 1.42G /rpool/data/subvol-107-disk-0
rpool/data/subvol-108-disk-0 1.75G 14.6G 1.75G /rpool/data/subvol-108-disk-0
rpool/data/vm-100-disk-1 13.0G 14.6G 7.57G -
rpool/data/vm-100-state-snap_26092018 1.23G 14.6G 1.23G -
rpool/data/vm-102-disk-0 6.50G 14.6G 6.50G -
rpool/data/vm-106-disk-0 1.24G 14.6G 1.24G -
rpool/swap 7.44G 15.2G 6.82G -
s3500 21.9G 247G 96K /s3500
s3500/owndb 21.9G 247G 21.9G /s3500/owndb
```

`/etc/pve/storage.cfg`:
```
dir: local
path /var/lib/vz
content backup,iso,vztmpl

zfspool: local-zfs
pool rpool/data
content images,rootdir
sparse 1

zfspool: hgstzfs2
pool hgstzfs2
content rootdir,images
nodes pve02

zfspool: s3500
pool s3500
content rootdir,images
nodes pve01

zfspool: emphase01
pool emphase01
content images,rootdir
nodes pve3,pve01
sparse 0

```

Node 192.168.1.11 (pve02)

`zpool status`:
```
pool: hgstzfs2
state: ONLINE
scan: scrub repaired 0B in 2h52m with 0 errors on Sun Feb 10 03:16:32 2019
config:

NAME STATE READ WRITE CKSUM
hgstzfs2 ONLINE 0 0 0
sda ONLINE 0 0 0

errors: No known data errors

pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0h7m with 0 errors on Sun Feb 10 00:31:06 2019
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sdb2 ONLINE 0 0 0

errors: No known data errors
```

`zfs list`:
```
NAME USED AVAIL REFER MOUNTPOINT
hgstzfs2 1.10T 1.53T 120K /hgstzfs2
hgstzfs2/backupUSB 868G 1.53T 868G /hgstzfs2/backupUSB
hgstzfs2/backup_silvia_laptop 158G 1.53T 158G /hgstzfs2/backup_guest22_laptop
hgstzfs2/db 96K 1.53T 96K /hgstzfs2/db
hgstzfs2/db2 84.8G 1.53T 84.8G /hgstzfs2/db2
hgstzfs2/intel320 14.8G 1.53T 14.8G /hgstzfs2/intel320
hgstzfs2/subvol-110-disk-0 2.90G 5.10G 2.90G /hgstzfs2/subvol-110-disk-0
rpool 63.9G 26.2G 104K /rpool
rpool/ROOT 14.1G 26.2G 96K /rpool/ROOT
rpool/ROOT/pve-1 14.1G 26.2G 14.1G /
rpool/data 42.4G 26.2G 16.3G /rpool/data
rpool/data/subvol-101-disk-1 169M 3.84G 169M /rpool/data/subvol-101-disk-1
rpool/data/subvol-109-disk-0 736M 7.28G 736M /rpool/data/subvol-109-disk-0
rpool/data/vm-100-disk-1 12.7G 26.2G 7.82G -
rpool/data/vm-100-state-snap_26092018 1.23G 26.2G 1.23G -
rpool/data/vm-102-disk-0 5.24G 26.2G 5.24G -
rpool/data/vm-103-disk-0 1.27G 26.2G 1.27G -
rpool/data/vm-103-disk-1 3.12G 26.2G 2.68G -
rpool/data/vm-103-state-snaptest 241M 26.2G 241M -
rpool/data/vm-104-disk-0 1.40G 26.2G 1.40G -
rpool/swap 7.44G 26.8G 6.83G -
```

`/etc/pve/storage.cfg`:
```
dir: local
path /var/lib/vz
content backup,iso,vztmpl

zfspool: local-zfs
pool rpool/data
content images,rootdir
sparse 1

zfspool: hgstzfs2
pool hgstzfs2
content rootdir,images
nodes pve02

zfspool: s3500
pool s3500
content rootdir,images
nodes pve01

zfspool: emphase01
pool emphase01
content images,rootdir
nodes pve3,pve01
sparse 0

```

Node 192.168.1.12 (pve3)

`zpool status`:
```
pool: emphase01
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
emphase01 ONLINE 0 0 0
ata-G5RM3G032-M_3RSC1AT6J6GSAZ4ELJ ONLINE 0 0 0

errors: No known data errors
pool: rpool
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sdb3 ONLINE 0 0 0

errors: No known data errors
```

`zfs list`:
```
NAME USED AVAIL REFER MOUNTPOINT
emphase01 1.28G 27.5G 25K /emphase01
emphase01/irc01 1.28G 27.5G 1.28G /emphase01/irc01
rpool 1.03G 228G 96K /rpool
rpool/ROOT 1.03G 228G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.03G 228G 1.03G /
rpool/data 96K 228G 96K /rpool/data
```

`/etc/pve/storage.cfg`:
```
dir: local
path /var/lib/vz
content backup,iso,vztmpl

zfspool: local-zfs
pool rpool/data
content images,rootdir
sparse 1

zfspool: hgstzfs2
pool hgstzfs2
content rootdir,images
nodes pve02

zfspool: s3500
pool s3500
content rootdir,images
nodes pve01

zfspool: emphase01
pool emphase01
content images,rootdir
nodes pve3,pve01
sparse 0
```
 
Migrating a container from one node to another while also changing the target storage is not supported at the moment.

Move the volume to a pool that is available on all nodes and migrate afterwards.
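
Roughly like this, assuming your pve-container version already ships `pct move_volume`; the volume name (`rootfs`, `mp0`, ...) and the destination storage (`local-zfs` here, since it has no `nodes` restriction in the storage.cfg above) are placeholders you'd take from the actual `pct config 105` output:

```
# see which storage each of the container's volumes lives on
pct config 105

# move the volume in question (here assumed to be the rootfs) to a storage
# that is defined on every node, e.g. local-zfs (rpool/data)
pct move_volume 105 rootfs local-zfs

# with all volumes on a storage both nodes can activate, retry the migration
pct migrate 105 pve3
```

If `pct move_volume` is not available in your version, restoring a `vzdump` backup of the container on the target node with `pct restore <vmid> <archive> --storage <storage>` achieves the same result.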
 
