Moving ZFS-based LXC containers

DynFi User
Well-Known Member
Apr 18, 2016
Hello,

We are trying to move LXC containers from one host to another.

We have more than one ZFS mount point on the target system:

root@proxmonster:/home/xxxx# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
monster                         640G  2.54T   192K  /monster
monster/data                    639G  2.54T   208K  /monster/data
monster/data/subvol-102-disk-1  211M  47.8G   211M  /monster/data/subvol-102-disk-1
monster/data/subvol-117-disk-1 25.4G  62.6G  25.4G  /monster/data/subvol-117-disk-1
monster/data/subvol-119-disk-1 1.07G  19.3G   755M  /monster/data/subvol-119-disk-1
monster/data/subvol-120-disk-1  473M  3.55G   457M  /monster/data/subvol-120-disk-1
monster/data/subvol-121-disk-1 1.07G  19.3G   714M  /monster/data/subvol-121-disk-1
monster/data/subvol-122-disk-1 1.17G  6.89G  1.11G  /monster/data/subvol-122-disk-1
monster/data/subvol-123-disk-1 3.70G  6.55G  1.45G  /monster/data/subvol-123-disk-1
monster/data/subvol-124-disk-1  173G  83.3G   173G  /monster/data/subvol-124-disk-1
monster/data/subvol-126-disk-1 1.43G  6.72G  1.28G  /monster/data/subvol-126-disk-1
monster/data/subvol-127-disk-1 1.25G  6.75G  1.25G  /monster/data/subvol-127-disk-1
monster/data/subvol-132-disk-1 2.88G  17.2G  2.81G  /monster/data/subvol-132-disk-1
monster/data/subvol-134-disk-1  484M  4.53G   484M  /monster/data/subvol-134-disk-1
monster/data/vm-1000-disk-1    20.6G  2.55T  8.50G  -
monster/data/vm-1000-disk-2    33.8G  2.56T  13.0G  -
monster/data/vm-1001-disk-1    71.7G  2.57T  40.7G  -
monster/data/vm-1002-disk-1     164G  2.62T  81.2G  -
monster/data/vm-110-disk-1      137G  2.59T  90.2G  -
monster/templates               303M  2.54T   303M  /monster/templates
rpool                          11.0G  42.8G    96K  /rpool
rpool/ROOT                     2.37G  42.8G    96K  /rpool/ROOT
rpool/ROOT/pve-1               2.37G  42.8G  2.37G  /
rpool/data                     1.28G  42.8G    96K  /rpool/data
rpool/data/subvol-101-disk-1   1.28G  6.72G  1.28G  /rpool/data/subvol-101-disk-1
rpool/swap                     7.31G  45.8G  4.34G  -

In fact, on this system rpool/data is only used for the system itself; the container and VM data are stored on monster/data.

The problem is that when we want to move an LXC container from one cluster node to another, we can only migrate containers stored in the one ZFS location identified in storage.cfg as local-zfs.

The other ZFS pool, which is declared with the same settings, is only available locally and not shared.
There is no "nodes" definition in the local-zfs storage…


zfspool: local-zfs
	pool rpool/data
	content images,rootdir
	sparse 0

nfs: newmail_data
	path /mnt/pve/newmail_data
	server 192.168.210.140
	export /mnt/data/newmail/virtual
	options vers=4
	content backup,rootdir
	maxfiles 1

nfs: tide_vzbackup
	path /mnt/pve/tide_vzbackup
	server 192.168.210.28
	export /mnt/tank/NFS/vzbackup
	options vers=3
	content backup,images
	maxfiles 9

zfspool: mondata
	pool monster/data
	content rootdir,images
	nodes proxmonster




My questions are:
  • can we have more than one ZFS storage pool shared among "n" hosts?
  • if so, how are we supposed to manage this?
    • shall we remove the "nodes" definition on the mondata ZFS pool?
    • or add hosts inside it?
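For context on the last question: in storage.cfg, the nodes property is a restriction list, not a sharing mechanism — it only controls on which cluster nodes the storage entry is visible and usable. A sketch of what the mondata entry could look like if a second node (hypothetically named proxnode2 here) also had a local pool called monster/data:

```
zfspool: mondata
	pool monster/data
	content rootdir,images
	nodes proxmonster,proxnode2
```

Each listed node still needs its own local ZFS pool with that exact name; listing several nodes does not replicate or share the data between them.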
 
The other zfs pool, which is also declared with the same setting is only available locally and not shared.

ZFS is local storage (it is impossible to make it shared magically).

But you can still migrate VMs and containers (offline) - or what is the problem?
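As an illustration of that offline migration, assuming a target node hypothetically named proxnode2 (substitute your real node name), moving container 102 from the listing above could look like this; when the ZFS storage is not shared, Proxmox copies the container's subvolume over the network during the migration:

```shell
# Offline migration requires the container to be stopped first
pct shutdown 102

# Migrate container 102 to the target cluster node;
# the local ZFS subvol is transferred as part of the migration
pct migrate 102 proxnode2
```

If the container must land on a differently named storage on the target, newer Proxmox versions accept a target-storage option for pct migrate (check `pct help migrate` on your version); on older versions, a vzdump backup and restore on the other node is the usual workaround.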
 
