Export subvolume of LXC container on ZFS to qcow2

maturos

Apr 26, 2022
Hi all,
I have an LXC container with two volumes:
Code:
arch: amd64
cores: 10
features: nesting=1
hostname: foo
memory: 4096
mp0: local-zfs:subvol-112-disk-1,mp=/media/foo,backup=0,size=2000G
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=<mac>,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-112-disk-0,size=20G
startup: order=3
swap: 512
unprivileged: 1

I need to export the volume on mp0 to move my machine to another PVE node. Since I cannot back it up (it is too large), I want to export it and re-import it via qm import. I tried to export the ZFS volume to a .qcow2 file following this post. Unfortunately, the disks do not show up in /dev/zvol/rpool/data/ (although many other disks do). I guess it's because they are named differently, as "subvol", in ZFS. Here is a (shortened) zfs list:
Code:
root@pve:/etc/pve# zfs list
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool                         5.16T  5.24T      151K  /rpool
rpool/ROOT                    3.23T  5.24T      140K  /rpool/ROOT
rpool/ROOT/pve-1              3.23T  5.24T     3.23T  /
rpool/data                    1.93T  5.24T      209K  /rpool/data
rpool/data/subvol-112-disk-0  1.75G  18.3G     1.75G  /rpool/data/subvol-112-disk-0
rpool/data/subvol-112-disk-1   401G  1.56T      401G  /rpool/data/subvol-112-disk-1
rpool/data/vm-100-disk-0      2.13G  5.24T     2.13G  -

So how can I export the volume? Could you also show me the trick for piping dd (if dd is the right tool) into qemu-img, and that into scp, so the data is sent directly to the new PVE node?

This will help me a lot!
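For reference, if these disks had been zvols (true block devices under /dev/zvol/), the usual direct transfer would be to stream them over SSH rather than converting through qemu-img and scp. A minimal sketch with placeholder snapshot and dataset names follows, though as the reply below explains, subvols are not zvols, so it does not apply here:
Code:
# Run with the guest stopped. Dataset names, the snapshot name and
# "new-pve" are placeholders.
zfs snapshot rpool/data/vm-100-disk-0@migrate
zfs send rpool/data/vm-100-disk-0@migrate | ssh root@new-pve zfs receive rpool/data/vm-100-disk-0

# Alternative raw block copy with dd (the target zvol must already exist
# and be at least as large as the source):
dd if=/dev/zvol/rpool/data/vm-100-disk-0 bs=1M | ssh root@new-pve 'dd of=/dev/zvol/rpool/data/vm-100-disk-0 bs=1M'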
 
Subvols are datasets (i.e. filesystems), not zvols (i.e. block devices). A qcow2 is a file that contains a block device image, and "qm import" imports block devices, since VMs can only use block devices. So you would need to copy all the files and folders from your dataset to a filesystem on a block device.
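A quick way to see the difference, using the dataset names from the zfs list above, is to query the type property:
Code:
# Container subvols are "filesystem" datasets; VM disks are "volume"
# datasets (zvols), and only the latter appear under /dev/zvol/.
zfs get type rpool/data/subvol-112-disk-1   # -> filesystem
zfs get type rpool/data/vm-100-disk-0       # -> volume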

What comes to my mind:
Shut down the LXC on node A and mount the dataset there. Create a new virtual disk for your VM on node B and format it, but don't start the VM. Mount that virtual disk on node B (on the host). Rsync the whole content of the dataset over the network to the mounted filesystem on node B. Unmount the dataset and the virtual disk before starting the VM.
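A rough command sketch of that approach, assuming the empty virtual disk has already been created for the VM on node B; the VM ID, zvol name, mount path and host name below are placeholders:
Code:
# Node B: format and mount the new virtual disk on the host.
mkfs.ext4 /dev/zvol/rpool/data/vm-200-disk-1
mkdir -p /mnt/target
mount /dev/zvol/rpool/data/vm-200-disk-1 /mnt/target

# Node A (container stopped): push the dataset contents over the network.
rsync -aHAX /rpool/data/subvol-112-disk-1/ root@node-b:/mnt/target/

# Node B: unmount before starting the VM.
umount /mnt/target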
 
I think I mixed something up. I assumed the "MOUNTPOINT" would be under /dev/..., but I can actually access the filesystem at /rpool/data/subvol-112-disk-1/, which is the mountpoint shown in zfs list.

So here is what I did to solve my problem:
  1. Stop the LXC so that it doesn't restart after the backup completes
  2. Back up the LXC without including the second volume
  3. Restore the LXC on the new PVE node without starting it
  4. The mount point with a new, empty volume (subvol-103-disk-1) is created automatically on restore
  5. Run ls -la on the old volume and adjust the ownership of the new volume's top directory to match (it is set automatically to 100000:100000 by default)
  6. Run rsync -avzr /rpool/data/subvol-112-disk-1 root@new-pve:/hdd-data/subvol-103-disk-1 (see the command sketch below)
  7. Start the LXC on the new PVE node
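A sketch of steps 5 and 6, using the paths and the new-pve host name from the commands above; note that a trailing slash on the rsync source decides whether the directory itself or only its contents are copied:
Code:
# Old node: check ownership inside the old subvol.
ls -la /rpool/data/subvol-112-disk-1

# New node: adjust ownership of the new subvol's top directory if the old
# volume shows something other than the 100000:100000 default (for an
# unprivileged container, host UIDs/GIDs are the container IDs + 100000).
chown 100000:100000 /hdd-data/subvol-103-disk-1

# Old node: copy the data across. With the trailing slash rsync copies
# only the contents; without it, it creates a subvol-112-disk-1
# subdirectory inside the destination. -a already implies -r.
rsync -avz /rpool/data/subvol-112-disk-1/ root@new-pve:/hdd-data/subvol-103-disk-1/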
 
