Migrate ZFS from mountpoint to VM image

pmuch

Hi.
I have an LXC with a 1.5 TB mountpoint - it works great; the only problem is backup, which takes >20 h. I want to migrate to a VM. Is there a way to convert the mountpoint to a QEMU image?
 
I use ZFS - but I can't find any information about using it with PBS - is it possible, or should I use another tool?
 
I use ZFS - but I can't find any information about using it with PBS
PBS is filesystem agnostic; it simply uses any locally mounted filesystem. (Nearly any - it must be POSIX, iirc.)

So for ZFS: create a new dataset (via cli, for example: zfs create -o compression=off mypool/mypbs), let it mount automatically (default) and tell PBS to use "/mypool/mypbs" as the "Backing Path:".

If the presence or absence of the leading "/" confuses you: please read a ZFS primer first.
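In practice it boils down to two commands - a minimal sketch, where "mypool/mypbs" and the datastore name "mypbs" are example names only:

Code:
# create the dataset; by default ZFS mounts it at /mypool/mypbs
zfs create -o compression=off mypool/mypbs

# register that path as a PBS datastore (the GUI works too)
proxmox-backup-manager datastore create mypbs /mypool/mypbs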
 
PBS is filesystem agnostic; it simply uses any locally mounted filesystem. (Nearly any - it must be POSIX, iirc.)
I'm backing up to ZFS - but it doesn't use snapshots, and every backup takes very long.
 
Ah, okay. The source too, or only the destination? Perhaps source and destination are on the very same hardware?

How is it configured? Show us zpool list -v of the involved pools. (Please use [code]...[/code] tags for this.) What kind of hardware are we talking about? CPU? RAM? What is the result of a simple pveperf /path/to/mountpoint/ | grep FSYNCS on source and destination?

The more info you give us, the higher the chance of a helpful answer...
 
Source - pve1:

Code:
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool       3.62T  3.12T   521G        -         -    62%    85%  1.00x    ONLINE  -
  mirror-0  3.62T  3.12T   521G        -         -    62%  86.0%      -    ONLINE
    sda2    3.64T      -      -        -         -      -      -      -    ONLINE
    sdb2    3.64T      -      -        -         -      -      -      -    ONLINE

zfs list shows the subvol used as a mountpoint in the container:
Code:
rpool/data/subvol-103-disk-1  1.60T  75.6G  1.29T  /rpool/data/subvol-103-disk-1
Code:
pveperf  /rpool/data/subvol-103-disk-1
CPU BOGOMIPS:      60798.40
REGEX/SECOND:      4432707
HD SIZE:           1400.00 GB (rpool/data/subvol-103-disk-1)
FSYNCS/SECOND:     100.86

Destination - pve2 - with Proxmox backup server installed:

Code:
root@pve2:~# zpool list -v
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool       3.62T  2.92T   718G        -         -    53%    80%  1.00x    ONLINE  -
  mirror-0  3.62T  2.92T   718G        -         -    53%  80.7%      -    ONLINE
    sda2    3.64T      -      -        -         -      -      -      -    ONLINE
    sdb2    3.64T      -      -        -         -      -      -      -    ONLINE

The datastore is defined in the /backup directory, which is not a separate ZFS dataset:
Code:
rpool/ROOT/pve-1              1.68T   338G  1.68T  /
Code:
pveperf /backup
CPU BOGOMIPS:      70400.00
REGEX/SECOND:      2740006
HD SIZE:           2058.23 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     78.87

I know it's slow, but as this data is not modified often, I would like to take advantage of a feature like dirty bitmaps, which works great - but only for VMs. So I want to migrate to a VM, and I'm looking for a way to do that easily, without copying this data.
 
Well...

This looks like rotating rust. And SSDs are highly recommended for VMs and also for a PBS datastore.

PBS cuts all backup data into 4 MB (or smaller) chunks. For each and every chunk a write has to happen - and thanks to write amplification, multiple times.

And if the VM was not continuously(!) running since the previous(!) backup, the system has to read the whole 1.5 TB - which takes some time. (The next backup will be faster - thanks to the magic of "dirty bitmaps"...)

You are just confirming that SSDs are a must nowadays, sorry...

(For PBS you might add a (small, but mirrored) ZFS "special device" if you want to use large-capacity rotating disks.)
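For example - a sketch with placeholder device names; keep in mind that the special vdev must be at least as redundant as the rest of the pool, because losing it means losing the pool:

Code:
# add a mirrored special vdev; ZFS then keeps metadata (and, with
# special_small_blocks set, small blocks) on the fast devices
zpool add rpool special mirror /dev/disk/by-id/nvme-ssd1 /dev/disk/by-id/nvme-ssd2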
 
For an LXC container mountpoint, you could uncheck "Backup" in the GUI and just rclone it from inside the container with parallel = 4 once a week/month or so ;-)
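Something along these lines - an untested sketch; "mybackup:" is a placeholder for a configured rclone remote, "/data" for the mountpoint inside the container, and I'm assuming "parallel = 4" maps to rclone's --transfers option:

Code:
# run inside the container; syncs the mountpoint to the rclone remote
rclone sync /data mybackup:lxc103 --transfers 4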
 
For an LXC container mountpoint, you could uncheck "Backup" in the GUI and just rclone it
Yes.

But then... I would prefer to use "zfs send" to send a snapshot to the PBS which is fortuitously also running ZFS...

Of course @Kingneutron knows the following, but it may be interesting for @pmuch, as it has not been mentioned yet: the problem with PBS in the given scenario is that the source has to be read completely for each and every backup. That hurts, as it means reading 1.5 TB even though most of it is never actually sent to the PBS. Hence the idea to convert it to a ZVOL connected to a VM - that would eliminate the problem as long as the VM runs continuously ("dirty bitmap").

ZFS datasets can be snapshotted (exactly like ZVOLs); these ZFS snapshots have a persistent "state". This mechanism is NOT used by the PVE backup machinery, as those scripts are implemented in a storage-type-agnostic way.

You can find several backup scripts for ZFS out in the wild - and also in the Debian repository. I will not recommend one, as I do not have much experience with this approach. My point here is: these scripts read (and transmit) ONLY the data that has actually changed.
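The underlying mechanism is plain incremental "zfs send/receive" - a minimal sketch using the dataset from above; the target dataset on pve2 is a placeholder:

Code:
# one-time full send of an initial snapshot
zfs snapshot rpool/data/subvol-103-disk-1@base
zfs send rpool/data/subvol-103-disk-1@base | ssh pve2 zfs receive rpool/backup/subvol-103-disk-1

# later runs: only the blocks changed since @base are read and sent
zfs snapshot rpool/data/subvol-103-disk-1@next
zfs send -i @base rpool/data/subvol-103-disk-1@next | ssh pve2 zfs receive -F rpool/backup/subvol-103-disk-1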

Disclaimer: you are leaving the supported area with this. This is not a recommendation, only a hint at another theoretical possibility.
 
I'm aware of my main problem - PBS needs to read the whole LXC. That's why I want to migrate to a VM (I have many huge VMs and their backups are very fast thanks to the dirty-bitmap feature), and I'm looking for a way to do it without copying the data from the ZFS subvol (defined as a mountpoint in the LXC) to a ZFS disk mounted in the VM. I thought I could just attach the subvol to the VM as a disk, but that doesn't work. That's why I'm asking for a method of conversion.
 
That's why I'm asking for a method of conversion.
Please tell us when you find the required magic wand.

A dataset is a "set of files" presented by ZFS inside a mountpoint. A ZVOL is a block device that can contain any data structure, but usually contains the equivalent of a hard disk - including a partition table.

A simple conversion is just not possible. For copying the data I tend to use ssh (or sometimes 9p) - with the drawback of needing double the space during that process.
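If you go the copy route, it is roughly this - a sketch only, with hypothetical VM ID, storage name and paths:

Code:
# on the PVE host: allocate a fresh 1600 GB disk for VM 104 on the ZFS storage
qm set 104 --scsi1 local-zfs:1600

# inside the VM, after partitioning, formatting and mounting the new disk:
rsync -aHAX pve1:/rpool/data/subvol-103-disk-1/ /mnt/newdisk/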

Best regards
 
