[SOLVED] Backup and Restore Much Larger

Jarvar

I found this odd.
I had an old server with 2 x 1 TB SSDs, one NVMe and one SATA, set up as a ZFS RAID 1 (mirror). The ZFS mirror pool on it is a little smaller than the one on the new server, yet it contains way more VM disks while taking up less space. It's weird.
Can somebody help explain what is going on?

First node
node1a.PNG
The VM disks on it
node1b.PNG

Now the second node

node2a.PNG

And the VM disks on the second node

node2b.PNG

Does this get compressed down afterwards?

Basically I backed up the VMs using vzdump in Stop mode with gzip compression onto NFS, connected Node2 to the same NFS share, and restored to a ZFS storage set up the same as on the first node.
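For reference, this is roughly what I ran; the VMID and storage names here are just placeholders for my setup:

Code:
# on Node1: stop-mode backup with gzip to the NFS storage
vzdump 100 --mode stop --compress gzip --storage nfs-backup

# on Node2: restore the dump from the NFS mount onto the ZFS storage
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-100-<timestamp>.vma.gz 100 --storage local-zfs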
What is going on?
Please and thank you.
 
My guess (I'm having trouble following which is the new node and which is the old one) is: on the server with more disks on it you have used thin-provisioned disks, either on purpose or by accident. It could be compression or dedup as well, but then the ratios would be awesome. So my guess is: thin provisioning.

Technically the disks on node 1 wouldn't even fit on a 1 TB disk, so one or the other efficiency feature has to be involved.
I had it the other way around lately: migrating disks with ZFS send/receive somehow seemed to cause the target disks to be created in a thin-provisioned way. That can easily be seen in the "zfs list" overview - e.g. when the "USED" column is equal to (or larger than) the disk size, it is a full-blown disk. If not, it is likely using much less space and is thin provisioned.
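As a quick sketch (the dataset name is just an example), something like this shows the difference:

Code:
# a fully provisioned zvol reserves its whole size (refreservation close to volsize);
# a thin provisioned one shows "none" and usually a much smaller USED value
zfs list -o name,volsize,used,refreservation rpool/data/vm-100-disk-0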
 

Thanks, the top two images are from Node1 and the bottom two are from Node2.
I installed Node2, and to get the VMs from Node1 I backed them up to NFS and then restored them to Node2 from NFS.

You mentioned using ZFS send/receive. Can you do that between two nodes that are not in the same cluster?
You seem to be right, although it does look like a full-blown disk on node 2.

node2_zfs.PNG

On node 1, it is a lot smaller.
 

Thank you so much. It looks like I figured out a way to use zfs send/receive, and it preserves the thin provisioning of the Node1 disks.
I really appreciate your assistance.
 
You are welcome. Out of curiosity (might get handy at some point): how did you actually solve it?
 

I don't know if I have actually solved it, but I did follow your lead on zfs send/receive.

I found a link from Proxmox
https://pve.proxmox.com/wiki/PVE-zsync

Code:
zfs send <pool>/[<path>/]vm-<VMID>-disk-<number>@<last_snapshot> | [ssh root@<destination>] zfs receive <pool>/<path>/vm-<VMID>-disk-<number>

I haven't solved it yet, but I think I am on my way there; I am sending and receiving the VM disks. I made the mistake of using SSH, but from what I read it's only needed when sending to a remote host, which these are not, since they are within the same network.
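For one of the disks, what I ran looked roughly like this (pool name, VMID and target host are just examples from my setup):

Code:
# take a snapshot of the source zvol on Node1, then stream it to Node2
zfs snapshot rpool/data/vm-100-disk-0@migrate
zfs send rpool/data/vm-100-disk-0@migrate | ssh root@node2 zfs receive rpool/data/vm-100-disk-0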

I'll also have to copy the VM configuration file over manually. I used FileZilla to copy the .conf file over to /etc/pve/qemu-server, then opened it with nano (/etc/pve/qemu-server/example.conf) and made sure the disk entries matched any changes I had made. I also renamed the .conf file to the next VMID in line on the new server.
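From the command line the same thing would look something like this (the VMIDs and node name are made up for illustration):

Code:
# copy the config from Node1, giving it the next free VMID on Node2
scp /etc/pve/qemu-server/100.conf root@node2:/etc/pve/qemu-server/105.conf
# then on Node2, check that the disk entries point at the received zvols
nano /etc/pve/qemu-server/105.conf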

Looks like I may have found a simpler solution.
I noticed I have copied a lot of VMs over to my home lab, and surprisingly I have many of the same ones, minus a few and plus a few more.
If I use Snapshot mode for the backup instead of Stop, and LZO (fast) instead of gzip, it looks like the VM disks stay thin provisioned and are easier to back up and restore, as sketched below.
I am going to try it out.
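Something like this is what I plan to try (VMID and storage name are placeholders again):

Code:
# snapshot-mode backup with the faster LZO compression to the NFS storage
vzdump 100 --mode snapshot --compress lzo --storage nfs-backup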
 
I might have figured it out. Under Datacenter view -> Storage I noticed Thin Provision was unchecked. Checking it seems to make a BIG difference.
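For the record, that checkbox appears to correspond to the sparse option of the ZFS storage; the entry in /etc/pve/storage.cfg ends up looking something like this (storage and pool names are examples from my setup):

Code:
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1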
 
