ZFS and snapshots

I was hoping someone could help me clear something up.

When making a ZFS snapshot and sending it offsite, the sizes don't really look like what they should be.
What I mean is this: I made a snapshot and the size of the snapshot is 26.4M.

When transferring this same snapshot with zfs send / recv, it transfers 8.74GB.
The whole pool is the same on both sides, except maybe the "main" image.
Can someone explain what it is actually syncing?

It seems to do this for every snapshot that is being synced; every time it syncs much more data than the snapshot size.
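In case it helps, this is roughly how I check the stream size before sending it (the pool, dataset and snapshot names below are just placeholders, not my real ones):

zfs send -nv tank/data@snap1                      # dry run: estimated size of a full send
zfs send -nv -i tank/data@snap1 tank/data@snap2   # dry run: estimated size of an incremental send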
 
Hi,

The source / destination snapshot size can be different if you have different disk models (while using the same ZFS pool layout: mirror / raidz / and so on).

Think of zfs send/receive as being like rsync, but working with disk blocks instead of files. So node A can have a disk block size of 512. You make a single snapshot of 1000 blocks => 1000 x 512 = 512,000 bytes.
Then you do a zfs send/receive from A to node B. B has disks with a 4k block size (8 x 512). It will copy each block from A to B, but every 512-byte block on A becomes a 4k block on B (8 x the size on A). In the end you will see 1000 x 8 x 512 bytes on node B.
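If you want to check what block sizes are actually in play on each node, something along these lines should show it (the pool and dataset names are only examples):

zpool get ashift tank          # ashift=9 means 512-byte sectors, ashift=12 means 4k
zfs get recordsize tank/data   # record size of the dataset you are sending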

I hope you understand the idea ;)

Good luck!
 

Yes, I definitely understand what you are saying, although I don't think that is where this huge difference is coming from.
You are right, though: the source recordsize was set to 8k and the destination to 128k, so I've changed that now.
Let's see if it helps ;)
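For anyone following along, the change was basically this (the dataset name is a placeholder, and the direction of the change is just an example); as far as I know recordsize only affects data written after the change, existing blocks keep their old size:

zfs get recordsize tank/backup      # check the current value
zfs set recordsize=8k tank/backup   # example: matching the destination to the source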
 
I have observed a similar behaviour and verified the recordsize parameter of the corresponding filesystems on both source and destination. They are identical. The pools are both set up with ashift=12. Any other ideas?
 
And is your ZFS pool geometry identical on both sides? What about the disks? Are they the same model?
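A quick way to compare would be something like this on both nodes (the pool name is a placeholder):

zpool status tank                     # vdev layout: mirror / raidz / number of disks
zpool list -v tank                    # per-vdev and per-disk sizes
lsblk -o NAME,MODEL,PHY-SEC,LOG-SEC   # disk models and physical/logical sector sizes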
 
Hmm, I think I see what is happening.
Most if not all snapshots are very small, around 20MB.
But then once a day there is a snapshot that is very big, +2GB.

It is syncing a range of snapshots, so everything together is around 4GB.
For the VMs even more.

Now I just need to find out why some snapshots are so big...
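This is roughly how I plan to dig into it (dataset and snapshot names are placeholders; as far as I know zfs diff only works on filesystems, not on the VM zvols):

zfs list -t snapshot -o name,used,written,creation -s creation tank/data
zfs diff tank/data@daily-1 tank/data@daily-2   # what changed between two consecutive snapshots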
 
Going by the size reported by zfs list -t snapshot,
although I'm not sure that measurement is correct, as the same snapshot shows a different size on the other side.

No it's not - according to the zfs manpage:

usedbysnapshots
The amount of space consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties because space can be shared by multiple snapshots.
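A convenient way to see that breakdown per dataset is the space view, for example (the pool name is a placeholder):

zfs list -o space -r tank   # USEDSNAP = usedbysnapshots, USEDDS = data in the live dataset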
 
