Hi ZFS lovers,
I am experimenting with best practices for backing up a VM. The VM stores ISO files and has a 768 GB data disk, which is quite large to back up, so I would like to do it more efficiently.
I moved the data from an ext4-based 768 GB disk to a 768 GB ZFS-based one inside the VM and excluded the disk image from the backup. The backup itself is supposed to be done from "outside" via ZFS send/receive.
I built a new backup server with ZFS to store my Proxmox backups (it is a Proxmox installation itself), and it will be used to receive the data. The system uses a RAID-Z2 for the data (6x3 TB), no L2ARC/ZIL, and 16 GB of RAM.
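Just to illustrate the layout, the pool corresponds roughly to something like this (only a sketch with placeholder device names; the real pool was set up by the Proxmox installer, so this is not the actual command used):
Code:
# 6x3 TB disks in a single raidz2 vdev, ashift=12 for 4K-sector drives
# (sda..sdf are placeholders, not the real device names)
zpool create -o ashift=12 rpool raidz2 sda sdb sdc sdd sde sdf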
All pools have compression enabled (on) and a 4K recordsize. The send/receive synchronization has just finished, and I do not understand how the numbers add up:
Source system:
Code:
$ zfs list -t all -r -o space isodump/samba
NAME                                AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
isodump/samba                        311G  429G      264K    429G              0          0
isodump/samba@2015_12_16-13_35_53       -     0         -       -              -          -
isodump/samba@2015_12_17-10_57_26       -     0         -       -              -          -
isodump/samba@2015_12_18-06_38_25       -   43K         -       -              -          -
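For completeness, the properties mentioned above were set with the usual commands, and the compression ratio actually achieved on the source can be read back the same way (just the commands, I have not pasted the output here):
Code:
# compression and recordsize as described above
zfs set compression=on isodump/samba
zfs set recordsize=4K isodump/samba
# read back what compression actually achieves on the source dataset
zfs get compression,recordsize,compressratio,logicalused isodump/samba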
Actual transfer (initial sync):
Code:
$ ssh -C isodump zfs send -R isodump/samba@2015_12_16-13_35_53 | zfs receive -Fduv rpool/proxmox/2007
receiving full stream of isodump/samba@2015_12_16-13_35_53 into rpool/proxmox/2007/samba@2015_12_16-13_35_53
received 459GB stream in 34387 seconds (13,7MB/sec)
So we received 30 GB more, due (I think) to uncompressed data and metadata. The transfer was really slow, but a complete cluster backup of 5 nodes was running alongside the send/receive, so no surprise there.
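For the following runs I only want to send the deltas, roughly like this (an untested sketch using the snapshot names from the listing above; newer ZFS versions also offer a compressed send option that keeps blocks compressed on the wire, but I am not sure mine supports it):
Code:
# incremental replication of all snapshots between the one already on the
# backup server and the newest one on the source
ssh -C isodump zfs send -R -I isodump/samba@2015_12_16-13_35_53 \
    isodump/samba@2015_12_18-06_38_25 \
  | zfs receive -Fduv rpool/proxmox/2007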
But on the backup server, the received dataset is really huge, far too huge in my opinion.
Code:
$ zfs list -r -t all rpool/proxmox/2007
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
rpool/proxmox/2007                            1,86T  2,85T  2,64G  /rpool/proxmox/2007
rpool/proxmox/2007@2015_11_15-23_01_15         346G      -   346G  -
rpool/proxmox/2007@2015_11_20-18_16_54         189M      -   553G  -
rpool/proxmox/2007@2015_11_27-18_17_10         114M      -   630G  -
rpool/proxmox/2007@2015_12_04-18_28_18         114M      -   671G  -
rpool/proxmox/2007@2015_12_14-09_45_23        5,00G      -   676G  -
rpool/proxmox/2007@2015_12_17-12_55_34            0      -  2,64G  -
rpool/proxmox/2007/samba                       877G  2,85T   877G  /rpool/proxmox/2007/samba
rpool/proxmox/2007/samba@2015_12_16-13_35_53      0      -   877G  -
You can see that until yesterday the data lived directly in the 2007 filesystem, and it did not take up as much space as it does now. I do not understand why the filesystem now uses about 220 GB more than it did before, and also more than on the original system.
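The only idea I have so far is to compare the logical size with what is actually allocated, and to check the pool geometry, with something like this (commands only; zpool get ashift may need a newer ZFS, otherwise zdb -C rpool shows the same value):
Code:
# on the backup server: logical (before compression/raidz) vs. allocated size
zfs get used,logicalused,referenced,logicalreferenced,compressratio,recordsize \
    rpool/proxmox/2007/samba

# sector size the raidz2 vdev was created with; parity and padding overhead
# on raidz2 depends on how small the blocks are relative to the ashift
zpool get ashift rpool

# same numbers on the source side for comparison
ssh isodump zfs get used,logicalused,compressratio,recordsize isodump/samba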
Can anyone explain this?
Best,
LnxBil