Dear all,
For a file server holding about 10 TB of data I need to estimate the actual amount of disk space required for the server's VM. I'm trying to understand the relation between the size of the logical volumes in a thin pool and the actual space used on the VM's disk. For example, for the logical volume I get:
Code:
root@pmx:~# lvs SATA_ARRAY_0
  LV                                     VG           Attr       LSize   Pool         Origin                                 Data%  Meta% Move Log Cpy%Sync Convert
  SATA_ARRAY_0                           SATA_ARRAY_0 twi-aotz--  <3.61t                                                     13.48   1.20
  snap_vm-102-disk-0_Ansible_Config_Done SATA_ARRAY_0 Vri---tz-k  <2.93t SATA_ARRAY_0
  snap_vm-102-disk-0_OS_installed        SATA_ARRAY_0 Vri---tz-k  <2.93t SATA_ARRAY_0
  vm-102-disk-0                          SATA_ARRAY_0 Vwi-aotz--  <2.93t SATA_ARRAY_0 snap_vm-102-disk-0_Ansible_Config_Done 16.55
  vm-102-state-Ansible_Config_Done       SATA_ARRAY_0 Vwi-a-tz-- <20.02g SATA_ARRAY_0                                         2.49
  vm-102-state-OS_installed              SATA_ARRAY_0 Vwi-a-tz-- <20.02g SATA_ARRAY_0                                         2.60
while, inside the VM, the disk seems to use considerably less space:
Code:
root@ananas:/shares/fotos$ df -H
Filesystem      Size  Used Avail Use% Mounted on
udev            5,1G     0  5,1G   0% /dev
tmpfs           1,1G  2,0M  1,1G   1% /run
/dev/sda2       3,2T  363G  2,7T  13% /
tmpfs           5,1G     0  5,1G   0% /dev/shm
tmpfs           5,3M     0  5,3M   0% /run/lock
tmpfs           1,1G     0  1,1G   0% /run/user/1000
16.55 % of 2.93 TB is about 480 GB, while the VM itself only uses about 360 GB. Where does the difference of roughly 120 GB come from? And how does this scale when the amount of data on the server increases to several TB?
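For reference, this is the quick calculation behind those figures (just a sketch of my own arithmetic; I'm treating the <2.93t from lvs as decimal TB, which may itself be part of the confusion):

Python:
# Back-of-the-envelope check of the numbers quoted above.
# Assumption: I treat the "<2.93t" reported by lvs as decimal TB here;
# if it is actually binary TiB, the allocated figure would be even larger.
TB = 1000 ** 4   # bytes per TB
GB = 1000 ** 3   # bytes per GB

lv_virtual_size = 2.93 * TB        # virtual size of vm-102-disk-0
data_percent = 16.55 / 100         # Data% column from lvs

allocated = lv_virtual_size * data_percent   # space actually allocated in the thin pool
used_in_vm = 363 * GB                        # "Used" column of df -H inside the VM

print(f"allocated in thin pool: {allocated / GB:.0f} GB")                 # ~485 GB
print(f"used inside the VM:     {used_in_vm / GB:.0f} GB")                # 363 GB
print(f"unexplained difference: {(allocated - used_in_vm) / GB:.0f} GB")  # ~122 GB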
In addition to that, I'm trying to understand the storage requirements for snapshots and backups:
- How much disk space is required for a snapshot? Let's assume the file server receives 1 TB of additional data: do I need to hold this data twice, once on the actual disk and once in a snapshot taken after copying the data?
- How can I free up space in the LV / thin pool again? What happens if I delete a large chunk of data from the VM? Is the LV shrunk again automatically, or is that not possible at all? If it is possible, can it be triggered manually?
- Bonus question: how do I determine the storage requirement on the PBS for backing up the VM with all of its payload data? I'd assume that I need to provide roughly the same amount of storage as the VM consumes for the initial backup, while later backups are considerably smaller due to deduplication (roughly along the lines of the sketch below). Is that correct?
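To make that last assumption concrete, this is the rough estimate I have in mind (a sketch only; the 5 % daily change rate and the one-week retention are made-up example values, and I don't know whether PBS deduplication really behaves this way):

Python:
# My rough mental model of PBS storage use; purely an assumption on my part.
# The change rate and retention below are made-up example values.
GB = 1000 ** 3

used_in_vm = 363 * GB     # current usage inside the VM (from df -H)
daily_change = 0.05       # hypothetical: 5 % of the data changes per day
retained_backups = 7      # hypothetical: keep one week of daily backups

initial_backup = used_in_vm                                         # first backup stores (roughly) all used data
incrementals = used_in_vm * daily_change * (retained_backups - 1)   # later backups only add changed chunks

print(f"initial backup:           {initial_backup / GB:.0f} GB")                   # 363 GB
print(f"six incremental backups:  {incrementals / GB:.0f} GB")                     # ~109 GB
print(f"estimated PBS usage:      {(initial_backup + incrementals) / GB:.0f} GB")  # ~472 GB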
Thanks
Maxwell