Original post from the DE support board translated:
Good morning,
In our deployment, we have some very large VM disks (over 20 TB, raw disks on an iSCSI flash array). When we want to back up these VMs, we encounter the problem that our archive storage accepts files up to a maximum of 7 TB (this could be changed, but it would have a significant performance impact).
Therefore, my question: is it possible to segment ZSTD backup files and keep the individual chunks smaller than 5 TB?
Unfortunately, I haven't been able to find anything about this in my online research, so if I missed something in the documentation, please just point me in the right direction.
After this I was informed in a now deleted comment that there is no functionality like this and using Proxmox Backup Server was recommended. Our Proxmox Backup Server deployment is in the works but will still take some time, so we need a solution until then.
Does it make sense in this case to fork the vzdump utility and modify the PVE::VZDump::exec_backup_task method by inserting a split command before the backup file is moved from tmptar to the target, and then make an equivalent change with cat in qmrestore to rebuild the image?
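Until the Proxmox Backup Server deployment is ready, an interim approach that avoids forking vzdump may be to split the finished archive with standard tools after the backup completes. A minimal sketch (the archive file name is illustrative, not an actual path from this deployment):

```shell
# Split a finished vzdump archive into chunks no larger than 5 TB.
# The archive name is an example; adjust to the actual dump file.
split -b 5T vzdump-qemu-100.vma.zst vzdump-qemu-100.vma.zst.part-

# Before restoring, reassemble the chunks in suffix order
# (split's default suffixes sort lexicographically, so * is safe here)
# and hand the rebuilt archive to qmrestore unchanged.
cat vzdump-qemu-100.vma.zst.part-* > vzdump-qemu-100.vma.zst
```

This keeps both vzdump and qmrestore unmodified; vzdump's documented hook-script option (`--script`, invoked e.g. in the `backup-end` phase) could trigger the split automatically, avoiding a fork that would have to be maintained across PVE upgrades.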