PBS Backup to Tape / Collected findings and proposals

juergen852

After going through all the threads in this forum about tape backup and opening a ticket with Proxmox without a real solution, I would like to ask about and propose some ideas.

Please correct me if any of my points are wrong.
Comments are also welcome.

BTW: 3.2 would be my favourite solution.

Conclusions from other threads
1.1 To keep the tape streaming continuously, the storage needs to sustain around 300 MB/s of read throughput.
1.2 PBS stores backups as chunk files of at most 4 MB. Sustaining 300 MB/s of non-sequential reads across files that small is not easy.
1.3 fio can give a rough indication of disk read speed, but it cannot really simulate the read pattern of a backup job. A chunk-read simulation like the sketch below gets closer.
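For 1.3: below is a minimal, std-only sketch in Rust (PBS itself is written in Rust, so I stay in that language) that reads chunk files in a randomized order and measures sustained throughput. The datastore path, the hash-based shuffle, and the sample size are my own example assumptions, not anything PBS ships with:

```rust
// Minimal sketch: approximate the tape job's read pattern by reading
// chunk files in a randomized order and measuring sustained throughput.
// Assumption: the argument points at a PBS datastore's .chunks
// directory (a tree of many small files); the defaults are examples.
use std::env;
use std::fs;
use std::io::Read;
use std::path::PathBuf;
use std::time::Instant;

fn collect(dir: &PathBuf, out: &mut Vec<PathBuf>) -> std::io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            collect(&path, out)?;
        } else {
            out.push(path);
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let root = env::args().nth(1).unwrap_or("/datastore/.chunks".into());
    let mut files: Vec<PathBuf> = Vec::new();
    collect(&PathBuf::from(&root), &mut files)?;

    // Poor man's shuffle (std only): order by a hash of the path so the
    // read order is decorrelated from the on-disk layout.
    files.sort_by_key(|p| {
        use std::collections::hash_map::DefaultHasher;
        use std::hash::{Hash, Hasher};
        let mut h = DefaultHasher::new();
        p.hash(&mut h);
        h.finish()
    });

    let start = Instant::now();
    let mut total: u64 = 0;
    let mut buf = Vec::with_capacity(4 * 1024 * 1024); // chunks are <= 4 MiB
    for path in files.iter().take(5_000) { // arbitrary sample size
        buf.clear();
        fs::File::open(path)?.read_to_end(&mut buf)?;
        total += buf.len() as u64;
    }
    let secs = start.elapsed().as_secs_f64();
    println!(
        "read {} MiB in {:.1}s = {:.0} MB/s",
        total / (1024 * 1024),
        secs,
        total as f64 / 1e6 / secs
    );
    Ok(())
}
```

Drop the page cache before running (echo 3 > /proc/sys/vm/drop_caches), otherwise the result is flattered by RAM. If the number comes out well below 300 MB/s, the tape will not stream.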


Proposed solutions / tests others made
2.1 Even larger ext4 RAID5 HDD setups with 10 or more disks will struggle to deliver a fast enough continuous read stream.
2.2 ZFS on HDDs with an SSD "special device" cannot cache enough of the backup data: the special device mainly holds metadata and small blocks, so the tape still underruns.
2.3 Backup to SSD: -> You need more SSD space for backups than you have on your production servers.
-> Expensive, and the disks will be worn out by the write load after a few years, which makes the solution even more expensive.
2.4 Back up daily to a smaller SSD datastore and then sync internally to a larger HDD datastore with a longer history:
-> You still need SSD space as large as your production storage for the initial full backup, plus HDDs for the longer history.
-> Still a high write load on the SSD.
2.5 Change PBS to use multiple threads for reading from HDD: -> May help a bit on RAID controllers with many spindles, but the concurrent readers still compete for the same disk heads (a user-space test sketch follows below).
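Regarding 2.5: whether parallel readers help on a given array can be tested from user space before changing anything in PBS. A minimal sketch, assuming the chunk file list is piped in on stdin (e.g. find /datastore/.chunks -type f | ./parallel-read-test) and an arbitrary thread count of 8:

```rust
// Minimal sketch for 2.5: read the given chunk files with N threads in
// parallel and report combined throughput. The thread count and the
// stdin-based file list are example assumptions.
use std::fs;
use std::io::{self, BufRead, Read};
use std::path::PathBuf;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Instant;

fn main() {
    let threads: usize = 8; // example value; tune per array

    let files: Vec<PathBuf> = io::stdin()
        .lock()
        .lines()
        .filter_map(|l| l.ok())
        .map(PathBuf::from)
        .collect();
    let files = Arc::new(files);
    let total = Arc::new(AtomicU64::new(0));

    let start = Instant::now();
    let mut handles = Vec::new();
    for i in 0..threads {
        let files = Arc::clone(&files);
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            // Simple sharding: each thread reads every N-th file.
            let mut buf = Vec::with_capacity(4 * 1024 * 1024);
            for path in files.iter().skip(i).step_by(threads) {
                buf.clear();
                if fs::File::open(path)
                    .and_then(|mut f| f.read_to_end(&mut buf))
                    .is_ok()
                {
                    total.fetch_add(buf.len() as u64, Ordering::Relaxed);
                }
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let secs = start.elapsed().as_secs_f64();
    let bytes = total.load(Ordering::Relaxed);
    println!("{} threads: {:.0} MB/s", threads, bytes as f64 / 1e6 / secs);
}
```

On arrays with many spindles the higher queue depth usually pays off; on a handful of disks the extra seeking can even make things worse, which is why 2.5 only "may help a bit".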


Ideas:
3.1 Back up to HDD and then internally sync the daily data to an SSD datastore, from which the tape job reads.
-> Every full tape backup still has to read from the slow HDDs.
Problem: Is there a way to sync only the latest increment from HDD to SSD, and would restores from tape still be consistent?

3.2 Implement a solution that buffers the data from HDD on an SSD (1 to 4 TB) before writing from the SSD to tape (a sketch follows below, after 3.3).
-> When a buffer is full, the tape writes it out and then pauses until the next buffer is ready.
Checksums are calculated while the tape is idle, so no powerful CPU is needed.
This way the tape can write at full speed for at least 60 to 120 minutes before it has to reposition.
Pro:
- Only one SSD needed.
- Fast, and not critical with respect to tape speed.
Problem:
- The write load still wears out the SSD, but it can be a cheap one, and the daily backups are not that large...
3.2.2 Add logic to support a second SSD in front of the tape drive, with automatic failover if the first one dies. That leaves enough time to replace the dead drive.

3.3 Alternatively, use RAM as the buffer: expensive, but it does not wear out.
(Is RAM already used as a buffer, and would adding 1 TB of RAM help? The sketch below works the same whether the buffers live in RAM or on an SSD.)
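To make 3.2 and 3.3 concrete, here is a minimal sketch of the ping-pong buffering I have in mind. Everything in it is scaled down and assumed for illustration: the buffers are 64 MiB instead of terabytes, /tmp/fake-tape stands in for the tape device, /dev/urandom stands in for the slow chunk reads, and checksumming and SSD-backed buffers are left out:

```rust
// Minimal sketch of 3.2/3.3: a ping-pong buffer between a slow chunk
// reader and a fast sequential writer. Two large buffers circulate:
// while the writer streams one out at full speed, the reader refills
// the other. All sizes and paths are example assumptions.
use std::fs::{File, OpenOptions};
use std::io::{Read, Write};
use std::sync::mpsc;
use std::thread;

// Tiny for the demo; the proposal would use 1-4 TB on an SSD (3.2)
// or a large RAM buffer (3.3) instead of two 64 MiB buffers.
const BUF_SIZE: usize = 64 * 1024 * 1024;

fn main() -> std::io::Result<()> {
    // Full buffers flow to the writer; drained ones come back.
    let (full_tx, full_rx) = mpsc::sync_channel::<Vec<u8>>(1);
    let (empty_tx, empty_rx) = mpsc::channel::<Vec<u8>>();

    // Hand out the two buffers that will ping-pong between the threads.
    empty_tx.send(Vec::with_capacity(BUF_SIZE)).unwrap();
    empty_tx.send(Vec::with_capacity(BUF_SIZE)).unwrap();

    // Writer thread: streams each full buffer out sequentially, the way
    // the tape drive would, then hands the buffer back for refilling.
    let writer = thread::spawn(move || -> std::io::Result<()> {
        let mut tape = OpenOptions::new()
            .create(true)
            .write(true)
            .open("/tmp/fake-tape")?; // stand-in for the tape device
        while let Ok(mut buf) = full_rx.recv() {
            tape.write_all(&buf)?; // the drive streams at full speed here
            buf.clear();
            if empty_tx.send(buf).is_err() {
                break; // reader side is done
            }
        }
        Ok(())
    });

    // Reader: fills buffers from the (slow) source while the writer is
    // busy. /dev/urandom is only a stand-in for reading PBS chunks.
    let mut source = File::open("/dev/urandom")?;
    for _ in 0..4 {
        let mut buf = empty_rx.recv().unwrap();
        buf.resize(BUF_SIZE, 0);
        source.read_exact(&mut buf)?;
        full_tx.send(buf).unwrap();
    }
    drop(full_tx); // closes the channel and lets the writer finish
    writer.join().unwrap()?;
    Ok(())
}
```

The sync_channel of depth 1 provides the backpressure: the reader can only fill a buffer the writer has handed back, so the tape only ever pauses between buffers, never mid-buffer. With 1-4 TB buffers this gives the 60-120 minutes of uninterrupted streaming from 3.2, and keeping the buffers in RAM instead of on an SSD is exactly variant 3.3.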
 
Hello @floh8,
thanks for your reply. Since I have been through all the tape threads, I already knew that one. (Another one without a solution...)

Long story short: the current PBS-to-tape implementation is fast enough when backed by SSDs, but not by HDDs.
Technically, there are possible solutions like the ones I mentioned in 3.2 and 3.3, or much larger buffers, as @dcsapak suggested.
If only Proxmox were willing to invest in implementing such things...
Tape seems to have a lower priority; maybe not many customers use tape drives...

Financially, I would prefer to pay 20-30% more for the subscription (or a "Tape Add-On") instead of investing in SSDs...
(Put the money toward Proxmox instead of hardware.)
In the end, chasing after solutions and testing them also costs a lot of time and money...
 
This thread might be helpful if you decide to use HDDs: https://forum.proxmox.com/threads/low-zfs-read-performance-disk-tape.139494/
Also interesting is the fact that with btrfs instead of ZFS, prune and GC, just like random reads in general, are much faster.
Of course, you should use a RAID 10 configuration to get maximum speed.
btrfs support is still experimental, and btrfs is still prone to losing data in certain setups. It also has fewer features than ZFS, so it is somewhat expected that it performs better. However, I would still prefer ZFS, since one of its features is bit-rot protection. So I would take this advice with a grain of salt.
For reference regarding btrfs versus zfs:
https://forum.proxmox.com/threads/q...s-how-it-is-working-in-proxmox-anyone.156775/
https://forum.proxmox.com/threads/actual-zfs-send-receive-backups.136222/