Round 3: PBS vs Dell Data Domain

Oct 9, 2025
This is the third (hopefully last?) post in the saga of me trying to use our Dell EMC Data Domain as a datastore for our Proxmox Backup Server.

If I understand the issue correctly, the problem is this: when PBS runs garbage collection, it starts by 'touching' each data chunk in the datastore that it still requires, which updates the access timestamp (atime) on that chunk's file. Then it sweeps through and deletes chunks whose access timestamp hasn't been updated recently. The problem is that Dell Data Domains (and possibly other Dell products?) are built in such a way that the access timestamp never changes. So if you're using a Data Domain as a backup repository, PBS will start deleting chunks out of the backup chain, thinking they are no longer in use.
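For anyone who wants to reproduce this, the behavior is easy to check from a shell on the PBS host. This is just a sketch: the path under /tmp is a stand-in, and to test the Data Domain itself you'd point the variable at a file on the mounted share instead.

```shell
# Check whether a filesystem persists explicit atime updates,
# which is what PBS garbage collection relies on in its mark phase.
f=/tmp/atime-test            # placeholder path; use a file on the DD mount to test it
echo data > "$f"
before=$(stat -c %X "$f")    # access time, epoch seconds
sleep 2
touch -a "$f"                # explicitly set atime to "now", like GC's mark phase
after=$(stat -c %X "$f")
if [ "$after" -gt "$before" ]; then
    echo "atime updates"
else
    echo "atime does NOT update"   # the failure mode reported here for Data Domain
fi
```

On a normal local filesystem this prints "atime updates"; the report above is that on a DD-backed mount it would not.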

Does anyone know of a resolution to this problem? It's not precisely Proxmox's or Dell's "fault" per se; it's just a generic incompatibility. But it means we can't use our primary backup storage destination, and a very well-known and popular backup/archive technology in general. We'd really love to get this resolved, if anyone has any suggestions.

Thanks!
Matt H
 
Access time (atime) support is part of any POSIX-conformant filesystem. Some filesystems have slow metadata performance when atime is enabled, so disabling atime is often advised, but not being able to provide atime at all is sad. Maybe it's that way because Data Domain (before it was bought by Dell) assumed there's normally no read access to backup and archive data, and quietly disabled atime in their appliances to be faster (than the competition).
 
Sadly, as a workaround you can create really big files (many TB), make a filesystem on them, and mount them as loop devices in your PBS.
PS: This trick can help elsewhere too; for example, it sped up a MATLAB installation served over NFS to an NFS client by a factor of 3, because of its endless tiny and even endless 0-byte files.
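A sketch of what that loop-device setup could look like. All paths and sizes are placeholders (it assumes the DD share is already mounted at /mnt/dd), and the commands need root; treat it as a recipe, not a tested setup:

```shell
# Back the PBS datastore with a filesystem inside one big file on the DD share,
# so atime semantics come from ext4 rather than from the appliance.
truncate -s 10T /mnt/dd/pbs-store.img          # sparse file; only grows as chunks land
mkfs.ext4 -F /mnt/dd/pbs-store.img             # -F: format a regular file without prompting
mkdir -p /mnt/pbs-store
mount -o loop /mnt/dd/pbs-store.img /mnt/pbs-store
# then add /mnt/pbs-store as a PBS datastore
```

The trade-off is that the DD now only sees opaque writes into one file, so you'd also want an fstab/systemd mount unit so the loop mount survives reboots.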
 
Are you mounting the DataDomain via SMB, NFS or BoostFS? I have only played with SMB shares (not with PBS), but I didn't notice anything weird with timestamps etc. I think DD and PBS won't work well together, since PBS already deduplicates its data and then DD tries to deduplicate it again; though I don't think that should cause any issues like this, the data just won't compress and performance will be poor.

Work uses DataDomain with Veeam and DDBoost protocol.
 
We'd really love to get this resolved, if anyone has any suggestions.
Unfortunately, I have no suggestion for you :-(.

Just an idea for the Proxmox developers: add a feature to the garbage collection process so that "Phase one (Mark)" can optionally use the equivalent of the default touch filename command (rather than touch -a filename). The default touch changes the file's modification time.
Then in "Phase two (Sweep)", check the modification time of files instead of the access time.

That is, provided that Dell EMC Data Domain supports modification timestamps at least...
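The difference between the two touch variants in that suggestion can be seen directly; the file path here is just a scratch placeholder:

```shell
# Plain `touch` advances mtime (and atime); `touch -a` advances atime only.
f=/tmp/touch-demo
echo data > "$f"
m0=$(stat -c %Y "$f")        # modification time, epoch seconds
sleep 2
touch -a "$f"                # what GC's mark phase effectively does today
m1=$(stat -c %Y "$f")        # unchanged: -a leaves mtime alone
touch "$f"                   # the proposed alternative
m2=$(stat -c %Y "$f")        # advanced by at least the 2s sleep
echo "m0=$m0 m1=$m1 m2=$m2"
```

So a sweep keyed on mtime would only work if the appliance persists modification-time updates, which is exactly the caveat above.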
 
Are you mounting the DataDomain via SMB, NFS or BoostFS? I have only played with SMB shares (not with PBS), but I didn't notice anything weird with timestamps etc. I think DD and PBS won't work well together, since PBS already deduplicates its data and then DD tries to deduplicate it again; though I don't think that should cause any issues like this, the data just won't compress and performance will be poor.

Work uses DataDomain with Veeam and DDBoost protocol.
I have tried NFS, CIFS, and DDBoost; in all cases the timestamps do not update. Writes still work fine, so the only time you'd hit a problem is if a process depends on being able to update the access timestamp.
 
Sadly, as a workaround you can create really big files (many TB), make a filesystem on them, and mount them as loop devices in your PBS.
PS: This trick can help elsewhere too; for example, it sped up a MATLAB installation served over NFS to an NFS client by a factor of 3, because of its endless tiny and even endless 0-byte files.
Not sure I understand this. I tried to create a QCOW2 disk on an NFS share off of my Data Domain and attach it to my PBS, and while I eventually got it to sort of work, the backups would crash after a few minutes of runtime. Seemed really unstable unfortunately.
 
A qcow2 file is a VM image file, while for PBS you need a "normal" filesystem, which could be local or remote.
 
Interesting ... and the question is where the problem lies when it crashes that easily: PVE, PBS, qcow2, or the DD NAS?! Each of them is called stable on its own, but together they aren't ... a sad state of affairs.
:)