What's the point of S3?

ChrisJM

Well-Known Member
Mar 12, 2018
So I have upgraded to v4 hoping to use S3, only to find out it is not pass-through and will use local storage anyway. And if that cache folder is removed, for example if the server dies, I can't retrieve it anyway. So what is the whole point of it?

I wanted to avoid using the local storage...
 
So I have upgraded to v4 hoping to use S3, only to find out it is not pass-through and will use local storage anyway. And if that cache folder is removed, for example if the server dies, I can't retrieve it anyway. So what is the whole point of it?

I wanted to avoid using the local storage...
You can recreate the datastore from the data in the S3 bucket. Just use the reuse-datastore and overwrite-in-use-marker flags when creating the datastore on the new instance. Also, the local storage is a cache; it does not need to hold all the chunks all the time.

Edit: Please see the docs for further details: https://pbs.proxmox.com/docs/storage.html#datastores-with-s3-backend
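A minimal sketch of what that re-creation might look like on the CLI, assuming a datastore named store1, a cache path of /mnt/datastore/store1, and an S3 client and bucket already configured in PBS. Only the reuse-datastore and overwrite-in-use-marker flags come from the reply above; the rest of the option spelling is an assumption, so verify it against the linked docs before running:

```shell
# Sketch only: recreate a PBS datastore on a new instance from an
# existing S3 bucket. store1, the cache path, the s3 client id and
# the bucket name are placeholders; check the PBS docs for the exact
# --backend syntax on your version.
proxmox-backup-manager datastore create store1 /mnt/datastore/store1 \
    --backend type=s3,client=my-s3-client,bucket=my-bucket \
    --reuse-datastore true \
    --overwrite-in-use-marker true
```

The local cache path can start out empty; as noted below, the chunk cache is filled on demand from the bucket.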
 
You can recreate the datastore from the data in the S3 bucket. Just use the reuse-datastore and overwrite-in-use-marker flags when creating the datastore on the new instance. Also, the local storage is a cache; it does not need to hold all the chunks all the time.

Edit: Please see the docs for further details: https://pbs.proxmox.com/docs/storage.html#datastores-with-s3-backend

As far as I have read... let's say my cache is only 500GB and I use more than that...? I have already tried removing a folder, and it removes it from the PBS config.

And let's say I use "reuse-datastore": if I don't have enough space locally, how would that work?

Why did you not just design it as pass-through and avoid all these issues?
 
As far as I have read... let's say my cache is only 500GB and I use more than that...?
Old chunks will get evicted; it's an LRU (least-recently-used) cache.

I have already tried removing a folder, and it removes it from the PBS config.
From the cache? If so, yes: if you delete it from the local cache, it will not be shown anymore. Do an S3 refresh (you will find it in the datastore's contents tab, in the "more" dropdown) and it will be there again, since it was never removed from the S3 backend. Also, you should of course not manually edit the cache contents; that will mess up the state...

And let's say I use "reuse-datastore": if I don't have enough space locally, how would that work?
The chunk cache will get filled on demand; only the metadata (which is relatively small) needs to be persistent.

Why did you not just design it as pass-through and avoid all these issues?
Because of the reasons mentioned in the docs.