Hello,
My PBS has somewhere around 20,000+ backups, and listing the backups for a VM has become very slow; I usually get a connection timeout (596). The underlying storage is fast, and there is no CPU bottleneck.
The "pvesm list backup" command does work, but each time I run it, it takes around 3 minutes. Is it possible to speed this up? Each VM only gets a new backup once a week, so the data fetched isn't exactly outdated the next time I run it. I'm thinking of some smart caching, or maybe there is a worker count or some other setting I can adjust?
The API is also timing out.
X/api2/json/nodes/cluster-12-vm/storage/backup/content
Or would you instead recommend splitting the datastore, say into 10 separate datastores on each server, to keep each one smaller?