2020-07-23T00:38:52+02:00: sync group vm/3292 failed - Too many open files (os error 24)
2020-07-23T00:38:52+02:00: sync group vm/333 failed - last_successful_backup: unexpected error - EMFILE: Too many open files
2020-07-23T00:38:52+02:00: sync group vm/3333 failed - last_successful_backup...
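Not from the original post, but one way to check whether the PBS daemon is really running out of file descriptors (assuming a default install where the proxmox-backup-proxy service performs the sync) is:
grep 'open files' /proc/$(pidof proxmox-backup-proxy)/limits
systemctl edit proxmox-backup-proxy    # add a [Service] section with LimitNOFILE=65535, then restart the service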
While syncing backups to a second PBS, after a few hours this error appears:
2020-07-20T17:32:49+02:00: sync group vm/3631 failed - HTTP Error 401 Unauthorized: authentication failed - invalid ticket - timestamp too old.
First I thought it was because of a wrong clock, but now everything is...
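Since the ticket error complains about "timestamp too old", a quick way to rule out clock drift between the two servers (standard systemd tools, not part of the original post) is to compare on both hosts:
timedatectl status   # shows whether the system clock is NTP-synchronized
date -u              # compare the UTC time on both PBS servers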
FYI: the datastore statistics remain empty on both of my test PBS servers, each running on 6 SATA HDDs as RaidZ2 (created during install)
and maybe that works as intended: I had a brief moment of shock when I wanted to check whether last night's backup was available for my test VMs: the...
I tested remote sync today and set up a server similar to the PBS - first of all I ran into the same mistake as mentioned in another post:
on the PBS I configured the "remote" and afterwards the sync, which removed all my (test) backups within a second.
It must be clear that the remote and sync have...
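For anyone hitting the same mistake: a sync job runs on the server that should receive the backups and pulls from the configured remote into the local datastore, so pointing it the wrong way around can remove the local groups. A rough sketch of the intended direction (all names are placeholders and the exact option names are from memory, they may differ between versions):
proxmox-backup-manager remote create source-pbs --host source.example.com --userid sync@pbs --password 'SECRET' --fingerprint '<fingerprint of source-pbs>'
proxmox-backup-manager sync-job create pull-from-source --remote source-pbs --remote-store store1 --store store1 --schedule hourly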
OK - it worked: ...604.0 MiB dirty of 87.0 GiB total ... Backup took only 26s on an old and slow server - great!
Maybe the time between the first two backups was too short; it was 40 min, not an hour as mentioned before.
Please get the file browsing done and release for production, we were...
I just set up the first test backup server and made the first backup of a VM with an 87 GB disk - it works as intended and backed up the 87 GB.
One hour later I made a second backup, but all 87 GB were backed up again - what could be the issue?
INFO: using fast incremental mode (dirty-bitmap)...
We have the same issue with the same SSDs, CT1000MX500SSD1 (Crucial 1 TB).
I also noticed that SMART does not report many attributes for these SSDs:
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1...
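For reference, the truncated attribute table above is the kind of output smartctl prints; the device path here is only an example:
smartctl -a /dev/sda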
you can set
osd max backfills = 1
osd recovery max active = 1
in /etc/pve/ceph.conf
if not already done
and
ceph osd set noscrub
ceph osd set nodeep-scrub
on the command line - when the operation is done, activate them again by unsetting the same flags, as shown below.
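i.e. once recovery/backfill has finished:
ceph osd unset noscrub
ceph osd unset nodeep-scrub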