Then it's just a suggestion to mention this in the documentation since, as we all know, CT online migration is impossible, so such a restart requires a reboot of all the CTs.
How to restart lxcfs, or, rather, how to resurrect /proc on running containers?
If lxcfs gets restarted for one reason or another, all the CTs get choked:
# ps awuxf
Error: /proc must be mounted
To mount /proc at boot you need an /etc/fstab line like:
proc /proc proc defaults
In...
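Since, as noted above, the only clean way out seems to be rebooting the CTs, here is a minimal sketch of doing that for every running container (assumes PVE's pct tool; the pct list parsing is illustrative, adjust to your setup):

# reboot every running CT after an lxcfs restart (sketch)
for ctid in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    pct reboot "$ctid"
done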
You're right, "large" is subjective.
The one in question is 2.6 TB with about 18 million files.
There may be some relevant factors here:
full filesystem traversal time (has to be done in all cases)
stat()'ing the files (also has to be done to detect incrementality based on update time)
reading...
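To put rough numbers on the first two factors, a hedged sketch (path illustrative): plain find can often get by with readdir() alone, while -printf '%T@' forces a stat() on every entry:

time find /path/to/ct -xdev >/dev/null                  # traversal only
time find /path/to/ct -xdev -printf '%T@\n' >/dev/null  # traversal + stat()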
On my Large container the pbs backup takes 25 hours now. The same container with backuppc (incremental, rsync) takes about 90 minutes.
Backing up a VM image is fast.
(Non-incremental backuppc runs take 12-40 hours.)
It seems pbs could use a feature where files with old atime/mtime would be skipped.
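As a rough way to estimate what such a skip could save, one can count the files changed since the last backup (timestamp and path here are illustrative):

find /path/to/ct/rootfs -xdev -type f -newermt "2020-08-20 22:22" | wc -l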
It is not clear to me how the lxcfs error relates to the container config, but anyway: I have only unprivileged CTs, and I have had this error spamming for quite a while now.
Tracking it back, there was a reboot (due to various network problems and, eventually, getting fenced) and after reboot and start and...
Yes, and it usually fails first:
2020-09-07T00:00:32+02:00: TASK ERROR: Unable to open dynamic index "/mnt/datastore/pub/ct/106/2020-08-20T22:22:02Z/root.pxar.didx" - No such file or directory (os error 2)
Then it comes again in a minute and reports a lot of missing chunks. Probably killing...
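For reference, a quick way to confirm the index is really gone on the datastore side (path copied from the error above):

ls -l /mnt/datastore/pub/ct/106/2020-08-20T22:22:02Z/root.pxar.didx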
So, I have pruned all the backups for the specific container, and a whole [new] backup was created.
Started verification. It ran for a few hours, then started to find errors again, though far fewer in number.
I really wonder what deletes these chunks, and when, since it seems to consistently...
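One crude heuristic for narrowing down when chunks disappear: unlinking a chunk bumps the mtime of its .chunks subdirectory, so listing recently modified subdirectories (noisy, since insertions bump it too) can at least be correlated with GC and prune task times:

find /mnt/datastore/pub/.chunks -maxdepth 1 -type d -mmin -1440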
It seems that the problem is not in the container but in the method itself; I suspect that somewhere in the snapshotting the FS gets into an inconsistent snapshotted state and the backup complains. Visibly the inconsistent files are recent ones, and there is no reason to have any problem there, and a...
Apologies, I should have detailed what I meant. Let me:
There are two sides to the pbs+pve backup process: the pve sender side and the pbs receiver side.
On the sender side the total volume may be known, especially if vzdump made a snapshot, so percentages ("progress") can be calculated.
I don't...
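That is, once a snapshot fixes the total up front, the percentage is trivial (numbers illustrative):

echo $(( 500 * 100 / 1810 ))   # GB sent * 100 / GB total = 27 (%)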
Sure, and I'm pretty sure it will be fixed then.
That's the only way right now to fix it anyway....
Hmm, I am not sure this GC looks very promising though:
2020-09-01T21:32:46+02:00: Removed bytes: 8824296342
2020-09-01T21:32:46+02:00: Removed chunks: 4177
2020-09-01T21:32:46+02:00: Pending...
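For scale, that is about 8.2 GiB removed across 4,177 chunks, i.e. roughly 2 MiB per removed chunk on average.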
Interesting to read: a lot has changed, some things not at all.
My personal opinion: I still try to avoid ZFS since it still has that weird aura that some bugs have been unfixed for years now, and it's still an external kernel sidecar which still occasionally becomes uncompilable, and one has to wait for...
When you create a new, incremental backup and you have a chunk with hash H1 which is already known to the system, do you actually verify its existence on the disk or just reference it (based on some database of known chunks)?
I suspect that H1 was in the old backup and the file is missing...
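A hedged way to test that suspicion, going by the datastore layout visible in the logs (.chunks/<first four hex digits of the digest>/<full digest>): check whether a referenced chunk actually exists on disk (the digest below is a placeholder):

digest=fa39...   # placeholder: paste the full digest from the error message
test -f "/mnt/datastore/pub/.chunks/${digest:0:4}/${digest}" && echo present || echo missing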
So, there was the successful GC on 08-19 00:02.
Two minutes before that there was a
2020-08-19T00:00:00+02:00: starting garbage collection on store pub...
* what disks are used for the LVM? are they healthy?
Dell PERC 740p hardware RAID-6 array, 2 PVs, 1 VG/LV, 18TB in size. Everyone's healthy and happy.
* how big is the CT (rough estimate)
1.81 TB, give or take a few gigs.
* what big files are in there, or are there just so many small...
2020-08-25T10:21:57+02:00: WARN: warning: unable to access chunk xx, required by "/mnt/datastore/pub/ct/666/2020-08-21T02:33:47Z/root.pxar.didx" - update atime failed for chunk "/mnt/datastore/pub/.chunks/fa39/xxx" - ENOENT: No such file or directory
This has been killing GC (which is fixed in...
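For context, as the warning itself suggests, the failing step is just the atime bump GC uses to mark a chunk as still referenced; the manual equivalent (path from the warning above, digest part elided) would be the following, and ENOENT there simply means the chunk file is already gone:

touch -a /mnt/datastore/pub/.chunks/fa39/xxx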