Dear bennetgallein,
xattr support, or the lack of it, is a clear cause of the symptoms. In my case, however, it's a TurnKey container running Nextcloud with full text search enabled via Elasticsearch (and readonlyrest). When the backup fails, I have to reboot the container and then trigger a file scan and...
So, I tested it.
A simple
proxmox-boot-tool reinit
proxmox-boot-tool refresh
solved the problem!
Thank you very much! ☘️ ♂️ \\EDIT: before anyone takes offence at the male symbol: on my Android phone it was the waving guy. Email from Android to Outlook on Windows to Evolution on Linux...
Now I can answer it.
zpool get all | grep dnode
is not the same as
zfs get all | grep dnode
So if you use GRUB, you must never do this anywhere on rpool, because the flag then becomes activated for rpool and stays active, since very quickly a different dnode size...
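To make the difference concrete, here is a minimal sketch, assuming a pool named rpool: the two commands inspect different property sets, which is easy to miss because both lines grep for "dnode".

```shell
# Pool level: this matches the large_dnode *feature flag*.
# Once it flips from "enabled" to "active" it cannot be
# deactivated again, which is what may trip up GRUB.
zpool get all rpool | grep dnode

# Dataset level: this shows the dnodesize *property*
# (legacy/auto/1k/...), inherited by child datasets
# unless they set it explicitly.
zfs get -r dnodesize rpool
```

So `zpool get` tells you whether the pool has ever written a large dnode, while `zfs get` tells you which datasets are configured to do so.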
Hey, thank you very much, gladly; I just read the other thread.
I only have to work 20 hours today, then fall into a coma, and demolish my old flat over the weekend.
After that I'll try it right away and report back.
Sleep well!
I have a question, and even if it's a dumb one, I can't answer it with enough certainty myself. ;-)
The wiki says that with ZFS you should not set dnodesize=auto on rpool, because that causes problems when booting with GRUB.
My question now: may I set dnodesize=auto on rpool/data...
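A sketch of what limiting the setting to a child dataset would look like, assuming a standard PVE layout with the root filesystem under rpool/ROOT and guest volumes under rpool/data (names from the default installer; adjust to your pool):

```shell
# Leave the datasets GRUB has to read at their current dnode size.
zfs get dnodesize rpool/ROOT

# Set auto only on the subtree holding guest data, which GRUB
# does not need to read at boot time.
zfs set dnodesize=auto rpool/data

# Verify the property is set on rpool/data but not on rpool itself.
zfs get -r dnodesize rpool
```

Note that even then, the pool-wide large_dnode feature flag flips to "active" as soon as a large dnode is actually written, which is exactly the concern raised elsewhere in this thread.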
There are no other ESPs - but it could be that the system has been running since version 6.0 and was at some point migrated from GRUB to the Proxmox boot tool, and leftovers from GRUB may still be interfering, even though you don't see them at boot.
Next weekend I'll give it a try...
Dear Fabian,
thank you very much for your answer, and please excuse the delay.
Here are the outputs:
root@pve1:/boot/efi# bootctl status
System:
Firmware: UEFI 2.40 (American Megatrends 5.11)
Secure Boot: disabled
Setup Mode: setup
Boot into FW: supported
Current Boot Loader...
Thank you for your answers.
@fabian: That's what I figured too; it was just an attempt.
root@pve1:~# proxmox-boot-tool kernel list
Manually selected kernels:
5.13.19-6-pve
Automatically selected kernels:
5.15.30-2-pve
5.15.35-1-pve
Pinned kernel:
5.13.19-6-pve
root@pve1:~#...
Dear fellow sufferers,
maybe someone knows something enlightening.
For now I want to boot kernel 5.13.19-6, because with the 5.15 versions our Windows Server 2019 VMs keep crashing.
The problem now is that
proxmox-boot-tool kernel pin 5.13.19-6
has no effect and...
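One possible cause, sketched below as an assumption rather than a confirmed fix: the pin subcommand expects the exact version string as printed by `proxmox-boot-tool kernel list`, including the -pve suffix.

```shell
# Pin the exact kernel version as listed (note the -pve suffix).
proxmox-boot-tool kernel pin 5.13.19-6-pve

# Rewrite the boot loader entries on all registered ESPs
# (pin may already trigger this; running it explicitly is harmless).
proxmox-boot-tool refresh

# Verify: the version should now appear under "Pinned kernel:".
proxmox-boot-tool kernel list
```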
Me too, same thing.
Strange thing is the occurrence of patterns.
Backup runs at 0:30 a.m. and usually finishes by 3 a.m.
During the backup, machine 103 sometimes crashes.
By 6 a.m., machines 100 and 103 have always crashed.
Sometimes they crash again at 11 a.m.
Machine 102 hardly ever crashes...
I have been observing the problem for some time now.
It's not the ACLs: I don't use them, and disabling ACLs on the root fs doesn't solve the problem either.
Still, 3 out of 5 backup runs succeed and 2 out of 5 fail.
Still no clue.
Is this really not a common issue? I didn't set up any fancy tinker stuff, it's all pretty standard. ;-)
I'd pay a Green Manalishi for your two cents. ;-) Thank you, that's great. I'm going to try this out. Over the past three days, the suspend backup fallback worked on that container. Like everyone, I just love spontaneous errors without any logging, but thanks to you I sleep well now. ;-)
So, after a while, it turns out as follows: once or twice a week the daily backup in question works; the rest of the week it fails.
The rsync logs don't show any errors. I still don't have a clue, so I just worked around it by setting up a stop mode backup for that specific container.
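For reference, a sketch of such a per-container workaround when invoking vzdump directly; container ID 108 and the storage name are assumptions based on this thread, adjust to your setup:

```shell
# Back up container 108 in stop mode: the container is shut down,
# backed up in a consistent state, and started again afterwards.
vzdump 108 --mode stop --storage local --compress zstd
```

The same effect can be had in the GUI by creating a separate backup job for that one container with mode set to "Stop".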
Happy new year to...
OK, my fault, I'm fine with suspend mode if it works. I have always had containers in directories on ZFS storage. Never mind.
But backing up container 108 in suspend mode has only worked once since upgrading Proxmox (which is when I wrote the above).
The new normal is that I have to stick to stop mode for that...
Very strange: the problem partially disappeared on its own, by advanced wizardry and shady magic.
Container 108, with that suspend mode failure, finally completed a suspend mode backup successfully tonight.
What still remains is that no other container is able to run backups in snapshot mode (due to...
So, I tested restoring backups. No solution. What I did exactly: restored containers from backups, turned off protection in case something needed to be upgraded, reinstalled 'pve-container', and rebooted the node.
Something must have changed concerning backups, because they are much faster now...
Just in case it is relevant: apt install --reinstall libpve-storage-perl did NOT revert the changes in /usr/share/perl5/PVE/VZDump/LXC.pm
Maybe some files that should have been replaced during the upgrade to 7.1 were not upgraded?
It's just a wild, fairly uneducated guess. ;-)
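One way to check that guess with standard Debian tooling, a sketch assuming dpkg is available as on any PVE node:

```shell
# List installed files whose checksum differs from what the
# package shipped; modified files show a "5" in the output.
dpkg -V libpve-storage-perl

# Also check which package actually owns the file in question:
# if it belongs to a different package (e.g. pve-container),
# reinstalling libpve-storage-perl would not touch it.
dpkg -S /usr/share/perl5/PVE/VZDump/LXC.pm
```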
Good morning (and thanks Dunuin for joining us)!
I already grepped for "error" and "fail" - without success.
Here's a non-mp0-mount config:
And here's /etc/pve/storage.cfg :
I'm going to try restoring containers from backups to see whether that changes anything for the better. ;-)
Another thing I only noticed now: snapshot mode no longer works for any of the other containers. They all fall back to suspend mode.
Before upgrading to PVE 7.1, all containers except the one with that bind mount worked in snapshot mode.
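For anyone comparing: vzdump can only use snapshot mode when the container's volumes live on a snapshot-capable storage. A sketch of what such an entry in /etc/pve/storage.cfg might look like (names are illustrative, not taken from my config):

```
zfspool: local-zfs
        pool rpool/data
        content rootdir,images
```

Containers on a plain directory storage (type `dir`), even if the directory sits on ZFS, have no snapshot support and will fall back to suspend mode.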