Use case is immediately cleaning up a huge /var/log/proxmox-backup folder that was filling up the root FS, after adding "task-log-max-days" to /etc/proxmox-backup/node.cfg.
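Just for reference, a minimal sketch of what that entry could look like in node.cfg, assuming the usual key/value syntax; the 30 days are only an example value:
task-log-max-days: 30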
No, we never really solved this. For other reasons we had to completely reinstall the entire Proxmox cluster. Since then the problem has not come back, up to today.
Good that we talked about it... sometimes you just don't come up with the better alternative yourself :D
Inspired by this: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#RSTP_Loop_Setup our 3 Proxmox nodes are cabled directly to each other with 2 rings of 40 GBit each, one ring for...
The background is that the network card Ceph currently runs over has to be taken out of the servers, so Ceph has to be moved onto another network card that is already used for other (smaller) things and whose IP we cannot / should not change. But maybe we will simply put the...
And if I understand the Ceph docs correctly, the mons find each other via this internal monmap, which is not regenerated when ceph.conf is changed. So restarting the mon services should not be enough either, right? At least we also tried that when we made the above...
Thanks a lot in advance for the hint.
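For what it's worth, the monmap the mons actually use can be inspected directly, independent of what ceph.conf says; a small sketch, where the mon ID "pve1" is just a hypothetical example:
ceph mon dump                                # the monmap as the cluster currently sees it
ceph mon stat                                # short quorum summary with mon addresses
# the on-disk copy of a stopped mon can be dumped as well:
ceph-mon -i pve1 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap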
Looking at the Ceph docs on this, the way to go in this case should be:
1) Add new mons with the new IPs
2) Remove the old mons
3) Adjust ceph.conf so that the clients find the new mons
Via the Proxmox GUI it is possible to...
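On the CLI, one rough way to do steps 1-3 node by node would be the following sketch; it re-creates each mon in turn instead of running old and new mons in parallel, the node/mon ID "pve1" and the address 10.10.10.1 are just examples, and it assumes public_network in ceph.conf already points to the new subnet:
pveceph mon destroy pve1                     # remove the old mon from the monmap
pveceph mon create --mon-address 10.10.10.1  # re-create it on the new network
ceph quorum_status --format json-pretty      # confirm all mons are back in quorum
grep mon_host /etc/pve/ceph.conf             # clients need the new addresses here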
@Max Carrara Thx a lot for this finding in the docs. Now it gets clearer why there is some space "FREE" but "No space left".
As can be read above, in the meantime we were able to free enough space to make the GC job run again (it is still running at the time of writing).
This PBS is the backup target for all productive / critical VMs of a connected productive PVE cluster. Since we urgently need it running again and no one seemed to have an idea, we did the following in the meantime:
-> Moved away ~30-40 of the oldest directories in ".chunks" to another FS and so...
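A rough sketch of that kind of emergency move, purely as an illustration; the target path "/mnt/other/chunks-parked" and the count of 40 directories are assumptions:
mkdir -p /mnt/other/chunks-parked
cd /mnt/datastore/backup/.chunks
ls -1tr | head -n 40                         # the ~40 least recently touched chunk dirs
ls -1tr | head -n 40 | xargs -I{} mv {} /mnt/other/chunks-parked/
# note: the chunks in these directories are still referenced by existing backups,
# so they have to be moved back before restores or verifies can succeed again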
I just noticed this:
root@pbs:/mnt/datastore/backup# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
backup 6828558 6828558 0 100% /mnt/datastore/backup
Could this be an explanation?
I'm wondering whether ZFS really has inodes in the same way ext4 does...
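As far as I know, ZFS has no fixed inode table like ext4; the numbers df -i shows are derived on the fly from the objects in use and the remaining free space, which is why IFree drops to 0 on a completely full pool. A quick way to compare both views (using the dataset name "backup" from the output above):
df -h /mnt/datastore/backup                  # byte view: 100% used
df -i /mnt/datastore/backup                  # inode view: IFree derived from free space
zfs list -o name,used,avail,refer backup     # ZFS's own space accounting for the dataset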
After googling around, another thing we tried was this (rough sketch after the list):
-> copy an old dir under .chunks to another FS
-> "cat /dev/null >..." to all files in that dir
-> rm all files in that dir
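Just to make the above concrete, a minimal sketch of what we did; the directory name "0a1b" and the target path "/mnt/other" are hypothetical examples:
SRC=/mnt/datastore/backup/.chunks/0a1b       # one old chunk directory (example name)
cp -a "$SRC" /mnt/other/0a1b                 # keep a copy on another filesystem first
for f in "$SRC"/*; do
    : > "$f"                                 # truncate every chunk file to zero bytes
done
rm -f "$SRC"/*                               # then remove the now-empty files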
This didn't help either, the FS remains 100% full... and on top of that, what we noticed and find even more confusing:
Before this step...
Hi everybody,
on one of our PBS instances the backup storage, which is a ZFS, is completely full, so the GC job fails:
In the server all physical ports are already in use, so we can't simply add more HDDs to extend the pool.
In theory we just need a few megs of space so the GC job can do its work... we even tried...
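For context, generic things one can check on a completely full pool to win back a few megabytes, purely as a sketch and assuming the pool is simply named "backup":
zpool list backup                                    # overall capacity and free space
zfs list -t snapshot -o name,used -s used -r backup  # snapshots and the space they hold
zfs get refreservation,reservation -r backup         # reservations that could be lowered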
So to finish off this thread, here is the current situation:
First of all, we never really found out what was going wrong in the Proxmox/Ceph cluster. What we have done up to today is this:
Took an older HP ProLiant server and installed Proxmox on it
Migrated all VMs and their data to this single server...