Hey, I configured a backup job at the Datacenter level and set "keep-last=1", but it is ignored; backups keep piling up.
Each affected node's log just states:
2024-03-18 21:05:07 INFO: prune older backups with retention: keep-last=1
2024-03-18 21:05:07 INFO: pruned 0 backup(s)
What could be...
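In case it helps anyone who lands here: retention can be configured both on the backup job and on the storage, so it may be worth checking both places. A rough sketch of what I would look at (the storage name "local" is just an example, not from the poster's setup):

cat /etc/pve/jobs.cfg                        # job-level prune-backups settings
grep prune-backups /etc/pve/storage.cfg      # storage-level retention, if any
pvesm prune-backups local --dry-run          # what would actually be pruned on storage "local"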
Where do I find this config? The setup is the default, with a single physical eth0.
This only seems to affect Debian CTs; I tried releases 11 and 12. I don't have VMs atm.
The problem only appears after 14-20 h of uptime.
If I manually run dhclient eth0, the network becomes operational again...
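A crude interim workaround sketch, assuming the lease simply stops being renewed (untested, and the interval is arbitrary); an /etc/cron.d entry on the affected machine:

# re-run dhclient if the default route has disappeared
*/5 * * * * root ip route | grep -q default || /sbin/dhclient eth0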
Thanks for the response, but I have since performed a full reinstall, so I can't investigate this anymore. I still marked the thread as "solved", although it would be useful to know how to remedy this kind of situation.
Sure @Moayad, here it is
The container is Debian 12 Standard, kernel 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64
Note that, seemingly, only Debian containers need DHCP IP settings; static doesn't work (unlike Alpine containers).
I haven't seen this problem with Debian 11 yet...
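For comparison, this is roughly how a static IP is set on a CT from the host; the VMID, addresses and bridge name below are placeholders:

pct set 101 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1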
All of my Debian Standard containers lose their internet connection 1-2 times a day. They can ping local IPs, but not internet ones. Only a reboot fixes it.
journalctl shows just this:
E: Sub-process /lib/systemd/systemd-networkd-wait-online returned an error code (1)...
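Next time it drops, it might be worth checking from inside the container whether systemd-networkd still considers the interface configured; a sketch, assuming a networkd-managed Debian CT:

systemctl status systemd-networkd     # is the daemon still running?
networkctl status eth0                # link state and addresses as networkd sees them
ip route                              # is the default route still present?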
/etc/pve is empty, probably because the node can't connect to the cluster (as per forum searches for the same problem).
I've posted parts of systemctl status pve-cluster above; the full output is:
pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service...
Yes. The only unique lines in journalctl -b are:
localhost pveproxy[1644]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 2009.
localhost cron[1120]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)
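Both of those errors fit /etc/pve not being mounted: pve-ssl.key and the vzdump cron symlink both live on the pmxcfs fuse filesystem. A sketch of what I would check:

systemctl status pve-cluster          # pmxcfs is what provides /etc/pve
journalctl -u pve-cluster -b          # why it failed on this boot
mount | grep /etc/pve                 # is the fuse mount there at all?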
Using the latest Proxmox, I changed the system time (backwards), and after a restart pve-cluster fails to start. I assume the time change is the problem, as it is the only change I made that could cause this.
systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 5...
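If the backwards clock jump really is the trigger, one recovery path that is sometimes suggested is to fix the time first and only then restart the service; a sketch, assuming NTP is reachable:

timedatectl set-ntp true              # let the time daemon correct the clock
systemctl restart pve-cluster         # then retry starting pmxcfs
journalctl -u pve-cluster -b --no-pager | tail -n 50   # check whether it came up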
Just a side note: I worked around this issue without UID mapping.
As stated in the OP, I just wanted to share bind mounts between containers without the possibility that files become unavailable or read-only for some containers. This doesn't require specific users, just preset rights. So I added...
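Since the post is cut off here: one way to get preset rights on a shared bind mount without mapping users is a group-owned directory with the setgid bit and a default ACL; a hypothetical sketch (path, GID and CT number are made up, not necessarily what the poster added):

# on the host: group-own the share, setgid so new files inherit the group
chgrp -R 5000 /tank/share
chmod -R 2775 /tank/share
setfacl -d -m g::rwx /tank/share      # new files in the top dir also get group rwx

# bind-mount it into a container (CT 101, mounted at /mnt/share)
pct set 101 -mp0 /tank/share,mp=/mnt/share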
I didn't have such problems (using the stock heatsink); I also ran some stress tests back then. A typical personal homelab server rarely runs at 100% CPU for extended periods, so this is unlikely to be a big issue; it depends on the use case.
There are downsides to enterprise products: the hardware might be more exotic, and there might be less support (for a private user) if anything goes wrong, at least in terms of googleability.
Two options could be:
A J4501-based motherboard + ITX case. It's not the latest, but still powerful enough for most...
I'm trying to do a very typical UID mapping. I have user ID 5000 on both the guest and the host.
Just for reference, I'm adding my config; it is valid because it works. Bind-mounted files owned by UID 5000 show up in both guest and host as the respective local user.
lxc.idmap: u 0 100000 5000
lxc.idmap...
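Since the snippet is truncated: a complete mapping that passes uid/gid 5000 through 1:1 and leaves the rest in the usual 100000+ range would typically look like this (a reconstruction, not necessarily the poster's verbatim config):

lxc.idmap: u 0 100000 5000
lxc.idmap: g 0 100000 5000
lxc.idmap: u 5000 5000 1
lxc.idmap: g 5000 5000 1
lxc.idmap: u 5001 105001 60535
lxc.idmap: g 5001 105001 60535

For the g lines to be allowed, /etc/subgid needs the same root:5000:1 entry that /etc/subuid has.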
This is my sub{u,g}id config; share is the host's UID 5000 user:
root:100000:65536
share:165536:65536
root:5000:1
I wouldn't have a big problem doing this individually per container, since I would only have to do it once, as long as it works...
Yes, the worst case is SMB, but...