Recent content by poisonborz

  1. Backup retention ignored (not using PBS)

    Hey, I configured a backup job on the Datacenter node and set "keep-last=1", but it is ignored - backups keep piling up. Each affected node's log just states 2024-03-18 21:05:07 INFO: prune older backups with retention: keep-last=1 2024-03-18 21:05:07 INFO: pruned 0 backup(s) What could be...
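    For anyone debugging the same symptom: retention can be defined both on the backup job and on the target storage, so comparing the two is a common first check. A minimal sketch of the storage-level form in /etc/pve/storage.cfg (the storage name and path here are illustrative, not from the post):

    ```
    dir: local
            path /var/lib/vz
            content backup
            prune-backups keep-last=1
    ```

    A leftover keep-all=1 or legacy maxfiles entry on the storage is worth looking for when prune settings appear to be ignored.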
  2. Debian containers regularly lose internet connection, require reboot to restore

    Where do I find this config? The setup is the default, with a single physical eth0. This only seems to affect Debian CTs - I tried releases 11 and 12; I don't have VMs atm. The problem only appears after 14-20 h of uptime. If I manually run dhclient eth0, the network becomes operational again...
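    Until the root cause is found, the manual dhclient fix described above can be automated. A crude sketch, assuming cron is available inside the container and using eth0 from the post (the 6-hour schedule is an arbitrary choice, picked to stay well under the 14-20 h failure window):

    ```
    # /etc/cron.d/dhcp-refresh - periodically re-run dhclient (workaround, not a fix)
    0 */6 * * * root /sbin/dhclient eth0
    ```

    This only papers over the problem; it does not explain why the lease handling fails in the first place.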
  3. [SOLVED] PVE-Cluster fails to start after time change

    Thanks for the response, but I have simply performed a full reinstall since then, so I can't research this anymore. I marked the thread as "solved" anyway, although it would be useful to know how to remedy this kind of situation.
  4. Debian containers regularly lose internet connection, require reboot to restore

    Sure @Moayad, here it is. The container is Debian 12 Standard, kernel 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64. Note that seemingly only Debian containers need DHCP IP settings - STATIC doesn't work (unlike with Alpine containers). I couldn't see this problem yet with Debian 11...
  5. Debian containers regularly lose internet connection, require reboot to restore

    All of my Debian Standard containers lose internet connection 1-2 times a day. They can ping local IPs, but not internet ones. Only a reboot solves it. Journalctl shows just this: E: Sub-process /lib/systemd/systemd-networkd-wait-online returned an error code (1)...
  6. [SOLVED] PVE-Cluster fails to start after time change

    /etc/pve is empty, probably because the node can't connect (as per forum searches for the same problem). I've posted parts of the pve-cluster status above; the full output is pve-cluster.service - The Proxmox VE cluster filesystem Loaded: loaded (/lib/systemd/system/pve-cluster.service...
  7. [SOLVED] PVE-Cluster fails to start after time change

    Yes. The only unique lines in journalctl -b are: localhost pveproxy[1644]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 2009. localhost cron[1120]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)
  8. [SOLVED] PVE-Cluster fails to start after time change

    Sadly it doesn't seem so: ipcc_send_rec[1] failed: Connection refused ipcc_send_rec[2] failed: Connection refused ipcc_send_rec[3] failed: Connection refused Unable to load access control list: Connection refused
  9. [SOLVED] PVE-Cluster fails to start after time change

    Using the latest Proxmox, I changed the system time (to an earlier one) - I assume this is the problem, as it is the only change I made that could cause it - and after a restart pve-cluster fails to start. systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 5...
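    For anyone landing here with the same symptom: after a backwards clock jump, a reasonable first step is to restore correct time and restart the cluster services before digging deeper. A sketch using standard systemd/Proxmox tooling, to be run as root on the affected node:

    ```
    timedatectl set-ntp true           # resync the clock via NTP
    systemctl restart pve-cluster      # restart pmxcfs, which provides /etc/pve
    systemctl restart pveproxy pvedaemon
    journalctl -u pve-cluster -b       # verify /etc/pve is mounted again
    ```

    If pmxcfs still refuses to start, the journal output from the last command is the place to look.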
  10. Adding an LXC id mapping reverses the ownership of all user files inside the container

    Just a side note: I worked around this problem without UID mapping. As stated in the OP, I only wanted to share bind mounts between containers, without the possibility that files become unavailable or read-only to some containers. This doesn't require specific users...
  11. Valid LXC UID mapping makes Guest /home folder owned by nobody

    Just a side note: I circumvented this issue without UID mapping. As stated in the OP, I just wanted to share bind mounts between containers, without the possibility that files become unavailable or read-only to some containers. This doesn't necessitate specific users, just preset rights. So I added...
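    The "preset rights" approach can be sketched as follows, with a hypothetical host path /tank/share and container IDs 101/102 (none of these names are from the post). A world-writable setgid directory keeps files usable from every container regardless of which unprivileged UID created them:

    ```
    # on the host: permissive mode + setgid so new files inherit the group
    chmod 2777 /tank/share

    # bind-mount the directory into each container
    pct set 101 -mp0 /tank/share,mp=/mnt/share
    pct set 102 -mp0 /tank/share,mp=/mnt/share
    ```

    The trade-off of this design is that file ownership inside the containers stays meaningless; access control comes entirely from the directory mode.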
  12. Building a silent/fanless server

    I didn't have such problems (using the stock heatsink) - I also ran some stress tests back then. A typical personal homelab server rarely runs at 100% CPU for extended periods, so this shouldn't be a big issue; it depends on the use case.
  13. Building a silent/fanless server

    There are downsides to enterprise products - the HW might be more exotic, and there might be less support (for a private user) if anything goes wrong - at least in terms of googleability. Two options could be: a J4501-based motherboard + ITX case. It's not the latest, but still powerful enough for most...
  14. Valid LXC UID mapping makes Guest /home folder owned by nobody

    I'm trying to do a very typical UID mapping. I have user id 5000 on both the guest and the host. Just for reference I'm adding my config - it is valid, because it works. I see bind-mount files on guest/host owned by the #5000 user as their respective users in guest/host. lxc.idmap: u 0 100000 5000 lxc.idmap...
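    Since the excerpt cuts the config off, here is the full shape such a mapping usually takes - an illustrative reconstruction of the standard pattern, not necessarily the poster's exact file. Container uid/gid 5000 is passed through 1:1 and the rest of the range is shifted as usual:

    ```
    # /etc/pve/lxc/<vmid>.conf - map container uid/gid 5000 to host uid/gid 5000
    lxc.idmap: u 0 100000 5000
    lxc.idmap: g 0 100000 5000
    lxc.idmap: u 5000 5000 1
    lxc.idmap: g 5000 5000 1
    lxc.idmap: u 5001 105001 60535
    lxc.idmap: g 5001 105001 60535
    ```

    For the container to start, root on the host also needs id 5000 delegated in /etc/subuid and /etc/subgid (a root:5000:1 line in each).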
  15. Adding an LXC id mapping reverses the ownership of all user files inside the container

    This is my sub{u,g}id config - share is the host #5000 user: root:100000:65536 share:165536:65536 root:5000:1 I wouldn't have a big problem doing this individually per container - since I'd only have to do it once - as long as it works... Yes, the worst case is SMB, but...

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
