Search results

  1. fstrankowski

    Update & Risk Management Best Practices? How to ensure real HA between Clusters

We do use the Enterprise subscription on all of our clusters. That has nothing to do with what I was looking for in this thread, though.
  2. fstrankowski

    Update & Risk Management Best Practices? How to ensure real HA between Clusters

My god, this is an excellent idea. Does the POM support mirroring Debian base packages as well, not just Proxmox packages? That way we would not run into problems when using a Proxmox snapshot from the POM but installing more recent Debian base updates alongside it. Is it possible to specify a snapshot ID on our...
  3. fstrankowski

    Update & Risk Management Best Practices? How to ensure real HA between Clusters

Hey guys, today in one of our standup meetings we thought about improving our update strategy. Currently we run multiple clusters on different versions of Proxmox. So far, to ensure our HA services are always running, we update one cluster at a time and let it run to ensure it's stable...
  4. fstrankowski

    Proxmox / Ceph / Backups & Replica Policy

Hello everyone! We've recently upgraded our backbone to 50G and have made some interesting findings in our (3-node) cluster. We're running the latest Proxmox 8.3 with Ceph 18.2. The Ceph VM pool is configured with 3x replication across all 3 nodes (so one copy resides on each node). When we're...
  5. fstrankowski

    Proxmox 7.3.3 / Ceph 17.2.5 - OSDs crashing while rebooting

If you had taken a look at the original thread, you would have noticed that I encountered this error with Quincy. Currently we're running Reef, with exactly the same thing happening. Also, your initial response to this thread shows that you were using Quincy as well.
  6. fstrankowski

    [SOLVED] Chinese File in my backup folder. Have i been compromised or was it a kernel panic?

That's been my thought as well. I'll close this thread and just delete the file in question. Thanks everyone!
  7. fstrankowski

    [SOLVED] Chinese File in my backup folder. Have i been compromised or was it a kernel panic?

Hey! I've recently had an issue with one of my PBS instances. In short: I'm running a PBS at Hetzner which has a Storage Box mounted via CIFS under its own user: PBS <-> CIFS <-> Storage Box. Then, yesterday, in the root directory of one of my storages, I found a weird file with Chinese characters...
  8. fstrankowski

    Update-Problem: "You are attempting to remove the meta-package 'proxmox-ve'!"

@fabian has already solved it. Updates are now possible again without any problems. Edit: Fabi already replied in the time it took me to grab a coffee and answer :)
  9. fstrankowski

    Upgrading PVE Tries to Remove proxmox-ve package

This is not the way to "fix" that error. Not upgrading is the better choice. The "pvetest" repository is not meant for any live workload, only for development purposes. So 99.9% of the users in here are better off to NOT UPGRADE AT THE MOMENT.
  10. fstrankowski

    Proxmox HA & "Start at boot"

Hi! We use HA for our VMs. According to the documentation, the "Start at boot" option is not used once a VM is managed by HA. What has happened now: when a machine was moved to a new host due to problems, but is still present on another host (a kind of...
  11. fstrankowski

    [SOLVED] [TAGS] Datacenter wide tag color override not applied to vms/lxc

You've pointed me in the right direction. Even though we only use UPPERCASE tags, which are unique, we still have to select the "Case-Sensitive" option. This is kind of misleading, but I'm happy that we've found it out. P.S.: By default, even though you only...
  12. fstrankowski

    [SOLVED] [TAGS] Datacenter wide tag color override not applied to vms/lxc

While testing the 'new' possibility of setting up tags for different categories, we stumbled upon the fact that when we set up tags at the datacenter level and color-override them accordingly, those tags are then available for selection on LXC/VMs, but the predefined colors are not correctly...
  13. fstrankowski

    [SOLVED] Stuck in "menu timeout set to"

    Found the answer in the archives: See here > https://forum.proxmox.com/threads/hot-to-modify-timeout-in-systemd-boot-menu-its-too-long-5-102-000-sec.124109/#post-544152
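For reference, the fix described in the linked thread boils down to setting an explicit boot-menu timeout for systemd-boot. A minimal sketch, assuming a default ESP layout (the path may differ, and on Proxmox the ESP contents are usually managed via proxmox-boot-tool, so check your setup first):

```
# loader/loader.conf on the EFI system partition (path is an assumption)
timeout 3
```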
  14. fstrankowski

    [SOLVED] Stuck in "menu timeout set to"

I have just set up two new "Tiny Mini Micro" servers (HP EliteDesk 800 G4 / i5-8500) and installed Proxmox. Identical systems. One system is stuck at the bootloader screen. Problem: the "menu timeout set to" counter is constantly counting up (increasing) without any limit. When pressing any key, the...
  15. fstrankowski

    [SOLVED] Proxmox 8 | Sporadic ICMP / Scheduler Problems

I've found the problem: the reason for the behavior was a faulty DNS server in the path. This caused reverse name resolution for the IPv6 addresses to take longer than 1 second, which caused the problem shown above.
  16. fstrankowski

    [SOLVED] Proxmox 8 | Sporadic ICMP / Scheduler Problems

I've set up a fresh Proxmox 8 box at Hetzner DC and have a new kind of problem with it that I've never had before. After setting up the box, I would ping outward hosts from time to time to check connectivity and link latency - which is fine. But while pinging servers, waiting for the reply made...
  17. fstrankowski

    Proxmox 7.3.3 / Ceph 17.2.5 - OSDs crashing while rebooting

So we can at least say that our problem is unrelated to the kernel version. You're running 6.1 while we're on 5.15. Same issue on both systems.
  18. fstrankowski

    Proxmox 7.3.3 / Ceph 17.2.5 - OSDs crashing while rebooting

We recently (yesterday) updated our test cluster to the latest PVE version. While rebooting the system (the upgrade finished without any incidents), all OSDs on each system crashed: ** File Read Latency Histogram By Level [default] ** 2023-01-30T10:21:52.827+0100 7f5f16fd1700 -1 received...
  19. fstrankowski

    Ceph 17.2 Quincy Available as Stable Release

You're indeed correct. So in the long run it could be an idea to develop a ceph-ansible/cephadm-inspired, proprietary Proxmox approach to automatically calculate and adjust osd_memory_target values. Wdyt? That's why I've been referring to it using values between 0.1 and 0.2 ;-)
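The auto-calculation idea from that post could be sketched as follows. This is only an illustration of the suggested heuristic, not an existing Proxmox or Ceph API: the function name and the clamping bounds (Ceph's documented 896 MiB minimum and its 4 GiB default, used here as an assumed upper clamp) are my own choices; the 0.1-0.2 fraction range comes from the post itself.

```python
def calc_osd_memory_target(total_ram_bytes: int, num_osds: int,
                           fraction: float = 0.15) -> int:
    """Split a fraction of host RAM evenly across OSDs.

    fraction: share of total RAM reserved for Ceph OSDs
              (the post suggests something between 0.1 and 0.2).
    """
    if not 0.1 <= fraction <= 0.2:
        raise ValueError("fraction outside the suggested 0.1-0.2 range")
    per_osd = int(total_ram_bytes * fraction / num_osds)
    # Clamp: Ceph refuses very small osd_memory_target values (min ~896 MiB);
    # 4 GiB is the upstream default, assumed here as a reasonable ceiling.
    floor = 896 * 1024 ** 2
    ceil = 4 * 1024 ** 3
    return max(floor, min(per_osd, ceil))

# Example: 128 GiB host with 8 OSDs at 15% of RAM -> ~2.4 GiB per OSD
target = calc_osd_memory_target(128 * 1024 ** 3, 8)
```

The resulting value would then be applied per OSD, e.g. via `ceph config set osd osd_memory_target <value>`.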