Search results

  1.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Still running it as of the latest post, now with PVE 8.1.4 - Glowsome
  2.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Hi there, Again going top-down as to your questions: if a node crashes or is poison-pilled/STONITH'ed, the rest continues to function without issues afterwards. The crashed node gets removed from the lockspace and is thus no longer a part of it. I have tested it by just hard-resetting a node, and...
  3.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Hi there, To go top-down in answering your questions: No. It's stable as far as I can tell; I have not had any issues with it going down on me, nor locks, nor FS corruption. (I mean, if I were having the above, I would have searched for solutions and reported/updated the tutorial I wrote.) As you...
  4.

    LXC backup stuck on "starting final sync"

    Turns out it was the FUSE feature being in use on the LXC. As soon as I took it off - or downed the affected host - after reading forum posts and docs on the issues with it - backups went fine. For reference as to where I got my answers: this forum post. In my case we are talking about a...
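
    The FUSE feature referred to above is toggled per container. A minimal sketch of where it lives in the container's config file (the VMID 101 is hypothetical; `features` is the PVE LXC option in question):

    ```
    # /etc/pve/lxc/101.conf (excerpt; VMID 101 is illustrative)
    # Removing fuse=1 from the features line disables FUSE inside the
    # container, which is what resolved the stuck backups described above.
    features: fuse=1,nesting=1
    ```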
  5.

    Roadmap for integration with Ansible

    For your info, I have not yet gone in depth regarding actual guest management. I just require (for now) managing the nodes of my cluster. - Glowsome
  6.

    LXC backup stuck on "starting final sync"

    I am experiencing the same on a (just-created) new LXC container. Running the latest PVE 8: proxmox-ve: 8.0.2 (running kernel: 6.2.16-15-pve) pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390) pve-kernel-6.2: 8.0.5 proxmox-kernel-helper: 8.0.3 pve-kernel-5.15: 7.4-4 pve-kernel-5.13...
  7.

    PVE8.04: After upload of images temp file was not removed - filled up root filesystem

    Hello, see below for the output of pveversion -v: proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve) pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390) pve-kernel-6.2: 8.0.5 proxmox-kernel-helper: 8.0.3 pve-kernel-5.15: 7.4-4 proxmox-kernel-6.2.16-6-pve: 6.2.16-7 proxmox-kernel-6.2...
  8.

    PVE8.04: After upload of images temp file was not removed - filled up root filesystem

    Conditions:
    - PVE 8.04 / latest
    - ISO/templates are mounted as a separate LVM volume under /data/iso (seen as ISO storage in the PVE UI)
    - logged in as a federated OIDC user (with Enterprise admin privileges)
    Behavior experienced:
    - uploaded a new ISO to /data/iso
    - the ISO was correctly added
    - temp...
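
    The leftover temp file can also be hunted down by hand. A sketch, assuming the upload temp files live directly under /var/tmp with a `pveupload-` prefix (path and prefix are assumptions, not taken from the post above; inspect before deleting anything):

    ```shell
    # List candidate upload temp files older than an hour that may be
    # filling the root filesystem; prints matching paths, one per line.
    find /var/tmp -maxdepth 1 -name 'pveupload-*' -mmin +60 -print
    ```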
  9.

    Workaround to Cluster not ready no quorum (500) permanently!

    Reading your situation, I do not see a risk, as you are placing the firewall outside of / in front of the whole cluster setup. Meaning you are not running into an infinite loop where your pfSense (as a cluster resource) is not up, but needs to be up for all nodes to reach quorum.
  10.

    Bug: Create Bond - invalid name bond-primary

    FYI: I've filed it as a bug: https://bugzilla.proxmox.com/show_bug.cgi?id=4911
  11.

    Bug: Create Bond - invalid name bond-primary

    I will file a report about this, as IMHO if a field/option is optional, then it should be optional in the GUI as well, and not force someone (less skilled) into manually making these changes on the command line. They should be able to create a Linux bond via the GUI. As you point out ... if one has...
  12.

    Bug: Create Bond - invalid name bond-primary

    The thing I stumbled over is that, if I interpret your reply correctly, the GUI treats the field as mandatory rather than optional. IMHO (and that was why I opened the thread) it should be an optional field there as well.
  13.

    Workaround to Cluster not ready no quorum (500) permanently!

    In essence, on a cluster you do NOT want to run something that handles your routing. In my opinion: for the love of god, and for peace of mind, set up your router/firewall as hardware outside your cluster. In a non-clustered environment it's all fine; clustered, you don't want the firewall on it. -...
  14.

    Bug: Create Bond - invalid name bond-primary

    While installing a new Proxmox box I ran into the following issue when trying to create a Linux bond. Now, in previous installs I created the bond via the command line by editing /etc/network/interfaces:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100...
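
    For reference, a complete sketch of what the command-line version can look like in /etc/network/interfaces, assuming interface names eno1/eno2 and a vmbr0 bridge with an illustrative address (all names and addresses are placeholders). Note that bond-primary is optional here, which is the crux of the GUI bug reported above:

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        # bond-primary eno1   # optional on the CLI; the GUI wrongly requires it

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
    ```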
  15.

    Cluster frozen after one node / VM behaved badly

    This just happened: during a backup, one node of my cluster (Bookworm / 8.0.4 / latest-release Proxmox) showed weird behavior that effectively blocked my whole cluster: Message from syslogd@node01 at Aug 14 01:11:59 ... kernel:[1233000.842855] watchdog: BUG: soft lockup - CPU#33 stuck for 1863s...
  16.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Hi, this might be a late reply, but if I read correctly, then dropping an addition in: This means you are not adding, but actually overriding. (For reference) see ...
  17.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    This whole thing started as a project to get around limitations I encountered. And as a whole, OCFS2 is Oracle's, and after they took over MySQL, I dread being locked in by using it. By its nature it grew... and grew... and maybe, yes, it has turned into a beast... but still, all in...
  18.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    A little thing I encountered in my setup: after a reboot (needed after upgrading PVE 7 -> PVE 8) I got errors from DLM, which basically meant it was getting blocked (sk_err 110/0 was seen on the consoles). Then I remembered I had tightened security by turning on firewalls on the individual...
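
    If the node firewalls turn out to be the culprit, the fix is to allow DLM traffic between the cluster nodes. A sketch of a PVE cluster-level firewall rule, assuming DLM's default TCP port 21064 and a 10.0.0.0/24 cluster network (both values are assumptions to adapt to your setup):

    ```
    # /etc/pve/firewall/cluster.fw (excerpt)
    [RULES]
    # Allow DLM lock traffic between cluster nodes (default port 21064/tcp)
    IN ACCEPT -source 10.0.0.0/24 -p tcp -dport 21064
    ```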
  19.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    My setup is running with a single (well, bonded) interface in an active-backup configuration. I did not see the need for a separate network just for cluster traffic.