Search results

  1. PVE8.04: After upload of images temp file was not removed - filled up root filesystem

    Hello, see below for the output of pveversion -v:

      proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve)
      pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
      pve-kernel-6.2: 8.0.5
      proxmox-kernel-helper: 8.0.3
      pve-kernel-5.15: 7.4-4
      proxmox-kernel-6.2.16-6-pve: 6.2.16-7
      proxmox-kernel-6.2...
  2. PVE8.04: After upload of images temp file was not removed - filled up root filesystem

    Conditions:
    - PVE8.04 / latest
    - ISOs/templates are mounted as a separate LVM volume under /data/iso (seen as ISO storage in the PVE UI)
    - logged in as a federated OIDC user (with Enterprise admin privileges)

    Behavior experienced:
    - Uploaded a new ISO to /data/iso
    - ISO was correctly added
    - temp...
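    A minimal cleanup sketch for this situation; it assumes the leftover upload temp files follow the pveupload-* naming under /var/tmp, so verify the path on your own node first:

      # List leftover upload temp files with their sizes (path/pattern assumed):
      find /var/tmp -maxdepth 1 -name 'pveupload-*' -printf '%s\t%p\n'
      # Delete leftovers older than a day, once no upload is in progress:
      find /var/tmp -maxdepth 1 -name 'pveupload-*' -mtime +1 -delete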
  3. Workaround to Cluster not ready no quorum (500) permanently!

    Reading your situation, I do not see a risk, as you are placing the firewall outside of / in front of the whole cluster setup. Meaning you do not run into an infinite loop where your pfSense (as a cluster resource) is not up, but needs to be up for all nodes to reach quorum.
  4. Bug: Create Bond - invalid name bond-primary

    FYI: I've created a bug report for it: https://bugzilla.proxmox.com/show_bug.cgi?id=4911
  5. Bug: Create Bond - invalid name bond-primary

    I will file a report about this, as IMHO if a field/option is optional, then it should be optional in the GUI too, and not push someone (less skilled) into manually making these changes over the command line. They should be able to create a Linux Bond via the GUI. As you point out ... if one has...
  6. Bug: Create Bond - invalid name bond-primary

    The thing I stumbled over is the fact that in the GUI the field is treated as mandatory rather than optional, if I interpret your reply correctly. IMHO (and that is why I opened the thread) it should be an optional field there as well.
  7. Workaround to Cluster not ready no quorum (500) permanently!

    In essence, on a cluster you do NOT want to run something that handles your routing. In my opinion: for the love of god, and for peace of mind, set up your router/firewall as hardware outside your cluster. For a non-clustered environment it is all fine; clustered, you do not want the firewall on it. -...
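    As for the workaround the thread title refers to: the usual emergency step on a node that has lost quorum is to lower the expected vote count. This is the standard pvecm approach rather than a quote from the thread, so use it with care:

      # Temporarily tell corosync to expect a single vote so this node
      # regains quorum and /etc/pve becomes writable again:
      pvecm expected 1
      # Inspect the cluster/quorum state afterwards:
      pvecm status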
  8. Bug: Create Bond - invalid name bond-primary

    While installing a new Proxmox box I ran into the following issue when trying to create a Linux Bond. Now, in previous installs I created the bond via the command line by editing /etc/network/interfaces:

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-miimon 100...
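    For comparison, a complete sketch of such a bond stanza plus a bridge on top, assuming active-backup mode and reusing the eno1/eno2 names from above; the address/gateway are placeholders, and bond-primary is commented out precisely because it is optional:

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-miimon 100
          bond-mode active-backup
          # bond-primary eno1   # optional: prefer eno1 whenever it is up

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0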
  9. Cluster frozen after one node / VM behaved badly

    This just happened: during a backup I got weird behavior on one node of my cluster (Bookworm / 8.0.4 / latest-release Proxmox) which in effect blocked my whole cluster:

      Message from syslogd@node01 at Aug 14 01:11:59 ...
      kernel:[1233000.842855] watchdog: BUG: soft lockup - CPU#33 stuck for 1863s...
  10. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Hi, might be a late reply on this, but if I read correctly, then dropping an addition in: This means you are not adding, but actually overriding. (for reference) see ...
  11. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    This whole thing started as a project to get around limitations I encountered. And as a whole, OCFS2 is Oracle's; after they took over MySQL, I dread being locked in by using it. Due to its nature it grew... and grew... and maybe, yes, it has turned into a beast... but still, all in...
  12. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    A little thing I encountered in my setup: after a reboot (needed after upgrading PVE7 -> PVE8) I got errors from DLM, which basically meant it was getting blocked (sk_err 110/0 was seen on the consoles). Then I remembered I had tightened security by turning on firewalls on the individual...
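    A sketch of the kind of rule that unblocks DLM between the nodes, assuming the default DLM port 21064/tcp and a cluster-wide rule set in /etc/pve/firewall/cluster.fw; the source subnet is a placeholder for your cluster network:

      # /etc/pve/firewall/cluster.fw
      [RULES]
      # Allow inter-node DLM traffic (default port 21064/tcp):
      IN ACCEPT -source 10.0.0.0/24 -p tcp -dport 21064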
  13. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    My setup is running with a single (well, bonded) interface in an active-backup network config. I did not see the need for a separate network just for cluster communication.
  14. [SOLVED] Failed to run lxc.hook.pre-start

    I also faced the issue on my cluster, but I didn't figure it out until I saw this post: I had migrated one of the LXCs to a node which apparently had binutils already installed (even though I do try to take good care of keeping package installations in sync across the nodes). But as said, this is...
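    A quick sketch for checking that package parity across nodes, assuming root SSH access and hypothetical node names node01..node03:

      # Collect the installed-package list from each node (names are placeholders):
      for n in node01 node02 node03; do
          ssh "root@$n" "dpkg-query -W -f='\${Package}\n'" | sort > "/tmp/pkgs-$n.txt"
      done
      # Show what differs between two nodes:
      diff /tmp/pkgs-node01.txt /tmp/pkgs-node02.txt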
  15. [SOLVED] Odoo + Wordpress VMs resources Optimization

    You deal with it in the same way as you would when running VMs. Nothing different in approach, other than that both the WordPress box and the database box are LXCs instead of VMs.
  16. [SOLVED] Odoo + Wordpress VMs resources Optimization

    Just my opinion... resource control for something like a WordPress box/site is better done not by creating it as a full VM, but as an LXC container. I am running about 12 LXC containers on Rocky Linux 8.6 (base template deployed, configured further via Ansible) and have not yet had issues in...
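    A minimal sketch of that kind of per-container resource control with pct; the VMID 101 and the limits are placeholder values:

      # Cap an existing container's CPU, memory, and swap:
      pct set 101 --cores 2 --memory 2048 --swap 512
      # Verify the resulting configuration:
      pct config 101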
  17. Proxmox Monitoring LXC via Check_MK

    Hi all, seeking an explanation for why LXC containers keep popping up in Check_MK monitoring after some time with shared-memory alerts. The only way to get rid of the notification in Check_MK monitoring for now is to just bluntly reboot the container. What I am after is an understanding of what...
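    A sketch for inspecting what is actually holding the shared memory inside the container before resorting to a reboot, assuming a tmpfs-backed /dev/shm and leftover System V segments as the usual suspects:

      # Inside the container: how full is /dev/shm?
      df -h /dev/shm
      # Any System V shared-memory segments left behind (look for nattch == 0)?
      ipcs -m
      # If an orphaned segment shows up, remove it by its id from the ipcs output:
      # ipcrm -m <shmid>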
  18. Valid OIDC login (full admin) - still needs root@pam to change features on LXC container

    I understand your concern in regard to an action in the *very dangerous* category. I do not want 'root@pam' to execute it and show up in the log, but rather my federated user, which has a 1:1 relation to a person in the team. So for auditing purposes I would like to see no root@pam-executed actions.
  19. Valid OIDC login (full admin) - still needs root@pam to change features on LXC container

    Situation:
    - OIDC correctly set up
    - OIDC user is part of the Administrators group, with the correct rights
    - change LXC options (i.e. tick nesting/FUSE if unticked)

    Result: Is this by design? In need of an explanation here... I mean, I explicitly went for a federated method so I have control...
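    For reference, a sketch of the CLI equivalent of that tick box, which PVE restricts to root@pam; the VMID 101 is a placeholder:

      # Toggling container features is what currently requires root@pam:
      pct set 101 --features nesting=1,fuse=1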
