Search results

  1. Valid OIDC login (full admin) - still needs root@pam to change features on LXC container

    Situation: OIDC is correctly set up, the OIDC user is part of the Administrators group with the correct rights. Change LXC options (i.e. the nesting/FUSE tick if unticked). Result: Is this by design? - in need of an explanation here ... I mean, I explicitly went for a federative method so I have control...
  2. Roadmap for integration with Ansible

    For reference, my setup of the cluster was documented in another forum post: [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]. So do keep in mind that what I am doing in my playbook(s) is aimed at keeping my (system) setup coherent. Next to that, I have one...
  3. Clustering nodes just for management

    To add to that, you will even be able to transfer VMs/LXCs between the managed (cluster) nodes. Just be aware that, as you do not have shared storage, transferring a box means it will shut down, then get sent over the wire, and once sent it will be available on the other box. Prior...
  4. Roadmap for integration with Ansible

    @ednxzu I know it's not what the TS is after, but I do have a set of playbooks to set my Proxmox env to how I want/need it. Remember, I'm sort of running an exotic env with specific demands, and it's absolutely not optimised in regard to tasks, but it works. Happy to share it though. - Glowsome
  5. OpenID Connect not working in PVE-7.2-1

    Wait, what ... first you are asking about 7.2, and now you are asking about 6.2? - You have not yet given any details regarding your first setup (except the mention that you changed something in the '/usr/share/perl5/PVE/API2/OpenId.pm' file, without letting us in on what exactly you have...
  6. Question: Dashboard counts available storage double ?

    When Multipath was introduced (with new servers rotated in) it was configured correctly from the start. On the old servers no Multipath was present and the servers only had 1 connection to the storage. I distinctly remember also seeing the same behaviour back then, but as said, since I created the post and...
  7. Question: Dashboard counts available storage double ?

    @m The expectation is to see the correct amount of storage being offered to PVE instead of the now (seemingly) doubled amount displayed. If you look at the LVM screenie/view and calculate all the space together you get to about 22TB, but I was seeing 44TB total displayed on the Dashboard. However...
  8. Question: Dashboard counts available storage double ?

    In my setup I have a 4-node cluster with shared SAS storage configured with Multipath. When I look at the Datacenter -> Summary report I see a lot more storage being calculated than is actually available. As the usage of 'local' storage (in any way) within a cluster when having shared storage...
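    A minimal sketch of the double-counting suspicion in the snippet above, with made-up storage names and sizes; it only assumes that a summary view adds up every reported storage entry, so a shared volume that is reported twice (e.g. once per path or per node) is counted twice:

        # Hypothetical illustration of the suspected double counting.
        # Storage names and sizes are made up; the "naive sum" is only an
        # assumption about how duplicate reports of the same shared volume
        # could inflate a summary total.
        reports = [
            ("shared-lvm-a", 11.0),  # size in TB
            ("shared-lvm-b", 11.0),
            ("shared-lvm-a", 11.0),  # duplicate report of the same storage
            ("shared-lvm-b", 11.0),  # duplicate report of the same storage
        ]

        naive_total = sum(size for _, size in reports)        # 44.0 TB, like the Dashboard
        deduplicated = {sid: size for sid, size in reports}   # one entry per storage id
        real_total = sum(deduplicated.values())               # 22.0 TB, like the LVM view

        print(f"naive per-report sum: {naive_total} TB")
        print(f"deduplicated total:   {real_total} TB")

    Summing once per storage id gives the roughly 22TB the LVM view shows, which matches the expectation stated in the snippets above.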
  9. [SOLVED] LC_PVE_TICKET not set, VNC proxy without password is forbidden TASK ERROR: Failed to run vncproxy.

    @Arakmar thank you for clarifying it to me, I managed to solve my issue with your information :)
  10. one VM behaving badly - backup fails and it becomes unresponsive/HighCPU use

    As I have not found any means to further debug or resolve this, I will be planning on replacing this box. However, that will have quite the impact, as it is running some vital roles - making it quite painful to implement changes on this box.
  11. Roadmap for integration with Ansible

    In regard to Ansible - and running it actively on a 4-node cluster for management/keeping settings in check - I do not really see an issue. This is excluding the absence of the needs-reboot detection (as currently not implemented/present in Proxmox - specified in reboot required). As you are...
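    Since the snippet above notes that a needs-reboot detection is not implemented in Proxmox, here is a minimal sketch of how such a check could be approximated on a managed node by comparing the running kernel with the newest kernel image under /boot. The /boot/vmlinuz-* pattern and the numeric version comparison are assumptions for illustration, not a Proxmox or Ansible feature:

        # Hypothetical needs-reboot check: is the newest installed kernel
        # image newer than the kernel that is currently running?
        import os
        import re
        from pathlib import Path

        def installed_kernels():
            # Kernel images are commonly named vmlinuz-<version> under /boot.
            for p in Path("/boot").glob("vmlinuz-*"):
                yield p.name[len("vmlinuz-"):]

        def version_key(version):
            # Crude sort key: compare only the numeric components.
            return [int(part) for part in re.findall(r"\d+", version)]

        def needs_reboot():
            running = os.uname().release
            kernels = list(installed_kernels())
            if not kernels:
                return False
            newest = max(kernels, key=version_key)
            return version_key(newest) > version_key(running)

        if __name__ == "__main__":
            print("reboot required" if needs_reboot() else "running kernel is current")

    In an Ansible run this kind of check could feed a handler or a report; it is only a sketch of the detection the snippet above says is missing.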
  12. Unable to upgrade to 7.1-10

    I had similar flukes when installing new boxes on my cluster due to not having my documentation in order. Just a suggestion - mark your post/thread as a [solved] topic so others won't look into this as if it's still not solved :)
  13. one VM behaving badly - backup fails and it becomes unresponsive/HighCPU use

    Still no closer to a solution on my end... Anyone out there who might have found a solution to this behaviour?
  14. [SOLVED] LC_PVE_TICKET not set, VNC proxy without password is forbidden TASK ERROR: Failed to run vncproxy.

    Sorry to (re-)activate this thread, but I am also facing this issue. As to the solution provided above, I am a bit cautious about introducing wildcards. My situation: - LXC console (across the cluster) works fine. - VM console only works when I am on the GUI of the node the VM is on. -...
  15. one VM behaving badly - backup fails and it becomes unresponsive/HighCPU use

    Tonight it got even weirder ... the VM was shut down after it was touched by the backup job: INFO: VM Name: vm-lx-01 INFO: include disk 'scsi0' 'vms01:125/vm-125-disk-0.qcow2' 64G INFO: include disk 'scsi1' 'vms01:125/vm-125-disk-1.qcow2' 64G INFO: backup mode: snapshot INFO: ionice priority...
  16. one VM behaving badly - backup fails and it becomes unresponsive/HighCPU use

    Well, unfortunately no change in the issue, so reinstalling the agent did not solve it :( Below is a screenie of the agent's consumption of resources on the VM
  17. one VM behaving badly - backup fails and it becomes unresponsive/HighCPU use

    FYI: As far as I have been able to determine, it was due to the qemu agent just sucking up all runtime... So as a measure I have forcefully re-installed the agent with: zypper in -f qemu-guest-agent. Need to determine if this solves anything, but hey, it's at least an action aimed at the...
  18. one VM behaving badly - backup fails and it becomes unresponsive/HighCPU use

    Extra info: this is not a pure SLES box, it's an Open Enterprise Server: vm-lx-01:~ # cat /etc/novell-release Open Enterprise Server 2018 (x86_64) VERSION = 2018.2 PATCHLEVEL = 2
  19. one VM behaving badly - backup fails and it becomes unresponsive/HighCPU use

    Additional info on the VM (logged from an SSH session I still had running): Message from syslogd@vm-lx-01 at Nov 28 03:11:29 ... kernel:[39135.014041] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [sshd:25223] Message from syslogd@vm-lx-01 at Nov 28 03:11:29 ...