Search results

  1. M

    Run commands on a guest

    Thanks, I tried creating a token and filling in all the values in the script, but I get this error: https://proxmox:8006/api2/json/nodes/proxmox/qemu/306/agent/exec Gets executed in PID None Traceback (most recent call last): File "/home/m/./p.py", line 28, in <module>...
  2. M

    Run commands on a guest

    Okay, thanks so much for your time. I think we'll move to the Ansible solution and check to see if the issue persists with a newer version of Proxmox as soon as we upgrade or open a ticket with our Proxmox subscription. Have a great day!
  3. M

    Run commands on a guest

    Here it is: # socat - /var/run/qemu-server/3006.qga {"execute": "guest-exec", "arguments": {"path": "/bin/bash", "arg": ["-c", "apt update"], "capture-output": true}} {"return": {"pid": 3566564}} {"execute": "guest-exec-status", "arguments": {"pid": 3566564}} {"return": {"exitcode": 100...
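The socat session above can be scripted. A minimal sketch (not from the thread; the socket path and helper names are assumptions based on the snippet) that builds the same guest-exec / guest-exec-status messages and decodes the base64-encoded output QGA returns:

```python
import base64
import json
import socket
import time

def build_exec(path, args):
    # The guest-exec command from the socat session.
    return json.dumps({
        "execute": "guest-exec",
        "arguments": {"path": path, "arg": list(args), "capture-output": True},
    })

def build_status(pid):
    # Poll for the PID returned by guest-exec.
    return json.dumps({"execute": "guest-exec-status", "arguments": {"pid": pid}})

def decode_output(status_return):
    # QGA base64-encodes captured output in out-data / err-data.
    out = base64.b64decode(status_return.get("out-data", "")).decode()
    err = base64.b64decode(status_return.get("err-data", "")).decode()
    return out, err

def qga_exec(sock_path, path, args, poll=0.5):
    # Run a command through the QGA socket and wait for it to exit.
    # Not called here: needs a live /var/run/qemu-server/<vmid>.qga socket.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        f = s.makefile("r")
        s.sendall(build_exec(path, args).encode() + b"\n")
        pid = json.loads(f.readline())["return"]["pid"]
        while True:
            s.sendall(build_status(pid).encode() + b"\n")
            status = json.loads(f.readline())["return"]
            if status.get("exited"):
                return status
            time.sleep(poll)
```

The exitcode in the snippet (100) is the guest command's own exit status, not a QGA error.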
  4. M

    Run commands on a guest

    Sure, thanks! proxmox-ve: 8.0.2 (running kernel: 6.2.16-18-pve) pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390) proxmox-kernel-helper: 8.0.3 pve-kernel-5.19: 7.2-15 proxmox-kernel-6.2.16-18-pve: 6.2.16-18 proxmox-kernel-6.2: 6.2.16-18 pve-kernel-5.19.17-2-pve: 5.19.17-2 ceph-fuse...
  5. M

    Run commands on a guest

    pvesh create /nodes/node01/qemu/906/agent/exec -command "apt" -command "update"
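Note the repeated -command flags above: each one carries a single argv element, so "apt update" is split into "apt" and "update" rather than quoted as one string. A small sketch (the helper name is hypothetical) that builds the same invocation:

```python
def pvesh_exec_argv(node, vmid, command):
    # Build the pvesh call from the snippet: each element of `command`
    # becomes its own -command flag, i.e. one argv entry for the guest.
    argv = ["pvesh", "create", f"/nodes/{node}/qemu/{vmid}/agent/exec"]
    for word in command:
        argv += ["-command", word]
    return argv
```

On a PVE node this could be passed to subprocess.run(pvesh_exec_argv("node01", 906, ["apt", "update"]), check=True); it is not executed here since it requires pvesh on PATH.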
  6. M

    Run commands on a guest

    Thanks fba, yes, I'm currently using pvesh in scripts like this: for vmid in 808 811 813 844; do nodename=$(pvesh get /cluster/resources --type vm --output-format json | jq -r " .[] | select(.vmid == $vmid) | .node") vmname=$(pvesh get /cluster/resources --type vm --output-format json | jq...
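The loop above calls `pvesh get /cluster/resources` once per field per VM and filters with jq. A sketch of the same lookup done in one pass over the parsed JSON (function names are illustrative, not from the thread):

```python
def node_for_vmid(resources, vmid):
    # Same as the jq select in the snippet: find the node hosting vmid,
    # given the parsed output of
    # `pvesh get /cluster/resources --type vm --output-format json`.
    for r in resources:
        if r.get("vmid") == vmid:
            return r.get("node")
    return None

def plan(resources, vmids):
    # One API call instead of two per VM: map each vmid to (name, node),
    # skipping vmids that are not in the cluster inventory.
    by_id = {r["vmid"]: r for r in resources if "vmid" in r}
    return {v: (by_id[v].get("name"), by_id[v].get("node"))
            for v in vmids if v in by_id}
```

This avoids re-fetching the whole resource list for every vmid in the 808/811/813/844 loop.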
  7. M

    Run commands on a guest

    Hi everyone, what's the best way to run commands from a host on a guest (Linux) that can also be on a different host? Something like: qm guest exec 100 --node node01 -- apt update -y, or: pvesh set /nodes/node01/qemu/100/agent/exec -command "apt update"? M.
  8. M

    VMs with CPU at 100%

    In that case "host", but normally we use "Default (kvm64)", and the behaviour is the same. source:~$ pveversion -v proxmox-ve: 7.4-1 (running kernel: 5.19.17-2-pve) pve-manager: 7.4-15 (running version: 7.4-15/a5d2a31e) pve-kernel-5.15: 7.4-3 pve-kernel-5.19: 7.2-15 pve-kernel-5.4: 6.4-20...
  9. M

    VMs with CPU at 100%

    Hi, we still have the problem, here are some messages from a VM that got stuck: kernel:[2261815.282065] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [JournalFlusher:870] Message from syslogd@graylog5 at Sep 11 14:13:07 ... kernel:[2261815.283198] watchdog: BUG: soft lockup - CPU#0 stuck...
  10. M

    Backups cause problems in VMs

    Hi, we still notice a strange behavior after restarting ceph.target: 3, and only 3, VMs out of 200 still suffer from the problem that backups crash, and crash the VM itself. In particular, these are VMs with a Windows 2019 OS and a disk (P) dedicated to the 15 GB pagefile, a configuration different...
  11. M

    Backups cause problems in VMs

    Hi, I confirm that the problem is still present on ceph version 16.2.10 pacific (stable) and that the solution worked for me, too, for one VM. As soon as I get the OK to proceed with the others, I will. Thanks for writing down your solution, and to floh8 who pointed me to this thread! Matteo
  12. M

    Some VMs get stuck during backup (full)

    Hello, for some weeks some VMs (5 out of 250) have had problems during backup on PBS. We have the same problem on any cluster node we move the VMs to. What we can see, but can't fully comprehend, is that these VMs at a certain point (always the same) get stuck, that is, both the backup and the...
  13. M

    Live migration problems between higher to lower frequencies CPUs

    ... until a few weeks ago: VMs with CPU at 100% While kernel 5.19 seems to have fixed the original problem, it also seems to have introduced another one that we didn't have in previous versions of PVE on the same hardware. Do you know, by chance, whether what solved the problem of live migration in version 5.19...
  14. M

    VMs with CPU at 100%

    Thanks, yes, we installed a 5.19 kernel at the suggestion of these forums, to solve the problem of migrating VMs to nodes with different CPU frequencies: Live migration problems between higher to lower frequencies CPUs While this seemed to fix the problem, it also seems to have introduced another...
  15. M

    VMs with CPU at 100%

    Hi everyone, we are experiencing a strange problem; since the update a couple of months ago to: occasionally we find some machine at 100% CPU and completely unusable. We found that just live-migrating the VM to another node makes it start working again, without a reset. Has anyone had...
  16. M

    Out of memory

    Yes, I thought so, too, but the problem is that it happens with almost all the VMs turned on from a certain point on (so with those turned on before the problem occurred, at least it seems). However, it also happens when the minimum RAM is raised to 8 or 16 GB out of 32: when processes that...
  17. M

    Out of memory

    Yes, "out of memory" inside the machine. Also, with "info balloon" in monitor we see that "actual" memory is low and processes on guests do not start or get killed with OoM error. Also, on windows machines we have the same situation in which we do not have all configured RAM available. This is...
  18. M

    Out of memory

    In fact, we opened the ticket because it seems to us that ballooning already kicks in at 50% usage and not at 80%.
  19. M

    Out of memory

    Still having the problem: why does ballooning kick in so early and so heavily?
  20. M

    Live migration problems between higher to lower frequencies CPUs

    Hi, I can finally confirm that, from the tests done so far, with all nodes on kernel 5.19.17-1 and PVE 7.3.3, VM migration works perfectly! Thanks, and have a good day, Matteo