Search results

  1. Error: disk 'lvmid/***' not found, grub rescue.

    Hey, am I right in thinking that you're yet to nail this one down (again)? Or is the PVE v8 side resolved, and just PVE v7 remaining, grub-wise?
  2. (LIR IPv6 /32 into 2x Routers) Desire: /48-per-Type, Routing to /64-per-VM & /128-per-Domain?

    I think half my issue is explaining anything to do with networking, which I struggle to do well, or close to it. Hopefully someone is able to confirm how IPv6 in particular should look going into a Proxmox cluster via a redundant network. Thanks!
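
    A rough illustration of the prefix plan in the thread title above - an LIR /32 carved into a /48 per service type, a /64 per VM, and a /128 per hosted domain - can be sketched with Python's standard ipaddress module. The 2001:db8::/32 documentation prefix and the index choices below are stand-ins, not the real allocation:

        # Sketch only: 2001:db8::/32 stands in for the real LIR allocation.
        import ipaddress

        lir_alloc = ipaddress.ip_network("2001:db8::/32")

        # One /48 per service "type".
        type0 = next(lir_alloc.subnets(new_prefix=48))     # 2001:db8::/48

        # Within that type's /48, one /64 per VM.
        vm0 = next(type0.subnets(new_prefix=64))           # 2001:db8::/64

        # Within that VM's /64, one /128 per hosted domain.
        domain0 = next(vm0.subnets(new_prefix=128))        # 2001:db8::/128

        print(type0, vm0, domain0, sep="\n")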
  3. [SOLVED] pmg-smtp-filter: Many instances/children running, each at 50% CPU; 6-core machine exhausted after 30-90 minutes

    Thank you for the insights; that makes a lot of sense. I've been monitoring the systems since the resource increases, and they're OK. Loads are around more normal levels; however, it does seem to correlate with your hunch - sometimes volume doesn't really change, yet it is crunching harder to...
  4. [SOLVED] PVE 7.4-x to 7.4-latest: grub failed to write to /dev/sda (then grub-install gives Disk Not Found) Debian bug 987008

    Thank you Friedrich, that sounds likely to be related. I spent time reading https://pve.proxmox.com/wiki/Recover_From_Grub_Failure (bottom section); however, the below may also relate(?): https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#Unable_to_boot_due_to_grub_failure which talks to...
  5. [SOLVED] PVE 7.4-x to 7.4-latest: grub failed to write to /dev/sda (then grub-install gives Disk Not Found) Debian bug 987008

    Hi there, Looking back at these posts and wondering what to do: https://forum.proxmox.com/threads/proxmox-update-from-7-2-4-grub-update-failure.114951/page-2 https://forum.proxmox.com/threads/problem-with-grub-upgrading-pve-6-4-15.115376/ The first being the most relevant. On just 1 machine...
  6. [SOLVED] pmg-smtp-filter: Many instances/children running, each at 50% CPU; 6-core machine exhausted after 30-90 minutes

    A few hours ago the nodes' loads were around 50-70% each. Now they are around 15% each... If it keeps flaring, we will check on it again. What scope for journalctl would help us give you info? :)
  7. [SOLVED] pmg-smtp-filter: Many instances/children running, each at 50% CPU; 6-core machine exhausted after 30-90 minutes

    How much do you want? Just a few hours from last night, and for pmg-smtp-filter only? If it's a lot of logging, it's tricky to redact parts - what would you like?
    Oct 04 19:00:46 1st-gate freshclam[768]: Received signal: wake up
    Oct 04 19:00:46 1st-gate freshclam[768]: ClamAV update process started...
  8. [SOLVED] pmg-smtp-filter: Many instances/children running, each at 50% CPU; 6-core machine exhausted after 30-90 minutes

    Hi there, Weird one, forked from this other thread about a similar issue. @Stoiko Ivanov - after a while pmg-smtp-filter has many instances running, and with each taking about half a core, the machine is CPU-overloaded fairly quickly. We updated the 7.x branch to latest (on the same final sub-major)...
  9. [SOLVED] PMG eats CPU and pmg-smtp-filter child is just spamming

    The same problem has now returned on both nodes, some hours after updating 7.3 to the latest minor/build and then major up to 8.0. EDIT: The changelog says the below. So via Admin > Configuration > Spam Detector, 'Use Bayesian filter' is now OFF rather than ON. Load is OK.
  10. Fingerprint error

    I think your PVE 8.0 changelog may have the fix @Stoiko Ivanov - as we were on 7.4 when experiencing this: - cloud-init: If the VM name is not a FQDN and no DNS search domain is configured, the automatically-generated cloud-init user data now contains an additional fqdn option. This fixes an...
  11. [SOLVED] PMG eats CPU and pmg-smtp-filter child is just spamming

    In our case we had 2 years of all-good, then suddenly at 9pm last night pmg-smtp-filter had 20+ children, and the server's actual CPU was overutilised by 1,000% and beyond (i.e. 2 cores had a load of 30-40). Multiple nodes, same condition on each. Tripled CPU from 2 cores to 6 on each, doubled RAM...
  12. [SOLVED] noVNC over API: PVEAuthCookie (PVE Ticket) and Tunnel Auth (VNC Ticket) - How? :-)

    Hi there, Just trying to get to the bottom of this after 2 days working on it. UPDATE: We were able to work through the niggles. Still, I am getting 401 No Ticket despite there being a VNC ticket from vncproxy passed into vncwebsocket by noVNC (via the path), and a cookie set with the PVE ticket I...
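
    For context, the ticket handling that the post above is wrestling with follows the documented Proxmox VE API flow: an access ticket (set as the PVEAuthCookie), then a per-connection VNC ticket and port from vncproxy, both handed to vncwebsocket. The sketch below is only an outline of that sequence under assumed host/node/vmid and credential values (it is not a fix for the 401), and uses the third-party requests and websocket-client packages:

        # Outline of the PVE ticket -> vncproxy -> vncwebsocket sequence.
        # host/node/vmid and credentials are placeholders; verify=False is
        # for a lab box only.
        import ssl
        from urllib.parse import quote

        import requests
        import websocket  # pip install websocket-client

        host, node, vmid = "pve.example.com", "pve1", 100

        # 1) PVE access ticket - this value goes into the PVEAuthCookie.
        auth = requests.post(
            f"https://{host}:8006/api2/json/access/ticket",
            data={"username": "root@pam", "password": "secret"},
            verify=False,
        ).json()["data"]

        # 2) VNC ticket and port from vncproxy (websocket=1 for noVNC-style use).
        vnc = requests.post(
            f"https://{host}:8006/api2/json/nodes/{node}/qemu/{vmid}/vncproxy",
            data={"websocket": 1},
            headers={"CSRFPreventionToken": auth["CSRFPreventionToken"]},
            cookies={"PVEAuthCookie": auth["ticket"]},
            verify=False,
        ).json()["data"]

        # 3) vncwebsocket: VNC ticket in the query string, PVE ticket as the cookie.
        ws = websocket.create_connection(
            f"wss://{host}:8006/api2/json/nodes/{node}/qemu/{vmid}/vncwebsocket"
            f"?port={vnc['port']}&vncticket={quote(vnc['ticket'], safe='')}",
            cookie="PVEAuthCookie=" + auth["ticket"],
            sslopt={"cert_reqs": ssl.CERT_NONE},
        )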
  13. [SOLVED] LXC/QEMU via API - Parameter Verification Failed - Request params seem OK though?

    Perfect, I was hopeful that you had it configured in your pipeline. Thank you once more! (edit: I can see the new range now) :D
  14. [SOLVED] LXC/QEMU via API - Parameter Verification Failed - Request params seem OK though?

    Thank you! Will that update the API Viewer for LXC and QEMU once approved?
  15. [SOLVED] LXC/QEMU via API - Parameter Verification Failed - Request params seem OK though?

    Problem was the API Viewer saying that vmid was 1-N when in actual fact it is 100-N. Once we re-submitted with a vmid >100 it was OK. API Viewer says otherwise! I was also curious about the DNS, though I suppose it didn't relate. @fiona - do you agree re: the above disparity?
  16. [SOLVED] LXC/QEMU via API - Parameter Verification Failed - Request params seem OK though?

    We're getting Parameter Verification Failed for the below request parameters to create a Linux Container - any thoughts as to why?
    ( [vm_settings] => Array (
        [vmid] => 97
        [ostemplate] => local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz
        [swap] => 512
        [rootfs] => local:8
        [bwlimit] => 0...
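
    Later posts in this thread trace the failure to the vmid (97 here): the API rejects values below 100 even though the API Viewer suggested 1-N. As a rough sketch only - host, node, API token and storage names below are placeholders - the same create request with a vmid of 100 or above, printing the 400 body so its errors field names the offending parameter, might look like:

        # Sketch of the container-create call with a valid vmid (>= 100).
        # Host, node, API token and storage names are placeholders.
        import requests

        host, node = "pve.example.com", "pve1"
        headers = {
            # API token auth; a ticket + CSRFPreventionToken works equally well.
            "Authorization": "PVEAPIToken=root@pam!provision=xxxxxxxx-xxxx-xxxx",
        }
        params = {
            "vmid": 100,  # values below 100 fail parameter verification
            "ostemplate": "local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz",
            "swap": 512,
            "rootfs": "local:8",
            "bwlimit": 0,
            "hostname": "ct100",
        }

        resp = requests.post(
            f"https://{host}:8006/api2/json/nodes/{node}/lxc",
            headers=headers,
            data=params,
            verify=False,
        )
        # On a 400, the JSON body should include an "errors" object naming each
        # rejected field - more detail than the status line in the pveproxy log.
        print(resp.status_code, resp.json())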
  17. [SOLVED] LXC/QEMU via API - Parameter Verification Failed - Request params seem OK though?

    This is what we're working on at the moment. I was hopeful that pveproxy stashed request data as well. No worries. :) Thank you for your reply!
  18. [SOLVED] LXC/QEMU via API - Parameter Verification Failed - Request params seem OK though?

    Hi there, We're troubleshooting a module which talks to Proxmox to provision VMs/CTs. In the pveproxy log we can see the request come in and get a 400 response for the actual creation attempt. Getting a ticket, etc., is OK (200). To work out the 'why' there, we're trying to find more...
  19. [SOLVED] Cross-cluster Migration: Leaves VM on Old Host powered Off but still "Migrate Locked"

    Sounds like a good safety net! It makes sense to have it that way. I guess I interpreted that icon as a VM in-flight, though it was a first-time experience, so now I know the behaviour is normal. :)
  20. [SOLVED] Cross-cluster Migration: Leaves VM on Old Host powered Off but still "Migrate Locked"

    Thanks for that - so it's intentional behaviour for it to leave the VM on the old node in a locked state?