Recent content by Sptrs_CA

  1. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    I suspected it was caused by this issue: https://forum.proxmox.com/threads/slow-in-pve-realm-login.52763/ but no one is following it now. Our team uses LDAP to bypass this issue, and it has been fixed.
  2. Slow in PVE REALM login.

    We have 1500 PVE-realm accounts in total, and 1 PAM account.
  3. Slow in PVE REALM login.

    I suspect it was caused by this issue. After switching to PAM, the 599 errors dropped to around 1-2 per day, and some nodes no longer see any 599 errors. Maybe we can merge these two issues, because the login (create ticket) takes 8-15 s to respond. I suspect it was caused by our app using too many threads to...
  4. Slow in PVE REALM login.

    I ran "time pvesh create /access/ticket": real 0m10.634s, user 0m10.336s, sys 0m0.274s
  5. Slow in PVE REALM login.

    Hi, the cluster is in good health.
    10.1.20.11 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.115/0.166/4.321/0.177
    10.1.20.11 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.125/0.193/4.337/0.177
    10.1.20.12 : unicast, xmt/rcv/%loss = 600/600/0%...
  6. Slow in PVE REALM login.

    This image shows login with the Linux realm. I also tried it via PHP cURL + the PVE API; curl took 6.2170920372009 s to complete the login.
  7. Slow in PVE REALM login.

    I just found that the API and GUI are really slow when logging in with a PVE-realm account. The "ticket" API takes over 6 s with a PVE-realm login (<1 s with a Linux login). BTW, we have a thousand PVE-realm accounts (1 VM per account).
  8. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    Hi, also, when I tried to use "CURLOPT_HEADER" in cURL, the response always took over 4 s.
  9. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    root@2JWSS42:~# journalctl -xe | grep -v 'pmxcfs\|@pam\|pvesr\|replication\|--'
    Mar 22 14:53:52 2JWSS42 pveproxy[43166]: Clearing outdated entries from certificate cache
    Mar 22 14:53:53 2JWSS42 pveproxy[34937]: Clearing outdated entries from certificate cache
    Mar 22 14:53:56 2JWSS42...
  10. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    root@2JWSS42:~# ulimit -aH
    core file size        (blocks, -c) unlimited
    data seg size         (kbytes, -d) unlimited
    scheduling priority           (-e) 0
    file size             (blocks, -f) unlimited
    pending signals               (-i) 773418
    max locked memory     (kbytes, -l) 64
    max...
  11. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    - Enterprise repo, latest version.
    - debsums reports all "OK".
    - A remote self-developed program uses cURL to access the REST API.
    We make heavy use of "/node/xx/qemu/xx/status/current" and "/node/xx/qemu/x/rrddata?timeframe=hour&cf=AVERAGE" to update VM bandwidth and running status. This is important. How to fix the...
  12. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    root@BT1HS01:~# cat /var/log/daemon.log | grep 'error'
    ...
    Mar 19 03:23:38 BT1HS01 pveproxy[18486]: internal error at /usr/share/perl5/PVE/RESTHandler.pm line 378.
    Mar 19 10:55:01 BT1HS01 pveproxy[44052]: EV: error in callback (ignoring): Can't call method "push_write" on an undefined value at...
  13. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    This is a report with omping -c 600 -i 1. The multicast ping test result looks better.
    10.1.20.11 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.115/0.166/4.321/0.177
    10.1.20.11 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.125/0.193/4.337/0.177
    10.1.20.12 ...
  14. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    Hi, I already tried running the servers independently, but the errors still pop up. We have one XE NIC for storage, one XE for the network, and one GE for the cluster only. As you can see, the average latency is less than 1 ms; the max values may be caused by other reasons. Are there any other possible things...
  15. Too many HTTP 599 in pve-proxy, pve-manager, pve-api

    I am wondering whether the pvedaemon API workers get stuck because our application issues too many "guest-ping" calls. Does guest-ping take too much time/resources? We get lots of this timeout report: VM 2014 qmp command failed - VM 2014 qmp command 'guest-ping' failed - got timeout. Also, is there any other way...
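The slow login measured in items 4, 6, and 7 above is the POST to /access/ticket. A minimal sketch of timing that call outside pvesh, assuming a hypothetical host `pve.example.com` and PVE-realm user `monitor@pve` (all placeholders, not taken from the thread):

```python
import json
import ssl
import time
import urllib.parse
import urllib.request

# Placeholders -- substitute your own host and PVE-realm credentials.
PVE_HOST = "pve.example.com"
PVE_USER = "monitor@pve"
PVE_PASS = "secret"

def build_ticket_request(host, user, password):
    """POST request for the /access/ticket login call timed in the posts above."""
    url = f"https://{host}:8006/api2/json/access/ticket"
    data = urllib.parse.urlencode({"username": user, "password": password}).encode()
    return urllib.request.Request(url, data=data, method="POST")

def timed_login(host, user, password):
    """Return (elapsed seconds, ticket). Self-signed certificates are common
    on PVE, so verification is disabled here -- do not do this in production."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = build_ticket_request(host, user, password)
    t0 = time.monotonic()
    with urllib.request.urlopen(req, context=ctx) as resp:
        body = json.load(resp)
    return time.monotonic() - t0, body["data"]["ticket"]
```

Timing this call for a PVE-realm user versus a PAM user isolates the realm lookup from everything else pveproxy does.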
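The polling described in item 11 (status/current and rrddata per VM) is the other API hot path. A sketch of building those URLs and reusing one HTTPS connection across many guests, assuming the public `/nodes/<node>/qemu/<vmid>/...` path layout; the host, node, and VM IDs are placeholders. Reusing a single keep-alive connection avoids a fresh TLS handshake per request, which adds up against pveproxy when hundreds of guests are polled:

```python
import http.client
import json
import ssl

BASE = "https://{host}:8006/api2/json"

def status_url(host, node, vmid):
    """Current status of one guest (running state, uptime, ...)."""
    return f"{BASE.format(host=host)}/nodes/{node}/qemu/{vmid}/status/current"

def rrd_url(host, node, vmid):
    """Hourly averaged RRD series (network/CPU), as polled in the posts above."""
    return (f"{BASE.format(host=host)}/nodes/{node}/qemu/{vmid}"
            "/rrddata?timeframe=hour&cf=AVERAGE")

def poll_all(host, node, vmids, ticket):
    """Fetch status for each VM over ONE reused HTTPS connection.
    `ticket` is the value returned by /access/ticket; GET requests only
    need the PVEAuthCookie, not the CSRF token."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # typical self-signed PVE certificate
    conn = http.client.HTTPSConnection(host, 8006, context=ctx)
    headers = {"Cookie": f"PVEAuthCookie={ticket}"}
    results = {}
    for vmid in vmids:
        path = f"/api2/json/nodes/{node}/qemu/{vmid}/status/current"
        conn.request("GET", path, headers=headers)
        results[vmid] = json.load(conn.getresponse())
    conn.close()
    return results
```

Batching the polls through one connection (or a small pool) is one design lever when the 599s correlate with polling load.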
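For the guest-ping timeouts in item 15, a client-side bound at least keeps a stuck guest agent from tying up a worker indefinitely. A sketch, assuming the `qm agent <vmid> ping` CLI is available as the command-line counterpart of the qmp 'guest-ping' in the error; VM 2014 is taken from the quoted log line:

```python
import subprocess

def guest_ping_cmd(vmid):
    """Command line for a guest-agent ping via the PVE CLI (assumed form;
    adjust if your PVE version uses 'qm guest cmd <vmid> ping' instead)."""
    return ["qm", "agent", str(vmid), "ping"]

def guest_alive(vmid, timeout=3):
    """True if the guest agent answers within `timeout` seconds; False on
    timeout, non-zero exit, or when the qm CLI is not present. Capping the
    wait bounds how long a hung agent can block the caller."""
    try:
        result = subprocess.run(
            guest_ping_cmd(vmid),
            timeout=timeout,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False
```

Skipping further agent calls for guests where `guest_alive` returns False would cut down the repeated qmp timeouts described above.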
