Slow login via PVE realm.

Discussion in 'Proxmox VE: Installation and configuration' started by Sptrs_CA, Mar 25, 2019.

  1. Sptrs_CA

    Sptrs_CA New Member
    Proxmox Subscriber

    Joined:
    Dec 8, 2017
    Messages:
    26
    Likes Received:
    0
    I just found that the API and GUI are really slow when logging in with a PVE realm account.

    The "ticket" API call takes over 6 s when logging in with the PVE realm (<1 s with a Linux PAM login).

    BTW, we have about a thousand PVE realm accounts (one account per VM).
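
    For anyone who wants to reproduce the measurement, the ticket call can be timed directly with curl (a minimal sketch; the host, username and password below are placeholders, not our real values):

    # time only the POST to /access/ticket; -k skips TLS verification for a self-signed cert
    curl -k -s -o /dev/null -w 'time_total: %{time_total}s\n' \
      --data-urlencode 'username=someuser@pve' \
      --data-urlencode 'password=secret' \
      https://10.1.20.11:8006/api2/json/access/ticket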
     


  2. Sptrs_CA

    Sptrs_CA New Member
    Proxmox Subscriber

    Joined:
    Dec 8, 2017
    Messages:
    26
    Likes Received:
    0
    This image shows a login with the LINUX realm.

    I also tried it via PHP cURL + the PVE API; the login alone took over 6 s:

    curl complete 6.2170920372009
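
    To rule out one-off outliers, the same call can be repeated a few times in a row (again a sketch with placeholder credentials); with caching in play, the first call may differ from the rest:

    for i in $(seq 5); do
      curl -k -s -o /dev/null -w '%{time_total}\n' \
        --data-urlencode 'username=someuser@pve' \
        --data-urlencode 'password=secret' \
        https://10.1.20.11:8006/api2/json/access/ticket
    done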
     


  3. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,199
    Likes Received:
    496
    Cannot reproduce this here - is this on a clustered system? Is the cluster healthy?

    For several thousand users, the PVE realm is so close performance-wise to PAM with a single user that the measurements (<30 ms) are affected too much by caching to say anything meaningful.
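
    For reference, a basic health check would be something like the following, run on any node (a sketch using the stock Proxmox tools):

    pvecm status                          # quorum, membership, expected votes
    systemctl status corosync pve-cluster # cluster communication and pmxcfs services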
     
  4. Sptrs_CA

    Sptrs_CA New Member
    Proxmox Subscriber

    Joined:
    Dec 8, 2017
    Messages:
    26
    Likes Received:
    0
    Hi,

    The cluster is in good health.

    10.1.20.11 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.115/0.166/4.321/0.177
    10.1.20.11 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.125/0.193/4.337/0.177
    10.1.20.12 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.069/0.145/0.397/0.027
    10.1.20.12 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.079/0.166/0.445/0.030
    10.1.20.13 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.090/0.160/0.969/0.045
    10.1.20.13 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.104/0.185/1.017/0.050
    10.1.20.15 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.067/0.134/0.406/0.026
    10.1.20.15 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.098/0.154/0.454/0.029
    10.1.20.16 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.103/0.155/2.591/0.111
    10.1.20.16 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.114/0.171/2.610/0.112
    10.1.20.17 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.090/0.151/0.301/0.036
    10.1.20.17 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.104/0.164/0.313/0.036
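
    For context, output like the above comes from the standard omping test in the Proxmox docs, presumably along the lines of:

    omping -c 600 -i 1 -q 10.1.20.11 10.1.20.12 10.1.20.13 10.1.20.15 10.1.20.16 10.1.20.17

    i.e. 600 probes at a 1 s interval across all cluster nodes.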
     
  5. Sptrs_CA

    Sptrs_CA New Member
    Proxmox Subscriber

    Joined:
    Dec 8, 2017
    Messages:
    26
    Likes Received:
    0
    I use "time pvesh create /access/ticket"
    real 0m10.634s
    user 0m10.336s
    sys 0m0.274s
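
    For a like-for-like comparison, the same call can be timed against both realms (a sketch; exact option syntax can vary between PVE versions, and the password values are placeholders):

    time pvesh create /access/ticket --username someuser@pve --password 'secret'
    time pvesh create /access/ticket --username root@pam --password 'secret'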
     
  6. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,199
    Likes Received:
    496
  7. Sptrs_CA

    Sptrs_CA New Member
    Proxmox Subscriber

    Joined:
    Dec 8, 2017
    Messages:
    26
    Likes Received:
    0
    I thought it was caused by this issue. After I switched to PAM, the 599 errors dropped to around 1-2 per day, and some nodes no longer show any 599 errors.

    Maybe we can merge these two issues, because the login (create ticket) call takes 8-15 s to respond. I suspect it was caused by our app using too many threads to create login tickets.
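
    One way to cut down on ticket calls is to log in once and reuse the ticket for subsequent requests (a sketch; PVE tickets stay valid for about two hours, and PVEAuthCookie is the standard PVE API cookie name; host and credentials are placeholders):

    # obtain a ticket once...
    RESP=$(curl -k -s --data-urlencode 'username=someuser@pve' \
                      --data-urlencode 'password=secret' \
                      https://10.1.20.11:8006/api2/json/access/ticket)
    TICKET=$(echo "$RESP" | jq -r '.data.ticket')

    # ...then reuse it as a cookie instead of creating a new ticket per request
    curl -k -s -b "PVEAuthCookie=$TICKET" https://10.1.20.11:8006/api2/json/nodes

    (Write requests additionally need the CSRFPreventionToken header, which the same ticket call returns.)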
     
    #7 Sptrs_CA, Apr 1, 2019
    Last edited: Apr 1, 2019
  8. Sptrs_CA

    Sptrs_CA New Member
    Proxmox Subscriber

    Joined:
    Dec 8, 2017
    Messages:
    26
    Likes Received:
    0
    We have 1500 PVE realm accounts in total, and 1 PAM account.
     