Slow login with PVE realm

Sptrs_CA

I just found that the API and GUI are really slow when logging in with a PVE realm account.

The "ticket" API call takes over 6s when logging in with a PVE realm user (<1s when using a Linux PAM login).

BTW, we have about a thousand PVE realm accounts (one account per VM).
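For reference, the same measurement can be reproduced outside the GUI with a plain curl call against the ticket endpoint, using curl's built-in timer (host name, user name and password below are just placeholders):

Code:
# time a single ticket creation; -k skips certificate verification for a self-signed cert
curl -k -s -o /dev/null -w "time_total: %{time_total}s\n" \
  --data-urlencode "username=someuser@pve" \
  --data-urlencode "password=secret" \
  https://pve-node.example.com:8006/api2/json/access/ticket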
 

Attachments

  • 20190325-171727.png (this screenshot shows a login with the Linux realm)


I also tried it via PHP cURL against the PVE API. The login call reports:

Code:
curl complete 6.2170920372009

i.e. about 6.2 seconds just for the login.
 

Attachments

  • 20190325-171907.png
cannot reproduce this here - is this on a clustered system? is the cluster healthy?

with several thousand users, the PVE realm is so close to PAM with a single user performance-wise that the measurements are affected too much by caching to say anything meaningful (<30ms)
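(in case anyone wants to reproduce such a test, bulk-creating dummy users with pveum is one way to get a realm of that size; a rough sketch, all user names here are made up:)

Code:
# create 2000 dummy users in the pve realm (names are made up)
for i in $(seq 1 2000); do
    pveum user add "testuser${i}@pve"
done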
 
Hi,

The cluster is in good health.

Code:
10.1.20.11 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.115/0.166/4.321/0.177
10.1.20.11 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.125/0.193/4.337/0.177
10.1.20.12 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.069/0.145/0.397/0.027
10.1.20.12 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.079/0.166/0.445/0.030
10.1.20.13 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.090/0.160/0.969/0.045
10.1.20.13 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.104/0.185/1.017/0.050
10.1.20.15 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.067/0.134/0.406/0.026
10.1.20.15 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.098/0.154/0.454/0.029
10.1.20.16 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.103/0.155/2.591/0.111
10.1.20.16 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.114/0.171/2.610/0.112
10.1.20.17 : unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.090/0.151/0.301/0.036
10.1.20.17 : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.104/0.164/0.313/0.036
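For reference, output in this shape is what an omping run produces; presumably something along these lines was used (node addresses taken from the output above):

Code:
omping -c 600 -i 1 -q 10.1.20.11 10.1.20.12 10.1.20.13 10.1.20.15 10.1.20.16 10.1.20.17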
 
is this testing done in parallel to what you are describing in your other thread (https://forum.proxmox.com/threads/t...pve-manger-pve-api.52528/page-2#post-243871)?

I thought it was caused by this issue. After I switched to PAM, the 599 errors dropped to around 1-2 per day, and some nodes no longer show any 599 errors at all.

Maybe we can merge these two issues, because the login (create ticket) takes 8-15 seconds to respond. I think it is caused by our app using too many threads to create login tickets.
 
On my side, it is the same problem.

I have 6 nodes with ~1500 VMs and 4724 accounts.

API requests using root@pam are fast, but requests from users in the @pve realm are very slow, more than 15 seconds.

pve realm: 1674383125130.png

pam realm: 1674383233914.png
 
you haven't really given much information.. which version are you on? are all requests slow with @pve users, or just certain ones? how many ACLs/groups/.. do you have? root@pam will short-circuit a lot of checks since it's allowed to do everything, so I am not surprised that it is faster (although the level of difference is bigger than I would have expected!).
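(for reference, a rough way to count ACLs, groups and users is to grep the yaml output of pveum; the patterns below assume the default yaml key names:)

Code:
# count ACL entries, groups and users (assumes the default yaml key names)
pveum acl list --output-format yaml | grep -c path
pveum group list --output-format yaml | grep -c groupid
pveum user list --output-format yaml | grep -c userid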
 
Hello, fabian,

Thanks for your answer.

The cluster runs on version 7.2-3.
Yes, all requests are slow for @pve users. I created a user with admin rights in the @pve realm (roughly as sketched after the outputs below), and it also responds very slowly for all operations on the cluster.
1669 ACLs are configured on the cluster.



Code:
# pveversion
pve-manager/7.2-3/c743d6c1 (running kernel: 5.15.19-2-pve)

# pveum acl list --output-format yaml | grep path | wc -l
1669
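For completeness, such an admin test user in the pve realm can be created roughly like this (user name and password are placeholders, not the ones actually used):

Code:
# create a test user in the pve realm and grant it the Administrator role on /
pveum user add testadmin@pve --password 'somepassword'
pveum acl modify / --users testadmin@pve --roles Administrator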
 
I will try to reproduce this and see if there are any obvious bottlenecks that are easily fixable. would you mind filing an issue at bugzilla.proxmox.com?
 
We have the same issue with ~800 ACLs.

Code:
pveum acl list --output-format yaml | grep path | wc -l
798

When we check the time for the ticket and for the user permissions, we get these values:
Code:
# time pvesh create /access/ticket

real    0m1.683s
user    0m1.505s
sys     0m0.169s

# time pveum user permissions client_9_26@pve > /dev/null

real    0m10.124s
user    0m9.940s
sys     0m0.176s

We mentioned this issue a few years ago:
https://forum.proxmox.com/threads/t...y-pve-manger-pve-api.52528/page-2#post-256440

We did not get any response back then, and we hope that you will find a solution for this issue.
 
see the linked bugzilla entry (and the patch linked there)
 
glad to hear it worked as expected :)