Proxmox search is really slow

itvietnam

Renowned Member
Aug 11, 2015
Hi,

I have a few clusters: some of them have 3 nodes, some of them have 9 nodes ... all of these clusters are fast when using the search box.

However, there is one cluster with 20 nodes where it takes nearly 1 minute for results to show up. How can I investigate the root cause?

[Screenshot: 1577173484097.png]
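For what it's worth, the search box is (as far as I understand) filled from the /cluster/resources API call, so timing that call on the CLI could show whether the delay is on the API side or in the browser. A rough sketch:

Code:
# time the cluster-wide resource listing that the search box relies on
time pvesh get /cluster/resources --output-format json > /dev/null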

Our pveversion:

Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Thanks,
 
Hi @udo, it takes about 2 seconds to display:

Code:
root@node01:~# pvesm status
Name             Type     Status           Total            Used       Available        %
backup            nfs     active     46740281344     44594970624      2145310720   95.41%
hdd               rbd     active     24070836803     15064779668      9006057134   62.59%
local             dir     active        98559220         6451512        87058160    6.55%
local-lvm     lvmthin     active       330125312               0       330125312    0.00%
nvm           zfspool   disabled               0               0               0      N/A
nvmnode03     zfspool   disabled               0               0               0      N/A
ssd80             rbd     active     18812220480     15669124160      3143096320   83.29%
root@node01:~#

nvm and nvmnode03 are local storage on other nodes.
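If those entries are not already restricted with the nodes option in /etc/pve/storage.cfg, adding it should stop the other nodes from trying to query storage they do not have. A sketch of what such an entry could look like (the pool and node names here are just placeholders for our setup):

Code:
# /etc/pve/storage.cfg (excerpt)
zfspool: nvmnode03
        pool nvmnode03
        content images,rootdir
        nodes node03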
 
It takes less than 5 seconds to return this info:

Code:
root@node01:~# pvesh ls nodes/node01/qemu
Dr--d        103
Dr--d        213
Dr--d        443
Dr--d        495
Dr--d        513
Dr--d        553
Dr--d        581
Dr--d        655
Dr--d        795
Dr--d        801
Dr--d        803
Dr--d        806
Dr--d        808
Dr--d        810
Dr--d        812
Dr--d        818
Dr--d        821
Dr--d        822
Dr--d        823
Dr--d        828
Dr--d        829
Dr--d        832
Dr--d        835
Dr--d        836
Dr--d        837
Dr--d        838
Dr--d        843
Dr--d        849
root@node01:~#

Neither of the commands below returned any data:
Code:
#ls pools
ls: cannot access 'pools': No such file or directory

Code:
 #pvesh ls ssd80
no such resource 'ssd80'
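From what I can tell, pvesh wants full API paths rather than storage IDs, so the pools and storage definitions should be reachable like this (ssd80 is the storage ID from the pvesm output above):

Code:
# list resource pools defined in the cluster
pvesh ls /pools
# show the configuration of the ssd80 storage
pvesh get /storage/ssd80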

Any trouble with the cluster communication?

corosync shows me this data:

Code:
systemctl status corosync
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-12-24 09:01:09 +07; 10h ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
 Main PID: 2381 (corosync)
    Tasks: 9 (limit: 7372)
   Memory: 168.9M
   CGroup: /system.slice/corosync.service
           └─2381 /usr/sbin/corosync -f

Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] host: host: 8 (passive) best link: 0 (pri: 1)
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] host: host: 11 (passive) best link: 0 (pri: 1)
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] host: host: 8 (passive) best link: 0 (pri: 1)
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] pmtud: PMTUD link change for host: 19 link: 0 from 469 to 1397
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] pmtud: PMTUD link change for host: 19 link: 1 from 469 to 1397
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] pmtud: PMTUD link change for host: 11 link: 0 from 469 to 1397
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] pmtud: PMTUD link change for host: 11 link: 1 from 469 to 1397
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] pmtud: PMTUD link change for host: 8 link: 0 from 469 to 1397
Dec 24 09:01:51 node01 corosync[2381]:   [KNET  ] pmtud: PMTUD link change for host: 8 link: 1 from 469 to 1397
Dec 24 09:02:01 node01 corosync[2381]:   [TOTEM ] Retransmit List: 76 75 80 89

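The Retransmit List entries above could mean the cluster network is losing packets rather than anything being wrong with the packages. Something like this should show whether all knet links are up on the affected node (just a sketch):

Code:
# quorum and membership overview
pvecm status
# per-link knet connectivity as corosync sees it
corosync-cfgtool -s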

but we are already using the latest version from Proxmox:
Code:
#pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2