Slow Web Interface Performance

twocell

Member
May 12, 2010
Hi,

I've moved my Proxmox 1.7 server cluster from my home office to a data center, and now the web interface responds very slowly. My face currently looks like this: :mad:

Everything else seems to run quickly: SSH, the VNC consoles, and the virtual machines themselves all perform fine.

Things that have changed since it last worked well: fresh bare-metal install of 1.7, the web interface is now NATed behind a firewall, and the cluster was newly formed.

Code:
moonraker:~# pveversion
pve-manager/1.7/5323

moonraker:~# pveperf
CPU BOGOMIPS:      34134.54
REGEX/SECOND:      765518
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    121.61 MB/sec
AVERAGE SEEK TIME: 7.43 ms
FSYNCS/SECOND:     946.58
DNS EXT:           117.42 ms
DNS INT:           117.23 ms (adja.org)

moonraker:~# ping goldfinger
PING goldfinger.adja.org (192.168.2.61) 56(84) bytes of data.
64 bytes from goldfinger.adja.org (192.168.2.61): icmp_seq=1 ttl=64 time=0.626 ms
64 bytes from goldfinger.adja.org (192.168.2.61): icmp_seq=2 ttl=64 time=0.650 ms
64 bytes from goldfinger.adja.org (192.168.2.61): icmp_seq=3 ttl=64 time=0.706 ms
64 bytes from goldfinger.adja.org (192.168.2.61): icmp_seq=4 ttl=64 time=0.655 ms
64 bytes from goldfinger.adja.org (192.168.2.61): icmp_seq=5 ttl=64 time=0.678 ms
64 bytes from goldfinger.adja.org (192.168.2.61): icmp_seq=6 ttl=64 time=0.644 ms

"moonraker" is the master while "goldfinger" is a node.

What can I check to troubleshoot this issue?
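One way to narrow it down might be to time a page load from the command line, once from inside the LAN and once from outside the NAT, assuming the GUI is answering on the default HTTPS port (hostname taken from the output above):

Code:
# time one GUI page load; run from inside the LAN and again from outside the NAT
time wget --no-check-certificate -q -O /dev/null https://moonraker.adja.org/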
 
Solved

It turns out the node was still on version 1.6. I upgraded it to 1.7 and rebooted, and now the web interface runs much more smoothly.
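For the record, bringing the node from 1.6 to 1.7 was just the normal package upgrade, roughly like this (exact steps depend on the configured pve repository):

Code:
# on the node: check the installed version
pveversion

# pull in the 1.7 packages and reboot
apt-get update
apt-get dist-upgrade
reboot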
 
Re: Solved

Or not. Looks like some operations are still pretty slow. Adding disks takes forever. It seems like it's mainly operations that deal with the "hardware" tab.
 
Re: Solved

Could it be my network setup?

Code:
auto lo
iface lo inet loopback

auto eth1
iface eth1 inet static
        address 192.168.3.61
        netmask 255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.61
        netmask 255.255.255.0
        gateway 192.168.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
 
Re: Solved

twocell said: Could it be my network setup? (config quoted above)
Hi,
that looks normal. Why do you think it could be doing anything nasty?
What is eth1 for?

What throughput do you get between the cluster nodes with iperf? Is everything OK there?
Is your defined storage always reachable?

Udo
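
For reference, a basic iperf check between the two nodes could look something like this (iperf needs to be installed on both machines; hostnames are the ones used earlier in the thread):

Code:
# on the master (moonraker): start an iperf server
iperf -s

# on the node (goldfinger): run a 10-second test against the master
iperf -c moonraker.adja.org -t 10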
 
Re: Solved

eth1 is the storage backend. Right now I've got the VMs running locally, though. I thought my network config could be the problem, since that's one of the few things I changed when I moved the cluster from my home office.
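
If the storage network on eth1 is suspected, a few quick checks might help rule it out (192.168.3.1 is just an example address on that subnet; substitute whatever the storage box actually uses):

Code:
# confirm eth1 is up with the expected address
ip addr show eth1

# ping a host on the storage subnet through eth1
ping -c 4 -I eth1 192.168.3.1

# make sure any network storage mounts are still responding
df -h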