Strange hiccups

Aron Dijkstra

Hi,

We have a small Proxmox network (2x Supermicro servers with Xeon processors and 1 Intel NUC for quorum).
For storage we have one 4-bay Synology NAS with SSD caching.

One host is fairly heavily used, the other almost not at all.
CPU load on the busy server is 18% and memory usage around 73%.
The other server sits at 7% and 3%.

The strange thing is that we experience small delays, you could call them glitches, where the server hangs for a millisecond. When the server is under higher load the delays become more noticeable. VoIP suffers small delays because of this, and the terminal servers sometimes hang.

We are running the latest version of Proxmox and we use OVS. The web interface shows no IO delay, except occasionally 0.6%.
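To show what I mean, this is roughly how the glitches can be caught from the command line (the address is a placeholder, and ioping assumes the Debian ioping package is installed):

ping -D -i 0.2 X.X.X.X    # -D prints a timestamp before each reply, so spikes show up as rtt jumps
ioping -c 100 /var/lib/vz    # measures storage latency per request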

Hope you can help us!

Aron
 
Please also post your network setup and the output of pveversion -v.

Are cluster and storage in the same network?
 
Hi Tom,

Our network setup is as follows:

VLAN 1: Public network
VLAN 2: VoIP network
VLAN 3: Management network
VLAN 4: Storage network (jumbo frames enabled)

Our cluster is in the same network (management network)

Output of interfaces:

cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    pre-up ( ifconfig eth0 mtu 9000 )

auto eth1
iface eth1 inet manual
    pre-up ( ifconfig eth1 mtu 9000 )

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options other_config:lacp-time=fast bond_mode=balance-tcp lacp=active
    mtu 9000

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmtvlan1101 storagelan102
    mtu 9000

allow-vmbr0 mgmtvlan1101
iface mgmtvlan1101 inet static
    address X.X.X.X
    netmask 255.255.255.0
    gateway X.X.X.X
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1101
    post-up route add -net X.X.X.X/24 gw X.X.X.X
    mtu 1500

allow-vmbr0 storagelan102
iface storagelan102 inet static
    address X.X.X.X
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=102
    mtu 9000
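For completeness, the live OVS state can also be checked with the following commands (I have left the output out):

ovs-vsctl show    # lists the bridge, bond and tagged internal ports
ovs-appctl bond/show bond0    # shows LACP state and which link each flow hashes to
ip -d link show vmbr0    # confirms the effective MTU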


Output of pveversion -v:
pveversion -v
proxmox-ve: 4.4-88 (running kernel: 4.4.62-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-50
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-100
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
openvswitch-switch: 2.6.0-2


Thanks so far!!

Aron
 
Hi,

you separate the storage network with a VLAN, which means the storage traffic shares one physical link with all the other networks.
I assume that by "the server is on higher load" you mean disk activity? If so, it is completely normal that you see network problems, because the storage traffic takes too much of the shared bandwidth.

It is highly recommended to have a dedicated storage network.
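For illustration only, a dedicated storage link would be a separate physical port kept outside the OVS bridge, roughly like this (eth2 and the address are placeholders, not your actual hardware):

auto eth2
iface eth2 inet static
    # dedicated storage NIC, not a member of vmbr0
    address 10.10.102.11
    netmask 255.255.255.0
    mtu 9000

That way VM, cluster and storage traffic no longer compete for the same physical links.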
 
That is the strange thing: the network load is not there. I have a 3 Gb connection to the network, but we use SMB for storage,
so the max load per server is only 1 Gbit per share (we have one share for disk images).
Besides this, the NAS load is almost nonexistent: peak IOPS is around 250 and data transfer is no more than 80 MB/s.
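To put numbers on that: 80 MB/s × 8 bits/byte = 640 Mbit/s, which is well below the 1 Gbit per-share ceiling, so raw storage bandwidth alone should not be saturating a link.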