PVE is not working correctly

WebCode

Member
Jun 11, 2012
Hello.

I have the following server:

CPU: 2x Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
RAM: 32 GB (DDR3 1066 MHz)
HDD: 2x 500 GB SATA (LSI RAID)

I installed PVE 2.3 on it:

Code:
# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-19-pve: 2.6.32-96
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1

Immediately after installation the load average (LA) starts to climb, even though there are no VPSes yet:

Code:
root@node0:~# w
 07:20:33 up 7 min,  2 users,  load average: 2.10, 1.51, 0.71
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    router.miralab.r 07:13    0.00s 37315days  0.00s w
root     pts/2    router.miralab.r 07:15   43.00s 3960days 3960days -bash
root@node0:~# vzlist -a
Container(s) not found
root@node0:~# qm list
root@node0:~#
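
Since the load average counts tasks in uninterruptible (D-state) sleep as well as runnable ones, a quick way to check whether the load comes from blocked tasks rather than real work is to read the kernel's own counters (a generic sketch using standard /proc files, not output from this server):

Code:
```shell
# 1/5/15-minute load averages, running/total schedulable tasks, last PID
cat /proc/loadavg
# procs_running = tasks currently on a CPU;
# procs_blocked = tasks in uninterruptible (D) sleep,
# which also inflate the load average
grep -E 'procs_(running|blocked)' /proc/stat
```

If procs_blocked is high while the CPUs are idle, the load is coming from stuck tasks, not from real compute work.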

As soon as I create one KVM VPS, even without starting it, the LA begins to grow:

Code:
root@node0:~# w
 07:25:43 up 12 min,  2 users,  load average: 3.15, 2.42, 1.31
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

If I start the VPS, the load immediately rises to 30-40, and after a while the server stops responding at all.
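
To see which processes are actually stuck, the D-state tasks can be listed along with the kernel function they are sleeping in (a sketch assuming the procps `ps` output options; not output from my server):

Code:
```shell
# List tasks in uninterruptible sleep (state D) together with the
# kernel wait channel (WCHAN) they are blocked in
ps -eo state,pid,ppid,wchan,comm | awk 'NR == 1 || $1 ~ /^D/'
```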

dmesg shows entries like this:

Code:
root@node0:~# dmesg
INFO: task kvm:3835 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kvm           D ffff88087a0882c0     0  3835      1    0 0x00000000
 ffff88087cf8fbc8 0000000000000086 0000000000000000 ffffffff8100984c
 ffff8800397de9c0 ffff88087cf8ffd8 ffff88047dcea410 ffff8800397dc540
 0000000000000000 000000010003edc5 ffff88087a088888 000000000001e9c0
Call Trace:
 [<ffffffff8100984c>] ? __switch_to+0x1ac/0x320
 [<ffffffff8151f185>] schedule_timeout+0x215/0x2e0
 [<ffffffff8151ed84>] wait_for_completion+0xe4/0x120
 [<ffffffff8105a520>] ? default_wake_function+0x0/0x20
 [<ffffffff81065861>] synchronize_sched_expedited+0x1c1/0x280
 [<ffffffff8109cf23>] __synchronize_srcu+0x53/0xf0
 [<ffffffff810656a0>] ? synchronize_sched_expedited+0x0/0x280
 [<ffffffff8109cfd5>] synchronize_srcu_expedited+0x15/0x20
 [<ffffffffa027ad3e>] kvm_io_bus_register_dev+0xae/0xc0 [kvm]
 [<ffffffffa027f77b>] kvm_coalesced_mmio_init+0x7b/0xc0 [kvm]
 [<ffffffffa027c624>] kvm_dev_ioctl+0x474/0x4b0 [kvm]
 [<ffffffff811ac682>] vfs_ioctl+0x22/0xa0
 [<ffffffff811ac82a>] do_vfs_ioctl+0x8a/0x590
 [<ffffffff811acd7f>] sys_ioctl+0x4f/0x80
 [<ffffffff8100b102>] system_call_fastpath+0x16/0x1b
root@node0:~#
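
The trace above shows the kvm process blocked for over 120 seconds inside synchronize_srcu_expedited() while registering an I/O bus device, i.e. it is stuck in the kernel rather than burning CPU. To capture the stacks of all blocked tasks at once, the magic SysRq 'w' trigger can be used (requires root; a generic sketch, not output from this server):

Code:
```shell
# Ask the kernel to dump the stacks of all blocked (D-state) tasks
# into the kernel log; guarded so it only runs with root privileges
if [ -w /proc/sysrq-trigger ]; then
    echo 1 > /proc/sys/kernel/sysrq
    echo w > /proc/sysrq-trigger
    dmesg | tail -n 60
else
    echo "need root: rerun with sudo to trigger the blocked-task dump"
fi
```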

I suspected a memory problem, but the server has been through 100+ hours of memtest (screenshot attached) with no errors found.

The same behavior occurs on this server with Debian 7 + PVE 3.0.

Can anyone tell me what could be wrong?

Thanks.
 

Attachments

  • node0_memtest.png (26.2 KB)
