@dipe you can create some VMs on your hypervisors and build a Kubernetes cluster on top of them. It still uses containers, but provides much more in terms of managing them.
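If you go that route, a rough sketch with kubeadm (assuming the VMs run a supported distro and already have kubeadm, kubelet and a container runtime installed; the IP, token and hash below are placeholders):
# kubeadm init --pod-network-cidr=10.244.0.0/16
Then, on each of the other VMs, run the join command that init prints, something like:
# kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
After that you still need to install a pod network add-on (flannel, calico, etc.) before the nodes show up as Ready.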
Here's an older system that I have access to:
# cat /proc/cpuinfo | grep "model name" | head -n 1
model name : Intel(R) Xeon(R) CPU E7-L8867 @ 2.13GHz
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s)...
I didn't do anything special; it works on my setup. I just googled for "exceeds logical package map". And considering that both you and @Thalhammer are seeing these issues, and that a patch regarding CPU topology is recent, it is reasonable to assume that something is going on in this area.
Later edit: What concerns me is that...
What I can see in your lscpu output is that your NUMA node count is 1. Try enabling NUMA in the BIOS?
Here's my output for 2 sockets, 6 cores/socket, 2 threads/core:
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24...
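If you want to double-check what the firmware actually exposes, numactl shows the NUMA layout too (assuming the numactl package is installed):
# numactl --hardware
# lscpu | grep -i numa
With NUMA disabled (or on a single-socket box) you will see a single node containing all CPUs and all memory.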
Delete the extended partition (/dev/vda2), grow the first partition (/dev/vda1) to the maximum size minus 4 GB, and re-create the swap partition as a primary one (there is no need for an extended partition nowadays).
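A rough sketch of the steps, assuming ext4 on /dev/vda1, that the extended partition only holds the swap logical partition, and that you have a backup (partition numbers are guesses, check with "print" first):
# swapoff -a
# parted /dev/vda
(parted) rm 5                        <- the logical swap partition
(parted) rm 2                        <- the extended partition
(parted) resizepart 1 -4GiB          <- grow vda1 up to 4 GiB before the end of the disk
(parted) mkpart primary linux-swap -4GiB 100%
(parted) quit
# resize2fs /dev/vda1
# mkswap /dev/vda2 && swapon /dev/vda2
Don't forget to update the swap entry in /etc/fstab, since the UUID changes after mkswap.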
I've noticed something in your "free" output: you have tons of RAM available.
For example, here (use free -m, it is more readable):
Sat Sep 17 16:43:24 CEST 2016
             total        used        free      shared     buffers      cached
Mem:     131915736   131505520      410216       48164...
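If you want to see how much of that "used" memory is really just cache, look at Buffers and Cached directly; the kernel gives those back as soon as applications need the memory:
# grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo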
You cannot do that. KVM attaches a block device, which is a dumb array of bytes. LXC "mounts" a file system, which is a higher-level abstraction over the block device (it has files and directories, for example).
The only way to have access from both is to share that filesystem (NFS, SMB) from one of the machines...
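A minimal NFS sketch, assuming the KVM guest is the one sharing /srv/data, its IP is 192.168.1.10 and the rest of the LAN is 192.168.1.0/24 (all of these are placeholders), on a Debian-based system:
# apt-get install nfs-kernel-server
# echo '/srv/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
# exportfs -ra
And on the machine that needs access:
# mount -t nfs 192.168.1.10:/srv/data /mnt
Keep in mind that mounting NFS inside an LXC container may require extra privileges on the host.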
There is "perf" on Linux that can help you to see where a process spends most time. Search for "Brendan Gregg" on Google and you will find tons of ideas on how to monitor performance. You don't know if you are CPU bound, I/O bound or simply some network service is slow (e.g. DNS).
http://glennklockwood.blogspot.ro/2013/12/high-performance-virtualization-sr-iov.html
This doesn't say that the standard config will "decrease network throughput tremendously". SR-IOV provides better latency (12% average).
For a volume named storage/vm-100-disk0:
qemu-img convert -f raw /dev/zvol/storage/vm-100-disk0 -O qcow2 vm-100-disk0.qcow2
https://forum.proxmox.com/threads/import-convert-export-raw-images-to-zfs-volume.21241/
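Going the other way (qcow2 back into a zvol) is just the mirror image; the only catch is that the target zvol must already exist and be at least as large as the image (names below match the example above):
# qemu-img convert -f qcow2 vm-100-disk0.qcow2 -O raw /dev/zvol/storage/vm-100-disk0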
P.S. The forum has a search function :)
A VM boot is very aggressive in terms of both I/O and CPU. If you've assigned 4 cores per VM, then booting 3 VMs in parallel will start the load at around 12 (4 × 3), and that is normal. Of course, the real load will be even higher because other tasks are also running on your 4 physical cores. So 30 is not that...