Search results

  1. running docker inside LXC

    @dipe You can create some VMs on your hypervisors and build a Kubernetes cluster on top of them. It also uses containers, but provides much more in terms of their management.
  2. [SOLVED] After Update -> Grub rescue console

    The dataset seems to be corrupted. Try to repair it by booting from a rescue system with ZFS support.
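
    A hedged sketch of what that repair attempt could look like from a rescue/live system with ZFS support (the pool name rpool and the /mnt mount point are assumptions, not from the thread):

    # zpool import -f -R /mnt rpool   # import the pool under an alternate root
    # zpool status -v rpool           # list any errors the pool has recorded
    # zpool scrub rpool               # scrub to detect and repair checksum errors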
  3. Missing CPU Cores

    What's the output of numastat on your systems?

    # numastat
                    node0        node1
    numa_hit        1475179414   704657639
    numa_miss       14347898     119440717
    numa_foreign    119440717    14347898
    interleave_hit  22634...
  4. Missing CPU Cores

    Here's an older system that I have access to:

    # cat /proc/cpuinfo | grep "model name" | head -n 1
    model name : Intel(R) Xeon(R) CPU E7-L8867 @ 2.13GHz

    # lscpu
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s)...
  5. Missing CPU Cores

    I didn't do anything; it works on my setup. I just googled for "exceeds logical package map". And considering that both you and @Thalhammer are seeing these issues, and that a patch regarding CPU topology is recent, it makes sense to assume that something is going on in this area. Later edit: What concerns me is that...
  6. Missing CPU Cores

    Looks like there is a very recent (Aug 2016?) kernel patch floating around for this: http://www.gossamer-threads.com/lists/linux/kernel/2503792
  7. Missing CPU Cores

    What I can see in the lscpu output is that you have NUMA nodes == 1. Try enabling NUMA in the BIOS? Here's my output for 2 sockets, 6 cores/socket, 2 threads/core:

    # lscpu
    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                24...
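
    As a generic check (not from the thread), numactl can confirm how many NUMA nodes the kernel actually sees:

    # numactl --hardware    # lists each node with its CPUs and memory
    # lscpu | grep -i numa  # the same node count as lscpu reports it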
  8. Extending Primary KVM Partition

    Delete the extended partition (/dev/vda2). Grow the first partition (/dev/vda1) to the maximum minus 4 GB, then re-create the swap partition as a primary partition (no need for an extended one nowadays).
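
    A rough sketch of those steps with parted (partition numbers, the 4 GiB size, and the ext4 filesystem are assumptions; back up and adapt to your layout first):

    # parted /dev/vda
    (parted) rm 2                                  # drop the extended partition (and the swap inside it)
    (parted) resizepart 1 -4GiB                    # grow vda1 to end 4 GiB before the end of the disk
    (parted) mkpart primary linux-swap -4GiB 100%  # new primary swap in the freed space
    (parted) quit
    # resize2fs /dev/vda1                          # grow the filesystem (assumes ext4)
    # mkswap /dev/vda2 && swapon /dev/vda2         # the new partition number is an assumption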
  9. zfs: finding the bottleneck

    I've noticed something in your "free" output. You have tons of RAM available. For example here (use free -m, it is more readable):

    Sat Sep 17 16:43:24 CEST 2016
                 total       used       free     shared    buffers     cached
    Mem:     131915736  131505520     410216      48164...
  10. Cannot see new files added from KVM on LXC container using bind mounts

    You cannot do that. KVM mounts a block device, which is a dumb array of bytes. LXC "mounts" a file system, which is a higher abstraction over the block device (it has files & directories, for example). The only way to have access from both is to share that filesystem (NFS, SMB) from one of the machines...
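
    As a hedged illustration of the NFS route (hostnames, paths, and export options below are examples, not from the thread):

    # On the machine that owns the filesystem (e.g. the KVM guest):
    # apt-get install nfs-kernel-server
    # echo '/srv/shared 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    # exportfs -ra

    # On the other machine (e.g. the LXC container):
    # mount -t nfs 192.168.1.10:/srv/shared /mnt/shared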
  11. KVM Hypervisor slows down over time

    There is "perf" on Linux that can help you to see where a process spends most time. Search for "Brendan Gregg" on Google and you will find tons of ideas on how to monitor performance. You don't know if you are CPU bound, I/O bound or simply some network service is slow (e.g. DNS).
  12. [SOLVED] PVE 4.1 how to passthrough Nic to LXC

    http://glennklockwood.blogspot.ro/2013/12/high-performance-virtualization-sr-iov.html

    This doesn't say that the standard config will "decrease network throughput tremendously". SR-IOV provides better latency (a 12% average improvement).
  13. [SOLVED] ZFS change ashift after pool creation

    As far as I know, there are usage issues with raidz. If you stick to mirrors and stripes, it is fine.
  14. [SOLVED] ZFS change ashift after pool creation

    You cannot change that value on an existing pool.
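
    If it helps, this is roughly how to verify the ashift of an existing pool and how to set it when re-creating one (the pool name and devices are placeholders):

    # zdb -C tank | grep ashift                                # show the ashift baked into pool "tank"
    # zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb  # the value can only be set at vdev creation time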
  15. KVM Hypervisor slows down over time

    strace a hanging process and find out where it stalls.
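
    A minimal strace invocation for that (the PID is a placeholder):

    # strace -f -tt -T -p <pid>   # -f follows threads/forks, -tt timestamps each call, -T shows time spent in it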
  16. Converting zvols to raw or qcow2

    For a volume named storage/vm-100-disk0:

    # qemu-img convert -f raw /dev/zvol/storage/vm-100-disk0 -O qcow2 vm-100-disk0.qcow2

    https://forum.proxmox.com/threads/import-convert-export-raw-images-to-zfs-volume.21241/

    P.S. The forum has a search function :)
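
    The reverse direction should be symmetric (hedged; the target zvol must already exist and be at least as large as the image):

    # qemu-img convert -f qcow2 vm-100-disk0.qcow2 -O raw /dev/zvol/storage/vm-100-disk0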
  17. ZFS and high iowait / server load

    A VM boot is very aggressive in terms of both I/O and CPU. If you've assigned 4 cores per VM, then booting 3 VMs in parallel will start the load at 12 (4*3), and that is normal. Of course, the real load will be much higher because there are other tasks running on your 4 physical cores too. So 30 is not that...
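
    If you want to see whether that load is iowait-driven during the parallel boots (a generic suggestion, not from the thread), sysstat helps:

    # apt-get install sysstat
    # iostat -x 2   # per-device utilization and await, refreshed every 2 s
    # vmstat 2      # the "wa" column is the iowait share of CPU time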
