Search results

  1. PVE host swapping issue

    Yes, we have very few Windows servers. We do leave swap enabled for Windows, but we usually set it to a specific size instead of allowing Windows to manage it.
  2. PVE host swapping issue

    We currently have 20 nodes in production with no swap. The nodes range from 32GB RAM to 256GB RAM, with the majority of them having 128GB RAM. Most of the VMs are configured as NUMA-aware. I usually do not set it if the VM uses little RAM and very few cores. I cannot recall ever having a VM get...
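
    (A minimal sketch of turning on the NUMA flag for a guest with the standard qm tool; the VMID 100 is hypothetical.)

      # Expose a NUMA topology to the guest (hypothetical VMID 100)
      qm set 100 --numa 1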
  3. PVE host swapping issue

    I have wrestled with this problem for years and never found a great solution; most of my nodes are NUMA too. Changing swappiness never prevented it. Any process that is idle will end up having its RAM swapped to disk if the kernel thinks the RAM would be better used for buffer/cache. In my...
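
    (A minimal sketch of the swappiness tuning referred to above; the value 1 is only an illustration, and per the post it did not prevent idle memory from being swapped out.)

      # Check the current value, then lower it at runtime
      cat /proc/sys/vm/swappiness
      sysctl -w vm.swappiness=1
      # Make it persistent across reboots
      echo 'vm.swappiness = 1' >> /etc/sysctl.conf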
  4. BUG: soft lockup

    While running swapoff on a couple of nodes, the swapoff task would hang, unable to turn swap off on zram devices. They would hang and generate hung-task messages. I believe these systems are still running. Could we get any diagnostic data from these systems that might help discover the source of...
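
    (A minimal sketch of the commands involved; the zram device name is illustrative.)

      # List active swap devices, then disable the zram-backed one
      cat /proc/swaps
      swapoff /dev/zram0   # this is the call that hung in the report
      # Or disable all swap at once
      swapoff -a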
  5. BUG: soft lockup

    We have not had any issues since turning off zRam over a month ago.
  6. BUG: soft lockup

    Server has 128GB RAM; the virtual servers combined are assigned just under 60GB. We have zfs_arc_max set to 20GB. We have not had any issues since turning off zram on the 15th. It needs to run stable for at least a month to have confidence that turning off zram fixed anything. I am considering...
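
    (A minimal sketch of capping the ZFS ARC at 20 GiB as described; the paths are the standard ZFS-on-Linux ones and the value is 20 * 1024^3 bytes.)

      # Runtime change
      echo 21474836480 > /sys/module/zfs/parameters/zfs_arc_max
      # Persistent setting, applied on the next boot
      echo 'options zfs zfs_arc_max=21474836480' > /etc/modprobe.d/zfs.conf
      update-initramfs -u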
  7. BUG: soft lockup

    If I am not mistaken, the zfs module was upgraded recently and I have already run zpool upgrade. I do not think it would be OK to boot a kernel with an older zfs module, right? I went digging in the logs; these are attached as text files. All of these occurred when we had zfs swap and zram enabled...
  8. BUG: soft lockup

    Hello again everyone, it has been too long since my last post here. I have had one server randomly locking up for over a month now, and now a second server is also having this problem. Unfortunately I've not captured all of the kernel messages that would help diagnose this, but I have a couple of screenshots from...
  9. Increase performance with sched_autogroup_enabled=0

    @Alwin I've tried setting the values back to default so I can test before and after, but performance stays the same. It's possible that there is some other explanation for my initial results.
  10. Increase performance with sched_autogroup_enabled=0

    @Alwin I used dd and 'time cp' on a VM where I could only copy anything at about 40MB/sec, and now I can get 140MB/sec.
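
    (A minimal sketch of that kind of measurement; file names and sizes are illustrative.)

      # Sequential write test inside the VM, bypassing the page cache
      dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct
      # Time a plain copy of the same file
      time cp /tmp/testfile /tmp/testfile.copy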
  11. Increase performance with sched_autogroup_enabled=0

    Yup, that thread is what gave me the idea to try this. Found it this morning. I suspect these settings will help anyone facing IO-related performance issues.
  12. Proxmox 5 very slow

    @nicko I'm curious to know if this helps you or not: https://forum.proxmox.com/threads/increase-performance-with-sched_autogroup_enabled-0.41729/ For me it made a huge difference in IO performance on numerous servers.
  13. Increase performance with sched_autogroup_enabled=0

    Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts. It also helps to modify sched_migration_cost_ns. I've tested this on Proxmox 4.x and 5.x: echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns echo 0 >...
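
    (A sketch of the full tuning, assuming the standard /proc/sys/kernel paths; the target of the truncated second echo is inferred from the thread title.)

      # Raise the migration cost and disable autogrouping (values from the post)
      echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns
      echo 0 > /proc/sys/kernel/sched_autogroup_enabled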
  14. pvesr segfault

    Have you rebooted into memtest and checked your RAM? That would be my first suggestion. If you cannot afford the downtime, you might try installing and using memtester: apt-get install memtester
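
    (A minimal usage sketch; the amount of memory and the pass count are illustrative.)

      apt-get install memtester
      # Lock and test 1 GiB of RAM for 3 passes while the host stays online
      memtester 1024M 3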
  15. VM filesystem regular fails while running backup (snapshot)

    What is the IO wait on the Proxmox host during the backup? If it's high, then you are starving the VM of disk IO, causing the VM to think its disks are bad because they are not responding. I've only started using ZFS a few months ago, so I am far from an expert. It seems that ZFS has its own...
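
    (A minimal sketch of watching IO wait on the host during a backup; iostat comes from the sysstat package.)

      apt-get install sysstat
      # %iowait in the CPU summary plus per-device utilisation, refreshed every 5 seconds
      iostat -x 5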
  16. Memory allocation failure

    There is not enough contiguous free RAM to satisfy the requested allocation. This will display how many contiguous allocations of each 'order' are available: cat /proc/buddyinfo. From left to right, each column represents the count of allocations available for each order, starting with order 0. The size of each...
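
    (A short worked example of reading the output, assuming the usual 4 KiB page size.)

      cat /proc/buddyinfo
      # Column n holds the count of free blocks of 2^n pages:
      # order 0 = 4 KiB, order 1 = 8 KiB, order 2 = 16 KiB, ... order 10 = 4 MiB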
  17. Proxmox 5.0 and Live Migration

    You can migrate offline when using storage replication; I wish it worked live.
  18. DRBD Diskless after first reboot

    I've got DRBD set up on some 5.x servers using a setup similar to the old wiki article. @fwf DRBD will end up diskless on reboot when it cannot find the disk you specified in the configuration. How did you reference the disks in the drbd config? I've found that using /dev/sdX is a bad idea because...
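
    (A minimal sketch of a resource definition using a stable by-id path instead of /dev/sdX; host names, addresses, and the disk paths are hypothetical.)

      resource r0 {
          device    /dev/drbd0;
          meta-disk internal;
          on nodeA {
              # stable identifier, unlike /dev/sdX which can change between boots
              disk    /dev/disk/by-id/wwn-0x5000c5001234abcd;
              address 10.0.0.1:7788;
          }
          on nodeB {
              disk    /dev/disk/by-id/wwn-0x5000c5004321dcba;
              address 10.0.0.2:7788;
          }
      }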
  19. General Protection Fault with ZFS

    This is reported upstream already by someone else; I added my info there too. https://github.com/zfsonlinux/zfs/issues/6781 I set up DRBD on top of a ZVOL. When making heavy sequential writes on the primary, the secondary node throws a General Protection Fault error from zfs. The IO was from a...
  20. PVE 5.1 and Infiniband Issues

    @fabian I just ran into this problem myself. Installing pve-kernel-4.13.8-3-pve_4.13.8-30_amd64.deb from pve-test seems to have resolved the issue. I would be happy to give Proxmox one of these cards to put into the test servers. Would you like me to ship it to you?
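
    (A minimal sketch of installing such a kernel package once the .deb is on the host; the system boots into it after a reboot.)

      dpkg -i pve-kernel-4.13.8-3-pve_4.13.8-30_amd64.deb
      reboot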
