Search results

  1. BUG: soft lockup

    Hello again everyone, it's been too long since my last post here. I have had one server randomly locking up for over a month now, and now a 2nd server is having this problem as well. Unfortunately I've not captured all of the kernel messages that would help diagnose this, but I have a couple of screenshots from...
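    One way to capture kernel messages from a box that locks up is netconsole, which streams printk output to another host over UDP. A minimal sketch, assuming illustrative addresses, interface, and MAC (none of these come from the original post):

        # on the affected server: forward kernel messages to 192.168.1.20:6666 via eth0
        modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55
        # on the receiving host (some netcat variants omit -p):
        nc -u -l -p 6666 | tee soft-lockup.log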
  2. Increase performance with sched_autogroup_enabled=0

    @Alwin I've tried setting the values back to default so I can test before and after, but performance stays the same. It's possible that there is some other explanation for my initial results.
  3. Increase performance with sched_autogroup_enabled=0

    @Alwin I used dd and 'time cp' on a VM where I could previously only copy at about 40MB/sec; now I can get 140MB/sec.
  4. Increase performance with sched_autogroup_enabled=0

    Yup, that thread is what gave me the idea to try this. Found it this morning. I suspect these settings will help anyone facing IO-related performance issues.
  5. Proxmox 5 very slow

    @nicko I'm curious to know if this helps you or not: https://forum.proxmox.com/threads/increase-performance-with-sched_autogroup_enabled-0.41729/ For me it made a huge difference in IO performance on numerous servers.
  6. Increase performance with sched_autogroup_enabled=0

    Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts. It also helps to modify sched_migration_cost_ns. I've tested this on Proxmox 4.x and 5.x:

        echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns
        echo 0 >...
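    To make settings like these persist across reboots, one approach is a sysctl drop-in; a sketch, assuming the second key from the thread title since the post above is truncated (the file name is illustrative):

        # /etc/sysctl.d/99-scheduler.conf
        kernel.sched_migration_cost_ns = 5000000
        kernel.sched_autogroup_enabled = 0

        # apply without rebooting
        sysctl --system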
  7. pvesr segfault

    Have you rebooted into memtest and checked your RAM? That would be my first suggestion. If you cannot afford the downtime, you might try installing and using memtester:

        apt-get install memtester
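    For reference, memtester takes the amount of memory to lock and an iteration count; a sketch with illustrative values:

        # test 1024MB of RAM for 5 passes; run as root so the memory can be locked
        memtester 1024M 5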
  8. VM filesystem regularly fails while running backup (snapshot)

    What is the IO wait on the Proxmox host during the backup? If it's high, then you are starving the VM of disk IO, causing the VM to think its disks are bad because they are not responding. I've only started using ZFS a few months ago, so I am far from an expert. It seems that ZFS has its own...
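    A quick way to watch IO wait and per-device utilization while the backup runs (the 5-second interval is arbitrary):

        # '%iowait' in the CPU summary, '%util' per device, refreshed every 5 seconds
        iostat -x 5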
  9. Memory allocation failure

    Not enough contiguous free RAM to allocate the RAM requested. This will display how many contiguous allocations of each 'order' are available:

        cat /proc/buddyinfo

    From left to right, each column represents the count of allocations available for each order, starting with order 0. The size of each...
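    The arithmetic behind the orders: an order-N block is 2^N contiguous pages, so with the usual 4 KiB page size order 0 is 4 KiB, order 1 is 8 KiB, order 2 is 16 KiB, and so on. A quick illustrative one-liner:

        # print the block size for each order, assuming 4 KiB pages
        for n in $(seq 0 10); do echo "order $n: $(( (4096 << n) / 1024 )) KiB"; done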
  10. Proxmox 5.0 and Live Migration

    You can offline migrate when using storage replication; I wish it worked live.
  11. DRBD Diskless after first reboot

    I've got DRBD set up on some 5.x servers using a setup similar to the old wiki article. @fwf DRBD will end up diskless on reboot when it cannot find the disk you specified in the configuration. How did you reference the disks in the drbd config? I've found that using /dev/sdX is a bad idea because...
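    One common way around unstable /dev/sdX names is to reference /dev/disk/by-id/ paths in the resource definition; a sketch, where the resource name, hosts, addresses, and by-id paths are all illustrative:

        resource r0 {
            on node1 {
                device    /dev/drbd0;
                disk      /dev/disk/by-id/ata-DISK_SERIAL_A;
                address   10.0.0.1:7788;
                meta-disk internal;
            }
            on node2 {
                device    /dev/drbd0;
                disk      /dev/disk/by-id/ata-DISK_SERIAL_B;
                address   10.0.0.2:7788;
                meta-disk internal;
            }
        }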
  12. General Protection Fault with ZFS

    This is reported upstream already by someone else; I added my info there too. https://github.com/zfsonlinux/zfs/issues/6781 I set up DRBD on top of a ZVOL. When making heavy sequential writes on the primary, the secondary node throws a General Protection Fault error from zfs. The IO was from a...
  13. PVE 5.1 and Infiniband Issues

    @fabian I just ran into this problem myself. Installing pve-kernel-4.13.8-3-pve_4.13.8-30_amd64.deb from pve-test seems to have resolved the issue. I would be happy to give Proxmox one of these cards to put into the test servers. Would you like me to ship it to you?
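    For anyone following along, a downloaded kernel package like that is typically installed with dpkg (file name taken from the post above) and takes effect after a reboot:

        dpkg -i pve-kernel-4.13.8-3-pve_4.13.8-30_amd64.deb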
  14. VZDump slow on ceph images, RBD export fast

    I have seen one report that with Luminous and Proxmox 5.0 the situation is much better: https://forum.proxmox.com/threads/ceph-luminous-backup-improvement.34678/ Apparently some fixes in librbd to reduce memory copies are the source of the improvement. If the 64k reads in QEMU backup were...
  15. convert VM to raw image/disk/bare metal

    Looks like you are using zvols, so all you need to do is copy the volume. Something like this:

        dd if=/dev/zvol/rpool/data/vm-100-disk-1 of=/dev/sdX bs=1M
  16. 100% swap usage on machine with 50% free RAM

    Yes, disable disk swap and use zram. Every Monday, testing VMs would be swapped to disk after being idle all weekend and testers would complain their VMs were slow. Even setting swappiness=1 would not prevent the problem. Not had a single complaint or OOM event since removing disk swap and adding...
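    A minimal zram swap sketch for reference; the device size and swap priority are illustrative assumptions, and a packaged helper such as zram-tools can manage this instead:

        modprobe zram
        echo 4G > /sys/block/zram0/disksize
        mkswap /dev/zram0
        swapon -p 100 /dev/zram0    # higher priority than any remaining disk swap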
  17. convert VM to raw image/disk/bare metal

    If you're using raw, the VM disk data can be copied directly to a drive. You could boot a Linux live CD on the bare metal, then use dd and netcat to clone the VM disk to the bare metal. Be sure to stop the VM first. The only tricky part, because of Windows, is making sure the driver for your...
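    A sketch of the dd-plus-netcat clone; the image path, target device, IP, and port are illustrative assumptions, and some netcat variants want 'nc -l 9000' without -p:

        # on the bare-metal target, booted from the live CD:
        nc -l -p 9000 | dd of=/dev/sda bs=1M
        # on the Proxmox host, with the VM stopped:
        dd if=/var/lib/vz/images/100/vm-100-disk-1.raw bs=1M | nc 192.168.1.50 9000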
  18. Anyone using Chef?

    I don't create new VMs frequently enough to invest time into automating the process. I just use the Windows and Ubuntu templates I created and then bootstrap Chef via SSH or WinRM. In Proxmox 5 we will hopefully have cloud-init; that will make using templates even easier. Would be nice to see...
  19. Ceph Luminous backup improvement?

    Is this with or without KRBD?
  20. [SOLVED] Watchdog fence for physical nodes

    Each node has a dedicated group that looks like this:

        group: NodeName
            nodes Node_Name
            nofailback 0
            restricted 0

    Each node has a diskless VM like this:

        bootdisk: scsi0
        cores: 1
        freeze: 1
        ...
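    For context, the diskless VM would then be registered as an HA resource restricted to that group; an illustrative command where the VMID and group name are assumptions:

        ha-manager add vm:100 --group NodeName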
