Search results

  1. Windows guest slow after pve 6 to 7 upgrade

    Possibly related (this comment and the one immediately after it):
  2. Proxmox VE 7.0 released!

    It looks like this is changed with the cpupower tool:
      cpupower frequency-set -g SCHEDULER
      # Examples
      cpupower frequency-set -g performance
      cpupower frequency-set -g schedutil
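The governor that cpupower sets can also be confirmed straight from sysfs; a minimal sketch, assuming the standard Linux cpufreq layout (which is often not exposed inside VMs or containers):

```shell
# List the active scaling governor for each CPU core via sysfs.
found=0
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    if [ -r "$gov" ]; then
        printf '%s -> %s\n' "$gov" "$(cat "$gov")"
        found=1
    fi
done
# Inside most VMs/containers cpufreq is not exposed, so nothing is listed.
[ "$found" -eq 1 ] || echo "no cpufreq interface exposed"
```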
  3. ZFS 2.0.5?

    2.0.5 is a bugfix release while 2.1.0 is a new branch with functional ZFS changes that increase the scope of the change. I'd be happy with either one.
  4. ZFS 2.0.5?

    Thanks for the update and glad to hear it's on your radar!
  5. ZFS 2.0.5?

    ZFS 2.0.5 contains bugfixes for issues that hang ZFS threads and prevent send/receive from functioning (until the next reboot). Particularly (from the changelog): Do not hash unlinked inodes #9741 #11223 #11648 #12210 It was released on June 23, 2021 and we're eager to get this change on to our...
  6. /proc/swaps is incorrect in LXC (Bug?)

    I filed this bug report for the issue:
  7. /proc/swaps is incorrect in LXC (Bug?)

    I traced the bug to lxcfs: Working: 4.0.3-pve2 Broken: 4.0.3-pve3 I think help is needed from a Proxmox team member on this one.
  8. /proc/swaps is incorrect in LXC (Bug?)

    We've noticed that in recent versions of Proxmox, /proc/swaps is wrong. It boils down to this inside the container:
      # free -m
                    total   used    free  shared  buff/cache  available
      Mem:          64738    118   64610       7           8      64619...
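The mismatch can be spot-checked by comparing the two kernel views of swap; a minimal sketch to run inside the container, using only /proc (no extra tools assumed):

```shell
# SwapTotal as reported by /proc/meminfo (the view `free` uses).
meminfo_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
# Sum of the Size column in /proc/swaps (skip the header line).
swaps_kb=$(awk 'NR > 1 {sum += $3} END {print sum + 0}' /proc/swaps)

echo "SwapTotal per /proc/meminfo: ${meminfo_kb} kB"
echo "sum of /proc/swaps sizes:    ${swaps_kb} kB"
# On an affected container the two numbers disagree, because (per the lxcfs
# bisect in the thread above) lxcfs no longer masks /proc/swaps correctly.
```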
  9. LXC container reboot fails - LXC becomes unusable

    I have seen the issue on a Proxmox node without any (client or server) NFS.
  10. LXC container reboot fails - LXC becomes unusable

    I have removed "Solved" from the title as the only solution is to manually install and maintain a 4.18+ kernel which isn't feasible / desirable for most users.
  11. Can you help me build the kernel of proxmox 4.19?

    This thread may be relevant to your situation:
  12. Unable to shutdown/stop lxc container

    Based on your listing: Yes, 17754 is the process you want to kill if the nice ways of shutting down the container have failed.
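The "nice ways first, then kill" escalation can be sketched generically; the PID 17754 above is specific to that thread, so this demo uses a throwaway sleep process as a stand-in for the stuck container process:

```shell
# Stand-in for a stuck container process.
sleep 300 &
pid=$!

kill -TERM "$pid"            # ask politely first
sleep 1
if kill -0 "$pid" 2>/dev/null; then
    kill -KILL "$pid"        # escalate only if it ignored SIGTERM
fi
wait "$pid" 2>/dev/null || true
echo "pid $pid reaped"
```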
  13. [SOLVED] How to recovery files in VM-Disk on zfs pool

    LnxBil is right, snapshots make a terrible situation a non-issue through rollback. I consider this tool mandatory on all ZFS systems: If you have snapshots and end up with a system that won't boot, you can use a ZFS enabled rescue CD to do the...
  14. Boot issues

    When it hits the Grub boot screen, hit 'e' to edit. Then remove "quiet" from the kernel options and continue with the boot. I don't remember the exact key to boot the modified options, but instructions will be on the bottom of the screen when you're editing. That should hopefully give you more...
  15. ZFS performance regression with Proxmox

    Although we have Oracle support, we know we're running an unsupported configuration so we focus on MetaLink articles. The biggest gotcha so far has been that even with all IO set to ASYNCH, Oracle Automatic Diagnostic Repository (ADR) still does Direct IO (which ZFS doesn't support). The...
  16. Running postgres and reducing I/O overhead

    Have you considered running it in a container (LXC) instead of KVM? That would give you bare metal performance.
  17. ZFS performance regression with Proxmox

    It could be a few things. 1) Original Debian install likely wasn't based on ZFS 0.7.12 (latest). After cutting over to the new PVE kernel a ZFS upgrade process will run for 1+ hours in the background impacting performance. If you want to watch for it, hit 'K' (capital) in htop to show kernel...
  18. [SOLVED] Slow ZFS performance

    With sync=disabled, writes are buffered to RAM and flushed every 5 seconds in the background (non-blocking unless it takes longer than 5s to flush). With sync=standard, writes must be flushed to disk anytime software issues a sync request and sync operations block until the disk has acknowledged...
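The latency difference is easy to feel outside ZFS too; a rough sketch using dd, where conv=fsync forces a flush before returning (like sync=standard honoring a sync request) and omitting it leaves the data in the page cache (as with sync=disabled's RAM buffering):

```shell
f=$(mktemp)

# Synchronous: dd does not return until the data has been flushed to disk.
dd if=/dev/zero of="$f" bs=4k count=256 conv=fsync 2>/dev/null

# Asynchronous: dd returns as soon as the data sits in the page cache.
dd if=/dev/zero of="$f" bs=4k count=256 2>/dev/null

size=$(wc -c < "$f")
echo "wrote ${size} bytes"   # 256 * 4096 = 1048576
rm -f "$f"
```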
  19. LXC container reboot fails - LXC becomes unusable

    If AppArmor doesn't work you can boot back into your current kernel and it will be fine. I suspect you won't be able to take libapparmor and apparmor (the package that contains apparmor_parser) from Ubuntu's repos without breaking a bunch of dependencies. If you decide to try the Ubuntu...
  20. LXC container reboot fails - LXC becomes unusable

    As far as I know, this issue is only resolved by 4.18+. You may be able to use a kernel from Ubuntu or Debian Backports, but I didn't have any luck due to missing ZFS support and/or hardware modules in those kernels. I'm currently building my own kernels to track 4.19 + ZFS + hardware I need...

