Search results

  1. LXC container reboot fails - LXC becomes unusable

    I have seen the issue on a Proxmox node without any (client or server) NFS.
  2. LXC container reboot fails - LXC becomes unusable

    I have removed "Solved" from the title as the only solution is to manually install and maintain a 4.18+ kernel which isn't feasible / desirable for most users.
  3. Can you help me build the kernel of proxmox 4.19?

    This thread may be relevant to your situation: https://forum.proxmox.com/posts/241473/
  4. Unable to shutdown/stop lxc container

    Based on your listing: Yes, 17754 is the process you want to kill if the nice ways of shutting down the container have failed.
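    A minimal sketch of that escalation path, assuming a container ID of 101 (placeholder) and the PID from your listing:

      pct shutdown 101          # ask the container to shut down cleanly
      pct stop 101              # force-stop it through Proxmox if that hangs
      kill -9 17754             # last resort: kill the container process from your listing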
  5. [SOLVED] How to recovery files in VM-Disk on zfs pool

    LnxBil is right, snapshots make a terrible situation a non-issue through rollback. I consider this tool mandatory on all ZFS systems: https://github.com/zfsonlinux/zfs-auto-snapshot If you have snapshots and end up with a system that won't boot, you can use a ZFS enabled rescue CD to do the...
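    A minimal sketch of a rollback, assuming a dataset named rpool/data/vm-100-disk-1 and an auto-snapshot name (both placeholders):

      zfs list -t snapshot -r rpool/data/vm-100-disk-1     # find a snapshot taken before the damage
      zfs rollback rpool/data/vm-100-disk-1@zfs-auto-snap_daily-2019-01-01-0000
      # add -r to the rollback only if newer snapshots exist and you are willing to destroy them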
  6. Boot issues

    When it reaches the GRUB boot screen, press 'e' to edit. Then remove "quiet" from the kernel options and continue with the boot. I don't remember the exact key to boot the modified options, but instructions will be on the bottom of the screen when you're editing. That should hopefully give you more...
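    Illustrative only (kernel version and paths will differ on your system); the edited line would change from something like:

      linux /boot/vmlinuz-4.15.18-8-pve root=ZFS=rpool/ROOT/pve-1 ro quiet

    to:

      linux /boot/vmlinuz-4.15.18-8-pve root=ZFS=rpool/ROOT/pve-1 ro

    On GRUB 2 the key to boot the edited entry is typically Ctrl-x or F10.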
  7. ZFS performance regression with Proxmox

    Although we have Oracle support, we know we're running an unsupported configuration so we focus on MetaLink articles. The biggest gotcha so far has been that even with all IO set to ASYNCH, Oracle Automatic Diagnostic Repository (ADR) still does Direct IO (which ZFS doesn't support). The...
  8. Running postgres and reducing I/O overhead

    Have you considered running it in a container (LXC) instead of KVM? That would give you bare metal performance.
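    A minimal sketch of creating such a container with Proxmox's pct tool; the VMID, template, and storage names below are placeholders:

      pct create 101 local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz \
          --hostname pg01 --cores 4 --memory 8192 --rootfs local-zfs:32
      pct start 101
      # then install PostgreSQL inside the container as you would on any Debian host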
  9. ZFS performance regression with Proxmox

    It could be a few things. 1) Original Debian install likely wasn't based on ZFS 0.7.12 (latest). After cutting over to the new PVE kernel a ZFS upgrade process will run for 1+ hours in the background impacting performance. If you want to watch for it, hit 'K' (capital) in htop to show kernel...
  10. [SOLVED] Slow ZFS performance

    With sync=disabled, writes are buffered to RAM and flushed every 5 seconds in the background (non-blocking unless it takes longer than 5s to flush). With sync=standard, writes must be flushed to disk anytime software issues a sync request and sync operations block until the disk has acknowledged...
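    To inspect or change the property on a dataset (the dataset name is a placeholder; sync=disabled trades up to a few seconds of data on power loss for speed):

      zfs get sync rpool/data
      zfs set sync=disabled rpool/data      # buffered in RAM, flushed roughly every 5 seconds
      zfs set sync=standard rpool/data      # revert to honouring sync requests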
  11. LXC container reboot fails - LXC becomes unusable

    If AppArmor doesn't work you can boot back into your current kernel and it will be fine. I suspect you won't be able to take libapparmor and apparmor (the package that contains apparmor_parser) from Ubuntu's repos without breaking a bunch of dependencies. If you decide to try the Ubuntu...
  12. LXC container reboot fails - LXC becomes unusable

    As far as I know, this issue is only resolved by 4.18+. You may be able to use a kernel from Ubuntu or Debian Backports, but I didn't have any luck due to missing ZFS support and/or hardware modules in those kernels. I'm currently building my own kernels to track 4.19 + ZFS + hardware I need...
  13. Backing up a proxmox server

    You would have had to manually select it (it's not the default).
  14. Backing up a proxmox server

    Proxmox is (Debian) Linux so you may want to Google "Linux backup software". If you used ZFS for your install, you can take a snapshot and send it to a file. The issue with just copying files is that the system won't be in a crash-consistent state and you won't be getting a copy of the Master...
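    A minimal sketch of the snapshot-to-file approach, assuming a root pool named rpool and an external mount at /mnt/backup (both placeholders):

      zfs snapshot -r rpool@backup-2019-01-01
      zfs send -R rpool@backup-2019-01-01 | gzip > /mnt/backup/rpool-2019-01-01.zfs.gz
      # restore later with: gunzip -c /mnt/backup/rpool-2019-01-01.zfs.gz | zfs receive -F rpool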
  15. [SOLVED] ZFS Raid 10 with 4 SSD and cache...SLOW.

    You may want to check out: https://www.phoronix.com/scan.php?page=article&item=freebsd-12-zfs&num=1 Every filesystem has a use case where it shines. If you're looking for raw sequential throughput, no CoW filesystem is going to compete with ext4 in RAID0. You can try these safe tuning options...
  16. [SOLVED] Proxmox 5.1.46 LXC cluster error Job for pve-container@101.service failed

    The issue is still present but less frequently encountered in the 4.15.x line. See: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1779678 I saw it as recently as 4.15.18-8-pve and moved to custom 4.18 and 4.19 kernels afterwards. As in the bug report, I haven't seen the issue on these...
  17. ZFS resilvering gone bad

    If the new drive is at least as big as the one you're replacing, you can add it without any partitioning; ZFS will take care of that for you during the replace operation. You can use either format to reference the drive but /dev/disk/by-id is the recommended approach as it won't vary if you...
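    A minimal sketch using by-id names (pool and device names are placeholders):

      ls -l /dev/disk/by-id/                                   # find the stable name of the new drive
      zpool replace rpool /dev/disk/by-id/ata-OLD_SERIAL /dev/disk/by-id/ata-NEW_SERIAL
      zpool status -v rpool                                    # watch the resilver progress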
  18. ZFS resilvering gone bad

    Looking at the output of your zpool status, I see the "old" (original?) drive and another drive that was presumably the first replacement that failed. I would be inclined to leave those for now, add the new drive and do: zpool replace POOLNAME ORIGINAL_DRIVE SECOND_NEW_DRIVE If that does not...
  19. ZFS data loss

    A little trick for next time: zfs mount -O -a That tells ZFS that it's ok to "O"verlay existing directories which will allow the mounts to succeed.
  20. LXC container reboot fails - LXC becomes unusable

    Yes, a reboot will clear it up -- I'm not aware of any way to recover a system in this state without a reboot. My experience has been the same as in that Ubuntu kernel bug report; it's an infrequent condition that presents like a deadlock. We typically go months between incidents on 4.15 kernels...
