Search results

  1. Regression in zfs-linux 0.7.7

    About a week ago the zfs-linux package on pve-no-subscription was updated to 0.7.7, but I was alarmed to see on zfsonlinux.org: Are there any plans to either downgrade to v0.7.6 or upgrade to v0.7.8 soon? Thank you.
  2. VM live migration with local storage

    What type of local storage are you using (ZFS, LVM, etc.)? Also, are you using the VirtIO drivers in your guest, and what is the guest OS? In my experience, Linux guests with VirtIO drivers seem to work the most consistently when using ZFS as the local storage.
  3. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Ah, good to know. This is worth testing, but this thread says restarting the 'systemd-journald' process will cause logging to stop: https://unix.stackexchange.com/questions/379288/reloading-systemd-journald-config I can confirm, however, that sending SIGUSR2 instead does not cause this issue (see the sketch for item 3 after this list).
  4. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    From journalctl(1), on --vacuum-size=, --vacuum-time=, and --vacuum-files=: "Removes archived journal files until the disk space they use falls below the specified size." Which makes me think these options operate on '/var/log/journal' (on disk) rather than '/run/log/journal' (tmpfs); see the sketch for item 4 after this list. I need to keep...
  5. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Correct. Setting RuntimeMaxFileSize and RuntimeMaxFiles in journald.conf (see 'man journald.conf') will restrict how much space journald uses under /run/log/journal; a sample fragment is sketched for item 5 after this list. From journald.conf(5): "The options prefixed with "Runtime" apply to the journal files when stored on a..."
  6. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    The tmpfs is not full, but it is using 785MB of your 1024MB of RAM, which is a lot. Try deleting files under '/run/log/journal/$UID/' (you can leave the newest file, named 'system.journal') and see if the 'available' number of RAM megabytes increases.
  7. LXC Container stuck on startup, hangs pveproxy

    Couldn't edit my last post, said it was spam-like or something. I'm seeing:
    # ps aux|grep pmxcfs
    root      5612  0.2  0.0  812048  17652  ?  Ds  Jan16  19:03  /usr/bin/pmxcfs
    # cat /proc/5612/stack
    [<ffffffff9f129497>] call_rwsem_down_write_failed+0x17/0x30
    [<ffffffff9e9d22a9>]...
  8. LXC Container stuck on startup, hangs pveproxy

    For what it's worth, I believe I am also having this same issue, although my storage is local ZFS rather than Ceph. '/etc/pve' is empty, corosync is running, 'pve-cluster' won't start, and 'pmxcfs' is in D status and can't be killed. At first I tried to restart pveproxy and pvestatd, which didn't...
  9. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Yep, I think @mailinglists figured it out. The issue happened again with one of my containers that has a small RAM allocation (512M RAM, 512M swap). The /run tmpfs mount was using 945M and swap was nearly full:
    tmpfs   71G   945M   70G   2%   /run
    Swap:   512   494...
  10. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Interesting...I never thought to run a 'df' when the container was exhibiting the problem to check tmpfs usage. I've since increased the memory limit of the container from 512MB to 1GB and so far it hasn't seemed to run out of memory, although maybe it will just take twice as long to happen.
  11. Blue screen with 5.1

    I've run into a similar error...see if this helps: https://forum.proxmox.com/threads/installing-proxmox-5-0b2-on-hp-dl360-g5.35382/
  12. Cronjob to Mail not longer in 5.1?

    Also make sure 'myhostname =' is set to a valid, externally resolvable hostname in /etc/postfix/main.cf, then restart Postfix with 'systemctl restart postfix' (see the sketch for item 12 after this list).
  13. Cronjob to Mail not longer in 5.1?

    Do you mean the email sent on success/failure of backup jobs, or any cronjob? Have you checked/set the MAILTO variable in crontab (see the sketch for item 13 after this list)? https://www.cyberciti.biz/faq/linux-unix-crontab-change-mailto-settings/
  14. Very high IO Delay on any load

    Solid advice, especially since you'll be destroying the pool and re-creating it. Also, as a general rule: if you're not testing your backups regularly by restoring them, you don't have backups.
  15. Very high IO Delay on any load

    Yes, that should work. I would format it as ext4 for simplicity's sake and mount it as directory storage in Proxmox (see the sketch for item 15 after this list). Sorry, I really have no idea. From the situation you've described, though, it's going to take a long time. Shutting down the VM you're backing up rather than trying to back it up...
  16. Very high IO Delay on any load

    I've deployed both VMware and Proxmox in production environments. VMware is very expensive to license, especially if you want to use the live migration features (Storage vMotion). It's a solid platform and there's tons of quality documentation and technicians available to support it. However...
  17. Very high IO Delay on any load

    Honestly, you have several serious issues with the way the zpool was configured, and unfortunately the only way to recover is to back up all data, destroy the pool, re-create it, and then restore all data. The easiest way to accomplish that (see the sketch for item 17 after this list) would be to use Proxmox's built-in backup...
  18. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    I believe I am running into the same issue with an Ubuntu 16.04 LXC container (built from the Proxmox-provided ubuntu-16.04-standard_16.04-1_amd64.tar.gz template) on ZFS storage. The container runs just postfix and nagios-nrpe-server, and the issue occurred after ~60 days of uptime.
    # free -m...
  19. lxc NFS

    For mounting NFS file systems and running nfs-server from within an LXC container on Proxmox 5 (the rules this adds are listed in the sketch for item 19 after this list):
    sed -i '$ i\ mount fstype=nfs,\n mount fstype=nfs4,\n mount fstype=nfsd,\n mount fstype=rpc_pipefs,' /etc/apparmor.d/lxc/lxc-default-cgns && systemctl reload apparmor
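
Example sketches (added for reference; the commands below are illustrative, not quotes from the threads above):

Re item 3: rotating journald without restarting it. On a systemd host or container, SIGUSR2 asks systemd-journald to rotate its journal files in place, so logging keeps running. 'systemctl kill' is standard systemd, but treat this as a sketch rather than the poster's exact command:

    systemctl kill --kill-who=main --signal=SIGUSR2 systemd-journald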
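
Re item 4: the vacuum options from journalctl(1). The size, age, and file-count limits here are only example values; each command removes archived journal files according to its limit:

    journalctl --vacuum-size=64M
    journalctl --vacuum-time=7d
    journalctl --vacuum-files=4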
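
Re item 5: capping the volatile journal under /run. A possible /etc/systemd/journald.conf fragment; the option names come from journald.conf(5) (and the post above), while the values are placeholders to adjust for your container:

    [Journal]
    RuntimeMaxUse=64M
    RuntimeMaxFileSize=16M
    RuntimeMaxFiles=4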
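
Re item 12: setting the Postfix hostname. 'mail.example.com' is a placeholder; 'postconf -e' edits /etc/postfix/main.cf in place:

    postconf -e 'myhostname = mail.example.com'
    systemctl restart postfix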
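
Re item 13: a crontab fragment with MAILTO set. The address and the job script are placeholders; with MAILTO set, cron mails the output of each job below it to that address:

    MAILTO=admin@example.com
    # m h dom mon dow  command
    0 3 * * *          /usr/local/bin/nightly-backup.sh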
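
Re item 15: formatting a spare disk as ext4 and adding it as directory storage. The device, mount point, and storage ID are placeholders; 'pvesm add dir' is the Proxmox storage CLI, but check 'man pvesm' for your version, and add the mount to /etc/fstab if it should survive reboots:

    mkfs.ext4 /dev/sdX1
    mkdir -p /mnt/backup
    mount /dev/sdX1 /mnt/backup
    pvesm add dir backup-disk --path /mnt/backup --content backup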
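
Re item 17: backing up a guest with Proxmox's built-in vzdump before destroying the pool. The VMID and storage ID are placeholders (see 'man vzdump', and the item 15 sketch for the storage):

    vzdump 100 --storage backup-disk --mode stop --compress lzo

After the pool is re-created, restore VMs with 'qmrestore' or containers with 'pct restore'.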
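
Re item 19: the mount rules that the sed one-liner inserts into /etc/apparmor.d/lxc/lxc-default-cgns (before its closing brace). They can also be added by hand, followed by 'systemctl reload apparmor':

    mount fstype=nfs,
    mount fstype=nfs4,
    mount fstype=nfsd,
    mount fstype=rpc_pipefs,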
