Recent content by gsupp

  1. Regression in zfs-linux 0.7.7

    About a week ago the zfs-linux package on pve-no-subscription was updated to 0.7.7, but I was alarmed to see on zfsonlinux.org: Are there any plans to either downgrade to v0.7.6 or upgrade to v0.7.8 soon? Thank you.
  2. VM live migration with local storage

    What type of local storage are you using (ZFS, LVM, etc.)? Also are you using the VirtIO drivers in your guest and what is the guest OS? In my experience Linux guests with VirtIO drivers seem to work the most consistently when using ZFS as the local storage.
  3. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Ah, good to know. This is worth testing, but this thread says that restarting the 'systemd-journald' process will stop logging entirely: https://unix.stackexchange.com/questions/379288/reloading-systemd-journald-config I can confirm, however, that sending SIGUSR2 instead does not cause this issue.
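As a hedged sketch of that approach (both forms are standard systemd; SIGUSR2 asks journald to rotate its files immediately without a restart, so logging continues):

```shell
# Rotate journald's files in place via systemd's signal delivery:
systemctl kill --signal=SIGUSR2 systemd-journald

# Equivalent without systemctl, signalling the daemon's PID directly:
kill -USR2 "$(pidof systemd-journald)"
```

On newer systemd versions, 'journalctl --rotate' does the same thing.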
  4. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    From journalctl(1), on --vacuum-size=, --vacuum-time=, --vacuum-files=: "Removes archived journal files until the disk space they use falls below the specified size" Which makes me think these options operate on '/var/log/journal' (on disk) rather than '/run/log/journal' (tmpfs). I need to keep...
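For illustration, the vacuum options act only on archived (already rotated) journal files; the sizes and ages below are arbitrary examples:

```shell
# Shrink archived journal files until total disk usage drops below 200 MB:
journalctl --vacuum-size=200M

# Or drop archived journal files older than two weeks:
journalctl --vacuum-time=2weeks

# Active files (system.journal) are never touched by vacuuming;
# rotate first if you need to reclaim space from them as well:
journalctl --rotate
```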
  5. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Correct. Setting RuntimeMaxFileSize and RuntimeMaxFiles in journald.conf (see 'man journald.conf') will restrict how much space under /run/log/journal is used by journald. From Manual page journald.conf(5): The options prefixed with "Runtime" apply to the journal files when stored on a...
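A minimal journald.conf fragment along those lines might look like this (the values are illustrative, not recommendations):

```ini
# /etc/systemd/journald.conf
[Journal]
# "Runtime" options limit the volatile journal under /run/log/journal (tmpfs):
RuntimeMaxUse=64M        # total space the volatile journal may use
RuntimeMaxFileSize=16M   # rotate each file once it reaches this size
RuntimeMaxFiles=4        # keep at most this many rotated files
```

Restart or SIGUSR1/SIGUSR2 journald after editing for the limits to take effect.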
  6. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    tmpfs is not full, but it is using 785MB of your 1024MB RAM, which is a lot. Try deleting files under '/run/log/journal/$UID/' (you can leave the newest file named 'system.journal') and see whether the 'available' memory figure increases.
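A sketch of that cleanup, assuming the default journald layout (the directory under /run/log/journal is named after the machine ID, which varies per host):

```shell
# Inspect how much tmpfs the volatile journal is using:
du -sh /run/log/journal/*/

# Remove archived (rotated) files; the active file is plain 'system.journal',
# archived ones carry an '@' suffix in their names:
rm -f /run/log/journal/*/system@*.journal

# Verify that 'available' memory has gone up:
free -m
```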
  7. LXC Container stuck on startup, hangs pveproxy

    Couldn't edit my last post, said it was spam-like or something. I'm seeing:
    # ps aux | grep pmxcfs
    root 5612 0.2 0.0 812048 17652 ? Ds Jan16 19:03 /usr/bin/pmxcfs
    # cat /proc/5612/stack
    [<ffffffff9f129497>] call_rwsem_down_write_failed+0x17/0x30
    [<ffffffff9e9d22a9>]...
  8. LXC Container stuck on startup, hangs pveproxy

    For what it's worth, I believe I am having this same issue, although my storage is local ZFS rather than Ceph. '/etc/pve' is empty, corosync is running, 'pve-cluster' won't start, and 'pmxcfs' is in D state and can't be killed. At first I tried restarting pveproxy and pvestatd, which didn't...
  9. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Yep, I think @mailinglists figured it out. The issue happened again with one of my containers that has a small RAM allocation (512M RAM, 512M swap). The /run tmpfs mount was using 945M and swap was nearly full:
    tmpfs 71G 945M 70G 2% /run
    Swap: 512 494...
  10. [LXC Stability] Cached memory leaks (killing processes (wtf!))

    Interesting... I never thought to run 'df' to check tmpfs usage while the container was exhibiting the problem. I've since increased the container's memory limit from 512MB to 1GB, and so far it hasn't run out of memory, although maybe it will just take twice as long to happen.
  11. Blue screen with 5.1

    I've run into a similar error...see if this helps: https://forum.proxmox.com/threads/installing-proxmox-5-0b2-on-hp-dl360-g5.35382/
  12. Cronjob to Mail not longer in 5.1?

    Also make sure myhostname= is set to a valid, externally resolvable hostname in /etc/postfix/main.cf and restart postfix with systemctl restart postfix
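For example, assuming a stock Debian postfix install (the hostname and address below are illustrative):

```shell
# Set a fully-qualified, externally resolvable hostname in main.cf:
postconf -e 'myhostname = pve1.example.com'

# Apply the change:
systemctl restart postfix

# Quick test that mail now leaves the box (requires bsd-mailx or mailutils):
echo "test body" | mail -s "postfix test" you@example.com
```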
  13. Cronjob to Mail not longer in 5.1?

    Do you mean the email sent on success/failure of backup jobs or any cronjob? Have you checked/set the MAILTO variable in crontab? https://www.cyberciti.biz/faq/linux-unix-crontab-change-mailto-settings/
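For reference, a minimal crontab sketch (edit with 'crontab -e'; the address and script path are illustrative):

```
# Anything a job below prints to stdout/stderr is mailed to this address:
MAILTO=admin@example.com

# Example job, daily at 03:00:
0 3 * * * /usr/local/bin/backup.sh
```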
  14. Very high IO Delay on any load

    Solid advice, especially since you'll be destroying the pool and re-creating it. Also, as a general rule: if you're not testing your backups regularly by restoring them, you don't have backups.
