About a week ago the zfs-linux package on pve-no-subscription was updated to 0.7.7, but I was alarmed to see on zfsonlinux.org:
Are there any plans to either downgrade to v0.7.6 or upgrade to v0.7.8 soon?
Thank you.
What type of local storage are you using (ZFS, LVM, etc.)? Are you using the VirtIO drivers in your guest, and what is the guest OS? In my experience, Linux guests with VirtIO drivers work the most consistently when ZFS is the local storage.
Ah, good to know.
This is worth testing, but according to this thread, restarting the 'systemd-journald' process stops logging entirely: https://unix.stackexchange.com/questions/379288/reloading-systemd-journald-config
I can confirm, however, that sending SIGUSR2 instead does not cause this problem.
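For reference, a minimal sketch of the rotation trick (assumes a running systemd; SIGUSR2-triggered rotation is documented in systemd-journald(8)):

```shell
# SIGUSR2 makes systemd-journald rotate its journal files immediately,
# without the logging gap that a full daemon restart causes.
if [ -d /run/systemd/system ]; then
    systemctl kill --kill-who=main --signal=SIGUSR2 systemd-journald
else
    echo "systemd is not running here; skipping"
fi
```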
journalctl(1):
--vacuum-size=, --vacuum-time=, --vacuum-files=
Removes archived journal files until the disk space they use falls below the specified size
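For example, a sketch of a vacuum invocation (the 100M target is arbitrary; this needs root to actually delete anything):

```shell
# Trim archived journal files until total usage is at most 100M.
# --vacuum-* never removes the active system.journal (journalctl(1)).
if command -v journalctl >/dev/null 2>&1; then
    journalctl --vacuum-size=100M || echo "vacuum failed (may need root)"
    status=attempted
else
    echo "journalctl not available here"
    status=skipped
fi
```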
Which makes me think this option operates on '/var/log/journal' (on disk) rather than '/run/log/journal' (tmpfs). I need to keep...
Correct. Setting RuntimeMaxFileSize and RuntimeMaxFiles in journald.conf (see 'man journald.conf') will restrict how much space under /run/log/journal is used by journald.
From Manual page journald.conf(5):
The options prefixed with "Runtime" apply to the journal files when stored on a...
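For example, a journald.conf fragment capping the runtime journal (the values are illustrative; see journald.conf(5) for defaults):

```
# /etc/systemd/journald.conf -- illustrative limits for /run/log/journal
[Journal]
RuntimeMaxUse=64M
RuntimeMaxFileSize=16M
RuntimeMaxFiles=4
```

Restart or SIGUSR2 journald afterwards for the new limits to take effect.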
The tmpfs is not full, but it is using 785MB of your 1024MB of RAM, which is a lot. Try deleting the files under '/run/log/journal/$UID/' (you can leave the newest file, 'system.journal') and see whether the 'available' RAM figure increases.
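To check this quickly, a sketch (paths follow journald's defaults):

```shell
# How full is the /run tmpfs, and how much of it is journald's runtime journal?
df -h /run
du -sh /run/log/journal 2>/dev/null || echo "no runtime journal directory"
# Overall memory picture; watch the 'available' column
free -m
```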
Couldn't edit my last post, said it was spam-like or something. I'm seeing:
# ps aux|grep pmxcfs
root 5612 0.2 0.0 812048 17652 ? Ds Jan16 19:03 /usr/bin/pmxcfs
# cat /proc/5612/stack
[<ffffffff9f129497>] call_rwsem_down_write_failed+0x17/0x30
[<ffffffff9e9d22a9>]...
For what it's worth, I believe I'm hitting the same issue, although my storage is local ZFS rather than Ceph. '/etc/pve' is empty, corosync is running, 'pve-cluster' won't start, and 'pmxcfs' is stuck in D state and can't be killed. At first I tried restarting pveproxy and pvestatd, which didn't...
Yep, I think @mailinglists figured it out. The issue happened again with one of my containers that has a small RAM allocation (512M RAM, 512M swap). The /run tmpfs mount was using 945M and swap was nearly full:
tmpfs 71G 945M 70G 2% /run
Swap: 512 494...
Interesting... I never thought to run 'df' while the container was exhibiting the problem to check tmpfs usage. I've since increased the container's memory limit from 512MB to 1GB, and so far it hasn't run out of memory, though maybe it will just take twice as long to happen.
Also make sure myhostname= is set to a valid, externally resolvable hostname in /etc/postfix/main.cf, then restart postfix with 'systemctl restart postfix'.
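For illustration (the hostname below is a placeholder; use your server's real FQDN):

```
# /etc/postfix/main.cf
myhostname = pve1.example.com
```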
Do you mean the email sent on success/failure of backup jobs or any cronjob?
Have you checked/set the MAILTO variable in crontab? https://www.cyberciti.biz/faq/linux-unix-crontab-change-mailto-settings/
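For example, at the top of the crontab (the address and the job path are placeholders):

```
MAILTO=admin@example.com
# cron mails each job's output to MAILTO instead of the local root mailbox
0 3 * * * /usr/local/bin/backup.sh
```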
Solid advice, especially since you'll be destroying the pool and re-creating it. Also, as a general rule: if you're not regularly testing your backups by restoring them, you don't have backups.
Yes, that should work. I would format it as ext4 for simplicity's sake and mount it as directory storage in Proxmox.
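A rough sketch of the steps ('pvesm add dir' is the Proxmox CLI for directory storage; the device name, mount point, and storage ID are all placeholders). The mkfs line is destructive, so the commands are left commented for review:

```shell
# Placeholders -- adjust before uncommenting; mkfs.ext4 erases the disk!
DISK=/dev/sdb
MNT=/mnt/backup
# mkfs.ext4 -F "$DISK"
# mkdir -p "$MNT" && mount "$DISK" "$MNT"
# echo "$DISK $MNT ext4 defaults 0 2" >> /etc/fstab
# pvesm add dir backup-disk --path "$MNT" --content backup,images
echo "Review and uncomment the commands above for: $DISK -> $MNT"
```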
Sorry, I really have no idea. From the situation you've described though, it's going to take a long time. Shutting down the VM you're backing up rather than trying to back it up...
I've deployed both VMware and Proxmox in production environments. VMware is very expensive to license, especially if you want the live-migration features (Storage vMotion). It's a solid platform, and there's plenty of quality documentation and no shortage of technicians available to support it. However...
Honestly, there are several serious issues with the way the zpool was configured, and unfortunately the only way to recover is to back up all your data, destroy the pool, re-create it, and then restore the data. The easiest way to accomplish that would be to use Proxmox's built-in backup...
I believe I am running into the same issue with an Ubuntu 16.04 LXC container (built from the Proxmox-provided ubuntu-16.04-standard_16.04-1_amd64.tar.gz template) on ZFS storage. The container runs only postfix and nagios-nrpe-server, and the issue occurred after ~60 days of uptime.
# free -m...
For mounting NFS file systems and running nfs-server from within an LXC container on Proxmox 5:
sed -i '$ i\ mount fstype=nfs,\n mount fstype=nfs4,\n mount fstype=nfsd,\n mount fstype=rpc_pipefs,' /etc/apparmor.d/lxc/lxc-default-cgns && systemctl reload apparmor
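After running the one-liner, the profile should contain lines like these just before its closing brace:

```
  mount fstype=nfs,
  mount fstype=nfs4,
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,
```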