I was able to boot into a GRUB shell on the same server and run the following command, which gives some errors, but the server boots fine, so I'm not sure if this is related. You'll find the output attached.
Ok, after exporting the pool and rebooting, everything is fine. I'm not sure what might have caused this. Any ideas? I have multiple servers to update and I don't want this to happen again.
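For reference, roughly what I ran from the rescue environment before rebooting (the pool name 'rpool' and the mountpoint are assumptions here, adjust for your layout):

# import the pool somewhere harmless, then export it cleanly
zpool import -f -R /mnt rpool
zpool export rpool
reboot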
Thanks
Hey Fabian,
Thanks for the quick response. Here's what my zpool import reports:
This can't be good. Everything was working correctly before the update, so I'm assuming it can't be hardware-related.
I have an issue after applying the latest update for Proxmox 5.1. The server is a ProLiant DL380 G6 using ZFS and an HBA. It no longer boots and is stuck on GRUB recovery with the error: checksum verification failed
Is there a way to fix this? Is booting to the old kernel from the grub rescue...
Also stuck on GRUB recovery on a ProLiant DL380 G6 using ZFS
How do you boot to the old kernel from the GRUB rescue shell? I'm pretty sure my last working kernel was 4.13.8-2-pve on the latest Proxmox 5.1.
Here are my current options:
Booting from the live ISO repair doesn't work either...
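For anyone searching later, this is roughly what I was attempting from the GRUB prompt (a sketch only; the disk, partition, and dataset names below are assumptions, not my actual layout):

# point GRUB at the pool's partition, then load kernel and initrd by hand
set root=(hd0,gpt2)
linux /ROOT/pve-1@/boot/vmlinuz-4.13.8-2-pve root=ZFS=rpool/ROOT/pve-1
initrd /ROOT/pve-1@/boot/initrd.img-4.13.8-2-pve
boot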
There is definitely a problem with the way free RAM is calculated. One of our containers was reporting low RAM usage in the web UI, and our Nagios plugin, which is based on the same calculation, never reported anything critical, yet the container was hit by the OOM killer, which crashed the whole...
Sorry for reopening this thread, but it seems related to this: https://github.com/lxc/lxcfs/issues/175
It seems to have been fixed 18 days ago.
This would also explain why my containers would start swapping even when plenty of RAM was available.
Also, it seems someone else opened an issue with...
This is for installing on a new system.
For a running system, you only need these two commands to update to the latest version:
apt-get update
apt-get dist-upgrade
Thanks for your input. So if I understand correctly, I could take the memory limit from /sys/fs/cgroup/memory/lxc/id/ minus the RSS memory used by the container, and that should give me the RAM usage of the container? How does the web UI calculate the RAM usage?
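To make it concrete, something like this is what I have in mind (a rough sketch against the cgroup v1 files; container id 105 and subtracting total_cache are my assumptions):

# read limit and raw usage from the container's memory cgroup
cg=/sys/fs/cgroup/memory/lxc/105
limit=$(cat $cg/memory.limit_in_bytes)
usage=$(cat $cg/memory.usage_in_bytes)
# page cache is reclaimable, so subtract it from the raw usage
cache=$(awk '/^total_cache / {print $2}' $cg/memory.stat)
echo "used: $(( (usage - cache) / 1048576 )) MiB of $(( limit / 1048576 )) MiB"

Is that roughly what the web UI does?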
Also, when using lxc-top I'm getting a different value than the web UI.
Here's a screenshot of container 105, which shows 10GB of RAM usage:
Container        CPU     CPU     CPU     BlkIO    Mem
Name             Used    Sys     User    Total...
Hello,
I was wondering if maybe the memory reported in the web UI for LXC containers is wrong.
Here's a 'free -m' from a container with 2GB RAM and 0 swap:
        total    used    free    shared    buff/cache    available
Mem:     2048     978       9      2757...
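As far as I understand it, free(1) inside a container reads /proc/meminfo, which lxcfs mounts over with cgroup-scoped values, so if lxcfs serves wrong numbers, free shows them too. You can check the raw values directly (my understanding of the mechanism, not verified):

# run inside the container
grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo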
We had the same error, and changing it to lxc.apparmor.profile fixed our issue. Here are our settings so you can compare.
/etc/apparmor.d/lxc/lxc-default-with-cifs
# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc...
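And in the container config itself, the line we changed looks like this (the container id in the path is just an example):

# /etc/pve/lxc/105.conf
lxc.apparmor.profile: lxc-default-with-cifs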