Usually you just clear up some space. Have you checked your NFS location for actual disk space used, as well as the local disks on your node? One of them has most likely run out of space.
Try running, on both servers:
df -i
and
df -h
to see.
Think I solved it.
I thought about it a bit, and since I knew it had worked before, it must have been some update on the CentOS side.
I decided to install an older version of the CentOS 7 quota package. The current version is quota-4.0.17, and I decided to install quota-4.0.14 from ...
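For what it's worth, a downgrade like that can usually be done through yum rather than by hand; the exact version string above is just what the post quotes, so check what your repo actually offers. Something along these lines (a sketch, not verified on this setup):
yum downgrade quota
or, if you already have the older rpm downloaded locally,
rpm -Uvh --oldpackage quota-4.0.14*.rpm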
How do we limit disk IO with LXC containers?
Let's say we have one LXC container constantly hammering disk IO, so we would like to cap its disk IO at a value that still leaves enough for all the other containers.
Is there any way to do this?
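Not sure if there is a built-in knob for this, but one approach (assuming the host is still on cgroup v1; treat this as a sketch only) is to throttle the container through the blkio cgroup controller. First get the major:minor numbers of the backing disk with lsblk, then add raw lxc.cgroup lines to the container config (on Proxmox that would be /etc/pve/lxc/<CTID>.conf; the 8:0 device and the 10 MB/s / 200 IOPS values below are just placeholders):
lxc.cgroup.blkio.throttle.read_bps_device: 8:0 10485760
lxc.cgroup.blkio.throttle.write_bps_device: 8:0 10485760
lxc.cgroup.blkio.throttle.read_iops_device: 8:0 200
lxc.cgroup.blkio.throttle.write_iops_device: 8:0 200
and restart the container. On a plain (non-Proxmox) LXC setup the same keys go into the container's config file with "key = value" syntax instead.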
Not sure. But it may be related to the same issue I had a while ago:
https://forum.proxmox.com/threads/mesg-change-dev-pts-10-mode-failed-read-only-file-system.45080/#post-215115
Check your mounts and see if you have the same situation as some of us, as per my temp fix:
https://forum.proxmox.com/threads/mesg-change-dev-pts-10-mode-failed-read-only-file-system.45080/
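A quick way to compare is to look at how /dev/pts is currently mounted in the affected container, e.g. (just a sketch):
findmnt /dev/pts
or
grep devpts /proc/mounts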
Fixed it by doing the following:
Getting closer.
I ran this:
mount -t devpts -o remount,gid=5,mode=620 devpts /dev/pts
and now it doesn't say read-only, but I'm getting a new error, which made me think some stale things were still left behind.
So I ran
find /sys/fs/cgroup/*/lxc/113* -type d | tac | xargs rmdir
where 113 is the...