Hi Everyone,
I'm in a bit of a situation here.
We identified a failing drive (still running) and decided it needed to be removed, so we followed these instructions believing it would go without a hitch and all our containers/VMs would continue to run. Unfortunately, that was not the case...
Hi Everyone,
I'm running PVE 5.2-2. On one of our machines, which has 4 containers and 3 VMs, we noticed one of the containers showing 100% swap usage. The swap allocated to the container is 1 GB. Upon further investigation (running 'top' from within the container), I noticed it reported a total...
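For context, this is roughly how I checked (a sketch; the paths assume the cgroup v1 layout used on PVE 5.x, and <ctid> is a placeholder for the container ID):

# inside the container
free -m                      # total/used swap as the container sees it
# on the host, the memory+swap counters for that container's cgroup
cat /sys/fs/cgroup/memory/lxc/<ctid>/memory.memsw.usage_in_bytes
cat /sys/fs/cgroup/memory/lxc/<ctid>/memory.limit_in_bytes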
I'm planning on upgrading from PVE 5.1 to 5.2 and currently making a step-by-step plan of action to ensure the upgrade runs smoothly.
From what I can tell, it's as simple as pressing the 'upgrade' button in the web interface for each machine. However, prior to that, is there anything else I...
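For completeness, my understanding is that the 'upgrade' button just runs the standard apt steps, which can also be done per node from a shell (assuming the appropriate PVE repository is already configured):

apt-get update
apt-get dist-upgrade
# reboot the node afterwards if a new kernel was installed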
Unfortunately, it looks like it's not just that container. I decided to migrate a copy of a working container from another machine to the problematic machine, and it still wouldn't boot, failing with the same errors. However, loading the container on a known-good machine works fine.
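For anyone wanting to reproduce that test, a backup/restore along these lines is one way to move a copy between nodes (the IDs and archive name are placeholders, not my real ones):

# on the known-good node: dump the working container
vzdump <source-ctid> --mode snapshot --compress lzo --dumpdir /var/lib/vz/dump
# copy the archive to the other node (scp, shared storage, etc.), then restore under a new ID
pct restore <new-ctid> /var/lib/vz/dump/vzdump-lxc-<source-ctid>-<timestamp>.tar.lzo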
Me neither, apart from the fact that it still hasn't started (45 minutes now). I'm trying not to complicate things; I simply want to get this container restarted. However, might cloning the container be a possible solution?
I don't think so. I think it's happened each time I've run 'lxc-start --name 110 --foreground' and then killed the process manually via 'kill -9'.
So I've renamed all instances of /lxc/110/ to /lxc/110-bak/ within /sys/fs/cgroup in the hope that this might prevent the errors.
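Roughly what that looked like (a sketch; 110 is my container's ID, adjust as needed):

# rename the leftover '110' cgroup entries under every controller
for d in /sys/fs/cgroup/*/lxc/110; do
    [ -d "$d" ] || continue
    mv "$d" "${d}-bak"
done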
I've started running...
From what I can see, when it crashed (and every time I've killed the process manually) it added another set of entries named '110' under the lxc directories within /sys/fs/cgroup. If I rename one and then try to start the container again, it moves on to the next '110' entry that needs fixing...
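To see every leftover entry in one go, rather than discovering them one start attempt at a time, something like this lists them all:

find /sys/fs/cgroup -type d -name 110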
OK, I've done that and re-ran 'lxc-start --name 110 --foreground', and now get the following:
lxc-start: 110: cgroups/cgfsng.c: create_path_for_hierarchy: 1337 Path "/sys/fs/cgroup/systemd//lxc/110" already existed.
lxc-start: 110: cgroups/cgfsng.c: cgfsng_create: 1433 Failed to create...
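One thing worth trying instead of renaming is removing the stale entries outright; cgroup directories can be rmdir'd once nothing is listed in their cgroup.procs (a sketch only, not something I've confirmed fixes this):

# remove stale '110' cgroup dirs, deepest first, after checking they are empty
for d in $(find /sys/fs/cgroup -depth -type d -name 110); do
    if [ -n "$(cat "$d/cgroup.procs" 2>/dev/null)" ]; then
        echo "still has tasks, skipping: $d"
        continue
    fi
    rmdir "$d"
done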
I've done some further reading and came across a post on here (can't find it now) which indicated that there might be a process holding a lock on the container and preventing any other commands from being executed against it. So I ran 'ps aux | grep 110':
root@xen2:~# ps aux | grep 110
root 110 0.0 0.0...
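Worth noting for anyone copying this: a bare grep for '110' also matches PID 110 and the grep process itself; something tighter avoids the noise:

ps aux | grep '[l]xc.*110'     # the [l] trick stops grep matching itself
# or
pgrep -af 'lxc.*110'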
Thanks for the response. Unfortunately, as mentioned, I'm hesitant to upgrade (for the reasons above), and because I've come across plenty of threads with similar issues where people do the upgrade and then report the problem isn't resolved. I'm more interested in trying to figure out what happened before...
Hi Everyone,
Firstly, I've recently taken on the management of a Proxmox cluster which I have no previous experience managing (I'm completely new to cluster management, but not too bad at Linux).
pve-manager/5.1-46/ae8241d4 (running kernel: 4.13.13-6-pve)
I have 2 xen nodes which run...