I'm attaching the last thing shown on the console (dmesg cannot be captured on this srv, sorry).
The server (and other servers) stay on until manually reset. The other server with the same problem is a Supermicro X8DTT-H MB.
Might be a common problem, actually. I have some other servers, different bios and mb, with same behavior.
But as I said, it runs the correct shutdown sequence but does not reboot; it stays in the "shutdown state". This is something kernel-related (perhaps ACPI) and was working fine in proxmox 3.4 and...
I have been looking at the server monitoring in an attempt to help you isolate the problem. All seems to point to a memory leak somewhere inside lxcfs.
When the problem starts, I see this:
1. The number of processes in the interruptible state starts to increase linearly.
2. The number of total processes...
Hi,
There's an issue with the latest proxmox 4.1. It will not reboot the servers (my servers at least), but will just stay in the "shutdown" state:
To reproduce:
#reboot
....
...
Server has reached shutdown state.
pveversion -v
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager...
I got one more hang this morning. CPU load avg > 1200 (never seen it before).
There were 3 lxcfs processes on the server. One was eating huge amounts of memory.
It was not possible to debug. See attached session:
~# ps auxw|grep lxcfs
root 2067 0.0 0.0 751588 1412 ? S 03:49...
I had a dev env in the past and did ploop support for openvz which was not accepted. This is why I need to get a clear go from you to do dual-stage migration for LXC container on local storage.
I am not sure if there is a way to keep the LXC container running on the first rsync, then stop, the...
I do not plan to use shared storage for LXC containers at all because of the speed limitations of shared storage in my usage scenario. Also live migration is nice but not necessary.
What I am trying to obtain is the least possible downtime when migrating services from one node to the other. A...
With OpenVZ containers, we had a two-step migration process: in step 1 there was an initial rsync while the container was still running. Then the container was stopped, rsynced again, and started on the new node.
This would shorten migration downtime by an order of magnitude on big containers...
Yes, it will probably be an issue.
On 4.2.8-1, when the OOM killer kicks in, it dumps all of the server's processes into dmesg.
If one then runs dmesg inside the LXC container, they can see everything from the running server, and more.
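As a possible stop-gap on the host (not a Proxmox-provided fix; kernel.dmesg_restrict is the standard kernel knob for this, but whether it fully covers this LXC case is an assumption), unprivileged dmesg reads can be restricted:

```shell
# Restrict dmesg to processes with CAP_SYSLOG (run as root on the host).
sysctl -w kernel.dmesg_restrict=1

# Persist across reboots (hypothetical file name):
echo 'kernel.dmesg_restrict = 1' > /etc/sysctl.d/10-dmesg.conf
```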
I have recently upgraded a cluster from 3.4 to 4.1
There's a security issue with LXC that I would like to bring to your attention.
Running dmesg inside a CT will show you the base server information. In some cases this reveals process info from other containers.
I would not expect this to be...
Which way of installing a sheepdog cluster is correct:
https://pve.proxmox.com/wiki/Sheepdog_cluster_install (manual) or
https://pve.proxmox.com/wiki/Storage:_Sheepdog (proxmox-way).
And which is the recommended way? I would prefer the latest source version from github and it's still not very...
Yes, you're right. I'm only "afraid" of in-place upgrading a remote node where I have no remote console available.
The fear is that it will spit a boot-time error (kvm loading or so) and will stop the boot process.
Ceph is a CPU and DISK hog, especially a DISK hog. It has to run on its own nodes. I've seen loads of up to 100 during Ceph reconstruction, and these are considered "normal" in order to squeeze the last 0.x% from the disks.
So, best recommendation is to do as everybody: have disk nodes and compute...
Will it hang on reboot? I know KVM is not possible without virt support, but will there be problems for LXC containers or, even worse, a hangup during boot?
The N2800 is ok with 3.4 and kernel 2.6.x
Last time I attempted to use the same nodes for ceph and VM, I used 4 nodes, 4 disks per node and just one VM per node. The result was disastrous. Ceph would eat all resources not leaving anything for proxmox and kvm.
I was advised at that time (around the 3.1 era) that it was not a good idea to...
Ok, I see that there's no hope for this to get fixed. I've patched my install and made some regression tests. It all works, so I'll add it to the set of patches I maintain locally.
If at any stage you change your mind, feel free to contact me.
cheers.
Man you make me feel guilty. LOL! I'm not asking you to fix proxmox.
Forget free. Not even free software is free.
Proxmox is actually very expensive, but not in the commonly accepted way ;)