Search results

  1. J

    Proxmox 4.1 will not reboot server, kernel 4.2.8-1

    I'm attaching the last thing shown on the console (dmesg cannot be captured on this srv, sorry). The server (and the other affected servers) stays on until manually reset. The other server with the same problem has a Supermicro X8DTT-H MB.
  2. J

    Proxmox 4.1 will not reboot server, kernel 4.2.8-1

    Might be a common problem, actually. I have some other servers, with a different BIOS and MB, showing the same behavior. But as I said, it goes through the correct shutdown sequence but never reboots; it stays in the "shutdown state". This is something kernel-related (perhaps ACPI) and was working fine in Proxmox 3.4 and...
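
    A minimal sketch of one commonly tried workaround for ACPI-related reboot hangs, assuming that suspicion is right (the reboot= values are illustrative, and none of them is confirmed as a fix in this thread):

    # On a Debian-based Proxmox node, append e.g. reboot=bios (or reboot=acpi / reboot=kbd)
    # to the kernel command line in /etc/default/grub, for example:
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet reboot=bios"
    update-grub    # rebuild /boot/grub/grub.cfg with the new parameter
    reboot         # check whether the machine now resets instead of staying "shut down"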
  3. J

    [SOLVED] PVE suddenly stopped working, all CTs unreachable

    I have been looking at the server monitoring in an attempt to help you isolate the problem. Everything seems to point to a memory leak somewhere inside lxcfs. When the problem starts, I see this: 1. The processes in interruptible state start to increase linearly. 2. The number of total processes...
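
    A minimal watcher sketch for the two symptoms above, assuming lxcfs memory growth and the rising count of stuck processes are what should be logged (the log path and interval are arbitrary):

    # Log lxcfs memory use and the number of uninterruptible (D-state) processes once a minute.
    while true; do
        ts=$(date '+%F %T')
        lxcfs_rss=$(ps -o rss= -C lxcfs | awk '{s+=$1} END {print s+0}')   # KiB across all lxcfs processes
        dstate=$(ps -eo stat= | grep -c '^D')                              # processes stuck in D state
        echo "$ts lxcfs_rss_kib=$lxcfs_rss d_state=$dstate" >> /var/log/lxcfs-watch.log
        sleep 60
    done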
  4. J

    Proxmox 4.1 will not reboot server, kernel 4.2.8-1

    Hi, there's an issue with the latest Proxmox 4.1. It will not reboot the servers (my servers at least); it just stays in the "shutdown" state. To reproduce: #reboot .... ... Server has reached shutdown state. pveversion -v proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve) pve-manager...
  5. J

    [SOLVED] PVE suddenly stopped working, all CTs unreachable

    I got one more hang this morning. CPU load avg > 1200 (never seen it before). There were 3 lxcfs processes on the server. One was eating huge amounts of memory. It was not possible to debug. See attached session: ~# ps auxw|grep lxcfs root 2067 0.0 0.0 751588 1412 ? S 03:49...
  6. J

    Reducing migration time for LXC container

    I had a dev env in the past and did ploop support for OpenVZ, which was not accepted. This is why I need a clear go from you before doing dual-stage migration for LXC containers on local storage. I am not sure if there is a way to keep the LXC container running on the first rsync, then stop, the...
  7. J

    Reducing migration time for LXC container

    I do not plan to use shared storage for LXC containers at all because of the speed limitations of shared storage in my usage scenario. Also live migration is nice but not necessary. What I am trying to obtain is the least possible downtime when migrating services from one node to the other. A...
  8. J

    Reducing migration time for LXC container

    With OpenVZ containers, we had a two-step migration process, where in step 1 there was an initial rsync while the container was still running. Then the container was stopped, rsynced again, and started on the new node. This would shorten migration downtime by an order of magnitude on big containers...
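
    A rough sketch of that two-step sequence, with made-up container ID, paths, and host name (this illustrates the rsync/stop/rsync/start idea only, not the actual Proxmox migration code; the container config would also have to be moved to the target node):

    CT=105
    SRC=/var/lib/lxc/$CT/rootfs/
    DST=root@node2:/var/lib/lxc/$CT/rootfs/

    # Step 1: bulk copy while the container keeps running (the long part, no downtime).
    rsync -aHAX --numeric-ids --delete "$SRC" "$DST"

    # Step 2: stop, copy only what changed since step 1, then start on the target node.
    pct stop $CT
    rsync -aHAX --numeric-ids --delete "$SRC" "$DST"
    ssh root@node2 "pct start $CT"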
  9. J

    [SHEEPDOG]: Correct install

    Thank you. I'll install from source then. At least I'll be using an up-to-date version.
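
    For reference, a from-source build of sheepdog typically follows the usual autotools flow; the exact dependencies and configure flags depend on the version being built:

    git clone https://github.com/sheepdog/sheepdog.git
    cd sheepdog
    ./autogen.sh      # generates the configure script
    ./configure
    make
    make install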
  10. J

    SECURITY: LXC can read server dmesg

    Yes, it will probably be an issue. On 4.2.8-1, when the OOM killer kicks in, it dumps all processes of the server into dmesg. If, after this, one runs dmesg in the LXC container, they get to see everything and more from the running server.
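
    On the host side, one commonly cited mitigation (not confirmed in this thread as the eventual fix, and its effect inside containers depends on the kernel version and container privileges) is the kernel.dmesg_restrict sysctl, which limits dmesg to readers with CAP_SYSLOG:

    sysctl kernel.dmesg_restrict                      # 0 = any user may read dmesg
    echo 'kernel.dmesg_restrict = 1' > /etc/sysctl.d/10-dmesg.conf
    sysctl -p /etc/sysctl.d/10-dmesg.conf             # apply without rebooting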
  11. J

    SECURITY: LXC can read server dmesg

    I have recently upgraded a cluster from 3.4 to 4.1. There's a security issue with LXC that I would like to bring to your attention. Running dmesg inside a CT will show you the base server's information. In some cases this reveals process info from other containers. I would not expect this to be...
  12. J

    [SHEEPDOG]: Correct install

    Which way of installing a sheepdog cluster is correct: https://pve.proxmox.com/wiki/Sheepdog_cluster_install (manual) or https://pve.proxmox.com/wiki/Storage:_Sheepdog (the Proxmox way)? And which is the recommended way? I would prefer the latest source version from GitHub, and it's still not very...
  13. J

    Kernel 4.x and Atom N2800 possible?

    Yes, you're right. I'm only "afraid" of in-place upgrading a remote node where I have no remote console available. The fear is that it will throw a boot-time error (KVM module loading or similar) and stop the boot process.
  14. J

    HA Proxmox with CEPH

    Ceph is a CPU and DISK hog, especially a DISK hog. It has to run on its own nodes. I've seen loads up to 100 during Ceph reconstruction, and these are considered "normal" in order to squeeze the last 0.x% out of the disks. So the best recommendation is to do as everybody does: have disk nodes and compute...
  15. J

    Kernel 4.x and Atom N2800 possible?

    Will it hang on reboot? I know KVM is not possible without virt support, but will there be problems for LXC containers or, even worse, a hang during boot? The N2800 is OK with 3.4 and kernel 2.6.x
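
    A quick sanity check before an in-place upgrade, assuming the only question is whether the CPU has hardware virtualisation (the N2800 does not, so KVM guests are out, while LXC containers do not need those flags):

    grep -c -E 'vmx|svm' /proc/cpuinfo     # 0 means no VT-x/AMD-V, so no KVM guests
    lscpu | grep -i virtualization         # prints nothing on CPUs without VT-x/AMD-V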
  16. J

    Custom zfs arguments during installation?

    I can't see any evidence that you did something wrong. They're brand new too (in terms of TBW). Probably a bad batch or a bad product.
  17. J

    HA Proxmox with CEPH

    Last time I attempted to use the same nodes for Ceph and VMs, I used 4 nodes, 4 disks per node, and just one VM per node. The result was disastrous. Ceph would eat all resources, not leaving anything for Proxmox and KVM. I was advised at that time (around the 3.1 era) that it was not a good idea to...
  18. J

    [SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

    For anyone interested, here's the patch: # diff -u Storage.pm.org Storage.pm --- Storage.pm.org 2016-03-16 18:13:01.086242490 +0100 +++ Storage.pm 2016-03-16 18:13:12.753853338 +0100 @@ -512,7 +512,7 @@ my $snap = "zfs snapshot $zfspath\@__migration__"; - my $send...
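
    For illustration of what keeping the underlying snapshots means at the zfs level (dataset and host names are made up, and this is not the patched Storage.pm code): a plain send of one snapshot recreates only that snapshot on the target, while a replication stream carries the dataset's existing snapshots as well.

    zfs snapshot rpool/subvol-105-disk-1@__migration__

    # Plain send: only the named snapshot is recreated on the receiving side.
    zfs send rpool/subvol-105-disk-1@__migration__ | ssh node2 zfs recv rpool/subvol-105-disk-1

    # Replication stream (-R): earlier snapshots of the dataset are transferred too.
    zfs send -R rpool/subvol-105-disk-1@__migration__ | ssh node2 zfs recv rpool/subvol-105-disk-1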
  19. J

    [SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

    Ok, I see that there's no hope of this getting fixed. I've patched my install and run some regression tests. It all works, so I'll add it to the set of patches I maintain locally. If at any stage you change your mind, feel free to contact me. Cheers.
  20. J

    Migration LXC CT with bind mount point

    Man you make me feel guilty. LOL! I'm not asking you to fix proxmox. Forget free. Not even free software is free. Proxmox is actually very expensive, but not in the commonly accepted way ;)
