Search results

  1. abysmal write performance on ZFS lxc container? (now with kernel oops)

    Ah, that explains it: I was working with a later pve-container version, which no longer sets the kmem limit :-). I wasn't aware of this change. It may be that when the container gets blocked, all of ZFS gets blocked. This is the diff of the change...
  2. Crashes and Kernel Panic

    @evg32, Sorry, I meant: IF that is what you are experiencing. There is no actual setting for this. But what happens if you give one machine "four" CPUs, can you then only start three extra VMs with one CPU each? If there is this "hard" limit of 7 vcpus, and given the type of processor in your...
  3. abysmal write performance on ZFS lxc container? (now with kernel oops)

    At this point, it is just a supposition. When the container is blocked, you could record the output of this command on the host: lxc-info -n <containerid> By default the Proxmox containers do not account for kernel memory usage, so you should see a 0 on the Kmem use line in the lxc-info output...
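As an illustration of checking that line, a minimal sketch (the container ID 101 is hypothetical; run on the Proxmox host while the container appears blocked):

```shell
# Show the full container status, including memory accounting
# (101 is a hypothetical container ID):
lxc-info -n 101

# With the Proxmox defaults (no kernel-memory accounting) the
# relevant line of the output reads 0, e.g.:
#   Kmem use:       0 bytes
# To watch only that line while reproducing the problem:
watch -n 2 'lxc-info -n 101 | grep "Kmem use"'
```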
  4. [SOLVED] problem mount /dev/sda1 on new install Proxmox 4.1

    Support for these ddf containers must have been added to Debian recently; it may be that this superblock has gone unnoticed for a long time :-) That is why people disassembling an array should always zero the superblocks; it gives nasty surprises later on if you don't.
  5. [SOLVED] problem mount /dev/sda1 on new install Proxmox 4.1

    Ah, we crossed posts :-) There seems to be some ddf metadata on sda. I don't know yet if that is the cause of not being able to mount sda1, but if lsblk shows a md126 or md127 attached under your sda1, then it is. If so, you could try: mdadm --stop md127, then mdadm --remove md127. And try if you...
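Spelled out, the suggested sequence might look like this (md127 comes from the lsblk check mentioned above; run as root, and double-check the device names first since they may differ on your system):

```shell
# Check whether a stale md device is attached under sda1:
lsblk /dev/sda
cat /proc/mdstat

# If lsblk shows md126/md127 under sda1, stop and remove it.
# This only tears down the array assembly, it does not wipe data,
# but verify the device name before running:
mdadm --stop /dev/md127
mdadm --remove /dev/md127

# Then retry the mount:
mount /dev/sda1 /mnt
```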
  6. Crashes and Kernel Panic

    If your limit is consistent at 7 hardware accelerated processors committed in KVM, then that seems the point to start looking.
  7. [SOLVED] problem mount /dev/sda1 on new install Proxmox 4.1

    Did you at one point maybe have mdraid on sda1? Does /proc/mdstat exist and show sda1?
  8. abysmal write performance on ZFS lxc container? (now with kernel oops)

    It almost seems that you have a deadlock because of the LXC settings, with some ZFS process being fully constrained by the memory and CPU settings of the container. So if you have a vcpu setting of 1 and a certain memory limit, it appears that you can deadlock the container because it is not...
  9. Crashes and Kernel Panic

    It seems an odd coincidence that the number of XP VMs you can run hits a maximum at 7 if you have a processor with 8 hyperthreads. You probably set one CPU per VM, with hardware acceleration enabled, right? Maybe you could try disabling hardware acceleration for a moment.
  10. Crashes and Kernel Panic

    Did you use ZFS in the Proxmox installer?
  11. Any chance of newer ZFS and LVM packages in PVE 3.4 ?

    Hello @gkovacs In the 3.4 version you can install package pve-kernel-2.6.32-42-pve (At least if you have the no-subscription repository, didn't check others). It gives you a 0.6.5.2-47_g7c033da ZFS kernel module. With the different memory dimm configurations, was your system always running in...
  12. Crashes and Kernel Panic

    So, what settings did you use for your Windows XP kvms? Emulated hardware/processor? Are you installing XP 64-bit or 32-bit?
  13. Security Problem server Violated

    It seems that more people have run into this backdoor being installed on their systems. The gates.lod file seems to be quite distinctive. http://blog.benhall.me.uk/2015/09/what-happens-when-an-elasticsearch-container-is-hacked/ http://ubuntuforums.org/showthread.php?t=2246312&page=4...
  14. Any chance of newer ZFS and LVM packages in PVE 3.4 ?

    @gkovacs I read the zfsonlinux thread you mention. Did you have the Linux swap file on a ZVOL when you had those data corruptions? Did you have any KVM machines running with direct hardware access? Also I came across this guy doing some heavy testing of ZOL on a large NUMA system...
  15. Proxmox installer question

    One problem with the out-of-the-box ZFS setup of Proxmox, which to my knowledge has not been changed: it puts swap on a ZFS volume, and this is fairly certain to crash the host after a little swap usage in Proxmox. This also puts a little doubt on the behaviour of ZVOLs generally under memory...
  16. Permission error w/ sockets inside CT since migration to PVE 4.1

    Happy new year! I have put a patch in the bugzilla for Proxmox, which you could use to work around it with lxc.rootfs.options. https://bugzilla.proxmox.com/show_bug.cgi?id=864
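As a sketch, the workaround would be one extra line in the container's config file; this assumes the patch from the bug report is applied, and the CTID 100 and option value are illustrative:

```
# /etc/pve/lxc/100.conf  (100 is a hypothetical container ID)
lxc.rootfs.options: noacl
```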
  17. v3.4 to v4.1 OpenVZ backup restore permission errors

    Hello @Craig, You are probably dealing with the default ACL permissions problem on 4.1 here. Try setfacl -b -R / on the new container before starting any of your own services, and group-write-enable all socket files that your services have already created. See also...
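A minimal sketch of that cleanup, run inside the restored container before starting services (the socket path is illustrative, not from the original post):

```shell
# Strip all ACL entries recursively from the container's filesystem:
setfacl -b -R /

# Then make any already-created socket files group-writable,
# e.g. (hypothetical path, adjust to your service):
chmod g+w /run/myservice/myservice.sock
```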
  18. Should systemd work within containers?

    The included centos 7 template also runs with systemd, it uses version 219. The Proxmox Jessie host uses 215, but the Debian 8 template does not have systemd pre-installed.
  19. lxc containers have extended permissions - acl by default???

    Yes, and also, if you put lxc.rootfs.options: noacl (or noatime etc.) in your /etc/pve/lxc/nnn.conf this is not applied. The PVE prestart hook for LXC mounts the root without applying the lxc.rootfs.options and with system defaults that also do NOT (probably a jessie bug) honor the...
  20. Cannot open /proc/stat: Transport endpoint is not connected

    @SPQRInc, Ok, so that does not seem out of order or anything. But you are still getting the lockups of htop and php-fpm reloads?