Search results

  1. How to mount LVM disk of VM

    It is good that you included the output of lvs. It shows the logical volume as inactive, which would explain why you can't access it. Is this on cluster storage? What is the output of lvdisplay pve/vm-104-disk-1? Can you do lvchange -ay pve/vm-104-disk-1?
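
    A minimal sketch of the suggested recovery, assuming the pool/LV names from the post (pve/vm-104-disk-1):

        lvdisplay pve/vm-104-disk-1       # check the "LV Status" field
        lvchange -ay pve/vm-104-disk-1    # activate the inactive logical volume
        lvs                               # the attr column should now show it as active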
  2. [SOLVED] ZFS swap crashes system

    No experience, but from what I read about it, it should be great if you have a lot of assigned RAM hanging around in VMs that are hardly used.
  3. Why LXC Disk I/O Slower than KVM?

    @aychprox, I think you should start a new topic for that, I mean, if you want to make a general comparison. In terms of disk performance there is no reason to expect a very big difference, at least if you do NOT use files for the storage of KVM disks.
  4. How to mount LVM disk of VM

    I find that kpartx is a very useful tool in such cases.

        kpartx -av /dev/dm-5
        lsblk    # look for the correct /dev/ node in the hierarchy under dm-5
        # You may want to mount it read-only; this is safest, to avoid journal recovery:
        mount -o ro,noload -t ext4 /dev/|node| /mnt/
        # afterwards kpartx...
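
    The snippet is cut off; a plausible complete cycle (the mapper node name dm-5p1 is a guess, check lsblk for the real one):

        kpartx -av /dev/dm-5                                  # map the partitions inside the LV
        lsblk                                                 # find the new node, e.g. /dev/mapper/dm-5p1
        mount -o ro,noload -t ext4 /dev/mapper/dm-5p1 /mnt    # read-only, no journal replay
        umount /mnt
        kpartx -d /dev/dm-5                                   # remove the mappings when done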
  5. [SOLVED] ZFS swap crashes system

    @Erk, Usually it is OK and works correctly, but only as long as your swapfile is indeed contiguous and fully block-allocated. Since the 2.6 kernel, Linux records the blocks used at the time of swapon and tries to bypass the filesystem. However, as the differences in stability by enabling/disabling zfs...
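
    For illustration (not from the post): on a conventional filesystem such as ext4, a fully block-allocated swapfile is made like this; dd, unlike a sparse file, writes every block up front:

        dd if=/dev/zero of=/swapfile bs=1M count=4096   # 4 GiB, fully allocated
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile    # the kernel records the block map at this point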
  6. Why LXC Disk I/O Slower than KVM?

    @aychprox, In my experience, the only way to accurately gauge the true write-to-storage speed of a system under KVM has been to download a file several times bigger than the host RAM from a server with an SSD on a Gbit network connection and time it. Attempts at using benchmark tools...
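
    A rough sketch of that test (host and file names are made up; the file must be several times larger than the host RAM so the page cache cannot absorb it):

        free -g                                        # check how much RAM the host has
        time wget -O /var/lib/vz/bigfile.bin \
            http://fileserver.example.com/bigfile.bin  # elapsed time reflects the true write speed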
  7. [SOLVED] ZFS swap crashes system

    IMHO, the *best* thing would be if the Proxmox installer just reserved a swap partition at the start of the disk and kept it outside of ZFS. Putting a filesystem between memory and the swap disk is risky.
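
    What that would look like done by hand (the device name /dev/sda2 is hypothetical):

        mkswap /dev/sda2
        swapon /dev/sda2
        echo '/dev/sda2 none swap sw 0 0' >> /etc/fstab   # keep it across reboots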
  8. [SOLVED] ZFS swap crashes system

    Some further info: I can also get the 2.6.32 kernel to hang if I start up 9 concurrent untars. With all ZFS features disabled on the swapping ZVOL, the 4.2 kernels seem about as stable as the 2.6 kernel (with features enabled). I do not think you lose anything by disabling those features...
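
    The post does not list the exact properties; one plausible reading of "all ZFS features disabled" for the swap zvol would be:

        zfs set compression=off rpool/swap
        zfs set checksum=off rpool/swap
        zfs set primarycache=metadata rpool/swap
        zfs set secondarycache=none rpool/swap
        zfs set com.sun:auto-snapshot=false rpool/swap
        zfs get compression,checksum,primarycache rpool/swap   # verify the result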
  9. [SOLVED] ZFS swap crashes system

    @LnxBill, Yes, I can definitely say we have seen problems with swap on a ZFS zvol in Proxmox 4.0. We also reproduced it on a virtual machine with the 4.2.6-1-pve kernel. On 4.0 machines we are seeing sudden reboots that are absent when we turn swap off. Sometimes these reboots are preceded by...
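
    Turning swap off for such a test is just:

        swapoff -a    # disable all active swap areas now
        # and comment out the swap entry in /etc/fstab to keep it off after a reboot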
  10. Spontaneous reboot

    I can report some progress on this issue. I have been able to reproduce the sudden reboots on a VirtualBox VM with a simulated SATA controller and 2 disks in a ZFS RAID 0 configuration, installed with the 4.0 installer and updated to the 4.2.6-1-pve kernel and other updates from the no subscription...
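
    The post does not say how swapping was forced; one generic way to push a small test VM into swap is the stress tool:

        apt-get install stress
        stress --vm 4 --vm-bytes 1G --timeout 600   # 4 workers of 1 GiB each for 10 minutes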
  11. Spontaneous reboot

    Yes, it is clear that avoiding swapping will avoid the reboot. If the poster Marcus Reid on the FreeBSD bug is right, then raising vm.min_free_kbytes may avoid the problem altogether even while swapping. I notice that it is currently only about one third of the default Ubuntu server setting. Can't test...
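
    Checking and raising it looks like this (131072 is an example value, not a recommendation from the thread):

        sysctl vm.min_free_kbytes                                # show the current value
        sysctl -w vm.min_free_kbytes=131072                      # change it at runtime
        echo 'vm.min_free_kbytes = 131072' >> /etc/sysctl.conf   # persist across reboots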
  12. Spontaneous reboot

    Hello Thomas, The system is running with the default ZFS settings made by the installer. The rpool/swap zvol has volblocksize=4K, which matches the system page size. The whole rpool has caching turned off. The FreeBSD bug report may be helpful. Maybe we can find some comparable sysctl setting...
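
    The settings mentioned can be verified like this (assuming the installer's default dataset names):

        getconf PAGESIZE                            # system page size, normally 4096
        zfs get volblocksize rpool/swap             # should match the page size
        zfs get primarycache,secondarycache rpool   # caching settings on the pool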
  13. Spontaneous reboot

    Hello, We are having problems with spontaneous reboots on version 4 stable on Supermicro hardware with a ZFS mirror (installed with the installer from the ISO CD). Proxmox version 4 with both the 4.2.2-1-pve and 4.2.3-2-pve kernels. The same kinds of installs on 3.4 and 4 stable on mdraid do not show this...
  14. [solved]HP proliant Randoms reboots

    Re: HP proliant Randoms reboots Hello debi@n, Thanks, we are using the latest updates on pve-no-subscription. I am beginning to think that a ZFS mirror is a rare setup and that a lot of people are using hardware RAID or mdraid. It does not give any problems for us until sometime after the OS...
  15. [solved]HP proliant Randoms reboots

    Re: HP proliant Randoms reboots Hello debi@n, That is of course very nice. :D I am interested to see how rare the ZFS setup is. Are you using ZFS on your systems? Regards, Gerrit
  16. [solved]HP proliant Randoms reboots

    Re: HP proliant Randoms reboots Hello, We are also having problems with spontaneous reboots on version 4, albeit on Supermicro hardware. Proxmox version 4 with both the 4.2.2-1-pve and 4.2.3-2-pve kernels. The same kinds of installs on 3.4 did not show this behaviour. There is usually nothing on the...
  17. PVE 4.0 Beta 2: Difficulties with zfs {Large kmem_alloc}

    Hello Dietmar, Ok, great. You guys are really up to date :D We will update also... Best regards, Gerrit
  18. PVE 4.0 Beta 2: Difficulties with zfs {Large kmem_alloc}

    Hello Tom, Thanks for the reply. Could you confirm that this commit made it into the PVE 4 release? Dietmar Maurer [Thu, 24 Sep 2015 10:47:22 +0000] update pkg-zfs to master/debian/jessie/0.6.5.1-4 Best regards, Gerrit Venema WIND Internet
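
    Whether that package version actually landed can be checked on the installed system (generic checks, not from the thread):

        dpkg -l | grep -i zfs          # Debian package versions, should show 0.6.5.1-4 or later
        cat /sys/module/zfs/version    # version of the ZFS kernel module actually loaded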
  19. PVE 4.0 Beta 2: Difficulties with zfs {Large kmem_alloc}

    Hello all, We have some intermittent crashes on a PVE 4.0 Beta 2 server. The server loses connectivity, and the kernel log shows a message that is correlated in time. We run ZFS with a RAID 10 setup done with the installer. This message shows up in kern.log: Large kmem_alloc(35496, 0x1000), please...
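
    Locating the warning and its timing (generic log commands):

        grep -i 'kmem_alloc' /var/log/kern.log   # find the message and its timestamps
        dmesg | grep -i 'kmem_alloc'             # same, from the running kernel's ring buffer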