Search results

  1. Hyper converged setup

    Hi, your questions: 1. Yes, it sounds like a really good starting point. 2. If you can afford the SSDs within your budget, that's fine! But you should keep an eye on the SSD type (it should be enterprise grade, and most important is latency). 3. On a full-SSD setup a separate journal does not...
  2. ZFS RAID Types and Speed

    Much more important than the max read/write speeds for your deployment is the IOPS. Common sense for ZFS is that one VDEV is as fast as one of its drives in terms of IOPS for random reads/writes. So for your deployment I would recommend a setup of 3 mirrored VDEVs in one pool instead of a RaidZ2...
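
    A minimal sketch of such a pool, assuming a hypothetical pool name "tank" and placeholder device names (not from the original post):

      zpool create tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf

    Each mirror pair contributes one vdev's worth of random IOPS, so three mirrors give roughly three times the random IOPS of a single RaidZ2 vdev.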
  3. OVS MTU help

    You have to set the jumbo frames for the underlying bond0. Due to some deficiencies of the Debian network setup, you have to do it with pre-up commands in the iface bond0 stanza: pre-up ( ip link set mtu 9000 ... && ip link set mtu 9000 ............ ) mtu 9000. Then you have to set the MTU of course...
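
    For illustration, a hedged sketch of what such a stanza in /etc/network/interfaces could look like (eno1/eno2 are placeholder NIC names, not from the original post):

      auto bond0
      iface bond0 inet manual
          # raise the MTU on the slave NICs before the bond comes up
          pre-up ( ip link set eno1 mtu 9000 && ip link set eno2 mtu 9000 )
          mtu 9000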
  4. PVE Cluster - number of Nodes

    No, you need 3 nodes for proper maintenance (1 at a time). 2 nodes of a 3-node cluster are quorate, but no further node is allowed to fail. Generally you always need floor(n/2)+1 nodes for quorum.
  5. Fix SSD Raid1 Issues / Suggestions?

    ZFS in a mirror configuration would be nice, but be aware that the MegaRAID doesn't support HBA mode, so think about cross-flashing the MegaRAID into IT mode (HBA). If it is an original LSI 9240, it should be possible.
  6. caching NFS reads (fscache)

    Did you check cachefilesd.conf? Is the caching directory defined?
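
    As an illustration, a minimal /etc/cachefilesd.conf could look like this (the path and thresholds are typical examples, not from the original thread); the NFS share must also be mounted with the fsc option for fscache to be used:

      # cache directory on a local filesystem (must support extended attributes)
      dir /var/cache/fscache
      tag mycache
      # culling starts/stops at these free-space thresholds
      brun 10%
      bcull 7%
      bstop 3%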
  7. Differential backups

    Yeah, what is the reason for not adding this feature? It would be really helpful!
  8. ssd how big?

    You can run with just one ZIL device; just a warning that on failure of the ZIL device you probably cannot replace it online. ARC size: "arcstat". For VM disk performance: which driver did you use? For best speed use virtio-scsi or virtio-blk (prefer virtio-scsi, as it supports SCSI unmap).
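
    A hedged sketch of both checks on a Proxmox node (VMID 100 and the storage name local-zfs are placeholders, not from the original post):

      # show ARC size and hit-rate statistics, refreshed every second
      arcstat 1
      # switch the VM's disk controller to virtio-scsi
      qm set 100 --scsihw virtio-scsi-pci
      # attach the disk via SCSI with discard enabled so unmap can work
      qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on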
  9. [SOLVED] Continuing woes with OVH, Proxmox and Routes

    Are you sure that netmask 255.255.255.255 is correct? What are you doing with the post-up route add commands? They seem to be unnecessary. And where is the router from vmbr172 to the outside net?
  10. ssd how big?

    No, for the log usually 8 GB is enough; the SSD only has to store the sync data written in one sync interval. The read cache can be any size you like; its effectiveness depends on your working set, but it should be at least the same size as the ARC. Some more hints: If the ZIL device (log) is not...
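
    For illustration, adding both devices to an existing pool might look like this (the pool name "tank" and the by-id paths are placeholders, not from the original post):

      # small partition as ZIL (log), larger one as read cache (L2ARC)
      zpool add tank log /dev/disk/by-id/nvme-example-part1
      zpool add tank cache /dev/disk/by-id/nvme-example-part2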
  11. Ceph and Enclosure disk light?

    Sorry, I don't; I have not yet had a failure of a Ceph OSD.
  12. Ceph and Enclosure disk light?

    RAID-0 is a less-than-ideal solution for controllers which do not support HBA mode. With an HBA controller you can flash the light also for JBODs (with some script, at least).
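
    One way such a script could blink the enclosure LED is via ledctl from the ledmon package (a sketch, assuming the backplane supports it; /dev/sdX is a placeholder):

      # turn on the locate/identify LED for the failed disk
      ledctl locate=/dev/sdX
      # ...after replacing the disk, turn it off again
      ledctl locate_off=/dev/sdX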
  13. transform virtio-blk to scsi-virtio

    I looked into the "rbd du" command; no change here. As sdelete zeroes blocks, this could of course help with backups and qcow
  14. transform virtio-blk to scsi-virtio

    At least with CEPH in the background it will not help (just tested it).
  15. transform virtio-blk to scsi-virtio

    Too bad, no. Windows 7 supports discard only via the SATA TRIM command, which is not supported in QEMU's SATA emulation. The SCSI UNMAP command is only supported by Windows 8 and newer versions. I searched for utilities, but with no success. There is one from VMware, but it had no effect with...
  16. best practice: proxmox keep up to date

    On occasion, especially when security updates are outstanding. Usually once a month
  17. transform virtio-blk to scsi-virtio

    Which distribution is it? Depending on the distribution, you can change it in /boot/grub/grub.conf (oldish GRUB 1), or on Ubuntu "update-grub" will fix it.
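
    A hedged sketch of the two cases (the device names are examples; after switching from virtio-blk to virtio-scsi the disk typically moves from /dev/vda to /dev/sda):

      # GRUB 2 (Ubuntu/Debian): regenerate the config, it picks up the new names
      update-grub
      # old GRUB 1: edit the root device in /boot/grub/grub.conf by hand,
      # e.g. root=/dev/vda1 -> root=/dev/sda1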
  18. best practice: proxmox keep up to date

    I do the following:
      -> set the HA group for the node to nofailback (if not yet done)
      -> migrate all VMs off via bulk migrate (watch the memory load of the servers)
      -> apt-get update; apt-get dist-upgrade on the node
      -> reboot
      -> check that everything is working
      -> unset nofailback
      -> all VMs will...
  19. Stratis: New storage management solution for Linux

    There is nothing in the paper which leads to a multi-node filesystem; XFS isn't a clustered filesystem. Btw, clustered filesystems are all slow with small files because of lock latencies; they are good for HPC, like Lustre etc., with really large files.