Search results

  1. First drive in raidz1 about to fail, non-hot-swap

    I wouldn't recommend this at all. A rackmount server I recently worked on threw sparks from the chassis when I opened it while it was running. Luckily nothing bad happened, but you never know.
  2. Searching for the right setup (Hardware and Software).

    It's not useless. I am actually following this with interest. My response just bumped the thread, so hopefully someone can give an answer.
  3. Ceph + Supermicro Ledmon Script

    This might help: https://serverfault.com/questions/480524/identify-disks-on-supermicro-server-running-freebsd and http://en.community.dell.com/techcenter/os-applications/w/wiki/3753.using-ledmonledctl-utilities-on-linux-to-manage-backplane-leds-for-pcie-ssd-software-raid-drives
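
    A minimal sketch of the ledctl usage those links describe, assuming ledmon is installed and the backplane is supported; the device name is an example:

        # Blink the locate LED on the slot holding /dev/sda
        ledctl locate=/dev/sda
        # Turn the locate LED off once the disk has been found
        ledctl locate_off=/dev/sda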
  4. Ceph Cluster Fencing

    Hi, did you ever get an answer on this?
  5. 10Gbe high availability without switches?

    Hi, I wonder if this is at all possible: if I set up a 3-node CEPH or GlusterFS cluster and want to dedicate a 4-port 10GbE NIC in each server to the storage network, can I set it up in such a way that I don't need a switch, but still achieve high availability? I.e. run cables from: From...
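
    For reference, a hedged sketch of the switchless full-mesh idea, loosely following the routed setup on the Proxmox wiki; interface names and addresses are assumptions. Each node runs point-to-point cables to the other two, with /32 routes pinning each peer to the right port:

        # /etc/network/interfaces excerpt on node1 (10.15.15.51); node2 and
        # node3 mirror this with their own addresses and peer routes.
        auto ens1f0
        iface ens1f0 inet static
            address 10.15.15.51/24
            # direct cable to node2
            up ip route add 10.15.15.52/32 dev ens1f0
            down ip route del 10.15.15.52/32

        auto ens1f1
        iface ens1f1 inet static
            address 10.15.15.51/24
            # direct cable to node3
            up ip route add 10.15.15.53/32 dev ens1f1
            down ip route del 10.15.15.53/32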
  6. 10g copper cannot get to 10g speed

    Did you ever get this resolved?
  7. Advice requested on storage setup.

    /following with interest. I'm planning on the same configuration, though with different-size hard drives. Have you done the installation yet? How's it going?
  8. Setup with Intel 10 Gigabit X710-DA2 SFP+ Dual Port

    Did you ever get the cards working? I am considering the SuperMicro AOC-STG-b4S 4-port 10GBase-T NIC (Intel XL710 and X557) and want to know if it will work before we purchase.
  9. Custom zfs arguments during installation?

    Do you know what caused the failure?
  10. Partitioning on a SATA DOM

    Did you ever get an answer to your question?
  11. 3/4 node setup high availability best practice?

    Yes, I have noticed that, but does it necessarily make CEPH better? From what I understand, CEPH is better for larger deployments. How well will it work on a 3-node setup if 1 node is offline, for example due to hardware failure? And what about offsite replicated backups?
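
    On the one-node-down question: with the common 3-replica/2-minimum settings, a Ceph pool keeps serving I/O while one of three nodes is offline, since two copies remain reachable; it just cannot restore full redundancy until the node returns. A hedged sketch, with "rbd" as an example pool name:

        # Keep 3 replicas, one per node
        ceph osd pool set rbd size 3
        # Stay readable/writable as long as 2 replicas are up (one node down)
        ceph osd pool set rbd min_size 2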
  12. 3/4 node setup high availability best practice?

    Thanks, but why CEPH over GlusterFS?
  13. Cluster reliability when removing nodes

    The way I understand it, that only applies when you want to permanently remove a host node from the cluster. What is your use case, such that it might be a problem for you?
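
    A hedged sketch of the permanent-removal case that reply refers to, run from a surviving cluster member with the departing node already powered off; "node3" is an example name:

        # List current cluster members first
        pvecm nodes
        # Permanently drop the departed node from the cluster
        pvecm delnode node3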
  14. 3/4 node setup high availability best practice?

    Hi all, I'm new to Proxmox, though not new to Linux or virtualization. Can someone please tell me (I couldn't get a definite answer while searching through the forums): what is the best configuration for Proxmox on 3 or 4 nodes? I need to virtualize about 12 physical servers and will be...
  15. Multi-GPU Passthrough To One VM

    /following with interest.