Search results

  1. Disaster Concept: 2 SANs with same data

    Hello Marco, my name is Marco, too. I would suggest trying DRBD in async mode; that way you keep the performance of the fast SAN (a DRBD config sketch follows at the end of this list). Greetings, Marco
  2. Two-Node High Availability Cluster problem with RgManager

    Thank you for the info. I'm planning to configure HA on a Dell cluster with OMSA. Hope there will be a fix (I need OMSA for RAID management).
  3. Little Question / Planning

    Hi guys, I have a little question. Could I build a 3-node DRBD cluster? First option: dual-port 10GbE NICs connected in a row; second option: a 10GbE switch. Another question: could I improve DRBD speed by bonding two 10GbE ports with balance-rr (only 2 nodes)? A bonding example follows at the end of this list. Thanks for your thoughts. macday
  4. New 2.6.32 Kernel for Proxmox VE 2.1 stable

    How does it work with the 2.6.32-13 from pvetest?
  5. VM performance problem.

    ...please try the XFS mount options in Ubuntu and test again (an example fstab line follows at the end of this list)...
  6. Migration error after PVETest-Update

    Wonderful...I have some questions about your subscription...and support...I'm planning to buy for a new customer...German language preferred ;) thanks Dietmar
  7. Migration error after PVETest-Update

    Do I have to restart something?
  8. Migration error after PVETest-Update

    Thanks Dietmar. Do I have to restart something after that? Guest, host? Hope not ;)
  9. VM performance problem.

    Is DRBD really limited to 2 nodes? ...A question for the Proxmox team.
  10. Migration error after PVETest-Update

    Always and only with big VMs (more than 1 GB of RAM)...I had a sysctl.conf tweak for InfiniBand which I disabled for this test (I rebooted after reverting).
  11. VM performance problem.

    Hmmm...try again with a Debian 6 "NAS" with XFS as storage (but also a hardware RAID, no fake RAID)...you will see the difference.
  12. VM performance problem.

    DRBD is the way to go...if you have the possibility to do a real-life test...
  13. VM performance problem.

    Two things...local RAID 10 SAS storage with DRBD on 2 hosts (10GbE Intel NIC)...and shared SAS SAN storage over Fibre Channel...that's the only way to get happy.
  14. VM performance problem.

    I also tried an SSD ZIL...no luck...sorry.
  15. VM performance problem.

    Don't use ZFS...I also tried ZFS on OpenIndiana and Nexenta...it is simply damn slow for virtualization...it is good for backups and file servers, but not as storage for KVM...
  16. Migration error after PVETest-Update

    Hi guys, this weekend I updated my cluster to the pvetest repo to test the stable KVM 1.1. Now I have just one problem when migrating a machine to another host. Windows KVMs are working after this "error" migration. Linux KVMs (mostly Debian Squeeze) are crashing. Any idea? Cheers, macday
  17. New 2.6.32 Kernel for Proxmox VE 2.1 stable

    Thank you. That might be the cause.
  18. New 2.6.32 Kernel for Proxmox VE 2.1 stable

    Same for me. Glad to hear I'm not the only one. I had kernel panics on 2 prod servers. Should I try the 2.6.32-13-pve?
  19. Super slow Backup with pve 2.x

    Hi guys, I have the same problem with fast QNAP storage.
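
For result 1: DRBD's async mode is replication protocol A, set in the resource configuration. A minimal sketch, assuming a two-node DRBD 8.3/8.4 setup; the resource name, devices, hostnames and addresses are placeholders, not taken from the post:

    # /etc/drbd.d/r0.res -- illustrative only
    # protocol A = asynchronous replication: a write is complete once it
    # reaches the local disk and the local TCP send buffer, so the slower
    # peer no longer throttles writes on the fast SAN
    resource r0 {
        protocol A;
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on san-a {
            address 10.0.0.1:7788;
        }
        on san-b {
            address 10.0.0.2:7788;
        }
    }

The trade-off with protocol A is that the secondary may lag slightly, so a crash of the primary can lose the most recently acknowledged writes.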
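For result 3: balance-rr bonding on Debian/Proxmox is usually configured in /etc/network/interfaces with the ifenslave package installed. A sketch assuming the two 10GbE ports are eth2 and eth3; the bond name and address are placeholders:

    # /etc/network/interfaces -- illustrative only
    # balance-rr stripes frames round-robin across both links;
    # bond-miimon 100 polls link state every 100 ms
    auto bond0
    iface bond0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode balance-rr
        bond-miimon 100

balance-rr is one of the few modes that can push a single TCP stream (such as DRBD replication) beyond the bandwidth of one link, but out-of-order delivery can trigger TCP retransmits, so the real gain has to be measured; both directly connected nodes need the same bonding setup.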
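For result 5: the post does not name the specific XFS mount options, so the fstab line below only shows commonly tuned options for an XFS volume on a hardware RAID controller; the device and mount point are placeholders:

    # /etc/fstab -- illustrative only
    /dev/sdb1  /var/lib/vz  xfs  noatime,nobarrier,inode64,logbufs=8  0  2

Note that nobarrier is only safe when the controller's write cache is battery- or flash-backed.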