Search results

  1.

    lvm move disk problem

    It was a new LV created by the move disk command from the GUI. So I guess that, with the VM shut down, the move disk operation left the LV active, and HA then started the VM on the other node and made the LV active there as well. Could this be what happened?
  2.

    Dell PERC H310 out of the box? And performance? Suggestions?

    I have two Dell R720s with the H710.. they work out of the box and the performance is great.
  3.

    lvm move disk problem

    I mentioned that in my first post.. it was stopped.
  4.

    lvm move disk problem

    Then it means I hit a state where, after the move disk finalized (from the GUI) and I enabled HA for that VM, which started it by moving it to the other node, the same LV ended up active on both nodes. This actually happened twice in a row with two different VMs, but with a third one it didn't happen..
  5.

    lvm move disk problem

    I run a 2-node HA cluster and I have successfully moved a disk from one VG to another. The VGs are on top of 2x LVM, which runs on top of 2x DRBD. What happened was that after the operation terminated OK, I enabled HA for that VM (it was shut down and without HA) and it was moved to the...
  6.

    [ISSUE] Network get down

    Something happens with your setup after those 4-5 hours that you mention. Check the logs in /var/log, check dmesg.. maybe it's something obvious..
  7.

    [ISSUE] Network get down

    It doesn't explain why you don't have network connectivity from your VMs.. but you should have two different IPs, even on different machines. Btw, what do you mean by the machines not having internet after a while? Can you ping the gateway from the VM? Maybe something else is going on?
  8.

    [ISSUE] Network get down

    That cannot be the cause. What DNS have you set up in /etc/resolv.conf on the Proxmox server? When I checked your domain, the IP address for consolight.ro was 224.90 Maybe there is just some DNS problem. Was this domain moved from another hosting provider, one that is perhaps routed by the same ISP and...
  9.

    [ISSUE] Network get down

    If the network config I suggested for the host works, then it has to be correct from a network-topology point of view. You have to debug what is going on when things stop working. Can't you get help from the local ISP/hosting provider? They could tell you, for example, if they have the...
  10.

    [ISSUE] Network Server Problem Please Help

    I'm glad you sorted it out.. just be patient and read some more when you face a challenge :)
  11.

    [ISSUE] Network Server Problem Please Help

    1st option:

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 176.223.223.114
        netmask 255.255.255.252
        gateway 176.223.223.113
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

    And then you assign IP addresses to your VMs from 176.223.224.90...
  12.

    [ISSUE] Network Server Problem Please Help

    Assuming that you have only eth0 as the interface connected to your ISP, you should trim the iface eth0 setup down to just "iface eth0 inet manual". In the auto vmbr0 section, add "bridge_ports eth0" and leave the IP address of vmbr0 as 176.223.223.114. Then use the IP addresses from the additional subnet directly inside...
  13.

    DRBD Diskless after 48 hours

    I have experienced problems with DRBD when using bonding over 3 links in round-robin. Two ports belong to the same quad-port network card and one to another card (I wanted this to keep working if one of the cards fails). It seems that when using a high sync rate, packets from time to time don't...
  14.

    Testing Cluster, Failover Domains and such

    For everything that is not handled through the GUI, I modified cluster.conf (for example fencing.. and HA).. if you wonder why you need Proxmox, then you should start another topic :P
  15.

    Testing Cluster, Failover Domains and such

    https://fedorahosted.org/cluster/wiki/FailoverDomains You are using RHCS at the core..
  16.

    ProxMox VLAN configuration

    You have the same network for all the VLANs because you use a /8 netmask. Maybe you want a /24 assignment for each VLAN?
  17.

    Slow Proxmox 2.1 Performance on Dell H310 Raid Card

    On PERC controllers, read cache and write cache are different settings/policies.. In my case a BBU was 200 USD, and working with write-back caching makes a huge difference.. you could buy an H710 with 1GB cache..
  18.

    Slow Proxmox 2.1 Performance on Dell H310 Raid Card

    If you really want WB caching, get a BBU; you can lose a lot of data with a sudden loss of power..
  19.

    windows 2003 drivers issues

    In my case, all the driver versions above 0.49 didn't work.. I was getting BSODs (I run 7 W2003 R2 64-bit VMs).
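The VLAN reply above (same network for every VLAN because of a /8 netmask) can be sketched with Python's ipaddress module. The 10.0.10.x/10.0.20.x addresses are made-up examples, not from the posts:

```python
import ipaddress

# Two hosts on what are meant to be different VLANs (hypothetical addresses)
vlan_a_host = "10.0.10.5"
vlan_b_host = "10.0.20.5"

# With a /8 netmask, both hosts compute the same network: 10.0.0.0/8,
# so there is no layer-3 separation between the VLANs.
net_a = ipaddress.ip_interface(vlan_a_host + "/8").network
net_b = ipaddress.ip_interface(vlan_b_host + "/8").network
print(net_a == net_b)  # True

# With a /24 per VLAN, each host lands in its own network
# (10.0.10.0/24 vs 10.0.20.0/24), which is the separation you want.
net_c = ipaddress.ip_interface(vlan_a_host + "/24").network
net_d = ipaddress.ip_interface(vlan_b_host + "/24").network
print(net_c == net_d)  # False
```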
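In the network-server replies above, the host sits on a /30 (netmask 255.255.255.252), which only has room for the gateway and the host itself; that is why the VM addresses have to come from the additional subnet. A quick check with Python's ipaddress module, using the addresses that appear in the posts:

```python
import ipaddress

# The vmbr0 config from the thread: 176.223.223.114 with netmask /30
host_net = ipaddress.ip_interface("176.223.223.114/30").network

# A /30 has exactly two usable hosts: the gateway (.113) and the server (.114)
print([str(h) for h in host_net.hosts()])

# The example VM address from the thread, 176.223.224.90, is NOT inside
# that /30, confirming the VMs must use the additional routed subnet.
vm_ip = ipaddress.ip_address("176.223.224.90")
print(vm_ip in host_net)  # False
```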