Search results

  1. "tar: write error" during a Restore

    Hello, restoring a KVM displays this:

      extracting archive '/var/lib/vz/dump/vzdump-qemu-100-2012_05_01-10_52_28.tar.lzo'
      extracting 'qemu-server.conf' from archive
      extracting 'vm-disk-ide0.raw' from archive
      Rounding up size to full physical extent 4.00 GiB
      Logical volume "vm-103-disk-1"...
  2. Bug in DRBD causes split-brain, already patched by DRBD devs

    e100 thanks for making the bug report... hopefully DRBD gets updated in the kernel soon. To prevent this issue I'll use suspend backups for KVMs on DRBD (see the vzdump sketch after this list). Have you tried doing that to prevent split-brain?
  3. Failover Domains question.

    So I've been reading threads about fds [no more floppy disks around, so fd = Failover Domains :)]. I'm setting up a 3-node cluster with two DRBD resources on nodeA and nodeB per what e100 has written. nodeC is to provide proper quorum and development. From this thread...
  4. HA problem

    That looks like a hardware disk issue.
  5. glusterfs and high availability.

    I followed http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/ and used replicated volumes per http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/sect-Administration_Guide-Setting_Volumes-Replicated.html (see the volume-create sketch after this list). The set up...
  6. glusterfs and high availability.

    I tested glusterfs over the last week and am not going to use it for data that needs high availability. It worked great until a reboot and umount caused split-brain. I'll post details if there are responses to this. In the future I think glusterfs version 3.3+ on top of ZFS will be a way...
  7. VM is locked (migrate)

    Thank you, that unlocked it; then I was able to do the migrate (see the qm unlock note after this list).
  8. /etc/network/interfaces question

    On a new install from the most recent ISO, interfaces looks like this:

      auto lo
      iface lo inet loopback

      auto vmbr0
      iface vmbr0 inet static
          address 10.100.100.73
          netmask 255.255.0.0
          gateway 10.100.100.2
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0
      ...
  9. VM is locked (migrate)

    A migrate failed, and the KVM will not start. More details on that later. The KVM was controlled by HA. I tried to do an online migrate and the target system rebooted; not sure yet if it was a panic or what caused that. Now, trying to start the KVM, this was in the log: Task started...
  10. HA cluster question about node priority.

    We have a 3-node cluster. Two of the nodes have more memory and newer hardware than the third node. Is there a way to assign a higher priority to nodes, so that when "/etc/init.d/rgmanager stop" is run the KVMs go to the two stronger nodes, and to the third only as a last resort? (see the failover-domain sketch after this list)
  11. High risk of dataloss through human oversight

    You could write an add-on for people who click a warning without paying attention to it: some kind of sensor that checks the amount of time between the pop-up appearing and the click. Check out the patch submitter process.
  12. High risk of dataloss through human oversight

    Sascha - make sure backups are done often and rsynced to other servers' disks. We also use usbmount and a script to copy to USB for offsite backup (see the rsync sketch after this list). I have no problem with the PVE interface. I do have problems with sometimes working too long and making mistakes.
  13. after "pvecm delnode" deleted node shows with "pvecm nodes"

    Before reading your reply, I had added a fence device to a node and activated the changes, and then the issue was solved. So I do not have good debugging info.

      <?xml version="1.0"?>
      <cluster name="fbcluster" config_version="24">
        <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
        </cman>
        ...
  14. after "pvecm delnode" deleted node shows with "pvecm nodes"

    Hello, we removed a node with "pvecm delnode s002", then powered off s002. However, "pvecm nodes" shows:

      fbc241 s012 ~ # pvecm nodes
      Node  Sts   Inc   Joined               Name
         1   M    264   2012-04-17 06:58:42  fbc240
         2   M    324   2012-04-18 19:50:40  fbc246
         3   X    204   ...
  15. Online migration fails

    Glusterfs online migration is working for us.
  16. Connect failed: connect: Connection refused; Connection refused (500)

    The SSH issue had something to do with the old server running one of our cron scripts, which was OK for a non-cluster setup.
  17. Connect failed: connect: Connection refused; Connection refused (500)

    The SSH issue fixed itself. I do not know how; my guess is a cluster cron script did something.
  18. Connect failed: connect: Connection refused; Connection refused (500)

    After checking other threads:

      ls -l /etc/pve/local/pve-ssl.pem
      ls: cannot access /etc/pve/local/pve-ssl.pem: No such file or directory
      pvecm updatecerts
      fbc241 s012 ~ # ls -l /etc/pve/local/pve-ssl.pem...
  19. Connect failed: connect: Connection refused; Connection refused (500)

    Connect failed: connect: Connection refused; Connection refused (500) *SOLVED* This issue was solved by running pvecm updatecerts; see the 2nd post. --- Hello, I have a server which has been running Proxmox 2.0 since December. According to the aptitude logs it originally had proxmox-ve-2.6.32...
  20. High Availability 2

    I've got a question when adding the volume group created on primary/primary DRBD to PVE storage: Nodes -- should I choose both of the DRBD nodes? Thanks. (see the storage.cfg sketch after this list)
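
Configuration sketches for some of the results above

For the suspend-mode backups mentioned in result 2, a minimal vzdump sketch; the VMID and dump directory are assumptions, not taken from the post (lzo compression matches the archive name in result 1):

    # back up KVM 101 in suspend mode (hypothetical VMID and dump directory)
    vzdump 101 --mode suspend --compress lzo --dumpdir /var/lib/vz/dump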
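
For the replicated GlusterFS setup in result 5, a sketch of the kind of commands the linked guide walks through; the volume name, server names, and brick paths are assumptions:

    # two-way replicated volume across two storage servers (hypothetical names and paths)
    gluster volume create gvol0 replica 2 transport tcp serverA:/export/brick1 serverB:/export/brick1
    gluster volume start gvol0
    # mount the volume on a client
    mount -t glusterfs serverA:/gvol0 /mnt/gvol0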
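
Result 7 does not name the command that cleared the lock; assuming it was the usual qm unlock, it would look like this (the VMID is hypothetical):

    # clear the lock left by the failed migration, then start or migrate the VM again
    qm unlock 100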
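
For the node-priority question in result 10, rgmanager's ordered failover domains in /etc/pve/cluster.conf are one way to express "prefer the stronger nodes"; the domain name, node names, priorities, and VMID below are assumptions, not the poster's actual config (a lower priority value means a more preferred node):

    <rm>
      <failoverdomains>
        <failoverdomain name="prefer_strong" ordered="1" restricted="0" nofailback="0">
          <failoverdomainnode name="nodeA" priority="1"/>
          <failoverdomainnode name="nodeB" priority="1"/>
          <failoverdomainnode name="nodeC" priority="10"/>
        </failoverdomain>
      </failoverdomains>
      <!-- binding a VM resource to the domain; attribute usage assumed -->
      <pvevm autostart="1" vmid="100" domain="prefer_strong"/>
    </rm>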
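
A sketch of the kind of rsync job described in result 12; the schedule, host name, and paths are assumptions:

    # /etc/cron.d entry: nightly copy of local vzdump backups to a second server (hypothetical host and path)
    30 2 * * * root rsync -a --delete /var/lib/vz/dump/ backup2:/srv/pve-backups/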
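
For result 20, restricting the LVM storage to the two DRBD nodes is typically done with the storage's nodes list; a sketch of a matching /etc/pve/storage.cfg entry, where the storage ID, VG name, and node names are assumptions:

    lvm: drbd-vg
        vgname drbdvg
        content images
        shared 1
        nodes nodeA,nodeB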