Search results

  1. don't get Access to VM

    First of all, the bridging configuration seems very wrong. Why would you bring up so many bridges on the same physical NIC? You also assign IP addresses from the same subnet to different bridges, and although it works, it is just like having secondary IP addresses ( or you could call them...
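
    One bridge per physical NIC is usually all that is needed; VMs in the same subnet can simply attach to that single bridge. As a minimal sketch ( Debian-style /etc/network/interfaces, with made-up interface names and addresses ), it could look like:

    ```
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
    ```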
  2. [SOLVED] HUGE Fencing problem with IPMI

    Whether to fence a node is decided by the cluster system as a whole, through "voting". He has 3 nodes, so that should work just fine. iLO is an external system, so you can count on it to reboot your node in such a case. If iLO has no power, then the node has no power, so it is fenced.. I just wanted to...
  3. [SOLVED] HUGE Fencing problem with IPMI

    Coming back to the issue of HA not migrating the VMs to the other hosts: you should check /var/log/cluster/rgmanager.log, and maybe the other logs, for hints of potential problems.. Is the shared storage configured properly?
  4. [SOLVED] HUGE Fencing problem with IPMI

    That's a node that is already "fenced". However, you should read the first post again. And to reply to you along the same lines: why would you fence a node that has no power? Quote: "We are trying to setup fecing and its somewhat working." ( I copy-pasted along with the typo ) But then he confused HA...
  5. HA migration on node failure restarts VMs

    I was hoping to have some time to perform some extended and extensive tests, but didn't.. Thank you e100 and mir for pointing out something I missed ( it's actually my first try at using clustering and drbd ). I got some important pointers from you and I'll think about a proper drbd fencing...
  6. [SOLVED] HUGE Fencing problem with IPMI

    You already have fencing capability through iLO; you shouldn't buy a UPS if you don't need one ( for example, when power is already backed up by a UPS and generator in a datacenter ). I would focus on the fencing functionality first: just try to send commands manually ( use...
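
    Sending a fencing command manually could look like this sketch ( fence_ipmilan talks IPMI to the iLO; the address and credentials are placeholders ):

    ```
    # query the node's power state through its iLO before trying reboot/off
    fence_ipmilan -a 10.0.0.2 -l admin -p secret -P -o status
    ```

    If the status query works, the same command with -o reboot should fence the node.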
  7. SSD for Host PVE OS will help in VMs performance?

    The way I read it, that's not what he asked.. However, what you mentioned above would be better practice, of course.
  8. SSD for Host PVE OS will help in VMs performance?

    Of course, but it will make a difference if the swap is on an SSD :) You should have enough RAM that you never swap..
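
    As a quick check of how swap-eager the host is, the standard Linux vm.swappiness knob can be read ( and lowered ); this is a general sketch, not something from the thread:

    ```shell
    # current swappiness: higher values make the kernel swap more eagerly
    cat /proc/sys/vm/swappiness
    # to make the host prefer dropping page cache over swapping, e.g.:
    # sysctl vm.swappiness=10   ( needs root; commented out here )
    ```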
  9. [SOLVED] HUGE Fencing problem with IPMI

    From what I see, you have put the Proxmox IP addresses on the fencedevice, but it should be the IP address of the iLO for IPMI. ( This is also obvious in your example of trying to fence manually: there you use the IP address of pve, but when you run fence_ipmilan you use the correct IP. ) However...
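
    In cluster.conf terms, the fix would be along these lines ( a sketch with placeholder names and addresses; the point is that ipaddr must be the iLO's address, not the node's ):

    ```
    <fencedevices>
      <fencedevice agent="fence_ipmilan" name="ilo-node1"
                   ipaddr="10.0.0.2" login="admin" passwd="secret" lanplus="1"/>
    </fencedevices>
    ```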
  10. SSD for Host PVE OS will help in VMs performance?

    It should not make any difference, except from the moment you start using the host's swap..
  11. HA Cluster with 2 Nodes, Backup

    There is a need for quorum. You can get it with a 3rd Proxmox node, which can be any machine ( you won't run VMs on it.. it's just for quorum ), or via a quorum disk. Basically you need a 3rd party to tell you which node is going "nuts".. Also, a quorum disk can be very easily...
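
    Setting up a quorum disk is roughly this ( a sketch; the device path and label are placeholders, and the disk must live on storage both nodes can reach ):

    ```
    # initialise a small shared LUN as a quorum disk
    mkqdisk -c /dev/sdb1 -l pvequorum
    ```

    It is then referenced from cluster.conf with a matching `<quorumd label="pvequorum" votes="1"/>` entry.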
  12. Prevent IP conflicts in VMs

    You could use something like ebtables to make sure that the bridge you are using doesn't forward traffic with an unwanted source IP from a specific interface. One problem, though, is that the tap interfaces get added to the bridge only on VM startup ( obvious, of course ), but I guess...
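
    An ebtables rule for that idea could look like this sketch ( the tap interface name and the VM's address are placeholders ):

    ```
    # drop IPv4 frames entering the bridge from this tap unless the source IP
    # is the one assigned to the VM
    ebtables -A FORWARD -i tap101i0 -p IPv4 --ip-src ! 192.168.1.101 -j DROP
    ```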
  13. New server, VM mbr problems

    Could it be something related to the BIOS emulation? Is it the same BIOS you are starting kvm with on both hosts? Is the Windows partition marked as active?
  14. HA Cluster with 2 Nodes, Backup

    You can check my topic: http://forum.proxmox.com/threads/17382-HA-migration-on-node-failure-restarts-VMs I just finished setting up such a scenario, except that I didn't want to use lvm and just put GFS2 directly on top of drbd. Right now I am running I/O performance tests just to make sure I won't hit...
  15. Error kernel: kvm: 3499: cpu1 unhandled rdmsr

    Did you check the logs on both guest and host to see if there is anything "special"?
  16. [ask]trouble with ssh

    You could use nohup. Just issue nohup wget... etc.
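
    The same idea in a runnable form, with a harmless placeholder command standing in for the actual wget:

    ```shell
    # nohup makes the job ignore the hangup signal sent when the session
    # closes, so it keeps running after you disconnect from SSH
    nohup sh -c 'echo started; sleep 1; echo finished' > job.log 2>&1 &
    wait $!    # only for this demo; normally you would simply log out
    cat job.log   # job.log now contains "started" and "finished"
    ```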
  17. Error kernel: kvm: 3499: cpu1 unhandled rdmsr

    If you set the processor type to "host", then the guest VM tries to use debug MSRs of the CPU, although that only makes sense from the host's perspective: https://bugzilla.redhat.com/show_bug.cgi?id=874627 You can just ignore this, or you could use kvm64 as the CPU type..
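
    Changing the CPU type on an existing VM is a one-liner ( 101 is a placeholder VMID ):

    ```
    # stop exposing the host's model-specific registers to the guest
    qm set 101 -cpu kvm64
    ```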
  18. HA migration on node failure restarts VMs

    Regarding the question of whether CLVM is required for GFS2: I found in Red Hat's documentation that you can use GFS2 directly; however, they don't offer support for this kind of setup in a cluster environment.. I am just afraid of starting this system and then, with live applications on...
  19. HA migration on node failure restarts VMs

    I don't have shared storage; I simulate it with the help of drbd. I use a partition on the internal disk of the Dell server and mirror it, with drbd, to an identical partition on the internal disk of the second server. Multipath, whether from iSCSI, FC or another SAN, means a...
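
    A drbd resource definition for such a two-node mirror could be sketched like this ( hostnames, devices and addresses are made up ):

    ```
    resource r0 {
        protocol C;
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }
    ```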
  20. HA migration on node failure restarts VMs

    I couldn't find anything related to such functionality for gfs2. Maybe you mean glusterfs? ( for which I don't know the details )