Search results

  1. Vlan tagging from inside KVM Guest Issues

    Not 100% sure, but I think you should only assign a bridge or a vlan to the phy interface, not both;
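
A minimal iproute2 sketch of the "bridge on the plain phy interface" setup the post suggests; the interface names eth0/vmbr0 are assumptions:

```shell
# Put the bridge directly on the untagged physical interface and let
# the guest do its own 802.1Q tagging inside the bridge.
ip link add name vmbr0 type bridge
ip link set eth0 master vmbr0   # eth0 itself, not an eth0.X VLAN device
ip link set eth0 up
ip link set vmbr0 up
# i.e. do NOT also create eth0.100 on the host and bridge that too;
# pick one place to tag: either the host (bridge on eth0.100) or the guest.
```
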
  2. lvm - drbd - lvm

    I would say lvm -> drbd -> lvm makes sense if you have a large storage server where you can expand the capacity easily by adding as many disks as you need; here you can resize the first lvm to the new size, then grow the drbd device, and on the second lvm on top of the drbd you would have multiple...
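
The grow path described above can be sketched as follows; all device and volume-group names (vg_lower, drbd_lv, r0, vg_upper) are assumptions, and exact syntax depends on the DRBD version:

```shell
# 1. grow the lower LV that backs the DRBD device
lvextend -L +500G /dev/vg_lower/drbd_lv
# 2. let DRBD pick up the new backing-device size
drbdadm resize r0
# 3. grow the PV that sits on top of the DRBD device
pvresize /dev/drbd0
# vg_upper now has ~500G more free space for its multiple LVs
```
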
  3. lvm - drbd - lvm

    I'm using drbd -> lvm because if I have to resize the underlying device several times, I did a bad job planning/sizing the hardware; if I need to expand the underlying device on production servers, that means changing all the hard disks to bigger ones, and we are using 1U servers with 8 disks...
  4. High availability with nodes in different subnet

    You could insert a vde2 switch into your network config and span the vlan over both locations - in this case all cluster nodes are in the same subnet; but the quorum issue still exists, as spirit already mentioned;
  5. Any way to disable the nagware screen without recompiling?

    Come on, there is nothing to complain about - you have a fully working virtualization environment... for free... with all features... not like other projects where the free 'community version' is limited to certain features and therefore not really usable;
  6. Proxmox 3.1 no stable windows guest network!

    How is your network connectivity on the node itself? Have you checked your /etc/network/interfaces file for anything wrong? mii-tool?
  7. local storage: Container option not removable?

    Hi, seems there is a GUI bug: the GUI in PVE 3.1 always shows 'Containers' as a content option on local storage, but it is not in storage.cfg; tried to disable it in the GUI, storage.cfg is written correctly but the GUI still shows Containers; it's also not possible to disable local storage - but not sure if...
  8. using drbd in a 2+ node cluster

    That means you have both DRBD devices running in primary/secondary mode? If you have a VM in an LV running on Server 1, can you migrate this single VM over to Server 2, or do you have to migrate all VMs on this DRBD volume over to Server 2 at once and set DRBD to primary on Server 2? Because I have only...
  9. using drbd in a 2+ node cluster

    Hi, may I ask how you have your DRBD setup configured? Do you have one DRBD device over all 16 nodes, or eight two-node DRBD devices? thx, Alex
  10. How to secure every vm from host?

    Hi, it's not entirely clear to me what you are trying to do; my setup contains several VLANs for the VMs, and the host has its IP in the management VLAN; as long as there is no routing instance forwarding traffic between the management VLAN and the VLANs of your VMs, you are on the safe side; if you need to...
  11. Proxmox 3: KVM don't start

    Also, if you install PVE on your existing Debian host, you need to manually load the required modules or restart the host after installation;
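
One way to load the modules by hand, as the post suggests; the module name depends on the CPU vendor:

```shell
modprobe kvm
modprobe kvm_intel   # use kvm_amd instead on AMD CPUs
# verify the modules are loaded:
lsmod | grep kvm
```
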
  12. Disk configuration suggestions

    Hi, nowadays 1TB disks are not so expensive anymore, and you are already thinking about a possible later upgrade - buy two additional 1TB hard disks 'now' and create a RAID-10; you then have more space, better performance, and no need to think about how to move or re-install your system later... Alex
  13. When to Balloon?

    From my understanding of how ballooning works, I generally would not enable it on any critical or production system - only on non-productive systems - as it causes the guest to swap memory to disk and reclaim it back when needed, which slows down the guest; I would rather properly partition the...
  14. Default network config with eth0/vmbr0 on LAN, can't get eth1 up with public static.

    I don't think you will get it to work this way; if I understand correctly, you have a public subnet from your ISP which you want to assign to your VMs, protected by your firewall; three possible solutions...
  15. Bond Performance

    More throughput than 1GBit/s is only possible with balance-rr; if you need that only for the drbd sync, just connect the two servers directly, without a switch in between; crossover cables are normally not needed, as modern NICs can do MDI/MDI-X; 4 NICs do not give you 4x 1GBit/s - each...
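
A rough iproute2 sketch of such a direct balance-rr link between the two servers; the interface names eth2/eth3 and the address are assumptions:

```shell
# create a round-robin bond for the back-to-back DRBD link
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0
ip addr add 10.0.0.1/30 dev bond0     # .2 on the peer
ip link set eth2 up && ip link set eth3 up && ip link set bond0 up
```
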
  16. Network Setup Advice - Jumbo Frames

    Hi, jumbo frames gave a more noticeable performance gain some years ago than they do today; current switches and NICs (with TCP offload engines) already have very low packet-forwarding latencies; in my tests I saw about a 4-6% increase; use iperf and run some tests with and without jumbo frames...
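
A simple way to run the suggested comparison with iperf (iperf2 syntax; the interface name and server address are assumptions):

```shell
# on the server:
iperf -s
# on the client, first with the standard MTU:
ip link set eth0 mtu 1500
iperf -c 192.168.1.20 -t 30
# then with jumbo frames (both NICs and the switch must support MTU 9000):
ip link set eth0 mtu 9000
iperf -c 192.168.1.20 -t 30
```
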
  17. NFS backups lock up VMs

    Ack; as your backup works fine locally, it does not seem to be a disk I/O issue; you wrote that the servers become unresponsive - does this affect only the VMs or the host system too? What does your network setup look like? Is the NFS server on the same subnet? Do you use VLANs?
  18. Hung NFS backup share froze guest disk

    Hi, I assume you mounted NFS via TCP - try mounting via UDP, as the only way to get rid of a hanging TCP NFS mount is to reboot the node; Alex
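
A hedged example of mounting the share via UDP; the server address and paths are assumptions, and note that UDP only applies to NFSv3 (NFSv4 is TCP-only). Adding soft,timeo also lets a hung server fail the mount instead of blocking forever:

```shell
mount -t nfs -o udp,soft,timeo=60,retrans=3 192.168.1.50:/backup /mnt/backup
# or the equivalent /etc/fstab line:
# 192.168.1.50:/backup  /mnt/backup  nfs  udp,soft,timeo=60,retrans=3  0  0
```
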
  19. NFS backups lock up VMs

    Do you have an LVM volume below your VMs? If not and you do a backup in snapshot mode, it falls back to suspend, which freezes the VM during the backup and resumes it when finished; Alex
  20. Whole system slows down while backup is running

    Hi, I have a productive 1.9 cluster with better components and a non-productive 1.9 test system with cheaper hardware; I see no performance issues on the productive servers, but I do on the test system; the test system has one 3GHz quad-core Xeon, 8GB RAM, a cheap RAID controller and 4 S-ATA...