Search results

  1. PVE 4 two-node cluster without HA possible?

    I have the same setup and worked this out just yesterday - Try this: man votequorum pvecm status nano /etc/pve/corosync.conf quorum { provider: corosync_votequorum two_node: 1 wait_for_all: 0 } totem { config_version: INCREASE_BY_1 !! interface { bindnetaddr: <ensure set... (see the corosync.conf sketch after these results)
  2. OpenVZ venet network problems after host network changes

    I have determined the cause of this issue. It was only happening on NFS storage, not local storage, and was also after a recent update of my FreeNAS server. I don't know what had changed, but it appeared that that update caused an incompatibility with Proxmox and NFS on ZFS! After the following...
  3. OpenVZ venet network problems after host network changes

    Hello, As below: root@proxmox1:~# cat /etc/pve/openvz/105.conf ONBOOT="no" PHYSPAGES="0:512M" SWAPPAGES="0:512M" KMEMSIZE="232M:256M" DCACHESIZE="116M:128M" LOCKEDPAGES="256M" PRIVVMPAGES="unlimited" SHMPAGES="unlimited" NUMPROC="unlimited" VMGUARPAGES="0:unlimited"...
  4. OpenVZ venet network problems after host network changes

    I discovered that after the reinstall of node "proxmox1" the NICs were in the wrong order. I have now corrected this (in /etc/udev/rules.d/70-persistent-net.rules) and so the output of brctl is now: root@proxmox1:~# brctl show bridge name bridge id STP enabled interfaces vmbr0... (see the udev rules sketch after these results)
  5. OpenVZ venet network problems after host network changes

    /etc/network/interfaces: # network interface settings auto lo iface lo inet loopback iface eth0 inet manual auto eth1 iface eth1 inet static address 192.168.9.14 netmask 255.255.255.0 auto eth2 iface eth2 inet static address 10.0.0.1 netmask...
  6. OpenVZ venet network problems after host network changes

    Hi, I have been running Proxmox 3.x for 12+ months (mostly) without problems, with both OpenVZ containers with venet IPs, and KVM VMs. Until recently my network config looked like this: eth0,eth1 -> bond0 -> vmbr0, with IP on the bridge only. I have now separated out the eth1 interface to... (see the bonded-bridge /etc/network/interfaces sketch after these results)
  7. Proxmox VE 3.1 slow NFS reads

    Please take a look at the end of http://pve.proxmox.com/wiki/Performance_Tweaks. The issue with backups may be due to the default bandwidth limit of 10Mb/s (see the vzdump bwlimit sketch after these results)
  8. Proxmox VE 3.1 slow NFS reads

    This is probably due to the default bandwidth limit set for backups - see end of http://pve.proxmox.com/wiki/Performance_Tweaks
  9. 2-Node HA Cluster with SBD Fencing?

    Hello, I am new to Proxmox VE, and have recently set up a 2-node cluster with a Quorum disk on iSCSI as per http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster, but currently without fencing as I do not have a device to use. I am assuming that this is why the 'rgmanager' service will... (see the quorum-disk sketch after these results)
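
The snippets above are truncated previews; the sketches below fill in the configuration they cut off. Result 1 points at the votequorum two_node option for running a PVE 4 cluster without a third vote. A minimal sketch of the relevant /etc/pve/corosync.conf sections follows; the bind address and config_version value are placeholders, not the poster's actual settings:

    quorum {
      provider: corosync_votequorum
      two_node: 1
      wait_for_all: 0
    }

    totem {
      # increment config_version by 1 each time the file is edited
      config_version: 2
      interface {
        ringnumber: 0
        # example cluster network address; use your own
        bindnetaddr: 192.168.0.0
      }
    }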
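
Result 4 fixes NIC ordering through /etc/udev/rules.d/70-persistent-net.rules. On Proxmox 3.x (Debian Wheezy) that file pins interface names to MAC addresses; the entries below use made-up MACs, and a reboot (or a udev trigger plus reloading the NIC drivers) is needed before the new naming takes effect:

    # example entries; replace the MAC addresses with the ones shown by `ip link`
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"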
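
Results 5 and 6 revolve around an eth0,eth1 -> bond0 -> vmbr0 layout with the IP on the bridge. A hedged sketch of how that layout is usually written in /etc/network/interfaces on PVE 3.x follows; the addresses, gateway, and bond mode are examples rather than values taken from the posts:

    auto lo
    iface lo inet loopback

    iface eth0 inet manual
    iface eth1 inet manual

    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0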
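
Results 7 and 8 attribute the slow backups to the default vzdump bandwidth limit mentioned at the end of the Performance_Tweaks wiki page. On PVE 3.x the cap can be raised node-wide in /etc/vzdump.conf (the value is in KB/s); the figure below is an example, not a recommendation:

    # /etc/vzdump.conf
    # raise the backup bandwidth cap; 0 disables the limit entirely
    bwlimit: 102400

The same option can also be passed per job on the command line with vzdump --bwlimit.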
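
Result 9 follows the PVE 3.x Two-Node_High_Availability_Cluster wiki, where an iSCSI quorum disk supplies the third vote. As a rough, hedged illustration only, the quorum disk is referenced from /etc/pve/cluster.conf with a quorumd element along these lines (label and timings are example values); fencing still has to be configured separately before HA-managed services can be relied on:

    <quorumd allow_kill="0" interval="1" label="proxmox_qdisk" tko="10" votes="1"/>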
