Search results

  1. CPU MHZ and Cache Different in VM than host.

    What CPU type did you assign to the VM? I don't know for sure, but it may be a limitation of the default CPU type (kvm64), which I think you are using. Can you try to change the CPU type to "host" in Proxmox and check again? Please note that keeping the CPU type at "host" is only a good idea if you have only the same...
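
    Not from the thread itself, just a minimal sketch: the CPU type can also be set from the PVE command line (the VMID 100 is a placeholder):

        # qm set 100 --cpu host       # placeholder VMID, use your own
        # qm config 100 | grep cpu    # check the cpu line of the VM config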
  2. PVE 4.1, systemd-timesyncd and CEPH (clock skew)

    This problem doesn't seem to be related to the topic. It looks like you just have no connection to 2 NTP servers anymore. Both NTP servers are from the same company according to Whois, so it is probably a problem on their side, in the transit between you and them, or with your internet connection.
  3. Add node - reboot all vm's

    I think if you only disable HA for the VM's the node will still crash. The only change you will notice is that the VM's running on the crashed node will not be moved automatically to another node (because HA is disabled). If you want to test this you need to disable HA for the VM's and disable...
  4. Node Restarts

    Test multicast traffic; please see https://pve.proxmox.com/wiki/Multicast_notes. If this is OK, is there enough free memory on the PVE nodes?
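
    As a sketch only (the exact invocation is in the wiki page above, not in this snippet): multicast between nodes is usually tested with omping, started on all nodes at the same time; pve1, pve2 and pve3 are placeholder hostnames:

        # omping -c 600 -i 1 -q pve1 pve2 pve3    # roughly 10 minutes, reports packet loss per node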
  5. Two Network card

    The config example I provided is for use on the PVE node. After that you assign the vmbr interfaces to the VM's.
  6. Add node - reboot all vm's

    Anything special in the logs on the PVE nodes at the time of the crash?
  7. Two Network card

    Normally with bonding you have 2 (or more) links that can handle the same traffic (i.e. have access to the same VLANs). This way you can eliminate any SPOF in your network, and your users don't have to connect to another IP in case a link fails. For example, two switches, each with 1 link to...
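
    A rough sketch (not from the thread) of what such a bond could look like in /etc/network/interfaces on a PVE 4.x node; interface names and addresses are placeholders:

        auto bond0
        iface bond0 inet manual
            slaves eth0 eth1
            bond_mode active-backup
            bond_miimon 100

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0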
  8. Two Network card

    That's correct. You can consider writing a simple shell script that monitors eth0; if it's down (no reply on ping, for example), the script deletes the default gateway and adds it again using the other interface (eth1). Should be something like: # ip route del default # ip route add...
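
    A rough sketch of such a script (gateway addresses and interface names are assumptions, not taken from the thread):

        #!/bin/bash
        # Crude failover check: if the gateway is no longer reachable via eth0,
        # move the default route over to eth1.
        GW_ETH0=192.168.1.1    # placeholder gateway normally reached via eth0
        GW_ETH1=192.168.2.1    # placeholder gateway to use via eth1

        if ! ping -c 3 -W 2 -I eth0 "$GW_ETH0" > /dev/null 2>&1; then
            ip route del default
            ip route add default via "$GW_ETH1" dev eth1
        fi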
  9. Add node - reboot all vm's

    Are you sure you only have 1 IGMP querier active in your network (the second one/others need to be silent)? Can you see anything in the logs of your switch(es) that could clarify something (STP/loop errors?)? What fencing device do you use, and what about the timers when this occurs?
  10. Feature Request: Show HA-group at VM status info

    Sorry, I was looking at the wrong page. I think I need new glasses ;).
  11. Feature Request: Show HA-group at VM status info

    Feature seems to be gone in version 4.2-18?
  12. ProxMox 4.2 less stable than 3.4

    Are you sure it's not a time issue (clock skew)? That is a known problem since 4.x when running Ceph on the same host (4.x is based on Debian Jessie, where systemd-timesyncd was introduced). If it is, here is how to fix it: https://forum.proxmox.com/threads/pve-4-1-systemd-timesyncd-and-ceph-clock-skew.27043/
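
    If I remember the linked thread correctly, the fix boils down to replacing systemd-timesyncd with a full NTP daemon; roughly like this (not verified against the thread, see the link above for the actual steps):

        # systemctl stop systemd-timesyncd
        # systemctl disable systemd-timesyncd
        # apt-get install ntp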
  13. Docker support in Proxmox

    Although I'm not planning to use it myself currently, I agree. I think adding Docker support would be a nice feature for PVE, because I think Docker will be the no. 1 choice when it comes to containers for lots of people (though if I were using containers, I think I would prefer LXC. But Docker simply...
  14. cluster issue

    You run PVE 4.x, so: corosync-cmapctl -g totem.interface.0.mcastaddr
  15. [SOLVED] apt-get update with IPv6 gives error

    Thanks for checking. In that case I guess it's apt that has the strange behavior of forcing v6, even if the host doesn't exist. :(
  16. [SOLVED] apt-get update with IPv6 gives error

    But I assume there is then also no AAAA record for it, so it will only resolve on v4 and only work on v4? However, if apt is trying to connect over v6, I guess there are AAAA records for these hostnames. Maybe Proxmox had better remove these records until v6 is fully functional?
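
    Not mentioned in the snippet, but as a possible workaround sketch, apt can be pinned to IPv4 until v6 works properly:

        # apt-get -o Acquire::ForceIPv4=true update
        # echo 'Acquire::ForceIPv4 "true";' > /etc/apt/apt.conf.d/99force-ipv4    # make it permanent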
  17. Need Help Setting up VLANs

    Never done this myself, but as far as I know all you have to do is configure a bridge for each customer (e.g. vmbr0001, vmbr0002, etc.) and assign that bridge to the VM's that need internal traffic.
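
    A rough sketch (not from the thread) of what two such customer bridges could look like in /etc/network/interfaces; the names are placeholders, and with no physical ports attached the traffic stays internal to the node:

        auto vmbr0001
        iface vmbr0001 inet manual
            bridge_ports none
            bridge_stp off
            bridge_fd 0

        auto vmbr0002
        iface vmbr0002 inet manual
            bridge_ports none
            bridge_stp off
            bridge_fd 0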
  18. Installation of VE on an SSD

    To minimize the use of swap: # sysctl vm.swappiness=1 && echo "vm.swappiness = 1" > /etc/sysctl.d/swappiness.conf && swapoff -a && swapon -a
  19. PVE 4.x: Hardware or software watchdog preffered?

    Does this mean that when a total system failure occurs, the VM's running on the crashed node are not moved to another node (because this node can't be fenced)? Or does it only mean the node isn't rebooted automatically?
