What CPU type did you assign to the VM? I'm not sure, but it may be a limitation of the default CPU type (kvm64), which I think you are using.
Can you try to change CPU type to "host" in Proxmox and check again?
Please note that keeping the CPU type set to "host" is only a good idea if you have only the same...
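If you prefer the CLI over the GUI, something along these lines should do it (just a sketch, with VM ID 100 as an example):

# qm set 100 --cpu host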
This problem doesn't seem to be related to this topic. It looks like you just have no connection to those 2 NTP servers anymore. Both NTP servers belong to the same company according to Whois, so it's probably a problem on their side, in the transit between you and them, or with your internet connection.
I think that if you only disable HA for the VM's, the node will still crash. The only change you will notice is that the VM's running on the crashed node will not be moved automatically to another node (because HA is disabled). If you want to test this you need to disable HA for the VM's and disable...
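For reference, a rough sketch of taking a VM out of HA on the command line (vm 100 is only an example, the same can be done via the GUI):

# ha-manager remove vm:100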
Normally with bonding you have 2 (or more) links that can handle the same traffic (i.e. have access to the same VLANs). This way you can eliminate any SPOF in your network and you don't need your users to connect to another IP in case a link fails. For example two switches with both 1 link to...
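For what it's worth, an active-backup bond plus bridge in /etc/network/interfaces could look roughly like this (interface names and addresses are just examples, and the exact option syntax can differ a bit between ifupdown/ifenslave versions):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0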
That's correct. You could consider writing a simple shell script that monitors eth0; if it's down (no reply on ping, for example) you let the script delete the default gateway and add it again using the other interface (eth1). Should be something like:
# ip route del default
# ip route add...
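A minimal sketch of such a script (addresses, interface names and timings are only examples, and you would want to run it from cron or in a loop):

#!/bin/bash
# Gateway reachable via eth0 and fallback gateway via eth1 (example addresses).
GW_MAIN="192.168.1.1"
GW_BACKUP="192.168.2.1"

# Ping the main gateway through eth0; if it doesn't reply, switch the default route to eth1.
if ! ping -c 3 -W 2 -I eth0 "$GW_MAIN" > /dev/null 2>&1; then
        ip route del default
        ip route add default via "$GW_BACKUP" dev eth1
fi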
Are you sure you only have 1 IGMP querier active in your network (the second one/others need to be silent)?
Can you see anything in the logs of your switch(es) that could clarify things (STP/loop errors?)?
What fencing device do you use, and what do the timers look like when this occurs?
Are you sure it's not a time issue (clock skew)? That is a known problem since 4.x (4.x is based on Debian Jessie, where systemd-timesyncd was introduced) when running Ceph on the same host. If it is, here is how to fix it: https://forum.proxmox.com/threads/pve-4-1-systemd-timesyncd-and-ceph-clock-skew.27043/
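I don't remember the exact steps from that thread, but as far as I know it boils down to replacing systemd-timesyncd with a real NTP daemon on the nodes running Ceph, roughly:

# systemctl stop systemd-timesyncd
# systemctl disable systemd-timesyncd
# apt-get install ntp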
I agree, although I'm not planning to use it myself at the moment. I think adding Docker support is a nice feature for PVE, because I think Docker will be the no. 1 choice when it comes to containers for lots of people (although if I were using containers, I think I would prefer LXC. But Docker simply...
But I assume there is then also no AAAA record for it, so it will only resolve on v4 and only work on v4? However, if apt is trying to connect over v6, I guess there are AAAA records for these hostnames. Maybe Proxmox had better remove these records until v6 is fully functional?
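To check this yourself, and to work around it in the meantime, something like this should work (using download.proxmox.com only as an example hostname):

# dig AAAA download.proxmox.com +short
# echo 'Acquire::ForceIPv4 "true";' > /etc/apt/apt.conf.d/99force-ipv4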
Never done this myself, but as far as I know all you have to do is configure a bridge for each customer (e.g. vmbr0001, vmbr0002, etc.) and assign this bridge to the VM's that need internal traffic.
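A bridge without any physical ports in /etc/network/interfaces could look roughly like this (the name vmbr0001 is just an example):

auto vmbr0001
iface vmbr0001 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0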
Does this mean that when a total system failure occurs, the VM's running on the crashed node are not moved to another node (because this node can't be fenced)? Or does it only mean the node isn't rebooted automatically?