Hi,
I have changed the IP address of my vmbr0 network device to be a local one (10.254.254.254) and my cluster broke.
After digging around, it turns out that my /etc/hosts looks like this:
Code:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
10.254.254.254 XXX.ovh.net.ovh.net XXX.ovh.net pvelocalhost
# The following lines are desirable for IPv6 capable hosts
#(added automatically by netbase upgrade)
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
My eth0 (and eth0:0, eth0:1, and so on) NICs all have the correct publicly visible IP addresses.
Changing /etc/hosts manually doesn't help, as it is rebuilt on the next reboot.
How do I prevent /etc/hosts from being rebuilt, or alternatively, how do I tell it to use the IP address on eth0?
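In case it helps, this is how I'm checking which address is actually on eth0 (assumes iproute2 is installed; the awk just strips the /prefix from the first inet line):

```shell
# print the primary IPv4 address configured on eth0
ip -4 addr show eth0 | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
```

That prints the public address I'd expect /etc/hosts to be using, not 10.254.254.254.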
Surely using eth0 would be a better strategy anyway?
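For reference, this is roughly what I'd expect the file to end up looking like (203.0.113.10 is just a stand-in for the real public address on eth0; hostnames as in the file above):

```
127.0.0.1      localhost.localdomain localhost
203.0.113.10   XXX.ovh.net XXX pvelocalhost
```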
Ta.
Col