[SOLVED] Proxmox VE picks up the hostname of one of its virtual machines after being powered off and back on

yondkoo · New Member · Dec 6, 2023
Last night the data center lost power and the servers went down. When power came back and the servers were turned on, the two Proxmox VE servers stopped working. Connecting to them via SSH, the hostnames had been changed to the hostnames of their virtual machines.
The first server's hostname was originally pve-node1; now it's pgsql2 (the name of one of its virtual machines).
The second server's hostname was pve-node2; now it's VM9000 (which is just a template image...)

Any way to fix this?
 
Is this only in the GUI, or does the hostname also show when you e.g. connect via SSH? The nodes all have static network configs, correct? No DHCP options, nothing like that...
 
When entering the servers via SSH, the hostnames were changed to the hostname of the virtual machine.
Are you sure you're connecting to the PVE host and not a VM? Is it possible the host (and VMs) are configured to use DHCP, and when they were powered back on they were handed the IPs of pgsql2 and VM9000?
 
When entering the servers via SSH

Sorry, I missed this.

EDIT: Are you connecting through a standalone SSH connection via IP, or are the hostnames being resolved by some DNS?

Are you sure you don't have e.g. DHCP handing out to a VM an IP address that you assigned to your node? Or put another way, could there be an IP conflict because your DHCP pool includes the nodes' IPs?
 
Sorry, I missed this.

EDIT: Are you connecting through a standalone SSH connection via IP, or are the hostnames being resolved by some DNS?

Are you sure you don't have e.g. DHCP handing out to a VM an IP address that you assigned to your node? Or put another way, could there be an IP conflict because your DHCP pool includes the nodes' IPs?
I'm connecting through a standalone SSH connection with the exact IP. DHCP is disabled on the router; I don't think the server gets a new address.
 
Are you sure you're connecting to the PVE host and not a VM? Is it possible the host (and VMs) are configured to use DHCP, and when they were powered back on they were handed the IPs of pgsql2 and VM9000?
I'm 100% sure that I'm connecting to the PVE host.
 
Could it be happening because of cloud-init?
Right now my guess is that the PVE host gets its host info (hostname and hosts file) from the last cloud-init configuration that was used by a VM.
 
Just to bring more clarity, would you mind posting /etc/corosync/corosync.conf from the nodes (hosts) that exhibit this problem? Please SSH in by IP from an outside (non-cluster) machine and also show the output of ip a and routel.

Do you use any VLANs in that setup? Is guest networking bridged?
 
Just to bring more clarity, would you mind posting /etc/corosync/corosync.conf from the nodes (hosts) that exhibit this problem? Please SSH in by IP from an outside (non-cluster) machine and also show the output of ip a and routel.

Do you use any VLANs in that setup? Is guest networking bridged?
ip a and routel:


Code:
root@pve-node1:~# routel
Dst             Gateway         Prefsrc         Protocol Scope   Dev              Table
default         172.16.100.1                    kernel           vmbr0
172.16.100.0/24                 172.16.100.31   kernel   link    vmbr0
127.0.0.0/8                     127.0.0.1       kernel   host    lo               local
127.0.0.1                       127.0.0.1       kernel   host    lo               local
127.255.255.255                 127.0.0.1       kernel   link    lo               local
172.16.100.31                   172.16.100.31   kernel   host    vmbr0            local
172.16.100.255                  172.16.100.31   kernel   link    vmbr0            local
root@pve-node1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno8303: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether d0:8e:79:b9:31:4c brd ff:ff:ff:ff:ff:ff
    altname enp4s0f0
3: eno12399np0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e4:3d:1a:10:3a:a0 brd ff:ff:ff:ff:ff:ff
    altname enp75s0f0np0
4: eno8403: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d0:8e:79:b9:31:4d brd ff:ff:ff:ff:ff:ff
    altname enp4s0f1
5: eno12409np1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e4:3d:1a:10:3a:a1 brd ff:ff:ff:ff:ff:ff
    altname enp75s0f1np1
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:8e:79:b9:31:4c brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.31/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::d28e:79ff:feb9:314c/64 scope link
       valid_lft forever preferred_lft forever
root@pve-node1:~# cat /etc/hosts
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
#     /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 VM9000 VM9000
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

root@pve-node1:~#

corosync.conf:

Code:
root@pve-node1:~# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve-node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.16.100.31
  }
  node {
    name: pve-node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.16.100.32
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: pve-cluster1
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

I didn't set up any VLANs. All guests are bridged.
 
What is the output of "lsblk", "df", and "free", when you ssh into the PVE server?
Code:
root@pve-node1:~# lsblk
NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                               8:0    0  1.7T  0 disk
|-sda1                            8:1    0 1007K  0 part
|-sda2                            8:2    0    1G  0 part /boot/efi
`-sda3                            8:3    0  1.7T  0 part
  |-pve-swap                    253:0    0    8G  0 lvm  [SWAP]
  |-pve-root                    253:1    0   96G  0 lvm  /
  |-pve-data_tmeta              253:2    0 15.9G  0 lvm
  | `-pve-data-tpool            253:4    0  1.6T  0 lvm
  |   |-pve-data                253:5    0  1.6T  1 lvm
  |   |-pve-vm--501--disk--0    253:6    0    4M  0 lvm
  |   |-pve-vm--501--disk--1    253:7    0    4M  0 lvm
  |   |-pve-vm--501--disk--2    253:8    0   70G  0 lvm
  |   |-pve-vm--502--disk--0    253:9    0   50G  0 lvm
  |   |-pve-vm--502--cloudinit  253:10   0    4M  0 lvm
  |   |-pve-vm--201--cloudinit  253:11   0    4M  0 lvm
  |   |-pve-vm--503--disk--0    253:12   0   40G  0 lvm
  |   |-pve-vm--9000--cloudinit 253:13   0    4M  0 lvm
  |   |-pve-vm--602--cloudinit  253:14   0    4M  0 lvm
  |   |-pve-vm--602--disk--0    253:15   0  100G  0 lvm
  |   |-pve-vm--702--disk--0    253:16   0   40G  0 lvm
  |   |-pve-vm--702--cloudinit  253:17   0    4M  0 lvm
  |   |-pve-vm--702--disk--1    253:18   0  300G  0 lvm
  |   |-pve-vm--801--cloudinit  253:19   0    4M  0 lvm
  |   |-pve-vm--803--cloudinit  253:20   0    4M  0 lvm
  |   |-pve-vm--801--disk--0    253:21   0  100G  0 lvm
  |   |-pve-vm--803--disk--0    253:22   0  100G  0 lvm
  |   `-pve-vm--503--cloudinit  253:23   0    4M  0 lvm
  `-pve-data_tdata              253:3    0  1.6T  0 lvm
    `-pve-data-tpool            253:4    0  1.6T  0 lvm
      |-pve-data                253:5    0  1.6T  1 lvm
      |-pve-vm--501--disk--0    253:6    0    4M  0 lvm
      |-pve-vm--501--disk--1    253:7    0    4M  0 lvm
      |-pve-vm--501--disk--2    253:8    0   70G  0 lvm
      |-pve-vm--502--disk--0    253:9    0   50G  0 lvm
      |-pve-vm--502--cloudinit  253:10   0    4M  0 lvm
      |-pve-vm--201--cloudinit  253:11   0    4M  0 lvm
      |-pve-vm--503--disk--0    253:12   0   40G  0 lvm
      |-pve-vm--9000--cloudinit 253:13   0    4M  0 lvm
      |-pve-vm--602--cloudinit  253:14   0    4M  0 lvm
      |-pve-vm--602--disk--0    253:15   0  100G  0 lvm
      |-pve-vm--702--disk--0    253:16   0   40G  0 lvm
      |-pve-vm--702--cloudinit  253:17   0    4M  0 lvm
      |-pve-vm--702--disk--1    253:18   0  300G  0 lvm
      |-pve-vm--801--cloudinit  253:19   0    4M  0 lvm
      |-pve-vm--803--cloudinit  253:20   0    4M  0 lvm
      |-pve-vm--801--disk--0    253:21   0  100G  0 lvm
      |-pve-vm--803--disk--0    253:22   0  100G  0 lvm
      `-pve-vm--503--cloudinit  253:23   0    4M  0 lvm
root@pve-node1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   63G     0   63G   0% /dev
tmpfs                  13G  2.2M   13G   1% /run
/dev/mapper/pve-root   96G   15G   82G  15% /
tmpfs                  63G  8.1M   63G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sda2            1022M  356K 1022M   1% /boot/efi
tmpfs                  13G     0   13G   0% /run/user/0
root@pve-node1:~# free
               total        used        free      shared  buff/cache   available
Mem:       131296092     2091744   129691044       13520      282364   129204348
Swap:        8388604           0     8388604
root@pve-node1:~#
 

Since you also included /etc/hosts, that pretty much explains what's been going wrong, just not why. Debian usually uses 127.0.1.1 to map the system's name to localhost in that peculiar way. PVE, however, expects an entry there with the static IP (172.16.100.31 in this case) that matches the name in /etc/hostname (which is presumably correct, judging by your prompt). So if you correct the entry to the routable IP and the actual name, your problem should be gone.
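For illustration, a corrected /etc/hosts on pve-node1 might look like this (IP and name taken from the outputs above; the pve-node1.local FQDN alias is an assumption - use whatever domain the node was actually installed with):

```
127.0.0.1 localhost
172.16.100.31 pve-node1.local pve-node1

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```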

The mystery that remains is why this happened.

EDIT: Hang on a second - a normal PVE install does not have manage_etc_hosts set up like this. Is this something you configured on purpose? Is this a VPS?
 
Since you also included /etc/hosts, that pretty much explains what's been going wrong, just not why. Debian usually uses 127.0.1.1 to map the system's name to localhost in that peculiar way. PVE, however, expects an entry there with the static IP (172.16.100.31 in this case) that matches the name in /etc/hostname (which is presumably correct, judging by your prompt). So if you correct the entry to the routable IP and the actual name, your problem should be gone.

The mystery that remains is why this happened.

EDIT: Hang on a second - a normal PVE install does not have manage_etc_hosts set up like this. Is this something you configured on purpose? Is this a VPS?
Sorry for giving you wrong info. (Yesterday I edited the hostname using hostnamectl; it was just VM9000 before.)
I hadn't configured manage_etc_hosts or anything related. The only thing I installed was cloud-init, for the VM.
 
Sorry for giving you wrong info. (Yesterday I edited the hostname using hostnamectl; it was just VM9000 before.)

That's alright, but the entry in /etc/hosts is wrong - it needs the routable IP, not 127.0.1.1, and the name needs to be the correct one.

I hadn't configured manage_etc_hosts or anything related. The only thing I installed was cloud-init, for the VM.

I suspect you executed that command on the host instead. Since you have corosync there, I won't entertain the possibility that you were connected to the wrong machine when taking those outputs. Check whether cloud-init is installed on the host and remove it, then fix the file.
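A quick way to check, sketched for a Debian-based host (this looks for the cloud-init binary rather than the package, which is a simplification):

```shell
# Report whether cloud-init is present on this machine
if command -v cloud-init >/dev/null 2>&1; then
    echo "cloud-init present: $(command -v cloud-init)"
else
    echo "cloud-init not found"
fi
```

If it does turn up on a PVE node, purging it (e.g. apt-get purge cloud-init) stops it from rewriting /etc/hosts at the next boot.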
 
That's alright, but the entry in /etc/hosts is wrong - it needs the routable IP, not 127.0.1.1, and the name needs to be the correct one.



I suspect you executed that command on the host instead. Since you have corosync there, I won't entertain the possibility that you were connected to the wrong machine when taking those outputs. Check whether cloud-init is installed on the host and remove it, then fix the file.
I followed these steps: https://pve.proxmox.com/wiki/Cloud-Init_Support
It looks like it has to be installed on the PVE host, no?
 
This command is not intended to be executed on the Proxmox VE host, but only inside the VM.
It is mentioned right after the apt-get install cloud-init command :) . Remove it from the host, make sure the hostname in /etc/hosts and /etc/hostname is correct, and then reboot.
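The /etc/hosts part of the fix can be sketched like this; it is demonstrated on a scratch copy so it can be tried safely (IP and hostname are taken from the thread - on the real node you would edit /etc/hosts itself and then reboot):

```shell
# Recreate the broken file that cloud-init's hosts template produced
printf '127.0.1.1 VM9000 VM9000\n127.0.0.1 localhost\n' > /tmp/hosts.demo

# Replace the 127.0.1.1 line with the node's static IP and real name
sed -i 's/^127\.0\.1\.1.*/172.16.100.31 pve-node1/' /tmp/hosts.demo

cat /tmp/hosts.demo
```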
 
*FACEPALM* *SLAPPING FOREHEAD*
Thanks, everyone, for helping out these past few days. I really should have read the docs.

Have a great weekend!
 
It is mentioned right after the apt-get install cloud-init command :)
Could the docs perhaps underline it in the sentence prior to the command, like so:
install the Cloud-Init packages inside the VM

Remove it from the host and make sure the hostname in /etc/hosts and /etc/hostname are correct and then do a reboot.
@yondkoo Also make sure the static IP is there, not 127.0.1.1.
 
