Strange Proxmox Errors

Art2motion

New Member
Jun 23, 2023
Hi all,

Proxmox was working fine on IP 10.0.0.200.
Then I lost connections to the VMs and could not connect to the host.
The GUI is also down.

When I logged into the local hardware, the IP had changed to 10.0.0.38 (I had to edit the hosts file back to 10.0.0.200).
<cat /etc/hosts>
Code:
127.0.0.1 localhost.localdomain localhost
10.0.0.200  hostname.doman.com hostname

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
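For pve-cluster to start cleanly, the node's hostname must resolve via /etc/hosts to its real IP, not a loopback address. A minimal sketch of checking that mapping; the `hosts_lookup` helper and the sample file are illustrations, not part of PVE:

```shell
# Hypothetical helper: print the address a hosts file maps a hostname to.
hosts_lookup() {
  # $1 = hosts file, $2 = hostname; skip comment lines, scan name fields
  awk -v h="$2" '$0 !~ /^[[:space:]]*#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$1"
}

# Demo on a sample file mirroring the entry above.
printf '127.0.0.1 localhost\n10.0.0.200 hostname.domain.com hostname\n' > /tmp/hosts.sample
hosts_lookup /tmp/hosts.sample hostname   # should print 10.0.0.200, not 127.0.1.1
```

On a live node, `getent hosts "$(hostname)"` performs the same check against the real resolver configuration.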

Had a look on the forums and saw that most threads request the following info to assist:
Could someone point me in the right direction? Thank you so much.

Code:
pveversion -v
systemctl status pveproxy
systemctl status pvedaemon

< pveversion -v >
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

< systemctl status pveproxy >

Code:
● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2023-06-23 15:51:36 SAST; 34min ago
    Process: 853 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=111)
    Process: 854 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
   Main PID: 855 (pveproxy)
      Tasks: 4 (limit: 19059)
     Memory: 150.8M
        CPU: 1min 18.862s
     CGroup: /system.slice/pveproxy.service
             ├─ 855 pveproxy
             ├─2618 pveproxy worker
             ├─2619 pveproxy worker
             └─2620 pveproxy worker

Jun 23 16:26:18 hostname pveproxy[2618]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1904.
Jun 23 16:26:18 hostname pveproxy[2323]: worker exit
Jun 23 16:26:18 hostname pveproxy[2324]: worker exit
Jun 23 16:26:18 hostname pveproxy[855]: worker 2323 finished
Jun 23 16:26:18 hostname pveproxy[855]: worker 2324 finished
Jun 23 16:26:18 hostname pveproxy[855]: starting 2 worker(s)
Jun 23 16:26:18 hostname pveproxy[855]: worker 2619 started
Jun 23 16:26:18 hostname pveproxy[855]: worker 2620 started
Jun 23 16:26:18 hostname pveproxy[2619]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1904.
Jun 23 16:26:18 hostname pveproxy[2620]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1904.
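The repeated "failed to load local private key" errors usually mean pveproxy cannot read /etc/pve/local/pve-ssl.key, typically because the cluster filesystem (pmxcfs, provided by pve-cluster) is not mounted. A hedged recovery sketch using standard PVE commands; run on the node itself:

```
# Check whether the cluster filesystem is up; if pve-cluster failed,
# /etc/pve will be empty and the SSL key unreadable.
systemctl status pve-cluster
ls -l /etc/pve/local/

# Once pve-cluster is running again, regenerate the node certificates
# and restart the API services:
pvecm updatecerts -f
systemctl restart pvedaemon pveproxy
```

Note that the `ExecStartPre=/usr/bin/pvecm updatecerts` step above already exited with status 111, which points the same way: the certificate update could not reach /etc/pve.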

< systemctl status pvedaemon >
Code:
● pvedaemon.service - PVE API Daemon
     Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2023-06-23 15:51:34 SAST; 39min ago
    Process: 837 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
   Main PID: 848 (pvedaemon)
      Tasks: 4 (limit: 19059)
     Memory: 129.7M
        CPU: 1.946s
     CGroup: /system.slice/pvedaemon.service
             ├─848 pvedaemon
             ├─849 pvedaemon worker
             ├─850 pvedaemon worker
             └─851 pvedaemon worker

Jun 23 15:51:31 hostname systemd[1]: Starting PVE API Daemon...
Jun 23 15:51:34 hostname pvedaemon[848]: starting server
Jun 23 15:51:34 hostname pvedaemon[848]: starting 3 worker(s)
Jun 23 15:51:34 hostname pvedaemon[848]: worker 849 started
Jun 23 15:51:34 hostname pvedaemon[848]: worker 850 started
Jun 23 15:51:34 hostname pvedaemon[848]: worker 851 started
Jun 23 15:51:34 hostname systemd[1]: Started PVE API Daemon.

Just for kicks
 
The PVE network configuration is in /etc/network/interfaces. Can you share the contents of that file? I don't see why it would change IP address unless it is set to DHCP.
Thanks for the quick reply. It was always set to manual. It might only have changed in the hosts file; it only showed up on the boot screen at first startup.

Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.200/24
        gateway 10.0.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.200/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
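That interfaces file looks correct for a static 10.0.0.200. If it ever needs re-applying after an edit, ifupdown2 (listed in the pveversion output above) can reload it without a reboot; a sketch, to be run on the node:

```
# Re-apply /etc/network/interfaces with ifupdown2, then verify the bridge.
ifreload -a
ip addr show vmbr0   # vmbr0 should carry 10.0.0.200/24 again
```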
 
Hmm...
My issue seems to come from the /etc/hosts file.
The node name appeared too many times on the 127.0.1.1 line.
I commented it out and successfully restarted pve-cluster.
But when I reboot the server, a new line with the same erroneous values gets added:
Code:
cat  /etc/hosts
127.0.0.1       localhost
::1     localhost       ip6-localhost   ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

A.B.C.D  ovh11
172.16.100.10   ovh10
172.16.100.11   ovh11
172.16.100.12   ovh12
127.0.1.1        ovh11 ovh11 # This line causes the service to crash at boot. If commented out, it is re-added at boot.
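One way to clear the stray entry before pve-cluster starts is to delete the line outright. A sketch on a sample copy of the file (the sed pattern is an assumption about the exact line format; on a real node the target is /etc/hosts itself):

```shell
# Demo on a sample file mirroring the output above.
printf '127.0.0.1 localhost\n127.0.1.1 ovh11 ovh11\n172.16.100.11 ovh11\n' > /tmp/hosts.sample

# Drop any line that assigns the node name to 127.0.1.1.
sed -i '/^127\.0\.1\.1[[:space:]]/d' /tmp/hosts.sample
cat /tmp/hosts.sample
```

This only treats the symptom; as the rest of the thread shows, something re-adds the line on every boot, so the writer has to be found and disabled too.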
 
Are you sure that is not an OVH custom boot script? I'm not aware of any Proxmox service adding 127.0.1.1 to /etc/hosts.
 
Found in /usr/share/perl5/PVE/LXC/Setup/Base.pm:
it explicitly adds 127.0.1.1 to /etc/hosts files.
If you have already set the host name elsewhere, it will cause pve-cluster not to start.
 
My bad, that code concerns the LXC (container) part.
 
I have this problem on an OVH server.
After the update, the 127.0.1.1 line reappears every time I reboot.
The problem comes from the OVH Proxmox template, which integrates cloud-init.
When upgrading with pve7to8, if you install the conf file associated with the cloud-init upgrade (/etc/cloud/cloud.cfg), it modifies the hosts file.
There have already been topics about this problem:
https://forum.proxmox.com/threads/ovh-proxmox-6-no-web-gui-access.70644/
https://forum.proxmox.com/threads/cloud-init-and-hosts-files.67372/

3 possible solutions:
- Disable cloud-init:
  touch /etc/cloud/cloud-init.disabled
- Comment out the update_etc_hosts line in /etc/cloud/cloud.cfg
- Restore the old conf that was kept during the upgrade (before deleting, check that cloud.cfg.dpkg-old exists in /etc/cloud):
  rm /etc/cloud/cloud.cfg; mv /etc/cloud/cloud.cfg.dpkg-old /etc/cloud/cloud.cfg
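The second option can be scripted. A sketch that comments out the update_etc_hosts module, demonstrated on a sample fragment (the sample layout is an assumption about cloud.cfg's module list; on a real node the target is /etc/cloud/cloud.cfg):

```shell
# Demo on a sample fragment of a cloud-init module list.
printf ' - update_etc_hosts\n - ssh\n' > /tmp/cloud.cfg.sample

# Comment out the module that rewrites /etc/hosts on every boot.
sed -i 's/^\([[:space:]]*- update_etc_hosts\)/#\1/' /tmp/cloud.cfg.sample
cat /tmp/cloud.cfg.sample
```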
 
Which kind of server model? Is it a VPS?
 
Just to mention it explicitly: usually cloud-init is not supposed to be installed on the PVE server itself; you install it inside guests to configure them with the settings from their VM config.

(I'm not sure if OVH has something with cloud-init provisioning for their servers, but that would definitely be external to PVE.)

So in case someone installed cloud-init on their node by mistake: consider removing it!
 
Which kind of server model? Is it a VPS?
No, it's a bare-metal server (Kimsufi).
OVH seems to use cloud-init to automate the installation of PVE (and certainly all the other installable Linux distributions they offer as templates).
For example, the hosts file is automatically filled with the server hostname and IP.
OVH templates allow OS deployment without customer intervention.
 
