pveproxy: failed to get address info

Fantomas

Renowned Member
Aug 7, 2014
9
0
66
Hello everyone,

I have a cluster consisting of two machines running the current PVE release.
After upgrading MegaRAID Storage Manager to the current version, I can no longer connect to one of the machines, because pveproxy no longer starts.
Uninstalling all MSM components doesn't make a difference.

systemctl status pveproxy.service shows this output:
Code:
● pveproxy.service - PVE API Proxy Server
   Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled)
   Active: failed (Result: exit-code) since Sun 2017-03-05 18:14:55 CET; 12min ago
  Process: 1985 ExecStart=/usr/bin/pveproxy start (code=exited, status=255)

Mar 05 18:14:55 vmhost2 pveproxy[1985]: start failed - failed to get address info for: vmhost2: System error
Mar 05 18:14:55 vmhost2 pveproxy[1985]: start failed - failed to get address info for: vmhost2: System error
Mar 05 18:14:55 vmhost2 systemd[1]: pveproxy.service: control process exited, code=exited status=255
Mar 05 18:14:55 vmhost2 systemd[1]: Failed to start PVE API Proxy Server.
Mar 05 18:14:55 vmhost2 systemd[1]: Unit pveproxy.service entered failed state.

Threads that mention the same problem didn't help me fix my issue. Is there a way to debug the failing start of pveproxy.service?

Thanks and best
Jan
 
Hi,

this means your pveproxy can't resolve the IP address from /etc/hosts.
Or the IP is not reachable.

Make sure /etc/hosts has 'pvelocalhost' on the host's line.
The proxy needs this to determine its address.
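For reference, a minimal /etc/hosts for a PVE node could look like this (the IP and hostnames are placeholders; use your node's real values):

```
127.0.0.1       localhost.localdomain localhost
192.168.1.2     vmhost2.example.com vmhost2 pvelocalhost
```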
 
Is the NIC with this address up?
 
The machine has a network configuration identical to the second machine's, and I also checked /etc/network/interfaces.
Since I can SSH to it, I don't suspect a network issue...

*EDIT*

I also found out that I am unable to ping the local machine or any other machine on the network by hostname.
Nslookup works, and pinging by IP also works properly.
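(A note for anyone debugging this: nslookup queries the DNS server directly, while ping and pveproxy resolve names through NSS, so the two can disagree. A quick way to exercise the NSS path with standard glibc tools:)

```shell
# Show the NSS resolution order for hostnames (e.g. "hosts: files dns")
grep '^hosts:' /etc/nsswitch.conf
# Resolve a name through NSS -- the same path ping and pveproxy use
getent hosts localhost
```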

*EDIT*

After installing and starting nscd on the affected machine, I can ping via hostname again.
However, trying to start pveproxy still fails:

Code:
Mar 06 10:59:29 vmhost2 systemd[1]: Starting PVE API Proxy Server...
-- Subject: Unit pveproxy.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit pveproxy.service has begun starting up.
Mar 06 10:59:30 vmhost2 pveproxy[3635]: start failed - Unrecognised protocol tcp at /usr/share/perl5/PVE/Daemon.pm line 834.
Mar 06 10:59:30 vmhost2 pveproxy[3635]: start failed - Unrecognised protocol tcp at /usr/share/perl5/PVE/Daemon.pm line 834.
Mar 06 10:59:30 vmhost2 systemd[1]: pveproxy.service: control process exited, code=exited status=255
Mar 06 10:59:30 vmhost2 systemd[1]: Failed to start PVE API Proxy Server.
-- Subject: Unit pveproxy.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit pveproxy.service has failed.
--
-- The result is failed.
Mar 06 10:59:30 vmhost2 systemd[1]: Unit pveproxy.service entered failed state.
 
As this is a production system that is not operating properly at the moment, I will wait until this afternoon before reinstalling the PVE cluster node.
Any help still appreciated.
Thanks
 
Mar 06 10:59:30 vmhost2 pveproxy[3635]: start failed - Unrecognised protocol tcp at /usr/share/perl5/PVE/Daemon.pm line 834.

For posterity: I stumbled upon the same issue today, and here is the solution.

For some reason, after a broken (?) upgrade or some other operation, the permissions on my /etc directory were 0750 instead of 0755. The Perl process runs as user www-data, so it cannot read /etc/protocols and other required files.
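(A hint for readers puzzled by the "Unrecognised protocol tcp" message: it comes from the getprotobyname(3) lookup, which reads /etc/protocols; when www-data cannot traverse /etc, that lookup fails. You can reproduce the lookup itself with:)

```shell
# Look up the "tcp" entry via the same C-library database pveproxy uses;
# on a healthy system this prints a line containing "tcp" and protocol number 6
getent protocols tcp
```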

Fix:
chmod 0755 /etc

And if needed:
chown root:root /etc
 
I want to add that I had to fix the permissions on other directories as well!
I'll leave a screenshot of the permissions of the root-level directories for those who have only one host and no quick way to get the correct list.

[Attachment: 1747381307107.png — permissions of the root-level directories]