Why doesn't PVE populate my /etc/hosts (with other cluster node names)?

Seriously: the IPs are static, the installer makes specific changes to /etc/hosts, and when nodes are added they are taken into corosync.conf under both name and ringX_addr. So why isn't a line appended to every node's /etc/hosts, to provide sensible name resolution without any dependency on DNS?
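
To illustrate what I mean (the hostnames and addresses below are made up), every node already ends up in corosync.conf like this, so the matching /etc/hosts lines could simply be derived from it:

Code:
# corosync.conf nodelist entry (example values)
node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
}

# what could be appended to every node's /etc/hosts
10.10.10.1 pve1
10.10.10.2 pve2
10.10.10.3 pve3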
 
Good idea!

Nice that you bring it up, yet in the end: please just create a feature request and link it here so that we (or at least I) can watch it ;)
 
I will, but I wanted to solicit (from the user base) any "we do not even want that, because ..." objections, e.g. that it would clash with how we are already doing it ... I am filing too many reports for my own liking lately as it is.
 
I will, but I wanted to solicit (from the user base) any "we do not even want that, because ..." objections, e.g. that it would clash with how we are already doing it ... I am filing too many reports for my own liking lately as it is.
You have my vote! We do this all the time with all the (different) cluster systems we take care of, including a small ASCII-based (not everything supports UTF-8) schematic in /etc/hosts.
 
You have my vote! We do this all the time with all the (different) cluster systems we take care of, including a small ASCII-based (not everything supports UTF-8) schematic in /etc/hosts.
You may want to brainstorm that method here?

Because I also have additional points in mind (before filing it), e.g. to also provide entries for the migration network, etc.
 
You may want to brainstorm that method here?
Depending on the setup, we also add a small server schematic with "plugged" ports, so that the remote support staff is able to tell the people at the DC "what goes where". This is part of a 1U (1 HE) pizza box:

Code:
#                                              bond0
#                                                |
#                                           +----+----+
#                                           |         |
#  +-+-------------+-+------------+-----+---+---+-+---+---+----+----+---+--------+-+-------+----------------------------------------------+
#  | +-------------+ +------------+     +-------+ +-------+    |    |   +--------+ +-------+                                              |
#  | |             | |            |     | enp1  | | enp1  |    |    |   |  FC 1  | |  FC 2 |                +------+ +------+ +------+    |
#  | |             | |            |     | s0f0  | | s0f1  |    |    |   +--------+ +-------+                |      | |      | |      |    |
#  | |     PSU     | |    PSU     |     +------++ ++------+    |    |                       +---------+     | IRMC | | eno1 | | eno2 |    |
#  | |             | |            |     +------+   +------+    |    |                       |   VGA   |     |      | |      | |      |    |
#  | +-------------+ +------------+     | USB  |   | USB  |    +----+                       +---------+     +------+ +------+ +------+    |
#  +-+-------------+-+------------+-----+------+---+------+----+----+-----------------------+---------+-----+---+--+-+------+-+------+----+
#                                                                                                          iRMC |

Because I also have additional points in mind (before filing it), e.g. to also provide entries for the migration network, etc.
Yes, migration network(s) / interconnect networks are also part of our /etc/hosts, yet that's all - just "internal" PVE stuff.
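
Purely as an illustration of the idea (hostnames and subnets below are invented, not copied from a real deployment), such a file could be laid out along these lines:

Code:
# management / "main" addresses
192.0.2.11   pve1
192.0.2.12   pve2
192.0.2.13   pve3
# corosync / cluster interconnect
10.0.0.11    pve1-corosync
10.0.0.12    pve2-corosync
10.0.0.13    pve3-corosync
# migration network
10.0.1.11    pve1-migration
10.0.1.12    pve2-migration
10.0.1.13    pve3-migration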
 
Depending on the setup, we also add a small server schematic with "plugged" ports, so that the remote support staff is able to tell the people at the DC "what goes where". This is part of a 1U (1 HE) pizza box:

I see :D I can't possibly file a feature request for this. Maybe on April 1 though ;)

Yes, migration network(s) / interconnect networks are also part of our /etc/hosts, yet that's all - just "internal" PVE stuff.
I am aware of at least one system that does not like (ignores altogether) a symlinked /etc/hosts, and I am also aware the PVE core team loves symlinks. So I want to figure out how to do it best, before it gets turned down immediately as a non-starter.
 
I am aware of at least one system that does not like (ignores altogether) a symlinked /etc/hosts, and I am also aware the PVE core team loves symlinks. So I want to figure out how to do it best, before it gets turned down immediately as a non-starter.
Yes, I also think that the logic needs to add/remove the host entries in each node's file. Some logic is already present in the LX(C) container part, where the container name is automatically added to its /etc/hosts.
 
(I will be filing this after the weekend, just letting everyone know it's not been forgotten ... and bumping this up before the weekend if there are any more comments.)
 
this IP might not be as static as you think it is - by default we write the one that was configured in the installer, but that can later be changed by the admin (or even not encoded at all in /etc/hosts, it just needs to resolve!). the corosync link addresses are completely independent in any case.
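
(a quick way to see what the local hostname currently resolves to, independent of whether the answer comes from /etc/hosts or DNS, is getent:)

Code:
getent hosts "$(hostname)"
# or, to list every address getaddrinfo would return:
getent ahosts "$(hostname)"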

pmxcfs resolves the local hostname once at startup, and then broadcasts this over corosync, so we could theoretically have a segment in /etc/hosts that gets filled with this broadcasted information (and updated on changes). it might be confusing/surprising though for setups where DNS is configured properly, since it would effectively duplicate potentially outdated information.
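
(purely as a sketch of what such a managed segment could look like - the markers, names and the update snippet below are made up for illustration, this is not how any existing tooling does it:)

Code:
# --- BEGIN PVE CLUSTER HOSTS (managed section, do not edit) ---
10.10.10.1 pve1
10.10.10.2 pve2
10.10.10.3 pve3
# --- END PVE CLUSTER HOSTS ---

# an update step could then replace just that section, e.g.
# (simplified; a real implementation would rewrite the file atomically)
sed -i '/BEGIN PVE CLUSTER HOSTS/,/END PVE CLUSTER HOSTS/d' /etc/hosts
cat cluster-hosts-segment >> /etc/hosts   # freshly generated section as above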
 
this IP might not be as static as you think it is - by default we write the one that was configured in the installer, but that can later be changed by the admin (or even not encoded at all in /etc/hosts, it just needs to resolve!). the corosync link addresses are completely independent in any case.

i suppose that's why both the name and ringX_addr are used there, and theoretically they could be a complete mismatch with what the machine thinks it is

pmxcfs resolves the local hostname once at startup, and then broadcasts this over corosync

i see!

, so we could theoretically have a segment in /etc/hosts that gets filled with this broadcasted information (and updated on changes). it might be confusing/surprising though for setups where DNS is configured properly

this is why i have been holding off with the filing, i had this going through my head because admittedly I also have DNS set up and it would keep clashing :)

, since it would effectively duplicate potentially outdated information.

yeah I am stuck on this part for now (even though it looks like I enjoy filing reports, I actually take my time before I think something is feasible ;))

Thanks for the replies!
 
i suppose that's why both the name and ringX_addr are used there, and theoretically they could be a complete mismatch with what the machine thinks it is
without an explicit address, corosync even does some weird manual resolution depending on whether corosync.conf tells it to prefer ipv4 or ipv6, so you might get different results based on that alone ;) but yes, we always explicitly use an address there, which in many cases will not be the one that the hostname resolves to. some admins use a separate name for these addresses (either via DNS or /etc/hosts) to still make them resolvable though. similarly, there might be other addresses that are not related to the "main" record: migration or storage networks, public IPs, VPN endpoints, ..
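
(for reference, the preference in question is the ip_version option in the totem section, while the explicit per-link addresses live in the nodelist - the node name and addresses below are just examples:)

Code:
totem {
    ip_version: ipv4-6
}

nodelist {
    node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.0.0.11   # explicit address, no name resolution involved
    }
}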
 
without an explicit address, corosync even does some weird manual resolution depending on whether corosync.conf tells it to prefer ipv4 or ipv6, so you might get different results based on that alone ;)

i didn't test that, but i was generally wondering why that duplication exists ... now I see how it's init'd, still I would prefer to have some sort of authoritative source (not necessarily /etc/hosts)

but yes, we always explicitly use an address there, which in many cases will not be the one that the hostname resolves to. some admins use a separate name for these addresses (either via DNS or /etc/hosts) to still make them resolvable though.

i understand there will always be cases where "a machine goes by multiple names" (shall I call them, ahem, aliases?), or it has multiple NICs and corosync (ideally) is on an entirely different subnet, or a separate VLAN at the very least, which makes any one particular alias not necessarily what everyone (on a fresh install) expects

similarly, there might be other addresses that are not related to the "main" record: migration or storage networks, public IPs, VPN endpoints, ..

but I was still thinking there should be some canonical name that e.g. SSH (as in a manual connection) will connect to no matter what, because that's built into the cluster logic (note I am NOT suggesting it should be used by the pve-* .pm's; I actually like that they use explicit IPs, and that those are included as SANs in the SSL certs)
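
(to see what actually ends up as SANs on a node certificate, something like the following works - assuming the default pve-ssl.pem location under /etc/pve/nodes/:)

Code:
openssl x509 -in /etc/pve/nodes/"$(hostname)"/pve-ssl.pem -noout -text \
  | grep -A1 "Subject Alternative Name"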
 
I am just back here to revisit this, not least because I never filed the intended request, so that deserves an explanation.

In the long run, I found it counter-productive to sync / populate something (/etc/hosts) that should be long obsolete in the 2020s. I figured from the changelog [1] that 8.0.2 fixed "pmxcfs: check all addresses from getaddrinfo to find non-loopback one", accepting the reality of where the world has been moving for a while.

Taking advantage of knowing what the actual requirements are [2], I eventually came around to proper DHCP deployments and compiled it as a "tutorial" [3] for anyone's benefit.

I am not soliciting any sort of reaction (approval or lack thereof), I just wanted to give this train of thought of mine, going back to when I started experimenting with PVE, some conclusion.

Thanks to @fabian for providing factual & reasoned feedback on the matter here. I believe I have at least spared the BZ one anachronistic request this way. :)

[1] https://github.com/proxmox/pve-cluster/blob/master/debian/changelog
[2] https://forum.proxmox.com/threads/dhcp-pve-install-on-top-of-debian-not-well-documented.154467/
[3] https://forum.proxmox.com/threads/dhcp-cluster-deployment-no-static-ips.154780/
 
