PVE nodes with DHCP-assigned IPs and hostnames

Having just followed the instructions to install PVE on top of Debian, I realised the node relies on a static IP - it was kind of obvious during the ISO install, but with the APT install the node's hostname clearly has to go into /etc/hosts with a static IP - why is this, and does it have to be hardcoded for each node?
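For reference, this is roughly the kind of entry the installation guide means - the hostname "pve1", the domain and the addresses below are just placeholders I made up:

Code:
# Debian installer default for a node called pve1:
127.0.1.1       pve1.example.com pve1

# what the PVE install on top expects instead - the node's real, static address:
192.168.1.10    pve1.example.com pve1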

Before I figure out how to bypass this - I would like DHCP to hand out IPs based on MACs and also hand out node names via DHCP option 12 - has anyone already tackled this? Or am I missing some very important reason why PVE is all static? This surely does not scale well.

Thank you.
 
Once you start clustering PVE, you cannot easily change IP addresses afterwards. This may be why PVE starts with a static network configuration.
Thanks for the reply, but I am afraid that cannot really be the reason.

1) Only the static IP of the sole node being configured has to go into /etc/hosts, not those of all the nodes in the cluster. Even after joining more nodes, each node's hosts file only keeps a reference to its own IP.

2) The cluster setup lives in corosync.conf, where all these IPs are (I assume) copied over at the initial setup of the cluster - there is an excerpt after this list. I do not see any issue if, say, one node dropped off the cluster and then reappeared with a different IP (assume a very low DHCP lease time). It would still be able to find the other nodes, either at their remembered IPs or - that is also in the conf file - by hostname.

3) Going full circle here: the PVE docs say that the /etc/hosts entry is essential, literally disregarding whatever nameservers are in resolv.conf. It is all a very strange approach, and there is no explanation of the rationale anywhere in the docs.

NB Other clustering solutions handle DHCP just fine - as long as the nodes can find each other in some way to establish membership, there is a quorum and a live cluster.

NB2 I really do not understand the point of all the hostnames, SSH known_hosts entries and certificates when moving a cluster to a different network then becomes such a chore.
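To illustrate point 2, this is roughly what the nodelist in /etc/pve/corosync.conf ends up looking like after joining a second node - the node names and addresses below are made up, and I am leaving out the totem and quorum sections:

Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.10
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.11
  }
}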
 
The closest post on this topic that I found is here:
https://forum.proxmox.com/threads/corosync-configuration.55595/

It is definitely possible to use hostnames in the cluster conf file; the reply even suggests it is all fine as long as each name resolves to the same IP of that node on all nodes in the cluster. That would imply the entry in the hosts file is not strictly necessary then. The sad part is the missing documentation on this, so I will have to go by trial and error.

Also, corosync supports multiple rings over which nodes can connect to each other, so it would still be possible to keep a static "backup" ring in case something went terribly wrong with DHCP / DNS.
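Something along these lines is what I have in mind for a node entry - ring0 using a DNS name that follows the DHCP lease, ring1 on a small static subnet as the fallback. The name, the addresses and the whole second link are my own assumptions here, not anything taken from the docs:

Code:
node {
  name: pve1
  nodeid: 1
  quorum_votes: 1
  ring0_addr: pve1.lab.example.com
  ring1_addr: 10.10.10.1
}

The point of ring1 would be that the cluster keeps talking even if name resolution or the DHCP scope breaks.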
 
No one uses PVE with DHCP-assigned IPs for the nodes?
 
I'm using dhcp on one proxmox node (without clustering). Mainly because I need IPv6 on that node and ifupdown2 doesn't seem to support a mix of static IPv4 and dynamic IPv6. This node is also based on Debian 12 and I just took the default (automatic) network config during install. I just adapted /etc/hosts afterwards.
No issues so far. As said, no clustering.

EDIT:
You can't manage the network config for that node through the web UI with such a setup. All changes must be made directly in /etc/network/interfaces.
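In case it helps, a hand-edited /etc/network/interfaces for that kind of setup could look roughly like this - the NIC name eno1, the vmbr0 bridge and the choice of SLAAC (inet6 auto) are placeholders and guesses rather than my exact file:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
# IPv4 comes from the DHCP server (ideally a static reservation by MAC)
iface vmbr0 inet dhcp
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# dynamic IPv6 via router advertisements; "inet6 dhcp" would be the DHCPv6 variant
iface vmbr0 inet6 auto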
 
I'm using dhcp on one proxmox node (without clustering). Mainly because I need IPv6 on that node and ifupdown2 doesn't seem to support a mix of static IPv4 and dynamic IPv6. This node is also based on Debian 12 and I just took the default (automatic) network config during install. I just adapted /etc/hosts afterwards.
No issues so far. As said, no clustering.
Thanks! :) I actually worry more about the clustering part. I also did a Debian 12 install with DHCP, then put PVE on top - no issues. In fact I think I even manually changed /etc/network/interfaces on an ISO install and it worked ... with a single node. :D The moment I started clustering it ... the issue is that PVE apparently expects that static entry in /etc/hosts for itself.

PS Cool nickname :D
 
Right, so it's the IPv4 in the hosts file.
I think in that sense you are using a static IP: even if you had a cluster, the nodes would all basically be on statically assigned, locally routable IPv4 addresses. The extra address (IPv6 in this case) is used for outside access only; whether the IPv4 subnet is routable to the internet (e.g. behind NAT, CGNAT, etc.) does not matter to PVE.

I basically want to find out why Proxmox needs that IP in /etc/hosts and why it can't keep the original 127.0.0.1 (or 127.0.1.1 in Debian, not sure). It has to do with corosync for clustering.
 
ifupdown2 doesn't seem to support a mix of static IPv4 and dynamic IPv6
Sorry for my scattered replies, but it just clicked for me - I might need this myself later. You are saying that you need DHCP for that IPv4 even though it is effectively static (possibly a static DHCP reservation), because ifupdown2 would otherwise not work with the dynamic IPv6 alongside a static IPv4 - is this the case when you get IPv6 via SLAAC or DHCPv6?
 
Sorry for my scattered replies, but it just clicked for me - I might need this myself later. You are saying that you need DHCP for that IPv4 even though it is effectively static (possibly a static DHCP reservation), because ifupdown2 would otherwise not work with the dynamic IPv6 alongside a static IPv4 - is this the case when you get IPv6 via SLAAC or DHCPv6?
Exactly.
 
OK, so I dug some more and found e.g. this personal blog, where they simply added a dhclient hook on BOUND and RENEW.

https://weblog.lkiesow.de/20220223-proxmox-test-machine-self-servic/proxmox-server-dhcp.html

It was exactly what I was going to ask about next: whether I can get away with just repopulating /etc/hosts whenever the IP changes.
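Something in that spirit, dropped into /etc/dhcp/dhclient-exit-hooks.d/, might look like the sketch below. This is my own rough version, not the script from that blog, and the hostname/sed handling is an assumption:

Code:
# /etc/dhcp/dhclient-exit-hooks.d/update-etc-hosts
# Sourced (not executed) by dhclient-script; $reason and $new_ip_address
# are set by dhclient for each lease event.
case "$reason" in
  BOUND|RENEW|REBIND|REBOOT)
    host_short="$(hostname)"
    host_fqdn="$(hostname -f)"
    # drop any existing line ending in our short hostname,
    # then re-add it pointing at the freshly leased address
    sed -i "/[[:space:]]${host_short}\$/d" /etc/hosts
    printf '%s %s %s\n' "$new_ip_address" "$host_fqdn" "$host_short" >> /etc/hosts
    ;;
esac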

The issue for me is that I really would like to know WHAT the /etc/hosts entry is actually used for (presumably by corosync). While I can imagine the update hook on BOUND being safe, I wonder what happens if the address actually changes on a RENEW while such a node is running in a cluster - what does an update of /etc/hosts cause at runtime? Also, why can't the feature simply use a DNS lookup? Latency?
 